Message ID: 20190503073317.30098-1-ruscur@russell.cc
State: RFC
Series: [RFC] docker: Add support for using eatmydata in the database
On 3/5/19 5:33 pm, Russell Currey wrote:
> When running tox on a VM with presumably pretty busy spinning disks,
> using eatmydata with the database took running one configuration's test
> suite from (no exaggeration) 20 minutes down to 60 seconds.
>
> It makes a huge difference to test speed, so we should make it easily
> available for developers. The primary motivation here was to
> automatically test each patch in a timeframe that isn't insane.
>
> Open to ideas on how to organise this, whether we do it for MySQL too
> (which we probably should), whether the base directory should have these
> files in it, what to call the Dockerfile, etc. I think it's a good
> thing to have in the repo, though.
>
> Signed-off-by: Russell Currey <ruscur@russell.cc>

Reviewed-by: Andrew Donnellan <ajd@linux.ibm.com>

> ---
>  docker-compose-eatmydata.yml      | 32 +++++++++++++++++++++++++++++++
>  tools/docker/Dockerfile.eatmydata |  9 +++++++++
>  2 files changed, 41 insertions(+)
>  create mode 100644 docker-compose-eatmydata.yml
>  create mode 100644 tools/docker/Dockerfile.eatmydata
>
> diff --git a/docker-compose-eatmydata.yml b/docker-compose-eatmydata.yml
> new file mode 100644
> index 0000000..27d1604
> --- /dev/null
> +++ b/docker-compose-eatmydata.yml
> @@ -0,0 +1,32 @@
> +version: "3"
> +services:
> +  db:
> +    build:
> +      context: .
> +      dockerfile: ./tools/docker/Dockerfile.eatmydata
> +    volumes:
> +      - ./tools/docker/db/postdata:/var/lib/postgresql/data
> +    environment:
> +      - POSTGRES_PASSWORD=password
> +
> +  web:
> +    build:
> +      context: .
> +      dockerfile: ./tools/docker/Dockerfile
> +      args:
> +        - UID
> +    depends_on:
> +      - db
> +    command: python3 manage.py runserver 0.0.0.0:8000
> +    volumes:
> +      - .:/home/patchwork/patchwork/
> +    ports:
> +      - "8000:8000"
> +    environment:
> +      - UID
> +      - PW_TEST_DB_HOST=db
> +      - PW_TEST_DB_PORT=5432
> +      - PW_TEST_DB_TYPE=postgres
> +      - PW_TEST_DB_USER=postgres
> +      - PW_TEST_DB_PASS=password
> +      - PGPASSWORD=password
> diff --git a/tools/docker/Dockerfile.eatmydata b/tools/docker/Dockerfile.eatmydata
> new file mode 100644
> index 0000000..693cbb3
> --- /dev/null
> +++ b/tools/docker/Dockerfile.eatmydata
> @@ -0,0 +1,9 @@
> +FROM postgres:9.6
> +
> +RUN apt-get update \
> +    && apt-get install -y eatmydata \
> +    && apt-get autoremove -y \
> +    && rm -rf /var/lib/apt/lists/*
> +
> +ENTRYPOINT [ "/usr/bin/eatmydata", "/usr/local/bin/docker-entrypoint.sh" ]
> +CMD ["postgres"]
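For anyone wanting to try this, a minimal invocation sketch, assuming the new compose file lands at the repo root as in the patch (the `tox` command inside the container is illustrative; substitute whatever test command you normally run):

```shell
# Build and start the eatmydata-wrapped database plus the web service,
# using the alternate compose file instead of the default docker-compose.yml.
docker-compose -f docker-compose-eatmydata.yml up --build

# Once the containers are up, run the test suite against them, e.g.:
docker-compose -f docker-compose-eatmydata.yml exec web tox
```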
Russell Currey <ruscur@russell.cc> writes:
> When running tox on a VM with presumably pretty busy spinning disks,
> using eatmydata with the database took running one configuration's test
> suite from (no exaggeration) 20 minutes down to 60 seconds.

As the author and been-attempting-to-no-longer-be-maintainer-of
eatmydata, this 20x improvement in test execution time doesn't really
surprise me. It turns out that properly flushing things to disk is
*really* expensive.
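Just how expensive is easy to demonstrate. eatmydata works by LD_PRELOADing a shim that turns fsync() and friends into no-ops; the rough, standalone sketch below times the same sequence of small writes with and without an fsync() after each one, which is the cost the shim is skipping (the 128-byte record size and iteration count are arbitrary choices for illustration):

```python
import os
import tempfile
import time


def timed_writes(n: int, durable: bool) -> float:
    """Append n small records to a temp file; fsync each one if durable."""
    fd, path = tempfile.mkstemp()
    start = time.perf_counter()
    try:
        for _ in range(n):
            os.write(fd, b"x" * 128)
            if durable:
                # Force the write through the page cache to stable storage --
                # this is the call eatmydata's preload shim turns into a no-op.
                os.fsync(fd)
    finally:
        os.close(fd)
        os.unlink(path)
    return time.perf_counter() - start


if __name__ == "__main__":
    slow = timed_writes(200, durable=True)
    fast = timed_writes(200, durable=False)
    print(f"with fsync: {slow:.3f}s  without fsync: {fast:.3f}s")
```

On spinning disks the gap is typically orders of magnitude, which is consistent with the 20-minutes-to-60-seconds result above.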
Russell Currey <ruscur@russell.cc> writes:
> When running tox on a VM with presumably pretty busy spinning disks,
> using eatmydata with the database took running one configuration's test
> suite from (no exaggeration) 20 minutes down to 60 seconds.
>
> It makes a huge difference to test speed, so we should make it easily
> available for developers. The primary motivation here was to
> automatically test each patch in a timeframe that isn't insane.
>
> Open to ideas on how to organise this, whether we do it for MySQL too
> (which we probably should), whether the base directory should have these
> files in it, what to call the Dockerfile, etc. I think it's a good
> thing to have in the repo, though.

So I really want to do this, but I don't really like the idea of
docker-compose-eatmydata; it's just a bit ugly. I am hoping that we can
get the tox-docker thing working and integrate from there, depending on
how that ends up working.

Regards,
Daniel

>
> Signed-off-by: Russell Currey <ruscur@russell.cc>
> ---
>  docker-compose-eatmydata.yml      | 32 +++++++++++++++++++++++++++++++
>  tools/docker/Dockerfile.eatmydata |  9 +++++++++
>  2 files changed, 41 insertions(+)
>  create mode 100644 docker-compose-eatmydata.yml
>  create mode 100644 tools/docker/Dockerfile.eatmydata
>
> diff --git a/docker-compose-eatmydata.yml b/docker-compose-eatmydata.yml
> new file mode 100644
> index 0000000..27d1604
> --- /dev/null
> +++ b/docker-compose-eatmydata.yml
> @@ -0,0 +1,32 @@
> +version: "3"
> +services:
> +  db:
> +    build:
> +      context: .
> +      dockerfile: ./tools/docker/Dockerfile.eatmydata
> +    volumes:
> +      - ./tools/docker/db/postdata:/var/lib/postgresql/data
> +    environment:
> +      - POSTGRES_PASSWORD=password
> +
> +  web:
> +    build:
> +      context: .
> +      dockerfile: ./tools/docker/Dockerfile
> +      args:
> +        - UID
> +    depends_on:
> +      - db
> +    command: python3 manage.py runserver 0.0.0.0:8000
> +    volumes:
> +      - .:/home/patchwork/patchwork/
> +    ports:
> +      - "8000:8000"
> +    environment:
> +      - UID
> +      - PW_TEST_DB_HOST=db
> +      - PW_TEST_DB_PORT=5432
> +      - PW_TEST_DB_TYPE=postgres
> +      - PW_TEST_DB_USER=postgres
> +      - PW_TEST_DB_PASS=password
> +      - PGPASSWORD=password
> diff --git a/tools/docker/Dockerfile.eatmydata b/tools/docker/Dockerfile.eatmydata
> new file mode 100644
> index 0000000..693cbb3
> --- /dev/null
> +++ b/tools/docker/Dockerfile.eatmydata
> @@ -0,0 +1,9 @@
> +FROM postgres:9.6
> +
> +RUN apt-get update \
> +    && apt-get install -y eatmydata \
> +    && apt-get autoremove -y \
> +    && rm -rf /var/lib/apt/lists/*
> +
> +ENTRYPOINT [ "/usr/bin/eatmydata", "/usr/local/bin/docker-entrypoint.sh" ]
> +CMD ["postgres"]
> --
> 2.21.0
>
> _______________________________________________
> Patchwork mailing list
> Patchwork@lists.ozlabs.org
> https://lists.ozlabs.org/listinfo/patchwork
On Fri, 2019-05-03 at 17:33 +1000, Russell Currey wrote:
> When running tox on a VM with presumably pretty busy spinning disks,
> using eatmydata with the database took running one configuration's test
> suite from (no exaggeration) 20 minutes down to 60 seconds.
>
> It makes a huge difference to test speed, so we should make it easily
> available for developers. The primary motivation here was to
> automatically test each patch in a timeframe that isn't insane.
>
> Open to ideas on how to organise this, whether we do it for MySQL too
> (which we probably should), whether the base directory should have these
> files in it, what to call the Dockerfile, etc. I think it's a good
> thing to have in the repo, though.
>
> Signed-off-by: Russell Currey <ruscur@russell.cc>

What are the implications of doing this _from a development
perspective_, and can we do this by default in the two existing
docker-compose files? Given that we're only talking about a development
environment where information presumably isn't that important, allied
to the fact that we have the ability to back up and restore from known
good points (the dbbackup and dbrestore management commands,
respectively), do we need to be seriously concerned about data loss?

Stephen
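For reference, the backup/restore workflow mentioned above is just the two management commands, run from inside the web container; a rough sketch (exact flags and backup storage depend on how django-dbbackup is configured in the deployment):

```shell
# Snapshot the current database state to the configured backup storage ...
python manage.py dbbackup

# ... and roll back to the most recent snapshot after a destructive run.
python manage.py dbrestore
```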
On 17/9/19 11:36 pm, Stephen Finucane wrote:
> On Fri, 2019-05-03 at 17:33 +1000, Russell Currey wrote:
>> When running tox on a VM with presumably pretty busy spinning disks,
>> using eatmydata with the database took running one configuration's test
>> suite from (no exaggeration) 20 minutes down to 60 seconds.
>>
>> It makes a huge difference to test speed, so we should make it easily
>> available for developers. The primary motivation here was to
>> automatically test each patch in a timeframe that isn't insane.
>>
>> Open to ideas on how to organise this, whether we do it for MySQL too
>> (which we probably should), whether the base directory should have these
>> files in it, what to call the Dockerfile, etc. I think it's a good
>> thing to have in the repo, though.
>>
>> Signed-off-by: Russell Currey <ruscur@russell.cc>
>
> What are the implications of doing this _from a development
> perspective_, and can we do this by default in the two existing
> docker-compose files? Given that we're only talking about a development
> environment where information presumably isn't that important, allied
> to the fact that we have the ability to back up and restore from known
> good points (the dbbackup and dbrestore management commands,
> respectively), do we need to be seriously concerned about data loss?

We don't need to worry about data loss in a development environment,
though perhaps some people are borrowing our dockerfiles for a
production deployment?