One recent feature that can change the way developers work on their local machines is the dev container. In short, a dev container is a Docker container that provides a working environment for a specific project and can be used by the IDE to develop the project without ever installing anything on the developer's machine except the IDE, Docker and Git. It is especially interesting if you often switch between projects with different ecosystems, or with the same ecosystem in different versions.
Most modern IDEs support dev containers, even if some implementations still feel a bit unpolished. In my opinion, the advantages of dev containers outweigh the additional complexity. The main advantage is the guarantee that all development systems are set up correctly and in sync. Our project setup instructions shrank from a lengthy wiki page to four bullet points that are essentially the same for all projects.
But one question needs to be answered: if you have a docker-based build and perhaps even docker-based delivery and operations, how do you keep those Dockerfiles in sync with your dev container's Dockerfile?
My answer is not to split the project's Dockerfiles, but to layer them in one file as a multi-stage build. Let's look at the (rather simple) Dockerfile of one small project:
# --------------------------------------------
FROM python:3.10-alpine AS project-python-platform
COPY requirements.txt requirements.txt
RUN pip3 install --upgrade pip
RUN pip3 install --no-cache-dir -r requirements.txt
# --------------------------------------------
FROM project-python-platform AS project-application
WORKDIR /app
# ----------------------
# Environment
# ----------------------
ENV FLASK_APP=app
# ----------------------
COPY . .
CMD python -u app.py
The interesting part is introduced by the horizontal dividers: The Dockerfile is separated into two parts, the first one called “project-python-platform” and the second one “project-application”.
The first build target contains everything that is needed to form the development environment (python and the project’s requirements). If you build an image from just the first build target, you get your dev container’s image:
docker build --pull --target project-python-platform -t dev-container .
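For editors that implement the dev container specification (VS Code, for example), a minimal .devcontainer/devcontainer.json can point straight at that build target. This is only a sketch – the paths assume the Dockerfile lives in the project root next to the .devcontainer folder:

{
  "name": "project",
  "build": {
    "dockerfile": "../Dockerfile",
    "context": "..",
    "target": "project-python-platform"
  }
}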
The second build target uses the dev container image as its starting point and adds the project artifacts on top to produce the operations image. You can push this image into production and be sure that your development environment and your production deployment are based on the same platform.
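Building the operations image works the same way – either by targeting the last stage explicitly or by omitting --target, since the final stage is the default (the tag here is just an example):

docker build --pull --target project-application -t project-application .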
If you frowned at the “COPY . .” line: for small projects, we rely on a .dockerignore file to keep unwanted files out of the image.
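A .dockerignore for such a project might look like this – the exact entries are of course project-specific:

.git
.devcontainer
.venv
__pycache__
*.pyc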
Combining multi-stage builds with dev containers keeps all stages of your delivery pipeline in sync, even the zeroth stage – the developer’s machine.
But what if you have a more complex scenario than just a Python project? Let's look at the Dockerfile of a Python project with an included JavaScript/Node/React sub-project:
# --------------------------------------------
FROM python:3.9-alpine AS chronos-python-platform
COPY requirements.txt requirements.txt
RUN pip3 install --upgrade pip
RUN pip3 install --no-cache-dir -r requirements.txt
# --------------------------------------------
FROM node:16-bullseye AS chronos-node-platform
COPY chronos_client/package.json .
COPY chronos_client/package-lock.json .
RUN npm -v
RUN npm ci --ignore-scripts
# --------------------------------------------
FROM chronos-node-platform AS chronos-react-builder
COPY chronos_client .
# build the client javascript application
RUN npm run build
# --------------------------------------------
FROM chronos-python-platform AS chronos-application
WORKDIR /app
COPY . .
# set environment variables
ENV ZEITGEIST_PASSWORT=''
COPY --from=chronos-react-builder /build ./chronos_client/build
CMD python -u chronos.py
It is the same approach, just more layers on the cake – four in this example. There is one small caveat: if you want to build the “chronos-node-platform” target, the preceding “chronos-python-platform” gets built, too (at least with the classic builder; BuildKit only builds the stages your target actually depends on). That delays things a bit, but only once in a while.
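The dev container images for the two ecosystems are built exactly as before, just with the respective targets (the tags are again just examples):

docker build --pull --target chronos-python-platform -t chronos-dev-python .
docker build --pull --target chronos-node-platform -t chronos-dev-node .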
The second to last line of the Dockerfile might be interesting if you aren't familiar with multi-stage builds: the COPY --from instruction takes the compiled files from the third stage/layer and puts them into the final layer, which becomes the operations image. This ensures that the compiler is left behind in the delivery pipeline and only the artifacts are published.
I’m sure that this layer cake approach is not feasible for all project setups. It works for us for small and medium projects without too much polyglot complexity. The best aspect is that it separates project-specific knowledge from approach knowledge: the project-specific things are encoded in the Dockerfile, while the approach knowledge is the same for all projects and is documented in the wiki – once.