One problem I often have when dockerizing my C++ Jenkins CI projects is handling incremental builds, for both our own code and the dependencies. Starting builds from scratch can take tens of minutes, too long for my taste.
My build stack is usually conan as a dependency manager and CMake/Ninja for building. Conan will usually try to download precompiled dependencies, but often enough those are not available for my specific combination of compiler settings and flags, so it builds them on demand via the --build=missing flag. That usually takes the bulk of the time needed for a full build, so it makes sense to keep the dependencies cached once they are built. However, since we use Docker to set up the build environment, the cache is lost by default.
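For reference, a typical invocation of this flow might look like the following sketch (conan 2 syntax; the recipe location and output folder are placeholders, not taken from the actual project):

```shell
# Resolve dependencies from the recipe in the current directory; any package
# without a matching precompiled binary is built from source and ends up in
# the local conan cache -- this is the expensive step worth preserving.
conan install . --output-folder=build --build=missing
```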
Who Owns What?
The obvious solution is to mount a folder from the build host to hold the conan cache, using the -v / --volume option of docker run, and to point conan at it via the CONAN_HOME environment variable. I usually use one cache per build folder, which seems like a good compromise between speed and isolation.
But that causes other problems: docker creates all files as the user inside the container, which is root by default, leaving behind a whole bunch of files that the CI host user cannot delete, e.g. when a branch gets deleted. This breaks the CI setup to a point where manual intervention is required. A somewhat simple crutch is the -u user:group option to docker run, which executes the build as the given user. The problem I had with that, however, was that this user did not have access to user-scoped tool installations like conan via pipx.
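For illustration, that crutch looks roughly like this (a sketch; the image name and command are placeholders):

```shell
# Run the container as the host user's uid:gid so that everything written
# to the mounted cache is owned by the CI user. The catch: this uid does
# not exist inside the image, so anything installed per-user at image
# build time (e.g. conan via pipx) is not on its PATH.
docker run --rm -u "$(id -u):$(id -g)" \
  -v "$(pwd)/docker/conan:/conan_home" \
  some-builder-image /source_root/build.sh
```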
User-specific Images
My current strategy to deal with this is to inject the host CI user and group into the docker ‘builder’ image, and then do all the building in the container using that user, as if using the CI host user on the metal. The Dockerfile looks like this:
```dockerfile
FROM gcc:14.1-bookworm

RUN DEBIAN_FRONTEND=noninteractive apt-get update && apt-get -y dist-upgrade
RUN DEBIAN_FRONTEND=noninteractive apt-get update && apt-get -y install \
    cmake \
    debhelper \
    ninja-build \
    python-is-python3 \
    python3-pip \
    pipx

ARG HOST_USER_ID
ARG HOST_GROUP_ID
RUN groupadd -g ${HOST_GROUP_ID} hostgroup && \
    useradd hostuser -u ${HOST_USER_ID} -g ${HOST_GROUP_ID} -m -s /bin/bash && \
    mkdir /conan_home && chown hostuser:hostgroup /conan_home

USER hostuser
ENV PATH="$PATH:/home/hostuser/.local/bin"
RUN pipx install conan
ENV CONAN_HOME=/conan_home

WORKDIR /build_root

# Build the viewer deb package
CMD ["/source_root/build.sh"]
```
After doing the user-independent setup, this declares two ARGs for retrieving the user and group IDs, and then sets up a user with those in the docker image, calling it hostuser:hostgroup internally. Note that the names will not leak out of the container, only the IDs do.
It installs conan via pipx as that user and makes sure it is in the PATH for the build later. This is the real advantage of passing the user into the image creation: user-specific things can be installed!
In our Jenkinsfile, I build the image from that Dockerfile while injecting the current user and group via the --build-arg option:
```shell
docker build . --iidfile docker_image_id \
  --build-arg HOST_USER_ID=`id -u` \
  --build-arg HOST_GROUP_ID=`id -g`
```
This expects three folders to be mounted: /source_root for the sources/repository, /build_root for the out-of-source build, and /conan_home for the conan cache. Important: make sure these folders are created by the CI user before passing them to docker, or it will create them with the wrong owner. I’m only creating the latter two, since the first one is obviously created by Jenkins.
```shell
mkdir -p docker/build docker/conan
```
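The build.sh entrypoint itself is not shown here; a minimal sketch of what such a script might contain, assuming a conanfile and CMakeLists.txt at the repository root (the generator and build type are placeholder choices, not taken from the actual project), could look like:

```shell
#!/bin/sh
set -e

# Resolve dependencies into the mounted conan cache, building any
# packages without matching precompiled binaries.
conan install /source_root --output-folder=/build_root --build=missing

# Configure and build out-of-source with CMake/Ninja, using the
# toolchain file conan generated into the build folder.
cmake -S /source_root -B /build_root -G Ninja \
  -DCMAKE_TOOLCHAIN_FILE=/build_root/conan_toolchain.cmake \
  -DCMAKE_BUILD_TYPE=Release
cmake --build /build_root
```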
Once the folders are set up and the image is built, I run the actual build in a container via:
```shell
docker run --rm \
  -v `pwd`:/source_root:ro \
  -v `pwd`/docker/conan:/conan_home \
  -v `pwd`/docker/build:/build_root \
  `cat docker_image_id`
```
That should run the actual build and populate the conan cache. After that I extract the artifacts I need and remove the docker image and ID file with:
```shell
docker image rm `cat docker_image_id` && rm docker_image_id
```
And we’re done!