Experimenting with CMake’s unity builds

CMake has an option, CMAKE_UNITY_BUILD, to automatically turn your builds into unity builds, which essentially means combining multiple source files into one before compiling. This is supposed to make your builds faster. You can simply enable it during the configuration step of your CMake build, so it is really easy to test, and it might just work without any problems.
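
For reference, this is all it takes on the command line (the build directory name is just an example):

cmake -S . -B build -DCMAKE_UNITY_BUILD=ON
cmake --build build

There is also CMAKE_UNITY_BUILD_BATCH_SIZE if you want to tune how many source files get merged into one jumbo source file. Here are some examples with actual numbers of what enabling it does to build times.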

Project A

Let us start with a relatively small project. It is a real project we have been developing that reads sensor data, transports it over the network and displays it using SDL and Dear ImGui. I’m compiling it with Visual Studio (v17.13.6) in CMake folder mode, using Build Insights to track the actual time used. For each configuration, I’m doing a clean rebuild three times. The steps are the number of build statements that ninja runs.

Unity Build | #Steps | Time 1 | Time 2 | Time 3
OFF         | 40     | 13.3s  | 13.4s  | 13.6s
ON          | 28     | 10.9s  | 10.7s  | 9.7s

That’s a nice, but not massive, speedup of 124.3% for the median times.

Project A*

Project A has a relatively high number of non-compile steps: 1 step is code generation, 6 steps are static library linking, and 7 steps are executable linking. That’s a total of 14 non-compile steps, which are not directly affected by switching to unity builds. 5 of the executables in Project A are non-essential, basically little test programs, so in an effort to decrease the relative number of non-compile steps, I disabled those for the next test. Each of them also came with an additional source file, so the total number of steps decreased by 10. This only reduced the share of non-compile steps from 35% to 30%, but the numbers change quite a bit:

Unity Build | #Steps | Time 1 | Time 2 | Time 3
OFF         | 30     | 9.9s   | 10.0s  | 9.7s
ON          | 18     | 9.0s   | 8.8s   | 9.1s

Now the speedup for the median times was only 110%.

Project B

Project B is another real project, but much bigger than Project A and much slower to compile. It’s a hardware orchestration system with a web interface. As the project size increases, so does the chance of something breaking when enabling unity builds. In no particular order:

  • Include guards really have to be there, even if a particular header previously happened never to be included more than once
  • Object files will get a lot bigger, which may require /bigobj to be enabled
  • Globally scoped symbols will name-clash across files. This especially hits static globals and things in unnamed namespaces, which basically stop doing their job. More subtly, names pulled into the global namespace will also clash, for example two classes with the same name that become visible together through a using namespace directive.

In general, that last point will require the most work to resolve. If all else fails, you can disable unity builds for a whole target via set_target_properties(the_target PROPERTIES UNITY_BUILD OFF), or exclude specific files via the SKIP_UNITY_BUILD_INCLUSION source file property. In Project B, I only had to do this for the files generated by CMakeRC.
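
Roughly like this (the target and file names here are made up for illustration):

# opt a whole target out of unity builds
set_target_properties(the_target PROPERTIES UNITY_BUILD OFF)

# or keep the target in, but exclude individual (e.g. generated) source files
set_source_files_properties(generated/resources.cpp PROPERTIES SKIP_UNITY_BUILD_INCLUSION ON)

# and if MSVC complains about too many sections in an object file, allow bigger objects
target_compile_options(the_target PRIVATE /bigobj)

Here are the results: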

Unity Build | #Steps | Time 1 | Time 2 | Time 3
OFF         | 416    | 279.4s | 279.3s | 284.0s
ON          | 118    | 73.2s  | 76.6s  | 74.5s

That’s a massive speedup of 375%, just from flipping a single configuration switch.

When to use this

Once your project reaches a certain size, I’d say definitely use this in your CI pipeline, especially if you’re not doing incremental builds. It’s not just time that is saved, but also energy, and faster feedback cycles are always great. Enabling it on developer machines is another matter: it can be quite confusing when the files you’re editing do not correspond to what the build system is building. Also, developers usually do more incremental builds, where the advantage is not as big. I’ve also used hybrid approaches where I enable unity builds only for code that doesn’t change that often, and I’m quite satisfied with that. Definitely add an option to turn that off for debugging, though.
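
A sketch of what such a switch can look like (the option and target names are made up):

option(MYPROJECT_UNITY_BUILDS "Enable unity builds for rarely changing targets" ON)

if(MYPROJECT_UNITY_BUILDS)
  set_target_properties(stable_core_lib PROPERTIES UNITY_BUILD ON)
endif()

Have you had similar experiences with unity builds? Do tell!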

Beware of using Git LFS on Github

In my private game programming projects, I often keep data files alongside my code for all kinds of game assets like images and sounds. So I thought it might be a good idea to use the Git Large File Storage (LFS) extension for that.

What is Git LFS?

Essentially, if you’re not using it, a file will be in your local .git folder if it was part of your repository at any point in its history. E.g. if you accidentally added and committed an 800 MB video file and then deleted it again, it will still be in your local .git folder. This problem multiplies when using a CI with many branches: each branch will typically be built from its own full clone, containing a copy of every file ever used in your repository. This is not a problem with source code files, because they are not that big and they compress really well against other versions of themselves, which is what git typically does.

With Git LFS, the big files are only stored as references in the .git folder. This means that you might need an additional request to your remote when checking them out again, but it will save you lots of space and traffic when cloning repositories.
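
Putting your assets under LFS is only a few commands; the file patterns are of course whatever fits your project:

git lfs install
git lfs track "*.png"
git lfs track "*.wav"
git add .gitattributes

git lfs track writes the patterns into .gitattributes, which is committed like any other file; from then on, matching files are stored as small pointer files in git, with the real content in LFS.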

In my previous projects on github, I just did not enable LFS for my assets. And that worked fine, as my assets are usually pretty small and I don’t change them often. But this time I wanted to try it.

Sorry, Github, what?

Imagine my surprise when I got an e-mail from Github last month warning me that my LFS traffic quota was almost reached and I would have to pay to extend it. What? I never had any traffic quota problems without LFS. Github doesn’t even seem to have one if I just keep my big files in ‘pure’ git. So that’s what I get for trying to save Github traffic.

Now the LFS quota is a meager 1 GB per month with Github Pro. That’s nothing. Luckily, my current project is not asset-heavy: the full repo is very small at ~60 MB. But even so, the quota was reached with me as a single developer. How did that happen? I had just enabled CI for my project on my home server and was creating lots of branches that my CI wanted to build. That’s only about 12 branches cloned for the 80% warning to be reached.

Workarounds

Jenkins, which I’m using as a CI tool, has the ability to use a ‘reference repository’ when cloning. This can be used to get the bulk of the data from a local remote, while getting the rest from Github. This is what I’m now using to avoid excess LFS traffic. It is a bit of a pain to set up: you have to maintain this reference repository manually, Jenkins will not do it for you, and you have to do that on each agent. I only have one agent at this point, so that’s an okay trade-off.
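
For the record, maintaining the reference repository on an agent boils down to a mirror clone that gets refreshed every now and then. Roughly like this, with placeholder URL and path:

# one-time setup on the agent
git clone --mirror https://github.com/example/my-game.git /var/lib/jenkins/reference/my-game.git

# refreshed periodically, e.g. via cron
cd /var/lib/jenkins/reference/my-game.git
git fetch --all --prune
git lfs fetch --all

The path of that mirror then goes into the advanced clone options of the Jenkins job. But next time, I sure won’t use Git LFS on Github if I can avoid it.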

A Tale of Hidden Variables

Today was one of those frustrating moments that every developer encounters at some point. We were working on a Docker Compose setup and observed behavior that could only happen if a specific environment variable had been set. To make sure that this environment variable wasn’t being set, I scoured the Docker Compose file, checked the local environment variables using the export command, and grepped all the relevant files in the project directory. But no matter what I did, this environment variable was still haunting us, wreaking havoc on the setup.

After what felt like an eternity of troubleshooting, we finally uncovered the culprit: an old, hidden .env file left over from a long-forgotten configuration. This file had been silently setting the environment variable I was desperately trying to eliminate.

Here’s how it all unfolded and what I learned from the experience:

When I first suspected that the environment variable might be lurking somewhere in the project, my instinct was to use grep to search for it in all the files within my local directory. I ran something along the lines of:

grep -r 'MY_ENV_VAR' *

To my surprise, nothing relevant showed up. I had expected this command to search through everything in my local directory. However, I had forgotten one important detail: the shell’s * glob does not expand to hidden files, so grep never even sees them.

Since .env files are hidden by convention (they start with a dot), the search skipped right over them. Little did I know, that old .env file was sitting quietly in the background, setting the environment variable that was causing all my issues.

After some frustration, my colleague finally had the realization that there might be hidden files at play. In Unix-like operating systems, files that start with a dot (.), like .env, are treated as hidden and are not listed or searched by default by common commands. Just as hidden variables in physics can influence particles without being directly observable, the hidden .env file was affecting my environment variables without being immediately visible.

To include hidden files in your search, let grep do the recursion instead of the shell. Starting the search from the current directory (.) picks up hidden files too:

grep -r 'MY_ENV_VAR' .

If you only care about the dotfiles, you can narrow it down further with --include=".*".
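
Something that would also have shortened the search, in hindsight: docker compose config prints the configuration with all variables already interpolated, so a value sneaking in from a .env file shows up immediately:

docker compose config | grep MY_ENV_VAR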

This experience led me to reflect on whether deployment-relevant files like .env should be hidden in the first place. Being hidden makes them easy to overlook during debugging and prone to being forgotten altogether, especially when you’re under pressure.

Given that .env files can have a significant impact on the behavior of applications, containerized setups, and CI/CD pipelines, making them hidden by default might not always be the best approach. After all, if an environment variable has the power to alter how an entire application runs, it’s something we want to be highly visible and readily accessible.

In the end, this experience taught me two important lessons:

Always search for hidden files when troubleshooting issues related to environment variables. If your Docker Compose or other environment-dependent setups aren’t behaving as expected, don’t forget to check for hidden .env files.

Consider the visibility of critical configuration files. Should .env files be hidden by default, or should they be treated as first-class citizens in our directory structures? In many cases, keeping them visible might help avoid unexpected behavior and wasted hours of debugging.