Non-determinism in C++

A deterministic program, when given the same input, will always produce the same output. This intuitive, albeit quite fuzzily defined, property is oftentimes pretty important for a correct program. Sources of non-determinism can be quite subtle – and once they creep into your program, they can propagate, amplify and have enormous consequences. It is pretty much the well-known butterfly effect.

When discussing this problem, it is important to know what exactly makes up the input and the output of the program. For example, when logging timestamps to a logfile and considering the logfile an actual output, no two runs will ever be the same – so this is usually not considered an output relevant for determinism. Which brings us to the first common source of non-determinism:

Time

If any part of your program depends on the time it is run at, it will easily be non-deterministic. Common cases are using the time to initialize some variable, or using the time for some kind of numerical integration, like computing a value over time. Also, treating execution time itself as an output relevant for determinism is hopeless on a normal desktop computer – but can be crucial for a real-time system.
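A minimal sketch of the first case – deriving some program state from the clock – could look like this:

#include <chrono>
#include <iostream>

int main()
{
  // The current time differs on every run, so any state derived
  // from it is non-deterministic between runs.
  auto const now = std::chrono::system_clock::now();
  auto const state = static_cast<unsigned>(
    now.time_since_epoch().count());
  std::cout << "state initialized to: " << state << '\n';
}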

Random number generation

Random number generation seems like an obvious candidate, yet most random number generators are not really random, but only pseudo-random. For example, std::mersenne_twister_engine will generate the same sequence of values every time when initialized with the same seed. So as long as you do not initialize it with a non-deterministic input like the time, it will be predictable. However, std::random_device might not share this property and can give you fresh non-deterministic input. As a weird middle ground, std::default_random_engine will probably give you the same results when compiled with the same compiler/standard-lib, but on another compiler version or OS, it will not. Subtle.
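To illustrate, a small sketch contrasting the two:

#include <iostream>
#include <random>

int main()
{
  // Deterministic: the same seed yields the same sequence on every
  // run (std::mt19937 is a std::mersenne_twister_engine typedef).
  std::mt19937 pseudoRandom{42};
  std::cout << pseudoRandom() << '\n'; // same value on every run

  // Possibly non-deterministic, depending on the implementation:
  // fresh entropy from the system.
  std::random_device device;
  std::cout << device() << '\n'; // may differ between runs
}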

The allocator

Another source of non-determinism that is pretty tricky is the allocator. For example, consider the following piece of code:

#include <set>

// Sums up the value() of each Thingy in the set
template <class T>
T sum(std::set<Thingy*> const& set)
{
  T result{};
  for (auto const& each : set)
    result += each->value();
  return result;
}

Is this deterministic or not? It depends. Let’s assume that all the Thingys were allocated using standard new. In that case, the actual pointer values, the Thingy*s, are non-deterministic, and hence the order of the Thingy*s in the set is effectively random. But does this matter? Well, if T is std::uint32_t, it does not: the order of addition does not matter for unsigned integers, even with overflow. However, if T is float, then it does matter, and the whole result becomes unpredictable, at least in the general case (it will still be predictable if, e.g., all the numbers in the computation are integers that are exactly representable as floats). Other languages have “insertion-ordered” containers to get around this problem. A sensible approximation in C++ is to use the (unordered_)set and (unordered_)map containers together with a separate list to iterate on, as sketched below.
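A minimal sketch of such an approximation – the name InsertionOrderedSet is made up – pairs an unordered_set for membership tests with a vector that preserves a deterministic iteration order:

#include <unordered_set>
#include <vector>

template <class T>
class InsertionOrderedSet
{
public:
  void insert(T const& value)
  {
    // Only record the value if it was not present before
    if (members_.insert(value).second)
      order_.push_back(value);
  }

  // Iteration happens in insertion order, independent of the
  // actual pointer or hash values
  auto begin() const { return order_.begin(); }
  auto end() const { return order_.end(); }

private:
  std::unordered_set<T> members_;
  std::vector<T> order_;
};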

The thread scheduler

When you cannot really control the order of instructions, which is really the whole point of threading, you will have a harder time making things deterministic. Like the allocator problem, this is usually also paired with floating-point arithmetic. The workaround here is to make sure that the order of computation does not influence the final result. One common way to achieve this is to sort the output by a unique criterion. For example, if you use multiple threads to report the intersections of a bunch of line segments, you can later sort them by their position in space.
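For the line-segment example, the fix-up step might look like this (Intersection is a hypothetical result type):

#include <algorithm>
#include <tuple>
#include <vector>

struct Intersection
{
  float x;
  float y;
};

// The threads report intersections in scheduler-dependent order;
// sorting by position in space makes the final sequence deterministic.
void makeDeterministic(std::vector<Intersection>& results)
{
  std::sort(results.begin(), results.end(),
    [](Intersection const& lhs, Intersection const& rhs) {
      return std::tie(lhs.x, lhs.y) < std::tie(rhs.x, rhs.y);
    });
}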

There’s of course the honorable mention for uninitialized variables, but I’m sure your static analyzer will complain about it. Any interaction with the “outside” of your program, any side-effect, be it filesystems, user input, output or cosmic radiation can lead to non-determinism, so be sure to know the context well enough and plan accordingly to your determinism requirements.

Integrating conan, CMake and Jenkins

In my last posts on conan, I explained how to start migrating your project to use a few simple conan libraries and then how to integrate a somewhat more complicated library with custom build steps.

Of course, you still want your library in CI. We previously advocated simply adding some dependencies to your source tree, while in other cases we provisioned our build-systems with the right libraries on a system level (alternatively, using docker). With conan, this is all totally different – we want to avoid setting up too many dependencies on our build-systems. The fewer dependencies they have, the less likely it is that those get accidentally used during compilation. This is crucial for the portability of your artifacts.

Setting up the build-systems

The build-systems still have to be provisioned. You will at least need conan and your compiler suite installed. Whether to install CMake as well is a point of contention, since the CMake plugin for Jenkins can do that for you.

Setting up the build job

The first thing you usually need to do is configure your remotes properly. One way to do this is to use the conan config install command, which can synchronize remotes (or the whole conan config) from either a folder, a zip file or a git repository. Since I like to have stuff readable in plain text in my repository, I opt to store my remotes in a specific folder. Create a new folder in your repository – I use ci/conan_config in this example – and place a remotes.txt like this in it:

bincrafters https://api.bintray.com/conan/bincrafters/public-conan True
conan-center https://conan.bintray.com True

Note that conan needs the whole folder; you cannot install just this file. Your first build step should then be to install these remotes:

conan config install ci/conan_config

Jenkins’ CMake for conan

The next step prepares for installing our dependencies. Depending on whether you’re building some of those dependencies (the --build option), you might want to have CMake available for conan to call. This is a problem when using the Jenkins CMake plugin, because that only provides cmake for its specific build steps, while conan simply uses the cmake executable on the path by default. If you’re provisioning your build-systems with CMake or not building any dependencies, you can skip this step.
One way to give conan access to the Jenkins CMake installation is to run a small CMake script via a “CMake/CPack/CTest execution” step and have it configure conan appropriately. Create a file ci/configure_for_conan.cmake:

execute_process(COMMAND conan config set general.conan_cmake_program=\"${CMAKE_COMMAND}\")

Create a new “CMake/CPack/CTest execution” step with tool “CMake” and arguments “-P ci/configure_for_conan.cmake”. This will set up conan with the given CMake installation.

Install dependencies and build

Next, run the conan install command:

mkdir build && cd build
conan install .. --build missing

After that, you’re ready to invoke CMake and the build tool with an additional “CMake Build” step. The build should now be up and running. But who am I kidding, the build is always red on the first try 😉
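For reference, the whole job boils down to something like this shell sequence – assuming cmake is on the path, i.e. without the plugin workaround:

conan config install ci/conan_config
mkdir build && cd build
conan install .. --build missing
cmake .. && cmake --build .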

Using protobuf with conan and CMake

In my last post, I showed how I got my feet wet while migrating the dependencies of my existing code-base to conan. The first major hurdle I saw coming when I started was adding something with a “special” build step, e.g. something like source-preprocessing. In my case, this was protobuf, where a special build-step converts .proto files to sources and headers.

In my previous solution, my devenv build scripts would install the protobuf compiler binary to my devenv’s bin/ folder, which I then used to run my preprocessing. At first, it was not obvious how to do this with conan. It turns out that the lovely people at bincrafters made this pretty comfortable. conan_basic_setup() will add all required package paths to your CMAKE_MODULE_PATH, which you can use to include() some bundled CMake scripts that will either let you execute the protobuf compiler via a target or run protobuf_generate to automagically handle the preprocessing. It’s probably worth noting that this really depends on how the package is made; conan does not really have an official way to handle this.

Let’s start with some sample code – Person.proto, like the sample from the protobuf website:

syntax = "proto2";

message Person {
  required string name = 1;
  required int32 id = 2;
  optional string email = 3;
}

And some sample code that uses it:

#include "Person.pb.h"

int main(int argn, char** argv)
{
  Person message;
  message.set_name("Hello Protobuf");
  std::cout << message.name() << std::endl;
}

Again, we’re using the bincrafters repository for our dependencies in a conanfile.txt:

[requires]
protobuf/3.6.1@bincrafters/stable
protoc_installer/3.6.1@bincrafters/stable

[options]

[generators]
cmake

Now we just need to wire it all up in the CMakeLists.txt:

cmake_minimum_required(VERSION 3.0)
project(ProtobufTest)

include(${CMAKE_BINARY_DIR}/conanbuildinfo.cmake)
conan_basic_setup(TARGETS KEEP_RPATHS)

# This loads the cmake/protoc-config.cmake file
# from the protoc_installer dependency
include(cmake/protoc-config)

set(TARGET_NAME ProtobufSample)

# Just add the .proto files to the target
add_executable(${TARGET_NAME}
  Person.proto
  ProtobufTest.cpp
)

# Let this function do the magic
protobuf_generate(TARGET ${TARGET_NAME})

# Need to use protobuf, of course
target_link_libraries(${TARGET_NAME}
  PUBLIC CONAN_PKG::protobuf
)

# Make sure we can find the generated headers
target_include_directories(${TARGET_NAME}
  PUBLIC ${CMAKE_CURRENT_BINARY_DIR}
)

There you have it! Pretty neat, and all without a brittle find_package call.
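To try it out, a build invocation from the project root might look like this – assuming the conanfile.txt and CMakeLists.txt above live there:

mkdir build && cd build
conan install ..
cmake .. && cmake --build .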

Migrating an existing C++ codebase to conan

This is a bit of a battle report of migrating the dependencies in my C++ projects to use the conan package manager.
In the past weeks I have started to use conan in half a dozen projects, both personal and at work. Here are my experiences so far.

Before

The first real project I started with was my personal game project. The “before” setup used a mixture of techniques to handle dependencies and used CMake to do most of the heavy lifting.
Most dependencies reside in the “devenv”, which is a separate CMake project that I use to build and bundle the dependencies in a specific installation folder. It uses ExternalProject_Add for most parts (e.g. Boost, SDL, Lua, curl and OpenSSL), add_subdirectory for a few others (pugixml and lz4) and just install(FILES...) for a few header-only libs like JSON for Modern C++, Catch2 and spdlog. It should be noted that there are relatively few interdependencies between the projects in there.
Because it is more convenient to update, I keep a few dependencies that I control myself directly in the source tree, either as git externals or just copies of the source files.
I try to keep usage of system dependencies to a minimum so that the resulting binary is more portable to the average gamer, who does not want to know about libraries and dependencies and such nonsense. This setup has been mostly painless and working for my three platforms, Windows, Linux and Mac – at least as long as I did not try to change it significantly.

Baby steps

Since not all my dependencies are available on conan and small iterations are usually more successful, I decided to proceed by changing only a single dependency to conan. For this first dependency, it’s a good idea to pick something that does not have many compile-time options and is more or less platform agnostic. So I opted for boost over, e.g., SDL or wxWidgets. Boost was also one of the most painful dependencies to build, if only for the insane number of files it produces and the time it takes to copy those tens of thousands of files to the install location.

Getting started..

There are currently two popular variants of boost available through conan: the “normal” variant on conan’s main repository/remote, conan-center, and a modular version on the bincrafters remote that splits boost into its component libraries, e.g. Boost.Filesystem. The modular version is more appealing conceptually, and I also had a better time getting it to work in my first tests, so I picked that. I did a quick grep for #include <boost/ through my code to get an initial guess of which boost libraries I needed, and created a corresponding conanfile.txt in my project root:

[requires]
boost_filesystem/1.69.0@bincrafters/stable
boost_math/1.69.0@bincrafters/stable
boost_random/1.69.0@bincrafters/stable
boost_property_tree/1.69.0@bincrafters/stable
boost_assign/1.69.0@bincrafters/stable
boost_heap/1.69.0@bincrafters/stable
boost_optional/1.69.0@bincrafters/stable
boost_program_options/1.69.0@bincrafters/stable
boost_iostreams/1.69.0@bincrafters/stable
boost_system/1.69.0@bincrafters/stable

[options]
boost:shared=False

[generators]
cmake

Now conan plays really nicely with “single configuration generators” like the new CMake/Ninja support in VS2017 and onwards. Basically, just cd into your build dir and call something like conan install .. -s build_type=Debug -s arch=x86 whenever you want to update dependencies. More info can be found in the official documentation. The workflow for CLion is essentially the same.

Using it in your build

After the last command, conan will have downloaded (or built) the dependencies and generated a file with all the corresponding paths.
To use it, include it from your CMakeLists.txt like this:

include(${CMAKE_BINARY_DIR}/conanbuildinfo.cmake)
conan_basic_setup(TARGETS KEEP_RPATHS)

It will then provide targets for all the requested boost libraries that you can link to like this:

target_link_libraries(myTarget
  PUBLIC CONAN_PKG::boost_filesystem
)

I wanted to make sure that the compiler was building with the new boost files and not the old ones. Because a generic include path into my devenv was still going to be in my compiler’s include paths for all the other dependencies, I just renamed boost’s header folder on disk. After my first successful compile, I felt confident enough to delete the old headers.

First problems

There was one major problem: some of my in-source dependencies had their own claim on using boost via passed CMake variables, Boost_LIBRARY_DIRS and Boost_INCLUDE_DIR. I adapted their CMakeLists.txt to allow for injecting appropriate targets instead. Not the cleanest solution, but it got my builds green again fast.

There’s still a lot to cover on this: the other platforms had their own quirks, and I migrated way more than just this first project. Also, there are still ways to go for a full migration of my game project. But more on that in my next blog post…

A programmer’s report on the Global Game Jam

Last weekend was this year’s Global Game Jam. A game jam is a type of event where the participants – the “jammers” – create a game in a fixed, usually quite short, amount of time, with some additional restrictions like a theme. Usually, this means computer games, although you can also make board or pen-and-paper games.

A worldwide event

As the name implies, the Global Game Jam is a worldwide event. Most bigger cities have “hubs”, i.e. jam locations, where the jammers gather at the beginning of the weekend in their local time zone. Then a few motivating keynote videos are shown before the theme is announced. Last year’s theme was “Transmission”, while this year’s theme was “What Home means to you”. A game using that theme is to be created within the next 48 hours.

The pitch

People get a few minutes to brainstorm about the theme and maybe come up with an idea that they want to implement. There is kind of a small stage area where people with those ideas pitch them to the others in hopes of getting a team assembled. Last year, a guy even made a short PowerPoint presentation to pitch his idea.
While you can make a game on your own, that is pretty hard. You typically need at least programming, graphics, audio, music, project-management and game-design expertise on your team. It is rare to find all that in a single person.

Implementation

So if you have a nice idea, or join a team with one – now the implementation work starts.

Last year, my team quickly abandoned using Unity as the engine for our terra-forming game, since no one on the team had any experience with it. We switched to Love2D, a neat, lightweight little framework based on Lua, which is an elegant language. Given that our team had no dedicated graphics people, I am pretty proud of the little game that we finished: Terraformer

This year, I joined another team and we picked the Godot engine to create our game. Godot comes with its own programming language called “GDScript”. The syntax looks like it borrows heavily from both Python and JavaScript. It is, however, quite pleasant to work with, except that most higher-order functional features that are so common in most languages seem to be missing. But it does have coroutines, so there’s that.
Either way, working with Godot was mostly pretty pleasant, except for its interaction with git. Just opening specific parts of the game in the Godot editor would change files, and sometimes we’d get huge nonsensical changes and merge conflicts when someone on another platform (e.g. between Mac and Windows) pushed something.
We ended up making this little game: Home God
I’m particularly proud of the character movement controls. I think they turned out quite fluid.

Not just for game programmers…

The Global Game Jam is an awesome experience for non-game programmers looking to improve their craft. In fact, most participants in my hub were at most hobbyists, from what I could tell. The tight time budget and the objective of just getting it to work will give many programmers a totally different perspective on how to do things. Things that steal your time will hurt a lot.

How to teach C++

In the closing keynote of this year’s Meeting C++, Nicolai Josuttis remarked how hard it can be to teach C++ with its ever-expanding complexity. His example was teaching rookies about initialization in C++, i.e. whether to use assignment =, parens () or curly braces {}. He also asked for more application-level programmers to participate.

Well, I am an application programmer, and I also have experience with teaching C++. Last year I held a C++ introductory course for experienced C programmers who mostly had never used C++ before. From my experience, I can completely agree with what Nico had to say about teaching C++. The language and its subtlety can be truly overwhelming.

Most of the complexity in C++ boils down to tuning your code for optimal performance. We cannot just leave that out, can we? After all, the sole reason to use C++ is performance, right?

My approach

Performance is one key reason for using C++, no doubt about it. There are a few more, but let us not get distracted. What is even more important is the potential to optimize for performance – but you can do that later! It is actually quite crazy what performance crimes you can get away with in C++ and still have something pretty fast overall, especially with move semantics and improved RVO.

Given that, I picked a simple subset to start with:

  • Pass by value only; do not use pointers or references.
  • Structure your programs around simple data-only structs and functions transforming them.
  • Make good use of the data structures and algorithms from std.

Believe me, you too can write pretty useful programs this way. This is not too far from a data-oriented style anyway.
So, yes, we can actually leave out the performance-specific parts for quite some time, while still making sure our programs can be optimized eventually.
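For illustration, a minimal sketch of this style (all names hypothetical): plain data structs, functions transforming them, and everything passed by value.

#include <cstddef>
#include <numeric>
#include <string>
#include <vector>

// A simple data-only struct
struct Employee
{
  std::string name;
  int salary = 0;
};

// Functions transform values into new values - no pointers, no references
std::vector<Employee> withRaise(std::vector<Employee> employees, int amount)
{
  for (std::size_t i = 0; i < employees.size(); ++i)
    employees[i].salary += amount;
  return employees;
}

// Reuse the algorithms from std where possible
int totalSalary(std::vector<Employee> employees)
{
  return std::accumulate(employees.begin(), employees.end(), 0,
    [](int sum, Employee each) { return sum + each.salary; });
}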

And from there?

You can gradually start introducing references and move semantics when measurement shows that copying affects performance. This way, you can introduce tooling like a profiler and see the effects of passing things around by reference in a language where everything, by default, is passed by value. These things will start to make sense in a context.

The arguably better approach to move semantics is, of course, types that prevent you from copying, but allow moving. Most resources with side-effects are like this: file handles, locks, threads. You will need to introduce RAII for this to make sense, e.g. writing ctors and dtors.
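A sketch of such a type, e.g. a hypothetical move-only file handle:

#include <cstdio>
#include <utility>

class FileHandle
{
public:
  explicit FileHandle(char const* path)
    : file_(std::fopen(path, "r")) {}

  // RAII: the dtor releases the resource
  ~FileHandle()
  {
    if (file_)
      std::fclose(file_);
  }

  // Copying is forbidden...
  FileHandle(FileHandle const&) = delete;
  FileHandle& operator=(FileHandle const&) = delete;

  // ...but moving transfers ownership
  FileHandle(FileHandle&& other) noexcept
    : file_(std::exchange(other.file_, nullptr)) {}

  FileHandle& operator=(FileHandle&& other) noexcept
  {
    if (this != &other)
    {
      if (file_)
        std::fclose(file_);
      file_ = std::exchange(other.file_, nullptr);
    }
    return *this;
  }

private:
  std::FILE* file_ = nullptr;
};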

But you still do not need to write templates, use virtual-function polymorphism, or even pointers. And when you eventually get to those, you will have a good context to use them.

Have you taught C++ and have some experience to share? I would like to hear about it!

luabind deboostified tips and tricks

luabind deboostified is a fork of the luabind project that helps you expose APIs to Lua. As the name implies, it replaces the boost dependency with modern C++, which makes it a lot more pleasant to work with.

Here are a few tips and tricks I learned while working with it. Some tricks might be applicable to the original luabind – I do not know.

1. Splitting module registration

You can split the registration code for different classes. I usually add a register function per class, like this:

struct A {
  void doSomething();
  static luabind::scope registerWithLua();
};

struct B {
  void goodStuff();
  static luabind::scope registerWithLua();
};

You can then combine their registration code into a single module on the Lua side:

void registerAll(lua_State* L) {
  luabind::module(L)[
    A::registerWithLua(),
    B::registerWithLua()];
}

The implementation of a registration function looks like this:

luabind::scope A::registerWithLua()
{
  return luabind::class_<A>("A")
    .def("doSomething", &A::doSomething);
}

2. Multiple policies and multiple return values

Unlike C++, Lua has real multiple return values. You can use that by utilizing the return value policies that luabind offers. Let’s say you want to write this in Lua:

local x, y = getPosition(a)

The C++ side could look like this:

void getPosition(A const& a, float& x, float& y);

The deboostified fork needs its policies supplied in a type list. Let’s use a small helper meta-function to build that:

template <typename... T>
using joined = 
  typename luabind::meta::join<T...>::type;

Once you have that, you can expose it like this:

luabind::def("getPosition", &getPosition,
              joined<
                luabind::pure_out_value<2>,
                luabind::pure_out_value<3>
              >());

3. Specialized data structures using luabind::object

Using the converters in luabind is not the only way to make Lua values from C++. Almost everything you can do in Lua itself, you can do with luabind::object. Here is a somewhat contrived example:

luabind::object repeat(luabind::object what,
                       int count) {
  // Create a new table object
  auto result = luabind::newtable(
    what.interpreter());
  // Fill it as an array [1..N]
  for (int i = 1; i <= count; ++i)
    result[i] = what;
  return result;
}

This function can then be exported via luabind::def and used just like any other function. This is just the tip of the iceberg, though. For example, you can also write functions that, at runtime, behave differently when a number is passed in than when a table is passed in. You can find out the Lua type with luabind::type(myObject).
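A small sketch of such a dispatch – the function and its behavior are made up:

luabind::object firstOrSelf(luabind::object arg)
{
  // luabind::type returns the usual Lua type constants
  if (luabind::type(arg) == LUA_TTABLE)
  {
    // For tables, return the first array element
    luabind::object first = arg[1];
    return first;
  }
  // Everything else, e.g. a plain number, is passed through
  return arg;
}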

Of course, as soon as you want to create new objects to return to Lua, you need the lua_State pointer in that function. Using the interpreter from a passed-in luabind::object is one way, but I have yet to find another pleasant way to do this. It is probably possible to use the policies to do this, and have them pass that in as a special parameter, but for now I am using some complicated machinery to bind lambda functions that capture the Lua interpreter.

That’s it for now..

Keep in mind that these are not thoroughly researched best-practices, but patterns I have used to solve actual problems. There might be better solutions out there – if you know any, please let me know. Hope this helped!