Think of your code as a maintenance minefield

Most of the cost, effort and time of a software project is spent on the maintenance phase, the modification of a software product after delivery. If you think about all these resources as “negative investments” or debt settlement and try to associate your spending with specific code areas or even single lines of code, you’ll probably find that the maintenance cost per line is not equally distributed. There are lots of lines of code that stand the test of time without any maintenance work at all, a fair amount of lines that require moderate attention and some lines that seem to require constant and excessive developer care.

If you carry this image over to another metaphor, your code presents itself like a minefield for maintenance effort: most of the area is harmless and safe to travel. But there are some spots that will just blow up once touched. The difference is that as a software developer, you don’t tread on the minefield yourself, but you catch the flak if something happens.

You should try to deliver your code free of maintenance mines.

Spotting a maintenance mine

Identifying a line of code as a maintenance mine after the fact is easy. You probably already recognize the familiar code as “troublesome” because you’ve spent hours trying to understand and fix it. The commit history of your version control system can show you the “hottest” lines in your code – the areas that were modified most often. If you add tests for each new bug, you’ll find that the code is probably tested really well, with tests motivated by different bug issues. In hindsight, you can clearly distinguish low-effort code from high maintenance code.

But before delivery, all code looks the same. Or does it?

An example of a maintenance mine

Let’s look at an example. Our system monitors critical business data and sends out alerts if certain conditions are met. One implementation of the part sending the alerts is a simple e-mail sender. The code is given here:


import java.io.IOException;

public class SendEmailService {

  public void sendTo(Person person,
                     String subject,
                     String body) throws IOException {
    execCmd(buildCmd(person.email(), subject, body));
  }

  private String buildCmd(String recipientMailAddress, String subject, String body) {
    return "'/usr/bin/mutt -t " + recipientMailAddress + " -u " + subject + " -m " + body + "'";
  }

  private int execCmd(String command) throws IOException {
    return Runtime.getRuntime().exec(command).exitValue();
  }
}

This code has two interesting problems:

  • The first problem is that the code is written in Java, a platform-agnostic programming language, but depends on being run on a Linux (or sufficiently similar unixoid) operating system. The system it runs on needs to supply the /usr/bin/mutt program and have the e-mail sending settings properly configured, or else every attempt to run the send command will result in an error. This implicit dependency on the configuration of the production system isn’t the best way to deal with the situation, but it’s probably a one-time pain. The problem clearly presents itself and once the system is set up the right way, it is gone (until somebody tampers with the settings again). And my impression is that this code separates two concerns between development and operations rather nicely: development provides software that can send specific e-mails if operations provides a system that is capable of sending e-mails. There is no need to configure the system for e-mail sending and then do it all again for the software on said system.
  • The second problem looks like a maintenance mine. In the line where the code passes the command to the operating system (by calling Runtime.getRuntime().exec()), a Process object is returned that is only asked for its exitValue(), implying a wait for the termination of the system command. The line looks straightforward and to the point. No need to store and handle intermediate objects if you aren’t interested in them. But perhaps you should care:

By default, the created process does not have its own terminal or console. All its standard I/O (i.e. stdin, stdout, stderr) operations will be redirected to the parent process, where they can be accessed via the streams obtained using the methods getOutputStream(), getInputStream(), and getErrorStream(). The parent process uses these streams to feed input to and get output from the process. Because some native platforms only provide limited buffer size for standard input and output streams, failure to promptly write the input stream or read the output stream of the process may cause the process to block, or even deadlock.

Emphasis mine; see also: https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/lang/Process.html

This means that the Process object’s stdout and stderr outputs are stored in buffers of unknown (and system-dependent) size. If one of these buffers fills up, the execution of the command just stops, as if somebody had paused it indefinitely. So, depending on how talkative your call is, your alert e-mail will not be sent, your system will appear to have failed to recognize the condition and you’ll never see a stack trace or error exit value. All other e-mails (with less chatter) will go through just fine. This is a guaranteed source of frantic telephone calls, headaches and lost trust in your system and your ability to resolve issues.

And all the problems originate from one line of code. This is a maintenance mine with a stdout fuse.

The fix for this line might lie in the use of the ProcessBuilder class or your own utility code to drain the buffers. A minimal sketch of the draining approach could look like this (the helper class name is made up, and discarding the output is only one option – you could log it instead):
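import java.io.IOException;
import java.io.InputStream;

public class SafeCommandRunner {

  // runs the command and drains its merged stdout/stderr so the child
  // process can never block on a full output buffer
  public static int run(String... command) throws IOException, InterruptedException {
    ProcessBuilder builder = new ProcessBuilder(command);
    builder.redirectErrorStream(true); // merge stderr into stdout
    Process process = builder.start();
    try (InputStream output = process.getInputStream()) {
      byte[] buffer = new byte[4096];
      while (output.read(buffer) != -1) {
        // discard (or log) the output
      }
    }
    return process.waitFor();
  }
}

But how would you discover the mine before you deliver it?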

Mines often lie at borders

One thing that stands out in this line of code is that it passes control to the “outside”. It acts as a transit point to the underlying operating system and therefore has a lot of baggage to check. There are no safety checks implemented, so the transit must be regarded as unsafe. If you look out for transit points in your code (like passing control to the file system, the network, a database or another external system), make sure you’ve read the instructions and requirements thoroughly. The problems of a maintenance mine aren’t apparent in your code and only manifest during the interaction with the external system. And this is a situation that happens disproportionately often in production and comparatively seldom during development.

So, think of your code as a maintenance minefield and be careful around its borders.

What is your minesweeper story? Drop us a comment.

A new star in software verification?

F* (pronounced F star) is a functional programming language with a dependent type system that supports verification. What follows is a story about my experiences with F* and a concluding opinion on why languages like F* could be useful for the working software developer.

There is a big project using F* and related tools with the ambitious goal of building a verified implementation of TLS.
So far, judging from the website, there is a verified efficient implementation of the relevant cryptographic primitives, but their implementation of the protocol is not yet verified. That is certainly already a great achievement and got me interested in F*.

Setting it up

There is an F* tutorial which comes with an online editor, which is great if you just want to play a bit with the language. You don’t have to install anything, provided you have a recent mainstream browser. The tutorial is fun to do, but after I spent some time on it, I wanted to know what it is like to use F* off the prescribed path.

Sooner or later, you probably want to install it on your machine, which is not a problem at all if you are on a Linux system and an Emacs user. There is a binary and an Emacs mode, which are also easy to install on Windows, but there the combination was somehow prohibitively slow on my (fast) machine. In the following, pictures show excerpts from the Emacs mode.

Programming in F*

Let me first show you what programming in F* looks like. Here is a definition of the commonly known (higher-order) function “filter”, which takes a predicate (a function to bool in this case) and a list, and returns the “sublist” of all things on the list matching the predicate:

[Image: the definition of “filter” in F*, as rendered by the Emacs mode]

The arrows and strange dots are just “->” and “::” rendered by the Emacs mode. Otherwise, it is exactly the pattern-matching definition that would also work in OCaml or F# (where you can write exactly the same thing):

The second argument “l” is matched against the two things a list might be: either the empty list “[]” or a list consisting of some first element “x” and a remaining list “xs”. In the first case, we are done, and in the second case we decide if “x” should be in the result list and recursively call “filter” to deal with the “xs”.
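Spelled out in OCaml, where the same definition works verbatim, it reads like this (a textual sketch of what the screenshot shows):

(* keep exactly the elements of l that satisfy the predicate p *)
let rec filter p l =
  match l with
  | [] -> []
  | x :: xs -> if p x then x :: filter p xs else filter p xs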

The type of “filter” can be inferred by F*, but we could have declared it before the definition like this:

[Image: the declared type of “filter”]

The alpha is really fancy Emacs-mode rendering for “'a” and stands for an arbitrary type. This is also the type F* would infer, since what we did does not require anything special from the element type of the lists.

So the declaration says that our function takes some predicate “p” and a list of things “p” may be applied to, and returns a list of the same type, just with “Tot” written in front of it. The latter tells us that filter is a total function, i.e. it terminates and will not throw an exception (I guess there is a bit more to it, but so far that explanation has worked well for me). There is also a modifier “ML” which would tell us that the function behaves like a function in OCaml – so it could throw runtime exceptions or loop forever. After introducing you to some verification features, I will tell you why this distinction is important here.

Let us first rewrite our declaration a bit:

[Image: the declaration of “filter”, specialized to lists of natural numbers]

I replaced the alpha with a fixed type, the natural numbers (that will simplify things…), and introduced variables “p” and “l” that we can now use to construct types. What kind of types? Well, in this case, I want to construct a replacement for “(list N)” which tells us a bit more about the result of filtering a list. One reason to have types is to make sure that the result of some construction meets a specification. Usually these kinds of specifications are quite coarse and might just tell you that the result is an integer. So far, the specification of the filter function only tells us that the result will be a list of natural numbers. In F*, we can do a lot more. The following specifies the result of filter to be the sublist of all elements of “l” that satisfy “p”:

[Image: the refined declaration of “filter”, with the specification between curly braces]

Between the curly braces, I put a formula in predicate logic which describes this specification. The “∧” is a logical “and”, the “∀” is the “forall” from math, and “element x l'” is some function which evaluates to true if x is an element of l' and to false if this is not the case.

Now, one of the things that make me happy when playing with F* is that the definition of filter I gave in the beginning checks against this new declaration as is. This is due to F* using a problem solver (the SMT solver Z3) to prove that our specification is satisfied, and to the not-so-random coincidence that to prove the formula, you can use the same structural induction on the list as we used to recursively define “filter”.

For completeness, here is my definition of “element”:

[Image: the definition of “element”]

The “#a:eqtype” means that “a” is some type that comes with a total equality function for terms of type “a”. The natural numbers used above are such an “a”. Btw: those natural numbers are not limited in size, like e.g. “unsigned int”. They behave a lot like “BigInteger” in some languages and are certainly not fast, but they are good for proving things.

And now I should tell you that the declaration of “filter”, which nicely specified what filter does, is probably not what you should do. In fact, it is usually better to split things into definitions and “Lemma”s. So here is again the original definition of filter (with inferred type) and a Lemma about it:

[Image: the definition of “filter” with inferred type, and the Lemma “filter_specification”]

The “Lemma” is again like “Tot” above – it modifies the type. Technically, “filter_specification” is a function, but the only thing important here is whether its definition (“let rec filter_specification = […]”) type-checks against the declaration (“val filter_specification […]”) given – because that means the definition is a proof of the proposition encoded in the declaration. In particular, it is not important what this function returns. In fact, “Lemma” produces a type all of whose terms are equal.

Verifying Project Euler exercises

The first thing I usually do when checking out a new programming language is to solve a couple of Project Euler exercises to get fluent in the basic constructions. So I thought, why shouldn’t I try the same with F* and prove something about my solution code?

Problem 1 asks us to add all multiples of 3 and 5 below 1000. Usually, my mind drops the “below” and I get a wrong result. But not so this time! Here is my definition of the natural numbers below 1000, which produces the correct list of numbers, since specifying precisely what it is made me read the problem text very carefully:

[Image: the definition of “naturals_below_1000”, specified by its type]

Now, “naturals_below_1000” is specified by its type, which turned out to work well in this case, since it is not a general purpose function I will reuse.

Now, for the task at hand, we need to filter the numbers that are multiples of 3 or 5. I took the liberty of equivalently saying: numbers divisible by 3 or 5:

[Image: the definition of “divisible” and the filtered list]

I used modulo arithmetic “%”, defined in some library, to get an easy definition of “divides”. Also note that I use definitions in proving that I defined the correct list. Both are a bit fishy if I were to claim that I proved that some term in my program meets some mathematical definition commonly used for the concepts in the problem text. So you could call that lazy or efficient, according to taste.

Now, the number which the exercise asks for can be defined:

[Image: the definition of “ex1”, the sum of the filtered list]

F* can not just check if everything is correct but also evaluate expressions – in particular the expression “ex1”, which turns out to be the correct answer to the problem.
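For comparison, here is the whole computation written out in plain OCaml, without any proofs (a sketch; the names mirror the F* definitions from the screenshots):

let naturals_below_1000 = List.init 1000 (fun i -> i)

let divisible n k = n mod k = 0

let ex1 =
  naturals_below_1000
  |> List.filter (fun n -> divisible n 3 || divisible n 5)
  |> List.fold_left (+) 0

(* ex1 evaluates to 233168, the correct answer to Problem 1 *)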

Is it of use for the working software developer?

I used Scala some years ago to develop a large prototype and, while there were some pains, it worked about as well as expected and I’m quite sure it would have been less fun and less efficient to use Java for the same project. One important point in the decision to use Scala was the possibility to use Java libraries.
F# has the same kind of advantage, and I learned to love its type system from OCaml, which apparently was the basis for F#. So, for the same reasons that made Scala work for me when constructing prototypes, I could decide to use F# in similar situations. I don’t know how realistic my hope is that F* can be used on top of F# one day, but that would be a great improvement over just having F#, which is already something I look forward to. So far, it is possible to extract F# code from an F* program.

Usually, tools like F* are thought of and presented as tools to effectively prevent bugs, i.e. to catch all of them no matter the effort. I think what we need for our daily programming tasks are tools for preventing bugs efficiently. My guess is that dependent types in a practical language like F# are such a tool. This is of course a heavily biased view, since I have a background involving some dependent type theory, which makes me blind to the effort of learning how dependent types work.

One problem with dependently typed languages is that small changes can break lots of proofs that took effort to construct. This is particularly bad if you spell out the proofs in all detail, and it means refactoring can become quite expensive.

From what I have seen, the use of a problem solver during type checking reduces this problem quite a bit, so this might not be a real issue here and with other dependently typed languages that do something similar. And of course, substitute technologies that achieve similar goals, like unit tests or comments, also increase refactoring work.

So ultimately, apart from the fun, I also look forward to using F* as a tool. As I mentioned before, I already knew about dependent types and could translate what I see when learning F*, so I can’t tell you how much effort it takes if you don’t already know those things. But what I can tell you is that learning dependent type theory was certainly among the most rewarding scientific experiences in my life, so maybe you want to do that anyway.

Clean deployment of .NET Core application

Microsoft’s .NET Core framework has rightfully earned its spot among cross-platform frameworks. We like to use it, for example, as a RESTful backend for our React frontends. Unless you are burying your .NET Core application in a Docker container that never needs to be configured or customized, you may feel agitated by its default deployment layout: all the dependencies live next to some JSON configuration files in one directory.

While this is OK if you never need to look in there for a configuration file and change something, you may like to clean it up and put the files into different folders. This can be achieved by customizing your MSBuild process, but it is anything but straightforward!

Our goal

  1. Put all of our dependencies into a lib directory
  2. Put all of our configuration files into a configuration directory
  3. Remove unneeded files

The above should not require any interaction but be part of the regular build process.

The journey

We need to customize the MSBuild system to achieve our goal because the deps.json file must be rewritten to change the location of our dependencies. This is the hardest part! First we add the RoslynCodeTaskFactory as a package reference to our MSBuild setup in the csproj of our project. That way we can implement tasks using C#. We define two tasks that will help us in rewriting the deps.json:

<Project ToolsVersion="15.8" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <UsingTask TaskName="RegexReplaceFileText" TaskFactory="CodeTaskFactory" AssemblyFile="$(RoslynCodeTaskFactory)" Condition=" '$(RoslynCodeTaskFactory)' != '' ">
    <ParameterGroup>
      <InputFile ParameterType="System.String" Required="true" />
      <OutputFile ParameterType="System.String" Required="true" />
      <MatchExpression ParameterType="System.String" Required="true" />
      <ReplacementText ParameterType="System.String" Required="true" />
    </ParameterGroup>
    <Task>
      <Using Namespace="System" />
      <Using Namespace="System.IO" />
      <Using Namespace="System.Text.RegularExpressions" />
      <Code Type="Fragment" Language="cs">
        <![CDATA[ File.WriteAllText( OutputFile, Regex.Replace(File.ReadAllText(InputFile), MatchExpression, ReplacementText) ); ]]>
      </Code>
    </Task>
  </UsingTask>

  <UsingTask TaskName="RegexTrimFileText" TaskFactory="CodeTaskFactory" AssemblyFile="$(RoslynCodeTaskFactory)" Condition=" '$(RoslynCodeTaskFactory)' != '' ">
    <ParameterGroup>
      <InputFile ParameterType="System.String" Required="true" />
      <OutputFile ParameterType="System.String" Required="true" />
      <MatchExpression ParameterType="System.String" Required="true" />
    </ParameterGroup>
    <Task>
      <Using Namespace="System" />
      <Using Namespace="System.IO" />
      <Using Namespace="System.Text.RegularExpressions" />
      <Code Type="Fragment" Language="cs">
        <![CDATA[ File.WriteAllText( OutputFile, Regex.Replace(File.ReadAllText(InputFile), MatchExpression, "") ); ]]>
      </Code>
    </Task>
  </UsingTask>
</Project>

We put the tasks in a file called RegexReplace.targets in the Build directory and import it in our csproj using <Import Project="Build/RegexReplace.targets" />.
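In the csproj this amounts to a fragment like the following (the package version is only an example):

<ItemGroup>
  <!-- enables TaskFactory="CodeTaskFactory" with inline C# code -->
  <PackageReference Include="RoslynCodeTaskFactory" Version="2.0.7" PrivateAssets="all" />
</ItemGroup>

<Import Project="Build/RegexReplace.targets" />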

Now we can just add a new target that is executed after the publish target to our main project csproj to move the assemblies around, rewrite the deps.json and remove unwanted files:

  <Target Name="PostPublishActions" AfterTargets="AfterPublish">
    <ItemGroup>
      <Libraries Include="$(PublishUrl)\*.dll" Exclude="$(PublishUrl)\MyProject.dll" />
    </ItemGroup>
    <ItemGroup>
      <Unwanted Include="$(PublishUrl)\MyProject.pdb;$(PublishUrl)\.filenesting.json" />
    </ItemGroup>
    <Move SourceFiles="@(Libraries)" DestinationFolder="$(PublishUrl)/lib" />
    <Copy SourceFiles="Build\MyProject.runtimeconfig.json;Build\web.config" DestinationFiles="$(PublishUrl)\MyProject.runtimeconfig.json;$(PublishUrl)\web.config" />
    <Delete Files="@(Libraries)" />
    <Delete Files="@(Unwanted)" />
    <RemoveDir Directories="$(PublishUrl)\Build" />
    <RegexTrimFileText InputFile="$(PublishUrl)\MyProject.deps.json" OutputFile="$(PublishUrl)\MyProject.deps.json" MatchExpression="(?&lt;=&quot;).*[/|\\](?=.*\.dll|.*\.exe)" />
    <RegexReplaceFileText InputFile="$(PublishUrl)\MyProject.deps.json" OutputFile="$(PublishUrl)\MyProject.deps.json" MatchExpression="&quot;path&quot;: &quot;.*&quot;" ReplacementText="&quot;path&quot;: &quot;.&quot;" />
  </Target>

The result

All this work should result in a working application with a clean root directory layout: besides the lib and configuration directories, only the main project assembly, the web.config and the two JSON files remain. As far as we know, these remaining files cannot easily be relocated. The resulting layout is nevertheless quite clean and makes it easy for administrators to find the configuration files they need to customize.

Of course one can argue if the result is worth the hassle, but if your customers’ administrators and operations people value it, you should do it.

Have unregistered classes throw with the unity DI container

The Unity container (not to be confused with the game engine) is one of the most popular dependency injection tools for C#.
However, by default the Unity container will try to Resolve() all classes that it can. If you do not register a class, it will often just succeed anyway.
I much prefer explicitly registering classes, and having resolution just throw an exception if I try to resolve something I did not register.
There’s a viable solution for that on Stack Overflow, but it fails to throw when trying to resolve a class that was only registered via its interface.
Here’s our fixed version:

using System.Collections.Concurrent;
// Unity 5.x namespaces (adjust to the Unity version you use):
using Unity;
using Unity.Builder;
using Unity.Extension;
using Unity.Strategies;

public class UnityRegistrationTracking : UnityContainerExtension
{
  private readonly ConcurrentDictionary<NamedTypeBuildKey, bool> registrations =
    new ConcurrentDictionary<NamedTypeBuildKey, bool>();

  protected override void Initialize()
  {
    base.Context.Registering += Context_Registering;
    base.Context.Strategies.Add(
        new ValidateRegistrationStrategy(this.registrations), UnityBuildStage.Setup);
  }

  private void Context_Registering(object sender, RegisterEventArgs e)
  {
    var buildKey = new NamedTypeBuildKey(e.TypeFrom, e.Name);
    this.registrations.AddOrUpdate(buildKey, true,
      (key, oldValue) => true);
  }

  public class ValidateRegistrationStrategy : BuilderStrategy
  {
    private ConcurrentDictionary<NamedTypeBuildKey, bool> registrations;

    public ValidateRegistrationStrategy(ConcurrentDictionary<NamedTypeBuildKey, bool> registrations)
    {
      this.registrations = registrations;
    }

    public override void PreBuildUp(ref BuilderContext context)
    {
        var buildKey = new NamedTypeBuildKey(context.RegistrationType, context.Name);
        if (!this.registrations.ContainsKey(buildKey))
        {
          throw new ResolutionFailedException(buildKey.Type, buildKey.Name,
            string.Format("Type {0} was not explicitly registered in the container.", buildKey.Type.Name));
        }
    }
  }
}

We hook into two parts of the Unity API here:

  1. The registration, which is triggered when you call RegisterType() on the container
  2. The resolution process, which is triggered when Unity tries to resolve a specific instance.

The first part happens in Context_Registering. We just store the registration in a dictionary for later. It is important to use TypeFrom as a key, since we want to refer to objects by the interfaces they are registered with, not by their concrete implementations.
The second part is the ValidateRegistrationStrategy. All registered BuilderStrategy objects go into a list that is processed when an object is built. The UnityBuildStage.Setup acts as a sorting key to make sure that this strategy is executed as early as possible.
In the strategy, we check whether the requested type was previously registered, and throw an exception if it was not. It is important to use context.RegistrationType here, since context.Type will again contain the concrete type, and not the interface.
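Wiring the extension up then looks like this (a small sketch; IFoo, Foo and Bar are made-up example types):

var container = new UnityContainer();
container.AddExtension(new UnityRegistrationTracking());

container.RegisterType<IFoo, Foo>();

var works = container.Resolve<IFoo>(); // fine: registered via its interface
var boom = container.Resolve<Bar>();   // throws ResolutionFailedException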

Migrating from JScience quantities to Unit API 2.0

If you’re developing software that deals a lot with physical quantities you absolutely should use a library that defines types for quantities and supports safe conversions between units of measurement. Our go-to library for this in Java was JScience. The latest version of JScience is 4.3.1, which was released in 2012.

Since then a group of developers has formed that strives towards the standardization of a units API for Java. JScience maintainer Jean-Marie Dautelle is actively involved in this effort. The group operates under the name Units of Measurement, alongside its GitHub presence unitsofmeasurement.

Over the years there have been several JSRs (Java Specification Requests) by the group:

  • JSR-275: Units Specification (rejected)
  • JSR-363: Units of Measurement API 1.0
  • JSR-385: Units of Measurement API 2.0

The current state of affairs is JSR-385, which is the basis of this post. The Units of Measurement API 2.0, or Unit API 2.0 for short, was released in July 2019.

JARs

While JScience is distributed as one JAR (~600 KiB), a setup of Unit API involves three JARs (~300 KiB in total):

  • unit-api-2.0.jar
  • indriya-2.0.jar
  • uom-lib-common-2.0.jar

JScience offers a lot more functionality than just quantities and units, but that’s the part we have been using and what we are interested in.

The unit-api JAR only defines interfaces, which is the scope of JSR-385. So you need an implementation to do anything useful with it. The reference implementation is called Indriya, provided by the second JAR. The third JAR, uom-lib-common, is a utility library used by Indriya for common functionality shared with other projects under the Units of Measurement umbrella.

Using quantities

Here’s a simple use of a physical quantity with JScience, in this example Length:

import org.jscience.physics.amount.Amount;

import javax.measure.quantity.Length;

import static javax.measure.unit.SI.*;

// ...

final Amount<Length> d = Amount.valueOf(214, CENTI(METRE));
final double d_metre = d.doubleValue(METRE);

And here’s the equivalent code using Unit API 2.0 and Indriya:

import tech.units.indriya.quantity.Quantities;

import javax.measure.Quantity;
import javax.measure.quantity.Length;

import static javax.measure.MetricPrefix.CENTI;
import static tech.units.indriya.unit.Units.METRE;

// ...

final Quantity<Length> d = Quantities.getQuantity(214, CENTI(METRE));
final double d_metre = d.to(METRE).getValue().doubleValue();

Consistency

While JScience also defines aliases with alternative spellings like METER and constants for many prefixed units like CENTIMETER or MILLIMETER, Indriya encourages consistency and only allows METRE, CENTI(METRE), MILLI(METRE).

Quantity names

Most quantities have the same names in both projects, but there are some differences:

  • Amount<Duration> becomes Quantity<Time>
  • Amount<Velocity> becomes Quantity<Speed>

In these cases Unit API uses the correct SI names, i.e. time and speed. Wikipedia explains the difference between speed and velocity.

Arithmetic operations

The method names for the elementary arithmetic operations have changed:

  • plus() becomes add()
  • minus() becomes subtract()
  • times() becomes multiply()

Only the method name for division is the same:

  • divide() is still divide()
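In code, the renaming looks like this (a sketch reusing the respective Length imports from the examples above):

// JScience
Amount<Length> total = Amount.valueOf(1, METRE).plus(Amount.valueOf(50, CENTI(METRE)));

// Unit API 2.0 with Indriya
Quantity<Length> total2 = Quantities.getQuantity(1, METRE).add(Quantities.getQuantity(50, CENTI(METRE)));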

However, the runtime exceptions thrown on division by zero are different:

  • JScience: java.lang.ArithmeticException: / by zero
  • Indriya: java.lang.IllegalArgumentException: cannot initalize a rational number with divisor equal to ZERO

Type hints

If you divide or multiply two quantities the Java type system needs a type hint, because it doesn’t know the resulting quantity. Here’s how this looks in JScience versus Unit API:

With JScience:

Amount<Area> a = Amount.valueOf(100, SQUARE_METRE);
Amount<Length> b = Amount.valueOf(10, METRE);
Amount<Length> c = a.divide(b)
                    .to(METRE);

With Unit API:

Quantity<Area> a = Quantities.getQuantity(100, SQUARE_METRE);
Quantity<Length> b = Quantities.getQuantity(10, METRE);
Quantity<Length> c = a.divide(b)
                      .asType(Length.class);

Comparing quantities

If you want to compare quantities via compareTo(), isLessThan(), etc. you need quantities of type ComparableQuantity. The Quantities.getQuantity() factory method returns a ComparableQuantity, which is a sub-interface of Quantity.
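For example (a small sketch, again with the Indriya imports from above):

import tech.units.indriya.ComparableQuantity;

// ...

ComparableQuantity<Length> a = Quantities.getQuantity(150, CENTI(METRE));
ComparableQuantity<Length> b = Quantities.getQuantity(2, METRE);
boolean shorter = a.isLessThan(b); // true: 150 cm < 2 m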

Defining custom units

Defining custom units is very similar to JScience. Here’s an example for degree (angle), which is not an SI unit:

public static final Unit<Angle> DEGREE_ANGLE =
    new TransformedUnit<>("°", RADIAN,
        MultiplyConverter.ofPiExponent(1).concatenate(MultiplyConverter.ofRational(1, 180)));
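The custom unit can then be used like any predefined one – for example, converting a right angle to radians (a small sketch):

Quantity<Angle> rightAngle = Quantities.getQuantity(90, DEGREE_ANGLE);
double radians = rightAngle.to(RADIAN).getValue().doubleValue(); // ~1.5708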

Lombok-Tooling surprise

Some months ago we took over development and maintenance of a Java EE-based web application. The project has OK code for the most part and uses mostly standard libraries from the Java ecosystem. One of them is lombok, which strives to reduce the boilerplate code needed and to make the code more readable and concise.

Fortunately there is a plugin for IntelliJ, our favourite Java IDE, that understands lombok and allows for easy navigation and code hints. For example, you can jump from getMyField() to the lombok-annotated backing field of the getter and so on.

This sounds very good, but one day we were debugging some weird behaviour. An abstract class contained a special implementation of a Map as its field and was annotated with @Getter and @Setter. But somehow the type and contents of the Map changed without the setter ever being called.

What was happening here? After quite some time spent digging in the debugger we noticed that the getter was overridden in a subclass. Normally, IntelliJ shows an icon with navigation options beside overridden/overriding methods. Unfortunately for us, not for lombok-generated ones!

Consider the following code:

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

import lombok.Getter;

public class SuperClass {

    // lombok generates a getStrings() getter for this field
    @Getter
    private List<String> strings = new ArrayList<>();

    public List<Integer> getInts() {
        return new ArrayList<>();
    }
}

public class NotSoSuperClass extends SuperClass {
    @Override
    public List<String> getStrings() {
        return Arrays.asList("Many", "Strings");
    }

    @Override
    public List<Integer> getInts() {
        return Arrays.asList(1, 2, 3);
    }
}

The corresponding code in IntelliJ looks like this:

[Screenshot: the two classes as shown in IntelliJ]

Notice that IntelliJ puts a nice little icon next to the line number where a class or method is subclassed/overridden. But not so for the lombok getter. This tiny detail led to quite a surprise and cost us some hours.

Of course you can argue that the code design is broken, but hey, that was the state of things, and the tools are there to help you discover such weird quirks.

We opened an issue for the lombok IntelliJ plugin, so maybe it will be enhanced to provide such additional tooling information to be on par with plain old java code.


Non-determinism in C++

A deterministic program, when given the same input, will always produce the same output. This intuitive, albeit quite fuzzily defined, property is oftentimes pretty important for a correct program. Sources of non-determinism can be quite subtle – and once they creep into your program, they can propagate, amplify and have enormous consequences. It is pretty much the well-known butterfly effect.

When discussing this problem, it is important to know what exactly makes up the input and the output of the program. For example, when logging timestamps to a logfile and considering this an actual output, no two runs will ever be the same – so this is usually not considered an output relevant for determinism. Which brings us to the first common source of non-determinism:

Time

If any part of your program depends on the time it is run at, it will easily be non-deterministic. Common cases are initializing some variable depending on the current time, or using the time for some kind of numerical integration, like computing a value over time. Also, treating execution time as an output relevant for determinism is hopeless on a normal desktop computer – but it can be crucial for a real-time system.

Random number generation

Random number generation seems like an obvious candidate, yet most random number generators are not really random, but only pseudo-random. For example, std::mersenne_twister_engine will generate the same sequence of values every time when initialized with the same seed. So do not initialize it with a non-deterministic input like the time, and it will be predictable. However, std::random_device might not share this property and give you fresh non-deterministic input. As a weird middle ground, std::default_random_engine will probably give you the same results when compiled with the same compiler/standard library, but on another compiler version or OS, it will not. Subtle.
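A small sketch of the difference (std::mt19937 is the common instantiation of std::mersenne_twister_engine):

#include <iostream>
#include <random>

int main()
{
  // deterministic: the same seed yields the same sequence on every run
  std::mt19937 seeded{42};
  std::cout << seeded() << '\n';

  // non-deterministic on most platforms: fresh entropy for each call
  std::random_device device;
  std::cout << device() << '\n';
}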

The allocator

Another source of non-determinism that is pretty tricky is the allocator. For example, consider the following piece of code:

#include <set>

// Thingy is assumed to provide a value() accessor
template <class T>
T sum(std::set<Thingy*> const& set)
{
  T result{};
  for (auto const& each : set)
    result += each->value();
  return result;
}

Is this deterministic or not? It depends. Let’s assume that all the Thingys were allocated using the standard new. In that case, the actual pointers, Thingy*, are non-deterministic, and hence the order of the Thingy*s in the set is random. But does this matter? Well, if T is std::uint32_t, it does not: the order of additions does not matter for unsigned integers, even with overflows. However, if T is float, then it does matter and the whole result becomes unpredictable, at least in the general case (it will even be predictable if e.g. all the numbers in the computation are integers that are exactly representable as floats). Other languages have “insertion-ordered” containers to get around this problem. A sensible approximation in C++ is to use the (unordered_)set and (unordered_)map containers together with a separate list that fixes the iteration order.
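One possible shape of such a combination (a sketch, not a drop-in library type):

#include <unordered_set>
#include <vector>

// pairs an unordered_set for fast membership tests with a vector that
// preserves insertion order, so iteration stays deterministic
template <class T>
class insertion_ordered_set
{
public:
  bool insert(T const& value)
  {
    if (!lookup_.insert(value).second)
      return false; // already present, keep the original position
    ordered_.push_back(value);
    return true;
  }

  std::vector<T> const& items() const { return ordered_; }

private:
  std::unordered_set<T> lookup_;
  std::vector<T> ordered_;
};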

The thread scheduler

When you cannot really control the order of instructions, which is really the whole point of threading, you will have a harder time making things deterministic. Like the allocator problem, this is usually also paired with floating-point arithmetic. The workaround here is to make sure that the order of computation does not influence the final result. One common way to achieve this is to sort the output by a unique criterion. For example, if you use multiple threads to report the intersections of a bunch of line segments, you can later sort them by their position in space.
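A sketch of that workaround (the Intersection type is made up for illustration):

#include <algorithm>
#include <vector>

struct Intersection
{
  double x = 0.0;
  double y = 0.0;
};

// make the result order independent of which thread reported first
// by sorting on a unique spatial criterion
void normalize(std::vector<Intersection>& results)
{
  std::sort(results.begin(), results.end(),
            [](Intersection const& a, Intersection const& b)
            { return a.x != b.x ? a.x < b.x : a.y < b.y; });
}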

There’s of course the honorable mention of uninitialized variables, but I’m sure your static analyzer will complain about them. Any interaction with the “outside” of your program, any side effect, be it filesystems, user input, output or cosmic radiation, can lead to non-determinism, so be sure to know the context well enough and plan according to your determinism requirements.