Have unregistered classes throw with the Unity DI container

The Unity container (not to be confused with the game engine) is one of the most popular dependency injection tools for C#.
However, by default the Unity container will try to Resolve() any class that it can. If you do not register a class, the call will often just succeed anyway.
I much prefer explicitly registering classes, with resolution throwing an exception whenever I try to resolve something I did not register.
There’s a viable solution for that on Stack Overflow, but it fails to throw when trying to resolve a class that was only registered via its interface.
Here’s our fixed version:

// Requires the Unity 5 namespaces, e.g.:
// using System.Collections.Concurrent;
// using Unity;
// using Unity.Builder;
// using Unity.Extension;
// using Unity.Strategies;

public class UnityRegistrationTracking : UnityContainerExtension
{
  // All build keys that were explicitly registered via RegisterType().
  private readonly ConcurrentDictionary<NamedTypeBuildKey, bool> registrations =
    new ConcurrentDictionary<NamedTypeBuildKey, bool>();

  protected override void Initialize()
  {
    // Record every registration, and validate on every build-up.
    base.Context.Registering += Context_Registering;
    base.Context.Strategies.Add(
        new ValidateRegistrationStrategy(this.registrations), UnityBuildStage.Setup);
  }

  private void Context_Registering(object sender, RegisterEventArgs e)
  {
    // Key on TypeFrom: the interface (or base type) the class is registered as.
    var buildKey = new NamedTypeBuildKey(e.TypeFrom, e.Name);
    this.registrations.AddOrUpdate(buildKey, true,
      (key, oldValue) => true);
  }

  public class ValidateRegistrationStrategy : BuilderStrategy
  {
    private ConcurrentDictionary<NamedTypeBuildKey, bool> registrations;

    public ValidateRegistrationStrategy(ConcurrentDictionary<NamedTypeBuildKey, bool> registrations)
    {
      this.registrations = registrations;
    }

    public override void PreBuildUp(ref BuilderContext context)
    {
      // RegistrationType is the type the caller asked for, e.g. the interface.
      var buildKey = new NamedTypeBuildKey(context.RegistrationType, context.Name);
      if (!this.registrations.ContainsKey(buildKey))
      {
        throw new ResolutionFailedException(buildKey.Type, buildKey.Name,
          string.Format("Type {0} was not explicitly registered in the container.", buildKey.Type.Name));
      }
    }
  }
}

We hook into two parts of the Unity API here:

  1. Registration, which happens whenever you call RegisterType() on the container.
  2. Resolution, which happens whenever Unity tries to resolve a specific instance.

The first part happens in Context_Registering. We just store the registration in a dictionary for later. It is important to use TypeFrom as the key, since we want to refer to objects by the interfaces they are registered with, not their concrete implementations.
The second part is the ValidateRegistrationStrategy. All registered BuilderStrategy objects go into a list that is processed when an object is built. UnityBuildStage.Setup acts as a sorting key, to make sure that this strategy is executed as early as possible.
In the strategy, we check whether the requested type was previously registered, and throw an exception if it was not. It is important to use context.RegistrationType here, since context.Type will again contain the concrete type, and not the interface.
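
Here’s a minimal usage sketch. AddNewExtension and Resolve are standard Unity API; IService, Service and Unregistered are hypothetical placeholder types:

var container = new UnityContainer();
container.AddNewExtension<UnityRegistrationTracking>();

container.RegisterType<IService, Service>();

container.Resolve<IService>();     // registered via its interface: succeeds
container.Resolve<Unregistered>(); // not registered: throws ResolutionFailedException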

Migrating from JScience quantities to Unit API 2.0

If you’re developing software that deals a lot with physical quantities you absolutely should use a library that defines types for quantities and supports safe conversions between units of measurement. Our go-to library for this in Java was JScience. The latest version of JScience is 4.3.1, which was released in 2012.

Since then a group of developers has formed that strives towards the standardization of a units API for Java. JScience maintainer Jean-Marie Dautelle is actively involved in this effort. The group operates under the name Units of Measurement, with a matching GitHub presence at unitsofmeasurement.

Over the years there have been several JSRs (Java Specification Requests) by the group:

  • JSR-275: Units Specification (rejected)
  • JSR-363: Units of Measurement API 1.0
  • JSR-385: Units of Measurement API 2.0

The current state of affairs is JSR-385, which is the basis of this post. The Units of Measurement API 2.0, or Unit API 2.0 for short, was released in July 2019.

JARs

While JScience is distributed as one JAR (~600 KiB), a setup of Unit API involves three JARs (~300 KiB in total):

  • unit-api-2.0.jar
  • indriya-2.0.jar
  • uom-lib-common-2.0.jar

JScience offers a lot more functionality than just quantities and units, but that’s the part we have been using and the part we are interested in here.

The unit-api JAR only defines interfaces, which is the scope of JSR-385. So you need an implementation to do anything useful with it. The reference implementation is called Indriya, provided by the second JAR. The third JAR, uom-lib-common, is a utility library used by Indriya for common functionality shared with other projects under the Units of Measurement umbrella.

Using quantities

Here’s a simple use of a physical quantity with JScience, in this example Length:

import org.jscience.physics.amount.Amount;

import javax.measure.quantity.Length;

import static javax.measure.unit.SI.*;

// ...

final Amount<Length> d = Amount.valueOf(214, CENTI(METRE));
final double d_metre = d.doubleValue(METRE);

And here’s the equivalent code using Unit API 2.0 and Indriya:

import tech.units.indriya.quantity.Quantities;

import javax.measure.Quantity;
import javax.measure.quantity.Length;

import static javax.measure.MetricPrefix.CENTI;
import static tech.units.indriya.unit.Units.METRE;

// ...

final Quantity<Length> d = Quantities.getQuantity(214, CENTI(METRE));
final double d_metre = d.to(METRE).getValue().doubleValue();

Consistency

While JScience also defines aliases with alternative spellings like METER and constants for many prefixed units like CENTIMETER or MILLIMETER, Indriya encourages consistency and only allows METRE, CENTI(METRE), MILLI(METRE).

Quantity names

Most quantities have the same names in both projects, but there are some differences:

  • Amount<Duration> becomes Quantity<Time>
  • Amount<Velocity> becomes Quantity<Speed>

In these cases Unit API uses the correct SI names, i.e. time and speed. Wikipedia explains the difference between speed and velocity.

Arithmetic operations

The method names for the elementary arithmetic operations have changed:

  • plus() becomes add()
  • minus() becomes subtract()
  • times() becomes multiply()

Only the method name for division is the same:

  • divide() is still divide()

However, the runtime exceptions thrown on division by zero are different:

  • JScience: java.lang.ArithmeticException: / by zero
  • Indriya: java.lang.IllegalArgumentException: cannot initalize a rational number with divisor equal to ZERO
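
Here’s a short sketch of the renamed operations, using the Quantities and METRE imports from the example above:

Quantity<Length> a = Quantities.getQuantity(3, METRE);
Quantity<Length> b = Quantities.getQuantity(2, METRE);

a.add(b);       // was plus() in JScience
a.subtract(b);  // was minus()
a.multiply(2);  // was times()
a.divide(2);    // divide() keeps its name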

Type hints

If you divide or multiply two quantities the Java type system needs a type hint, because it doesn’t know the resulting quantity. Here’s how this looks in JScience versus Unit API:

With JScience:

Amount<Area> a = Amount.valueOf(100, SQUARE_METRE);
Amount<Length> b = Amount.valueOf(10, METRE);
Amount<Length> c = a.divide(b)
                    .to(METRE);

With Unit API:

Quantity<Area> a = Quantities.getQuantity(100, SQUARE_METRE);
Quantity<Length> b = Quantities.getQuantity(10, METRE);
Quantity<Length> c = a.divide(b)
                      .asType(Length.class);

Comparing quantities

If you want to compare quantities via compareTo(), isLessThan(), etc. you need quantities of type ComparableQuantity. The Quantities.getQuantity() factory method returns a ComparableQuantity, which is a sub-interface of Quantity.
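
For example (ComparableQuantity comes from the tech.units.indriya package):

ComparableQuantity<Length> a = Quantities.getQuantity(1, METRE);
ComparableQuantity<Length> b = Quantities.getQuantity(50, CENTI(METRE));

boolean longer = a.isGreaterThan(b); // true, units are converted for the comparison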

Defining custom units

Defining custom units is very similar to JScience. Here’s an example for degree (angle), which is not an SI unit:

public static final Unit<Angle> DEGREE_ANGLE =
    new TransformedUnit<>("°", RADIAN,
        MultiplyConverter.ofPiExponent(1).concatenate(MultiplyConverter.ofRational(1, 180)));
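
Once defined, such a unit converts just like the built-in ones, e.g. (RADIAN again from tech.units.indriya.unit.Units):

Quantity<Angle> angle = Quantities.getQuantity(180, DEGREE_ANGLE);
double radians = angle.to(RADIAN).getValue().doubleValue(); // ~3.14159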

Lombok-Tooling surprise

Some months ago we took over development and maintenance of a Java EE-based web application. The project has okay code for the most part and uses mostly standard libraries from the Java ecosystem. One of them is lombok, which strives to reduce the boilerplate code needed and to make the code more readable and concise.

Fortunately there is a plugin for IntelliJ, our favourite Java IDE, that understands lombok and allows for easy navigation and code hints. For example, you can jump from getMyField() to the lombok-annotated backing field of the getter, and so on.

This sounds very good, but one day we were debugging a weird behaviour. An abstract class contained a special implementation of a Map as its field and was annotated with @Getter and @Setter. But somehow the type and contents of the Map changed without the setter ever being called.

What was happening here? After quite some time spent digging in the debugger we noticed that the getter was overridden in a subclass. Normally, IntelliJ shows an icon with navigation options beside overridden/overriding methods. Unfortunately for us, not for lombok annotations!

Consider the following code:

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

import lombok.Getter;

public class SuperClass {

    @Getter
    private List<String> strings = new ArrayList<>();

    public List<Integer> getInts() {
        return new ArrayList<>();
    }
}

public class NotSoSuperClass extends SuperClass {
    @Override
    public List<String> getStrings() {
        return Arrays.asList("Many", "Strings");
    }

    @Override
    public List<Integer> getInts() {
        return Arrays.asList(1, 2, 3);
    }
}

The corresponding code in IntelliJ looks like this:

[Screenshot: the code above in IntelliJ, with override gutter icons shown for getInts() but missing for the lombok-generated getStrings()]

Notice that IntelliJ puts a nice little icon next to the line number wherever a class or method is subclassed/overridden, but not so for the lombok getter. This tiny detail led to quite a surprise and cost us some hours.

Of course you can argue that the code design is broken, but hey, that was the state of the project, and the tools are there to help you discover such weird quirks.

We opened an issue for the lombok IntelliJ plugin, so maybe it will be enhanced to provide this additional tooling information and be on par with plain old Java code.


When everything’s an issue

Years ago, I read the novel “Manna” by Marshall Brain. It’s a science fiction story about the robotic takeover and it features “Manna”, an (artificially) intelligent work management software that replaces human managers and runs the shop. The story starts with “Manna” and goes on to explore the implications of such a system on mankind. It’s a good read and contains a lot of thoughts about what kind of labor we want to do.

The idea that really captivated me was the company that runs itself. Don’t get me wrong: Most organizations are so big that the individual employee cannot see the big picture anymore. Those organizations seem to “run themselves” to the untrained eye, but it is still humans that manage the workload. And like all humans, they make mistakes and, perhaps very subtly, infuse their own selfish goals into the process. But an organization that has its own goals and instructs its workers (humans and machines alike) directly is an interesting thought for me.

It also is totally unrealistic with today’s technology and probably contains some risks that should be explored carefully before implementing such a system in the wild.

But what about a more down-to-earth approach that achieves the core advancement of “Manna” without many or all of the risks? What if the organization doesn’t instruct, but makes its needs visible and relies on humans to interpret and schedule those needs and fulfill them? In essence, a “Manna” system without the sensors and decision-making and certainly without the creepy snooping tendencies. Built with today’s technology, that’s called an automated work scheduler.

And that is what we’ve built at our company. We use an issue tracking system to manage and schedule our project work already. We extended its usage to manage and schedule our administrative work, too. Now, every work unit in the company is (or could be) accompanied by an issue in the issue tracker. And just like software developers don’t change code without an issue, we don’t change our company’s data or decisions without an issue that also provides a place for documentation related to the process. We’ve come to the conclusion that most of those administrative issues are recurring. So we automated their creation.

Our very early stage “Manna” system is called “issue scheduler”, a highly creative name on its own. It is a system that basically contains a lot of glorified cron expressions and just enough data to create a meaningful issue in the issue tracker, should a cron expression fire. So basically, our company creates issues for us on a fixed schedule. Let’s look at some examples:

  • We add a new article to our developer blog (you’re reading it right now) every week. This means that every week, our “issue scheduler” creates a blog issue and assigns it to the next author in line. This is done some time in advance to give the author enough time to prepare and possibly trade with other authors. Our developer blog has the “need” for one article each week, but it doesn’t require a particular topic or author. This need is made visible by the automatic blog issues and it is our duty to fulfill this need. On a side note: Maybe you’ve noticed that I wrote two blog articles in direct succession. There is definitely some issue trading going on behind the scenes right now!
  • We tend to have many plants in our office. To look at something green and living adds to our comfort. But those plants have needs, too. They probably make their needs pretty visible, but we aren’t expert plant caregivers. So we gave the “issue scheduler” some entries to inform us about the regular watering and fertilization duties for our office plants. A detailed description of the actual work exists in our company wiki and a link to it gives the caregiver of the week all the information that’s needed.
  • Every month, we are required to file a sales tax summary report. This is a need of the German government agencies that we incorporate into our company’s needs. To work on this issue, you need more information and security clearances than fit on a wiki page, but the process is documented nonetheless. So once a month, our company automatically creates an issue that says “do your taxes now!” and assigns it to our administrative employees.

These are three examples of recurring tasks that are covered by our poor man’s “Manna” system. To give you a perspective on the scale of this system for a small company like ours, we currently have about 140 distinct rules for recurring issues. Some of them fire almost every day, some of them sleep for years and wake up just in time to express a certain need of the company that would otherwise surely be forgotten or rediscovered after the fact.

This approach relieves us from the burden to remember all the tasks and their schedules and lets us concentrate on completing them. And our system, in contrast to “Manna” in the story, isn’t judging or controlling. If you don’t think the plants need any more water, just resolve the issue with “won’t fix”. Perhaps you can explain your decision in a short comment for other humans, but our “issue scheduler” won’t notice.

This isn’t the robotic takeover, after all. It’s just automated scheduling of recurring work. And it works great.

Twelve years a bug

Recently, a friend recommended the talk “Love your bugs” by Allison Kaptur to me. It’s a good talk with a powerful message: Every bug is a chance to learn. The only prerequisite for this chance: You need to be aware of the bug. You need to see it and then understand and fix it.

What if you don’t see a bug for over a decade?

You can still learn from it – a lot. Here is a story about a bug that lingered in my code for twelve years and had an impact on the software’s results without anybody, including me, noticing.

Let’s say the software is a measurement system that records physical measurement values that cannot be reenacted easily. The samples are measured in rapid succession and then stored away. The measurement results act as a quality quantifier as in “the sample was this good at that time”. The sample’s quality diminishes over time, even with perfect storage conditions. And just to be sure that the measurement is accurate, it isn’t performed once, but twice in quick succession: The measurement device traverses the sample in one direction, recording the values and uses the return path for a second measurement. This results in two measurement streaks that should, in theory, be very similar.

The raw measurement values are aggregated in different ways. In the final report, the maximum value over both measurement streaks is the most prominent and most important aspect. There is nothing exciting or error-prone about finding the maximum value in a value series, even twelve years ago. But I tested the code nonetheless. The whole value aggregation code was under test and had a good test coverage. All tests showed their thumbs up.

But twelve years ago, at the end of a workweek, on a Friday at 17 o’clock, I made a small and easy refactoring to the code. I know this in such detail because of the wonders of version control. Without version control, I probably would have learnt a lot less from this bug.

The code before the refactoring contained two nearly identical sections for the measurement streaks, making it duplicated code. There were only two differences: The first section of code counted from 0 to 99 and stored the values in the first streak’s array. The second section counted from 99 to 0, because the measurement device travels backwards over the sample, and stored the values into the second array.

My refactoring brought both pieces of code together: A new method with two parameters was introduced and called at both places. The first parameter specified a series of positions (0 to 99 or backwards), the second parameter specified the array to store into. All automated tests approved the changes. My manual testing showed no differences. The refactoring was going live.

Twelve years later, while working on a new engine for the measurement device, the customer took the raw values from the journal and performed some manual calculations. The maximum value calculation was wrong. It wasn’t wrong all of the time, but also not correct for all measurements. When it went wrong, it selected the second greatest value or, seldom, the third greatest value as the maximum value.

All the tests still insisted that everything was correct – as they had for twelve long years. There was no other change to the code that could affect the aggregation in any way.

There are two different ways that led me to find the bug’s origin. The first way was to inspect the trail of commits in the area of code that performs the aggregation. My findings were that the abovementioned refactoring was the most likely culprit. The second way was to write more tests to ensure that yes, the maximum value calculation was indeed correct if given the correct values. Using the examples my customer had examined, I could prove with enough certainty that, given the input of all 200 values, the correct maximum value was returned in all cases. The aggregation therefore wasn’t given all the values!

And that led directly to the bug: The first measurement streak used the same array to store its values as the second streak. During the refactoring, I must have copied and pasted the call to the new method and forgotten to change the second parameter. Of 200 measured values, only 100 got stored permanently. The first 100 values were stored and promptly overwritten as the measurement device returned to its park position. Changing the calls to store into both arrays made everything work as intended, just like before the refactoring.
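
In code, the mistake probably looked something like this minimal reconstruction (all names here are hypothetical, the real system is of course more involved):

class StreakRecorder {
    private double measureAt(int position) {
        return position; // stand-in for the real measurement device
    }

    private int[] positions(int from, int to) {
        int step = from <= to ? 1 : -1;
        int[] result = new int[Math.abs(to - from) + 1];
        for (int i = 0; i < result.length; i++) {
            result[i] = from + i * step;
        }
        return result;
    }

    private void recordStreak(int[] positions, double[] target) {
        for (int i = 0; i < positions.length; i++) {
            target[i] = measureAt(positions[i]);
        }
    }

    public void measureBothStreaks(double[] first, double[] second) {
        recordStreak(positions(0, 99), first);
        recordStreak(positions(99, 0), first); // the pasted call: the second
                                               // parameter should be 'second'
    }
}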

How did no test, automated or manual, catch this bug? It turns out that most automated tests were too focussed to indicate a problem. All unit tests that secured the maximum value calculation used given sets of values, they didn’t care about the origin of these value sets. The integration tests that covered the whole measurement process should have raised objections to the refactoring. But they used given sets of measurement values, too. And by chance, the given maximum value was in the second measurement streak. In fact, the given set of measurement values was produced by a loop that just increased a value. The greatest value was always the last one.

The manual tests had the same problem: The simulated measurement device produced measurement values that were either fixed or random. If your whole measurement uses the same fixed values, you don’t see it if half of them went missing. And if your measurement uses random values, you’ll have to pay close attention to a detail that isn’t in focus because it is unchanged. Except that one time when it was changed recently. Remember the change date? Friday afternoon, only minutes before the weekend? Not the best time to manually test a change that is a simple standard refactoring, after all.

So, what have I learnt? First: automated tests, even with great test coverage, aren’t enough. There is so much leeway in the setup of these tests that they will have blind spots without your knowledge. Second: A code review by another human (or even by the same human, some days later) might have caught the bug. It was painfully obvious in hindsight. The problem? “Might have” is a heuristic, just like your automated tests are.

My guess is that mutation testing would have shown the blind spot of the existing tests – among several hundred others. The heuristic is now your trained eye that sifts through the results and separates false positives from true positives.

Right now, I’ve fixed the bug, kept all additional tests and added one more: An integration test that performs a measurement with fixed values and checks nothing else but if all 200 values are stored at the end of the measurement. It’s oddly specific, but conveys this story in an automated fashion.

Oh, and I made another refactoring: I’ve replaced the arrays with collections. Hopefully, I won’t regret this one twelve years in the future. I’ve made it on a Monday.

Non-determinism in C++

A deterministic program, when given the same input, will always produce the same output. This intuitive, albeit quite fuzzily defined, property is oftentimes pretty important for a correct program. Sources of non-determinism can be quite subtle – and once they creep into your program, they can propagate, amplify and have enormous consequences. It is pretty much the well-known butterfly effect.

When discussing this problem, it is important to know what exactly makes up the input and the output of the program. For example, when logging timestamps to a logfile, and considering this an actual output, no two runs will ever be the same – so this is usually not considered an output relevant for determinism. Which brings us to the first common source of non-determinism:

Time

If any part of your program depends on the time it is run at, it will easily be non-deterministic. Common cases are using the current time to initialize some variable, or using the time for some kind of numerical integration, like accumulating a value over time. Also, treating execution time as an output relevant for determinism is hopeless on a normal desktop computer – but it can be crucial for a real-time system.

Random number generation

Random number generation seems like an obvious candidate, yet most random number generators are not really random, but only pseudo-random. For example, std::mersenne_twister_engine will generate the same sequence of values every time when initialized with the same seed. So as long as you do not initialize it with a non-deterministic input like the time, it will be predictable. However, std::random_device might not share this property and give you fresh non-deterministic input. As a weird middle ground, std::default_random_engine will probably give you the same results when compiled with the same compiler/standard-lib, but on another compiler version or OS, it will not. Subtle.
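
A quick check of that reproducibility (std::mt19937 is the standard typedef for a mersenne_twister_engine with common parameters):

#include <iostream>
#include <random>

int main()
{
  std::mt19937 a{42};
  std::mt19937 b{42};
  // Same seed, same engine: the sequences are identical, this prints 1.
  std::cout << (a() == b()) << '\n';
}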

The allocator

Another source of non-determinism that is pretty tricky is the allocator. For example, consider the following piece of code:

#include <set>

// Thingy is some class with a numeric value() member function.
template <class T>
T sum(std::set<Thingy*> const& set)
{
  T result{};
  for (auto const& each : set)
    result += each->value();
  return result;
}

Is this deterministic or not? It depends. Now let’s assume that all the Thingys were allocated using standard new. In that case, the actual pointers, Thingy*, are non-deterministic, and hence the order of the Thingy*s in the set is essentially random. But does this matter? Well, if T is std::uint32_t, it does not: the order of addition does not matter for unsigned integers, even with overflows. However, if T is float, then it does matter, and the whole result becomes unpredictable, at least in the general case (it will even be predictable if, e.g., all the numbers in the computation are integers that are exactly representable as floats). Other languages have “insertion-ordered” containers to get around this problem. A sensible approximation in C++ is to use the (unordered_)set and (unordered_)map containers together with another list to iterate on, as sketched below.
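
One possible sketch of that approximation: the unordered_set answers membership queries, while a vector preserves insertion order for deterministic iteration.

#include <unordered_set>
#include <vector>

struct Thingy { float value() const { return v; } float v; };

struct InsertionOrderedSet
{
  bool insert(Thingy* t)
  {
    if (!members.insert(t).second)
      return false; // already present, keep the original position
    ordered.push_back(t);
    return true;
  }

  std::unordered_set<Thingy*> members; // fast membership checks
  std::vector<Thingy*> ordered;        // deterministic iteration order
};

float sum(InsertionOrderedSet const& set)
{
  float result{};
  for (auto const* each : set.ordered) // order no longer depends on pointer values
    result += each->value();
  return result;
}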

The thread scheduler

When you cannot really control the order of instructions, which is really the whole point of threading, you will have a harder time making things deterministic. Like the allocator problem, this is usually also paired with floating-point arithmetic. The workaround here is to make sure that the order of computation does not influence the final result. One common way to achieve this is to sort the output by a unique criterion. For example, if you use multiple threads to report the intersections of a bunch of line segments, you can later sort them by their position in space.
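
A small sketch of that normalization step (Intersection and its members are placeholders):

#include <algorithm>
#include <tuple>
#include <vector>

struct Intersection { float x, y; };

void make_deterministic(std::vector<Intersection>& found_in_parallel)
{
  // Whatever order the threads delivered the results in, it is now unique:
  std::sort(found_in_parallel.begin(), found_in_parallel.end(),
    [](Intersection const& a, Intersection const& b)
    { return std::tie(a.x, a.y) < std::tie(b.x, b.y); });
}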

There’s of course the honorable mention of uninitialized variables, but I’m sure your static analyzer will complain about those. Any interaction with the “outside” of your program, any side-effect, be it filesystems, user input, output or cosmic radiation, can lead to non-determinism, so be sure to know the context well enough and plan according to your determinism requirements.

Debugging Web Pages for iOS

Web developers use browser tools like the Web Inspector in Chrome and Safari or the Developer Tools in Firefox to develop, debug and test web pages. In Safari you have to enable the Develop menu first: Safari -> Preferences… -> Advanced -> Show Develop menu in menu bar

All these tools offer modes where you can display the page layout at various screen sizes. In Safari this is called the Responsive Design Mode and can be found in the Develop menu. This is essential for checking the page layout for mobile devices. There are, however, some differences in behaviour which can only be tested on real devices or in a simulator. For example, dropdown menus can trigger a wheel selector on mobile devices, while the desktop browser renders them as regular dropdown menus, even in responsive design mode.

Here are some tips for debugging web pages for iOS devices in the simulator:

Using Web Inspector with the iOS Simulator

Within the mobile Safari browser you can’t simply open the Web Inspector console as you would do when developing a web page using a desktop browser. But you can connect the Web Inspector of your desktop Safari to the mobile Safari browser instance running in the iOS simulator:

  • Start the iOS simulator from Xcode: Xcode -> Open Developer Tool -> Simulator
  • Select the desired device: Hardware -> Device -> e.g. iOS 12.1 -> iPhone SE
  • Open the web page in Safari within the simulator
  • Open the desktop version of Safari

In Safari’s Develop menu the simulator now shows up as a device, e.g. “Simulator – iPhone SE – iOS 12.1 (16B91)”. The web page you opened in the simulator should be listed as a submenu item. If you click this menu item the Web Inspector opens. It’s now connected to the simulated Safari instance and you can debug the mobile variant of your web page.

Workaround for Clearing the Cache

When using a desktop web browser you can easily bypass the local browser cache when reloading a web page by holding the shift key while pressing the reload button. Sometimes this is necessary to see changes take effect while developing a web application. However, this doesn’t work in Safari running within the iOS simulator. There’s a little workaround: You can open the web page in an incognito tab, which means the cache is cleared each time you close the tab, and then re-open the page in a new incognito tab.