Transposition as a programming technique

If you have been programming for a while, you will probably, and hopefully, agree that it is preferable to have a sequence of functions as opposed to the same number of functions nested. In other words, call-graph breadth is better than depth. Among other reasons, a “linear” set of instructions is often easier to follow, which is better for humans, and also tends to not go haywire with what memory it touches, which is better for computers.
However, deep call hierarchies occur much more often than I would like. I have seen call stacks well beyond 200 functions deep. But this need not be – one can often be turned into the other by transposition. Transposition derives from the Latin transponere, which roughly means “to put across”. With matrices, it means swapping rows and columns. Similarly, we can swap call-hierarchy depth for breadth.

The example

A couple of months ago, I was tasked with programming a standing-wave display of power-line voltage curves. As you might know, the signal is roughly sine-wave shaped at about 50Hz. The signal is captured in time windows of 200ms, i.e. there’s a new packet of data 5 times a second with 10 sine cycles in it. However, the frequency value jitters just enough to make the signal drift a bit in the 200ms window, i.e. the wave moves forwards and backwards a little bit. The standing-wave feature tries to remove that drift and make it seemingly stationary in our fixed time window, so changes in amplitude become more visible.

Algorithm 1

The idea seems simple enough for just one signal:

  1. In the previous wave, search backwards to find the spot where the wave crosses from positive to negative.
  2. Take the previous wave from that point on and stitch it together with the current one, and cut that off at 200ms of data.

But there is not just one signal, there can be hundreds. And they should all be aligned to one designated “master” signal. So now we add extra steps:

  3. For all other signals, find the wave packets overlapping (in time) with our new stitched wave packet.
  4. Order them, and stitch them to a new wave packet covering exactly the same time window.

Now even in this version, finding the right packets for a time interval can be trickier than it seems, because the values for the signals come in irregularly and can be shifted significantly. So you can just buffer the last N (5?) packets for each signal and search in there. Still, one more requirement remains. For the display of archived data, the algorithm should work on batches of waves, i.e. many seconds worth, which made step 3 harder by extending the search space. So add:

  0. For each previous and current pair in a given time-interval:

Now the whole thing was pretty much implemented with steps 0 to 4 being functions, each calling into the next step, with major loops on the 0th and 3rd step. The wave data flows through these implementation layers vertically, i.e. from step 0 to step 4 and back, but the control flow of the program does not. It flows perpendicular to it, horizontally, solely controlled by the outer-most loop. It is intuitive to write it this way – after all, the control flow follows the flow of time in the data we are processing – but the code was not particularly easy to follow, especially with the search in step 3 becoming unnecessarily complex.
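
To make the shape of this concrete, here is a minimal sketch in Java of the “deep” variant – the names are hypothetical and the signal processing is reduced to placeholders, since the actual project code is not shown here. Every step is a function calling into the next, and the wave data gets dragged down through the layers:

import java.util.List;

class DeepStitching {

  record Wave(long startMillis, double[] samples) { }

  // step 0: loop over (previous, current) pairs of the master signal
  static void processInterval(List<Wave> master, List<List<Wave>> otherSignals) {
    for (int i = 1; i < master.size(); i++) {
      stitchMasterPair(master.get(i - 1), master.get(i), otherSignals);
    }
  }

  // step 1: search backwards for the positive-to-negative crossing
  static void stitchMasterPair(Wave previous, Wave current, List<List<Wave>> otherSignals) {
    int crossing = findCrossingBackwards(previous);
    stitchAndAlign(previous, crossing, current, otherSignals);
  }

  // step 2: stitch the master pair, then hand the result further down
  static void stitchAndAlign(Wave previous, int crossing, Wave current, List<List<Wave>> otherSignals) {
    Wave stitched = stitch(previous, crossing, current);
    for (List<Wave> signal : otherSignals) {
      alignToMaster(stitched, signal); // steps 3 and 4 happen below this call
    }
  }

  static int findCrossingBackwards(Wave wave) {
    double[] s = wave.samples();
    for (int i = s.length - 1; i > 0; i--) {
      if (s[i - 1] > 0 && s[i] <= 0) {
        return i;
      }
    }
    return 0;
  }

  static Wave stitch(Wave previous, int crossing, Wave current) {
    // placeholder: the real code joins the tail of the previous wave with the
    // current one and cuts the result off at 200ms of data
    return current;
  }

  static void alignToMaster(Wave stitched, List<Wave> signalPackets) {
    // placeholder: steps 3 and 4 – find the packets overlapping the stitched
    // window, order them and stitch them to cover the same time window
  }
}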

Algorithm 2

Now let us try transposing this, and match the flow of data with our control flow:

  1. Gather all relevant signals for the time interval and sort their packets.
  2. Extract all the “stitching” time codes from the master signal.
  3. For all signals, traverse pairs together with the time codes and stitch accordingly.

The whole process becomes more digestible, and processing the data in stages made it obvious that sorted data makes using a “merge” type algorithm very easy.
Both algorithms use the same data, but the second makes it explicit, while the first just passes it through the call-stack in chunks.
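
Again as a sketch with hypothetical names (not the actual project code), the transposed version might look like this in Java – the three stages follow each other, and the control flow now runs in the same direction as the data:

import java.util.ArrayList;
import java.util.Comparator;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

class StagedStitching {

  record Wave(long startMillis, double[] samples) { }
  record StitchCut(long fromMillis, long toMillis) { }

  static Map<String, List<Wave>> processInterval(List<Wave> master, Map<String, List<Wave>> signals) {
    // stage 1: gather all relevant packets and sort them by time
    Map<String, List<Wave>> sorted = new HashMap<>();
    signals.forEach((name, packets) -> {
      List<Wave> copy = new ArrayList<>(packets);
      copy.sort(Comparator.comparingLong(Wave::startMillis));
      sorted.put(name, copy);
    });

    // stage 2: extract the "stitching" time codes from the master signal
    List<StitchCut> cuts = extractCuts(master);

    // stage 3: traverse each signal together with the time codes and stitch
    Map<String, List<Wave>> result = new HashMap<>();
    sorted.forEach((name, packets) -> result.put(name, stitchAlongCuts(packets, cuts)));
    return result;
  }

  static List<StitchCut> extractCuts(List<Wave> master) {
    // placeholder: the real cut positions come from the positive-to-negative crossings
    List<StitchCut> cuts = new ArrayList<>();
    for (int i = 1; i < master.size(); i++) {
      cuts.add(new StitchCut(master.get(i - 1).startMillis(), master.get(i).startMillis()));
    }
    return cuts;
  }

  static List<Wave> stitchAlongCuts(List<Wave> sortedPackets, List<StitchCut> cuts) {
    // placeholder: since both lists are sorted, a simple "merge"-style
    // traversal over packets and cuts is enough here
    return sortedPackets;
  }
}

Note how the sorting in stage 1 is exactly what makes the merge-style traversal in stage 3 trivial.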

Conclusion

I have since used this idea of “transposition” a few times to clean up and simplify my designs. It seems especially helpful when trying to decouple messaging from bulk processing.
The idea of looking at the data flow and adapting the control flow to match it is central to data-oriented design. I argue that while this can be used to optimize programs, transposition is mainly a tool to make programs simpler, which can then lead to optimization. Separating processing into stages is also very similar to loop fission.
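
For readers who have not come across the term, here is a generic illustration of loop fission (unrelated to the project above): one loop doing two things per element is split into one loop per concern.

class LoopFissionExample {

  // one loop doing two things per element
  static void fused(double[] values) {
    for (int i = 0; i < values.length; i++) {
      values[i] = values[i] * 2.0;   // concern A: transform
      System.out.println(values[i]); // concern B: report
    }
  }

  // the same work split ("fissioned") into one loop per concern
  static void split(double[] values) {
    for (int i = 0; i < values.length; i++) {
      values[i] = values[i] * 2.0;
    }
    for (double value : values) {
      System.out.println(value);
    }
  }
}
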
Have you used a technique like this before? Do you, perhaps, know it by another name? Let me know!

Think of your code as a maintenance minefield

Most of the cost, effort and time of a software project is spent on the maintenance phase, the modification of a software product after delivery. If you think about all these resources as “negative investments” or debt settlement and try to associate your spending with specific code areas or even single lines of code, you’ll probably find that the maintenance cost per line is not equally distributed. There are lots of lines of code that stand the test of time without any maintenance work at all, a fair amount of lines that require moderate attention and some lines that seem to require constant and excessive developer care.

If you transfer this image to another metaphor, your code presents itself like a minefield for maintenance effort: Most of the area is harmless and safe to travel. But there are some positions that will just blow up once touched. The difference is that as a software developer, you don’t tread on the minefield, but you catch the flak if something happens.

You should try to deliver your code free of maintenance mines.

Spotting a maintenance mine

Identifying a line of code as a maintenance mine after the fact is easy. You probably already recognize the familiar code as “troublesome” because you’ve spent hours trying to understand and fix it. The commit history of your version control system can show you the “hottest” lines in your code – the areas that were modified most often. If you add tests for each new bug, you’ll find that the code is probably tested really well, with tests motivated by different bug issues. In hindsight, you can clearly distinguish low-effort code from high-maintenance code.

But before delivery, all code looks the same. Or does it?

An example of a maintenance mine

Let’s look at an example. Our system monitors critical business data and sends out alerts if certain conditions are met. One implementation of the part sending the alerts is a simple e-mail sender. The code is given here:


import java.io.IOException;

public class SendEmailService {

  public void sendTo(
                Person person,
                String subject,
                String body) throws IOException {
    execCmd(
         buildCmd(
               person.email(), subject, body));
  }

  private String buildCmd(String recipientMailAddress, String subject, String body){
    return "'/usr/bin/mutt -t " + recipientMailAddress + " -u " + subject + " -m " + body + "'";
  }

  private int execCmd(String command) throws IOException{
    return Runtime.getRuntime()
                  .exec(command).exitValue();
  }
}

This code has two interesting problems:

  • The first problem is that it is written in Java, a platform-agnostic programming language, but depends on being run on a Linux (or sufficiently similar Unix-like) operating system. The system it runs on needs to supply the /usr/bin/mutt program and have the e-mail sending settings properly configured, or else every attempt to run the send command will result in an error. This implicit dependency on the configuration of the production system isn’t the best way to deal with the situation, but it’s probably a one-time pain. The problem clearly presents itself and once the system is set up in the right way, it is gone (until somebody tampers with the settings again). And my impression is that this code separates two concerns between development and operations rather nicely: Development provides software that can send specific e-mails if operations provides a system that is capable of sending e-mails. No need to configure the system for e-mail sending and doing it again for the software on said system.
  • The second problem looks like a maintenance mine. In the line where the code passes the command line to the operating system (by calling Runtime.getRuntime().exec()), a Process object is returned that is only asked for its exitValue(), implying a wait for the termination of the system command. The line looks straight and to the point. No need to store and handle intermediate objects if you aren’t interested in them. But perhaps, you should care:

By default, the created process does not have its own terminal or console. All its standard I/O (i.e. stdin, stdout, stderr) operations will be redirected to the parent process, where they can be accessed via the streams obtained using the methods getOutputStream(), getInputStream(), and getErrorStream(). The parent process uses these streams to feed input to and get output from the process. Because some native platforms only provide limited buffer size for standard input and output streams, failure to promptly write the input stream or read the output stream of the process may cause the process to block, or even deadlock.

Emphasis mine; see also: https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/lang/Process.html

This means that the Process object’s stdout and stderr outputs are stored in buffers of unknown (and system dependent) size. If one of these buffers fills up, the execution of the command just stops, as if somebody had paused it indefinitely. So, depending on your call’s talkativeness, your alert e-mail will not be sent, your system will appear to have failed to recognize the condition and you’ll never see a stacktrace or error exit value. All other e-mails (with less chatter) will go through just fine. This is a guaranteed source of frantic telephone calls, headaches and lost trust in your system and your ability to resolve issues.

And all the problems originate from one line of code. This is a maintenance mine with a stdout fuse.

The fix for this line might lie in the use of the ProcessBuilder class or your own utility code to drain the buffers. But how would you discover the mine before you deliver it?
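
One possible shape of such a fix – a sketch with made-up names, not the project’s actual code – uses ProcessBuilder, merges stderr into stdout and drains the output before asking for the exit value:

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;

class DrainedCommandRunner {

  static int execCmd(String... command) throws IOException, InterruptedException {
    Process process = new ProcessBuilder(command)
        .redirectErrorStream(true) // merge stderr into stdout
        .start();
    StringBuilder output = new StringBuilder();
    try (BufferedReader reader = new BufferedReader(
             new InputStreamReader(process.getInputStream()))) {
      String line;
      while ((line = reader.readLine()) != null) { // drain the buffer promptly
        output.append(line).append(System.lineSeparator());
      }
    }
    int exitValue = process.waitFor(); // only wait after the output is drained
    if (exitValue != 0) {
      throw new IOException("Command failed (" + exitValue + "): " + output);
    }
    return exitValue;
  }
}

Passing the command as separate arguments also sidesteps the quoting pitfalls of building a single command string.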

Mines often lie at borders

One thing that stands out in this line of code is that it passes control to the “outside”. It acts as a transit point to the underlying operating system and therefore has a lot of baggage to check. There are no safety checks implemented, so the transit must be regarded as unsafe. If you look out for transit points in your code (like passing control to the file system, the network, a database or another external system), make sure you’ve read the instructions and requirements thoroughly. The problems of a maintenance mine aren’t apparent in your code and only manifest themselves during the interaction with the external system. And this is a situation that happens disproportionately often in production and comparably seldom during development.

So, think of your code as a maintenance minefield and be careful around its borders.

What is your minesweeper story? Drop us a comment.

Using parameterized docker builds

Docker is a great addition to your DevOps toolbox. Sometimes you may want to build several similar images using the same Dockerfile. That’s where parameterized docker builds come in:

They give you the ability to provide configuration values at image build time. Do not confuse this with environment variables when running the container! We used parameterized builds, for example, to build images for creating distribution-specific packages of native libraries and executables.

Our use case

We needed to package some proprietary native programs for several Linux distribution versions, in our case openSUSE Leap. Build ARGs allow us to use a single Dockerfile but build several images and run them to build the packages for each distribution version. This can be easily achieved using so-called multi-configuration jobs in the Jenkins continuous integration server. But let us take a look at the Dockerfile first:

ARG LEAP_VERSION=15.1
FROM opensuse/leap:$LEAP_VERSION
ARG LEAP_VERSION=15.1

# add our target repository
RUN zypper ar http://our-private-rpm-repository.company.org/repo/leap-$LEAP_VERSION/ COMPANY_REPO

# install some pre-requisites
RUN zypper -n --no-gpg-checks refresh && zypper -n install rpm-build gcc-c++

WORKDIR /buildroot

CMD rpmbuild --define "_topdir `pwd`" -bb packaging/project.spec

Notice that the ARG instruction defines a parameter name and a default value. That allows us to configure the image at build time using the --build-arg command line flag. Now we can build a docker image for Leap 15.0 using a command like:

docker build -t project-build --build-arg LEAP_VERSION=15.0 -f docker/Dockerfile .

In our multi-configuration jobs we call docker build with the variable from the axis definition to build several images in one job using the same Dockerfile.

A gotcha

As you may have noticed, we have put the same ARG instruction twice in the Dockerfile: once before the FROM instruction and another time after FROM. This is because the build args are cleared after each FROM instruction. So beware in multi-stage builds, too. For more information see the docker documentation and this discussion. This cost us quite some time, as it was not as clearly documented at the time.

Conclusion

Parameterized builds allow for easy configuration of your Docker images at image build time. This increases flexibility and reduces duplication like maintaining several almost identical Dockerfiles. For runtime container configuration, provide environment variables to the docker run command.

A new star in software verification?

F* (pronounced F star) is a functional programming language with a dependent type system that supports verification. What follows is a story about my experiences with F* and a concluding opinion why languages like F* could be useful for the working software developer.

There is a big project using F* and related tools with the ambitious goal of building a verified implementation of TLS.
So far, judging from the website, there is a verified efficient implementation of the relevant cryptographic primitives, but their implementation of the protocol is not yet verified. That is certainly already a great achievement and it got me interested in F*.

Setting it up

There is an F* tutorial which comes with an online editor, which is great if you just want to play a bit with the language. You don’t have to install anything, provided you have a recent mainstream browser. The tutorial is fun to do, but after I spent some time on it, I wanted to know what it is like to use F* off the prescribed path.

Sooner or later, you probably want to install it on your machine, which is not a problem at all if you are on a Linux system and an Emacs user. There is a binary and an Emacs mode, which are also easy to install on Windows, but that combination was somehow forbiddingly slow on my (fast) machine. In the following, pictures show excerpts from the Emacs mode.

Programming in F*

Let me first show you what programming in F* looks like. Here is a definition of the commonly known (higher order) function “filter” which takes a predicate (a function to bool in this case) and a list and returns the “sublist” of all things on the list matching the predicate:

[Image: the definition of filter as shown in the Emacs mode]

The arrows and strange dots are just “->” and “::” rendered by the emacs mode. Otherwise, it is exactly the pattern matching definition that would also work in OCaml or F# (where you can write exactly the same thing):

The second argument “l” is matched against the two things a list might be: either the empty list “[]” or a list consisting of some first element “x” and a rest list “xs”. In the first case, we are done, and in the second case we decide if “x” should be in the result list and recursively call “filter” to deal with the “xs”.

The type of “filter” can be inferred by F*, but we could have declared it before the definition like this:

[Image: the declared type of filter]

The alpha is really the fancy Emacs-mode rendering of “'a” and stands for an arbitrary type. This is also the type F* would infer, since what we did does not require anything special from the element type of the lists.

So the declaration says that our function takes some predicate “p”, a list of things p may be applied to, and returns a list of the same type, just with “Tot” written in front of it. The latter tells us that filter is a total function, i.e. it terminates and will not throw an exception (I guess there is a bit more to it, but so far that explanation worked well for me). There is also a modifier “ML” which would tell us that the function behaves like a function in OCaml – so it could throw runtime exceptions or loop forever. After introducing you to some verification features, I will tell you why this distinction is important here.

Let us first rewrite our declaration a bit:

[Image: the declaration with a fixed element type and named parameters]

I replaced the alpha with a fixed type, the natural numbers (that will simplify things…), and introduced variables “p” and “l” that we can now use to construct types. What kind of types? Well, in this case, I want to construct a replacement for “(list N)” which tells us a bit more about the result of filtering a list. One reason to have types is to make sure that the result of some construction meets a specification. Usually these kinds of specifications are quite coarse and might just tell you that the result is an integer. So far, the specification of the filter function only tells us that the result will be a list of natural numbers. In F*, we can do a lot more. The following specifies the result of filter to be the sublist of all elements of “l” that satisfy “p”:

[Image: the refined declaration specifying the result as the sublist of matching elements]

Between the curly braces, I put a formula in predicate logic, which describes this specification. The “∧” is a logical “and”, the “∀” is the “forall” from math and “element x l’” is some function which evaluates to true if x is an element of l’ and false if this is not the case.

Now, one of the things that make me happy when playing with F* is that the definition of filter I gave in the beginning checks against this new declaration as is. This is due to F* using a problem solver to prove that our specification is satisfied and the not-so-random coincidence that to prove the formula, you can use the same structural induction on the list as we used to recursively define “filter”.

For completeness, here is my definition of “element”:

[Image: the definition of element]

The “#a:eqtype” means that “a” is some type that comes with a total equality function for terms of type “a”. The natural numbers used above are such an “a”. Btw: those natural numbers are not limited in size, like e.g. “unsigned int”. They behave a lot like “BigInteger” in some languages and are certainly not fast, but they are good for proving things.

And now I should tell you that the declaration of “filter”, which nicely specified what filter does, is probably not what you should do. In fact, it is usually better to split things into definitions and “Lemma”s. So here is again the original definition of filter (with inferred type) and a Lemma about it:

[Image: the definition of filter together with the lemma filter_specification]

The “Lemma” is again like “Tot” above – it modifies the type. Technically, “filter_specification” is a function, but the only important thing here is whether its definition (“let rec filter_specification = […]”) type checks against the declaration (“val filter_specification […]”) given – because that means that the definition is a proof of the proposition encoded in the declaration. In particular, it is not important what this function returns. In fact, “Lemma” produces a type all of whose terms are equal.

Verifying Project Euler exercises

The first thing I usually do when checking out a new programming language is to solve a couple of Project Euler exercises to get fluent in the basic constructions. So, I thought, why shouldn’t I try the same with F* and prove something about my solution code?

Problem 1 asks us to add all multiples of 3 or 5 below 1000. Usually, my mind drops the “below” and I get a wrong result. But not so this time! Here is my definition of the natural numbers below 1000, which produces the correct list of numbers, since specifying precisely what it is made me read the problem text very carefully:

[Image: the definition of naturals_below_1000 with its specifying type]

Now, “naturals_below_1000” is specified by its type, which turned out to work well in this case, since it is not a general purpose function I will reuse.

Now for the task at hand, we need to filter numbers that are multiples of 3 or 5. I took the liberty of equivalently saying: numbers divisible by 3 or 5:

[Image: the definition of the divisibility predicate and the filtered list]

I used modulo arithmetic “%”, defined in some library, to have an easy definition of “divides”. Also note that I use my own definitions in proving that I defined the correct list. Both are a bit fishy if I were to claim that I proved that some term in my program meets some mathematical definition commonly used for the concepts in the problem text. So you could call that lazy or efficient, according to taste.

Now, the number which the exercise asks for can be defined:

[Image: the definition of ex1]

F* can not only check if everything is correct but also evaluate expressions – in particular the expression “ex1”, which turns out to be the correct answer to the problem.

Is it of use for the working software developer?

I used Scala some years ago to develop a large prototype and, while there were some pains, it worked about as well as expected and I’m quite sure it would have been less fun and less efficient to use Java for the same project. One important point in the decision to use Scala was the possibility to use Java libraries.
F# has the same kind of advantage, and I learned to love its type system from OCaml, which apparently was the basis for F#. So for the same reasons that made Scala work for me when constructing prototypes, I could decide to use F# in similar situations. I don’t know how realistic my hope is that F* can be used on top of F# one day, but that would be a great improvement over just having F#, which is already something I look forward to. So far, it is possible to extract F# code from an F* program.

Usually, tools like F* are thought of and presented as tools to effectively prevent bugs, i.e. catching all of them no matter the effort. I think what we need for our daily programming tasks are tools for preventing bugs efficiently. My guess is that dependent types in a practical language like F# are such a tool. This is of course a heavily biased view, since I have a background involving some dependent type theory, which makes me blind to the effort of learning how dependent types work.

One problem with dependently typed languages is that small changes can break lots of proofs that took effort to construct. This is particularly bad if you spell out the proofs in all detail, and it means refactoring can become quite expensive.

From what I have seen, the use of a problem solver when type checking reduces this problem quite a bit, so this might not be a real issue here and with other dependently typed languages that do something similar. And of course, substitute technologies that achieve similar goals, like unit tests or comments, also increase refactoring work.

So ultimately, apart from the fun, I also look forward to using F* as a tool. As I mentioned before, I already knew about dependent types and could translate what I saw when learning F*, so I can’t tell you how much effort it takes if you don’t already know those things. But what I can tell you is that learning dependent type theory was certainly among the most rewarding scientific experiences in my life, so maybe you want to do that anyway.

How to disable IP address logging for Apache web server and Tomcat

The General Data Protection Regulation (GDPR) prohibits excessive or unnecessary collection of personal data. IP addresses in server log files are considered personal data.

Logging of IP addresses is usually enabled by default in fresh web server installations. This article describes how to disable it for the Apache web server and the Tomcat application server, which are a common combination for JVM based web applications.

Apache Web Server

The Apache web server logs the HTTP requests from web clients in log files called *access.log, which include IP addresses. Another Apache log file that can contain IP addresses is error.log.

To disable IP address logging for Apache, edit the main configuration file, usually called httpd.conf, and scan it for LogFormat entries, for example:

LogFormat "%h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"" combined

The format ‘verb’ %h stands for the hostname or IP address. This is what you want to remove. You may also want to remove the “Referer” and “User-Agent” components of the log format for privacy:

LogFormat "%l %u %t \"%r\" %>s %O" combined

Restart the server for the changes to take effect.

Tomcat Application Server

For Tomcat you have to configure the so-called AccessLogValve in the server.xml configuration file located in the $CATALINA_HOME/conf directory. The configuration entry looks like this:

<Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs"
       prefix="localhost_access_log." suffix=".txt"
       pattern="%h %l %u %t "%r" %s %b" />

The pattern attribute specifies the log format. Again, the relevant format verb you want to remove is %h for the hostname or IP address:

       pattern="%l %u %t "%r" %s %b" />

Restart the server for the changes to take effect.

Clean deployment of .NET Core application

Microsoft’s .NET Core framework has rightfully earned its spot among cross-platform frameworks. We like to use it, for example, as a RESTful backend for our React frontends. If you are not burying your .NET Core application in a docker container and need to configure or customize it after deployment, you may feel agitated by its default deployment layout: all the dependencies live next to some JSON configuration files in one directory.

While this is OK if you never need to look in there for a configuration file and change something, you may want to clean it up and put the files into different folders. This can be achieved by customizing your MSBuild project, but it is anything but straightforward!

Our goal

  1. Put all of our dependencies into a lib directory
  2. Put all of our configuration files into a configuration directory
  3. Remove unneeded files

The above should not require any interaction but be part of the regular build process.

The journey

We need to customize the MSBuild system to achieve our goal because the deps.json file must be rewritten to change the location of our dependencies. This is the hardest part! First we add the RoslynCodeTaskFactory as a package reference to our MSBuild setup in the csproj of our project. That way we can implement tasks using C#. We define two tasks that will help us in rewriting the deps.json:

<Project ToolsVersion="15.8" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <UsingTask TaskName="RegexReplaceFileText" TaskFactory="CodeTaskFactory" AssemblyFile="$(RoslynCodeTaskFactory)" Condition=" '$(RoslynCodeTaskFactory)' != '' ">
    <ParameterGroup>
      <InputFile ParameterType="System.String" Required="true" />
      <OutputFile ParameterType="System.String" Required="true" />
      <MatchExpression ParameterType="System.String" Required="true" />
      <ReplacementText ParameterType="System.String" Required="true" />
    </ParameterGroup>
    <Task>
      <Using Namespace="System" />
      <Using Namespace="System.IO" />
      <Using Namespace="System.Text.RegularExpressions" />
      <Code Type="Fragment" Language="cs">
        <![CDATA[ File.WriteAllText( OutputFile, Regex.Replace(File.ReadAllText(InputFile), MatchExpression, ReplacementText) ); ]]>
      </Code>
    </Task>
  </UsingTask>

  <UsingTask TaskName="RegexTrimFileText" TaskFactory="CodeTaskFactory" AssemblyFile="$(RoslynCodeTaskFactory)" Condition=" '$(RoslynCodeTaskFactory)' != '' ">
    <ParameterGroup>
      <InputFile ParameterType="System.String" Required="true" />
      <OutputFile ParameterType="System.String" Required="true" />
      <MatchExpression ParameterType="System.String" Required="true" />
    </ParameterGroup>
    <Task>
      <Using Namespace="System" />
      <Using Namespace="System.IO" />
      <Using Namespace="System.Text.RegularExpressions" />
      <Code Type="Fragment" Language="cs">
        <![CDATA[ File.WriteAllText( OutputFile, Regex.Replace(File.ReadAllText(InputFile), MatchExpression, "") ); ]]>
      </Code>
    </Task>
  </UsingTask>
</Project>

We put the tasks in a file called RegexReplace.targets in the Build directory and import it in our csproj using <Import Project="Build/RegexReplace.targets" />.

Now we can just add a new target that is executed after the publish target to our main project csproj to move the assemblies around, rewrite the deps.json and remove unwanted files:

  <Target Name="PostPublishActions" AfterTargets="AfterPublish">
    <ItemGroup>
      <Libraries Include="$(PublishUrl)\*.dll" Exclude="$(PublishUrl)\MyProject.dll" />
    </ItemGroup>
    <ItemGroup>
      <Unwanted Include="$(PublishUrl)\MyProject.pdb;$(PublishUrl)\.filenesting.json" />
    </ItemGroup>
    <Move SourceFiles="@(Libraries)" DestinationFolder="$(PublishUrl)/lib" />
    <Copy SourceFiles="Build\MyProject.runtimeconfig.json;Build\web.config" DestinationFiles="$(PublishUrl)\MyProject.runtimeconfig.json;$(PublishUrl)\web.config" />
    <Delete Files="@(Libraries)" />
    <Delete Files="@(Unwanted)" />
    <RemoveDir Directories="$(PublishUrl)\Build" />
    <RegexTrimFileText InputFile="$(PublishUrl)\MyProject.deps.json" OutputFile="$(PublishUrl)\MyProject.deps.json" MatchExpression="(?&lt;=&quot;).*[/|\\](?=.*\.dll|.*\.exe)" />
    <RegexReplaceFileText InputFile="$(PublishUrl)\MyProject.deps.json" OutputFile="$(PublishUrl)\MyProject.deps.json" MatchExpression="&quot;path&quot;: &quot;.*&quot;" ReplacementText="&quot;path&quot;: &quot;.&quot;" />
  </Target>

The result

All this work should result in a working application with a much cleaner root directory layout. As far as we know, the remaining files like the web.config, the main project assembly and the two JSON files cannot easily be relocated. The resulting layout is nevertheless quite clean and makes it easy for administrators to find the configuration files they need to customize.

Of course one can argue whether the result is worth the hassle, but if your customers’ administrators and operations people value it, you should do it.

Have unregistered classes throw with the unity DI container

The unity container (not to be confused with the game engine) is one of the most popular dependency injection tools for C#.
However, by default the unity container will try to Resolve() all classes that it can. If you do not register a class, it will often just succeed anyway.
I much prefer explicitly registering classes, and having resolution just throw an exception if I try to resolve something I did not register.
There’s a viable solution for that on Stack Overflow, but it fails to throw when trying to resolve a class that was only registered via its interface.
Here’s our fixed version:

public class UnityRegistrationTracking : UnityContainerExtension
{
  private readonly ConcurrentDictionary<NamedTypeBuildKey, bool> registrations =
    new ConcurrentDictionary<NamedTypeBuildKey, bool>();

  protected override void Initialize()
  {
    base.Context.Registering += Context_Registering;
    base.Context.Strategies.Add(
        new ValidateRegistrationStrategy(this.registrations), UnityBuildStage.Setup);
  }

  private void Context_Registering(object sender, RegisterEventArgs e)
  {
    var buildKey = new NamedTypeBuildKey(e.TypeFrom, e.Name);
    this.registrations.AddOrUpdate(buildKey, true,
      (key, oldValue) => true);
  }

  public class ValidateRegistrationStrategy : BuilderStrategy
  {
    private ConcurrentDictionary<NamedTypeBuildKey, bool> registrations;

    public ValidateRegistrationStrategy(ConcurrentDictionary<NamedTypeBuildKey, bool> registrations)
    {
      this.registrations = registrations;
    }

    public override void PreBuildUp(ref BuilderContext context)
    {
        var buildKey = new NamedTypeBuildKey(context.RegistrationType, context.Name);
        if (!this.registrations.ContainsKey(buildKey))
        {
          throw new ResolutionFailedException(buildKey.Type, buildKey.Name,
            string.Format("Type {0} was not explicitly registered in the container.", buildKey.Type.Name));
        }
    }
  }
}

We hook into two parts of the unity API here:

  1. The registration, which is called when you call Unity.RegisterType
  2. The resolution process, which is called when unity tries to resolve a specific instance.

The first part happens in Context_Registering. We just store the registration in a dictionary for later. It is important to use TypeFrom as a key, since we want to refer to objects by the interfaces they are registered with, not their concrete implementations.
The second part is the ValidateRegistrationStrategy. All registered BuilderStrategy objects go in a list that is processed when an object is built. The UnityBuildStage.Setup acts as a sorting key, to make sure that this strategy is executed as early as possible.
In the strategy, we check whether the requested type was previously registered, and throw an exception if it was not. It is important to use context.RegistrationType here, since context.Type will again contain the concrete type, and not the interface.