A small test saves the day

You think a method is too trivial to write a test for? Think again if the method is mission-critical!

Just recently, I had to write a connection between an existing application and a new hardware unit. This is a fairly common job for our company, even given the circumstance that I had never seen the hardware, let alone been able to connect to it. The hardware unit itself was rather big and installed in a security-sensitive area with restricted access. So all I got was a specification of the protocol to use and a description of the hardware’s features.

Our common procedure to include hardware-dependent modules in an application is to write two implementations of the module: One implementation is the real deal and interacts with the hardware over Ethernet, USB, serial port or whatever proprietary communication device is used. This version of the module can only work as intended if the hardware is present. The other implementation acts as an emulation of the hardware, without any dependencies. If you are familiar with unit tests, think of a big test mock. The emulation version is used during development to test and run the application without any requirements on the hardware. There are a lot of subtle pitfalls to consider and avoid, but at a bird’s-eye level of abstraction, these interchangeable implementations of a module enable us to develop software with hardware dependencies without needing the actual hardware.
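
In code, this pattern boils down to one interface with two implementations. Here is a minimal sketch, not the actual project code; the methods connect() and readStatus() are made up for illustration:

public interface HardwareModule {
  void connect();       // establish communication (or set up the emulation)
  String readStatus();  // example of a feature the application uses
}

// Talks to the real device; only works if the hardware is reachable.
public class RealHardwareModule implements HardwareModule {
  public void connect() {
    // open the Ethernet/USB/serial connection and speak the protocol here
  }

  public String readStatus() {
    return "status read from the device"; // placeholder for a protocol exchange
  }
}

// Stands in for the device during development; no external dependencies.
public class EmulatedHardwareModule implements HardwareModule {
  public void connect() {
    // nothing to connect to; a debug GUI window could be opened here instead
  }

  public String readStatus() {
    return "status invented by the emulation";
  }
}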

The first piece of a module’s code that gets used is a factory/builder class that chooses between the available implementations, based on some configuration entry (or hardware availability, etc.). A typical implementation of the responsible method might look like this:

public HardwareModule createFor(ModuleConfiguration configuration) {
  if (configuration.isHardwarePresent()) {
    new RealHardwareModule();
  }
  return new EmulatedHardwareModule();
}

If the configuration object says that the hardware is present, the real implementation is used, subsequently opening a connection to the hardware and speaking the client side of the given protocol. Otherwise, the emulation is created and returned, perhaps opening a debug GUI window to display certain internal states and values and providing controls to mess with the application during development.

The method itself looks very innocent and meager. There is not much going on, so what could possibly go wrong?

I’m not the most eager test-driven developer in the world, I have to admit. But I see the value of tests (and unit tests in particular) and adhere to the A-TRIP rules defined by Andy Hunt and (pragmatic) Dave Thomas:

  • Automatic
  • Thorough
  • Repeatable
  • Independent
  • Professional

For a complete definition of the rules, read the linked blog entry or, even better, buy the book. It’s small and cheap, but contains a lot of profound basic knowledge about unit testing.

The “Thorough” rule is more of a rule of thumb than a hard scientific formula for good unit tests: Always write a test if you’ve found a bug or if the code you’re writing is mission-critical. My gut feeling told me that while the method above might seem trivial, it is definitely essential for the hardware module. So I wrote a test:

  @Test
  public void providesEmulationIfUnspecified() {
    HardwareModuleFactory factory = new HardwareModuleFactory();
    HardwareModule hardware = factory.createFor(configuration(""));
    assertEquals("not the hardware emulation", EmulatedHardwareModule.class, hardware.getClass());
  }

  @Test
  public void providesEmulationIfHardwareAbsent() {
    HardwareModuleFactory factory = new HardwareModuleFactory();
    HardwareModule hardware = factory.createFor(configuration("hardware.present=false"));
    assertEquals("not the hardware emulation", EmulatedHardwareModule.class, hardware.getClass());
  }

  @Test
  public void providesRealImplementationIfHardwarePresent() {
    HardwareModuleFactory factory = new HardwareModuleFactory();
    HardwareModule hardware = factory.createFor(configuration("hardware.present=true"));
    assertEquals("not the real hardware implementation", RealHardwareModule.class, hardware.getClass());
  }

To my surprise, the test immediately went red for the third test method. After double-checking the test code, I was certain that the test was correct. The test had discovered a bug in the production code. And being a mostly independent unit test, it pointed to the problematic lines right away: the method implementation above. The helper method named configuration(), omitted from the code sample, was very unlikely to contain a bug.
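
For the curious, here is a minimal sketch of what such a helper could look like; the assumption that ModuleConfiguration can be built from a java.util.Properties object is mine (it also needs imports of java.util.Properties, java.io.StringReader and java.io.IOException):

  // Hypothetical helper: parses entries like "hardware.present=true"
  // into a ModuleConfiguration. An empty string yields an empty configuration.
  private ModuleConfiguration configuration(String entries) {
    Properties properties = new Properties();
    try {
      properties.load(new StringReader(entries));
    } catch (IOException e) {
      throw new IllegalArgumentException("invalid configuration entries", e);
    }
    return new ModuleConfiguration(properties);
  }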

After a short moment of reading the code again, I corrected it (note the added return statement in line 3):

public HardwareModule createFor(ModuleConfiguration configuration) {
  if (configuration.isHardwarePresent()) {
    return new RealHardwareModule();
  }
  return new EmulatedHardwareModule();
}

This might not seem like the most disastrous bug ever, but it would have made for a nasty start when I finally tried the application with the real hardware. There is nothing more valuable than being able to keep your cool “in the wild” and work on the real problems, like faulty protocol specifications or unexpected/undocumented hardware behaviour. So my gut feeling (and the Thorough rule) were right, and my brain, which kept telling me “skip this petty test” for longer than I like to admit, was wrong. A small test for a small method paid off immediately and saved the day, at least for me.

A blind spot of Continuous Integration

Imagine you haven’t changed a continuously integrated project for months when suddenly a unit test breaks. Here’s why CI has a blind spot for flawed tests.

In the early days of April 2008, we updated our Hudson continuous integration (CI) server to a new version. This was no unusual action, as there was a new version every day back then, bringing new features at a rapid rate. What was unusual after the upgrade was that one of the surveilled projects suddenly failed to build.

Sudden (test) failure without a change

The build was started manually, without any code change. The project itself was inactive at the time, meaning that no changes had been made for months. And suddenly, a unit test broke. The test had been in the project for two whole years without ever going off. What had happened?

Good unit tests

There are rules for good unit tests. A basic set are the A-TRIP rules formulated in the excellent beginner’s book “Pragmatic Unit Testing” by Andy Hunt and Dave Thomas. The failed test clearly disobeyed the “Repeatable” rule (the R in A-TRIP): It didn’t produce the same result as before even though the code under test hadn’t changed.

Write repeatable tests or your CI will be (partially) blind

The cause of the failed test was the clock change at the start of the daylight-saving part of the year. The unit test secured some date calculations by taking the current date and comparing it to calculated future and past dates. The calculation went wrong when the daylight-saving mode changed within the calculated period, which was a bug. Repeatability of the unit test was lost the moment “the current date” entered the code, whether on the unit test side or the production code side.
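
The original calculation isn’t shown here, but a classic way such date arithmetic goes wrong is adding days as a fixed number of milliseconds. The following sketch uses the java.util API of that era; the class and method names are invented for illustration:

import java.util.Calendar;
import java.util.Date;

public class NaiveDateMath {
  private static final long DAY_IN_MILLIS = 24L * 60 * 60 * 1000;

  // Broken: a calendar day is not always 24 hours. Around a daylight-saving
  // transition the computed local date can come out one day off.
  public static Date daysAfterBroken(Date start, int days) {
    return new Date(start.getTime() + days * DAY_IN_MILLIS);
  }

  // Better: Calendar.add() applies the time zone's DST rules.
  public static Date daysAfter(Date start, int days) {
    Calendar calendar = Calendar.getInstance();
    calendar.setTime(start);
    calendar.add(Calendar.DAY_OF_MONTH, days);
    return calendar.getTime();
  }
}

A test that feeds “the current date” into daysAfterBroken() passes most of the year and only fails in the weeks when the computed period spans a clock change, which is exactly the non-repeatability described above.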

Two years of blindness

How could this bug survive two years without being noticed? The project had been under CI surveillance since the beginning, and the unit test to detect the bug was present along with the buggy code all along. The answer is: we never worked on this project around the weeks of the year when the clocks are adjusted and the bug occurs. This was a coincidence influenced by the customer’s schedule. So every time CI (or we) ran the unit test, it passed. Until that day right after the clock change.

How to avoid this blind spot

There are two things you can do to avoid this scenario:

  • Always inject a fixed “current date” into your code when dealing with date calculations, and only use absolute dates in your unit tests (see the sketch after this list). Time isn’t a healer for your tests, it’s a beast to be tamed.
  • Set up a nightly build for your project that runs once a day even when no changes have been made. It would have caught this bug a year and a half earlier.
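
As a sketch of the first advice, here is what injecting the clock can look like with today’s java.time API (which didn’t exist back in 2008); the class DueDateCalculator and its method are invented for illustration:

import java.time.Clock;
import java.time.Instant;
import java.time.LocalDate;
import java.time.ZoneId;

public class DueDateCalculator {
  private final Clock clock;

  // Production code passes Clock.systemDefaultZone(); tests pass a fixed
  // clock, so the result never depends on when the test happens to run.
  public DueDateCalculator(Clock clock) {
    this.clock = clock;
  }

  public LocalDate dueDateIn(int days) {
    return LocalDate.now(clock).plusDays(days);
  }
}

A repeatable test then pins “the current date” to an absolute value:

  @Test
  public void calculatesDueDateFromAFixedDate() {
    // The 10-day period spans the clock change of March 30, 2008,
    // but the fixed clock makes the test repeatable all year round.
    Clock fixedClock = Clock.fixed(Instant.parse("2008-03-28T12:00:00Z"),
        ZoneId.of("Europe/Berlin"));
    DueDateCalculator calculator = new DueDateCalculator(fixedClock);
    assertEquals(LocalDate.of(2008, 4, 7), calculator.dueDateIn(10));
  }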

To sum it up:

CI only spots bugs when they move (i.e. the code is changed). Nightly builds provide a (fuzzy) security layer against non-repeatable unit tests. And unit tests with flaws provide only illusory security.

Additional background information

After fully understanding the circumstances, we were curious why the customer hadn’t noticed the bug and asked them about it. The answer was delightful: “Our computers don’t adjust their clocks. Daylight-saving time only causes trouble.” What a wise decision!

For a good comparison of CI vs. Nightly Builds see this blog entry.