Every Unit Test Is a Stage Play – Part V

In this series about describing unit tests with the metaphor of a stage play that tells short stories about your system, we have already published four parts.

Today, we look at the critics.

An integral part of the theater experience is the appraisal of the critics. A good review of a stage play can multiply the viewer count manyfold, while a bad review can make even avid visitors hesitate or skip the visit entirely.

In our world of source code and unit tests, we can define the whole team of developers as critics. If they aren’t fond of the tests, they will neglect or even abandon them. Tests need to prove their worth in order to survive.

Let us think a little deeper about this aspect: Test code is evaluated more critically than any other source code! Normal production code can always claim to be wanted by the customer. No matter how bad the production code looks and feels, it cannot just be deleted. Somebody would notice and complain.

Test code is not wanted by the customer. You can delete a test and nobody would notice until a regression bug raises the question of why the failing functionality wasn’t secured by a test. So in order to survive, test code needs a stakeholder inside the development team. Nobody outside the team cares about the test.

There is another difference between production code and test code: Production code is inherently silent during development. In contrast, test code is programmed to draw the developer’s attention in case of a crisis. It is code that tries to steal your focus and cries wolf. It is the messenger that delivers the bad news.

Test code is the code you’ll likely read in a state of irritation or annoyance.

Think about a theater critic who visits and rates a stage play in a state of irritation and annoyance. He wanted to do something else instead and probably has a deadline to meet for that other thing. His opinion is likely biased towards a scathing critique.

We talked about several things that test code can do to be inviting, concise, comprehensible and plausible. What it can’t do is be entertaining. Test code is inherently boring. Every test is a short story that seems trivial when seen in isolation. We can probably anticipate the critique of such a play: “it meant well, but was ultimately forgettable”.

What can we do to make test code more meaningful? To convey its impact and significance to the critics?

In the world of theater (and even more so: movies), one strategy is to add “big names” to the production: “From the director of Known Masterpiece” or “Part III of the Successful Series”.

Another strategy is to embellish oneself with other critiques (hopefully good ones): “Nominated for X awards” or “Praised by Grumpy Critic”.

Let’s translate these two strategies into the world of unit tests:

Strategy 1: Borrow a stakeholder by linking to the requirement

I stated above that test code has no direct stakeholder. That’s correct for the code itself, but not for its motivation to exist. We don’t write unit tests just to have them. We write them because we want to assert that some functionality is present or some bug is absent. In both cases, we probably have a ticket that describes the required change in depth. We can add the “big name” of the ticket to the test by adding its number or a full URL as a comment to the test:

/**
 * #Requirement http://issuetracker/TICKET-3096
 */
@Test
public void understands_iso8601_timestamp() {
    final LocalDateTime actual = SomeController.dateTimeFrom(
        "2023-05-24T17:30:20"
    );
    assertThat(
        actual
    ).isEqualTo(
        "2023-05-24T17:30:20"
    );
}

The detail of interest is the comment above the test method. It explains the motivation behind authoring the test. The first word (#Requirement) indicates that this is a new feature that was commissioned by the customer. If it were a bugfix test instead, the first word would be #Bugfix. In both cases, we tell future developers that this test has a meaning in the context of the linked ticket. It isn’t some random test that annoys them; it is the guard for a specific use case of the system.
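For a bugfix, the comment might look like this (the ticket number and the tested behavior are invented for illustration; assertThatThrownBy is AssertJ’s exception assertion):

/**
 * #Bugfix http://issuetracker/TICKET-3117
 */
@Test
public void rejects_malformed_timestamp() {
    assertThatThrownBy(
        () -> SomeController.dateTimeFrom("not-a-timestamp")
    ).isInstanceOf(DateTimeParseException.class);
}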

Strategy 2: Gather visible awards for previous achievements

Once you get used to the accompanying comment on a test method, you can see it as a kind of billboard that displays the merit of the test. Why not display the heroic deeds of the test, too? I blogged about the idea a decade ago, so this is just a quick recap:

/**
 * #Requirement http://issuetracker/TICKET-3096
 * @lifesaver by dsl
 * @regression by xyz
 */
@Test
public void understands_iso8601_timestamp() {
    /* omitted test code */
}

Every time a test does something positive for you, give it a medal! You can add it right below the ticket link and document for everybody to see that this test has earned its place in the code base. Of course, you can also document your frustrating encounters with a specific test in the same way. Over time, the bad tests will exhibit several negative awards, while your best tests will have several lifesaver medals (the highest distinction a test can achieve).
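The award vocabulary is yours to define. A test with a mixed record might accumulate a billboard like this (the tags and initials are invented for illustration):

/**
 * #Requirement http://issuetracker/TICKET-3096
 * @lifesaver by dsl
 * @lifesaver by abc
 * @flaky by xyz (failed sporadically on the CI server)
 */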

So, to wrap up this part of the metaphor: Pacify the inevitable critics of your test code by not only giving them pleasant code to look at but also context information about why this code exists and why they should listen to it if it happens to have grabbed their attention, even with bad news.

Epilogue

This is the fifth part of a series. All parts are linked below:

Every Unit Test Is a Stage Play – Part IV

In this series about describing unit tests with the metaphor of a stage play that tells short stories about your system, we have already published three parts.

Today, we look at the story.

When you visit a theater, you probably expect to be entertained. You expect some level of preparation and presentation. You might not enjoy every aspect of the stage play, but you can cherish the overall experience.

When you read a unit test as a developer, you should not expect to be entertained. But you can expect some level of presentation, and you should be able to endure the overall experience.

In both cases, a great factor to success is how the story is presented to you.

Imagine trying to follow a stage play that is in rehearsal mode: constant interruptions and corrections from off stage, repetitions of scenes and single sentences, and sometimes omissions that everybody is clued in on except you. And of course, nobody is dressed for their role. It would be hard to follow the plot and piece the story together.

Unit test code often reads like an early rehearsal. The code is stitched together by copy & paste, some details are modified but not emphasized and the point of the story is only revealed at the end, oftentimes told indirectly by convoluted assertions. When the test runs green for the first time, it is abandoned and left as an exercise in improvement for the next reader.

The next reader is a developer that made a change to the production code that got red-flagged by the unit test. He or she tries to find out why the jury of assertions is against the change and what the test is all about. It’s like the first visitor of a stage play has to tell the lighting technician where to point the spotlights without knowing how the story will play out.

If we accept the metaphor and view unit tests as stage plays that tell a short story, we should try to tell the story in a clear and concise manner. Giving standard names to the participating roles is an important first step to clue in the visitor/reader. But the last part of a story is the most crucial one. You are expected to tie the story threads together and provide a resolution that can be followed.

In unit testing, we express the resolution of the unit test’s short story as assertions:

@Test
public void parsingOfErroneousODLState() {
    final SerialODL target = new SerialODL(Z, "19");
    
    final ODLState actual = target.getCurrentState();
    
    Assert.assertFalse(
        actual.isNormalOperation()
    );
    Assert.assertEquals(
        3,
        IterableUtil.getSizeFor(actual.getErrorStates())
    );
    Assert.assertEquals(
        ODLErrorState.TEST_ERROR,
        IterableUtil.getElementAt(0, actual.getErrorStates())
    );
    Assert.assertEquals(
        ODLErrorState.INVALID_VALUE_DUE_TO_INITIALIZING,
        IterableUtil.getElementAt(1, actual.getErrorStates())
    );
    Assert.assertEquals(
        ODLErrorState.VALUE_GREATER_ALARM_THRESHOLD,
        IterableUtil.getElementAt(2, actual.getErrorStates())
    );
}

This unit test consists of one line of preparation (“arrange”), one line of code under test that produces the “actual” (“act”) and five assertions spanning several lines each (“assert”). Nearly 80 percent of this unit test consists of assertions. They try to express something, but it gets drowned in noise.

One key to a better story is the usage of a more fitting form of expression, in our case a more natural way to write assertions:

@Test
public void parsingOfErroneousODLState() {
    final SerialODL target = new SerialODL(Z, "19");

    final ODLState actual = target.getCurrentState();

    assertThat(
        actual.isNormalOperation()
    ).isFalse();

    assertThat(
        actual.getErrorStates()
    ).containsExactly(
        ODLErrorState.TEST_ERROR,
        ODLErrorState.INVALID_VALUE_DUE_TO_INITIALIZING,
        ODLErrorState.VALUE_GREATER_ALARM_THRESHOLD
    );
}

In this example, we used AssertJ’s fluent assertions. As you can see, you can shrink the assertion part of your story down to its essence. You can state what you really want to see and not hide it behind indices and size comparisons that only exist because of the indices.

Another way to guide your reader is by structuring your test story into a standard form. From classic storytelling, we know about the hero’s journey that consists of three sections (departure, initiation, return) and can be found in countless books, movies and stage plays.

Our test’s journey is called the AAA pattern. The three sections are:

  • Arrange
  • Act
  • Assert

Whenever you write a unit test, adhere to this pattern. If you find yourself tempted to add a second act or more assertions, break your one unit test up into two. You might want to think about extracting the arrange part into a common utility method (placed down below, behind the curtain), as sketched below. The story then says: The hero is in the same position both times, decides differently (the two act sections) and has a different outcome (the two assertions) because of that.
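Here is a minimal sketch of such a split (all names are invented): two unit tests that share the same arrange part through a common utility method placed behind the curtain.

@Test
public void accepts_known_command() {
    final CommandParser target = givenDefaultParser();

    final Optional<Command> actual = target.parse("start");

    assertThat(actual).isPresent();
}

@Test
public void rejects_unknown_command() {
    final CommandParser target = givenDefaultParser();

    final Optional<Command> actual = target.parse("jumpstart");

    assertThat(actual).isEmpty();
}

private CommandParser givenDefaultParser() {
    return new CommandParser(CommandSet.standard());
}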

There are probably countless more things you can think of to make the story of your tests more compelling. Remember that tests are not required to be entertaining or surprising. You can tell the same classic tale over and over again. The computer doesn’t mind, and the next reader is glad when the test code is accessible right away because the structure and phrasing are on point.

Nobody would pay to see a confusing stage play. And nobody wants to decipher extravagant test code that just broke in an unexpected way. Give your readers what they hope for: Plain short stories about your system.

Epilogue

This is the fourth part of a series. All parts are linked below:

Every Unit Test Is a Stage Play – Part III

In the first and second parts of this series, I talked about describing unit tests with the metaphor of a stage play that tells short stories about your system.

This metaphor is really useful in many aspects of unit testing, as we have seen with naming variables and clearing the test method. In each blog entry of the series, I’m focussing on one aspect of the whole.

Today, we look at the theater.

If you go to the theater as a guest, you are greeted by a pompous entrance with a luxurious stairway that leads you to your comfy seat. You don’t get to see all the people and things behind the heavy curtains. You don’t need to recognize any details of the floor, the walls or the ceiling as you make your way into the auditorium. They don’t matter for the play.

If you enter the theater as an actor or a stagehand, you slip into the building by the back entrance and make your way through a series of storage rooms. Or at least that is the cliché in many movies (I’m thinking of Birdman, but you probably have your own mental image at hand). You need to recognize all the details and position yourself according to your job. Your preparation matters for the play.

I described the test methods as single stage plays in earlier blog posts. Today, I want you to think about a test class as a theater. We need to agree on the position of the entrance. In my opinion, the entrance is where I’m starting to read – at the top of the file.

As a reader of the test class, I am one of the guests. My expectation is that the test class is designed to be inviting to guests.

This expectation comes from a fundamental difference between production code classes and test classes: Classes in production code are not meant to be read. In fact, if you tailor your modules right and design the interface of a class clearly and without surprises, I want to utilize your class, not read it. I don’t have to read it, because the interface tells me everything I need to know about your class. Spare me the details, I’ve got problems to solve!

Test classes, on the other hand, are meant to be read. Nobody will call a test method from the production code. The interface of a test class is confusing from the outside. To value a test class is to read and understand it.

Production code classes are like government agencies: They serve you at the front desk, but don’t want you to snoop around the internals. Test classes are like a theater: You are invited to come inside and marvel at the show.

So we should design our test classes like theaters: An inviting upper part for the guests and a pragmatic lower part for the stage hands behind the curtain.

Let’s look at an example:

public class UninvitingTest {
	
    public static class TestResult {
        private final PulseCount[] counts;
        private final byte[] bytes;

        public TestResult(
            final PulseCount count,
            final byte[] bytes
        ) {
            this(
                new PulseCount[] {
                    count
                },
                bytes
            );
        }

        public TestResult(
            final PulseCount[] counts,
            final byte[] bytes
        ) {
            super();
            this.counts = counts;
            this.bytes = bytes;
        }

        public PulseCount[] getCounts() {
            return this.counts;
        }

        public byte[] getBytes() {
            return this.bytes;
        }
    }
	
    private static final TestResult ZERO_COUNT =
        new TestResult(
            new PulseCount(0),
            new byte[] {0x0, 0x0, 0x0, 0x0}
        );
    private static final TestResult VERY_SMALL_COUNT =
        new TestResult(
            new PulseCount(34),
            new byte[] {0x0, 0x0, 0x22, 0x0}
        );
    private static final TestResult MEDIUM_COUNT_BORDER =
        new TestResult(
            new PulseCount(65536),
            new byte[] {0x1, 0x0, 0x0, 0x0}
        );
    
    public UninvitingTest() {
        super();
    }

    @Test
    public void serializeSingleChannelValues() throws Exception {
        SPEChannelValuesSerializer scv = 
            new SPEChannelValuesSerializer();
        Assert.assertArrayEquals(
            ZERO_COUNT.getBytes(),
            scv.serializeCounts(ZERO_COUNT.getCounts())
        );
        Assert.assertArrayEquals(
            VERY_SMALL_COUNT.getBytes(),
            scv.serializeCounts(VERY_SMALL_COUNT.getCounts())
        );
        Assert.assertArrayEquals(
            MEDIUM_COUNT_BORDER.getBytes(),
            scv.serializeCounts(MEDIUM_COUNT_BORDER.getCounts())
        );
    }
}

In fact, I hope you didn’t read the whole thing. There are lots of problems with this test, but let’s focus on the entrance part:

  • Package declaration (omitted here)
  • Import statements (omitted here)
  • Class declaration
  • Inner class definition
  • Constant definitions
  • Constructor
  • Test method

The exact amount depends on the programming language, but some ornaments at the top of a file are required and can’t be skipped or moved around. We can think of them as a parking lot that we require but don’t find visually appealing.

The class declaration is something like an entrance door. Behind it, the theater begins. And just by looking at the next three things, I can tell that I took the wrong door. Why am I burdened with the implementation details of a whole other class? Do I need to remember any of that? Are the constants important? Why does a test class require a constructor?

In this test class, I need to travel 50 lines of code before I reach the first test method. Translated into our metaphor, this is equivalent to three storage rooms filled with random stuff that I have to traverse before I can sit down in my seat to watch the play. It would be ridiculous if encountered in real life.

The solution isn’t that hard: Store your stuff in the back rooms. We just need to move our test method up, right under the class declaration. Everything else is defined at the bottom of our class, after the test methods.

This is a clear violation of the Java code conventions and the usual structure of a class. Just remember this: The code conventions and structures apply to production code and are probably useful for it. But we have other requirements for our test classes. We don’t need to know about the intrinsic details of that inner class because it will only be used in a few test methods. The constants aren’t public and won’t just change. The only call to our constructor lies outside of our code in the test framework. We don’t need it at all and should remove it.

If you view your test class as a theater, you store your stuff in the back and present an inviting front to your readers. You know why they visit you: They want to read the tests, so show them the tests as close to the entrance as possible. Let the compiler travel your code, not your readers.

And just to show you the effect, here is the nasty test class from above, now with the more inviting structure:

public class UninvitingTest {
    
    @Test
    public void serializeSingleChannelValues() throws Exception {
        SPEChannelValuesSerializer scv = 
            new SPEChannelValuesSerializer();
        Assert.assertArrayEquals(
            ZERO_COUNT.getBytes(),
            scv.serializeCounts(ZERO_COUNT.getCounts())
        );
        Assert.assertArrayEquals(
            VERY_SMALL_COUNT.getBytes(),
            scv.serializeCounts(VERY_SMALL_COUNT.getCounts())
        );
        Assert.assertArrayEquals(
            MEDIUM_COUNT_BORDER.getBytes(),
            scv.serializeCounts(MEDIUM_COUNT_BORDER.getCounts())
        );
    }

    private static final TestResult ZERO_COUNT = 
        new TestResult(
            new PulseCount(0),
            new byte[] {0x0, 0x0, 0x0, 0x0}
        );
    private static final TestResult VERY_SMALL_COUNT = 
        new TestResult(
            new PulseCount(34),
            new byte[] {0x0, 0x0, 0x22, 0x0}
        );
    private static final TestResult MEDIUM_COUNT_BORDER = 
        new TestResult(
            new PulseCount(65536),
            new byte[] {0x1, 0x0, 0x0, 0x0}
        );

    public static class TestResult {
        private final PulseCount[] counts;
        private final byte[] bytes;

        public TestResult(
            final PulseCount count,
            final byte[] bytes
        ) {
            this(
                new PulseCount[] {
                    count
                },
                bytes
            );
        }

        public TestResult(  
            final PulseCount[] counts,
            final byte[] bytes
        ) {
            super();
            this.counts = counts;
            this.bytes = bytes;
        }

        public PulseCount[] getCounts() {
            return this.counts;
        }

        public byte[] getBytes() {
            return this.bytes;
        }
    }

    public UninvitingTest() {
        super();
    }
}

Show your readers the test methods and don’t burden them with details they just don’t need (yet).

Epilogue

This is the third part of a series. All parts are linked below:

Every Unit Test Is a Stage Play – Part II

In the first part of this series, I talked about describing unit tests with the metaphor of a stage play that tells short stories about your system.

This metaphor holds water in nearly every aspect of unit testing and guides my approach from each single line of code to the whole concept of test coverage. In this series, I’m focussing on one aspect in each part.

Today, we look at the backdrop.

We learnt from the first part that most unit tests are performed by four roles that appear on stage. In a theater, the stage is oftentimes decorated by additional items that facilitate the story. This is the backdrop (or the coulisse) of the play. We have the same thing in unit tests.

In a unit test, the stage is the code inside the test method:

@Test
public void rounds_up_to_the_next_decimal_power() {
    final Configuration given = new Configuration(
        StringVirtualFile.createFileFromContent(
            "report.config",
            "scale.maximum=2E5"
        )
    );
    final ReportConfiguration target = new ReportConfiguration(
        given,
        SuffixProvider.none
    );
    final Optional<Double> actual = target.scaleMaximum();
    final double expected = 1E6;
    assertThat(actual).contains(expected);
}

A good director is very picky about every detail that appears on stage. There should be no incidental item visible to the audience. In the case of a unit test, the audience is the next developer who reads the test code.

The stage doesn’t need to be devoid of “extras” and furniture, but it should be limited to the essential. A theater play isn’t a movie set where eye candy is seen as something positive. During the play, the audience should recognize the actors (and their roles) easily and without searching between all the clutter.

So we need to do two things: Declutter the stage and identify the extras.

Decluttering the stage

In our example above, there is a mismatch between the code required to instantiate the given role and the rest of the whole test. If this were a real play, half the time would be spent on an extra that doesn’t even speak a line. It is implied that the target gets some information from the given, but it isn’t shown. We need to remove details from the stage that aren’t relevant to the story, yet are introduced in just as much detail as the essential things. It might be fun and suspenseful to guess whether the “report.config” detail in line three is more meaningful than the “scale.maximum” specified in line four, but unit test stories are not meant to be mysterious or even entertaining. They are meant to inform the audience about a little fact of the tested system. There will be thousands of little stories about the system. Make them trivial to read and easy to understand.

We need to move the stage props off the stage:

@Test
void rounds_up_to_the_next_decimal_power() {
    Configuration given = givenScaleMaximumOf("2E5");
    final ReportConfiguration target = new ReportConfiguration(
        given,
        irrelevant()
    );
    final Optional<Double> actual = target.scaleMaximum();
    final double expected = 1E6;
    assertThat(actual).contains(expected);
}

private Configuration givenScaleMaximumOf(
    String scaleMaximum
) {
    return new Configuration(
        StringVirtualFile.createFileFromContent(
            "report.config",
            "scale.maximum=" + scaleMaximum
        )
    );
}

private SuffixProvider irrelevant() {
    return SuffixProvider.none;
}

By moving the initialization code of the Configuration object to a new private method, we employ the storytelling device of “conveniently prepared circumstances”. The private method is moved to the “lower decks” or the “backstage” area of our test class. “The stage is on top” might be our motto. Everything “private” or not-Test-annotated should not be visible first and should not be required reading.

Notice how I introduced a parameter to the new “givenScaleMaximumOf” method in order to keep the necessary test details on stage. The audience should not have to look behind the curtains to gather important information about the story. I made the parameter a string (and not a double or integer) because it is just a prop. The story doesn’t benefit from it being cast to the correct type. And if you look back, it was a string before, too.

Identifying the extras

I’ve also extracted the “magic number” or “silent extra” SuffixProvider.none into its own method. This method adds nothing but a name that conveys the meaning of the value to the story instead of the value itself. If I were a director in a theater, this actor would have a plain and bland costume, in contrast to the brightly colored main roles. Maybe the stage lighting would illuminate the main roles and keep the extra in the shadowy area, too.

Now, the focus of our test method is back on the main story. The attention of our audience will be on the stage, not burdened by irrelevant details. We will even label props as dispensable if they are.

Keep your stages free from clutter and the eyes of your audience on the story. Test code is boring by nature. It doesn’t have to be plodding, too.

Epilogue

This is the second part of a series. All parts are linked below:

Every Unit Test Is a Stage Play – Part I

At the last dev brunch, I got a recommendation for a talk that tries to explain functional programming differently. What really got me was the effectiveness of the changed vocabulary. I’ve seen this before, in the old talk about test driven development and behaviour driven development. But in my head, I think about unit tests with another overarching metaphor that I’m trying to explain in this blog post series:

Every unit test is a stage play that tells a short story about your system.

And this metaphor really guides my approach to nearly any aspect of unit testing, from each single line of code to the whole concept of test coverage. So I’m breaking my explanation into (at least) five parts and focus on one aspect in each part.

Today, we look at the actors.

In each classic play, there are well-known roles that can be played by nearly any human, but always stay the same role. There’s the hero, the (comedic) sidekick and of course, the villain or antagonist. In every show of Romeo and Juliet, there will be a Romeo. It might not be the most convincing Romeo ever, but the role stays the same, no matter the cast.

The same thing is true for every well-formed unit test. There are four roles that always appear on stage:

  • target: This is the object under test or the code under test if you don’t use objects. The target is probably different for every unit test you write, but the role is always present. I’ve seen it being called “cut” for “code under test”, but I prefer “target”. If you see a reference named “target” in my test code, you can be sure about the role it plays in the story.
  • actual: If you can design your code to adhere to the simple “parameters in, result out” call pattern, the “result out” is the “actual”. It is what your target produced when challenged by the specific parameters of your test. One trick to testable code is to design the “actual” role as being quite simple and “flat”.
  • expected: This might be the closest thing to an antagonist in your play. The “expected” role is filled with the value (or values) that your “actual” is measured against. If your “actual” is simple, your “expected” will be simple, too. If your “actual” is too complex, the “expected” role will be overbearing. In any case, the “expected” role is what drives your assertions.
  • given: Our hero, the “target”, is often dependent on entry parameters or secondary objects (mocked or not). These sidekicks are part of the “given” role. You might think about the “given-when-then” storytelling structure of behaviour driven design for the name. If you strive for a simple code structure, the required “given” in your unit test should be manageable.

As you can see, the story of a typical unit test is always the same: The target, with the help of the given, produces an actual that competes against the expected.

If this story has a happy end, your test runs green. If the actual fails the expectation, your test runs red. If the target fails to produce an actual at all (think about an exception or error), your whole play falls apart and the test runs red.

Enough theory, let’s look at a unit test that uses the four roles:

@Test
public void rounds_up_to_the_next_decimal_power() {
    final Configuration given = new Configuration(
        StringVirtualFile.createFileFromContent(
            "report.config",
            "scale.maximum=2E5"
        )
    );
    final ReportConfiguration target = new ReportConfiguration(
        given,
        SuffixProvider.none
    );
    final Optional<Double> actual = target.scaleMaximum();
    final double expected = 1E6;
    assertThat(actual).contains(expected);
}

I’ve highlighted the roles for better visibility. Note that for a role to appear in the play, it doesn’t really have to be named explicitly. Most of the time, the last two lines would be collated into one:

assertThat(actual).contains(1E6);

You can still see the “expected” role play its part, though not as prominently as before.

You also probably saw the extra “given” that wasn’t highlighted:

SuffixProvider.none

It might be relevant to the story or really be an uncredited extra that is not crucial in the target’s journey to produce the correct actual. If that’s the case, it seems appropriate not to name it. We will learn about techniques that I use to make these extras more nondescript in a later part. Right now, we can differentiate between main roles that are named and secondary roles that are just there, as part of the scenery. Just don’t fool your audience by having an unnamed actor contribute an important piece to the story’s success. That might be a cool plot twist, but I’m not here to be surprised.

Let your tests perform boring plays, but lots of them.

By using the four roles of the test play, you make it clear to the reader (your real audience) what to expect from your test code parts. Don’t name irrelevant test code parts, and only omit the role names if there are no extras on stage.

Your audience will still find your play boring (that’s the fate of nearly all test code), but it won’t feel disregarded or, even worse, deceived.

Epilogue

This is the first part of a series. All parts are linked below:

Don’t test details from a distance

The concept described in this blog entry has evoked a lot of different metaphors and descriptions from our team when we discussed it. So don’t take my words or thoughts on it as the one true way to talk about it – the concept of the “testing gap” or the distance between the code under test and the test’s vantage point.

Before I describe my metaphor for it with some weird visuals, let’s look at some code:

public Budget(String denotation, int maximumHours) {
    this.denotation = denotation;
    this.maximumHours = maximumHours;
    this.currentHours = maximumHours;
}

This is the constructor for an entity, a domain class that represents a budget of work hours that gets slowly used up when you work for the customer’s project. There is not much going on in this code except one little detail of the domain: New budgets always start fully “filled up”, in that the currentHours are set to the maximumHours. You can’t create a budget that is already half empty with this code.

Such a domain concept or “business rule” requires a test that ensures it is still in place:

@Test
public void has_initially_current_hours_set_to_maximum() {
    Budget target = new Budget(
        "current is maxed",
        100
    );
    assertThat(target.getMaximumHours()).isEqualTo(100);
    assertThat(target.getCurrentHours()).isEqualTo(100);
}

This is a fairly boring unit test that ensures that freshly created budgets have all their working hours still available.

In our example, the entity lives in the core of a web application that provides an endpoint to create new budgets. We have a test for the endpoint, of course:

@Test
public void stores_new_budget() throws Exception {
    this.web.perform(
        post("/budgets")
        .contentType(MediaType.APPLICATION_JSON)
        .content("{\"denotation\": \"new budget\", \"maximumHours\": 300}")
    )
    .andExpect(status().isOk())
    .andExpect(content().json("{\"denotation\": \"new budget\", \"maximumHours\": 300, \"currentHours\": 300}"));
}

You can shudder at the code formatting or the necessity to escape your JSON data into inscrutability. At least the second problem more or less disappears with current Java versions. But that’s not the point today. The point is that this is effectively the same test as above, but with a gap in between.
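To make that concrete: with text blocks (standardized in Java 15), the same test can be written without any escaped quotes. A sketch – only the string literals change:

@Test
public void stores_new_budget() throws Exception {
    this.web.perform(
        post("/budgets")
        .contentType(MediaType.APPLICATION_JSON)
        .content("""
            {"denotation": "new budget", "maximumHours": 300}
            """)
    )
    .andExpect(status().isOk())
    .andExpect(content().json("""
        {"denotation": "new budget", "maximumHours": 300, "currentHours": 300}
        """));
}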

If you wrote just the second test, your code coverage metrics would probably not decrease. Your business rule would still be tested. So why write the first test if it adds nothing to the safety net?

This can be explained with the idea that there is a considerable “testing gap” between the second test and the business rule. It covers the entity’s constructor code and states explicitly that the currentHours property should be set to the same value as the maximumHours property. But it also defines the communication protocol as being HTTP, the data format as being JSON, and travels through code that finds an “endpoint” for the given URL, maps the given JSON to a constructor call and serializes the resulting object as JSON back to the requester. That is a lot of padding just to test the constructor’s third line.

The first test has virtually no testing gap. It knows nothing about the web, data formats or whatever else the application consists of. It just looks at the entity and its behaviour in isolation.

There are perfectly valid reasons to write the second test, but it should not be the only test that ensures the business rule in our example. The second test “sees too much” from its vantage point to pay attention to a little detail like the business rule.

In case you didn’t quite get the concept of the testing gap yet, here is how I imagine it in my head: If your code under test is a mystery box (really try to picture a shoebox made of cardboard that rattles when you move it), then your test is a big floating eye that uses little cracks and holes in the box to get a quick peek inside. If you exhibit state by getter methods like in our example, the eye ensures the internal state of the box by looking at the gauges that are placed on its sides.

If your testing gap is small, the eye hovers up close to the box. It doesn’t see anything else, but it notices every detail of the box.

If you have some testing gap in your test code, the eye is placed at a considerable distance from the mystery box. There are other important things between them. The gauges aren’t directly readable. The eye uses indirect clues and reflections to gather its information. Every time something in the gap’s setup changes, the testing eye needs to adjust its gaze.

Which brings us to the conclusion of this metaphor: If you want things to be looked at in detail, write tests without a testing gap. Otherwise, your tests will have increased execution times, exhibit a strange imprecision in their message (“something in these dozens of classes has changed and it might not even be relevant”) and require frequent adjustments that are not related to their testing story.

Or, if said with the words of my imagination, place your testing eye directly at the entrance of your test’s hideout.

You’ve probably thought about this concept already, in your own terms and metaphors. Can you try to describe it in a comment? Just for the name, we discussed “testing distance”, “testing height”, “testing gap” and others. Perhaps we like your description even better.

Look at the automated tests to diagnose the project ailments

If you have to evaluate a new software project, draw the outline of the existing test types. If it doesn’t look like a pyramid, you might be in trouble. Here are some common non-pyramid outlines and what they mean.

A cornerstone of modern software development is developer testing. That means that developers are the primary authors of automated test code. In theory, that is a good thing and might look like the quality assurance department is out of work soon. In practice, we as a profession have tried for nearly twenty years to install a culture of developer testing in our work and still end up with software projects that feature no automated tests at all (side note: JUnit 1.0 was released in February of 1998).

What we know about automated tests

One piece of common understanding about developer testing is the test pyramid. Let’s quickly recap what we know about it. There are different kinds of automated tests, and the test pyramid differentiates three of them:

  • Acceptance tests or UI tests are the heaviest type of automated test. They operate on the software from the outside, with the means of a real user and try to assert that real use cases are accomplishable.
  • Integration tests often use several parts of the system in a test scenario that asserts the correct collaboration of the parts. Integration tests may take some time to come to a conclusion and utilize real hardware like network or disks.
  • Unit tests tend to be small and quick and focus on a particular aspect of an “unit” like a class or entity aggregate. Their reach into the system should be short and might be forcefully restricted by employing mocks.

These three types, the A, I and U of automated tests, should come in different quantities. A good rule of thumb is that for every acceptance test, there might be up to one thousand unit tests. If you draw the quantities as areas, they appear in the form of a pyramid. A small top of acceptance tests rests on a broader seating of integration tests that relies on a groundwork of many unit tests. A healthy test pyramid looks like this:

Take this picture as an orientation, not as an absolute scale. But be sure to count your different test types from time to time.
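If your build tool doesn’t report these numbers, even a crude census helps. Here is a minimal sketch that assumes the widespread naming convention of *Test for unit tests and *IT for integration tests; it counts test classes instead of test methods, which is usually precise enough for the outline. Adapt the suffixes (and add one for your acceptance tests) to your project’s conventions:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class TestCensus {

    public static void main(String[] args) throws IOException {
        System.out.println("unit test classes: " + count("Test.java"));
        System.out.println("integration test classes: " + count("IT.java"));
    }

    // walks the test source tree and counts files with the given suffix
    private static long count(String suffix) throws IOException {
        try (var files = Files.walk(Path.of("src/test/java"))) {
            return files.filter(
                path -> path.getFileName().toString().endsWith(suffix)
            ).count();
        }
    }
}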

Outlining the tests

This is actually one of the first things I do when I get introduced to a new and unknown code base. This happens quite often when I do consulting work for existing development teams. Have a look at the automated tests, determine their type and count their numbers. If it resembles anything close to the test pyramid, you’ve got a chance. If the resulting shape looks different, you might find this blog entry useful:

The Tower

If you have a hard time finding any tests (because there are none) or you find only some half-assed attempts to produce a meaningful automated test suite, you are looking at a tower project. The tower is rather small in diameter; in the case of absent tests, it is nothing more than a thin vertical line (the “stick”). If you find a solid number of tests for every type, you’ve found a “block” project. Block projects usually don’t have a problem, but a history of test effort migration, either from unit to acceptance tests or, more commonly, in the other direction. If you find a block, you are fine.

The tower, though, is a case of neglect. The project team might have started serious efforts to automate their tests, but got demotivated by intrinsic or extrinsic influences and abandoned the tests soon after their creation. Nobody has looked after them since, and the only reason they still pass green is that they didn’t really test anything to begin with or only cover an area of the system that is as finished as it is boring. Topics like user management or utility classes are usually the first and only things that get tests in a tower scenario.

Don’t get me wrong, the tower indicates the absence of tests, but not the absence of willingness to write automated tests, unless the tower is really a stick. A team willing to invest in automated tests may only lack knowledge and coaching about the topic. Be sure to lead them bottom-up (unit tests first), though.

The Egg

If you’ve categorized and counted the tests and couldn’t find many acceptance or unit tests, you’ve found an egg. The egg consists mostly of integration tests that may lean into unit testing territory by asserting the smallest bits of functionality here and there (often embedded in an overarching test storyline) or dip their toes into GUI-based testing by asserting presentation-specific properties of widget objects. While they provide ample test coverage for the system, they also tie application logic and presentation details together and don’t help to separate domain code from the use cases.

The project team is probably proud of their test coverage and doesn’t see any value in differentiating the automated tests types, because “every test improves the situation”. The blindness to test types is the core problem that may be cured with training and coaching (I’ve found the ATRIP-rules to be particularly effective to distinguish integration and unit tests), but the symptoms, especially the lack of separation of concerns, have to be mitigated soon, too.

One way to start there is to break the tests down into their integration and their unit test parts. You can work from assertion to assertion and ask: is this necessary to ensure the current use case? If not, extract a new unit test focussed on only this one assertion, as sketched below.
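A sketch of such an extraction, with invented names: an assertion about a discount rule that was buried in a checkout storyline becomes a unit test of its own.

// before: buried somewhere in a long integration test storyline
// assertEquals(new Euro(90), checkout.orderTotal());

// after: the rule gets its own focussed unit test
@Test
public void applies_ten_percent_discount_above_threshold() {
    final DiscountPolicy target = new DiscountPolicy(10);

    final Euro actual = target.apply(new Euro(100));

    assertEquals(new Euro(90), actual);
}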

As soon as you add a pedestal of unit tests to your egg, you are well on your way to a healthy test pyramid.

The Ice Cream Cone

This is the most fearsome automated test outline in existence, even more dramatic than the stick. Usually, the project team is really enthusiastic about writing tests or at least follows orders to do so, but cannot test parts of the application in isolation. A really tragic case was a complex system that was so entangled with its database, through countless stored procedures that contributed to the application logic, that it was hopeless to think about tests without the database. And because every automated test had to start the whole system including the database, there was really no need to differentiate between application logic and presentation logic. It all became a Gordian knot of dependencies that enforced the habit of writing elaborate automated GUI-based tests to test the smallest logic bits deep inside the core. It felt like eating single rice grains with overly long, flimsy wooden chopsticks that would break often.

The ice cream cone is problematic because the project team needs to realize that their effort was misguided and the tests are all telling the bitter truth: the system’s architecture isn’t fit for proper automated tests. It’s not the tests, it’s you (or your architecture)! Nobody wants to hear that, and even more so, nobody wants to untangle the mess (without the help of a proper safety net consisting of automated tests). Pinning tests are probably helpful in this scenario, as sketched below.
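A pinning test records the current behavior, right or wrong, so that you can refactor underneath it with at least some safety net. A minimal sketch with invented names:

@Test
public void pins_current_invoice_rendering() {
    final InvoiceRenderer target = new InvoiceRenderer();

    final String actual = target.render(SampleInvoices.threeItems());

    // asserts what the system currently does, not what would be correct;
    // the expected string was captured from a real run
    assertEquals("Invoice 42: 3 items, 129.00 EUR", actual);
}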

But you need to turn the test pyramid around, or the project team will suffocate under the overly costly test tax while increasing technical debt.

Epilogue

Please keep in mind that it’s not a problem in itself that your project doesn’t have a normal test pyramid. It’s great that you have automated tests at all! But your current test type distribution might not be as effective as possible, might be more expensive than necessary and might not be the right automated test setup for your development goals.

What are your stories with automated test setups? Care to share them with us in the comments?

From ugly to pretty – Three steps is all it takes

A story about what can happen if you challenge your students to improve inferior code. With just three simple steps, the code gets beautiful.

I have held lectures in software engineering for over a decade now. One major topic is testing, specifically unit tests. Other cornerstones are refactorings and code readability. So whenever I have the chance to challenge my students in cross-topic aspects of software development, it’s almost always a source of insight for them and especially for me. But one golden moment holds a special place in my memory. This is the (rather elaborate, sorry) story of this moment.

During a lecture about unit tests with JUnit, my students had the task to develop tests for a bank account class. That’s about as boring as testing can be – the account was related to a customer and had a current balance. The customer can withdraw money, but only some customers can overdraw their account. To spice things up a bit, we also added the mock object framework EasyMock to the mix. While I would recommend other mock frameworks for production usage, the learning curve of EasyMock is just about right for first time exposure in a “sheep dip” fashion.

Our first test dealt with drawing money from an empty account that can be overdrawn:

@Test
public void canWithdrawOnCredit() {
  Customer customer = EasyMock.createMock(Customer.class);
  EasyMock.expect(customer.canOverdraw()).andReturn(true);
  EasyMock.replay(customer);
  Account account = new Account(customer);
  Euro required = new Euro(30);

  Euro cash = account.withdraw(required);

  assertEquals(new Euro(30), cash);
  assertEquals(new Euro(-30), account.balance());
  EasyMock.verify(customer);
}

The second test made sure that this withdrawal behaviour only works for customers with sufficient credit standing. We decided to pay out nothing (0 Euro) if the customer tries to withdraw more money than his account currently holds:

@Test
public void cannotTakeUpCredit() {
  Customer customer = EasyMock.createMock(Customer.class);
  EasyMock.expect(customer.canOverdraw()).andReturn(false);
  EasyMock.replay(customer);
  Account account = new Account(customer);
  Euro required = new Euro(30);

  Euro cash = account.withdraw(required);

  assertEquals(Euro.ZERO, cash);
  assertEquals(Euro.ZERO, account.balance());
  EasyMock.verify(customer);
}

As you can tell, a lot of copy and paste was going on in the creation of this test. Just look at the name of the local variable “required” – it’s misleading now. Right up to this point, my main topic was the usage of the mock framework, not perfect code. So I explained the five stages of normalized mock-based unit tests (initialize, train mocks, execute tested code, assert results, verify mocks) and then changed the topic by expressing my displeasure about the duplication and the inferior readability of the code (it even tries to trick you with the “required” variable!). Now it was up to my students to improve our situation (this trick works only a few times for every course before they preventively become even pickier than me). A student accepted the challenge and gave advice:

First step: Extract Method refactoring

The obvious first step was to extract the duplication in its own method and adjust the calls by their parameters. This is an easy refactoring that will almost always improve the situation. Let’s see where it got us. Here is the extracted method:

protected void performWithdrawalTestWith(
    boolean customerCanOverdraw,
    Euro amountOfWithdrawal,
    Euro expectedCash,
    Euro expectedBalance) {
  Customer customer = EasyMock.createMock(Customer.class);
  EasyMock.expect(customer.canOverdraw()).andReturn(customerCanOverdraw);
  EasyMock.replay(customer);
  Account account = new Account(customer);

  Euro cash = account.withdraw(amountOfWithdrawal);

  assertEquals(expectedCash, cash);
  assertEquals(expectedBalance, account.balance());
  EasyMock.verify(customer);
}

And the two tests, now really concise:

@Test
public void canWithdrawOnCredit() {
  performWithdrawalTestWith(
      true,
      new Euro(30),
      new Euro(30),
      new Euro(-30));
}


@Test
public void cannotTakeUpCredit() {
  performWithdrawalTestWith(
      false,
      new Euro(30),
      Euro.ZERO,
      Euro.ZERO);
}

Well, that did resolve the duplication indeed. But the test methods now lacked any readability. They appeared as if somebody had extracted all the semantics out of the code. We were unhappy, but decided to interpret the current code as an intermediate step to the second refactoring:

Second step: Introduce Explaining Variable refactoring

In the second step, the task was to re-introduce the semantics back into the test methods. All parameters were nameless, so that was our angle of attack. By introducing local variables, we gave the parameters meaning again:

@Test
public void canWithdrawOnCredit() {
  boolean canOverdraw = true;
  Euro amountOfWithdrawal = new Euro(30);
  Euro payout = new Euro(30);
  Euro resultingBalance = new Euro(-30);

  performWithdrawalTestWith(
      canOverdraw,
      amountOfWithdrawal,
      payout,
      resultingBalance);
}


@Test
public void cannotTakeUpCredit() {
  boolean canOverdraw = false;
  Euro amountOfWithdrawal = new Euro(30);
  Euro payout = Euro.ZERO;
  Euro resultingBalance = Euro.ZERO;

  performWithdrawalTestWith(
      canOverdraw,
      amountOfWithdrawal,
      payout,
      resultingBalance);
}

That brought back the meaning to the test methods, but didn’t improve readability. The code wasn’t intentionally cryptic any more, but still far from being intuitively understandable – and that’s what really readable code should be. If even novices can read your code fluently and grasp the main concepts in the first pass, you’ve created expert code. I challenged the student to further transform the code, without any idea how to carry on myself. My student hesitated, but came up with the decisive refactoring within seconds:

Third step: Rename Variable refactoring

The third step doesn’t change the structure of the code, but its approachability. Instead of naming the local variables after their usage in the extracted method, we name them after their purpose in the test method. A first time reader won’t know about the extracted method (and preferably shouldn’t need to know), so it’s not in the best interest of the reader to foreshadow its details. Instead, we concentrate about telling the reader a coherent story:

@Test
public void canWithdrawOnCredit() {
  boolean aCustomerThatCanOverdraw = true;
  Euro heWithdraws30Euro = new Euro(30);
  Euro receivesTheFullAmount = new Euro(30);
  Euro andIsNow30EuroInTheRed = new Euro(-30);

  performWithdrawalTestWith(
      aCustomerThatCanOverdraw,
      heWithdraws30Euro,
      receivesTheFullAmount,
      andIsNow30EuroInTheRed);
}


@Test
public void cannotTakeUpCredit() {
  boolean aCustomerThatCannotOverdraw = false;
  Euro heTriesToWithdraw30Euro = new Euro(30);
  Euro butReceivesNothing = Euro.ZERO;
  Euro andStillHasABalanceOfZero = Euro.ZERO;

  performWithdrawalTestWith(
      aCustomerThatCannotOverdraw,
      heTriesToWithdraw30Euro,
      butReceivesNothing,
      andStillHasABalanceOfZero);
}

If the reader is able to ignore some crude verbalization and special characters, he can read the test out loud and instantly grasp its meaning. The first lines of every test method are a bit confusing, but necessary given Java’s lack of named parameters.

The result might remind you a lot of Behavior Driven Development notation and that’s probably not by chance. In a few minutes during that programming exercise, my students taught themselves to think in scenarios or stories when approaching unit tests. I couldn’t have taught it any better – instead, I got enlightened by this exercise, too.
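As a side note: if Java’s lack of named parameters bothers you, a small parameter object with a fluent builder can emulate them. This is a sketch with invented helper names, not part of the original exercise – aWithdrawal() returns a plain builder object, and an overload of performWithdrawalTestWith reads the four values from it, all living behind the curtain:

@Test
public void cannotTakeUpCredit() {
  performWithdrawalTestWith(aWithdrawal()
      .byCustomerThatCanOverdraw(false)
      .ofAmount(new Euro(30))
      .expectingCash(Euro.ZERO)
      .expectingBalance(Euro.ZERO));
}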

Communication through Tests – a larger experiment

We evaluated our ability to communicate through tests in a two-day experiment and gathered some interesting results.

For us, automated tests are the hallmark of professional software development. That doesn’t mean that we buy into every testing fad that comes along or consider ourselves testing experts just because we write some tests alongside our code. We put our money where our mouth is and evaluate our abilities in writing effective tests.

One way to measure the effectiveness of tests is to try to “communicate through tests”. One developer/team writes code and tests for a given specification. Another team picks up the tests only and tries to recreate the production code and infer the specification. The only communication between the two teams happens through the tests.

We performed a small experiment with two teams and one day for both phases and blogged about it. The result of this evaluation was that unit tests are a good medium to transport specification details. But we got a hint that the problems might be bigger when the code is less arithmetic and more complex. As most of our development tasks are rather complex and driven by business rules instead of clean mathematical algorithms, we wanted to inspect further.

Our larger experiment

So we organized a bigger experiment with a broader scope. Instead of two teams, we had three. We ran the phases for eight instead of two hours, essentially increasing the resulting code size by a factor of 3. The assignments weren’t static, but versioned – and each team only knew the rules of the current version. When a team reached a certain milestone, more rules were revealed, partly contradicting the previous ruleset. This should emulate changing customer requirements. And to be able to retrospect on the reconstruction phase, we recorded it with screencast software (we used the commercial product Debut Video Capture), capturing both inputs and conversation by using headsets for every developer.

The first part of this experiment happened in late January of 2013, where all teams had one day to produce production and test code. This was a day of loud buzz in our development department. The second part for the reconstruction phase was scheduled for the middle of February 2013. We had to be a bit more quiet this time to increase the audio recording quality, but the developers were humming nonetheless.

Here are some numbers of what was produced in the first session:

  • Team 1: 400 lines of production code, 530 lines of test code. 8 production classes, 54 tests. Test coverage of 90.6%.
  • Team 2: 576 lines of production code, 655 lines of test code. 17 production classes, 59 tests. Test coverage of 98.2%.
  • Team 3: 442 lines of production code, 429 lines of test code. 18 production classes, 37 tests. Test coverage of 97.0%.

The reconstruction phase was finished in less than five hours, partly because we stuck very close to the actual tests with little guesswork. When the tests didn’t enforce a functionality, it deliberately wasn’t implemented, to reveal the holes in the test coverage. This reduced the amount of production code that had to be written. On the flip side, every team got lost once on the way, losing the better part of an hour without noticeable progress.

The results

After all the talk about the event itself, let’s have a look at our results of the experiment:

  • The recording of the reconstruction phase was a huge gain in understanding the detailed problems. We even discussed recording the construction phase as well to capture the original design decisions.
  • Every decision on unclear terms by the original team led to “blurry” tests that didn’t guide the reconstruction team as well as the “razor-sharp” tests did.
  • You could definitely tell the TDD tests from the “test first” tests or even the tests written “immediately after”. More on this aspect later, but this was our biggest overall take-away: The quality of the tests in terms of being a specification differed greatly. This wasn’t bound to teams – as soon as a team lost the TDD “drive”, the tests lost guidance power.
  • Test coverage (in terms of line coverage or conditional coverage) means nothing. You can have 100% test coverage and still suffer from severe plot holes in your tests. Blurry tests tend to increase the coverage, but not the accountability of tests.
  • In general, we were surprised how little guidance and coverage most tests offered. The assignments included some obvious “testing problems” like dealing with randomness, and every team dealt with them deliberately. Still, these were the major pain points during the reconstruction phase. This result puts our first small experiment a bit into perspective. What works well with small code bases might be disproportionately harder to achieve when the code size scales up. So while TDD/tests might work sufficiently easily on a small task, it needs more attention for a larger task.

The biggest problem

When talking about “plot holes” in the tests, let me give you a detailed example of what I mean. The more useless tests suffered from a lack of triangulation. In geometry, triangulation is the process of determining the location of a point by measuring angles to it from known points. When writing tests, triangulation is the effort to “pinpoint” or specify the implementation with a set of different inputs and required outputs: you specify enough different tests of the same functionality to require a “real” implementation instead of a dummy one. Let’s look at this test:

@Test
public void parsesUserInput() {
  assertThat(new InputParser().parse("1 3 5"), hasItems(1, 3, 5));
}

Well, the test tells us that we need to convert a given string into a bunch of integers. It specifies the necessary class and method for this task, but gives us great freedom in the actual implementation. This makes the test green:

public Iterable<Integer> parse(String input) {
  return Arrays.asList(1, 3, 5);
}

As far as the tests are concerned, this is a concise and correct implementation of the required functionality. And while it is obvious in our example that this will never be sufficient, it often isn’t so obvious when the problem domain isn’t as familiar as parsing strings to numbers. But to complete my explanation of test triangulation, let’s consider a more elaborate version of this test that requires a lot more work on the implementation side (especially when developed in accordance with the Transformation Priority Premise by Uncle Bob and without obvious duplication):

@Test
public void parsesUserInput() {
  assertThat(new InputParser().parse("1 3 5"), hasItems(1, 3, 5));
  assertThat(new InputParser().parse("1 2"), hasItems(1, 2));
  assertThat(new InputParser().parse("1 2 3 4 5"), hasItems(1, 2, 3, 4, 5));
  assertThat(new InputParser().parse("1 4 5 3 2"), hasItems(1, 2, 3, 4, 5));
  assertThat(new InputParser().parse("5 4"), hasItems(4, 5));
  assertThat(new InputParser().parse("5 3"), hasItems(3, 5));
}

Maybe not all assertions are required, and maybe they should live in separate tests that give more hints in their names, but you get the idea: making this test green is way “harder” than the initial one. Writing properly triangulated tests is one of the immediate benefits of Test-Driven Development (TDD), as outlined nicely by Ray Sinnema in his blog entry about test-driving a code kata.
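
To show what the triangulated test now enforces, here is a minimal sketch of an implementation that passes it – one possible solution for illustration, not the code any team actually wrote:

import java.util.ArrayList;
import java.util.List;

public class InputParser {
  // Splits the input on whitespace and parses every token as an integer.
  public Iterable<Integer> parse(String input) {
    final List<Integer> numbers = new ArrayList<>();
    for (final String token : input.trim().split("\\s+")) {
      numbers.add(Integer.parseInt(token));
    }
    return numbers;
  }
}

A hard-coded list no longer stands a chance: the assertions pin the behaviour down from enough angles.
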
Our tests that were written “after the fact” often lacked the proper amount of triangulation, making it easy to “fake it” in the reconstruction phase. In a real project setting, these tests would allow too much implementation deviation to act as a specification. They act more as usage examples and happy-path “smoke” tests.

Our benefits

While this experiment doesn’t fulfill rigorous academic standards for gathering data, it has already paid off greatly for us. We examined our ability to express our implementations through tests and gathered insight into our real capabilities to use test-driven methodologies. Being able to judge the quality of your own tests relatively objectively (by watching the screencast of the reconstruction phase) was very helpful. We now know better which skills to improve and what to focus on during training.

Where to go from here?

We plan to repeat this experiment with interested participants as a spare-time event later this year. For now, we have gathered enough impressions to act on. If you are interested in more details, drop us a note. We could publish the tests alone (for your own reconstruction), the complete code or even the screencasts (albeit they are rather long-running). Our participants could elaborate on their impressions in the comment section, if you ask them.
We are very interested in your results when you perform similar events, as Tomasz Borek did this month in Krakow, Poland. We found his blog entry about the event very interesting. We definitely lacked the surprise element for the teams during our event.

A small test saves the day

You think a method is too trivial to write a test for? Think again if the method is mission-critical!

Just recently, I had to write a connection between an existing application and a new hardware unit. This is a fairly common job for our company, even under the circumstances that I had never seen the hardware, let alone been able to connect to it. The hardware unit itself was rather big and installed in a security-sensitive area with restricted access. So I only got a specification of the protocol to use and a description of the hardware’s features.

Our common procedure for including hardware-dependent modules in an application is to write two implementations of the module: One implementation is the real deal and interacts with the hardware over ethernet, USB, serial port or whatever proprietary communication device is used. This version of the module can only work as intended if the hardware is present. The other implementation acts as an emulation of the hardware, without any dependencies – if you are familiar with unit tests, think of a big test mock. The emulation version is used during development to test and run the application without any requirements on the hardware. There are a lot of subtle pitfalls to consider and avoid, but on a bird’s-eye level of abstraction, these interchangeable implementations of a module enable us to develop software with hardware dependencies without needing the actual hardware.
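
In code, the pattern boils down to a common interface with two implementations. The following sketch reuses the class names from the factory method below; the methods on the interface are made up for illustration:

public interface HardwareModule {
  // Hypothetical operations – the real interface mirrors the application’s needs.
  void connect();
  void disconnect();
}

public class RealHardwareModule implements HardwareModule {
  public void connect() { /* open the connection, speak the protocol */ }
  public void disconnect() { /* close the connection */ }
}

public class EmulatedHardwareModule implements HardwareModule {
  public void connect() { /* nothing to open – perhaps show a debug GUI */ }
  public void disconnect() { }
}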

The first piece of a module’s code to be used is a factory/builder class that chooses between the available implementations, based on some configuration entry (or hardware availability, etc.). A typical implementation of the responsible method might look like this:

public HardwareModule createFor(ModuleConfiguration configuration) {
  if (configuration.isHardwarePresent()) {
    new RealHardwareModule();
  }
  return new EmulatedHardwareModule();
}

If the configuration object says that the hardware is present, the real implementation is used, subsequently opening a connection to the hardware and speaking the client side of the given protocol. Otherwise, the emulation is created and returned, maybe opening a debug GUI window that displays certain internal states and values and provides controls to mess with the application during development.

The method itself looks very innocent and meager. There is not much going on, so what could possibly go wrong?

I’m not the most eager test-driven developer in the world, I have to admit. But I see the value of tests (and unit tests in particular) and adhere to the A-TRIP rules defined by Andy Hunt and (pragmatic) Dave Thomas:

  • Automatic
  • Thorough
  • Repeatable
  • Independent
  • Professional

For a complete definition of the rules, read the linked blog entry or, even better, buy the book. It’s small and cheap, but contains a lot of profound basic knowledge about unit testing.

The “Thorough” rule is more a rule of thumb than a hard scientific formula for good unit tests: always write a test if you’ve found a bug or if the code you’re writing is mission-critical. This was when my gut feeling told me that while the method above might seem trivial, it is definitely essential for the hardware module. So I wrote a test:

  @Test
  public void providesEmulationIfUnspecified() {
    HardwareModuleFactory factory = new HardwareModuleFactory();
    HardwareModule hardware = factory.createFor(configuration(""));
    assertEquals("not the hardware emulation", EmulatedHardwareModule.class, hardware.getClass());
  }

  @Test
  public void providesEmulationIfHardwareAbsent() {
    HardwareModuleFactory factory = new HardwareModuleFactory();
    HardwareModule hardware = factory.createFor(configuration("hardware.present=false"));
    assertEquals("not the hardware emulation", EmulatedHardwareModule.class, hardware.getClass());
  }

  @Test
  public void providesRealImplementationIfHardwarePresent() {
    HardwareModuleFactory factory = new HardwareModuleFactory();
    HardwareModule hardware = factory.createFor(configuration("hardware.present=true"));
    assertEquals("not the real hardware implementation", RealHardwareModule.class, hardware.getClass());
  }

To my surprise, the test suite immediately went red for the third test method. After double-checking the test code, I was certain that the test was correct: it had discovered a bug in the production code. And being a mostly independent unit test, it pointed to the problematic lines right away – the method implementation above. The helper method named configuration(), omitted from the code sample, was very unlikely to contain a bug.
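
For the curious: the omitted helper might look something like this – a hypothetical reconstruction, since the original wasn’t published, assuming ModuleConfiguration can be built from a java.util.Properties instance:

import java.io.IOException;
import java.io.StringReader;
import java.util.Properties;

// Hypothetical test helper: builds a ModuleConfiguration
// from a properties-style string like "hardware.present=true".
private ModuleConfiguration configuration(String entry) {
  final Properties properties = new Properties();
  try {
    properties.load(new StringReader(entry));
  } catch (IOException e) {
    throw new IllegalStateException(e);
  }
  return new ModuleConfiguration(properties);
}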

After a short moment of reading the code again, I corrected it (note the added return statement in line 3):

public HardwareModule createFor(ModuleConfiguration configuration) {
  if (configuration.isHardwarePresent()) {
    return new RealHardwareModule();
  }
  return new EmulatedHardwareModule();
}

This might not seem like the most disastrous bug ever, but it would have made for a nasty start when I finally tried the application with the real hardware. There is nothing more valuable than being able to keep your cool “in the wild” and work on the real problems, like faulty protocol specifications or unexpected/undocumented hardware behaviour. So my gut feeling (and the Thorough rule) was right, and my brain, which had told me to “skip this petty test” for longer than I like to admit, was wrong. A small test for a small method paid off immediately and saved the day, at least for me.