Testing Java with Grails 2.2

We have some projects that consist of both Java and Groovy classes. If you don’t pay attention, you can have a nice WTF moment.

Let us look at the following fictional example. You want to test a static method of a Java class “BlogAction”. This method should tell you whether a user can trigger a delete action, depending on a configuration property.

The classes under test:

public class BlogAction {
    public static boolean isDeletePossible() {
        return ConfigurationOption.allowsDeletion();
    }
}
class ConfigurationOption {
    static boolean allowsDeletion() {
        // code of the Option class omitted, here it always returns false
        return Option.isEnabled('userCanDelete');
    }
}

In our test we mock the method of ConfigurationOption to return some special value and test for it:

@TestMixin([GrailsUnitTestMixin])
class BlogActionTest {
    @Test
    void postCanBeDeletedWhenOptionIsSet() {
        def option = mockFor(ConfigurationOption)
        option.demand.static.allowsDeletion(0..(Integer.MAX_VALUE-1)) { -> true }

        assertTrue(BlogAction.isDeletePossible())
    }
}

As a result, the test runner greets us with a nice message:

junit.framework.AssertionFailedError
    at junit.framework.Assert.fail(Assert.java:48)
    at junit.framework.Assert.assertTrue(Assert.java:20)
    at junit.framework.Assert.assertTrue(Assert.java:27)
    at junit.framework.Assert$assertTrue.callStatic(Unknown Source)
    ...
    at BlogActionTest.postCanBeDeletedWhenOptionIsSet(BlogActionTest.groovy:21)
    ...

Why? There is not much code to mock, so what is missing? An additional assert statement adds clarity:

assertTrue(ConfigurationOption.allowsDeletion())

The static method still returns false! The metaclass “magic” provided by mockFor() is not used by my Java class. BlogAction simply ignores it and calls the allowsDeletion method directly. But there is a solution: we can mock the call to the “Option” class.

@Test
void postCanBeDeletedWhenOptionIsSet() {
    def option = mockFor(Option)
    option.demand.static.isEnabled(0..(Integer.MAX_VALUE-1)) { String propertyName -> true }

    assertTrue(BlogAction.isDeletePossible())
}

Lessons learned: the more that happens behind the scenes, the more important the execution context becomes.

Test your migrations

Do you trust your database migrations?

An evolving project that changes its persistent data structure can require a transformation of already existing content into the new form. To achieve this goal in our Grails projects we use the Grails database migration plugin. This plugin allows us to apply changesets to the database and keep track of its current state.

The syntax of the DSL for Groovy database migrations is easy to read. This can trick you into the assumption that everything that looks good, compiles and runs without errors is OK. Of course it is not. Here is an example:

changeSet(author: 'vasili', id: 'copies messages to archive') {
  grailsChange {
    change {
      sql.eachRow("SELECT MESSAGES.ID, MESSAGES.CONTENT, "
                + "MESSAGES.DATE_SENT FROM MESSAGES WHERE "
                + "MESSAGES.DATE_SENT > to_date('2011-01-01 00:00', "
                + "'YYYY-MM-DD HH24:MI:SS')") { row ->
        sql.execute("INSERT INTO MESSAGES_ARCHIVE(ID, CONTENT, DATE_SENT) "
                  + "VALUES(${row.id}, ${row.content}, ${row.date_sent})")
      }
    }
    change {
      sql.eachRow("SELECT MESSAGES.ID, MESSAGES.CONTENT, "
                + "MESSAGES.DATE_SENT FROM MESSAGES WHERE "
                + "MESSAGES.DATE_SENT > to_date('2012-01-01 00:00', "
                + "'YYYY-MM-DD HH24:MI:SS')") { row ->
        sql.execute("INSERT INTO MESSAGES_ARCHIVE(ID, CONTENT, DATE_SENT) "
                + "VALUES(${row.id}, ${row.content}, ${row.date_sent})")
      }
    }
  }
}

Here you see two change closures that differ only by the year in the SQL where clause. What do you think will happen to your database when this migration is applied? The answer is: only changes from the year 2012 will be found in the destination table. The assumption that because one change closure works inside a grailsChange block, two of them will work as well is, while compilable and runnable, wrong. Looking at the documentation you will see that it shows only one change block in the example code. When you divide the migration into multiple parts, each of them working on its own change, everything will work as expected.

Currently there is no safety net like unit tests for database migrations. Every assumption you make must be tested manually with some dummy test data.

The difference between Test First and Test Driven Development

If you tend to fall into the “one-two-everything”-trap while doing TDD, this blog post might give you some perspective on what really happens and how to avoid the trap by decomposing the problem.

The concept of Test First (“TF”, write a failing test first and make it green by writing exactly enough production code to do so) was always very appealing to me. But I’ve never experienced the guiding effect that is described for Test Driven Development (“TDD”, apply a Test First approach to a problem using baby steps, letting the tests drive the production code). This led to quite some frustration and scepticism on my side. After a lot of attempts and training sessions with experienced TDD practitioners, I concluded that while I grasped Test First and could apply it to everyday tasks, I wouldn’t be able to incorporate TDD into my process toolbox. My biggest grievance was that I couldn’t even tell why TDD failed for me.

The bad news is that TDD still lies outside my normal toolbox. The good news is that I can pinpoint a specific area where I need training in order to learn TDD properly. This blog post is the story of my revelation. I hope that you can gather some ideas for your own progress, provided that you, too, are no TDD master yet.

A simple training session

In order to learn TDD, I always look for fitting problems to apply it to. While developing a repository difference tracker, the Diffibrillator, there was a neat little task to order the entries of several lists of commits into a single, chronologically ordered list. I delayed the implementation of the needed algorithm for a TDD session in a relaxed environment. My mind began to spawn background processes about possible solutions. When I finally had a chance to start my session, one solution had already crystallized in my imagination:

An elegant solution

Given several input lists of commits, now used as queues, and one result list that is initially empty, repeat the following step until no more commits are pending in any input queue: Compare the head commits of all input queues by their commit date and remove the oldest one, adding it to the result list.
I nick-named this approach the “PEZ algorithm” because each commit list acts like the old PEZ candy dispensers of my childhood, always giving out the topmost sherbet shred when asked for.
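
The original 130-line implementation is not shown in this post, but the idea can be sketched compactly. The following is only an illustration of the described procedure, using an invented stand-in Commit type and plain queues instead of the project’s real Commit and ProjectBranch classes; note that the tests later in this post actually expect the newest commit first, which would simply flip the comparison.

import java.util.ArrayList;
import java.util.Date;
import java.util.Deque;
import java.util.List;

class PezMergeSketch {

  // Stand-in commit type; the real Commit class is assumed to expose a commit date.
  static class Commit {
    final Date date;
    Commit(Date date) { this.date = date; }
  }

  // Each input queue holds one branch's commits, oldest commit at the head.
  static List<Commit> mergeByCommitDate(List<Deque<Commit>> queues) {
    List<Commit> result = new ArrayList<>();
    while (true) {
      Deque<Commit> queueWithOldestHead = null;
      for (Deque<Commit> queue : queues) {
        if (queue.isEmpty()) {
          continue;
        }
        // compare the head commits of all non-empty queues by their commit date
        if (queueWithOldestHead == null
            || queue.peek().date.before(queueWithOldestHead.peek().date)) {
          queueWithOldestHead = queue;
        }
      }
      if (queueWithOldestHead == null) {
        return result; // no commits pending in any input queue
      }
      // remove the oldest head commit and append it to the result list
      result.add(queueWithOldestHead.poll());
    }
  }
}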

A Test First approach

Trying to break the problem down into baby-stepped unit tests, I fell into the “one-two-everything”-trap once again. See for yourself what tests I wrote:

@Test
public void emptyIteratorWhenNoBranchesGiven() throws Exception {
  Iterable<ProjectBranch> noBranches = new EmptyIterable<>();
  Iterable<Commit> commits = CombineCommits.from(noBranches).byCommitDate();
  assertThat(commits, is(emptyIterable()));
}

The first test only prepares the class’s interface, naming the methods and trying to establish a fluent coding style.

@Test
public void commitsOfBranchIfOnlyOneGiven() throws Exception {
  final Commit firstCommit = commitAt(10L);
  final Commit secondCommit = commitAt(20L);
  final ProjectBranch branch = branchWith(secondCommit, firstCommit);
  Iterable<Commit> commits = CombineCommits.from(branch).byCommitDate();
  assertThat(commits, contains(secondCommit, firstCommit));
}

The second test was the inevitable “simple and dumb” starting point for a journey led by the tests (hopefully). It didn’t lead to any meaningful production code. Obviously, a bigger test scenario was needed:

@Test
public void commitsOfSeveralBranchesInChronologicalOrder() throws Exception {
  final Commit commitA_1 = commitAt(10L);
  final Commit commitB_2 = commitAt(20L);
  final Commit commitA_3 = commitAt(30L);
  final Commit commitA_4 = commitAt(40L);
  final Commit commitB_5 = commitAt(50L);
  final Commit commitA_6 = commitAt(60L);
  final ProjectBranch branchA = branchWith(commitA_6, commitA_4, commitA_3, commitA_1);
  final ProjectBranch branchB = branchWith(commitB_5, commitB_2);
  Iterable<Commit> commits = CombineCommits.from(branchA, branchB).byCommitDate();
  assertThat(commits, contains(commitA_6, commitB_5, commitA_4, commitA_3, commitB_2, commitA_1));
}

Now we are talking! If you give the CombineCommits class two branches with intertwined commit dates, the result will be a chronologically ordered collection. The only problem with this test? It needed the complete 100 lines of algorithm code to be green again. There it is: the “one-two-everything”-trap. The first two tests are merely finger exercises that don’t assert very much. Usually the third test is the last one to be written for a long time, because it requires a lot of work on the production side of the code. After this test, the implementation is mostly complete, with 130 lines of production code and a line coverage of nearly 98%. There wasn’t much guidance from the tests; it was more of a “holding back until a test allows for the whole thing to be written”. Emotionally, the tests only hindered me from jotting down the algorithm I had already envisioned, and when I finally got permission to “show off”, I dived into the production code and only returned when the whole thing was finished. A lot of ego went into those lines, but I didn’t realize it right away.

But wait, there is a detail left out from the test above that needs to be explicitly specified: if two commits happen at the same time, there should be a defined behaviour for the combiner. I declare that the order of the input queues is used as a secondary ordering criterion:

@Test
public void decidesForFirstBranchIfCommitsAtSameDate() throws Exception {
  final Commit commitA_1 = commitAt(10L);
  final Commit commitB_2 = commitAt(10L);
  final Commit commitA_3 = commitAt(20L);
  final ProjectBranch branchA = branchWith(commitA_3, commitA_1);
  final ProjectBranch branchB = branchWith(commitB_2);
  Iterable<Commit> commits = CombineCommits.from(branchA, branchB).byCommitDate();
  assertThat(commits, contains(commitA_3, commitA_1, commitB_2));
}

This test didn’t improve the line coverage and was green right from the start, because the implementation already acted as required. There was no guidance in this test, only assurance.

And that was my session: the four unit tests covered the anticipated algorithm completely, but didn’t provide any guidance that I could grasp. I was very disappointed, because the “one-two-everything”-trap is a well-known anti-pattern in my TDD experience and I still fell right into it.

A second approach using TDD

I decided to remove my code again and pair with my co-worker Jens, who formulated a theory about finding the next test by only changing one facet of the problem for each new test. Sounds interesting? It is! Let’s see where it got us:

@Test
public void noBranchesResultsInEmptyTrail() throws Exception {
  CommitCombiner combiner = new CommitCombiner();
  Iterable<Commit> trail = combiner.getTrail();
  assertThat(trail, is(emptyIterable()));
}

The first test starts as no big surprise, it only sets “the mood”. Notice how we decided to keep the CommitCombiner class simple and plain in its interface as long as the tests don’t get cumbersome.

@Test
public void emptyBranchesResultInEmptyTrail() throws Exception {
  ProjectBranch branchA = branchFor();
  CommitCombiner combiner = new CommitCombiner(branchA);
  assertThat(combiner.getTrail(), is(emptyIterable()));
}

The second test asserts only one thing more than the initial test: If the combiner is given empty commit queues (“branches”) instead of none like in the first test, it still returns an empty result collection (the commit “trail”).

With the single-facet approach, we can only change our tested scenario in one “domain dimension” and only the smallest possible amount of it. So we formulate a test that still uses one branch only, but with one commit in it:

@Test
public void branchWithCommitResultsInEqualTrail() throws Exception {
  Commit commitA1 = commitAt(10L);
  ProjectBranch branchA = branchFor(commitA1);
  CommitCombiner combiner = new CommitCombiner(branchA);
  assertThat(combiner.getTrail(), Matchers.contains(commitA1));
}

With this test, there was the first meaningful appearance of production code. We kept it very simple and trusted our future tests to guide the way to a more complex version.
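
Just to illustrate what “very simple” can mean at this point, a hypothetical intermediate version (invented here, not taken from the actual session) could get away with something like this:

import java.util.Collections;

public class CommitCombiner {
  private final ProjectBranch[] branches;

  public CommitCombiner(ProjectBranch... branches) {
    this.branches = branches;
  }

  public Iterable<Commit> getTrail() {
    if (branches.length == 0) {
      return Collections.emptyList();
    }
    // with a single branch given, its commits already are the trail
    return branches[0].commits();
  }
}

It hard-codes the single-branch assumption, which is exactly what the later tests will force us to abandon.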

The next test introduces the central piece of domain knowledge to the production code, just by changing the number of commits on the only given branch from “one” to “many” (three):

@Test
public void branchWithCommitsAreReturnedInOrder() throws Exception {
  Commit commitA1 = commitAt(10L);
  Commit commitA2 = commitAt(20L);
  Commit commitA3 = commitAt(30L);
  ProjectBranch branchA = branchFor(commitA3, commitA2, commitA1);
  CommitCombiner combiner = new CommitCombiner(branchA);
  assertThat(combiner.getTrail(), Matchers.contains(commitA3, commitA2, commitA1));
}

Notice how this requires the production code to come up with the notion of comparable commit dates that need to be ordered. We haven’t even introduced a second branch into the scenario yet, but are already asserting that the topmost mission-critical functionality works: commit ordering.

Now we need to advance to another requirement: The ability to combine branches. But whatever we develop in the future, it can never break the most important aspect of our implementation.

@Test
public void twoBranchesWithOnlyOneCommit() throws Exception {
  Commit commitA1 = commitAt(10L);
  ProjectBranch branchA = branchFor(commitA1);
  ProjectBranch branchB = branchFor();
  CommitCombiner combiner = new CommitCombiner(branchA, branchB);
  assertThat(combiner.getTrail(), Matchers.contains(commitA1));
}

You might say that we knew about this behaviour of the production code before, when we added the test named “branchWithCommitResultsInEqualTrail”, but it really is the assurance that things don’t change just because the number of branches changes.

Our production code didn’t need to advance as far as we could already anticipate, so another test dealing with multiple branches was needed:

@Test
public void allBranchesAreUsed() throws Exception {
  Commit commitA1 = commitAt(10L);
  ProjectBranch branchA = branchFor(commitA1);
  ProjectBranch branchB = branchFor();
  CommitCombiner combiner = new CommitCombiner(branchB, branchA);
  assertThat(combiner.getTrail(), Matchers.contains(commitA1));
}

Note that the only thing that’s different is the order in which the branches are given to the CommitCombiner. With this simple test, some important improvements are needed in the production code. Try it for yourself to see the effect!

Finally, it is time to formulate a test that brings the two facets of our algorithm together. We tested the facets separately for so long now that this test feels like the first “real” test, asserting a “real” use case:

@Test
public void twoBranchesWithOneCommitEach() throws Exception {
  Commit commitA1 = commitAt(10L);
  Commit commitB1 = commitAt(20L);
  ProjectBranch branchA = branchFor(commitA1);
  ProjectBranch branchB = branchFor(commitB1);
  CommitCombiner combiner = new CommitCombiner(branchA, branchB);
  assertThat(combiner.getTrail(), Matchers.contains(commitB1, commitA1));
}

If you compare this “full” test case to the third test case in my first approach, you’ll see that it lacks all the mingled complexity of the first try. The test can be clear and concise in its scenario because it can rely on the assurances of the previous tests. The third test in the first approach couldn’t rely on any meaningful single-faceted “support” test. That’s the main difference! This was my error in the first approach: trying to cram more than one new facet into the next test, even putting all required facets in there at once. No wonder the production code needed “everything” when the test required it. No wonder there was no guidance from the tests when I wanted to reach all my goals at once. Decomposing the problem at hand into independent “features” or facets is the most essential step to learn in order to advance from Test First to Test Driven Development. Finding a suitable “dramatic composition” for the tests is another important ability, but it can only be applied after the decomposition is done.

But wait, there is a fourth test in my first approach that needs to be tested here, too:

@Test
public void twoBranchesWithCommitsAtSameTime() throws Exception {
  Commit commitA1 = commitAt(10L);
  Commit commitB1 = commitAt(10L);
  ProjectBranch branchA = branchFor(commitA1);
  ProjectBranch branchB = branchFor(commitB1);
  CommitCombiner combiner = new CommitCombiner(branchA, branchB);
  assertThat(combiner.getTrail(), Matchers.contains(commitA1, commitB1));
}

Thankfully, the implementation already provided this feature. We are done! And in this moment, my ego showed up again: “That implementation is an insult to my developer honour!” I shouted. Keep in mind that I had just thrown away a beautiful 130-line piece of algorithm for this alternate implementation:

public class CommitCombiner {
  private final ProjectBranch[] branches;

  public CommitCombiner(ProjectBranch... branches) {
    this.branches = branches;
  }

  public Iterable<Commit> getTrail() {
    final List<Commit> result = new ArrayList<>();
    for (ProjectBranch each : this.branches) {
      CollectionUtil.addAll(result, each.commits());
    }
    return sortedWithBranchOrderPreserved(result);
  }

  private Iterable<Commit> sortedWithBranchOrderPreserved(List<Commit> result) {
    Collections.sort(result, antichronologically());
    return result;
  }

  private <D extends Dated> Comparator<D> antichronologically() {
    return new Comparator<D>() {
      @Override
      public int compare(D o1, D o2) {
        return o2.getDate().compareTo(o1.getDate());
      }
    };
  }
}

The final and complete second implementation, arrived at under the guidance of the tests, is merely six lines of active code with some boiler-plate! Well, what did I expect? TDD doesn’t lead to particularly elegant solutions; it leads to the simplest thing that could possibly work and assures you that it will work in the realm of your specification. There’s no place for the programmer’s ego between these lines, and that’s a good thing.

Conclusion

Thank you for reading until here! I’ve learnt an important lesson that day (thank you, Jens!). And being able to pinpoint the main hindrance on my way to fully embracing TDD enabled me to further improve my skills even on my own. It felt like opening an ever-closed door for the first time. I hope you’ve extracted some insights from this write-up, too. Feel free to share them!

Does refactoring turn the unit tests of TDD into integration tests?

We really value automated tests and do experiments regarding test driven development (TDD) and tests in general from time to time. In the retrospective of our latest experiment this question struck me: does refactoring turn the unit tests of TDD into integration tests over time?

Let me elaborate this a bit further. When you start out with your tests you have some unit of functionality – usually a class – in mind. As you add test after test your implementation slowly fleshes out. You are repeating the TDD cycle “Write a failing test – Make test pass – Refactor” as you are adding features. The refactoring step is crucial in the whole process because it keeps the code clean and evolvable. But this step is also the cause leading to my observation: As you add new features you may extract new classes when refactoring to obey the single responsibility principle (SRP) and keep your design sane. It is very easy to forget or just ignore refactoring the tests. They still pass. You still have the same code coverage. But your tests now test the combination of several units. And what’s worse: You have units without direct tests.
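
A hypothetical example (all names invented for illustration): suppose an InvoiceTest originally drove an Invoice class, and a later refactoring extracts the discount rule into its own class.

// Extracted during a refactoring step; it has no test class of its own.
class DiscountPolicy {
  double discountFor(int itemCount) {
    return itemCount >= 10 ? 0.1 : 0.0;
  }
}

class Invoice {
  private final DiscountPolicy policy = new DiscountPolicy();

  double total(int itemCount, double pricePerItem) {
    double gross = itemCount * pricePerItem;
    return gross * (1.0 - policy.discountFor(itemCount));
  }
}

The unchanged InvoiceTest still passes and still covers every line of both classes, but it now exercises Invoice and DiscountPolicy together: an integration test in all but name, while DiscountPolicy has no direct test.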

This happened even in relatively small experiments on “Communication through tests”, where the reconstructing team could sometimes only guess that some class existed and either went on without it or created the class out of necessity. The problem with this is that there are no obvious and clear indicators that your unit tests are not real unit tests anymore.

Conclusion

I neither have any solution nor am I completely sure how big the problem is in practice. It may help to state the TDD cycle more explicitly, like “Write a failing test – Make the test pass – Refactor implementation and tests”, although that is no 100% remedy. One could implement a simple, checkstyle-like tool which lists all units without an associated test class (a rough sketch follows below). I will keep an eye on the phenomenon and try to analyse it further. I would love to hear your views and experience on the matter.
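
Such a tool would not have to be fancy. A rough sketch, assuming a standard Maven layout and the convention that a unit’s test class is named <Unit>Test (all of this is invented for illustration), might look like this:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.stream.Stream;

public class MissingTestReporter {

  public static void main(String[] args) throws IOException {
    Path production = Paths.get("src/main/java");
    Path tests = Paths.get("src/test/java");
    try (Stream<Path> classes = Files.walk(production)) {
      classes.filter(path -> path.toString().endsWith(".java"))
             .filter(path -> !hasTestClass(production, tests, path))
             .forEach(path -> System.out.println("no test class for " + path));
    }
  }

  // A unit "Foo.java" counts as tested if "FooTest.java" exists in the
  // corresponding package below src/test/java.
  private static boolean hasTestClass(Path production, Path tests, Path unit) {
    Path relative = production.relativize(unit);
    String testFileName = relative.getFileName().toString().replace(".java", "Test.java");
    return Files.exists(tests.resolve(relative.resolveSibling(testFileName)));
  }
}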

TDD: avoid getting stuck or what’s the next test?

One central point of practicing TDD is to determine what is the next test. Choosing the wrong path can lead you into the infamous impasse

One central point of practicing TDD is to determine what is the next test. Choosing the wrong path can lead you into the infamous impasse: to make the next test pass you need to make not baby but giant steps. Some time ago Uncle Bob introduced a principle called the transformation priority premise. To make a test pass you need to change the implementation. These changes are transformations. There are at least the following transformations (taken from his blog post):

  • ({}–>nil) no code at all->code that employs nil
  • (nil->constant)
  • (constant->constant+) a simple constant to a more complex constant
  • (constant->scalar) replacing a constant with a variable or an argument
  • (statement->statements) adding more unconditional statements.
  • (unconditional->if) splitting the execution path
  • (scalar->array)
  • (array->container)
  • (statement->recursion)
  • (if->while)
  • (expression->function) replacing an expression with a function or algorithm
  • (variable->assignment) replacing the value of a variable.

To determine what the next test should be you look at the possible next tests and the changes in the implementation necessary to make that test pass. The required transformations should be as high in the list as possible. If you always choose the test which causes the highest transformations you avoid getting stuck, the impasse.
This seems to work but I think this is pretty complicated and expensive. Shouldn’t there be an easier way?
Let’s take a look at his case study: the word wrap kata. Word wrap is a function which takes two parameters: a string, and a column number. It returns the string, but with line breaks inserted at just the right places to make sure that no line is longer than the column number. You try to break lines at word boundaries.
The first three tests (nil, empty string and one word which is shorter than the wrap position) are obvious and easy but the next test can lead to an impasse:

@Test
public void twoWordsLongerThanLimitShouldWrap() throws Exception {
  assertThat(wrap("word word", 6), is("word\nword"));
}

With the transformation priority premise you can “calculate” that this is the wrong test and that another one is simpler, meaning it needs transformations higher in the list. But let me introduce another concept: the facets or dimensions of tests.
Each test in a TDD session tests another facet of your problem. And only one more. What a facet is is determined by the problem domain. So you need some domain knowledge, but usually you need that to solve the problem anyway. Back to the word wrap example: what is a facet? The first test tests the nil input, it changes one facet. The empty input test changes another facet. Then comes one word shorter than the wrap position (one facet changed again) and the fourth test uses two words longer than the wrap position. See it? The fourth test introduces changes in two facets: one word to two words, and shorter to longer. So what can you do instead? Just change one facet. According to this the next test would be to use one word longer than the wrap position (facet: longer), which is proposed as a solution. Or you can use two words shorter than the wrap position (facet: word count), but this test will just pass without modifications to the implementation code. So facets of the word wrap kata could be: word count, shorter/longer, number of breaks, break position.
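
A sketch of that proposed next test, changing only the shorter/longer facet while keeping the word count at one (same wrap() signature as in the test above), might look like this:

@Test
public void oneWordLongerThanLimitShouldWrap() throws Exception {
  assertThat(wrap("verylongword", 4), is("very\nlong\nword"));
}
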
I know this is a very informal way of finding the next tests. It leans on your experience and domain knowledge. But I think it is less expensive than the transformations. And even better it can be combined with the transformation priority premise to check and verify your decisions.
What are your experiences with getting stuck in TDD? Do you think the proposed facets of TDD could be of help? Is it too informal? Too vague?

TDD myths: the problems

I take a look at some (in my experience) problems/misconceptions with TDD:
100% code coverage is enough, Debugging is not needed, Design for testability, You are faster than without tests

100% code coverage is enough

Code coverage seems to be a bad indicator for the quality of the tests. Take the following code as an example:

public void testEmptySum() {
  assertEquals(0, sum());
}

public void testSumOfMultipleNumbers() {
  assertEquals(5, sum(2, 3));
}

Now take a look at the implementation:

public int sum(int...numbers) {
  if (numbers.length == 0) {
    return 0;
  }
  return 5;
}

Baby steps in TDD could lead you to this implementation. It has 100% code coverage and all tests are green. But the implementation isn’t finished at all. Our experiment where we investigated how much tests communicate the intent of the code showed flaws in metrics like code coverage.
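
One more test case with a different expected value would already expose the fake and force a real summation; a hypothetical addition in the same style as the tests above:

public void testSumOfOtherNumbers() {
  assertEquals(9, sum(4, 5));
}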

Debugging is not needed

One promise of TDD, or tests in general, is that you can neglect debugging. Even abandon it. In my experience, when a test goes red (especially an integration test) you sometimes need to fire up the debugger. The debugger helps you to step through code and see the actual state of the system at that time. Tests treat code as a black box: an input results in an output. But what happens in between? How much do you want to couple your tests to your actual implementation steps? Do we need the tests to cover this aspect of software development? Maybe something along the lines of what is shown in Inventing on Principle, where the computer shows you the intermediate steps your code takes, could replace debugging, but tests alone cannot do it.

Design for testability

A noble goal. But are tests your primary client? No. Other code is. Design for maintainability would be better. You will need to change your code, fix it, introduce new features, etc. Don’t get me wrong: You need tests and you need testability. But how much code do you write specifically for your tests? How much flexibility do you introduce because of your tests? What patterns do you use just because your tests need them? It’s like YAGNI for code exposure for tests. Code specifically written only for tests couples your code to your tests. Only things that need to be coupled should be. Is the choice of the underlying data structure important? Couple it, test it. If it isn’t, don’t expose it, don’t write a getter. Don’t break the information hiding principle if you don’t need to. If you couple your tests too much to your code every little change breaks your tests. This hinders maintenance. The important and difficult design question is: what is important. Test this.

You are faster than without tests

Some TDD practitioners claim that they are faster with TDD than without tests, because the bugs and problems in your code will overwhelm you after a certain time. So above a certain level of complexity you are going faster with TDD. But where is this level? In my experience writing code without tests is 3x-4x faster than with TDD, at least for small applications. There are entire communities where many applications are written without or with only a few tests. I wouldn’t write a large application without tests, but my feeling is that in many cases I go much slower with TDD. Cases where I feel faster are specification heavy: parsing or writing formats, designing an algorithm or implementing a scientific formula. So the call is open on this one. What are your experiences? Do you feel slowed down by TDD?

Communication through Tests – a larger experiment

We evaluated our ability to communicate through tests in a two-day experiment and gathered some interesting results.

For us, automated tests are the hallmark of professional software development. That doesn’t mean that we buy into every testing fad that comes along or consider ourselves testing experts just because we write some tests alongside our code. We put our money where our mouth is and evaluate our abilities in writing effective tests.

One way to measure the effectiveness of tests is to try to “communicate through tests”. One developer/team writes code and tests for a given specification. Another team picks up the tests only and tries to recreate the production code and infer the specification. The only communication between the two teams happens through the tests.

We performed a small experiment with two teams and one day for both phases and blogged about it. The result of this evaluation was that unit tests are a good medium to transport specification details. But we got a hint that the problems might be bigger when the code is less arithmetic and more complex. As most of our development tasks are rather complex and driven by business rules instead of clean mathematical algorithms, we wanted to inspect further.

Our larger experiment

So we organized a bigger experiment with a broader scope. Instead of two teams, we had three teams. We ran the phases for eight instead of two hours, essentially increasing the resulting code size by a factor of 3. The assignments weren’t static, but versioned – and the team only knew the rules of the current version. When a team would reach a certain milestone, more rules would be revealed, partly contradicting the previous ruleset. This should emulate changing customer requirements. And to provide the ability to retrospect on the reconstruction phase, we recorded this phase with screencast software (we used the commercial product Debut Video Capture), capturing both inputs and conversation by using headsets for every developer.

The first part of this experiment happened in late January of 2013, where all teams had one day to produce production and test code. This was a day of loud buzz in our development department. The second part for the reconstruction phase was scheduled for the middle of February 2013. We had to be a bit more quiet this time to increase the audio recording quality, but the developers were humming nonetheless.

Here are some numbers of what was produced in the first session:

  • Team 1: 400 lines of production code, 530 lines of test code. 8 production classes, 54 tests. Test coverage of 90.6%.
  • Team 2: 576 lines of production code, 655 lines of test code. 17 production classes, 59 tests. Test coverage of 98.2%.
  • Team 3: 442 lines of production code, 429 lines of test code. 18 production classes, 37 tests. Test coverage of 97.0%.

The reconstruction phase was finished in less than five hours, partly because we stuck very close to the actual tests with little guesswork. When the tests didn’t enforce a functionality, it wasn’t implemented, in order to reveal the holes in the test coverage. This reduced the amount of production code that had to be written. On the flipside, every team got lost once on the way, losing the better part of an hour without noticeable progress.

The results

After all the talk about the event itself, let’s have a look at the results of the experiment:

  • The recording of the reconstruction phase was a huge gain in understanding the detailed problems. We even discussed recording the construction phase too to capture the original design decisions.
  • Every decision on unclear terms by the original team led to “blurry” tests that didn’t guide the reconstruction team as well as the “razor-sharp” tests did.
  • You could definitely tell the TDD tests from the “test first” tests or even the tests written “immediately after”. More on this aspect later, but this was our biggest overall take-away: The quality of the tests in terms of being a specification differed greatly. This wasn’t bound to teams – as soon as a team lost the TDD “drive”, the tests lost guidance power.
  • Test coverage (in terms of line coverage or conditional coverage) means nothing. You can have 100% test coverage and still suffer from severe plot holes in your tests. Blurry tests tend to increase the coverage, but not the accountability of tests.
  • In general, we were surprised how little guidance and coverage most tests offered. The assignments included some obvious “testing problems” like dealing with randomness and every team dealt with them deliberately. Still, these were the major pain points during the reconstruction phase. This result puts our first small experiment a bit into perspective. What works well with small code bases might be disproportionally harder to achieve when the code size scales up. So while TDD/tests might work sufficiently easy on a small task, it needs more attention for a larger task.

The biggest problem

When talking about “plot holes” from the tests, let me give you a detailed example of what I mean. The more useless tests suffered from a lack of triangulation. In geometry, triangulation is the process of determining the location of a point by measuring several angles to it from known points. When writing tests, triangulation is the effort to “pinpoint” or specify the implementation with a set of different inputs and required outputs. You specify enough different tests of the same functionality to require it being “real” instead of a dummy implementation. Let’s look at this test:

@Test
public void parsesUserInput() {
  assertThat(new InputParser().parse("1 3 5"), hasItems(1, 3, 5));
}

Well, the test tells us that we need to convert a given string into a bunch of integers. It specifies the necessary class and method for this task, but gives us great freedom in the actual implementation. This makes the test green:

public Iterable<Integer> parse(String input) {
  return Arrays.asList(1, 3, 5);
}

As far as the tests are concerned, this is a concise and correct implementation of the required functionality. And while it is obvious in our example that this will never be sufficient, it oftentimes isn’t so obvious when the problem domain isn’t as familiar as parsing strings to numbers. But to complete my explanation of test triangulation, let’s consider a more elaborate implementation of this test that needs a lot more work on the implementation side (especially when developed in accordance with the Transformation Priority Premise by Uncle Bob and without obvious duplication):

@Test
public void parsesUserInput() {
  assertThat(new InputParser().parse("1 3 5"), hasItems(1, 3, 5));
  assertThat(new InputParser().parse("1 2"), hasItems(1, 2));
  assertThat(new InputParser().parse("1 2 3 4 5"), hasItems(1, 2, 3, 4, 5));
  assertThat(new InputParser().parse("1 4 5 3 2"), hasItems(1, 2, 3, 4, 5));
  assertThat(new InputParser().parse("5 4"), hasItems(4, 5));
  assertThat(new InputParser().parse("5 3"), hasItems(3, 5));
}

Maybe not all assertions are required and maybe they should live in different tests giving more hints in their names, but you get the idea: Making this test green is way “harder” than the initial test. Writing properly triangulated tests is one of the immediate benefits of Test Driven Development (TDD), as for example outlined nicely by Ray Sinnema on his blog entry about test-driving a code kata.
Our tests that were written “after the fact” often lacked the proper amount of triangulation, making it easier to “fake it” in the reconstruction phase. In a real project setting, these tests would allow for too much implementation deviation to act as a specification. They act more as usage examples and happy path “smoke” tests.

Our benefits

While this experiment doesn’t fulfill rigid academic requirements on gathering data, it already paid off greatly for us. We’ve examined our ability to express our implementations through tests and gathered insight on our real capabilities to use test-driven methodologies. Being able to judge relatively objectively on the quality of your own tests (by watching the reconstruction phase’s screencast) was very helpful. We now know better what skills to improve and what to focus on during training.

Where to go from here?

We plan to repeat this experiment with interested participants as a spare-time event later this year. For now, we ourselves have gathered enough impressions to act on. If you are interested in more details, drop us a note. We could publish only the tests (for reconstruction), the complete code or even the screencasts (albeit they are somewhat long-running). Our participants could elaborate on their impressions in the comment section, if you ask them.
We are very interested in your results if you perform similar events, as Tomasz Borek did this month in Krakow, Poland. We found his blog entry about the event very interesting. We definitely lacked the surprise element for the teams during the event.

TDD myths

Some TDD myths and what is true about them.

TDD, Test first or test immediately after are all the same

No. All methods result in having tests in the end. But especially in the TDD case your mindset is completely different. First of all, the tests drive the design of your code. You construct your system piece by piece, unit test by unit test. All code you write must have a test first, and you use the tests to describe and reason about the external interface of your units. In TDD the tests represent the future clients using your code. In practice this leads to small(er) units.

In TDD the tests’ (main) objective is to prevent regression

No. Tests help immensely when you break the same code twice. But even more so, tests help to structure your code and make it maintainable. When using TDD you tend to reduce your code and its flexibility, because you need to write a test for every piece of functionality first. So over-designing or implementing things you don’t need (breaking YAGNI or KISS) bites you doubly: in the code and in the tests. Also, wrong design decisions like choosing an inappropriate data structure or representation hit you twice as hard. TDD amplifies bad design decisions.

Test code is the same as production code

No. Test code should adhere to a similar quality level as production code. But you won’t write tests for your tests. Also, conditionals and loops are a very bad idea in tests and should be avoided. Take the following example:

public void testSomething() {
  for (MyEnum value : MyEnum.values()) {
    assertEquals(expected, process(value));
  }
}

If you forgot an enum value, the test just passes. Even if you have no values in your enum at all, it still passes. Conditions have the same problem: you introduce another path through your test which can be avoided or never taken. You could secure the other path with an assert, but in some cases this is a hint that you broke another principle: the single responsibility of tests.
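
A more robust alternative, sketched here with hypothetical enum values and expected results, is one explicit test (or at least one explicit assertion) per value, so an empty enum or a forgotten case cannot pass silently:

public void testProcessesFirstValue() {
  assertEquals(expectedForFirst, process(MyEnum.FIRST));
}

public void testProcessesSecondValue() {
  assertEquals(expectedForSecond, process(MyEnum.SECOND));
}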

DRY is harmful in tests

No. DRY (don’t repeat yourself) aims to reduce or eliminate duplication in logic. But often DRY is understood as removing code duplication. This is not the same! Code duplication can be essential in tests. You need all of the essential information in the test. This code should not be extracted or abstracted elsewhere. These code lines which may seem similar are not coupled logically. When you change one test, the other test is not affected.
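
A hypothetical illustration: the two tests below contain similar-looking literal data, but they describe independent rules. Extracting the shared-looking numbers into a common constant or setup method would couple tests that are not logically related:

public void testBooksGetTheReducedTaxRate() {
  assertEquals(7.0, taxFor("book", 100.0), 0.001);
}

public void testElectronicsGetTheFullTaxRate() {
  assertEquals(19.0, taxFor("laptop", 100.0), 0.001);
}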

TDD is hard

No and yes. For me, learning TDD is like learning a new language. It certainly takes time. But if you do it often and repeatedly, you learn more every time you use it. It’s a way of reasoning about a system, a way of thinking, a paradigm. When I started with TDD I thought it was impossible or unreasonable to use in cases other than those where strong specs exist, like parsing a format. But over time I value the driving part of TDD more and more. You can get into a TDD flow. TDD gives you a very good feeling of security when you refactor. It forces you to think beforehand about the intended use of your code. Which is good. It changes my way of seeing my code, one step at a time. Some things are still hard: acceptance tests are unreasonably expensive. Just testing one thing needs discipline. So does not jumping ahead of the tests and implementing too much code. Finding the next unit of testing can be difficult, getting stuck can be frustrating. Just like learning a new language, I think it is worth it.

FTP integrated

When developing a feature that involves unknown technology or hardware, I prefer a spike followed by integration tests. Sometimes it helps a lot.

How it all began
One of our customers employs a NAS for data storage, accessing it via FTP. Some of the features like copying and moving files around were already implemented by us using Apache’s FTPClient. The next feature on the list was “cleanup after x days” – deletion of files, or more importantly: directories. FTP, being a pretty basic protocol, does not allow for recursive deletion of directories. The only way to do it is to delete the deepest elements first, go up one level and repeat – or in other words – implement the recursion yourself. This was too much for our simple feature, so the decision was made to hide the complexity behind a VirtualFile, an interface already existing in our framework.
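
To give an idea of what ended up behind that interface: a minimal sketch of such a recursive delete with Apache Commons Net’s FTPClient (error handling and the VirtualFile wrapper omitted) could look like this:

import java.io.IOException;
import org.apache.commons.net.ftp.FTPClient;
import org.apache.commons.net.ftp.FTPFile;

public class FtpRecursiveDelete {

  // FTP cannot delete a non-empty directory, so descend first, delete the
  // children, then remove the (now empty) directory itself.
  public static void deleteRecursively(FTPClient ftp, String path) throws IOException {
    for (FTPFile entry : ftp.listFiles(path)) {
      String name = entry.getName();
      if (".".equals(name) || "..".equals(name)) {
        continue; // some servers list the directory itself and its parent
      }
      String childPath = path + "/" + name;
      if (entry.isDirectory()) {
        deleteRecursively(ftp, childPath);
      } else {
        ftp.deleteFile(childPath);
      }
    }
    ftp.removeDirectory(path);
  }
}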

Being a novice in speaking FTP, I was happy to hear that we had already acquired exactly the same type of NAS the customer has. To see how the system behaves (or not) and document it at the same time, I decided to implement the interface integration-test first.

Fun
As the number of tests and file operations started to grow, so did the round-trip time of my test / make test pass / refactor cycle, and my patience dwindled. I switched from the NAS FTP server to a local FileZilla FTP server. It worked like a charm and all necessary features were implemented really fast.

The next step was to run the app using the new feature with a real amount of data, the real directory structure and our NAS. It failed miserably. And randomly. The app suffered from closed connections while trying to open a data connection. After some searching the reason was found: the FTPClient we use has active mode enabled by default. That means that to transfer data the server tried to connect to the client, and the client’s firewall did not like it. After setting the connection mode to passive the problem was solved.
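
With Apache’s FTPClient the fix itself is a single call made before any data transfer; in passive mode the client opens the data connection, so the firewall no longer gets in the way (host and credentials below are made up):

import org.apache.commons.net.ftp.FTPClient;

public class PassiveModeExample {
  public static void main(String[] args) throws Exception {
    FTPClient ftp = new FTPClient();
    ftp.connect("nas.example.local"); // hypothetical host
    ftp.login("backup", "secret");    // hypothetical credentials
    ftp.enterLocalPassiveMode();      // default is active mode, which the firewall blocked
    // ... file operations ...
    ftp.disconnect();
  }
}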

The tests ran fine, but they ran slowly. And they introduced a dependency on an external system. If that system broke or was disabled for any other reason, our CI would report a failure without any changes in the code. Both points could be addressed by using an embedded FTP server. We chose Apache’s FTP Server. Changing the tests was easy, since the only thing to do was to set up the server before the test and to shut it down afterwards. Surprisingly, some tests failed. Apache’s server handled some cases differently:

  • it allowed opening output streams to directories without any exception
  • it forbade deleting the current working directory
  • the name listing of a directory (NLST) returned by the NAS contained absolute paths to the files, while Apache’s server returned simple names.

After another code change the code worked correctly with all three servers.

Lessons learned
While implementing the interface I learned a lot about how to create and test bridging functionality:

  • Specification cannot replace tests. Searching for the FTP commands to use, I looked at several websites that described the commands. None of them said whether NLST returns absolute paths or only filenames. There are always holes in the spec that will be interpreted differently by vendors, or the vendors do not always obey it.
  • Unit tests are great, but they are limited to your code only. When it comes to communication between system components, especially communication with foreign systems, an integration test is a must.
  • Working with a test setup that mimics the production environment as closely as possible is great. Without the NAS, the app would simply have failed in the best case. In the worst case it would have deleted the wrong files. Neither of these makes a customer happy.