Aspects done right: Concerns

The idea of encapsulating cross-cutting concerns resonated with me from the beginning, but the implementation, namely aspects, lacked clarity in my opinion. With aspects you cannot see (without sophisticated IDE support) which class has which aspects and which aspects are woven into the class when looking at its source. Here concerns (also called mixins or traits) come to the rescue. I know that aspects were invented to hide away the details of which code is included where, but I find it confusing and hard to trace without tool support.

Take a look at an example in Ruby:

module Versionable
  extend ActiveSupport::Concern

  included do
    attr_accessor :version
  end
end

class Document
  include Versionable
end

Now Document has a field version and is_a?(Versionable) returns true. For clients it looks like the field version is in Document itself. So for clients of this class it is the same as:

class Document
  attr_accessor :version
end

Furthermore you can easily use the versionable concern in another class. This sounds like a great implementation of the separation of concerns principle, but why isn’t everyone using it (besides it being a standard for the upcoming Rails 4)? Well, some people are concerned with concerns (excuse the pun). As with every powerful feature you can shoot yourself in the foot. Let’s take a look at each problem.

  • Diamond problem aka multiple inheritance: Ruby has no multiple inheritance. Even when you include more than one module, the modules act like superclasses in the method resolution order. Every include creates a new “superclass” above the including class, so the last include takes precedence.

  • Dependencies between concerns: One concern may depend on another concern. ActiveSupport::Concern handles these dependencies automatically.

  • Unforeseeable results: One last big problem with concerns is side effects from combining two concerns. Take for example two concerns which add a method with the same name. Including both of them renders one concern unusable. This cannot be solved technically, but I think this problem points to an underlying, more important cause: maybe the naming is poor, or the two concerns are not separated enough. As always, tests can help to isolate and spot the problem. Concerns should be tested both in isolation and in integration.

Thoughts about TDD

Thoughts and links about test driven development

First a disclaimer: I think tests are a hallmark of professional software development. I like to write tests before the implementation, but that’s not always easy or simple (for the difference please refer to Simple Made Easy). I find it hard to grasp test driven development (TDD), though. The difference between test first and test driven lies in the intention: in both cases tests are written before any implementation code, but in TDD the tests drive the design of your implementation.

The problem with opinions about TDD is that they are mostly extreme positions: some think “TDD is the (next) holy grail” while others dismiss it outright. Reading between the lines, though, there are great discussions about how to do it and what problems arise. Many people (me included) are really trying to get value from TDD. Testing should be fun.
One way of letting the tests drive the way you develop is proposed by Uncle Bob: the transformation priority premise. He proposes a prioritized list of transformations which introduce new constructs or replace existing ones, like replacing a constant by a variable or adding more logic. Only if you cannot use a high priority transformation to get the test to pass do you look at a transformation with a lower priority.
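
To make this more concrete, here is a minimal sketch in Java (the summing example and all names are mine, not from Uncle Bob’s article) of how each failing test is satisfied by the highest-priority transformation still available:

import java.util.List;

public class Summer {

  // Test 1, "sum of an empty list is 0", passes with the
  // highest-priority transformation, nothing -> constant:
  //   return 0;
  // Test 2, "sum of [5] is 5", replaces the constant with an
  // expression on the input:
  //   return numbers.isEmpty() ? 0 : numbers.get(0);
  // Test 3, "sum of [5, 7] is 12", can only be satisfied by the
  // lower-priority transformation to a loop:
  public int sum(List<Integer> numbers) {
    int total = 0;
    for (int number : numbers) {
      total += number;
    }
    return total;
  }
}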
But how do you determine what you should test next or even which is the first test?
Taking the typical Conway’s game of life kata as an example, one thing struck me: I could only get TDD to work smoothly when I started with the data structure. But why? Naturally I start with the algorithm (in this case the rules) and write the first test for it. But upon further inspection of the problem and with deeper (domain) knowledge, it seems the data structure is far more important for solving this kata. So you need to know where the journey goes beforehand, not every step you will take, but the big picture: first the data structure, then the rules in this example. Maybe you should start with the integration or functional tests and break them down into units.
What are your experiences using TDD? Do you use or want to use TDD?

An experiment about communication through tests

How effectively does our test code communicate? We wanted to know if we were able to recreate a piece of software from its tests alone. The experiment gained us some worthwhile insights.

Recently, we conducted a little experiment to determine our ability to communicate effectively by only using automatic tests. We wanted to know if the tests we write are sufficient to recreate the entire production code from them and understand the original requirements. We were inspired by a similar experiment performed by the Softwerkskammer Karlsruhe in July 2012.

The rules

We chose a “game master” and two teams of two developers each, named “Team A” and “Team B”. The game master secretly picked two coding exercises of comparable skill and effort and briefed each team on one of them. The other team shouldn’t know the original assignment beforehand, so the briefings were held in isolation. Then the implementation phase began. The teams were instructed to write extensive tests, be it unit or integration tests, before or after the production code. The teams knew about the further utilization of the tests. After about two hours of implementation time, we stopped development and held a little recreation break. Then the complete test code of each implementation was transferred to the other team, but all production code was kept back for comparison. So Team A started with all tests of Team B and had to recreate the complete missing production code to fulfill the assignment of Team B without knowing exactly what it was. Team B had to do the same with the production code and assignment of Team A, using only Team A’s test code. After the “reengineering phase”, as we called it, we compared the solutions and discussed problems and impressions, essentially performing a retrospective on the experiment.

The assignments

The two coding exercises were taken from the Kata Catalogue and adapted to exhibit slightly different rules:

  • Compare Poker Hands: Given two hands of five poker cards, determine which hand has a higher rank and wins the round.
  • Automatic Yahtzee Player: Given five dice and our local Yahtzee rules, determine a strategy for which dice should be rerolled.

There was no obligation to complete the exercise, only to develop from a reasonable starting point in a comprehensible direction. The code should be correct and compilable virtually all the time. The test coverage should be near 100%, even if test driven development or test first wasn’t explicitly required. The emphasis of effort should be on the test code, not on the production code.

The implementation

Both teams understood the assignment immediately and had their “natural” way to develop the code. The programming language of choice was Java for both teams. The game master oscillated between the teams to answer minor questions and gather impressions. After about two hours, we decided to end the phase and stop coding with the next passing test. Neither team completed their assignment, but the resulting code was very similar in size and other key figures:

  • Team A: 217 lines production code, 198 lines test code. 5 production classes, 17 tests. Test coverage of 94.1%
  • Team B: 199 lines production code, 166 lines test code. 7 production classes, 17 tests. Test coverage of 94.1%

In summary, each team produced half a dozen production classes with a total of ~200 lines of code. 17 tests with a total of ~180 lines of code covered more than 90% of the production code.

The reengineering

After a short break, the teams started with all the test code of the other team, but no production code. The first step was to let the IDE create the missing classes and methods to get the tests to compile. Then, the teams chose basic unit tests to build up the initial production code base. This succeeded very quickly and turned a lot of tests green. Both teams struggled later on when the tests (and production code) increased in complexity. Both teams introduced new classes to the codebase even when the tests didn’t suggest it. Both teams justified their decision with a “better code design” and “ease of implementation”. After about 90 minutes (and nearly simultaneously), both teams had implemented enough production code to turn all tests green. Both teams were confident that they understood the initial assignment and had implemented a solution equal to the original production code base.

The examination

We gathered for the examination and found that both teams met their requirements: the recreated code bases were correct in terms of the original solution and the assignment. We had shown that communication through test code alone is possible for us. But that wasn’t the deepest insight we got from the experiment. Here are a few insights we gathered during the retrospective:

  • Both teams had trouble effectively distinguishing between requirements from the assignment and implementation decisions made by the other team. The tests didn’t transport this aspect well enough. See an example below.
  • The recreated production code turned out to be slightly more precise and concise than the original code. This surprised us a bit and is a huge hint that test driven development, if applied with the “right state of mind”, might improve code quality (at least for this problem domain and these developers).
  • The classes that were introduced during the reengineering phase were present in the original code, too. They just didn’t explicitly show up in the test code.
  • The test code alone wasn’t really helpful in several cases, like:
    • Deciding if a class was/should be an Enum or a normal class
    • Figuring out the meaning of arguments with primitive values. A language with named parameter support would alleviate this problem. In Java, you might consider using Code Squiggles if you want to prepare for this scenario (see the sketch after this list).
  • The original team would greatly benefit from watching the reengineering team during their coding. The reengineering team would not benefit from interference by the original team. For a solution to this problem, see below.
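
To illustrate the primitive-arguments point in Java: tiny identity methods can name the arguments at the call site. This is a hypothetical sketch with invented names, in the spirit of code squiggles, not code from the experiment:

public final class Card {

  private final int value;
  private final int suit;

  private Card(int value, int suit) {
    this.value = value;
    this.suit = suit;
  }

  // Instead of the opaque "new Card(10, 3)", the call site can read
  // Card.of(value(10), suit(3)) with static imports:
  public static int value(int value) { return value; }
  public static int suit(int suit) { return suit; }

  public static Card of(int value, int suit) {
    return new Card(value, suit);
  }
}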

The revelation

One revelation we can directly apply to our test code was how to help with the distinction between a requirement (“has to be this way”) and an implementor’s choice (“incidentally is this way”). Let’s look at an example:

In the poker hands coding exercise, every card is represented by two characters, like “2D” for a two of diamonds or “AS” for an ace of spades. The encoding is straightforward, except for the ten: it is represented by a “T” and not a “10”, so “TH” is a ten of hearts. This is a requirement; the implementor cannot choose another encoding. The test for the encoding looks like this:


@Test
public void parseValueForSymbol() {
  assertEquals(Value._2, Value.forSymbol("2"));
  [...]
  assertEquals(Value._10, Value.forSymbol("T"));
  [...]
  assertEquals(Value.ACE, Value.forSymbol("A"));
}

If you write the test like this, there is a clear definition of the encoding, but not of the underlying decision for it. Let’s rewrite the test to communicate that the “T” for ten isn’t an arbitrary choice:


@Test
public void parseValueForSymbol() {
  assertEquals(Value._2, Value.forSymbol("2"));
  [...]
  assertEquals(Value.ACE, Value.forSymbol("A"));
}

@Test
public void tenIsRequiredToBeRepresentedByT() {
  assertEquals(Value._10, Value.forSymbol("T"));
}

Just by extracting this encoding to a special test case, you emphasize that you are aware of the “inconsistency”. By the test name, you state that it wasn’t your choice to encode it this way.

The improvement

We definitely want to repeat this experiment in the future, but with some improvements. One would be to record the reengineering phases with screencast software, to be able to watch the steps in detail and listen to the discussions without the possibility to interact or influence. Both original teams had great interest in the details of the recreation process and the problems with their tests. The other improvement might be an easing on the time axis: with recorded phases, there would be no need for direct observation by a game master or even a concurrent performance. The tasks could be bigger and a bit more relaxed.

In short: It was fun, challenging, informative and reaffirming. A great experience!

Web apps: Security is more than you think

Security in web apps is an increasingly important topic: in this post we take a look at injection attacks, especially SQL injection, the number one OWASP security problem.

Security in web apps is an increasingly important topic. Besides securing the machine and the web/application containers on which your apps run, you need to deal with some security related issues in your own apps. In this article we take a look at the number one risk in web apps (according to OWASP):

Injection attacks

Every web app takes some kind of user input (usually through web forms) and works with it. If the web app does not properly handle the user input, malicious entries can lead to severe problems like theft or loss of data. But how do you identify problems in your code? Take a look at a naive but not uncommon implementation of a SQL query:

query("select * from user_data where username='" + username + "'")

Using the input of the user directly in a query like this is devastating: attacks range from dropping tables to changing data. Even if your library prevents you from using more than one statement in a query, you can change this query to return other users’ data.
Blacklisting special characters is not a solution since you need some of them in your input, and there are ways to circumvent your blacklists.
The solution here is to properly escape your input using your library’s mechanisms (e.g. with Groovy and Spring JDBC):

query("select * from user_data where username=:username", [username: username])

But even when you escape everything you need to take care what you put into your query. In this example all data is stored under keys of the form username.data:

query("select * from user_data where key like :username '.%' ", [username: username])

In this case everything will be escaped correctly, but what happens when a user names himself “%”? He gets the data of all users.
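
One way to defuse this is to escape the LIKE wildcards in the user input before binding it, sketched here in plain JDBC (the escape character and exact dialect behaviour are assumptions, check your database):

// imports as in the sketch above
public ResultSet findDataFor(Connection connection, String username) throws SQLException {
  // % and _ are wildcards in LIKE patterns, so defuse them first
  String defused = username.replace("\\", "\\\\")
                           .replace("%", "\\%")
                           .replace("_", "\\_");
  PreparedStatement statement = connection.prepareStatement(
      "select * from user_data where key like ? escape '\\'");
  statement.setString(1, defused + ".%");
  return statement.executeQuery();
}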

Is SQL the only vulnerable part of your app? No, every part which interprets your input and executes it is vulnerable. Examples include shell commands or JavaScript which we will look at in a future blog post.

As the last query showed: besides using proper escaping, developing a mindset for security problems is the first and foremost step towards a secure app.

A small test saves the day

You think a method is too trivial to write a test for it? Think again if the method is mission-critical!

Just recently, I had to write a connection between an existing application and a new hardware unit. This is a fairly common job for our company, even considering that I’d never seen the hardware, let alone been able to connect to it. The hardware unit itself was rather big and it was installed in a security sensitive area with restricted access. So, I only got a specification of the protocol to use and a description of the hardware’s features.

Our common procedure to include hardware dependent modules into an application is to write two implementations of the module: One implementation is the real deal and interacts with the hardware over ethernet, USB, serial port or whatever proprietary communication device is used. This version of the module can only work as intended if the hardware is present. The other implementation acts as an emulation of the hardware, without any dependencies. If you are familiar with unit tests, think of a big test mock. The emulation version is used during development to test and run the application without requiring the hardware. There are a lot of subtle pitfalls to consider and avoid, but on a bird’s-eye level of abstraction, these interchangeable implementations of a module enable us to develop software with hardware dependencies without needing the actual hardware.
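
Sketched as code, such a module boils down to an interface with two implementations. The interface and implementation names appear in the code below; the methods here are invented for illustration:

public interface HardwareModule {
  void connect();
  double readCurrentValue(); // example operation, assumed for this sketch
  void disconnect();
}

// RealHardwareModule implements this against the actual device,
// EmulatedHardwareModule fakes plausible values without any hardware.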

The first piece of code that’s used of a module is a factory/builder class that chooses between the available implementations, based on some configuration entry (or hardware availability, etc.). A typical implementation of the responsible method might look like this:


public HardwareModule createFor(ModuleConfiguration configuration) {
  if (configuration.isHardwarePresent()) {
    new RealHardwareModule();
  }
  return new EmulatedHardwareModule();
}

If the configuration object says that the hardware is present, the real implementation is used, subsequently opening a connection to the hardware and talking the client side of the given protocol. Otherwise, the emulation is created and returned, maybe opening a debug GUI window to display certain internal states and values and providing controls to mess with the application during development.

The method itself looks very innocent and meager. There is not much going on, so what could possibly go wrong?

I’m not the most eager test-driven developer in the world, I have to admit. But I see the value of tests (and unit tests in particular) and adhere to the A-TRIP rules defined by Andy Hunt and (pragmatic) Dave Thomas:

  • Automatic
  • Thorough
  • Repeatable
  • Independent
  • Professional

For a complete definition of the rules, read the linked blog entry or, even better, buy the book. It’s small and cheap, but contains a lot of profound basic knowledge about unit testing.

The “Thorough” rule is more of a rule of thumb than a hard scientific formula for good unit tests: Always write a test if you’ve found a bug or if the code you’re writing is mission-critical. This was when my gut feeling told me that while the method above might seem trivial, it is definitely essential for the hardware module. So I wrote a test:

  @Test
  public void providesEmulationIfUnspecified() {
    HardwareModuleFactory factory = new HardwareModuleFactory();
    HardwareModule hardware = factory.createFor(configuration(""));
    assertEquals("not the hardware emulation", EmulatedHardwareModule.class, hardware.getClass());
  }

  @Test
  public void providesEmulationIfHardwareAbsent() {
    HardwareModuleFactory factory = new HardwareModuleFactory();
    HardwareModule hardware = factory.createFor(configuration("hardware.present=false"));
    assertEquals("not the hardware emulation", EmulatedHardwareModule.class, hardware.getClass());
  }

  @Test
  public void providesRealImplementationIfHardwarePresent() {
    HardwareModuleFactory factory = new HardwareModuleFactory();
    HardwareModule hardware = factory.createFor(configuration("hardware.present=true"));
    assertEquals("not the real hardware implementation", RealHardwareModule.class, hardware.getClass());
  }

To my surprise, the test immediately went red for the third test method. After double-checking the test code, I was certain that the test was correct. The test had discovered a bug in the production code. And being a mostly independent unit test, it pointed to the problematic lines right away: the method implementation above. The helper method named configuration(), omitted from the code sample, was very unlikely to contain a bug.

After a short moment of reading the code again, I corrected it (note the added return statement in line 3):


public HardwareModule createFor(ModuleConfiguration configuration) {
  if (configuration.isHardwarePresent()) {
    return new RealHardwareModule();
  }
  return new EmulatedHardwareModule();
}

This might not seem like the most disastrous bug ever, but it would have made for a nasty start when I finally tried the application with the real hardware. There is nothing more valuable than being able to keep your cool “in the wild” and work on the real problems like faulty protocol specifications or unexpected/undocumented hardware behaviour. So, my gut feeling (and the Thorough rule) was right, and my brain, which told me to “skip this petty test” longer than I like to admit, was wrong. A small test for a small method paid off immediately and saved the day, at least for me.

Testing on .NET: Choosing NUnit over MSTest

We sometimes do smaller .NET projects for our clients even though we are mostly a Java/JVM shop. Our key infrastructure stays the same for all projects, regardless of the platform. That means the .NET projects get integrated into our existing continuous integration (CI) infrastructure based on Jenkins. This works surprisingly well, even though you need a Windows slave and the MSBuild plugin.

One point you should think about is which testing framework to use. MSTest is part of Visual Studio and provides nice integration into the IDE. Using it in conjunction with Jenkins is possible since there is an MSTest plugin for our favorite CI server. One downside is that you need either Visual Studio itself or the Windows SDK (500MB download, 300MB install) installed on the build server in addition to .NET. Another is that it does not work with the “Express” editions of Visual Studio. Usually that is not a problem for companies, but it raises the entry barrier for open source or other non-profit projects by requiring relatively expensive Visual Studio licences.

In our scenarios NUnit proved much lighter and friendlier in installation and usage. You can easily bundle it with your sources to improve self-containment of the project and lessen the burden on the system and tools. If you plug the NUnit tool into the external tools-section of Visual Studio (which also works with Express) the integration is acceptable, too.

Conclusion

If you are not completely on the full Microsoft stack for your project infrastructure, using Visual Studio, TeamCity, SourceSafe et al., it is worth considering NUnit over MSTest because of its leaner size and looser coupling to the Microsoft stack.

Antipatterns: Convenience Constructors

Lately I’ve been stumbling a lot upon code I wrote four or more years ago. In the light of introducing new features, the code gets tested for its quality. One antipattern I’ve found, which I had used in the past but which is really hard to extend, is convenience constructors. Take the constructors of a command object for example:

    public SetProperty(String filename, String key, String value) {
        this(filename, key, value, null);
    }

    public SetProperty(String filename,
            String key, String value, String comment) {
        this(filename, ReferenceTo.key(key), value, comment);
    }

    public SetProperty(String filename,
            String sectionType, String sectionName,
            String key, String value) {
        this(filename, sectionType, sectionName, key, value, null);
    }

    public SetProperty(String filename,
            String sectionType, String sectionName,
            String key, String value, String comment) {
        this(filename, ReferenceTo.sectionAndKey(sectionType, sectionName, key), value, comment);
    }

    public SetProperty(String filename,
            AdvancedPropertyReference propertyReference,
            String value, String comment) {
        super(filename);
        this.propertyReference = propertyReference;
        this.value = value;
        this.comment = comment;
    }

We need to add a new feature which enables us to append to properties, not just set and replace them. One way could be to extend the class, but this is overkill. Just adding a new parameter flag should suffice. But this would blow up the number of constructors because you need to include a version with and without the new parameter for each (used) constructor. Here an old friend comes to the rescue: design patterns. A look at the GoF book shows a good solution to the problem: the builder pattern.

public class SetPropertyBuilder {
    private final String filename;
    private String sectionType;
    private String sectionName;
    private String referenceKey;
    private String value;
    private String comment;
    private boolean append;

    public SetPropertyBuilder(String filename) {
        super();
        this.filename = filename;
    }

    public SetPropertyBuilder set(String key, String newValue) {
        this.referenceKey = key;
        this.value = newValue;
        return this;
    }

    public SetPropertyBuilder append(String key, String additionalValue) {
        set(key, additionalValue);
        this.append = true;
        return this;
    }

    public SetPropertyBuilder inSection(String type, String name) {
        this.sectionType = type;
        this.sectionName = name;
        return this;
    }

    public SetProperty build() {
        AdvancedPropertyReference reference = ReferenceTo.key(this.referenceKey);
        if (this.sectionType != null && this.sectionName != null) {
            reference = ReferenceTo.sectionAndKey(this.sectionType, this.sectionName, this.referenceKey);
        }
        return new SetProperty(this.filename, reference, this.value, this.comment, this.append);
    }
}

Now we can eliminate all but one constructor from the SetProperty command. Adding a new option now yields just one new method in the builder.
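
Client code using the builder might then read like this (file name and values are made up):

SetProperty command = new SetPropertyBuilder("settings.ini")
    .inSection("server", "production")
    .append("hosts", "alpha.example.com")
    .build();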

Checking preconditions in advance vs. on demand vs. exceptions

Usually, it is good practice to check certain preconditions before applying operations to input data. This is often referred to as defensive programming. Many people are used to lines like:

public void performOn(String foo) {
  if (!myMap.containsKey(foo)) {
    // handle it correctly
    return;
  }
  // do something with the entry
  myMap.get(foo).performOperation();
}

While there is nothing wrong with this kind of “in advance checking”, it may have performance implications – especially when IO is involved.

We had a problem some time ago when working with several thousand wrappers for File objects. The wrappers checked if the given File object actually is a file using the innocent isFile() method in the constructor, which caused hard disk access each time. So building our collection of wrapped files took quite some time (dozens of seconds) and our client complained (rightfully so!) about the performance. Once the collection was built, the operations were fast because no checking was needed anymore.

Our first optimization step was deferring the check to the point where the file was actually used. This sped up the creation of the wrappers so that it was barely noticeable, but processing a bunch of elements took longer because of the additional disk accesses. Even though this approach may work for a plethora of situations, for our typical use cases the effect of this optimization was not enough.

So we looked at our problem from another perspective: the vast majority of file handles were actually existing and readable files and directories; foreign/unknown files were the exception. Because of this we chose to simply leave out any kind of checks and handle the exceptions! Exception handling is often referred to as slow, but if exceptions are rare it can make a difference of some orders of magnitude. Our speed-up using this approach was enormous and the client was happy about sub-second responsiveness for his typical operations. In addition, we think that the code now expresses more clearly that irregular files really are the exception and not the rule for this particular code.
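
Here is a sketch of the exception-based approach (using java.nio.file for brevity; names are illustrative, our actual code worked with File wrappers):

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class FileSizer {

  // No isFile()/exists() check in advance: the regular case costs
  // one disk access, the rare irregular file raises an exception.
  public long sizeOf(Path file) {
    try {
      return Files.size(file);
    } catch (IOException e) {
      return 0L; // foreign/unknown files are the exception, handled here
    }
  }
}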

Conclusion

There are different approaches to handling of parameters and input data. Depending on the cost of the check and the frequency of special input different strategies may prove beneficial both in expressing your intent and the perceived performance of your application.

Solutions to common Java enum problems

More readable solutions to using enums with attributes for categorization or representation.

Say, you have an enum representing a state:

enum State {
  A, B, C, D;
}

And you want to know if a state is a final state. In our example C and D should be final.
An initial attempt might be to use a simple method:

public boolean isFinal() {
  return State.C == this || State.D == this;
}

When there are two states this might seem reasonable but adding more states to this condition makes it unreadable pretty fast.
So why not use the enum hierarchy?

A(false), B(false), C(true), D(true);

private boolean isFinal;

private State(boolean isFinal) {
  this.isFinal = isFinal;
}

public boolean isFinal() {
  return isFinal;
}

This was and is in some cases a good approach, but it also gets cumbersome if you have more than one attribute in your constructor.
Another attempt I’ve seen:

public boolean isFinal() {
  for (State finalState : State.getFinalStates()) {
    if (this == finalState) {
      return true;
    }
  }
  return false;
}

public static List<State> getFinalStates() {
  List<State> finalStates = new ArrayList<State>();
  finalStates.add(State.C);
  finalStates.add(State.D);
  return finalStates;
}

This code gets one thing right: the separation of the final attribute from the states. But it can be written in a clearer way:

private static final List<State> FINAL_STATES = Arrays.asList(C, D);

public boolean isFinal() {
  return FINAL_STATES.contains(this);
}

Another common problem with enums is constructing them from an external representation, e.g. a text.
The classic dispatch looks like this:

public static State createFrom(String text) {
  if ("A".equals(text) || "FIRST".equals(text)) {
    return State.A;
  } else if ("B".equals(text)) {
    return State.B;
  } else if ("C".equals(text)) {
    return State.C;
  } else if ("D".equals(text) || "LAST".equals(text)) {
    return State.D;
  } else {
    throw new IllegalArgumentException("Invalid state: " + text);
  }
}

Readers of Refactoring will sense a code smell here and promptly want to refactor to a dispatch using the enum constants themselves:

A("A", "FIRST"),
B("B"),
C("C"),
D("D", "LAST");

private List<String> representations;

private State(String... representations) {
  this.representations = Arrays.asList(representations);
}

public static State createFrom(String text) {
  for (State state : values()) {
    if (state.representations.contains(text)) {
      return state;
    }
  }
  throw new IllegalArgumentException("Invalid state: " + text);
}

Much better.
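
For reference, the complete enum with both refactorings combined (assembled from the fragments above):

import java.util.Arrays;
import java.util.List;

enum State {
  A("A", "FIRST"),
  B("B"),
  C("C"),
  D("D", "LAST");

  // the "final" categorization lives apart from the constants
  private static final List<State> FINAL_STATES = Arrays.asList(C, D);

  private final List<String> representations;

  private State(String... representations) {
    this.representations = Arrays.asList(representations);
  }

  public boolean isFinal() {
    return FINAL_STATES.contains(this);
  }

  public static State createFrom(String text) {
    for (State state : values()) {
      if (state.representations.contains(text)) {
        return state;
      }
    }
    throw new IllegalArgumentException("Invalid state: " + text);
  }
}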

A mindset for inherited source code

This article outlines a mindset for developers to deal with existing, probably inherited code bases. You’ll have to be an archeologist, a forensicist and a minefield clearer all at once.

One field of expertise our company provides is the continuation of existing software projects. While this sounds very easy to accomplish, in reality, there are a few prerequisites that a software project has to provide to be continuable. The most important one is the source code of the system, obviously. If the source code is accessible (this is a problem more often than you might think!), the biggest hurdle is now the mindset and initial approach of the developers that inherit it.

The mindset

Most developers have a healthy “greenfield” project mindset. There is a list of requirements, so start coding and fulfill them. If the code obstructs the way to your goal, you reshape it in a meaningful manner. The more experience you have with developing software, the better the resulting design and architecture of the code will be. Whether you apply automatic tests to your software (and when) is entirely your decision. In short: You are the master of the code and forge it after your vision. This is a great mindset for projects in the early phases of development. But it will actively hinder you in later phases of your project or in case you inherit foreign code.

For your own late-phase projects and source code written by another team, another mindset provides more value. The “brownfield” metaphor doesn’t describe the mindset exactly. I have three metaphors that describe parts of it for me: You’ll need to be an archeologist, a forensicist (as in “securer of criminal evidence”) and a minefield clearer. If you hear the word archeologist, don’t think of Indiana Jones, but of somebody sitting in the scorching desert, clearing a whole football field from sand with only a shaving brush and his breath. If you think about being a forensicist, don’t think of your typical hero criminalist who rearranges the photos of the crime scene to reveal a hidden hint, but the guy in a white overall who has to take all the photos without disturbing the surrounding (and being disturbed by it). If you think about the minefield clearer: Yes, you are spot on. He has to rely on his work and shouldn’t move too fast in any direction.

The initial approach

This sets the scene for your initial journey inside foreign source code: Don’t touch anything or at least be extra careful, only dust it off in the slightest possible manner. Watch where you step in and don’t get lost. Take a snapshot, mental or written, of anything suspicious you’ll encounter. There will be plenty of temptation to lose focus and instantly improve the code. Don’t fall for it. Remember the forensicist: what would the detective in charge of this case say if you “improved the scenery a bit” to get better photos? This process reminds me so much of a common approach to the game “Minesweeper” that I included the minefield clearer in the analogy. You start somewhere on the field and mark every mine you indirectly identify without ever really revealing them.

Most likely, you won’t find any tests or an issue tracker where you can learn about the development history. With some luck, you’ll have a commit history with meaningful comments. Use the blame view as often as you can. These are your archeological skills at work: separating layers and layers of code all mingled in one place. A good SCM system can clear up a total mess for you and reveal the author’s intent. Without tests, issues and version history, you cannot distinguish between a problem and a solution, accidental and deliberate complexity, or a bug and a feature. Everything could mean something and even be crucial for the whole system, or just be useless excess code (so-called “live weight”, because the code will be executed, but with no effect in terms of features). To name an example: if you encounter a strange sleep() call (or multiple calls in a row), don’t eliminate or change it! The original author probably “fixed” a nasty bug with it that will come back sooner than you know it.

Walking on broken glass

And this is what you should do: Leave everything in its place, broken, awkward and clumsy, and try to separate your code from “their” code as much as possible. The rationale is to be able to differentiate between “their” mess and “your” mess and make progress on your part without breaking the already existing features. If you cannot wait any longer to clean up some of the existing code, make sure to release into production often and in a timely manner, so you still know what changed if something goes wrong. If possible, try to release two different kinds of new versions:

  • One kind of new version only incorporates refactorings to the existing code. If anything goes wrong or seems suspicious, you can easily bail out and revert to the previous version without losing functionality.
  • The other kind only contains new features, added with as little change to existing code as possible. Hopefully, this release will not break existing behaviour. If it does, you should double-check your assumptions about the code. If reasonably achievable, do not assume anything or at least write an automatic test to validate your assumption.

Personally, I call this approach the “tick-tock” release cycle, modelled after the release cycle of Intel for its CPUs.

Changing gears

A very important aspect of software development is to know when to “change gears” and switch from greenfield to brownfield or from development to maintenance mode. The text above describes the approach with inherited code, where the gear change is externally triggered by transferring the source code to a new team. But in reality, you need to apply most of the practices to your own source code, too. As soon as your system is in “production”, used in the wild and being filled with precious user data, it changes into maintenance mode. You cannot change existing aspects as easily as before.
In his book “Implementation Patterns” (2008), Kent Beck describes the development of frameworks among other topics. One statement is:

While in conventional development reducing complexity to a minimum is a valuable strategy for making the code easy to understand, in framework development it is often more cost-effective to add complexity in order to enhance the framework developer’s ability to improve the framework without breaking client code.
(Chapter 10, page 118)

I not only agree with this statement but think that it partly applies to “conventional development” in maintenance mode, too. Sometimes the code needs additional complexity to cope with existing structures and data. This is the moment when you’ve inherited your own code.