An experiment on recruitment seriousness

I had the chance to witness an experiment that tested how serious companies really are about attracting new developers. The outcome turned out to be a surprisingly accurate predictor of company culture.

Most software development companies in our area are desperately searching for additional software developers. The pressure rose until two remarkable recruiting tools were deployed. First, every tram car in town was plastered with advertisements shouting “we’re looking for developers” as their only message. Because these advertisements have an embarrassingly low appeal to the target audience, the second tool was a considerable bonus for every developer recruited by recommendation. This was soon called the “headhunter’s reward” and laughed about.

Testing the prospects

The sad thing isn’t the current desperation in the local recruitment efforts, it’s the actual implementation of the whole process. Let’s imagine for a moment that a capable software developer from another town arrives at our train station, enters a tram and takes the advertisement seriously. He discovers that the company in question is nearby and decides to pay them a visit – right here, right now. What do you think will happen?

I had the fortune to talk to a developer who essentially played through the scenario outlined above with five local software development companies that are actively recruiting and advertising. His experiences differed greatly, but gave a strangely accurate hint of each potential employer’s actual company culture. And because the companies’ reactions were utterly archetypical, it’s a great story to learn from.

Meeting the company

The setting for each company was the same: The developer chose an arbitrary date and appeared at the reception – without an appointment, without previous contact, without any documents. He expressed interest in the company and in any developer position they had open. He was also open to spontaneous talks or even a formal job interview, though he didn’t bring along a resume. It was the perfect simulation of somebody who was instantly convinced by the tram advertisement and rushed to meet the company of his dreams.

The reactions

Before we take a look at the individual reactions, let’s agree on an acceptable level of action from the company’s side. The recruitment process of a company isn’t a single-person task. The “headhunter’s reward” tries to communicate this fact through monetary means. Ideally, the whole company staff engages in many little actions that add up to a consistent whole, telling everybody who gets in contact with any part of the company how awesome it is to work there. While this would be recruitment in perfection, it’s really the little actions that count: taking a potentially valuable new employee seriously, expressing interest and care. It might begin with offering a cool beverage on an exceptionally hot day or handing out the company’s image brochure. If you agree on the value of these little actions, you will understand my evaluation scheme for the actual reactions.

The accessible boss

One company won the “contest” by a wide margin. After the developer arrived at the reception, he was relayed to the local boss, who really had a tight schedule, but offered a ten-minute coffee talk after some quick calls to shift appointments. The developer and the company’s boss exchanged basic information and expectations in a casual manner. The developer was provided with a great variety of company print material, like the obligatory image brochure, the latest monthly company magazine and a printout of current job openings. The whole visit was over in half an hour, but left a lasting impression of the company. The most notable message was: “We really value you – you are boss-level important”. And just to put things into perspective: this wasn’t the biggest company on the list!

The accessible office

Another great reaction came from a receptionist who couldn’t reach anybody in charge (the timing was admittedly not ideal) and decided to improvise. She “just” worked in the accounting department, but tried her best to present the software development department and explain the basic cool facts about the company. The visit included a tour through the office space and ended with generic information material about the company. The most notable message was: “We like to work here – have a look”.

The helpless reception

Two companies reacted in basically the same way: The receptionist couldn’t reach anybody in charge, decided to express helplessness and hoped for sympathy. Compared to the reactions above, this is a rather poor and generic approach to the recruitment effort. In one case, the receptionist even forgot basic etiquette and didn’t offer the obligatory coffee or image brochure. The most notable message was: “We only work here – and if you join, you will, too”. To put things into perspective: one of them was the biggest company on the list, probably with rigid processes, highly partitioned responsibilities and strict security rules.

The rude reception

The worst first impression was made by the company whose reception acted like a defensive position. Upon entering, the developer was greeted coldly by the two receptionists. When he explained the motivation for his visit, the first receptionist immediately zoned out while the second one answered: “We have an e-mail address for applications, please use it” – and lost all interest in the guest. The most notable message was: “Go away – why do you bother us?”.

What can be learnt?

The whole experiment can be seen from two sides. If you are a developer looking for a new position in a similar job market, you’ll gain valuable insights about your future employer just by dropping by and assessing the reaction. If you are a software development company desperately looking for developers, you should regard your recruitment efforts as a whole-company project. Good recruitment is done by everybody in your company, one little action at a time. Recruitment is a boss task, but to succeed, it has to be backed by virtually the whole staff. And a company full of happy developers will attract more happy developers through the convincing recruitment work those developers do in their spare time, most of the time without even being aware of it.

Communication through Tests – a larger experiment

We evaluated our ability to communicate through tests in a two-day experiment and gathered some interesting results.

For us, automated tests are the hallmark of professional software development. That doesn’t mean we buy into every testing fad that comes along or consider ourselves testing experts just because we write some tests alongside our code. We put our money where our mouth is and evaluate our ability to write effective tests.

One way to measure the effectiveness of tests is to try to “communicate through tests”. One developer or team writes code and tests for a given specification. Another team picks up only the tests and tries to recreate the production code and infer the specification. The only communication between the two teams happens through the tests.

We performed a small experiment with two teams and one day for both phases and blogged about it. The result of this evaluation was that unit tests are a good medium to transport specification details. But we got a hint that the problems might grow once the code becomes less arithmetic and more complex. As most of our development tasks are rather complex and driven by business rules instead of clean mathematical algorithms, we wanted to investigate further.

Our larger experiment

So we organized a bigger experiment with a broader scope. Instead of two teams, we had three. We ran the phases for eight hours instead of two, essentially increasing the resulting code size by a factor of three. The assignments weren’t static, but versioned – and each team only knew the rules of the current version. When a team reached a certain milestone, more rules were revealed, partly contradicting the previous ruleset. This was meant to emulate changing customer requirements. And to be able to retrospect on the reconstruction phase, we recorded it with screencast software (we used the commercial product Debut Video Capture), capturing both inputs and conversation by equipping every developer with a headset.

The first part of this experiment happened in late January 2013, when all teams had one day to produce production and test code. This was a day of loud buzz in our development department. The second part, the reconstruction phase, was scheduled for mid-February 2013. We had to be a bit quieter this time to improve the audio recording quality, but the developers were humming nonetheless.

Here are some numbers on what was produced in the first session:

  • Team 1: 400 lines of production code, 530 lines of test code. 8 production classes, 54 tests. Test coverage of 90.6%.
  • Team 2: 576 lines of production code, 655 lines of test code. 17 production classes, 59 tests. Test coverage of 98.2%.
  • Team 3: 442 lines of production code, 429 lines of test code. 18 production classes, 37 tests. Test coverage of 97.0%.

The reconstruction phase was finished in less than five hours, partly because we stuck very close to the actual tests with little guesswork. When the tests didn’t enforce a functionality, it wasn’t implemented – to reveal the holes in the test coverage. This reduced the amount of production code that had to be written. On the flip side, every team got lost once along the way, losing the better part of an hour without noticeable progress.

The results

After all the talk about the event itself, let’s have a look at the results of the experiment:

  • The recording of the reconstruction phase was a huge gain in understanding the detailed problems. We even discussed recording the construction phase as well, to capture the original design decisions.
  • Every decision on unclear terms by the original team led to “blurry” tests that didn’t guide the reconstruction team as well as the “razor-sharp” tests did.
  • You could definitely tell the TDD tests from the “test first” tests or even the tests written “immediately after”. More on this aspect later, but this was our biggest overall take-away: The quality of the tests in terms of being a specification differed greatly. This wasn’t bound to teams – as soon as a team lost the TDD “drive”, the tests lost guidance power.
  • Test coverage (in terms of line coverage or conditional coverage) means nothing. You can have 100% test coverage and still suffer from severe plot holes in your tests. Blurry tests tend to increase the coverage, but not the tests’ power to pin the implementation down.
  • In general, we were surprised how little guidance and coverage most tests offered. The assignments included some obvious “testing problems” like dealing with randomness, and every team dealt with them deliberately (see the sketch below for the usual approach). Still, these were the major pain points during the reconstruction phase. This result puts our first small experiment into perspective: what works well with small code bases might be disproportionately harder to achieve when the code size scales up. So while TDD and tests might work easily enough on a small task, they need more attention on a larger one.
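
To illustrate the randomness problem: the usual remedy is to inject the source of randomness instead of creating it inside the class under test. Here is a minimal sketch of that technique – the DiceCup class is our hypothetical example, not code from the actual assignments:

import java.util.Random;

public class DiceCup {
  private final Random random;

  public DiceCup(Random random) {
    this.random = random;
  }

  public int roll() {
    // one die: values 1 to 6
    return random.nextInt(6) + 1;
  }
}

In a test, new DiceCup(new Random(42L)) yields a reproducible sequence of “random” rolls, so the expected outcomes can be pinned down by assertions.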

The biggest problem

When talking about “plot holes” in the tests, let me give you a detailed example of what I mean. The more useless tests suffered from a lack of triangulation. In geometry, triangulation is the process of determining the location of a point by measuring angles to it from several known points. When writing tests, triangulation is the effort to “pinpoint” or specify the implementation with a set of different inputs and required outputs. You specify enough different tests of the same functionality to force a “real” implementation instead of a dummy one. Let’s look at this test:

@Test
public void parsesUserInput() {
  assertThat(new InputParser().parse("1 3 5"), hasItems(1, 3, 5));
}

Well, the test tells us that we need to convert a given string into a bunch of integers. It specifies the necessary class and method for this task, but gives us great freedom in the actual implementation. This makes the test green:

public Iterable<Integer> parse(String input) {
  return Arrays.asList(1, 3, 5);
}

As far as the tests are concerned, this is a concise and correct implementation of the required functionality. And while it is obvious in our example that this will never be sufficient, it often isn’t so obvious when the problem domain isn’t as familiar as parsing strings to numbers. But to complete my explanation of test triangulation, let’s consider a more elaborate version of this test that requires a lot more work on the implementation side (especially when developed in accordance with the Transformation Priority Premise by Uncle Bob and without obvious duplication):

@Test
public void parsesUserInput() {
  assertThat(new InputParser().parse("1 3 5"), hasItems(1, 3, 5));
  assertThat(new InputParser().parse("1 2"), hasItems(1, 2));
  assertThat(new InputParser().parse("1 2 3 4 5"), hasItems(1, 2, 3, 4, 5));
  assertThat(new InputParser().parse("1 4 5 3 2"), hasItems(1, 2, 3, 4, 5));
  assertThat(new InputParser().parse("5 4"), hasItems(4, 5));
  assertThat(new InputParser().parse("5 3"), hasItems(3, 5));
}

Maybe not all assertions are required and maybe they should live in separate tests whose names give more hints, but you get the idea: making this test green is way “harder” than making the initial test green. Writing properly triangulated tests is one of the immediate benefits of Test Driven Development (TDD), as outlined nicely, for example, by Ray Sinnema in his blog entry about test-driving a code kata.
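
For contrast, here is a minimal sketch of an implementation that the triangulated test forces – our assumption of one possible solution, not the original team’s code. With six different inputs, no hard-coded list survives, so the method actually has to parse:

import java.util.ArrayList;
import java.util.List;

public class InputParser {
  public Iterable<Integer> parse(String input) {
    // split on whitespace and convert every token into an integer
    List<Integer> numbers = new ArrayList<Integer>();
    for (String token : input.trim().split("\\s+")) {
      numbers.add(Integer.valueOf(token));
    }
    return numbers;
  }
}
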
Our tests that were written “after the fact” often lacked the proper amount of triangulation, making it easier to “fake it” in the reconstruction phase. In a real project setting, these tests would allow too much implementation deviation to act as a specification. They act more as usage examples and happy-path “smoke” tests.

Our benefits

While this experiment doesn’t fulfill rigorous academic requirements on gathering data, it has already paid off greatly for us. We’ve examined our ability to express our implementations through tests and gathered insight into our real capabilities with test-driven methodologies. Being able to judge the quality of your own tests relatively objectively (by watching the screencast of the reconstruction phase) was very helpful. We now know better which skills to improve and what to focus on during training.

Where to go from here?

We plan to repeat this experiment with interested participants as a spare-time event later this year. For now, we have gathered enough impressions to act on them. If you are interested in more details, drop us a note. We could publish the tests alone (for reconstruction), the complete code or even the screencasts (albeit they are somewhat long). Our participants could elaborate on their impressions in the comment section, if you ask them.
We are very interested in your results when performing similar events, like Tomasz Borek did this month in Krakow, Poland. We found his blog entry about the event very interesting. Compared to his event, we definitely lacked the surprise element for the teams.

An experiment about communication through tests

How effectively does our test code communicate? We wanted to know if we were able to recreate a piece of software from its tests alone. The experiment gained us some worthwhile insights.

Recently, we conducted a little experiment to determine our ability to communicate effectively using only automated tests. We wanted to know if the tests we write are sufficient to recreate the entire production code from them and understand the original requirements. We were inspired by a similar experiment performed by the Softwerkskammer Karlsruhe in July 2012.

The rules

We chose a “game master” and two teams of two developers each, named “Team A” and “Team B”. The game master secretly picked two coding exercises of comparable skill and effort and briefed each team on one of them. The other team shouldn’t know the original assignment beforehand, so the briefings were held in isolation. Then the implementation phase began. The teams were instructed to write extensive tests, be it unit or integration tests, before or after the production code. The teams knew about the further use of the tests. After about two hours of implementation time, we stopped development and held a little recreation break. Then the complete test code of each implementation was handed to the other team, but all production code was kept back for comparison. So Team A started with all tests of Team B and had to recreate the complete missing production code to fulfill Team B’s assignment without knowing exactly what it was. Team B had to do the same with the production code and assignment of Team A, using only their test code. After the “reengineering phase”, as we called it, we compared the solutions and discussed problems and impressions, essentially performing a retrospective on the experiment.

The assignments

The two coding exercises were taken from the Kata Catalogue and adapted to exhibit slightly different rules:

  • Compare Poker Hands: Given two hands of five poker cards, determine which hand has a higher rank and wins the round.
  • Automatic Yahtzee Player: Given five dice and our local Yahtzee rules, determine a strategy which dice should be rerolled.

There was no obligation to complete the exercise, only to develop from a reasonable starting point in a comprehensible direction. The code should be correct and compilable virtually all the time. The test coverage should be close to 100%, even if test-driven development or test-first wasn’t explicitly required. The emphasis of the effort should be on the test code, not on the production code.

The implementation

Both teams understood their assignment immediately and had their “natural” way of developing the code. The programming language of choice was Java for both teams. The game master oscillated between the teams to answer minor questions and gather impressions. After about two hours, we decided to end the phase and stop coding with the next passing test. Neither team completed its assignment, but the resulting code was very similar in size and other key figures:

  • Team A: 217 lines of production code, 198 lines of test code. 5 production classes, 17 tests. Test coverage of 94.1%.
  • Team B: 199 lines of production code, 166 lines of test code. 7 production classes, 17 tests. Test coverage of 94.1%.

In summary, each team produced half a dozen production classes with a total of ~200 lines of code. 17 tests with a total of ~180 lines of code covered more than 90% of the production code.

The reengineering

After a short break, the teams started with all the test code of the other team, but no production code. The first step was to let the IDE create the missing classes and methods to get the tests to compile. Then the teams chose basic unit tests to build up the initial production code base. This succeeded very quickly and turned a lot of tests green. Both teams struggled later on when the tests (and the production code) increased in complexity. Both teams introduced new classes to the codebase even when the tests didn’t suggest doing so, justifying the decision with “better code design” and “ease of implementation”. After about 90 minutes (and nearly simultaneously), both teams had implemented enough production code to turn all tests green. Both teams were confident that they understood the initial assignment and had implemented a solution equal to the original production code base.
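
For illustration, an IDE-generated stub looks roughly like this (our sketch – the exact template depends on the IDE, and the Hand class is a hypothetical example). It makes the tests compile, but fails each of them until it is filled with real behaviour:

public class Hand {
  public static Hand parse(String cards) {
    // auto-generated stub: compiles, but fails every test at runtime
    throw new UnsupportedOperationException("not implemented yet");
  }
}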

The examination

We gathered for the examination and found that both teams met their requirements: The recreated code bases were correct in terms of the original solution and the assignment. We have shown that communication through test code alone is possible for us. But that wasn’t the deepest insight we got from the experiment. Here are a few insights we gathered during the retrospective:

  • Both teams had trouble effectively distinguishing between requirements from the assignment and implementation decisions made by the other team. The tests didn’t transport this aspect well enough. See an example below.
  • The recreated production code turned out to be slightly more precise and concise than the original code. This surprised us a bit and is a huge hint that test driven development, if applied with the “right state of mind”, might improve code quality (at least for this problem domain and these developers).
  • The classes that were introduced during the reengineering phase were present in the original code, too. They just didn’t explicitly show up in the test code.
  • The test code alone wasn’t really helpful in several cases, like:
    • Deciding if a class was/should be an Enum or a normal class
    • Figuring out the meaning of arguments with primitive values. A language with named-parameter support would alleviate this problem. In Java, you might consider using Code Squiggles if you want to prepare for this scenario (see the sketch after this list).
  • The original team would greatly benefit from watching the reengineering team during their coding. The reengineering team would not benefit from interference by the original team. For a solution to this problem, see below.
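
To illustrate the primitive-argument problem, here is a hypothetical sketch (these classes are ours, not taken from the kata code). From the test alone, the meaning of bare values is anybody’s guess; a tiny named factory method restores the meaning at every call site:

class Card {
  private final int value;
  private final int suit;

  Card(int value, int suit) {
    this.value = value;
    this.suit = suit;
  }
}

class Cards {
  static final int DIAMONDS = 1;

  // new Card(10, 1) tells the reader nothing;
  // Cards.cardOf(10, Cards.DIAMONDS) explains itself.
  static Card cardOf(int value, int suit) {
    return new Card(value, suit);
  }
}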

The revelation

One revelation we can directly apply to our test code was how to help with the distinction between a requirement (“has to be this way”) and an implementor’s choice (“incidentally is this way”). Let’s look at an example:

In the poker hands coding exercise, every card is represented by two characters, like “2D” for a two of diamonds or “AS” for an ace of spades. The encoding is straightforward, except for the 10: it is represented by a “T”, not a “10” – “TH” is a ten of hearts. This is a requirement; the implementor cannot choose another encoding. The test for the encoding looks like this:


@Test
public void parseValueForSymbol() {
  assertEquals(Value._2, Value.forSymbol("2"));
  [...]
  assertEquals(Value._10, Value.forSymbol("T"));
  [...]
  assertEquals(Value.ACE, Value.forSymbol("A"));
}

If you write the test like this, there is a clear definition of the encoding, but not of the underlying decision for it. Let’s rewrite the test to communicate that the “T” for ten isn’t an arbitrary choice:


@Test
public void parseValueForSymbol() {
  assertEquals(Value._2, Value.forSymbol("2"));
  [...]
  assertEquals(Value.ACE, Value.forSymbol("A"));
}

@Test
public void tenIsRequiredToBeRepresentedByT() {
  assertEquals(Value._10, Value.forSymbol("T"));
}

Just by extracting this encoding into a dedicated test case, you emphasize that you are aware of the “inconsistency”. Through the test name, you state that it wasn’t your choice to encode it this way.

The improvement

We definitely want to repeat this experiment in the future, but with some improvements. One would be recording the reengineering phases with screencast software, to be able to watch the steps in detail and listen to the discussions without any possibility of interacting or influencing. Both original teams had great interest in the details of the recreation process and the problems with their tests. The other improvement might be a more relaxed schedule: with the implementation phases recorded, there would be no need for direct observation by a game master or even a concurrent performance. The tasks could be bigger and a bit more relaxed.

In short: It was fun, challenging, informative and reaffirming. A great experience!

An advent of unconditional quality code

A four-week experiment dealing with conditional statements and how to avoid or replace them. Starting at the first advent, the experiment runs until Christmas. We invite you to join in and share your experiences.

This blog entry invites you to an experiment in code. It’s an experiment that runs four weeks and can be performed secretly, even at your workplace. It might improve the way you think about conditional statements in an object-oriented programming language. You don’t need any special hardware or setup, just the will to change your coding style a bit each week.

The experiment

Beginning with this year’s advent (a season in the Christian calendar), you are asked to omit one type of conditional statement each week while writing your regular code. The omitted statements add up, so you have to do without four different statement types in the week before Christmas. There is no relation to Christmas (or religion) other than it being a four-week period at the end of the year, which is the perfect timeframe for this experiment. And you might buy yourself a little Christmas present if you succeed at the experiment (idea: a new programming book).

The four stages

For every stage, you are asked to write your normal code without a specific statement. It is perfectly valid to use semantically equivalent code constructs to achieve the same goal. The experiment is even more successful if you are creative and diversified in your variations of the original statement. Remember that the stages add up: in the fourth stage, you are asked to use none of the statements mentioned below.

  • Stage 1 (first week): Don’t use “else”
  • Stage 2 (second week): Don’t use the conditional operator “?:”
  • Stage 3 (third week): Don’t use “switch”
  • Stage 4 (fourth week): Don’t use “if”

You are not asked to change existing code to conform to these restrictions, unless you have to work on lines that contain the prohibited statements. You should apply the rules to your new code rigorously, though.

Explanation of stage 1 (Don’t use “else”)

This rule bans all the different occurrences of the else-branch from your if-statements. It includes every “else if” or “elsif” your programming language might provide. The rationale behind the rule can be found in rule #2 of Jeff Bay’s Object Calisthenics. Here is an explanation of it by Being Cellfish.
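
A minimal sketch of the most common workaround, assuming a hypothetical User type: the else-branch disappears behind an early return, also known as a guard clause:

interface User {
  boolean isKnown();
}

class Greeter {
  public String greetingFor(User user) {
    if (user.isKnown()) {
      return "Welcome back";
    }
    // no else needed: falling through covers the other branch
    return "Hello";
  }
}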

Explanation of stage 2 (Don’t use the conditional operator “?:”)

Elvis is dead. Let this resemblance to his hairdo rest for a week, too. The operator contains a hidden else that has been restricted since stage 1. Another rationale is that the conditional operator isn’t very easy to read or grasp when stretched out over a long line.
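
A minimal sketch of a semantically equivalent replacement that survives both stage 1 and stage 2 – an if without an else:

class Numbers {
  // instead of: return (value < 0) ? -value : value;
  static int magnitudeOf(int value) {
    int magnitude = value;
    if (value < 0) {
      magnitude = -value;
    }
    return magnitude;
  }
}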

Explanation of stage 3 (Don’t use “switch”)

A switch (or case, or select) statement is nothing but a big if-else cascade. It’s handy sometimes, but can be replaced by a lookup table (like a hashmap) virtually every time. In Martin Fowler’s book “Refactoring”, the switch statement counts as a code smell category of its own. You should try to live without it for a week. If you need inspiration, try this article on how to avoid it.
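
A minimal sketch of the lookup-table replacement, using a hypothetical card-suit example of our own:

import java.util.HashMap;
import java.util.Map;

class SuitNames {
  private static final Map<Character, String> NAMES = new HashMap<Character, String>();
  static {
    NAMES.put('D', "Diamonds");
    NAMES.put('H', "Hearts");
    NAMES.put('S', "Spades");
    NAMES.put('C', "Clubs");
  }

  // replaces a four-case switch statement with a table lookup
  static String nameFor(char symbol) {
    return NAMES.get(symbol);
  }
}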

Explanation of stage 4 (Don’t use “if”)

Yes, you didn’t misread. There is a whole campaign that tries to avoid the if-statement altogether. Read their website for inspiration on how to survive this week. Maybe you’ll make new friends with polymorphism and some other implicit conditional structures. Remember, this is a short week just before Christmas. Try it – you might be surprised how easy it looks in hindsight.
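
A minimal sketch of the polymorphism route, again with hypothetical types: the decision is made once, when the object is created, and every later if-statement disappears:

interface Visitor {
  String greeting();
}

class KnownUser implements Visitor {
  public String greeting() {
    return "Welcome back";
  }
}

class Stranger implements Visitor {
  public String greeting() {
    return "Hello";
  }
}

The caller never asks “if this visitor is known” – it simply holds the right object and calls visitor.greeting().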

Ready, steady, go!

This experiment starts with the first advent, on Sunday, November 28th, 2010. Every stage lasts for one week and adds to the previous stages. The experiment ends at Christmas.

Good luck! And if you’re done with it, drop us a comment with your experiences.