Should I test this?

Writing software is hard; writing correct software is even harder. So everything that helps you write better or more correct software should be used to your advantage. But does every test help? And does every piece of code need to be tested automatically? How do I decide what to test and how?

Given a typical web CRUD application, take a look at the following piece of functionality:
We have a model class Element which has a Type type:

class Element {
  ...
  Type type
  ...
}

The view contains a select tag which lets you choose a type:

...
<g:select name="filterByTypeId" from="${types}" optionKey="id" value="${filterByType?.id}"/>
...

And finally in the controller we filter the list of shown elements via the selected type:

...
Type filterByType = Type.get(params['filterByTypeId'])
return [elements: filterByType ? Element.findAllByType(filterByType) : Element.list(), types: Type.list(), filterByType: filterByType]
...

Now ask yourself: would you write an automated test for this? A functional or acceptance test, or some unit or integration tests? Would you really test this automatically, or just by hand? And how do you decide?

Dogma

According to TDD you should test everything: no code exists without a test (written first). If you really live by TDD the choice is already made: you test this code. But is this pragmatic? Efficient? Productive? And what about the aspects you forgot to test? The order of the types, for example: the user wanted them listed lexicographically, or by priority, or numbered. What if this part changes and your test is so tightly coupled that you need to change it, too? There are some TDD enthusiasts out there, but if you are more pragmatic there are other criteria to help you decide.

Cost

Look at the code in question and ask: how much effort is it to create the test(s)? And to run them? If the feedback cycle is too long you lose track of it. I need a test for the controller; this is the easy part. Then I need to test that the view passes the correct parameter and accepts and shows the correct list.
I could also write an acceptance test, but that seems like a big gun for a small bird. In our case it heavily depends on the framework how easy, difficult or costly it is to write tests for our filter. What do you have to mock or simulate? You also have to take the hidden costs into account: how much does it cost to maintain this test? When the requirement changes? When there are more filter criteria? Or when an element can have more than one type?
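
One way to keep the cost down is to test only the branching logic in isolation, without the framework. Below is a minimal sketch in plain Java under that assumption; ElementFilter, ElementRepository and filterElements are hypothetical names, only Element and Type come from the example above:

import java.util.List;

class Type { }
class Element { }

// Hypothetical seam: the decision "filter by the selected type or list everything"
// pulled out of the controller so it can be exercised without the web framework.
interface ElementRepository {
  List<Element> findAllByType(Type type);
  List<Element> list();
}

class ElementFilter {
  private final ElementRepository repository;

  ElementFilter(ElementRepository repository) {
    this.repository = repository;
  }

  List<Element> filterElements(Type filterByType) {
    // No selection means: show the unfiltered list.
    return filterByType != null ? repository.findAllByType(filterByType) : repository.list();
  }
}

A unit test could then pass a stubbed repository and assert on the branching, leaving the g:select and the database queries to a manual check or a single acceptance test.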

Value

Another question you can ask is: what is the value for the customer? How much does he need this to work? What is the cost of an error? What happens when the code in question does not work? The value for the customer is not only determined by the functionality it provides. Software can be seen as giving your users capabilities, as enabling them. A capability is implemented by two things: the implementation (your functionality) and the affordance (the UI). The value is determined by both parts, so you can hardly judge the value of the functionality alone. What if you need to change the UI (in our case the select tag) to increase the value? How does this affect your tests? Does the user reach his goal if the functionality part is broken? What if the code is correct but slow? Or the UI isn’t visible on your user’s screen?

Personal / Team profile

You could decide whether and what to test by looking at your past: your personal or team mistakes, the typical problems and bugs you have made, the habits you have. You could test more when the (business or technical) domain or the underlying technology is new to you. You could write only a few tests when you know the area you work in, and more when it is unknown and you need to explore it. You could write more tests if you work in a dynamic language and fewer in a static one. Or vice versa.

Area / Type of code

You can write a test for every bug you find to prevent regressions. You could write tests only for algorithms or data structures, for certain core parts, for the interaction with other systems, or only for (public) interfaces. The area or type of code can help you decide whether to test it or not.

Visibility

You could also look at how easy it is to spot a bug when you invoke the code manually. Do you or your user see the bug immediately? Or is it hidden? In our case you should easily see when the list is not filtered or is filtered by the wrong criteria. But what if it is just a rounding error, or an error where cause and effect are separated by time or location?

Conclusion

Do you have or use additional criteria? How do you decide? I have to admit that I didn’t and wouldn’t test the code above, because I can easily spot problems in it and try out by hand whether it works (visibility). If the code grows more complex and I cannot easily see the problem (visibility again), or the value (or the cost of an error) for the customer is high, I would write one.

Thoughts on Design

There is a book I am currently reading (and recommend): “The Design of Everyday Things” by Donald A. Norman. The author describes common design errors in an easily readable way and shows or outlines solutions for them. Despite not being a book about software engineering, it covers one of its greatest problems pretty well: the interaction with a human. What should a software developer consider when creating new features or changing old ones?

Natural mapping

Natural mappings are the clues that we can map to known patterns and instinctively use to interpret new, unknown things. They mostly date back to prehistoric times and address the animal in us to catch our attention. To animals, big things, moving things and things that stand out by color, shape or some other distinctive property are important, because they must decide based on them whether to flee or to attack. Natural mappings require near-zero conscious processing power and make the user pay attention to crucial information instantly. If you have two buttons, “cancel order” and “submit order”, and you want the user to click on submit, you better make the submit button big and flashy and keep the cancel button normal-sized and standard grey (have a look at the publish and preview buttons of WordPress).

Visibility

Not all users read the manual (if one exists) before they use a program. Not all users that read the manual can understand it or find the steps necessary to accomplish their task. To still be able to succeed, the user asks himself the following questions at every step:

  • What have I already done?
  • What is my current position in the process?
  • What do I have to do now?
  • How far am I from my goal?

Consider a user who only has 15 minutes and has to fill out an order form consisting of 15 pages. A user who does not even see the total number of pages stops, frustrated, after very few pages, because the process seems endless. Provide him with the page count and the current page and he will be able to plan ahead and estimate the time needed. If the mandatory fields are marked, the user will concentrate on them and progress faster, increasing the probability of completing the order in time.

Feedback

Every time a user has done something, he wants to be sure that everything happened as intended. A mute system inspires confusion and fear (want to try it out? Use ed). A status message is a great signal to the user that he has accomplished one of the steps on the way to his goal. Without the message “Order submitted successfully” the user won’t know whether the system accepted his input or just jumped to another page. Additional confirmations like emails give the user status information with the added benefit of being persistent, unlike a web page in a browser.

Errors

Like any human, users input bogus data or trigger unwanted actions. In these cases the user should get appropriate feedback. When the previous points are considered, the user will know which field is affected, why the input is not accepted and what the steps are to correct the situation. Sometimes the errors are logical rather than syntactical, making them hard or even impossible for the system to detect: there is no way to tell whether the user wanted to buy one book or ten. The layout of elements can be adapted to minimize the risk of accidental use: the “Close Application” button is better not placed next to the “Save” button. Once a mistake has been made, the user should be offered a way to edit, or at least withdraw, his action. Not all systems allow reversal of actions; they instead present the user a confirmation dialog with an important choice, producing unnecessary stress. There are ways to counter that problem.

Conclusion

These are simple points that can be taken into consideration when carefully designing a system. Even though they are simple, we tend to forget them, because we have deadlines or understand the system on such a level that we cannot even imagine what steps an inexperienced user might take and what hints he needs.

Impressions from Java Forum Stuttgart 2013

Java Forum Stuttgart (JFS) is a yearly Java-focused conference primarily visited by developers. The conference lasts one day, offering 45-minute talks plus some time in between for discussions. This was my second visit and I am happy to tell you about my impressions.

vert.x: Polyglot – modular – asynchronous

Speaker: Eberhard Wolff from adesso AG

This was my first stop. The topic seemed interesting because at Softwareschneiderei we use a mix of different languages and frameworks for our projects, so learning about a new framework was a welcome opportunity. vert.x runs on the Java VM and can be programmed in a mixable variety of languages like Java, JavaScript or Groovy. The main part of the presentation consisted of examples in Java and JavaScript showing the asynchronous features and the communication between different components. Judging by the feature set and the questions asked, this seems to be a framework that offers Java developers a smooth transition from the synchronous world to the event-based asynchronous world. Compared to NodeJS, vert.x is currently a small project containing only a handful of modules.

Java 8 innovations

Speaker: Michael Wiedeking from MATHEMA Software GmbH

This one was somewhat special. After thousands of blog entries, presentations and whatnot, there is only a marginal chance of getting fresh news about Java features. The speaker knew this and spiced the presentation up with some jokes while showing increasingly complex code samples. Exactly what I hoped for: reading code and having fun.

Static code analysis as a quality measurement?

Speaker: Dr. Karl-Heinz Wichert from iteratec GmbH

We use Grails in some of our projects. Like any other highly dynamic technology, Grails suffers from its strength: the weak type system. Without the support of acceptance tests it is hard to verify whether a given piece of code is correct or not. My hope was to hear something about new trends in static analysis that would allow me to detect simple errors faster, without firing up the system. Biggest mismatch between imagination and reality that day. The speaker presented the reasons for using static code analysis and its current shortcomings, like the inability to verify that comments match the code they describe, or the inability to detect an overly complex implementation of a simple algorithm. An interesting statement was that static analysis fails if not every aspect is checked, the reason being that developers optimize the code against the measured criteria while neglecting other aspects. From my point of view this is not a shortcoming of static analysis, but of the way people use it. It is measurable that a maintainable product has proportionally more readable variable names than an unmaintainable one, but it is not necessarily true that your product becomes maintainable when you rename all your variables. All in all, the speaker managed to motivate me to look for holes in his argumentation and thus to actively think about the topic.

Enterprise portals with Grails. Does it work?

Speakers: Tobias Kraft and Manuel Breitfeld from exentio GmbH

Like the previous presentation, this one attracted me because of the Grails context. Additionally, because of the title, I was hoping for a nice description of the pitfalls they encountered while building their portals. One part of the presentation described the portal they built and the requirements it has to fulfill, another part the Grails platform itself. They use Grails to deliver snippets for their portal, which is organized as a collection of such snippets. Very valuable was the part about the problems one can encounter when using Grails, where they honestly admitted that the migration from Grails 1.3.7 to 2.x cost them some time. To detect regressions during platform upgrades they recommended putting extra effort into tests.

Car2Car systems – Java and Peer2Peer move into the car

Speaker: Adam Kovacs from msg-systems

After the impressive lunch break my brain waves almost reached zero. The program brochure offered no interesting titles for the next round, so I went for the least common topic. The presentation turned out to be a lucky find. The speaker managed to keep the right level of detail, neither diving too deep nor merely scratching the surface. He described how Chord, an implementation of a distributed hash table, can be used to share locally relevant traffic data like traffic jams or accidents. To increase the stability and the security of the network he introduced the use of existing transmitting stations and certificate authorities.

Kotlin

Speaker: Dr. Ralph Guderlei from eXXcellent Solutions

There were two reasons I wanted to visit this presentation. The first: a colleague had already shown me some features of the language. The second: the language is developed by the company that also develops IntelliJ IDEA, so good IDE support is practically built in, isn’t it? The presentation covered syntax, lambdas, type inference, extension methods and how Kotlin handles null references. It looks like Kotlin is going to become something like an improved Java. I hope for the best.

Enterprise Integration Patterns

Speaker: Alexander Heusingfeld from First Point IT Solutions

Another presentation selected for the least boring-sounding title, and another success. In the first minutes I expected an endless enumeration of common, well-known patterns. This was only true for the first minutes; the topic quickly shifted to asynchronous messaging and the increasingly complex patterns needed to handle it. As two frameworks with a similar range of functions he presented Apache Camel and Spring Integration.

Bottom line

The event was, as always, fun. Unfortunately it was not possible to visit more presentations due to their “parallel execution”. Have you been at JFS too and want to share your impressions of these or other presentations? Post a comment!

Communication Through Code

In a previous post my colleague described our experiment about our ability to convey the intention of code through tests. Tests describe how the code behaves when called from the outside. An additional approach is to communicate through the code itself.

To understand the code, at least the following two questions have to be answered:

  • How does the code work?
  • What is the reason behind the way the code is implemented?

Challenge

As long as the code is readable, it is possible to deduce its meaning. Improving readability is a common technique to help the reader. This includes using descriptive names, reducing complexity or hiding implementation details until they are absolutely necessary to understand the problem.

On the other hand, deducing why exactly this implementation was chosen is an impossible task without the combined knowledge (or lack thereof) of all implementors. Among the missing pieces are the assumptions. Our code is full of them. Consider the following example:

#include <stdio.h>

void print(char* text)
{
  printf("program says %s", text);
}

In this function the writer assumes that:

  • the text is a valid pointer
  • the text is zero terminated
  • this program can write to stdout, i.e. is a console app
  • the reader speaks English

Or something nastier:

#include <stdio.h>
#include <stdlib.h>

void* allocateBuffer(size_t size)
{
  void* buffer = malloc(size);
  if (!buffer) {
    printf("expect a segmentation fault!");
  }
  return buffer;
}

Here the writer assumes that malloc always returns either NULL or a pointer to dereferenceable memory. That is not always the case:

If size is zero, the return value depends on the particular library implementation (it may or may not be a null pointer), but the returned pointer shall not be dereferenced.

Assumptions that are not explicitly written down in the code lead, sooner or later, to bugs that are hard to discover.

Solution approaches

Comments are the quick and dirty way of writing down assumptions. They are the easiest to read, but they are never enforced and tend to diverge from the code with every edit made to it. However, it is better to read “should never come here” and hear the alarm bells ringing than to see nothing but whitespace.

Some of the assumptions can be documented and verified through tests, with varying levels of detail. Unit tests are most efficient for assumptions with little or no context, like verifying that only non-NULL pointers are passed to a function. For more global assumptions, integration or acceptance tests can be used. Together they ensure that later changes to the codebase do not break the assumptions made earlier. The drawback of unit tests is that they are locally decoupled from the code under test, forcing the reader to gather the information by searching for direct or indirect references to it.
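
As a sketch of this idea, in Java rather than C and with hypothetical names (Printer, PrinterTest), a unit test can pin down the non-NULL assumption of a print-like function:

import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class PrinterTest {

  // Hypothetical class under test: the non-null assumption is made explicit
  // instead of being silently expected from the caller.
  static class Printer {
    String print(String text) {
      if (text == null) {
        throw new IllegalArgumentException("text must not be null");
      }
      return "program says " + text;
    }
  }

  @Test(expected = IllegalArgumentException.class)
  public void rejectsNullText() {
    // Documents and enforces the assumption "text is never null".
    new Printer().print(null);
  }

  @Test
  public void wrapsTheGivenText() {
    assertEquals("program says hello", new Printer().print("hello"));
  }
}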

When new code is written, assertions help to document how the API is meant to be used. Since they are executed not only during the test phase, they can also capture wrong assumptions the authors made about the runtime environment. Writing down every possible assumption, however, can quickly clutter the code with repeated statements like “assume pointer x is not NULL”, reducing the readability and usefulness of this technique.
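
A minimal sketch of that approach, again in Java (keeping in mind that Java assertions only run when the JVM is started with -ea); RingBuffer is a made-up example:

public class RingBuffer {

  private final int[] slots;

  public RingBuffer(int capacity) {
    // Writes down the assumption "capacity is positive" right where it matters.
    assert capacity > 0 : "capacity must be positive, was " + capacity;
    this.slots = new int[capacity];
  }

  public int read(int index) {
    // Documents the intended use of the API for the next reader.
    assert index >= 0 && index < slots.length : "index out of range: " + index;
    return slots[index];
  }
}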

Conclusion

None of the shown approaches is new. Each one has an aspect it excels at, so to get the most information out of the code, all of them have to be used. Their domains overlap partially, so it is possible to choose the approach depending on the situation, e.g. replacing assertions with unit tests for time-critical code. One niche currently not filled by any of them is the description of global assumptions like the cultural background of the users.

Working on software as a free time activity

Why would somebody do this? Isn’t it already enough to code at work for eight hours a day, five days a week? If you ask yourself these questions, I think you should reconsider your position.

There is a fundamental difference between work and free time: you are not constrained. You don’t have to meet a deadline. Software development is a mentally challenging task, and while some time pressure keeps you focused, a little more forces you to cut corners instead of considering better alternatives. If deadlines were good, they wouldn’t have “dead” in their name. In your free time, you decide when you are done.

Even having considered alternatives, you are not always able to implement them. There may be a corporate identity that doesn’t contain your favourite flavour of pink. There can be a module licensed under a non-commercial-only license. Or maybe your company uses an old framework missing the latest features. No such problems in your free time.

There is a theory that mastery comes from practice. By coding in your free time, you can decide whether to invest your time in deeper knowledge of some topic or in a broader horizon, thus becoming a more valuable employee. And sharing freshly won knowledge and experience can increase your reputation as a colleague, too.

The social among us even meet like-minded people at events like a Java User Group or the Schneide Dev Brunch. Here the amount of transported information is much higher, since everyone has a different background and focuses on different things. You can even share your mistakes and laugh about them with others.

Are there any side effects of free-time coding besides those listed above? Yes: your personality can change. It is possible that you will gain a more positive attitude and start investing your free time in your skills. Maybe you’ll even start to motivate others to do likewise.

TDD myths: the problems

I take a look at some problems and misconceptions with TDD that I have encountered:
100% code coverage is enough, debugging is not needed, design for testability, and you are faster than without tests.

100% code coverage is enough

Code coverage seems to be a bad indicator for the quality of the tests. Take the following code as an example:

public void testEmptySum() {
  assertEquals(0, sum());
}

public void testSumOfMultipleNumbers() {
  assertEquals(5, sum(2, 3));
}

Now take a look at the implementation:

public int sum(int...numbers) {
  if (numbers.length == 0) {
    return 0;
  }
  return 5;
}

Baby steps in TDD could lead you to this implementation. It has 100% code coverage and all tests are green, but the implementation isn’t finished at all. Our experiment, in which we investigated how well tests communicate the intent of the code, showed the flaws in metrics like code coverage.
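
A hypothetical third test (my addition, not part of the original example) makes the gap visible: the suite goes red even though coverage was already at 100%.

public void testSumOfOtherNumbers() {
  // Fails against the implementation above, which returns a hard-coded 5.
  assertEquals(7, sum(3, 4));
}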

Debugging is not needed

One promise of TDD, or of tests in general, is that you can neglect debugging or even abandon it. In my experience, when a test goes red (especially an integration test) you sometimes need to fire up the debugger. The debugger helps you step through the code and see the actual state of the system at that point. Tests treat code as a black box: an input results in an output. But what happens in between? How much do you want to couple your tests to your actual implementation steps? Do we need tests to cover this aspect of software development? Maybe something along the lines of Inventing on Principle, where the computer shows you the intermediate steps your code takes, could replace debugging, but tests alone cannot do it.

Design for testability

A noble goal. But are tests your primary client? No, other code is. Design for maintainability would be better. You will need to change your code, fix it, introduce new features, etc. Don’t get me wrong: you need tests and you need testability. But how much code do you write specifically for your tests? How much flexibility do you introduce only because of your tests? Which patterns do you use just because your tests need them? It’s like YAGNI for exposing code to tests. Code written specifically for tests couples your code to your tests, and only things that need to be coupled should be. Is the choice of the underlying data structure important? Then couple it, test it. If it isn’t, don’t expose it, don’t write a getter. Don’t break the information hiding principle if you don’t need to. If you couple your tests too tightly to your code, every little change breaks your tests, and this hinders maintenance. The important and difficult design question is: what is important? Test that.
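
As a small sketch of this (a hypothetical ShoppingCart, not from the original post): the internal list stays hidden, and the test couples only to the observable behavior, the total.

import java.util.ArrayList;
import java.util.List;

public class ShoppingCart {

  // Implementation detail: no getter is added just so a test can peek inside.
  private final List<Integer> itemPrices = new ArrayList<Integer>();

  public void add(int price) {
    itemPrices.add(price);
  }

  public int total() {
    int sum = 0;
    for (int price : itemPrices) {
      sum += price;
    }
    return sum;
  }
}

The corresponding test, written in the same fragment style as the examples above:

public void testTotalOfAddedItems() {
  ShoppingCart cart = new ShoppingCart();
  cart.add(2);
  cart.add(3);
  // Coupled to the observable behavior, not to the internal data structure.
  assertEquals(5, cart.total());
}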

You are faster than without tests

Some TDD practitioners claim that they are faster with TDD than without tests, because after a certain time the bugs and problems in your code will overwhelm you; so beyond a certain level of complexity you go faster with TDD. But where is this level? In my experience, writing code without tests is 3x-4x faster than with TDD, at least for small applications. There are entire communities where many applications are written without tests or with only a few. I wouldn’t write a large application without tests, but my feeling is that in many cases I go much slower with TDD. The cases where I feel faster are specification-heavy: parsing or writing formats, designing an algorithm or implementing a scientific formula. So the call is open on this one. What are your experiences? Do you feel slowed down by TDD?

Thoughts about TDD

Thoughts and links about test driven development

First a disclaimer: I think tests are a hallmark of professional software development. I like to write tests before the implementation, but that’s not always easy or simple (for the difference please refer to Simple Made Easy). Still, I find it hard to grasp test driven development (TDD). The difference between test first and test driven lies in the intention: in both cases tests are written before any implementation code, but in TDD the tests drive the design of your implementation.

The problem with opinions on TDD is that they are mostly extreme positions: some think TDD is the (next) holy grail, others dismiss it outright. But reading between the lines there are great discussions about how to do it and what problems arise. Many people (me included) are really trying to get value from TDD. Testing should be fun.
One way of letting the tests drive the way you develop is proposed by Uncle Bob: the transformation priority premise. He proposes a list of transformations which introduce new constructs or replace existing ones, like replacing a constant with a variable or adding more logic, and assigns them priorities. Only if you cannot use a high-priority transformation to get the test to pass do you look at a transformation with a lower priority.
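
A rough illustration of one of these transformations, constant to variable, using a sum function as a made-up example (not taken from Uncle Bob’s article): the first test is satisfied by returning a constant, the second forces the constant to give way to a variable.

// assertEquals(0, sum());     -> "return 0;", a constant, is enough
// assertEquals(5, sum(2, 3)); -> the constant has to become a variable

public int sum(int... numbers) {
  int total = 0; // the constant 0 became the variable 'total'
  for (int number : numbers) {
    total += number;
  }
  return total;
}
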
But how do you determine what to test next, or even which test to write first?
Taking the typical Conway’s Game of Life kata as an example, one thing struck me: I could only get TDD to work smoothly when I started with the data structure. But why? Naturally I start with the algorithm (in this case the rules) and write the first test for it. But upon further inspection of the problem, and with deeper (domain) knowledge, it seems the data structure is much more important for solving this kata. So you need to know beforehand where the journey goes, not every step you will take, but the big picture: in this example, first the data structure, then the rules. Maybe you should start with the integration or functional tests and break them down into units.
What are your experiences using TDD? Do you use or want to use TDD?

Why do (different) programming languages matter?

One common saying in software development is: use the best tool for the job. But what is the best tool? I think the best tool is determined by two things: how it fits the problem domain and how it fits your mental model.

Why your mental model? “Just use the best language available!” you might think. But as humans we think in languages, and even within these languages everybody has a typical way of expressing himself, even his own words, and if they become common we have a name for them: a dialect. But is that all you should consider when choosing a programming language? Certainly there are the tools of the trade: the IDE, debugger, profiler, etc. Here it comes down to personal preferences, and most of the shortcomings in this field are short-term: better tool support is on the way.
There’s another more important aspect though: the community and therefore the mindset which is brought along. The communities form how the languages are used, where the most libraries and frameworks are developed, which problem domains are tackled and what the values are. Values can be testing, elegance, simplicity, robustness, …
Since communities consist of individuals, the individuals shape the values. But I think the language designer lays the foundation here: take Ruby, for example. Ruby was designed with the intention of making programming fun. This is one of the things that appeals to many developers and to the whole community that uses Ruby. Ruby is fun.
These environments spawn amazing things like Rails or, more recently, RubyMotion. Because of the mindset of the community and the foundation inside the language, these fruits exist. Last but not least, another reason to choose a language is your familiarity with it: you might choose an inferior tool or language simply because you know it inside out.

The 2012 Experience

Do you know that feeling when you use or look at some kind of device, software, technology or service and think that there must be an easier, faster, nicer or simply better way to do it? To put it simply: when something is just inconvenient to use. Earlier this year, in a discussion about some inconvenient everyday technology, the phrase “That does not feel like 2012!” was used to describe this feeling. Since then, the “2012 experience” has become a kind of unofficial commendation we award in our discussions.

For me, this sentence says it all. In the year 2012 (which some claim to be our last) there are still more than enough products that make everyday life and work unnecessarily complicated, despite various existing techniques, good practices and, most importantly, lots of examples that show producers how to create a good 2012 experience.

Even though the phrase was originally coined to describe a scientific API, the last few weeks taught me that this phenomenon is by no means unique to our profession. The worst non-2012 experiences I have had so far were with service hotlines. While there are many examples of good 2012 experiences that leave me with the feeling that my issue is taken care of, there are also those that leave me with the urge to immediately dial again and hope for a different service employee.

Regardless of the work I do, be it developing an API, designing a GUI or providing customer support, I always try to imagine how I would expect it to be done by someone else to leave me contented.

Refactor now

Sometimes we write code or a test and think: make it work first and refactor it later. But this ‘later’ may not come for a while.
What would you say about a mechanic or a craftsman who does his work and does not clean up afterwards? Would you drive your car with stuff lying around in the engine bay? Or use your bathroom with dirt all over it?
Certainly not.
So don’t wait until someone else cleans up your code, or until you come back after a while and the first thing you have to do is clean up.
I know that when things get tough and deadlines are near, refactoring does not have top priority. So why not use an iteration, or even just a few hours, to clean up afterwards? It will help you in the future.