We can probably agree that automated tests decrease the likelihood of defects in software. And we have all observed that writing automated tests can mean serious effort, especially if the software under test wasn’t designed with tests in mind. So we can say that automated tests are an investment in low defect rates, among other things. But like any other investment, it isn’t a uniform function of spending money to gain returns, but a portfolio of better and worse little investments. Each test can be seen as a separate micro-investment that should bring a separate micro-return.
A portfolio of micro-investments
We know from experience that these micro-investments don’t pay off equally well. There are tests that have avoided a lot of embarrassment and there are tests that are a constant source of annoyance. And we aren’t very good at predicting which test will turn out which way, because otherwise we would simply avoid writing the frail ones. But there is a tool that can help us: statistics. For your financial investments, you keep track of your stock prices with elaborate charts and tables. How do you keep track of your test investments?
Track the investments
Can you say which tests broke most often? Which tests caught real bugs and which just cost you additional effort without any gain? Do you know your most erratic tests? Given a decent number of tests, it’s not possible to keep track of these characteristics manually. But what if you could look at a test and see its merits and evil deeds of the past? You wouldn’t have an overall statistic, but an individual answer for every test you examine. That would be a first step. And it is relatively easy to take: award your tests.
Reward your angels
There is the notion of “putting angels on your shoulders” when you write tests. You should listen to your angels. But you should reward those angels that are really helpful and restrain the ones that keep pointing out nonsense. You should just put a medal on your angel test whenever it was helpful and a warning badge whenever it misled you. After a while, you will see that your highest decorated tests are the really good investments. Let’s take a look at typical awards:
- @bug: This test stands guard against a certain bug that once lingered in your system and is now fixed. The test didn’t actually catch the bug, but was introduced afterwards to protect against regression. The award is normally given with some kind of bug issue number, like @bug(SYS-132). It’s a relatively low award, because the test hasn’t had to do anything heroic yet, but it is a good indicator for fellow developers to take this test seriously when it fails: it might actually be warning you about a recurring regression.
- @regression: If a test breaks and points out an actual regression in the system (something that worked before and isn’t working any more), it was a good investment. You want to award it immediately for being helpful. The award is also a very good protection against the test ever being carelessly modified or deleted. A @regression veteran should be listened to when it breaks. If a test collects this award several times, it might be hinting at a poorly designed part of your code.
- @lifesaver: You don’t give out this award easily. It’s the highest award a test can get. If a test points out a regression that would lead to a mission-critical failure in production, this test just saved your ass. And if you weren’t even aware that your change would cause such trouble, the test was the best investment in code you can possibly make. Praise it! And if you happen to break a @lifesaver test with a change, chances are you’ve just done something very stupid (again).
These are only three examples of awards you might want to introduce into your project. But there are also some common warning badges you can hand out to troublesome tests:
- #erratic: If a test fails with no apparent reason or connection to your recent change, you might label it as erratic (or hysteric, the slightly more troublesome variant). An erratic test loses its credibility to point out real problems and turns into a burden. It was probably a bad investment. If a test collects several erratic badges, you should consider putting it into quarantine, because unreliable tests quickly teach everyone to ignore any sort of test failure. Quarantining a test is fairly easy: you move it into a separate test harness (see the sketch after this list). Your primary test harness breaks the build when a test fails, but your secondary test harness just reports failures without breaking the build. You degrade your erratic tests to the reserve guard.
- #fragile: If you need to change a test because the production code changed only slightly, you are temporarily compromising your safety net. A typical case of a fragile test would be a mock-object-based test that doesn’t mock a new method call or expects a call that doesn’t happen any more. The test reports a failure, but there isn’t really a problem (well, there’s some violation of the open/closed principle). You have to fix the test instead of your production code. If you give out this badge often, you have a test harness with high maintenance costs. Whether the investment is still positive is your decision, but you can now quantify the stability of your tests compared to the stability of your production code.
- #cryptic: If, for whatever reason, you happen to read a test and then have to read it again, and again, just to understand what’s going on, you might hand out this warning badge. Such a test wasn’t written to be read, it was written to be a mystery. In my book, that’s bad style and should be flagged as such. If a test collects several cryptic badges, that means several developers spent time grasping this little piece of code. If the test hasn’t earned any awards yet, you’re looking at poorly invested time.
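The quarantine mentioned for #erratic tests doesn’t require special tooling. Here is a minimal sketch of one possible approach, assuming JUnit 5: the quarantined test gets a @Tag, and your build runs tagged tests in a separate, non-gating step (most build tools, for example Maven Surefire or Gradle, can include or exclude JUnit 5 tags). The test class, its content and the badge dates are made up for illustration.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import java.math.BigDecimal;
import java.math.RoundingMode;

import org.junit.jupiter.api.Tag;
import org.junit.jupiter.api.Test;

class RoundingTest {

    /**
     * #erratic(2014-03-02), #erratic(2014-03-19): failed twice without a related change.
     */
    @Tag("quarantine") // reserve guard: run separately, failures are reported but don't gate the build
    @Test
    void roundsHalfUpToTwoDecimalPlaces() {
        BigDecimal rounded = new BigDecimal("19.995").setScale(2, RoundingMode.HALF_UP);
        assertEquals(new BigDecimal("20.00"), rounded);
    }
}
```

The tag itself doesn’t change anything; the degradation happens in the build configuration, which excludes the "quarantine" tag from the gating test run and executes it in a reporting-only job.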
Append the meta-data
If you start to label and award your tests, don’t forget to add appropriate meta-data like issue numbers, links to the project wiki and, most importantly, the date of bestowal. Every award fades with time, so you should be able to see instantly when it was given. A test that was quite erratic in its youth can be a great lifesaver right now. If you use a proper version control system, this information can be queried afterwards, but with greater effort. I think it’s wise to put all relevant information where it can be accessed immediately.
And where exactly should you put it? The documentation comment (JavaDoc, Doxygen) above a test is perfectly suited for this kind of information. Have one line for every type of award and list the meta-data of all occurrences on that line. Or, if you want to add a comment to each award, you might use one line per award. Either way, try to make the awards and the test code inseparable and readable as one piece of information, so that your awards survive refactoring and other editing efforts.
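To make this concrete, here is a minimal sketch of a decorated test in Java, using the one-line-per-award-type layout. The test, the dates and the wiki reference are made-up placeholders; only the issue number echoes the @bug(SYS-132) example from above.

```java
import static org.junit.jupiter.api.Assertions.assertTrue;

import org.junit.jupiter.api.Test;

class SessionTimeoutTest {

    /**
     * Verifies that a session counts as expired once the idle time
     * exceeds the configured timeout.
     *
     * @bug(SYS-132) 2013-05-14
     * @regression 2013-08-02, 2013-11-21
     * @lifesaver 2014-01-20, see wiki page "session timeout incident"
     */
    @Test
    void sessionExpiresAfterTimeout() {
        long timeoutMillis = 30 * 60 * 1000;
        long idleMillis = 31 * 60 * 1000;
        // placeholder assertion; a real test would exercise your session handling here
        assertTrue(idleMillis > timeoutMillis, "session should be considered expired");
    }
}
```

The one-line-per-award-type layout keeps the comment compact while still recording every date of bestowal; the same scheme works just as well in a Doxygen comment.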
Reap your profits
After some time, you might want to visualize and rank your test awards. There’s no magic here, just grep over your test code a few times. What you make of your findings is up to you. It certainly helps to evaluate the investments that go into your tests.
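If you prefer a tiny program over a series of grep calls, a sketch like the following will do (plain Java, no dependencies; the tag names and the default test source path are assumptions, adjust them to your project):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Map;
import java.util.TreeMap;
import java.util.regex.Matcher;
import java.util.regex.Pattern;
import java.util.stream.Stream;

/** Counts award and badge tags across all test sources and prints a small ranking. */
public class AwardReport {

    // awards start with '@', warning badges with '#'
    private static final Pattern TAG =
            Pattern.compile("[@#](bug|regression|lifesaver|erratic|fragile|cryptic)\\b");

    public static void main(String[] args) throws IOException {
        Path testRoot = Path.of(args.length > 0 ? args[0] : "src/test/java");
        Map<String, Integer> counts = new TreeMap<>();
        try (Stream<Path> files = Files.walk(testRoot)) {
            files.filter(path -> path.toString().endsWith(".java"))
                 .forEach(path -> count(path, counts));
        }
        counts.forEach((tag, total) -> System.out.printf("%-12s %d%n", tag, total));
    }

    private static void count(Path file, Map<String, Integer> counts) {
        try {
            Matcher matcher = TAG.matcher(Files.readString(file));
            while (matcher.find()) {
                counts.merge(matcher.group(), 1, Integer::sum); // key includes the @/# prefix
            }
        } catch (IOException e) {
            System.err.println("could not read " + file + ": " + e.getMessage());
        }
    }
}
```

Ranking per test instead of per project is just a matter of grouping the matches by file or method, but even the raw totals give you a feel for the ratio of medals to warning badges in your harness.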
Do you use a similar tagging scheme for your tests or your production code? Let us know in the comment section!
Hey Daniel,
I really like the idea of awarding my tests and will think of it the next time I get to write test code. Could you imagine a “thumbs up/thumbs down” button next to test result messages? E.g. one test tells you it broke, provides an error message that lets you find the bug quite quickly, and you find this information helpful, so you give it a “thumbs up”; another test just broke without providing any reason, so you give it a “thumbs down”. From time to time, you look over your tests and decide, especially for the “thumbs down” ones, whether to put them into quarantine or to refine/refactor/improve them. What do you think?
Have a great day,
Chris