When everything’s an issue

Years ago, I read the novel “Manna” by Marshall Brain. It’s a science fiction story about the robotic takeover and it features “Manna”, an (artificially) intelligent work management software that replaces human managers and runs the shop. The story starts with “Manna” and goes on to explore the implications of such a system on mankind. It’s a good read and contains a lot of thoughts about what kind of labor we want to do.

The idea that really captivated me was the company that runs itself. Don’t get me wrong: Most organizations are so big that the individual employee cannot see the big picture anymore. Those organizations seem to “run themselves” to the untrained eye, but it is still humans who manage the workload. And like all humans, they make mistakes and, perhaps very subtly, infuse their own selfish goals into the process. But an organization that pursues its own goals and instructs its workers (humans and machines alike) directly is an interesting thought to me.

It also is totally unrealistic with today’s technology and probably contains some risks that should be explored carefully before implementing such a system in the wild.

But what about a more down-to-earth approach that achieves the core advancement of “Manna” without many or all of the risks? What if the organization doesn’t instruct, but makes its needs visible and relies on humans to interpret and schedule those needs and fulfill them? In essence, a “Manna” system without the sensors and decision-making and certainly without the creepy snooping tendencies. Built with today’s technology, that’s called an automated work scheduler.

And that is what we’ve built at our company. We use an issue tracking system to manage and schedule our project work already. We extended its usage to manage and schedule our administrative work, too. Now, every work unit in the company is (or could be) accompanied by an issue in the issue tracker. And just like software developers don’t change code without an issue, we don’t change our company’s data or decisions without an issue that also provides a place for documentation related to the process. We’ve come to the conclusion that most of those administrative issues are recurring. So we automated their creation.

Our very early stage “Manna” system is called “issue scheduler”, a highly creative name on its own. It is a system that basically contains a lot of glorified cron expressions and just enough data to create a meaningful issue in the issue tracker, should a cron expression fire. So basically, our company creates issues for us on a fixed schedule. Let’s look at some examples:

  • We add a new article to our developer blog (you’re reading it right now) every week. This means that every week, our “issue scheduler” creates a blog issue and assigns it to the next author in line. This is done some time in advance to give the author enough time to prepare and possibly trade with other authors. Our developer blog has the “need” for one article each week, but it doesn’t require a particular topic or author. This need is made visible by the automatic blog issues and it is our duty to fulfill this need. On a side note: Maybe you’ve noticed that I wrote two blog articles in direct succession. There is definitely some issue trading going on behind the scenes right now!
  • We tend to have many plants in our office. To look at something green and living adds to our comfort. But those plants have needs, too. They probably make their needs pretty visible, but we aren’t expert plant caregivers. So we gave the “issue scheduler” some entries to inform us about the regular watering and fertilization duties for our office plants. A detailed description of the actual work exists in our company wiki and a link to it gives the caregiver of the week all the information that’s needed.
  • Every month, we are required to file a sales tax summary report. This is a need of the German government agencies that we incorporate into our company’s needs. To work on this issue, you need more information and security clearances than fit on a wiki page, but the process is documented nonetheless. So once a month, our company automatically creates an issue that says “do your taxes now!” and assigns it to our administrative employees.

These are three examples of recurring tasks that are covered by our poor man’s “Manna” system. To give you a perspective on the scale of this system for a small company like ours, we currently have about 140 distinct rules for recurring issues. Some of them fire almost every day, some of them sleep for years and wake up just in time to express a certain need of the company that would otherwise surely be forgotten or rediscovered after the fact.
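
To make this a bit more tangible, a single rule in such a system might look roughly like the sketch below. All names (RecurringIssueRule, IssueTracker and so on) are invented for this article, not taken from our actual implementation:

// a minimal sketch of one rule: a glorified cron expression plus just enough data for a meaningful issue
public class RecurringIssueRule {
    private final String cronExpression; // e.g. "0 9 * * MON" for every Monday at 9:00
    private final String summary;        // e.g. "Write this week's blog article"
    private final String description;    // e.g. a link to the wiki page with the details
    private final String assignee;       // e.g. the next author in line

    public RecurringIssueRule(String cronExpression, String summary, String description, String assignee) {
        this.cronExpression = cronExpression;
        this.summary = summary;
        this.description = description;
        this.assignee = assignee;
    }

    public String cronExpression() {
        return cronExpression;
    }

    // called by the scheduler whenever the cron expression fires
    public void fire(IssueTracker tracker) {
        tracker.createIssue(summary, description, assignee);
    }
}

interface IssueTracker {
    void createIssue(String summary, String description, String assignee);
}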

This approach relieves us of the burden of remembering all the tasks and their schedules and lets us concentrate on completing them. And our system, in contrast to “Manna” in the story, isn’t judging or controlling. If you don’t think the plants need any more water, just resolve the issue with “won’t fix”. Perhaps you can explain your decision in a short comment for other humans, but our “issue scheduler” won’t notice.

This isn’t the robotic takeover, after all. It’s just automated scheduling of recurring work. And it works great.

Twelve years a bug

Recently, a friend recommended the talk “Love your bugs” by Allison Kaptur to me. It’s a good talk with a powerful message: Every bug is a chance to learn. The only prerequisite for this chance: You need to be aware of the bug. You need to see it and then understand and fix it.

What if you don’t see a bug for over a decade?

You can still learn from it – a lot. Here is a story about a bug that lingered in my code for twelve years and had an impact on the software’s results without anybody, including me, noticing.

Let’s say the software is a measurement system that records physical measurement values that cannot be reproduced easily. The samples are measured in rapid succession and then stored away. The measurement results act as a quality quantifier, as in “the sample was this good at that time”. The sample’s quality diminishes over time, even with perfect storage conditions. And just to be sure that the measurement is accurate, it isn’t performed once, but twice in quick succession: The measurement device traverses the sample in one direction, recording the values, and uses the return path for a second measurement. This results in two measurement streaks that should, in theory, be very similar.

The raw measurement values are aggregated in different ways. In the final report, the maximum value over both measurement streaks is the most prominent and most important aspect. There is nothing exciting or error-prone about finding the maximum value in a value series, even twelve years ago. But I tested the code nonetheless. The whole value aggregation code was under test and had good test coverage. All tests gave their thumbs up.

But twelve years ago, at the end of a workweek, on a Friday at 5 pm, I made a small and easy refactoring to the code. I know this in such detail because of the wonders of version control. Without version control, I probably would have learnt a lot less from this bug.

The code before the refactoring contained two nearly identical sections for the measurement streaks, making it duplicated code. There were only two differences: The first section of code counted from 0 to 99 and stored the values in the first streak’s array. The second section counted from 99 to 0, because the measurement device travels backwards over the sample, and stored the values into the second array.

My refactoring brought both pieces of code together: A new method with two parameters was introduced and called in both places. The first parameter specified a series of positions (0 to 99 or backwards), the second parameter specified the array to store into. All automated tests approved the changes. My manual testing showed no differences. The refactoring went live.

Twelve years later, while working on a new engine for the measurement device, the customer took the raw values from the journal and performed some manual calculations. The maximum value calculation was wrong. It wasn’t wrong all of the time, but it wasn’t correct for all measurements either. When it went wrong, it selected the second greatest value or, more rarely, the third greatest value as the maximum value.

All the tests still insisted that everything was correct – as they had for twelve long years. There was no other change to the code that could affect the aggregation in any way.

Two different ways led me to the bug’s origin. The first way was to inspect the trail of commits in the area of code that performs the aggregation. My finding was that the refactoring mentioned above was the most likely culprit. The second way was to write more tests to ensure that yes, the maximum value calculation was indeed correct if given the correct values. Using the examples my customer had examined, I could prove with enough certainty that, given the input of all 200 values, the correct maximum value was returned in all cases. The aggregation therefore wasn’t given all values!

And that led directly to the bug: The first measurement streak used the same array to store the values as the second streak. During the refactoring, I must have copied and pasted the call to the new method and forgotten to change the second parameter. Of 200 measured values, only 100 got stored permanently. The first 100 values were stored and promptly overwritten as the measurement device returned to its park position. Change the calls to store into both arrays and everything works as intended, just like before the refactoring.
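
A reconstruction of the mistake might look like this. The names and the simplified parameters are invented for this article; the real code is more elaborate:

import java.util.Arrays;

// a reconstruction with invented names; not the original code
public class MeasurementRecorder {
    private final double[] firstStreak = new double[100];
    private final double[] secondStreak = new double[100];
    private final MeasurementDevice device;

    public MeasurementRecorder(MeasurementDevice device) {
        this.device = device;
        Arrays.fill(firstStreak, Double.NaN);   // NaN marks "not measured yet"
        Arrays.fill(secondStreak, Double.NaN);
    }

    public void recordBothStreaks() {
        recordStreak(0, 99, +1, firstStreak);   // forward pass
        recordStreak(99, 0, -1, firstStreak);   // backward pass - the slip: should have been secondStreak
    }

    public long storedValueCount() {
        return Arrays.stream(firstStreak).filter(v -> !Double.isNaN(v)).count()
             + Arrays.stream(secondStreak).filter(v -> !Double.isNaN(v)).count();
    }

    private void recordStreak(int from, int to, int step, double[] target) {
        for (int position = from; position != to + step; position += step) {
            target[position] = device.measureAt(position);
        }
    }
}

interface MeasurementDevice {
    double measureAt(int position);
}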

How did no test, automated or manual, catch this bug? It turns out that most automated tests were too focussed to indicate a problem. All unit tests that secured the maximum value calculation used given sets of values, they didn’t care about the origin of these value sets. The integration tests that covered the whole measurement process should have raised objections to the refactoring. But they used given sets of measurement values, too. And by chance, the given maximum value was in the second measurement streak. In fact, the given set of measurement values was produced by a loop that just increased a value. The greatest value was always the last one.

The manual tests had the same problem: The simulated measurement device produced measurement values that were either fixed or random. If your whole measurement uses the same fixed values, you don’t notice if half of them go missing. And if your measurement uses random values, you’ll have to pay close attention to a detail that isn’t in focus because it is unchanged. Except that one time when it was changed recently. Remember the change date? Friday afternoon, only minutes before the weekend? Not the best time to manually test a change that is a simple standard refactoring, after all.

So, what have I learnt? First: Automated tests, even with great test coverage, aren’t enough. There is so much leeway in the setup of these tests that they will have blind spots without your knowledge. Second: A code review by another human (or even by the same human, some days later) might have caught the bug. It was painfully obvious in hindsight. The problem? “Might have” is a heuristic, just like your automated tests are.

My guess is that mutation testing would have shown the blind spot of the existing tests – among several hundred others. The heuristic is now your trained eye that sifts through the results and separates false positives from true positives.

Right now, I’ve fixed the bug, kept all additional tests and added one more: an integration test that performs a measurement with fixed values and checks nothing but whether all 200 values are stored at the end of the measurement. It’s oddly specific, but it conveys this story in an automated fashion.
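
In spirit, and reusing the invented names from the sketch above, that test looks something like this:

import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

public class MeasurementRecorderTest {
    @Test
    public void allTwoHundredValuesAreStoredAfterAFullMeasurement() {
        // a simulated device that returns the same fixed value for every position
        MeasurementRecorder recorder = new MeasurementRecorder(position -> 42.0);

        recorder.recordBothStreaks();

        // checks nothing else but that both streaks were filled completely
        assertEquals(200, recorder.storedValueCount());
    }
}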

Oh, and I made another refactoring: I’ve replaced the arrays with collections. Hopefully, I won’t regret this one twelve years in the future. I made it on a Monday.

Ignoring YAGNI – 12 years later

Fourteen years ago, we started to build a distributed system to gather environmental data in an automated 24/7 fashion. Our development process was agile and made heavy use of short iterations (at least they were short back then; today they are normal-sized). So the system grew with many small new features and improvements, giving the customer immediate business value.

One part of the system was the task scheduler. Because the system had to run 24/7 and be mostly independent of human interaction, the task scheduler’s job was to launch different measurement processes at the right time. We had done extensive domain crunching and figured out that all tasks follow a rigid time regime like “start every 10 minutes” or “start every hour”, regardless of the processes’ runtime. This made the scheduler rather easy to develop. You should keep it simple, after all.

But another result of the domain crunching bothered us: The schedule of all tasks originated from the previous software system, built 30 years ago and definitely unfit for the modern software world. The schedules weren’t really rooted in the domain, they all had technical explanations like “the recording of the values is done sequentially and takes up to 8 minutes, we can’t record them more often than that”. For our project, the measurement hardware was changed, so our recording took a couple of milliseconds. We could store and display the values continuously, if the need arises.

So we discussed the required simplicity or complexity of the task scheduler with the customer and they seemed pleased with all the new possibilities. But they decided that the current schedules were sufficient and didn’t need to be changed. We could go ahead and build our simple task scheduler.

And this is when we decided to abandon KISS and make the task scheduler more powerful than needed. “But you ain’t going to need it!” was the enemy. Because we knew that the customer would inevitably come around and make use of their new possibilities. We knew that if we built the system with more complexity, we would be the heroes in a future time, wearing a smug smile and telling the customer: “We’ve already built this, you can use it right away”. Oh, how glorious this prospect of the future shone! Just a few more thoughts going into the code and we’re set for a bright future.

Let me tell you a few details about the “few more thoughts”, with the example of an “every hour” task schedule. Instead of hard-coding the schedule, we added a configuration file with a cron-like expression for the schedule. You could now leverage the power of cron expressions to design your schedule as you see fit. If you wanted to change the schedule from “every hour” to “every odd minute and when the pale moon rises”, you could do so. The task scheduler had to interpret the configuration file and make sure that tasks don’t pile up: If you schedule a task to run “every minute”, but it takes two minutes to process, you’ve essentially built a time-bomb for your system load. The scheduler had to prevent that.
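
Just to illustrate one of those “few more thoughts”: the pile-up guard alone might look roughly like the following sketch. It uses a fixed period and only detects overruns; the real scheduler parsed cron expressions from the configuration file and is not shown here:

import java.time.Duration;
import java.time.Instant;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// a simplified sketch of the overrun detection, not our actual scheduler
public class OverrunAwareScheduler {
    private final ScheduledExecutorService executor = Executors.newScheduledThreadPool(1);

    public void scheduleEvery(Duration period, String taskName, Runnable task) {
        executor.scheduleAtFixedRate(() -> {
            Instant start = Instant.now();
            task.run();
            Duration runtime = Duration.between(start, Instant.now());
            if (runtime.compareTo(period) > 0) {
                // a task that outruns its own period makes executions pile up
                System.err.println("Task '" + taskName + "' took " + runtime
                        + ", longer than its period of " + period);
            }
        }, 0, period.toMillis(), TimeUnit.MILLISECONDS);
    }
}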

But it doesn’t stop there. A lot of functionality, most of which wasn’t even present or outlined at the time of our decision, relies implicitly on that schedule. Two examples: There are manual operations that must not be performed during the execution of the task. The system goes into a “protected state” around the task execution. It disables these operations a few minutes before the scheduled execution and even some time afterwards. If you had a fixed schedule of “every hour”, you could even hard-code the protected timespan. With a possible dynamic schedule, you have to calculate your timespan based on the current schedule and warn your operator if it isn’t possible anymore to find a time slot to even perform the manual operation.
The second example is a functionality that supervises the completeness of the recorded data. The problem is: This functionality is on another computer (it’s a distributed system, remember?) that doesn’t know about the configuration files. To be able to scan the data archive and say “everything that should be there, is there”, the second computer needs to know about all the schedules of the first computers (there are many of them, recording their data on their own schedules and transferring it to the second computer). And if a schedule changes, the second computer needs to take the change into account and scan the data archive for two areas: one area with the old schedule and one area with the new schedule. Otherwise, there would be false alarms.

You can probably see that the one decision to make the task scheduler a little more complex and configurable than required had quite some impact on the complexity of other parts of the system. But this investment will be worth it as soon as the customer changes the schedule! The whole system is programmed, tested and documented to facilitate schedule changes. We are ready!

It’s been over twelve years since we wrote the first line of code for the more complex implementation (I’ve checked the source control logs). The customer hasn’t changed a single bit of the schedule yet. There are over twenty “first computers” and they all still run the same task schedule as initially planned. Our decision did nothing but add accidental complexity to the system. It probably introduced some bugs along the way, too. It certainly increased our required level of awareness (“hurdle of understanding”) during the development of features that are somewhat coupled with the task schedule.

In short: It’s been a disaster. The smug smile we thought we’d wear has been replaced by a deep frown. Who wrote all that mess? And why? It wasn’t the customer, it was us. We are never going to need it.

Domain-aligned bugs


Imagine that you are a user of a typical enterprise software system that handles commercial products and their prices. There are different prices in the software that are somehow related to each other. There is the purchase price that indicates your cost if you buy the product. There is the retail price that gets listed in your price lists and is paid by your customers, should they buy the product. You probably already figured out that the retail price should never be lower than the purchase price, because that would mean you lose money with every successful sale.

Let’s say that the enterprise software not only handles products, but also parts. Several parts combined, with some manufacturing effort, result in a product. Each part has a purchase price, the resulting product has a retail price. The retail price of the product should be higher than the sum of purchase prices of the parts. If it isn’t, you lose the costs of the manufacturing effort and some extra money with every successful sale.

If for any reason you cannot clearly estimate your manufacturing effort, the enterprise software has another input field for an amount of money that you can add to the sum of the parts’ costs. We call this field the “sales bonus”. So, if you sell a product made up of parts, your customer has to pay a price that consists at least of the retail prices of the parts and the sales bonus. Of course, your customer has an individual discount percentage that needs to be subtracted from the total price. Are you still following?
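
To put concrete numbers on it: a product made of three parts with retail prices of 10 € each and a sales bonus of 5 € lists at 35 €; with an individual discount of 10 percent, the customer’s quote comes to (3 × 10 € + 5 €) × 0,9 = 31,50 €.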

You are now thinking in the domain of price determination and financial mathematics. If you were the user of said enterprise software, you’d probably expect some bugs like these:

  • It is possible to enter a retail price lower than the purchase price
  • The price of products manufactured from parts isn’t calculated correctly
  • It is possible to enter a negative sales bonus
  • The total price with discount could be lower than the sum of purchase prices of the parts without a warning

All of them are bugs in the domain. All of them can be explained to a domain expert or a user with terms and concepts from the domain.

But what about this bug: You sell a product that consists of three parts, each with a retail price of 10 €, and a sales bonus of 5 €. You want to create a quote for your customer and the price shows up as 34,99999999998 €. You are a bit bewildered and try to counteract the apparent rounding error by changing the sales bonus to 5,00000000002 €. After this change you get another crazy total price, and your prices in the database are different from what you entered, too. Everything seems to destabilize and deviate further and further from clear-cut prices.

As a programmer, you know what happened. You know what caused this effect of numerical instability. Somebody stored monetary values in a floating point number. You know that is a bad idea and you’d never do this. But this blog post isn’t about you or what you should or shouldn’t do. It is about the user, expert in his domain, who stumbles over the bug as described and has to make some decision on how to fix it. This user cannot use any knowledge from the domain to even understand the mechanics of the bug. You, as the programmer, cannot explain this bug in terms of things the user already knows. You need to be vague (“the software doesn’t store the exact values, just approximations”) or introduce additional complexity (“we store this value by splitting it into a significand and multiplying it by a factor consisting of a fixed base and an exponent. We can omit the base and just store the significand and the exponent and express a very large numerical range in just a few bits. Think about how cool that is!”).
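
For the programmers among the readers, the effect in its smallest form, along with the well-known remedy of a decimal type like BigDecimal (a toy illustration, not the exact numbers from the story):

import java.math.BigDecimal;

public class MoneyExample {
    public static void main(String[] args) {
        // binary floating point cannot represent most decimal fractions exactly
        System.out.println(0.1 + 0.2);                                           // prints 0.30000000000000004

        // a decimal type stores exactly what the salesman typed in
        System.out.println(new BigDecimal("0.10").add(new BigDecimal("0.20")));  // prints 0.30
    }
}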

Read the significand explanation again, from the viewpoint of a salesman. We want to add some prices in the range of a few €, slap a moderate discount on top and call it a day. We don’t care about bits or exponential formulas. That is not part of our domain and it shouldn’t affect our domain or software that works in our domain. Confronting us with technical details reflects negatively on your ability to solve our problems. You seem to burden us with your problems in exchange.

As domain experts, we want only domain-aligned bugs.

Book review: A Philosophy of Software Design

This blog entry is structured in two main parts: The prologue sets the tone, but may be irritating because it doesn’t talk about the book itself. If you get irritated or know the topic well enough, you can skip it and jump to the second part, where I talk about the book. It is indicated by a TL;DR summary of the prologue.

Prologue

Imagine a world where the last 25 years of computer game development didn’t happen. A world where we get the power of 5 GHz octacore computers and 128 GB of RAM, but nobody thought about 3D graphics or interaction design. The graphics of computer games are so rudimentary that they consist of ASCII art and color. In this world, two brothers develop a game that simulates a whole fantasy world in all its details, in three dimensions. The game is an instant blockbuster hit and spawns multiple cinematic adaptations.
This world never happened. The only thing that seems to be from this world is the game itself: Dwarf Fortress. An ASCII art sandbox simulation of a bunch of dwarves that dig into the (three-dimensional) mountains and inevitably discover the fun in magma. Dwarf Fortress is a game told by stories, not graphics. It burdens the player with micro-managing a whole settlement down to the individual sock – yes, no plural. There are left socks and right socks and they are different entities with a different story. Dwarves can literally go mad because they miss their favorite left sock and you didn’t notice in time. And you have to control all aspects of the settlement not by direct order, but by giving hints and suggestions through a user interface that is a game of riddles on its own.
Dwarf Fortress is an impossible game. It seems so out of time and touch with current gaming reality that you can only shake your head on first contact. But, it is incredibly deep and well-designed and, most surprising, provides the kind player with endless fun. This game actually works!

TL;DR: Just because something seems odd at first contact doesn’t mean it cannot work. Go and play Dwarf Fortress!

The book

John Ousterhout is a professor teaching software design at Stanford University and has been writing software for decades. In 1988, he invented the Tcl programming language. He has received many awards, including the Grace Murray Hopper Award. You can say that he knows what he’s doing and what he’s talking about. In 2018, he wrote a book with the title “A Philosophy of Software Design”. This book is a peculiar gem among titles on similar topics.

Imagine a world where the last 20 years of software development books didn’t happen. One man creates software for his whole life and writes down his thoughts and insights, structured into tactical advice, strategic approaches and an overarching philosophy. He has to invent some new vocabulary to express his ideas. He talks about how he performs programming – and it is nothing like today’s mainstream. In fact, it is sometimes the exact opposite of today’s best practices. But, it is incredibly insightful and well-structured and, most surprising, provides the kind developer with endless fun. Okay, I admit, the latter part of the previous sentence was speculative.

This is a book that seems a bit out of touch with today’s mainstream doctrine – and that’s a good thing. The book begins by defining some vocabulary, like the notion of complexity or the concept of deepness. That is rare by itself; most books just use established words to deliver a message. If you think about the definitions, they will probably enrich your perception of software design. They enriched mine, and I have been talking about software design to students for nearly twenty years now.

The most obvious thing that is different from other books with similar content: Most other books talk about behaviours, best practices and advice. Then they throw a bunch of prohibitions into the mix. This isn’t wrong, but it’s “just” anecdotal knowledge. It is your job as the reader to discern between things that may have worked in the past but are outdated, and things that will continue to work in the future. The real question is left unanswered: Why is it so?

“A Philosophy of Software Design” begins by answering the “why” question. If you want to build a hierarchy of book wisdom depth, this might serve as one:

  • Tactical wisdom: What should be done? Most beginner’s books work on this level. They show exactly what goes on, but go easy on the bigger questions.
  • Strategic wisdom: How should it be done? This is the level that the majority of good software design books work on. They give insights about your work ethics and principles you should abide by.
  • Philosophical wisdom: Why should it be done? The reviewed book begins on this level. It explains the aspects of software and sourcecode that work against human perception and understanding and shows ways to avoid or at least diminish those aspects.

The book doesn’t stay on the philosophical level for long and dives deep into the “how” and “what” areas later on. But it does so with the background of an established “why”. And that’s a great reminder that even if you disagree with a specific “what” (or “how”), you should think about the root cause of your disagreement, not just anecdotes.

The author and the book aren’t as out-of-touch with current software development reality as you might think. There is a whole chapter addressed to current “software trends” like agile development and unit tests. It has a total page count of six pages and doesn’t go into details. But it at least mentions the things it doesn’t talk about.

Conclusion

My biggest learning point from the book for my personal habits as a developer is to write more code comments in the way the book proposes. Yes, you’ve read that right. The book urges you to write more comments – but good ones. It talks about why you should write more comments. It gives you extensive guidelines as to how good comments are written and some examples of what these comments look like. After two decades of “write more (unit) tests!”, the message of “write more comments!” is unique and noteworthy. Perhaps we can improve our tools to better support comments in the same way they improved support for tests in the last years.

Perhaps we cannot solve our problems with the sourcecode by writing more sourcecode (unit tests). Perhaps we need to rely on something different. I will give it a try.

You might want to give the book “A Philosophy of Software Design” a try. It’s worth your time and thoughts.

Zero, Maybe, One and Many

In object-oriented programming languages like Java, the compiler becomes more helpful if the application makes rich use of the type system, for example through a strong domain model. There is a whole field of study for type systems, called type theory, that is fascinating and helpful, but does not provide easy rules for beginning software developers to follow. This blog entry proposes a simple set of rules for a specific part of type systems (associations among types) that can be applied to a domain model as a rule of thumb. The resulting model will empower the compiler and the code completion of the IDE to help the developer write correct code.

Data knows data

Even the most basic domain models separate the data into multiple entities (often classes). For example, an employee class has an internal id, but knows about a person class and a salary class that are associated with this employee. This “knowing about” is modeled as a reference to a person object and a salary object. In this case, the reference is probably of the type “one”: The employee object knows about one person object and one salary object. This is the usual way to structure data.

If you learn about the UML notation of data models, you’ll see that associations (aka references between objects) are given great emphasis. There are several different kinds of associations that can be customized by multiplicities and such. It seems that knowing other data is a complex issue for types. It doesn’t have to be this way. Here are four ways of knowing other data that are sufficient for nearly every use case: Zero, Maybe, One and Many.

Four basic types of association

  • Zero: Knowing zero elements of something different is the usual default case: Your employee object probably doesn’t need to know about the payroll object of the company and therefore has no association to it. This means that there is no member variable of the type Payroll in the class Employee. No developer ever modeled a “zero” association by declaring a member variable and setting it to null. This would be ridiculous. We just omit the member variable and are done. Knowing zero elements of something is easy.
  • One: Yes, I’ve omitted Maybe at the moment. I’ll come back to it. Knowing one element of something is also not hard: You declare a member variable of the type, give it a good name (that’s the hard part!) and ensure that every instance of your class (every object) has a valid reference to an object of something’s type. If you call methods on this reference, you call methods on the object you know. As long as you live, the other object cannot disappear. Knowing one element of something is a long-lasting relationship.
  • Maybe: Sometimes, you want to know an element of something that isn’t there yet, or you knew an element of something once, but it is gone. You know “maybe one” element of something. These associations are typically programmed in a cumbersome way by many developers. Instead of embedding the “maybe” aspect in the type system and giving the compiler a chance to help, it is burdened solely onto the developer’s shoulders by implementing the “maybe” like a “one” with the added possibility of a null reference if the element isn’t there. A direct result of this approach is null-checks in the code or NullPointerExceptions at places without such checks. One possibility to elevate the “maybeness” into the type system is to implement the association with a Maybe or Optional type. Instead of referencing a Salary directly that might be null if an employee isn’t salaried anymore, the Employee class references an Optional<Salary> object. This object might “contain” a salary or it might not. With a few adjustments to the conditional flow of the code, this distinction between “something is there” and “something is not there” doesn’t matter anymore. If the code is free from implicit Optional types (references that can be null), a whole category of bugs disappears and the code is freed from manually programmed type system checks. Knowing “maybe one” element is the type of association that requires some thought and is often implemented on the wrong level (see the sketch after this list).
  • Many: As soon as you want to know more than one element of something, you fall into the “many” category. Many-associations are not so easy to handle, because there are so many possibilities to express them. The basic types are arrays or lists. My recommendation is to use lists whenever feasible and only resort to arrays if necessary, because arrays are fixed-length and have the same problem of maybe-null-references: an array index might or might not have been written to yet. If you refrain from storing null references into lists, they express their filling level a lot more clearly than arrays. And given advanced features like iterators, there isn’t even a need to ask for the filling level. An interesting observation is that the list-based many-association can also serve as a zero-, maybe- or one-association. It is possible to replace all other types of association with lists. You probably won’t want to do this, because with the maximization of multiplicity flexibility comes more complexity and reduced readability of the code. You should strive to minimize complexity. Only add many-associations if you really need them. Even just replacing a “maybe” (Optional) with a “many” (List) introduces a lot of unwanted code and uncertainty.
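
Here is a minimal sketch of how these associations can be expressed directly in the field types of an Employee class; the domain classes are bare placeholders:

import java.util.ArrayList;
import java.util.List;
import java.util.Optional;

// placeholder domain types, just enough to show the associations
class Person { }
class Salary { }
class Project { }

public class Employee {
    private final Person person;                               // "one": always there, for the whole lifetime
    private Optional<Salary> salary;                           // "maybe": the type says it can be absent
    private final List<Project> projects = new ArrayList<>();  // "many": zero or more, never null entries

    public Employee(Person person, Optional<Salary> salary) {
        this.person = person;
        this.salary = salary;
    }

    public void endSalariedEmployment() {
        this.salary = Optional.empty();                        // no null, no NullPointerException lurking
    }

    public Optional<Salary> salary() {
        return salary;
    }

    public List<Project> projects() {
        return projects;
    }
}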

Advanced types of association

Of course, there are many more types of association that you’ll eventually need. A good example is the qualified association, often implemented by a Map/Dictionary that translates from the qualifier type to the qualified type. But they are rare in comparison to the four basic types.

Summary

If you get your basic associations right, your domain model will help your compiler and IDE to support and guide you. This is an upfront investment that pays off many times over in the course of the project and eliminates the burden of paying attention to accidental complexity like null pointers. Your project’s domain probably doesn’t contain null pointers, but it does contain the concepts of knowing zero, maybe, one and many.

Did Java just flip the switch?

Twenty years ago, a groundbreaking book was published: Refactoring by Martin Fowler. In this book, we learnt about 72 ways to improve our code and, even more importantly, over 20 unique signs of bad code, so-called code smells. Among these code smells were obvious ones like “Duplicated Code” and “Long Parameter List” and more specific ones like “Temporary Field” and “Switch Statements”.

Switch is the main offender

What is wrong with a Switch Statement, you ask? Well, nearly everything. Let’s review three flaws of a classic switch statement in Java on different levels:

  • Syntax: The syntax of a switch is clunky at best. Whoever thought that “fall-through” should be the default behaviour and subsequently forced millions of developers to “break” their cases is responsible for so much unnecessary extra work. Think about how a “fallthrough” statement instead of a “break” could have changed the world.
  • Code Design: Each switch statement is an inherent complexity hog. At least if you measure classic metrics like McCabe’s cyclomatic complexity. All but the smallest switches result in complexity counts that go through the roof. And a small switch is just a syntactically bloated if/else.
  • Programming Paradigm: The reason Martin Fowler advocated against using switch statements is because the alternative, using polymorphism to implicitly switch over the object type, wasn’t common knowledge 20 years ago. Switch statements were the cornerstones of explicit conditional logic and were prone to repetition, leading to duplicated code – another code smell.

There are more things wrong with a classic switch statement, but the logic is clear: Take away the most concentrated form of explicit conditional logic and developers will adjust their approach and adopt more diverse paradigms. If you think this through, you can also argue that taking away the “else” keyword (as the Object Calisthenics do) or even the “if” statement (as advocated by the anti-if campaign) leads to even more diversity and progressive programming.

Switch in rehabilitation?

For me, a switch statement was nearly always the wrong choice for a given problem. And experienced thinkers like Martin Fowler backed my opinion, so I couldn’t be wrong – right?

In the second edition of Refactoring, published early this year, Martin Fowler changes his position towards the Switch Statement considerably. A single switch isn’t the gateway drug to imperative programming anymore. You’ll need to have “Repeated Switches” to count as a code smell. You can still use “Replace Conditional with Polymorphism”, but the enthusiasm about implicit conditional structures like polymorphism has faded. Martin Fowler writes that today, we all know about the different ways to express conditional logic. I’m not so sure. He also writes that many languages support more sophisticated forms of switch statements. Ok, but what about the mainstream languages like Java?

My biggest problem with the classic switch statement was that it was a “single purpose” structure. It could only be used to jump to a limited number of code addresses based on a limited kind of criterion. I prefer code structures that are “dual purpose” or even “multi-purpose”. When Java’s switch statement got upgraded to switch over Enums (Java 5) and Strings (Java 7), it got more powerful, but still only supported one use case: explicit branching over a condition.
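
For a concrete picture, the classic statement form of a simple value mapping – a day of the week to the letter count of its name, assuming day holds a java.time.DayOfWeek – looks roughly like this; keep it in mind for the next section:

// explicit breaks, a mutable variable and fall-through as a constant hazard
int numLetters;
switch (day) {
    case MONDAY:
    case FRIDAY:
    case SUNDAY:
        numLetters = 6;
        break;
    case TUESDAY:
        numLetters = 7;
        break;
    case THURSDAY:
    case SATURDAY:
        numLetters = 8;
        break;
    case WEDNESDAY:
        numLetters = 9;
        break;
    default:
        throw new IllegalStateException("unexpected day: " + day);
}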

Switch with dual use

In the upcoming Java 12 (yes, we’ve come a long way in terms of version numbers since Java 8), the JDK Enhancement Proposal JEP 325 will be included: Switch Expressions. It is marked as a preview language feature, meaning it is ready for usage, but open for discussion – and you’ll have to enable it explicitly. In the grand scheme of things for Java, it is a stepping stone towards JEP 305: Pattern Matching for instanceof, which will change the switch statement even further.

With Switch Expressions, you can use a switch statement to essentially inline a method that uses lots of explicit conditionals to map one value to another:

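// the same mapping as the classic statement above, now as an expression that yields a result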
int numLetters = switch (day) {
    case MONDAY, FRIDAY, SUNDAY -> 6;
    case TUESDAY                -> 7;
    case THURSDAY, SATURDAY     -> 8;
    case WEDNESDAY              -> 9;
};

Your switch can now return a result. And with that improvement, it isn’t single-purpose anymore. This is the moment when a switch statement isn’t the most clunky and error-prone way to solve a problem anymore, but maybe even elegant and straight to the point.

This is the moment I definitely change my opinion about the switch statement (in Java) and welcome it back into my solution toolbox. How could such an ugly duckling become such a beautiful swan? And why did this take us twenty years?

You can read more about the new switch statement in this brilliant blog post from Nicolai Parlog.

Anyways, the “else” keyword is now even more obsolete than ever.