Domain-aligned bugs

Imagine that you are a user of typical enterprise software that handles commercial products and their prices. There are different prices in the software that are somehow related to each other. There is the purchase price, which indicates your cost if you buy the product. There is the retail price that gets listed in your price lists and is paid by your customers, should they buy the product. You probably already figured out that the retail price should never be lower than the purchase price, because that would mean you lose money with every successful sale.

Let’s say that the enterprise software not only handles products, but also parts. Several parts combined, with some manufacturing effort, result in a product. Each part has a purchase price, the resulting product has a retail price. The retail price of the product should be higher than the sum of purchase prices of the parts. If it isn’t, you lose the costs of the manufacturing effort and some extra money with every successful sale.

If for any reason you cannot clearly estimate your manufacturing effort, the enterprise software has another input field for an amount of money that you can add to the sum of the parts’ costs. We call this field the “sales bonus”. So, if you sell a product made up of parts, your customer has to pay a price that consists at least of the purchase prices of the parts plus the sales bonus. Of course, your customer has an individual discount percentage that needs to be subtracted from the total price. Are you still following?
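A quick example with made-up numbers: a product built from three parts with purchase prices of 10 € each and a sales bonus of 5 € has to cost at least 35 €. A customer with an individual discount of 10 % would then pay 31,50 €.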

You are now thinking in the domain of price determination and financial mathematics. If you were the user of said enterprise software, you’d probably expect some bugs like these:

  • It is possible to enter a retail price lower than the purchase price
  • The price of products manufactured from parts isn’t calculated correctly
  • It is possible to enter a negative sales bonus
  • The total price with discount could be lower than the sum of purchase prices of the parts without a warning

All of them are bugs in the domain. All of them can be explained to a domain expert or a user with terms and concepts from the domain.

But what about the bug where you sell a product that consists of three parts, each with a retail price of 10 €, and a sales bonus of 5 €? You want to create a quote for your customer and the price shows up as 34,99999999998 €. You are a bit bewildered and try to counteract the apparent rounding error by changing the sales bonus to 5,00000000002 €. After this change you get another crazy total price, and the prices in the database differ from what you entered, too. Everything seems to destabilize and deviate further and further from clear-cut prices.

As a programmer, you know what happened. You know what caused this effect of numerical instability. Somebody stored monetary values in a floating point number. You know that is a bad idea and you’d never do this. But this blog post isn’t about you or what you should or shouldn’t do. It is about the user, an expert in his domain, who stumbles over the bug as described and has to make some decision on how to fix it. This user cannot use any knowledge from the domain to even understand the mechanics of the bug. You, as the programmer, cannot explain this bug in terms of things the user already knows. You need to be vague (“the software doesn’t store the exact values, just approximations”) or introduce additional complexity (“we store this value by splitting it into a significand and multiplying it by a factor consisting of a fixed base and an exponent. We can omit the base and just store the significand and the exponent and express a very large numerical range in just a few bits. Think about how cool that is!”).
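If you want to see the effect for yourself, here is a minimal Java sketch (the prices are made up, but the printed digits are what a typical JVM produces):

import java.math.BigDecimal;

public class MoneyDemo {
    public static void main(String[] args) {
        // three part prices of 10.10 € summed as doubles
        double asDouble = 10.10 + 10.10 + 10.10;
        System.out.println(asDouble); // prints 30.299999999999997

        // the same sum with BigDecimal, constructed from strings
        BigDecimal asDecimal = new BigDecimal("10.10")
                .add(new BigDecimal("10.10"))
                .add(new BigDecimal("10.10"));
        System.out.println(asDecimal); // prints 30.30
    }
}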

Read the significand explanation again, from the viewpoint of a salesman. We want to add some prices in the range of a few €, slap a moderate discount on top and call it a day. We don’t care about bits or exponential formulas. That is not part of our domain and it shouldn’t affect our domain or software that works in our domain. Confronting us with technical details reflects negatively on your ability to solve our problems. You seem to burden us with your problems in exchange.

As domain experts, we want only domain-aligned bugs.

Timestamps make horrible identifiers

Not long ago, I struggled with a system that uses timestamps as entity identifiers. What can I say? Timestamps aren’t meant to identify anything other than a specific point in time. Don’t use them as entity identifiers, ever. If you want to know why, I invite you to read on. The blog post is written in Freytag’s dramatic structure for added effect.

Exposition

We’ve designed a system that runs on multiple instances that communicate in all sorts of ways. A central archive instance stores all data related to measurements. The whole network revolves around the notion of measurement. Measurement data is the most precious data, and all instances will either produce or consume data based on these measurements.

Most important for human operators is an instance that lets you view all existing measurement data. Let’s call it the viewer. The viewer displays an overview list of all measurements in a given context and lets the operator choose to view ever more details of any of them. To be able to provide the overview list as fast as possible, we added a cache that holds the information.

Rising action

This measurement list cache was the source of all kinds of peculiar behaviour in the system. Most, but not all, measurement data was incomplete. The list cache entries were assembled from different sources that were available at different times, so it seemed that while one part of the data got written to the cache, another part couldn’t be written for whatever reason. The operator could load detailed data for a few measurements, but the majority just produced an error message that the data couldn’t be found (despite it being present).
The most obviously broken functionality left the following trace in the log files (paraphrased):

- storing measurement at 2016-02-28T13:25:55.189+01:00 into the list cache
- measurement stored
[...]
- loading measurement at 2016-02-28T13:25:55.189+01:00 from the list cache
- error: measurement not found in list cache

So, the system is essentially telling me that it can’t load some data it just stored. As you can imagine, this may lead to some questions about the sanity of the database product underneath.

Climax

After some investigation and fruitless integration testing, it dawned on me: the problem wasn’t timing or the database. All the bugs could be explained with only one circumstance: measurements were ultimately identified by their timestamp, the moment the measurement was made. There’s also a location, a type and some other information in the identifier for each measurement, but only the timestamp changes between two measurements in the same narrow context. And the timestamp was stored in different precisions, depending on the origin of the measurement identifier. Most identifiers were created at the measurement-producing system instances (let’s call them measurers) and had millisecond precision. As soon as they got stored in the production database (but not our development database), they lost the milliseconds. And some of the most important measurement data got exported to third-party systems, using a minute-based precision. So we had one measurement identifier in the system, but with three different types, mostly incompatible with each other.
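The effect is easy to demonstrate with Java’s time API (the timestamp is the one from the log excerpt above, converted to UTC):

import java.time.Instant;
import java.time.temporal.ChronoUnit;

public class PrecisionDemo {
    public static void main(String[] args) {
        // the identifier timestamp as created by a measurer, with milliseconds
        Instant measured = Instant.parse("2016-02-28T12:25:55.189Z");
        // the same timestamp after the production database dropped the milliseconds
        Instant stored = measured.truncatedTo(ChronoUnit.SECONDS);
        System.out.println(measured.equals(stored)); // prints false: the lookup fails
    }
}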

Falling action

That’s why the log excerpt above never occurred in development, but did in production: the measurement is stored in the database, the used identifier gets passed around in the software, but a query on the exact same identifier in the database yields no result, because the timestamps now differ in the millisecond range. And the strange effect that sometimes everything worked just fine? That’s when the milliseconds were zero by chance. Given that most actions in the system are scheduled and performed automatically exactly on the zero mark, the zero-milliseconds case happened more often than it would have with an even distribution.

Our system dealt with three types of measurement identifiers: Millisecond-precise identifiers produced by the measurers, second-precise identifiers used by the measurement list cache and minute-precise identifiers used (and sometimes fed back into the system) by the data export. These identifiers were incompatible even for the same measurement most of the time, but not always. In unit tests, the timestamps were made up and didn’t reveal the problem properly (who thinks about odd milliseconds when making up a timestamp?).

My solution was to pull this incompatibility up into the type system. Instead of one measurement identifier, there are now three measurement identifiers: MillisecondPreciseIdentifier, SecondPreciseIdentifier and MinutePreciseIdentifier. An identifier of higher precision can be converted to an identifier of lower precision, but not the other way around. Every time a measurement identifier is created, it needs to explicitly state the precision of its timestamp. This made the compiler highlight the problematic usages clearly as type conflicts and therefore made dealing with the problem much easier.
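A minimal sketch of what these types could look like (the class names follow the text above; all implementation details are my assumptions):

import java.time.Instant;
import java.time.temporal.ChronoUnit;

final class MinutePreciseIdentifier {
    private final Instant timestamp;

    MinutePreciseIdentifier(Instant timestamp) {
        // the stated precision is enforced at construction time
        this.timestamp = timestamp.truncatedTo(ChronoUnit.MINUTES);
    }
}

final class SecondPreciseIdentifier {
    private final Instant timestamp;

    SecondPreciseIdentifier(Instant timestamp) {
        this.timestamp = timestamp.truncatedTo(ChronoUnit.SECONDS);
    }

    // converting down to a lower precision is explicit and allowed...
    MinutePreciseIdentifier toMinutePrecision() {
        return new MinutePreciseIdentifier(this.timestamp);
    }
}

final class MillisecondPreciseIdentifier {
    private final Instant timestamp;

    MillisecondPreciseIdentifier(Instant timestamp) {
        this.timestamp = timestamp.truncatedTo(ChronoUnit.MILLIS);
    }

    // ...but there is deliberately no conversion back to a higher precision
    SecondPreciseIdentifier toSecondPrecision() {
        return new SecondPreciseIdentifier(this.timestamp);
    }
}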

Revelation

Choosing a timestamp as a vital part of a (measurement) identifier was a mistake from the beginning. The greater problem was the omission of the timestamp’s precision. Timestamps perform more like floating-point numbers and less like integers, even if every timestamp can be represented by a long. As soon as I made the precision of each timestamp clear to the compiler, the bugs revealed themselves. The annoying difference between the development and production databases would have been detected much sooner, because a millisecond-precise timestamp will now warn in the log files if its millisecond part is zero. As soon as this log entry is seen very often, it’s clear that something is wrong. The new datatypes not only serve as a clearer API contract definition tool, but also as a runtime sanity check.

If you don’t want to repeat this mistake, keep in mind that each timestamp, date or whatever time-related data type you use will inherently have a maximum precision. As soon as you mix different precisions in the same data type, you’re going to have a bad time. Explicitly state the required precision in your type system and your compiler will keep an eye on it, too.

The Story of a Multithreading Sin

In my last blog entry, I wrote about multithreading pitfalls (in Java), and ironically, this was the week when we got a strange bug report from one of our customers. This blog entry tells the story of the bug and adds another multithreading pitfall to the five I’ve already listed in my blog entry “When it comes to multithreading, better be safe than sorry”.

The premise

We developed software that runs on several geographically distant, independent “stations” that collect a multitude of environmental measurement data. This data is preprocessed and stuffed into data packages, which are periodically transferred to a control center. The software of this control center, also developed by us, receives the data packages, stores them on disk and in a huge database and extracts the overall state of the measurement network from the raw data. If you describe the main task of the network on this level, it sounds nearly trivial. But the real functionality requirements are manifold and the project grew large.

We kept the whole system as modular as necessary to maintain an overall grasp of what is going on where in the system and established sufficient automatic test coverage for the most important parts. The system is still under active development, but the main parts of the network have been in production use without real changes for years now.

The symptoms

This might explain why we were very surprised when our customer told us that the control center had lost some data packages. Very soon, it turned out that the control center would randomly enter a state of “denial”. In this state, it would still accept data packages from the stations and even acknowledge their arrival (so the stations wouldn’t retry the transmission), but would only write parts of the package, or nothing at all, to the disk and database. When the control center entered this state, it would never recover from it. But when we restarted the software manually, everything would run perfectly fine for several days and then revert back into denial without apparent trigger.

We monitored the control center with every means at our disposal, but its memory consumption, CPU footprint and threading behaviour were without noticeable problems even when the instance was in its degraded state. There were no exceptions or uncommon entries logged in the log files. As the symptom happened randomly, without external cause and with no chance of reversal once it happened, we soon suspected some kind of threading issue.

The bug

The problem with a threading issue is that you can’t just reproduce the bug with a unit or system test. We performed several code reviews until we finally had a trace. When a data package arrives, a global data processing lock is acquired (so that no two data packages can be processed in parallel) and the content of the package is inspected. This might trigger several network status changes. These change events are propagated through the system with classic observer/listener structures, using synchronous calls (normal delegation). The overall status of the network is translated into a human-readable status message and again forwarded to a group of status message listeners. This is a synchronous call again.

One of the status message listeners was the software driver for an LED ticker display. This module was a recent addition to the control center’s hardware outfit and was used to display the status message prominently to the operators. Inside this LED software driver, some bytes are written to a socket stream and then the driver awaits an answer from the hardware device. To avoid the situation that two messages are sent to the device at the same time, a lock is acquired just before the message is sent. This code attracted our attention. Let’s have a look at it:

private Message lastMessage = new Message();

public void show(Message message) {
    synchronized (this.lastMessage) {
        writeCommandAndWaitForResponse(Command.SHOW_TEXT, message.asBytes());
        this.lastMessage = message;
    }
}

The main problem here is the object the lock is acquired upon: the reference of lastMessage is mutable! We call this a liquid lock, because the lock isn’t as solid as it should be. It’s one of the more hideous multithreading pitfalls, as it looks like everything’s fine at first glance. But this lock doesn’t have a complete “locking” effect, because each caller may acquire the lock of a different instance. And a lock with flawed locking behaviour is guaranteed to fail (in production). The liquid lock is like the bigger brother of the local lock. It isn’t local, but its mutability causes the same problems.

The bug finally turned out to be caused by the liquid lock in the LED display driver that got notified of system message changes when a data package arrived. But only if multiple messages were sent to the device at once, discarding some of the necessary answers, or if the connection to the LED hardware failed in the midst of a transmission, would the system not return from the write attempt. If one thread didn’t return to the data package processor, the global data processing lock was never freed (read the start of this chapter again, this is the most important lock in the system!). And while the data processing lock was still held, all other data packages would still be received, but would pile up waiting to obtain the lock. The lock, however, would never be released by the thread waiting on an answer from a hardware device that had no intention of sending another answer. This was when the control center appeared to be healthy but didn’t process any data packages anymore.

The conclusion

If you want to avoid the category of liquid lock multithreading bugs, make sure that all your lock instance references are immutable. Being final is an important property of lock instance references. Avoid retrieving your locks from notoriously mutable data structures like collections or arrays. The best thing you can do to avoid liquid locks is to “freeze” all your lock instances.
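A fixed version of the driver code could use a separate, immutable lock object (a sketch; the field name is made up):

private final Object messageLock = new Object();
private Message lastMessage = new Message();

public void show(Message message) {
    synchronized (this.messageLock) {
        writeCommandAndWaitForResponse(Command.SHOW_TEXT, message.asBytes());
        this.lastMessage = message;
    }
}

Now every caller synchronizes on the same instance, no matter how often lastMessage is reassigned.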

Another insight from this story is that software modules have to be separated thread-wise, too. It was a major design flaw to let the data processing thread, while holding the main processing lock, descend down into the deep ends of the LED driver, eventually getting stuck there forever. Some simple mechanisms like asynchronous listener notification or producer/consumer queues for pending transmission requests would have helped to confine the effects of the liquid lock bug inside the LED module, as the sketch below shows. Without proper thread separation, it took down the whole software instance.
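As an illustration, a producer/consumer decoupling could look like the following sketch (all names are made up, this is not the actual driver code):

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

class AsynchronousLedDriver {
    private final BlockingQueue<String> pendingMessages = new ArrayBlockingQueue<>(100);

    AsynchronousLedDriver() {
        Thread worker = new Thread(this::transmitLoop, "led-driver");
        worker.setDaemon(true);
        worker.start();
    }

    // called by the data processing thread; returns immediately
    void show(String message) {
        // if the queue is full, drop the message instead of blocking the caller
        this.pendingMessages.offer(message);
    }

    private void transmitLoop() {
        try {
            while (true) {
                String message = this.pendingMessages.take();
                writeCommandAndWaitForResponse(message);
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    private void writeCommandAndWaitForResponse(String message) {
        // the actual hardware communication happens here and may block or fail,
        // but now only the dedicated driver thread is affected
    }
}

A stuck transmission now only stalls the driver’s own thread; the data processing lock is released as soon as show(…) returns.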