This is another story about a small misstep during programming that ended up causing a lot of trouble. The core problem has nothing to do with a specific programming language or any particular technical aspect, but with the concept of assumptions. Let’s have a look at what an assumption really is:
assumption: The act of taking for granted, or supposing a thing without proof; a supposition; an unwarrantable claim.
A whole lot of our daily work is based on assumptions, even about technical details that we could easily prove or refute with a simple test. And that isn’t necessarily a bad thing. As long as the assumptions are valid or the consequences of being wrong are harmless, it saves us valuable time and resources.
A hidden risk
But what if we aren’t aware of our assumptions? What if we believe them to be fact and build functionality on them without proper validation? That’s when little fiascos happen.
In his chapter “Challenge assumptions – especially your own” in the book “97 Things Every Software Architect Should Know”, Timothy High cites the lesser-known Wethern’s Law of Suspended Judgement:
Assumption is the mother of all screw-ups.
Much too often, we only realize afterwards that the crucial fact we relied on wasn’t that trustworthy. It was another false assumption in disguise.
Distributed updates
But now it’s storytime again: Some years ago, a software company created a product with an ever-changing core. The program was used by quite a few customers and could be updated over the internet. For a current analogy, think of an anti-virus scanner with its relatively static scanning engine and a virus signature database that gets outdated within days. To deliver the update patches to the customer machines, a single update server was sufficient. The patches were small and released weekly or daily, so a dedicated machine could handle the load.
To determine if it needed another patch, the client program contacted the update server and downloaded a single text file that contained the list of all available patches. Because the patches were incremental, the client program needed to determine its patch level, calculate the missing patches and download and apply them in the right order. This procedure did its job for some time, but one day, the size of the patch list file exceeded the size of a typical patch, so that the biggest part of network traffic was consumed by the initial list download.
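To make the mechanism a bit more tangible, here is a minimal Python sketch of such a client-side check. The server URL, file names and patch numbering are assumptions for illustration, not the company’s actual code:

```python
# Minimal sketch of the client's incremental update logic.
# Server URL, file names and patch numbering are assumed for illustration.
import urllib.request

UPDATE_SERVER = "http://updates.example.com"  # hypothetical update server


def fetch_patch_list():
    """Download the text file listing all available patch versions, one per line."""
    with urllib.request.urlopen(f"{UPDATE_SERVER}/patchlist.txt") as response:
        return [int(line) for line in response.read().decode().splitlines() if line.strip()]


def missing_patches(current_level, available):
    """All incremental patches above the client's level, in the order to apply them."""
    return sorted(version for version in available if version > current_level)


def apply_patch(data):
    """Placeholder: actually applying a patch is product-specific."""


def update(current_level):
    for version in missing_patches(current_level, fetch_patch_list()):
        with urllib.request.urlopen(f"{UPDATE_SERVER}/patch-{version}.bin") as response:
            apply_patch(response.read())
        current_level = version
    return current_level
```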
Cutting network traffic
A short-term measure was to shorten the patch list by removing ancient versions that were surely out of use by then. But it became apparent that the patch list file would become the system’s Achilles’ heel sooner or later, especially if the update frequency moved up a gear, as was planned. So the plan was to eliminate the need to download the patch list if it hadn’t changed since the last contact. Instead of directly downloading the patch list, the client would now download a file containing only the modification date of the patch list file. If the modification date was newer than the last download, it would continue to download the patch list. Otherwise, it would refrain from downloading anything, because no new patches could be present.
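The intended optimization could have looked roughly like this sketch; again, the names and the timestamp format are assumptions:

```python
# Sketch of the traffic-saving check: the full patch list is only downloaded
# if the server-side modification date is newer than the one seen last time.
import urllib.request

UPDATE_SERVER = "http://updates.example.com"  # hypothetical update server


def remote_modification_date():
    """Fetch the tiny file containing only the patch list's modification date."""
    with urllib.request.urlopen(f"{UPDATE_SERVER}/patchlist.lastmod") as response:
        return float(response.read().decode().strip())  # e.g. a unix timestamp


def patch_list_if_changed(last_seen_date):
    """Return the fresh patch list, or None if nothing changed since the last contact."""
    current_date = remote_modification_date()
    if current_date <= last_seen_date:
        # Assumption: an unchanged modification date means unchanged content.
        return None
    with urllib.request.urlopen(f"{UPDATE_SERVER}/patchlist.txt") as response:
        return response.read().decode()
```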
Discovering the assumption
The new system was developed and put into service shortly before the constant traffic would have exhausted the capabilities of the single update server. But the company admins watched in horror as the traffic didn’t abate; it continued at the previous level and even increased a bit. The number of requests to the server nearly doubled. Apparently, the clients now always downloaded both the modification date file and the patch list file, regardless of their last contact. The developers must have screwed up their implementation.
But no error was found in the code. Everything worked just fine if – yes, if (there’s the assumption!) – the modification date was correct. A quick check revealed that the date in the modification date file was in fact the modification date of the patch list file, so this wasn’t the cause of the problem either. Until somebody discovered that the number in the modification date file on the update server changed every minute, and that the patch list file in fact got rewritten very often, regardless of changes in its content. A simple cron job, running every minute, pushed the latest patch list file from the development server to the update server, changing its modification date in the process and updating the modification date file accordingly.
The assumption was that if the patch list file’s content did not change, the same would be true for its modification date. This held true during development, but went into sabotage mode on the live system once the cron job got involved.
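The effect is easy to reproduce. This little sketch (not part of the original system) shows that rewriting a file with unchanged content still advances its modification date, which is exactly what the cron job did every minute:

```python
# Rewriting a file with identical content still bumps its modification date.
import os
import time

with open("patchlist.txt", "w") as f:
    f.write("1\n2\n3\n")
first_mtime = os.path.getmtime("patchlist.txt")

time.sleep(1)  # make the two timestamps distinguishable

with open("patchlist.txt", "w") as f:
    f.write("1\n2\n3\n")  # same content, as pushed by the cron job
second_mtime = os.path.getmtime("patchlist.txt")

print(second_mtime > first_mtime)  # True: the date changed, the content did not
```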
Record and challenge your assumptions
The developers cannot be blamed for the error in this story. But they could have avoided it if they had adopted the habit of recording their assumptions and communicating them to the administrators in charge of the live system. The first step towards this goal is to make the process of relying on assumptions visible to yourself. Once you know about your assumptions, you can record them, test against them (to make them fact in your world) and subsequently communicate them to your users. This whole process won’t even start if you aren’t aware of your assumptions.
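In this story, the recorded assumption could even have been turned into an automated check on the live system. The following hypothetical sketch compares the modification date against a content hash and flags the case where the date moves while the content stays the same:

```python
# Hypothetical check for the recorded assumption:
# "the patch list's modification date only changes when its content changes".
import hashlib
import os


def patch_list_state(path="patchlist.txt"):
    """Capture the current modification date and a hash of the content."""
    with open(path, "rb") as f:
        content_hash = hashlib.sha256(f.read()).hexdigest()
    return os.path.getmtime(path), content_hash


def assumption_violated(previous, current):
    """True if the date moved although the content stayed the same."""
    (prev_mtime, prev_hash), (cur_mtime, cur_hash) = previous, current
    return cur_mtime != prev_mtime and cur_hash == prev_hash
```

Run periodically on the update server, such a check would have exposed the cron job’s behaviour long before the admins had to watch the traffic graphs in horror.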
I think this could also be seen as a variant of leaky abstractions. But then, abstractions and assumptions are closely related anyway 🙂
For reference: http://www.joelonsoftware.com/articles/LeakyAbstractions.html