Using the File System as an Interaction Device

In a recent project, my job was to build a scientific data processing pipeline for a new algorithm that wasn’t set in stone yet. Part of my work would be to explore different mathematical formulas interactively with the customer.

My usual approach to projects is a “risk first” strategy: I try to identify the riskiest or most demanding part of the project and deal with it first. This essentially resembles the “fail fast” mindset, only applied before anything has actually failed.

In the case of the calculation pipeline, the riskiest part, and at the same time the functionality that matters most to the customer, was the pipeline itself. If we were able to implement a system that could transform the given entry data into the desired results, we would have an end-to-end prototype and the means to explore different mathematical approaches.

The pipeline consists of different steps, each of which can be described as a complex transformation. The first step/transformation takes a file in a proprietary data format and converts it into a big JSON file. The main effort of this step is a deep physical analysis of the data contained in the proprietary format. This analysis requires a lot of thought, exploration and work, but can be seen as a black box that the data traverses on its way from proprietary format to JSON.

The next step takes the JSON input and extracts the necessary information required by the following step. It is essentially a data reduction operation.

The third step feeds the analyzed, reduced data into the formulas and stores the calculation result.

The fourth step aggregates the calculation results into a daily time series report in a format that can be read by a spreadsheet application. This report is the end product of the pipeline and will be used to make decisions and to rule out certain environmental hazards.

The main difference between this project and virtually every project before it is that I didn’t write any user interface code. The application’s main window is still blank. The whole interaction is based on files in the file system: the interaction of the system with the other systems that provide the entry data, the interaction of the pipeline steps among each other, and the interaction with the human user.

The system periodically checks for the existence of new entry data. If some is found, it is copied into the “inbox” directory of the first step. The first step periodically checks for the existence of files in its inbox and processes them into its “outbox”, which conveniently serves as the inbox of the second step. You probably get the idea by now. All the steps in the system, including the upstream data fetching routine, are actors in a file-based actor model. The files serve as messages from one actor to another. The file system with its directory structure is the common communication channel that passes the messages around.
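
A minimal sketch of one such step in Java could look like this; the directory names, the scheduling and the process() method are assumptions for illustration, not the project’s actual code:

import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;

public class AnalysisStep {
    private final Path inbox = Path.of("pipeline/analysis/inbox");
    private final Path outbox = Path.of("pipeline/analysis/outbox");
    private final Path done = Path.of("pipeline/analysis/done");
    private final Path failed = Path.of("pipeline/analysis/failed");

    // called periodically by a scheduler; every file in the inbox is one message
    public void poll() throws IOException {
        try (DirectoryStream<Path> messages = Files.newDirectoryStream(inbox)) {
            for (Path message : messages) {
                try {
                    Path result = process(message); // the actual transformation, a black box here
                    Files.move(result, outbox.resolve(result.getFileName()));
                    Files.move(message, done.resolve(message.getFileName()));
                } catch (Exception problem) {
                    Files.move(message, failed.resolve(message.getFileName()));
                }
            }
        }
    }

    private Path process(Path message) {
        // placeholder: analyze the proprietary format and write the result file
        throw new UnsupportedOperationException("not part of this sketch");
    }
}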

Each processing step is an actor node with input and output storages

One advantage of this approach is that the file system viewer application of the operating system can be used as the (graphical) user interface. By opening the appropriate directories and viewing their content, the user can supervise the operating state of the system. The system can report problems by moving the incoming message not into the step’s “done” directory, but into its “failed” or “problem” directory. If several directories are on display at once, the user can follow a specific piece of data through the pipeline and view the intermediate results. For domain-specific reasons, the actors in this project also have the result directory “omitted” for data that will not be processed any further because some domain rule has determined a cancellation.

A user can even manipulate the data’s flow by moving files out of or into a specific directory. Let’s say we want to calculate a certain batch of data again: we can just copy the files from the “done” directory of the first step into its “inbox” and the system will process them again.

Because the analysis step takes some time while the calculation step is surprisingly fast, we can repeat just the calculation by moving not the initial data files, but the analyzed and reduced entry files of the calculation step. Using this approach, we can try different mathematical formulas by stopping the system, swapping the calculation step with a new version, starting the system again and moving the desired entry files into its inbox.

Using the file system as an interaction device for the user and the system’s parts has many immediate advantages, but some drawbacks, too. One drawback is performance. Using the hard disk for data transfer is the slowest possible way to bring data from step X to step X+1. If your system is required to have high throughput or low latency, this approach isn’t suitable. My project has a low, forecastable throughput and a latency requirement that is measured in minutes or seconds, not in milliseconds or even nanoseconds. The data can spend some time in the file system, because the first step alone takes several seconds for each file.

Another drawback is a certain fragility of the communication medium, the file system. You have to account for concurrent reads, writes or even deletes. The target platform of my system (Microsoft Windows) exhibits signs of exhaustion if the number of files in one directory grows too large. This means that your file selection, already a costly operation, becomes even more costly if the system is put under pressure. If your throughput is usually steady, which is the case in my project, this won’t be a problem. Until you manually copy 100k files into an inbox for a swift recalculation and discover that the file copy process alone takes several minutes.

Of course, the system cannot operate without a graphical user interface forever. But some basic interactions with the system will probably just result in some files being copied from one directory to another one in the background.

Useful background metrics: Distance to Disaster

This blog post would not have happened without my wife, who, upon learning that I use this metric in my everyday life, urged me to write about it.

I often categorize events that happen in my life. Due to my nature, I analyze detrimental events more thoroughly than things that “worked as intended”. One tool for my analysis is a measurement that I call “Distance to Disaster” (DtD). It indicates the “distance”, the “bad faith work” or the “bad decisions” that need to be invested in order for disaster to happen. Let me explain:

If we wait for a train, we can stand in the middle of the platform and maximize the physical distance to the tracks in front of and behind us. Or we can stand right at the edge and minimize the physical distance to one track. If the track we chose for our position is the one where our train will arrive, we have a very low distance to disaster. We can lose our balance and fall onto the tracks. We can misjudge the physical dimensions of the train and get hit by something. In short: Nobody wants to wait for a train with a minimized (physical) distance to disaster.

Another measurement unit for the metric is “bad faith work”. Let’s assume you want to steal my most prized possession. That would be a disaster for me. You need to gain access to my home (step 1), then open the safe (step 2) and then find the key to the safe deposit box at my bank (no-brainer, not a step on its own). Afterwards, you need to gain access to the bank room before I recognize my loss (step 3) and open the box that has a two-lock system (step 4). It is probably easier to come up with a plan to circumvent some steps and attack the bank directly. If you just succeeded with step 1, my most prized possession is probably still very secure, because a DtD of 3 is rather high.

And then, there are “bad decisions”. Let’s say you write code and accidentally hit “load” instead of “save”. If you are me in the early nineties, you just overwrote your code with an empty file. I still remember that day, and it didn’t help that “save” was bound to F5 and “load” to F6. One bad decision led to disaster.

Now imagine you still use the same shitty IDE (it was the GWBasic editor), but with modern version control. You commit early and often. You accidentally hit “load” instead of “save” and lose your last few minutes of work. Sad, but not a disaster. Even if you delete the whole file, you can restore your last commit as often as you want. Using version control adds +1 to your “bad decision distance” to disaster.

You probably understand the concept by now. You can specify what a “disaster” is and then measure your current distance to it by trying to come up with the fewest steps that lead to it.

In our normal everyday life, we are surprisingly often only one step away from disaster, but it never happens. That’s a reassuring reality, but shouldn’t keep us from thinking about how to increase the step count without much effort.

One typical implementation of this approach is a modest backup strategy for all data that you intend to keep. Another one is to have spare parts for crucial devices in stock (the “hardware backup”).

Don’t get me wrong: It’s not about maximizing the DtD. It’s about recognizing the cheap and easy opportunity to add one more step to the distance.

And it’s not about “disaster” in the sense of life-altering, stop-the-world events. A “disaster” can be anything you don’t want to happen. Try to bring a reasonable distance between you and this thing if possible.

Now that you know about the concept, can you find examples of cheap and easy DtD improvements in software development? Let us know in the comments!

Addendum for my co-workers: Our ETOD metric is the DtD metric applied to financial resources.

And another addendum: I see a lot of similarities to the field and mindset of accident prevention. For example, airplane cockpits are designed in a way that dangerous actions require the actuation of two control elements, like switches or buttons, that are located on different sides of the cockpit. Making it two buttons instead of one adds “bad decision” distance. Placing the buttons in different directions adds “intent distance”.

In software user interaction designs, we try to replicate the second button with a confirmation dialog (“Are you sure?”). It adds to the “bad decision” distance but often lacks in the “intent distance” dimension. I don’t want to be responsible for cumbersome “maximized mouse distance” dialogs, though.

What else can we do?

A common code structure to implement a decision is the if-statement, or in its complete form, the if-else-statement:
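
A minimal example in Java (with invented names, just to have something concrete to point at):

if (input.isValid()) {
    process(input);
} else {
    reportError(input);
}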

By using the explicit if-else-statement, you essentially partition a part of your code into two “execution lanes” that are used mutually exclusively. Instead of writing them one below the other, we could, if our code editors supported it, write them side by side:
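
A rough sketch of such a notation (purely illustrative, no real editor renders it like this):

// if (input.isValid())      |  else
// --------------------------+-------------------------
//     process(input);       |      reportError(input);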

There are some graphical code editors that tried this tabular approach. It certainly looks unfamiliar to the eye trained on the first notation, but it makes one thing clear: The code flow will go through only one of the columns, not both.

Dependence on explicit conditionals

Using the if-else-statement has become such second nature to most developers that they act confused and helpless when presented with a simple restriction:

“Don’t use the else keyword”

Jeff Bay, Object Calisthenics, 2008

The restriction is imposed as the second of nine rules from the object calisthenics by Jeff Bay. In the explanation of the rule, he stated that the rule should act as a first step towards implicit conditional statements. Paraphrased: There are 99 ways to express an else statement without using the keyword, but the average developer knows none of them.

In my opinion, the rule is merely the warm-up phase for a bigger challenge, as stated by the “anti-if campaign”: to get rid of if-statements (and else-statements, for that matter) in all contexts where alternatives prove more effective.

In order to decide when not to use if-statements, we should learn about the alternatives. There are plenty to choose from! (refer to slide #4)
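
One of these alternatives, as a hedged sketch: replacing an if/else-if chain over a discount code with a lookup table (all names and rates are invented for illustration):

import java.util.Map;
import java.util.function.UnaryOperator;

public class DiscountExample {
    // one alternative to an if/else-if chain: a lookup table from code to behaviour
    private static final Map<String, UnaryOperator<Double>> DISCOUNTS = Map.of(
        "STUDENT", price -> price * 0.8,
        "SENIOR", price -> price * 0.9
    );

    public static double priceFor(String discountCode, double listPrice) {
        return DISCOUNTS
            .getOrDefault(discountCode, UnaryOperator.identity())
            .apply(listPrice);
    }
}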

But we should also learn about the if-statement itself. The goal isn’t to abandon it, but to use it when appropriate and then use it to its full potential.

An interesting thought about the “else”

Do we already know everything about the if and else? I had the opportunity to learn something new not long ago. The hint came from Kevlin Henney in one of his talks (Non-Functional Coding):

The talk is fairly recent and has some traditional “Kevlin parts” in it. The part I highlighted is unusually aggressive for him. The reasoning is sound, but the nearly personal attack towards the audience (to “piss them off”) is uncalled for.

But, the “volume up to 200 %”-style works more often than not and the bit got me thinking. The culprit in question is this code:
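
The code is not reproduced verbatim here; the style in question is the early return that replaces an else although both branches belong to the same domain decision, roughly like this (invented names):

public double shippingCosts(Order order) {
    if (order.isInternational()) {
        return internationalShippingCosts(order);
    }
    return domesticShippingCosts(order);
}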

According to Kevlin, this style “is just wrong”. Let’s try to find out why.

There is one principle that is mentioned by Kevlin in passing: The “Single Level of Abstraction” principle that states that you should not mix different levels of abstraction in one block of code (the principle talks about methods). It is a foundation for the first rule in the object calisthenics: “Only one level of indentation per method”.

If you look at the if-code and else-code, they operate on the same level of abstraction. Maybe not on the same level of probability, but they deal with the same topic. Elevating one part by eliminating the else-block in favor of an early return means that this part is more important. It also designates the if-code, and in fact the whole if-statement, to be a guard clause. Guard clauses typically deal with invalid state and don’t complement the desired functionality. They act as gatekeepers and prevent the invalid state from entering the method’s main body. As a metaphor: The bouncers in front of a club are like guard clauses. To say that being denied entry by a bouncer is about as much fun as being in the club is probably not a widespread opinion.
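
For contrast, a hedged sketch of a guard clause in the narrow sense: it rejects invalid input before the actual work begins, while the real domain decision keeps its else (invented names again):

public double shippingCosts(Order order) {
    if (order == null || order.items().isEmpty()) {
        throw new IllegalArgumentException("cannot ship an empty order");
    }
    if (order.isInternational()) {
        return internationalShippingCosts(order);
    } else {
        return domesticShippingCosts(order);
    }
}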

Unfinished reflection

I still reflect on other clues that are name-dropped by Kevlin, like the stated reduction of refactoring opportunities, but that’s probably because I don’t have enough comparison material.

There is one thing that I haven’t got a proper hold on yet and that’s the term “control state“. My google kung-fu is not mighty enough to reach past some obscure ASP.NET concepts from ten years ago. I haven’t come across the term in books – at least I don’t remember it.

So here is my call for help: Can you provide some source or explanation about what Kevlin Henney means by “control state“?

And what else do you think about the whole discussion?

The Optional Wildcast

This blog post presents a particular programming technique that I happen to use more often in recent months. It doesn’t state that this technique is superior or more feasible than others. It’s just a story about a different solution to an old programming problem.

Let’s program a class hierarchy for animals, in particular for mammals and birds. You probably know where this leads, but let’s start with a common solution.

Both mammals and birds behave like animals, so they are subclasses of it. Birds have the additional behaviour of laying eggs for reproduction. We indicate this feature by implementing the Egglaying interface.

Mammals feed their offspring by giving them milk. There are two mammals in our system, the cow and the platypus. The cow behaves like the typical mammal and gives a lot of milk. The platypus also feeds its young with milk, but only after they have hatched from their eggs. Yes, the platypus is a rare exception in that it is both a mammal and egglaying. We indicate this odd fact by implementing the Egglaying interface, too.
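
A minimal sketch of this common solution; breed() and lactate() are the method names used in the examples below, everything else is an assumption:

public interface Egglaying {
    void breed();
}

public abstract class Animal {
}

public class Bird extends Animal implements Egglaying {
    @Override
    public void breed() { /* sit on the eggs */ }
}

public abstract class Mammal extends Animal {
    public abstract void lactate();
}

public class Cow extends Mammal {
    @Override
    public void lactate() { /* a lot of milk */ }
}

public class Platypus extends Mammal implements Egglaying {
    @Override
    public void lactate() { /* milk, but only after hatching */ }

    @Override
    public void breed() { /* lay the eggs first */ }
}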

If our code wants to access the additional methods of the Egglaying interface, it has to check if the given object implements it and then cast it. I call this type of cast a “wildcast” because it seems to appear out of nowhere when reading the code and seemingly doesn’t lead up or down the typical type hierarchy. Why would a mammal lay eggs?

One of the approaches that I happen to use more often recently is to indicate the existence of a real wildcast with an Optional return type. In theory, you can wildcast from anywhere to anyplace you want. But only some of these jumps have a purpose in the domain. And an explicit casting method is a good way to highlight this purpose:

public abstract class Mammal {
	public Optional<Egglaying> asEgglaying() {
		return Optional.empty();
	}
}

The “asEgglaying()” method might return an Egglaying object, or it might not. As you can see, by default it returns only an empty Optional. This means that no cow, horse, cat or dog has to think about laying eggs; they just aren’t into it.

public class Platypus extends Mammal implements Egglaying {
	@Override
	public Optional<Egglaying> asEgglaying() {
		return Optional.of(this);
	}
}

The platypus is another story. It is the exception to the rule and knows it. The code “Optional.of(this)” is typical for this coding technique.

A client that iterates over a collection of mammals can now incorporate the special case with more grace:

for (Mammal each : List.of(mammals())) {
	each.lactate();
	each.asEgglaying().ifPresent(Egglaying::breed);
}

Compare this code with a more classic approach using a wildcast:

for (Mammal each : List.of(mammals())) {
	each.lactate();
	if (each instanceof Egglaying) {
		((Egglaying) each).breed();
	}
}

My biggest gripe with the classic approach is that the instanceof is necessary for the functionality, but not guided by the domain model. It comes as a surprise and has no connection to the Mammal type. In the Optional wildcast version, you can look up the callers of “asEgglaying()” and see all the special code that is written for the small number of mammals that lay eggs. In the classic approach, you need to search for conditional casts or distinguish between code for birds and special mammal code when looking up the callers.

In my real-world projects, this “optional wildcast” style facilitates domain discovery by code completion and seems to lead me to more segregated type systems. These impressions are personal and probably biased, so I would like to hear about your experiences, or at least your opinions, in the comments.

Format-based sorting looks clever, but is dangerous

A neat trick I learnt early in my career, even before I learnt about version control, was how to format a date as a string so that alphabetically sorted lists would contain them in the “correct” order:

“YYYYMMDD” is the magic string.

If you format your dates as 20230122 and 20230123, the second name will be sorted after the first one. With nearly any other format, your date strings will not be sorted chronologically in the file system.
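
A small sketch in Java; the file names are invented, only the format pattern matters:

import java.time.LocalDate;
import java.time.format.DateTimeFormatter;
import java.util.List;

public class SortableDates {
    private static final DateTimeFormatter SORTABLE = DateTimeFormatter.ofPattern("yyyyMMdd");

    public static void main(String[] args) {
        List<String> fileNames = List.of(
            LocalDate.of(2023, 1, 23).format(SORTABLE) + "_report.csv",
            LocalDate.of(2023, 1, 22).format(SORTABLE) + "_report.csv"
        );
        // alphabetical order equals chronological order with this format
        fileNames.stream().sorted().forEach(System.out::println);
    }
}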

I’ve found out that this is also nearly the only format that most people cannot intuitively recognize as a date. So while it is familiar to me and conveniently sorted, it is confusing or at least in need of explanation for virtually every user of my systems.

Keep that in mind when listening to the following story:

One project I adopted is a custom enterprise resource planning system that was developed by a single developer who one day left the company and the code behind. The software was in regular use and in dire need of maintenance and new features.

One concept in the system is central to its users: the list of items in an invoice or a bill of delivery. This list contains items in a defined order that is important to the company and its customers.

To my initial surprise, the position of an item in the list was not defined by an integer, but by a string. This can be explained by the need for “sub-positions” that form a hierarchy of items, like in this example:

1 – basic item

1.1 – item upgrade #1

1.2 – item upgrade #2

Both positions “1.1” and “1.2” are positioned “underneath” position “1” and should be considered glued to it. If you move position “1” to position “4”, you also move 1.1 to 4.1 and 1.2 to 4.2.

But there was a strange formatting thing going on with the positions: They were stored as strings in the database, but with a strange padding in front. Instead of “1”, “2” and “3”, the entries contained the positions “  1”, “  2” and “  3”. All positions were prefixed with two space characters!

Well, nearly all positions. As soon as the list grew, the padding turned out to be dependent on the number of digits in the position: “  9”, but then “ 10” and “100”.

The reason is relatively simple: If you pad the positions to a fixed width with spaces (or most other characters, maybe “0”), the alphabetical order of the strings matches their numerical order. Without the prefixes, they would be sorted like “1”, “10”, “11”, “2”.
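
A small sketch of the effect; the width of three characters is taken from the story above:

import java.util.List;

public class PositionSorting {
    public static void main(String[] args) {
        List<String> unpadded = List.of("1", "2", "10", "11");
        List<String> padded = List.of("  1", "  2", " 10", " 11");

        // prints [1, 10, 11, 2]: alphabetical order, not numerical
        System.out.println(unpadded.stream().sorted().toList());

        // prints [  1,   2,  10,  11]: alphabetical order now matches numerical order
        System.out.println(padded.stream().sorted().toList());
    }
}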

That means that the desired ordering of the positions is hardcoded in the database representation! You probably already thought about the case of a position greater than 999. That’s when trouble begins! Luckily, an invoice with a thousand items on the list is unheard of in the company (yet!).

Please note that while the desired ordering is hardcoded in the database, the items are still loaded in a different order (the order they were entered into the system) and need to be sorted by the application. The default sorting for strings is alphabetical, so the original developer probably was clever/lazy, went with it and formatted the data in a way that would produce the desired result without requiring additional logic during the sorting.

If you look at the code, you see seemingly strange formatting calls to the position all over the place. This is necessary because, for example, every time a user enters a position into the system, it needs to be reformatted (or at least sanitized) in order to adhere to the “auto-sortable” format.

If you wonder what a hierarchical sub-position looks like with this format, it’s “ 1. 1”, “ 1. 10” and even “ 1. 17. 2. 4”. The database stores mostly blanks in this field.

While this approach might seem clever in the moment, it is highly dangerous. It conflates several things that should stay separated, like “storage format” and “display format”, “item order” or just “valid value range”. It is a clear violation of the “separation of concerns” principle. And it broke the application when I missed one place where the formatting was required, but not present. Of course, this only manifests as a problem when your test cases (or manual tries) exceed a list of 9 entries – lesson learnt here.

I dread the moment when the company calls to tell me about this “unusually large invoice” that exceeds the 999 limit. This would mean a reformatting of all stored data or another even more clever hack to circumvent the problem.

Did you encounter a format that was purely there for sorting in the wild? What was the story? Tell us in the comments!

Try ending the workday with a beneficial ritual

One thing that is important to me is to start and end the workday with a proven and familiar routine – let’s call it a ritual. There are some advantages to this approach. First, you have a defined starting point. No matter what the day may throw at you, there are some anchors in your structure or environment that you can rely on. For example, I don’t start my work without a (big) filled glass of water on my desk. It might get hectic, but my supply of water is secured until lunch. I make it a habit to empty that glass before lunch, too, but that’s not as important as the ritual of supplying myself with a beverage and only then starting my work.

My guess is that most of you already do this, too. The start of a workday is the natural point in time to install habits or even rituals. But what about the end of your workday? Sure, there is a point in time when you “drop the pen” and rush out the door. But right before this moment, there is a possibility to introduce a beneficial ritual that might only cost minutes, but brings value that furthers your career and even your current work.

My usual ritual is a short daily reflection. That’s not exactly my own idea, I just borrowed it from the Clean Code Developer Initiative. My problem with the CCDI version is the focus on software development alone, which is probably a good start, but too narrow for my work profile.

My adaptation is to have three basic questions that I ask myself at the end of each workday and answer in “articulated thoughts”. You may prefer to say your answers out loud or write them down (Obsidian or a similar tool might be suitable for that). My questions to myself are:

  • How do you feel right now?
  • What surprised you today?
  • What do you want to remember from today’s work?

Note how these questions don’t deal with details of your current work. If you have specific topics that you want to reflect on, you can always add some more questions for a period. I have found it important not to skip or replace the three basic questions, though.

“How do you feel?” is a complicated question because it leads to your motivation for work. Of course, “tired” or “stressed” is always a valid answer. But what if you legitimately feel “proud” or “fulfilled”? Can you identify what aspect of today’s work made you proud? Can you think of a way to have more of that without neglecting other important duties?

“What surprised you today?” tries to carve out your latest learning experience. It is possible that your day was dull enough to have no surprises, but if there were any, you’ve probably expanded your knowledge on a topic you didn’t expect. If the surprise was a negative one, maybe you can think about a way to make it less surprising, rarer or downright impossible in the future. In my case, this led to some unusual gadgets like the “bad idea commands” list that hangs right beside the administration console. The most infamous command on this list is “mdadm --create”, by the way (I meant “mdadm --assemble” and was very surprised by the result).

“What do you want to remember?” is an explicit appeal to write your answer down. You don’t need to tell an elaborate story. Just give your future self some cues, preferably from outside your brain (Obsidian’s marketing claim of being “a second brain” is no coincidence). Make a small note or write your future self an e-mail (this is my typical way of offloading things to future me). But persist this information now or it will be gone.

After this daily reflection, I shut down my computer and put the (probably empty) glass of water into the dishwasher. Then I switch into leisure mode.

Of course, my three questions are inspired by other sources, too. One is the workshop hosting manual for code retreats, which has a great section about the “closing circle”, a group reflection on a probably awesome day.

If you have a similar ritual, let us know about it! Write a blog entry or drop a comment below.

The year 2022 in tickets

We are a company of software developers that decided to run the company itself much like a typical software project. All company documents are put under version control, most things that can be automated are automated (or listed on a backlog and estimated for their business value), a wiki contains all relevant information and is continually updated and extended and, most importantly, everything is an issue. The word “issue” is the developer synonym for “ticket”, so what I’m really saying is: “Every activity in our company has a ticket number”. Just like you don’t change the source code of a software project without an issue that motivates the change, we don’t perform work for the company without a motivating ticket. This means that you can review the company’s progress, performance and efforts through at least three activity streams or history tracks:

  • The commit history of the version control system tells the story from the viewpoint of documents. Company documents are mostly the beginning or the result of activity of our administration department. Typical documents that start processes are project orders or letters from official agencies. Typical documents that are created as a result of processes include invoices, filled in forms and more letters.
  • The edit history of the wiki tells the story from the viewpoint of process learning. We document our actual administrative processes in a structured way that might be seen as “source code for humans”. Everything we change in our approach to processing and creating documents can be traced in this source code. Additionally created business processes indicate a growth in business scope or complexity – or the payback of “business process debt”, the administrative equivalent of “technical debt” in a software project.
  • The resolution history of the ticket system tells the story from the viewpoint of the actual footwork. Every activity has its ticket, so we can measure how much activity was necessary to run the company, where this activity was invested and how much regular versus extraordinary work occurred.

There are more “story lines” in our company and I could probably talk for days about how to read them and set them into context with one another, but in this blog post, I try to visualize only the footwork of the year 2022 for our small company by showing the ticket numbers. But before I can do that, we need one more piece of theory about our tickets:

We have two kinds of tickets in our system:

  • Manually created tickets accompany activities that occurred “without a schedule”. A human being recognized the need for some work and wrote a ticket to document the motivation and track the progress of this work.
  • Automatically created tickets denote recurring activities that are handled by some form of automation in our company. We have developed a tool to manage the schedules of these “recurring activities”. Our job as humans is to recognize the recurring character of some of our activities, estimate a suitable schedule for it and tell the tool about it. The tool then creates tickets based on that schedule and we need to deal with them. The simplest way to deal with a recurring ticket is to close it as “won’t fix” because there is nothing to do in its regard yet.

Just keep in mind that manually created tickets always denote required activity while automatically created tickets “only” denote the need to check for required activity, but not always to perform it.

Let’s look at some numbers!

The most obvious ticket section is of course the tickets for our blog entries (you are reading BLOG-368). Because we publish one entry each week, there will be at least 52 tickets for 2022. In fact, we have fixed 54 tickets this year, with only one ticket manually created. There are not many surprises with such a strict schedule.

A less predictable topic is the purchase department (don’t be too impressed, the “department” is just its own section in the ticket system). Every purchase is tracked by its own ticket. In 2022, we had 113 purchase activities, with 94 of them manually created. This means that the non-automation ratio for our purchases is above 80 percent. We bought two different things every week and most of it was “on demand”.

The most important section of tickets for me as the CEO is the “business administration” section which encompasses all necessary non-specialized work to keep our company afloat. Let’s dissect it for the year 2022:

956 tickets were resolved over the course of the year. That’s a lot of work for a small company! Luckily, 865 of these tickets were created by our tool, so they don’t always require actual activity. But 91 tickets were things we needed to do without being able to anticipate the need (or else we would have created a schedule for it). That is two things per week that “surprised” us.

If you look at these numbers with a different mindset, you can see the effects of consistent automation: Our administration has an automation factor of 90 percent! We mostly deal with routine tasks and can rely on defined, documented and automated processes. That’s quite an achievement and I still remember the times when we had lower factors. They were more “interesting” (in the sense of the alleged Chinese curse “may you live in interesting times”).

I want to add another perspective to these numbers: We also track our work time and assign it to different projects, with “administration” being one of them. In the year 2022, we booked approximately 600 person hours of work for the “administration” of the company. So we spent circa 35 minutes on each ticket. This is a misleading number, because there were lots of tickets that required no more than a few minutes and some that hogged our attention for days. We also track detailed “time per ticket” numbers, but only use this data to extract “expected durations” for the most important and time-consuming tasks. This helps us to plan our administrative work around the customer project schedules.

I could write a lot more about the different topics in the administration section of our ticket system. We have identified around 20 distinguishable topics. But it would become more boring over time, so I close this blog entry with one last topic that is very important for an IT company: the “IT administration” or “operations”.

To keep our IT systems up and running, we worked on 48 different tickets, but only 13 of them were automatically created. This seems rather low when compared to the administration with around 1000 tickets, but it is very misleading. Our IT administration is nearly fully automated, so that routine work doesn’t create tickets, but starts automation runs (Jenkins builds, Ansible playbooks and such). The 48 tickets were additional work or, in the case of the 13 tickets, recurring work that requires human oversight and interference.

I’m glad that this number is as low as it is. It means that the IT runs smoothly and rather silently. The 115 person hours booked for it tell the same story: Our IT is low maintenance. A tad more than 2 hours per week is an affordable price.

I hope this blog entry was entertaining enough to give you an idea of how we make things visible in our company. We use the data to test hypotheses, expose problems and track our improvement efforts. Without this data, we could only rely on assumptions, feelings and spotty memory. By reading the numbers, we (or at least I) get a feeling for the intricacies of the company that translates down to the day-to-day work and makes intuitive and appropriate management possible.

If you want to know more, feel free to leave a comment!

Fun with docker container environment variables

Docker (as one specific container technology product) is a basic ingredient of our development infrastructure that steadily gained ground, from the production servers over the build servers to our development machines. And while it is not simple when used for operations, the complexity increases a lot when used for development purposes.

One way to express complexity is by making the moving parts configurable and using different configurations. A common way to make things configurable with containers is environment variables. Running a container might look like an endurance typing contest if they are used extensively:

docker run --rm \
-e POSTGRES_USER=myuser \
-e POSTGRES_PASSWORD=mysecretpassword \
-e POSTGRES_DB=mydatabase \
-e PGDATA=/var/lib/postgresql/data/pgdata \
ubuntu:22.04 env

This is where our fun begins.

Using an env-file for extensive configurations

The parameter --env-file reads environment variables from a local text file with a simple key=value format:

docker run --rm --env-file my-vars.env ubuntu:22.04 env

The file my-vars.env contains all the variables line by line:

FIRST=1
SECOND=2

If we run the command above in a directory containing the file, we get the following output:

PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=46a23b701dc8
FIRST=1
SECOND=2
HOME=/root

The HOSTNAME might vary, but the FIRST and SECOND environment variables are straight from our file.

The only caveat is that the env-file really has to exist, or we get an error:

docker: open my-vars2.env: Das System kann die angegebene Datei nicht finden.

My beloved shell (in English: “The system cannot find the file specified.”)

The env-file can be empty, contain only comments (use # to begin them) or whitespace, but it has to be present.

Please be aware that the env-files are different from the .env-file(s) in docker-compose. A lot of fun is lost by this simple statement, like variable expansion. As far as I’m aware, there is no .env-file mechanism in docker itself.

But we can have some kind of variable substitution, too:

Using multiple env-files for layered configurations

If you don’t want to change all your configuration entries all the time, you can layer them. One layer for the “constants”, one layer for global presets and one layer for local overrides. You can achieve this with multiple --env-file parameters; they are evaluated in the order you specify them:

docker run -it --rm --env-file first.env --env-file second.env ubuntu:22.04 env

Let’s assume that the content of first.env is:

TEST=1
FIRST=1

And the content of second.env is:

TEST=2
SECOND=2

The results of our container call are (abbreviated):

TEST=2
FIRST=1
SECOND=2

You can see that the second TEST assignment wins. If you switch the order of your parameters, you would read TEST=1.

Now imagine that first.env is named global.env and second.env is named local.env (or default.env and development.env) and you can see how this helps you with modular configurations. If only the files didn’t have to exist all the time, it would even fit well with git and .gitignore.

The best thing about this feature? You can have as many --env-file parameters as you like (or your operating system allows).

Mixing local and configured environment variables

We don’t have explicit variable expansion (like TEST=${FIRST} or something) with --env-files, but we have a funny poor man’s version of it. Assume that the second.env from the example above contains the following entries:

OS
TEST=2
SECOND=2

You’ve seen that right: The first entry has no value (and no equal sign)! This is when the value is substituted from your operating system:

TEST=2
FIRST=1
OS=Windows_NT
SECOND=2

By just declaring, but not assigning, an environment variable, it is taken from your own environment. This even works if the variable was already assigned in previous --env-files.

If you don’t believe me, this is a documented feature:

If the operator names an environment variable without specifying a value, then the current value of the named variable is propagated into the container’s environment

https://docs.docker.com/engine/reference/run/#env-environment-variables

And even more specific:

When running the command, the Docker CLI client checks the value the variable has in your local environment and passes it to the container. If no = is provided and that variable is not exported in your local environment, the variable won’t be set in the container.

https://docs.docker.com/engine/reference/commandline/run/#set-environment-variables–e—env—env-file

This is a cool feature, albeit a little bit creepy. Sadly, it doesn’t work in all tools that allow you to run docker containers. Last time I checked, PyCharm omitted this feature (as one example).

Epilogue

I’ve presented you with three parts that can be used to manage different configurations for docker containers. There are some pain points (non-optional file existence, feature loss in tools, no direct variable expansion), but also a lot of fun.

Do you know additional tricks and features in regard to environment variables and docker? Comment below or link to your article.

Applying the Golden Circle to software development

When I was a young and impressionable software developer (1998), the slogan of the year was “the code is the documentation”. This essentially meant that comments, and (inline) code comments in particular, were a sign of bad code. The reasoning was (and still is) that if you can’t articulate your ideas clearly enough in source code, adding additional text won’t rescue your communication. “Communication” is the conveying of what you want to accomplish with your code to two listeners: the computer and the next human that works with your code. The computer is often the easier part, because it just does as it is told, without interpretation.

Some years later (2004), Jeff Atwood, the author of the influential codinghorror blog, still condemns most of the comments we could find in contemporary source code. But there is more nuance than just “comments are bad” and it is cleared up later (2006) that there is a difference in what should be expressed in source code and what should be expressed in comments. Yes, you should write comments, but the “right kind”.

According to Jeff Atwood, the source code contains the “how”. It tells the story in all its details for the next human and the computer. The comments, on the other hand, are not intended for the computer and shouldn’t contain details. They should contain the “why”, the high-level picture and the motivation behind the specific “how” that we find.

“The code tells you how, the comments tell you why” (2006) is a great way to describe the expected content of both “layers”.

But my feeling was (and still is) that something is missing in that description. My programs aren’t just code and comments, there are more things that I try to tell my story with. And just a few weeks ago, I had a sudden idea that I might be able to describe what that missing piece could be. It is just an idea and the puzzle probably is still missing pieces, but it feels “right” enough that I want to write this blog post to discuss the idea. But I need to talk about something else first.

In 2009, Simon Sinek presents the Golden Circle to the world. The TED talk is probably the most energetic piece of explanation in human history. The Golden Circle is “the world’s simplest idea”, defining three layers of “clarity” to actions:

  • What: The “lowest” level, meaning that every person knows “what they are doing”.
  • How: The “intermediate” level. Some people know “how they do it”. They explicitly choose their method of doing and can reason about it.
  • Why: The “highest” level. According to Simon Sinek, only a few people know “why they do it”. He talks about “purpose, cause and belief”.

If you don’t know about the Golden Circle yet, please watch the 20-minute TED talk while I wait here for you. If you want to think about it first – I’m patient. The Golden Circle has inspired and guided me ever since. Not that I’m very good at applying it in my business or personal life, but it stayed with me and gave me a coordinate system to categorize things.

And with that categorization practice, I feel as if the layers in Jeff Atwood’s blog post from 2006 are misnamed and one crucial layer is missing:

  • What: The code tells you what (not how!). It is the detailed step-by-step recipe to replicate a behaviour. It is so simple that even a computer can do it, without being aware of it.
  • How: This is the missing layer. I want to talk about it in a minute.
  • Why: That’s the part that is correctly placed and named: The why of a story requires the spoken word. Only comments are viable for this kind of information. The computer (as of today) has no understanding what any of it means.

The missing layer is the “How”. In my idea, I envisioned that everything that is deliberately put into the code, but not readily understood by a computer, like structures, patterns, idioms or even names (I call them “activated comments”), is there for the “How”. We structure our code not because the computer requires it; the compiling stages of our programming languages even interfold and compact it until it is a binary blob. We don’t name our variables and types because the computer would learn something from them. The first thing a compiler does is to shorten our names to unreadable “symbols”. Most of our patterns get replaced by other, more minute patterns that the compiler puts into place. We put these things in our code because they help us understand “how the story goes”. They provide us with guidance on how to make sense of the mess. The structure tells you how to approach the program.

The computer only knows about “What”. There are maybe some indications about a simple “How”-awareness in some technologies, but most of the time, the computer is deaf to human communication.

Which brings me to my description of code and comment, as an updated version of Jeff Atwood’s motto:

“The code tells you what, the structure tells you how and the comments tell you why”

By “structure”, I mean everything that gets lost during translation for the computer, but is visible for the human reader. It entails high-level things like “architecture” or “code design” and lower-end decisions like names and formatting. If you have a better word for it, let me know!
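
As a hedged illustration of the three layers (the domain and all names are invented): the statements are the “what”, the extracted method and its name are part of the “how”, and the comment carries the “why”.

public class InvoiceNumbers {
    // why: the tax office requires invoice numbers to be strictly sequential,
    // so we must not skip or reuse numbers, even after a cancellation
    public String nextInvoiceNumber(int lastIssuedNumber) {
        return formatAsInvoiceNumber(lastIssuedNumber + 1);
    }

    // how: the structure and the name tell the reader what role this step plays
    private String formatAsInvoiceNumber(int number) {
        // what: the literal steps the computer executes
        return String.format("INV-%06d", number);
    }
}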

I hope that this blog post inspired you to have a thought yourself. Don’t hesitate and tell us your thought in the comment below.

Help me with the Spiderman Operator

From time to time, I encounter a silly syntax in Java that I silently dubbed the “spiderman operator” because of all the syntactic pointing that’s going on. My problem is that it’s not very readable, I don’t know an alternative syntax for it, and my programming style leads me to it more often than I am willing to ignore.

The spiderman operator looks like this:

x -> () -> x

In its raw form, it means that you have a function that takes x and returns a Supplier of x:

Function<X, Supplier<X>> rawForm = x -> () -> x;

That in itself is not very useful or mysterious, but if you take into account that the Supplier<X> is just one possible type you can return (because in Java, as long as the signature fits, the thing sits), it gets funnier.

A possible use case

Let’s define a type that is an interface with just one method:

public interface DomainValue {
    BigDecimal value();
}

In Java, the @FunctionalInterface annotation is not required to let the interface be, in fact, a functional interface. It only needs to have one method without implementation. But how can we provide methods with implementation in Java interfaces? Default methods are the way:

@FunctionalInterface
public interface DomainValue {
    BigDecimal value();

    default String denotation() {
        return getClass().getSimpleName();
    }
}

Let’s say that we want to load domain values from a key-value-store with the following access method:

Optional<Double> loadEntry(String key)

If there is no entry with the given key or the syntax is not suitable to be interpreted as a double, the method returns Optional.empty(). Otherwise, it returns the double value wrapped in an Optional shell. We can convert it to our domain value like this:

Optional<DomainValue> myValue = 
    loadEntry("current")
        .map(BigDecimal::new)
        .map(x -> () -> x);

And there it is, the spiderman operator. We convert from Double to BigDecimal and then to DomainValue by saying that we want to convert our BigDecimal to “something that can supply a BigDecimal”, which is exactly what our DomainValue can do.

A bigger use case

Right now, the DomainValue type is nothing more than a mantle around a numerical value. But we can expand our domain to have more specific types:

public interface Voltage extends DomainValue {
}
public interface Power extends DomainValue {
    @Override
    default String denotation() {
        return "Electric power";
    }
}

Boring!

public interface Current extends DomainValue {
    default Power with(Voltage voltage) {
	return () -> value().multiply(voltage.value());
    }
}

Ok, this is maybe no longer boring. We can implement a lot of domain functionality just in interfaces and then instantiate ad-hoc types:

Voltage europeanVoltage = () -> BigDecimal.valueOf(220);
Current powerSupply = () -> BigDecimal.valueOf(2);
Power usage = powerSupply.with(europeanVoltage);

Or we load the values from our key-value-store:

Optional<Voltage> maybeVoltage = 
    loadEntry("voltage")
        .map(BigDecimal::new)
        .map(x -> () -> x);

Optional<Current> maybeCurrent = 
    loadEntry("current")
        .map(BigDecimal::new)
        .map(x -> () -> x);

You probably see it already: We have some duplicated code! The strange thing is, it won’t go away so easily.

The first call for help

But first I want to sanitize the code syntactically. The duplication is bad, but the spiderman operator is just unreadable.

If you have an idea of how the syntax of the second map() call can be improved, please comment below! Just one request: Make sure your idea compiles beforehand.

Failing to eliminate the duplication

There is nothing easier than eliminating the duplication above: The code is syntactically identical and only the string parameter is different – well, and the return type. We will see how this affects us.

What we cannot do:

<DV extends DomainValue> Optional<DV> loadFor(String entry) {
    Optional<BigDecimal> maybeValue = load(entry);
    return maybeValue.map(x -> () -> x);
}

Suddenly, the spiderman operator does not compile with the error message:

The target type of this expression must be a functional interface

I can see the problem: Subtypes of DomainValue are not required to stay compatible with the functional interface requirement (just one method without implementation).

Interestingly, if we work with a wildcard for the generic, it compiles:

Optional<? extends DomainValue> loadFor(String entry) {
    Optional<BigDecimal> maybeValue = load(entry);
    return maybeValue.map(x -> () -> x);
}

The problem is that we still need to downcast to our specific subtype afterwards. But we can use this insight and move the downcast into the method:

<DV extends DomainValue> Optional<DV> loadFor(
	String entry,
	Class<DV> type
) {
	Optional<BigDecimal> maybeValue = load(entry);
	return maybeValue.map(x -> type.cast(x));
}

Which makes our code readable enough, but at the price of using reflection:

Optional<Voltage> european = loadFor("voltage", Voltage.class);
Optional<Current> powerSupply = loadFor("current", Current.class);

I’m not a fan of this solution, because downcasts are dangerous and reflection is dangerous, too. Mixing two dangerous things doesn’t neutralize the danger most of the time. This code will fail during runtime sooner or later, without any compiler warning us about it. If you don’t believe me, add a second method without implementation to the Current interface and see if the compiler warns you. Hint: This is what you will see at runtime:

java.lang.ClassCastException: Cannot cast java.math.BigDecimal to Current

But, to our surprise, it doesn’t even need a second method. The code above doesn’t work. Even if we reintroduce our spiderman operator (with an additional assignment to help the type inference), the cast won’t work:

<DV extends DomainValue> Optional<DV> loadFor(
    String entry,
    Class<DV> type
) {
    Optional<BigDecimal> maybeValue = load(entry);
    Optional<DomainValue> maybeDomainValue = maybeValue.map(x -> () -> x);
    return maybeDomainValue.map(x -> type.cast(x));
}

The ClassCastException just got a lot more mysterious:

java.lang.ClassCastException: Cannot cast Loader$$Lambda$8/0x00000008000028c0 to Current

My problem is that I am stuck. There is working code that uses the spiderman operator and produces code duplication, but there is no way around the duplication that I can think of. I can get objects for the supertype (DomainValue), but not for a specific subtype of it. If I want that, I have to accept duplication. Or am I missing something?

The second call for help

If you can think about a way to eliminate the duplication, please tell me (or us) in the comments. This problem doesn’t need to be solved for my peace of mind or the sanity of my code – the duplication is confined to a particular place.

Being used to roaming nearly without boundaries in the Java syntax (25 years of thinking in Java will do that to you), I was hit hard by this particular limitation. If you can give me some ideas, I would be grateful.