Format-based sorting looks clever, but is dangerous

A neat trick I learnt early in my career, even before I learnt about version control, was how to format a date as a string so that alphabetically sorted lists would contain them in the “correct” order:

“YYYYMMDD” is the magic string.

If you format your dates as 20230122 and 20230123, the second name will be sorted after the first one. With nearly any other format, your date strings will not be sorted chronologically in the file system.
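
A minimal Java sketch of the trick, just for illustration (any language with lexicographic string sorting behaves the same way):

import java.time.LocalDate;
import java.time.format.DateTimeFormatter;

class SortableDates {
    public static void main(String[] args) {
        DateTimeFormatter sortable = DateTimeFormatter.ofPattern("yyyyMMdd");
        String first = LocalDate.of(2023, 1, 22).format(sortable);  // "20230122"
        String second = LocalDate.of(2023, 1, 23).format(sortable); // "20230123"
        // alphabetical comparison agrees with chronological order
        System.out.println(first.compareTo(second) < 0); // prints: true
    }
}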

I’ve also found that this is nearly the only format that most people cannot intuitively recognize as a date. So while it is familiar to me and conveniently sortable, it is confusing, or at least in need of explanation, for virtually every user of my systems.

Keep that in mind while reading the following story:

One project I adopted is a custom enterprise resource planning system that was developed by a single developer who one day left the company and the code behind. The software was in regular use and in dire need of maintenance and new features.

One concept in the system is central to its users: the list of items in an invoice or a bill of delivery. This list contains items in a defined order that is important to the company and its customers.

To my initial surprise, the position of an item in the list was not defined by an integer, but by a string. This can be explained by the need for “sub-positions” that form a hierarchy of items, like in this example:

1 – basic item

1.1 – item upgrade #1

1.2 – item upgrade #2

Both positions “1.1” and “1.2” are positioned “underneath” position “1” and should be considered glued to it. If you move position “1” to position “4”, you also move 1.1 to 4.1 and 1.2 to 4.2.

But there was something strange going on with the formatting of the positions: They were stored as strings in the database, but with a peculiar padding in front. Instead of "1", "2" and "3", the entries contained the positions "  1", "  2" and "  3". All positions were prefixed with two space characters!

Well, nearly all positions. As soon as the list grew, the padding turned out to be dependent on the number of digits in the position: "  9", but then " 10" and "100".

The reason is relatively simple: If you prefix with spaces (or most other characters, maybe “0”), your strings will be sorted in numerical order. Without the prefixes, they would be sorted like “1”, “10”, “11”, “2”.
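
Here is a minimal Java sketch of that effect (the padding width of three characters is an assumption for illustration):

String[] positions = {"2", "10", "1", "100"};
String[] padded = java.util.Arrays.stream(positions)
        .map(p -> String.format("%3s", p)) // "  2", " 10", "  1", "100"
        .sorted()
        .toArray(String[]::new);
// padded is now ["  1", "  2", " 10", "100"], i.e. numerically ordered
// an unpadded sort would yield ["1", "10", "100", "2"]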

That means that the desired ordering of the positions is hardcoded in the database representation! You probably already thought about the case of a position greater than 999. That’s when trouble begins! Luckily, an invoice with a thousand items on the list is unheard of in the company (yet!).

Please note that while the desired ordering is hardcoded in the database, the items are still loaded in a different order (the order in which they were entered into the system) and need to be sorted by the application. The default sorting for strings is alphabetical order, so the original developer, probably being clever/lazy, went with it and formatted the data in a way that produces the desired result without requiring additional logic during the sorting.

If you look at the code, you see seemingly strange formatting calls to the position all over the place. This is necessary because, for example, every time a user enters a position into the system, it needs to be reformatted (or at least sanitized) in order to adhere to the “auto-sortable” format.

If you wonder what a hierarchical sub-position looks like in this format, it’s "  1.  1", "  1. 10" and even "  1. 17.  2.  4". The database stores mostly blanks in this field.
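
A hypothetical sketch of a formatting routine for such a scheme (the method name and the fixed segment width of three characters are my assumptions, not the original code):

import java.util.Arrays;
import java.util.stream.Collectors;

class PositionFormat {
    // pad every numerical segment of a position like "1.17.2" to a fixed
    // width so that alphabetical string order equals numerical order
    static String toSortable(String userInput) {
        return Arrays.stream(userInput.split("\\."))
                .map(String::trim)
                .map(segment -> String.format("%3s", segment))
                .collect(Collectors.joining("."));
    }
    // toSortable("1.17.2") yields "  1. 17.  2"
}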

While this approach might seem clever in the moment, it is highly dangerous. It conflates several things that should stay separated, like “storage format”, “display format”, “item order” or just “valid value range”. It is a clear violation of the “separation of concerns” principle. And it broke the application when I missed one place where the formatting was required, but not present. Of course, this only manifests as a problem when your test cases (or manual tries) exceed a list of 9 entries – lesson learnt here.

I dread the moment when the company calls to tell me about this “unusually large invoice” that exceeds the 999 limit. This would mean a reformatting of all stored data or another even more clever hack to circumvent the problem.

Have you encountered a format in the wild that was there purely for sorting? What was the story? Tell us in the comments!

Try ending the workday with a beneficial ritual

One thing that is important to me is to start and end the workday with a proven and familiar routine – let’s call it a ritual. There are some advantages to this approach. First, you have a defined starting point. No matter what the day may throw at you, there are some anchors in your structure or environment that you can rely on. For example, I don’t start my work without a (big) filled glass of water on my desk. It might get hectic, but my supply of water is secured until lunch. I make it a habit to empty that glass before lunch, too, but that’s not as important as the ritual of supplying myself with a beverage and only then starting my work.

My guess is that most of you already do this, too. The start of a workday is the natural point in time to install habits or even rituals. But what about the end of your workday? Sure, there is a point in time when you “drop the pen” and rush out the door. But right before this moment, there is a possibility to introduce a beneficial ritual that might only cost minutes, but brings value that furthers your career and even your current work.

My usual ritual is a short daily reflection. That’s not exactly my own idea, I just borrowed it from the Clean Code Developer Initiative. My problem with the CCDI version is the focus on software development alone, which is probably a good start, but too narrow for my work profile.

My adaptation is to have three basic questions that I ask myself at the end of each workday and answer in “articulated thoughts”. You may prefer to say the answers out loud or write them down (Obsidian or a similar tool might be suitable for that). My questions to myself are:

  • How do you feel right now?
  • What surprised you today?
  • What do you want to remember from today’s work?

Note how these questions don’t deal with details of your current work. If you have specific topics that you want to reflect on, you can always add some more questions for a period. I have found it important not to skip or replace the three basic questions, though.

“How do you feel?” is a complicated question because it leads to your motivation for work. Of course, “tired” or “stressed” is always a valid answer. But what if you legitimately feel “proud” or “fulfilled”? Can you identify what aspect of today’s work made you proud? Can you think of a way to have more of that without neglecting other important duties?

“What surprised you today?” tries to carve out your latest learning experience. It is possible that your day was dull enough to have no surprises, but if there were any, you’ve probably expanded your knowledge on a topic you didn’t expect. If the surprise was a negative one, maybe you can think about a way to make it less surprising, more rare or downright impossible in the future. In my case, this led to some unusual gadgets like the “bad idea commands” list that hangs right beside the administration console. The most infamous command on this list is “mdadm --create”, by the way (I meant “mdadm --assemble” and was very surprised by the result).

“What do you want to remember?” is an explicit appeal to write your answer down. You don’t need to tell an elaborate story. Just give your future self some cues, preferably from outside your brain (Obsidian’s marketing claim of “a second brain” is no coincidence). Make a small note or write your future self an e-mail (this is my typical way of offloading things to future me). But persist this information now or it will be gone.

After this daily reflection, I shut down my computer and put the (probably empty) glass of water into the dishwasher. Then I switch into leisure mode.

Of course, my three questions are inspired by other sources, too. One is the workshop hosting manual for code retreats, which has a great section about the “closing circle”, a group reflection on a probably awesome day.

If you have a similar ritual, let us know about it! Write a blog entry or drop a comment below.

The year 2022 in tickets

We are a company of software developers that decided to run the company itself much like a typical software project. All company documents are put under version control, most things that can be automated are automated (or listed on a backlog and estimated for their business value), a wiki contains all relevant information and is continually updated and extended and, most importantly, everything is an issue. The word “issue” is the developer synonym for “ticket”, so what I’m really saying is: “Every activity in our company has a ticket number”. Just like you don’t change the source code of a software project without an issue that motivates the change, we don’t perform work for the company without a motivating ticket. This means that you can review the company’s progress, performance and efforts along at least three activity streams or history tracks:

  • The commit history of the version control system tells the story from the viewpoint of documents. Company documents are mostly the beginning or the result of activity of our administration department. Typical documents that start processes are project orders or letters from official agencies. Typical documents that are created as a result of processes include invoices, filled in forms and more letters.
  • The edit history of the wiki tells the story from the viewpoint of process learning. We document our actual administrative processes in a structured way that might be seen as “source code for humans”. Everything we change in our approach to processing and creating documents can be traced in this source code. Additionally created business processes indicate a growth in business scope or complexity – or the payback of “business process debt”, the administrative equivalent of “technical debt” in a software project.
  • The resolution history of the ticket system tells the story from the viewpoint of the actual footwork. Every activity has its ticket, so we can measure how much activity was necessary to run the company, where this activity was invested and how much regular versus extraordinary work occurred.

There are more “story lines” in our company and I could probably talk for days about how to read them and set them into context with one another, but in this blog post, I try to visualize only the footwork of the year 2022 for our small company by showing the ticket numbers. But before I can do that, we need one more piece of theory about our tickets:

We have two kinds of tickets in our system:

  • Manually created tickets accompany activities that occurred “without a schedule”. A human being recognized the need for some work and wrote a ticket to document the motivation and track the progress of this work.
  • Automatically created tickets denote recurring activities that are handled by some form of automation in our company. We have developed a tool to manage the schedules of these “recurring activities”. Our job as humans is to recognize the recurring character of some of our activities, estimate a suitable schedule for it and tell the tool about it. The tool then creates tickets based on that schedule and we need to deal with them. The simplest way to deal with a recurring ticket is to close it as “won’t fix” because there is nothing to do in its regard yet.

Just keep in mind that manually created tickets always denote required activity while automatically created tickets “only” denote the need to check for required activity, but not always to perform it.

Let’s look at some numbers!

The most obvious ticket section is of course the tickets for our blog entries (you are reading BLOG-368). Because we publish one entry each week, there will be at least 52 tickets for 2022. In fact, we have fixed 54 tickets this year, with only one ticket manually created. There are not many surprises with such a strict schedule.

A less predictable topic is the purchase department (don’t be too impressed, the “department” is just its own section in the ticket system). Every purchase is tracked by its own ticket. In 2022, we had 113 purchase activities, with 94 of them manually created. This means that the non-automation ratio for our purchases is above 80 percent. We bought two different things every week and most of it was “on demand”.

The most important section of tickets for me as the CEO is the “business administration” section which encompasses all necessary non-specialized work to keep our company afloat. Let’s dissect it for the year 2022:

956 tickets were resolved over the course of the year. That’s a lot of work for a small company! Luckily, 865 tickets were created by our tool, so they don’t always require actual activity. But 91 tickets were things we needed to do but could not have anticipated (or else we would have created a schedule for them). That is two things per week that “surprised” us.
If you look at these numbers with a different mindset, you can see the effects of consistent automation: Our administration has an automation factor of 90 percent! We mostly deal with routine tasks and can rely on defined, documented and automated processes. That’s quite an achievement and I still remember the times when we had lower factors. They were more “interesting” (in the sense of the proverbial curse “may you live in interesting times”).
I want to add another perspective to these numbers: We also track our work time and assign it to different projects, with “administration” being one of them. In the year 2022, we booked approximately 600 person hours of work for “administration” of the company. So we spent circa 35 minutes on each ticket. This is a misleading number, because there were lots of tickets that require no more than a few minutes and some that can hog our attention for days. We also track detailed “time per ticket” numbers, but only use this data to extract “expected durations” for the most important and time-consuming tasks. This helps us to plan our administrative work around the customer project schedules.

I could write a lot more about our different topics in the administration section of our ticket system. We have identified around 20 distinguishable topics. But it would become more boring over time, so I close this blog entry with one last topic that is very important for an IT company: the “IT administration” or “operations”.

To keep our IT systems up and running, we worked on 48 different tickets, but only 13 of them were automatically created. This seems rather low when compared to the business administration with its roughly 1000 tickets, but the comparison is misleading. Our IT administration is nearly fully automated, so routine work doesn’t create tickets, but starts automation runs (Jenkins builds, Ansible playbooks and such). The 48 tickets were additional work or, in the case of the 13 tickets, recurring work that requires human oversight and interference.
I’m glad that this number is as low as it is. It means that the IT runs smoothly and rather silently. The 115 person hours booked for it tell the same story: Our IT is low maintenance. A tad more than 2 hours per week is an affordable price.

I hope this blog entry was entertaining enough to give you an idea of how we make things visible in our company. We use the data to test hypotheses, expose problems and track our improvement efforts. Without this data, we could only rely on assumptions, feelings and spotty memory. By reading the numbers, we (or at least I) get a feeling for the intricacies of the company that translate down to the day-to-day work and make intuitive and appropriate management possible.

If you want to know more, feel free to leave a comment!

Fun with docker container environment variables

Docker (as one specific container technology product) is a basic ingredient of our development infrastructure that has steadily gained ground, from the production servers over the build servers to our development machines. And while it is not simple when used for operations, the complexity increases a lot when it is used for development purposes.

One way to express complexity is by making the moving parts configurable and using different configurations. A common way to make things configurable with containers is environment variables. Running a container might look like an endurance typing contest if they are used extensively:

docker run --rm \
-e POSTGRES_USER=myuser \
-e POSTGRES_PASSWORD=mysecretpassword \
-e POSTGRES_DB=mydatabase \
-e PGDATA=/var/lib/postgresql/data/pgdata \
ubuntu:22.04 env

This is where our fun begins.

Using an env-file for extensive configurations

The parameter --env-file reads environment variables from a local text file with a simple key=value format:

docker run --rm --env-file my-vars.env ubuntu:22.04 env

The file my-vars.env contains all the variables line by line:

FIRST=1
SECOND=2

If we run the command above in a directory containing the file, we get the following output:

PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=46a23b701dc8
FIRST=1
SECOND=2
HOME=/root

The HOSTNAME might vary, but the FIRST and SECOND environment variables are straight from our file.

The only caveat is that the env-file really has to exist, or we get an error:

docker: open my-vars2.env: Das System kann die angegebene Datei nicht finden.

My beloved shell (the German message translates to “The system cannot find the file specified.”)

The env-file can be empty, contain only comments (use # to begin them) or whitespace, but it has to be present.
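
For illustration, a hypothetical env-file that docker happily accepts could look as sparse as this:

# nothing configured yet

# still a valid env-file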

Please be aware that these env-files are different from the .env-file(s) in docker-compose. A lot of fun, like variable expansion, is lost with this simple statement. As far as I’m aware, there is no .env-file mechanism in docker itself.

But we can have some kind of variable substitution, too:

Using multiple env-files for layered configurations

If you don’t want to change all your configuration entries all the time, you can layer them. One layer for the “constants”, one layer for global presets and one layer for local overrides. You can achieve this with multiple --env-file parameters; they are evaluated in the order you specify them:

docker run -it --rm --env-file first.env --env-file second.env ubuntu:22.04 env

Let’s assume that the content of first.env is:

TEST=1
FIRST=1

And the content of second.env is:

TEST=2
SECOND=2

The results of our container call are (abbreviated):

TEST=2
FIRST=1
SECOND=2

You can see that the second TEST assignment wins. If you switch the order of your parameters, you would read TEST=1.

Now imagine that first.env is named global.env and second.env is named local.env (or default.env and development.env) and you can see how this helps you with modular configurations. If only the files didn’t have to exist all the time, it would even fit well with git and .gitignore.

The best thing about this feature? You can have as many --env-file parameters as you like (or your operating system allows).

Mixing local and configured environment variables

We don’t have explicit variable expansion (like TEST=${FIRST} or something) with --env-files, but we have a funny poor man’s version of it. Assume that the second.env from the example above contains the following entries:

OS
TEST=2
SECOND=2

You’ve seen that right: The first entry has no value (and no equal sign)! This is when the value is substituted from your operating system:

TEST=2
FIRST=1
OS=Windows_NT
SECOND=2

By just declaring, but not assigning, an environment variable, its value is taken from your own environment. This even works if the variable was already assigned in previous --env-files.

If you don’t believe me, this is a documented feature:

If the operator names an environment variable without specifying a value, then the current value of the named variable is propagated into the container’s environment

https://docs.docker.com/engine/reference/run/#env-environment-variables

And even more specific:

When running the command, the Docker CLI client checks the value the variable has in your local environment and passes it to the container. If no = is provided and that variable is not exported in your local environment, the variable won’t be set in the container.

https://docs.docker.com/engine/reference/commandline/run/#set-environment-variables--e---env---env-file

This is a cool feature, albeit a little bit creepy. Sadly, it doesn’t work in all tools that allow you to run docker containers. Last time I checked, PyCharm omitted this feature (as one example).

Epilogue

I’ve presented you with three features that can be used to manage different configurations for docker containers. There are some pain points (non-optional file existence, feature loss in tools, no direct variable expansion), but also a lot of fun.

Do you know additional tricks and features in regard to environment variables and docker? Comment below or link to your article.

Applying the Golden Circle to software development

When I was a young and impressionable software developer (1998), the slogan of the year was “the code is the documentation”. This essentially meant that comments, and (inline) code comments in particular, were a sign of bad code. The reasoning was (and still is) that if you can’t articulate your ideas clearly enough in source code, adding additional text won’t rescue your communication. “Communication” means conveying what you want to accomplish with your code to two listeners: the computer and the next human that works with your code. The computer is often the easier part, because it just does as it is told without interpretation.

Some years later (2004), Jeff Atwood, the author of the influential codinghorror blog, still condemns most of the comments we could find in contemporary source code. But there is more nuance than just “comments are bad” and it is cleared up later (2006) that there is a difference in what should be expressed in source code and what should be expressed in comments. Yes, you should write comments, but the “right kind”.

According to Jeff Atwood, the source code contains the “how”. It tells the story in all its details for the next human and the computer. The comments, on the other hand, are not intended for the computer and shouldn’t contain details. They should contain the “why”, the high-level picture and the motivation behind the specific “how” that we find.

“The code tells you how, the comments tell you why” (2006) is a great way to describe the expected content of both “layers”.

But my feeling was (and still is) that something is missing in that description. My programs aren’t just code and comments, there are more things that I try to tell my story with. And just a few weeks ago, I had a sudden idea that I might be able to describe what that missing piece could be. It is just an idea and the puzzle probably is still missing pieces, but it feels “right” enough that I want to write this blog post to discuss the idea. But I need to talk about something else first.

In 2009, Simon Sinek presents the Golden Circle to the world. The TED talk is probably the most energetic piece of explanation in human history. The Golden Circle is “the world’s simplest idea”, defining three layers of “clarity” to actions:

  • What: The “lowest” level, meaning that every person knows “what they are doing”.
  • How: The “intermediate” level. Some people know “how they do it”. They explicitly choose their method of doing and can reason about it.
  • Why: The “highest” level. According to Simon Sinek, only a few people know “why they do it”. He talks about “purpose, cause and belief”.

If you don’t know about the Golden Circle yet, please watch the 20-minute TED talk while I wait here for you. If you want to think about it first – I’m patient. The Golden Circle has inspired and guided me ever since. Not that I’m very good at applying it in my business or personal life, but it stayed with me and gave me a coordinate system to categorize things.

And with that categorization practice, I feel as if the layers in Jeff Atwood’s blog post from 2006 are misnamed and one crucial layer is missing:

  • What: The code tells you what (not how!). It is the detailed step-by-step recipe to replicate a behaviour. It is so simple that even a computer can do it, without being aware of it.
  • How: This is the missing layer. I want to talk about it in a minute.
  • Why: That’s the part that is correctly placed and named: The why of a story requires the spoken word. Only comments are viable for this kind of information. The computer (as of today) has no understanding of what any of it means.

The missing layer is the “How”. In my idea, I envisioned that everything that is deliberately put in the code, but not readily understood by a computer, like structures, patterns, idioms or even names (I call them “activated comments”), is there for the “How”. We structure our code not because the computer requires it; the compiling stages of our programming languages even interfold and compact it until it is a binary blob. We don’t name our variables and types because the computer would learn something from them. The first thing a compiler does is to shorten our names to unreadable “symbols”. Most of our patterns get replaced by other, more minute patterns that the compiler puts into place. We put these things in our code because they help us understand “how the story goes”. They provide us with guidance on how to make sense of the mess. The structure tells you how to approach the program.

The computer only knows about “What”. There are maybe some indications about a simple “How”-awareness in some technologies, but most of the time, the computer is deaf to human communication.

Which brings me to my description of code and comment, as an updated version of Jeff Atwood’s motto:

“The code tells you what, the structure tells you how and the comments tell you why”

By “structure”, I mean everything that gets lost during translation for the computer, but is visible for the human reader. It entails high-level things like “architecture” or “code design” and lower-end decisions like names and formatting. If you have a better word for it, let me know!
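
To make the three layers a bit more tangible, here is a small, entirely made-up Java sketch (the domain, the names and the business rule are invented for illustration):

import java.math.BigDecimal;

interface Customer {
    boolean hasFrameworkContract();
}

class InvoiceTotal {
    // Why (comment): framework customers get a flat 5% discount,
    // a condition from older contract negotiations.
    BigDecimal discountFor(Customer customer, BigDecimal total) {
        // How (structure): the type names, the method name and the guard
        // clause carry the story for the human reader.
        if (!customer.hasFrameworkContract()) {
            return BigDecimal.ZERO;
        }
        // What (code): the computer only sees the bare arithmetic.
        return total.multiply(new BigDecimal("0.05"));
    }
}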

I hope that this blog post inspired you to have a thought yourself. Don’t hesitate to tell us your thoughts in the comments below.

Help me with the Spiderman Operator

From time to time, I encounter a silly syntax in Java that I silently dubbed the “spiderman operator” because of all the syntactical pointing that’s going on. My problem is that it’s not very readable, I don’t know an alternative syntax for it and my programming style leads me to it more often than I am willing to ignore.

The spiderman operator looks like this:

x -> () -> x

In its raw form, it means that you have a function that takes x and returns a Supplier of x:

Function<X, Supplier<X>> rawForm = x -> () -> x;

That in itself is not very useful or mysterious, but it gets funnier if you take into account that Supplier<X> is just one possible type you can return (because in Java, as long as the signature fits, the thing sits).
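
Here is a minimal sketch of the “signature fits” part, using only JDK types (the class name is made up): the very same lambda expression can target different functional interfaces, as long as their single method takes no arguments and returns the right type.

import java.util.concurrent.Callable;
import java.util.function.Function;
import java.util.function.Supplier;

class SignatureFits {
    // the identical lambda shape targets two different functional interfaces,
    // because both single methods take no arguments and return a String
    Function<String, Supplier<String>> asSupplier = x -> () -> x;
    Function<String, Callable<String>> asCallable = x -> () -> x;
}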

A possible use case

Let’s define a type that is an interface with just one method:

public interface DomainValue {
    BigDecimal value();
}

In Java, the @FunctionalInterface annotation is not required for the interface to be, in fact, a functional interface. It only needs to have exactly one method without implementation. But how can we provide methods with implementation in Java interfaces? Default methods are the way:

@FunctionalInterface
public interface DomainValue {
    BigDecimal value();

    default String denotation() {
        return getClass().getSimpleName();
    }
}

Let’s say that we want to load domain values from a key-value-store with the following access method:

Optional<Double> loadEntry(String key)

If there is no entry with the given key or the value cannot be interpreted as a double, the method returns Optional.empty(). Otherwise it returns the double value wrapped in an Optional shell. We can convert it to our domain value like this:

Optional<DomainValue> myValue = 
    loadEntry("current")
        .map(BigDecimal::new)
        .map(x -> () -> x);

And there it is, the spiderman operator. We convert from Double to BigDecimal and then to DomainValue by saying that we want to convert our BigDecimal to “something that can supply a BigDecimal”, which is exactly what our DomainValue can do.

A bigger use case

Right now, the DomainValue type is nothing more than a thin wrapper around a numerical value. But we can expand our domain to have more specific types:

public interface Voltage extends DomainValue {
}
public interface Power extends DomainValue {
    @Override
    default String denotation() {
        return "Electric power";
    }
}

Boring!

public interface Current extends DomainValue {
    default Power with(Voltage voltage) {
        return () -> value().multiply(voltage.value());
    }
}

Ok, this is maybe no longer boring. We can implement a lot of domain functionality just in interfaces and then instantiate ad-hoc types:

Voltage europeanVoltage = () -> BigDecimal.valueOf(220);
Current powerSupply = () -> BigDecimal.valueOf(2);
Power usage = powerSupply.with(europeanVoltage);

Or we load the values from our key-value-store:

Optional<Voltage> maybeVoltage = 
    loadEntry("voltage")
        .map(BigDecimal::new)
        .map(x -> () -> x);

Optional<Current> maybeCurrent = 
    loadEntry("current")
        .map(BigDecimal::new)
        .map(x -> () -> x);

You probably see it already: We have some duplicated code! The strange thing is, it won’t go away so easily.

The first call for help

But first I want to sanitize the code syntactically. The duplication is bad, but the spiderman operator is just unreadable.

If you have an idea how the syntax of the second map() call can be improved, please comment below! Just one request: Make sure your idea compiles beforehand.

Failing to eliminate the duplication

There is nothing easier than eliminating the duplication above: The code is syntactically identical and only the string parameter is different – well, and the return type. We will see how this affects us.

What we cannot do:

<DV extends DomainValue> Optional<DV> loadFor(String entry) {
    Optional<BigDecimal> maybeValue = load(entry);
    return maybeValue.map(x -> () -> x);
}

Suddenly, the spiderman operator does not compile with the error message:

The target type of this expression must be a functional interface

I can see the problem: Subtypes of DomainValue are not required to stay compatible with the functional interface requirement (just one method without implementation).

Interestingly, if we work with a wildcard for the generic, it compiles:

Optional<? extends DomainValue> loadFor(String entry) {
    Optional<BigDecimal> maybeValue = load(entry);
    return maybeValue.map(x -> () -> x);
}

The problem is that we still need to downcast to our specific subtype afterwards. But we can use this insight and move the downcast into the method:

<DV extends DomainValue> Optional<DV> loadFor(
    String entry,
    Class<DV> type
) {
    Optional<BigDecimal> maybeValue = load(entry);
    return maybeValue.map(x -> type.cast(x));
}

Which makes our code readable enough, but at the price of using reflection:

Optional<Voltage> european = loadFor("voltage", Voltage.class);
Optional<Current> powerSupply = loadFor("current", Current.class);

I’m not a fan of this solution, because downcasts are dangerous and reflection is dangerous, too. Mixing two dangerous things doesn’t neutralize the danger most of the time. This code will fail during runtime sooner or later, without any compiler warning us about it. If you don’t believe me, add a second method without implementation to the Current interface and see if the compiler warns you. Hint: This is what you will see at runtime:

java.lang.ClassCastException: Cannot cast java.math.BigDecimal to Current

But, to our surprise, it doesn’t even need a second method. The code above doesn’t work. Even if we reintroduce our spiderman operator (with an additional assignment to help the type inference), the cast won’t work:

<DV extends DomainValue> Optional<DV> loadFor(
    String entry,
    Class<DV> type
) {
    Optional<BigDecimal> maybeValue = load(entry);
    Optional<DomainValue> maybeDomainValue = maybeValue.map(x -> () -> x);
    return maybeDomainValue.map(x -> type.cast(x));
}

The ClassCastException just got a lot more mysterious:

java.lang.ClassCastException: Cannot cast Loader$$Lambda$8/0x00000008000028c0 to Current

My problem is that I am stuck. There is working code that uses the spiderman operator and produces code duplication, but there is no way around the duplication that I can think of. I can get objects for the supertype (DomainValue), but not for a specific subtype of it. If I want that, I have to accept duplication. Or am I missing something?

The second call for help

If you can think about a way to eliminate the duplication, please tell me (or us) in the comments. This problem doesn’t need to be solved for my peace of mind or the sanity of my code – the duplication is confined to a particular place.

Being used to roaming nearly without boundaries in Java syntax (25 years of thinking in Java will do that to you), this particular limitation hit hard. If you can give me some ideas, I would be grateful.

The layer cake approach: Docker multi-stage builds and Dev containers

One recent feature that has the ability to change the way developers work on their local machines is dev containers. In short, a dev container is a docker container that provides a working environment for a specific project and can be used by the IDE to develop the project without ever installing anything on the developer machine except the IDE, docker and git. It is especially interesting if you often switch between projects with different ecosystems or the same ecosystem in different versions.

Most modern IDEs support dev containers, even if it still feels a bit unpolished in some implementations. In my opinion, the advantages of dev containers outweigh the additional complexity. The main advantage is the guarantee that all development systems are set up correctly and in sync. Our instructions to set up projects shrank from a lengthy wiki page to four bullet points that are essentially the same process for all projects.

But one question needs to be answered: If you have a docker-based build and perhaps even a docker-based delivery and operation, how do you keep their Dockerfiles in sync with your dev container Dockerfile?

My answer is to not split the project’s Dockerfiles, but layer them in one file as a multi-stage build. Let’s view the (rather simple) Dockerfile of one small project:

# --------------------------------------------
FROM python:3.10-alpine AS project-python-platform

COPY requirements.txt requirements.txt
RUN pip3 install --upgrade pip
RUN pip3 install --no-cache-dir -r requirements.txt

# --------------------------------------------
FROM project-python-platform AS project-application

WORKDIR /app

# ----------------------
# Environment
# ----------------------
ENV FLASK_APP=app
# ----------------------

COPY . .

CMD python -u app.py

The interesting part is introduced by the horizontal dividers: The Dockerfile is separated into two parts, the first one called “project-python-platform” and the second one “project-application”.

The first build target contains everything that is needed to form the development environment (python and the project’s requirements). If you build an image from just the first build target, you get your dev container’s image:

docker build --pull --target project-python-platform -t dev-container .

The second build target uses the dev container image as a starting point and includes the project artifacts to provide an operations image. You can push this image into production and be sure that your development effort and your production experience are based on the same platform.
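
A corresponding build command for the operations image might look like this (the image tag is just a placeholder; without a --target parameter, docker builds the last stage by default anyway):

docker build --pull --target project-application -t project-application .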

If you frowned because of the “COPY . .” line: for small projects, we make use of the .dockerignore feature to keep unwanted files out of the image.
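
A hypothetical .dockerignore for such a project might exclude the usual suspects (the entries are assumptions, not the actual file):

.git
.devcontainer
.venv
__pycache__
*.pyc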

Combining multi-stage builds with dev containers keeps all stages of your delivery pipeline in sync, even the zeroth stage – the developer’s machine.

But what if you have a more complex scenario than just a python project? Let’s look at the Dockerfile of a python project with an included javascript/node/react sub-project:

# --------------------------------------------
FROM python:3.9-alpine AS chronos-python-platform

COPY requirements.txt requirements.txt

RUN pip3 install --upgrade pip
RUN pip3 install --no-cache-dir -r requirements.txt

# --------------------------------------------
FROM node:16-bullseye AS chronos-node-platform

COPY chronos_client/package.json .
COPY chronos_client/package-lock.json .

RUN npm -v
RUN npm ci --ignore-scripts

# --------------------------------------------
FROM chronos-node-platform AS chronos-react-builder

COPY chronos_client .

# build the client javascript application
RUN npm run build

# --------------------------------------------
FROM chronos-python-platform AS chronos-application

WORKDIR /app

COPY . .

# set environment variables
ENV ZEITGEIST_PASSWORT=''

COPY --from=chronos-react-builder /build ./chronos_client/build

CMD python -u chronos.py

It is the same approach, just more layers on the cake, four in this example. There is one small caveat: If you want to build the “chronos-node-platform”, the preceding “chronos-python-platform” gets built, too. That delays things a bit, but only once in a while.

The second to last line might be interesting if you aren’t familiar with multi-stage builds: The copy command takes compiled files from the third stage/layer and puts them in the final layer that is the operations image. This ensures that the compiler is left behind in the delivery pipeline and only the artifacts are published.

I’m sure that this layer cake approach is not feasible for all project setups. It works for us for small and medium projects without too much polyglot complexity. The best aspect is that it separates project-specific knowledge from approach knowledge. The project-specific things get encoded in the Dockerfile, the approach knowledge is the same for all projects and gets documented in the Wiki – once.

Don’t test details from a distance

The concept described in this blog entry has evoked a lot of different metaphors and descriptions from our team when we discussed it. So don’t take my words or thoughts on it as the one true way to talk about it. The concept is the “testing gap”, the distance between the code under test and the test’s vantage point.

Before I describe my metaphor for it with some weird visuals, let’s look at some code:

public Budget(String denotation, int maximumHours) {
    this.denotation = denotation;
    this.maximumHours = maximumHours;
    this.currentHours = maximumHours;
}

This is the constructor for an entity, a domain class that represents a budget of work hours that gets slowly used up when you work for the customer’s project. There is not much going on in this code except one little detail of the domain: New budgets always start fully “filled up”, in that the currentHours are set to the maximumHours. You can’t create a budget that is already half empty with this code.

Such a domain concept or “business rule” requires a test that ensures it is still in place:

@Test
public void has_initially_current_hours_set_to_maximum() {
    Budget target = new Budget(
        "current is maxed",
        100
    );
    assertThat(target.getMaximumHours()).isEqualTo(100);
    assertThat(target.getCurrentHours()).isEqualTo(100);
}

This is a fairly boring unit test that ensures that freshly created budgets have all their working hours still available.

In our example, the entity lives in the core of a web application that provides an endpoint to create new budgets. We have a test for the endpoint, of course:

@Test
public void stores_new_budget() throws Exception {
    this.web.perform(
        post("/budgets")
        .contentType(MediaType.APPLICATION_JSON)
        .content("{\"denotation\": \"new budget\", \"maximumHours\": 300}")
    )
    .andExpect(status().isOk())
    .andExpect(content().json("{\"denotation\": \"new budget\", \"maximumHours\": 300, \"currentHours\": 300}"));
}

You can shudder at the code formatting or the necessity to escape your JSON data into inscrutability. At least the second problem more or less disappears with current Java versions. But that’s not the point today. The point is that this is effectively the same test as above, but with a gap in between.

If you wrote just the second test, your code coverage metrics would probably not decrease. Your business rule would still be tested. So why write the first test if it adds nothing to the safety net?

This can be explained with the idea that there is a considerable “testing gap” between the second test and the business rule. It covers the entity’s constructor code and states explicitly that the currentHours property should be set to the same value as the maximumHours property. But it also defines the communication protocol as being HTTP, the data format as being JSON and travels through code that finds an “endpoint” for the given URL, maps the given JSON to a constructor call and serializes the resulting object as JSON back to the requester. That would be a lot of padding just to test the constructor’s third line.

The first test has virtually no testing gap. It knows nothing about the web, data formats or whatever else the application consists of. It just looks at the entity and its behaviour in isolation.

There are perfectly valid reasons to write the second test, but it should not be the only test that ensures the business rule in our example. The second test “sees too much” from its vantage point to pay attention to a little detail like the business rule.

In case you didn’t quite get the concept of the testing gap yet, here is how I imagine it in my head: If your code under test is a mystery box (really try to picture a shoebox made of cardboard that rattles when you move it), then your test is a big floating eye that uses little cracks and holes in the box to get a quick peek inside. If you exhibit state by getter methods like in our example, the eye checks the internal state of the box by looking at the gauges that are placed on its sides.

If your testing gap is small, the eye hovers up close to the box. It doesn’t see anything else, but it notices every detail of the box.

If you have some testing gap in your test code, the eye is placed at a considerable distance from the mystery box. There are other important things between them. The gauges aren’t directly readable. The eye uses indirect clues and reflections to gather its information. Every time something in the gap’s setup changes, the testing eye needs to adjust its gaze.

Which brings us to the conclusion of this metaphor: If you want things to be looked at in detail, write tests without a testing gap. Otherwise, your tests will have increased execution times, exhibit a strange imprecision in their message (“something in these dozen classes has changed and it might not even be relevant”) and require frequent adjustments that are not related to their testing story.

Or, if said with the words of my imagination, place your testing eye directly at the entrance of your test’s hideout.

You’ve probably thought about this concept already, in your own terms and metaphors. Can you try to describe it in a comment? Just for the name, we discussed “testing distance”, “testing height”, “testing gap” and others. Perhaps we like your description even better.

The charged charging switch

In this blog post, I’ll describe my experiences with a certain product (a computer monitor) and its manual. It might serve as an example of how ridiculous a poorly designed customer experience feels on the receiving end. Hopefully, it inspires some readers to think about sensible defaults and how to communicate them.

Let’s start with the context. In a previous blog post, I described my journey from one small monitor to four monitors in total (three big ones, one small additional one). Well, it is not just my journey – all of my co-workers have now four computer monitors for their office workplace.

This meant that we bought a lot of smaller monitors in the last months. We decided to go the monoculture route and bought the first unit of our favorite model.

It arrived faulty. The only thing that this device did was to indicate “battery full” when the battery status button was pressed (yes, this particular monitor has its own battery for mobile usage). Everything else didn’t work, especially not the power button. The device was a dead fish. I returned it to the supplier.

The replacement unit was also dead on arrival. This puzzled me, because the odds of having two duds in a row seem very small. So I investigated and found an interesting fact: The unpacking and assembly instruction sheet is incomplete. Well, even more than that. It’s plain misleading.

It starts with a big lettered alert that reads “Please follow the illustration and text description strictly when opening the package and installing the display.” It then shows three illustrations of a totally different monitor and ends the instructions at the step when the styrofoam is removed (and no cables attached). At the bottom of the sheet, there’s an explanation: “The machine picture and styrofoam shown are for illustration purpose only and may differ from the actual product”. You can’t make this up.

The manual urges me to follow it “strictly” and then vaguely tells me how to unwrap the monitor from the styrofoam and nothing more. Even better, in the illustrations, there are different options given like “For binding-less, please ignore the untying action” (actual quote!). You can’t follow strictly if given multiple options and hand-wavey instructions. “Unpack the monitor correctly” is more actionable than this manual.

But that was just the beginning. The user manual actually references the correct monitor and gives usage instructions for common use cases, but it lacks a troubleshooting section. The user manual starts with a working device – and my device(s) don’t work. They don’t turn on if the power button is pressed – and it has to be pressed for 3 seconds to turn on the monitor! Yes, the manual is clear on this one: To turn the monitor on by using its power button, you have to press for three, long, “twenty-two”, tedious, “twenty-three”, seconds. That’s like having a light switch, but if you press it in the dark, it requires you to keep pressing because it could be a mistake – do you really want to have the lights on?

The device is still dead, the manual is no help for my situation, so I inspect the material a little more thoroughly. There is a sticker at the bottom of the monitor (at the opposite side from the power plug and the power button) that catches my eye. I have photographed it, because nobody would believe me otherwise. Here it is:

The first sentence is a no-brainer. But the second one is a head-scratcher: “Please turn on the charging switch for the first time”.

There is no mention of a “charging switch” in the manual. There is no switch labeled “charging” on the device. All the buttons/switches and ports that are present are described in the manual and can’t be interpreted as a “charging switch”.

But if you look at the sticker more closely, you’ll see the illustration at the right side. In reality, it is 3 mm wide and 18 mm in height. It is very small. Even smaller are the depicted things – they resemble the input ports on the right side! From the bottom up, there is a USB-C port, a micro-HDMI port and something that is encircled in the illustration. The circle is probably our hint that this is indeed the “charging switch” mentioned on the sticker.

I searched for the switch and only found a notch in the plastic, about 3 mm wide. Only by using a magnifying glass did I find a small black plastic knob at the bottom of the notch (2 mm deep). The knob is probably one square-millimeter tiny. It was situated more to the top of the notch.

I have built electronics since the early nineties. I know how to solder and recognize all kinds of electronic parts. This thing was a DIP-switch, but one of the smallest ones I’ve ever seen. And it wasn’t labeled at all. The only hint we get to search for it is the illustration on the sticker.

So – is it in the “on” position? I decided to find out by moving it down. A paper clip wire was too big to fit, so I used the smallest screwdriver my micro-mechanic screwdriver set would offer. Just a bit smaller and I would have resorted to an actual hair. The DIP-switch moved half a millimeter down and got stuck more to the bottom of the notch.

The monitor suddenly worked – after the three second pressing. The unlabeled “on” position of the unlabeled “charging switch” that you have to manipulate by using the smallest metal rod that you can find in an electronics lab is at the bottom. Good to know.

I won’t reiterate the madness that we just experienced. It gets even worse, so buckle up.

Right now, I have a working monitor that is actually pleasing to use. I buy it again – the same routine. I wonder if I should report the trick to the supplier.

We have more than two workplaces, so I buy the monitor – the same product for the same price – again, but five times now.

I get five packages with identical content. Well, nearly identical. The stickers are different!

Three monitors have the same sticker as seen above. One of them needs to be switched to turn on, the other two were already in the “on” position.

But the other two monitors have a different sticker:

Both monitors were already in the “on” position, so nothing needed to be done. But this sticker tells you to leave the charging switch alone – a switch that is never mentioned in the manual, that is so small that you will probably miss it even if you search for it and that needs special equipment to be changed. That’s as if my refrigerator came with a warning sticker not to disable a particular fuse when this fuse is safely hidden away in the internals of the refrigerator’s electronics and never mentioned in the manual. Why point it out if my only job is to ignore it?

Remember the first manual that “strictly” tells a vague story? This is the same logic. And it gets even better with the second sentence, the one with an exclamation mark! “Let it keep the factory state!” means that it is turned off when coming from the factory? Or does it mean to keep it in the state that is delivered, regardless of the monitor being functional or disabled by it?

I still don’t know what the “on” position of this switch really is and now I’m even more confused than before.

My mind invented this elaborate fantasy story about a factory that produces monitors. One engineer is tasked with designing the charging functionality and adds the “charging switch” to enable or disable the whole feature. But she/he forgets to remove it before the blueprint is committed into production and now the switch is part of the consumer product. The DIP switch is on the “off” position by default from its producer. This renders the first batches of monitors useless because the documentation doesn’t mention the magic switch that needs to be flipped once to have the monitors turn on. The return rates are horrendous and management gets involved. They decide to get rid of the problem by applying a quick fix – the first sticker. This sentences their customers to perform a scavenger hunt of subtle hints to have the monitors work. They also install a new production line station – the switch flipper. This person needs training and is only available for the day shift – Half of the monitors leave the factory with the switch in the “on” position, the other half is in the “off” position. The first sticker remains, it is still a mystery, but the return rates are cut in half nearly overnight.

In my story, the original engineer recognizes her/his error and tries to correct it – by reversing the switch positions. The default position (“off”) now enables the feature, while the “on” position disables it. Just by turning the (still unlabeled) positions around, the factory produces ready-to-use monitors without requiring intervention from the customer.

The problem? A lot of customers have now learned the switch-flip trick and deactivate their product. And the switch flipper still deactivates half of the production without noticing. They need to inform their customers! They apply the second sticker, hoping to clear this matter once and for all.

And here I am, having bought 7 monitors so far and received nearly every possible combination of sticker and initial switch position. I am more confused and wary than if they had stuck to their original approach and just updated their manual.

But there is one indicator that might be helpful: The serial numbers of the monitors start with some letters and then two digits:

  • 79: You get sticker 1 and need to flip the switch
  • 99: You get sticker 2 and need not flip the switch
  • 69: You get sticker 1, but the switch is already flipped

At least that was my observation with the samples at hand.

What can we, as software developers, learn from this disaster?

First, keep an eye on your feature switches! One non-sensible default and you chase that error forever.

Second, don’t compensate for the first error by making the complementary error, too. Sometimes, the cure is worse than the disease.

Third, don’t ever not avoid negative logic! Boolean logic is hard enough itself, if you further complicate it, people like me will just resort to guessing and trial-and-error.

Fourth, and that is the most important one for me: Don’t explain things that need no attention from the user. I’m definitely guilty of that one. Often, I want my documentation to be “complete” and to “show all opportunities” when all I do is confuse my users with sentences like “Do not turn on the charging switch. Let it keep the factory state!” and then never mention the “charging switch” anywhere again.

Effective computer names with DNS aliases

If you have a computer in a network, it has a lot of different names and addresses. Most of them are chosen by the manufacturer, like the MAC address of the network device. Some are chosen by you, like the IP address in the local network. And some need to be chosen by you, like the computer’s name in your local DNS (domain name service).

A typical indicator for an under-managed network is the lack of sufficiently obvious computer names in it. You want to connect to the printer? 192.168.0.77 it is. You need to access the network drive? It is reachable under nas-producer-123.local. You can be sure that either of these names change as soon as anything gets modified in the network.

Not every computer in a network needs a never-changing, obvious name. If you connect a notebook for some hours, it can be addressable only by 192.168.0.151 and nobody cares. But there will be computers and similar network devices like printers that stay longer and provide services to others. These are the machines that require a proper name, and probably not only one.

Our approach is a layered one, with four layers:

  • MAC-address, chosen by the manufacturer
  • IP address, chosen by our DHCP
  • Device name, chosen by our DNS
  • Device aliases, chosen by our DNS

Of course, our DHCP and our DNS are told by our administrator what addresses and names to give out. Our IP addresses are partitioned into sections, but that is not relevant to the users.

The device name is a mapping of a name to an IP address. It is chosen by the administrator in the case of a server/service machine. It will tell you about the primary service, like “printer0”, “printer1” or “nas0”. It is not a creative name and should not be remembered or used directly. If the machine has a direct user, like a workstation or a notebook, the user gets to choose the name. The only guideline is to keep it short, the rest is personal preference. This name should only be remembered by the user.

On top of the device name, each machine gets one or several additional DNS names, in the form of DNS aliases (CNAME records). These are the names we work with directly and should be remembered. Let’s see some examples:

I want to print on the laser printer: “laserprinter.local” is the correct address. It is an alias to printer0.local which is a mapping to 192.168.0.77 which resolves to a specific MAC address. If the laser printer gets replaced, every entry in this chain will probably change, except for one: the alias will point to the new printer and I don’t have to care much about it (maybe I need to update my driver).

I want to access the network drive: “nas.local” is one possibility. “networkdrive.local” is another one. Both point to “nas0” today and maybe “nas1” tomorrow. I don’t need to care which computer provides the service, because the service alias always points to the correct machine.

I want to connect to my colleague’s workstation: Because we have different naming preferences, I cannot remember that computer’s name. But I also don’t have to, because the computer has an alias: If my colleague’s name is “Joe”, the computer’s alias is “joe.local”, which resolves to his “totallywhackname.local”, which points to the IP address, etc. There is probably no more obvious DNS name than “joe.local”.

Another thing that we do is give a service its purpose as a name. This blog is run by wordpress, so we would have “wordpress.local”, but also “blog.local” which is the correct address to use if you want to access the blog. Should we eventually migrate our blog to another service, the “blog.local” address would point to it, while the “wordpress.local” address would still point to the old blog. The purpose doesn’t change, while the product that provides it might some day.
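
In zone-file terms, such a layering might look like this hypothetical sketch (the names follow the examples above, the addresses other than the printer’s are made up):

; device names (A records) and their service aliases (CNAME records)
printer0       IN A     192.168.0.77
laserprinter   IN CNAME printer0
nas0           IN A     192.168.0.80
nas            IN CNAME nas0
networkdrive   IN CNAME nas0
wordpress      IN A     192.168.0.90
blog           IN CNAME wordpress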

Of course, maintaining such a rich ecosystem of names and aliases is a lot of work. We don’t type our zone files directly, we use generators that supply us with the required level of comfort and clarity. This is done by one of our internal tools (if you remember the Sunzu blog post, you now know 2 out of our 53 tools). In short, we maintain a table in our wiki, listing all IP addresses and their DNS aliases and linking to the computer’s detail wiki page. From there, the tool scrapes the computer’s name and MAC address and generates configuration files for both the DHCP and DNS services. We can define our whole network in the wiki and have the tool generate the actual settings for us.

That way, the extra effort for the DNS aliases is negligible, while the positive effects are noticeable. Most network modifications can be done without much reconfiguration of dependent services or machines. And it all starts with alias names for your computers.