Unit-Testing Deep-Equality in C#

In the suite of redux-style applications we are building in C#, we make extensive use of value types, which implies that a value compares as equal exactly if all of its contents are equal – also known as “deep equality”, as opposed to “reference equality” or “shallow equality”. Both of those imply deep equality, but the other way around is not true: the same object is of course equal to itself, no matter how deep you look, and an object that references the same data as another object also has equal content. But two objects that contain different list instances with equal content will be unequal under shallow comparison, yet equal under deep comparison.

Though init-only records already provide a per-member comparison as Equals by default, this fails for collection types such as ImmutableList<> that, against all intuition but in accordance with their documentation, only provide reference equality. For us, this means that we have to override Equals for any value type that contains a collection. And this is where the trouble starts. Once Equals is overridden, it’s extremely easy to forget to also adapt it when adding a new property. Since our redux-style machinery relies on a proper “unequal”, this would manifest in the application as a sporadically missing UI update.

So we devised a testing strategy for those types, using a little bit of reflection:

  1. Create a sample instance of the value type with no member retaining its default value
  2. Test, by going over all properties and comparing each to the same property in a default instance, that all members in the sample are indeed non-default
  3. For each property, compare (via Equals) the sample instance to a modified sample instance with that property reset to the value from a default instance.

If step 2 fails, it means there’s a member that’s still at its default value in the sample instance, e.g. the test wasn’t updated after a new property was added. If step 3 fails, the sample was updated, but the new property is not considered in Equals – and it can even tell which property is missing.
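
Here is a minimal sketch of such a test helper (not our actual test code – the generic constraint, the exception-based reporting and the naive reflection clone are assumptions for illustration):

using System;
using System.Linq;
using System.Reflection;

public static class DeepEqualityTests
{
    // The caller provides a sample instance with every property set to a non-default value.
    public static void AssertEqualsConsidersAllProperties<T>(T sample) where T : new()
    {
        var defaultInstance = new T();

        foreach (var property in typeof(T).GetProperties(BindingFlags.Public | BindingFlags.Instance))
        {
            var sampleValue = property.GetValue(sample);
            var defaultValue = property.GetValue(defaultInstance);

            // Step 2: every member of the sample must differ from its default.
            if (Equals(sampleValue, defaultValue))
                throw new Exception($"Sample still has the default value for property '{property.Name}'.");

            // Step 3: resetting a single property to its default must break equality.
            var modified = CloneWithPropertyReset(sample, property, defaultValue);
            if (Equals(sample, modified))
                throw new Exception($"Property '{property.Name}' is not considered in Equals.");
        }
    }

    private static T CloneWithPropertyReset<T>(T original, PropertyInfo reset, object? defaultValue)
        where T : new()
    {
        // Naive clone via reflection; init-only setters are still writable this way.
        // A record type could use a "with" expression instead.
        var clone = new T();
        foreach (var property in typeof(T).GetProperties().Where(p => p.CanWrite))
            property.SetValue(clone, property.GetValue(original));
        reset.SetValue(clone, defaultValue);
        return clone;
    }
}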

The same problems of course arise with GetHashCode, but they are usually less severe: forgetting to add a property just makes collisions more likely. It can be tested in much the same way, but such a test can potentially produce false positives, because collisions can occur even if all properties are correctly considered in the function. In that case, however, the sample can usually be altered to remove the collision – and such a collision is really unlikely anyway. In fact, we never had a false positive.
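
The GetHashCode check can be added to the same hypothetical helper class, reusing the clone method from the sketch above:

public static void AssertGetHashCodeConsidersAllProperties<T>(T sample) where T : new()
{
    var defaultInstance = new T();

    foreach (var property in typeof(T).GetProperties(BindingFlags.Public | BindingFlags.Instance))
    {
        // Resetting a property to its default should (almost always) change the hash.
        var modified = CloneWithPropertyReset(sample, property, property.GetValue(defaultInstance));
        if (sample!.GetHashCode() == modified!.GetHashCode())
            throw new Exception(
                $"Property '{property.Name}' seems not to be considered in GetHashCode " +
                "(or this is a rare collision - try a different sample value).");
    }
}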

Partitioning in Oracle Database: Because Who Wants to Search an Endless Table?

As data volumes continue to grow, managing large database tables and indexes can become a challenge. This is where partitioning comes in. Partitioning is a feature of database systems that allows you to divide large tables and indexes into smaller, more manageable parts, known as partitions. This can improve the performance and manageability of your database. Aside from performance considerations, maintenance operations, such as backups and index rebuilds, can become easier by allowing them to be performed on smaller subsets of data.

This is achieved by reducing the amount of data that needs to be scanned during query execution. When a query is executed, the database can use the partitioning information to skip over partitions that do not contain the relevant data, instead of having to scan the entire table. This reduces the amount of I/O required to execute the query, which can result in significant performance gains, especially for large tables.

There are several types of partitioning available in Oracle Database, including range partitioning, hash partitioning, list partitioning, and composite partitioning. Each type of partitioning is suited to different use cases and can be used to optimize the performance of your database in different ways. In this blog post we will look at range partitioning.

Range partitioning

Here is an example of range-based partitioning in Oracle:

CREATE TABLE books (
  id NUMBER,
  title VARCHAR2(200),
  publication_year NUMBER
)
PARTITION BY RANGE (publication_year) (
  PARTITION p_before_2000 VALUES LESS THAN (2000),
  PARTITION p_2000s VALUES LESS THAN (2010),
  PARTITION p_2010s VALUES LESS THAN (2020),
  PARTITION p_after_2020 VALUES LESS THAN (MAXVALUE)
);

In this example, we have created a table called books that stores book titles, partitioned by the year of publication. We have defined four partitions, p_before_2000, p_2000s, p_2010s, and p_after_2020.

Now, when we insert data into the books table, it will automatically be placed in the appropriate partition based on the year of publication:

INSERT INTO books (id, title, publication_year)
  VALUES (1, 'Nineteen Eighty-Four', 1949);

This book will be inserted into partition p_before_2000, as the year of publication is before 2000. The following book will be placed into partition p_2000s:

INSERT INTO books (id, title, publication_year)
  VALUES (2, 'The Hunger Games', 2008);

When we query the books table, the database will only access the partitions that contain the data we need. For example, if we want to retrieve data for books published in 2015 and 2016, the database will only access partition p_2010s.

SELECT * FROM books WHERE publication_year >= 2015 AND publication_year <= 2016;
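
If you want to verify that the database actually prunes the partitions for such a query, you can look at the execution plan. A quick sketch (the exact plan output depends on your Oracle version and optimizer settings):

EXPLAIN PLAN FOR
  SELECT * FROM books WHERE publication_year >= 2015 AND publication_year <= 2016;

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
-- Look for an operation like PARTITION RANGE SINGLE and the Pstart/Pstop
-- columns, which show that only partition p_2010s is accessed.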

However, you should be aware that while partitioning can improve query performance for some types of queries, it can also negatively impact query performance for others, especially if the partitioning scheme does not align well with the query patterns. Therefore, you should tailor the partitioning to your needs and check if it brings the desired effect.

Format-based sorting looks clever, but is dangerous

A neat trick I learnt early in my career, even before I learnt about version control, was how to format a date as a string so that alphabetically sorted lists would contain them in the “correct” order:

“YYYYMMDD” is the magic string.

If you format your dates as 20230122 and 20230123, the second name will be sorted after the first one. With nearly any other format, your date strings will not be sorted chronologically in the file system.

I’ve found out that this is also nearly the only format that most people cannot intuitively recognize as a date. So while it is familiar to me and conveniently sorted, it is confusing or at least in need of explanation for virtually every user of my systems.

Keep that in mind when listening to the following story:

One project I adopted is a custom enterprise resource planning system that was developed by a single developer who one day left the company and the code behind. The software was in regular use and in dire need of maintenance and new features.

One concept in the system is central to its users: the list of items in an invoice or a bill of delivery. This list contains items in a defined order that is important to the company and its customers.

To my initial surprise, the position of an item in the list was not defined by an integer, but by a string. This can be explained by the need for “sub-positions” that form a hierarchy of items, like in this example:

1 – basic item

1.1 – item upgrade #1

1.2 – item upgrade #2

Both positions “1.1” and “1.2” are positioned “underneath” position “1” and should be considered glued to it. If you move position “1” to position “4”, you also move 1.1 to 4.1 and 1.2 to 4.2.

But there was a strange formatting thing going on with the positions: they were stored as strings in the database, but with a peculiar padding in front. Instead of “1”, “2” and “3”, the entries contained the positions “  1”, “  2” and “  3”. All positions were prefixed with two space characters!

Well, nearly all positions. As soon as the list grew, the padding turned out to depend on the number of digits in the position: “  9”, but then “ 10” and “100”.

The reason is relatively simple: if you prefix with spaces (or most other characters, maybe “0”), your strings will be ordered in a numerical way. Without the prefixes, they would be sorted like “1”, “10”, “11”, “2”.
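
A small stand-alone C# illustration (not code from the actual system) of why the padding “works” – and where it stops working:

using System;
using System.Linq;

var unpadded = new[] { "9", "10", "100", "2" };
Console.WriteLine(string.Join(", ", unpadded.OrderBy(p => p, StringComparer.Ordinal)));
// prints: 10, 100, 2, 9 -- alphabetical, not the intended numerical order

var padded = new[] { "  9", " 10", "100", "  2" };
Console.WriteLine(string.Join(", ", padded.OrderBy(p => p, StringComparer.Ordinal)));
// prints "  2", "  9", " 10", "100" -- numerical order, but only up to three digits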

That means that the desired ordering of the positions is hardcoded in the database representation! You probably already thought about the case of a position greater than 999. That’s when trouble begins! Luckily, an invoice with a thousand items on the list is unheard of in the company (yet!).

Please note that while the desired ordering is hardcoded in the database, the items are still loaded in a different order (the order they were entered into the system) and need to be sorted by the application. The default sorting for strings is alphabetical, so the original developer was probably clever/lazy, went with it and formatted the data in a way that produces the desired result without additional logic during sorting.

If you look at the code, you see seemingly strange formatting calls to the position all over the place. This is necessary because, for example, every time a user enters a position into the system, it needs to be reformatted (or at least sanitized) in order to adhere to the “auto-sortable” format.

If you wonder what a hierarchical sub-position looks like in this format: it’s “  1.  1”, “  1. 10” and even “  1. 17.  2.  4”. The database stores mostly blanks in this field.

While this approach might seem clever at first glance, it is highly dangerous. It conflates several things that should stay separated, like “storage format” and “display format”, “item order” and “valid value range”. It is a clear violation of the “separation of concerns” principle. And it broke the application when I missed one place where the formatting was required, but not present. Of course, this only manifests as a problem when your test cases (or manual tries) exceed a list of 9 entries – lesson learnt here.

I dread the moment when the company calls to tell me about this “unusually large invoice” that exceeds the 999 limit. This would mean a reformatting of all stored data or another even more clever hack to circumvent the problem.

Did you encounter a format that was purely there for sorting in the wild? What was the story? Tell us in the comments!

Try ending the workday with a beneficial ritual

One thing that is important to me is to start and end the workday with a proven and familiar routine – let’s call it a ritual. There are some advantages to this approach. First, you have a defined starting point. No matter what the day may throw at you, there are some anchors in your structure or environment that you can rely on. For example, I don’t start my work without a (big) filled glass of water on my desk. It might get hectic, but my supply of water is secured until lunch. I make it a habit to empty that glass before lunch, too, but that’s not as important as the ritual of supplying myself with a beverage and only then starting my work.

My guess is that most of you already do this, too. The start of a workday is the natural point in time to install habits or even rituals. But what about the end of your workday? Sure, there is a point in time when you “drop the pen” and rush out the door. But right before this moment, there is a possibility to introduce a beneficial ritual that might only cost minutes, but brings value that furthers your career and even your current work.

My usual ritual is a short daily reflection. That’s not exactly my own idea, I just borrowed it from the Clean Code Developer Initiative. My problem with the CCDI version is the focus on software development alone, which is probably a good start, but too narrow for my work profile.

My adaptation is to have three basic questions that I ask myself at the end of each workday and answer in “articulated thoughts”. You may prefer to say the answers out loud or write them down (Obsidian or similar tools might be suitable for that). My questions to myself are:

  • How do you feel right now?
  • What surprised you today?
  • What do you want to remember from today’s work?

Note how these questions don’t deal with details of your current work. If you have specific topics that you want to reflect on, you can always add some more questions for a period of time. I have found it important not to skip or replace the three basic questions, though.

“How do you feel?” is a complicated question because it leads to your motivation for work. Of course, “tired” or “stressed” is always a valid answer. But what if you legitimately feel “proud” or “fulfilled”? Can you identify what aspect of today’s work made you proud? Can you think of a way to have more of that without neglecting other important duties?

“What surprised you today?” tries to carve out your latest learning experience. It is possible that your day was dull enough to have no surprises, but if there were any, you’ve probably expanded your knowledge on a topic you didn’t expect. If the surprise was a negative one, maybe you can think about a way to make it less surprising, more rare or downright impossible in the future. In my case, this led to some unusual gadgets like the “bad idea commands” list that hangs right beside the administration console. The most infamous command on this list is “mdadm --create”, by the way (I meant “mdadm --assemble” and was very surprised by the result).

“What do you want to remember?” is an explicit appeal to write your answer down. You don’t need to tell an elaborate story. Just give your future self some cues, preferably from outside your brain (Obsidian’s marketing claim of being “a second brain” is no coincidence). Make a small note or write your future self an e-mail (this is my typical way of offloading things to future me). But persist this information now or it will be gone.

After this daily reflection, I shut down my computer and put the (probably empty) glass of water into the dishwasher. Then I switch into leisure mode.

Of course, my three questions are inspired from other sources, too. One is the workshop hosting manual for code retreats, which has a great section about the “closing circle”, a group reflection on a probably awesome day.

If you have a similar ritual, let us know about it! Write a blog entry or drop a comment below.

How comments get you through a code review

Code comments are a big point of discussion in software development. How and where should you use comments? Should you comment at all? Isn’t well-written code documentation enough by itself? Here I would like to share my own experience with comments.

In the last months I had some code reviews where colleagues looked over my merge requests and gave me feedback. And it happened again and again that they asked why I did something or why I decided to go a particular way.
Often the decisions had a specific reason, for example because they were a customer requirement, because a special case had to be covered or because the technology stack had to be kept small.

That is all metadata that would be tedious and time-consuming for reviewers to gather. And at some point, it is no longer a reviewer, it is a software developer 20 years from now who has to maintain the code and cannot ask you questions any more. The same applies if you yourself adjust the code again some time later and cannot remember your thoughts from months ago. This often happens faster than you think. To highlight how fast details disappear, here is a current example: this week I set up a new laptop because the old one had a hardware failure. I did all the steps only half a year ago. But without documentation, I would not have been able to reconstruct everything. And where the documentation was missing or incomplete, I had to invest effort to rediscover the required steps.

Example

Here is an example of such a comment. In the code, I want to check whether the mixer volume has changed after the user has made changes in the setup dialog.

var setup = await repository.LoadSetup(token);

var volumeOld = setup.Mixers.Contents.Select(mixer => mixer.Volume).ToList();

setup = Setup.App.RunAsDialog(setup, configuration);

var volumeNew = setup.Mixers.Contents.Select(mixer => mixer.Volume).ToList();
if (volumeNew.SequenceEqual(volumeOld))
{
    // nothing changed: leave the surrounding loop
    break;
}

ResizeToMixerVolume(setup, volumeOld);

Why do I save the volume in an additional variable instead of just writing the setup into a new variable in the third line? That would be much easier and more elegant. I change this quickly – and the program is broken.

This little comment would have prevented that, and everyone would have understood why this way was chosen at the time.

// We need to copy the volumes, because the original setup is partially mutated by the Setup App.
var volumeOld = setup.Mixers.Contents.Select(mixer=>mixer.Volume).ToList();

If you annotate such prominent places, into which a lot of brain work has gone, you make the code more comprehensible to everyone, including yourself. This way, a reviewer can understand the code without having to ask questions and the code becomes more maintainable in the long run.



The year 2022 in tickets

We are a company of software developers that decided to run the company itself similar to a typical software project. All company documents are put under version control, most things that can be automated are automated (or listed on a backlog and estimated for their business value), a wiki contains all relevant information and is continually updated and extended and, most important, everything is an issue. The word “issue” is the developer synonym for “ticket”, so what I’m really saying is: “Every activity in our company has a ticket number”. Just like you don’t change the source code of a software project without an issue that motivates the change, we don’t perform work for the company without a motivating ticket. This means that you can review the company’s progress, performance and efforts by at least three activity streams or history tracks:

  • The commit history of the version control system tells the story from the viewpoint of documents. Company documents are mostly the beginning or the result of activity of our administration department. Typical documents that start processes are project orders or letters from official agencies. Typical documents that are created as a result of processes include invoices, filled in forms and more letters.
  • The edit history of the wiki tells the story from the viewpoint of process learning. We document our actual administrative processes in a structured way that might be seen as “source code for humans”. Everything we change in our approach to processing and creating documents can be traced in this source code. Additionally created business processes indicate a growth in business scope or complexity – or the payback of “business process debt”, the administrative equivalent to “technical debt” in a software project.
  • The resolution history of the ticket system tells the story from the viewpoint of the actual footwork. Every activity has its ticket, so we can measure how much activity was necessary to run the company, where this activity was invested and how much regular versus extraordinary work occurred.

There are more “story lines” in our company and I could probably talk for days about how to read them and set them into context with one another, but in this blog post, I try to visualize only the footwork of the year 2022 for our small company by showing the ticket numbers. But before I can do that, we need one more piece of theory about our tickets:

We have two kinds of tickets in our system:

  • Manually created tickets accompany activities that occurred “without a schedule”. A human being recognized the need for some work and wrote a ticket to document the motivation and track the progress of this work.
  • Automatically created tickets denote recurring activities that are handled by some form of automation in our company. We have developed a tool to manage the schedules of these “recurring activities”. Our job as humans is to recognize the recurring character of some of our activities, estimate a suitable schedule for it and tell the tool about it. The tool then creates tickets based on that schedule and we need to deal with them. The simplest form to deal with a recurring ticket is to close it as “won’t fix” because there is nothing to do in its regard yet.

Just keep in mind that manually created tickets always denote required activity while automatically created tickets “only” denote the need to check for required activity, but not always to perform it.

Let’s look at some numbers!

The most obvious ticket section is of course the tickets for our blog entries (you are reading BLOG-368). Because we publish one entry each week, there will be at least 52 tickets for 2022. In fact, we have fixed 54 tickets this year, with only one ticket manually created. There are not many surprises with such a strict schedule.

A less predictable topic is the purchase department (don’t be too impressed, the “department” is just its own section in the ticket system). Every purchase is tracked by its own ticket. In 2022, we had 113 purchase activities, with 94 of them manually created. This means that the non-automation ratio for our purchases is above 80 percent. We bought two different things every week and most of it was “on demand”.

The most important section of tickets for me as the CEO is the “business administration” section which encompasses all necessary non-specialized work to keep our company afloat. Let’s dissect it for the year 2022:

956 tickets were resolved over the course of the year. That’s a lot of work for a small company! Luckily, 865 of those tickets were created by our tool, so they don’t always require actual activity. But 91 tickets were things we needed to do without being able to anticipate the need (or else we would have created a schedule for them). That’s roughly two things per week that “surprised” us.

If you look at these numbers with a different mindset, you can see the effects of consistent automation: our administration has an automation factor of 90 percent! We mostly deal with routine tasks and can rely on defined, documented and automated processes. That’s quite an achievement, and I still remember the times when we had lower factors. They were more “interesting” (in the sense of the proverbial Asian curse).

I want to add another perspective to these numbers: we also track our work time and assign it to different projects, with “administration” being one of them. In the year 2022, we booked approximately 600 person hours of work for the “administration” of the company, so we spent circa 35 minutes on each ticket. This is a misleading number, because there were lots of tickets that require no more than a few minutes and some that can hog our attention for days. We also track detailed “time per ticket” numbers, but only use this data to extract “expected durations” for the most important and time-consuming tasks. This helps us to plan our administrative work around the customer project schedules.

I could write a lot more about our different topics in the administration section of our ticket system. We have identified around 20 distinguishable topics. But it would become more boring over time, so I close this blog entry with one last topic that is very important for an IT company: the “IT administration” or “operations”.

To keep our IT systems up and running, we worked on 48 different tickets, but only 13 of them were automatically created. This seems rather low compared to the business administration with around 1000 tickets, but it is very misleading. Our IT administration is nearly fully automated, so routine work doesn’t create tickets, but starts automation runs (Jenkins builds, Ansible playbooks and such). The 48 tickets were additional work or, in the case of the 13 automatically created ones, recurring work that requires human oversight and interference.

I’m glad that this number is as low as it is. It means that the IT runs smoothly and rather silently. The 115 person hours booked for it tell the same story: our IT is low maintenance. A tad more than 2 hours per week is an affordable price.

I hope this blog entry was entertaining enough to give you an idea of how we make things visible in our company. We use the data to test hypotheses, expose problems and track our improvement efforts. Without this data, we could only rely on assumptions, feelings and spotty memory. By reading the numbers, we (or at least I) get a feeling for the intricacies of the company that translate down to the day-to-day work, which makes intuitive and appropriate management possible.

If you want to know more, feel free to leave a comment!

When laziness broke my code

I was just integrating a new task-graph system for a C# machine control system when my tests started to go red. Note that the tasks I refer to are not the same as the C# Task implementation, but the broader concept. Task-graphs are well known to be DAGs, because otherwise the tasks cannot be finished. The general algorithm to execute a task-graph like this is called topological sorting, and it goes like this:

  1. Find the number of dependencies (incoming edges) for each task
  2. Find the tasks that have zero dependencies and start them
  3. For any finished task, decrement the dependency count of its follow-up tasks by one and start those that reach zero.
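
A minimal, fully synchronous sketch of this algorithm (the TaskNode type is hypothetical and only for illustration – the real system also has to deal with asynchronous completion):

using System;
using System.Collections.Generic;
using System.Linq;

public class TaskNode
{
    public string Name { get; init; } = "";
    public int DependencyCount { get; set; }
    public List<TaskNode> FollowUps { get; } = new();
    public Action Work { get; init; } = () => { };
}

public static class TaskGraph
{
    public static void Run(IReadOnlyCollection<TaskNode> tasks)
    {
        // Step 1: count the incoming edges of every task.
        foreach (var task in tasks)
            task.DependencyCount = 0;
        foreach (var task in tasks)
            foreach (var followUp in task.FollowUps)
                followUp.DependencyCount++;

        // Step 2: start everything that has no dependencies.
        var ready = new Queue<TaskNode>(tasks.Where(t => t.DependencyCount == 0));

        // Step 3: every finished task releases its follow-ups.
        while (ready.Count > 0)
        {
            var task = ready.Dequeue();
            task.Work();

            foreach (var followUp in task.FollowUps)
                if (--followUp.DependencyCount == 0)
                    ready.Enqueue(followUp);
        }
    }
}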

The failing graph was simple: task A was immediately followed by a task B, which was in turn followed by a few more tasks.

I quickly figured out that the reason that the tests were failing was that node B was executed twice. Looking at the call-stack for both executions, I could see that the first time B was executed was when A was completed. This is correct as per step 3 in the algorithm. However, the second time it was started was directly from the initial Run method that does the work from step 2: Starting the initial tasks that are not being started recursively. I was definitely not calling Run twice, so how did that happen?

public void Run()
{
    var ready = tasks
        .Where(x => x.DependencyCount == 0);

    StartGroup(ready);
}

Can you see it? It is important to note that many of the tasks in this graph are asynchronous. Their completion is triggered by an IObserver, a C# Task completing or some other event. When the event is processed, StartGroup is used to start all tasks that have no more dependencies. However, A was no such task; it was synchronous, so the StartGroup({B}) call happened while Run was still on the stack.

Now what happened was that when A (instantly!) completed, it set the DependencyCount of B to 0. Since ready in the code snippet is lazily evaluated from within StartGroup, the ‘contents’ actually change while StartGroup is running.

The fix was adding a .ToList after the .Where, a unit test that checked that this specifically would not happen again, and a mental note that lazy evaluation can be deceiving.
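
Expressed in code, the fixed Run method looks roughly like this:

public void Run()
{
    // Materialize the query right away, so that tasks completing synchronously
    // during StartGroup cannot change the set we are iterating over.
    var ready = tasks
        .Where(x => x.DependencyCount == 0)
        .ToList();

    StartGroup(ready);
}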

PostgreSQL’s hstore module for semi-structured data

PostgreSQL has an extension module called hstore that allows you to store semi-structured data in a key/value format. An hstore object holds key/value pairs, much like a dictionary. You can also reference its values in SQL queries.

To use the extension, it must first be loaded into the current database:

CREATE EXTENSION hstore;

Now you can use the data type hstore. Here, we create a table with some regular columns and one column of type hstore:

CREATE TABLE animals (
    id     serial PRIMARY KEY,
    name   text,
    props  hstore
);

Literals of type hstore are written in single quotes, containing a set of key => value pairs separated by commas:

INSERT INTO
    animals (name, props)
VALUES
    ('Octopus', 'arms => 8, habitat => sea, color => varying'),
    ('Cat',     'legs => 4, fur => soft'),
    ('Bee',     'legs => 6, wings => 4, likes => pollen');

The order of the pairs is irrelevant. Keys within a hstore are unique. If you declare the same key more than once, only one instance will be kept and the others will be discarded. You can use double quotes to include spaces or special characters:

'"fun-fact" => "Cats sleep for around 13 to 16 hours a day (70% of their life)"'

If the type of the literal can’t be inferred you can append ::hstore as a type indicator:

'legs => 4, fur => soft'::hstore

Both keys and values are stored as strings, so these two are equivalent:

'legs => 4, fur => soft'
'"legs" => "4", "fur" => "soft"'

Another limitation of hstore values is that they cannot be nested, which means they are less powerful than JSON objects.

You can use the -> operator to dereference a key, for example in a SELECT:

SELECT
    name, props->'legs' AS number_of_legs
FROM
    animals;

It returns NULL if the key is not present. Of course, you can also use it in a WHERE clause:

SELECT * FROM animals WHERE props->'fur' = 'soft';

There are many other operators and functions that can be used with hstore objects. Here is a small selection (please refer to the documentation for a complete list):

  • The || operator concatenates (merges) two hstores: a || b
  • The ? operator checks the existence of a key and returns a boolean value: props ? 'fur'
  • The - operator deletes a key from a hstore: props - 'fur'
  • The akeys function returns an array of a hstore’s keys: akeys(hstore)
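
Here are a few of these operators in action on the animals table from above (a small sketch – the sound key is made up for this example, and the explicit casts avoid ambiguity between the operator variants):

-- animals that have a 'wings' key at all
SELECT name FROM animals WHERE props ? 'wings';

-- merge an additional pair into an existing hstore
UPDATE animals SET props = props || 'sound => meow'::hstore WHERE name = 'Cat';

-- and delete that key again
UPDATE animals SET props = props - 'sound'::text WHERE name = 'Cat';

-- list all keys of each animal
SELECT name, akeys(props) FROM animals;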

You can also convert a hstore object to JSON: hstore_to_json(hstore). If you want to learn more about JSON in PostgreSQL you can continue reading this blog post: Working with JSON data in PostgreSQL

Use real(istic) data from early on

When developing software in general, and user interfaces (UIs) specifically, one important aspect is often neglected: the form, shape and especially the amount of data.

One very common practice is to fill unknown texts with fragments of the famous Lorem ipsum placeholder text. This may be a good idea if you are designing software for displaying a certain kind of article similar in size and structure to your placeholder text. In all other cases I would regard using Lorem ipsum as a smell.

My recommendation is to collect as many samples of real or at least realistic data as feasible. Use them to build and test your application. Why do I think it matters? Let me elaborate a bit in the following sections.

Data affects the layout

You can only choose a fitting layout if you have knowledge about the length of certain texts, the size of images, etc. The width of columns can be chosen more appropriately, you can decide whether you need scrollbars, whether you want them permanently visible for a more stable and calm layout, how large panels or text areas have to be for optimum readability, and so on.

Data affects the choice of UI controls

The data your application has to handle should reflect not only in the layout but also in the type of controls to be used.

For example, the number of options the user has to choose from drastically affects the selection of an adequate UI control. If there are only 2 or 3 options, toggle buttons, checkboxes or radio buttons next to each other or laid out in one column may be a good fit. If the count of options is greater, dropdowns may be better. At some point, a full-blown list with filters, sorting and search may be necessary.

To make a good decision, you have to know the expected amount and shape of your data.

Data affects algorithms and technical decisions regarding performance

The data your system has to work with and to present to the user also has technical impact. If the datasets are moderate in size, you may be able to transfer them all to the frontend and do presentation, filtering etc. there. That has the advantage of reducing backend stress and putting computational effort in the hands of the clients.

Often this becomes unfeasible when the system and its data pool grow. Then you have to think about backend search and filtering, data compression and the like.

Also, algorithms and data structures may change from simple lists and linear search to search trees, indexes and lookup tables.

The better you know the scope of your system and the data therein the better your technical decisions can be. You will also be able to judge if the YAGNI principle applies or not.

Conclusion

To quickly sum up the essence of the advice above: get to know the expected amount and shape of data your application has to deal with, so that you are able to design your system and the UI/UX accordingly.

Fun with docker container environment variables

Docker (as one specific container technology product) is a basic ingredient of our development infrastructure that has steadily gained ground, spreading from the production servers via the build servers to our development machines. And while it is not simple when used for operations, the complexity increases a lot when used for development purposes.

One way to express complexity is by making the moving parts configurable and using different configurations. A common way to make things configurable with containers is environment variables. Running a container might look like an endurance typing contest if they are used extensively:

docker run --rm \
-e POSTGRES_USER=myuser \
-e POSTGRES_PASSWORD=mysecretpassword \
-e POSTGRES_DB=mydatabase \
-e PGDATA=/var/lib/postgresql/data/pgdata \
ubuntu:22.04 env

This is where our fun begins.

Using an env-file for extensive configurations

The parameter --env-file reads environment variables from a local text file with a simple key=value format:

docker run --rm --env-file my-vars.env ubuntu:22.04 env

The file my-vars.env contains all the variables line by line:

FIRST=1
SECOND=2

If we run the command above in a directory containing the file, we get the following output:

PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=46a23b701dc8
FIRST=1
SECOND=2
HOME=/root

The HOSTNAME might vary, but the FIRST and SECOND environment variables are straight from our file.

The only caveat is that the env-file really has to exist, or we get an error:

docker: open my-vars2.env: Das System kann die angegebene Datei nicht finden.
(German for: “The system cannot find the file specified.”)

My beloved shell

The env-file can be empty, contain only comments (use # to begin them) or whitespace, but it has to be present.

Please be aware that these env-files are different from the .env file(s) in docker-compose. A lot of fun (variable expansion, for example) is lost by this simple statement. As far as I’m aware, there is no .env-file mechanism in docker itself.

But we can have some kind of variable substitution, too:

Using multiple env-files for layered configurations

If you don’t want to change all your configuration entries all the time, you can layer them: one layer for the “constants”, one layer for global presets and one layer for local overrides. You can achieve this with multiple --env-file parameters; they are evaluated in the order you specify them:

docker run -it --rm --env-file first.env --env-file second.env ubuntu:22.04 env

Let’s assume that the content of first.env is:

TEST=1
FIRST=1

And the content of second.env is:

TEST=2
SECOND=2

The results of our container call are (abbreviated):

TEST=2
FIRST=1
SECOND=2

You can see that the second TEST assignment wins. If you switch the order of your parameters, you would read TEST=1.

Now imagine that first.env is named global.env and second.env is named local.env (or default.env and development.env) and you can see how this helps you with modular configurations. If only the files didn’t have to exist all the time, it would even fit well with git and .gitignore.

The best thing about this feature? You can have as many --env-file parameters as you like (or your operating system allows).

Mixing local and configured environment variables

We don’t have explicit variable expansion (like TEST=${FIRST} or something) with --env-files, but we have a funny poor man’s version of it. Assume that the second.env from the example above contains the following entries:

OS
TEST=2
SECOND=2

You’ve seen that right: The first entry has no value (and no equal sign)! This is when the value is substituted from your operating system:

TEST=2
FIRST=1
OS=Windows_NT
SECOND=2

By just declaring, but not assigning, an environment variable, it is taken from your own environment. This even works if the variable was already assigned in previous --env-files.

If you don’t believe me, this is a documented feature:

If the operator names an environment variable without specifying a value, then the current value of the named variable is propagated into the container’s environment

https://docs.docker.com/engine/reference/run/#env-environment-variables

And even more specific:

When running the command, the Docker CLI client checks the value the variable has in your local environment and passes it to the container. If no = is provided and that variable is not exported in your local environment, the variable won’t be set in the container.

https://docs.docker.com/engine/reference/commandline/run/#set-environment-variables--e---env---env-file

This is a cool feature, albeit a little bit creepy. Sadly, it doesn’t work in all tools that allow you to run docker containers. Last time I checked, PyCharm omitted this feature (as one example).

Epilogue

I’ve presented you with three parts that can be used to manage different configurations for docker containers. There are some pain points (non-optional file existence, feature loss in tools, no direct variable expansion), but also a lot of fun.

Do you know additional tricks and features in regard to environment variables and docker? Comment below or link to your article.