Heterogeneous lookup in unordered C++ containers

I often use std::string_view via the sv suffix for string constants in my code. If I need to associate something with those constants at runtime, I put it in an std::unordered_map with the constants as the keys.

Just a few days ago, I was using an std::unordered_map<std::string, ...> and wanted to .find(...) something in it with such a string constant. But that didn’t compile. I remembered from long ago that the lookup type must be identical to the key type, and since there is no implicit conversion from std::string_view to std::string, I made the conversion explicit to get it to compile. But wait. Didn’t C++ add support for using a different type than the key_type for the lookup? Indeed it did, in P0919R3 and P1690R1 from last decade. All major compilers seem to support it, too. Then why wasn’t this working? It turns out that it’s not enabled by default: you need to opt in by supplying a special hasher. Here’s how I do it:

#include <cstddef>       // size_t
#include <functional>    // std::hash, std::equal_to
#include <string>
#include <string_view>
#include <unordered_map>

struct stringly_hash
{
  using is_transparent = void;
  [[nodiscard]] size_t operator()(char const* rhs) const
  {
    return std::hash<std::string_view>{}(rhs);
  }
  [[nodiscard]] size_t operator()(std::string_view rhs) const
  {
    return std::hash<std::string_view>{}(rhs);
  }
  [[nodiscard]] size_t operator()(std::string const& rhs) const
  {
    return std::hash<std::string>{}(rhs);
  }
};

template <typename ValueType>
using unordered_string_map = std::unordered_map<
  std::string,
  ValueType,
  stringly_hash,
  std::equal_to<>
>;

This is almost the same code as the sample given in the first of the two proposals. The line using is_transparent = void; is how the feature is enabled; this enabling mechanism is what the second proposal changed.
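With this in place, heterogeneous lookup works as expected. A minimal usage sketch, building on the definitions above (the key and value are just illustrations):

#include <cassert>

using namespace std::literals;

int main()
{
  unordered_string_map<int> map{{"answer", 42}};

  // Lookup with a std::string_view: no temporary std::string is created.
  assert(map.find("answer"sv)->second == 42);

  // Lookup with a plain C string literal also works.
  assert(map.find("answer") != map.end());
}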

JSON as a table in PostgreSQL 17

Some time ago, I described in a blog post how to work with JSON data in a PostgreSQL database. Last month (September 2024), PostgreSQL 17 was released, which offers another feature for working with JSON data: the JSON_TABLE() function. Such a function already existed in other database systems, such as MySQL and Oracle.

The core idea behind JSON_TABLE() is to temporarily transform JSON data into a relational format that can be queried using standard SQL commands. This lets developers apply the full power of SQL, such as filtering, sorting, aggregating, and joining, to data that originally exists in JSON format. In other words, you can use SQL to query JSON data stored in a column as though it were relational data, and you can even join JSON data with regular relational tables for a seamless mix of structured and semi-structured data.

Syntax

The syntax for JSON_TABLE looks a bit complex at first glance, but once you break it down, it becomes quite clear. Here’s the basic structure:

JSON_TABLE(
  json_doc, path
  COLUMNS (
    column_name type PATH 'path_to_value' [DEFAULT default_value ON ERROR],
    ...
  )
) AS alias_name

The json_doc is the JSON data you’re querying. It can be a JSON column or a JSON string. It is followed by a path expression, which describes the location in the JSON document that you want to process. Path expressions are specified as strings in single quotation marks and have a special syntax called JSONPath (loosely inspired by XPath for XML). For example, such an expression could look like this: '$.store.book[0].title'. The dollar sign represents the root of the JSON tree, the sub-properties are separated by dots, and array elements are referenced using square brackets.

This is followed by the COLUMNS keyword, which specifies the columns to be extracted from the JSON document. Each column is specified by its name in the resulting table, the data type of the extracted value (e.g., VARCHAR, INT), the PATH keyword, and a JSON path expression that references a JSON value below the original path expression.

The DEFAULT keyword optionally provides a default value if the path does not exist or if there’s an error during extraction.
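For example, a column definition with a fallback value could look like this (the column name and the 'unknown' default are illustrative, not part of the example below):

taste VARCHAR(50) PATH '$.taste' DEFAULT 'unknown' ON ERROR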

After the JSON_TABLE function call, you can specify an alias name for the resulting table with the AS keyword.

Example

It’s time for an example. Let’s assume we have a table named fruit_store with a column named json_details that holds some JSON data about fruits.
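The original post takes this table as a given; a minimal definition could look like this (the column types are an assumption):

CREATE TABLE fruit_store (
    id           INT PRIMARY KEY,
    json_details JSONB
);

Now we insert a row of fruit data: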

INSERT INTO fruit_store (id, json_details) VALUES (
    1,
    '{
        "category": "Citrus",
        "fruits": [
            {
                "name": "Orange",
                "color": "Orange",
                "taste": "Sweet",
                "price_per_kg": 3.5
            },
            {
                "name": "Lemon",
                "color": "Yellow",
                "taste": "Sour",
                "price_per_kg": 4.0
            }
        ]
    }'
);

Now we can use the JSON_TABLE function to extract the details of each fruit from the JSON document. The goal is to retrieve the fruit name, color, taste, and price for each fruit in the array. Here’s the query:

SELECT *
  FROM fruit_store,
  JSON_TABLE(
    json_details, '$.fruits[*]'
    COLUMNS (
      fruit_name   VARCHAR(50)  PATH '$.name',
      fruit_color  VARCHAR(50)  PATH '$.color',
      fruit_taste  VARCHAR(50)  PATH '$.taste',
      price_per_kg DECIMAL(5,2) PATH '$.price_per_kg'
    )
  ) AS fruit_table;

Running this query will give you the following result:

 id | fruit_name | fruit_color | fruit_taste | price_per_kg
----+------------+-------------+-------------+--------------
  1 | Orange     | Orange      | Sweet       | 3.50
  1 | Lemon      | Yellow      | Sour        | 4.00

Git revert may not be what you want!

Git is a powerful version control system (VCS) for tracking changes in a source code base, as most developers will know by now. It is also known for its quirky command line interface and for concepts that used to be less common, like the staging area, cherry-picking and rebasing.

This can make Git quite intimidating and there are many memes about how confusing and hard it is to work with.

Especially for people coming from Subversion (SVN), there are a few changes and pitfalls, because some commands have the same name but do something subtly or even completely different, e.g. commit, checkout and revert.

A commit in the SVN world performs a synchronization with the remote repository, whereas a commit in Git is local and has to be synchronized with the remote repository using push. Checkout is similar and maybe even worse: in SVN it fetches the remote repository state and puts it in your working copy. In Git, checkout is used to replace files in your working copy with contents from the local repository, or formerly to switch and/or create local branches…
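In Git, this two-step flow looks like this (the commit message is just an illustration):

git commit -m "Fix rounding"   # record the change in the local repository
git push                       # synchronize it with the remote repository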

But now on to our main topic:

What does git revert actually do?

Revert is maybe the most misleading command existing in both SVN and Git.

In SVN it is quite simple: undo all local edits (see svnbook). It throws away your uncommitted changes and causes potential data loss.

Git’s revert works in a completely different way! It creates “inverse” commit(s) in your (local) repository. Your working copy has to be clean (no uncommitted changes) for the revert, which undoes the specified commit and records this undo as a new commit in the repository and its history.
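Here’s a minimal sketch (the commit hash is hypothetical):

# Create a new commit that is the inverse of the given one:
git revert 1a2b3c4

# The history still contains the original commit,
# now followed by its inverse. Nothing is rewritten or lost.
git log --oneline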

So both revert commands are fundamentally different. This can lead to unexpected behaviour, especially when reverting a merge commit:

We had a case where our developers had two feature branches, and one dev decided to temporarily merge the other feature branch to test some things out. He then reverted the merge using git revert and opened a merge request. This led to the situation that the other feature branch became unmergeable, because Git records no changes. That is rightfully so from Git’s perspective, because the branch was already merged as part of the first one, just without its changes (because they were reverted!). So all the changes of the other feature branch were lost and not easily mergeable.

To fix a situation like this you can cherry-pick the reverted commit mentioned in the commit message.

How to revert local changes in git

So revert is maybe not what we wanted to do in the first place to undo our temporary merge of the other feature branch. Instead, we should use git reset. It has some variants, but the equivalent of SVN’s revert would be the lossy

git reset --hard HEAD

to clean our working copy. If we wanted to “revert” the merge commit in our example we would do

git reset --hard HEAD^

to undo the last commit.

I hope this clears up some confusion regarding the function of some Git commands vs. their SVN counterparts or “false friends”.

Every Unit Test Is a Stage Play – Part IV

In this series about describing unit tests with the metaphor of a stage play that tells short stories about your system, we already published three parts:

Today, we look at the story.

When you visit a theater, you probably expect to be entertained. You expect some level of preparation and presentation. You might not enjoy every aspect of the stage play, but you can cherish the overall experience.

When you read a unit test as a developer, you should not expect to be entertained. But you can expect some level of presentation, and you should be able to endure the overall experience.

In both cases, a great factor to success is how the story is presented to you.

Imagine trying to follow a stage play that is in rehearsal mode. Constant interruptions and corrections from outside the stage, repetitions of scenes and single sentences and sometimes omissions that everybody is clued in on except you. And of course, nobody is dressed for their role. It would be hard to follow the plot and piece the story together.

Unit test code often reads like an early rehearsal. The code is stitched together by copy & paste, some details are modified but not emphasized and the point of the story is only revealed at the end, oftentimes told indirectly by convoluted assertions. When the test runs green for the first time, it is abandoned and left as an exercise in improvement for the next reader.

The next reader is a developer who made a change to the production code that got red-flagged by the unit test. He or she tries to find out why the jury of assertions is against the change and what the test is all about. It’s as if the first visitor of a stage play had to tell the lighting technician where to point the spotlights without knowing how the story will play out.

If we accept the metaphor and view unit tests as stage plays that tell a short story, we should try to tell the story in a clear and concise manner. Giving standard names to the participating roles is an important first step to clue in the visitor/reader. But the last part of a story is the most crucial one. You are expected to tie the story threads together and provide a resolution that can be followed.

In unit testing, we express the resolution of the unit test’s short story as assertions:

@Test
public void parsingOfErroneousODLState() {
    final SerialODL target = new SerialODL(Z, "19");
    
    final ODLState actual = target.getCurrentState();
    
    Assert.assertFalse(
        actual.isNormalOperation()
    );
    Assert.assertEquals(
        3,
        IterableUtil.getSizeFor(actual.getErrorStates())
    );
    Assert.assertEquals(
        ODLErrorState.TEST_ERROR,
        IterableUtil.getElementAt(0, actual.getErrorStates())
    );
    Assert.assertEquals(
        ODLErrorState.INVALID_VALUE_DUE_TO_INITIALIZING,
        IterableUtil.getElementAt(1, actual.getErrorStates())
    );
    Assert.assertEquals(
        ODLErrorState.VALUE_GREATER_ALARM_THRESHOLD,
        IterableUtil.getElementAt(2, actual.getErrorStates())
    );
}

This unit test consists of one line of preparation (“arrange”), one line of code under test that produces the “actual” (“act”) and five assertions spanning several lines each (“assert”). Nearly 80 percent of this unit test is assertions. And they try to express something, but it gets drowned in noise.

One key to a better story is the usage of a more fitting form of expression, in our case a more natural way to write assertions:

@Test
public void parsingOfErroneousODLState() {
    final SerialODL target = new SerialODL(Z, "19");

    final ODLState actual = target.getCurrentState();

    assertThat(
        actual.isNormalOperation()
    ).isFalse();

    assertThat(
        actual.getErrorStates()
    ).containsExactly(
        ODLErrorState.TEST_ERROR,
        ODLErrorState.INVALID_VALUE_DUE_TO_INITIALIZING,
        ODLErrorState.VALUE_GREATER_ALARM_THRESHOLD
    );
}

In this example, we used AssertJ’s fluent assertions. As you can see, you can shrink the assertion part of your story down to its essence. You can state what you really want to see and not hide it behind indices and size comparisons that only exist because of the indices.

Another way to guide your reader is by structuring your test story into a standard form. From classic storytelling, we know about the hero’s journey that consists of three sections (departure, initiation, return) and can be found in countless books, movies and stage plays.

Our test’s journey is called the AAA pattern. The three sections are:

  • Arrange
  • Act
  • Assert

Whenever you write a unit test, adhere to this pattern. If you find yourself tempted to add a second act or more assertions, break up your one unit test into two. You might want to think about extracting the arrange part into a common utility method (that is placed down below, behind the curtain), as sketched below. The story then says: the hero starts in the same position both times, decides differently (the two act sections) and has a different outcome (the two assert sections) because of that.
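Here is a sketch of that structure; all names are hypothetical, and assertThat comes from AssertJ as in the example above:

@Test
public void acceptsValueBelowThreshold() {
    final Validator target = givenValidatorWithThreshold(100);

    final boolean actual = target.accepts(99);

    assertThat(actual).isTrue();
}

@Test
public void rejectsValueAboveThreshold() {
    final Validator target = givenValidatorWithThreshold(100);

    final boolean actual = target.accepts(101);

    assertThat(actual).isFalse();
}

// The shared arrange part lives down below, behind the curtain:
private Validator givenValidatorWithThreshold(final int threshold) {
    return new Validator(threshold);
}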

There are probably countless more things you can think of to make the story of your tests more compelling. Remember that tests are not required to be entertaining or surprising. You can tell the same classic tale over and over again. The computer doesn’t mind, and the next reader is glad when the test code is accessible right away because the structure and phrasing are on point.

Nobody would pay to see a confusing stage play. And nobody wants to decipher extravagant test code that just broke in an unexpected way. Give your readers what they hope for: Plain short stories about your system.

Epilogue

This is the fourth part of a series. All parts are linked below:

Every Unit Test Is a Stage Play – Part III

In the first and second parts of this series, I talked about describing unit tests with the metaphor of a stage play that tells short stories about your system.

This metaphor is really useful in many aspects of unit testing, as we have seen with naming variables and decluttering the test method. In each blog entry of the series, I’m focussing on one aspect of the whole.

Today, we look at the theater.

If you go to the theater as a guest, you are greeted by a pompous entrance with a luxurious stairway that leads you to your comfy seat. You don’t get to see all the people and things behind the heavy curtains. You don’t need to recognize any details of the floor, the walls or the ceiling as you make your way into the auditorium. They don’t matter for the play.

If you enter the theater as an actor or a stage hand, you slip into the building through the back entrance and make your way through a series of storage rooms. Or at least that is the cliché in many movies (I’m thinking of Birdman, but you probably have your own mental image at hand). You need to recognize all the details and position yourself according to your job. Your preparation matters for the play.

I described the test methods as single stage plays in earlier blog posts. Today, I want you to think about a test class as a theater. We need to agree on the position of the entrance. In my opinion, the entrance is where I’m starting to read – at the top of the file.

As a reader of the test class, I’m one of the guests. My expectation is that the test class is designed to be inviting to guests.

This expectation comes from a fundamental difference between production code classes and test classes: classes in production code are not meant to be read. In fact, if you tailor your modules right and design the interface of a class clearly and without surprises, I want to utilize your class, not read it. I don’t have to read it, because the interface already told me everything I needed to know. Spare me the details, I’ve got problems to solve!

Test classes, on the other hand, are meant to be read. Nobody will call a test method from the production code. The interface of a test class is confusing from the outside. To value a test class is to read and understand it.

Production code classes are like government agencies: they serve you at the front desk, but don’t want you to snoop around the internals. Test classes are like a theater: you are invited to come inside and marvel at the show.

So we should design our test classes like theaters: An inviting upper part for the guests and a pragmatic lower part for the stage hands behind the curtain.

Let’s look at an example:

public class UninvitingTest {

    public static class TestResult {
        private final PulseCount[] counts;
        private final byte[] bytes;

        public TestResult(
            final PulseCount count,
            final byte[] bytes
        ) {
            this(
                new PulseCount[] {
                    count
                },
                bytes
            );
        }

        public TestResult(
            final PulseCount[] counts,
            final byte[] bytes
        ) {
            super();
            this.counts = counts;
            this.bytes = bytes;
        }

        public PulseCount[] getCounts() {
            return this.counts;
        }

        public byte[] getBytes() {
            return this.bytes;
        }
    }

    private static final TestResult ZERO_COUNT =
        new TestResult(
            new PulseCount(0),
            new byte[] {0x0, 0x0, 0x0, 0x0}
        );
    private static final TestResult VERY_SMALL_COUNT =
        new TestResult(
            new PulseCount(34),
            new byte[] {0x0, 0x0, 0x22, 0x0}
        );
    private static final TestResult MEDIUM_COUNT_BORDER =
        new TestResult(
            new PulseCount(65536),
            new byte[] {0x1, 0x0, 0x0, 0x0}
        );

    public UninvitingTest() {
        super();
    }

    @Test
    public void serializeSingleChannelValues() throws Exception {
        SPEChannelValuesSerializer scv =
            new SPEChannelValuesSerializer();
        Assert.assertArrayEquals(
            ZERO_COUNT.getBytes(),
            scv.serializeCounts(ZERO_COUNT.getCounts())
        );
        Assert.assertArrayEquals(
            VERY_SMALL_COUNT.getBytes(),
            scv.serializeCounts(VERY_SMALL_COUNT.getCounts())
        );
        Assert.assertArrayEquals(
            MEDIUM_COUNT_BORDER.getBytes(),
            scv.serializeCounts(MEDIUM_COUNT_BORDER.getCounts())
        );
    }
}

In fact, I hope you didn’t read the whole thing. There are lots of problems with this test, but let’s focus on the entrance part:

  • Package declaration (omitted here)
  • Import statements (omitted here)
  • Class declaration
  • Inner class definition
  • Constant definitions
  • Constructor
  • Test method

The amount depends on the programming language, but some ornaments at the top of a file are probably required and can’t be skipped or moved around. We can think of them as a parking lot that we require, but don’t find visually appealing.

The class declaration is something like an entrance door. Behind it, the theater begins. And just by looking at the next three things, I can tell that I took the wrong door. Why am I burdened with the implementation details of a whole other class? Do I need to remember any of that? Are the constants important? Why does a test class require a constructor?

In this test class, I need to travel 50 lines of code before I reach the first test method. Translated into our metaphor, this would be equivalent to three storage rooms filled with random stuff that I need to traverse before I can sit in my chair to watch the play. It would be ridiculous if encountered in real life.

The solution isn’t that hard: Store your stuff in the back rooms. We just need to move our test method up, right under the class declaration. Everything else is defined at the bottom of our class, after the test methods.

This is a clear violation of the Java code conventions and the usual structure of a class. Just remember this: The code conventions and structures apply to production code and are probably useful for it. But we have other requirements for our test classes. We don’t need to know about the intrinsic details of that inner class because it will only be used in a few test methods. The constants aren’t public and won’t just change. The only call to our constructor lies outside of our code in the test framework. We don’t need it at all and should remove it.

If you view your test class as a theater, you store your stuff in the back and present an inviting front to your readers. You know why they visit you: they want to read the tests, so show them the tests as close to the entrance as possible. Let the compiler travel your code, not your readers.

And just to show you the effect, here is the nasty test class from above, with the more inviting structure:

public class UninvitingTest {
    
    @Test
    public void serializeSingleChannelValues() throws Exception {
        SPEChannelValuesSerializer scv = 
            new SPEChannelValuesSerializer();
        Assert.assertArrayEquals(
            ZERO_COUNT.getBytes(),
            scv.serializeCounts(ZERO_COUNT.getCounts())
        );
        Assert.assertArrayEquals(
            VERY_SMALL_COUNT.getBytes(),
            scv.serializeCounts(VERY_SMALL_COUNT.getCounts())
        );
        Assert.assertArrayEquals(
            MEDIUM_COUNT_BORDER.getBytes(),
            scv.serializeCounts(MEDIUM_COUNT_BORDER.getCounts())
        );
    }

    private static final TestResult ZERO_COUNT = 
        new TestResult(
            new PulseCount(0),
            new byte[] {0x0, 0x0, 0x0, 0x0}
        );
    private static final TestResult VERY_SMALL_COUNT = 
        new TestResult(
            new PulseCount(34),
            new byte[] {0x0, 0x0, 0x22, 0x0}
        );
    private static final TestResult MEDIUM_COUNT_BORDER = 
        new TestResult(
            new PulseCount(65536),
            new byte[] {0x1, 0x0, 0x0, 0x0}
        );

    public static class TestResult {
        private final PulseCount[] counts;
        private final byte[] bytes;

        public TestResult(
            final PulseCount count,
            final byte[] bytes
        ) {
            this(
                new PulseCount[] {
                    count
                },
                bytes
            );
        }

        public TestResult(  
            final PulseCount[] counts,
            final byte[] bytes
        ) {
            super();
            this.counts = counts;
            this.bytes = bytes;
        }

        public PulseCount[] getCounts() {
            return this.counts;
        }

        public byte[] getBytes() {
            return this.bytes;
        }
    }

    public UninvitingTest() {
        super();
    }
}

Show your readers the test methods and don’t burden them with details they just don’t need (yet).

Epilogue

This is the third part of a series. All parts are linked below:

A few heuristics for rejecting Merges

The title of this post is somewhat of a working title. It is based on the observation that in larger projects, a day might come where you have to outsource some amount of work (an “issue”, if you will) to anyone who is not you, and maybe not even someone you already know well. Or at least you don’t know yet how well they fit your image of good code.

That concept is probably not new to you, and neither is the idea that sooner or later, the foreign code has to be merged, and you are the reviewer. (Depending on your version control system, the lingo might differ, but let’s call this whole process a Merge Request for now, like GitLab does.)

Now, every issue is different, therefore it is not upon me to give you hard rules about what a Merge Request should do. These rules would be as diverse as the issues themselves, and there is no universal answer to “as a reviewer, what should I actually look for?”.

The Best-Case Scenario (forget about it)

The best-case scenario might be:

  • You understand the issue well enough
  • You understand the code base well enough
  • You understand the language well enough
  • The changes are few enough that you can understand them quickly enough.

In that case, you do not need some blog post to help you.

So you could try to pull up some rules for your project so that every issue leads to a Best-Case Merge Request, but as you will fail anyway, it might be wiser to lower your expectations. After all, if all issues matched that case, you would probably be faster writing all the code yourself.

Rather go for the Not-Good-Enough Scenarios

So, what do you do to combine the requirements a) the collaboration should really simplify your life and b) you do not want to compromise your standards too heavily?

This thought process is not finished yet, but at least I would go for some heuristics – if these are violated too heavily, chances are that everyone involved might actually profit from having this Merge Request rejected.

  • Can the issue be grasped within a few minutes?
  • Can the changes be grasped in about half an hour or less?
  • Can the problematic code pieces be described in a few short points?
  • Does this increase the feeling of trust in the collaboration?

Can the issue be grasped within a few minutes?

If it can’t, or you can’t, how come you are the reviewer? This is probably the easiest heuristic to ignore, because real-life problems and real-life programmers might be so imperfect that this is utopian, but still, if it happens, be extra wary.

Can the changes be grasped in about half an hour or less?

The half-an-hour is only a rough figure, of course, but that’s not the point.

If you ever thought “my brain is great”, remember that this was your brain speaking. Your attention span is just not good. If you think otherwise, how come you are the reviewer?

It is easy to think that longer Merge Requests just take a longer time to process. But it doesn’t scale like that. You will use up your willpower, motivation and attention span more quickly than you would like to admit; and if you then go for “well, at least I want to finish the thing now, otherwise I have to do it all over again”, you will probably let too many mistakes slip.

It is just not responsible. If you want to show willpower, use it to admit that some Merge Requests are just too big, and reject them.

One can always back up a branch, create a new one, cherry-pick commits onto it, even commit only portions of the same file – this might feel like a punishment at first, but chances are that this process will actually surface several problems that were hidden before.
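Here is a sketch of that salvage workflow with git (all branch names and commit hashes are hypothetical):

# Keep the original work safe under a new name:
git branch backup/huge-feature huge-feature

# Start a fresh, smaller branch from the main line:
git switch -c huge-feature-part-1 main

# Pick over a coherent subset of commits:
git cherry-pick 1a2b3c4 5d6e7f8

# Or stage and commit only portions of a file interactively:
git add --patch some/file.txt
git commit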

To be continued…

I already teased my two other heuristics above, but figure that this post is long enough already. Even if you don’t agree with e.g. the time spans given above, the main idea is: do not let a large Merge Request through just because you believe that you are strong enough to handle it. There’s no one to impress.

The impressive part of good work is that it doesn’t look impressive.

I will continue this. Maybe you have some ideas from your own experience? What would you do when a Merge Request is too complex?

I have changed my stance on “using” in C++ headers

I used to be pretty strictly against using either C++ using-directives or using-declarations within header files. It kind of stuck with me as a no-go. But that has changed in recent years.

There are now good cases where using can go into a header. For example, I do not really like putting things like…

using namespace std::string_literals;
using namespace std::string_view_literals;
using namespace std::chrono_literals;

…at the beginning of each source file. Did you know that you can pull all of those (and some more) in with a single using namespace std::literals? Either way, in my newer projects, these usually go into one of the more prominent headers. The same goes for other literal operators, such as those from the SI library, and for using declarations of common vocabulary types, e.g. 2D or 3D vector types in math-heavy projects. Of course, they always go after the specific #include(s) the using is referencing. The benefits of doing that usually outweigh the danger of name clashes and weird order dependencies.
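To illustrate, a sketch of what such a prominent header could look like (the file name and the math library are hypothetical):

// common.hpp - a hypothetical project-wide header
#pragma once

#include <chrono>
#include <string>
#include <string_view>

// Pull in all standard literal suffixes (s, sv, ms, h, ...):
using namespace std::literals;

// A using declaration for a common vocabulary type could follow here,
// after the corresponding #include, e.g.:
// #include <some_math_lib/vec.hpp>
// using some_math_lib::vec3;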

There are cases where I still avoid using in headers however, and that is when the given header is ‘public’, i.e. being consumed by something that is not under my organization’s control. In that case, you better leave that decision to the library consumer.

A Tale of Hidden Variables

Today was one of those frustrating moments that every developer encounters at some point. We were working on a Docker Compose setup and observed behavior that could only happen if a specific environment variable had been set. To ensure that this environment variable wasn’t being set, I scoured the Docker Compose file, checked the local environment variables using the export command, and grepped all the relevant files in the project directory. But no matter what I did, this environment variable was still haunting us, wreaking havoc on the setup.

After what felt like an eternity of troubleshooting, we finally uncovered the culprit: an old, hidden .env file left over from a long-forgotten configuration. This file had been silently setting the environment variable I was desperately trying to eliminate.

Here’s how it all unfolded and what I learned from the experience:

When I first suspected that the environment variable might be lurking somewhere in the project, my instinct was to use grep to search for it in all the files within my local directory. I ran something along the lines of:

grep -r 'MY_ENV_VAR' *

To my surprise, nothing relevant showed up. I had expected this command to search through everything in my local directory. However, I had forgotten one important detail: the shell’s * glob doesn’t expand to hidden files, so grep never even sees them.

Since .env files are typically hidden (starting with a dot), grep completely skipped over them. Little did I know, that old .env file was sitting quietly in the background, setting the environment variable that was causing all my issues.

After some frustration, my colleague finally realized that there might be hidden files at play. In Unix-like operating systems, files that start with a dot (.), like .env, are treated as hidden and are not listed or searched by default by common commands. Just as hidden variables in physics can influence particles without being directly observable, the hidden .env file was affecting my environment variables without being immediately visible.

To include hidden files in your search, let grep do the recursion itself by passing the directory instead of a shell glob:

grep -r 'MY_ENV_VAR' .

Since grep now walks the directory tree on its own, hidden files are no longer skipped. (If you want to search only hidden files, you can narrow the search down with --include='.*'.)
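Two other standard commands would also have revealed the hidden file directly:

# List all files in the current directory, including hidden ones:
ls -la

# Find every .env file anywhere below the project root:
find . -name '.env'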

This experience led me to reflect on whether deployment-relevant files like .env should be hidden in the first place, since they can easily be overlooked during debugging. It also makes them more prone to being forgotten. Hidden files are easy to miss when troubleshooting, especially when you’re under pressure.

Given that .env files can have a significant impact on the behavior of applications, containerized setups, and CI/CD pipelines, making them hidden by default might not always be the best approach. After all, if an environment variable has the power to alter how an entire application runs, it’s something we want to be highly visible and readily accessible.

In the end, this experience taught me two important lessons:

Always search for hidden files when troubleshooting issues related to environment variables. If your Docker Compose or other environment-dependent setups aren’t behaving as expected, don’t forget to check for hidden .env files.

Consider the visibility of critical configuration files. Should .env files be hidden by default, or should they be treated as first-class citizens in our directory structures? In many cases, keeping them visible might help avoid unexpected behavior and wasted hours of debugging.

Every Unit Test Is a Stage Play – Part II

In the first part of this series, I talked about describing unit tests with the metaphor of a stage play that tells short stories about your system.

This metaphor holds water in nearly every aspect of unit testing and guides my approach from each single line of code to the whole concept of test coverage. In this series, I’m focussing on one aspect in each part.

Today, we look at the backdrop.

We learnt from the first part that most unit tests are performed by four roles that appear on stage. In a theater, the stage is oftentimes decorated by additional items that facilitate the story. This is the backdrop (or the coulisse) of the play. We have the same thing in unit tests.

In a unit test, the stage is the code inside the test method:

@Test
public void rounds_up_to_the_next_decimal_power() {
    final Configuration given = new Configuration(
        StringVirtualFile.createFileFromContent(
            "report.config",
            "scale.maximum=2E5"
        )
    );
    final ReportConfiguration target = new ReportConfiguration(
        given,
        SuffixProvider.none
    );
    final Optional<Double> actual = target.scaleMaximum();
    final double expected = 1E6;
    assertThat(actual).contains(expected);
}

A good director is very picky about every detail that appears on stage. There should be no incidental item visible to the audience. In the case of a unit test, the audience is the next developer who reads the test code.

The stage doesn’t need to be devoid of “extras” and furniture, but it should be limited to the essential. A theater play isn’t a movie set where eye candy is seen as something positive. During the play, the audience should recognize the actors (and their roles) easily and without searching between all the clutter.

So we need to do two things: Declutter the stage and identify the extras.

Decluttering the stage

In our example above, there is a mismatch between the code required to instantiate the given role and the rest of the test. If this were a real play, half the time would be spent on an extra that doesn’t even speak a line. It is implied that the target gets some information from the given, but it isn’t shown. We need to remove details from the stage that aren’t even relevant to the story, but are introduced just as elaborately as the essential things. It might be fun and suspenseful to guess whether the “report.config” detail in line three is more meaningful than the “scale.maximum” specified in line four, but unit test stories are not meant to be mysterious or even entertaining. They are meant to inform the audience about a little fact of the tested system. There will be thousands of little stories about the system. Make them trivial to read and easy to understand.

We need to move the stage props off the stage:

@Test
void rounds_up_to_the_next_decimal_power() {
    Configuration given = givenScaleMaximumOf("2E5");
    final ReportConfiguration target = new ReportConfiguration(
        given,
        irrelevant()
    );
    final Optional<Double> actual = target.scaleMaximum();
    final double expected = 1E6;
    assertThat(actual).contains(expected);
}

private Configuration givenScaleMaximumOf(
    String scaleMaximum
) {
    return new Configuration(
        StringVirtualFile.createFileFromContent(
            "report.config",
            "scale.maximum=" + scaleMaximum
        )
    );
}

private SuffixProvider irrelevant() {
    return SuffixProvider.none;
}

By moving the initialization code of the Configuration object to a new private method, we employ the storytelling device of “conveniently prepared circumstances”. The private method is moved to the “lower decks” or the “backstage” area of our test class. “The stage is on top” might be our motto. Everything “private” or not-Test-annotated should not be visible first and should not be required reading.

Notice how I introduced a parameter to the new “givenScaleMaximumOf” method in order to keep the necessary test details on stage. The audience should not have to look behind the curtains to gather important information about the story. I made the parameter a string (and not a double or integer) because it is just a prop. The story doesn’t benefit from it being correctly typed. And if you look back, it was a string before, too.

Identifying the extras

I’ve also extracted the “magic number” or “silent extra” SuffixProvider.none into its own method. This method adds nothing but a name that conveys the meaning of the value to the story, instead of the value itself. If I were a director in a theater, this actor would have a plain and bland costume, in contrast to the brightly colored main roles. Maybe the stage lighting would illuminate the main roles and keep the extra in the shadowy area, too.

Now, the focus of our test method is back on the main story. The attention of our audience will be on the stage and it will not be burdened by irrelevant details. We will even label props as dispensable if they are.

Keep your stages free from clutter and the eyes of your audience on the story. Test code is boring by nature. It doesn’t have to be plodding, too.

Epilogue

This is the second part of a series. All parts are linked below:

Every Unit Test Is a Stage Play – Part I

At the last dev brunch, I got a recommendation for a talk that tries to explain functional programming differently. What really got me was the effectiveness of the changed vocabulary. I’ve seen this before, in the old talk about test driven development and behaviour driven development. But in my head, I think about unit tests with another overarching metaphor that I’m trying to explain in this blog post series:

Every unit test is a stage play that tells a short story about your system.

And this metaphor really guides my approach to nearly any aspect of unit testing, from each single line of code to the whole concept of test coverage. So I’m breaking my explanation into (at least) five parts, focussing on one aspect in each part.

Today, we look at the actors.

In each classic play, there are well-known roles that can be played by nearly any human, but always stay the same role. There’s the hero, the (comedic) sidekick and of course, the villain or antagonist. In every show of Romeo and Juliet, there will be a Romeo. It might not be the most convincing Romeo ever, but the role stays the same, no matter the cast.

The same thing is true for every well-formed unit test. There are four roles that always appear on stage:

  • target: This is the object under test or the code under test if you don’t use objects. The target is probably different for every unit test you write, but the role is always present. I’ve seen it being called “cut” for “code under test”, but I prefer “target”. If you see a reference named “target” in my test code, you can be sure about the role it plays in the story.
  • actual: If you can design your code to adhere to the simple “parameters in, result out” call pattern, the “result out” is the “actual”. It is what your target produced when challenged by the specific parameters of your test. One trick to testable code is to design the “actual” role as being quite simple and “flat”.
  • expected: This might be the closest thing to an antagonist in your play. The “expected” role is filled with the value (or values) that your “actual” is measured against. If your “actual” is simple, your “expected” will be simple, too. If your “actual” is too complex, the “expected” role will be overbearing. In any case, the “expected” role is what drives your assertions.
  • given: Our hero, the “target”, is often dependent on entry parameters or secondary objects (mocked or not). These sidekicks are part of the “given” role. You might think of the “given-when-then” storytelling structure of behaviour driven development for the name. If you strive for a simple code structure, the required “given” in your unit test should be manageable.

As you can see, the story of a typical unit test is always the same: The target, with the help of the given, produces an actual that competes against the expected.

If this story has a happy end, your test runs green. If the actual fails the expectation, your test runs red. If the target fails to produce an actual at all (think about an exception or error), your whole play falls apart and the test runs red.

Enough theory, let’s look at a unit test that uses the four roles:

@Test
public void rounds_up_to_the_next_decimal_power() {
    final Configuration given = new Configuration(
        StringVirtualFile.createFileFromContent(
            "report.config",
            "scale.maximum=2E5"
        )
    );
    final ReportConfiguration target = new ReportConfiguration(
        given,
        SuffixProvider.none
    );
    final Optional<Double> actual = target.scaleMaximum();
    final double expected = 1E6;
    assertThat(actual).contains(expected);
}

I’ve highlighted the roles for better visibility. Note that for a role to appear in the play, it doesn’t really have to be named explicitly. Most of the time, the last two lines would be collated into one:

assertThat(actual).contains(1E6);

You can still see the “expected” role play its part, but not as prominently as before.

You also probably saw the extra “given” that wasn’t highlighted:

SuffixProvider.none

It might be relevant to the story or really be an uncredited extra that is not crucial in the target’s journey to produce the correct actual. If that’s the case, it seems appropriate not to name it. We will learn about techniques that I use to make these extras more nondescript in a later part. Right now, we can differentiate between main roles that are named and secondary roles that are just there, as part of the scenery. Just don’t fool your audience by having an unnamed actor contribute an important piece to the story’s success. That might be a cool plot twist, but I’m not here to be surprised.

Let your tests perform boring plays, but lots of them.

By using the four roles of the test play, you make it clear to the reader (your real audience) what to expect from your test code parts. Don’t name irrelevant test code parts, and only omit the role names if there are no extras on stage.

Your audience will still find your play boring (that’s the fate of nearly all test code), but it won’t feel disregarded or, even worse, deceived.

Epilogue

This is the first part of a series. All parts are linked below: