Unit-Testing Deep-Equality in C#

In the suite of redux-style applications we are building in C#, we make extensive use of value types, which implies that a value compares as equal exactly if all of its contents are equal. This is also known as “deep equality”, as opposed to “reference equality” or “shallow equality”. Both of those imply deep equality, but the reverse is not true: the same object is of course equal to itself, no matter how deep you look, and an object that references the same data as another object also has equal content. But two objects that contain distinct lists with equal content will be unequal under shallow comparison, yet equal under deep comparison.

Though init-only records already provide a per-member comparison as Equals by default, this fails for collection types such as ImmutableList<> that, against all intuition but in accordance with their documentation, only provide reference equality. For us, this means that we have to override Equals for any value type that contains a collection. And this is where the trouble starts: once Equals is overridden, it is extremely easy to forget to also adapt it when adding a new property. Since our redux-style machinery relies on correctly detecting “unequal”, this would manifest in the application as a sporadically missing UI update.

So we devised a testing strategy for those types, using a little bit of reflection:

  1. Create a sample instance of the value type with no member retaining its default value
  2. Test, by going over all properties and comparing each to the same property in a default instance, that all members of the sample are indeed non-default
  3. For each property, run Equals between the sample instance and a modified sample instance that has this property reset to the value from a default instance

If step 2 fails, it means there’s a member that’s still at its default value in the sample instance, i.e. the test wasn’t updated after a new property was added. If step 3 fails, the sample was updated, but the new property is not considered in Equals – and the test can even tell which property is missing.
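Here is a minimal sketch of this strategy. The record OrderState and its members are made up for illustration, and plain exceptions stand in for your test framework’s assertions:

using System;
using System.Collections.Immutable;
using System.Linq;
using System.Reflection;

// Hypothetical value type: a record with a collection member, so Equals is overridden by hand.
public sealed record OrderState(string Name, ImmutableList<int> Items)
{
    public bool Equals(OrderState? other) =>
        other is not null
        && Name == other.Name
        && Items.SequenceEqual(other.Items);

    public override int GetHashCode() => HashCode.Combine(Name, Items.Count);
}

public static class DeepEqualityTest
{
    public static void Main()
    {
        // Step 1: a sample where no member retains its default value.
        var sample = new OrderState("sample", ImmutableList.Create(1, 2, 3));
        var defaults = new OrderState("", ImmutableList<int>.Empty);

        var properties = typeof(OrderState).GetProperties(BindingFlags.Public | BindingFlags.Instance);
        foreach (var property in properties)
        {
            // Step 2: catch a sample that was not updated for a new property.
            if (Equals(property.GetValue(sample), property.GetValue(defaults)))
                throw new Exception($"{property.Name} is still default in the sample");

            // Step 3: resetting a single property must break equality,
            // otherwise Equals does not consider that property.
            var modified = sample with { };
            property.SetValue(modified, property.GetValue(defaults));
            if (sample.Equals(modified))
                throw new Exception($"{property.Name} is not considered in Equals");
        }
    }
}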

The same problems of course arise with GetHashCode, but they are usually less severe: forgetting to add a property just makes collisions more likely. It can be tested in much the same way, but that can potentially lead to false positives, since collisions can occur even when all properties are correctly considered in the function. In that case, however, the sample can usually be altered to remove the collision – and such a collision is really unlikely anyway. In fact, we never had a false positive.

Format-based sorting looks clever, but is dangerous

A neat trick I learnt early in my career, even before I learnt about version control, was how to format a date as a string so that alphabetically sorted lists would contain them in the “correct” order:

“YYYYMMDD” is the magic string.

If you format your dates as 20230122 and 20230123, the second string will be sorted after the first one. With nearly any other format, your date strings will not be sorted chronologically in the file system.

I’ve also found out that this is nearly the only format that most people cannot intuitively recognize as a date. So while it is familiar to me and sorts conveniently, it is confusing, or at least in need of explanation, for virtually every user of my systems.

Keep that in mind when listening to the following story:

One project I adopted is a custom enterprise resource planning system that was developed by a single developer who one day left the company and the code behind. The software was in regular use and in dire need of maintenance and new features.

One concept in the system is central to its users: the list of items in an invoice or a bill of delivery. This list contains items in a defined order that is important to the company and its customers.

To my initial surprise, the position of an item in the list was not defined by an integer, but by a string. This can be explained by the need for “sub-positions” that form a hierarchy of items, like in this example:

1 – basic item

1.1 – item upgrade #1

1.2 – item upgrade #2

Both positions “1.1” and “1.2” are positioned “underneath” position “1” and should be considered glued to it. If you move position “1” to position “4”, you also move 1.1 to 4.1 and 1.2 to 4.2.

But there was a strange formatting thing going on with the positions: they were stored as strings in the database, but with a strange padding in front. Instead of “1”, “2” and “3”, the entries contained the positions “  1”, “  2” and “  3”. All positions were prefixed with two space characters!

Well, nearly all positions. As soon as the list grew, the padding turned out to depend on the number of digits in the position: “  9”, but then “ 10” and “100”.

The reason is found relatively easily: if you prefix with spaces (or most other characters, for example “0”), your strings will sort in numerical order. Without the prefixes, they would be sorted like “1”, “10”, “11”, “2”.
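A quick C# illustration of the effect, using made-up position values:

using System;
using System.Linq;

class PaddedSortDemo
{
    static void Main()
    {
        var positions = new[] { "9", "10", "2", "100", "1" };

        // Plain character-code comparison sorts "10" before "2"...
        Array.Sort(positions, StringComparer.Ordinal);
        Console.WriteLine(string.Join(", ", positions)); // 1, 10, 100, 2, 9

        // ...but left-padding to a fixed width restores the numeric order.
        var padded = positions.Select(p => p.PadLeft(3)).ToArray();
        Array.Sort(padded, StringComparer.Ordinal);
        Console.WriteLine(string.Join(", ", padded));    //   1,   2,   9,  10, 100
    }
}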

That means that the desired ordering of the positions is hardcoded in the database representation! You probably already thought about the case of a position greater than 999. That’s when trouble begins! Luckily, an invoice with a thousand items on the list is unheard of in the company (yet!).

Please note that while the desired ordering is hardcoded in the database, the items are still loaded in a different order (the order they were entered into the system) and need to be sorted by the application. The default sorting for strings is alphabetical order, so the original developer, probably being clever and lazy at once, went with it and formatted the data in a way that produces the desired result without additional logic during sorting.

If you look at the code, you see seemingly strange formatting calls on the position all over the place. This is necessary because, for example, every time a user enters a position into the system, it needs to be reformatted (or at least sanitized) to adhere to the “auto-sortable” format.

If you wonder what a hierarchical sub-position looks like in this format, it’s “ 1. 1”, “ 1. 10” and even “ 1. 17. 2. 4”. The database stores mostly blanks in this field.

While this approach might seem clever in the moment, it is highly dangerous. It conflates several things that should stay separate, like “storage format” and “display format”, “item order” or just “valid value range”. It is a clear violation of the “separation of concerns” principle. And it broke the application when I missed one place where the formatting was required, but not present. Of course, this only manifests as a problem when your test cases (or manual tries) exceed a list of 9 entries – lesson learnt here.

I dread the moment when the company calls to tell me about this “unusually large invoice” that exceeds the 999 limit. This would mean a reformatting of all stored data or another even more clever hack to circumvent the problem.

Have you encountered a format in the wild that was purely there for sorting? What was the story? Tell us in the comments!

How comments get you through a code review

Code comments are a big point of discussion in software development. How and where should you use comments? Should you comment at all? Isn’t well-written code documentation enough on its own? Here I would like to share my own experience with comments.

In the last months I had some code reviews where colleagues looked over my merge requests and gave me feedback. And it happened again and again that they asked why I did something or why I decided to go a certain way.
Often the decisions had a specific reason, for example a customer requirement, a special case that had to be covered, or the need to keep the technology stack small.

That is all metadata that would be tedious and time-consuming for reviewers to gather. And at some point, it is no longer a reviewer, but a software developer 20 years from now, who has to maintain the code and cannot ask you questions any more. The same applies if you yourself adjust the code again some time later and cannot remember your thoughts from months ago. This often happens faster than you think. To highlight how fast details disappear, here is a current example: this week I set up a new laptop because the old one had a hardware failure. I had done all the steps only half a year ago, but without documentation, I would not have been able to reconstruct everything. And where the documentation was missing or incomplete, I had to invest effort to rediscover the required steps.

Example

Here is an example of such a comment. In the code, I want to check whether the mixer volumes have changed after the user made changes in the setup dialog.

var setup = await repository.LoadSetup(token);

var volumeOld = setup.Mixers.Contents.Select(mixer => mixer.Volume).ToList();

setup = Setup.App.RunAsDialog(setup, configuration);

var volumeNew = setup.Mixers.Contents.Select(mixer => mixer.Volume).ToList();
if (volumeNew.SequenceEqual(volumeOld))
{
    break; // volumes unchanged, leave the surrounding dialog loop
}

ResizeToMixerVolume(setup, volumeOld);

Why do I save the volume in an additional variable instead of just writing the setup into a new variable in the third line? That would be much easier and more elegant. I change this quickly – and the program is broken.

This little comment would have prevented that, and everyone would have understood why this way was chosen.

// We need to copy the volumes, because the original setup is partially mutated by the Setup App.
var volumeOld = setup.Mixers.Contents.Select(mixer=>mixer.Volume).ToList();

If you annotate such prominent places, into which a lot of brain work has gone, you make the code more comprehensible to everyone, including yourself. This way, a reviewer can understand the code without questions and the code becomes more maintainable in the long run.



When laziness broke my code

I was just integrating a new task-graph system for a C# machine control system when my tests started to go red. Note that the tasks I refer to are not the same as the C# Task implementation, but the broader concept. Task-graphs need to be DAGs (directed acyclic graphs), because a cycle of tasks waiting on each other could never finish. The general algorithm to execute such a task-graph is called topological sorting, and it goes like this:

  1. Find the number of dependencies (incoming edges) for each task
  2. Find the tasks that have zero dependencies and start them
  3. Whenever a task finishes, decrement the dependency count of its follow-up tasks by one and start those that reach zero.

The failing graph was simple: task A was immediately followed by a task B, which was followed by a few more tasks.

I quickly figured out that the tests were failing because node B was executed twice. Looking at the call-stacks of both executions, I could see that the first execution of B happened when A was completed. This is correct as per step 3 of the algorithm. However, the second time it was started directly from the initial Run method that does the work of step 2: starting the initial tasks that are not started recursively. I was definitely not calling Run twice, so how did that happen?

public void Run()
{
    var ready = tasks
        .Where(x => x.DependencyCount == 0);

    StartGroup(ready);
}

Can you see it? It is important to note that many of the tasks in this graph are asynchronous: their completion is triggered by an IObserver, a C# Task completing, or some other event. When such an event is processed, StartGroup is used to start all tasks that have no more dependencies. However, A was no such task; it was synchronous, so the StartGroup({B}) call happened while Run was still on the stack.

Now what happened was that when A (instantly!) completed, it set the DependencyCount of B to 0. Since ready in the code snippet is lazily evaluated from within StartGroup, the ‘contents’ actually change while StartGroup is running.

The fix was adding a .ToList() after the .Where, a unit test that checks that this specific situation does not happen again, and a mental note that lazy evaluation can be deceiving.
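For completeness, here is the fixed Run method with the query materialized up-front:

public void Run()
{
    // Materialize the query so that tasks whose dependency count drops
    // to zero during StartGroup cannot sneak into this snapshot.
    var ready = tasks
        .Where(x => x.DependencyCount == 0)
        .ToList();

    StartGroup(ready);
}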

Use real(istic) data from early on

When developing software in general, and user interfaces (UIs) specifically, one important aspect is often neglected: the form, the shape and especially the amount of data.

One very common practice is to fill unknown texts with fragments of the famous Lorem ipsum placeholder text. This may be a good idea if you are designing software for displaying articles similar in size and structure to your placeholder text. In all other cases I would regard using lorem ipsum as a smell.

My recommendation is to collect as many samples of real or at least realistic data as feasible. Use them to build and test your application. Why do I think it matters? Let me elaborate a bit in the following sections.

Data affects the layout

You can only choose a fitting layout if you have knowledge about the length of certain texts, the size of images, etc. The width of columns can be chosen more appropriately, you can decide if you need scrollbars, if you want them permanently visible for a more stable and calm layout, and how large panels or text areas have to be for optimum readability.

Data affects the choice of UI controls

The data your application has to handle should reflect not only in the layout but also in the type of controls to be used.

For example, the number of options the user has to choose from drastically affects the selection of an adequate UI control. If you have only 2 or 3 options, toggle buttons, checkboxes or radio buttons next to each other or laid out in one column may be a good fit. If the count of options is greater, dropdowns may be better. At some point, maybe a full-blown list with filters, sorting and search becomes necessary.

To make a good decision, you have to know the expected amount and shape of your data.

Data affects algorithms and technical decisions regarding performance

The data your system has to work with and to present to the user also has technical impact. If the datasets are moderate in size, you may be able to transfer them all to the frontend and do presentation, filtering etc. there. That has the advantage of reducing backend stress and putting computational effort in the hands of the clients.

Often this becomes unfeasible when the system and its data pool grow. Then you have to think about backend search and filtering, data compression and the like.

Also, algorithms and data structures may change from simple lists and linear search to search trees, indexes and lookup tables.

The better you know the scope of your system and the data therein, the better your technical decisions can be. You will also be able to judge whether the YAGNI principle applies or not.

Conclusion

To quickly sum up the essence of the advice above: get to know the expected amount and shape of the data your application has to deal with, so that you can design your system and its UI/UX accordingly.

Fun with docker container environment variables

Docker (as one specific container technology product) is a basic ingredient of our development infrastructure that steadily gained ground, spreading from the production servers via the build servers to our development machines. And while it is not simple when used for operations, the complexity increases a lot when used for development purposes.

One way this complexity expresses itself is in making the moving parts configurable and using different configurations. A common way to make things configurable with containers is environment variables. Running a container might look like an endurance typing contest if they are used extensively:

docker run --rm \
-e POSTGRES_USER=myuser \
-e POSTGRES_PASSWORD=mysecretpassword \
-e POSTGRES_DB=mydatabase \
-e PGDATA=/var/lib/postgresql/data/pgdata \
ubuntu:22.04 env

This is where our fun begins.

Using an env-file for extensive configurations

The parameter --env-file reads environment variables from a local text file with a simple key=value format:

docker run --rm --env-file my-vars.env ubuntu:22.04 env

The file my-vars.env contains all the variables line by line:

FIRST=1
SECOND=2

If we run the command above in a directory containing the file, we get the following output:

PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=46a23b701dc8
FIRST=1
SECOND=2
HOME=/root

The HOSTNAME might vary, but the FIRST and SECOND environment variables are straight from our file.

The only caveat is that the env-file really has to exist, or we get an error:

docker: open my-vars2.env: The system cannot find the file specified.

My beloved shell

The env-file can be empty, contain only comments (use # to begin them) or whitespace, but it has to be present.

Please be aware that these env-files are different from the .env file(s) in docker-compose. This simple fact costs a lot of fun, like variable expansion. As far as I’m aware, there is no .env-file mechanism in docker itself.

But we can have some kind of variable substitution, too:

Using multiple env-files for layered configurations

If you don’t want to change all your configuration entries all the time, you can layer them: one layer for the “constants”, one layer for global presets and one layer for local overrides. You can achieve this with multiple --env-file parameters; they are evaluated in the order you specify them:

docker run -it --rm --env-file first.env --env-file second.env ubuntu:22.04 env

Let’s assume that the content of first.env is:

TEST=1
FIRST=1

And the content of second.env is:

TEST=2
SECOND=2

The results of our container call are (abbreviated):

TEST=2
FIRST=1
SECOND=2

You can see that the second TEST assignment wins. If you switch the order of your parameters, you would read TEST=1.

Now imagine that first.env is named global.env and second.env is named local.env (or default.env and development.env) and you can see how this helps you with modular configurations. If only the files did not have to exist all the time, this would even fit well with git and .gitignore.

The best thing about this feature? You can have as many --env-file parameters as you like (or as your operating system allows).

Mixing local and configured environment variables

We don’t have explicit variable expansion (like TEST=${FIRST} or something) with --env-files, but we have a funny poor man’s version of it. Assume that the second.env from the example above contains the following entries:

OS
TEST=2
SECOND=2

You’ve seen that right: the first entry has no value (and no equals sign)! In that case, the value is substituted from your operating system’s environment:

TEST=2
FIRST=1
OS=Windows_NT
SECOND=2

By just declaring, but not assigning, an environment variable, you have it taken from your own environment. This even works if the variable was already assigned in a previous --env-file.

If you don’t believe me, this is a documented feature:

If the operator names an environment variable without specifying a value, then the current value of the named variable is propagated into the container’s environment

https://docs.docker.com/engine/reference/run/#env-environment-variables

And even more specific:

When running the command, the Docker CLI client checks the value the variable has in your local environment and passes it to the container. If no = is provided and that variable is not exported in your local environment, the variable won’t be set in the container.

https://docs.docker.com/engine/reference/commandline/run/#set-environment-variables–e—env—env-file

This is a cool feature, albeit a little bit creepy. Sadly, it doesn’t work in all tools that allow running docker containers. Last time I checked, PyCharm omitted this feature (as one example).

Epilogue

I’ve presented you with three parts that can be used to manage different configurations for docker containers. There are some pain points (non-optional file existence, feature loss in tools, no direct variable expansion), but also a lot of fun.

Do you know additional tricks and features in regard to environment variables and docker? Comment below or link to your article.

Create custom jre in Docker

I recently wrote a Java application and wanted to run it in a Docker container. Unfortunately, my program needs a module from the Java jdk, which is why I can’t run it with the standard jre. When I run the program with the jdk, everything works, but I get much more than I actually need, and a 390 MB docker container.

That’s why I set out to build my own jre, cut-down exactly to my application needs.

For this I found two Gradle plugins that help me: Jdeps checks the dependencies of an application and can summarise them in a file. This file can be used as input for the Jlink plugin, which builds a custom jre from it.

In the following I will present a way to build a custom jre with gradle and the two plugins in a multistage dockerfile and run an application in it.

Configuration in Gradle

First we need to do some configuration in the build.gradle file. To use jdeps, the application must be packaged as an executable jar file and all dependencies must be copied into a separate folder. First we create a task that copies all current dependencies to a folder named lib. For the jar, we need to define the main class and set the class path to search for all dependencies in the lib folder.

// copies all the jar dependencies of your app to the lib folder
task copyDependencies(type: Copy) {
    from configurations.runtimeClasspath
    into "$buildDir/lib"
}

jar {
    manifest {
        attributes["Main-Class"] = "Main"
        attributes["Class-Path"] = configurations.runtimeClasspath
            .collect{'lib/'+it.getName()}.join(' ')
    }
}

Docker Image

Now we can get to work on the Docker image.

As a first step, we build our jre inside a Java jdk image. To do this, we run the copyDependencies task we just created and build the Java application with gradle. Then we let jdeps collect all dependencies from the lib folder.
The output is written to the file jre-deps.info. It is therefore important that no errors or warnings end up in the output, which is why we set the -q parameter. The --print-module-deps parameter is crucial so that the module dependencies are printed and saved in the file.

The file is now passed to jlink, and a custom-fit jre for the application is built from it. The parameters set in the call further reduce the size; they are described in detail in the plugin documentation.

FROM eclipse-temurin:17-jdk-alpine AS jre-build

COPY . /app
WORKDIR /app

RUN chmod u+x gradlew; ./gradlew copyDependencies; ./gradlew build

# find JDK dependencies dynamically from jar
RUN jdeps \
--ignore-missing-deps \
-q \
--multi-release 17 \
--print-module-deps \
--class-path build/lib/* \
build/libs/app-*.jar > jre-deps.info

RUN jlink \
--compress 2 \
--strip-java-debug-attributes \
--no-header-files \
--no-man-pages \
--output jre \
--add-modules $(cat jre-deps.info)

FROM alpine:3.17.0
WORKDIR /deployment

# copy the custom JRE produced from jlink
COPY --from=jre-build /app/jre jre
# copy the app dependencies
COPY --from=jre-build /app/build/lib/* lib/
# copy the app
COPY --from=jre-build /app/build/libs/app-*.jar app.jar

# run in user context (alpine uses busybox adduser instead of useradd)
RUN adduser -D schneide
RUN chown schneide:schneide .
USER schneide

# run the app on startup
CMD jre/bin/java -jar app.jar

In the second Docker stage, a smaller image is used and only the created jre, the dependencies and the app.jar are copied into it. The app is then executed with the jre.
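Building and running the image then works as usual; the image name is of course up to you:

docker build -t my-java-app .
docker run --rm my-java-app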

In my case, I was able to reduce the Docker container size to 110 MB with the alpine image, which is less than a third of the original size. With an ubuntu:jammy image it came to 182 MB, which still saves more than half.

Using custom Docker containers for development with WebStorm & Co.

Docker has become one of the go-to tools of many developers these days. Not because every project should implement as many technological buzzwords as possible, but because containers offer a great deal of flexibility for comparatively little setup hassle.

For stuff like node-based applications, using a Dev Container is useful because, in principle, you do not need to have any of the npm stuff on your actual machine. Not only do you avoid having these monstrous node_modules folders, you also avoid accidental dependencies on some specific configuration that might hold true on your device, but not generally.

Probably for some of these reasons, JetBrains included Docker Dev Containers as a kind of “remote” development. In a sense, a docker container can be thought of as a remote machine, regardless of the fact that it shares your local hardware and is just a software abstraction.

In my opinion, JetBrains usually makes great software, but there is some weird behaviour in their usage of Docker Dev Containers, and it took us a while to find a quite general and IDE-independent solution. I’ll just use WebStorm as an example of something that appeared unusually hard to tame. I guess it will become better eventually.

For now, one might think of using the built-in config like:

  1. New Run Configuration -> npm
  2. Node Interpreter: “…”
  3. “+” -> Add Remote… -> “Docker”
  4. Use an image of your choice, either one of the node base images or a custom one (see below) with its corresponding tag

Now, for reasons that seem to be completely undocumented and unavoidable (tell me if you know more!), the IDE forces you to mount your project to /opt/project inside this container, where it gets mirrored during runtime to somewhere under /tmp/<temporary uuid>/ – and in several of our projects (due to our folder structure, which is not even particularly abnormal) this made the option completely unusable.

The way one can work without these strange idiosyncrasies is as follows:

First, create a Dockerfile in which you do all the required setup. It might be a good idea to switch the user away from “root” to something more restricted like “node” (even though in development, you probably have your eyes on everything anyway). You can do more custom setup here. This can look like:

FROM node:16.18.0-bullseye-slim

WORKDIR /your-home-inside-container
RUN chown node .

COPY package.json package-lock.json /your-home-inside-container/

USER node

RUN npm ci --ignore-scripts

# COPY <whatever you might want> <where you want it inside>

EXPOSE 3000

CMD npm start

From that Dockerfile, build a local image in the same folder like:

# you might need -f if the Dockerfile is not named "Dockerfile"
docker build -t your-dev-image .

Then, create a new Run Configuration, but choose “Shell script” (not npm):

docker run -it --rm --entrypoint= -v ${PWD}/src:/your-home-inside-container/src -p 0.0.0.0:3000:3000 your-dev-image

You might use a different “-p” port forwarding if you do not want to have your development server broadcasting on port 3000 (another advantage of Dev Containers, you can easily run multiple instances on different ports).

This is about the whole magic. But there are two further things that could be important here:

Hot Reloading (live updating whenever source files change)

This is done rather easily, however it seems to change once in a while. We figured out that, at least if you are using react-scripts@5.0.1 (which is what “npm start” addresses, unless you do that differently), you just need to set the environment variable WATCHPACK_POLLING=true. I.e. put that in your Dockerfile as

ENV WATCHPACK_POLLING true

or pass it into your docker run ... -e WATCHPACK_POLLING=true ... your-dev-image line

Routing a development proxy to some “local host”

If your software e.g. addresses a backend that is running on your development machine or in another Docker Dev Container, it cannot just access that host from inside the Docker container. Neither is the port forwarding via “-p …:…” of any use, because that addresses the other direction – i.e. which container port is exposed to outside access. Here, we go the other way.

When the software inside the container wants to address “localhost”, it needs to be directed at the host name under which your local machine appears. Docker has a special hostname for that, and it is host.docker.internal.

I.e. if your local backend is running on “localhost:8080” on your machine, you need to tell your Dev Container to direct its requests to “host.docker.internal:8080”.

In one of our projects, we needed some specific control over the proxy that the React development server gives you, and here is a way to gain that control: add a “setupProxy.js” inside your src/ folder and put something like this in it:

const { createProxyMiddleware } = require('http-proxy-middleware');

module.exports = function(app) {
    if (process.env.LOCAL_DEVELOPMENT) {
        return;
    }

    let httpProxyMiddleware = createProxyMiddleware({
        target: process.env.REACT_APP_PROXY || 'http://localhost:8080',
        changeOrigin: true,
    });
    app.use('/api', httpProxyMiddleware); // change to your needs accordingly
};

This way, one can always change the address by setting the REACT_APP_PROXY environment variable as in the step above, and one can also disable the whole proxying by setting the LOCAL_DEVELOPMENT env variable to true. Name these as you like, and you can even extend this setupProxy to include web sockets or different proxies for different routes. If you have any questions on that, just comment below 🙂

Improving my C++ time queue

Another code snippet that can be found in a few of my projects is the “time queue”, which is a simple ‘priority queue’ style data structure that I use to defer actions to a later time.

With this specific data structure, I have multiple implementations that clearly came from the same source. One indicator for that is a snarky comment in both about how std::list is clearly not the best choice for the underlying data structure. They have diverged a bit since then though.

Requirements

In my use case I do not use time points, but only durations (in standard-library nomenclature). This is a pretty restrictive requirement; without it, any priority queue (e.g. from boost or even from the standard library) could be used quite well. On the other hand, it allows me to use floating-point durations with predictable accuracy. The queue has two important functions:

  1. insert to insert a timeout duration and a payload.
  2. tick is called with a specific duration and then reports the payloads that have timed out since their insertions.

Typically tick is called a lot more frequently than insert, and it should be fast. The payload is typically something like a std::function or an id for a state-machine that needs to be pulsed.

The basic idea is to only keep the duration difference to the previous item in the list. Only the first item keeps its total timeout. This way, when tick is called, usually only the first item needs to be updated. tick only has to touch more items when they time out. For example, inserting timeouts of 5, 7 and 8 stores the remaining values 5, 2 and 1; a tick of 6 then pops the first item and reduces the second one’s remaining value to 1.

Simple Implementation

One of the implementations for void insert(TimeType timeout, PayloadType payload) looks like this:

if (tick_active_)
{
  deferred_.push_back({ .remaining = timeout, .payload = std::move(payload) });
  return;
}

auto i = queue_.begin();
for (; i != queue_.end() && timeout > i->remaining; ++i)
  timeout -= i->remaining;

if (i != queue_.end())
  i->remaining -= timeout;

queue_.insert(i, { .remaining = timeout, .payload = std::move(payload) });

There is a special case there that guards against inserting into queue_ (which is still a very bad std::list) by instead inserting into deferred_ (which is a std::vector, phew). We will see why this is useful in the implementation for template <typename Executor> void tick(TimeType delta, Executor execute):

tick_active_ = true;
auto i = queue_.begin();
for (; i != queue_.end() && delta >= i->remaining; ++i)
{
  delta -= i->remaining;
  execute(i->payload);
}

if (i != queue_.end())
  i->remaining -= delta;

queue_.erase(queue_.begin(), i);
tick_active_ = false;

while (!deferred_.empty())
{
  auto& entry = deferred_.back();
  insert(entry.remaining, std::move(entry.payload));
  deferred_.pop_back();
}

The timed out items are reported via a callback that is supplied as Executor execute. Of course, these can do anything, including inserting new items, which can invalidate the iterator. This is a common use case, in fact, as many deferred actions will naturally want follow ups (let’s ignore for the moment that the implementation is nowhere near exception safe…). The items that were deferred to deferred_ in insert get added to queue_ after the iteration is complete.

This worked well enough to ship, but the other implementation had another good idea. Instead of reporting the timed-out items to a callback, it just returned them in a vector. The whole tick_active_ guard becomes unnecessary, as any processing on the returned items is naturally deferred until after the iteration:

std::vector<PayloadType> tick(TimeType delta)
{
  std::vector<PayloadType> result;
  auto i = queue_.begin();
  for (; i != queue_.end() && delta >= i->remaining; ++i)
  {
    delta -= i->remaining;
    result.push_back(i->payload);
  }

  if (i != queue_.end())
    i->remaining -= delta;

  queue_.erase(queue_.begin(), i);
  return result;
}

This solves the insert-while-tick problem, and lets us use the result neatly in a range-based for-loop like this: for (auto const& payload : queue.tick(delta)) {}. Which I personally always find a little bit nicer than inversion-of-control. However, the cost is at least one extra allocation for timed-out items. This might be acceptable, but maybe we can do better for very little extra complexity.

Return of the second list

Edit: The previous version of this article tried to keep the timed-out items at the beginning of the vector before returning them as a std::span. As commenter Steffen pointed out, this again prevents us from inserting while iterating on the result, as any insert might invalidate the backing-vector.

We can get rid of the allocation for most of the tick calls, even if they return a non-empty list. Remember that a std::vector does not deallocate its capacity even when it’s cleared unless that is explicitly requested, e.g. via shrink_to_fit. So instead of returning a new vector each time, we’re keeping one around for the timed out items and return a const-ref to it from tick:

std::vector<PayloadType> const& tick(TimeType delta)
{
  timed_out_.clear();
  auto i = queue_.begin();
  for (; i != queue_.end() && delta >= i->remaining; ++i)
  {
    delta -= i->remaining;
    timed_out_.push_back(std::move(i->payload));
  }

  if (i != queue_.end())
    i->remaining -= delta;

  queue_.erase(queue_.begin(), i);
  return timed_out_;
}

This solution is pretty similar to the deferred list from the first version, but instead of ‘locking’ the main list while iterating, we’re now separating the items we’re iterating on.

Web Security for Frontend and Backend

The web is everywhere and we use it for tons of important tasks like online banking, shopping and communication, so it becomes increasingly important to implement proper security. As attacks like cross-site scripting (XSS) or cross-site request forgery (CSRF) are wide-spread, browsers, web standards designers and web application developers implement more and more mechanisms to make such attacks harder or even impossible. This puts a certain burden on both frontend and backend developers.

Since security is hard and should not be an afterthought, I would like to give you some advice for implementing a web app using a Javascript frontend and a backend service written in one of the common languages/frameworks like .NET, Micronaut, Javalin, Flask or the like.

Frontend advice

I prefer traditional cookie-based sessions over JWT-based approaches for interactive web frontends because of their simplicity, browser support and the possibility to use them without Javascript. For service-to-service communication, bearer tokens of some kind may be more appropriate. Your Javascript client has to include the credentials in its fetch() calls to make the browser send the cookie, as shown below.
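A minimal example of such a call; the URL is made up, the important part is the credentials option:

const response = await fetch('https://api.example.com/profile', {
  credentials: 'include', // make the browser send cookies even on cross-origin requests
});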

Unfortunately, incorrect use of cookies may be insecure, so be sure to check up-to-date advice on cookies; see some hints below in the backend part because cookies are configured and issued there.

Backend advice

Modern web security requires additional measures on the server side to ensure secure authentication and communication with web clients. You should use https wherever possible to gain at least transport security and avoid many cases of sniffed credentials or content changed between client and backend.

Improving security of cookies

First of all, cookies should be HttpOnly so that scripts cannot access their contents. Furthermore, you should set the SameSite and Secure attributes appropriately and use https whenever possible. That way you have mitigated the most common attacks on your session handling and authentication.
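As a sketch of what that can look like in code, this is how those attributes could be configured for cookie authentication in ASP.NET Core – one of the .NET options mentioned above; other frameworks offer equivalent settings:

using Microsoft.AspNetCore.Authentication.Cookies;

var builder = WebApplication.CreateBuilder(args);

builder.Services
    .AddAuthentication(CookieAuthenticationDefaults.AuthenticationScheme)
    .AddCookie(options =>
    {
        options.Cookie.HttpOnly = true;                           // no script access
        options.Cookie.SameSite = SameSiteMode.Strict;            // mitigate CSRF
        options.Cookie.SecurePolicy = CookieSecurePolicy.Always;  // https only
    });

var app = builder.Build();
app.UseAuthentication();
app.Run();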

Another bonus of cookies is that browsers can inform you about problems with your cookie setup, for example in the browser’s developer tools.

Configuring Cross-Origin Resource Sharing (CORS)

Nowadays it is common for a web app to be served from a different host than the backend API. This is a potential problem, because attackers may sneak scripts into the browser of a user and use the existing session to access resources in an illegal way. Therefore another means of improving the security of web apps running in browsers was introduced: access control using CORS.

For browsers to be able to prevent or allow requests to certain resources, the backend has to provide appropriate Access-Control headers, most notably Access-Control-Allow-Origin and Access-Control-Allow-Credentials. Make sure to set these values correctly, or your frontend will have trouble accessing your backend – or you introduce a potential security hole.

Fortunately many web frameworks make it easy to configure CORS, see the Micronaut documentation for example.
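As a sketch, a credentials-aware CORS policy in ASP.NET Core could look like this; the origin is made up, and note that a wildcard origin is not allowed together with credentials:

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddCors(options =>
    options.AddPolicy("frontend", policy => policy
        .WithOrigins("https://app.example.com") // explicit origin, no wildcard
        .AllowCredentials()                     // required for cookie-based sessions
        .AllowAnyHeader()
        .AllowAnyMethod()));

var app = builder.Build();
app.UseCors("frontend");
app.Run();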

Conclusion

Security is always important and browser vendors keep implementing additional measures to mitigate problems in the current web environment. Make sure you keep up with the latest advice and measures and implement them in your applications.