Using Message Queuing Telemetry Transport (MQTT) for communication in a distributed system

If you have several participants who are interested in each other’s measurements or events, you can use the MQTT protocol for this. In the following, I will present the basics.

The MQTT protocol is based on asynchronous publish-subscribe communication. This makes it suitable for networks with high latency, and it can also be operated over connections with low bandwidth.

At the center is an MQTT broker. It receives published messages and forwards them to the subscribing clients. MQTT topics are used for this purpose: each message is published to a topic. Topics look like a file path and can be chosen almost freely. The only exception are names beginning with $, because these are reserved for broker-internal purposes such as statistics topics. An example of such a topic would be “My/Test/Topic”. Note that topics are case-sensitive. Every level of the topic hierarchy can be subscribed to with the # wildcard, for example “My/Test/Topic/#”, “My/Test/#” or “My/#”. In the latter case, a message published to “My/Productive/Things” would also be received by the subscriber. This way you can build your own message hierarchy using the topics.

The picture shows a rough structure of the MQTT infrastructure. Two clients have subscribed to a topic. If the sensor sends data to the topic, the broker forwards it to the clients. One of the clients writes the data into a database, for example, and then visualizes it with a tool such as Grafana.

How to send messages

For the code examples I used Python with the package paho-mqtt. First, an MQTT client must be created and connected.

import paho.mqtt.client as mqtt

self.client = mqtt.Client()
self.client.connect("hostname-broker.de", 1883)
self.client.loop_start()  # handles network traffic in a background thread

Afterwards, the client can send messages to the MQTT broker at any time using the publish command. It is given a topic and the actual message as payload. The payload can have any structure, for example JSON or XML. In the code example, JSON is used (serialized with Python’s json module).

payload = {"temperature": 23.5}  # example payload
self.client.publish(topic="own/test/topic", payload=json.dumps(payload))

How to subscribe to topics

To subscribe, an MQTT client must also be created first and a connection established. In addition, the on_connect and on_message callback functions are used here. They are called whenever the client establishes a connection or a new message arrives. It makes sense to place the subscriptions in the on_connect method, since they are then re-created with every new connection and are not lost.

self.client = mqtt.Client()
self.client.on_connect = on_connect
self.client.on_message = on_message
self.client.connect("hostname-broker.de", 1883)
self.client.loop_start()

Here you can see an example on_connect method that prints the result code of the connection attempt and subscribes to a topic. For this, only the respective topic must be specified.

def on_connect(client, userdata, flags, rc):
    print("Connected with result code " + str(rc))
    client.subscribe("own/test/topic/#")

In the on_message method you can specify what should happen to an incoming message.
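A minimal sketch of such a method could look like this (assuming JSON payloads, as in the publish example above):

def on_message(client, userdata, message):
    # the payload arrives as raw bytes; decode and parse it
    data = json.loads(message.payload.decode("utf-8"))
    print("Received on " + message.topic + ": " + str(data))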

Conclusion

MQTT is a simple way to exchange data between a variety of devices. You can customize it very much and have a lot of freedom. Connections can be TLS-encrypted and client authentication can be set up in the broker, which is why it can also be operated securely. For asynchronous communication, this is definitely a technology to consider.

Arrow Anti-Pattern

When you write code, it can happen that you nest some ifs or loops inside each other. Here is an example:
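A sketch of what this can look like, using the same Animal example as in the refactoring below (the message variable is assumed for illustration):

public void PrintElephantMessage(Animal animal)
{
    string message = "";
    if (animal.IsMammal())
    {
        if (animal.IsGrey())
        {
            if (animal.IsBig())
            {
                if (animal.LivesOnLand())
                {
                    message = "It is an elephant";
                }
                else
                {
                    message = "It is not an elephant. Elephants live on land";
                }
            }
            else
            {
                message = "It is not an elephant. Elephants are big";
            }
        }
        else
        {
            message = "It is not an elephant. Elephants are grey";
        }
    }
    else
    {
        message = "It is not an elephant. Elephants are mammals";
    }
    Console.WriteLine(message);
}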

Because of the shape of the indentation, this code smell is called the arrow anti-pattern. The deepest indentation depth is the tip of the arrow. In my opinion, such a style is detrimental to readability and comprehension.

In the following, I would like to present a simple way of resolving such arrow anti-patterns.

Extract Method

First we extract the arrow pattern as a new method. This allows us to use return values instead of variable assignments and makes the code clearer.

public void PrintElephantMessage(Animal animal)
{
    Console.WriteLine(IsAElephant(animal));
}
public string IsAElephant(Animal animal)
{
    if (animal.IsMammal())
    {
        if (animal.IsGrey())
        {
            if (animal.IsBig())
            {
                if (animal.LivesOnLand())
                {
                    return "It is an elephant";
                }
                else
                {
                    return "It is not an elephant. Elephants live on land";
                }
            }
            else
            {
                return "It is not an elephant. Elephants are big";
            }
        }
        else
        {
            return "It is not an elephant. Elephants are grey";
        }
    }
    else
    {
        return "It is not an elephant. Elephants are mammals";
    }
}

Invert the ifs

A quick way to eliminate the arrow anti-pattern is to invert the if conditions. This will make the code look like this:

public string IsAElephant(Animal animal)
{
    if (!animal.IsMammal())
    {
        return "It is not an elephant. Elephants are mammals";
    }
    if (!animal.IsGrey())
    {
        return "It is not an elephant. Elephants are grey";
    }
    if (!animal.IsBig())
    {
        return "It is not an elephant. Elephants are big";
    }
    if (!animal.LivesOnLand())
    {
        return "It is not an elephant. Elephants live on land";
    }
    return "It is an elephant";
}

Some IDEs, such as Visual Studio, can help you flip the ifs.

Conclusion

Arrow anti-patterns are a code smell and make your code less legible. Fortunately, you can refactor them away with a few simple steps.

Cancellation Token in C#

Perhaps you know this problem from your first steps with asynchronous functions: you want to call one, but a CancellationToken is requested. But where does it come from if you do not have one at hand? At first glance, a quick way out is to pass an empty CancellationToken. But is that really a solution?

In this blog article, I explain what CancellationTokens are for and what role the CancellationTokenSource plays.

Let us start with a thought experiment: you have an application with several worker threads, and the application is closed. How can you make the whole application, including all worker threads, shut down?
One solution would be a global bool variable “closed” that is set in this case. All workers have a loop built in that regularly checks this bool and terminates if it is set. Basically, the application and all workers must cooperate and react to the signal.

class Application
{
    public static bool closed = false;
    Application()
    {
        var worker = new Worker();
        worker.Run();

        Thread.Sleep(2500);
        closed = true;

        // Wait for termination of Tasks
        Thread.Sleep(2500);
    }
}
class Worker
{
    public async void Run()
    {
        bool moreToDo = true;
        while (moreToDo)
        {
            await DoSomething();
            if (Application.closed)
            {
                return;
            }
        }
    }
}

This is exactly what a CancellationToken can be used for. The token is passed to the involved workers and functions. Through it, the status of whether or not cancellation was requested can be queried. The token is provided via a CancellationTokenSource, which owns and manages the token.

class Application
{
    Application()
    {
        var source = new CancellationTokenSource();
        var token = source.Token;
        var worker = new Worker();
        worker.Run(token);

        Thread.Sleep(2500);
        source.Cancel();

        // Wait for cancellation of Tasks and dispose source
        Thread.Sleep(2500);
        source.Dispose();
    }
}
class Worker
{
    public async void Run(CancellationToken token)
    {
        bool moreToDo = true;
        while (moreToDo)
        {
            await DoSomething(token);
            if (token.IsCancellationRequested)
            {
                return;
            }
        }
    }   
}

The call to source.Cancel() sets the token to canceled. CancellationTokens and sources offer significantly more built-in possibilities, such as callbacks, WaitHandle, CancelAfter and many more.
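For example, a timeout and a callback could be combined like this (a minimal sketch; the message text is made up):

var source = new CancellationTokenSource();
// request cancellation automatically after five seconds
source.CancelAfter(TimeSpan.FromSeconds(5));
// run a callback as soon as cancellation is requested
source.Token.Register(() => Console.WriteLine("Cancellation was requested."));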

So using an empty CancellationToken is not a good solution. With it, cancellation cannot be passed on, and a task could continue to run in the background even though the application should have been terminated a long time ago.

I hope I have been able to bring the topic a little closer to you. If you have already encountered such problems and found solutions, please leave a comment.

How comments get you through a code review

Code comments are a big point of discussion in software development: how and where should you use comments, or should you comment at all? Is well-written code not documentation enough? Here I would like to share my own experience with comments.

In the last months I had some code reviews where colleagues looked over my merge requests and gave me feedback. And it happened again and again that they asked why I did something or why I decided to go a certain way.
Often the decisions had a specific reason, for example a customer requirement, a special case that had to be covered, or the need to keep the technology stack small.

That is all metadata that would be tedious and time-consuming for reviewers to gather. And at some point it is no longer a reviewer, but a software developer 20 years from now, who has to maintain the code and can no longer ask you questions. The same applies if you yourself adjust the code again some time later and cannot remember your thoughts from months ago. This often happens faster than you think. To highlight how fast details disappear, here is a current example: this week I set up a new laptop because the old one had a hardware failure. I had done all the steps only half a year ago. But without documentation, I would not have been able to reconstruct everything, and where the documentation was missing or incomplete, I had to invest effort to rediscover the required steps.

Example

Here is an example of such a comment. In the code I want to compare if the mixer volume has changed after the user has made changes in the setup dialog.

var setup = await repository.LoadSetup(token);

var volumeOld = setup.Mixers.Contents.Select(mixer => mixer.Volume).ToList();

setup = Setup.App.RunAsDialog(setup, configuration);

var volumeNew = setup.Mixers.Contents.Select(mixer => mixer.Volume).ToList();
if (volumeNew.SequenceEqual(volumeOld))
{
    break; // nothing changed, leave the surrounding update loop
}

ResizeToMixerVolume(setup, volumeOld);

Why do I save the volume in an additional variable instead of just writing the setup into a new variable in the third line? That would be much easier and more elegant. I change this quickly – and the program is broken.

This little comment would have prevented that and everyone would have understood why this way was chosen at the moment.

// We need to copy the volumes, because the original setup is partially mutated by the Setup App.
var volumeOld = setup.Mixers.Contents.Select(mixer => mixer.Volume).ToList();

If you annotate such prominent places, into which a lot of brain work has gone, you make the code more comprehensible to everyone, including yourself. This way, a reviewer can understand the code without questions and the code becomes more maintainable in the long run.



Create a custom JRE in Docker

I recently wrote a Java application and wanted to run it in a Docker container. Unfortunately, my program needs a module from the Java JDK, which is why I can’t run it on the standard JRE. When I run the program with the JDK, everything works, but I carry along much more than I actually need and end up with a 390 MB Docker container.

That’s why I set out to build my own JRE, cut down exactly to the needs of my application.

For this I found two JDK tools that help me. jdeps checks the dependencies of an application and can summarise them in a file. This file can be used as input for jlink, which builds a custom JRE from it.

In the following I will present a way to build a custom JRE with Gradle and these two tools in a multi-stage Dockerfile, and to run an application on it.

Configuration in Gradle

First we need some configuration in the build.gradle file. To use jdeps, the application must be packaged as an executable jar file and all dependencies must be copied into a separate folder. First we create a task that copies all runtime dependencies to a folder named lib. For the jar, we need to define the main class and set the class path so that all dependencies are searched for in the lib folder.

// copies all the jar dependencies of your app to the lib folder
task copyDependencies(type: Copy) {
    from configurations.runtimeClasspath
    into "$buildDir/lib"
}

jar {
    manifest {
        attributes["Main-Class"] = "Main"
        attributes["Class-Path"] = configurations.runtimeClasspath
            .collect{'lib/'+it.getName()}.join(' ')
    }
}

Docker Image

Now we can get to work on the Docker image.

As a first step, we build our JRE inside a JDK image. To do this, we run the copyDependencies task we just created and build the Java application with Gradle. Then we let jdeps collect all dependencies from the lib folder.
The output is written to the file jre-deps.info. It is important that no errors or warnings are output, which is ensured with the -q parameter. The --print-module-deps parameter is crucial so that the module dependencies are output and saved in the file.

The file is now passed to jlink, and a custom-fit JRE for the application is built from it. The other parameters set in the call further reduce the size. The settings can be read about in detail in the jlink documentation.

FROM eclipse-temurin:17-jdk-alpine AS jre-build

COPY . /app
WORKDIR /app

RUN chmod u+x gradlew; ./gradlew copyDependencies; ./gradlew build

# find JDK dependencies dynamically from jar
RUN jdeps \
--ignore-missing-deps \
-q \
--multi-release 17 \
--print-module-deps \
--class-path build/lib/* \
build/libs/app-*.jar > jre-deps.info

RUN jlink \
--compress 2 \
--strip-java-debug-attributes \
--no-header-files \
--no-man-pages \
--output jre \
--add-modules $(cat jre-deps.info)

FROM alpine:3.17.0
WORKDIR /deployment

# copy the custom JRE produced from jlink
COPY --from=jre-build /app/jre jre
# copy the app dependencies
COPY --from=jre-build /app/build/lib/* lib/
# copy the app
COPY --from=jre-build /app/build/libs/app-*.jar app.jar

# run in user context
# Alpine provides adduser instead of useradd
RUN adduser -D schneide
RUN chown schneide:schneide .
USER schneide

# run the app on startup
CMD jre/bin/java -jar app.jar

In the second Docker stage, a smaller image is used and only the created JRE, the dependencies and the app.jar are copied into it. The app is then executed on this JRE.

In my case, I was able to reduce the Docker container size to 110 MB with the Alpine image, which is less than a third of the original size. With an ubuntu:jammy image, I got to 182 MB, which is still less than half.

Line endings in the repository

Git normally leaves files and their line endings untouched. However, it is often desired to have uniform line endings in a project. Git provides support for this.

Config Variable

What some may already know is the configuration variable core.autocrlf. With it, a developer can specify locally that newly created files are checked in to Git with LF. By setting the variable to “true”, the files will be converted to CRLF locally by Git on Windows and converted back when written to the repository. If the variable is set to “input”, the files are used locally with the same line ending as in Git, without conversion.
The problem is that this normalization only affects new files and that each developer must set it locally. If core.autocrlf is set to false, files can still be checked in with non-normalized line endings.
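For example, the variable can be set locally like this:

git config --global core.autocrlf true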

Gitattributes File

Another possibility is the .gitattributes file. The big advantage is that the file is checked in similarly to the .gitignore file and the settings therefore apply to all developers. To do this, the .gitattributes file is created in the repository and a path pattern and the text attribute are defined in it. The setting affects how the files are stored locally for the git switch, git checkout and git merge commands and how the files are stored in the repository for git add and git commit.

*.jpg          -text

The text attribute can be unset, as in the example above; then neither check-in nor check-out will do any conversions.

*              text=auto

The attribute can also be set to auto. In this case, the line endings will be converted to LF at check-in if Git recognizes the file contents as text. However, if a file is already stored with CRLF in the repository, no conversion takes place and it remains CRLF. In the example above, the setting applies to all file types.

*.txt         text
*.vcproj      text eol=crlf
*.sh          text eol=lf

If the text attribute is set, the line endings are stored in the repository as LF by default. But eol can also be used to force fixed line endings for specific file types.

*.ps1	      text working-tree-encoding=UTF-16

Furthermore, settings such as the encoding can be configured via the .gitattributes file by using the working-tree-encoding attribute. Everything else can be read in the documentation of the gitattributes file.

We use this possibility more and more often in our projects, sometimes only to set single file types such as .sh files to LF, sometimes to normalize the whole project.

Docker Interpreter with Environment Variables in RubyMine

As you know from previous blog entries, we now rely on Docker dev containers as interpreters for our IDEs. This has the advantage that we don’t need local installations of, for example, Ruby or individual packages; all requirements live in a Docker container and our machine stays clean.
RubyMine has some pitfalls on this path, so in this blog post I’ll present some hard-won insights and show you what solution we came to.

Those were our first problems:

  1. When using a single Docker image as an interpreter, it does not clean up everything when exiting the application and container. For example, the Server.pids file remains on the local machine, resulting in the following error: “A server is already running. Check path/Pids/Server.pids.” This behavior can be worked around with a Before Launch script that deletes the file when the application starts, but it’s not very nice. Unlike other JetBrains IDEs, RubyMine unfortunately does not have Docker container settings, so a simple --rm does not work.
  2. Furthermore, we need to talk to our local machine from our Docker container, for example to access a DB or to use a VPN tunnel originating from the local machine. For this, the Docker container needs to know my IP address, or the respective IP address of each developer. In our version control system, however, we don’t want to constantly overwrite the IP addresses or check before each push that we haven’t accidentally committed our own IP address. Our wish was a local environment variable MY_MACHINE_IP in which each developer stores their IP address and which the Docker container reads when the program starts. Simply checking the option that integrates the system environment variables unfortunately does not work here when we start the program in Docker, because the IDE then integrates the environment variables of the Docker container and not those of my local machine. Using a local environment variable to pass the value to Docker works neither in the run configuration nor in the Docker image creation, as the images below show. The same applies if you want to use PATH variables of the IDE instead of environment variables.
Environment Settings in Run Configuration
Environments in Dev Container

Our solution – Docker Compose:

Our first problem is solved by using Docker Compose directly. The problem with the Server.pid does not occur, because RubyMine manages the Docker Compose setup better and removes the Server.pid automatically at startup.

Below, I explain our setup of a Docker Compose as an interpreter and how it solved all the problems.

We created a simple Docker Compose YAML file with the image we want to use as the interpreter. At this point, you can also define environment variables that draw on the local environment variables of your machine. This solves our second problem.

Docker Compose yaml
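A minimal sketch of what such a file could look like (image name, service name and paths are assumptions):

services:
  app:
    image: our-ruby-dev-image:latest   # hypothetical dev container image
    environment:
      MY_MACHINE_IP: ${MY_MACHINE_IP}  # read from the developer's local environment
    volumes:
      - .:/app                         # mount the project into the container
    working_dir: /app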

The definition of the volume is important at this point. Docker tries to store things there during the installation and throws an internal server error if the volume is missing:
Error response from daemon: the working directory 'C:\repositories\my-project' is invalid, it needs to be an absolute path.

Now an interpreter can be set up in RubyMine. To do this, a new remote interpreter must be created in the settings under Languages & Frameworks: Ruby SDK and Gems. An example is shown in the figure below:

Ruby Docker Compose Interpreter

It is important to actually select the new interpreter afterwards, otherwise you will not be able to save and will get an error saying that the project has no interpreter.

Now the interpreter can be selected in the run configuration.

Recap:

We are now able to run our Ruby environment in a Docker container. Thus, the environment is independent of local circumstances such as the installed Ruby version or packages, as these can all be found in the container. Also, each programmer can run the project locally without further adjustments via a defined environment variable, and the Docker container can still talk to other local Docker containers on the machine. Thus, the state in GitLab is generally valid and not tied to the respective programmer by a customized IP address or the like.