Schemas, naming and search path in PostgreSQL

Modern (SQL) databases provide a multi-level hierarchy of separated contexts.

A database management system (DBMS) can manage multiple databases.

A database can contain multiple schemas.

A schema usually contains multiple tables, views, sequences and other objects.

Most of the time we developers only care about our database and the objects it contains – ignoring schemas and the DBMS.

In PostgreSQL instances this means we are using the default schema – usually called public – without mentioning it anywhere, neither in the connection string nor in any queries.

Ok, and why does all of the above even matter?

Special situations

Sometimes customers or operators require us to use special schemas in certain databases. When we do not have full control over our database deployment and usage we have to adapt to the situation we encounter.

Camel-casing

One thing I really advise against is using upper case names for tables, columns and so on. SQL itself is not case sensitive, but many DBMSes differentiate case when it comes to naming: PostgreSQL, for example, folds unquoted identifiers to lower case and requires double quotes to reference mixed-case objects like schemas and tables, e.g. "MyImportantTable". Imho it is far better to use only lower case letters and snake_case for all names.

Multiple schemas in one database

I do not endorse using multiple named schemas in one database. Most of the time you do not need this level of separation and can simply use multiple databases in the same DBMS with their default schemas. That way you do not have to specify the schema in your queries.

What if you are required to use a non-default schema? At least in PostgreSQL you can modify your login role to change the search path, so you can leave all your queries and maybe other deployments untouched, without schema names scattered all around your project:

-- instead of specifying schema name like
select * from "MyTargetSchema".my_table;

-- you can alter the search path of your role
alter role my_login set search_path = public,"MyTargetSchema";

-- to only write
select * from my_table;
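
To check that the new search path is effective you can inspect it from a fresh session, e.g. using psql (the role name follows the example above, the database name is made up; note that alter role … set only affects sessions opened afterwards):

# verify from a new session of the altered role (database name assumed)
psql -U my_login -d my_database -c 'show search_path;'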

Wrapping it up

Today’s software systems are quite complex and the problems we are trying to solve with them have their own, ever growing complexity. So when using DBMSes, follow common advice like the above to reduce complexity in your solution, even when special requirements seem to mandate otherwise.

Sometimes there are easy-to-follow conventions or configuration options that can reduce your burden as a developer, even if you do not have complete control over the relevant parts of the deployment environment.

Save yourself from releasing garbage with Git Hooks

There are times when one writes code that should please, please not land in the production release, like for example:

  • Overwriting a certain URL with a local one
  • Having a dialog always-open in order to efficiently style it
  • or generally, mocking code of something that is not-important-right-now™

And we all know that such shortcuts tend to stay in the code longer than intended. One might tell oneself:

Oh, I will mark this as // TODO: Remove ASAP

This is the feature branch, so either me or the Code Review will catch it when merging on main.

And this is better than nothing – some Git clients might disallow you from even committing code with any //TODO, but I feel that this is a direct violation of the idea of “Commit Early, Commit Often”. As in, now you’re bound to finish your feature before you commit. This is the opposite of helping. By now you figure:

Thanks for nothing, I will just rename this as // REMOVE_ME

The rest of the above logic applies untouched.

These keywords-in-comments are sometimes called code tags. Here we just worked around the fact that //TODO is more commonly used than other tags, but of course, these still are comments with no specific meaning.

You might now be able to push again, but still – say the code reviewer makes the heinous mistake of trusting you too much – this might lead to code in the official release that behaves so silly that it makes every customer question your mental sanity, or worse.

Git Hooks can help you with appearing more sane than you are.

These are shell scripts in your specific repository instance (i.e. your local clone, or the server copy, individually) that run at specific points in the Git workflow.

For our particular use case, two hooks are of interest:

  • A pre-receive hook, run on the server-side repository instance, can prevent the main branch from receiving your dumb development code. This sounds more rigid, but you need access to the server hosting the (bare) repository.
  • A pre-push hook, run on your local repository instance, can prevent you from pushing your toxic waste to the branch in question. Keep in mind that if you do your merging onto main via GitLab Merge Requests etc., this hook will not run then – but implementing it is easier because you already have all the access.

The local pre-push hook is as simple as adding a file named pre-push in the .git/hooks subfolder. It needs to be executable (under Windows you can use e.g. Git Bash for that):

cd $repositoryPath/.git/hooks
touch pre-push
chmod +x pre-push

And it contains:

#!/bin/bash

# git feeds one line per ref being pushed on stdin:
# <local ref> <local hash> <remote ref> <remote hash>
while read localRef localHash remoteRef remoteHash; do
    if [[ "$remoteRef" == "refs/heads/main" ]]; then
        # inspect every commit that would be new on the remote
        for commit in $(git rev-list "$remoteHash..$localHash"); do
            if git grep -n "// REMOVE_ME" "$commit"; then
                echo "REJECTED: Commit contains REMOVE_ME tag!"
                exit 1
            fi
        done
    fi
done

This will then cause git push to fail with output like

2f5da72ae9fd85bb5d64c03171c9a8f248b4865f:src/DevelopmentStuff.js:65:        // REMOVE_ME: temporary override for database URL
REJECTED: Commit contains REMOVE_ME tag!
failed to push some refs to '...'

and if that REMOVE_ME is removed, the push goes through.
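
By the way: if you ever need to bypass the check deliberately, git push offers a switch to skip the pre-push hook:

# deliberately skip the pre-push hook for this one push
git push --no-verify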

Some comments:

  • You can easily extend this to multiple branches with the regex condition:
    if [[ "$remoteRef" =~ /(main|release|whatever)$ ]];
  • The git grep -n flag is there for printing out the offending line number.
  • You can make this more convenient with git grep -En "//[[:space:]]*REMOVE_ME", i.e. allowing an arbitrary amount of whitespace between the // and the tag.
  • The surrounding loop structure:
    while read localRef localHash remoteRef remoteHash;
    matches exactly how git feeds the pre-push hook: one line per ref being pushed, read from stdin. Other hook types receive different inputs, so the fields to read depend on the actual hook.

Hope this helps you take some extra care!

However, remember that every developer has to set up these local hooks themselves; they are not pushed to the server instance – but implementing a pre-receive hook there is the topic for a future blog post.

A Tale of Hidden Variables

Today was one of those frustrating moments that every developer encounters at some point. We were working on a Docker Compose setup and observed behavior that could only happen if a specific environment variable had been set. To ensure that this environment variable wasn’t being set I scoured through the Docker Compose file, checked the local environment variables using the export command, and grepped all the relevant files in the project directory. But no matter what I did, this environment variable was still haunting us, wreaking havoc on the setup.

After what felt like an eternity of troubleshooting, we finally uncovered the culprit: an old, hidden .env file left over from a long-forgotten configuration. This file had been silently setting the environment variable I was desperately trying to eliminate.

Here’s how it all unfolded and what I learned from the experience:

When I first suspected that the environment variable might be lurking somewhere in the project, my instinct was to use grep to search for it in all the files within my local directory. I ran something along the lines of:

grep -r 'MY_ENV_VAR' *

To my surprise, nothing relevant showed up. I had expected this command to search through everything in my local directory. However, I had forgotten one important detail: the shell glob * does not match hidden files, so grep never even sees them.

Since .env files are typically hidden (starting with a dot), the glob skipped right over them. Little did I know, that old .env file was sitting quietly in the background, setting the environment variable that was causing all my issues.

After some frustration, my colleague finally had the realization that there might be hidden files at play. In Unix-like operating systems, files that start with a dot (.), like .env, are treated as hidden and are not listed or searched by default with common commands. Just as hidden variables in physics could influence particles without being directly observable, the hidden .env file was affecting my environment variables without being immediately visible.

To actually include hidden files in your search, recurse from the current directory instead of relying on the shell glob:

grep -r 'MY_ENV_VAR' .

Unlike *, the . starting point makes grep traverse the directory tree itself, so dotfiles like .env are searched as well. (If you only want to search hidden files, you can restrict the search with --include=".*".)
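
A minimal before/after demonstration (file names made up for illustration):

$ ls -a
.  ..  .env  docker-compose.yml
$ grep -r 'MY_ENV_VAR' *     # the glob never passes .env to grep
$ grep -r 'MY_ENV_VAR' .     # recursing from . finds it
./.env:MY_ENV_VAR=true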

This experience led me to reflect on whether deployment-relevant files like .env should be hidden in the first place. Hidden files are easily overlooked during debugging and more prone to being forgotten, especially when you’re under pressure.

Given that .env files can have a significant impact on the behavior of applications, containerized setups, and CI/CD pipelines, making them hidden by default might not always be the best approach. After all, if an environment variable has the power to alter how an entire application runs, it’s something we want to be highly visible and readily accessible.

In the end, this experience taught me two important lessons:

  • Always search for hidden files when troubleshooting issues related to environment variables. If your Docker Compose or other environment-dependent setups aren’t behaving as expected, don’t forget to check for hidden .env files.
  • Consider the visibility of critical configuration files. Should .env files be hidden by default, or should they be treated as first-class citizens in our directory structures? In many cases, keeping them visible might help avoid unexpected behavior and wasted hours of debugging.

Converting a Grails app from war deployment in Tomcat to docker

A few years ago we had real servers with servlet containers installed and managed by administrators. These machines were bare-metal unicorns and had to be kept alive as long as possible (for many years). With the advent of first virtual machines and then container runtimes like Docker the approach and responsibilities for hosting web applications changed.

Our application not only went from Grails framework version 1 to 5 (at the moment, soon to be version 6) but also made the above voyage regarding the hosting environment.

While the step from bare metal to virtual machine was negligible from the developer’s and application’s point of view, the migration to docker-based hosting is quite a step. Therefore I would like to depict the biggest changes in moving a Grails application from a WAR deployment in Tomcat to a standalone application in Docker.

The original setting

We had our Grails application running on a server with a Tomcat servlet container for years. On the same server an Elasticsearch instance was running, and we saved documents in a few well-defined directories on the local file system. Our database (Oracle in the past, PostgreSQL for over a year now) was running managed in a data center. We used servlet-container features like JNDI for parts of the configuration and the datasources.

Deployment was essentially uploading a new WAR and context.xml file to the Tomcat servlet container using an Ansible playbook (where we restarted the servlet container to ensure not leaving any resource leaks…).

This setup worked quite well for years; only upgrades of the host operating system along with Tomcat and Java Virtual Machine (JVM) updates meant some work.

The target setup

Our customer has been in the process of converting most of the services running on virtual machines to dockerized deployments managed using Portainer. Most of the provisioning is automated and there is almost no snowflaking of the virtual machines hosting the docker runtimes. This eases the operation and administration of the services a lot and makes it much easier to set up new instances of a service and to monitor them.

So it was our task to migrate our setup on the host VM to a docker-based deployment. In the future we basically push a docker image of our application to a docker registry of the customer and update the stack using Portainer.

The journey to a dockerized Grails app

The first thing to do was to make some changes to build.gradle to build a JAR application instead of a WAR archive. We decided it was easier and more lightweight to run the Grails app standalone with the embedded Tomcat than to use a Tomcat in a docker container:

  • Remove the gradle war plugin
  • Change providedCompile dependencies to implementation
  • Add a task to prepare the Dockerfile and context to build our bootable application image (see the usage sketch after the stack definition below):
task prepareDocker(type: Copy, dependsOn: assemble) {
    description = 'Copy files from src/main/docker and application jar to Docker temporal build directory'
    group = 'Docker'

    from 'src/main/docker'
    from project.bootJar

    into dockerBuildDir
}
  • Add an appropriate Dockerfile to our docker context in src/main/docker:
FROM eclipse-temurin:11-jdk-jammy
LABEL maintainer="Mihael Koep <mihael.koep@softwareschneiderei.de>"

EXPOSE 8080

WORKDIR /app
# the jar name must match the artifact copied by the prepareDocker task
COPY application-boot.jar .

ENV GRAILS_ENV development
# Additional env variables for configuration

CMD java -Dgrails.env=$GRAILS_ENV -jar /app/application-boot.jar
  • Remove JNDI usages and convert configuration from context.xml and JNDI to environment variables
  • Add a docker-compose.yml for the stack definition (here outlined for development with a database as part of the stack)
version: '3.7'
services:
  my-app:
    container_name: my-app-application
    image: ${IMAGE_TAG}
    environment:
      - GRAILS_ENV=development
      - DB_URL=jdbc:postgresql://my-app-db:5432/my-app
    volumes:
      - my-appdata:/app/data_home
    ports:
      - 8080:8080
    depends_on:
      - my-app-db
      - my-app-elastic
  my-app-db:
    container_name: my-app-database-stack
    image: postgres:13
    environment:
      - POSTGRES_USER=my-app
      - POSTGRES_PASSWORD=my-db-password
      - POSTGRES_DB=my-app
    ports:
      - 5432:5432
  my-app-elastic:
    container_name: my-app-elastic-stack
    image: docker.elastic.co/elasticsearch/elasticsearch:7.8.0
    environment:
      - node.name=es01
      - cluster.name=my-app-es
      - discovery.type=single-node
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - searchdata:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
      - 9300:9300
volumes:
  searchdata:
  my-appdata:
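
To tie the above together: building and starting the stack locally could look like this (a sketch – the image tag and the value of dockerBuildDir are assumptions, the latter being a variable defined elsewhere in build.gradle):

# build the application jar and assemble the docker build context
./gradlew prepareDocker
# build and tag the image from the prepared context (path assumed)
docker build -t my-app:latest build/docker
# start the stack, providing the image tag referenced in docker-compose.yml
IMAGE_TAG=my-app:latest docker-compose up -d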

Conclusion

The journey from classical deployments to a dockerized environment using a docker stack is not that long for a Grails application (or most other Java web applications). It makes setting up a new instance or migrating to another host trivial, as most configuration is in version-controlled files and there is next to no snowflaking. The customer has an easier time running the web application because the only requirement is a container runtime and everything else is explicitly formulated in the stack definition.

In addition, the developers are free to choose the JVM and other details within the containers without having to cross-check with the customer’s administrators whether they are supported by their organization.

Avoid fragmenting your configuration

Nowadays configuration is often done using environment (aka ENV) variables. They work great with docker/containers, in development and production, on all platforms and with all languages. In short, I think environment variables are great for configuring many aspects of an application.

However, I have encountered a pattern in several different applications that I really dislike: several fragmented ENV variables for one configurable aspect of the application.

Let us have a look at two examples to see what I mean, then I will try to explain where it could come from and why I think it is bad practice. Finally I will show a better alternative – at least in my opinion.

First real world example

In one JavaScript app a websocket URL was made configurable using 4 (!) ENV variables like this:

const prefix = process.env.WS_PREFIX || "wss://";
const host = process.env.WS_HOST || "hostname";
const port = process.env.WS_PORT || "";
const path = process.env.WS_PATH || "/ws";

function ConnectionString(prefix, host, port, path) {
  return {
    attrib: {
      prefix,
      host,
      port,
      path,
    },
    string: prefix + host + port + path,
  };
}

We immediately see that the author wrote a function to deal with the complex configuration in the rest of the application. Not only do the devops team or administrators need to supply many ENV variables, they have to supply them in a peculiar way:

The port needs to be specified as :8888, with a leading colon (or the host needs a trailing colon…), which is more than unexpected. The alternative would be a better and more sophisticated implementation of ConnectionString…

Another real example

In the following example there are again three ENV variables dealing with hosts, URLs and websockets. This example feels quite convoluted, is hard to understand and definitely needs a refactoring:

TANGOGQL_SOCKET=ws://${TANGO_HOST}:5004/socket

const defaultHost = window.TANGOGQL_HOST ?? "localhost:5004";
const defaultSocketUrl = window.TANGOGQL_SOCKET ?? `ws://${defaultHost}/socket`;

// dealing with config peculiarities somewhere else
const socketUrl = React.useMemo(() =>
        config.host.replace(/.*:\/\//, "ws://") + "/socket"
    , [config.host]);

Discussion

The examples show clearly that something as simple as configuring a URL can lead to complicated and hard-to-use solutions. Most likely the authors tried not to repeat themselves and factored the URLs into the smallest sensible components. While this may sound like a good idea, it puts a burden on both the developers and the devops team configuring the application.

In my opinion it would be much simpler and more usable for both parties to have complete URLs for the different use cases. Of course this could mean repeating protocols, hostnames and ports if they are the same in the different situations. But just having one or two ENV variables like

WS_URL=wss://myhost:8080/ws
HOST_URL=https://myhost:8080

would be straightforward to use in code and to configure in the runtime environment. At the same time the chance for errors and the complexity of the configuration are reduced.
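
Consuming such a variable then needs no helper function at all – a minimal sketch (variable names mirror the example above):

# set wherever the app is configured, e.g. a compose file or the shell
export WS_URL="wss://myhost:8080/ws"
# the application reads the single variable verbatim, e.g. in JavaScript:
#   const socket = new WebSocket(process.env.WS_URL);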

Even though certain parts of the URLs are duplicated in the configuration, I highly prefer this approach over the presented real-world solutions.

Use real(istic) data from early on

When developing software in general, and user interfaces (UIs) in particular, one important aspect is often neglected: the form, shape and especially the amount of data.

One very common practice is to fill unknown texts with fragments of the famous Lorem ipsum placeholder text. This may be a good idea if you are designing software for displaying a certain kind of articles similar in size and structure to your placeholder text. In all other cases I would regard using lorem ipsum as a smell.

My recommendation is to collect as many samples of real or at least realistic data as feasible. Use them to build and test your application. Why do I think it matters? Let me elaborate a bit in the following sections.

Data affects the layout

You can only choose a fitting layout if you have knowledge about the length of certain texts, the size of images etc. The width of columns can be chosen more appropriately, you can decide if you need scrollbars, if you want them permanently visible for a more stable and calm layout, how large panels or text areas have to be for optimum readability and so on.

Data affects the choice of UI controls

The data your application has to handle should reflect not only in the layout but also in the type of controls to be used.

For example, the number of options the user can choose from drastically affects the selection of an adequate UI control. If you have only 2 or 3 options, toggle buttons, checkboxes or radio buttons next to each other or laid out in one column may be a good fit. If the count of options is greater, dropdowns may be better. At some point maybe a full-blown list with filters, sorting and search may be necessary.

To make a good decision, you have to know the expected amount and shape of your data.

Data affects algorithms and technical decisions regarding performance

The data your system has to work with and to present to the user also has technical impact. If the datasets are moderate in size, you may be able to transfer them all to the frontend and do presentation, filtering etc. there. That has the advantage of reducing backend stress and putting computational effort in the hands of the clients.

Often this becomes unfeasible when the system and its data pool grow. Then you have to think about backend search and filtering, data compression and the like.

Also, algorithms and data structures may change from simple lists and linear search to search trees, indexes and lookup tables.

The better you know the scope of your system and the data therein the better your technical decisions can be. You will also be able to judge if the YAGNI principle applies or not.

Conclusion

To quickly sum up the essence of the advice above: get to know the expected amount and shape of the data your application has to deal with, to be able to design your system and the UI/UX accordingly.

Packaging Java Projects as DEB Packages

Providing native installation mechanisms and media for your software may be a large benefit for your customers. One way to do so is packaging for the target Linux distributions your customers are running.

Packaging for Debian/Ubuntu is relatively hard because there are many ways and rules for how to do it. Some part of our software is written in Java and needs to be packaged as .deb packages for Ubuntu.

The official way

There is an official guide on how to package Java projects for Debian. While this may be suitable for libraries and programs that you want to publish to the official repositories, it is not a perfect fit for a custom project that you provide specifically to your customers, because it is a lot of work, does not integrate well with your delivery pipeline and requires you to provide packages for all of your dependencies as well.

The convenient way

Fortunately, there is a great plugin for Ant and Maven called jdeb. Essentially you include and configure the plugin in your pom.xml as with all the other build-related stuff, execute the jdeb goal in your build pipeline and you are done. This results in a nice .deb package that you can push to your customers’ repositories for their convenience.

A working configuration for Maven may look like this:

<build>
    <plugins>
        <plugin>
            <artifactId>jdeb</artifactId>
            <groupId>org.vafer</groupId>
            <version>1.8</version>
            <executions>
                <execution>
                    <phase>package</phase>
                    <goals>
                        <goal>jdeb</goal>
                    </goals>
                    <configuration>
                        <dataSet>
                            <data>
                                <src>${project.build.directory}/${project.build.finalName}-jar-with-dependencies.jar</src>
                                <type>file</type>
                                <mapper>
                                    <type>perm</type>
                                    <prefix>/usr/share/java</prefix>
                                </mapper>
                            </data>
                            <data>
                                <type>link</type>
                                <linkName>/usr/share/java/MyProjectExecutable</linkName>
                                <linkTarget>/usr/share/java/${project.build.finalName}-jar-with-dependencies.jar</linkTarget>
                                <symlink>true</symlink>
                            </data>
                            <data>
                                <src>${project.basedir}/src/deb/MyProjectStartScript</src>
                                <type>file</type>
                                <mapper>
                                    <type>perm</type>
                                    <prefix>/usr/bin</prefix>
                                    <filemode>755</filemode>
                                </mapper>
                            </data>
                        </dataSet>
                    </configuration>
                </execution>
            </executions>
        </plugin>
    </plugins>
</build>
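
With this configuration the package is built as part of the normal Maven lifecycle; the output file name below follows jdeb’s defaults and is an assumption:

# run the build; the jdeb goal is bound to the package phase above
mvn clean package
# the package appears in target/, e.g. target/myproject_1.0.0_all.deb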

If you are using gradle as your build tool, the ospackage-plugin may be worth a look. I have not tried it personally, but it looks promising.

Wrapping it up

Packaging your software for your customers drastically improves the experience for users and administrators. Doing it the official Debian way is not always the best or most efficient option. There are many plugins and extensions for common build systems to conveniently build native packages that may be easier for many use cases.

Running a Micronaut Service in Docker

Micronaut is a state-of-the-art micro web framework targeting the Java Virtual Machine (JVM). I quite like that you can implement your web application or service using either Java, Groovy or Kotlin – all using Micronaut as their foundation.

You can easily tailor it specifically to your own needs and add features as need be. Micronaut Launch provides a great starting point and gets you up and running within minutes.

Prepared for Docker

Web services and containers are a perfect match. Containers and their management software ease running and supervising web services a lot. It is feasible to migrate or scale services and run them separately or in some kind of stack.

Micronaut comes prepared for Docker and allows you to build Dockerfiles and Docker images for your application out-of-the-box. The gradle build files of a micronaut application offer handy tasks like dockerBuild or dockerfile to aid you.
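
For example, generating the Dockerfile and building the image are single gradle invocations (assuming a project generated via Micronaut Launch with the gradle plugin applied):

# generate a Dockerfile for the application
./gradlew dockerfile
# build a docker image of the application
./gradlew dockerBuild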

A thing that simplifies deploying one service in multiple scenarios is configuration using environment variables (env vars, ENV).

I am a big fan of using environment variables because this basic mechanism is supported almost everywhere: Windows, Linux, docker, systemd, all (?) programming languages and many IDEs.

It is so simple everyone can deal with it and allows for customization by developers and operators alike.

Configuration using environment variables

Usually, micronaut configuration is stored in YAML format in a file called application.yml. This file is packaged in the application’s executable jar file, making it ready to run. Most of the configuration will be fixed for all your deployments, but things like database connection settings or URLs of other services may change with each deployment and are most likely different in development.

Gladly, micronaut provides a mechanism for externalizing configuration.

That way you can use environment variables in application.yml while providing defaults, for development for example. Note that you need to put values containing : in backticks. See an example of a database configuration below:

datasources:
  default:
    url: ${JDBC_URL:`jdbc:postgresql://localhost:5432/supermaster`}
    driverClassName: org.postgresql.Driver
    username: ${JDBC_USER:db_user}
    password: ${JDBC_PASSWORD:""}
    dialect: POSTGRES

Having prepared your application this way you can run it from the command line (CLI), via gradle run, or with docker run/docker-compose, providing environment variables as needed.

# Running the app "my-service" using docker run
docker run --name my-service -p 80:8080 -e JDBC_URL=jdbc:postgresql://prod.databasehost.local:5432/servicedb -e JDBC_USER=mndb -e JDBC_PASSWORD="mnrocks!" my-service

# Running the app using java and CLI
java -DJDBC_URL=jdbc:postgresql://prod.databasehost.local:5432/servicedb -DJDBC_USER=mndb -DJDBC_PASSWORD="mnrocks!" -jar application.jar

Running Tango-Servers in Docker

Containers are a great way of running your software in an environment that you defined and no one else needs to maintain or even know about. This is especially true if your software provides its service using a network port. All that operators have to provide is the execution platform for the container and the required resources.

Tango servers fit quite well into this type of software: essentially they provide a service over the network. Unfortunately, they need a Tango control system to be fully usable, so that means some additional services. Luckily, there are docker images for this, so building a stack of containers to run your Tango server in the scope of a control system is quite easy (sample docker-compose.yml):

version: '3.7'
services:
  my-tango-server:
    container_name: my-tango-server
    image: my-ts
    environment:
      - TANGO_HOST=tango-cs:10000
    depends_on:
      - tango-cs
  tango-cs:
    container_name: tango-database
    image: tangocs/tango-cs:9.3.2-alpha.1-no-tango-test
    ports:
      - "10000:10000"
    environment:
      - TANGO_HOST=localhost:10000
      - MYSQL_HOST=tango-db:3306
      - MYSQL_USER=tango
      - MYSQL_PASSWORD=tango
      - MYSQL_DATABASE=tango
    links:
      - "tango-db:localhost"
    depends_on:
      - tango-db
  tango-db:
    container_name: mysql-database
    image: tangocs/mysql:9.2.2
    environment:
      - MYSQL_ROOT_PASSWORD=root
    volumes:
      - tango_mysql_data:/var/lib/mysql
volumes:
  tango_mysql_data:

Example Dockerfile for the image my-ts used for the my-tango-server container:

FROM debian:buster

WORKDIR /tango-server
COPY ./my-tango-server .

RUN DEBIAN_FRONTEND=noninteractive apt-get install -y ./*.deb

CMD my-tango-server myts

Unfortunately this naive approach has the following issues:

  • The network port of our tango server changes with each restart and is not available from outside the docker stack
  • The tango server communicates the internal IP address of its container to the tango database, so connection attempts from the outside fail even if the port is correct
  • Even though the tango server container depends on the tango control system container, the tango server might start up before the tango control system is available.

Let us tackle the issues one by one.

Expose a fixed port for the Tango server

This is probably the easiest issue to fix because it is well documented for OmniORB and widely used. So let us change docker-compose.yml to expose a port and pass it into the container as an environment variable:

version: '3.7'
services:
  my-tango-server:
    container_name: my-tango-server
    image: my-ts
    ports:
      - "45450:45450"
    environment:
      - TANGO_HOST=tango-cs:10000
      - TANGO_SERVER_PORT=45450
    depends_on:
      - tango-cs
  tango-cs:
 ...

And use it in our Dockerfile for the my-ts image:

FROM debian:buster

WORKDIR /tango-server
COPY ./my-tango-server .

RUN DEBIAN_FRONTEND=noninteractive apt-get install -y ./*.deb

CMD my-tango-server myts -ORBendPoint giop:tcp::$TANGO_SERVER_PORT

Fine, now our Tango server always listens on a defined port and that same port is exposed to the outside of our docker stack.

Publish the correct IP/hostname of the Tango server

Documentation about this, and about how to put it on the command line correctly, was hard to find, but here is the solution to the problem. The crucial command line parameter is -ORBendPointPublish. We need to pass a hostname of the host machine into the container and let OmniORB publish that name (tango-server.local in our example):

version: '3.7'
services:
  my-tango-server:
    container_name: my-tango-server
    image: my-ts
    ports:
      - "45450:45450"
    environment:
      - TANGO_HOST=tango-cs:10000
      - TANGO_SERVER_PORT=45450
      - TANGO_SERVER_PUBLISH=tango-server.local:45450
    depends_on:
      - tango-cs
  tango-cs:
 ...

And the corresponding changes to our Dockerfile:

FROM debian:buster

WORKDIR /tango-server
COPY ./my-tango-server .

RUN DEBIAN_FRONTEND=noninteractive apt-get install -y ./*.deb

CMD my-tango-server myts -ORBendPoint giop:tcp::$TANGO_SERVER_PORT -ORBendPointPublish giop:tcp:$TANGO_SERVER_PUBLISH

Waiting for availability of a service

This one is not specific to our tango server and the whole stack, but a general problem when building stacks where one service depends on another one being up and listening on a network port. Docker compose takes care of the startup order of the containers but does not check for the readiness of the services running in them. In Linux containers this can easily be achieved using the small shell utility wait-for-it. Here we only need to make some changes to our Dockerfile:

FROM debian:buster
RUN DEBIAN_FRONTEND=noninteractive apt-get update && apt-get install -y wait-for-it

WORKDIR /tango-server
COPY ./my-tango-server .

RUN DEBIAN_FRONTEND=noninteractive apt-get install -y ./*.deb

CMD wait-for-it $TANGO_HOST -- my-tango-server myts -ORBendPoint giop:tcp::$TANGO_SERVER_PORT -ORBendPointPublish giop:tcp:$TANGO_SERVER_PUBLISH

Depending on your distribution/base image, installation of the tool may differ (and there are similar alternatives like dockerize or docker-compose-wait).
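
By default wait-for-it gives up after 15 seconds and starts the command anyway; if your control system needs longer to come up, the timeout can be raised, e.g.:

# wait up to 60 seconds for the control system before starting the server
wait-for-it $TANGO_HOST -t 60 -- my-tango-server myts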

Summing it up

After moving some rocks out of the way it is dead easy to deploy your tango servers together with a complete tango control system to any host platform providing a container runtime. You do not need to deliver everything in one single stack if you do not want to. Most of the stuff described above works the same when deploying the tango control system and tango servers in independent stacks or as separate containers, even on multiple hosts.

Docker runtime breaking your container

Docker (or container technology in general) is a great tool to clearly separate the concerns of developers and operations. We use it to simplify various tasks like building projects, packaging them for different platforms and deploying our software onto target machines like staging and production servers. All the specifics of the projects are contained and version controlled using the Dockerfiles and compose files.

Our operations team only needs to provide some infrastructure able to build container images and run them. This works great most of the time and removes a lot of the friction between developers and operations, where in the past snowflaky servers needed to be set up and maintained. Developers often had to ask for specific setups and environments because each project had its own needs. That is all gone with this great container technology. Brave new world. Except when it suddenly does not work anymore.

Help, my deployment container stopped working!

As mentioned above we use docker to deploy our software to the target machines. These machines are often part of a corporate network protected by firewalls and only accessible using VPN. I already talked about how to use openvpn in a docker container for deployment. So the other day I was making a release of one of my long-running projects and pressing the deploy button for that project on our jenkins continuous integration server.

But instead of just leaning back, relaxing and watching the magic work, the deployment failed and the red light lit up! A look into the job output showed that the connection to the target machine was refused. A quick check from the developer machine showed no problem on the receiving side. VPN, target machine and everything else was up and running as usual.

After a quick manual deployment, performed with care and administrator hat on, I went on an investigation journey…

What was going on?

The deployment job had not changed for several months, the container image had not changed and the rest of the infrastructure was working as expected. After more digging, debugging and narrowing down the problem, I found out that openvpn did not work in the container anymore because of a strange permission denied error:

Tue May 19 15:24:14 2020 /sbin/ip addr add dev tap0 1xx.xxx.xxx.xxx/22 broadcast 1xx.xxx.xxx.xxx
Tue May 19 15:24:14 2020 /sbin/ip -6 addr add 2axx:1xxx:4:5xxx:9xx:5xxx:5xxx:4xxx/64 dev tap0
RTNETLINK answers: Permission denied
Tue May 19 15:24:14 2020 Linux ip -6 addr add failed: external program exited with error status: 2
Tue May 19 15:24:14 2020 Exiting due to fatal error

This hot trace made it easy to google and revealed the following issue on github: https://github.com/dperson/openvpn-client/issues/75. The cause of all the trouble was the changed behaviour of the docker runtime. Our automatic updates had run over the weekend and actually installed a new package version of the docker runtime (see excerpt from the apt history log):

containerd.io:amd64 (1.2.13-1, 1.2.13-2)

This subtle change broke my container! After some sacrifices to the whale gods I went on to implement the fix. Fortunately there is an easy way to get it working like before: you just have to pass the following command line switch to docker run and everything works as expected:

--sysctl net.ipv6.conf.all.disable_ipv6=0
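
For reference, an invocation could look like this (image name and further options are of course specific to your setup):

# re-enable IPv6 handling inside the container as before the update
docker run --sysctl net.ipv6.conf.all.disable_ipv6=0 my-vpn-image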

As nice as containers are for abstracting away hardware, operating systems and other environment details, sometimes the container runtime shines through. It is just a shame that such things happen on minor releases or package upgrades…