Using Docker Containers in Development with WebStorm: Next Iteration

We are always in pursuit of improving our build and development infrastructure. Who isn’t?

At Softwareschneiderei, we have about five times as many projects as we have developers (without being overworked, by the way), and each of them comes with its own requirements. So it is important to be able to switch between projects as easily as cloning a git repository, avoiding meticulous configuration of our development machines that might break on any change.

This is the main advantage of the development container (DevContainer) approach (with Docker being the major contender at the moment), and last November, I tried to outline my then-current understanding of integrating such an approach with the JetBrains IDEs. For WebStorm, for example, there is some kind of support for dockerized run configurations, but it does some weird stuff (see below), and JetBrains has not yet cared enough to make that configurable, or at least to communicate the reasoning behind it.

Preparing our Dev Container

In our projects, we usually have at least two Docker build stages:

  • one to prepare the build platform (this will be used for the DevContainer)
  • one to execute the build itself (only this stage copies actual sources)

There might be more (e.g. for running the build in production, or for further dependencies), but the basic distinction above helps us to speed up the development process already. (Further reading: Docker cache management)

For one of our current React projects (in which I chose to try Vite instead of the outdated Create React App, see also here), the Dockerfile might look like this:

# --------------------------------------------
FROM node:18-bullseye AS build-platform

WORKDIR /opt
COPY package.json .
COPY package-lock.json .

# see comment below
RUN npm install -g vite

RUN npm ci --ignore-scripts
WORKDIR /opt/project

# --------------------------------------------
FROM build-platform AS build-stage

RUN mkdir -p /build/result
COPY . .
CMD npm run build && mv dist /build/result/app

The “build platform” stage can then be used as our Dev Container, from the command line as follows (assuming this Dockerfile resides inside your project directory where src/ etc. are chilling):

docker build -t build-platform-image --target build-platform .
docker run --rm -v ${PWD}:/opt/project build-platform-image <command_for_starting_dev_server>

Some comments:

  • The RUN step to npm install -g vite is required for a Vite project because our chosen base image node:18-bullseye does not know about the vite binaries. One could improve that by adding another step beforehand, only preparing a vite+node base image and taking advantage of Docker caching from then on.
  • We specifically have to use the WORKDIR /opt/project because our mission statement is to integrate the whole thing with WebStorm. If you are not interested in that, the path is yours to choose.

Now, if we were not working against any idiosyncrasies of an IDE, the preparation step “npm ci” would give us all our node dependencies in the current directory inside a node_modules/ folder. Because this blog post is going somewhere, we already chose to place that node_modules in the parent folder of the actual WORKDIR. This works because, for lack of its own node_modules, node will look for one in the directories above (this behaviour might change with future Node versions, but for now it holds true).
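If you want to convince yourself of that layout, a quick look inside the image built above does the trick (just a sketch, using the image tag from the build command):

docker run --rm build-platform-image sh -c "ls /opt && ls /opt/node_modules | head"

You should see node_modules/ sitting next to project/ under /opt.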

The Challenge with JetBrains

Now, the current JetBrains IDEs allow you to run your project with the node interpreter (containerized within the build-platform image) in the “Run/Debug Configurations” window via

“+” ➔ “npm” ➔ Node interpreter “Add…” ➔ “Add Remote” ➔ “Docker”

then choose the right image (e.g. build-platform-image:latest).

Now enters that strange IDE behaviour that is not really documented or changeable anywhere. If you run this configuration, your current project directory gets mounted in two places inside the container:

  • /opt/project
  • /tmp/<temporary UUID>

This mounting behaviour explains why we cannot install our node_modules dependencies inside the container in the /opt/project path – mounting external folders always overrides anything that might exist in the corresponding mount points, i.e. any /opt/project/node_modules will be overwritten by force.
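You can observe that double mount yourself while such a run configuration is active – the container id is whatever docker ps shows for the IDE-started container (a placeholder here):

docker ps
docker inspect <container-id> --format '{{range .Mounts}}{{.Source}} -> {{.Destination}}{{"\n"}}{{end}}'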

As we took care of that by using the /opt parent folder for the node_modules installation, and we set the WORKDIR to /opt/project, one could think that now we can just call the development server (written as <command_for_starting_dev_server> above).

But we couldn’t!

For reasons that made us question our reality way longer than it made us happy, it turned out that the IDE somehow always chose the /tmp/<uuid> path as WORKDIR. We found no way of changing that, JetBrains does not tell us anything about it, and the “docker run -w / --workdir” parameter did not help either. We really had to resort to the less-than-optimal hack of modifying the package.json “scripts” options:

 "scripts": {
    "dev": "vite serve",
    "dev-docker": "cd /opt/project && vite serve",
    ...
  },

The “dev” line was there already (if you use Create React App or something else, this calls that something else accordingly). We added another script with an explicit “cd /opt/project”. One can then select that script in the new Run Configuration from above, and now it really works.

We do not like this way because it couples a bad IDE behaviour with hard-coded paths inside our source files – but at least we separate it enough from our other code that it doesn’t destroy anything – e.g. in principle, you could still run this thing with npm locally (after running “npm install” on your machine etc.).

Side note: Dealing with the “@esbuild/linux-x64” error

The internet has not widely adopted Vite as a scaffolding / build tool for React projects yet, and one of the problems on our way was a nasty error of the likes of

Error: The package "esbuild-linux-64" could not be found, and is needed by esbuild

We found the best solution for that problem was to add the following to the package.json:

"optionalDependencies": {
    "@esbuild/linux-x64": "0.17.6"
}

… using “optionalDependencies” rather than the other dependency entries because this way, we still allow the local installation on a Windows machine. If the dependency was not optional, npm install would just throw a wrong-OS error.

(Note that as a rule, we do not like the default usage of SemVer ^ or ~ inside the package.json – we would rather pin every dependency and do our updates specifically when we know we are paying attention. That makes us less vulnerable to sudden npm hacks or sneaky surprises in general.)

I hope all this information might be useful to you. It took us a considerable amount of thought and research to come to this conclusion, so if you have any further tips or insights, I’d be glad to hear from you!

Create custom jre in Docker

I recently wrote a Java application and wanted to run it in a Docker container. Unfortunately, my program needs a module from the Java JDK, which is why I can’t run it with the standard JRE. When I run the program with the JDK, everything works, but I ship much more than I actually need and end up with a 390 MB Docker container.

That’s why I set out to build my own JRE, cut down to exactly what my application needs.

For this I found two Gradle plugins that help me. Jdeps checks the dependencies of an application and can summarise them in a file. This file can then be used as input for the Jlink plugin, which builds its own JRE from it.

In the following I will present a way to build a custom JRE with Gradle and the two plugins in a multi-stage Dockerfile and run an application in it.

Configuration in Gradle

First we need to do some configuration in the build.gradle file. To use jdeps, the application must be packaged as an executable jar file and all dependencies must be copied to a separate folder. First we create a task that copies all current dependencies to a folder named lib. For the jar, we need to define the main class and set the class path to search for all dependencies in the lib folder.

// copies all the jar dependencies of your app to the lib folder
task copyDependencies(type: Copy) {
    from configurations.runtimeClasspath
    into "$buildDir/lib"
}

jar {
    manifest {
        attributes["Main-Class"] = "Main"
        attributes["Class-Path"] = configurations.runtimeClasspath
            .collect{'lib/'+it.getName()}.join(' ')
    }
}
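A quick sanity check on the host shows whether the copied dependencies and the manifest look right (assuming the Gradle wrapper is present; the jar name depends on your project name and version):

./gradlew copyDependencies jar
ls build/lib
unzip -p build/libs/app-*.jar META-INF/MANIFEST.MF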

Docker Image

Now we can get to work on the Docker image.

As a first step, we build our JRE in a Java JDK image. To do this, we run the copyDependencies task we just created and build the Java application with Gradle. Then we let jdeps collect all dependencies from the lib folder.
The output is written to the file jre-deps.info. It is therefore important that no errors or warnings are output, which is ensured with the -q parameter. The --print-module-deps option is crucial so that the dependencies are output and saved in the file.

The file is now passed to jlink and a custom-fit jre for the application is built from it. The parameters set in the call also reduce the size. The settings can be read in detail in the plugin documentation linked above.

FROM eclipse-temurin:17-jdk-alpine AS jre-build

COPY . /app
WORKDIR /app

RUN chmod u+x gradlew; ./gradlew copyDependencies; ./gradlew build

# find JDK dependencies dynamically from jar
RUN jdeps \
--ignore-missing-deps \
-q \
--multi-release 17 \
--print-module-deps \
--class-path build/lib/* \
build/libs/app-*.jar > jre-deps.info

RUN jlink \
--compress 2 \
--strip-java-debug-attributes \
--no-header-files \
--no-man-pages \
--output jre \
--add-modules $(cat jre-deps.info)

FROM alpine:3.17.0
WORKDIR /deployment

# copy the custom JRE produced from jlink
COPY --from=jre-build /app/jre jre
# copy the app dependencies
COPY --from=jre-build /app/build/lib/* lib/
# copy the app
COPY --from=jre-build /app/build/libs/app-*.jar app.jar

# run in user context
RUN adduser -D schneide
RUN chown schneide:schneide .
USER schneide

# run the app on startup
CMD jre/bin/java -jar app.jar

In the second Docker stage, a smaller image is used and only the created jre, the dependencies and the app.jar are copied into it. The app is then executed with the jre.

In my case, I was able to reduce the Docker container size to 110 MB with the alpine image – less than a third of the original size. With an ubuntu:jammy base image it comes to 182 MB, which still cuts the size by more than half.
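To compare the sizes yourself, building the image and asking Docker for its size is enough (the tag is just a placeholder):

docker build -t my-custom-jre-app .
docker images my-custom-jre-app --format "{{.Repository}}: {{.Size}}"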

Using custom Docker containers for development with WebStorm & Co.

Docker has become one of the go-to tools of many developers these days. Not because every project should implement as many technological buzzwords as possible, but because containers offer a great deal of flexibility for a comparatively small setup hassle.

For stuff like node-based applications, using a Dev Container is useful because in principle, you do not need to have any of the npm stuff on your actual machine – not only do you avoid having these monstrous node_modules folders, you also avoid accidental dependencies on some specific configuration that might hold true on your device, but not generally.

For some of these reasons probably, JetBrains included Docker Dev Containers as a kind of “remote” development. In a sense, a docker container can be thought of as a remote machine, regardless of the fact that it shares your local hardware and is just a software abstraction.

In my opinion, JetBrains usually makes great software, but there is some weird behaviour in their usage of Docker Dev Containers, and it took us a while to find a quite general and IDE-independent solution; I’ll just use WebStorm as an example of something that appeared unusually hard to tame. I guess it will become better eventually.

For now, one might think of using the built-in config like:

  1. New Run Configuration -> npm
  2. Node Interpreter: “…”
  3. “+” -> Add Remote… -> “Docker”
  4. Use an image of your choice, either one of the node base images or a custom one (see below) with its corresponding tag

Now, for reasons that seem to be completely undocumented and unavoidable (tell me if you know more!), the IDE forces you to mount your project to /opt/project inside this container, from where it gets mirrored during runtime to somewhere under /tmp/<temporary uuid>/ – and in several of our projects (due to our folder structure, which is not even particularly abnormal) this made the option completely unusable.

The way one can work without these strange idiosyncrasies is as follows:

First, create a Dockerfile in which you do all the required setup. It might be a good idea to switch the user away from “root” to something more restricted like “node” (even though in development, you probably have your eyes on everything anyway). You can do more custom setup here. This can look like:

FROM node:16.18.0-bullseye-slim

WORKDIR /your-home-inside-container
RUN chown node .

COPY package.json package-lock.json /your-home-inside-container/

USER node

RUN npm ci --ignore-scripts

# COPY <whatever you might want> <where you want it inside>

EXPOSE 3000

CMD npm start

From that Dockerfile, build a local image in the same folder like:

# you might need -f if the Dockerfile is not named "Dockerfile"
docker build -t your-dev-image .

Then, create a new Run Configuration, but choose “Shell script” (not npm):

docker run -it --rm --entrypoint= -v ${PWD}/src:/your-home-inside-container/src -p 0.0.0.0:3000:3000 your-dev-image

You might use a different “-p” port forwarding if you do not want to have your development server broadcasting on port 3000 (another advantage of Dev Containers, you can easily run multiple instances on different ports).

This is about the whole magic. But there are two further things that could be important here:

Hot Reloading (live updating whenever source files change)

This is done rather easily, although the details seem to change once in a while. We figured out that at least if you are using react-scripts@5.0.1 (which is what “npm start” addresses, unless you do that differently), you just need to set the environment variable “WATCHPACK_POLLING=true”. I.e. put that in your Dockerfile as

ENV WATCHPACK_POLLING true

or pass it into your docker run ... -e WATCHPACK_POLLING=true ... your-dev-image line

Routing a development proxy to some “local host”

If your software e.g. addresses a backend that is running on your development machine or in another Docker Dev Container, it cannot just access that host from inside the Docker container. Neither is the port forwarding via “-p …:…” of any use, because that addresses the other direction – i.e. which port from the container is exposed to outside access – here, we go the other way.

When the software inside the container wants to address “localhost”, it needs to be directed at the host name under which your local machine appears. Docker has a special hostname for that: host.docker.internal.

I.e. if your local backend is running on “localhost:8080” on your machine, you need to tell your Dev Container to direct its requests to “host.docker.internal:8080”.
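One caveat: host.docker.internal works out of the box with Docker Desktop on Windows and macOS; on a Linux host you may have to map it explicitly (Docker 20.10 or newer), e.g. by extending the run command from above:

docker run -it --rm --entrypoint= \
  --add-host=host.docker.internal:host-gateway \
  -v ${PWD}/src:/your-home-inside-container/src \
  -p 0.0.0.0:3000:3000 your-dev-image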

In one of our projects, we needed some specific control over the proxy that the React development server gives you, and here is a way to gain that control – add a “setupProxy.js” inside your src/ folder and put in it something like

const { createProxyMiddleware } = require('http-proxy-middleware');

module.exports = function(app) {
    if (process.env.LOCAL_DEVELOPMENT) {
        return;
    }

    let httpProxyMiddleware = createProxyMiddleware({
        target: process.env.REACT_APP_PROXY || 'http://localhost:8080',
        changeOrigin: true,
    });
    app.use('/api', httpProxyMiddleware); // change to your needs accordingly
};

This way, one can always change the address by setting a REACT_APP_PROXY environment variable as in the step above, and one can also disable the whole proxying by setting the LOCAL_DEVELOPMENT env variable to true. Name these as you like, and you can even extend this setupProxy to include web sockets or different proxies for different routes – if you have any questions on that, just comment below 🙂
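For illustration, two ways these variables could be used (a sketch, reusing the names from above):

# local development without the proxy middleware
LOCAL_DEVELOPMENT=true npm start

# containerized dev server talking to a backend on the host machine
docker run -it --rm --entrypoint= \
  -e REACT_APP_PROXY=http://host.docker.internal:8080 \
  -v ${PWD}/src:/your-home-inside-container/src \
  -p 0.0.0.0:3000:3000 your-dev-image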

Docker Interpreter with Environment Variables in RubyMine

As you know from previous blog entries, we now rely on Docker dev containers as interpreters for our IDEs. This has the advantage that we don’t need local installations of, for example, Ruby or single packages; all requirements live in a Docker container and our machine stays clean.
RubyMine has some pitfalls for this approach. So in this blog post, I’ll present some hard-won insights and show you what solution we came to.

Those were our first problems:

  1. When using a single Docker image as an interpreter, it does not clean up everything when exiting the application and container. For example, the Server.pid file remains on the local machine, resulting in the following error: “A server is already running. Check path/Pids/Server.pids.” This behavior can be worked around with a Before Launch script that deletes the file when the application starts, but it’s not very nice. RubyMine unfortunately does not have Docker Container Settings, unlike other JetBrains IDEs, so a simple --rm does not work.
  2. Furthermore, we need to talk to our local machine from our Docker container, for example to access a DB or to use a VPN tunnel originating from the local machine. For this, the Docker container needs to know my IP address, or the respective IP address of each developer. In our version control system, however, we don’t want to constantly overwrite the IP addresses or check before each push that we did not accidentally commit our own IP address. Our wish was to have a local environment variable MY_MACHINE_IP where each developer writes their IP address, and the Docker container fetches it when the program starts. The normal integration of the system environment variables by simply checking this option unfortunately does not work here when we start the program in Docker, because the IDE then integrates the environment variables of the Docker container and not those of my local machine. Also, using a local environment variable to pass it to Docker works neither in the Run Configuration nor in Docker Image creation, as the images below show. The same goes for using PATH variables of the IDE instead of environment variables.
Environment Settings in Run Configuration
Environments in Dev Container

Our solution – Docker Compose:

Our first problem is solved by using Docker Compose directly. The problem with the Server.pid does not occur, because RubyMine manages the Docker Compose run better and removes the Server.pid automatically at startup.

Below, I explain our setup of a Docker Compose as an interpreter and how it solved all the problems.

We created a simple Docker Compose yaml with the image we want to use as an interpreter. At this point, you can also define environment variables that draw on the local environment variables of your machine. This solves our second problem.

Docker Compose yaml
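The original file is only shown as a screenshot here, so take the following as a rough sketch of what such a compose file could look like – service name, image and mount path are placeholders, MY_MACHINE_IP is the local environment variable mentioned above:

version: '3.7'
services:
  ruby-dev:
    image: our-ruby-dev-image:latest
    environment:
      # resolved from the developer's local environment when the service starts
      - MY_MACHINE_IP=${MY_MACHINE_IP}
    volumes:
      # the volume definition is important, see the note below
      - .:/opt/project
    working_dir: /opt/project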

The definition of the volume is important at this point. Docker tries to store things there during the installation and throws an internal server error if the volume is missing: 
Error response from daemon: the working directory ‘C:\repositories\my-project’ is invalid, it needs to be an absolute path.

Now an interpreter can be set up in RubyMine. To do this, a new remote interpreter must be created in the Settings under Language & Frameworks: Ruby SDK and Gems. An example is shown in the figure below:

Ruby Docker Compose Interpreter

It is important to select the interpreter afterwards, otherwise you will not be able to save and will get an error that the project has no interpreter.

Now the interpreter can be stored in the run configuration.

Recap:

We are now able to run our Ruby environment in a Docker container. Thus, the environment is independent of local circumstances such as the installed Ruby version or packages, as these can all be found in the container. Also, each programmer can run the project locally without any further adjustments via a defined environment variable, and the Docker container can still talk to other local Docker containers on the machine. Thus, what is in GitLab is generally valid and not bound to the respective programmer by a customized IP or the like.

The layer cake approach: Docker multi-stage builds and Dev containers

One recent feature that has the ability to change the way developers work on their local machines is dev containers. In short, a dev container is a docker container that provides a working environment for a specific project and can be used by the IDE to develop the project without ever installing anything on the developer machine except the IDE, docker and git. It is especially interesting if you often switch between projects with different ecosystems or with the same ecosystem in different versions.

Most modern IDEs support dev containers, even if it still feels a bit unpolished in some implementations. In my opinion, the advantages of dev containers outweigh the additional complexity. The main advantage is the guarantee that all development systems are set up correctly and in sync. Our instructions to set up projects shrank from a lengthy wiki page to four bullet points that are essentially the same process for all projects.

But one question needs to be answered: If you have a docker-based build and perhaps even a docker-based delivery and operation, how do you keep their Dockerfiles in sync with your dev container Dockerfile?

My answer is to not split the project’s Dockerfiles, but layer them in one file as a multi-stage build. Let’s view the (rather simple) Dockerfile of one small project:

# --------------------------------------------
FROM python:3.10-alpine AS project-python-platform

COPY requirements.txt requirements.txt
RUN pip3 install --upgrade pip
RUN pip3 install --no-cache-dir -r requirements.txt

# --------------------------------------------
FROM project-python-platform AS project-application

WORKDIR /app

# ----------------------
# Environment
# ----------------------
ENV FLASK_APP=app
# ----------------------

COPY . .

CMD python -u app.py

The interesting part is introduced by the horizontal dividers: The Dockerfile is separated into two parts, the first one called “project-python-platform” and the second one “project-application”.

The first build target contains everything that is needed to form the development environment (python and the project’s requirements). If you build an image from just the first build target, you get your dev container’s image:

docker build --pull --target project-python-platform -t dev-container .

The second build target uses the dev container image as a starting point and includes the project artifacts to provide an operations image. You can push this image into production and be sure that your development effort and your production experience are based on the same platform.
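The corresponding build of the operations image only differs in the target (the tag is again a placeholder):

docker build --pull --target project-application -t operations-image .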

If you frowned at the “COPY . .” line: for small projects, we make use of the .dockerignore feature.

Combining multi-stage builds with dev containers keeps all stages of your delivery pipeline in sync, even the zeroth stage – the developer’s machine.

But what if you have a more complex scenario than just a python project? Let’s look at the Dockerfile of a python project with an included javascript/node/react sub-project:

# --------------------------------------------
FROM python:3.9-alpine AS chronos-python-platform

COPY requirements.txt requirements.txt

RUN pip3 install --upgrade pip
RUN pip3 install --no-cache-dir -r requirements.txt

# --------------------------------------------
FROM node:16-bullseye AS chronos-node-platform

COPY chronos_client/package.json .
COPY chronos_client/package-lock.json .

RUN npm -v
RUN npm ci --ignore-scripts

# --------------------------------------------
FROM chronos-node-platform AS chronos-react-builder

COPY chronos_client .

# build the client javascript application
RUN npm run build

# --------------------------------------------
FROM chronos-python-platform AS chronos-application

WORKDIR /app

COPY . .

# set environment variables
ENV ZEITGEIST_PASSWORT=''

COPY --from=chronos-react-builder /build ./chronos_client/build

CMD python -u chronos.py

It is the same approach, just more layers on the cake, four in this example. There is one small caveat: If you want to build the “chronos-node-platform”, the preceding “chronos-python-platform” gets built, too. That delays things a bit, but only once in a while.

The second to last line might be interesting if you aren’t familiar with multi-stage builds: The copy command takes compiled files from the third stage/layer and puts them in the final layer that is the operations image. This ensures that the compiler is left behind in the delivery pipeline and only the artifacts are published.

I’m sure that this layer cake approach is not feasible for all project setups. It works for us for small and medium projects without too much polyglot complexity. The best aspect is that it separates project-specific knowledge from approach knowledge. The project-specific things get encoded in the Dockerfile, the approach knowledge is the same for all projects and gets documented in the Wiki – once.

Running a containerized ActiveDirectory for developers

If you develop software for larger organizations, one big aspect is integrating it with existing infrastructure. While you may prefer simple deployments of services in Docker containers, a customer may want you to deploy to their WildFly infrastructure, for example.

One common case of infrastructure is an Active Directory (AD) or plain LDAP service used for organization-wide authentication and authorization. As a small company we do not have such an infrastructure ourselves, and it would not be a great idea to use it for development anyway.

So how do you develop and test your authentication module without an AD being available for you?

Fortunately, nowadays this is relatively easy using tools like Docker and Samba. Let us see how to put such a development infrastructure up and where the pitfalls are.

Running Samba in a Container

Samba can not only serve Windows shares or act as a domain controller for Microsoft Windows based networks, but also includes a full AD implementation with proper LDAP support. It takes a small amount of work besides installing Samba in a container to set it up, so we have two small shell scripts for setup and launch in a container. I think most of the Dockerfile and scripts should be self-explanatory and straightforward:

Dockerfile:

FROM ubuntu:20.04

RUN DEBIAN_FRONTEND=noninteractive apt-get update && DEBIAN_FRONTEND=noninteractive apt-get -y install samba krb5-config winbind smbclient 
RUN DEBIAN_FRONTEND=noninteractive apt-get update && DEBIAN_FRONTEND=noninteractive apt-get -y install iproute2
RUN DEBIAN_FRONTEND=noninteractive apt-get update && DEBIAN_FRONTEND=noninteractive apt-get -y install openssl
RUN DEBIAN_FRONTEND=noninteractive apt-get update && DEBIAN_FRONTEND=noninteractive apt-get -y install vim

RUN rm /etc/krb5.conf
RUN mkdir -p /opt/ad-scripts

WORKDIR /opt/ad-scripts

CMD chmod +x *.sh && ./samba-ad-setup.sh && ./samba-ad-run.sh

samba-ad-setup.sh:

#!/bin/bash

set -e

info () {
    echo "[INFO] $@"
}

info "Running setup"

# Check if samba is setup
[ -f /var/lib/samba/.setup ] && info "Already setup..." && exit 0

info "Provisioning domain controller..."

info "Given admin password: ${SMB_ADMIN_PASSWORD}"

rm /etc/samba/smb.conf

samba-tool domain provision\
 --server-role=dc\
 --use-rfc2307\
 --dns-backend=SAMBA_INTERNAL\
 --realm=`hostname`\
 --domain=DEV-AD\
 --adminpass=${SMB_ADMIN_PASSWORD}

mv /etc/samba/smb.conf /var/lib/samba/private/smb.conf

touch /var/lib/samba/.setup

Using samba-ad-run.sh, we start Samba directly instead of running it as a service, which is what you would do outside a container:

#!/bin/bash

set -e

[ -f /var/lib/samba/.setup ] || {
    >&2 echo "[ERROR] Samba is not setup yet, which should happen automatically. Look for errors!"
    exit 127
}

samba -i -s /var/lib/samba/private/smb.conf

With the scripts and the Dockerfile in place you can simply build the container image using a command like

docker build -t dev-ad -f Dockerfile .

We then run it as follows and use the local mounts to preserve the data in the AD we will be using for testing and toying around:

 docker run --name dev-ad --hostname ldap.schneide.dev --privileged -p 636:636 -e SMB_ADMIN_PASSWORD=admin123! -v $PWD/:/opt/ad-scripts -v $PWD/samba-data:/var/lib/samba dev-ad

To have everything running seamlessly, you should add the specified hostname – ldap.schneide.dev in our example – to /etc/hosts so that all tools work as expected, as if it were a real AD host somewhere.

Testing our setup

Now of course you may want to check if your development AD works as expected and maybe add some groups and users which you need for your implementation to work.
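If you prefer a quick smoke test from the shell before installing anything, an ldapsearch against the container should already return the default users (a sketch, assuming the OpenLDAP client tools on the host and the hostname, domain and admin password from the docker run above; the base DN follows from the realm, which the setup script derives from the container hostname):

LDAPTLS_REQCERT=never ldapsearch -x \
  -H ldaps://ldap.schneide.dev:636 \
  -D 'DEV-AD\Administrator' -w 'admin123!' \
  -b 'DC=ldap,DC=schneide,DC=dev' \
  '(objectClass=user)' sAMAccountName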

While there are a bunch of tools for working with an AD/LDAP I found the old and sturdy LdapAdmin the easiest and most straightforward to use. It comes as one self-contained executable file (downloadable from Sourceforge) ready to use without installation or other hassles.

After getting the container and LdapAdmin up and running and logging in you should see something like this below:

LdapAdmin Window showing our Samba AD

Then you can browse and edit your active directory to fit your needs allowing you to develop your authentication and authorization module based on LDAP.

I hope you found the above useful for your development setup.

Migrating from Oracle to PostgreSQL

We are maintaining several applications with an SQL database as their data storage. If we can decide freely, we usually opt for PostgreSQL as the database management system (DBMS). But sometimes our clients have specific requirements because they are running the services on-premises, so we use our customers’ choice. SQL is SQL anyway, is it not?

No it isn’t. And this year one of our customers asked us to migrate our application from Oracle to PostgreSQL. The migration was challenging even though we are using an object-relational mapper (ORM) and the necessary changes to our application code were very limited.

In this post I want to explain the general, application-agnostic challenges of such a migration. A follow-up will cover the application- and framework-specific issues.

Why is it not easy?

Luckily, PostgreSQL supports most common SQL features of Oracle, especially sequences, PL/SQL like scripts, triggers, foreign keys etc. and all the important datatypes. So you are mostly migrating from an inferior to a more powerful solution, at least feature and capability-wise from a client perspective. Please note that I am not judging the performance, replication, clustering and other administrative features here!

Unfortunately there is no simple and powerful enough tool to just dump the Oracle database into some standard SQL text format that you could pipe into psql or use with pg_restore. In addition, there is also the challenge of converting the different number types of Oracle to semantically equivalent PostgreSQL types etc.

Another challenge is coping with referential integrity. Especially data in complex schemas with a lot of foreign keys is harder to migrate without proper tool support, as you have to figure out the correct order of tables to restore.

Nevertheless, such a migration is doable, especially if you do not have too much scripting logic in your database. And there is a free tool to help you with all this stuff called Ora2Pg.

What can Ora2Pg do for you?

It can export the full database schema including constraints, convert datatypes based on configuration provided by you and offers a basic automatic conversion of PL/SQL code to PLPGSQL. When running the migration you can interactively choose what to migrate and what to skip. That allows you to only migrate the data into a readily prepared schema, for example.

How to run Ora2Pg?

Ora2Pg is a collection of Perl scripts and configuration files, so you need a system capable of running these. If you do not want to mess with your whole system and install all of the dependencies, I prepared a Dockerfile able to run Ora2Pg:

FROM centos:7

# Prepare the system for ora2pg 
RUN yum install -y wget
RUN wget https://yum.oracle.com/RPM-GPG-KEY-oracle-ol7 -O /etc/pki/rpm-gpg/RPM-GPG-KEY-oracle

COPY ol7-temp.repo /etc/yum.repos.d/
RUN yum install -y oraclelinux-release-el7
RUN mv /etc/yum.repos.d/ol7-temp.repo /etc/yum.repos.d/ol7-temp.repo.disabled
RUN yum install -y oracle-instantclient-release-el7
RUN yum install -y oracle-instantclient-basic
RUN yum install -y oracle-instantclient-devel
RUN yum install -y oracle-instantclient-sqlplus

RUN yum install -y perl perl-CPAN perl-DBI perl-Time-HiRes perl-YAML perl-local-lib make gcc
RUN yum install -y perl-App-cpanminus

RUN cpanm CPAN::Config
RUN cpanm CPAN::FirstTime

ENV LD_LIBRARY_PATH=/usr/lib/oracle/21/client64/lib
ENV ORACLE_HOME=/usr/lib/oracle/21/client64

RUN perl -MCPAN -e 'install DBD::Oracle'

COPY ora2pg-21.1.tar.gz /tmp

WORKDIR /tmp
RUN tar zxf ora2pg-21.1.tar.gz && cd ora2pg-21.1 && perl Makefile.PL && make && make install

RUN mkdir -p /migration
RUN ora2pg --project_base /migration --init_project my_project
WORKDIR /migration/my_project

# uncomment this if you have a customized ora2pg.conf
#COPY ora2pg.conf /migration/my_project/config/

CMD ora2pg -t SHOW_VERSION -c config/ora2pg.conf && ora2pg -t SHOW_TABLE -c config/ora2pg.conf\
 && ora2pg -t SHOW_REPORT --estimate_cost -c config/ora2pg.conf\
 && ./export_schema.sh && ora2pg -t INSERT -o data.sql -b ./data -c ./config/ora2pg.conf

Here are the commands and the workflow to export the Oracle database using the above Docker image:

docker build -t o2pg .
# this will fail initially but create the project structure and generate a default configuration file
docker run --name oracle-export o2pg
# copy the project structure to the host system
docker cp oracle-export:/migration/my_project ./my_project_migration/

Now you can edit the configuration in my_project_migration/config and copy it back to the directory where you built and ran the Docker commands. Most importantly, you have to change the connection parameters at the top of the ora2pg.conf file. When you are ready for the first go, you need to enable configuration copying in the Dockerfile and rebuild the image. Now you should get your first somewhat usable export.

The most important config options we changed for our projects are listed below; a sketch of how they could look in ora2pg.conf follows the list.

  • Connection parameters
  • Excluded tables that you do not want to migrate
  • Deletion of the contents of the target tables
  • Conversion of some datatypes, like NUMBER(*,0) to bigint and NUMBER(1) to boolean for some columns
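A sketch of how these options could look in config/ora2pg.conf – the directive names are the ones we recall from the Ora2Pg documentation, the values are placeholders, so double-check both against your version:

# connection parameters
ORACLE_DSN      dbi:Oracle:host=oracle.intranet;sid=ORCL;port=1521
ORACLE_USER     my_schema
ORACLE_PWD      secret

# tables we do not want to migrate
EXCLUDE         LEGACY_TABLE AUDIT_LOG_OLD

# empty the target tables before loading data
TRUNCATE_TABLE  1

# datatype conversions
DATA_TYPE           NUMBER(*\,0):bigint
REPLACE_AS_BOOLEAN  MY_TABLE:MY_FLAG_COLUMN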

Most of the defaults are sensible to begin with but you can tailor the export specifically to your needs. If you feel ready to try the import you can run the import using a second docker image based on the following Dockerfile-import:

FROM centos:7

# Prepare the system for ora2pg 
RUN yum install -y wget
RUN wget https://yum.oracle.com/RPM-GPG-KEY-oracle-ol7 -O /etc/pki/rpm-gpg/RPM-GPG-KEY-oracle

COPY ol7-temp.repo /etc/yum.repos.d/
RUN yum install -y oraclelinux-release-el7
RUN mv /etc/yum.repos.d/ol7-temp.repo /etc/yum.repos.d/ol7-temp.repo.disabled
RUN yum install -y oracle-instantclient-release-el7
RUN yum install -y oracle-instantclient-basic
RUN yum install -y oracle-instantclient-devel
RUN yum install -y oracle-instantclient-sqlplus
RUN yum install -y postgresql-server

RUN yum install -y perl perl-CPAN perl-DBI perl-Time-HiRes perl-YAML perl-local-lib make gcc
RUN yum install -y perl-App-cpanminus

RUN cpanm CPAN::Config
RUN cpanm CPAN::FirstTime

ENV LD_LIBRARY_PATH=/usr/lib/oracle/21/client64/lib
ENV ORACLE_HOME=/usr/lib/oracle/21/client64

RUN perl -MCPAN -e 'install DBD::Oracle'

COPY ora2pg-21.1.tar.gz /tmp

WORKDIR /tmp
RUN tar zxf ora2pg-21.1.tar.gz && cd ora2pg-21.1 && perl Makefile.PL && make && make install

# you need to mount the project volume to /my_project
WORKDIR /my_project

ENV pg_port=5432

CMD ./import_all.sh -d $pg_db -h $pg_host -p $pg_port -U $pg_user -o $pg_user

To run the import with your exported project, build and run the import container as follows:

docker build -t postgres-import -f Dockerfile-import .
docker run -it --rm -e pg_host=target-db.intranet -e pg_db=my_project_db -e pg_user=my_db_user -v $PWD/my_project_migration:/my_project postgres-import

Then you can interactively provide the database password and decide which migration steps to perform.

Caveat

Depending on your schema, data and privileges in the target database, it may be necessary to disable all triggers before importing and re-enable them after a successful import. This can be done by replacing all occurrences of TRIGGER USER with TRIGGER ALL in the file data/data.sql. You may need appropriate privileges for this to work.
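One way to do that replacement in bulk (GNU sed shown, adapt to your platform):

sed -i 's/TRIGGER USER/TRIGGER ALL/g' data/data.sql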

Final words

Such a migration is not an easy task but may be worth it in total cost of ownership and maybe developer satisfaction as Oracle has some oddities and limitations for backend developers.

I will tackle some application specific issues with such a migration in a follow-up article that we encountered when migrating our system from Oracle to PostgreSQL using the above approach and tools.

Running a Micronaut Service in Docker

Micronaut is a state-of-the-art micro web framework targeting the Java Virtual Machine (JVM). I quite like that you can implement your web application or service using either Java, Groovy or Kotlin – all using Micronaut as their foundation.

You can easily tailor it specifically to your own needs and add features as need be. Micronaut Launch provides a great starting point and gets you up and running within minutes.

Prepared for Docker

Web services and containers are a perfect match. Containers and their management software ease running and supervising web services a lot. It is feasible to migrate or scale services and run them separately or in some kind of stack.

Micronaut comes prepared for Docker and allows you to build Dockerfiles and Docker images for your application out of the box. The Gradle build files of a Micronaut application offer handy tasks like dockerBuild or dockerfile to aid you.

A thing that simplifies deploying one service in multiple scenarios is configuration using environment variables (env vars, ENV).

I am a big fan of using environment variables because this basic mechanism is supported almost everywhere: Windows, Linux, docker, systemd, all (?) programming languages and many IDEs.

It is so simple everyone can deal with it and allows for customization by developers and operators alike.

Configuration using environment variables

Usually, Micronaut configuration is stored in YAML format in a file called application.yml. This file is packaged in the application’s executable jar file, making it ready to run. Most of the configuration will be fixed for all your deployments, but things like the database connection settings or URLs of other services may change with each deployment and are most likely different in development.

Gladly, Micronaut provides a mechanism for externalizing configuration.

That way you can use environment variables in application.yml while providing defaults for development, for example. Note that you need to put values containing : in backticks. See an example of a database configuration below:

datasources:
  default:
    url: ${JDBC_URL:`jdbc:postgresql://localhost:5432/supermaster`}
    driverClassName: org.postgresql.Driver
    username: ${JDBC_USER:db_user}
    password: ${JDBC_PASSWORD:""}
    dialect: POSTGRES

Having prepared your application this way, you can run it from the command line (CLI), via gradle run, or with docker run/docker-compose, providing environment variables as needed.

# Running the app "my-service" using docker run
docker run --name my-service -p 80:8080 -e JDBC_URL=jdbc:postgresql://prod.databasehost.local:5432/servicedb -e JDBC_USER=mndb -e JDBC_PASSWORD="mnrocks!" my-service

# Running the app using java and CLI
java -DJDBC_URL=jdbc:postgresql://prod.databasehost.local:5432/servicedb -DJDBC_USER=mndb -DJDBC_PASSWORD="mnrocks!" -jar application.jar

Migrating from Oracle to PostgreSQL

One promise of SQL for application developers is that changing the database management system (DBMS) is not that big of a deal. Due to the many specialties and incomplete standards conformance of the database vendors, it can be a big task to migrate from one DBMS vendor to another.

Nevertheless, there are plenty of good reasons to do so:

  • Cost of buying, running and maintaining the DBMS
  • Limitations of the current DBMS like performance, tool support, character sets, naming, data types and sizes etc.
  • Missing features like geospatial support, clustering, replication, sharding, timeseries support and so on
  • Support or requirements on the customers or operators side

Some of our long-running projects that started several years ago had the requirement to work with an Oracle DBMS, version 8i at that time. Now, more than 10 years later, our customer provides and prefers to host a PostgreSQL 13 cluster. Of course she would like us to migrate our applications over to the new DBMS and eventually get rid of the Oracle installation.

Challenges for the migration

Even though PostgreSQL supports most of the SQL:2016 core and the most important features of Oracle, there are enough differences and subtleties that make migration non-trivial. The most obvious items to look out for are

  • different column type names
  • SQL features and syntactical differences (sequences!)
  • PL/SQL functions syntax and features

Depending on your usage of database specific features you have to assess how much work and risk is expected.

Tools and migration process

Fortunately, there is a quite mature tool called ora2pg that can aid you along the process. It has tons of options to help you customize the migration and a quite helpful assessment of the task ahead. The migration report looks like this:

-------------------------------------------------------------------------------
Ora2Pg v21.1 - Database Migration Report
-------------------------------------------------------------------------------
Version Oracle Database 12c Enterprise Edition Release 12.1.0.2.0
Schema  NAOMI-TEST
Size    91.44 MB

-------------------------------------------------------------------------------
Object  Number  Invalid Estimated cost  Comments        Details
-------------------------------------------------------------------------------
DATABASE LINK   0       0       0.00    Database links will be exported as SQL/MED PostgreSQL's Foreign Data Wrapper (FDW) extensions using oracle_fdw.
FUNCTION        1       0       1.00    Total size of function code: 0 bytes.
GLOBAL TEMPORARY TABLE  60      0       168.00  Global temporary table are not supported by PostgreSQL and will not be exported. You will have to rewrite some application code to match the PostgreSQL temporary table behavior.  ht_my_table <--- SNIP --->.
INDEX   69      0       6.90    0 index(es) are concerned by the export, others are automatically generated and will do so on PostgreSQL. Bitmap will be exported as btree_gin index(es). Domain index are exported as b-tree but commented to be edited to mainly use FTS. Cluster, bitmap join and IOT indexes will not be exported at all. Reverse indexes are not exported too, you may use a trigram-based index (see pg_trgm) or a reverse() function based index and search. Use 'varchar_pattern_ops', 'text_pattern_ops' or 'bpchar_pattern_ops' operators in your indexes to improve search with the LIKE operator respectively into varchar, text or char columns.
JOB     0       0       0.00    Job are not exported. You may set external cron job with them.
SEQUENCE        4       0       1.00    Sequences are fully supported, but all call to sequence_name.NEXTVAL or sequence_name.CURRVAL will be transformed into NEXTVAL('sequence_name') or CURRVAL('sequence_name').
SYNONYM 0       0       0.00    SYNONYMs will be exported as views. SYNONYMs do not exists with PostgreSQL but a common workaround is to use views or set the PostgreSQL search_path in your session to access object outside the current schema.
TABLE   225     0       72.00    495 check constraint(s).       Total number of rows: 264690. Top 10 of tables sorted by number of rows:. topt has 52736 rows. po has 50830 rows. notification has 18911 rows. timeline_entry has 16556 rows. char_sample_types has 11400 rows. char_safety_aspects has 9488 rows. char_sample_props has 5358 rows. tech_spec has 4876 rows. mail_log_entry has 4778 rows. prop_data has 4358 rows. Top 10 of largest tables:.
-------------------------------------------------------------------------------
Total   359     0       248.90  248.90 cost migration units means approximatively 3 man-day(s). The migration unit was set to 5 minute(s)

-------------------------------------------------------------------------------
Migration level : A-3
-------------------------------------------------------------------------------

Migration levels:
    A - Migration that might be run automatically
    B - Migration with code rewrite and a human-days cost up to 5 days
    C - Migration with code rewrite and a human-days cost above 5 days
Technical levels:
    1 = trivial: no stored functions and no triggers
    2 = easy: no stored functions but with triggers, no manual rewriting
    3 = simple: stored functions and/or triggers, no manual rewriting
    4 = manual: no stored functions but with triggers or views with code rewriting
    5 = difficult: stored functions and/or triggers with code rewriting
-------------------------------------------------------------------------------

The tool is written in Perl, so I decided to put it inside Docker containers and run it there, because I did not want to mess with my working machine or some VMs. To have quick turnaround times with my containers I split up the process into 3 steps:

  1. Export of the schema and data using a docker container
  2. On success copy the ora2pg project to the host
  3. Import the schema and data using another docker container

The ora2pg migration project is copied to the host machine allowing you to inspect the export and make adjustments if need be. Then you can copy it to the import container or simply bind mount the directory containing the ora2pg project.

The Dockerfile for the export image looks like this

FROM centos:7

# Prepare the system for ora2pg 
RUN yum install -y wget
RUN wget https://yum.oracle.com/RPM-GPG-KEY-oracle-ol7 -O /etc/pki/rpm-gpg/RPM-GPG-KEY-oracle

COPY ol7-temp.repo /etc/yum.repos.d/
RUN yum install -y oraclelinux-release-el7
RUN mv /etc/yum.repos.d/ol7-temp.repo /etc/yum.repos.d/ol7-temp.repo.disabled
RUN yum install -y oracle-instantclient-release-el7
RUN yum install -y oracle-instantclient-basic
RUN yum install -y oracle-instantclient-devel
RUN yum install -y oracle-instantclient-sqlplus

RUN yum install -y perl perl-CPAN perl-DBI perl-Time-HiRes perl-YAML perl-local-lib make gcc
RUN yum install -y perl-App-cpanminus

RUN cpanm CPAN::Config
RUN cpanm CPAN::FirstTime

ENV LD_LIBRARY_PATH=/usr/lib/oracle/21/client64/lib
ENV ORACLE_HOME=/usr/lib/oracle/21/client64

RUN perl -MCPAN -e 'install DBD::Oracle'

COPY ora2pg-21.1.tar.gz /tmp

WORKDIR /tmp
RUN tar zxf ora2pg-21.1.tar.gz && cd ora2pg-21.1 && perl Makefile.PL && make && make install

RUN mkdir -p /naomi/migration
RUN ora2pg --project_base /ora2pg --init_project my-migration
WORKDIR /ora2pg

COPY ora2pg.conf /ora2pg/my-migration/config/

CMD ora2pg -t SHOW_VERSION -c config/ora2pg.conf && ora2pg -t SHOW_TABLE -c config/ora2pg.conf\
 && ora2pg -t SHOW_REPORT --estimate_cost -c config/ora2pg.conf\
 && ./export_schema.sh && ora2pg -t INSERT -o data.sql -b ./data -c ./config/ora2pg.conf
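With that Dockerfile in place, steps 1 and 2 of the workflow could look like this (image and container names are placeholders):

docker build -t ora2pg-export .
docker run --name oracle-export ora2pg-export
docker cp oracle-export:/ora2pg/my-migration ./ora2pg/my-migration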

Once the export looks good you can work on importing everything. The Dockerfile for the import image looks like this:

FROM centos:7

# Prepare the system for ora2pg 
RUN yum install -y wget
RUN wget https://yum.oracle.com/RPM-GPG-KEY-oracle-ol7 -O /etc/pki/rpm-gpg/RPM-GPG-KEY-oracle

COPY ol7-temp.repo /etc/yum.repos.d/
RUN yum install -y oraclelinux-release-el7
RUN mv /etc/yum.repos.d/ol7-temp.repo /etc/yum.repos.d/ol7-temp.repo.disabled
RUN yum install -y oracle-instantclient-release-el7
RUN yum install -y oracle-instantclient-basic
RUN yum install -y oracle-instantclient-devel
RUN yum install -y oracle-instantclient-sqlplus
RUN yum install -y postgresql-server

RUN yum install -y perl perl-CPAN perl-DBI perl-Time-HiRes perl-YAML perl-local-lib make gcc
RUN yum install -y perl-App-cpanminus

RUN cpanm CPAN::Config
RUN cpanm CPAN::FirstTime

ENV LD_LIBRARY_PATH=/usr/lib/oracle/21/client64/lib
ENV ORACLE_HOME=/usr/lib/oracle/21/client64

RUN perl -MCPAN -e 'install DBD::Oracle'

COPY ora2pg-21.1.tar.gz /tmp

WORKDIR /tmp
RUN tar zxf ora2pg-21.1.tar.gz && cd ora2pg-21.1 && perl Makefile.PL && make && make install

# you need to mount the project volume to /ora2pg
WORKDIR /ora2pg

CMD ./import_all.sh -d my_target_db -h $pg_host -U myuser -o myowner

Our target database runs on another host, so you need credentials to authenticate and perform all the required actions. Therefore we run the import container interactively. The PowerShell command for the import looks like this:

docker run -it --rm -e pg_host=192.168.56.1 -v $PWD/ora2pg/my-migration:/ora2pg pgimport

The import script allows you to create the schema, sequences, indexes, constraints and load the data. I suggest adding the constraints after importing the data – a workflow supported by the import_all.sh script.

That way we got our Oracle database migrated into a PostgreSQL database. Unfortunately, this is only one part of the whole migration. The other part is making changes to the application code to correctly use the new database.

Running Tango-Servers in Docker

Containers are a great way of running your software in an environment that you defined and no one else needs to maintain or even know about. This is especially true if your software provides its service using a network port. All that operators have to provide is the execution platform for the container and the required resources.

Tango servers fit quite well into this type of software: Essentially they provide a service over the network. Unfortunately, they need a Tango control system to be fully usable, so that means some additional services. Luckily, there are Docker images for this, so building a stack of containers to run your Tango server in the scope of a control system is quite easy (sample docker-compose.yml):

version: '3.7'
services:
  my-tango-server:
    container_name: my-tango-server
    image: my-ts
    environment:
      - TANGO_HOST=tango-cs:10000
    depends_on:
      - tango-cs
  tango-cs:
    container_name: tango-database
    image: tangocs/tango-cs:9.3.2-alpha.1-no-tango-test
    ports:
      - "10000:10000"
    environment:
      - TANGO_HOST=localhost:10000
      - MYSQL_HOST=tango-db:3306
      - MYSQL_USER=tango
      - MYSQL_PASSWORD=tango
      - MYSQL_DATABASE=tango
    links:
      - "tango-db:localhost"
    depends_on:
      - tango-db
  tango-db:
    container_name: mysql-database
    image: tangocs/mysql:9.2.2
    environment:
      - MYSQL_ROOT_PASSWORD=root
    volumes:
      - tango_mysql_data:/var/lib/mysql
volumes:
  tango_mysql_data:

Example Dockerfile for the image my-ts used for the my-tango-server container:

FROM debian:buster

WORKDIR /tango-server
COPY ./my-tango-server .

RUN DEBIAN_FRONTEND=noninteractive apt-get install -y ./*.deb

CMD my-tango-server myts

Unfortunately this naive approach has the following issues:

  • The network port of our tango server changes with each restart and is not available from outside the docker stack
  • The tango server communicates the internal IP-Address of its container to the tango database so connection attempts from the outside fail even if the port is correct
  • Even though the tango server container depends on the tango control system container, the tango server might start up before the tango control system is available.

Let us tackle the issues one by one.

Expose a fixed port for the Tango server

This is probably the easiest issue to fix because it is well documented for OmniORB and widely used. So let us change docker-compose.yml to expose a port and pass it into the container as an environment variable:

version: '3.7'
services:
  my-tango-server:
    container_name: my-tango-server
    image: my-ts
    ports:
      - "45450:45450"
    environment:
      - TANGO_HOST=tango-cs:10000
      - TANGO_SERVER_PORT=45450
    depends_on:
      - tango-cs
  tango-cs:
 ...

And use it in our Dockerfile for the my-ts image:

FROM debian:buster

WORKDIR /tango-server
COPY ./my-tango-server .

RUN DEBIAN_FRONTEND=noninteractive apt-get install -y ./*.deb

CMD my-tango-server myts -ORBendPoint giop:tcp::$TANGO_SERVER_PORT

Fine, now our Tango server always listens on a defined port and that same port is exposed to the outside of our docker stack.
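A quick way to check that from the host, once the stack is up (assuming netcat is available):

docker-compose up -d
nc -zv localhost 45450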

Publish the correct IP/hostname of the Tango server

I found it hard to find documentation about this and how to put it on the command line correctly, but here is the solution to the problem. The crucial command line parameter is -ORBendPointPublish. We need to pass a hostname of the host machine into the container and let OmniORB publish that name (tango-server.local in our example):

version: '3.7'
services:
  my-tango-server:
    container_name: my-tango-server
    image: my-ts
    ports:
      - "45450:45450"
    environment:
      - TANGO_HOST=tango-cs:10000
      - TANGO_SERVER_PORT=45450
      - TANGO_SERVER_PUBLISH=tango-server.local:45450
    depends_on:
      - tango-cs
  tango-cs:
 ...

And the corresponding changes to our Dockerfile:

FROM debian:buster

WORKDIR /tango-server
COPY ./my-tango-server .

RUN DEBIAN_FRONTEND=noninteractive apt-get install -y ./*.deb

CMD my-tango-server myts -ORBendPoint giop:tcp::$TANGO_SERVER_PORT -ORBendPointPublish giop:tcp:$TANGO_SERVER_PUBLISH

Waiting for availability of a service

This one is not specific to our tango server and the whole stack but a general problem when building stacks where one service depends on another one being up and listening on a network port. Docker Compose takes care of the startup order of the containers but does not check for the readiness of services running in the containers. In Linux containers this can easily be achieved by using the small shell utility wait-for-it. Here we only need to make some changes to our Dockerfile:

FROM debian:buster
RUN DEBIAN_FRONTEND=noninteractive apt-get update && apt-get install -y wait-for-it

WORKDIR /tango-server
COPY ./my-tango-server .

RUN DEBIAN_FRONTEND=noninteractive apt-get install -y ./*.deb

CMD wait-for-it $TANGO_HOST -- my-tango-server myts -ORBendPoint giop:tcp::$TANGO_SERVER_PORT -ORBendPointPublish giop:tcp:$TANGO_SERVER_PUBLISH

Depending on your distribution/base image, the installation of the tool may differ (and there are similar alternatives like dockerize or docker-compose-wait).

Summing it up

After moving some rocks out of the way it is dead easy to deploy your tango servers together with a complete tango control system to any host platform providing a container runtime. You do not need to deliver everything in one single stack if you do not want to. Most of the stuff described above works the same when deploying the tango control system and tango servers in independent stacks or as separate containers, even on multiple hosts.