Docker Interpreter with Environment Variables in RubyMine

As you know from previous blog entries, we now rely on Docker dev containers as interpreters for our IDEs. The advantage is that we don’t need local installations of, for example, Ruby or individual packages: all requirements live in a Docker container and our machine stays clean.
RubyMine has some pitfalls on this path, so in this blog post I’ll share some hard-won insights and show you the solution we arrived at.

These were our first problems:

  1. When using a single Docker image as an interpreter, RubyMine does not clean up everything when the application and container exit. For example, the server.pid file remains on the local machine, resulting in the following error on the next start: “A server is already running. Check path/Pids/Server.pids.” This behavior can be worked around with a Before Launch script that deletes the file when the application starts (see the sketch after this list), but it’s not very nice. Unlike other JetBrains IDEs, RubyMine unfortunately does not have Docker Container Settings, so a simple --rm does not work.
  2. Furthermore, we need to talk to our local machine from our Docker container, for example to access a DB or to use a VPN tunnel originating from the local machine. For this, the Docker container needs to know my IP address, or rather the respective IP address of each developer. In our version control system, however, we don’t want to constantly overwrite these IP addresses, or have to check before each push that we didn’t accidentally commit our own. Our wish was a local environment variable MY_MACHINE_IP in which each developer stores their IP address and which the Docker container picks up when the program starts. Simply checking the option that imports the system environment variables unfortunately does not work when the program is started in Docker, because the IDE then imports the environment variables of the Docker container and not those of my local machine. Passing a local environment variable to Docker doesn’t work in the Run Configuration either, nor in the Docker image creation, as the images below show. The same applies if you want to use PATH variables of the IDE instead of environment variables.
Environment Settings in Run Configuration
Environments in Dev Container
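For reference, the Before Launch workaround for the first problem can be as small as running the following command, for example as an external tool. This is a minimal sketch; tmp/pids/server.pid is the Rails default location and may differ in your project:

rm -f tmp/pids/server.pid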

Our solution – Docker Compose:

Our first problem is solved by using Docker Compose directly. The problem with the server.pid does not occur, because RubyMine manages the Docker Compose lifecycle better and removes the server.pid automatically at startup.

Below, I explain our setup of Docker Compose as an interpreter and how it solved all of our problems.

We created a simple Docker Compose YAML with the image we want to use as an interpreter. At this point, you can also define container environment variables that are filled from the environment variables of your local machine, which solves our second problem.

Docker Compose yaml
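Since the details are hard to read from a screenshot, here is a minimal sketch of what such a compose file can look like. The service name app, the image name my-ruby-dev-image, and the path /repositories/my-project are illustrative assumptions, not our exact setup:

version: '3.7'
services:
  app:
    image: my-ruby-dev-image
    environment:
      # filled from the developer's local MY_MACHINE_IP at startup
      - MY_MACHINE_IP=${MY_MACHINE_IP}
    working_dir: /repositories/my-project
    volumes:
      # mount the project into the container; see the note on the volume below
      - .:/repositories/my-project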

The definition of the volume is important at this point. Docker tries to store things there during the installation and throws an internal server error if the volume is missing: 
Error response from daemon: the working directory ‘C:\repositories\my-project’ is invalid, it needs to be an absolute path.

Now an interpreter can be set up in RubyMine. To do this, a new remote interpreter must be created in the Settings under Languages & Frameworks: Ruby SDK and Gems. An example is shown in the figure below:

Ruby Docker Compose Interpreter

It is important to actually select the newly added interpreter afterwards, otherwise you will not be able to save and will get an error that the project has no interpreter.

Now the interpreter can be stored in the run configuration.

Recap:

We are now able to run our Ruby environment in a Docker container. The environment is thus independent of local circumstances such as the installed Ruby version or packages, since these are all found in the container. Each developer can also run the project locally without further adjustments thanks to the defined environment variable, and the Docker container can still talk to other local Docker containers on the machine. The state in GitLab is therefore valid for everyone and not tied to an individual developer by a hard-coded IP address or the like.

Running Tango-Servers in Docker

Containers are a great way of running your software in an environment that you defined and no one else needs to maintain or even know about. This is especially true if your software provides its service over a network port. All that operators have to provide is the execution platform for the container and the required resources.

Tango servers fit quite well into this type of software: essentially, they provide a service over the network. Unfortunately, they need a Tango control system to be fully usable, so that means a few additional services. Luckily, there are Docker images for these, so building a stack of containers to run your Tango server in the scope of a control system is quite easy (sample docker-compose.yml):

version: '3.7'
services:
  my-tango-server:
    container_name: my-tango-server
    image: my-ts
    environment:
      - TANGO_HOST=tango-cs:10000
    depends_on:
      - tango-cs
  tango-cs:
    container_name: tango-database
    image: tangocs/tango-cs:9.3.2-alpha.1-no-tango-test
    ports:
      - "10000:10000"
    environment:
      - TANGO_HOST=localhost:10000
      - MYSQL_HOST=tango-db:3306
      - MYSQL_USER=tango
      - MYSQL_PASSWORD=tango
      - MYSQL_DATABASE=tango
    links:
      - "tango-db:localhost"
    depends_on:
      - tango-db
  tango-db:
    container_name: mysql-database
    image: tangocs/mysql:9.2.2
    environment:
      - MYSQL_ROOT_PASSWORD=root
    volumes:
      - tango_mysql_data:/var/lib/mysql
volumes:
  tango_mysql_data:

Example Dockerfile for the image my-ts used for the my-tango-server container:

FROM debian:buster

WORKDIR /tango-server
COPY ./my-tango-server .

RUN DEBIAN_FRONTEND=noninteractive apt-get update && apt-get install -y ./*.deb

CMD my-tango-server myts

Unfortunately, this naive approach has the following issues:

  • The network port of our Tango server changes with each restart and is not reachable from outside the Docker stack.
  • The Tango server communicates the internal IP address of its container to the Tango database, so connection attempts from the outside fail even if the port is correct.
  • Even though the Tango server container depends on the Tango control system container, the Tango server might start up before the Tango control system is available.

Let us tackle the issues one by one.

Expose a fixed port for the Tango server

This is probably the easiest issue to fix, because exposing a fixed port is well documented for OmniORB and widely used. So let us change docker-compose.yml to expose a port and pass it into the container as an environment variable:

version: '3.7'
services:
  my-tango-server:
    container_name: my-tango-server
    image: my-ts
    ports:
      - "45450:45450"
    environment:
      - TANGO_HOST=tango-cs:10000
      - TANGO_SERVER_PORT=45450
    depends_on:
      - tango-cs
  tango-cs:
 ...

And use it in our Dockerfile for the my-ts image:

FROM debian:buster

WORKDIR /tango-server
COPY ./my-tango-server .

RUN DEBIAN_FRONTEND=noninteractive apt-get update && apt-get install -y ./*.deb

CMD my-tango-server myts -ORBendPoint giop:tcp::$TANGO_SERVER_PORT

Fine, now our Tango server always listens on a defined port, and that same port is exposed to the outside of our Docker stack.

Publish the correct IP/hostname of the Tango server

Documentation on this was hard to find, as was the correct command line syntax, but here is the solution to the problem. The crucial command line parameter is -ORBendPointPublish. We need to pass a hostname of the host machine into the container and let OmniORB publish that name (tango-server.local in our example):

version: '3.7'
services:
  my-tango-server:
    container_name: my-tango-server
    image: my-ts
    ports:
      - "45450:45450"
    environment:
      - TANGO_HOST=tango-cs:10000
      - TANGO_SERVER_PORT=45450
      - TANGO_SERVER_PUBLISH=tango-server.local:45450
    depends_on:
      - tango-cs
  tango-cs:
 ...

And the corresponding changes to our Dockerfile:

FROM debian:buster

WORKDIR /tango-server
COPY ./my-tango-server .

RUN DEBIAN_FRONTEND=noninteractive apt-get update && apt-get install -y ./*.deb

CMD my-tango-server myts -ORBendPoint giop:tcp::$TANGO_SERVER_PORT -ORBendPointPublish giop:tcp:$TANGO_SERVER_PUBLISH

Waiting for availability of a service

This one is not specific to our Tango server and stack, but a general problem when building stacks where one service depends on another being up and listening on a network port. Docker Compose takes care of the startup order of the containers, but it does not check the readiness of the services running inside them. In Linux containers this can easily be achieved using the small shell utility wait-for-it. Here we only need to make some changes to our Dockerfile:

FROM debian:buster
RUN DEBIAN_FRONTEND=noninteractive apt-get update && apt-get install -y wait-for-it

WORKDIR /tango-server
COPY ./my-tango-server .

RUN DEBIAN_FRONTEND=noninteractive apt-get install -y ./*.deb

CMD wait-for-it $TANGO_HOST -- my-tango-server myts -ORBendPoint giop:tcp::$TANGO_SERVER_PORT -ORBendPointPublish giop:tcp:$TANGO_SERVER_PUBLISH

Depending on your distribution/base image, the installation of the tool may look different (and there are similar alternatives like dockerize or docker-compose-wait).
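For instance, if your base image has no wait-for-it package, one alternative is to fetch the script directly from its upstream GitHub repository. This is a sketch assuming curl is available in the image:

RUN apt-get update && apt-get install -y curl \
 && curl -fsSL -o /usr/local/bin/wait-for-it \
      https://raw.githubusercontent.com/vishnubob/wait-for-it/master/wait-for-it.sh \
 && chmod +x /usr/local/bin/wait-for-it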

Summing it up

After moving some rocks out of the way, it is dead easy to deploy your Tango servers together with a complete Tango control system to any host platform providing a container runtime. You do not need to deliver everything in one single stack if you do not want to. Most of the stuff described above works the same when deploying the Tango control system and the Tango servers in independent stacks or as separate containers, even on multiple hosts.