Using credentials in scripted Jenkins pipelines

The Jenkins continuous integration (CI) server allows job configuration to be scripted and version controlled instead of being configured through the web interface. That makes it easier to migrate jobs to another environment and makes changes easier to trace.

Such jobs are called “pipeline jobs” and come in two flavours: declarative and scripted. We prefer scripted pipelines because they offer the full power of the Groovy language.

One common problem in CI jobs is that you need credentials to log into other systems, e.g. for storing build artifacts or deploying to some staging server.

Of course, the credentials should never be stored as plain text in your repository, e.g. directly in your Jenkinsfile. You also do not want them to appear in build logs and the like.

Solution for scripted pipelines

Fortunately there is a nice solution available in the withCredentials-step.

First you need to manage the credentials in the central Jenkins credential management. There are several credential types like username and password, API token, secret text or username and private key.

Then you can reference them in your pipeline script like below:

// stuff to build the docker images...
stage('Transfer release images to registry') {
    withCredentials([usernamePassword(credentialsId: 'private-artifactory', passwordVariable: 'dockerKey', usernameVariable: 'dockerUser')]) {
        // avoid using the credentials in Groovy string interpolation,
        // reference them only as shell environment variables
        sh label: 'Login to docker registry', script: '''
            docker login --username $dockerUser --password $dockerKey my-artifactory.intranet
        '''

        // do something while being logged in

        sh label: 'Logout from docker registry', script: '''
            docker logout my-artifactory.intranet
        '''
    }
}
// stuff after publishing the docker images

Note that we do not use the injected credential variables in Groovy string interpolation, because – as the documentation states – that would expose the credentials on the underlying OS, for example in the process list.
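For illustration, the pattern to avoid would look like the following – the double-quoted Groovy string interpolates the secrets directly into the command, which is exactly what the warning is about (shown only as a counter-example):

// DON'T do this: the GString bakes the secrets into the command string,
// so they can end up in the process list and in the build log
sh label: 'Login to docker registry', script: """
    docker login --username ${dockerUser} --password ${dockerKey} my-artifactory.intranet
"""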

Migrating from Oracle to PostgreSQL

One promise of SQL for application developers is that changing the database management system (DBMS) is not that big of a deal. In practice, due to the many vendor specialties and incomplete standards conformance, migrating from one DBMS vendor to another can be a big task.

Nevertheless, there are plenty of good reasons to do so:

  • Cost of buying, running and maintaining the DBMS
  • Limitations of the current DBMS like performance, tool support, character sets, naming, data types and sizes etc.
  • Missing features like geospatial support, clustering, replication, sharding, timeseries support and so on
  • Support or requirements on the customers or operators side

Some of our long-running projects that started several years ago had the requirement to work with an Oracle DBMS, version 8i at that time. Now, more than 10 years later, our customer provides and prefers to host a PostgreSQL 13 cluster. Of course she would like us to migrate our applications over to the new DBMS and eventually get rid of the Oracle installation.

Challenges for the migration

Even though PostgreSQL supports most of SQL:2016 Core and the most important features of Oracle, there are enough differences and subtleties to make a migration non-trivial. The most obvious items to look out for are:

  • different column type names
  • SQL features and syntactical differences (sequences!)
  • PL/SQL functions syntax and features

Depending on your usage of database-specific features you have to assess how much work and risk to expect.
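As a small illustration of such differences, here is how a sequence call and a table definition differ between the two systems (table and sequence names are made up for this sketch):

-- Oracle: pseudo column on the sequence, NUMBER/VARCHAR2 column types
SELECT my_table_seq.NEXTVAL FROM dual;
CREATE TABLE my_table (id NUMBER(10) PRIMARY KEY, name VARCHAR2(100));

-- PostgreSQL: nextval() function, numeric/varchar column types
SELECT nextval('my_table_seq');
CREATE TABLE my_table (id numeric(10) PRIMARY KEY, name varchar(100));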

Tools and migration process

Fortunately, there is a quite mature tool called ora2pg that can aid you along the way. It has tons of options to help you customize the migration and provides a quite helpful assessment of the task ahead. The migration report looks like this:

-------------------------------------------------------------------------------
Ora2Pg v21.1 - Database Migration Report
-------------------------------------------------------------------------------
Version Oracle Database 12c Enterprise Edition Release 12.1.0.2.0
Schema  NAOMI-TEST
Size    91.44 MB

-------------------------------------------------------------------------------
Object  Number  Invalid Estimated cost  Comments        Details
-------------------------------------------------------------------------------
DATABASE LINK   0       0       0.00    Database links will be exported as SQL/MED PostgreSQL's Foreign Data Wrapper (FDW) extensions using oracle_fdw.
FUNCTION        1       0       1.00    Total size of function code: 0 bytes.
GLOBAL TEMPORARY TABLE  60      0       168.00  Global temporary table are not supported by PostgreSQL and will not be exported. You will have to rewrite some application code to match the PostgreSQL temporary table behavior.  ht_my_table <--- SNIP --->.
INDEX   69      0       6.90    0 index(es) are concerned by the export, others are automatically generated and will do so on PostgreSQL. Bitmap will be exported as btree_gin index(es). Domain index are exported as b-tree but commented to be edited to mainly use FTS. Cluster, bitmap join and IOT indexes will not be exported at all. Reverse indexes are not exported too, you may use a trigram-based index (see pg_trgm) or a reverse() function based index and search. Use 'varchar_pattern_ops', 'text_pattern_ops' or 'bpchar_pattern_ops' operators in your indexes to improve search with the LIKE operator respectively into varchar, text or char columns.
JOB     0       0       0.00    Job are not exported. You may set external cron job with them.
SEQUENCE        4       0       1.00    Sequences are fully supported, but all call to sequence_name.NEXTVAL or sequence_name.CURRVAL will be transformed into NEXTVAL('sequence_name') or CURRVAL('sequence_name').
SYNONYM 0       0       0.00    SYNONYMs will be exported as views. SYNONYMs do not exists with PostgreSQL but a common workaround is to use views or set the PostgreSQL search_path in your session to access object outside the current schema.
TABLE   225     0       72.00    495 check constraint(s).       Total number of rows: 264690. Top 10 of tables sorted by number of rows:. topt has 52736 rows. po has 50830 rows. notification has 18911 rows. timeline_entry has 16556 rows. char_sample_types has 11400 rows. char_safety_aspects has 9488 rows. char_sample_props has 5358 rows. tech_spec has 4876 rows. mail_log_entry has 4778 rows. prop_data has 4358 rows. Top 10 of largest tables:.
-------------------------------------------------------------------------------
Total   359     0       248.90  248.90 cost migration units means approximatively 3 man-day(s). The migration unit was set to 5 minute(s)

-------------------------------------------------------------------------------
Migration level : A-3
-------------------------------------------------------------------------------

Migration levels:
    A - Migration that might be run automatically
    B - Migration with code rewrite and a human-days cost up to 5 days
    C - Migration with code rewrite and a human-days cost above 5 days
Technical levels:
    1 = trivial: no stored functions and no triggers
    2 = easy: no stored functions but with triggers, no manual rewriting
    3 = simple: stored functions and/or triggers, no manual rewriting
    4 = manual: no stored functions but with triggers or views with code rewriting
    5 = difficult: stored functions and/or triggers with code rewriting
-------------------------------------------------------------------------------

The tool is written in Perl, so I decided to put and run it inside Docker containers because I did not want to mess up my working machine or some VMs. To get quick turnaround times with my containers I split the process into 3 steps:

  1. Export of the schema and data using a docker container
  2. On success copy the ora2pg project to the host
  3. Import the schema and data using another docker container

The ora2pg migration project is copied to the host machine allowing you to inspect the export and make adjustments if need be. Then you can copy it to the import container or simply bind mount the directory containing the ora2pg project.

The Dockerfile for the export image looks like this

FROM centos:7

# Prepare the system for ora2pg 
RUN yum install -y wget
RUN wget https://yum.oracle.com/RPM-GPG-KEY-oracle-ol7 -O /etc/pki/rpm-gpg/RPM-GPG-KEY-oracle

COPY ol7-temp.repo /etc/yum.repos.d/
RUN yum install -y oraclelinux-release-el7
RUN mv /etc/yum.repos.d/ol7-temp.repo /etc/yum.repos.d/ol7-temp.repo.disabled
RUN yum install -y oracle-instantclient-release-el7
RUN yum install -y oracle-instantclient-basic
RUN yum install -y oracle-instantclient-devel
RUN yum install -y oracle-instantclient-sqlplus

RUN yum install -y perl perl-CPAN perl-DBI perl-Time-HiRes perl-YAML perl-local-lib make gcc
RUN yum install -y perl-App-cpanminus

RUN cpanm CPAN::Config
RUN cpanm CPAN::FirstTime

ENV LD_LIBRARY_PATH=/usr/lib/oracle/21/client64/lib
ENV ORACLE_HOME=/usr/lib/oracle/21/client64

RUN perl -MCPAN -e 'install DBD::Oracle'

COPY ora2pg-21.1.tar.gz /tmp

WORKDIR /tmp
RUN tar zxf ora2pg-21.1.tar.gz && cd ora2pg-21.1 && perl Makefile.PL && make && make install

RUN mkdir -p /naomi/migration
RUN ora2pg --project_base /ora2pg --init_project my-migration
WORKDIR /ora2pg

COPY ora2pg.conf /ora2pg/my-migration/config/

CMD ora2pg -t SHOW_VERSION -c config/ora2pg.conf && ora2pg -t SHOW_TABLE -c config/ora2pg.conf\
 && ora2pg -t SHOW_REPORT --estimate_cost -c config/ora2pg.conf\
 && ./export_schema.sh && ora2pg -t INSERT -o data.sql -b ./data -c ./config/ora2pg.conf
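To give an idea of steps 1 and 2, building and running the export image and copying the resulting ora2pg project back to the host might look like this (image and container names are assumptions for this sketch; the Oracle connection is configured in ora2pg.conf):

# step 1: build and run the export container (no --rm, so we can copy the results out afterwards)
docker build -t ora2pg-export .
docker run --name ora2pg-export ora2pg-export

# step 2: copy the generated ora2pg project (schema, data, reports) to the host for inspection
docker cp ora2pg-export:/ora2pg/my-migration ./ora2pg/
docker rm ora2pg-export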

Once the export looks good you can work on importing everything. The Dockerfile for the import image looks like this:

FROM centos:7

# Prepare the system for ora2pg 
RUN yum install -y wget
RUN wget https://yum.oracle.com/RPM-GPG-KEY-oracle-ol7 -O /etc/pki/rpm-gpg/RPM-GPG-KEY-oracle

COPY ol7-temp.repo /etc/yum.repos.d/
RUN yum install -y oraclelinux-release-el7
RUN mv /etc/yum.repos.d/ol7-temp.repo /etc/yum.repos.d/ol7-temp.repo.disabled
RUN yum install -y oracle-instantclient-release-el7
RUN yum install -y oracle-instantclient-basic
RUN yum install -y oracle-instantclient-devel
RUN yum install -y oracle-instantclient-sqlplus
RUN yum install -y postgresql-server

RUN yum install -y perl perl-CPAN perl-DBI perl-Time-HiRes perl-YAML perl-local-lib make gcc
RUN yum install -y perl-App-cpanminus

RUN cpanm CPAN::Config
RUN cpanm CPAN::FirstTime

ENV LD_LIBRARY_PATH=/usr/lib/oracle/21/client64/lib
ENV ORACLE_HOME=/usr/lib/oracle/21/client64

RUN perl -MCPAN -e 'install DBD::Oracle'

COPY ora2pg-21.1.tar.gz /tmp

WORKDIR /tmp
RUN tar zxf ora2pg-21.1.tar.gz && cd ora2pg-21.1 && perl Makefile.PL && make && make install

# you need to mount the project volume to /ora2pg
WORKDIR /ora2pg

CMD ./import_all.sh -d my_target_db -h $pg_host -U myuser -o myowner

Our target database runs on another host, so you need credentials to authenticate and perform all the required actions. Therefore we run the import container interactively. The PowerShell command for the import looks like this:

docker run -it --rm -e pg_host=192.168.56.1 -v $PWD/ora2pg/my-migration:/ora2pg pgimport

The import script allows you to create the schema, sequences, indexes and constraints and to load the data. I suggest adding the constraints after importing the data – a workflow supported by the import_all.sh script.

That way we got our Oracle database migrated into a PostgreSQL database. Unfortunately, this is only one part of the whole migration. The other part is making changes to the application code to correctly use the new database.

Windows 10 quality of life features

Starting with Windows 10 Microsoft switched from big-bang releases of its operating system to so called rolling releases: They release new features and improvements in regular intervals – once or twice a year – without changing the product name.

The great thing is that users get the improvements made by Microsoft's engineers much sooner than in the past, when you had to wait several years for a big “service pack” to arrive or even a new major release of Windows like 2000, XP, Vista, 7, 8, 8.1 and finally 10 (I am leaving out the dark times on purpose 😉).

The bad thing is that it is harder to see which version or release you are running. Of course there is a (less visible) name for every Windows release. This version or codename sometimes gets mentioned on support pages or in blog posts, because the functionality of Windows 10 can change significantly between these rolling feature updates. And sometimes an app or tool tells you that it needs Windows 10 2004 or higher.

What version of Windows 10 am I running?

I know of two simple ways to find out which version of Windows 10 you are currently running:

  1. Running the tool winver
  2. Opening the Settings -> System -> Info page

Why does it matter?

Another downside is that users often are not aware of new stuff added to their operating system. And Microsoft does an awful job promoting the changes and improvements!

Of course there are announcements about the big things after upgrading your operating system to the next feature level. And Microsoft uses these for marketing its own apps and services. They slap their new Edge browser in your face on every occasion and try to trick you into creating a Microsoft account. It is absolutely not obvious how to use Windows without a Microsoft account like in the decades before: skip the process here, continue without, and risk your life…

On the other hand they really improve their software and slowly but steadily round the rough edges of their system. The UIs for environment variables are finally quite usable.

Now back to the main theme of this post: there are some hidden gems built into Windows 10 that I only learned of lately and that I think are vastly under-advertised – unlike Microsoft's marketing of their big products.

Built-in screen recorder

Ok, many gamers may know it because Windows briefly displays the shortcut Win+G when starting a game. It is not only usable for games: you can record any window, capture application sounds and record your voice. You can easily record your own screencasts and video tutorials using this built-in solution.

Built-in clipboard manager

How often did you wish you could copy multiple items and choose one of the last few copied elements when pasting? While such clipboard managers have been around for a long time and sometimes provide tons of useful features, Windows 10 has a simple one built in. Just press Win+V instead of Ctrl+V when pasting and you will get a list of the copied items to choose from.

Built-in screenshot/snipping tool

Many people may know the old way of taking screenshots using the oddly labeled PrtScr key (sometimes also PrntScrn or simply Print), opening a painting application like MS Paint and pasting the image using Ctrl+V. Well, Microsoft improved this workflow a lot by including a snipping tool that you can activate using Win+Shift+S. This tool lets you select either a rectangular or free-form region, a window or an entire screen to capture. After doing that you get a notification allowing you to make modifications to the capture and save it to disk.

On-screen emoji-keyboard

Just a little helper in these modern social media times is the on-screen emoji keyboard. Using Win+. you can activate it, browse tons of common emojis and enter them into your messages and texts 🐱‍🏍🤘.

Windows Terminal

Ok, this last one is not (yet) built in and mostly interesting for developers and power users. Nevertheless, I think it is noteworthy that Microsoft finally built a capable terminal application with modern features like multiple tabs, full Unicode and font support, a customizable background with blur and the ability to host different shells like the old and trusted command prompt CMD, the newer PowerShell and WSL. You can find it in the Microsoft Store for free.

Conclusion

While Windows 10 releases are more subtle than the big new Windows versions of the past, many things change, both under the hood and user-visible. Every once in a while something you missed for years or installed third-party tools for may be added without you knowing. That is another reason why talking to colleagues and friends and practices like pair programming and brown-bag meetings are so valuable for sharing knowledge and experience.

I hope there is something for you in my findings of hidden Windows gems. If you have some Windows 10 features you discovered and really like, feel free to leave them in the comments. I will gladly try them out!

Using (elastic)search with .NET Core

Many modern applications require powerful search mechanisms to become useful and make their users more productive. That is in large part due to the amount of data available to work with. Thankfully there are already powerful tools to index your data and make it searchable.

One of the most well-known state-of-the-art solutions is ElasticSearch, and it has a .NET API called NEST. While the documentation is ok, I want to give a quick rundown on how to add search capabilities to your .NET Core application. Some ideas are borrowed from the great post “Using Elasticsearch with ASP.NET Core and Docker“.

Getting ElasticSearch running

The easiest way to get up and running with ElasticSearch is to use their docker images and just run the container on your development machine. I like to use a docker compose file like the following to get ElasticSearch and its tooling application Kibana up and running fast:

version: '3.8'
services:
    elasticsearch:
        image: docker.elastic.co/elasticsearch/elasticsearch:7.8.0
        container_name: elastic
        environment:
            - node.name=elastic
            - cluster.initial_master_nodes=elastic
        ports:
            - "9200:9200"
            - "9300:9300"
        volumes:
            - type: bind
              source: ./esdata
              target: /usr/share/elasticsearch/data
        networks:
            - esnetwork
    kibana:
        image: docker.elastic.co/kibana/kibana:7.8.0
        ports:
            - "5601:5601"
        networks:
            - esnetwork
        depends_on:
            - elasticsearch
volumes:
    esdata:
networks:
    esnetwork:
        driver: bridge

After you run it with docker compose you can talk to the search service on http://localhost:9200/ and to the Kibana management GUI on http://localhost:5601/. In the Kibana UI, especially the Dev Tools console is interesting for experimenting with search queries.
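Bringing the stack up and checking that ElasticSearch responds could look like this, for example:

# start ElasticSearch and Kibana in the background
docker-compose up -d

# quick smoke test: the root endpoint returns cluster name and version as JSON
curl http://localhost:9200/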

Access ElasticSearch from your .NET Core app

I find it quite elegant to write an extension method for IServiceCollection that configures the ElasticClient and registers it as a singleton with the dependency injection framework of .NET Core, like so:

    public static class ElasticSearchExtension
    {
        public static void AddElasticsearch(
            this IServiceCollection services, IConfiguration configuration)
        {
            var url = configuration["elasticsearch:url"];
            var settings = new ConnectionSettings(new Uri(url))
                    .DefaultMappingFor<SearchableDevice>(deviceMapping => deviceMapping
                        .IndexName("devices")
                        .IdProperty(dev => dev.Id)
                    )
                ;
            var client = new ElasticClient(settings);
            var response = client.Indices.Create("devices", creator => creator
                .Map<SearchableDevice>(device => device
                    .AutoMap()
                )
            );
            // maybe check response to be safe...
            services.AddSingleton<IElasticClient>(client);
        }
    }

The configuration block looks like the following

  "ElasticSearch": {
    "url": "http://localhost:9200/"
  },

and allows for future extension.

Our search service has to be registered in Startup of course:

public void ConfigureServices(IServiceCollection services)
{
    ...
    services.AddElasticsearch(Configuration);
}

Indexing our objects

In the above example we built a SearchableDevice class whose public properties are going to be indexed by ElasticSearch. The API allows for much more fine-grained control over what and how to index, but we want to keep things simple without having to worry about excluding properties and so on. If you have set it up that way, indexing a SearchableDevice is merely one simple call:

// SearchClient is the injected IElasticClient
// mySearchableDevice is an instance of SearchableDevice
SearchClient.IndexDocument(mySearchableDevice);
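For completeness: a SearchableDevice can be as simple as a plain class carrying the properties you want to have indexed. The following is only an assumption of what such a class might look like; the real one is not shown here:

// hypothetical example – every public property gets auto-mapped and indexed
public class SearchableDevice
{
    public int Id { get; set; }
    public string Name { get; set; }
    public string SerialNumber { get; set; }
    public string Description { get; set; }
}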

Searching for objects

When developing a search query I like to try it in the Kibana Dev Tools and then transform it into a NEST call. A simple query that searches all device fields for values starting with “needle” looks like this:

{
  "query": {
    "multi_match": {
      "type": "phrase_prefix", 
      "fields": ["*"],
      "query": "needle"
    }
  }
}

The nice thing about the Kibana Dev Tools console is that it can display and complete the possible values for fields like “type” in your multi_match query.

That search query can then be translated into a NEST call in a straightforward way and looks like this:

var response = SearchClient.Search<SearchableDevice>(sd => sd
    .Query(q => q
        .MultiMatch(query => query
            .Type(TextQueryType.PhrasePrefix)
            .Fields("*")
            .Query("needle")
        )
    )
);

The search response contains the hits with their source objects and some metadata like the score and result count.

Bear in mind that ElasticSearch only returns the first/best 10 matches by default, so specifying the result size often might be closer to what you want.
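A minimal sketch of requesting more hits, assuming the same SearchClient and SearchableDevice as above:

var response = SearchClient.Search<SearchableDevice>(sd => sd
    .Size(100) // return up to 100 hits instead of the default 10
    .Query(q => q
        .MultiMatch(query => query
            .Type(TextQueryType.PhrasePrefix)
            .Fields("*")
            .Query("needle")
        )
    )
);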

Wrapping it up

Getting started with ElasticSearch in .NET Core does not require too much boilerplate and setup work if you use tools like docker and the NEST library. Making it truly usable and tuning the indexing and querying may require a lot of work to achieve the best results. On the other hand, smaller applications can start off with a simple search setup like the one shown above and simply evolve it when need be.

Docker runtime breaking your container

Docker (or container technology in general) is a great tool to clearly separate the concerns of developers and operations. We use it to simplify various tasks like building projects, packaging them for different platforms and deployment of our software onto the target machines like staging and production servers. All the specifics of the projects are contained and version controlled using the Dockerfiles and compose files.

Our operations team only needs to provide some infrastructure that is able to build container images and run them. This works great most of the time and removes a lot of the friction between developers and operations, where in the past snowflake servers needed to be set up and maintained. Developers often had to ask for specific setups and environments because each project had its own needs. That is all gone with this great container technology. Brave new world. Except when it suddenly does not work anymore.

Help, my deployment container stopped working!

As mentioned above we use docker to deploy our software to the target machines. These machines are often part of a corporate network protected by firewalls and only accessible using VPN. I already talked about how to use openvpn in a docker container for deployment. So the other day I was making a release of one of my long-running projects and pressed the deploy button for that project on our Jenkins continuous integration server.

But instead of letting me lean back, relax and watch the magic work, the deployment failed and the red light lit up! A look into the job output showed that the connection to the target machine was refused. A quick check from the developer machine showed no problem on the receiving side: VPN, target machine and everything else was up and running as usual.

After a quick manual deployment performed with care and administrator hat I went on an investigation journey…

What was going on?

The deployment job had not changed for several months, the container image had not changed and the rest of the infrastructure was working as expected. After more digging, debugging and narrowing down the problem I found out that openvpn did not work in the container anymore because of a strange permission-denied error:

Tue May 19 15:24:14 2020 /sbin/ip addr add dev tap0 1xx.xxx.xxx.xxx/22 broadcast 1xx.xxx.xxx.xxx
Tue May 19 15:24:14 2020 /sbin/ip -6 addr add 2axx:1xxx:4:5xxx:9xx:5xxx:5xxx:4xxx/64 dev tap0
RTNETLINK answers: Permission denied
Tue May 19 15:24:14 2020 Linux ip -6 addr add failed: external program exited with error status: 2
Tue May 19 15:24:14 2020 Exiting due to fatal error

This hot trace made it easy to google for and revealed the following issue on GitHub: https://github.com/dperson/openvpn-client/issues/75. The cause of all the trouble was a change in behaviour of the docker runtime. Our automatic updates had run over the weekend and installed a new package version of the docker runtime (see the excerpt from the apt history log):

containerd.io:amd64 (1.2.13-1, 1.2.13-2)

This subtle change broke my container! After some sacrifices to the whale gods I went on to implement the fix. Fortunately there is an easy way to get it working like before: you just have to pass the following command line switch to docker run and everything works as expected:

--sysctl net.ipv6.conf.all.disable_ipv6=0
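In the context of a deployment container using openvpn, the call might look like this (the image name and the remaining options are placeholders for this sketch):

# re-enable IPv6 inside the container so openvpn can configure the tap device
docker run --rm --cap-add=NET_ADMIN --device /dev/net/tun \
  --sysctl net.ipv6.conf.all.disable_ipv6=0 \
  my-deployment-image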

As nice as containers are for abstracting away hardware, operating systems and other environment details, sometimes the container runtime shines through. It is just a shame that such things happen on minor releases or package release upgrades…

Contain me if you can – the art of tunneling

This blog post briefly explains the basics of SSH tunneling and dives into the detail of reverse tunneling. If you are already familiar with all these concepts, this might be a boring read. If you heard “reverse tunneling” for the first time now, you’re about to discover the dark side of a network administrator’s might.

SSH basics

SSH (secure shell) is a tool to connect to a remote computer. In this blog post, the remote computer is always “the server” and your local computer is “the notebook”. That is just nomenclature; in reality, each computer (or “endpoint”, which is a misleading term, as we will see) can be any device you want, as long as “the server” runs an SSH server daemon and “the notebook” can run an SSH client program. Both computers can live in different local area networks (LAN). As long as it is possible to send network packets to the specific port the SSH server daemon listens on and you have some means to authenticate on the server, you have free rein over both LANs by utilizing SSH tunnels.

The trick to bypass the remote LAN firewall is to have a port forwarded to the server. This allows you to access the server’s port 22 by sending packets to the internet gateway’s worldwide address on a port of your choice, maybe 333. So the forwarding rule is:

gateway:333 -> server:22

If you can comprehend this, you’ve basically grasped the whole foundation of SSH tunneling, even if it had nothing to do with SSH at all.
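In SSH terms, connecting to the server through the forwarded gateway port is simply (user and host names are made up for the examples in this post):

# connect to the server's port 22 via the gateway's forwarded port 333
ssh -p 333 user@gateway.example.com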

SSH tunneling basics

Now that we have a connection to the server, we can enjoy the fun of forward tunneling. You didn't just connect to the server, you opened up every computer in the remote LAN for direct access. By using the SSH connection, you can define an SSH tunnel to connect to another computer in the remote LAN by saying:

localhost:8000 -> another:80

This is what you give your SSH client, but what actually happens is:

localhost:8000 -> gateway:333 -> server:22 -> another:80

You open a local network port (8000) on your notebook that uses your SSH connection over the gateway and the server to send packets to the “another” computer in the remote LAN. Every byte you send to your port 8000 will be sent to port 80 of the “another” computer. Every byte it answers will be relayed all the way back to your client, speaking to your local port 8000. For your client, it will look as if port 80 of “another” and port 8000 of your notebook are the same thing (if you don’t mind a small delay). And the best thing: Your client can be anywhere in your local LAN. Given the right configuration, “notebook2” in your LAN can speak with port 80 of “another” in the remote LAN by using your SSH tunnel. If you’ve ever played the computer game “Portal”, SSH tunneling is like wielding the portal gun in networks, but with unlimited portals at the same time. Yes, you can have many SSH tunnels open at once. That’s pretty neat and powerful, but it’s essentially still only half of the story.
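The SSH client invocation for the forward tunnel described above might look like this:

# forward local port 8000 through the SSH connection to port 80 of "another" in the remote LAN
ssh -p 333 -L 8000:another:80 user@gateway.example.com

# bind the tunnel to all local interfaces instead of just localhost,
# so that other machines in your LAN (like "notebook2") can use it as well
ssh -p 333 -L 0.0.0.0:8000:another:80 user@gateway.example.com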

SSH reverse tunneling basics

What you can also do is to open a network port on the server that acts like a portal in the other direction. You probably noticed the arrows in the tunnel descriptions above. The arrows really give a direction. If your tunnel is

localhost:8000 -> another:80

you can send packets from you to another (and receive their answer), but it doesn’t work the other way around. The software listening at another:80 could not initiate a data transfer to localhost:8000 on its own. So SSH tunnels always have a direction, even if the answers flow back. It’s like a door phone: You, inside the building, can initiate a call to the entrance door phone at any time. The guest in front of the door can speak back, but is not able to “call you back” once you hang up. He can not strike up a conversation with you over the phone. (In reality he still has the doorbell to annoy you. Luckily, the doorbell is missing in networking.)

SSH reverse tunnels change the direction of your tunnels to do exactly that: Give somebody on the remote LAN the ability to send network packets to the server that will get “teleported” into your local LAN. Let’s try this:

localhost:8070 <- server:8080

If you set up a reverse tunnel, any computer that can send packets to the port 8080 on the server will in fact speak with your notebook on port 8070. This is true for the server itself, any computer in the remote LAN and any computer with a SSH tunnel to port 8080 on the server. Yes, you’ve just discovered that you can send a data packet from your LAN to the remote LAN and have it reverse tunneled to a third LAN. And if you haven’t thought about it yet: Every “endpoint” in your three-LAN hop can be a portal to yet another hop, with or without your knowledge. This is why “endpoint” is misleading: It implies an “end” to the data transfer. That might just be your perception of the situation. Every endpoint can be another portal until it’s portals all the way down.
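The reverse tunnel from the example above (localhost:8070 <- server:8080) can be opened like this:

# open port 8080 on the server and "teleport" its traffic to port 8070 on the notebook
ssh -p 333 -R 8080:localhost:8070 user@gateway.example.com

# note: by default sshd binds the reverse-tunneled port to the server's loopback interface only;
# set "GatewayPorts yes" in the server's sshd_config if other machines in the remote LAN
# should be able to reach it directly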

SSH reverse tunneling fun

You can use this back and forth teleporting of network traffic to create all kinds of complexity.

But in our case, we want to use our new might for something fun. Let’s imagine the following scenario:

  • On your notebook, you can start an SSH connection to the server. You can open forward and reverse tunnels.
  • On your notebook, you run a VNC (Virtual Network Computing) server. VNC is basically a simple remote desktop technology that lets you see the display content of a remote computer and control its keyboard and mouse. You running the server means somebody else can connect to your notebook using a VNC client and control it as if it was local (if you can ignore a small delay).
  • Somebody else (let’s call her Eve), on her notebook, can also start an SSH connection to the server. She can open forward and reverse tunnels.
  • On her notebook, she has VNC client software installed. She can connect to and control VNC server machines.

And now the fun begins:

  • VNC servers typically listen on network port 5900. In the client, this is known as “display 0”, which means that a client connects to the port “display + 5900” on the server machine.
  • Using this knowledge, you create a reverse tunnel for your SSH connection to the server: localhost:5900 <- server:5901 (both tunnels of this scenario are spelled out as SSH commands after this list). Notice how we changed the port. This is not necessary, but possible.
  • Now, anybody with direct access to your network port 5900 (in your LAN) or access to the network port 5901 on the server (in the remote LAN) can remote control your desktop.
  • Eve creates a forward tunnel for her SSH connection to the server or any other machine in the remote LAN: localhost:5902 -> server:5901. Notice the port change again. Also just a possibility, not a requirement. Notice also that Eve doesn’t need to connect to the same server as you. Her server just needs to be able to initiate a data transfer with the server. This might be through multiple portals that don’t matter in our story.
  • Now, Eve starts her VNC client and lets it connect to localhost:2 (2 is the display number that gets resolved to local port 5902).
  • Eve can now see your screen, control your inputs and even transfer files (depending on the capabilities of your VNC setup).

This is the point where LAN containment doesn’t really matter anymore. Sure, you cannot reign freely over any machine anywhere, but you can always establish a combination of reverse and forward tunnels to connect two computers as if they were connected by a cable. Which is, in fact, a good analogy to think about SSH tunnels: cables with two different connector types that span between computers.

Wrap up

You are probably aware of the immediate security risks that follow from all that complexity. If you want to mitigate some of the risks, have a look at stunnel. It is effectively a more secure (and constrainable) way to create portals between LANs.

And if you just want to have fun: Please be aware that most of the actions here are probably not legal if done without consent of all participants. Stay safe!

Running a for-loop in a docker container

Docker is a great tool for running services or deployments in a defined and clean environment. Operations just has to provide a host for running the containers and everything else is up to the developers. They can forge their own environment and set up all the prerequisites appropriately for their task. No need to beg the admins to install some tools and configure server machines to fit the needs of a certain project. The developers just define their needs in a Dockerfile.

The Dockerfile contains instructions to set up a container in a domain-specific language (DSL). This language consists of only a couple of commands and is really simple. Like every language out there, it has its own quirks though. I would like to show a solution to one I encountered when trying to deploy several items to a target machine.

The task at hand

We are developing a distributed system for data acquisition, storage and real-time display for one of our clients. We deliver the different parts of the system as deb packages for the target machines running at the customer’s site. Our customer hosts her own Debian repository using an Artifactory server. That all seems simple enough, because Artifactory tells you how to upload new artifacts using curl. So we built a simple container to perform the upload using curl. We tried to supply the required bash shell script to the CMD instruction of the Dockerfile but ran into issues with our first attempts. Here is the naive, dysfunctional Dockerfile:

FROM debian:stretch
RUN DEBIAN_FRONTEND=noninteractive apt-get update && apt-get -y dist-upgrade
RUN DEBIAN_FRONTEND=noninteractive apt-get update && apt-get -y install dpkg curl

# Setup work dir, $PROJECT_ROOT must be mounted as a volume under /elsa
WORKDIR /packages

# Publish the deb-packages to clients artifactory
CMD for package in *.deb; do\n\
  ARCH=`dpkg --info $package | grep "Architecture" | sed "s/Architecture:\ \([[:alnum:]]*\).*/\1/g" | tr -d [:space:]`\n\
  curl -H "X-JFrog-Art-Api:${API_KEY}" -XPUT "${REPOSITORY_URL}/${package};deb.distribution=${DISTRIBUTION};deb.component=non-free;deb.architecture=$ARCH" -T ${package} \n\
  done

The command fails because the shell built-in for does not count as a command and because the shell used to execute the script is sh by default, not bash.

The solution

After some unsuccessful attempts to set the shell to /bin/bash using Docker's SHELL instruction we finally came up with the following solution for an inline shell script in the CMD instruction of a Dockerfile:

FROM debian:stretch
RUN DEBIAN_FRONTEND=noninteractive apt-get update && apt-get -y dist-upgrade
RUN DEBIAN_FRONTEND=noninteractive apt-get update && apt-get -y install dpkg curl

# Setup work dir, $PROJECT_ROOT must be mounted as a volume under /elsa
WORKDIR /packages

# Publish the deb-packages to clients artifactory
CMD /bin/bash -c 'for package in *.deb;\
do ARCH=`dpkg --info $package | grep "Architecture" | sed "s/Architecture:\ \([[:alnum:]]*\).*/\1/g" | tr -d [:space:]`;\
  curl -H "X-JFrog-Art-Api:${API_KEY}" -XPUT "${REPOSITORY_URL}/${package};deb.distribution=${DISTRIBUTION};deb.component=non-free;deb.architecture=$ARCH" -T ${package};\
done'

The trick here is to call bash directly and supply the shell script using the -c parameter. An alternative would have been to extract the script into its own file and call that in the CMD instruction, for example using the shell form so that the environment variables are expanded:

# Publish the deb-packages to clients artifactory
CMD ./deploy.sh "${API_KEY}" "${REPOSITORY_URL}" "${DISTRIBUTION}"

In the above case I prefer the inline solution because the script is short and simple, there is no need for an additional external file and no need to worry about how to pass the parameters to the script.

Using parameterized docker builds

Docker is a great addition to your DevOps toolbox. Sometimes you may want to build several similar images using the same Dockerfile. That’s where parameterized docker builds come in:

They give you the ability to provide configuration values at image build time. Do not confuse this with environment variables when running the container! We used parameterized builds, for example, to build images for creating distribution-specific packages of native libraries and executables.

Our use case

We needed to package some proprietary native programs for several Linux distribution versions, in our case openSUSE Leap. Build ARGs allow us to use a single Dockerfile to build several images and run them to build the packages for each distribution version. This can easily be achieved using so-called multi-configuration jobs in the Jenkins continuous integration server. But let us take a look at the Dockerfile first:

ARG LEAP_VERSION=15.1
FROM opensuse/leap:$LEAP_VERSION
ARG LEAP_VERSION=15.1

# add our target repository
RUN zypper ar http://our-private-rpm-repository.company.org/repo/leap-$LEAP_VERSION/ COMPANY_REPO

# install some pre-requisites
RUN zypper -n --no-gpg-checks refresh && zypper -n install rpm-build gcc-c++

WORKDIR /buildroot

CMD rpmbuild --define "_topdir `pwd`" -bb packaging/project.spec

Notice that the ARG instruction defines a parameter name and a default value. That allows us to configure the image at build time using the --build-arg command line flag. Now we can build a docker image for Leap 15.0 using a command like:

docker build -t project-build --build-arg LEAP_VERSION=15.0 -f docker/Dockerfile .

In our multi-configuration jobs we call docker build with the variable from the axis definition to build several images in one job using the same Dockerfile, as sketched below.
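In the build step of such a multi-configuration job this could look like the following shell commands, where LEAP_VERSION is the axis variable provided by Jenkins (the image tag and mount point are assumptions for this sketch):

# build one image per axis value, tagged with the distribution version
docker build -t project-build:leap-$LEAP_VERSION --build-arg LEAP_VERSION=$LEAP_VERSION -f docker/Dockerfile .

# run the container to build the packages for this distribution version
docker run --rm -v $PWD:/buildroot project-build:leap-$LEAP_VERSION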

A gotcha

As you may have noticed, we have put the same ARG instruction twice in the Dockerfile: once before the FROM instruction and once after it. This is because the build args are cleared after each FROM instruction, so beware in multi-stage builds, too. For more information see the docker documentation and this discussion. This cost us quite some time, as it was not as clearly documented at the time.

Conclusion

Parameterized builds allow for easy configuration of your Docker images at image build time. This increases flexibility and reduces duplication like maintaining several almost identical Dockerfiles. For runtime container configuration, provide environment variables to the docker run command.

Lombok-Tooling surprise

Some months ago we took over development and maintenance of a Java EE-based web application. The project has OK code for the most part and uses mostly standard libraries from the Java ecosystem. One of them is Lombok, which strives to reduce the boilerplate code needed and to make the code more readable and concise.

Fortunately there is a plugin for IntelliJ, our favourite Java IDE, that understands Lombok and allows for easy navigation and code hints. For example, you can jump from getMyField() to the Lombok-annotated backing field of the getter, and so on.

This sounds very good, but one day we were debugging some weird behaviour. An abstract class contained a special implementation of a Map as its field and was annotated with @Getter and @Setter. But somehow the type and contents of the Map changed without the setter being called.

What was happening here? After quite some time spent digging in the debugger we noticed that the getter was overridden in a subclass. Normally, IntelliJ shows an icon with navigation options beside overridden/overriding methods. Unfortunately for us, not for Lombok annotations!

Consider the following code:

public class SuperClass {

    @Getter
    private List<String> strings = new ArrayList<>();

    public List<Integer> getInts() {
        return new ArrayList<>();
    }
}

public class NotSoSuperClass extends SuperClass {
    @Override
    public List<String> getStrings() {
        return Arrays.asList("Many", "Strings");
    }

    @Override
    public List<Integer> getInts() {
        return Arrays.asList(1, 2, 3);
    }
}

The corresponding code in IntelliJ looks like this:

[Screenshot: the example classes opened in IntelliJ]

Notice that IntelliJ puts a nice little icon next to the line number wherever a class or a method is subclassed or overridden – but not so for the Lombok getter. This tiny detail led to quite a surprise and cost us some hours.

Of course you can argue that the code design is broken, but hey, that was the state of the code, and the tools are there to help you discover such weird quirks.

We opened an issue for the Lombok IntelliJ plugin, so maybe it will be enhanced to provide such additional tooling information and be on par with plain old Java code.

Debugging Web Pages for iOS

Web developers use browser tools like the Web Inspector in Chrome and Safari or the Developer Tools in Firefox to develop, debug and test web pages. In Safari you have to enable the developer menu first: Safari -> Preferences… -> Advanced -> Show Develop menu in menu bar

All these tools offer modes where you can display the page layout at various screen sizes. In Safari this is called the Responsive Design Mode and can be found in the Develop menu. This is essential for checking the page layout for mobile devices. There are however some differences in behaviour, which can only be tested on the real devices or in a simulator. For example, dropdown menus can trigger a wheel selector on mobile devices, while the desktop browser renders them as regular dropdown menus, even in responsive design mode.

Here are some tips for debugging web pages for iOS devices in the simulator:

Using Web Inspector with the iOS Simulator

Within the mobile Safari browser you can’t simply open the Web Inspector console as you would do when developing a web page using a desktop browser. But you can connect the Web Inspector of your desktop Safari to the mobile Safari browser instance running in the iOS simulator:

  • Start the iOS simulator from Xcode: Xcode -> Open Developer Tool -> Simulator
  • Select the desired device: Hardware -> Device -> e.g. iOS 12.1 -> iPhone SE
  • Open the web page in Safari within the simulator
  • Open the desktop version of Safari

In Safari’s Develop menu the simulator now shows up as a device, e.g. “Simulator – iPhone SE – iOS 12.1 (16B91)”. The web page you opened in the simulator should be listed as a submenu item. If you click this menu item the Web Inspector opens. It is now connected to the simulated Safari instance and you can debug the mobile variant of your web page.

Workaround for Clearing the Cache

When using a desktop web browser you can easily bypass the local browser cache when reloading a web page by holding the shift key while pressing the reload button. Sometimes this is necessary to see changes take effect while developing a web application. However, this doesn’t work in Safari running within the iOS simulator. There is a little workaround: you can open the web page in an incognito tab, which means the cache is cleared each time you close the tab and open the page again in a new incognito tab.