Using (elastic)search with .NET Core

Many modern applications require powerful search mechanisms to become useful and make their users more productive. That is in large part due to the amount of data available to work with. Thankfully there are already powerful tools to index your data and make it searchable.

One of the most well-known state-of-the-art solutions is ElasticSearch, and it has a .NET client library called NEST. While the documentation is OK, I want to give a quick rundown on how to add search capabilities to your .NET Core application. Some ideas are borrowed from the great post “Using Elasticsearch with ASP.NET Core and Docker”.

Getting ElasticSearch running

The easiest way to get up and running with elasticsearch is to use their Docker images and just run the container on your development machine. I like to use a docker compose file like the following to get elasticsearch and its tooling application Kibana up and running fast:

version: '3.8'
services:
    elasticsearch:
        image: docker.elastic.co/elasticsearch/elasticsearch:7.8.0
        container_name: elastic
        environment:
            - node.name=elastic
            - cluster.initial_master_nodes=elastic
        ports:
            - "9200:9200"
            - "9300:9300"
        volumes:
            - type: bind
              source: ./esdata
              target: /usr/share/elasticsearch/data
        networks:
            - esnetwork
    kibana:
        image: docker.elastic.co/kibana/kibana:7.8.0
        ports:
            - "5601:5601"
        networks:
            - esnetwork
        depends_on:
            - elasticsearch
volumes:
    esdata:
networks:
    esnetwork:
        driver: bridge

After you run it with docker compose you can talk to the search service on http://localhost:9200/ and the Kibana management GUI on http://localhost:5601/. In the Kibana UI the Dev Tools console in particular is interesting for experimenting with search queries.
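For a quick check that everything came up, you can start the stack and ask Elasticsearch for its cluster health (a small sketch using the ports mapped above):

docker-compose up -d
curl "http://localhost:9200/_cluster/health?pretty"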

Access ElasticSearch from your .NET Core app

I find it quite elegant to write an extension method for IServiceCollection that configures the ElasticClient and registers it as a singleton with the .NET Core dependency injection framework, like so:

    public static class ElasticSearchExtension
    {
        public static void AddElasticsearch(
            this IServiceCollection services, IConfiguration configuration)
        {
            var url = configuration["elasticsearch:url"];
            var settings = new ConnectionSettings(new Uri(url))
                    .DefaultMappingFor<SearchableDevice>(deviceMapping => deviceMapping
                        .IndexName("devices")
                        .IdProperty(dev => dev.Id)
                    )
                ;
            var client = new ElasticClient(settings);
            var response = client.Indices.Create("devices", creator => creator
                .Map<SearchableDevice>(device => device
                    .AutoMap()
                )
            );
            // maybe check response to be safe...
            services.AddSingleton<IElasticClient>(client);
        }
    }

The configuration block looks like the following

  "ElasticSearch": {
    "url": "http://localhost:9200/"
  },

and allows for future extension.

Our search service has to be registered in Startup of course:

public void ConfigureServices(IServiceCollection services)
{
    ...
    services.AddElasticsearch(Configuration);
}
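Afterwards, any service or controller can simply ask for the client via constructor injection. A minimal sketch (the DeviceSearchService name is made up for illustration):

public class DeviceSearchService
{
    // the IElasticClient registered as a singleton above
    private readonly IElasticClient SearchClient;

    public DeviceSearchService(IElasticClient searchClient)
    {
        SearchClient = searchClient;
    }
}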

Indexing our objects

In the above example we built a SearchableDevice class whose public properties are going to be indexed by ElasticSearch. The API allows for much more fine-grained control over what and how to index, but we want to keep things simple without having to worry about excluding properties and so on. If you have set it up that way, indexing a SearchableDevice is merely one simple call:

// SearchClient is the injected IElasticClient
// mySearchableDevice is an instance of SearchableDevice
SearchClient.IndexDocument(mySearchableDevice);
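For reference, SearchableDevice itself can stay a plain class whose public properties get picked up by AutoMap. A hypothetical minimal version might look like this:

public class SearchableDevice
{
    // used as the document id via the IdProperty mapping above
    public Guid Id { get; set; }

    public string Name { get; set; }
    public string Description { get; set; }
}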

Searching for objects

When developing a search query I like to try it out in the Kibana Dev Tools first and then transform it into a NEST call. A simple query that looks in all device properties for values starting with “needle” looks like this:

{
  "query": {
    "multi_match": {
      "type": "phrase_prefix", 
      "fields": ["*"],
      "query": "needle"
    }
  }
}

The nice thing about the Kibana Dev Tools console is that it can display and auto-complete the possible values for fields like “type” in your multi_match query.

That search query can then be translated into a NEST call in a straightforward way and looks like this:

var response = SearchClient.Search<SearchableDevice>(sd => sd
    .Query(q => q
        .MultiMatch(query => query
            .Type(TextQueryType.PhrasePrefix)
            .Fields("*")
            .Query("needle")
        )
    )
);

The search response contains the hits with their source objects and some metadata like the score and result count.

Bear in mind that ElasticSearch only returns the first/best 10 matches by default, so explicitly specifying the result size is often closer to what you want.
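In NEST this is just another call on the search descriptor; a sketch returning up to 100 hits:

var response = SearchClient.Search<SearchableDevice>(sd => sd
    .Size(100) // override the default of 10 results
    .Query(q => q
        .MultiMatch(query => query
            .Type(TextQueryType.PhrasePrefix)
            .Fields("*")
            .Query("needle")
        )
    )
);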

Wrapping it up

Getting started with ElasticSearch in .NET Core does not require too much boilerplate and setup work if you use tools like Docker and the NEST library. Making it really usable and tuning the indexing and querying may require a lot of work to achieve the best results. On the other hand, smaller applications can start off with a simple search setup like the one shown above and evolve it when need be.

Docker runtime breaking your container

Docker (or container technology in general) is a great tool to clearly separate the concerns of developers and operations. We use it to simplify various tasks like building projects, packaging them for different platforms and deployment of our software onto the target machines like staging and production servers. All the specifics of the projects are contained and version controlled using the Dockerfiles and compose files.

Our operations team only needs to provide some infrastructure able to build container images and run them. This works great most of the time and removes a lot of the friction between developers and operations, where in the past snowflake servers needed to be set up and maintained. Developers often had to ask for specific setups and environments because each project had its own needs. That is all gone with this great container technology. Brave new world. Except when it suddenly does not work anymore.

Help, my deployment container stopped working!

As mentioned above, we use Docker to deploy our software to the target machines. These machines are often part of a corporate network protected by firewalls and only accessible using VPN. I already talked about how to use openvpn in a docker container for deployment. So the other day I was making a release of one of my long-running projects and pressed the deploy button for that project on our Jenkins continuous integration server.

But instead of letting me just lean back, relax and watch the magic work, the deployment failed and the red light lit up! A look into the job output showed that the connection to the target machine was refused. A quick check from the developer machine showed no problem on the receiving side: VPN, target machine and everything else was up and running as usual.

After a quick manual deployment, performed with care and my administrator hat on, I went on an investigation journey…

What was going on?

The deployment job had not changed for several months, the container image had not changed and the rest of the infrastructure was working as expected. After more digging, debugging and narrowing down the problem I found out that openvpn no longer worked in the container because of a strange permission denied error:

Tue May 19 15:24:14 2020 /sbin/ip addr add dev tap0 1xx.xxx.xxx.xxx/22 broadcast 1xx.xxx.xxx.xxx
Tue May 19 15:24:14 2020 /sbin/ip -6 addr add 2axx:1xxx:4:5xxx:9xx:5xxx:5xxx:4xxx/64 dev tap0
RTNETLINK answers: Permission denied
Tue May 19 15:24:14 2020 Linux ip -6 addr add failed: external program exited with error status: 2
Tue May 19 15:24:14 2020 Exiting due to fatal error

This hot trace made it easy to google for and revealed the following issue on github: https://github.com/dperson/openvpn-client/issues/75. The cause of all the trouble was a change in behaviour of the docker runtime. Our automatic updates had run over the weekend and installed a new package version of the docker runtime (see the excerpt from the apt history log):

containerd.io:amd64 (1.2.13-1, 1.2.13-2)

This subtle change broke my container! After some sacrifices to the whale gods I went on to implement the fix. Fortunately there is an easy way to get it working like before: you just have to pass the following command line switch to docker run and everything works as expected:

--sysctl net.ipv6.conf.all.disable_ipv6=0
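In a docker run invocation that could look like this (a sketch; the image name is a placeholder and all other options your VPN container needs stay as before):

docker run --sysctl net.ipv6.conf.all.disable_ipv6=0 my-deployment-image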

As nice as containers are for abstracting away hardware, operating systems and other environment details, sometimes the container runtime shines through. It is just a shame that such things happen on minor releases or package release upgrades…

Contain me if you can – the art of tunneling

This blog post briefly explains the basics of SSH tunneling and dives into the detail of reverse tunneling. If you are already familiar with all these concepts, this might be a boring read. If you heard “reverse tunneling” for the first time now, you’re about to discover the dark side of a network administrator’s might.

SSH basics

SSH (secure shell) is a tool to connect to a remote computer. In this blog post, the remote computer is always “the server” and your local computer is “the notebook”. That’s just nomenclature; in reality, each computer (or “endpoint”, which is a misleading term, as we will see) can be any device you want, as long as “the server” runs an SSH server daemon and “the notebook” can run an SSH client program. Both computers can live in different local area networks (LAN). As long as it is possible to send network packets to the specific port the SSH server daemon listens on and you have some means to authenticate on the server, you have free rein over both LANs by utilizing SSH tunnels.

The trick to bypass the remote LAN firewall is to have a port forwarded to the server. This allows you to access the server’s port 22 by sending packets to the internet gateway’s worldwide address on a port of your choice, maybe 333. So the forwarding rule is:

gateway:333 -> server:22

If you can comprehend this, you’ve basically grasped the whole foundation of SSH tunneling, even if it had nothing to do with SSH at all.
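In SSH terms, connecting through that forwarded port is nothing special (a sketch; user and host name are placeholders):

ssh -p 333 user@gateway.example.com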

SSH tunneling basics

Now that we have a connection to the server, we can enjoy the fun of forward tunneling. You didn’t just connect to the server, you opened up every computer in the remote LAN for direct access. By using the SSH connection, you can define a SSH tunnel to connect to another computer in the remote LAN by saying:

localhost:8000 -> another:80

This is what you give your SSH client, but what actually happens is:

localhost:8000 -> gateway:333 -> server:22 -> another:80

You open a local network port (8000) on your notebook that uses your SSH connection over the gateway and the server to send packets to the “another” computer in the remote LAN. Every byte you send to your port 8000 will be sent to port 80 of the “another” computer. Every byte it answers will be relayed all the way back to your client, speaking to your local port 8000. For your client, it will look as if port 80 of “another” and port 8000 of your notebook are the same thing (if you don’t mind a small delay). And the best thing: Your client can be anywhere in your local LAN. Given the right configuration, “notebook2” in your LAN can speak with port 80 of “another” in the remote LAN by using your SSH tunnel. If you’ve ever played the computer game “Portal”, SSH tunneling is like wielding the portal gun in networks, but with unlimited portals at the same time. Yes, you can have many SSH tunnels open at once. That’s pretty neat and powerful, but it’s essentially still only half of the story.
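On the command line such a forward tunnel is opened with the -L option of the ssh client (a sketch using the example ports and placeholder names from above):

ssh -p 333 -L 8000:another:80 user@gateway.example.com

Note that the host name “another” is resolved on the server side of the tunnel, not on your notebook.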

SSH reverse tunneling basics

What you can also do is to open a network port on the server that acts like a portal in the other direction. You probably noticed the arrows in the tunnel descriptions above. The arrows really give a direction. If your tunnel is

localhost:8000 -> another:80

you can send packets from you to another (and receive their answer), but it doesn’t work the other way around. The software listening at another:80 could not initiate a data transfer to localhost:8000 on its own. So SSH tunnels always have a direction, even if the answers flow back. It’s like a door phone: You, inside the building, can initiate a call to the entrance door phone at any time. The guest in front of the door can speak back, but is not able to “call you back” once you hang up. He can not strike up a conversation with you over the phone. (In reality he still has the doorbell to annoy you. Luckily, the doorbell is missing in networking.)

SSH reverse tunnels change the direction of your tunnels to do exactly that: Give somebody on the remote LAN the ability to send network packets to the server that will get “teleported” into your local LAN. Let’s try this:

localhost:8070 <- server:8080

If you set up a reverse tunnel, any computer that can send packets to the port 8080 on the server will in fact speak with your notebook on port 8070. This is true for the server itself, any computer in the remote LAN and any computer with a SSH tunnel to port 8080 on the server. Yes, you’ve just discovered that you can send a data packet from your LAN to the remote LAN and have it reverse tunneled to a third LAN. And if you haven’t thought about it yet: Every “endpoint” in your three-LAN hop can be a portal to yet another hop, with or without your knowledge. This is why “endpoint” is misleading: It implies an “end” to the data transfer. That might just be your perception of the situation. Every endpoint can be another portal until it’s portals all the way down.
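Reverse tunnels are opened with the -R option (again a sketch with placeholder names):

ssh -p 333 -R 8080:localhost:8070 user@gateway.example.com

Keep in mind that sshd binds remote-forwarded ports to the server’s loopback interface by default; if other machines in the remote LAN should reach server:8080 directly, the GatewayPorts option has to be enabled in the server’s sshd_config.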

SSH reverse tunneling fun

You can use this back and forth teleporting of network traffic to create all kinds of complexity.

But in our case, we want to use our new might for something fun. Let’s imagine the following scenario:

  • On your notebook, you can start a SSH connection to the server. You can open forward and reverse tunnels.
  • On your notebook, you run a VNC (Virtual Network Computing) server. VNC is basically a simple remote desktop technology that lets you see the display content of a remote computer and control its keyboard and mouse. You running the server means somebody else can connect to your notebook using a VNC client and control it as if it was local (if you can ignore a small delay).
  • Somebody else (let’s call her Eve), on her notebook, can also start a SSH connection to the server. She can open forward and reverse tunnels.
  • On her notebook, she has a VNC client software installed. She can connect and control VNC server machines.

And now the fun begins:

  • VNC servers typically listen on network port 5900. In the client, this is known as “display 0”, which means that a client connects to the port “display + 5900” on the server machine.
  • Using this knowledge, you create a reverse tunnel for your SSH connection to the server: localhost:5900 <- server:5901. Notice how we changed the port. This is not necessary, but possible.
  • Now, anybody with direct access to your network port 5900 (in your LAN) or access to the network port 5901 on the server (in the remote LAN) can remote control your desktop.
  • Eve creates a forward tunnel for her SSH connection to the server or any other machine in the remote LAN: localhost:5902 -> server:5901. Notice the port change again. Also just a possibility, not a requirement. Notice also that Eve doesn’t need to connect to the same server as you. Her server just needs to be able to initiate a data transfer with the server. This might be through multiple portals that don’t matter in our story.
  • Now, Eve starts her VNC client and lets it connect to localhost:2 (2 is the display number that gets resolved to local port 5902).
  • Eve can now see your screen, control your inputs and even transfer files (depending on the capabilities of your VNC setup).

This is the point where LAN containment doesn’t really matter anymore. Sure, you cannot reign freely over any machine anywhere, but you can always establish a combination of reverse and forward tunnels to connect two computers as if they were connected by a cable. Which is, in fact, a good analogy to think about SSH tunnels: cables with two different connector types that span between computers.
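Spelled out as commands, the VNC scenario above boils down to two tunnels (a sketch; user and host names are placeholders):

# you, on your notebook: expose your local VNC server (5900) as port 5901 on the server
ssh -R 5901:localhost:5900 you@server.example.com

# Eve, on her notebook: make the server's port 5901 available as her local port 5902
ssh -L 5902:localhost:5901 eve@server.example.com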

Wrap up

You are probably aware of the immediate security risks that follow from all that complexity. If you want to mitigate some of the risks, have a look at stunnel. It is effectively a more secure (and constrainable) way to create portals between LANs.

And if you just want to have fun: Please be aware that most of the actions here are probably not legal if done without consent of all participants. Stay safe!

Running a for-loop in a docker container

Docker is a great tool for running services or deployments in a defined and clean environment. Operations just has to provide a host for running the containers and everything else is up to the developers. They can forge their own environment and set up all the prerequisites appropriately for their task. No need to beg the admins to install some tools and configure server machines to fit the needs of a certain project. The developers just define their needs in a Dockerfile.

The Dockerfile contains instructions to set up a container in a domain specific language (DSL). This language consists of only a couple of commands and is really simple. Like every language out there, it has its own quirks though. I would like to show a solution to one I encountered when trying to deploy several items to a target machine.

The task at hand

We are developing a distributed system for data acquisition, storage and real-time display for one of our clients. We deliver the different parts of the system as deb packages for the target machines running at the customer’s site. Our customer hosts her own Debian repository using an Artifactory server. That all seems simple enough, because Artifactory tells you how to upload new artifacts using curl. So we built a simple container to perform the upload using curl. We tried to supply the required bash shell script to the CMD instruction of the Dockerfile but ran into issues with our first attempts. Here is the naive, dysfunctional Dockerfile:

FROM debian:stretch
RUN DEBIAN_FRONTEND=noninteractive apt-get update && apt-get -y dist-upgrade
RUN DEBIAN_FRONTEND=noninteractive apt-get update && apt-get -y install dpkg curl

# Setup work dir, $PROJECT_ROOT must be mounted as a volume under /elsa
WORKDIR /packages

# Publish the deb-packages to clients artifactory
CMD for package in *.deb; do\n\
  ARCH=`dpkg --info $package | grep "Architecture" | sed "s/Architecture:\ \([[:alnum:]]*\).*/\1/g" | tr -d [:space:]`\n\
  curl -H "X-JFrog-Art-Api:${API_KEY}" -XPUT "${REPOSITORY_URL}/${package};deb.distribution=${DISTRIBUTION};deb.component=non-free;deb.architecture=$ARCH" -T ${package} \n\
  done

The command fails because the for shell built-in instruction does not count as a command and because the shell used to execute the script is sh by default and not bash.

The solution

After some unsuccessful attempts to set the shell to /bin/bash using Docker’s SHELL instruction, we finally came up with the following solution for an inline shell script in the CMD instruction of a Dockerfile:

FROM debian:stretch
RUN DEBIAN_FRONTEND=noninteractive apt-get update && apt-get -y dist-upgrade
RUN DEBIAN_FRONTEND=noninteractive apt-get update && apt-get -y install dpkg curl

# Setup work dir, $PROJECT_ROOT must be mounted as a volume under /elsa
WORKDIR /packages

# Publish the deb-packages to clients artifactory
CMD /bin/bash -c 'for package in *.deb;\
do ARCH=`dpkg --info $package | grep "Architecture" | sed "s/Architecture:\ \([[:alnum:]]*\).*/\1/g" | tr -d [:space:]`;\
  curl -H "X-JFrog-Art-Api:${API_KEY}" -XPUT "${REPOSITORY_URL}/${package};deb.distribution=${DISTRIBUTION};deb.component=non-free;deb.architecture=$ARCH" -T ${package};\
done'

The trick here is to call bash directly and supply the shell script using the -c parameter. An alternative would have been to extract the script into its own file and call that in the CMD instruction, using the shell form so that the environment variables still get expanded:

# Publish the deb-packages to clients artifactory
CMD ./deploy.sh "${API_KEY}" "${REPOSITORY_URL}" "${DISTRIBUTION}"

In the above case I prefer the inline solution: the script is short and simple, there is no need for an additional external file and no worrying about how to pass the parameters to the script.
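For completeness, running the resulting image could look roughly like this (a sketch; the values and the image name are placeholders):

docker run --rm \
  -e API_KEY=your-artifactory-api-key \
  -e REPOSITORY_URL=https://artifactory.example.com/artifactory/our-repo \
  -e DISTRIBUTION=stretch \
  -v $(pwd)/packages:/packages \
  package-upload-image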

Using parameterized docker builds

Docker is a great addition to your DevOps toolbox. Sometimes you may want to build several similar images using the same Dockerfile. That’s where parameterized docker builds come in:

They give you the ability to provide configuration values at image build time. Do not confuse this with environment variables when running the container! We used parameterized builds, for example, to build images for creating distribution-specific packages of native libraries and executables.

Our use case

We needed to package some proprietary native programs for several Linux distribution versions, in our case openSUSE Leap. Build ARGs allow us to use a single Dockerfile but build several images and run them to build the packages for each distribution version. This can easily be achieved using so-called multi-configuration jobs in the Jenkins continuous integration server. But let us take a look at the Dockerfile first:

ARG LEAP_VERSION=15.1
FROM opensuse/leap:$LEAP_VERSION
ARG LEAP_VERSION=15.1

# add our target repository
RUN zypper ar http://our-private-rpm-repository.company.org/repo/leap-$LEAP_VERSION/ COMPANY_REPO

# install some pre-requisites
RUN zypper -n --no-gpg-checks refresh && zypper -n install rpm-build gcc-c++

WORKDIR /buildroot

CMD rpmbuild --define "_topdir `pwd`" -bb packaging/project.spec

Notice that the ARG instruction defines a parameter name and a default value. That allows us to configure the image at build time using the --build-arg command line flag. Now we can build a docker image for Leap 15.0 using a command like:

docker build -t project-build --build-arg LEAP_VERSION=15.0 -f docker/Dockerfile .

In our multi-configuration jobs we call docker build with the variable from the axis definition to build several images in one job using the same Dockerfile.
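In such a job the axis variable simply replaces the hard-coded version; a sketch of the shell build step, assuming the axis is called LEAP_VERSION:

docker build -t project-build:leap-$LEAP_VERSION --build-arg LEAP_VERSION=$LEAP_VERSION -f docker/Dockerfile .
docker run --rm -v $(pwd):/buildroot project-build:leap-$LEAP_VERSION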

A gotcha

As you may have noticed, we have put the same ARG instruction twice in the Dockerfile: once before the FROM instruction and another time after it. This is because the build args are cleared after each FROM instruction, so beware in multi-stage builds, too. For more information see the docker documentation and this discussion. This cost us quite some time as it was not clearly documented at the time.

Conclusion

Parameterized builds allow for easy configuration of your Docker images at image build time. This increases flexibility and reduces duplication like maintaining several almost identical Dockerfiles. For runtime container configuration, provide environment variables to the docker run command.

Lombok-Tooling surprise

Some months ago we took over development and maintenance of a Java EE-based web application. The project has OK code for the most part and uses mostly standard libraries from the Java ecosystem. One of them is Lombok, which strives to reduce the boilerplate code needed and to make the code more readable and concise.

Fortunately there is a plugin for IntelliJ, our favourite Java IDE, that understands Lombok and allows for easy navigation and code hints. For example you can jump from getMyField() to the Lombok-annotated backing field of the getter and so on.

This sounds very good, but one day we were debugging some weird behaviour. An abstract class contained a special implementation of a Map as its field and was annotated with @Getter and @Setter. But somehow the type and contents of the Map changed without the setter being called.

What was happening here? After quite some time spent digging in the debugger we noticed that the getter was overridden in a subclass. Normally, IntelliJ shows an icon with navigation options beside overridden/overriding methods. Unfortunately for us, not for Lombok annotations!

Consider the following code:

public class SuperClass {

    @Getter
    private List<String> strings = new ArrayList<>();

    public List<Integer> getInts() {
        return new ArrayList<>();
    }
}

public class NotSoSuperClass extends SuperClass {
    @Override
    public List<String> getStrings() {
        return Arrays.asList("Many", "Strings");
    }

    @Override
    public List<Integer> getInts() {
        return Arrays.asList(1, 2, 3);
    }
}

The corresponding code in IntelliJ looks like this:

[Screenshot: lombok-plugin-surprise-demo – IntelliJ shows the override icon for the plain getter but not for the Lombok-generated one]

Notice that IntelliJ puts a nice little icon next to the line number where a class or method is subclassed/overridden. But not so for the Lombok getter. This tiny detail led to quite a surprise and cost us some hours.

Of course you can argue that the code design is broken, but hey, that was the state of the code and the tools are there to help you discover such weird quirks.

We opened an issue for the Lombok IntelliJ plugin, so maybe it will be enhanced to provide this additional tooling information and be on par with plain old Java code.

 

Debugging Web Pages for iOS

Web developers use browser tools like the Web Inspector in Chrome and Safari or the Developer Tools in Firefox to develop, debug and test web pages. In Safari you have to enable the Develop menu first: Safari -> Preferences… -> Advanced -> Show Develop menu in menu bar

All these tools offer modes where you can display the page layout at various screen sizes. In Safari this is called the Responsive Design Mode and can be found in the Develop menu. This is essential for checking the page layout for mobile devices. There are however some differences in behaviour, which can only be tested on the real devices or in a simulator. For example, dropdown menus can trigger a wheel selector on mobile devices, while the desktop browser renders them as regular dropdown menus, even in responsive design mode.

Here are some tips for debugging web pages for iOS devices in the simulator:

Using Web Inspector with the iOS Simulator

Within the mobile Safari browser you can’t simply open the Web Inspector console as you would do when developing a web page using a desktop browser. But you can connect the Web Inspector of your desktop Safari to the mobile Safari browser instance running in the iOS simulator:

  • Start the iOS simulator from Xcode: Xcode -> Open Developer Tool -> Simulator
  • Select the desired device: Hardware -> Device -> e.g. iOS 12.1 -> iPhone SE
  • Open the web page in Safari within the simulator
  • Open the desktop version of Safari

In Safari’s Develop menu the simulator now shows up as a device, e.g. “Simulator – iPhone SE – iOS 12.1 (16B91)”. The web page you opened in the simulator should be listed as submenu item. If you click this menu item the Web Inspector opens. It’s now connected to the simulated Safari instance and you can debug the mobile variant of your web page.

Workaround for Clearing the Cache

When using a desktop web browser one can easily bypass the local browser cache when reloading a web page by holding the shift key while pressing the reload button. Sometimes this is necessary to see changes take effect while developing a web application. However, this doesn’t work in Safari running within the iOS Simulator. There’s a little workaround: you can open the web page in a private tab, which means the cache is cleared each time you close the tab and re-open the page in a new private tab.

Automated vulnerability checking of software dependencies

The OWASP organization is focused on improving the security of software systems and regularly publishes lists of security risks, such as the OWASP Top 10 Most Critical Web Application Security Risks or the Mobile Top 10 Security Risks. Among these are common attack vectors like command injections, buffer overruns, stack buffer overflow attacks and SQL injections.

When developing software you have to be aware of these in order to avoid and prevent them. If your project depends on third-party software components, such as open source libraries, you have to assess those dependencies for security risks as well. It is not enough to do this just once. You have to check them regularly and watch for any known, publicly disclosed, vulnerabilities in these dependencies.

Publicly known information-security vulnerabilities are tracked according to the Common Vulnerabilities and Exposures (CVE) standard. Each vulnerability is assigned an ID, for example CVE-2009-2704, and published in the National Vulnerability Database (NVD) by the U.S. government. Here’s an example of such an entry.

Automated Dependency Checking

There are tools and services to automatically check the dependencies of your project against these publicly known vulnerabilities, for example the OWASP Dependency Check or the Sonatype OSS Index. In order to use them your project has to use a dependency manager, for example Maven in the Java world or NuGet in the .NET ecosystem.

Here’s how to integrate the OWASP Dependency Check into your Maven based project, by adding the following plugin to the pom.xml file:

<plugin>
  <groupId>org.owasp</groupId>
  <artifactId>dependency-check-maven</artifactId>
  <version>5.0.0-M1</version>
  <executions>
    <execution>
      <goals>
        <goal>check</goal>
      </goals>
    </execution>
  </executions>
</plugin>

When you run the Maven goal dependency-check:check you might see an output like this:

One or more dependencies were identified with known vulnerabilities in Project XYZ:

jboss-j2eemgmt-api_1.1_spec-1.0.1.Final.jar (pkg:maven/org.jboss.spec.javax.management.j2ee/jboss-j2eemgmt-api_1.1_spec@1.0.1.Final, cpe:2.3:a:sun:j2ee:1.0.1:*:*:*:*:*:*:*) : CVE-2009-2704, CVE-2009-2705
...

The output tells you which version of a dependency is affected and the CVE ID. Now you can use this ID to look the vulnerability up in the NVD database, inform yourself about its potential dangers and take action, like updating the dependency if there is a newer version that addresses the vulnerability.
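If the check runs as part of a CI build you may also want it to break the build automatically when severe findings show up. The plugin supports a CVSS threshold for that; a sketch for the plugin’s configuration section:

<configuration>
  <!-- fail the build for vulnerabilities with a CVSS score of 8 or higher -->
  <failBuildOnCVSS>8</failBuildOnCVSS>
</configuration>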

Using Ansible vault for sensitive data

We like using ansible for our automation because it has minimal requirements for the target machines and the surrounding infrastructure: you need nothing more than ssh and python with some libraries. In contrast to alternatives like puppet and chef you do not need special server and client programs running all the time and communicating with each other.

The problem

When setting up remote machines and deploying software systems for your customers you will often have to use sensitive data like private keys, passwords and maybe machine or account names. On the one hand you want to put your automation scripts and their data under version control and use them from your continuous integration infrastructure. On the other hand you do not want to spread the secrets of your customers all around your infrastructure and definitely never ever put them in your source code repository.

The solution

Ansible supports encrypting sensitive data and using it in playbooks with the concept of vaults and the accompanying commands. Setting it up requires some work but then usage is straightforward and works seamlessly.

The high-level conversion process is the following:

  1. create a directory for the data to substitute on a host or group basis
  2. extract all sensitive variables into vars.yml
  3. copy vars.yml to vault.yml
  4. prefix variables in vault.yml with vault_
  5. use vault variables in vars.yml

Then you can encrypt vault.yml using the ansible-vault command providing a password.

All you have to do subsequently is to provide the vault password along with your usual playbook commands. Decryption for playbook execution is done transparently on-the-fly for you, so you do not need to care about decryption and encryption of your vault unless you need to update the data in there.

The step-by-step guide

Suppose we want to work on a target machine run by your customer who provides you access via ssh. You do not want to store the ssh user name and password in your repository but you want to be able to run the automation scripts unattended, e.g. from a jenkins job. Let us call the target machine ceres.

So first you setup the directory structure by creating a directory for the target machine called $ansible_script_root$/host_vars/ceres.

To log into the machine we need two sensitive variables: ansible_user and ansible_ssh_pass. We put them into a file called $ansible_script_root$/host_vars/ceres/vars.yml:

ansible_user: our_customer_ssh_account
ansible_ssh_pass: our_target_machine_pwd

Then we copy vars.yml to vault.yml and prefix the variables with vault_ resulting in $ansible_script_root$/host_vars/ceres/vault.yml with content of:

vault_ansible_user: our_customer_ssh_account
vault_ansible_ssh_pass: our_target_machine_pwd

Now we use these new variables in our vars.yml like this:

ansible_user: "{{ vault_ansible_user }}"
ansible_ssh_pass: "{{ vault_ansible_ssh_pass }}"

Now it is time to encrypt the vault using the ansible-vault command, which will prompt you for a vault password:

ansible-vault encrypt host_vars/ceres/vault.yml

resulting in an encrypted vault that can be put into source control. It looks something like this:

$ANSIBLE_VAULT;1.1;AES256
35323233613539343135363737353931636263653063666535643766326566623461636166343963
3834323363633837373437626532366166366338653963320a663732633361323264316339356435
33633861316565653461666230386663323536616535363639383666613431663765643639383666
3739356261353566650a383035656266303135656233343437373835313639613865636436343865
63353631313766633535646263613564333965343163343434343530626361663430613264336130
63383862316361363237373039663131363231616338646365316236336362376566376236323339
30376166623739643261306363643962353534376232663631663033323163386135326463656530
33316561376363303339383365333235353931623837356362393961356433313739653232326638
3036

Using your playbook looks similar to before, you just need to provide the vault password using one of several options: a vault password file, the ANSIBLE_VAULT_PASSWORD_FILE environment variable or an interactive prompt. In our example we simply ask for the password interactively:

ansible-playbook -i inventory --ask-vault-pass work-on-customer-machines.yml

After setting up your environment appropriately, with a password file and the ANSIBLE_VAULT_PASSWORD_FILE environment variable, your playbook commands are exactly the same as without using a vault.
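Such a setup could look like this (a sketch; the location of the password file is arbitrary, it just has to stay out of version control):

echo "ourpwd" > ~/.vault_pass
chmod 600 ~/.vault_pass
export ANSIBLE_VAULT_PASSWORD_FILE=~/.vault_pass
ansible-playbook -i inventory work-on-customer-machines.yml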

Conclusion

The ansible vault feature allows you to safely store and use sensitive data in your infrastructure without changing too much about how you use your automation scripts.

Analysing a React web app using SonarQube

Many developers, especially from the Java world, may know the code analysis platform SonarQube (formerly SONAR). While its focus was mostly on integrating all the great analysis tools for Java, the modular architecture allows plugging in tools for other languages to provide linter results and code coverage under the same web interface.

We are a polyglot bunch and are using more and more React in addition to our Java, C++, .NET and “what not” projects. Of course we would like the same quality overview for these JavaScript projects as we are used to in other ecosystems, so I tried SonarQube for React.

The start

Using SonarQube to analyse a JavaScript project is as easy as for the other languages: just provide a sonar-project.properties file specifying the sources and some paths for analysis results and there you go. It may look similar to the following for a create-react-app project:

sonar.projectKey=myproject:webclient
sonar.projectName=Webclient for my cool project
sonar.projectVersion=0.3.0

#sonar.language=js
sonar.sources=src
sonar.exclusions=src/tests/**
sonar.tests=src/tests
sonar.sourceEncoding=UTF-8

#sonar.test.inclusions=src/tests/**/*.test.js
sonar.coverage.exclusions=src/tests/**

sonar.junit.reportPaths=test-results/test-report.junit.xml
sonar.javascript.lcov.reportPaths=coverage/lcov.info
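With the properties file in place the analysis itself is triggered with the sonar-scanner command line tool (a sketch, assuming the scanner is installed and the SonarQube server runs on its default port):

sonar-scanner -Dsonar.host.url=http://localhost:9000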

For the coverage you need to add some settings to your package.json, too:

{ ...
  "devDependencies": {
    "enzyme": "^3.3.0",
    "enzyme-adapter-react-16": "^1.1.1",
    "eslint": "^4.19.1",
    "eslint-plugin-react": "^7.7.0",
    "jest-junit": "^3.6.0"
  },
  "jest": {
    "collectCoverageFrom": [
      "src/**/*.{js,jsx}",
      "!**/node_modules/**",
      "!build/**"
    ],
    "coverageReporters": [
      "lcov",
      "text"
    ]
  },
  "jest-junit": {
    "output": "test-results/test-report.junit.xml"
  },
  ...
}
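The coverage and JUnit reports referenced in the properties file have to exist before the scanner runs. With the setup above, generating them could look roughly like this (a sketch for a create-react-app project):

CI=true npm test -- --coverage --testResultsProcessor=jest-junit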

This is all nice but the set of built-in rules for JavaScript is a bit thin and does not fit React apps nicely.

ESLint to the rescue

But you can make SonarQube use ESLint and thus become more useful.

First you have to install the ESLint plugin for SonarQube from GitHub.

Second you have to set up ESLint to your liking using eslint --init in your project. That results in a .eslintrc.js similar to this:

module.exports = {
  'env': {
    'browser': true,
    'commonjs': true,
    'es6': true
  },
  'extends': 'eslint:recommended',
  'parserOptions': {
    'ecmaFeatures': {
      'experimentalObjectRestSpread': true,
      'jsx': true
    },
    'sourceType': 'module'
  },
  'plugins': [
    'react'
  ],
  'rules': {
    'indent': [
      'error',
      2
    ],
    'linebreak-style': [
      'error',
      'unix'
    ],
    'quotes': [
      'error',
      'single'
    ],
    'semi': [
      'error',
      'always'
    ]
  }
};

Lastly, enable the ESLint ruleset for your project in SonarQube and look at the results. You may need to tune one thing or another but you will get some useful static analysis helping you to improve your code quality further.