Ansible for deployment with docker

We are using Jenkins not only for our continuous integration needs but also for running deployments at the push of a button. Back in the dark times™ this often meant using Apache Ant in combination with JSch to copy the project’s artifacts to some target machine and execute some remote commands over ssh.
After gathering some experience with Ansible and docker we took a new shot at the task and ended up with one-shot docker containers executing some ansible scripts. The general process stayed the same but the new solution is more robust overall and easier to maintain.
In addition to the project- and deployment-process-specific simplifications we can now use any jenkins slave with a docker installation and network access. No need for an ant installation and a java runtime anymore.
However, there are some quirks you have to overcome to get the whole thing running. Let us see how we can accomplish non-interactive (read: fully automated) deployments using the tools mentioned above.

The general process

The deployment job in Jenkins has to perform the following steps:

  1. Fetch the artifacts to deploy from somewhere, e.g. a release job
  2. Build the docker image provisioned with ansible
  3. Run a container of the image providing credentials and artifacts through the environment and/or volumes

The first step can easily be implemented using the Copy Artifact Plugin.

The Dockerfile

You can use almost any linux base image for your deployment needs as long as the distribution comes with ansible. I chose Ubuntu 18.04 LTS because it is sufficiently up-to-date and we are quite familiar with it. In addition to ssh and ansible we only need the command for running the playbook. In our case the playbook and ssh-credentials reside in the deployment folder of our projects, of course encrypted in an ansible vault. To be able to connect to the target machine we need the vault password to decrypt the ssh password or private key. Therefore we inject the vault password as an environment variable when running the container.

A simple Dockerfile may look as follows:

FROM ubuntu:18.04
RUN DEBIAN_FRONTEND=noninteractive apt-get update
RUN DEBIAN_FRONTEND=noninteractive apt-get -y dist-upgrade
RUN DEBIAN_FRONTEND=noninteractive apt-get -y install software-properties-common
RUN DEBIAN_FRONTEND=noninteractive apt-get update && apt-get -y install ansible ssh

SHELL ["/bin/bash", "-c"]
# A place for our vault password file
RUN mkdir /ansible/

# Setup work dir
WORKDIR /deployment

COPY deployment/${TARGET_HOST} .

# Deploy the application: decrypt the credentials, run the playbook and clean up afterwards
CMD echo -e "${VAULT_PASSWORD}" > /ansible/vault.access && \
ansible-vault decrypt --vault-password-file=/ansible/vault.access ${TARGET_HOST}/credentials/deployer && \
ansible-playbook -i ${TARGET_HOST}, -u root --key-file ${TARGET_HOST}/credentials/deployer ansible/deploy.yml && \
ansible-vault encrypt --vault-password-file=/ansible/vault.access ${TARGET_HOST}/credentials/deployer && \
rm -rf /ansible

Note that we create a temporary vault password file from the ${VAULT_PASSWORD} environment variable for ansible vault decryption. A second noteworthy detail is the ad-hoc inventory using the ${TARGET_HOST} environment variable terminated with a comma. Using this variable we also reference the correct credential files and other host-specific data files. That way the image will not contain any plain text credentials.
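
For completeness, the credentials in the repository got encrypted with a plain ansible-vault call in the first place. A minimal sketch; the host directory name is just an example:

# prompts for the vault password that is later passed to the container
ansible-vault encrypt deployment/app.example.com/credentials/deployer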

The Ansible playbook

The playbook can be quite simple depending on your needs. In our case we deploy a WAR-archive to a tomcat container and make a backup copy of the old application first:

---
- hosts: all
  vars:
    artifact_name: app.war
    webapp_directory: /var/lib/tomcat8/webapps
  become: true

  tasks:
  - name: copy files to target machine
    copy: src={{ item }} dest=/tmp
    with_fileglob:
    - "/artifact/{{ artifact_name }}"
    - "../{{ inventory_hostname }}/context.xml"

  - name: stop our servlet container
    systemd:
      name: tomcat8
      state: stopped

  - name: make backup of old artifact
    command: mv "{{ webapp_directory }}/{{ artifact_name }}" "{{ webapp_directory }}/{{ artifact_name }}.bak"

  - name: delete exploded webapp
    file:
      path: "{{ webapp_directory }}/app"
      state: absent

  - name: mv artifact to webapp directory
    copy: src="/tmp/{{ artifact_name }}" dest="{{ webapp_directory }}" remote_src=yes
    notify: restart app-server

  - name: mv context file to container context
    copy: src=/tmp/context.xml dest=/etc/tomcat8/Catalina/localhost/app.xml remote_src=yes
    notify: restart app-server

  handlers:
  - name: restart app-server
    systemd:
      name: tomcat8
      state: started

The declarative syntax and the use of handlers make the process quite straightforward. Of note here is the - hosts: all line that is used together with our ad-hoc inventory from the command line. That way we do not need an inventory file in our project repository or anywhere else. The host can be provided by the deployment job. That brings us to the Jenkins job that puts everything together.

The Jenkins job

As you probably already noticed in the parts above, there are several inputs required by the container and the ansible script:

  1. A volume with the artifacts to deploy
  2. A volume with host specific credentials and data files
  3. Vault credentials
  4. The target host(s)

We inject passwords using the EnvInject Plugin which stores them encrypted and masks them in all the logs. I like to prefix such environmental settings in Jenkins with JOB_ to denote their origin. After the copying build step, which puts the artifacts into the artifact directory, we can use a small shell script to build the image and run the container with all the needed stuff wired up. The script may look like the following:

docker build -t app-deploy -f deployment/docker/Dockerfile .
docker run --rm \
-e VAULT_PASSWORD=${JOB_VAULT_PASSWORD} \
-e TARGET_HOST=${JOB_TARGET_HOST} \
-e ANSIBLE_HOST_KEY_CHECKING=False \
-v `pwd`/artifact:/artifact \
app-deploy

DOCKER_RUN_RESULT=$?
exit $DOCKER_RUN_RESULT

We run the container with the --rm flag to delete it right after execution, leaving nothing behind. We also disable host key checking for ansible to keep the build non-interactive. The last two lines let the jenkins job report the result of the container execution.

Wrapping it up

Essentially we have a deployment consisting of three parts: a docker image definition of the required environment, an ansible playbook performing the real work and a jenkins job with a small shell script wiring it all together so that it can be executed at the push of a button. We do not store any clear text credentials persistently in any of these locations, so we are reasonably secure for running the deployment.

Ansible in Jenkins

Ansible is a powerful tool for automation of your IT infrastructure. In contrast to chef or puppet it does not need much additional infrastructure like a server and client (“agent”) programs on your target machines. We like to use it for keeping our servers and desktop machines up-to-date and provisioned in a defined, repeatable and self-documented way.

Lately, ansible has begun to replace our custom-made, but already automated, deployment processes which we implemented using various tools like ant scripts run by jenkins jobs. The natural way of using ansible for deployment in our current infrastructure would be using it from jenkins with the jenkins ansible plugin.

Even though the plugin supports the “Global Tool Configuration” mechanism and automatic management of several ansible installations it did not work out of the box for us:

At first, the executable path was not set correctly. We managed to fix that, but then the next problem arose: our standard build slaves had no jinja2 (the python templating library) installed. Sure, these are problems you can easily fix if you decide to.

For us, that was too much tinkering and snowflaking of our build slaves to be feasible, so we took another route that you can consider as well: running ansible from a docker image.

We already have a host for running docker containers attached to jenkins so our current state of deployment with ansible roughly consists of a Dockerfile and a Jenkins job to run the container.

The Dockerfile is as simple as this:

FROM ubuntu:14.04
RUN DEBIAN_FRONTEND=noninteractive apt-get update && apt-get -y dist-upgrade && apt-get -y install software-properties-common
RUN DEBIAN_FRONTEND=noninteractive apt-add-repository ppa:ansible/ansible-2.4
RUN DEBIAN_FRONTEND=noninteractive apt-get update && apt-get -y install ansible

# Setup work dir
WORKDIR /project/provisioning

# Copy project directory into container
COPY . /project

# Deploy the project
CMD ansible-playbook -i inventory deploy-project.yml

And the jenkins build step to actually run the deployment looks like this:

docker build -t project-deploy .
docker run project-deploy

That way we can tailor the deployment environment to conveniently run our ansible playbooks for the specific project without modifying our normal build slave setups or adding complexity on their side. Going this route and relying on docker and what the container provides for running ansible makes all the tinkering with the jenkins ansible plugin unnecessary.

Integrating .NET projects with Gradle

Recently I have created Gradle build scripts for several .NET projects, both C# and VB.NET projects. Projects for the .NET platform are usually built with MSBuild, which is part of the .NET Framework distribution and is itself a full-blown build automation tool: you can define build targets, their dependencies and execute tasks to reach the build targets. I have written about the basics of MSBuild in a previous blog post.

The .NET projects I was working on were using MSBuild targets for the various build stages as well. Not only for building and testing, but also for the release and deployment scripts. These scripts were called from our Jenkins CI with the MSBuild Jenkins Plugin.

Gradle plugins

However, I wasn’t very happy with MSBuild’s clunky Ant-like XML based syntax, and for most of our other projects we are using Gradle nowadays. So I tried Gradle for a new .NET project. I am using the Gradle MSBuild and Gradle NUnit plugins. Of course, the MSBuild Gradle plugin is calling MSBuild, so I don’t get rid of MSBuild completely, because Visual Studio’s .csproj and .vbproj project files are essentially MSBuild scripts, and I don’t want to get rid of them. So there is one Gradle task which calls MSBuild, but everything else beyond the act of compilation is automated with regular Gradle tasks, like copying files, zipping release artifacts etc.

Basic usage of the MSBuild plugin looks like this:

plugins {
  id "com.ullink.msbuild" version "2.18"
}

msbuild {
  // either a solution file
  solutionFile = 'DemoSolution.sln'
  // or a project file (.csproj or .vbproj)
  projectFile = file('src/DemoSoProject.csproj')

  targets = ['Clean', 'Rebuild']

  destinationDir = 'build/msbuild/bin'
}

The plugin offers lots of additional options, be sure to check out the documentation on Github. If you want to give the MSBuild step its own task name, which is currently not directly mentioned on the Github page, use the task type Msbuild from the package com.ullink:

import com.ullink.Msbuild

// ...

task buildSolution(type: Msbuild, dependsOn: '...') {
  // ...
}

Since the .NET projects I’m working on use NUnit for unit testing, I’m using the NUnit Gradle plugin by the same creator as well. Again, please consult the documentation on the Github page for all available options. What I found necessary was setting the nunitHome option, because I don’t want the plugin to download a NUnit release from the internet, but use the one that is included with our project. Also, if you want a task with its own name or multiple testing tasks, use the NUnit task type in the package com.ullink.gradle.nunit:

import com.ullink.gradle.nunit.NUnit

// ...

task test(type: NUnit, dependsOn: 'buildSolution') {
  nunitVersion = '3.8.0'
  nunitHome = "${project.projectDir}/packages/NUnit.ConsoleRunner.3.8.0/tools"
  testAssemblies = ["${project.projectDir}/MyProject.Tests/bin/Release/MyProject.Tests.dll"]
}
test.dependsOn.remove(msbuild)

With Gradle I am now able to share common build tasks, for example for our release process, with our other non-.NET projects, which use Gradle as well.
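
Since these are ordinary Gradle tasks, sharing them is straightforward. A small sketch of what a release packaging task could look like; the task name, archive name and output path are made up for illustration:

// zip the MSBuild output produced by the msbuild task configured above
task packageRelease(type: Zip, dependsOn: msbuild) {
  from 'build/msbuild/bin'
  baseName = 'MyProject'
  destinationDir = file('build/release')
}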

Analyzing gradle projects using SonarQube without gradle plugin

SonarQube makes static code analysis easy for a plethora of languages and environments. In many of our newer projects we use gradle as our build system and jenkins as our continuous integration server. Integrating SonarQube in such a setup can be done in a couple of ways, the most straightforward being

  • Integrating SonarQube into your gradle build and invoking the gradle script in jenkins
  • Letting jenkins invoke the gradle build and execute the SonarQube scanner

I chose the latter one because I did not want to add further dependencies to the build process.

Configuration of the SonarQube scanner

The SonarQube scanner is configured by a properties file, called sonar-project.properties by default:

# must be unique in a given SonarQube instance
sonar.projectKey=domain:project
# this is the name and version displayed in the SonarQube UI. Was mandatory prior to SonarQube 6.1.
sonar.projectName=My cool project
sonar.projectVersion=23

sonar.sources=src/main/java
sonar.tests=src/test/java
sonar.java.binaries=build/classes/java/main
sonar.java.libraries=../lib/**/*.jar
sonar.java.test.libraries=../lib/**/*.jar
sonar.junit.reportPaths=build/test-results/test/
sonar.jacoco.reportPaths=build/jacoco/test.exec

sonar.modules=application,my_library,my_tools

# Encoding of the source code. Default is default system encoding
sonar.sourceEncoding=UTF-8
sonar.java.source=1.8

sonar.links.ci=http://${my_jenkins}/view/job/MyCoolProject
sonar.links.issue=http://${my_jira}/browse/MYPROJ

After we have done that, we can submit our project to the SonarQube scanner using the jenkins SonarQube plugin and its “Execute SonarQube Scanner” build step.
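
The build step is roughly equivalent to running the standalone scanner in the project directory, where it picks up sonar-project.properties automatically. A sketch with placeholder server settings:

sonar-scanner -Dsonar.host.url=https://sonar.example.com -Dsonar.login=$SONAR_AUTH_TOKEN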

Optional: Adding code coverage to our build

Even our gradle-based projects aim to be self-contained. That means we usually do not use repositories like mavenCentral for our dependencies but store them all in a lib directory alongside the project. If we want to add code coverage to such a project, we need to add the jacoco jars, in the version matching the gradle jacoco plugin, to our lib directory and point the plugin to them in build.gradle:

allprojects {
    apply plugin: 'java'
    apply plugin: 'jacoco'
    sourceCompatibility = 1.8

    jacocoTestReport {
        reports {
            xml.enabled true
        }
        jacocoClasspath = files('../lib/org.jacoco.core-0.7.9.jar',
            '../lib/org.jacoco.report-0.7.9.jar',
            '../lib/org.jacoco.ant-0.7.9.jar',
            '../lib/asm-all-5.2.jar'
        )
    }
}

Gotchas

Our jenkins build job consists of 2 steps:

  1. Execute gradle
  2. Submit project to SonarQube’s scanner

By default gradle stops execution on failure. That means later tasks like jacocoTestReport are not executed if a test fails. We need to invoke gradle with the --continue switch to always run all of our tasks.
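
Assuming the gradle wrapper and the task names used above, the first build step then boils down to something like:

./gradlew --continue clean test jacocoTestReport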

Integrating catch2 with CMake and Jenkins

A few years back, we posted an article on how to get CMake, googletest and jenkins to play nicely with each other. Since then, Phil Nash’s catch testing library has emerged as arguably the most popular thing to write your C++ tests in. I’m going to show how to set up a small sample project that integrates catch2, CMake and Jenkins nicely.

Project structure

Here is the project structure we will be using in our example. It is a simple library that implements left-pad: A utility function to expand a string to a minimum length by adding a filler character to the left.

├── CMakeLists.txt
├── source
│   ├── CMakeLists.txt
│   ├── string_utils.cpp
│   └── string_utils.h
├── externals
│   └── catch2
│       └── catch.hpp
└── tests
    ├── CMakeLists.txt
    ├── main.cpp
    └── string_utils.test.cpp

As you can see, the code is organized in three subfolders: source, externals and tests. source contains your production code. In a real world scenario, you’d probably have a couple of libraries and executables in additional subfolders in this folder.

The source folder

set(TARGET_NAME string_utils)

add_library(${TARGET_NAME}
  string_utils.cpp
  string_utils.h)

target_include_directories(${TARGET_NAME}
  INTERFACE ./)

install(TARGETS ${TARGET_NAME}
  ARCHIVE DESTINATION lib/)

The library is added to the install target because that’s what we typically do with our artifacts.

I use externals as a place for libraries that go into the project’s VCS. In this case, that is just the catch2 single-header distribution.

The tests folder

I typically mirror the filename and path of the unit under test and add some extra tag, in this case the .test. You should really not need headers here. The corresponding CMakeLists.txt looks like this:

set(UNIT_TEST_LIST
  string_utils)

foreach(NAME IN LISTS UNIT_TEST_LIST)
  list(APPEND UNIT_TEST_SOURCE_LIST
    ${NAME}.test.cpp)
endforeach()

set(TARGET_NAME tests)

add_executable(${TARGET_NAME}
  main.cpp
  ${UNIT_TEST_SOURCE_LIST})

target_link_libraries(${TARGET_NAME}
  PUBLIC string_utils)

target_include_directories(${TARGET_NAME}
  PUBLIC ../externals/catch2/)

add_test(
  NAME ${TARGET_NAME}
  COMMAND ${TARGET_NAME} -o report.xml -r junit)

The list and the loop help me to list the tests without duplicating the .test tag everywhere. Note that there’s also a main.cpp included which only defines catch’s main function:

#define CATCH_CONFIG_MAIN
#include <catch.hpp>

The add_test call at the bottom tells CTest (CMake’s bundled test-runner) how to run catch. The “-o” switch tells catch to direct its output to a file, report.xml. The “-r” switch sets the reporter to JUnit format. We will need both to integrate with Jenkins.
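
For reference, a test in string_utils.test.cpp could look roughly like this; the exact leftpad signature is an assumption, the article only describes its behavior:

#include <catch.hpp>
#include "string_utils.h"

TEST_CASE("leftpad expands a string to the requested minimum length")
{
  REQUIRE(leftpad("42", 5, '0') == "00042");
  REQUIRE(leftpad("hello", 3, ' ') == "hello");
}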

The top-level folder

The CMakeLists.txt in the top-level folder needs to call enable_testing() for our setup. Other than that, it just directs to the subfolders via add_subdirectory().
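
A minimal version could look like this (project name and minimum CMake version are assumptions):

cmake_minimum_required(VERSION 3.10)
project(left_pad_demo CXX)

# make the tests defined in the subfolders visible to CTest
enable_testing()

add_subdirectory(source)
add_subdirectory(tests)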

Jenkins

Now all that is needed is to set up Jenkins accordingly. Set up jenkins to get your code and add a “CMake Build” build step. Hit “Add build tool invocation” and check “Use cmake” to let cmake handle the invocation of your build tool (e.g. make). You also specify the target here, which is typically “install” or “package”, via the “--target” switch.

Now you add another step that runs the tests via CTest. Add another Build Step, this time “CMake/CPack/CTest Execution” and pick CTest. The one quirk with this is that it will let the build fail when CTest returns a non-zero exit code – which it does when any tests fail. Usually, you want the build to become unstable and not failed if that happens. Hence set “1-65535” in the “Ignore exit codes” input.

The final step is to let jenkins use the report.xml that we had CTest generate so it can generate the test result charts and tables. To do that, add the post-build action: “Publish JUnit test result report” and point it to tests/report.xml.

Done!

That’s it. Now you have your CI running nice catch tests. The code for this example is available on our github.

Simple build triggers with secured Jenkins CI

The jenkins continuous integration (CI) server provides several ways to trigger builds remotely, for example from a git hook. Things are easy on an open jenkins instance with no security enabled. It gets a little more complicated if you want to protect your jenkins build environment.

Git plugin notify commit url

For git there is the “notifyCommit” URL you can use in combination with the Poll SCM setting:

$JENKINS_URL/git/notifyCommit?url=http://$REPO/project/myproject.git

Note two things regarding this approach:

  1. The url of the source code repository given as a parameter must match the repository url of the jenkins job.
  2. You have to check the Poll SCM setting, but you do not need to provide a schedule

Another drawback is its restriction to jobs using git as their SCM.

Jenkins remote access api

Then there is the more general and more modern jenkins remote access api, where you may trigger builds regardless of the source code management system you use:

curl -X POST $JENKINS_URL/job/$JOB_NAME/build?token=$TOKEN

It even allows triggering parameterized builds with HTTP POST requests like:

curl -X POST $JENKINS_URL/job/$JOB_NAME/build \
--user USER:TOKEN \
--data-urlencode json='{"parameter": [{"name":"id", "value":"123"}, {"name":"verbosity", "value":"high"}]}'

Both approaches work great as long as your jenkins instance is not secured and everyone can do everything. Such a setting may be fine in your company’s intranet but becomes a no-go in more heterogeneous environments or with a public jenkins server.

So the way to go is securing jenkins with user accounts and restricted access. If you do not want to supply username/password as part of the url for HTTP BASIC auth or create users just for your repository triggers, there is another easy option:

Using the Build Authorization Token Root Plugin!

Build authorization token root plugin

The plugin introduces a configuration setting in the Build triggers section to define an authentication token.

It also exposes a url you can access without being logged in, to trigger builds by just providing the token specified in the job:

$JENKINS_URL/buildByToken/build?job=$JOB_NAME&token=$TOKEN

Or for parameterized builds something like:

$JENKINS_URL/buildByToken/buildWithParameters?job=$JOB_NAME&token=$TOKEN&Type=Release
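
Since a plain GET request is enough, the trigger fits nicely into a VCS hook. A minimal post-receive sketch with placeholder server, job name and token:

#!/bin/sh
# trigger the jenkins job via the token root plugin
curl -s "https://jenkins.example.com/buildByToken/build?job=myproject&token=s3cret"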

Conclusion

The token root plugin does not need HTTP POST requests but also works fine using HTTP GET. It requires neither a user account nor the awkward Poll SCM setting. In my opinion it is the simplest and most pragmatic choice for triggering builds on a secured jenkins instance.

Using passwords with Jenkins CI server

For many of our projects the Jenkins continuous integration (CI) server is an important cornerstone. The well known “works on my machine” means nothing in our company. Only code that lives in repositories and is built, tested and packaged by our CI servers counts. In addition to building, testing, analyzing and packaging our projects we use CI jobs for deployment and supervision, too. In such jobs you often need some sort of credentials like username/password or public/private keys.

If you are using username/password credentials, they not only appear in the job configuration but also in the console build logs. In most cases this is undesirable but luckily there is an easy way around it: using the Environment Injector Plugin.

In the plugin you can “inject passwords to the build as environment variables” for use in your commands and scripts.

The nice thing about this is that the passwords are not only masked in the job configuration (like above) but also in the console logs of the builds!
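
A build step can then use the injected variable like any other environment variable. A hypothetical example (the variable name, target host and use of sshpass are made up for illustration):

# copy the artifact using the password injected by the plugin
sshpass -p "$JOB_DEPLOY_PASSWORD" scp target/app.war deployer@target.example.com:/tmp/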

Another alternative that does mostly the same is the Credentials Binding Plugin.

There is a lot more to explore when it comes to authentication and credential management in Jenkins: you can define credentials at the global level, use public/private key pairs and ssh agents, connect to an LDAP database and much more. Just do not sit back and leave security related stuff in plaintext in job configurations or your deployment scripts!