3 rules for projects under version control

Checking out a project under version control should be easy and repeatable. Here are a few tips on how to achieve that:

1. Self-containment

You should not need a specifically configured machine to start working on a project. Ideally, you clone the project and get started. Many things can get in the way of that.
Maybe your project needs a specific operating system, installed dependencies, special hardware or a database setup to run. I personally draw the line at this:
You get a manual with the code that helps you set up your development environment once, and this setup should be as automated and easy as possible. You should always be able to run your projects without periphery, i.e. specific hardware that would have to be plugged in or databases that would have to be installed.
To achieve this for hardware, you can often fake it via polymorphic interfaces and dependency injection, much like mocking it for testing. The same can be done with databases – or you can use in-memory databases as a fallback.
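
To illustrate the idea, here is a minimal Java sketch of such a fake behind an interface (all names are made up for this example):

// Hypothetical example: the hardware sits behind a small interface,
// a fake implementation replaces the real driver, and the consumer
// receives it via constructor injection - so the project runs and
// tests without the device being attached.
interface TemperatureSensor {
    double readCelsius();
}

class FakeTemperatureSensor implements TemperatureSensor {
    @Override
    public double readCelsius() {
        return 21.5; // canned value, no hardware required
    }
}

class ClimateController {
    private final TemperatureSensor sensor;

    ClimateController(TemperatureSensor sensor) { // dependency injection
        this.sensor = sensor;
    }

    boolean needsCooling() {
        return sensor.readCelsius() > 25.0;
    }
}

The production code wires in the real sensor driver instead; nothing else changes.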

2. Separate build artifacts

Building the project into an executable form should be clearly separated from the source-controlled files. For example, the build should never ever modify a file that is under version control. Ideally, the “build” directory is completely independent from the source – enabling a true out-of-source build. However, it is often a good middle ground to allow building in a few dedicated directories in your source repository – but these need to be in .gitignore.
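
A corresponding .gitignore entry (the directory name is just an example) can be as simple as:

# dedicated in-source build directory - generated artifacts only, never committed
build/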

3. Separate runtime data

In the same way, running your project should not touch any source-controlled files. Ideally, the project can be run out-of-source. This is trivial for small programs that do not have data, but once some data needs to be managed by the source control system, it gets a little more tricky for the executables to find the data. For data that needs to be changed by the program (we call these “stores”), it is advisable to maintain templates in the VCS or in code, and copy them to the runtime directory during the build process. For data that is not changed by running the program, such as images, videos, translation tables etc., you can copy them as well, or make sure the program finds them in the source repository.
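
As a sketch of the “copy templates during the build” idea, a Gradle task could look like this (task name and paths are made up):

// hypothetical build step: copy the versioned store templates into the
// runtime directory, which itself is not under version control
task prepareRuntimeData(type: Copy) {
    from 'data/templates'      // templates kept in the VCS
    into "$buildDir/runtime"   // runtime directory, ignored by the VCS
}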

Following these guidelines will make it easier to work with version control, especially when multiple people are involved.

Ansible in Jenkins

Ansible is a powerful tool for automating your IT infrastructure. In contrast to Chef or Puppet, it does not need much infrastructure like a server and client (“agent”) programs on your target machines. We like to use it for keeping our servers and desktop machines up-to-date and provisioned in a defined, repeatable and self-documented way.

Lately, Ansible has begun to replace our various custom-made – but already automated – deployment processes, which we had implemented using different tools like Ant scripts run by Jenkins jobs. The natural way of using Ansible for deployment in our current infrastructure would be to run it from Jenkins with the Jenkins Ansible plugin.

Even though the plugin supports the “Global Tool Configuration” mechanism and automatic management of several Ansible installations, it did not work out of the box for us:

At first, the executable path was not set correctly. We managed to fix that, but then the next problem arose: our standard build slaves had no Jinja2 (a Python templating library) installed. Sure, these are problems you can easily fix if you decide to do so.

For us, that was too much tinkering with and snowflaking of our build slaves to be feasible, so we took another route that you may want to consider: running Ansible from a Docker image.

We already have a host for running Docker containers attached to Jenkins, so our current state of deployment with Ansible roughly consists of a Dockerfile and a Jenkins job to run the container.

The Dockerfile is as simple as this:


# Base image: plain Ubuntu 14.04, kept up-to-date
FROM ubuntu:14.04
RUN DEBIAN_FRONTEND=noninteractive apt-get update && apt-get -y dist-upgrade && apt-get -y install software-properties-common
# Install Ansible from the official PPA
RUN DEBIAN_FRONTEND=noninteractive apt-add-repository ppa:ansible/ansible-2.4
RUN DEBIAN_FRONTEND=noninteractive apt-get update && apt-get -y install ansible

# Setup work dir
WORKDIR /project/provisioning

# Copy project directory into container
COPY . /project

# Deploy the project
CMD ansible-playbook -i inventory deploy-project.yml

And the Jenkins build step to actually run the deployment looks like this:


docker build -t project-deploy .
docker run project-deploy

That way we can tailor our deployment machine to conveniently run our Ansible playbooks for the specific project without modifying our normal build slave setups or adding complexity on their side. All the tinkering with the Jenkins Ansible plugin becomes unnecessary when we go this way and rely on Docker and what the container provides for running Ansible.

Integrating .NET projects with Gradle

Recently I have created Gradle build scripts for several .NET projects, both C# and VB.NET projects. Projects for the .NET platform are usually built with MSBuild, which is part of the .NET Framework distribution and is itself a full-blown build automation tool: you can define build targets and their dependencies, and execute tasks to reach the build targets. I have written about the basics of MSBuild in a previous blog post.

The .NET projects I was working on were using MSBuild targets for the various build stages as well. Not only for building and testing, but also for the release and deployment scripts. These scripts were called from our Jenkins CI with the MSBuild Jenkins Plugin.

Gradle plugins

However, I wasn’t very happy with MSBuild’s clunky, Ant-like, XML-based syntax, and for most of our other projects we are using Gradle nowadays. So I tried Gradle for a new .NET project. I am using the Gradle MSBuild and Gradle NUnit plugins. Of course, the MSBuild Gradle plugin calls MSBuild itself, so I don’t get rid of MSBuild completely, because Visual Studio’s .csproj and .vbproj project files are essentially MSBuild scripts, and I don’t want to get rid of them. So there is one Gradle task which calls MSBuild, but everything else beyond the act of compilation is automated with regular Gradle tasks, like copying files, zipping release artifacts etc.

Basic usage of the MSBuild plugin looks like this:

plugins {
  id "com.ullink.msbuild" version "2.18"
}

msbuild {
  // either a solution file
  solutionFile = 'DemoSolution.sln'
  // or a project file (.csproj or .vbproj)
  projectFile = file('src/DemoSoProject.csproj')

  targets = ['Clean', 'Rebuild']

  destinationDir = 'build/msbuild/bin'
}

The plugin offers lots of additional options, be sure to check out the documentation on Github. If you want to give the MSBuild step its own task name, which is currently not directly mentioned on the Github page, use the task type Msbuild from the package com.ullink:

import com.ullink.Msbuild

// ...

task buildSolution(type: Msbuild, dependsOn: '...') {
  // ...
}

Since the .NET projects I’m working on use NUnit for unit testing, I’m using the NUnit Gradle plugin by the same creator as well. Again, please consult the documentation on the Github page for all available options. What I found necessary was setting the nunitHome option, because I don’t want the plugin to download a NUnit release from the internet, but use the one that is included with our project. Also, if you want a task with its own name or multiple testing tasks, use the NUnit task type in the package com.ullink.gradle.nunit:

import com.ullink.gradle.nunit.NUnit

// ...

task test(type: NUnit, dependsOn: 'buildSolution') {
  nunitVersion = '3.8.0'
  nunitHome = "${project.projectDir}/packages/NUnit.ConsoleRunner.3.8.0/tools"
  testAssemblies = ["${project.projectDir}/MyProject.Tests/bin/Release/MyProject.Tests.dll"]
}
test.dependsOn.remove(msbuild)

With Gradle I am now able to share common build tasks, for example for our release process, with our other non-.NET projects, which use Gradle as well.

Analyzing Gradle projects using SonarQube without the Gradle plugin

SonarQube makes static code analysis easy for a plethora of languages and environments. In many of our newer projects we use Gradle as our build system and Jenkins as our continuous integration server. Integrating SonarQube into such a setup can be done in a couple of ways, the most straightforward being:

  • Integrating SonarQube into your Gradle build and invoking the Gradle script from Jenkins
  • Letting Jenkins invoke the Gradle build and then execute the SonarQube scanner

I chose the latter one because I did not want to add further dependencies to the build process.

Configuration of the SonarQube scanner

The SonarQube scanner is configured by a properties file, called sonar-project.properties by default:

# must be unique in a given SonarQube instance
sonar.projectKey=domain:project
# this is the name and version displayed in the SonarQube UI. Was mandatory prior to SonarQube 6.1.
sonar.projectName=My cool project
sonar.projectVersion=23

sonar.sources=src/main/java
sonar.tests=src/test/java
sonar.java.binaries=build/classes/java/main
sonar.java.libraries=../lib/**/*.jar
sonar.java.test.libraries=../lib/**/*.jar
sonar.junit.reportPaths=build/test-results/test/
sonar.jacoco.reportPaths=build/jacoco/test.exec

sonar.modules=application,my_library,my_tools

# Encoding of the source code. Default is default system encoding
sonar.sourceEncoding=UTF-8
sonar.java.source=1.8

sonar.links.ci=http://${my_jenkins}/view/job/MyCoolProject
sonar.links.issue=http://${my_jira}/browse/MYPROJ

After we have done that, we can submit our project to the SonarQube scanner using the Jenkins SonarQube plugin and its “Execute SonarQube Scanner” build step.

Optional: Adding code coverage to our build

Even our Gradle-based projects aim to be self-contained. That means we usually do not use repositories like mavenCentral for our dependencies but store them all in a lib directory alongside the project. If we want to add code coverage to such a project, we need to add the JaCoCo jars, in the version corresponding to the Gradle JaCoCo plugin, to our libs and reference them in build.gradle:

allprojects {
    apply plugin: 'java'
    apply plugin: 'jacoco'
    sourceCompatibility = 1.8

    jacocoTestReport {
        reports {
            xml.enabled true
        }
        jacocoClasspath = files('../lib/org.jacoco.core-0.7.9.jar',
            '../lib/org.jacoco.report-0.7.9.jar',
            '../lib/org.jacoco.ant-0.7.9.jar',
            '../lib/asm-all-5.2.jar'
        )
    }
}

Gotchas

Our Jenkins build job consists of two steps:

  1. Execute gradle
  2. Submit project to SonarQube’s scanner

By default, Gradle stops execution on failure. That means later tasks like jacocoTestReport are not executed if a test fails. We need to invoke Gradle with the --continue switch to always run all of our tasks.
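
The Gradle build step then boils down to something like this (the exact task names depend on the project):

./gradlew --continue clean test jacocoTestReport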

Gradle projects as Debian packages

Gradle is a great tool for setting up and building your Java projects. If you want to deliver them for Ubuntu or other Debian-based distributions, you should consider building .deb packages. Because of the quite steep learning curve of Debian packaging, I want to show you a step-by-step guide to get you up to speed.

Prerequisites

You have a project that can be built by Gradle using the Gradle wrapper. In addition, you have a Debian-based system where you can install and use the packaging utilities needed to create the package metadata and the final packages.

To prepare the Debian system, you have to install some packages:

sudo apt install dh-make debhelper javahelper

Generating packaging infrastructure

First, we have to generate all the files necessary to build full-fledged Debian packages. Fortunately, there is a tool for that called dh_make. To correctly prefill the maintainer name and e-mail address, we have to set two environment variables. Of course, you could still change them later…

export DEBFULLNAME="John Doe"
export DEBEMAIL="john.doe@company.net"
cd $project_root
dh_make --native -p $project_name-$version

Choose “indep binary” (“i”) as the type of package, because Java is architecture-independent. This will generate the debian directory containing all the files for creating .deb packages. You can safely ignore all of the files ending with .ex, as they are examples of features like manpage generation, additional pre- and post-installation scripts and many other aspects.

We will concentrate on only two files that will allow us to build a nice basic package of our software:

  1. control
  2. rules

Adding metadata for our Java project

In the control file, fill in all the properties that are relevant for your project. They will help your users understand what the package contains and whom to contact in case of problems. You should add the JRE to Depends, e.g.:

Depends: openjdk-8-jre, ${misc:Depends}

If you have other dependencies that can be resolved by packages of the distribution, add them there, too.
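
A minimal control file for the example project could then look roughly like this (all names and values are placeholders):

Source: my_project
Section: java
Priority: optional
Maintainer: John Doe <john.doe@company.net>
Build-Depends: debhelper (>= 9), javahelper
Standards-Version: 3.9.5

Package: my_project
Architecture: all
Depends: openjdk-8-jre, ${misc:Depends}
Description: My cool project
 A longer description of what the project does.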

Define the rules for building our Java project

The most important file is the rules makefile, which defines how our project is built and what the resulting package contents consist of. For this to work with Gradle, we use the javahelper dh_make extension and override some targets to tune the results. The key to all this is that the directory debian/$project_name/ contains a directory structure with all the files we want to install on the target machine. In our example we will put everything into the directory /opt/my_project.

#!/usr/bin/make -f
# -*- makefile -*-

# Uncomment this to turn on verbose mode.
#export DH_VERBOSE=1

%:
	dh $@ --with javahelper # use the javahelper extension

override_dh_auto_build:
	export GRADLE_USER_HOME="`pwd`/gradle"; \
	export GRADLE_OPTS="-Dorg.gradle.daemon=false -Xmx512m"; \
	./gradlew assemble; \
	./gradlew test

override_dh_auto_install:
	dh_auto_install
# here we can install additional files like an upstart configuration
	export UPSTART_TARGET_DIR=debian/my_project/etc/init/; \
	mkdir -p $${UPSTART_TARGET_DIR}; \
	install -m 644 debian/my_project.conf $${UPSTART_TARGET_DIR};

# additional install target of javahelper
override_jh_installlibs:
	LIB_DIR="debian/my_project/opt/my_project/lib"; \
	mkdir -p $${LIB_DIR}; \
	install lib/*.jar $${LIB_DIR}; \
	install build/libs/*.jar $${LIB_DIR};
	BIN_DIR="debian/my_project/opt/my_project/bin"; \
	mkdir -p $${BIN_DIR}; \
	install build/scripts/my_project_start_script.sh $${BIN_DIR};

Most of the above should be self-explanatory. Here are some things that cost me some time and that I found noteworthy:

  • Newer Gradle versions use a lot of memory and try to start a daemon, which does not help you on your build slaves (if you are using a continuous integration system).
  • The rules file is in GNU make syntax and executes each command separately. So you have to make sure everything is on “one line” if you want to access environment variables, for example. This is achieved by using \ as a continuation character.
  • You have to escape $ as $$ to use shell variables.

Summary

Debian packaging can be daunting at first, but by using and understanding the tools you can build new packages of your projects in a few minutes. I hope this guide helps you to find a starting point for your Gradle-based projects.

Advanced deb-packaging with CMake

CMake has become our C/C++ build tool of choice because it provides good cross-platform support and very reasonable IDE (Visual Studio, CLion, QtCreator) integration. Another very nice feature is the included packaging support using the CPack module. It allows you to create native deployable artifacts for a plethora of systems, including NSIS installers for Windows, RPM and deb packages for Linux, DMG images for Mac OS X, and a couple more.

While all these binary generators share some CPack variables, there are generator-specific variables for using exclusive packaging-system features or fulfilling special requirements.

Deb-packaging features

The Debian package management system is used not only by Debian but also by Ubuntu, Raspbian and many other Linux distributions. In addition to dependency handling and versioning, packagers can use several other features, namely:

  • Specifying a section for the packaged software, e.g. Development, Games, Science etc.
  • Specifying package priorities like optional, required, important, standard
  • Specifying the relation to other packages like breaks, enhances, conflicts, replaces and so on
  • Using maintainer scripts to customize the installation and removal process like pre- and post-install, pre- and post-removal
  • Dealing with configuration files to protect end user customizations
  • Installing and linking files and much more without writing shell scripts using ${project-name}.{install | links | ...} files

All of these make the software easier to package or easier for your end users to manage.

Using deb-features with CMake

Many of the mentioned features are directly available as appropriately named CMake variables, all starting with CPACK_DEBIAN_. I would like to specifically mention the CPACK_DEBIAN_PACKAGE_CONTROL_EXTRA variable, where you can set the maintainer scripts, and one of my favorite features: conffiles.
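
A sketch of the relevant part of a CMakeLists.txt could look like this (the variable names are the real CPack ones, the values and file paths are placeholders for your project):

# Debian-specific CPack settings; CPACK_PACKAGE_CONTACT doubles as the package maintainer
set(CPACK_GENERATOR "DEB")
set(CPACK_PACKAGE_CONTACT "john.doe@company.net")
set(CPACK_DEBIAN_PACKAGE_SECTION "science")
set(CPACK_DEBIAN_PACKAGE_PRIORITY "optional")
set(CPACK_DEBIAN_PACKAGE_DEPENDS "libc6 (>= 2.19)")
# maintainer scripts and the conffiles file described below
set(CPACK_DEBIAN_PACKAGE_CONTROL_EXTRA
    "${CMAKE_CURRENT_SOURCE_DIR}/packaging/postinst;${CMAKE_CURRENT_SOURCE_DIR}/packaging/conffiles")
include(CPack)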

Deb protects files under /etc from accidental overwriting by default. If you want to protect files located somewhere else, you specify them in a file called conffiles, each on a separate line:

/opt/myproject/myproject.conf
/opt/myproject/myproject.properties

If the user made changes to these files, she will be asked what to do when updating the package:

  • keep the own version
  • use the maintainer version
  • review the situation and merge manually.

For extra security, files like myproject.conf.dpkg-dist and myproject.conf.dpkg-old are created, so no changes are lost.

Unfortunately, I did not get the linking feature working without using maintainer scripts. Nevertheless, I advise you to use CMake for your packaging work instead of using the native debhelper way.

It is much more natural for a CMake-based project and you can reuse much of your metadata for other target platforms. It also shields you from a lot of the gory details of debian packaging without removing too much of the power of deb-packages.

4 Tips for better CMake

We are doing one of those list posts again! This time, I will share some tips and insights on better CMake. Number four will surprise you! Let’s hop right in:

Tip #1

model dependencies with target_link_libraries

I have written about this before, and it is still my number one tip on CMake. In short: do not use the old functions that force properties down the file hierarchy, such as include_directories. Instead, set properties on the targets via target_link_libraries and its siblings target_compile_definitions, target_include_directories and target_compile_options, and “inherit” those properties from other modules via target_link_libraries.
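
A minimal sketch of what this looks like in practice (target names are made up):

# the library publishes its include directory and a compile definition...
add_library(my_lib src/my_lib.cpp)
target_include_directories(my_lib PUBLIC include)
target_compile_definitions(my_lib PUBLIC MY_LIB_ENABLE_LOGGING)

# ...and the executable inherits both simply by linking against it
add_executable(my_app src/main.cpp)
target_link_libraries(my_app PRIVATE my_lib)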

Tip #2

always use find_package with REQUIRED

Sure, having optional dependencies is nice, but leaving out REQUIRED is not the way you want to do it. In the worst case, some of your features will just not work if those packages are not found, with no explanation whatsoever. Instead, use explicit feature toggles (e.g. via option()) that either skip the find_package call or use it with REQUIRED, so the user will know that another lib is needed for this feature.
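
A sketch of such an explicit feature toggle, assuming a hypothetical PNG export feature in a target called my_app:

# the user switches the feature on or off explicitly...
option(WITH_PNG_EXPORT "Build the PNG export feature (requires libpng)" ON)
if(WITH_PNG_EXPORT)
  # ...and if it is on, a missing libpng fails loudly instead of silently dropping the feature
  find_package(PNG REQUIRED)
  target_link_libraries(my_app PRIVATE PNG::PNG)
  target_compile_definitions(my_app PRIVATE WITH_PNG_EXPORT)
endif()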

Tip #3

follow the physical project structure

You want your build setup to be as straightforward as possible. One way to simplify it is to follow the file system and the artifact structure of your code. That way, you only have one structure to maintain. Use one “top level” file that does your global configuration, e.g. find_package calls and CPack configuration, and then only defers to subdirectories via add_subdirectory. Only for direct subdirectories, though: if you need extra levels, those levels should have their own CMake files. Then build exactly one artifact (e.g. add_executable or add_library) per leaf folder.
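
A top-level CMakeLists.txt following this scheme might look roughly like this (project and directory names are made up):

cmake_minimum_required(VERSION 3.10)
project(my_project)

# global configuration only
find_package(Threads REQUIRED)

# one artifact per leaf folder, defined in the respective subdirectory
add_subdirectory(my_lib)
add_subdirectory(my_app)
add_subdirectory(test)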

Tip #4

make install() an option()

It is often desirable to include other libraries directly in your build process. For example, we usually do this with googletest for our unit tests. However, if you do that and use your install target, it will also install the googletest headers. That is usually not what you want! Some libraries handle this automagically by only doing the install() calls when they are the top-level project. Similar to the find_package tip above, I like to do this with an option() for explicit user control!
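
A sketch of this pattern (option and target names are made up):

# only register install rules when the user asks for them
option(MY_PROJECT_INSTALL "Generate install targets for my_project" ON)
if(MY_PROJECT_INSTALL)
  install(TARGETS my_app my_lib
          RUNTIME DESTINATION bin
          LIBRARY DESTINATION lib
          ARCHIVE DESTINATION lib)
endif()

An embedding project that pulls this in as a subproject can then simply switch the option to OFF and keep its own install target clean.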

Generating done

That is it for today! I hope this helps and we will all see better CMake code in the future.