We are using Jenkins not only for our continuous integration needs but also for running deployments at the push of a button. Back in the dark times™ this often meant using Apache Ant in combination with JSch to copy the project's artifacts to some target machine and execute some remote commands over ssh.
After gathering some experience with Ansible and docker we took a new shot at the task and ended up with one-shot docker containers executing ansible playbooks. The general process stayed the same, but the new solution is more robust overall and easier to maintain.
In addition to the project- and deployment-process-specific simplifications, we can now use any jenkins slave with a docker installation and network access. No need for an Ant installation and a Java runtime anymore.
However, there are some quirks to overcome to get the whole thing running. Let us see how we can accomplish non-interactive (read: fully automated) deployments using the tools mentioned above.
The general process
The deployment job in Jenkins has to perform the following steps:
- Fetch the artifacts to deploy from somewhere, e.g. a release job
- Build the docker image provisioned with ansible
- Run a container of the image providing credentials and artifacts through the environment and/or volumes
The first step can easily be implemented using the Copy Artifact Plugin.
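If that plugin is not available on your Jenkins instance, a plain shell build step can fetch the artifact just as well. A minimal sketch, assuming a release job named app-release that archives target/app.war on a Jenkins instance at https://jenkins.example.com (job name, artifact path, URL and credentials are placeholders, not part of our setup):

# fetch the archived WAR from the release job into the artifact directory
mkdir -p artifact
curl -fsSL -u "${JOB_JENKINS_USER}:${JOB_JENKINS_TOKEN}" \
  "https://jenkins.example.com/job/app-release/lastSuccessfulBuild/artifact/target/app.war" \
  -o artifact/app.war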
The Dockerfile
You can use almost any linux base image for your deployment needs as long as the distribution comes with ansible. I chose Ubuntu 18.04 LTS because it is sufficiently up-to-date and we are quite familiar with it. In addition to ssh and ansible we only need the command for running the playbook. In our case the playbook and the ssh credentials reside in the deployment folder of our projects, the credentials of course encrypted in an ansible vault. To be able to connect to the target machine we need the vault password to decrypt the ssh password or private key. Therefore we inject the vault password as an environment variable when running the container.
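Encrypting the credentials is a one-time step performed on a developer machine before they are committed. A minimal sketch, assuming the host-specific folder layout described above (the host folder name is just a placeholder):

# encrypt the deploy key with ansible-vault; the command prompts for the vault password
ansible-vault encrypt deployment/myhost.example.com/credentials/deployer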
A simple Dockerfile may look as follows:
FROM ubuntu:18.04

RUN DEBIAN_FRONTEND=noninteractive apt-get update
RUN DEBIAN_FRONTEND=noninteractive apt-get -y dist-upgrade
RUN DEBIAN_FRONTEND=noninteractive apt-get -y install software-properties-common
RUN DEBIAN_FRONTEND=noninteractive apt-get update && apt-get -y install ansible ssh

SHELL ["/bin/bash", "-c"]

# A place for our vault password file
RUN mkdir /ansible/

# Setup work dir
WORKDIR /deployment
COPY deployment/${TARGET_HOST} .

# Decrypt the credentials, run the playbook against the target host and clean up afterwards
CMD echo -e "${VAULT_PASSWORD}" > /ansible/vault.access && \
    ansible-vault decrypt --vault-password-file=/ansible/vault.access ${TARGET_HOST}/credentials/deployer && \
    ansible-playbook -i ${TARGET_HOST}, -u root --key-file ${TARGET_HOST}/credentials/deployer ansible/deploy.yml && \
    ansible-vault encrypt --vault-password-file=/ansible/vault.access ${TARGET_HOST}/credentials/deployer && \
    rm -rf /ansible
Note that we create a temporary vault password file from the ${VAULT_PASSWORD} environment variable for ansible vault decryption. A second noteworthy detail is the ad-hoc inventory: the ${TARGET_HOST} environment variable terminated with a comma. Using this variable we also reference the correct credential files and other host-specific data files. That way the image will not contain any plain text credentials.
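Run by hand outside the container, such an invocation would look roughly like this (the host name is just an example):

# the trailing comma makes ansible treat the value as an inline list of hosts
# instead of a path to an inventory file
ansible-playbook -i app01.example.com, -u root --key-file app01.example.com/credentials/deployer ansible/deploy.yml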
The Ansible playbook
The playbook can be quite simple depending on your needs. In our case we deploy a WAR archive to a tomcat servlet container and make a backup copy of the old application first:
---
- hosts: all
  vars:
    artifact_name: app.war
    webapp_directory: /var/lib/tomcat8/webapps
  become: true
  tasks:
    - name: copy files to target machine
      copy: src={{ item }} dest=/tmp
      with_fileglob:
        - "/artifact/{{ artifact_name }}"
        - "../{{ inventory_hostname }}/context.xml"
    - name: stop our servlet container
      systemd:
        name: tomcat8
        state: stopped
    - name: make backup of old artifact
      command: mv "{{ webapp_directory }}/{{ artifact_name }}" "{{ webapp_directory }}/{{ artifact_name }}.bak"
    - name: delete exploded webapp
      file:
        path: "{{ webapp_directory }}/app"
        state: absent
    - name: mv artifact to webapp directory
      copy: src="/tmp/{{ artifact_name }}" dest="{{ webapp_directory }}" remote_src=yes
      notify: restart app-server
    - name: mv context file to container context
      copy: src=/tmp/context.xml dest=/etc/tomcat8/Catalina/localhost/app.xml remote_src=yes
      notify: restart app-server
  handlers:
    - name: restart app-server
      systemd:
        name: tomcat8
        state: started
The declarative syntax and the use of handlers make the process quite straightforward. Of note here is the - hosts: all line that works together with our ad-hoc inventory from the command line. That way we do not need an inventory file in our project repository or anywhere else; the host can be provided by the deployment job. That brings us to the Jenkins job that puts everything together.
The Jenkins job
As you probably already noticed in the parts above, there are several inputs required by the container and the ansible playbook:
- A volume with the artifacts to deploy
- A volume with host specific credentials and data files
- Vault credentials
- The target host(s)
We inject passwords using the EnvInject Plugin, which stores them encrypted and masks them in all the logs. I like to prefix such environment settings in Jenkins with JOB_ to denote their origin. After the copying build step, which puts the artifacts into the artifact directory, we can use a small shell script to build the image and run the container with all the needed stuff wired up. The script may look like the following:
docker build -t app-deploy -f deployment/docker/Dockerfile .

docker run --rm \
  -e VAULT_PASSWORD=${JOB_VAULT_PASSWORD} \
  -e TARGET_HOST=${JOB_TARGET_HOST} \
  -e ANSIBLE_HOST_KEY_CHECKING=False \
  -v $(pwd)/artifact:/artifact \
  app-deploy

DOCKER_RUN_RESULT=$?
exit $DOCKER_RUN_RESULT
We run the container with the --rm flag to delete it right after execution, leaving nothing behind. We also disable host key checking for ansible to keep the build non-interactive. The last two lines let the jenkins job report the result of the container execution.
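If you would rather not pass ANSIBLE_HOST_KEY_CHECKING through the environment on every run, the same setting can be baked into the image instead, for example with an extra RUN line in the Dockerfile (just a sketch, not part of the setup shown above):

# write a minimal ansible.cfg that disables host key checking for every ansible run in the image
printf '[defaults]\nhost_key_checking = False\n' > /etc/ansible/ansible.cfg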
Wrapping it up
Essentially we have a deployment consisting of three parts: a docker image definition of the required environment, an ansible playbook performing the real work, and a jenkins job with a small shell script wiring it all together so that it can be executed at the push of a button. We do not store any clear text credentials persistently in any of these places, so we are reasonably secure when running the deployment.