Setting Grails session timeout in production

Grails 3 was a great update to the framework and kept it up to date with modern requirements in web development. Modularization, profiles, and the revamped build system and configuration were all great changes that made working with Grails more productive and fun again.
I quite like the choice of YAML for the configuration settings because you can easily describe sections and hierarchies without much syntactic noise.

Unfortunately, there are some caveats. One of them went live and caused a (minor) irritation for our customer:

The session timeout was back to the 30-minute default and not prolonged to the one hour we all agreed upon some years (!) ago.

Investigating the cause

Our configuration in application.yml was correctly set to the desired one-hour timeout, and in development everything was working as expected. The catch is that the setting server.session.timeout is only applied to the embedded Tomcat. If your application is deployed to a standalone servlet container, this setting is ignored. Unfortunately, it is far from obvious which settings in application.yml apply in which situation.
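
For reference, the relevant part of our application.yml looks something like this (the concrete value is of course project-specific):

server:
    session:
        timeout: 3600  # one hour, specified in seconds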

In the case of a standalone servlet container you would just edit your application's web.xml and the container would use the setting there. While this would work, it is not very nice because you have two locations for one setting. In software development we call that duplication. What makes things worse is that there is no web.xml in our case! So what now?

The solution

We have two problems here:

  1. Providing the functionality our customer desires
  2. Removing the code duplication so that development and production work the same way

Our solution is to apply the setting from application.yml to the HTTP session of the request using an interceptor:

class SessionInterceptor {
    int order = -1000

    SessionInterceptor() {
        // apply this interceptor to every request
        matchAll()
    }

    boolean before() {
        // server.session.timeout is configured in seconds
        int sessionTimeout = grailsApplication.config.getProperty('server.session.timeout') as int
        log.info("Configured session timeout is: ${sessionTimeout}")
        // propagate the configured timeout to the container-managed session
        request.session?.setMaxInactiveInterval(sessionTimeout)
        true
    }
}

That way we use a single source of truth, namely the configuration in application.yml, both in development and production.


Ignoring YAGNI – 12 years later

Fourteen years ago, we started to build a distributed system to gather environmental data in an automated 24/7 fashion. Our development process was agile and made heavy use of short iterations (at least that is what they were considered back then; today they would count as normal-sized). So the system grew with many small new features and improvements, giving the customer immediate business value.

One part of the system was the task scheduler. Because the system had to run 24/7 and be mostly independent of human interaction, the task scheduler’s job was to launch different measurement processes at the right time. We had done extensive domain crunching and figured out that all tasks follow a rigid time regime like “start every 10 minutes” or “start every hour”, regardless of the processes’ runtime. This made the scheduler rather easy to develop. You should keep it simple, after all.

But another result of the domain crunching bothered us: The schedule of all tasks originated from the previous software system, built 30 years ago and definitely unfit for the modern software world. The schedules weren’t really rooted in the domain, they all had technical explanations like “the recording of the values is done sequentially and takes up to 8 minutes, we can’t record them more often than that”. For our project, the measurement hardware was changed, so our recording took a couple of milliseconds. We could store and display the values continuously, if the need arises.

So we discussed the required simplicity or complexity of the task scheduler with the customer and they seemed pleased with all the new possibilities. But they decided that the current schedules were sufficient and didn't need to be changed. We could go ahead and build our simple task scheduler.

And this is when we decided to abandon KISS and make the task scheduler more powerful than needed. "But you ain't going to need it!" was the enemy. Because we knew that the customer would inevitably come around and make use of their new possibilities. We knew that if we built the system with more complexity, we would be the heroes in a future time, wearing a smug smile and telling the customer: "We've already built this, you can use it right away". Oh, how glorious this prospect of the future shone! Just a few more thoughts going into the code and we would be set for a bright future.

Let me tell you a few details about those "few more thoughts", using the "every hour" task schedule as an example. Instead of hard-coding the schedule, we added a configuration file with a cron-like expression for the schedule. You could now leverage the power of cron expressions to design your schedule as you see fit. If you wanted to change the schedule from "every hour" to "every odd minute and when the pale moon rises", you could do so. The task scheduler had to interpret the configuration file and make sure that tasks don't pile up: If you schedule a task to run "every minute", but it takes two minutes to process, you've essentially built a time bomb for your system load. The scheduler had to make sure this couldn't happen.

But it doesn’t stop there. A lot of functionality, most of which wasn’t even present or outlined at the time of our decision, relies implicitly on that schedule. Two examples: There are manual operations that must not be performed during the execution of the task. The system goes into a “protected state” around the task execution. It disables these operations a few minutes before the scheduled execution and even some time afterwards. If you had a fixed schedule of “every hour”, you could even hard-code the protected timespan. With a possible dynamic schedule, you have to calculate your timespan based on the current schedule and warn your operator if it isn’t possible anymore to find a time slot to even perform the manual operation.
The second example is a functionality that supervises the completeness of the recorded data. The problem is: This functionality is on another computer (it’s a distributed system, remember?) that doesn’t know about the configuration files. To be able to scan the data archive and say “everything that should be there, is there”, the second computer needs to know about all the schedules of the first computers (there are many of them, recording their data on their own schedules and transferring it to the second computer). And if a schedule changes, the second computer needs to take the change into account and scan the data archive for two areas: one area with the old schedule and one area with the new schedule. Otherwise, there would be false alarms.

You can probably see that the one decision to make the task scheduler a little more complex and configurable than required had quite some impact on the complexity of other parts of the system. But this investment will be worth it as soon as the customer changes the schedule! The whole system is programmed, tested and documented to facilitate schedule changes. We are ready!

It's been over twelve years since we wrote the first line of code for the more complex implementation (I've checked the source control logs). The customer hasn't changed a single bit of the schedule yet. There are over twenty "first computers" and they all still run the same task schedule as initially planned. Our decision did nothing but add accidental complexity to the system. It probably introduced some bugs along the way, too. It certainly increased the required level of awareness (the "hurdle of understanding") during the development of features that are somewhat coupled with the task schedule.

In short: it's been a disaster. The smug smile we thought we'd wear has been replaced by a deep frown. Who wrote all that mess? And why? It wasn't the customer, it was us. We were never going to need it.

Integrating conan, CMake and Jenkins

In my last posts on conan, I explained how to start migrating your project to use a few simple conan libraries and then how to integrate a somewhat more complicated library with custom build steps.

Of course, you still want your library in CI. We previously advocated simply adding some dependencies to your source tree; in other cases, we provisioned our build systems with the right libraries on a system level (alternatively, using Docker). With conan, this is all quite different – we want to avoid setting up too many dependencies on our build systems. The fewer dependencies they have, the less likely they are to be used accidentally during compilation. This is crucial for the portability of your artifacts.

Setting up the build-systems

The build systems still have to be provisioned. You will at least need conan and your compiler suite installed. Whether to also install CMake is debatable, since the CMake plugin for Jenkins can take care of that.

Setting up the build job

The first thing you usually need to do is configure your remotes properly. One way to do this is the conan config install command, which can synchronize the remotes (or the whole conan configuration) from a folder, a zip file or a git repository. Since I like to have things readable in plain text in my repository, I opt to store my remotes in a specific folder. Create a new folder in your repository; I use ci/conan_config in this example. In it, place a remotes.txt like this:

bincrafters https://api.bintray.com/conan/bincrafters/public-conan True
conan-center https://conan.bintray.com True

Note that conan config install needs a whole folder; you cannot point it at just this file. Your first build step should then be to install these remotes:

conan config install ci/conan_config

Jenkins’ CMake for conan

The next step prepares for installing our dependencies. Depending on whether you're building some of those dependencies yourself (the --build option), you might want to have CMake available for conan to call. This is a problem when using the Jenkins CMake plugin, because it only provides cmake for its specific build steps, while conan simply uses the cmake executable from the PATH by default. If you're provisioning your build systems with CMake or not building any dependencies, you can skip this step.
One way to give conan access to the Jenkins CMake installation is to run a small CMake script via a "CMake/CPack/CTest execution" step and have it configure conan appropriately. Create a file ci/configure_for_conan.cmake:

# Point conan at the CMake executable that Jenkins provides for this build
execute_process(COMMAND conan config set general.conan_cmake_program=\"${CMAKE_COMMAND}\")

Create a new "CMake/CPack/CTest execution" step with the tool "CMake" and the arguments "-P ci/configure_for_conan.cmake". This will set up conan with the given CMake installation.

Install dependencies and build

Next, run the conan install command:

mkdir build && cd build
conan install .. --build missing

After that, you're ready to invoke cmake and the build tool with an additional "CMake Build" step. The build should now be up and running. But who am I kidding, the build is always red on the first try 😉
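
To recap, the shell part of such a job boils down to roughly the following sequence (the build directory name and the --build policy are just the choices from the snippets above – adjust them to your project):

# install the remotes checked into the repository
conan config install ci/conan_config

# (if needed, run the configure_for_conan.cmake step from above first)
# fetch and, where necessary, build the dependencies
mkdir -p build && cd build
conan install .. --build missing

# configure and build – in Jenkins this part is typically handled
# by the CMake plugin's "CMake Build" step
cmake ..
cmake --build .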

Oracle database date and time literals

For one of our projects I work with time series data stored in an Oracle database, so I write a lot of SQL queries involving dates and timestamps. Most Oracle SQL queries I came across online use the TO_DATE function to specify date and time literals within queries:

SELECT * FROM events WHERE created >= TO_DATE('2012-04-23 16:30:00', 'YYYY-MM-DD HH24:MI:SS')

So this is what I started using as well. Of course, this is very flexible because you can specify exactly the format you want to use. On the other hand, it is very verbose.

From other database systems like PostgreSQL I was used to specifying dates in queries as simple string literals:

SELECT * FROM events WHERE created BETWEEN '2018-01-01' AND '2018-01-31'

This doesn't work in Oracle, but I was happy to find out that Oracle supports short date and timestamp literals in another form:

SELECT * FROM events WHERE created BETWEEN DATE'2018-01-01' AND DATE'2018-01-31'

SELECT * FROM events WHERE created > TIMESTAMP'2012-04-23 16:30:00'

These date/time literals were introduced in Oracle 9i, which isn't exactly recent. However, since most online tutorials and examples seem to use the TO_DATE function, you may be happy to find out about this little convenience, just like me.

Using OpenVPN in an automated deployment

In my previous post I showed how to use Ansible in a Docker container to deploy a software release to some remote server and how to trigger the deployment using a Jenkins job.

But what can we do if the target server is not directly reachable over SSH?

Many organizations use a virtual private network (VPN) infrastructure to provide external parties access to their internal services. If there is such an infrastructure in place we can extend our deployment process to use OpenVPN and still work in an unattended fashion like before.

Adding OpenVPN to our container

To be able to use OpenVPN non-interactively in our container we need to add several elements:

  1. Install OpenVPN:
    RUN DEBIAN_FRONTEND=noninteractive apt-get update && apt-get -y install openvpn
  2. Create a file with the credentials using environment variables:
    CMD echo -e "${VPN_USER}\n${VPN_PASSWORD}" > /tmp/openvpn.access
  3. Connect with OpenVPN using an appropriate VPN configuration and wait for OpenVPN to establish the connection:
    openvpn --config /deployment/our-client-config.ovpn --auth-user-pass /tmp/openvpn.access --daemon && sleep 10

Putting it all together, our extended Dockerfile may look like this:

FROM ubuntu:18.04
RUN DEBIAN_FRONTEND=noninteractive apt-get update
RUN DEBIAN_FRONTEND=noninteractive apt-get -y dist-upgrade
RUN DEBIAN_FRONTEND=noninteractive apt-get -y install software-properties-common
RUN DEBIAN_FRONTEND=noninteractive apt-get update && apt-get -y install ansible ssh
RUN DEBIAN_FRONTEND=noninteractive apt-get update && apt-get -y install openvpn

SHELL ["/bin/bash", "-c"]
# A place for our vault password file
RUN mkdir /ansible/

# Setup work dir
WORKDIR /deployment

COPY deployment/${TARGET_HOST} .

# Deploy the proposal submission system using openvpn and ansible
CMD echo -e "${VAULT_PASSWORD}" > /ansible/vault.access && \
echo -e "${VPN_USER}\n${VPN_PASSWORD}" > /tmp/openvpn.access && \
ansible-vault decrypt --vault-password-file=/ansible/vault.access ${TARGET_HOST}/credentials/deployer && \
openvpn --config /deployment/our-client-config.ovpn --auth-user-pass /tmp/openvpn.access --daemon && \
sleep 10 && \
ansible-playbook -i ${TARGET_HOST}, -u root --key-file ${TARGET_HOST}/credentials/deployer ansible/deploy.yml && \
ansible-vault encrypt --vault-password-file=/ansible/vault.access ${TARGET_HOST}/credentials/deployer && \
rm -rf /ansible && \
rm /tmp/openvpn.access

As you can see, the CMD has become quite long and messy by now. In production we put the whole process of executing the Ansible and OpenVPN commands into a shell script that is part of our deployment infrastructure. That way we separate the preparation of the environment from the deployment steps themselves. The CMD then looks a bit friendlier:

CMD echo -e "${VPN_USER}\n${VPN_PASSWORD}" > /vpn/openvpn.access && \
echo -e "${VAULT_PASSWORD}" > /ansible/vault.access && \
chmod +x deploy.sh && \
./deploy.sh
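
Such a deploy.sh essentially contains the steps from the long CMD above, just moved into a script. A sketch with the paths from our example could look like this:

#!/bin/bash
set -e

# decrypt the SSH key for the deployment user
ansible-vault decrypt --vault-password-file=/ansible/vault.access ${TARGET_HOST}/credentials/deployer

# establish the VPN connection and give it some time to come up
openvpn --config /deployment/our-client-config.ovpn --auth-user-pass /vpn/openvpn.access --daemon
sleep 10

# run the actual deployment
ansible-playbook -i ${TARGET_HOST}, -u root --key-file ${TARGET_HOST}/credentials/deployer ansible/deploy.yml

# re-encrypt the key and remove the credentials from the container
ansible-vault encrypt --vault-password-file=/ansible/vault.access ${TARGET_HOST}/credentials/deployer
rm -rf /ansible
rm /vpn/openvpn.access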

As another aside, you may have to mess around with the nameserver configuration, depending on the OpenVPN configuration and other infrastructure details. I left that out because it seems specific to the setup with our customer.

Fortunately, there is nothing to do on the Ansible side, as the whole VPN layer should be completely transparent to the tools that use the network connection to do their job. However, we need some additional settings in our Jenkins job.

Adjusting the Jenkins Job

The Jenkins job needs the VPN credentials added and some additional parameters to docker run for the network tunneling to work inside the container. The credentials are simply another injected password, which we may call JOB_VPN_PASSWORD, and the full script may now look as follows:

docker build -t app-deploy -f deployment/docker/Dockerfile .
docker run --rm \
--cap-add=NET_ADMIN \
--device=/dev/net/tun \
-e VPN_USER=our-client-vpn-user \
-e VPN_PASSWORD=${JOB_VPN_PASSWORD} \
-e VAULT_PASSWORD=${JOB_VAULT_PASSWORD} \
-e TARGET_HOST=${JOB_TARGET_HOST} \
-e ANSIBLE_HOST_KEY_CHECKING=False \
-v `pwd`/artifact:/artifact \
app-deploy

DOCKER_RUN_RESULT=`echo $?`
exit $DOCKER_RUN_RESULT

Conclusion

Adding VPN support to your automated deployment is not that hard, but there are some details to be aware of:

  • OpenVPN needs the credentials in a file – similar to Ansible – to be able to run non-interactively
  • OpenVPN either stays in the foreground or, if you tell it to, daemonizes right away – it does not wait for the connection to be established. So you have to wait a sufficiently long time before proceeding with your deployment process
  • For OpenVPN to work inside a container docker needs some additional flags to allow proper tunneling of network connections
  • DNS resolution can be tricky depending on the actual infrastructure. If you experience problems, either tune your setup by adjusting name resolution in the container or access the target machines by IP address

After ironing out the gory details described above, we have a secure and convenient deployment process that saves a lot of time and nerves in the long run, because you can update the deployment at the press of a button.

Domain-aligned bugs


Imagine that you are a user of a typical enterprise software system that handles commercial products and their prices. There are different prices in the software that are somehow related to each other. There is the purchase price that indicates your cost if you buy the product. There is the retail price that gets listed in your price lists and is paid by your customers, should they buy the product. You have probably already figured out that the retail price should never be lower than the purchase price, because that would mean you lose money with every successful sale.

Let’s say that the enterprise software not only handles products, but also parts. Several parts combined, with some manufacturing effort, result in a product. Each part has a purchase price, the resulting product has a retail price. The retail price of the product should be higher than the sum of purchase prices of the parts. If it isn’t, you lose the costs of the manufacturing effort and some extra money with every successful sale.

If for any reason you cannot clearly estimate your manufacturing effort, the enterprise software has another input field for an amount of money that you can add to the sum of the parts’ costs. We call this field the “sales bonus”. So, if you sell a product made up of parts, your customer has to pay a price that consists at least of the retail prices of the parts and the sales bonus. Of course, your customer has an individual discount percentage that needs to be subtracted from the total price. Are you still following?

You are now thinking in the domain of price determination and financial mathematics. If you were the user of said enterprise software, you’d probably expect some bugs like these:

  • It is possible to enter a retail price lower than the purchase price
  • The price of products manufactured from parts isn’t calculated correctly
  • It is possible to enter a negative sales bonus
  • The total price with discount could be lower than the sum of purchase prices of the parts without a warning

All of them are bugs in the domain. All of them can be explained to a domain expert or a user with terms and concepts from the domain.

But what about the bug where you sell a product that consists of three parts, each with a retail price of 10 €, plus a sales bonus of 5 €? You want to create a quote for your customer and the price shows up as 34,99999999998 €. You are a bit bewildered and try to counteract the apparent rounding error by changing the sales bonus to 5,00000000002 €. After this change you get another crazy total price, and the prices in the database differ from what you entered, too. Everything seems to destabilize and deviate further and further from clear-cut prices.

As a programmer, you know what happened. You know what caused this effect of numerical instability: somebody stored monetary values in a floating point number. You know that this is a bad idea and you'd never do it. But this blog post isn't about you or what you should or shouldn't do. It is about the user, an expert in his domain, who stumbles over the bug as described and has to make a decision on how to fix it. This user cannot use any knowledge from the domain to even understand the mechanics of the bug. You, as the programmer, cannot explain this bug in terms of things the user already knows. You need to be vague ("the software doesn't store the exact values, just approximations") or introduce additional complexity ("we store this value by splitting it into a significand and multiplying it by a factor consisting of a fixed base and an exponent. We can omit the base and just store the significand and the exponent and express a very large numerical range in just a few bits. Think about how cool that is!").

Read the last explanation again, from the viewpoint of a salesman. We want to add some prices in the range of a few €, slap a moderate discount on top and call it a day. We don’t care about bits or exponential formulas. That is not part of our domain and it shouldn’t affect our domain or software that works in our domain. Confronting us with technical details reflects negatively on your ability to solve our problems. You seem to burden us with your problems in exchange.
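
For us programmers, on the other hand, the mechanics are easy to reproduce. A minimal snippet (C++ here; the effect is the same in any language that uses binary floating point) makes the imprecision visible:

#include <iomanip>
#include <iostream>

int main()
{
    // 0.1 and 0.2 have no exact binary floating point representation,
    // so their sum is not exactly 0.3
    std::cout << std::setprecision(17) << 0.1 + 0.2 << std::endl;
    // prints 0.30000000000000004
}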

As domain experts, we want only domain-aligned bugs.

Using protobuf with conan and CMake

In my last post, I showed how I got my feet wet while migrating the dependencies of my existing code-base to conan. The first major hurdle I saw coming when I started was adding something with a “special” build step, e.g. something like source-preprocessing. In my case, this was protobuf, where a special build-step converts .proto files to sources and headers.

In my previous solution, my devenv build scripts would install the protobuf compiler binary into my devenv's bin/ folder, which I then used to run my preprocessing. At first, it was not obvious how to do this with conan. It turns out that the lovely people at bincrafters made this pretty comfortable. conan_basic_setup() will add all required package paths to your CMAKE_MODULE_PATH, which you can use to include() some bundled CMake scripts that will either let you execute the protobuf compiler via a target or run protobuf_generate to automagically handle the preprocessing. It is probably worth noting that this really depends on how the package is made; conan does not have an official way to handle this.

Let’s start with some sample code – Person.proto, like the sample from the protobuf website:

message Person {
  required string name = 1;
  required int32 id = 2;
  optional string email = 3;
}

And some sample code that uses it:

#include "Person.pb.h"

#include <iostream>

int main(int argc, char** argv)
{
  Person message;
  message.set_name("Hello Protobuf");
  std::cout << message.name() << std::endl;
}

Again, we’re using the bincrafters repository for our dependencies in a conanfile.txt:

[requires]
protobuf/3.6.1@bincrafters/stable
protoc_installer/3.6.1@bincrafters/stable

[options]

[generators]
cmake

Now we just need to wire it all up in the CMakeLists.txt:

cmake_minimum_required(VERSION 3.0)
project(ProtobufTest)

include(${CMAKE_BINARY_DIR}/conanbuildinfo.cmake)
conan_basic_setup(TARGETS KEEP_RPATHS)

# This loads the cmake/protoc-config.cmake file
# from the protoc_installer dependency
include(cmake/protoc-config)

set(TARGET_NAME ProtobufSample)

# Just add the .proto files to the target
add_executable(${TARGET_NAME}
  Person.proto
  ProtobufTest.cpp
)

# Let this function do the magic
protobuf_generate(TARGET ${TARGET_NAME})

# Need to use protobuf, of course
target_link_libraries(${TARGET_NAME}
  PUBLIC CONAN_PKG::protobuf
)

# Make sure we can find the generated headers
target_include_directories(${TARGET_NAME}
  PUBLIC ${CMAKE_CURRENT_BINARY_DIR}
)

There you have it! Pretty neat, and all without a brittle find_package call.
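
For completeness, building this example locally is the usual conan/CMake dance – something along these lines (the executable ends up in bin/, which is where conan_basic_setup typically places runtime outputs):

mkdir build && cd build
conan install .. --build missing
cmake ..
cmake --build .
./bin/ProtobufSample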