Setting Grails session timeout in production

Grails 3 was a great update to the framework and kept it up to date with modern requirements in web development. Modularization, profiles and the revamped build system and configuration were all great changes that made working with Grails more productive and fun again.
I quite like the choice of YAML for the configuration settings because you can easily describe sections and hierarchies without much syntactic noise.

Unfortunately, there are some caveats. One of them went live and caused a (minor) irritation for our customer:

The session timeout was back to the 30-minute default and not prolonged to the one hour we had all agreed upon some years (!) ago.

Investigating the cause

Our configuration in application.yml was correctly set to the desired one-hour timeout, and in development everything worked as expected. The catch is that the setting server.session.timeout is only applied to the embedded Tomcat. If your application is deployed to a standalone servlet container, this setting is ignored. Unfortunately, it is far from obvious which settings in application.yml are used in which situation.
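For illustration, the relevant part of such an application.yml might look like this (the plain number is interpreted as seconds, so one hour is 3600):

server:
    session:
        timeout: 3600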

In the case of a standalone servlet container you would just edit your application's web.xml and the container would use the setting there (see the snippet below). While this would work, it is not very nice because you have two locations for one setting. In software development we call that duplication. What makes things worse is that there is no web.xml in our case! So what now?
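For completeness, this is what such a web.xml entry looks like – note that here the timeout is given in minutes, not seconds:

<session-config>
    <session-timeout>60</session-timeout>
</session-config>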

The solution

We have two problems here:

  1. Providing the functionality our customer desires
  2. Removing the code duplication so that development and production work the same way

Our solution is to apply the setting from application.yml to the HTTP session of the request using an interceptor:

import grails.core.GrailsApplication

class SessionInterceptor {
    int order = -1000

    // injected by Grails' dependency injection
    GrailsApplication grailsApplication

    SessionInterceptor() {
        // apply to every request
        matchAll()
    }

    boolean before() {
        // the configured value is in seconds, just like setMaxInactiveInterval() expects
        int sessionTimeout = grailsApplication.config.getProperty('server.session.timeout') as int
        log.info("Configured session timeout is: ${sessionTimeout}")
        request.session?.setMaxInactiveInterval(sessionTimeout)
        true
    }
}

That way we use a single source of truth, namely the configuration in application.yml, both in development and production.

 

Integrating conan, CMake and Jenkins

In my last posts on conan, I explained how to start migrating your project to use a few simple conan libraries and then how to integrate a somewhat more complicated library with custom build steps.

Of course, you still want your library in CI. We previously advocated simply adding some dependencies to your source tree, while in other cases we provisioned our build systems with the right libraries at the system level (or, alternatively, using Docker). With conan, this is all quite different – we want to avoid setting up too many dependencies on our build systems. The fewer dependencies they have, the less likely they are to be picked up accidentally during compilation. This is crucial for keeping your artifacts portable.

Setting up the build-systems

The build systems still have to be provisioned. You will at least need conan and your compiler suite installed. Whether to install CMake as well is debatable, since the CMake plugin for Jenkins can take care of that.
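Conan itself is distributed as a Python package, so one straightforward way to provision it (assuming Python and pip are available on the build machine) is:

pip install conan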

Setting up the build job

The first thing you usually need to do is configure your remotes properly. One way to do this is to use the conan config install command, which can synchronize remotes (or the whole conan config) from either a folder, a zip file or a git repository. Since I like to have things readable in plain text in my repository, I opt to store my remotes in a specific folder. Create a new folder in your repository – I use ci/conan_config in this example – and place a remotes.txt like this in it:

bincrafters https://api.bintray.com/conan/bincrafters/public-conan True
conan-center https://conan.bintray.com True

Note that conan config install needs a whole folder – you cannot point it at just this file. Your first command should then install these remotes:

conan config install ci/conan_config

Jenkins’ CMake for conan

The next step prepares for installing our dependencies. Depending on whether you're building some of those dependencies (the --build option), you might want to have CMake available for conan to call. This is a problem when using the Jenkins CMake plugin, because that only gives you cmake for its specific build steps, while conan simply uses the cmake executable by default. If you're provisioning your build systems with CMake or not building any dependencies, you can skip this step.
One way to give conan access to the Jenkins CMake installation is to run a small CMake script via a "CMake/CPack/CTest execution" step and have it configure conan appropriately. Create a file ci/configure_for_conan.cmake:

execute_process(COMMAND conan config set general.conan_cmake_program=\"${CMAKE_COMMAND}\")

Create a new "CMake/CPack/CTest execution" step with tool "CMake" and arguments "-P ci/configure_for_conan.cmake". This will set up conan with the given CMake installation.

Install dependencies and build

Next run the conan install command:

mkdir build && cd build
conan install .. --build missing

After that, you’re ready to invoke cmake and the build tool with an additional “CMake Build” step. The build should now be up and running. But who am I kidding, the build is always red on first try 😉
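To recap: if you would rather use plain shell steps instead of the plugin steps described above, the whole job essentially boils down to something like this (assuming conan and cmake are on the PATH):

conan config install ci/conan_config
mkdir -p build && cd build
conan install .. --build missing
cmake .. && cmake --build .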

Using protobuf with conan and CMake

In my last post, I showed how I got my feet wet while migrating the dependencies of my existing codebase to conan. The first major hurdle I saw coming when I started was adding something with a "special" build step, e.g. something like source preprocessing. In my case, this was protobuf, where a special build step converts .proto files to sources and headers.

In my previous solution, my devenv build scripts would install the protobuf converter binary to my devenv's bin/ folder, which I then used to run my preprocessing. At first, it was not obvious how to do this with conan. It turns out that the lovely people at bincrafters made this pretty comfortable. conan_basic_setup() will add all required package paths to your CMAKE_MODULE_PATH, which you can use to include() some bundled CMake scripts that either let you execute the protobuf compiler via a target or run protobuf_generate to automagically handle the preprocessing. It's probably worth noting that this really depends on how the package is made. Conan does not have an official way to handle this.

Let’s start with some sample code – Person.proto, like the sample from the protobuf website:

syntax = "proto2";

message Person {
  required string name = 1;
  required int32 id = 2;
  optional string email = 3;
}

And some sample code that uses it:

#include "Person.pb.h"

#include <iostream>

int main(int argc, char** argv)
{
  Person message;
  message.set_name("Hello Protobuf");
  std::cout << message.name() << std::endl;
}

Again, we’re using the bincrafters repository for our dependencies in a conanfile.txt:

[requires]
protobuf/3.6.1@bincrafters/stable
protoc_installer/3.6.1@bincrafters/stable

[options]

[generators]
cmake

Now we just need to wire it all up in the CMakeLists.txt:

cmake_minimum_required(VERSION 3.0)
project(ProtobufTest)

include(${CMAKE_BINARY_DIR}/conanbuildinfo.cmake)
conan_basic_setup(TARGETS KEEP_RPATHS)

# This loads the cmake/protoc-config.cmake file
# from the protoc_installer dependency
include(cmake/protoc-config)

set(TARGET_NAME ProtobufSample)

# Just add the .proto files to the target
add_executable(${TARGET_NAME}
  Person.proto
  ProtobufTest.cpp
)

# Let this function do the magic
protobuf_generate(TARGET ${TARGET_NAME})

# Need to use protobuf, of course
target_link_libraries(${TARGET_NAME}
  PUBLIC CONAN_PKG::protobuf
)

# Make sure we can find the generated headers
target_include_directories(${TARGET_NAME}
  PUBLIC ${CMAKE_CURRENT_BINARY_DIR}
)

There you have it! Pretty neat, and all without a brittle find_package call.

Migrating an existing C++ codebase to conan

This is a bit of a battle report of migrating the dependencies in my C++ projects to use the conan package manager.
In the past weeks I have started to use conan in half a dozen projects, both work and personal. Here are my experiences so far.

Before

The first real project I started with was my personal game project. The "before" setup used a mixture of techniques to handle dependencies and relied on CMake to do most of the heavy lifting.
Most dependencies reside in the "devenv", which is a separate CMake project that I use to build and bundle the dependencies in a specific installation folder. It uses ExternalProject_Add for most parts (e.g. Boost, SDL, Lua, curl and OpenSSL), add_subdirectory for a few others (pugixml and lz4) and just install(FILES...) for a few header-only libs like JSON for Modern C++, Catch2 and spdlog. It should be noted that there are relatively few interdependencies between the projects in there.
Because it is more convenient to update, I keep a few dependencies that I control myself directly in the source tree, either as git externals or just copies of the source files.
I try to keep usage of system dependencies to a minimum so that the resulting binary is more portable to the average gamer who does not want to know about libraries and dependencies and such nonsense. This setup has been mostly painless and working for my three platforms Windows, Linux and Mac – at least as long as I did not try to change it significantly.
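To give an idea of the mechanism, a devenv entry for such a dependency might look roughly like this (a simplified, hypothetical sketch – library, version and URL are just examples, not my actual script):

include(ExternalProject)

# Fetch, build and install one dependency into the devenv's install prefix
ExternalProject_Add(SDL2
  URL https://www.libsdl.org/release/SDL2-2.0.9.tar.gz
  CMAKE_ARGS -DCMAKE_INSTALL_PREFIX=${CMAKE_INSTALL_PREFIX}
)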

Baby steps

Since not all my dependencies are available on conan and small iterations are usually more successful, I decided to proceed by changing only a single dependency to conan. For this dependency, it's a good idea to pick something that does not have many compile-time options and is more or less platform-agnostic. So I opted for Boost over, e.g., SDL or wxWidgets. Boost was also one of the most painful dependencies to build, if only for the insane number of files it produces and the time it takes to copy those tens of thousands of files to the install location.

Getting started…

There are currently two popular variants of boost available through conan: the "normal" variant on conan's main repository/remote "conan-center", and a modular version on the bincrafters remote that splits boost into its component libraries, e.g. Boost.Filesystem. The modular version is more appealing conceptually, and I also had a better time getting it to work in my first tests, so I picked that. I did a quick grep for #include <boost/ through my code to get an initial guess of which boost libraries I needed and created a corresponding conanfile.txt in my project root.

[requires]
boost_filesystem/1.69.0@bincrafters/stable
boost_math/1.69.0@bincrafters/stable
boost_random/1.69.0@bincrafters/stable
boost_property_tree/1.69.0@bincrafters/stable
boost_assign/1.69.0@bincrafters/stable
boost_heap/1.69.0@bincrafters/stable
boost_optional/1.69.0@bincrafters/stable
boost_program_options/1.69.0@bincrafters/stable
boost_iostreams/1.69.0@bincrafters/stable
boost_system/1.69.0@bincrafters/stable

[options]
boost:shared=False

[generators]
cmake

Now conan plays really nicely with "single configuration generators" like the new CMake/Ninja support in VS2017 and onward. Basically, just cd into your build dir and call something like conan install .. -s build_type=Debug -s arch=x86 whenever you want to update dependencies. More info can be found in the official documentation. The workflow for CLion is essentially the same.
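With one build folder per configuration, updating the dependencies might look like this (folder names and settings are just an example):

mkdir -p build-debug build-release
(cd build-debug && conan install .. -s build_type=Debug -s arch=x86)
(cd build-release && conan install .. -s build_type=Release -s arch=x86)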

Using it in your build

After the last command, conan will download (or build) the dependencies and generate a file with all the corresponding paths.
To use it, include it from CMake like this:

include(${CMAKE_BINARY_DIR}/conanbuildinfo.cmake)
conan_basic_setup(TARGETS KEEP_RPATHS)

It will then provide targets for all the requested boost libraries that you can link to like this:

target_link_libraries(myTarget
  PUBLIC CONAN_PKG::boost_filesystem
)

I wanted to make sure that the compiler was using the new boost files and not the old ones. Because the generic include folder of my devenv was still on the compiler's include paths for all the other dependencies, I just renamed boost's header folder there on disk. After my first successful compile I felt confident enough to delete the old headers for good.

First problems

There was one major problem: some of my in-source dependencies expected to be pointed at boost via the CMake variables Boost_LIBRARY_DIRS and Boost_INCLUDE_DIR. I adapted their CMakeLists.txt to allow injecting appropriate targets instead. Not the cleanest solution, but it got my builds green again fast.
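One way such an injection can look is roughly this (a hypothetical sketch – the variable and target names are made up, not my actual code):

# The parent project may inject a boost target; when built standalone,
# fall back to the classic find_package lookup.
if(NOT DEFINED INJECTED_BOOST_TARGET)
  find_package(Boost REQUIRED COMPONENTS filesystem)
  set(INJECTED_BOOST_TARGET Boost::filesystem)
endif()

target_link_libraries(embedded_dependency
  PUBLIC ${INJECTED_BOOST_TARGET}
)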

There's still a lot to cover on this: the other platforms had their own quirks, and I migrated way more than just this first project. Also, there is still a way to go for a full migration of my game project. But more on that in my next blog post…

Java’s OptionalInt et al. versus Optional<T>

In Java 8 the Optional type was introduced to avoid the (ab)use of nullable types and null to indicate the absence of a value. It allows the programmer to clearly indicate whether the potential absence of a value is intentional or accidental.

Such option types, sometimes also called Maybe types, have been established in other programming languages, mostly in statically typed functional programming languages like ML and derivatives, but are also emerging in more mainstream languages like Swift.

Java's Optional type is, to put it mildly, not the most sophisticated implementation of this concept, mostly due to limitations of Java's existing type system. The Optional type is nullable itself, and it's not a sum type, so it has to rely on runtime exceptions to signal invalid access of a non-existent value – but it's still useful. Static analysers, usually built into IDEs, can do what the compiler doesn't and warn if the value is accessed without checking for its presence first.
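For illustration: accessing a non-existent value directly throws at runtime, so it is up to the caller to check first or to use one of the safe accessors:

Optional<String> name = Optional.empty();
String value = name.get();            // throws NoSuchElementException at runtime
String fallback = name.orElse("n/a"); // safe access with a default value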

The Optional type suffers from another limitation of Java’s type system: the fact that primitive types like int, long, double etc. and reference types, derived from Object, aren’t unified in a single type hierarchy. Related to that, primitive types can’t be used as generic type parameters in Java. The language works around this with additional boxed types like Integer, Long and Double for each primitive type.

When the stream API and the Optional type were introduced in Java 8, those primitive types once again got special treatment: there's not just Stream<T>, but also IntStream, LongStream and DoubleStream; there's not just Optional<T>, but also OptionalInt, OptionalLong and OptionalDouble; and the same goes for consumers, suppliers, predicates and functions.

This was done to avoid boxing and unboxing, but it also makes these types unpleasant to use. What's worse is that the Optional variants for the primitive types don't offer the same functionality as Optional<T>: they are lacking the filter, map and flatMap methods as well as the ofNullable factory method. All in all they are less useful than the real Optional, and there's no convenient way to convert back and forth between, for example, an OptionalInt and an Optional<Integer>.
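If you need to cross that boundary, you have to write the conversion yourself, for example with small helper methods like these (class and method names are of course just an example):

import java.util.Optional;
import java.util.OptionalInt;

class OptionalConversions {
    // OptionalInt -> Optional<Integer>
    static Optional<Integer> boxed(OptionalInt value) {
        return value.isPresent() ? Optional.of(value.getAsInt()) : Optional.empty();
    }

    // Optional<Integer> -> OptionalInt
    static OptionalInt unboxed(Optional<Integer> value) {
        return value.map(OptionalInt::of).orElseGet(OptionalInt::empty);
    }
}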

The above-mentioned annoyances are the reason why we prefer the generic variant over the special ones for the primitive types by default. Hopefully a future Java release will mitigate this dichotomy between those types, at least by adding the missing methods, but we are not aware of any plans for this yet.

Using WPF-Toolkits CheckComboBox with Data-Binding

Xceed's WPF Toolkit is a popular extension to the standard components offered by Microsoft's WPF. One fancy control that I have been using lately is the CheckComboBox, which is a ComboBox that shows a list of items and checkboxes when opened and a list of selected items when closed. It is great, for example, for selecting filter options from smaller sets.
However, it took me a little while to get it all up and running with DataBinding, so I am going to walk you through it. For reference, I'm starting with a .NET 4.6.1 WPF App in Visual Studio 2017.

First you have to install Extended.Wpf.Toolkit, which I am doing via VS’s built-in package manager. To actually use the control, I am adding an XML namespace into my MainWindow’s XAML:

xmlns:xctk="http://schemas.xceed.com/wpf/xaml/toolkit"

Then I’m adding the control in a simple StackPanel, while already adding DataBindings:

<xctk:CheckComboBox
  ItemsSource="{Binding Path=Options}"
  DisplayMemberPath="Name"
  SelectedMemberPath="Selected"/>

This means that my control will look at a collection named "Options" in my view-model, using its elements' "Name" property for display and their "Selected" property for the checkmark. If you run the program at this point, you should be able to see an empty CheckComboBox, albeit not nicely laid out yet.

Now it’s time to create the view model. Let’s start with a small class-let to represent our items:

class Item
{
  public string Name { get; set; }
  public bool Selected { get; set; }
}

As you can see, the names match what we set for DisplayMemberPath and SelectedMemberPath in the XAML. Now for the ViewModel class:

class ViewModel
{
  public ViewModel()
  {
    var languages = new string[]
    {
      "C", "C#", "C++", "D", "Java",
      "Rust", "Python", "ES6"
    };
    
    Options = new List<Item>();
    foreach (var language in languages)
    {
      Options.Add(new Item {
          Name = language,
          Selected = true });
    }
  }
  
  public List<Item> Options { get; set; }
}

If you run it at this point, you should be able to see an all-selected list of programming languages in the drop-down. But it is lacking a crucial detail: it is not observable, meaning the component will not be notified if the data in the view-model is changed by other means. To make sure it is, both the Item and the list of Items have to become observable. For the Item, that means implementing the INotifyPropertyChanged interface: you have to fire a specific event, with the name of the changed property in it, whenever a property changes.

Let’s do that for the Item first:

using System.ComponentModel;

class Item : INotifyPropertyChanged
{
  private bool _selected;
  private string _name;

  public string Name
  {
    get => _name;
    set
    {
      _name = value;
      EmitChange(nameof(Name));
    }
  }

  public bool Selected
  {
    get => _selected;
    set
    {
      _selected = value;
      EmitChange(nameof(Selected));
    }
  }

  private void EmitChange(params string[] names)
  {
    if (PropertyChanged == null)
      return;
    foreach (var name in names)
      PropertyChanged(this,
        new PropertyChangedEventArgs(name));
  }

  public event PropertyChangedEventHandler PropertyChanged;
}

That got bigger! But it’s not a lot of meat really. For the Item list, we can just use ObservableCollection instead of List:

public ObservableCollection<Item> Options {get; set;}

That's it. Two-way data binding is set up for the item collection: you can now change the view-model and have the component react to it, and also react to changes from the component by hooking into the property setters.
You could also implement INotifyPropertyChanged for the ViewModel itself, if you intend to swap in new ObservableCollections, but that is not necessary for this example.

Configurable React backend in deployment

In my last post I explained how to make your React app configurable, using the backend endpoint as an example. What I did not make clear is that the approach depicted there provides build-time configurability.

If you want deploy-time or runtime configurability, the simplest approach is to provide global variables in your index.html like so:

<!DOCTYPE html>
<html lang="en">
  <head>
    <script>
      window.REACT_APP_BACKEND_API_BASE_URL= 'http://some.other.server:5000';
      window.APPLICATION_CONFIGURATION = {
        settingA: 'aValue',
        anotherSetting: 'anotherValue'
      };
    </script>
  </head>
  <body>
    <noscript>
      You need to enable JavaScript to run this app.
    </noscript>
    <div id="root"></div>
  </body>
</html>

We use (or activate) this configuration similarly to the build-time approach with .env files:

// If we have a differing backend configured, replace the global fetch()
// instead of process.env.REACT_APP_BACKEND_API_BASE_URL
// we now use window.REACT_APP_BACKEND_API_BASE_URL
if (window.REACT_APP_BACKEND_API_BASE_URL !== undefined
    && window.REACT_APP_BACKEND_API_BASE_URL !== '') {
  applyBaseUrlToFetch(window.REACT_APP_BACKEND_API_BASE_URL);
}
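The applyBaseUrlToFetch function comes from the previous post. As a rough idea, it wraps the global fetch() so that relative API paths are prefixed with the configured base URL – a simplified sketch, not the exact implementation from that post:

function applyBaseUrlToFetch(baseUrl) {
  const originalFetch = window.fetch.bind(window);
  // Replace the global fetch() so relative API paths hit the configured backend
  window.fetch = (resource, init) => {
    const url = typeof resource === 'string' && resource.startsWith('/')
      ? baseUrl + resource
      : resource;
    return originalFetch(url, init);
  };
}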

That way an automated process or a human administrator can deploy the same artifact to different servers with customized settings. This approach is briefly explained in the create-react-app documentation. In addition, a server-side application could replace placeholders dynamically in the HTML file, e.g. with data from a configuration database.

I personally like this approach because it allows us to use the same build artifact for internal testing, staging systems and production at the client's site. It also allows the client to do some basic configuration themselves.