Every Unit Test Is a Stage Play – Part II

In the first part of this series, I talked about describing unit tests with the metaphor of a stage play that tells short stories about your system.

This metaphor holds water in nearly every aspect of unit testing and guides my approach, from each single line of code to the whole concept of test coverage. In this series, I’m focusing on one aspect in each part.

Today, we look at the backdrop.

We learnt from the first part that most unit tests are performed by four roles that appear on stage. In a theater, the stage is oftentimes decorated by additional items that facilitate the story. This is the backdrop (or the coulisse) of the play. We have the same thing in unit tests.

In a unit test, the stage is the code inside the test method:

@Test
public void rounds_up_to_the_next_decimal_power() {
    final Configuration given = new Configuration(
        StringVirtualFile.createFileFromContent(
            "report.config",
	    "scale.maximum=2E5"
	)
    );
    final ReportConfiguration target = new ReportConfiguration(
        given
        SuffixProvider.none
    );
    final Optional<Double> actual = target.scaleMaximum();
    final double expected = 1E6;
    assertThat(actual).contains(expected);
}

A good director is very picky about every detail that appears on stage. There should be no incidental item visible to the audience. In the case of a unit test, the audience is the next developer who reads the test code.

The stage doesn’t need to be devoid of “extras” and furniture, but it should be limited to the essentials. A theater play isn’t a movie set, where eye candy is seen as something positive. During the play, the audience should recognize the actors (and their roles) easily and without having to search through the clutter.

So we need to do two things: Declutter the stage and identify the extras.

Decluttering the stage

In our example above, there is a mismatch between the code required to instantiate the given role and the rest of the test. If this were a real play, half the time would be spent on an extra that doesn’t even speak a line. It is implied that the target gets some information from the given, but it isn’t shown. We need to remove details from the stage that aren’t even relevant to the story, but are introduced just as elaborately as the essential things. It might be fun and suspenseful to guess whether the “report.config” detail is more meaningful than the “scale.maximum” setting, but unit test stories are not meant to be mysterious or even entertaining. They are meant to inform the audience about a little fact of the tested system. There will be thousands of little stories about the system. Make them trivial to read and easy to understand.

We need to move the stage props off the stage:

@Test
void rounds_up_to_the_next_decimal_power() {
    Configuration given = givenScaleMaximumOf("2E5");
    final ReportConfiguration target = new ReportConfiguration(
        given,
        irrelevant()
    );
    final Optional<Double> actual = target.scaleMaximum();
    final double expected = 1E6;
    assertThat(actual).contains(expected);
}

private Configuration givenScaleMaximumOf(
    String scaleMaximum
) {
    return new Configuration(
        StringVirtualFile.createFileFromContent(
            "report.config",
            "scale.maximum=" + scaleMaximum
        )
    );
}

private SuffixProvider irrelevant() {
    return SuffixProvider.none;
}

By moving the initialization code of the Configuration object to a new private method, we employ the storytelling device of “conveniently prepared circumstances”. The private method is moved to the “lower decks” or the “backstage” area of our test class. “The stage is on top” might be our motto. Everything private or not annotated with @Test should not be the first thing the audience sees and should not be required reading.

Notice how I introduced a parameter to the new “givenScaleMaximumOf” method in order to keep the necessary test details on stage. The audience should not have to look behind the curtains to gather important information about the story. I made the parameter a string (and not a double or integer) because it is just a prop. The story doesn’t benefit from it being typed correctly. And if you look back, it was a string before, too.

Identifying the extras

I’ve also extracted the “magic number” or “silent extra” SuffixProvider.none into its own method. This method adds nothing but a name, so the story conveys the meaning of the value instead of the value itself. If I were a director in a theater, this actor would get a plain and bland costume, in contrast to the brightly colored main roles. Maybe the stage lighting would illuminate the main roles and keep the extra in the shadows, too.

Now, the focus of our test method is back on the main story. The attention of our audience will be on the stage and it will not be burdened by irrelevant details. We will even label props as dispensable if they are.

Keep your stages free from clutter and the eyes of your audience on the story. Test code is boring by nature. It doesn’t have to be plodding on top of that.

Epilogue

This is the second part of a series. All parts are linked below:

Every Unit Test Is a Stage Play – Part I

At the last dev brunch, I got a recommendation for a talk that tries to explain functional programming differently. What really got me was the effectiveness of the changed vocabulary. I’ve seen this before, in the old talk about test driven development and behaviour driven development. But in my head, I think about unit tests with another overarching metaphor that I’m trying to explain in this blog post series:

Every unit test is a stage play that tells a short story about your system.

And this metaphor really guides my approach to nearly any aspect of unit testing, from each single line of code to the whole concept of test coverage. So I’m breaking my explanation into (at least) five parts, focusing on one aspect in each part.

Today, we look at the actors.

In each classic play, there are well-known roles that can be played by nearly any human, but always stay the same role. There’s the hero, the (comedic) sidekick and of course, the villain or antagonist. In every show of Romeo and Juliet, there will be a Romeo. It might not be the most convincing Romeo ever, but the role stays the same, no matter the cast.

The same thing is true for every well-formed unit test. There are four roles that always appear on stage:

  • target: This is the object under test or the code under test if you don’t use objects. The target is probably different for every unit test you write, but the role is always present. I’ve seen it being called “cut” for “code under test”, but I prefer “target”. If you see a reference named “target” in my test code, you can be sure about the role it plays in the story.
  • actual: If you can design your code to adhere to the simple “parameters in, result out” call pattern, the “result out” is the “actual”. It is what your target produced when challenged by the specific parameters of your test. One trick to testable code is to design the “actual” role as being quite simple and “flat”.
  • expected: This might be the closest thing to an antagonist in your play. The “expected” role is filled with the value (or values) that your “actual” is measured against. If your “actual” is simple, your “expected” will be simple, too. If your “actual” is too complex, the “expected” role will be overbearing. In any case, the “expected” role is what drives your assertions.
  • given: Our hero, the “target”, is often dependent on entry parameters or secondary objects (mocked or not). These sidekicks are part of the “given” role. You might think of the “given-when-then” storytelling structure of behaviour driven development for the name. If you strive for a simple code structure, the required “given” in your unit test should be manageable.

As you can see, the story of a typical unit test is always the same: The target, with the help of the given, produces an actual that competes against the expected.

If this story has a happy ending, your test runs green. If the actual fails the expectation, your test runs red. If the target fails to produce an actual at all (think of an exception or error), your whole play falls apart and the test runs red.

Enough theory, let’s look at a unit test that uses the four roles:

@Test
public void rounds_up_to_the_next_decimal_power() {
    final Configuration given = new Configuration(
        StringVirtualFile.createFileFromContent(
            "report.config",
	    "scale.maximum=2E5"
	)
    );
    final ReportConfiguration target = new ReportConfiguration(
        given
        SuffixProvider.none
    );
    final Optional<Double> actual = target.scaleMaximum();
    final double expected = 1E6;
    assertThat(actual).contains(expected);
}

I’ve highlighted the roles for better visibility. Note that for a role to appear in the play, it doesn’t really have to be named explicitly. Most of the time, the last two lines would be collated into one:

assertThat(actual).contains(1E6);

You can still see the “expected” role play its part, but not as prominently as before.

You also probably saw the extra “given” that wasn’t highlighted:

SuffixProvider.none

It might be relevant to the story or really be an uncredited extra that is not crucial in the target’s journey to produce the correct actual. If that’s the case, it seems appropriate not to name it. We will learn about techniques that I use to make these extras more nondescript in a later part. Right now, we can differentiate between main roles that are named and secondary roles that are just there, as part of the scenery. Just don’t fool your audience by having an unnamed actor contribute an important piece to the story’s success. That might be a cool plot twist, but I’m not here to be surprised.

Let your tests perform boring plays, but lots of them.

By using the four roles of the test play, you make it clear to the reader (your real audience) what to expect from your test code parts. Don’t name irrelevant test code parts, and only omit the role names if there are no extras on stage.

Your audience will still find your play boring (that’s the fate of nearly all test code), but it won’t feel disregarded or, even worse, deceived.

Epilogue

This is the first part of a series. All parts are linked below:

The algorithm in an algorithm – Builder design pattern

In the following blog post, I would like to explain the builder design pattern to you, why it is an algorithm within an algorithm, and what advantages result from this.

General

The builder is a creational design pattern. It separates the construction of a complex object from its representation, so that the same construction process can create different representations.

The design pattern consists of a director, the builder interface, and concrete builder implementations. The director is responsible for the abstract construction of the product and has a defined interface with the builder to pass the construction instructions. The concrete builders then build the concrete product according to the instructions and can also return the finished product.

So in the end, the director defines its own little programming language inside the program, in which the construction instructions can be written down as an algorithm. The builder then executes that algorithm. So we have a program in the program, an algorithm in the algorithm. Crazy!

Example cake recipe

We know such procedures from real life. For example, from the kitchen. When you bake a cake, you take your yellow mixing bowl, the ingredients, and the blue mixer and make the dough. Very concrete.

But now, if someone asks for the recipe, we abstract it away from our concrete equipment into general instructions. There, it only says that you mix the ingredients; your yellow mixing bowl and blue mixer are not mentioned. So someone else can bake the cake in their own kitchen with their own equipment. Should your blue mixer ever fail, you can easily carry out the recipe with a whisk or with the new food processor.

Example file generation

An example from programming is the generation of a file, for example a PDF certificate. If you program everything directly against PDFBox, it works at first. But if you ever want to use a different library, or if you also want the certificate as a plain text document or an image, you need to rewrite everything.

With the design pattern, you would have an algorithm that says: I want “certificate” as a title, then a dividing line, then a table and then this paragraph. Exactly how this will be implemented is not specified. The PDFBox builder takes these instructions and creates the file with its own library-specific commands.

If the library or file type changes, only one new builder needs to be written. For example, a text file builder, an image builder or an OpenPDF builder. The logic of how the certificate should look at the end remains unchanged.
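
To make the “algorithm in the algorithm” tangible, here is a minimal Java sketch of the certificate example. All class and method names are invented for illustration, and the plain text builder stands in for the library-specific ones (a PDFBox or OpenPDF builder would implement the same interface with its own commands), so treat it as a sketch rather than a finished implementation:

import java.nio.charset.StandardCharsets;

// The builder interface: the director's little "programming language".
interface CertificateBuilder {
    void addTitle(String title);
    void addDividingLine();
    void addTable(String[][] rows);
    void addParagraph(String text);
    byte[] getResult();
}

// The director: the abstract construction algorithm, independent of any library.
class CertificateDirector {
    byte[] construct(CertificateBuilder builder, String participant) {
        builder.addTitle("Certificate");
        builder.addDividingLine();
        builder.addTable(new String[][] { { "Participant", participant } });
        builder.addParagraph("Congratulations on completing the course.");
        return builder.getResult();
    }
}

// One concrete builder: renders the instructions as plain text.
class PlainTextCertificateBuilder implements CertificateBuilder {
    private final StringBuilder text = new StringBuilder();

    @Override
    public void addTitle(String title) {
        text.append(title.toUpperCase()).append(System.lineSeparator());
    }

    @Override
    public void addDividingLine() {
        text.append("----------------------------------------").append(System.lineSeparator());
    }

    @Override
    public void addTable(String[][] rows) {
        for (String[] row : rows) {
            text.append(String.join(" | ", row)).append(System.lineSeparator());
        }
    }

    @Override
    public void addParagraph(String paragraph) {
        text.append(paragraph).append(System.lineSeparator());
    }

    @Override
    public byte[] getResult() {
        return text.toString().getBytes(StandardCharsets.UTF_8);
    }
}

A hypothetical PdfBoxCertificateBuilder or image builder would plug into the exact same call, for example new CertificateDirector().construct(new PlainTextCertificateBuilder(), "Jane Doe"), so the construction algorithm never changes when the output format does.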

Conclusion

In the end, separating the abstract construction from the concrete production offers some advantages. The program becomes more extensible and easier to modify, and it complies with the single responsibility principle. The disadvantage is a close coupling between the product, the concrete builder, and the classes involved in the construction, making it difficult to change the basic process.

Unexpected: Web Sockets close when downloading a file

The browsers work in mysterious ways.

That is, they mostly have certain reasons for behaving the way they do, but many of the details are too intricate to have been explicitly defined in some specification (…yet).

Recently, we came across a new problem that was surprisingly non-googleable.

Many of our interactive Web GUIs are based on Web Sockets for real-time updates, as is usual. Now, one live system showed a strange effect: our Web Socket always broke down when the user clicked on an <a>...</a> link to download a file.

This was strange for multiple reasons, but it boiled down to:

  • It happened under the current Firefox, but not under Chrome
  • It did not happen when the link had target = "_blank" (i.e. the link was opening in a new window)
  • The Web Socket closed with the “going away” status code 1001, indicating no further error

And to boil it down even further, the solution was to make sure to include the download attribute on this particular <a> element.

The web socket stayed open for both browsers in both of these cases:

<a href="..." target="_blank" rel="noopener noreferrer">
  This leaves the Web Socket intact
</a>

<a href="..." download>
  This also leaves the Web Socket intact
</a>

<a href="...">
  This might close the Web Socket, or not, just as the browser feels today
</a>

(For the rel="noopener noreferrer", see also here.)

The data from the href endpoint was served with the corresponding Content-Disposition and Content-Type HTTP headers. For Chrome, this is enough to identify that link as a download. Firefox was not so sure; it believed that you were actually “going away”.
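
For illustration, such a download endpoint could look like the following sketch. It uses Javalin only because that is the web server used elsewhere on this blog; the route, file name and content type are made up for this example:

import io.javalin.Javalin;
import java.nio.file.Files;
import java.nio.file.Path;

public class DownloadExample {
    public static void main(String[] args) {
        Javalin.create()
            .get("/report", ctx -> {
                // Content-Type and Content-Disposition mark the response as a file download.
                byte[] pdf = Files.readAllBytes(Path.of("report.pdf"));
                ctx.contentType("application/pdf");
                ctx.header("Content-Disposition", "attachment; filename=\"report.pdf\"");
                ctx.result(pdf);
            })
            .start(7070);
    }
}

With headers like these, Chrome already treated the link as a download; Firefox additionally wanted the download attribute (or target="_blank") on the <a> element, as shown above.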

Efficient integer powers of floating-point numbers in C++

Given a floating-point number x, it is quite easy to square it: x = x * x;, or x *= x;. Similarly, to find its cube, you can use x = x * x * x;.

However, when raising it to the 4th power, things get more interesting: there’s the naive way, x = x * x * x * x;, and the slightly obscure way x *= x; x *= x;, which saves a multiplication.

When raising to the 8th power, the naive way really loses its appeal: x = x * x * x * x * x * x * x * x; versus x *= x; x *= x; x *= x;, that’s 7 multiplications versus just 3. This process can easily be extended for raising a number to any power-of-two N, and will only use O(log(N)) multiplications.

The algorithm can also easily be extended to work with any integer power. This works by decomposing the exponent into a sum of powers of two, which is exactly the binary representation so readily available on any computer. For example, let us try x to the power of 20. That’s 16+4, i.e. 10100 in binary.

x *= x; // x is the original x^2 after this
x *= x; // x is the original x^4 after this
result = x;
x *= x; // x is the original x^8 after this
x *= x; // x is the original x^16 after this
result *= x;

Now let us throw this into some C++ code, with the power being a constant. That way, the optimizer can take out all the loops and generate just the optimal sequence of multiplications when the power is known at compile time.

template <unsigned int y> float nth_power(float x)
{
  auto p = y;
  // Seed the result with x if the lowest bit of the exponent is set.
  auto result = ((p & 1) != 0) ? x : 1.f;
  while(p > 0)
  {
    x *= x;            // x now holds the next power-of-two power of the original x
    p = p >> 1;
    if ((p & 1) != 0)  // multiply it in if the corresponding exponent bit is set
      result *= x;
  }

  return result;
}

Interestingly, the big compilers do a very different job optimizing this. GCC optimizes out the loops with -O2 exactly up to nth_power<15>, but continues to do so with -O3 on higher powers. clang reliably takes out the loops even with just -O2. MSVC doesn’t seem to eliminate the loops at all, nor does it remove the multiplication with 1.f if the lowest bit is not set. Let me know if you find an implementation that MSVC can optimize! All tested on the compiler explorer godbolt.org.

Defeating TimescaleDB or shooting yourself in the foot

Today, almost everything produces data periodically: a car, cloud-connected smart-home solutions, your “smart” refrigerator, and you yourself with a fitness tracker. This data has to be stored in a certain way to be usable. We want fast access, beautiful charts and different aggregations over time.

There are several options, both free and commercial, for storing such time-series data, like InfluxDB, Prometheus or TimescaleDB. They are optimised for this kind of data, taking advantage of several time-series properties:

  • Storing is (mostly) append-only
  • Data points have a (strict) ordering in time
  • Data points often have fixed and variable meta data in addition to the data value itself

In one of our projects we have to store large quantities of data every 200 ms. Knowing PostgreSQL as a powerful relational database management system, we opted for TimescaleDB, which promises time-series features for an SQL database.

Ingestion of the data, probably the biggest problem for traditional (SQL) databases, worked as expected from the beginning. We were able to insert new data at a constant rate without degrading performance.

The problem

However, after some time we ran into severe performance problems when querying the data…

So one of the key points of using a time-series database strangely did not work for us… Fortunately, since Timescale is only an extension to PostgreSQL, we could just use explain/explain analyse on our slow queries to find out what was going on.

It directly showed us that *all* chunks of our hypertable were queried instead of only the ones containing the data we were looking for. Checking our setup and hypertable definition showed that Timescale itself was working as expected and chunking the data correctly on the time axis.

After a bit more analysis and thought it was clear: the problem was our query! We used something like start_time <= to AND from <= end_time in our where clause to denote the time interval containing the requested data. Here start_time is the partition column of our hypertable.

This way we were forcing the database to look at all chunks, from the very first one up to our end timestamp.

The solution

Ok, so we have to reformulate our where clause so that only the relevant chunks are queried. Timescale can easily do this when we use something like start_time >= from AND start_time < to, where from and to are the start and end of our desired interval. That way usually only one or a few chunks of the hypertable are searched and everything is lightning-fast, even with billions of rows in our hypertable.

Of course, the results of our two queries are not 100% the same semantically. But you can easily achieve the desired results by correctly calculating the start and end of the time-series data to fetch.
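
To illustrate the difference in code, here is a minimal JDBC sketch of the reformulated query. The hypertable name measurements and its columns are invented for this example; the important part is that the where clause only constrains the partition column start_time:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Timestamp;
import java.time.Instant;

public class ChunkFriendlyQuery {
    // Constraining only the partition column lets TimescaleDB exclude all chunks
    // outside of [from, to) instead of scanning the whole hypertable.
    public static void printValues(Connection connection, Instant from, Instant to) throws SQLException {
        String sql = "SELECT start_time, value FROM measurements"
                + " WHERE start_time >= ? AND start_time < ?"
                + " ORDER BY start_time";
        try (PreparedStatement statement = connection.prepareStatement(sql)) {
            statement.setTimestamp(1, Timestamp.from(from));
            statement.setTimestamp(2, Timestamp.from(to));
            try (ResultSet rows = statement.executeQuery()) {
                while (rows.next()) {
                    System.out.println(rows.getTimestamp("start_time") + ": " + rows.getDouble("value"));
                }
            }
        }
    }
}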

Conclusion

Time-series databases are powerful tools for today’s data requirements. As with other tools, you need some understanding of the underlying concepts, and sometimes even of the implementation, to not accidentally defeat the purpose and features of the tool.

Used in the correct way, they enable you to work with large amounts of data in the performant way today’s users expect. We had similar experiences with full-text search engines like Solr and Elasticsearch, too.

Regular expressions in JavaScript

In one of our applications, users can maintain info button texts themselves. For this purpose, they can insert the desired info button text into a text field when editing. The end user then sees the text as an HTML element.

Now, for better structuring, the customer wants to create lists inside the text field. So there was a need to wrap lines beginning with a hyphen in <li></li> HTML tags.

I used JavaScript to implement this. It was my first use of regular expressions in JavaScript, so I had to learn their language-specific quirks. In the following article, I explain the general syntax and my solution.

General syntax

For the replacement, you can either specify a string to search for or a regular expression. To indicate that it is a regular expression, the expression is enclosed in slashes.

let searchString = "Test";
let searchRegex = /Test/;

It is also possible to put individual parts of the regular expression in brackets and then use them in the replacement part with $1, $2, etc.

let hello = "Hello Tom";
let simpleBye = hello.replace(/Hello/, "Bye");    
//Bye Tom
let bye = hello.replace(/Hello (.*)/, "Bye $1!"); 
//Bye Tom!

In general, replace replaces only the first match, while replaceAll replaces all occurrences. However, this distinction only applies to plain string search values. With regular expressions, modifiers decide whether all matches are found and replaced; replaceAll even requires the global modifier when it is called with a regular expression. To find and replace all occurrences, you must add modifiers to the expression.

Modifiers

Modifiers are placed at the end of a regular expression and define how the search is performed. In the following, I present just a few of the modifiers.

The modifier i is used for case-insensitive searching.

let hello = "hello Tom";
let notFound = hello.replace(/Hello/, "Bye");
//hello Tom
let found = hello.replace(/Hello/i, "Bye");
//Bye Tom

To find all occurrences with a regular expression, the modifier g must be set. With it, replace and replaceAll behave the same.

let hello = "Hello Tom, Hello Anna";
let first = hello.replace(/Hello/, "Bye");
//Bye Tom, Hello Anna
let replaceAll = hello.replaceAll(/Hello/g, "Bye");
//Bye Tom, Bye Anna
let replace = hello.replace(/Hello/g, "Bye");
//Bye Tom, Bye Anna

Another modifier can be used for searching in multi-line texts. Normally, the characters ^ and $ match the start and end of the whole text. With the modifier m, they also match the start and end of each line.

let hello = `Hello Tom,
hello Anna,
hello Paul`;
let byeAtBegin = hello.replaceAll(/^Hello/gi, "Bye");     
//Bye Tom, 
//hello Anna,
//hello Paul
let byeAtLineBegin = hello.replaceAll(/^Hello/gim, "Bye");     
//Bye Tom, 
//Bye Anna,
//Bye Paul

Solution

With this toolkit, I can now convert the hyphen lines into HTML <li></li> elements. I also consume the line break after each list item because, in the real code, remaining line breaks will be replaced with <br/> in the next step, and I do not want empty lines between the list points. The line break is optional in the pattern so that the last line, which has no trailing line break, is matched as well.

let infoText = `This is an important field. You can input:
- right: At the right side
- left: At the left side`;
let htmlInfo = infoText.replaceAll(/^-\s*(.*)\n?/gm, "<li>$1</li>");
//This is an important field. You can input:
//<li>right: At the right side</li><li>left: At the left side</li>

Once you are familiar with the JavaScript-specific syntax and possibilities, regular expressions offer useful features, such as reusing captured parts of the expression in the replacement.

My conan 2 Consumer Workflow

A great many things changed in the transition from conan 1.x to conan 2.x. For me, as an application developer first and foremost, the main change was how I consume packages. The two IDEs I use the most for C++ are Visual Studio and CLion, so I needed a good workflow with those. For Visual Studio, I am using its CMake integration, otherwise known as “folder mode”, which lets you directly open a project with a CMakeLists.txt file in it, instead of generating a solution and opening that. The deciding factor for me is that it uses Ninja as a build tool instead of MSBuild, which is often a lot faster. I have had projects with 3.5x build-time speed-ups. As an added bonus, CLion supports very much the same workflow, which reduces friction when switching between platforms.

Visual Studio

First, we’re going to need some local profiles. I typically treat them as ‘build configurations’, with one profile for debug and one for release on each platform. I put them under version control with the project. A good starting point for creating them is conan profile detect, which guesses your environment. To write a profile to a file, go to your project folder and use something like:

conan profile detect --name ./windows_release

Note the ./ in the name, which will instruct conan to create a profile in the current working directory instead of in your user settings. For me, this generates the following profile:

[settings]
arch=x86_64
build_type=Release
compiler=msvc
compiler.cppstd=14
compiler.runtime=dynamic
compiler.version=194
os=Windows

Conan will warn you that this is only a guess and that you should make sure the values work for you. I usually bump up compiler.cppstd to at least 20, but the really important change is switching the CMake generator to Ninja, after which the profile should look something like this:

[settings]
arch=x86_64
build_type=Release
compiler=msvc
compiler.cppstd=20
compiler.runtime=dynamic
compiler.version=194
os=Windows

[conf]
tools.cmake.cmaketoolchain:generator=Ninja

Copy and edit the build_type to create a corresponding profile for debug builds.

While conanfile.txt still works for specifying your dependencies, I now recommend using conanfile.py directly from the get-go, as some options like overriding dependencies are exclusive to it. Here’s an example installing the popular logging library spdlog:

from conan import ConanFile
from conan.tools.cmake import cmake_layout


class ProjectRecipe(ConanFile):
    settings = "os", "compiler", "build_type", "arch"
    generators = "CMakeToolchain", "CMakeDeps"

    def requirements(self):
        self.requires("spdlog/1.14.1")

    def layout(self):
        cmake_layout(self)

Note that I am using cmake_layout to set up the folder structure, which will make conan put the files it generates into build/Release for the windows_release profile we created.

Now it is time to install the dependencies using conan install. Make sure you have a clean project before this, i.e. there are no other build/config folders like build/, out/ and .vs/. Specifically, do not open the project in Visual Studio before doing this, as it will create another build setup. You already need the CMakeLists.txt at this point, but it can be empty. For completeness, here’s one that works with the conanfile.py from above:

cmake_minimum_required(VERSION 3.28)
project(ConanExample)

find_package(spdlog CONFIG REQUIRED)

add_executable(conan_example
  main.cpp
)

target_link_libraries(conan_example
  spdlog::spdlog
)

Run this in your project folder:

conan install . -pr:a ./windows_release

This will install the dependencies and even tell you what to put in your CMakeLists.txt to use them. More importantly for the Visual Studio integration, it will create a CMakeUserPresets.json file that will allow Visual Studio to find the prepared build folder once you open the project. If there is no CMakeLists.txt when you call conan install, this file will not be created! Note that you generally do not want this file under version control.

Now that this is set up, you can finally open the project in Visual Studio. You should see a configuration named “conan-release” already available, and CMake should run without errors. After this point, you can let conan add new configurations, and Visual Studio should automatically pick them up through the CMake user presets.

CLion

The process is essentially the same for CLion, except that the profile will probably look different, depending on the platform. Switching the generator to Ninja is not as essential, but I still like to do it for the speed advantages.

Again, make sure you let conan set up the initial build folders and the CMakeUserPresets.json and not the IDE. CLion will then pick them up and work with them like Visual Studio does.

Additional thoughts

I like to create additional script files that I use to set up or update the dependencies. For example, on Windows, I create a conan_install.bat file like this:

@echo Installing debug dependencies
conan install . -pr:a conan/windows_debug --build=missing %*
@if %errorlevel% neq 0 exit /b %errorlevel%

@echo Installing release dependencies
conan install . -pr:a conan/windows_release --build=missing %*
@if %errorlevel% neq 0 exit /b %errorlevel%

Have you used other workflows successfully in these or different environments? Let me know about them!

Updating Grails: From 5 to 6

We have a long history of maintaining quite a large Grails application, going back to Grails 1.0. Over the first few major versions, upgrading was a real pain.

The situation changed dramatically after Grails 3, as you can see in our former blog posts and the upgrade from 3 to 4. Going from 4 to 5 was so smooth that I did not even dedicate a blog post to it.

A few weeks ago we decided to upgrade from version 5 to 6, and here is a short summary of our experiences. Things fortunately went quite smoothly again:

The changes

The new major version 6 mostly contains dependency upgrades and official support for Java 17, which is probably the biggest selling point.

Some other minor things to note, in no particular order:

  • The logback configuration file name has changed from logback.groovy to logback-config.groovy
  • The pathing jar is already set up for you, so you can remove the directive from your build files if you had it in there
  • The build uses the standard gradle-application plugin allowing some more simplifications in your build files and infrastructure

Other noteworthy news

Object Computing stepped down as the steward of the Grails framework and informed the community in an open letter. While there is still the Grails Foundation and the open source community, we can expect development and changes to slow down.

Whether this results in a worse developer experience remains to be seen.

Conclusion

Using and maintaining a Grails application has developed into a rather smooth ride, and only poorly maintained plugins hurt the experience. The framework and its foundation have been pretty solid for some time now.

Regarding new projects, you certainly have to evaluate whether Grails is the best option. For a full-stack framework, the answer may be yes, but if you only need a powerful API backend, lighter and more modern frameworks like Micronaut or Javalin may be a better choice.

How to use LDAP in a Javalin Server

I recently implemented authentication and authorization via LDAP in my Javalin web server. I encountered a few pitfalls in the process. That is why I am sharing my experiences in this blog article.

Javalin

I used pac4j for the implementation. This is a modular library that allows you to assemble your own use case from different authenticators, clients and web server integration libraries. In this case, I use “org.pac4j:pac4j-ldap” as the authenticator, “org.pac4j:pac4j-http” as the client and “org.pac4j:javalin-pac4j” for the web server integration.

In combination with Javalin, pac4j manages the session on its own and redirects you to authentication if you try to access a protected path.

var config = new LdapConfigFactory().build();
var callback = new CallbackHandler(config, null, true);

Javalin.create()
   .before("administration", new SecurityHandler(config, "FormClient", "admin"))
   .get("administration", ctx -> webappHandler.serveWebapp(ctx))
   .get("login", ctx -> webappHandler.serveWebapp(ctx))
   .get("forbidden", ctx -> webappHandler.serveWebapp(ctx))
   .get("callback", callback)
   .post("callback", callback)
   .start(7070);

In this example code, the path to the administration is protected by the SecurityHandler. The “FormClient” indicates that, in the event of missing authentication, the user is forwarded to a form for authentication. The specification “admin” defines that the user must also be authorized by the authorizer registered under the name “admin”.

LDAP Config Factory

I configured LDAP using my own ConfigFactory. Here, for example, I define the callback and login routes. In addition, my self-written authorizer and HTTP action adapter are registered. I will go into these two areas in more detail below. The form client requires an authenticator; for us, this is an LdapProfileService.

public class LdapConfigFactory implements ConfigFactory {
    @Override
    public Config build(Object... parameters) {
        var formClient = new FormClient("http://localhost:7070/login", createLdapProfileService());
        var clients = new Clients("http://localhost:7070/callback", formClient);
        var config = new Config(clients);

        config.setWebContextFactory(JEEContextFactory.INSTANCE);
        config.setSessionStoreFactory(JEESessionStoreFactory.INSTANCE);
        config.setProfileManagerFactory(ProfileManagerFactory.DEFAULT);
        config.addAuthorizer("admin", new LdapAuthorizer());
        config.setHttpActionAdapter(new HttpActionAdapter());

        return config;
    }
}

LDAP Profile Service

I implemented a separate method for configuring the service. The LDAP connection requires the URL and a user for connecting to and querying the Active Directory. The connection itself is defined in the ConnectionConfig. It is also possible to activate TLS here, but in our case we use LDAPS.

The Distinguished Name must also be defined. Queries only search for users under this path.

private static LdapProfileService createLdapProfileService() {
    var url = "ldaps://test-ad.com";
    var baseDN = "OU=TEST,DC=schneide,DC=com";
    var user = "username";
    var password = "password";

    ConnectionConfig connConfig = ConnectionConfig.builder()
            .url(url)
            .connectionInitializers(new BindConnectionInitializer(user, new Credential(password)))
            .build();

    var connectionFactory = new DefaultConnectionFactory(connConfig);

    SearchDnResolver dnResolver = SearchDnResolver.builder()
            .factory(connectionFactory)
            .dn(baseDN)
            .filter("(displayName={user})")
            .subtreeSearch(true)
            .build();

    SimpleBindAuthenticationHandler authHandler = new SimpleBindAuthenticationHandler(connectionFactory);

    Authenticator authenticator = new Authenticator(dnResolver, authHandler);

    return new LdapProfileService(connectionFactory, authenticator, "memberOf,displayName,sAMAccountName", baseDN);
}

The SearchDnResolver is used to search for the user to be authenticated. A filter can be defined to match the user name. And, very importantly, subtreeSearch must be activated. By default, it is set to false, which means that only users located directly under the baseDN are found.

The SimpleBindAuthenticationHandler can be used together with the Authenticator for authentication with user and password.

Finally, in the LdapProfileService, a comma-separated string can be used to define which attributes of a user should be queried after authentication and transferred to the user profile.

With all of these settings, you will be redirected to the login page when you try to access the administration. The credentials are then matched against the Active Directory via LDAP and the user is authenticated. In addition, I want to check that the user is in the administrator group and therefore authorized. Unfortunately, pac4j cannot do this on its own because it cannot interpret the attributes as roles. That’s why I built my own authorizer.

Authorizer

public class LdapAuthorizer extends ProfileAuthorizer {
    @Override
    protected boolean isProfileAuthorized(WebContext context, SessionStore sessionStore, UserProfile profile) {
        var group = "CN=ADMIN_GROUP,OU=Groups,OU=TEST,DC=schneide,DC=com";
        var attribute = (List) profile.getAttribute("memberOf");
        return attribute.contains(group);
    }

    @Override
    public boolean isAuthorized(WebContext context, SessionStore sessionStore, List<UserProfile> profiles) {
        return isAnyAuthorized(context, sessionStore, profiles);
    }
}

The attributes defined in the LdapProfileService can be found in the user profile. For authorization, I query the group memberships to check whether the user is in the required group. If the user has been successfully authorized, they are redirected to the administration page. Otherwise, the HTTP status code Forbidden is returned.

Javalin Http Action Adapter

Since I want to display a separate page that shows the user the Forbidden error, I built my own JavalinHttpActionAdapter.

public class HttpActionAdapter extends JavalinHttpActionAdapter {
    @Override
    public Void adapt(HttpAction action, WebContext webContext) {
        JavalinWebContext context = (JavalinWebContext) webContext;
        if(action.getCode() == HttpConstants.FORBIDDEN){
            context.getJavalinCtx().redirect("/forbidden");
            throw new RedirectResponse();
        }
        return super.adapt(action, context);
    }
}

This redirects the request to the Forbidden page instead of returning the status code.

Conclusion

Overall, using pac4j for authentication and authorization with Javalin simplifies the work and works well. Unfortunately, the documentation is rather poor, especially for the LDAP module. So the setup was a bit of a journey of discovery, and I had to spend a lot of time looking for the root cause of some problems, like the subtreeSearch one.