Optional polymorphism by delegation

A code design pattern I’ve used a lot in recent times is “optional-based polymorphism”: it looks like a delegation to another type that might not be available. You could see it as one implementation of the FCoI principle (Favour Composition over Inheritance).

Let’s look at an example: An application has several different engines that move stuff around. Some engines are based on limit switches: they move until they are stopped by a physical switch. The application can make these engines move from one predefined position to the next, but not anywhere in between. Another type of engine is based on a relative position: you give the engine a new target position and it positions itself there, without any limit switches or predefined positions.

Traditional approach

A typical implementation using inheritance would be a common supertype “Engine” that provides the functionality both engine types exhibit. From there, we would define two subtypes that extend the functionality in their desired way. One subtype would be the “LimitSwitchEngine”, the other one the “PositionableEngine”.

Our client code that wants to use a particular engine has two possibilities: either it only requires the common functionality of an engine and can work with the supertype, or it needs to perform a downcast after checking the actual type of the engine.
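
The downcast variant typically looks like this (a minimal sketch; moveToNextPosition() is a hypothetical example of subtype-specific functionality):

void advance(Engine engine) {
	if (engine instanceof LimitSwitchEngine) {
		((LimitSwitchEngine) engine).moveToNextPosition();
	}
}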

Cast methods

The optional-based polymorphism guides the client code towards the specific subtype by providing all possibilities in the common interface:

public interface Engine {

	/* Common functionality */
	
	boolean isMoving();
	
	void emergencyStop();
	
	/* optional-based polymorphism */
	
	Optional<LimitSwitchEngine> boundToLimitSwitches();
	
	Optional<PositionableEngine> freelyPositionable();
}

The client code uses the Engine interface only as a stepping stone towards the specific engine that is required for your use case. If the engine object cannot provide that functionality, you’ll get an empty Optional. Otherwise you retrieve your reference to the specific type and work with it.
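
In client code, this reads like a guided cast (a minimal sketch; positionTo() is a hypothetical method that only a PositionableEngine offers):

void positionExactly(Engine engine, int target) {
	engine.freelyPositionable().ifPresentOrElse(
		positionable -> positionable.positionTo(target),
		() -> { throw new IllegalArgumentException("engine cannot be positioned freely"); }
	);
}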

Disadvantages

One disadvantage of this approach is the fact that the supertype is aware of and even dependent on the different subtypes. You limit the scope of your type hierarchy to the types offered in the “entrance interface”. You can still use the traditional downcast way as described in the introduction for all other types, but that separates them into “featured” and “non-featured” subtypes. So this approach violates the Open/Closed Principle: the hierarchy cannot be extended with new featured subtypes without modifying the common interface.

Another disadvantage is that your typical navigation in the IDE doesn’t work as well anymore. If you want to know about all the different types of engines in the system, you can’t just look at the type hierarchy of the Engine type anymore. This is because of the first advantage this pattern brings:

Advantages

Not only does this style get rid of the downcast, it also frees up your type system in two different dimensions: The LimitSwitchEngine and PositionableEngine don’t need to be subtypes of Engine. They can be totally independent types with no real connection to the Engine. And they can be different instances. Of course, there is no need to use any of these freedoms. You can still inherit PositionableEngine from Engine and implement both types in the same object. But it isn’t mandatory anymore.

Another advantage is discoverability. Your typical type hierarchy lookup in the IDE is replaced with code completion lookup. If you get the names right, this pattern feels like writing code on rails, because your code completion proposals will lead you to the correct place.

Your opinion

What is your opinion on this pattern? What would you expect from a code design that provides those “casting” methods? Tell us in the comments!

Porting an application from Oracle to PostgreSQL

Currently, we are in the process of porting a larger application from an Oracle database to a PostgreSQL database as an additional database system. The aim is for the application to be functional not only with one database system but with both: with Oracle for one customer and with PostgreSQL for another customer.

The first step in this process was to conduct a high-level analysis of what effort this would entail and where the application would need to be adjusted. The application consists of multiple services: a frontend service in Ruby on Rails and a variety of backend services in Java. Rails uses an O/R mapper called Active Record, which is relatively database-agnostic. No major adaptation effort was expected here. Ultimately, only the Active Record adapter needs to be configured per instance.

The Java services do not use an O/R mapper. They make direct SQL queries to the target database system using JDBC. This is where the greatest potential for issues existed. Fortunately, all database queries here are encapsulated in repository classes, making the SQL queries easily accessible and adaptable for different database management systems. However, this has not proven necessary so far. The SQL queries are written in standard SQL, without using database-specific features. The code is limited to basic features such as tables, views, sequences, SELECTs, INSERTs, UPDATEs, and JOINs, all of which are supported by both databases. In particular, no stored procedures are used, which would not necessarily be portable.

For implementation, the initial goal was to make the existing Oracle database schema of the application reproducible on a PostgreSQL database, in a way that a new instance can be set up fresh in a Docker container at any time.

The best way to achieve this is with a database migration tool. The options considered were: Active Record Migrations, Liquibase, and Flyway. We ultimately chose Liquibase, which we have had positive experiences with. It can be used standalone and allows for formulating migrations in a database-independent manner. Additionally, it offers the option to export an existing schema as a migrations changelog file using the generate-changelog command.

We only had to slightly adjust the generated changelog to make it work for both database systems. In a Liquibase changelog, you can define variables referred to as properties, which can take on different values for various database systems. One use case for this is having different names for column data types:

<property name="varchar" value="varchar" dbms="postgresql"/>
<property name="varchar" value="VARCHAR2" dbms="oracle"/>

These properties can then be used, for example, in the following way:

<column name="DESCRIPTION" type="${varchar}(4000)">

Here’s a helpful table that maps Oracle datatypes to their equivalent PostgreSQL data types. After some editing of the changelog, mainly through search and replace, we had a migration for the database schema that works for both systems.

Naming is hard and Java Enums don’t help

This is a short blog post about a bug in my code that stumped me for some moments. I try to tell it in a manner where you can follow the story and try to find the solution before I reveal it. You can also just read along and learn something about Java Enums and my coding style.

A code structure that I use sometimes is the Enum type that implements an interface:

public enum BuiltinTopic implements Topic {

    administration("Administration"),
    userStatistics("User Statistics"),
    ;
	
    private final String denotation;

    private BuiltinTopic(String denotation) {
        this.denotation = denotation;
    }
	
    @Override
    public String denotation() {
        return this.denotation;
    }
}

The Topic interface is nothing special in this example. It serves as a decoupling layer for the (often large) part of client code that doesn’t need to know about any specifics that stem from the Enum type. It helps with writing tests that aren’t coupled to locked-down types like Enums. It is just some lines of code:

public interface Topic {

    String denotation();
}

Right now, everything is fine. The problems started when I discovered that the denotation text is suited for the user interface, but not for the configuration. In order to be used in the configuration section of the application, it must not contain spaces. Ok, so let’s introduce a name concept and derive it from the denotation:

public interface Topic {

    String denotation();
	
    default String name() {
        return Without.spaces(denotation());
    }
}

I’ve chosen a default method in the interface so that all implementing types share the same behaviour. The Without.spaces() method does exactly what the name implies.
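
A minimal sketch of the Without utility, assuming it simply strips all space characters, could look like this:

public class Without {

    public static String spaces(String text) {
        return text.replace(" ", "");
    }
}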

The new method works well in tests:

@Test
public void name_contains_no_spaces() {
    Topic target = () -> "User Statistics";
    assertEquals(
       "UserStatistics",
       target.name()
    );
}

The perplexing thing was that it didn’t work in production. The names that were used to look up the configuration entries didn’t match the expected ones. The capitalization was wrong!

To illustrate the effect, take a look at the following test:

@Test
public void name_contains_no_spaces() {
    Topic target = BuiltinTopic.userStatistics;
    assertEquals(
        "userStatistics",
        target.name()
    );
}

You can probably spot the difference in the assertion. It is “userStatistics” instead of “UserStatistics”. For a computer, that’s a whole different text. Why does the capitalization of the name change from testing to production?

The answer lies in the initialization of the test’s target variable:

  • In the first test, I use an ad-hoc subtype of Topic.
  • In the second test and in production, I use an object of type BuiltinTopic. This object is an instance of an Enum.

In Java, Enum classes and Enum objects are enriched with automatically provided methods. One of these methods equips Enum instances with a name() method that returns the Enum constant’s declared name. Which in my case is “userStatistics”, the same string I expect, minus the correct capitalization of the first character.
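
To see the effect directly, compare the two methods on the Enum constant from above:

BuiltinTopic topic = BuiltinTopic.userStatistics;
topic.name();        // "userStatistics" - the declared constant name wins
topic.denotation();  // "User Statistics"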

If I had named the Enum instance “UserStatistics”, everything would have worked out until somebody changed the name or added another instance with a slight difference in naming.

If I had named my Enum instance something totally different like “topic2”, it would have been obvious. But in this case, with only the minor deviation, I was compelled to search for problems elsewhere.

The problem is that the automatically provided name() method overrides my default method, but only for real Enum instances.

So I thought hard about the name of the name() method and decided that I don’t really want a name(), I want an identifier(). And that made the problem go away:

public interface Topic {

    String denotation();
	
    default String identifier() {
        return Without.spaces(denotation());
    }
}

Because the configuration code only refers to the Topic type, it cannot see the name() method anymore and only uses the identifier() that creates the correct strings.

I don’t see any (sane) way to prohibit the Java Enum from automatically overriding my default methods when the signature matches. So it feels natural to sidestep the problem by changing names.

Which shows once more that naming is hard. And the way Java Enums soft-reserve certain method names doesn’t lighten the burden for the programmer.

The Optional Wildcast

This blog post presents a particular programming technique that I happen to use more often in recent months. It doesn’t state that this technique is superior or more feasible than others. It’s just a story about a different solution to an old programming problem.

Let’s program a class hierarchy for animals, in particular for mammals and birds. You probably know where this leads up to, but let’s start with a common solution.

Both mammals and birds behave like animals, so they are subclasses of it. Birds have the additional behaviour of laying eggs for reproduction. We indicate this feature by implementing the Egglaying interface.

Mammals feed their offspring by giving them milk. There are two mammals in our system, a cow and the platypus. The cow behaves like the typical mammal and gives a lot of milk. The platypus also feeds its young with milk, but only after they have hatched from their egg. Yes, the platypus is a rare exception in that it is both a mammal and egglaying. We indicate this odd fact by implementing the Egglaying interface, too.
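
A minimal sketch of this classic hierarchy might look like the following; lactate() and breed() are the methods used in the client code further below, everything else is assumed for illustration:

public interface Egglaying {
	void breed();
}

public abstract class Animal {
	/* common animal behaviour */
}

public abstract class Bird extends Animal implements Egglaying {
}

public abstract class Mammal extends Animal {
	public abstract void lactate();
}

public class Cow extends Mammal {
	@Override
	public void lactate() { /* gives a lot of milk */ }
}

public class Platypus extends Mammal implements Egglaying {
	@Override
	public void lactate() { /* feeds the hatched young with milk */ }

	@Override
	public void breed() { /* lays eggs first */ }
}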

If our code wants to access the additional methods of the Egglaying interface, it has to check if the given object implements it and then cast it. I call this type of cast “wildcast” because these casts seem to appear out of nowhere when reading the code and seemingly don’t lead up or down the typical type hierarchy. Why would a mammal lay eggs?

One of my approaches that I happen to use more often recently is to indicate the existence of a real wildcast with an Optional return type. In theory, you can wildcast from anywhere to anyplace you want. But only some of these jumps have a purpose in the domain. And an explicit casting method is a good way to highlight this purpose:

public abstract class Mammal {
	public Optional<Egglaying> asEgglaying() {
		return Optional.empty();
	}
}

The “asEgglaying()” method might return an Egglaying object, or it might not. As you can see, by default it returns only an empty Optional. This means that no cow, horse, cat or dog has to think about laying eggs, they just aren’t into it by default.

public class Platypus extends Mammal implements Egglaying {
	@Override
	public Optional<Egglaying> asEgglaying() {
		return Optional.of(this);
	}
}

The platypus is another story. It is the exception to the rule and knows it. The code “Optional.of(this)” is typical for this coding technique.

A client that iterates over a collection of mammals can now incorporate the special case with more grace:

for (Mammal each : List.of(mammals())) {
	each.lactate();
	each.asEgglaying().ifPresent(Egglaying::breed);
}

Compare this code with a more classic approach using a wildcast:

for (Mammal each : List.of(mammals())) {
	each.lactate();
	if (each instanceof Egglaying) {
		((Egglaying) each).breed();
	}
}

My biggest grief with the classic approach is that the instanceof is necessary for the functionality, but not guided by the domain model. It comes as a surprise and has no connection to the Mammal type. In the Optional wildcast version, you can look up the callers of “asEgglaying()” and see all the special code that is written for the small number of mammals that lay eggs. In the classic approach, you need to search for conditional casts or separate bird code from special mammal code when looking up the callers.

In my real-world projects, this “optional wildcast” style facilitates domain discovery by code completion and seems to lead me to more segregated type systems. These impressions are personal and probably biased, so I would like to hear from your experiences or at least opinions in the comments.

Create custom jre in Docker

I recently wrote a Java application and wanted to run it in a Docker container. Unfortunately, my program needs a module from the Java JDK, which is why I can’t run it on the standard JRE. When I run the program with the JDK, everything works, but I ship much more than I actually need and end up with a 390 MB Docker container.

That’s why I set out to build my own JRE, cut down exactly to my application’s needs.

Two JDK tools help me with this: jdeps checks the dependencies of an application and can summarise them in a file. This file can be used as input for jlink, which builds a custom JRE from it.

In the following, I will present a way to build a custom JRE with Gradle and the two tools in a multi-stage Dockerfile and run an application in it.

Configuration in Gradle

First we need to do some configuration in the build.gradle file. To use jdeps, the application must be packaged as an executable jar file and all dependencies must be copied into a separate folder. So we create a task that copies all current dependencies to a folder named lib. For the jar, we need to define the main class and set the class path to search for all dependencies in the lib folder.

// copies all the jar dependencies of your app to the lib folder
task copyDependencies(type: Copy) {
    from configurations.runtimeClasspath
    into "$buildDir/lib"
}

jar {
    manifest {
        attributes["Main-Class"] = "Main"
        attributes["Class-Path"] = configurations.runtimeClasspath
            .collect{'lib/'+it.getName()}.join(' ')
    }
}

Docker Image

Now we can get to work on the Docker image.

As a first step, we build our JRE in a JDK base image. To do this, we run the copyDependencies task we just created and build the Java application with Gradle. Then we let jdeps collect all dependencies from the lib folder. The output is written to the file jre-deps.info. It is therefore important that no errors or warnings are printed, which is ensured with the -q parameter. The --print-module-deps option is crucial so that the module dependencies are output and saved in the file.

The file is now passed to jlink, and a custom-fit JRE for the application is built from it. The parameters set in the call further reduce the size. The settings can be read in detail in the jlink documentation.

FROM eclipse-temurin:17-jdk-alpine AS jre-build

COPY . /app
WORKDIR /app

RUN chmod u+x gradlew; ./gradlew copyDependencies; ./gradlew build

# find JDK dependencies dynamically from jar
RUN jdeps \
--ignore-missing-deps \
-q \
--multi-release 17 \
--print-module-deps \
--class-path build/lib/* \
build/libs/app-*.jar > jre-deps.info

RUN jlink \
--compress 2 \
--strip-java-debug-attributes \
--no-header-files \
--no-man-pages \
--output jre \
--add-modules $(cat jre-deps.info)

FROM alpine:3.17.0
WORKDIR /deployment

# copy the custom JRE produced from jlink
COPY --from=jre-build /app/jre jre
# copy the app dependencies
COPY --from=jre-build /app/build/lib/* lib/
# copy the app
COPY --from=jre-build /app/build/libs/app-*.jar app.jar

# run in user context (alpine provides adduser instead of useradd)
RUN adduser -D schneide
RUN chown schneide:schneide .
USER schneide

# run the app on startup
CMD jre/bin/java -jar app.jar

In the second Docker stage, a smaller image is used and only the created jre, the dependencies and the app.jar are copied into it. The app is then executed with the jre.

In my case, I was able to reduce the Docker container size to 110 MB with the Alpine image, which is less than a third of the original size. Or, using an ubuntu:jammy image, to 182 MB, which is still a reduction of more than half.

Help me with the Spiderman Operator

From time to time, I encounter a silly syntax in Java that I silently dubbed the “spiderman operator” because of all the syntactical pointing that’s going on. My problem is that it’s not very readable, I don’t know an alternative syntax for it, and my programming style leads me to it more often than I am willing to ignore.

The spiderman operator looks like this:

x -> () -> x

In its raw form, it means that you have a function that takes x and returns a Supplier of x:

Function<X, Supplier<X>> rawForm = x -> () -> x;

That in itself is not very useful or mysterious. But it gets funnier if you take into account that Supplier<X> is just one possible return type, because in Java, as long as the signature fits, the thing sits.

A possible use case

Let’s define a type that is an interface with just one method:

public interface DomainValue {
    BigDecimal value();
}

In Java, the @FunctionalInterface annotation is not required to let the interface be, in fact, a functional interface. It only needs to have one method without implementation. How can we provide methods with implementation in Java interfaces? Default methods are the way:

@FunctionalInterface
public interface DomainValue {
    BigDecimal value();

    default String denotation() {
        return getClass().getSimpleName();
    }
}

Let’s say that we want to load domain values from a key-value-store with the following access method:

Optional<Double> loadEntry(String key)
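
How the key-value-store is implemented doesn’t matter for this story. A minimal sketch of loadEntry(), assuming a simple in-memory map of string entries, might look like this:

private final Map<String, String> store = Map.of("current", "2", "voltage", "220");

Optional<Double> loadEntry(String key) {
    String raw = store.get(key);
    if (raw == null) {
        return Optional.empty();
    }
    try {
        return Optional.of(Double.valueOf(raw));
    } catch (NumberFormatException notADouble) {
        return Optional.empty();
    }
}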

If there is no entry with the given key or the syntax is not suitable to be interpreted as a double, the method returns Optional.empty(). Otherwise it returns the double value wrapped in an Optional shell. We can convert it to our domain value like this:

Optional<DomainValue> myValue = 
    loadEntry("current")
        .map(BigDecimal::new)
        .map(x -> () -> x);

And there it is, the spiderman operator. We convert from Double to BigDecimal and then to DomainValue by saying that we want to convert our BigDecimal to “something that can supply a BigDecimal”, which is exactly what our DomainValue can do.
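
If the double lambda is hard to parse, the same conversion can be written with an explicit local variable, which makes the lambda’s target type visible:

Optional<DomainValue> myValue =
    loadEntry("current")
        .map(BigDecimal::new)
        .map(x -> {
            DomainValue wrapped = () -> x;  // the lambda implements DomainValue.value()
            return wrapped;
        });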

A bigger use case

Right now, the DomainValue type is nothing more than a mantle around a numerical value. But we can expand our domain to have more specific types:

public interface Voltage extends DomainValue {
}
public interface Power extends DomainValue {
    @Override
    default String denotation() {
        return "Electric power";
    }
}

Boring!

public interface Current extends DomainValue {
    default Power with(Voltage voltage) {
        return () -> value().multiply(voltage.value());
    }
}

Ok, this is maybe no longer boring. We can implement a lot of domain functionality just in interfaces and then instantiate ad-hoc types:

Voltage europeanVoltage = () -> BigDecimal.valueOf(220);
Current powerSupply = () -> BigDecimal.valueOf(2);
Power usage = powerSupply.with(europeanVoltage);

Or we load the values from our key-value-store:

Optional<Voltage> maybeVoltage = 
    loadEntry("voltage")
        .map(BigDecimal::new)
        .map(x -> () -> x);

Optional<Current> maybeCurrent = 
    loadEntry("current")
        .map(BigDecimal::new)
        .map(x -> () -> x);

You probably see it already: We have some duplicated code! The strange thing is, it won’t go away so easily.

The first call for help

But first I want to sanitize the code syntactically. The duplication is bad, but the spiderman operator is just unreadable.

If you have an idea how the syntax of the second map() call can be improved, please comment below! Just one request: Make sure your idea compiles beforehand.

Failing to eliminate the duplication

There is nothing easier than eliminating the duplication above: The code is syntactically identical and only the string parameter is different – well, and the return type. We will see how this affects us.

What we cannot do:

<DV extends DomainValue> Optional<DV> loadFor(String entry) {
    Optional<BigDecimal> maybeValue = load(entry);
    return maybeValue.map(x -> () -> x);
}

Suddenly, the spiderman operator does not compile with the error message:

The target type of this expression must be a functional interface

I can see the problem: Subtypes of DomainValue are not required to stay compatible with the functional interface requirement (just one method without implementation).
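
A hypothetical subtype makes the compiler’s point clear; CalibratedValue is an invented example, not part of our domain:

public interface CalibratedValue extends DomainValue {
    BigDecimal offset(); // a second abstract method - not a functional interface anymore
}

The compiler cannot know that DV will be a functional interface, so the lambda has no guaranteed target type.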

Interestingly, if we work with a wildcard for the generic, it compiles:

Optional<? extends DomainValue> loadFor(String entry) {
    Optional<BigDecimal> maybeValue = load(entry);
    return maybeValue.map(x -> () -> x);
}

The problem is that we still need to downcast to our specific subtype afterwards. But we can use this insight and move the downcast into the method:

<DV extends DomainValue> Optional<DV> loadFor(
	String entry,
	Class<DV> type
) {
	Optional<BigDecimal> maybeValue = load(entry);
	return maybeValue.map(x -> type.cast(x));
}

Which makes our code readable enough, but at the price of using reflection:

Optional<Voltage> european = loadFor("voltage", Voltage.class);
Optional<Current> powerSupply = loadFor("current", Current.class);

I’m not a fan of this solution, because downcasts are dangerous and reflection is dangerous, too. Mixing two dangerous things doesn’t neutralize the danger most of the time. This code will fail during runtime sooner or later, without any compiler warning us about it. If you don’t believe me, add a second method without implementation to the Current interface and see if the compiler warns you. Hint: This is what you will see at runtime:

java.lang.ClassCastException: Cannot cast java.math.BigDecimal to Current

But, to our surprise, it doesn’t even need a second method. The code above doesn’t work. Even if we reintroduce our spiderman operator (with an additional assignment to help the type inference), the cast won’t work:

<DV extends DomainValue> Optional<DV> loadFor(
    String entry,
    Class<DV> type
) {
    Optional<BigDecimal> maybeValue = load(entry);
    Optional<DomainValue> maybeDomainValue = maybeValue.map(x -> () -> x);
    return maybeDomainValue.map(x -> type.cast(x));
}

The ClassCastException just got a lot more mysterious:

java.lang.ClassCastException: Cannot cast Loader$$Lambda$8/0x00000008000028c0 to Current

My problem is that I am stuck. There is working code that uses the spiderman operator and produces code duplication, but there is no way around the duplication that I can think of. I can get objects for the supertype (DomainValue), but not for a specific subtype of it. If I want that, I have to accept duplication. Or am I missing something?

The second call for help

If you can think about a way to eliminate the duplication, please tell me (or us) in the comments. This problem doesn’t need to be solved for my peace of mind or the sanity of my code – the duplication is confined to a particular place.

Being used to roam nearly without boundaries in the Java syntax (25 years of thinking in Java will do that to you), this particular limitation hit hard. If you can give me some ideas, I would be grateful.

LDAP-Authentication in Wildfly (Elytron)

Authentication is never really easy to get right but it is important. So there are plenty of frameworks out there to facilitate authentication for developers.

The current installment of the authentication system in Wildfly/JEE7 is called Elytron, which makes using different authentication backends mostly a matter of configuration. This configuration, however, is quite extensive and consists of several entities due to its flexibility. Some may even say it is over-engineered…

Therefore I want to provide some kind of walkthrough of how to get authentication up and running in Wildfly Elytron using an LDAP user store as the backend.

Our aim is to configure the authentication with an LDAP backend, to implement login/logout and to secure our application endpoints using annotations.

Setup

Of course you need to install a relatively modern Wildfly JEE server; I used Wildfly 26. For your credential store and authentication backend you may set up a containerized Samba server, like I showed in a previous blog post.

Configuration of security realms, domains etc.

We have four major components we need to configure to use the elytron security subsystem of Wildfly:

  • The security domain defines the realms to use for authentication. That way you can authenticate against several different realms
  • The security realms define how to use the identity store and how to map groups to security roles
  • The dir-context defines the connection to the identity store – in our case the LDAP server.
  • The application security domain associates deployments (aka applications) with a security domain.

So let us put all that together in a sample configuration:

<subsystem xmlns="urn:wildfly:elytron:15.0" final-providers="combined-providers" disallowed-providers="OracleUcrypto">
    ...
    <security-domains>
        <security-domain name="DevLdapDomain" default-realm="AuthRealm" permission-mapper="default-permission-mapper">
            <realm name="AuthRealm" role-decoder="groups-to-roles"/>
        </security-domain>
    </security-domains>
    <security-realms>
        ...
        <ldap-realm name="LdapRealm" dir-context="ldap-connection" direct-verification="true">
            <identity-mapping rdn-identifier="CN" search-base-dn="CN=Users,DC=ldap,DC=schneide,DC=dev">
                <attribute-mapping>
                    <attribute from="cn" to="Roles" filter="(member={1})" filter-base-dn="CN=Users,DC=ldap,DC=schneide,DC=dev"/>
                </attribute-mapping>
            </identity-mapping>
        </ldap-realm>
        <ldap-realm name="OtherLdapRealm" dir-context="ldap-connection" direct-verification="true">
            <identity-mapping rdn-identifier="CN" search-base-dn="CN=OtherUsers,DC=ldap,DC=schneide,DC=dev">
                <attribute-mapping>
                    <attribute from="cn" to="Roles" filter="(member={1})" filter-base-dn="CN=auth,DC=ldap,DC=schneide,DC=dev"/>
                </attribute-mapping>
            </identity-mapping>
        </ldap-realm>
        <distributed-realm name="AuthRealm" realms="LdapRealm OtherLdapRealm"/>
    </security-realms>
    <dir-contexts>
        <dir-context name="ldap-connection" url="ldap://ldap.schneide.dev:389" principal="CN=Administrator,CN=Users,DC=ldap,DC=schneide,DC=dev">
            <credential-reference clear-text="admin123!"/>
        </dir-context>
    </dir-contexts>
</subsystem>
<subsystem xmlns="urn:jboss:domain:undertow:12.0" default-server="default-server" default-virtual-host="default-host" default-servlet-container="default" default-security-domain="DevLdapDomain" statistics-enabled="true">
    ...
    <application-security-domains>
        <application-security-domain name="myapp" security-domain="DevLdapDomain"/>
    </application-security-domains>
</subsystem>

In the above configuration we have two security realms using the same identity store to allow authenticating users in separate subtrees of our LDAP directory. That way we do not need to search the whole directory and authentication becomes much faster.

Note: You may not need to do something like that if all your users reside in the same subtree.

The example shows a simple, but non-trivial use case that justifies the complexity of the involved entities.

Implementing login functionality using the Framework

Logging users in, using their session and logging them out again is almost trivial after all is set up correctly. Essentially you use HttpServletRequest.login(username, password), HttpServletRequest.getSession(), HttpServletRequest.isUserInRole(role) and HttpServletRequest.logout() to manage your authentication needs.

That way you can check for an active session and the roles of the current user when handling requests. In addition to the imperative way with isUserInRole() we can secure endpoints declaratively as shown in the last section.
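
A minimal login/logout servlet sketch could look like this; the URL pattern, parameter names and the role name are assumptions, and request.login() throws a ServletException if the security domain rejects the credentials:

import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

@WebServlet(urlPatterns = {"/login"}, name = "Login endpoint")
public class LoginEndpoint extends HttpServlet {

    @Override
    protected void doPost(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        // authenticate against the security domain associated with this deployment
        request.login(request.getParameter("username"), request.getParameter("password"));
        // the request is now authenticated; roles can be checked imperatively
        if (request.isUserInRole("super_admin")) {
            request.getSession().setAttribute("showAdminMenu", Boolean.TRUE);
        }
        response.setStatus(HttpServletResponse.SC_NO_CONTENT);
    }

    @Override
    protected void doDelete(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        // end the authenticated session again
        request.logout();
        request.getSession().invalidate();
    }
}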

Declarative access control

In addition to fine-grained imperative access control using the methods on HttpServletRequest, we can use annotations to secure our endpoints and to make sure that only authenticated users with certain roles may access them. See the following example:

@WebServlet(urlPatterns = {"/*"}, name = "MyApp endpoint")
@ServletSecurity(
    @HttpConstraint(
        transportGuarantee = ServletSecurity.TransportGuarantee.NONE,
        rolesAllowed = {"ordinary_user", "super_admin"}
    )
)
public class MyAppEndpoint extends HttpServlet {
...
}

To allow unauthenticated access you can use the value attribute instead of rolesAllowed in the @HttpConstraint:

@ServletSecurity(
    @HttpConstraint(
        transportGuarantee = ServletSecurity.TransportGuarantee.NONE,
        value = ServletSecurity.EmptyRoleSemantic.PERMIT
    )
)

I hope all of the above helps to setup simple and secure authentication and authorization in Wildfly/JEE.

Packaging Java-Project as DEB-Packages

Providing native installation mechanisms and media of your software to your customers can be a great benefit for them. One way to do so is packaging your software for the target Linux distributions your customers are running.

Packaging for Debian/Ubuntu is relatively hard, because there are many ways and rules for how to do it. Some part of our software is written in Java and needs to be packaged as .deb-packages for Ubuntu.

The official way

There is an official guide on how to package Java projects for Debian. While this may be suitable for libraries and programs that you want to publish to the official repositories, it is not a perfect fit for a custom project that you provide specifically to your customers: it is a lot of work, does not integrate well with your delivery pipeline and requires you to provide packages for all of your dependencies as well.

The convenient way

Fortunately, there is a great plugin for Ant and Maven called jdeb. Essentially, you include and configure the plugin in your pom.xml like all the other build-related stuff, execute the jdeb goal in your build pipeline, and you are done. This results in a nice .deb-package that you can push to your customers’ repositories for their convenience.

A working configuration for Maven may look like this:

<build>
    <plugins>
        <plugin>
            <artifactId>jdeb</artifactId>
            <groupId>org.vafer</groupId>
            <version>1.8</version>
            <executions>
                <execution>
                    <phase>package</phase>
                    <goals>
                        <goal>jdeb</goal>
                    </goals>
                    <configuration>
                        <dataSet>
                            <data>
                                <src>${project.build.directory}/${project.build.finalName}-jar-with-dependencies.jar</src>
                                <type>file</type>
                                <mapper>
                                    <type>perm</type>
                                    <prefix>/usr/share/java</prefix>
                                </mapper>
                            </data>
                            <data>
                                <type>link</type>
                                <linkName>/usr/share/java/MyProjectExecutable</linkName>
                                <linkTarget>/usr/share/java/${project.build.finalName}-jar-with-dependencies.jar</linkTarget>
                                <symlink>true</symlink>
                            </data>
                            <data>
                                <src>${project.basedir}/src/deb/MyProjectStartScript</src>
                                <type>file</type>
                                <mapper>
                                    <type>perm</type>
                                    <prefix>/usr/bin</prefix>
                                    <filemode>755</filemode>
                                </mapper>
                            </data>
                        </dataSet>
                    </configuration>
                </execution>
            </executions>
        </plugin>
    </plugins>
</build>

If you are using gradle as your build tool, the ospackage-plugin may be worth a look. I have not tried it personally, but it looks promising.

Wrapping it up

Packaging your software for your customers drastically improves the experience for users and administrators. Doing it the official Debian way is not always the best or most efficient option. There are many plugins and extensions for common build systems that conveniently build native packages and may be easier for many use cases.

Serving static resources in Javalin running as servlets

Javalin is a nice JVM-based microframework targeted at web APIs, supporting Java and Kotlin as implementation languages. Usually, it uses Jetty and runs standalone on the server or in a container.

However, those who want or need to deploy it to a servlet container/application server like Tomcat or Wildfly can do so by changing only a few lines of code and annotating at least one class as a @WebServlet. Most of your application will continue to run unchanged.

But why do I say only “most of your application”?

Unfortunately, Javalin-jetty and Javalin-standalone do not provide complete feature parity. One important example is serving static resources, especially if you do not only want to provide an API backend service but also serve resources like a single-page application (SPA) or an OpenAPI-generated web interface.

Serving static resources in Javalin-jetty

Serving static files is straightforward and super simple if you are using Javalin-jetty. Just configure the Javalin app using config.addStaticFiles() to specify some paths and file locations and you are done.
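
With Javalin 4, the configuration could look like this minimal sketch in Java (the classpath location /public and the port are assumptions):

import io.javalin.Javalin;
import io.javalin.http.staticfiles.Location;

public class StandaloneApp {
    public static void main(String[] args) {
        Javalin.create(config -> {
            // serve everything below src/main/resources/public from the classpath
            config.addStaticFiles("/public", Location.CLASSPATH);
        }).start(7070);
    }
}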

The OpenAPI-plugin for Javalin uses the above mechanism to serve its web interface, too.

Serving static resources in Javalin-standalone

Javalin-standalone, which is used for deployment to application servers, does not support serving static files as this is a jetty feature and standalone is built to run without jetty. So the short answer is: you can not!

The longer answer is that you can implement a workaround yourself: a servlet based on Javalin-standalone that serves files from the classpath for certain URL paths. Below is a sample implementation in Kotlin using Javalin-standalone to accomplish the task:

package com.schneide.demo

import io.javalin.Javalin
import io.javalin.http.Context
import io.javalin.http.HttpCode
import java.net.URLConnection
import javax.servlet.annotation.WebServlet
import javax.servlet.http.HttpServlet
import javax.servlet.http.HttpServletRequest
import javax.servlet.http.HttpServletResponse

private const val DEFAULT_CONTENT_TYPE = "text/plain"

@WebServlet(urlPatterns = ["/*"], name = "Static resources endpoints")
class StaticResourcesEndpoints : HttpServlet() {
    private val wellknownTextContentTypes = mapOf(
        "js" to "text/javascript",
        "css" to "text/css"
    )

    private val servlet = Javalin.createStandalone()
        .get("/") { context ->
            serveResource(context, "/public", "index.html")
        }
        .get("/*") { context ->
            serveResource(context, "/public")
        }
        .javalinServlet()!!

    private fun serveResource(context: Context, prefix: String, fileName: String = "") {
        val filePath = context.path().replace(context.contextPath(), prefix) + fileName
        val resource = javaClass.getResourceAsStream(filePath)
        if (resource == null) {
            context.status(HttpCode.NOT_FOUND).result(filePath)
            return
        }
        var mimeType = URLConnection.guessContentTypeFromName(filePath)
        if (mimeType == null) {
            mimeType = guessContentTypeForWellKnownTextFiles(filePath)
        }
        context.contentType(mimeType)
        context.result(resource)
    }

    private fun guessContentTypeForWellKnownTextFiles(filePath: String): String {
        if (filePath.indexOf(".") == -1) {
            return DEFAULT_CONTENT_TYPE
        }
        val extension = filePath.substring(filePath.lastIndexOf('.') + 1)
        return wellknownTextContentTypes.getOrDefault(extension, DEFAULT_CONTENT_TYPE)
    }

    override fun service(req: HttpServletRequest?, resp: HttpServletResponse?) {
        servlet.service(req, resp)
    }
}

The code performs 3 major tasks:

  1. Register a Javalin-standalone app as a WebServlet for certain URLs
  2. Load static files bundled in the WAR-file from defined locations
  3. Guess the content-type of the files as good as possible for the response

Feel free to use and modify the code in your project if you find it useful. I will try to get this workaround into Javalin-standalone if I find the time, to improve the feature parity between Javalin-jetty and Javalin-standalone. Until then, I hope the code above helps you.

JDBC’s wasNull method pitfall

Java’s java.sql package provides a general API for accessing data stored in relational databases. It is part of JDBC (Java Database Connectivity). The API is relatively low-level, and is often used via higher-level abstractions based on JDBC, such as query builders like jOOQ, or object–relational mappers (ORMs) like Hibernate.

If you choose to use JDBC directly you have to be aware that the API is relatively old. It was added as part of JDK 1.1 and predates later additions to the language such as generics and optionals. There are also some pitfalls to be avoided. One of these pitfalls is ResultSet’s wasNull method.

The wasNull method

The wasNull method reports whether the database value of the last ‘get’ call for a nullable table column was NULL or not:

int height = resultSet.getInt("height");
if (resultSet.wasNull()) {
    height = defaultHeight;
}

The wasNull check is necessary, because the return type of getInt is the primitive data type int, not the nullable Integer. This way you can find out whether the actual database value is 0 or NULL.

The problem with this API design is that the ResultSet type is very stateful. Its state does not only change with each row (by calling the next method), but also with each ‘get’ method call.

If any other ‘get’ method call is inserted between the original ‘get’ method call and its wasNull check the code will be wrong. Here’s an example. The original code is:

var width = rs.getInt("width");
var height = rs.getInt("height");
var size = new Size(width, rs.wasNull() ? defaultHeight : height);

A developer now wants to add a third dimension to the size:

var width = rs.getInt("width");
var height = rs.getInt("height");
var depth = rs.getInt("depth");
var size = new Size(width, rs.wasNull() ? defaultHeight : height, depth);

It’s easy to overlook the wasNull call, or to wrongly assume that adding another ‘get’ method call is a safe code change. But the wasNull check now refers to “depth” instead of “height”, which breaks the original intention.

Advice

So my advice is to wrap the ‘get’ calls for nullable database values in their own methods that return an Optional:

Optional<Integer> getOptionalInt(ResultSet rs, String columnName) throws SQLException {
    final int value = rs.getInt(columnName);
    if (rs.wasNull()) {
        return Optional.empty();
    }
    return Optional.of(value);
}

Now the default value fallback can be safely applied with the orElse method:

var width = rs.getInt("width");
var height = getOptionalInt(rs, "height").orElse(defaultHeight);
var depth = rs.getInt("depth");
var size = new Size(width, height, depth);