Why Java’s built-in hash functions are unsuitable for password hashing

Passwords are one of the most sensitive pieces of information handled by applications. Hashing them before storage ensures they remain protected even if the database is compromised. However, not all hashing algorithms are designed for password security. Java's built-in hashing mechanisms, used e.g. by HashMap, are optimized for performance—not security.

In this post, we will explore the differences between general-purpose and cryptographic hash functions and explain why the latter should always be used for passwords.

Java’s built-in hashing algorithms

Java provides a hashCode() method for most objects, including strings, which is commonly used in data structures like HashMap and HashSet. For instance, the hashCode() implementation for String uses a simple algorithm:

public int hashCode() {
    int h = 0;
    for (int i = 0; i < value.length; i++) {
        h = 31 * h + value[i];
    }
    return h;
}

This method calculates a 32-bit integer hash by combining each character in the string with the multiplier 31. The goal is to produce hash values for efficient lookups.

This simplicity makes hashCode() extremely efficient for its primary use case—managing hash-based collections. Its deterministic nature ensures that identical inputs always produce the same hash, which is essential for consistent object comparisons. Additionally, it provides decent distribution across hash table buckets, minimizing performance bottlenecks caused by collisions.

However, the same features that make this function ideal for collections are also its greatest weaknesses when applied to password security. Because it is fast, an attacker can quickly compute the hash for any potential password and compare it to a leaked hash. Furthermore, its 32-bit output space is too small for secure applications and leads to frequent collisions. For example:

System.out.println("Aa".hashCode()); // 2112
System.out.println("BB".hashCode()); // 2112

The lack of randomness (such as salting) and of other security-focused features makes hashCode() entirely unsuitable for protecting passwords. You can manually add a random value before passing the string into the hash algorithm, but the small output space and high speed still make it possible to generate a lookup table quickly. The method was never designed to handle adversarial scenarios like brute-force attacks, where attackers attempt billions of guesses per second.
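
To make this concrete, here is a minimal sketch of such a manually salted hashCode(). The salt handling is purely illustrative and not a recommendation: the result is still a single 32-bit value that an attacker can enumerate quickly.

import java.security.SecureRandom;
import java.util.Base64;

public class SaltedHashCodeDemo {
    public static void main(String[] args) {
        // Generate a random salt and prepend it to the password before hashing
        byte[] saltBytes = new byte[16];
        new SecureRandom().nextBytes(saltBytes);
        String salt = Base64.getEncoder().encodeToString(saltBytes);

        int saltedHash = (salt + "hunter2").hashCode();

        // Still only 2^32 possible outputs at millions of hashes per second:
        // the salt defeats precomputed tables, but not a targeted brute force.
        System.out.println(salt + ":" + saltedHash);
    }
}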

Cryptographic hash algorithms

Cryptographic hash functions serve a completely different purpose. They are designed to provide security in the face of adversarial attacks, ensuring that data integrity and confidentiality are maintained. Examples include general-purpose cryptographic hashes like SHA-256 and password-specific algorithms like bcrypt, PBKDF2, and Argon2.

They produce fixed-length outputs (e.g., 256 bits for SHA-256) and are engineered to be computationally infeasible to reverse. This makes them ideal for securing passwords and other sensitive data. In addition, some cryptographic password-hashing libraries, such as bcrypt, incorporate salting automatically—a technique where a random value is added to the password before hashing. This ensures that even identical passwords produce different hash values, thwarting attacks that rely on precomputed hashes (rainbow tables).
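
To sketch how automatic salting looks in practice, here is an example using the widely used jBCrypt library (org.mindrot.jbcrypt); the password and cost factor are placeholder values.

import org.mindrot.jbcrypt.BCrypt;

public class BcryptDemo {
    public static void main(String[] args) {
        // gensalt(12) creates a random salt and encodes the cost factor 2^12 into it
        String hash = BCrypt.hashpw("s3cret-password", BCrypt.gensalt(12));

        // The salt is embedded in the hash string, so verification only needs the stored hash
        System.out.println(BCrypt.checkpw("s3cret-password", hash)); // true
        System.out.println(BCrypt.checkpw("wrong-guess", hash));     // false

        // Hashing the same password twice yields different values thanks to the random salt
        System.out.println(BCrypt.hashpw("s3cret-password", BCrypt.gensalt(12)).equals(hash)); // false
    }
}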

Another critical feature is key stretching, where the hashing process is deliberately slowed down by performing many iterations. For example, bcrypt and PBKDF2 allow developers to configure the number of iterations, making brute-force attacks significantly more expensive in terms of time and computational resources.
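
Here is a minimal sketch of key stretching with PBKDF2, using only classes shipped with the JDK; the iteration count is an assumed example value that you would tune to your own hardware and latency budget.

import java.security.SecureRandom;
import java.util.Base64;
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.PBEKeySpec;

public class Pbkdf2Demo {
    public static void main(String[] args) throws Exception {
        byte[] salt = new byte[16];
        new SecureRandom().nextBytes(salt);

        int iterations = 210_000; // the key-stretching knob: raise it as hardware gets faster
        PBEKeySpec spec = new PBEKeySpec("s3cret-password".toCharArray(), salt, iterations, 256);
        byte[] hash = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA256")
                .generateSecret(spec).getEncoded();

        // Store iteration count, salt and hash together so the check can be repeated at login
        System.out.println(iterations + ":"
                + Base64.getEncoder().encodeToString(salt) + ":"
                + Base64.getEncoder().encodeToString(hash));
    }
}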

Conclusion

Java’s built-in hash functions, such as hashCode(), are designed for speed, efficiency, and consistent behavior in hash-based collections. They are fast, deterministic, and effective at spreading values evenly across buckets.

On the other hand, cryptographic hash algorithms are purpose-built for security. They prioritize irreversibility, randomness, and computational cost, all of which are essential for protecting passwords against modern attack vectors.

Java’s hashCode() is an excellent tool for managing hash-based collections, but it was never intended for the high-stakes realm of password security.

Updating Grails: From 5 to 6

We have a long history of maintaining a rather large Grails application, going back to Grails 1.0. Over the first few major versions, upgrading was a real pain.

The situation changed dramatically after Grails 3, as you can see in our former blog posts and the upgrade from 3 to 4. Going from 4 to 5 was so smooth that I did not even dedicate a blog post to it.

A few weeks ago we decided to upgrade from version 5 to 6, and here is a short summary of our experiences. Fortunately, things went quite smoothly again:

The changes

The new major version 6 mostly contains dependency upgrades and official support for Java 17, which is probably the biggest selling point.

Some other minor things to note, in no particular order:

  • The logback configuration file name has changed from logback.groovy to logback-config.groovy
  • The pathing jar is already set up for you, so you can remove the directive from your build files if you had added it
  • The build uses the standard Gradle application plugin, allowing some more simplifications in your build files and infrastructure

Other noteworthy news

Object Computing stepped down as the steward of the Grails framework and informed the community in an open letter. While there is still the Grails Foundation and the open source community, we can expect development and changes to slow down.

Whether this results in a negative developer experience remains to be seen.

Conclusion

Using and maintaining a Grails application has developed into a rather smooth ride, and only poorly maintained plugins hurt the experience. The framework and its foundation have been pretty solid for quite some time now.

For new projects you certainly have to evaluate whether Grails is the best option. For a full-stack framework the answer may be yes, but if you only need a powerful API backend, lighter and more modern frameworks like Micronaut or Javalin may be a better choice.

How to use LDAP in a Javalin Server

I recently implemented authentication and authorization via LDAP in my Javalin web server. I encountered a few pitfalls in the process. That is why I am sharing my experiences in this blog article.

Javalin

I used pac4j for the implementation. This is a modular library that lets you assemble your own use case from different authenticators, clients and web server integrations. In this case I use "org.pac4j:pac4j-ldap" as the authenticator, "org.pac4j:pac4j-http" as the client and "org.pac4j:javalin-pac4j" as the web server integration.

In combination with Javalin, pac4j manages the session on its own and redirects to authentication when you try to access a protected path.

var config = new LdapConfigFactory().build();
var callback = new CallbackHandler(config, null, true);

Javalin.create()
   .before("administration", new SecurityHandler(config, "FormClient", "admin"))
   .get("administration", ctx -> webappHandler.serveWebapp(ctx))
   .get("login", ctx -> webappHandler.serveWebapp(ctx))
   .get("forbidden", ctx -> webappHandler.serveWebapp(ctx))
   .get("callback", callback)
   .post("callback", callback)
   .start(7070);

In this example code the path to the administration is protected by the SecurityHandler. The "FormClient" indicates that, in the event of missing authentication, the user is forwarded to a form for authentication. The specification "admin" defines that the user must also be authorized with the role "admin".

LDAP Config Factory

I configured LDAP using my own ConfigFactory. Here, for example, I define the callback and login routes. In addition, my self-written authorizer and HTTP action adapter are registered; I will go into these two areas in more detail below. The login form requires an authenticator, which in our case is an LdapProfileService.

public class LdapConfigFactory implements ConfigFactory {
    @Override
    public Config build(Object... parameters) {
        var formClient = new FormClient("http://localhost:7070/login", createLdapProfileService());
        var clients = new Clients("http://localhost:7070/callback", formClient);
        var config = new Config(clients);

        config.setWebContextFactory(JEEContextFactory.INSTANCE);
        config.setSessionStoreFactory(JEESessionStoreFactory.INSTANCE);
        config.setProfileManagerFactory(ProfileManagerFactory.DEFAULT);
        config.addAuthorizer("admin", new LdapAuthorizer());
        config.setHttpActionAdapter(new HttpActionAdapter());

        return config;
    }
}

LDAP Profile Service

I implement a separate method for configuring the service. The LDAP connection requires the URL and a user for the connection and for querying the Active Directory. The connection is defined in the ConnectionConfig. It is also possible to activate TLS here, but in our case we use LDAPS.

The Distinguished Name must also be defined. Queries only search for users under this path.

private static LdapProfileService createLdapProfileService() {
    var url = "ldaps://test-ad.com";
    var baseDN = "OU=TEST,DC=schneide,DC=com";
    var user = "username";
    var password = "password";

    ConnectionConfig connConfig = ConnectionConfig.builder()
            .url(url)
            .connectionInitializers(new BindConnectionInitializer(user, new Credential(password)))
            .build();

    var connectionFactory = new DefaultConnectionFactory(connConfig);

    SearchDnResolver dnResolver = SearchDnResolver.builder()
            .factory(connectionFactory)
            .dn(baseDN)
            .filter("(displayName={user})")
            .subtreeSearch(true)
            .build();

    SimpleBindAuthenticationHandler authHandler = new SimpleBindAuthenticationHandler(connectionFactory);

    Authenticator authenticator = new Authenticator(dnResolver, authHandler);

    return new LdapProfileService(connectionFactory, authenticator, "memberOf,displayName,sAMAccountName", baseDN);
}

The SearchDnResolver is used to search for the user to be authenticated. A filter can be defined to match the user name. And, very importantly, subtreeSearch must be activated. By default it is set to false, which means that only users located directly under the base DN are found.

The SimpleBindAuthenticationHandler can be used together with the Authenticator for authentication with user and password.

Finally, in the LdapProfileService, a comma-separated string can be used to define which attributes of a user should be queried after authentication and transferred to the user profile.

With all of these settings, you will be redirected to the login page when you try to access the administration area. The credentials are then matched against the Active Directory via LDAP and the user is authenticated. In addition, I want to check that the user is in the administrator group and therefore authorized. Unfortunately, pac4j cannot do this on its own because it cannot interpret the attributes as roles. That's why I build my own authorizer.

Authorizer

public class LdapAuthorizer extends ProfileAuthorizer {
    @Override
    protected boolean isProfileAuthorized(WebContext context, SessionStore sessionStore, UserProfile profile) {
        var group = "CN=ADMIN_GROUP,OU=Groups,OU=TEST,DC=schneide,DC=com";
        var attribute = (List) profile.getAttribute("memberOf");
        return attribute.contains(group);
    }

    @Override
    public boolean isAuthorized(WebContext context, SessionStore sessionStore, List<UserProfile> profiles) {
        return isAnyAuthorized(context, sessionStore, profiles);
    }
}

The attributes defined in the LdapProfileService can be found in the user profile. For authorization, I query the group memberships to check if the user is in the group. If the user has been successfully authorized, he is redirected to the administration page. Otherwise the HTTP status code 403 (Forbidden) is returned.

Javalin Http Action Adapter

Since I want to display a separate page that shows the user the Forbidden error, I build my own JavalinHttpActionAdapter.

public class HttpActionAdapter extends JavalinHttpActionAdapter {
    @Override
    public Void adapt(HttpAction action, WebContext webContext) {
        JavalinWebContext context = (JavalinWebContext) webContext;
        if(action.getCode() == HttpConstants.FORBIDDEN){
            context.getJavalinCtx().redirect("/forbidden");
            throw new RedirectResponse();
        }
        return super.adapt(action, context);
    }
}

This redirects the request to the Forbidden page instead of returning the status code.

Conclusion

Overall, using pac4j for authentication and authorization with Javalin facilitates the work and works well. Unfortunately, the documentation is rather poor, especially for the LDAP module. So the setup was a bit of a journey of discovery and I had to spend a lot of time looking for the root cause of some problems like subtreeSearch.

How building blocks can be a game changer for your project

I have just completed my first project of my own. In this project I have a web server whose main task is to load files and return them as a stream.
In the first version, there was a static handler for each file type, which loaded the file and sent it as a stream.

As part of a refactoring, I then built building blocks for various tasks and the handler changed fundamentally.

The initial code

Here you can see a part of my web server. It offers a path for both images and audio files. The real server has significantly more handlers and file types.

public class WebServer{
    public static void main(String[] args) {
        var app = Javalin.create()
            .get("api/audio/{root}/{path}/{number}", FileHandler::handleAudio)
            .get("api/img/{root}/{path}/{number}", FileHandler::handleImage)
            .start(7070);
    }
}

Below is a simplified illustration of the handlers. I have also shown the imports of the web server class and of the web server technology, in this case Javalin. The imports are not complete; they are only intended to show these two dependencies.

import WebServer;
import io.javalin.http.Context;
// ...

public class FileHandler {

    public static void handleImage(Context ctx) throws IOException {
        var resource = WebServer.class.getClassLoader().getResourceAsStream(
            String.format(
                "file/%s/%s/%s.jpg",
                ctx.pathParam("root"),
                ctx.pathParam("path"),
                ctx.pathParam("fileName")));
        String mimeType = "image/jpg";

        if (resource != null) {
            ctx.writeSeekableStream(resource, mimeType);
        }
    }

    public static void handleAudio(Context ctx) {
        var resource = WebServer.class.getClassLoader().getResourceAsStream(
            String.format(
                "file/%s/%s/%s.mp3",
                ctx.pathParam("root"),
                ctx.pathParam("path"),
                ctx.pathParam("fileName")));
        String mimeType = "audio/mp3";

        if (resource == null) {
            resource = WebServer.class.getClassLoader().getResourceAsStream(
                String.format(
                    "file/%s/%s/%s.mp4",
                    ctx.pathParam("root"),
                    ctx.pathParam("path"),
                    ctx.pathParam("fileName")));
            mimeType = "video/mp4";
        }

        if (resource != null) {
            ctx.writeSeekableStream(resource, mimeType);
        }
    }
}

The handler methods each load the file and send it to the Javalin context. The audio handler first searches for an audio file and, if this is not available, for a video file. The duplication is easy to see. And with the static handlers and the Javalin dependency, the code is not testable.

Refactoring

So first I build an interface, a decorator for the context, to get the Javalin dependency out of the handlers. A nice bonus: I can manage the handling of not-found files centrally. In the web server, I then inject the new WebContext instead of the Context.

public interface WebContext {
    String pathParameter(String key);
    void sendNotFound();
    void sendResourceAs(String type, InputStream resource);
    default void sendResourceAs(String type, Optional<InputStream> resource){
        if(resource.isEmpty()){
            sendNotFound();
            return;
        }
        sendResourceAs(type, resource.get());
    }
}

public class JavalinWebContext implements WebContext {
    private final Context context;

    public JavalinWebContext(Context context){
        this.context = context;
    }

    @Override
    public String pathParameter(String key) {
        return context.pathParam(key);
    }

    @Override
    public void sendNotFound() {
        context.status(HttpStatus.NOT_FOUND);
    }

    @Override
    public void sendResourceAs(String type, InputStream resource) {
        context.writeSeekableStream(resource, type);
    }
}

Then I write a method to load the file and send it as a stream.

private boolean sendResourceFor(String path, String mimeType, WebContext context){
    var resource = WebServer.class.getClassLoader().getResourceAsStream(path);
    if (resource != null) {
        context.sendResourceAs(mimeType, resource);
        return true;
    }
    return false;
}

The next step is to build a loader for the files, which I pass to the no longer static handler during initialization. Here I can run a quick check to see if anyone is trying to manipulate the specified path.

public class ResourceLoader {
    private final ClassLoader context;

    public ResourceLoader(ClassLoader context){
        this.context = context;
    }

    public Optional<InputStream> asStreamFrom(String path){
        if(path.contains("..")){
            return Optional.empty();
        }
        return Optional.ofNullable(this.context.getResourceAsStream(path));
    }
}

Finally, I build an extra class for the paths. The knowledge of where the files are located and how to determine the path from the context should not be duplicated everywhere.

public class FileCoordinate {
    private static final String FILE_CATEGORY = "file";
    private final String root;
    private final String path;
    private final String fileName;
    private final String extension;

    private FileCoordinate(String root, String path, String fileName, String extension){
        super();
        this.root = root;
        this.path = path;
        this.fileName = fileName;
        this.extension = extension;
    }

    private static FileCoordinate pathFromWebContext(WebContext context, String extension){
        return new FileCoordinate(
                context.pathParameter("root"),
                context.pathParameter("path"),
                context.pathParameter("fileName"),
                extension
        );
    }

    public static FileCoordinate toImageFile(WebContext context){
        return FileCoordinate.pathFromWebContext(context, "jpg");
    }

    public static FileCoordinate toAudioFile(WebContext context){
        return FileCoordinate.pathFromWebContext(context, "mp3");
    }

    public FileCoordinate asToVideoFile(){
        return new FileCoordinate(
                root,
                path,
                fileName,
                "mp4"
        );
    }

    public String asPath(){
        return String.format("%s/%s/%s/%s.%s", FILE_CATEGORY, root, path, fileName, extension);
    }
}

Result

My handler looks like this after refactoring:

public class FileHandler {
    private final ResourceLoader resource;

    public FileHandler(ResourceLoader resource){
        this.resource = resource;
    }

    public void handleImage(WebContext context) {
        var coordinate = FileCoordinate.toImageFile(context);
        sendResourceFor(coordinate, "image/jpg", context);     
    }

    public void handleAudio(WebContext context) {
        var coordinate = FileCoordinate.toAudioFile(context);
        var found = sendResourceFor(coordinate, "audio/mp3", context);
        if(!found)
            sendResourceFor(coordinate.asToVideoFile(), "video/mp4", context);
    }

    private boolean sendResourceFor(FileCoordinate coordinate, String mimetype, WebContext context){
        var stream = resource.asStreamFrom(coordinate.asPath());
        context.sendResourceAs(mimetype, stream);
        return stream.isPresent();
    }
}

It is much shorter, easier to read, and describes more what is done than how it is technically done. Another advantage is that I can, for example, fully test my FileCoordinate and mock my WebContext.
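
As a small illustration of that testability, here is what such a test could look like with JUnit 5 and a hand-rolled WebContext stub; the stub values and the expected path are made-up example data.

import static org.junit.jupiter.api.Assertions.assertEquals;

import java.io.InputStream;
import org.junit.jupiter.api.Test;

class FileCoordinateTest {

    // A tiny stub instead of a mocking framework; it only answers path parameters
    private final WebContext context = new WebContext() {
        @Override
        public String pathParameter(String key) {
            switch (key) {
                case "root": return "images";
                case "path": return "2024";
                case "fileName": return "cover";
                default: throw new IllegalArgumentException(key);
            }
        }

        @Override
        public void sendNotFound() { }

        @Override
        public void sendResourceAs(String type, InputStream resource) { }
    };

    @Test
    void buildsImagePathFromWebContext() {
        var coordinate = FileCoordinate.toImageFile(context);
        assertEquals("file/images/2024/cover.jpg", coordinate.asPath());
    }
}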

For just these two handler methods, it still looks like overkill. Overall, more code has been created than has disappeared, and yes, a smaller modification would probably have been sufficient for these handlers alone. But my application is not just this handler, and most of the others are much more complex. For example, I work a lot with JSON files, which my loader can now simply interpret using an additional function that returns a JsonNode instead of a stream. The conversion has significantly reduced the complexity of the application, avoided duplication and made the code more secure and testable.
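
That JSON variant is not part of the code shown above, but a sketch of such an additional building block could look like this, assuming the Jackson library is on the classpath:

import java.io.IOException;
import java.io.InputStream;
import java.util.Optional;

import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

public class JsonResourceLoader {
    private final ResourceLoader loader;
    private final ObjectMapper mapper = new ObjectMapper();

    public JsonResourceLoader(ResourceLoader loader) {
        this.loader = loader;
    }

    // Interprets the resource as JSON instead of handing out the raw stream
    public Optional<JsonNode> asJsonFrom(String path) {
        return loader.asStreamFrom(path).flatMap(this::parse);
    }

    private Optional<JsonNode> parse(InputStream stream) {
        try (stream) {
            return Optional.of(mapper.readTree(stream));
        } catch (IOException e) {
            return Optional.empty();
        }
    }
}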

Swagger-ui for any JVM-based backend

We often implement web applications with a React frontend and one of a large pool of backend frameworks/technologies. These include Micronaut, .NET Framework, Javalin, Flask, Eclipse Jetty among others.

A documentation of the API that allows calling the endpoints can be very helpful, even during development, to illustrate the API usage. OpenAPI and implementations like Swagger and Swashbuckle fulfill this task quite well.

While many backend frameworks support documenting and calling the API using Swagger-UI out of the box or via plugins, some frameworks like Undertow do not have direct support for it. Nevertheless, it is relatively easy to add an OpenAPI documentation with a web interface to almost any backend. We demonstrate the setup here using Undertow because it is used in some of our projects.

Adding dependencies and build config

Since we mostly use Gradle for our JVM-based backends, we will highlight the additions to make to build.gradle to generate the OpenAPI definition. It consists of the following steps:

  • Adding the Swagger gradle plugin
  • Adding dependencies for the required annotations
  • Configuring the resolve-task of the gradle plugin

Here is an example excerpt of our build.gradle:

plugins {
  id 'java'
  id 'application'
// ...
  id "io.swagger.core.v3.swagger-gradle-plugin" version "2.2.20"
}

dependencies {
// ...
    implementation 'io.undertow:undertow-core:2.3.12.Final'
    implementation 'io.undertow:undertow-servlet:2.3.12.Final'
    implementation 'io.swagger.core.v3:swagger-annotations:2.2.20'
    implementation 'org.jboss.resteasy:jaxrs-api:3.0.12.Final'
}

resolve {
    outputFileName = 'demo-server'
    outputFormat = 'JSON'
    prettyPrint = 'TRUE'
    classpath = sourceSets.main.runtimeClasspath
    resourcePackages = ['com.schneide.demo.server', 'com.schneide.demo.server.api']
    outputDir = layout.buildDirectory.dir('resources/main/swagger').get().asFile
}

Adding the annotations

We need to add some annotations to our code so that the OpenAPI JSON (or YAML) file will be generated.

The API root class looks like below:

@OpenAPIDefinition(info =
@Info(
        title = "Demo Server Web-API",
        version = "0.11",
        description = "REST API for the demo web application..",
        contact = @Contact(
                url = "https://www.softwareschneiderei.de",
                name = "Softwareschneiderei GmbH",
                email = "kontakt@softwareschneiderei.de")
)
)
public class ApiHandler {
    public ApiHandler() {
        super();
    }

    /**
     *  Connect our handlers
     */
    public RoutingHandler createApiHandler() {
        final RoutingHandler api = new RoutingHandler();
        api.get("/demo", new DemoHandler());
        // ...
        return api;
    }
}

We also refactored our handlers to separate the business API from the Undertow handler interface methods, to generate an expressive API.

The result looks something like this:

@Path("/api/demo")
public class DemoHandler implements HttpHandler {
    public DemoHandler() {
        super();
    }

    @Override
    public void handleRequest(HttpServerExchange exchange) throws Exception {
        exchange.getResponseHeaders().put(Headers.CONTENT_TYPE, "application/json");
        final Map<String, Deque<String>> params = exchange.getQueryParameters();
        final int month = Integer.parseInt(params.get("month").getFirst());
        final int year = Integer.parseInt(params.get("year").getFirst());
        exchange.getResponseSender().send(new Gson().toJson(getDaysIn(year, month)));
    }

    @GET
    @Operation(
            summary = "Get number of days in the given month and year",
            responses = {
                    @ApiResponse(
                            responseCode = "200",
                            description = "A number of days in the given month and year",
                            content = @Content(mediaType = "application/json",
                                    schema = @Schema(implementation = Integer.class)
                            )
                    )
            }
    )
    public Integer getDaysIn(@QueryParam("year") int year, @QueryParam("month") int month) {
        return YearMonth.of(year, month).lengthOfMonth();
    }
}

Running the resolve task with all of the above results in an OpenAPI definition file in build/resources/main/swagger/demo-server.json.

Swagger-UI for the API definition

Now that we have this API definition we can use it to generate clients and, more importantly for us, a web UI documenting the API and allowing us to execute and demo the functionality. For this we simply download the Swagger-UI distribution and place the contents of the dist/ folder in src/main/resources/swagger-ui. We then have to let Undertow serve the definition and the UI like so:

class DemoServer {
    public DemoServer() {
        final GracefulShutdownHandler rootHandler = gracefulShutdown(createHandler());
        Undertow.builder().addHttpListener(8080, "localhost").setHandler(rootHandler).build().start();
    }

    private HttpHandler createHandler() {
        return path()
                .addPrefixPath("/api", new ApiHandler().createApiHandler())
                .addPrefixPath("/swagger-ui", resource(new ClassPathResourceManager(getClass().getClassLoader(), "swagger-ui/"))
                        .setWelcomeFiles("index.html"))
                .addPrefixPath("/swagger", resource(new ClassPathResourceManager(getClass().getClassLoader(), "swagger/")));
    }
}

Note: I tried using the swagger-ui webjar but was unable to configure the location (the URL) of my OpenAPI definition file. Therefore I used the plain swagger-ui download instead.

Wrapping it up

We have to do some setup work and potentially some refactorings to provide a meaningful API documentation for our backend. After this work it is mostly a matter of adding some annotations to the methods and types used in your web API.

Serving static files from a JAR with Jetty

Web applications are often deployed into some kind of web server or web container like Wildfly, Tomcat, Apache or IIS. For single-instance services this can be overkill and serving requests can easily be done in-process without interfacing with some external web container or web server.

A proven and popular framework for an in-process web container and web server on the JVM is Eclipse Jetty. For easy deployment and distribution you can build a single application archive containing everything: the static resources, web request handling, all your code and the dependencies needed to run your application.

If you package your application that way there is one caveat when trying to serve static resources from this application archive. Let us have a look how to do it with Jetty.

Using the DefaultServlet for static resources on the file system

Jetty comes with a servlet implementation for serving static resources out of the box. It is called DefaultServlet and can be used like below:

Server server = new Server();
ServerConnector connector = new ServerConnector(server);
connector.setHost(listenAddress);
connector.setPort(listenPort);
server.addConnector(connector);
var context = new ServletContextHandler();
context.setContextPath("/");
var defaultServletHolder = context.addServlet(DefaultServlet.class, "/static/*");
defaultServletHolder.setInitParameter("resourceBase", "/var/www/static");
server.setHandler(context);
server.start();

This works great in the case of static resources available somewhere on the filesystem.

But how do we specify where the DefaultServlet can find the resources within our application archive?

Using the DefaultServlet for static in-JAR resources

The only thing that we need to change is the init-parameter called resourceBase from a normal file path to a path in the JAR. What does a path to the files inside a JAR look like and how do we construct it? It took me a while to figure it out, but here is what I came up with and it works perfectly in my use cases:

private String getResourceBase() throws MalformedURLException {
    URL resourceFile = getClass().getClassLoader().getResource("static/existing-file-inside-jar.txt");
    return new URL(Objects.requireNonNull(resourceFile).getProtocol(), resourceFile.getHost(), resourceFile.getPath()
        .substring(0, resourceFile.getPath().lastIndexOf("/")))
        .toString();
}

The method results in a string like jar:file:/path/to/application.jar!/static. Using such a path as the resourceBase for the DefaultServlet allows you to serve all the stuff from the /static directory inside your application (or any other) jar containing the class this method resides in.
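
Wiring this into the servlet setup from the first listing is then a one-line change; the sketch below simply reuses the variable names from above.

var context = new ServletContextHandler();
context.setContextPath("/");
var defaultServletHolder = context.addServlet(DefaultServlet.class, "/static/*");
// Point the DefaultServlet at the /static directory inside the application JAR
defaultServletHolder.setInitParameter("resourceBase", getResourceBase());
server.setHandler(context);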

A few notes on the code

Why don't we just hardcode a path after the jar: protocol? Well, the file path may change in several scenarios:

  • Running the application on a different platform or operating system
  • Renaming the application archive – it could contain the version number for example…
  • Installing or copying the application archive to a different location on the file system

Using an existing resource and reusing the relevant parts of the URL-specification for the resource base directory solves all these issues because it is computed at runtime.

However, the code assumes that there is at least one resource available in the JAR and that its path is known at compile time.

In newer JDKs like 21 LTS, constructing a URL using the constructor is deprecated, but I did not bother to rewrite the code to use URI because of time constraints. That is left up to you or a future release…

As always I hope someone finds the code useful and drops a comment.

If You Teach It, Teach It Right

Recently, I gained a glimpse of source code that gets taught in beginner’s developer courses. There was one aspect that really irked me, because I think it is fundamentally wrong from the pedagogical point of view and disrespectful towards the students.

Let me start with an abbreviated example of the source code. It is written in Java and tries to exemplify for-loops and if-statements. I omitted the if-statements in my retelling:

Scanner scanner = new Scanner(System.in);

int[] operands = new int[2];
for (int i = 0; i < operands.length; i++) {
    System.out.println("Enter a number: ");
    operands[i] = Integer.parseInt(scanner.nextLine());
}
int sum = operands[0] + operands[1];
System.out.println("The sum of your numbers is " + sum);

scanner.close();

As you can see, the code opens an input source in the first line, asks for a number twice and calculates the sum of both numbers. It then outputs the result on the console.

There are a lot of problems with this code. Some are just coding style level, like using an array instead of a list. Others are worrisome, like the lack of exception handling, especially in the Integer.parseInt() line. Well, we can tolerate cumbersome coding style. It’s not that the computer would care anyway. And we can look over the missing exception handling because it would be guaranteed to overwhelm beginning software developers. They will notice that things go wrong once they enter non-numbers.

But the last line of this code block is just an insult. It introduces the students to the concept of resources and teaches them the wrong way to deal with them.

Just a quick reminder why this line is so problematic: The java.util.Scanner is a resource, as indicated by its implementation of the interface java.io.Closeable (which is a subtype of java.lang.AutoCloseable; that will be important in a minute). Resources need to be released, freed, disposed or closed after usage. In Java, this step is done by calling the close() method. If you somehow fail to close a resource, it stays open and hogs memory and other important things.

How can you fail to close the Scanner in our example? Simple: just provoke an exception between the first and the last line of the block. If you don't see the output about "The sum of your numbers", the resource is still open.

You can argue that in this case, because of the missing exception handling, the JVM exits and the resource gets released nonetheless. This is correct.

But I’m not worried about my System.in while I’m running this code. I’m worried about the perception of the students that they have dealt with the resource correctly by calling close() at the end.

They learn it the wrong way first and the correct way later – hopefully. During my education, nobody corrected me or my peers. We were taught the wrong way and then left in the belief that we know everything. And I’ve seen too many other developers making the same stupid mistakes to know that we weren’t the only ones.

What is the correct way to deal with the problem of resource disposal in Java (since 2011, at least)? There is an explicit statement that supports us with it: try-with-resources, which leads to the following code:

try (
    Scanner scanner = new Scanner(System.in);
) {
    int[] operands = new int[2];
    for (int i = 0; i < operands.length; i++) {
        System.out.println("Enter a number: ");
        operands[i] = Integer.parseInt(scanner.nextLine());
    }
    int sum = operands[0] + operands[1];
    System.out.println("The sum of your numbers is " + sum);
}

I know that the code looks a lot more intimidating now, but it is correct from a resource-safety point of view. And for a beginning developer, the first lines of the full example already look daunting enough:

import java.util.Scanner;

public class Main {

    public static void main(String[] arguments) {
        // our code from above
    }
}

Trying to explain to an absolute beginner why the "public class" or the "String[] arguments" are necessary is already hard. Saying once more that "this is how you do it, the full explanation follows" does less damage in the long run than teaching a concept wrong and then correcting it afterwards, in my opinion.

If you don't want to deal with the complexity of those puzzling lines, maybe Java, or at least "full-blown" Java, isn't the right choice for your course? Use a less complex language or at least the scripting ability of the language of your choice. If you want to concentrate on for-loops and if-statements, maybe the Java REPL, called JShell, is a better-suited medium? C# has the neat feature of "top-level statements" that gets rid of most of the ritual around your code. C# also calls the try-with-resources just "using", which is a lot more inviting than the peculiar "try".
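
For comparison, here is a sketch of the same exercise pasted into a JShell session: no class, no main method, and the try-with-resources works exactly as shown above (JShell imports java.util by default).

// pasted into a session started with the jshell command
try (Scanner scanner = new Scanner(System.in)) {
    int[] operands = new int[2];
    for (int i = 0; i < operands.length; i++) {
        System.out.println("Enter a number: ");
        operands[i] = Integer.parseInt(scanner.nextLine());
    }
    System.out.println("The sum of your numbers is " + (operands[0] + operands[1]));
}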

But if you show the complexity to your students, don't skimp on overall correctness. Way too much bad code got written with the incomplete knowledge of beginners who were never taught the correct way. And the correct way is so much easier today than 25 years ago, when my generation of developers scratched our heads over why our programs wouldn't run as stable and problem-free as anticipated.

So, let me reiterate my point: There is no harm in simplification, as long as it doesn’t compromise correctness. Teaching incorrect or even unsafe solutions is unfair for the students.

Using JSON-Schema for data exchange

Several years ago XML was a quite popular document format – mostly due to its schema validation possibilities and clearly defined structure. Many libraries made working with such data documents possible (not really nice or a pleasure…) and humans could read them if need be. XML as a text format is programming language agnostic and processable in practically all useful programming environments.

Working with XML was always more of a pain for me. Fortunately, a lot of time has passed since then and alternatives like JSON, YAML and TOML have arisen. All of them have their strengths and weaknesses and can fill similar roles as XML.

In general they have 2 things in common compared to XML:

  1. superior readability
  2. lacking validation compared to XML schema

Nowadays, JSON is very widespread due to the popularity of JavaScript and is perhaps the most used data exchange format across the internet. Despite some syntactic quirks, like its strictness about commas and its ban on comments, it is imho quite a good format. It is concise, human-readable, flexible and relatively simple. Many languages treat it like nested dictionaries, so understanding and working with JSON is easy.
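
A small sketch of this "nested dictionaries" view in Java, assuming the Jackson library is available:

import java.util.Map;
import com.fasterxml.jackson.databind.ObjectMapper;

public class NestedMapDemo {
    public static void main(String[] args) throws Exception {
        String json = "{\"manufacturer\": \"Porsche\", \"model\": \"911 Turbo\", \"price\": 150000}";

        // Jackson maps any JSON object onto plain Maps (and arrays onto Lists)
        Map<?, ?> car = new ObjectMapper().readValue(json, Map.class);
        System.out.println(car.get("model")); // 911 Turbo
    }
}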

The main drawback is missing documentation and validation.

Enter JSON schema

JSON schema is a specification with accompanying libraries that fixes the major issues with JSON. You define a schema for your data documents in JSON and put it in separate files. This adds the missing features I complained about to your JSON data documents: documentation and the possibility of automatic validation.

What does a simple JSON schema file look like? Let us have a look:

{
  "$schema": "https://json-schema.org/draft/2020-12/schema",
  "$id": "https://cars.softwareschneiderei.com/car.schema.json",
  "title": "Car",
  "description": "Describing some important properties of a car",
  "type": "object",
  "properties": {
    "manufacturer": {
      "description": "The company producing the car.",
      "type": "string"
    },
    "model": {
      "description": "The name of the car model.",
      "type": "string"
    },
    "engineType": {
      "description": "One of the available engine types.",
      "enum": [
        "gasoline",
        "diesel",
        "hybrid",
        "electric"
      ]
    },
    "availableColors": {
      "description": "The colors the car is available in. Some colors may increase the price.",
      "type": "array",
      "items": {
        "type": "string"
      },
      "minItems": 1,
      "uniqueItems": true
    },
    "price": {
      "description": "The price tag in € including VAT. The price is optional.",
      "type": "number"
    }
  },
  "required": [
    "manufacturer",
    "model",
    "engine",
    "availableColors"
  ]
}

A valid data document could look like below:

{
  "manufacturer": "Porsche",
  "model": "911 Turbo",
  "engineType": "gasoline",
  "availableColors": [ "black", "blue", "red", "yellow", "white" ],
  "price": 150000
}

I think we can easily see how the documentation helps to understand the data, for example regarding the price property. The Java code to validate the data may look similar to this:

public void processCarData(String carFileName) { 
  var schemaDefinition = Json.decodeValue(readFile("car.schema.json"));
  JsonSchema schema = JsonSchema.of((JsonObject) schemaDefinition);
  var schemaValidator = Validator.create(schema, new JsonSchemaOptions()
      .setDraft(Draft.DRAFT202012)
      .setBaseUri("https://cars.softwareschneiderei.com"));
  var car = Json.decodeValue(readFile(carFileName));

  if (!schemaValidator.validate(car).getValid()) {
    throw new IllegalArgumentException("The format of the car data is invalid.");
  }
  // Data valid, work with it...
}

Conclusion

As depicted above, JSON schema fills some important gaps when using JSON as a data exchange format. It offers helpful tools to document the data structure and to validate data against a definition written in JSON itself.

That way we can safely work with the data, document its structure and still maintain the other good properties of JSON like interoperability, human-readability and data efficiency.

How to Lay a Minefield

… in Minesweeper, of course.

One of the basic building blocks of my hands-on software development workshops are Coding Katas. Minesweeper, the classic game that every Microsoft Windows user knows since 1992, is one of the “medium everything” Katas: Medium scope, medium complexity, medium demand. One advantage is that virtually nobody needs any more explanation than “program a Minesweeper”.

The first milestone when programming a Minesweeper game is to lay the mines on the field in a random fashion. And to my continual surprise, this seems to be a hard task for most participants. Not hard in the sense of giving up, but hard in the sense that the solution lacks suitability or even correctness.

So in order to have a reference point I can use in future workshops and to discuss the usual approaches, here is how you lay a minefield in an adequate way.

The task is to fill a grid of tiles or cells with a predefined amount of mines. Let’s say you have a grid of ten rows and ten columns and want to place ten mines randomly.

Spoiler alert: If you want to think about this problem on your own, don’t read any further and develop your solution first!

Task analysis

If you pause for a moment and think about the ingredients that you need for the task, you’ll find three things:

  • A die (in the form of a random number generator)
  • An amount of mines to place (maybe just represented by a counter variable)
  • An amount of tiles (stored in a data structure)

Each solution I'll present uses one of these things as the primary focus of interest. My term for this is that the solution "takes the thing into its hands", while the other two things "lie on the table". That's because if you simulate how the solution works with real objects like paper and dice, you'll really have one thing in your hands and the others on the table most of the time.

Solution #1: The probability approach

One way to think about the task of placing 10 mines somewhere on 100 tiles is to calculate the probability of a mine being on a tile and just roll the dice for every tile.

Here's some code that shows how this approach might look in Java:

private static final int fieldWidth = 10;
private static final int fieldHeight = 10;

public static Set<Point> placeMinesOn(Set<Point> field) {
    Random rng = new Random();
    final double probability = 0.1D;
    for (int column = 0; column < fieldWidth; column++) {
        for (int row = 0; row < fieldHeight; row++) {
            if (rng.nextDouble() < probability) {
                field.add(new Point(row, column));
            }
        }
    }
    return field;
}

Before we discuss the effects of this code, let’s have a little talk about the data structure that I use for the tiles:

The most common approach to model a two-dimensional grid of tiles in software is to use a two-dimensional array. There is nothing wrong with it, it’s just not the most practical approach. In reality, it is really cumbersome compared to its alternatives (yes, there are several). My approach is to separate the aspect of “two-dimensionalness” from my data structure. That’s why I use a set of points. The set (like a HashSet) is a one-dimensional data structure that more or less can only say “yes, I know this point” or “no, I never heard of this point”. To determine if a certain point (or tile at this coordinate) contains a mine, it just needs to be included in the set. An empty set represents an empty field. If you remove “cleared” mines from the set, its size is the number of remaining mines. With a two-dimensional array, you probably write several loop-in-loop structures, one of them just to count non-cleared mines.
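
Here is a small sketch of what working with this set looks like, using java.awt.Point like the listings in this post:

import java.awt.Point;
import java.util.HashSet;
import java.util.Set;

public class MinefieldQueries {
    public static void main(String[] args) {
        Set<Point> mines = new HashSet<>();
        mines.add(new Point(3, 7));
        mines.add(new Point(0, 0));

        // "Is there a mine on this tile?" is a plain membership test
        System.out.println(mines.contains(new Point(3, 7))); // true
        System.out.println(mines.contains(new Point(5, 5))); // false

        // Clearing a mine is a removal, the remaining count is just the size of the set
        mines.remove(new Point(0, 0));
        System.out.println(mines.size()); // 1
    }
}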

Ok, now back to the solution. It is the approach that holds the die in its hands and uses it for every tile. The problem with it is that our customer didn't ask for a 10 percent probability of a mine per tile, he/she asked for 10 mines. And the code above might place 10 mines, or 9, or 11, or even 14. In fact, the code places somewhere between 0 and 100 mines on the field.

The one thing this solution has going for it is the guaranteed runtime. We roll the dice 100 times and that’s it.

So we can categorize this solution as follows:

  • Correctness: not guaranteed
  • Runtime: guaranteed

If I were the customer, I would reject the solution because it doesn’t produce the outcome I require. A minesweeper contest based on this code would end in a riot.

Solution #2: Sampling with replacement

If you don't take up the die but the mines, and try to dispense them on the field, you arrive at our second solution. As long as you have mines on hand, you choose a tile at random and place a mine there. The only exception is that you can't place a mine on top of a mine, so you have to check for the presence of a mine first.

Here’s the code for this solution in Java:

public static Set<Point> placeMinesOn(Set<Point> field) {
    Random rng = new Random();
    int remainingMines = 10;
    while (remainingMines > 0) {
        Point randomTile = new Point(
            rng.nextInt(fieldHeight),
            rng.nextInt(fieldWidth)
        );
        if (field.contains(randomTile)) {
            continue;
        }
        field.add(randomTile);
        remainingMines--;
    }
    return field;
}

This solution works better than the previous one for the correctness category. There will always be 10 mines on the field once we are finished. The problem is that we can’t guarantee that we are able to place the mines in time before our player gets bored. Taking it to the extreme means that this code might run forever, especially if your random number generator isn’t up to standards.

So, the participants of your minesweeper contest won't protest the number of mines on their field anymore, but they might have to wait an unpredictably long time until all 10 mines are dispersed.

  • Correctness: guaranteed
  • Runtime: not guaranteed

This solution will probably work alright in reality. I just don't see the need to utilize it when there is a clearly superior solution at hand.

Solution #3: Sampling without replacement

So far, we picked up the die and the mines. But what if we pick up the tiles? That's the third solution. In short, we put all tiles in a bag and pick one at random. We place a mine there and don't put the tile back into the bag. After we've picked 10 tiles and placed 10 mines, we have solved the problem.

Here’s code that implements this solution in Java:

public static Set<Point> placeMinesOn(Set<Point> field) {
    List<Point> allTiles = new LinkedList<Point>();
    for (int column = 0; column < fieldWidth; column++) {
        for (int row = 0; row < fieldHeight; row++) {
            allTiles.add(new Point(row, column));
        }
    }

    Collections.shuffle(allTiles);

    int mines = 10;
    for (int i = 0; i < mines; i++) {
        Point tile = allTiles.remove(0);
        field.add(tile);
    }
    return field;
}

The cool thing about this solution is that it excels in both correctness and runtime. Maybe we use some additional memory for our bag and some runtime for the shuffling, but both are bounded by a known upper limit.

Yet, I rarely see this solution in my workshops. It’s only after I challenge their first solution that people come up with it. I’m biased, of course, because I’ve seen too many approaches (successful and failed) and thought about the problem way longer than usual. But you, my reader, are probably an impartial mind on this topic and can give some thoughts in the comments. I would appreciate it!

So, let’s categorize this approach:

  • Correctness: guaranteed
  • Runtime: guaranteed

If I were the customer, this would be my expectation. My minesweeper contest would go unchallenged as long as nobody finds a flaw in the shuffle algorithm.

Summary

It is surprisingly hard to solve simple tasks like "distribute N elements on an X*Y grid" in an adequate way. My approach to deconstructing and analyzing these tasks is to visualize myself doing them "in reality". This is how I came up with the "thing in hands" metaphor that helps me imagine new solutions. I hope it helps you sometime in the future.

How do you lay a minefield and how do you find your solutions? Write a blog post or leave a comment!

Converting a Grails app from war deployment in Tomcat to docker

A few years ago we had real servers with servlet containers installed and managed by administrators. These machines were bare-metal unicorns and had to be kept alive as long as possible (for many years). With the advent of first virtual machines and then container runtimes like Docker, the approach to and the responsibilities for hosting web applications changed.

Our application not only went from Grails framework version 1 to 5 (at the moment, and soon to version 6) but also made the above voyage regarding the hosting environment.

While the step from bare metal to a virtual machine was negligible from the developer and application side, the migration to docker-based hosting is quite a step. Therefore I would like to depict the biggest changes in moving a Grails application from a WAR deployment in Tomcat to a standalone application in Docker.

The original setting

We had our Grails application running on a server with a Tomcat servlet container for years. On the server we also had an ElasticSearch instance running and saved documents in a few well-defined directories on the local file system. Our database (Oracle in the past, PostgreSQL for over a year now) was running managed in a data center. We used container features like JNDI for parts of the configuration and the datasources.

Deployment was essentially uploading a new WAR and context.xml file to the Tomcat servlet container using an Ansible playbook (where we restarted the servlet container to ensure not leaving any resource leaks…).

This setup worked quite well for years, only upgrades of the host operating system along with Tomcat and Java Virtual Machine (JVM) updates meant some work.

The target setup

Our customer has been in the process of converting most of the services running on virtual machines to dockerized deployments managed using Portainer. Most of the provisioning is automated and there is almost no snowflaking of the virtual machines hosting the Docker runtimes. This eases the operation and administration of the services a lot and makes it much easier to set up new instances of a service and to monitor them.

So it was our task to migrate our setup on the host VM to a docker-based deployment. In the future we basically push a docker image of our application to a docker registry of the customer and update the stack using Portainer.

The journey to a dockerized Grails app

The first thing to do was to make some changes to build.gradle to build a JAR application instead of a WAR archive. We decided it was easier and more lightweight to run the Grails app standalone with the embedded Tomcat than to use Tomcat in a docker container:

  • Remove the gradle war plugin
  • Change providedCompile dependencies to implementation
  • Add a task to prepare the Dockerfile and context to build our bootable application image:
task prepareDocker(type: Copy, dependsOn: assemble) {
    description = 'Copy files from src/main/docker and application jar to Docker temporal build directory'
    group = 'Docker'

    from 'src/main/docker'
    from project.bootJar

    into dockerBuildDir
}
  • Add an appropriate Dockerfile to our docker context in src/main/docker:
FROM eclipse-temurin:11-jdk-jammy
MAINTAINER Mihael Koep "mihael.koep@softwareschneiderei.de"

EXPOSE 8080

WORKDIR /app
COPY application-boot.jar .

ENV GRAILS_ENV development
# Additional env variables for configuration

CMD java -Dgrails.env=$GRAILS_ENV -jar /app/application-boot.jar
  • Remove JNDI usages and convert configuration from context.xml and JNDI to environment variables
  • Add a docker-compose.yml for the stack definition (here outlined for development with a database as part of the stack)
version: '3.7'
services:
  my-app:
    container_name: my-app-application
    image: ${IMAGE_TAG}
    environment:
      - GRAILS_ENV=development
      - DB_URL=jdbc:postgresql://my-app-db:5432/my-app
    volumes:
      - my-appdata:/app/data_home
    ports:
      - 8080:8080
    depends_on:
      - my-app-db
      - my-app-elastic
  my-app-db:
    container_name: my-app-database-stack
    image: postgres:13
    environment:
      - POSTGRES_USER=my-app
      - POSTGRES_PASSWORD=my-db-password
      - POSTGRES_DB=my-app
    ports:
      - 5432:5432
  my-app-elastic:
    container_name: my-app-elastic-stack
    image: docker.elastic.co/elasticsearch/elasticsearch:7.8.0
    environment:
      - node.name=es01
      - cluster.name=my-app-es
      - discovery.type=single-node
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - searchdata:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
      - 9300:9300
volumes:
  searchdata:
  my-appdata:

Conclusion

The journey from classical deployments to a dockerized environment using a docker stack is not that long for a Grails application (or most other Java web applications). It makes setting up a new instance or migrating to another host trivial, as most configuration lives in version-controlled files and there is next to no snowflaking. The customer has an easier time running the web application because the requirements are only a container runtime and are explicitly formulated in the stack definition.

In addition, the developers are free to choose the JVM and other details within the containers without having to cross-check with the customer's administrators whether they are supported by their organization.