Changing the keyboard navigation behaviour of form inputs

The default behaviour in HTML forms is that you can move the focus from one input element to the next via the tab key and submit the form via the enter key. This is also how dialogs work on most operating systems when using the native UI components. This behaviour is consistent across all browsers, and changing it messes with the user’s expectations and reduces accessibility. So I would normally advise against changing this behaviour without good reasons.

However, one of our customers wanted a different behaviour for an application developed by us. This application replaced an older application where the enter key did not submit the form, but moved the focus to the next input element. The ‘muscle memory’ effect made users accidentally submit the form by hitting the enter key, causing frustration. Since this application is not a public web site, but merely a web-technology-based intranet application with a small and specialized user base, changing the default behaviour is acceptable if the users want it.

So here’s how to do it. The following JavaScript function focusNextInputOnEnter takes a form element as a parameter and changes the focus behaviour on the input elements within this form.

function focusNextInputOnEnter(form) {
  var inputs = form.querySelectorAll('input, select, textarea');
  for (var i = 0; i < inputs.length; i++) {
    var input = inputs[i];
    input.addEventListener('keypress', (function(index) {
      return function(event) {
        if (!isEnter(event.which)) {
          return;
        }
        // prevent the default behaviour of submitting the form on enter
        event.preventDefault();
        var nextIndex = index + 1;
        while (nextIndex < inputs.length) {
          var nextInput = inputs[nextIndex];
          if (nextInput.disabled) {
            nextIndex++;
            continue;
          }
          nextInput.focus();
          break;
        }
      };
    })(i));
  }

  function isEnter(keyCode) {
    return keyCode === 13;
  }
}

It works by handling the keypress events on the input elements, checking the key code for the enter key (code 13), preventing the default submit behaviour and focusing the next input element. An additional check makes sure that disabled input elements are skipped.
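Side note: the keypress event and event.which are deprecated in current browsers. If you only target modern browsers, the same check can be written with keydown and event.key instead — a minimal sketch of the idea:

input.addEventListener('keydown', function(event) {
  if (event.key !== 'Enter') {
    return;
  }
  event.preventDefault();
  // ...then focus the next enabled input as shown above
});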

To apply this change in behaviour to a form we have to call the function when the DOM content is loaded:

<form id="demo-form">
  <input type="text">
  <input type="text" disabled="disabled">
  <input type="checkbox">
  <select>
    <option>A</option>
    <option>B</option>
  </select>
  <textarea></textarea>
  <input type="text">
  <input type="text">
</form>

<script>
  document.addEventListener('DOMContentLoaded', function() {
    focusNextInputOnEnter(document.getElementById('demo-form'));
  });
</script>

I want to reiterate my warning that you should definitely not do this for public web sites, and elsewhere only if you know that this is what your users want.

Using (elastic)search with .NET Core

Many modern applications require powerful search mechanisms to become useful and make their users more productive. That is in large part due to the amount of data available to work with. Thankfully there are already powerful tools to index your data and make it searchable.

One of the most well-known state-of-the-art solutions is Elasticsearch, which has an API to be used from .NET called NEST. While the documentation is okay, I want to give a quick rundown on how to add search capabilities to your .NET Core application. Some ideas are borrowed from the great post “Using Elasticsearch with ASP.NET Core and Docker“.

Getting ElasticSearch running

The easiest way to get up and running with Elasticsearch is to use their Docker images and just run the container on your development machine. I like to use a docker-compose file like the following to get Elasticsearch and its tooling application Kibana up and running fast:

version: '3.8'
services:
    elasticsearch:
        image: docker.elastic.co/elasticsearch/elasticsearch:7.8.0
        container_name: elastic
        environment:
            - node.name=elastic
            - cluster.initial_master_nodes=elastic
        ports:
            - "9200:9200"
            - "9300:9300"
        volumes:
            - type: bind
              source: ./esdata
              target: /usr/share/elasticsearch/data
        networks:
            - esnetwork
    kibana:
        image: docker.elastic.co/kibana/kibana:7.8.0
        ports:
            - "5601:5601"
        networks:
            - esnetwork
        depends_on:
            - elasticsearch
networks:
    esnetwork:
        driver: bridge

After you start it with docker-compose you can talk to the search service at http://localhost:9200/ and reach the Kibana management GUI at http://localhost:5601/. In the Kibana UI, the Dev Tools console in particular is useful for experimenting with search queries.
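A quick way to verify that Elasticsearch is up is a plain HTTP request, for example with curl (the exact response fields depend on your version):

curl http://localhost:9200/
# returns a JSON document with the cluster name and version information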

Accessing Elasticsearch from your .NET Core app

I find it quite elegant to write an extension method for the IServiceCollection to configure the ElasticClient and register it as a singleton with the dependency injection framework of .NET Core, like so:

using System;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;
using Nest;

public static class ElasticSearchExtension
{
    public static void AddElasticsearch(
        this IServiceCollection services, IConfiguration configuration)
    {
        var url = configuration["elasticsearch:url"];
        var settings = new ConnectionSettings(new Uri(url))
            .DefaultMappingFor<SearchableDevice>(deviceMapping => deviceMapping
                .IndexName("devices")
                .IdProperty(dev => dev.Id)
            );
        var client = new ElasticClient(settings);
        // create the index and map the public properties of SearchableDevice automatically
        var response = client.Indices.Create("devices", creator => creator
            .Map<SearchableDevice>(device => device
                .AutoMap()
            )
        );
        // maybe check response to be safe...
        services.AddSingleton<IElasticClient>(client);
    }
}

The corresponding configuration block in appsettings.json looks like the following (configuration keys are case-insensitive, so it matches the lookup above)

  "ElasticSearch": {
    "url": "http://localhost:9200/"
  },

and allows for future extension.

Our search service has to be registered in Startup of course:

public void ConfigureServices(IServiceCollection services)
{
    ...
    services.AddElasticsearch(Configuration);
}

Indexing our objects

In the above example we built a SearchableDevice class whose public properties are going to be indexed by Elasticsearch. The API allows for much more fine-grained control over what to index and how, but we want to keep things simple without having to worry about excluding properties and so on. If you have set it up that way, indexing a SearchableDevice is merely one simple call:

// SearchClient is the injected IElasticClient
// mySearchableDevice is an instance of SearchableDevice
SearchClient.IndexDocument(mySearchableDevice);
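For reference, a minimal SearchableDevice could look like the following sketch. The properties besides Id are assumptions and depend on your domain model; only the Id property is required by the mapping above:

public class SearchableDevice
{
    public long Id { get; set; }
    public string Name { get; set; }
    public string Description { get; set; }
}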

Searching for objects

When developing a search query I like to try it in the Kibana Dev Tools first and then transform it into a NEST call. A simple query that checks all device properties for values starting with “needle” looks like this:

{
  "query": {
    "multi_match": {
      "type": "phrase_prefix", 
      "fields": ["*"],
      "query": "needle"
    }
  }
}

The nice thing about the Kibana Dev Tools console is that it can display and complete the possible values for fields like “type” in your multi_match query.

That search query can then be translated to a NEST call in a straightforward way and looks like this:

var response = SearchClient.Search<SearchableDevice>(sd => sd
    .Query(q => q
        .MultiMatch(query => query
            .Type(TextQueryType.PhrasePrefix)
            .Fields("*")
            .Query("needle")
        )
    )
);

The search response contains the hits with their source objects and some metadata like the score and result count.

Bear in mind that Elasticsearch only returns the first/best 10 matches by default, so explicitly specifying the result size will often be closer to what you want.
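Paging can be controlled with From() and Size() on the search descriptor. A sketch of the query above, requesting up to 100 hits:

var response = SearchClient.Search<SearchableDevice>(sd => sd
    .From(0)    // offset of the first hit to return
    .Size(100)  // maximum number of hits to return
    .Query(q => q
        .MultiMatch(query => query
            .Type(TextQueryType.PhrasePrefix)
            .Fields("*")
            .Query("needle")
        )
    )
);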

Wrapping it up

Getting started with Elasticsearch in .NET Core does not require too much boilerplate and setup work if you use tools like Docker and the NEST library. Making it usable and tuning the indexing and querying may require a lot of work to achieve the best results. On the other hand, smaller applications can start off with a simple search setup like the one shown above and simply evolve it when the need arises.

State Management for emotionally overwhelmed React rookies

The topic of React state management is nowhere near new. Just to be clear what we're talking about here: Consider a multitude of components which, in nice React fashion, are finely interleaved into each other, each one with a single responsibility, each one only holding as much information as it needs. Depending on the complexity of your application, there can be a lot of complex dependencies, where one small component somewhere might cause another small component totally-elsewhere to update (re-render), without these having to really know much about each other, because we strive for low coupling. In front-end development, this is not only a matter of "cleaner code", but also of performance: having to re-render stuff that is not actually changing.

So, just a few months back, a new competitor appeared in the field of React state management: Recoil, open-sourced by Facebook. The older top dog in this field is the widely-used Redux, alongside smaller contenders like MobX that also aim to offer an alternative way of managing state in smaller applications. And when React 16.3 introduced the new Context API, React itself already officially advanced quite a step towards its own solution to these questions.

There's probably not a single web developer on earth who wouldn't agree that in our field, one of the most fun…fundamental challenges is the effort of staying afloat on top of the turbulent JavaScript-osphere. If you are the type of person who doesn't want to jump on every bandwagon, but still doesn't want to miss out on all the amazing opportunities that all this progress could give you, you had better start a bunch of side projects (call them "recreational" if you like) and give yourself a chance to dive into particular technologies with confined scope, for some research time.

This is what I have done now, trying to focus completely on the issues an ambitious developer can experience when having all these choices. This is what I want to outline for you here, because, as usual: if you have lots of time to study a single technology, you can succeed in spite of many limitations, and you also get used to the kinds of things you might have to do that you originally didn't want to do, and so on.

Now with Redux, nobody really appears to say many bad things about it, and there even are some Mariuses who seem to be absolutely in love with the official Redux documentation, which is actually more of a guide to time-tested best practices, giving you the opportunity to do things right and have a scalable state container that supports you even if your application grows to large dimensions. Then there's stuff like a time-travelling state debugger and the flexible middleware integration which I haven't even touched yet. When your project has a number of unrelated data structures, there's the Ducks pattern that advises you to organize your required reducers, actions and action creators in coherently arranged files. This, however, turned complicated in one of my projects in which the types of data objects aren't as unrelated: I had to remove all the combineReducers() logic and ended up with one large global state object. I now have one source file that just consists of everything-related-with-Redux, and for my purpose this seems fine, but I still have to write the rather cumbersome connect(mapStateToProps, mapDispatchToProps) structure in every component in which I want to access the state. I would prefer to have smaller state containers, but maybe it's the structure of my project that makes these complicated.

It really is that way: due to the ever-changing recommendations that come with the evolution of React, the question of how to do things best (read: best for your specific purpose) always stays fresh. Since React 16.8 and the arrival of Hooks there is a progression towards less boilerplate code, favoring functional components with a leaner appearance. In this spirit, I strived for something less Redux-y. For example, if I want some text in my state to be set, I would have to do something like:

// ./ducks/TextDucks.js
// avoid having to rely on a magical string, therefore save the string to a constant
const SET_TEXT = 'SET_TEXT';

// action creator
export const setTextCreator = (text) => ({type: SET_TEXT, payload: {text}});

// initialState is defined elsewhere in this file
const Reducer = (state = initialState, {type, payload}) => {
  //... other state stuff
  if (type === SET_TEXT) {
    return {...state, text: payload.text};
  }
  return state; // always return the previous state for unknown action types
}

================
// Component file
import {setTextCreator} from './ducks/TextDucks.js';
const mapDispatchToProps = (dispatch) => ({
  setText: text => dispatch(setTextCreator(text))
});
const Component = ({setText, ...props}) => {
  // here I can actually use setText()
};
export default connect(..., mapDispatchToProps)(Component);

This is more organized than passing some setText('') function along my whole component tree, but it still feels like a bit of overhead.

Then there was MobX, which seemed to be quite lightweight and clearly laid out, with a coherent use of the Observable pattern, implemented with its own JavaScript decorators that follow a simple structure. Still, the look and feel of this code would differ quite a lot from my usual coding style, which kept me from actually using it. This decision was then reinforced by certain statements online which, some years ago, predicted that the advancement of React's own Context API would make any third-party library redundant. Now, to be fair, React's current Context API with useReducer() and useContext() already makes it possible to imitate a Redux-like structure, but consider it a way of thinking: if you write your code in the same style as you would following Redux's recommendations, why not use Redux directly? Clearly, the point of avoiding Redux should go in the direction of thinking differently.

The Context API actually supplies the underlying structure on which Redux's own <Provider> builds, so it is not a big surprise that you can accomplish similar things with it. Using the Context API, you wrap your whole application like this:

// myContext.js
import React from "react";
const TextContext = React.createContext();
export default TextContext;

// App.js
import React from 'react';
import TextContext from './myContext';
const App = () => {
  // keep the state in the top-level component and share the
  // [value, setter] pair with the whole tree via the Provider
  const [text, setText] = React.useState("initial text");
  return (
    <TextContext.Provider value={[text, setText]}>
      {/* actual app components here */}
    </TextContext.Provider>
  );
};

// some subComponent.js
import React from 'react';
import TextContext from './myContext';
const SubComponent = (props) => {
  const [text, setText] = React.useContext(TextContext);
  // now use setText() as you would with a local React useState dispatch function
};

Personally, I found that arrangement a bit clearer than the Redux structure, at least if you're not used to Redux' way of thinking anyway. However, if your state grows beyond just a text, you would either keep the state information in one large object again, or wrap your <App/> in a ton of different Contexts, which I personally disliked already when I had just three different global state variables to manage.

Having all these possibilities at hand, one might wonder why Facebook felt the need to implement a new way of state management with Recoil. Recoil is still in its experimental phase; however, it didn't take long to find one aspect very promising: the coding style of managing state in Recoil feels a lot like writing React code itself, which makes it very smooth to work with, as you don't have to treat global state much differently than local state. Our example would look like this:

// textState.js
import * as Recoil from 'recoil';
export const text = Recoil.atom({key: 'someUniqueKey', default: 'initial text'});

// App.js
import React from 'react';
import {RecoilRoot} from 'recoil';
const App = () => <RecoilRoot>{/* here the actual app components */}</RecoilRoot>;

// some Component.js
import * as Recoil from 'recoil';
import * as TextState from './textState';
const Component = () => {
  const [text, setText] = Recoil.useRecoilState(TextState.text);
  // from then on, you can use setText() like a React useState dispatch function
};

Even simpler, with Recoil you directly have access to the useRecoilValue() function to just read the text value, and to the useSetRecoilState() function to just dispatch a new text. This avoids the complication of having to re-think how you treat whatever-in-your-state-is-global differently from what is local. Your <App/> component doesn't grow to ugly levels of indentation, and you can neatly organize everything state-related in a separate file.
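A short sketch of these two hooks (again inside a component, with imports as above):

// a component that only displays the text
const text = Recoil.useRecoilValue(TextState.text);

// a component that only changes the text
const setText = Recoil.useSetRecoilState(TextState.text);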

As someone who considers himself quite eager to learn new technologies, but also wants to see results quickly without having to acquire a lot of fresh basic understanding first, I have to admit I had the most fun trying out Recoil in my projects. However, I totally believe that the demise of Redux is not closing in any time soon, due to its focus on sustainability. For the future, I aim to see my one Recoil project grow, and I will keep you updated on how well this goes…

Updating Grails 3.3.x to 4.0.x

We have a long history of maintaining a fairly large Grails application which we took from Grails 1.0 to 4.0. We sometimes decided to skip intermediate releases of the framework because of problems or missing incentives to upgrade. If you are interested in our experiences of the past, feel free to have a look at our earlier stories.

This is the next installment of our journey to the latest and greatest version of the Grails framework. This time the changes do not seem as intimidating as going from 2.x to 3.x. There are fewer moving parts, at least from the perspective of an application developer, where almost everything stayed the same (Gradle build system, YAML configuration, Geb functional tests etc.). Under the hood there are of course some bigger changes, like new major versions of GORM/Hibernate and Spring Boot and the switch to Micronaut as the parent application context.

The hurdles we faced

  • For historical reasons our application uses flush mode “auto”. This still does not work today, see https://github.com/grails/grails-core/issues/11376
  • The most work-intensive change is that Hibernate 5 requires you to perform your work in transactions. So we had dozens of places where we needed to add missing @Transactional annotations, especially to make saving domain objects work (see the sketch after this list). Therefore we essentially had to test the whole application.
  • The handling of HibernateProxies again became more opaque, which led to numerous IllegalArgumentExceptions (“object is not an instance of declaring class”). Sometimes we could move from generated hashCode()/equals() implementations to the Groovy annotation @EqualsAndHashCode (actually a good thing), whereas in other places we did manual unwrapping or switched to eager fetching to avoid these problems.
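A typical fix looked like the following sketch; DeviceService and its method are made-up names for illustration, the annotation to use in Grails 4 is grails.gorm.transactions.Transactional:

import grails.gorm.transactions.Transactional

@Transactional
class DeviceService {
    Device save(Device device) {
        // without a surrounding transaction, saving now fails with Hibernate 5
        device.save(failOnError: true)
    }
}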

In addition we faced minor gotchas like changed configuration entries. The one that cost us some hours was the subtle change from server.contextPath to server.servlet.context-path, but nothing major or blocking.

We also had to transform many of our unit and integration tests to Spock and the new Grails Testing Support framework. This makes the tests more readable anyway and feels more fruitful than trying to debug the old-style Grails Test Mixin based tests.

Improvements

One major improvement for us in the Grails ecosystem is the good news that the shiro plugin is again officially available, maintained and cleaned up (see https://github.com/nerdErg/grails-shiro). Now we do not need to use our own poor man’s port anymore.

Open questions

Regarding the proclaimed performance improvements and reduced memory consumption we do not have final numbers or impressions yet. We will deliver results on this front in the future.

More important is an inconvenience we are still facing regarding hot code reloading: it does not work for us, even using OpenJDK 8 with the old spring-loaded mechanism. The new restart style of Micronaut/Spring Boot is not really productive for us because the startup times can easily reach the minute range even on fast hardware.

Pro-Tip

My hottest advice for you is this one:

Create a fresh Grails 4 app and compare central files like application.yml and build.gradle to get up to the state-of-the-art.

Conclusion

While this upgrade still was a lot of work and meant many places had to be touched, it was a lot smoother than many of the previous ones. We hope that things improve further in the future, as the technological stack is up-to-date and much more mature than in the early days…

Adding a dynamic React page to your classic grails multi-page application

We are developing and maintaining a more than 10-year-old classic multi-page application based on the Grails web framework. With the advent of HTML 5 and modern browsers with faster JavaScript engines, users expect a more dynamic and pleasant user experience (UX) from web applications. Our application is used by hundreds of users and our customer expects a stable, familiar and feature-rich experience that continues to improve over time. Something like a complete rewrite of the UI is way out of scope, time- and budget-wise.

One of the new feature requests would benefit highly from a client-side JavaScript implementation, so we looked at our options. Fortunately it is quite easy to integrate a React app with Grails and the Gradle build system. So we implemented the new page almost completely as a React app while leaving all the other pages as normal server-side rendered Groovy Server Pages (GSP). The result is quite convincing and opens up a transition path to more and more dynamic client-side pages, and perhaps even to a complete transformation into a single-page application (SPA) in the distant future.

Integrating a React-App into Grails build process

The Grails react-webpack profile can serve as a great starting point to integrate a React app into an existing Grails project. First you create the React app for the new page in the folder src/main/webapp, using the create-react-app scripts for example. Then you need to add a $GRAILS_PROJECT/webpack.config.js to configure webpack appropriately, like so:

var path = require('path');

module.exports = {
  entry: './src/main/webapp/index.js',
  output: {
    path: path.join(__dirname, 'grails-app/assets/javascripts'),
    publicPath: '/assets/',
    filename: 'bundle.js'
  },
  module: {
    rules: [
      {
        test: /\.js$/,
        include: path.join(__dirname, 'src/main/webapp'),
        use: {
          loader: 'babel-loader',
          options: {
            presets: ["@babel/preset-env", "@babel/preset-react"],
            plugins: ["transform-class-properties"]
          }
        }
      },
      {
        test: /\.css$/,
        use: [
          'style-loader',
          'css-loader'
        ]
      },
      {
        test: /\.(jpe?g|png|gif|svg)$/i,
        use: {
          loader: 'url-loader?limit=10000&prefix=assets/!img'
        }
      }
    ]
  }
};

The next step is to move the package.json to the $GRAILS_PROJECT directory, because we want Gradle tasks to take care of building and bundling the app as a Grails asset. To make this convenient we add some Gradle tasks employing yarn to our build.gradle:

buildscript {
    dependencies {
        ...
        classpath "com.moowork.gradle:gradle-node-plugin:1.2.0"
    }
}

...

apply plugin:"com.moowork.node"

...

node {
    version = '12.15.0'
    yarnVersion = '1.22.0'
    distBaseUrl = 'https://nodejs.org/dist'
    download = true
}

task bundle(type: YarnTask, dependsOn: 'yarn') {
    group = 'build'
    description = 'Build the client bundle'
    args = ['run', 'bundle']
}

task webpack(type: YarnTask, dependsOn: 'yarn') {
    group = 'application'
    description = 'Build the client bundle in watch mode'
    args = ['run', 'start']
}

bootRun.dependsOn(['bundle'])
assetCompile.dependsOn(['bundle'])

...

Now we have integrated our new React app with the Grails build system and packaging. The webpack task allows updating the JavaScript bundle on the fly, so that we have almost the same hot-reloading support when developing as with the rest of Grails.
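Note that the bundle and webpack tasks run yarn run bundle and yarn run start, so the package.json is assumed to contain matching script entries, roughly like this sketch (the exact webpack invocations depend on your setup):

"scripts": {
  "bundle": "webpack --mode production",
  "start": "webpack --mode development --watch"
}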

Delivering the react app as a page

Now that we have integrated the react app in the build and packaging process of our grails application we need to deliver it when the new page is requested by the browser. This is quite simple and straightforward and can be achieved with a GSP like so:

<html>
<head>
    <meta name="layout" content="main"/>
    <title>
        <g:message code="example.header"/>
    </title>
</head>
<body>
    <div id="react-content">
    </div>
    <asset:javascript src="bundle.js"/>
</body>
</html>

Now you just have to develop the endpoints for the JavaScript app in the form of normal Grails controllers rendering JSON instead of GSP views. This is extremely easy using Groovy maps and the Grails JSON converters:

import grails.converters.JSON

class DataApiController {

    def getData = {
        def responseData = [
            name: 'John',
            age: 37
        ]
        render responseData as JSON
    }
}
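On the client side the React app can consume such an endpoint with a plain fetch() call. A minimal sketch; the URL /dataApi/getData assumes the default Grails URL mapping for the controller above:

import React from 'react';

const DataView = () => {
  const [data, setData] = React.useState(null);
  React.useEffect(() => {
    // fetch the JSON rendered by DataApiController
    fetch('/dataApi/getData')
      .then(response => response.json())
      .then(setData);
  }, []);
  return data ? <div>{data.name}, {data.age}</div> : <div>loading…</div>;
};

export default DataView;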

Conclusion

Grails and its build infrastructure are flexible enough to easily integrate SPA pages into an existing traditional web application. This allows you to deliver the modern UX and features expected by today's users without completely rewriting your trusty and proven Grails application. The process can be applied gradually, and individual pages/views can be renewed when needed. That way you can continually add value for your customer while incrementally modernizing your application.

How to disable IP address logging for Apache web server and Tomcat

The General Data Protection Regulation (GDPR) prohibits excessive or unnecessary collection of personal data. IP addresses in server log files are considered personal data.

Logging of IP addresses is usually enabled by default in fresh web server installations. This article describes how to disable it for the Apache web server and the Tomcat application server, which are a common combination for JVM based web applications.

Apache Web Server

The Apache web server logs the HTTP requests from web clients in log files called *access.log, which include IP addresses. Another Apache log file that can contain IP addresses is error.log.

To disable IP address logging for Apache, edit the main configuration file, usually called httpd.conf, and scan it for LogFormat entries, for example:

LogFormat "%h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"" combined

The format ‘verb’ %h stands for the hostname or IP address of the client. This is what you want to remove. You may also want to remove the “Referer” and “User-Agent” components of the log format for privacy:

LogFormat "%l %u %t \"%r\" %>s %O" combined

Restart the server for the changes to take effect.

Tomcat Application Server

For Tomcat you have to configure the so-called AccessLogValve in the server.xml configuration file located in the $CATALINA_HOME/conf directory. The configuration entry looks like this:

<Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs"
       prefix="localhost_access_log." suffix=".txt"
       pattern="%h %l %u %t &quot;%r&quot; %s %b" />

The pattern attribute specifies the log format. Again, the relevant format verb you want to remove is %h for the hostname or IP address:

       pattern="%l %u %t &quot;%r&quot; %s %b" />

Restart the server for the changes to take effect.

Debugging Web Pages for iOS

Web developers use browser tools like the Web Inspector in Chrome and Safari or the Developer Tools in Firefox to develop, debug and test web pages. In Safari you have to enable the Develop menu first: Safari -> Preferences… -> Advanced -> Show Develop menu in menu bar

All these tools offer modes where you can display the page layout at various screen sizes. In Safari this is called the Responsive Design Mode and can be found in the Develop menu. This is essential for checking the page layout for mobile devices. There are however some differences in behaviour, which can only be tested on the real devices or in a simulator. For example, dropdown menus can trigger a wheel selector on mobile devices, while the desktop browser renders them as regular dropdown menus, even in responsive design mode.

Here are some tips for debugging web pages for iOS devices in the simulator:

Using Web Inspector with the iOS Simulator

Within the mobile Safari browser you can’t simply open the Web Inspector console as you would do when developing a web page using a desktop browser. But you can connect the Web Inspector of your desktop Safari to the mobile Safari browser instance running in the iOS simulator:

  • Start the iOS simulator from Xcode: Xcode -> Open Developer Tool -> Simulator
  • Select the desired device: Hardware -> Device -> e.g. iOS 12.1 -> iPhone SE
  • Open the web page in Safari within the simulator
  • Open the desktop version of Safari

In Safari’s Develop menu the simulator now shows up as a device, e.g. “Simulator – iPhone SE – iOS 12.1 (16B91)”. The web page you opened in the simulator should be listed as submenu item. If you click this menu item the Web Inspector opens. It’s now connected to the simulated Safari instance and you can debug the mobile variant of your web page.

Workaround for Clearing the Cache

When using a desktop web browser, one can easily bypass the local browser cache when reloading a web page by holding the shift key while pressing the reload button. Sometimes this is necessary to see changes take effect while developing a web application. However, this doesn't work in Safari running within the iOS simulator. There's a little workaround: you can open the web page in a private (incognito) tab, which means the cache is cleared each time you close the tab and re-open the page in a new private tab.

Setting Grails session timeout in production

Grails 3 was a great update to the framework and kept it up-to-date with modern requirements in web development. Modularization, profiles, a revamped build system and configuration were all great changes that made working with Grails more productive and fun again. I quite like the choice of YAML for the configuration settings because you can easily describe sections and hierarchies without much syntactic noise.

Unfortunately, there are some caveats. One of them went live and caused a (minor) irritation for our customer:

The session timeout was back to the 30-minute default and not prolonged to the one hour we all agreed upon some years (!) ago.

Investigating the cause

Our configuration in application.yml was correctly set to the desired one-hour timeout, and in development everything was working as expected. But the thing is that the setting server.session.timeout is only applied to the embedded Tomcat. If your application is deployed to a standalone servlet container, this setting is ignored. Unfortunately it is far from obvious which settings in application.yml are used in which situation.
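For reference, the relevant entry in our application.yml looks like this (the timeout is specified in seconds, so one hour):

server:
    session:
        timeout: 3600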

In the case of a standalone servlet container you would just edit your application's web.xml and the container would use the setting there. While this would work, it is not very nice because you would have two locations for one setting. In software development we call that duplication. What makes things worse is that there is no web.xml in our case! So what now?

The solution

We have two problems here

  1. Providing the functionality our customer desires
  2. Removing the code duplication so that development and production work the same way

Our solution is to apply the setting from application.yml to the HTTP-Session of the request using an interceptor:

class SessionInterceptor {
    int order = -1000

    SessionInterceptor() {
        // apply the interceptor to every request
        matchAll()
    }

    boolean before() {
        // the timeout from application.yml, in seconds
        int sessionTimeout = grailsApplication.config.getProperty('server.session.timeout') as int
        log.info("Configured session timeout is: ${sessionTimeout}")
        request.session?.setMaxInactiveInterval(sessionTimeout)
        true
    }
}

That way we use a single source of truth, namely the configuration in application.yml, both in development and production.

Configurable React backend in deployment

In my last post I explained how to make your React app configurable with the backend endpoint as an example. I did not make clear that the depicted approach provides build-time configurability.

If you want deploy-time or runtime configurability, the simplest approach is to provide global variables in your index.html like so:

<!DOCTYPE html>
<html lang="en">
  <head>
    <script>
      window.REACT_APP_BACKEND_API_BASE_URL= 'http://some.other.server:5000';
      window.APPLICATION_CONFIGURATION = {
        settingA: 'aValue',
        anotherSetting: 'anotherValue'
      };
    </script>
  </head>
  <body>
    <noscript>
      You need to enable JavaScript to run this app.
    </noscript>
    <div id="root"></div>
  </body>
</html>

We use (or activate) this configuration similarly to the build-time approach with .env files:

// If we have a differing backend configured, replace the global fetch()
// instead of process.env.REACT_APP_BACKEND_API_BASE_URL
// we now use window.REACT_APP_BACKEND_API_BASE_URL
if (window.REACT_APP_BACKEND_API_BASE_URL !== undefined
    && window.REACT_APP_BACKEND_API_BASE_URL !== '') {
  applyBaseUrlToFetch(window.REACT_APP_BACKEND_API_BASE_URL);
}
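The applyBaseUrlToFetch() helper comes from the previous post. For completeness, a minimal sketch of such a function, assuming only relative URLs should be prefixed:

function applyBaseUrlToFetch(baseUrl) {
  const originalFetch = window.fetch;
  // replace the global fetch() with a variant that prefixes relative URLs
  window.fetch = (resource, init) => {
    const url = typeof resource === 'string' && resource.startsWith('/')
      ? baseUrl + resource
      : resource;
    return originalFetch(url, init);
  };
}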

That way an automated process or a human administrator can deploy the same artifact to different servers with customized settings. This approach is briefly explained in the create-react-app documentation. In addition, a server-side application could replace placeholders in the html file dynamically, e.g. with data from a configuration database.

I personally like this approach because it allows us to use the same build artifact for internal testing, staging systems and production at the client's site. It also allows the client to do some basic configuration themselves.

Cache configuration with WildFly, Infinispan, CDI and JCache

This post is about a specific problem I encountered using the WildFly application server in combination with the Infinispan cache module, CDI and the JCache API. If you don’t use this combination of technologies this post is probably not relevant or interesting to you, but I hope it will help someone who encounters the same problem.

The problem

After upgrading an application from WildFly 10 to WildFly 13 it became apparent that the settings for the Infinispan caches from the WildFly configuration file are no longer applied to the caches used by the application.

The cache settings in the WildFly configuration specify a cache container, several local caches and the object memory sizes and expiration lifespans of these caches:

<subsystem xmlns="urn:jboss:domain:infinispan:6.0">
  <cache-container name="myapp" default-cache="default" module="org.wildfly.clustering.web.infinispan" statistics-enabled="true">
    <local-cache name="default" statistics-enabled="true">
      <object-memory size="10000"/>
      <expiration lifespan="86400000"/>
    </local-cache>
    <local-cache name="foo" statistics-enabled="true">
      <object-memory size="10000"/>
      <expiration lifespan="600000"/>
    </local-cache>
  </cache-container>
</subsystem>

The cache manager is injected via CDI resource injection in a Config class as the default cache manager:

import javax.annotation.Resource;
import javax.enterprise.inject.Produces;
import org.infinispan.manager.EmbeddedCacheManager;

class Config {
    @Produces
    @Resource(lookup = "java:jboss/infinispan/container/myapp")
    private EmbeddedCacheManager defaultCacheManager;
}

The caches are used via the @CacheResult annotation from the JCache API (JSR-107):

import javax.cache.annotation.CacheResult;

class FooService {
    @CacheResult(cacheName = "foo")
    public List<Foo> getFoo(String query) {
        // ...
    }
}

With this setup the application worked and the service results were cached, but the cache settings from the configuration file were not applied, as could be seen by inspecting the MBeans of the caches via JConsole. Instead, the caches used a default configuration with an expiration lifespan of -1 (never), even though they were assigned to the cache container “myapp” as configured.

The solution

One particular answer to a similar problem description on StackOverflow was helpful in finding the solution. Each cache must be injected once via CDI resource lookup as well:

import javax.annotation.Resource;
import org.infinispan.Cache;

class Config {
    @Resource(lookup = "java:jboss/infinispan/cache/myapp/foo")
    private Cache<String, Object> fooCache;

    // ...
}

The format of the JNDI path is:

"java:jboss/infinispan/cache/${cacheContainerName}/${cacheName}"

The property itself will be unused, but the @CacheResult annotation will now use the cache with the correct configuration.