Making the backend of your React App configurable

Nowadays, the frontend and backend of a web application are usually separate parts – often implemented using different technologies – that communicate with each other via HTTP or websockets. For simplicity, smaller deployments often host both on the same web server. But there are several reasons to deploy them on different servers: load distribution, security, different environments running the same frontend against differing backends, and so on.

To allow separate deployments without changing the frontend code per deployment, we need to make the backend transparently configurable. Fortunately, this is relatively easy for a frontend written in React and set up with create-react-app. To make this fully transparent for your frontend code we need to

  1. Make the backend URL configurable
  2. Replace the fetch() function to use the configured backend
  3. Activate the setup at the start of our app

Configuring a React App

Create-react-app provides a configuration mechanism via custom environment variables in .env files. We can simply provide different .env files for our environments and configure different aspects of our application in each. In our use case this is the backend URL.

# The base URL of the backend API. Add a path prefix if the API does not run at the server root.
REACT_APP_BACKEND_API_BASE_URL=http://some.other.server:5000

Inside our React App we can reference the configured values using {process.env.REACT_APP_BACKEND_API_BASE_URL}.
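
For example, a component can render the configured value directly (BackendInfo is a hypothetical component, purely for illustration):

const BackendInfo = () => (
  <p>Talking to the backend at {process.env.REACT_APP_BACKEND_API_BASE_URL}</p>
);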

Making the use of our configured backend transparent

In a modern JavaScript app the main means of communicating with the backend is the fetch() API. To make the use of our configured backend transparent we can replace the global fetch() function with our own version like so:

// remember the original fetch-function to delegate to
const originalFetch = global.fetch;

export const applyBaseUrlToFetch = (baseUrl) => {
  // replace the global fetch() with our version where we prefix the given URL with a baseUrl
  global.fetch = (url, options) => {
    const finalUrl = baseUrl + url;
    return originalFetch(finalUrl, options);
  };
};

That way all of our fetch() calls are re-routed to the configured backend.
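
A relative request in any component then needs no knowledge of the configuration at all – for example (the /api/users path is made up for illustration):

// Actually goes to http://some.other.server:5000/api/users after applyBaseUrlToFetch()
fetch('/api/users')
  .then((response) => response.json())
  .then((users) => console.log(users));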

Activating our fetch()-customization

Now that we have all the pieces of our infrastructure in place we need to activate the changes to fetch on application startup. So we add code like below to our index.js:

// If we have a differing backend configured, replace the global fetch()
if (process.env.REACT_APP_BACKEND_API_BASE_URL !== undefined && process.env.REACT_APP_BACKEND_API_BASE_URL !== '') {
  applyBaseUrlToFetch(process.env.REACT_APP_BACKEND_API_BASE_URL);
}

Now all our calls to a relative URL will be prefixed with a configurable base and that way different backends can be used with the same application code.

Caveats

The above approach works nicely if you have exactly one backend for your app and do not fetch from other sources. If you do, you may want to expose the original fetch function as something like fetchExternal() to be able to explicitly fetch from other sources.
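
A minimal sketch of such an escape hatch, building on the originalFetch reference from above (fetchExternal is our made-up name, not a standard API):

// Expose the unmodified fetch() for requests that must not be re-routed
export const fetchExternal = (url, options) => originalFetch(url, options);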

In addition, if frontend and backend reside on different servers/sites using differing DNS names, you will have to configure CORS for your backends or your browser will refuse to make the requests!

Did Java just flip the switch?

Twenty years ago, a groundbreaking book was published: Refactoring by Martin Fowler. In this book, we learnt about 72 ways to improve our code and, even more importantly, over 20 unique signs of bad code, so-called code smells. Among these code smells were obvious ones like “Duplicated Code” and “Long Parameter List” and more specific ones like “Temporary Field” and “Switch Statements”.

Switch is the main offender

What is wrong with a Switch Statement, you ask? Well, nearly everything. Let’s review three flaws of a classic switch statement in Java on different levels:

  • Syntax: The syntax of a switch is clunky at best. Whoever thought that “fall-through” should be the default behaviour and subsequently forced millions of developers to “break” their cases is responsible for so much unnecessary extra work. Think about how a “fallthrough” statement instead of a “break” could have changed the world.
  • Code Design: Each switch statement is an inherent complexity hog, at least if you measure classic complexity metrics like McCabe’s cyclomatic complexity. Anything but the smallest switches results in complexity counts that are through the roof. And a small switch is just a syntactically bloated if/else.
  • Programming Paradigm: The reason Martin Fowler advocated against switch statements is that the alternative – using polymorphism to implicitly switch over the object type – wasn’t common knowledge 20 years ago. Switch statements were the cornerstones of explicit conditional logic and were prone to repetition, leading to duplicated code – another code smell (see the sketch after this list).
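
To illustrate the paradigm point, here is a minimal, made-up Java example: first the explicit branching over a type code, then the polymorphic replacement where dynamic dispatch does the switching implicitly.

class Shapes {
    enum ShapeType { CIRCLE, SQUARE }

    // Explicit conditional logic: every new shape forces a change here
    static double area(ShapeType type, double size) {
        switch (type) {
            case CIRCLE: return Math.PI * size * size;
            case SQUARE: return size * size;
            default: throw new IllegalArgumentException("Unknown shape: " + type);
        }
    }
}

// The polymorphic alternative: one class per former case,
// the "switch" happens implicitly via dynamic dispatch
interface Shape {
    double area();
}

class Circle implements Shape {
    private final double radius;
    Circle(double radius) { this.radius = radius; }
    public double area() { return Math.PI * radius * radius; }
}

class Square implements Shape {
    private final double side;
    Square(double side) { this.side = side; }
    public double area() { return side * side; }
}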

There are more things wrong with a classic switch statement, but the logic is clear: Take away the mainstays of explicit conditional logic and developers will adjust their approach and adopt more diverse paradigms. If you think this through, you can also argue that taking away the “else” keyword (as the Object Calisthenics do) or even the “if” statement (as advocated by the anti-if campaign) leads to even more diversity and progressive programming.

Switch in rehabilitation?

For me, a switch statement was nearly always the wrong choice for a given problem. And experienced thinkers like Martin Fowler backed my opinion, so I couldn’t be wrong – right?

In the second edition of Refactoring, published early this year, Martin Fowler changes his position towards the switch statement considerably. A single switch isn’t the gateway drug to imperative programming anymore; you’ll need “Repeated Switches” for it to count as a code smell. You can still use “Replace Conditional with Polymorphism”, but the enthusiasm about implicit conditional structures like polymorphism has faded. Martin Fowler writes that today, we all know about the different ways to express conditional logic. I’m not so sure. He also writes that many languages support more sophisticated forms of switch statements. Ok, but what about mainstream languages like Java?

My biggest problem with the classic switch statement was that it was a “single purpose” structure. It could only be used to jump to a limited number of code addresses based on a limited type of criterion. I prefer code structures that are “dual purpose” or even “multi-purpose”. When Java’s switch statement got upgraded to switch over enums (Java 5) and Strings (Java 7), it got more powerful, but still supported only one use case: explicit branching over a condition.

Switch with dual use

In the upcoming Java 12 (yes, we’ve come a long way in terms of version numbers since Java 8), the “JDK Enhancement Proposal” JEP 325 will be included: Switch Expressions. It is marked as a preview language feature, meaning it is ready for usage but open for discussion – and you’ll have to enable it explicitly. In the grand scheme of things for Java, it is a stepping stone towards JEP 305: Pattern Matching for instanceof, which will change the switch statement even further.

With Switch Expressions, you can use a switch statement to essentially inline a method that uses lots of explicit conditionals to map one value to another:

int numLetters = switch (day) {
    case MONDAY, FRIDAY, SUNDAY -> 6;
    case TUESDAY                -> 7;
    case THURSDAY, SATURDAY     -> 8;
    case WEDNESDAY              -> 9;
};
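
For contrast, here is the same mapping as a classic switch statement, which shows just how much ceremony the new form removes:

int numLetters;
switch (day) {
    case MONDAY:
    case FRIDAY:
    case SUNDAY:
        numLetters = 6;
        break;
    case TUESDAY:
        numLetters = 7;
        break;
    case THURSDAY:
    case SATURDAY:
        numLetters = 8;
        break;
    case WEDNESDAY:
        numLetters = 9;
        break;
    default:
        throw new IllegalStateException("Unexpected day: " + day);
}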

Your switch can now return a result. And with that improvement, it isn’t single purposed anymore. This is the moment when a switch statement isn’t the most clunky and error-prone way to solve a problem anymore, but maybe even elegant and straight to the point.

This is the moment I definitely change my opinion about the switch statement (in Java) and welcome it back into my solution toolbox. How could such an ugly duckling become such a beautiful swan? And why did this take us twenty years?

You can read more about the new switch statement in this brilliant blog post from Nicolai Parlog.

Anyways, the “else” keyword is now even more obsolete than ever.

Adaptive random generation of multiple outcomes

Games often want to use randomness to spice up the experience – it can be a lot more exciting to risk something when you are not entirely certain of the result. In my game abstractanks, the power-ups are generated randomly. However, when playing the game, it seemed like it was always generating the same power-ups in a row, which can be kind of frustrating. Tough luck, because this is just how uniform randomness behaves – as anyone who has ever played the board game Sorry! or the German classic Mensch ärgere dich nicht! can surely testify.

Let’s try that with C# code:

var random = new Random();
var outcomes = new []{'A', 'B', 'C', 'D', 'E', 'F'};
for (int i = 0; i < 200; ++i)
{
  // Next(n) returns values from 0 to n-1, so pass the full length
  var index = random.Next(outcomes.Length);
  Console.Write(outcomes[index]);
}

This will produce something like this:

ECDDBABCEEBBDDADEDECCAECECEADBBCCCDAEBEECDBCACAEAA
BDEACECBDDBAEDCEEAEAECDEEEECBCCEECEDCBAECCCBDCDDEA
CEAABDEEBDEAEBABABDEBDAACBECBBAACAEDEEBAECECECCBAB
BBAAEEDEDEEBCACDDEBBCBACADDDBAECBAEDACBEAEABBCAEEA

See all those strings of Cs and Es? Horrible! That does not feel random!

Games like Dota 2 deal with this by tweaking the distribution after each “roll”. Specifically, for things like Phantom Assassin’s Coup de Grace, the chance is increased slightly after each unsuccessful attempt, making the 4th or so attempt a guaranteed critical strike. But this technique only works nicely for one event in a stream of attempts. It fails to make multiple outcomes look better.
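
To make the idea concrete, here is a minimal C# sketch of such an adaptive chance (the increment of 0.25 is an assumption for illustration, not Dota 2’s actual constant). Each miss raises the chance, a hit resets it, so the per-attempt chances are 0.25, 0.5, 0.75 and finally 1.0, making the 4th attempt a guaranteed hit:

// needs: using System;
class PseudoRandomChance
{
  private readonly double increment;
  private double currentChance;

  public PseudoRandomChance(double increment)
  {
    this.increment = increment;
    currentChance = increment;
  }

  public bool Roll(Random random)
  {
    if (random.NextDouble() < currentChance)
    {
      currentChance = increment; // hit: reset to the base chance
      return true;
    }
    currentChance += increment; // miss: the next attempt is more likely
    return false;
  }
}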

For my game, I devised an algorithm with two properties:

  1. The same roll never appears twice in succession
  2. No long stretches without a specific roll

Here’s my algorithm to do that:

// needs: using System; using System.Linq; using System.Collections.Generic;
var random = new Random();
var outcomes = new []{'A', 'B', 'C', 'D', 'E', 'F'};
var chances = new []{1.0, 1.0, 1.0, 1.0, 1.0, 1.0};
for (int i = 0; i < 200; ++i)
{
  // Normalize chances so they sum up to 1.0, then build their prefix sum
  var total = chances.Sum();
  var prefixSum = chances
    .Select(x => x / total)
    .Aggregate(new List<double>(), (list, x) =>
    {
      list.Add(list.LastOrDefault() + x);
      return list;
    }).ToList();

  // Roll an outcome
  var roll = random.NextDouble();
  var pick = prefixSum.FindIndex(x => x >= roll);
  Console.Write(outcomes[pick]);

  // Now adapt the distribution by removing all chance from
  // the last pick and distributing it to the N-1 others
  var increment = chances[pick] / (chances.Length - 1);
  chances = chances.Select((x, index) =>
  {
    if (index == pick)
      return 0.0;
    else
      return x + increment;
  }).ToArray();
}

And here’s what that produces:

AEDFBCAEFDBAEDFCBDECBFCFDBEACFDECBDAEFCEBACDEAFBDC
AFEBCABFDECDAFBECFADFCEBACFDCEDBFAEDCDBDAFEDBFEABD
CFABEDFBACDEAEFBECADBAFECAFBDACECFAEBDCAFCDBAEDAFA
BDFCDEACDBFEDFCBACDAFBEADFCDEFBEACBEFDCECFABDFCDBE

Much nicer for my needs, but it still looks pretty random. Also, my statistics buddy assured me that this algorithm still guarantees equal outcome probabilities for all items when run forever. I do not know if this is a new technique, but I did not find anything like it when I was looking for it.
Let me know if this is of any use to you.

Object slicing with Grails and GORM

Some may know the problem called object slicing that occurs when passing or assigning polymorphic objects by value in C++. The issue is not limited to C++, as we recently experienced in one of our web applications based on Grails. If you are curious, just stay awhile and listen…

Our setting

Some of our domain entities use inheritance, and their containing entities determine what to do using some properties. You may call that bad design, but for now let us take it as it is and show some code to clarify the situation:

@Entity
class Container {
  A a
  boolean hasActuallyB = false

  def doSomething() {
    if (hasActuallyB) {
      return a.bMethod()
    }
    return a.something()
  }
}

@Entity
class A {

  def something() {
    return 'Something A does'
  }
}

@Entity
class B extends A {

  def bMethod() {
    return 'Something only B can do'
  }
}

class ContainerController {

  def save = {
    new Container(a: new B(), hasActuallyB: true).save()
  }

  def show = {
    def container = Container.get(params.id)
    [result: container.doSomething()]
  }
}

Such code worked for us without problems until we upgraded to Grails 3. Suddenly we got exceptions like:

2019-02-18 17:03:43.370 ERROR --- [nio-8080-exec-1] o.g.web.errors.GrailsExceptionResolver   : MissingMethodException occurred when processing request: [GET] /container/show
No signature of method: A.bMethod() is applicable for argument types: () values: []. Stacktrace follows:

Caused by: groovy.lang.MissingMethodException: No signature of method: A.bMethod() is applicable for argument types: () values: []
at Container.doSomething(Container.groovy:123)

Debugging showed our assumptions and checks were still true and the Container member was saved correctly as a B. Still, the Groovy method call using duck typing did not work…

What is happening here?

Since the domain entities are persistent objects mapped by GORM and (in our case) Hibernate, they do not always behave like your average POGO (plain old Groovy object). When fetched from the database, they may in reality be Javassist proxy instances. These proxies are set up to respond to the declared type of the member, not its actual type! Clearly, an A does not respond to bMethod().

A workaround

Ok, the class hierarchy is not that great but we cannot rewrite everything. So what now?

Fortunately there is a workaround: You can explicitly unwrap the proxy object using GrailsHibernateUtil.unwrapIfProxy() to get a real instance of B, and your Groovy duck typing and polymorphic calls work as expected again.
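
Applied to the Container from above, the workaround might look like this (note that GrailsHibernateUtil’s package has moved between Grails versions, so check the import for your version):

import org.grails.orm.hibernate.cfg.GrailsHibernateUtil

def doSomething() {
  // unwrap the potential Javassist proxy to get the actual instance
  def realA = GrailsHibernateUtil.unwrapIfProxy(a)
  if (hasActuallyB) {
    return realA.bMethod()
  }
  return realA.something()
}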

Makeup on a zombie – Java Swing UX improvements

When I learned Java programming in 1997, the AWT classes were the default way to create graphical user interfaces. The AWT widgets were not very sophisticated and really ugly, so it is no surprise they were replaced by a new widget toolkit, called “Swing”, as soon as possible. At the end of 1998, the Swing graphical API was the default way to develop GUIs for desktop applications on the Java platform.

Today, twenty years later, the Swing API is still part of the Java core SDK and ready for your adventures in GUI creation. But time has taken a toll on the technology. The widgets, once displayed with a state-of-the-art design, look really outdated. Swing introduced the concept of pluggable “Look-and-Feels” (L&F), so you could essentially re-skin your interface with a few lines of code, but all L&Fs look ugly and feel cumbersome now. You can say that Java Swing is a zombie: It is still available and in use in its latest development state, but makes no progress with regard to improvements. If software development follows one rule, it is that software that isn’t actively developed anymore is dead.

My personal date when Java Swing died was the day Chet Haase (author of the Java Swing book “Filthy Rich Clients”) left Sun Microsystems to work for Adobe. That was in 2008. The technology has received several important updates since then, but soon after, JavaFX got on the stage (and left it, and went back on, left it again, and is now an optional download for the Java SDK). Desktop GUIs are even more dead than Java Swing, because “mobile first” and “web second” don’t leave much room for “desktop third”. Consequently, JavaFX will not receive support from Oracle after 2022.

But there are still plenty of desktop applications and they won’t go away anytime soon. There is a valid use case for a locally installed program with a graphical user interface on a physical computer. And there are still lots of “legacy systems” that need maintenance and improvements. Most of them are entangled with their UI toolkit of choice – a choice made before 2007, when “mobile first” wasn’t even available as an option.

Because those legacy systems still exist and are used, their users want to experience the look and feel of today’s applications. And this is where the fun begins: You apply makeup on a zombie to let it appear a little bit less ugly than it really is.

Recently, my task was to improve the keyboard handling of a Java Swing desktop application. It was surprisingly easy to add a tad of modern “feel”, and this gives me hope that the zombie might stay semi-alive longer than I thought. As you might already have guessed, StackOverflow is a goldmine for answers on ancient technology. Here are my first few improvements and their respective answer on StackOverflow:

  • Let’s suppose you want or need to interact with your application without a mouse or touchscreen. Your first attempt to start an interaction is to press the “menu” key in order to activate the application menu. This would be the “Alt” key on a Windows system. For modern applications, your input focus is now at the menu bar. In Java Swing applications, nothing happens. You have to press “Alt” and a mnemonic character to enter a specific menu. If you want to reduce the initial hurdle to just one key, you need to teach all your Java Swing menus to react to the “Alt” key alone: https://stackoverflow.com/a/8659116
  • Speaking of focus, in modern applications you can move your focus by using the arrow keys. Java Swing still thinks that “Tab” and “Shift+Tab” is the pinnacle of focus control. If you want to improve the behavior (and therefore the “feel”) of your focus traversal, you can do it globally for your application: https://stackoverflow.com/a/8255423
  • And if you want to enable the Return/Enter key for button activation, you can do it with just one line: https://stackoverflow.com/a/440536 – a sketch follows after this list.
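
As an impression of how little code such a tweak takes, here is a sketch of the Enter-key binding (one common way to do it; KeyboardTweaks is a made-up holder class, the linked KeyboardUX class bundles this and more):

import javax.swing.InputMap;
import javax.swing.KeyStroke;
import javax.swing.UIManager;

public class KeyboardTweaks {
    public static void enableEnterForButtons() {
        // Let ENTER trigger the same pressed/released actions as SPACE
        // on the currently focused button
        InputMap inputMap = (InputMap) UIManager.get("Button.focusInputMap");
        inputMap.put(KeyStroke.getKeyStroke("ENTER"), "pressed");
        inputMap.put(KeyStroke.getKeyStroke("released ENTER"), "released");
    }
}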

If you happen to work on a Java Swing application and want some cheap user experience upgrades, I’ve assembled all the knowledge above into a neat little class that you can use as an add-on utility class: https://github.com/dlindner/java-swing-ux/blob/master/src/com/schneide/swing/ux/KeyboardUX.java

What are your makeup tips for zombies?

A programmer’s report on the Global Game Jam

Last weekend was this year’s Global Game Jam. A game jam is a type of event where the participants – the “jammers” – create a game in a fixed, usually quite short, amount of time, often under additional restrictions like a theme. Usually this means computer games, although you can also make board or pen-and-paper games.

A worldwide event

As the name implies, the Global Game Jam is a worldwide event. Most bigger cities have “hubs”, i.e. jam locations, where the jammers gather at the beginning of the weekend in their local time zone. Then a few motivating keynote videos are shown before the theme is announced. Last year’s theme was “Transmission”, while this year’s theme was “What Home means to you”. A game using that theme has to be created within the next 48 hours.

The pitch

People get a few minutes to brainstorm about the theme and maybe come up with an idea they want to implement. There is a small stage area where people with such ideas pitch them to the others in hopes of getting a team assembled. Last year, a guy even made a short PowerPoint presentation to pitch his idea.
While you can make a game on your own, that is pretty hard. You typically need at least programming, graphics, audio, music, project management and game design expertise on your team. It is rare to find all that in a single person.

Implementation

So if you have a nice idea, or join a team with one, the implementation work starts.

Last year, my team quickly abandoned the idea of using Unity as the engine for our terra-forming game, since no one on the team had any experience with it. We switched to Love2D, a neat little framework based on Lua. It is very lightweight and Lua is an elegant language. Given that our team had no dedicated graphics people, I am pretty proud of the little game that we finished: Terraformer

This year, I joined another team and we picked the Godot engine to create our game. Godot comes with its own programming language called “GDScript”. The syntax looks like it borrows heavily from both Python and JavaScript. It is, however, quite pleasant to work with, except that most of the higher-order functional features that are so common in most languages seem to be missing. But it does have coroutines, so there’s that.
Either way, working with Godot was mostly pretty pleasant, except for its interaction with git. Just opening specific parts of the game in the Godot editor would change files, and sometimes we’d get huge nonsensical changes and merge conflicts when someone on another platform (e.g. on Mac instead of Windows) pushed something.
We ended up making this little game: Home God
I’m particularly proud of the character movement controls. I think they turned out quite fluid.

Not just for game programmers…

The Global Game Jam is an awesome experience for non-game programmers looking to improve their craft. In fact, most participants in my hub were at most hobbyists, from what I could tell. The tight time budget and the objective of just getting it to work give many programmers a totally different perspective on how to do things. Things that steal your time will hurt a lot.

Unexpected RESTEasy application upgrade surprise

The setting

A few months ago we got to maintain a RESTEasy application running in a Wildfly 10 container. The application uses RESTEasy as both server and client and contains a few custom interceptors and providers.

Now our client wants to move on to Wildfly 13 as the deployment target. Most of the application works out of the box or just needs some dependency upgrades in the new container, but some critical parts, like the REST client requests, stopped working.

The investigation

After some digging through the error messages it became clear that our interceptors and providers were not called anymore. What had changed? Wildfly 13 comes with RESTEasy 3.5.1, while we were using 3.0 in Wildfly 10. Looking at the upgrade documentation leaves us puzzled though:

RESTEasy 3.5 series is a spin-off of the old RESTEasy 3.0 series, featuring JAX-RS 2.1 implementation.

The reason why 3.5 comes from 3.0 instead of the 3.1 / 4.0 development streams is basically providing users with a selection of RESTEasy 4 critical / strategic new features, while ensuring full backward compatibility. As a consequence, no major issues are expected when upgrading RESTEasy from 3.0.x to 3.5.x.

We are using the standard classpath scanning method, which discovers annotated RESTEasy classes and registers them for the application. Trying to register them explicitly in the application yielded the message that our providers were already registered:

RESTEASY002155: Provider class mypackage.MyProvider is already registered. 2nd registration is being ignored.

Scanning and registration seemed to work just fine. So what was happening here?

The resolution

After a bit more investigation we realized the issue was on the client side only! In Wildfly 10/RESTEasy 3.0 the providers were automatically registered for the client, too. This is no longer the case in Wildfly 13/RESTEasy 3.5! You have to register them with the client yourself, either on the ResteasyClientBuilder or on the ResteasyClient you are using, as mentioned in the documentation:

// Activate gzip compression on the client:
Client client = new ResteasyClientBuilder()
                    .register(AcceptEncodingGZIPFilter.class)
                    .register(GZIPDecodingInterceptor.class)
                    .register(GZIPEncodingInterceptor.class)
                    .build();
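
Registering our own classes works the same way; a minimal sketch using the provider from the log message above:

// Explicitly register a custom provider for client-side use
Client client = new ResteasyClientBuilder()
                    .register(MyProvider.class)
                    .build();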

This subtle change in (undocumented?) behaviour took several hours to debug. Nevertheless, we actually like the change, because we prefer doing things explicitly instead of relying on some magic. So now it is clear which interceptors and providers our REST client is using.