# State Management for emotionally overwhelmed React rookies


The topic of React state management is nowhere near new. Just to be clear about what we’re talking about here: consider a multitude of components which, in nice React fashion, are finely interleaved into each other, each one with a single responsibility, each one holding only as much information as it needs. Depending on the complexity of your application, there can be a lot of intricate dependencies, where one small component somewhere causes another small component totally-elsewhere to update (re-render), without the two having to know much about each other, because we strive for low coupling. In front-end development, this is done not only for the sake of “cleaner code”, but also to avoid the performance problem of re-rendering things that are not actually changing.

So, just a few months back, a new competitor appeared in the React state management arena: Recoil, open-sourced by Facebook. The older top dog in this field is the widely used Redux, with smaller libraries like MobX offering an alternative way of managing state in smaller applications; and when React 16.3 introduced the Context API as a new standard, it officially advanced quite a step toward a first-party React answer to these questions.

There’s probably not a single web developer on earth who wouldn’t agree that one of the most fun…fundamental challenges in our field is the effort of staying afloat on top of the turbulent JavaScript-osphere. If you are the type of person who doesn’t want to jump on every bandwagon, but still doesn’t want to miss out on all the amazing opportunities that all this progress could give you, you’d better start a bunch of side projects (call them “recreational” if you like) and give yourself a chance to dive into particular technologies with confined scope, for some research time.

This is what I’ve done now, trying to focus completely on the issues an ambitious developer can experience when faced with all these choices. This is what I want to outline for you, because, as usual: if you have lots of time to study a single technology, you can succeed in spite of many limitations, you get used to doing certain things you originally didn’t want to do, and so on and so on.

With Redux, nobody really seems to say a lot of bad things about it, and there even are some Mariuses who seem to be absolutely in love with the official Redux documentation, which is actually more of a guide to time-tested best practices, giving you the opportunity to do things right and have a scalable state container that supports you even as your application grows to large dimensions. Then there’s stuff like the time-travelling state debugger and the flexible middleware integration, which I haven’t even touched yet. When your project has a number of unrelated data structures, the Ducks pattern advises you to organize your required reducers, actions and action creators in coherently arranged files. This, however, turned complicated in my one project in which the types of data objects aren’t that unrelated; I had to remove all the combineReducers() logic and ended up with one large global state object. I now have one source file that consists of everything Redux-related, and for my purpose this seems fine, but I still have to write a rather cumbersome connect(mapStateToProps, mapDispatchToProps) structure in every component in which I want to access the state. I would prefer smaller state containers, but maybe it’s the structure of my project that makes these complicated.

It really is that way: due to the ever-changing recommendations that come with the evolution of React, the question of how to do things best (read: best for your specific purpose) always stays fresh. Since React 16.8 and the arrival of Hooks, there has been a progression towards less boilerplate code, favoring functional components with a leaner appearance. In this spirit, I strove for something less Redux-y. E.g. if I want some text in my state to be settable, I would have to do something like:

```// ./ducks/TextDucks.js
// avoid having to rely on a magical string, therefore save the string to a constant
const SET_TEXT = 'SET_TEXT';

// action creator
export const setTextCreator = (text) => ({type: SET_TEXT, payload: {text}});

const Reducer = (state = initialState, {type, payload}) => {
    // ... other state stuff
    if (type === SET_TEXT) {
        return {...state, text: payload.text};
    }
    return state;
};

// Component file
import {setTextCreator} from './ducks/TextDucks.js';
const mapDispatchToProps = (dispatch) => ({
    setText: text => dispatch(setTextCreator(text)),
});
const Component = ({setText, ...props}) => {
    // here I can actually use setText()
};
export default connect(..., mapDispatchToProps)(Component);
```

Which is more organized than passing some setText('') function down through my whole component tree, but it still feels like a bit of overhead.

Then there was MobX, which seemed quite lightweight and clearly laid out, with a coherent use of the Observable pattern, implemented with its own JavaScript decorators that follow a simple structure. Still, the look and feel of this code differs quite a lot from my usual coding style, which kept me from actually using it. This decision was reinforced by certain statements online which, some years ago, predicted that the advancement of React’s own Context API would make any third-party library redundant. Now, to be fair, React’s current Context API, with useReducer() and useContext(), already makes it possible to imitate a Redux-like structure, but consider the ways of thinking: if you write your code in the same style as you would following Redux’s recommendations, why not use Redux directly? Clearly, the point of avoiding Redux should be to think differently.

The Context API actually supplies the underlying structure on which Redux’s own <Provider> builds, so it is not a big surprise that you can accomplish similar things with it. Using the Context API, you wrap your whole application like this:

```// myContext.js
import React from "react";
const TextContext = React.createContext();
export default TextContext;

// App.js
import React from 'react';
import TextContext from './myContext';
const App = () => {
    const [text, setText] = React.useState("initial text");
    return (
        <TextContext.Provider value={[text, setText]}>
            {/* actual app components here */}
        </TextContext.Provider>
    );
};

// some subComponent.js
import React from 'react';
import TextContext from './myContext';
const SubComponent = (props) => {
    const [text, setText] = React.useContext(TextContext);
    // now use setText() as you would with a local React useState dispatch function...
};
```

Personally, I found that arrangement a bit clearer than the Redux structure, especially if you’re not used to Redux’s way of thinking anyway. However, if your state grows beyond just a text, you would either keep the state information in one large object again, or wrap your <App/> in a ton of different Contexts, which I personally disdained already when I had just three different global state variables to manage.

Having all these possibilities at hand, one might wonder why Facebook felt the need to implement a new way of state management with Recoil. Recoil is still in its experimental phase; however, it didn’t take long to find one aspect very promising: the coding style of managing state in Recoil feels a lot like writing React code itself, which makes it very smooth to operate, as you don’t have to treat global state much differently than local state. Our example would look like this:

```// textState.js
import * as Recoil from 'recoil';
export const text = Recoil.atom({key: 'someUniqueKey', default: 'initial text'});

// App.js
import {RecoilRoot} from 'recoil';
const App = () => <RecoilRoot>{/* here the actual app components */}</RecoilRoot>;

// some Component.js
import * as Recoil from 'recoil';
import * as TextState from './textState';
const Component = () => {
    const [text, setText] = Recoil.useRecoilState(TextState.text);
    // from then on, you can use setText() like you would a React useState dispatch function
};
```

Even simpler, with Recoil you directly have access to the useRecoilValue() function to just read the text value, and the useSetRecoilState() function to just dispatch a new text. This avoids the complication of having to re-think your treatment of whatever-in-your-state-is-global differently from what is local. Your <App/> component doesn’t grow to ugly levels of indentation, and you can neatly organize everything state-related in a separate file.

As someone who considers himself quite eager to learn new technologies, but also wants to see results quickly without first having to acquire a lot of fresh basic understanding, I have to admit I had the most fun trying out Recoil in my projects. However, I believe the demise of Redux is not closing in any time soon, due to its focus on sustainability. For the future, I aim to see my one Recoil project grow, and I’ll keep you updated on how well that goes…

# Be precise, round twice

Recently, after implementing a new feature in a piece of software that outputs lots of floating point numbers, I realized that the last digit was off by one for about one in a hundred numbers. As you might suspect at this point, the culprit was floating point arithmetic. This post is about a solution that turned out to be surprisingly easy.

The code I was working on loads a couple of thousand numbers from a database, stores them all as doubles, does some calculations with them and outputs some results rounded half-up to two decimal places. The new feature I had to implement involved adding constants to those numbers. For one value, 0.315, the constant in one of my test cases was 0.80. The original output was “0.32” and I expected to see “1.12” as the new rounded result, but what I saw instead was “1.11”.

## What happened?

After the fact, nothing too surprising – I had just hit decimals which do not have a finite representation as a binary floating point number. Let me explain, in case you are not familiar with this phenomenon: 1/3 happens to be a fraction which does not have a finite representation as a decimal:

1/3=0.333333333333…

Whether a fraction has a finite representation depends not only on the fraction, but also on the base of your number system. And so it happens that some innocent-looking decimal like 0.8=4/5 has the following representation in base 2:

4/5=0.1100110011001100… (base 2)

So if you represent 4/5 as a double, it will turn out to be slightly off. In my example, neither of the two numbers, 0.315 and 0.8, has a finite binary representation, and with those errors their sum turns out to be slightly less than 1.115, which yields “1.11” after rounding. On a very rough count, this problem appeared for about one in a hundred numbers in my output.
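The effect is easy to reproduce. Here is a minimal sketch (class name is mine) that uses `BigDecimal` only to display the exact value stored in the double:

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

public class RoundingSurprise {
    public static void main(String[] args) {
        double sum = 0.8d + 0.315d;
        // new BigDecimal(double) shows the exact binary value of the double,
        // which is slightly below the mathematically exact 1.115
        System.out.println(new BigDecimal(sum));
        // half-up rounding to two decimal places therefore rounds down
        System.out.println(new BigDecimal(sum).setScale(2, RoundingMode.HALF_UP)); // 1.11
    }
}
```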

## What now?

The customer decided that the problem should be fixed if it appeared too often and did not take too much time to fix. When I started to think about some automated way to count the mistakes, I began to realize that I actually had all the information I needed to compute the correct output – I just had to round twice: once, say, at the fourth decimal place, and a second time at the required second decimal place:

```(new BigDecimal(0.8d+0.315d))
.setScale(4, RoundingMode.HALF_UP)
.setScale(2, RoundingMode.HALF_UP)
```

Which produces the desired result “1.12”.

If doubles are used, the errors explained above can only make a difference of about $10^{-15}$, so as long as we just add a double to a number with a short decimal representation while staying in the same order of magnitude, we can reproduce the precise numbers from doubles by setting the scale (which amounts to rounding) of our double as a BigDecimal.

But of course, this can go wrong if we use numbers that, unlike 0.315, do not have a short, neat decimal representation. In my case, I was lucky: first, I knew that all the input numbers have a precision of three decimal places. There are some calculations to be done with those numbers, but all numbers are roughly in the same order of magnitude, and there is only comparing, sorting and filtering – the only honest calculation is taking arithmetic means. And the latter only meant I had to increase the scale from 4 to 8 to never see any error again.
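The same trick carries over to the means: the tiny errors described above are absorbed by the first, finer rounding before the final one. A sketch with made-up numbers (not the original data):

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

public class MeanRounding {
    public static void main(String[] args) {
        // inputs with three decimal places, stored as doubles
        double mean = (0.315 + 0.317 + 0.316) / 3.0; // exactly 0.316, up to double error
        BigDecimal rounded = new BigDecimal(mean)
            .setScale(8, RoundingMode.HALF_UP) // absorbs the tiny representation error
            .setScale(2, RoundingMode.HALF_UP);
        System.out.println(rounded); // 0.32
    }
}
```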

So this solution might look a bit sketchy, but in the end it solves the problem within the limited time budget, since the only change happens in the output function. It can also be a valid first step of a migration to numbers with managed precision.

# A Code Centric World

In mainstream OOP, polymorphism is achieved via virtual functions. To reuse some code, you simply need one implementation of a specific “virtual” interface. Bigger programs are composed of some functions calling other functions calling yet other functions. Virtual functions introduce a flexibility here that allows parts of the call tree to be replaced, letting calling functions be reused by running on different, but homogeneous, callees. This is a very “code centric” view of a program. The data is merely used as context for functions calling each other.

# Duality

Let us, for the moment, assume that all the functions and objects such a program runs on are pure: they never have any side effects and communicate solely via parameters and return values. Now that’s not traditional OOP, but rather a functional-programming way of doing things – yet it is surely possible to structure (at least large parts of) traditional OOP programs that way. This premise helps in understanding how data-oriented design is in fact dual to the traditional “code centric” view of a program: instead of looking at the functions calling each other, we can also look at how the data is transformed by each step in the program, because that is exactly what goes into, and comes out of, each function. IS-A becomes “produces/consumes compatible data”.

# Cooking without functions

I am using C# in the example because LINQ, or any nice map/reduce implementation, makes this really straightforward. But the principle applies to many languages. I have been using the technique in C++, C#, Java and even dBase.
Let’s say we have a recipe of sorts that has a few ingredients encoded in a simple class:

```class Ingredient
{
    public string Name { get; set; }
    public decimal Amount { get; set; }
}
```

We store them in a simple `List` and have a nice function that can compute the percentage of each ingredient:

```public static IReadOnlyList<(string, decimal)>
    Percentages(IEnumerable<Ingredient> ingredients)
{
    var sum = ingredients.Sum(x => x.Amount);
    return ingredients
        .Select(x => (x.Name, x.Amount / sum))
        .ToList();
}
```

Now things change, and just to make it difficult, we need a new ingredient type that is just a little more complicated:

```class IngredientInfo
{
    public string Name { get; set; }
    /* other useful stuff */
}

class ComplicatedIngredient
{
    public IngredientInfo Info { get; set; }
    public decimal Amount { get; set; }
}
```

And we definitely want to use the old, simple one, as well. But we need our percentage function to work for recipes that have both `Ingredient`s and also `ComplicatedIngredient`s. Now the go-to OOP approach would be to introduce a common interface that is implemented by both classes, like this:

```interface IIngredient
{
    string GetName();
    decimal GetAmount();
}
```

That is trivial to implement for both classes, but it adds quite a bunch of boilerplate, just about doubling the size of our program. Then we just replace `IReadOnlyList<Ingredient>` with `IReadOnlyList<IIngredient>` in the `Percentages` function. That last bit violates the Open/Closed principle, but only because we did not use the interface right away (who thought YAGNI was a good idea?). Also, the new interface is quite the opposite of the Tell, don’t ask principle, but there’s no easy way around that, because the `Percentages` function only has meaning on a `List<>` of them.
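For illustration, the two implementations might look like this – a sketch the post leaves out, assuming `GetAmount()` returns `decimal`, since `Amount` is a `decimal`:

```csharp
class Ingredient : IIngredient
{
    public string Name { get; set; }
    public decimal Amount { get; set; }

    // pure boilerplate: forward the properties through the interface
    public string GetName() => Name;
    public decimal GetAmount() => Amount;
}

class ComplicatedIngredient : IIngredient
{
    public IngredientInfo Info { get; set; }
    public decimal Amount { get; set; }

    public string GetName() => Info.Name;
    public decimal GetAmount() => Amount;
}
```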

# Cooking with data

But what if we just use data as the interface? In this case, it so happens that we can easily turn a `ComplicatedIngredient` into an `Ingredient` for our purposes. In C#’s LINQ, a simple Select() will do nicely:

```var simplified = complicated
    .Select(x => new Ingredient
    {
        Name = x.Info.Name,
        Amount = x.Amount
    });
```

Now that can easily be passed into the `Percentages` function, without even touching it. Great!

In this case, one object could neatly be converted into the other, which is often not the case in practice. However, there’s often a “common denominator class” that can be found in pretty much the same way as extracting a common interface would: just look at the info you could retrieve from that imaginary interface. In this case, it is the same as the original `Ingredient` class.

# Further thoughts

To apply this, you sometimes have to restructure your programs a little bit, which often means going wide instead of deep. For example, you might have to convert your data to a homogeneous form in a preprocessing step instead of accessing different objects homogeneously directly in your algorithms, or use postprocessing afterwards.
In languages like C++, this can even net you a huge performance win, which is often cited as the greatest benefit of data-oriented design. But, first and foremost, I find that it leads to programs that are easier to understand for both machine and people. I have found myself using this data-centric form of code reuse a lot more lately.

Are you using something like this as well or are you still firmly on the override train, and why? Tell me in the comments!

# Updating Grails 3.3.x to 4.0.x

We have a long history of maintaining a fairly large Grails application, which we took from Grails 1.0 to 4.0. We sometimes decided to skip intermediate releases of the framework because of problems or missing incentives to upgrade. If you are interested in our past experiences, feel free to have a look at our stories.

This is the next installment of our journey to the latest and greatest version of the Grails framework. This time the changes do not seem as intimidating as going from 2.x to 3.x. There are fewer moving parts, at least from the perspective of an application developer, where almost everything stayed the same (Gradle build system, YAML configuration, Geb functional tests etc.). Under the hood there are of course some bigger changes, like new major versions of GORM/Hibernate and Spring Boot and the switch to Micronaut as the parent application context.

# The hurdles we faced

• For historical reasons our application uses flush mode “auto”. This still does not work today, see https://github.com/grails/grails-core/issues/11376
• The most work-intensive change is that Hibernate 5 requires you to perform your work in transactions. So we have dozens of places where we need to add missing `@Transactional` annotations, especially to make saving domain objects work. Therefore we essentially have to test the whole application.
• The handling of HibernateProxies again became more opaque, which led to numerous `IllegalArgumentException`s (“object is not an instance of declaring class”). Sometimes we could move from generated `hashCode()/equals()` implementations to the Groovy annotation `@EqualsAndHashCode` (actually a good thing), whereas in other places we did manual unwrapping or switched to eager fetching to avoid these problems.

In addition we faced minor gotchas like changed configuration entries. The one that cost us some hours was the subtle change of `server.contextPath` to `server.servlet.context-path`, but nothing major or blocking.

We also had to transform many of our unit and integration tests to Spock and the new Grails Testing Support framework. It makes the tests more readable anyway and feels more fruitful than trying to debug the old-style Grails Test Mixin based tests.

# Improvements

One major improvement for us in the Grails ecosystem is the good news that the Shiro plugin is again officially available, maintained and cleaned up (see https://github.com/nerdErg/grails-shiro). Now we do not need to use our own poor man’s port anymore.

# Open questions

Regarding the proclaimed performance improvements and reduced memory consumption, we do not have final numbers or impressions yet. We will deliver results on this front in the future.

More important is an inconvenience we are still facing regarding hot code reloading: it does not work for us, even using OpenJDK 8 with the old spring-loaded mechanism. The new restart style of Micronaut/Spring Boot is not really productive for us, because the startup times can easily reach the minute range even on fast hardware.

# Pro-Tip

My hottest advice for you is this one:

Create a fresh Grails 4 app and compare central files like `application.yml` and `build.gradle` to get up to date with the state of the art.

# Conclusion

While this upgrade was still a lot of work and meant many places had to be touched, it was a lot smoother than many of the previous ones. We hope that things improve further in the future, as the technological stack is up-to-date and much more mature than in the early days…

# Getting started with exact arithmetic and F#

In this blog post, I claimed that some exact arithmetic beyond rational numbers can be implemented on a computer. Today I want to show you how that might be done, by showing you the beginning of my implementation. I chose F# for the task, since I had been waiting for an opportunity to check it out anyway. So this post is a more practical (first) follow-up to the more theoretical one linked above, with some of my F# development experiences on the side.

F# turned out to be mostly pleasant to use. The only annoying thing that happened to me along the way was some weirdness of F#, or of the otherwise very helpful IDE Rider: F# seems to need a compilation order of the source code files, and I only found out by acts of desperation that this order is supposed to be controlled by drag & drop.

The code I want to (partially) explain is available on github:

https://github.com/felixwellen/ExactArithmetic

I will link to the current commit when I discuss specific sections below.

## Prerequisite: Rational numbers and Polynomials

As explained in the ‘theory post’, polynomials will be the basic ingredient for cooking up more exact numbers from the rationals. The rationals themselves can be built from ‘BigInteger’s (source). The basic arithmetic operations follow the rules commonly taught in school (here is addition):

```static member (+) (l: Rational, r: Rational) =
Rational(l.up * r.down + r.up * l.down,
l.down * r.down)
```

‘up’ and ‘down’ are ‘BigInteger’s representing the numerator and denominator of the rational number. ‘-’, ‘*’ and ‘/’ are defined in the same style and extended to polynomials with rational coefficients (source).

There are two things important for this post that polynomials have and rationals do not: degrees and remainders. The degree of a polynomial is just the number of its coefficients minus one, unless it is the constant zero polynomial. The zero polynomial has degree -1 in my code, but the specific value is not too important – it just needs to be smaller than all other degrees.

Remainders are a bit more work to calculate. For two polynomials P and Q, where Q is not zero, there is always a unique polynomial R of smaller degree than Q such that

P = Q * D + R

for some polynomial D (the algorithm is here). For example, dividing P = X³ by Q = X² - 2 gives X³ = (X² - 2) · X + 2X, so D = X and the remainder R = 2X indeed has smaller degree than Q.

## Numberfields and examples

The ingredients are put together in the type ‘NumberField’, which is the name used in algebra, so it is precisely what is described here. Yet it is far from obvious that this is the ‘same’ thing as in my example code.

One source of confusion with this approach to exact arithmetic is that we do not know which solution of a polynomial equation we are using. In the example with the square root, the solutions only differ in sign, but things can get more complicated. This ambiguity is also the reason you will not find a function in my code that approximates the elements of a number field by decimal numbers. In order to do that, we would have to choose a particular solution first.

Now, in the form of unit tests, we can look at a very basic example of a number field: the one from the theory post containing a solution of the equation X²=2:

```let TwoAsPolynomial = Polynomial([|Rational(2,1)|])
let ModulusForSquareRootOfTwo =
Polynomial.Power(Polynomial.X,2) - TwoAsPolynomial
let E = NumberField(ModulusForSquareRootOfTwo)
let TwoAsNumberFieldElement = NumberFieldElement(E, TwoAsPolynomial)

[<Fact>]
let ``the abstract solution is a solution of the given equation``() =
let e = E.Solution in  (* e is a solution of the equation 'X^2-2=0' *)
Assert.Equal(E.Zero, e * e - TwoAsNumberFieldElement)
```

There are applications of these numbers which have no obvious relation to square roots. For example, there are number fields containing roots of unity, which would allow us to calculate with rotations in the plane by rational fractions of a full rotation. This might be the topic of a follow-up post…

# C++ pass-thru parameters

So in ye olde days, before C++11 and move semantics, it was common for functions to use mutable references to pass container-content to the caller, like this:

```void random_between(std::vector<int>& out,
    int left, int right, std::size_t N)
{
    // rng is assumed to be some std::mt19937 (or similar) defined elsewhere
    std::uniform_int_distribution<>
        distribution(left, right);
    for (std::size_t i = 0; i < N; ++i)
        out.push_back(distribution(rng));
}
```

and you would often use it like this:

```std::vector<int> numbers;
random_between(numbers, 7, 42, 10);
```

Basically trading expressiveness and convenience for speed/efficiency.

# Convenience is king

Now obviously, those days are over. With move-semantics and guaranteed copy-elision backing us up, it is usually fine to just return the filled container, like this:

```std::vector<int> random_between(int left, int right,
std::size_t N)
{
std::vector<int> out;
std::uniform_int_distribution<>
distribution(left, right);
for (std::size_t i = 0; i < N; ++i)
out.push_back(distribution(rng));
return out;
}
```

Now you no longer have to initialize the container to use this function and the function also became pure, clearly differentiating between its inputs and outputs.

# Mostly better?

However, there is a downside: Before, the function could be used to append multiple runs into the same container, like this:

```std::vector<int> numbers;
for (int i = 0; i < 5; ++i)
random_between(numbers, 50*i + 7, 50*i + 42, 10);
```

That use case suddenly became a lot harder. Also, what if you want to keep your vector around and just `.clear()` it before calling the function again later, to save allocations? That’s also no longer possible. I am not saying that these two use cases should make you prefer the old variant, as they tend not to happen very often. But when they do, it’s all the more annoying. So what if we could have our cake and eat it, too?

# A Compromise

```std::vector<int> random_between(int left, int right,
std::size_t N, std::vector<int> out = {})
{
std::uniform_int_distribution<>
distribution(left, right);
for (std::size_t i = 0; i < N; ++i)
out.push_back(distribution(rng));
return out;
}
```

Now you can use it to just append again:

```std::vector<int> numbers;
for (int i = 0; i < 5; ++i)
numbers = random_between(
50*i + 7, 50*i + 42, 10, std::move(numbers));
```

But you can also use it in the straightforward way, for the hopefully more common case:

```auto numbers = random_between(
50*i + 7, 50*i + 42, 10);
```

Now you should definitely not do this with all your functions returning a container. But it is a nice pattern to have up your sleeve when the need arises. It should be noted that passing a mutable reference can still be faster in some cases, as that will save you two moves. And you can also add a container-returning facade variant as an overload. But I think this pattern is a very nice compromise that can be implemented by moving a single variable to the parameter list and defaulting it. It keeps 99% of the use cases identical to the original container-returning variant, while making the “append” use case slightly more verbose, but also more expressive.
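For completeness, here is a hedged, self-contained sketch of the whole pattern. Unlike the snippets above, it passes the RNG in explicitly instead of using a global, so everything is defined in one place:

```cpp
#include <cstddef>
#include <random>
#include <utility>
#include <vector>

// The compromise pattern: the output container is a defaulted by-value
// parameter, so callers can either ignore it or move their own vector in.
std::vector<int> random_between(std::mt19937& rng, int left, int right,
                                std::size_t N, std::vector<int> out = {})
{
    std::uniform_int_distribution<> distribution(left, right);
    for (std::size_t i = 0; i < N; ++i)
        out.push_back(distribution(rng));
    return out; // moved (or elided) back to the caller
}
```

Appending then works by moving the container through the call, e.g. `numbers = random_between(rng, 7, 42, 10, std::move(numbers));`, while the common case stays a plain call with four arguments.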

# The Java Cache API and Custom Key Generators

The Java Cache API allows you to add a `@CacheResult` annotation to a method, which means that calls to the method will be cached:

```import javax.cache.annotation.CacheResult;

@CacheResult
public String exampleMethod(String a, int b) {
// ...
}
```

The cache will be looked up before the annotated method executes. If a value is found in the cache it is returned and the annotated method is never actually executed.

The cache lookup is based on the method parameters. By default a cache key is generated by a key generator that uses `Arrays.deepHashCode(Object[])` and `Arrays.deepEquals(Object[], Object[])` on the method parameters. The cache lookup based on this key is similar to a HashMap lookup.
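To illustrate these semantics, here is a hypothetical sketch (not the actual class of any cache provider) of a key with deep, parameter-based hashing and equality:

```java
import java.util.Arrays;

// Sketch of how the default generated key behaves: hashing and equality
// are deep, over all method parameters.
final class SketchedCacheKey {
    private final Object[] parameters;

    SketchedCacheKey(Object... parameters) {
        this.parameters = parameters;
    }

    @Override
    public int hashCode() {
        return Arrays.deepHashCode(parameters);
    }

    @Override
    public boolean equals(Object other) {
        return other instanceof SketchedCacheKey
            && Arrays.deepEquals(parameters, ((SketchedCacheKey) other).parameters);
    }
}
```

Two keys built from equal parameter lists are then equal and hash identically, which is exactly what a HashMap-style lookup needs.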

You can define and configure multiple caches in your application and reference them by name via the `cacheName` parameter of the `@CacheResult` annotation:

```@CacheResult(cacheName="examplecache")
public String exampleMethod(String a, int b) {
```

If no cache name is given the cache name is based on the fully qualified method name and the types of its parameters, for example in this case: “my.app.Example.exampleMethod(java.lang.String,int)”. This way there will be no conflicts with other cached methods with the same set of parameters.

### Custom Key Generators

But what if you actually want to use the same cache for multiple methods without conflicts? The solution is to define and use a custom cache key generator. In the following example both methods use the same cache (“examplecache”), but also use a custom cache key generator (`MethodSpecificKeyGenerator`):

```@CacheResult(
cacheName="examplecache",
cacheKeyGenerator=MethodSpecificKeyGenerator.class)
public String exampleMethodA(String a, int b) {
// ...
}

@CacheResult(
cacheName="examplecache",
cacheKeyGenerator=MethodSpecificKeyGenerator.class)
public String exampleMethodB(String a, int b) {
// ...
}
```

Now we have to implement the `MethodSpecificKeyGenerator`:

```import org.infinispan.jcache.annotation.DefaultCacheKey;

import java.lang.annotation.Annotation;
import java.util.Arrays;
import java.util.stream.Stream;

import javax.cache.annotation.CacheInvocationParameter;
import javax.cache.annotation.CacheKeyGenerator;
import javax.cache.annotation.CacheKeyInvocationContext;
import javax.cache.annotation.GeneratedCacheKey;

public class MethodSpecificKeyGenerator
    implements CacheKeyGenerator {

    @Override
    public GeneratedCacheKey generateCacheKey(CacheKeyInvocationContext<? extends Annotation> context) {
        Stream<Object> methodIdentity = Stream.of(context.getMethod());
        Stream<Object> parameterValues = Arrays.stream(context.getKeyParameters()).map(CacheInvocationParameter::getValue);
        return new DefaultCacheKey(Stream.concat(methodIdentity, parameterValues).toArray());
    }
}
```

This key generator not only uses the parameter values of the method call but also the identity of the method to generate the key. The call to `context.getMethod()` returns a `java.lang.reflect.Method` instance for the called method, which has appropriate `hashCode()` and `equals()` implementations. Both this method object and the parameter values are passed to the DefaultCacheKey implementation, which uses deep equality on its parameters, as mentioned above.

By adding the method’s identity to the cache key we have ensured that there will be no conflicts with other methods when using the same cache.

# Adding a dynamic React page to your classic grails multi-page application

We are developing and maintaining a classic multi-page application based on the Grails web framework that is more than 10 years old. With the advent of HTML5 and modern browsers with faster JavaScript engines, users expect a more and more dynamic and pleasant user experience (UX) from web applications. Our application is used by hundreds of users, and our customer expects a stable, familiar and feature-rich experience that continues to improve over time. Something like a complete rewrite of the UI is way out of scope, time- and budget-wise.

One of the new feature requests would benefit highly from a client-side JavaScript implementation, so we looked at our options. Fortunately, it is quite easy to integrate a React app with Grails and the Gradle build system. So we implemented the new page almost completely as a React app while leaving all the other pages as normal server-side rendered Groovy Server Pages (GSP). The result is quite convincing and opens up a transition path to more and more dynamic client-side pages, and perhaps even to a complete transformation into a single-page application (SPA) in the distant future.

# Integrating a React app into the Grails build process

The Grails react-webpack profile can serve as a great starting point for integrating a React app into an existing Grails project. First you create the React app for the new page in the folder `src/main/webapp`, for example using the create-react-app scripts. Then you add a `$GRAILS_PROJECT/webpack.config.js` to configure webpack appropriately, like so:

```
var path = require('path');

module.exports = {
  entry: './src/main/webapp/index.js',
  output: {
    path: path.join(__dirname, 'grails-app/assets/javascripts'),
    publicPath: '/assets/',
    filename: 'bundle.js'
  },
  module: {
    rules: [
      {
        test: /\.js$/,
        include: path.join(__dirname, 'src/main/webapp'),
        use: {
          loader: 'babel-loader',
          options: {
            presets: ["@babel/preset-env", "@babel/preset-react"],
            plugins: ["transform-class-properties"]
          }
        }
      },
      {
        test: /\.css$/,
        use: [
          'style-loader',
          'css-loader'
        ]
      },
      {
        test: /\.(jpe?g|png|gif|svg)$/i,
        use: {
          loader: 'url-loader'
        }
      }
    ]
  }
};
```

The next step is to move the `package.json` to the `$GRAILS_PROJECT` directory, because we want Gradle tasks to take care of building and bundling it as a Grails asset. To make this convenient, we add some Gradle tasks employing Yarn to our `build.gradle`:

```
import com.moowork.gradle.node.yarn.YarnTask

buildscript {
  dependencies {
    ...
  }
}

...

apply plugin: "com.moowork.node"

...

node {
  version = '12.15.0'
  yarnVersion = '1.22.0'
  distBaseUrl = 'https://nodejs.org/dist'
}

task bundle(type: YarnTask, dependsOn: 'yarn') {
  group = 'build'
  description = 'Build the client bundle'
  args = ['run', 'bundle']
}

task webpack(type: YarnTask, dependsOn: 'yarn') {
  group = 'application'
  description = 'Build the client bundle in watch mode'
  args = ['run', 'start']
}

bootRun.dependsOn(['bundle'])
assetCompile.dependsOn(['bundle'])

...
```
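For the `yarn run bundle` and `yarn run start` invocations to work, the `package.json` needs matching script entries. A hypothetical minimal example (the exact webpack flags depend on your setup):

```
{
  "scripts": {
    "bundle": "webpack --mode production",
    "start": "webpack --mode development --watch"
  }
}
```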

Now we have integrated our new React app with the Grails build system and packaging. The webpack task updates the JavaScript bundle on the fly, so that during development we get almost the same hot-reloading support as with the rest of Grails.

# Delivering the React app as a page

Now that we have integrated the react app in the build and packaging process of our grails application we need to deliver it when the new page is requested by the browser. This is quite simple and straightforward and can be achieved with a GSP like so:

```
<html>
<head>
    <meta name="layout" content="main"/>
    <title>
    </title>
</head>
<body>
    <div id="react-content">
    </div>
    <asset:javascript src="bundle.js"/>
</body>
</html>
```

Now you just have to develop the endpoints for the JavaScript app in the form of normal Grails controllers rendering JSON instead of GSP views. This is extremely easy using Groovy maps and the Grails JSON converters:

```
import grails.converters.JSON

class DataApiController {

    def getData = {
        def responseData = [
            name: 'John',
            age: 37
        ]
        render responseData as JSON
    }
}
```
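On the React side, such an endpoint is then just a `fetch` away. This is only a sketch; the URL `/dataApi/getData` is an assumption and has to match your Grails URL mappings:

```javascript
// Hypothetical loader for the controller above. The endpoint path is an
// assumption - adjust it to your URL mappings. fetchImpl is injectable
// so the function can be exercised without a running server.
async function loadData(fetchImpl = fetch) {
  const response = await fetchImpl('/dataApi/getData');
  if (!response.ok) {
    throw new Error('request failed: ' + response.status);
  }
  return response.json();
}
```

In a React component you would typically call this from `componentDidMount()` or a `useEffect` hook and put the result into component state.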

# Conclusion

Grails and its build infrastructure are flexible enough to easily integrate SPA pages into an existing traditional web application. This allows you to deliver the modern UX and features expected by today’s users without completely rewriting your trusty and proven Grails application. The transition can happen gradually, and individual pages/views can be renewed when needed. That way you can continually add value for your customer while incrementally modernizing your application.

# Some strings are more equal before your Oracle database

When working with customer code based on ADO.NET, I was surprised by the following error message:

The German message just tells us that some `UpdateCommand` had an effect on “0” instead of the expected “1” rows of a `DataTable`. This happened while writing some changes to a table using an `OracleDataAdapter`. What really surprised me at this point was that there certainly was no other thread writing to the database during my update attempt. Even more confusing was that my method of changing `DataTable`s and using the `OracleDataAdapter` to write changes had worked pretty well so far.

In this case, the title “`DBConcurrencyException`” turned out to be quite misleading. The text message was absolutely correct, though.

## The explanation

The `UpdateCommand` is a prepared statement generated by the `OracleDataAdapter`. It may be used to write the changes a `DataTable` keeps track of to a database. To update a row, the `UpdateCommand` identifies the row with a `WHERE` clause that matches all original values of the row and then writes the updated values. So for a table with two columns, a primary id and a number, the update statement would essentially look like this:

```
UPDATE EXAMPLE_TABLE
SET ROW_ID = :current_ROW_ID,
    NUMBER_COLUMN = :current_NUMBER_COLUMN
WHERE
    ROW_ID = :old_ROW_ID
    AND NUMBER_COLUMN = :old_NUMBER_COLUMN
```

In my case, the problem turned out to be caused by string-valued columns and was due to some Oracle weirdness that was already discussed on this blog (https://schneide.blog/2010/07/12/an-oracle-story-null-empty-or-what/): on writing, empty strings (more precisely: empty VARCHAR2s) are transformed into `DBNull`. Note, however, that the following two conditions are not equivalent:

```
WHERE TEXT_COLUMN = ''
```

```
WHERE TEXT_COLUMN is null
```

The first will just never match… (at least with Oracle 11g). So saying that null and empty strings are the same would not be an accurate description.

The `WHERE` clauses of the generated `UpdateCommand`s look more complicated for (nullable) columns of type `VARCHAR2`. But instead of trying to understand the generated code, I guessed that the problem was a bug or inconsistency in the `OracleDataAdapter` that caused the exception. And in fact, it turned out that the problem occurred whenever I tried to write an empty string to a column that was `DBNull` before. This would explain the message of the `DBConcurrencyException`: the `DataTable` thinks there is a difference between empty strings and `DBNull`s, but due to the conversion there is no difference once the corresponding row is updated. Once understood, the problem was easily fixed by transforming all empty strings to `null` prior to invoking the `UpdateCommand`.
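Since the original code is C#/ADO.NET, here is a small Python sketch of that normalization step (names are made up for illustration): before handing a changed row to the `UpdateCommand`, map empty strings to `None` (i.e. `DBNull`), so the in-memory values agree with what Oracle will actually store:

```python
def normalize_empty_strings(row):
    """Map empty strings to None so the in-memory row matches what Oracle
    actually stores (an empty VARCHAR2 becomes NULL on write)."""
    return {column: (None if value == "" else value)
            for column, value in row.items()}

# The row the DataTable would try to write:
changed_row = {"ROW_ID": 1, "TEXT_COLUMN": "", "NUMBER_COLUMN": 42}
print(normalize_empty_strings(changed_row))
# {'ROW_ID': 1, 'TEXT_COLUMN': None, 'NUMBER_COLUMN': 42}
```

Applying this to every changed row before the update keeps the generated `WHERE` clause consistent with the values actually stored in the database.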

# The “parameter self-destruction” bug

A few days ago, I got a bug report for a C++ program about a weird exception involving invalid characters in a JSON format. Now, getting weird stuff back from a web backend is not totally unexpected, so my first instinct was to check whether any calls to the parser did not deal with exceptions correctly. To my surprise, they all did. So I did what I should have done right away: just try to use the feature where the client found the bug. It crashed after a couple of seconds. And what I found was a really interesting problem. It was actually the JSON encoder trying to encode a corrupted string. But how did it get corrupted?

# Tick, tick, boom..

The code in question logs into a web service and then periodically sends a keep-alive signal with the same information. Let me start by showing you some support code:

```
#include <algorithm>
#include <functional>
#include <memory>
#include <vector>

class ticker_service
{
public:
  using callable_type = std::function<void()>;
  using handle = std::shared_ptr<callable_type>;

  handle insert(callable_type fn)
  {
    auto result = std::make_shared<callable_type>(
      std::move(fn));
    callables_.push_back(result);
    return result;
  }

  void remove(handle const& fn_ptr)
  {
    if (fn_ptr == nullptr)
      return;

    // just invalidate the function
    *fn_ptr = {};
  }

  void tick()
  {
    auto callable_invalid =
      [](handle const& fn_ptr) -> bool
      {
        return !*fn_ptr;
      };

    // erase all the 'remove()d' functions
    auto new_end = std::remove_if(
      callables_.begin(),
      callables_.end(),
      callable_invalid);

    callables_.erase(new_end, callables_.end());

    // call the remainder
    for (auto const& each : callables_)
      (*each)();
  }

private:
  std::vector<handle> callables_;
};
```

This is dumbed down from the real thing, but enough to demonstrate the problem. In the real code, the functions only run after a specific time has elapsed, and they are all in a queue. Invalidating the `std::function` basically serves as “marking for deletion”, which is a common pattern for allowing deletion in queue- or heap-like data structures. In this case, it allows marking a function for deletion in constant time, while the actual element shifting is “bundled” in the `tick()` function.

Now for the code that uses this “ticker service”:

```
class announcer_service
{
public:
  explicit announcer_service(ticker_service& ticker)
    : ticker_(ticker)
  {
  }

  void update_presence(std::string info)
  {
    // Make sure no jobs are running
    ticker_.remove(job_);

    if (!send_web_request(info))
      return;

    // reinsert the job
    job_ = ticker_.insert(
      [=] {
        update_presence(info);
      });
  }

private:
  ticker_service& ticker_;
  ticker_service::handle job_;
};
```

The announcer service is used like this:

```
ticker_service ticker;
announcer_service announcer(ticker);

announcer.update_presence(
  "hello world! this is a longer text.");
ticker.tick();
```

# A subtle change

You might be wondering where the bug is. To the best of my knowledge, there is none. And the real code corresponding to this worked like a charm for years. And I did not make any significant changes to it lately either. Or so I thought.
If I open that code in CLion, Clang-Tidy tells me that the parameter `info` of `update_presence` is only used as a reference, and that I should consider turning it into one. Well, Clang-Tidy, that’s bad advice, because that’s pretty much the change I made:

```
void update_presence(std::string const& info) // <--
```

And this makes it go boom on the second call to `update_presence()`, the one from `tick()`. Whew. But why?

# What is happening?

It turns out that even though we are capturing everything by value, the lambda is still at fault here. Or rather, using values captured by the lambda after the lambda has been destroyed. And in this case, the lambda actually destroys itself in the call to `ticker_service::remove()`. In the first call to `update_presence()`, the `job_` handle is still `nullptr`, turning `remove()` into a no-op. On the second call, however, `remove()` overwrites the `std::function` that is currently on the stack, calling into `update_presence`, with a default-constructed value. This effectively `delete`s the lambda that was put there by the last iteration of `update_presence`, thereby also destroying the captured `info` string. If `info` is copied into `update_presence`, this is not a problem, but if you are still referencing the value stored in the lambda, this is a typical use-after-free. Oops. I guess C++ can be tricky sometimes, even if you are using automatic memory management.

# How to avoid this

This bug is not unlike modifying a container while iterating over it. Java people know this error from the `ConcurrentModificationException`. Yes, doing this correctly is possible if you are really, really careful, but in general you better solve it by deferring your container modification to a later point, after you are done iterating. Likewise, in this example, the `std::function` that is currently executing is being modified while it is executing.
A good solution is to defer the deletion until after the execution. So I argue the bug is actually in the `ticker_service`, which is not as safe as it could be. It should make sure that the lambda survives the complete duration of the call. An easy, albeit somewhat inefficient, approach would be copying the `std::function` before calling it. Luckily, in the real code, the functions are all just executed once, so I could `std::move` them to a local variable before executing.
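Under these constraints, a hardened `tick()` could look like the following sketch. It assumes the one-shot semantics mentioned above (every inserted callable runs at most once), moves each function to a local variable before the call, and iterates by index over a snapshotted count so that an `insert()` from inside a callback neither invalidates the iteration nor runs in the same tick:

```cpp
#include <algorithm>
#include <cassert>
#include <functional>
#include <memory>
#include <vector>

// Sketch of a safer ticker_service, assuming one-shot callables as in the
// real code described above. The interface is unchanged from the original.
class ticker_service
{
public:
  using callable_type = std::function<void()>;
  using handle = std::shared_ptr<callable_type>;

  handle insert(callable_type fn)
  {
    auto result = std::make_shared<callable_type>(std::move(fn));
    callables_.push_back(result);
    return result;
  }

  void remove(handle const& fn_ptr)
  {
    if (fn_ptr != nullptr)
      *fn_ptr = {}; // just mark for deletion
  }

  void tick()
  {
    // erase all the 'remove()d' functions
    auto new_end = std::remove_if(
      callables_.begin(), callables_.end(),
      [](handle const& fn_ptr) { return !*fn_ptr; });
    callables_.erase(new_end, callables_.end());

    // Snapshot the count: entries inserted by a callback run on the next
    // tick, and indexing stays valid even if push_back() reallocates.
    std::size_t const count = callables_.size();
    for (std::size_t i = 0; i != count; ++i)
    {
      // Move the function to a local before calling it. A remove() from
      // inside the call now only clears an already-empty slot, so the
      // executing lambda (and its captures) survives until we return.
      auto local = std::move(*callables_[i]);
      *callables_[i] = {}; // moved-from state is unspecified; clear it
      if (local)
        local();
    }
  }

private:
  std::vector<handle> callables_;
};
```

With this version, even the reference-capturing variant of `update_presence` is safe: the captured `info` lives on in `local` until the call returns, and a self-removing or self-reinserting callback only touches slots that are already empty.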