5 Not-so-Beginner’s React Pitfalls

React, in my opinion, has become quite a useful tool over the years. I admit I haven’t given the other major frameworks a try, but from the look of the resulting code, Svelte is the only one I would give a real chance in the near future (in fact, you’d really have to pay me big money to convince me about Angular).

Now, as with many of the more useful JS libraries, React has reached a state where it has not only survived for quite a while (reaching v18 only a few weeks ago), but has also bred a community that harbors a lot of valuable knowledge, enabling one to effectively avoid the most common pitfalls at the beginning of the journey. There are lots of resources you can easily find online, from few-hour courses to posts in other blogs about the most common traps.

However, in our daily work it appears that there are still some very good points to make about how not to go about React’s unopinionatedness. So these are some of our own findings that I haven’t yet seen emphasized much, and maybe they are here to your advantage.

1. HAVE YOUR STATES ATOMIC

It might happen that you migrate an older React component from a time when functional programming wasn’t the norm yet, or out of whatever habit, and declare something like a greedy React state as

const [state, setState] = useState({thisThing: ..., thatThing: ..., ..., ...});

Now your state profits a lot from immutability (think of this as “your machine then knows that its content is clear and unique at any given time”), so you do not want to care about the same-or-not-sameness of state.thatThing when evaluating state.thisThing. Therefore, it is usually advised to split that up into several independent states as

const [thisThing, setThisThing] = useState(...);
const [thatThing, setThatThing] = useState(...);
...

That is more readable and everything. However, the most useful rule for building your states is not to split everything up as small as possible, but rather, to have your states atomic. By that, we mean: “not needlessly large, but containing everything that might change at the same time”.

One common example is basic data fetching (that is, if you don’t choose to reach for react-query, which I personally like). If you do e.g. a simple GET request, you usually do not only have “data” (some response), but also at least a “pending” (has the request finished yet?) and an “error” (is this response even usable?) field. These all change at the same time. Thus, they belong to the same entity. That state, designed atomically:

const [query, setQuery] = useState({
    pending: false,
    data: null,
    error: null,
});

Side note: you might choose not to use null as an initial value here because of the known ambiguity problems around that value. For this illustration, it will suffice.

So, this query state is now atomic: not to be split further without serious consequences, as it were. If you had another, unrelated query, you would not just put it right into the same state entity; but if you had another property of that query (e.g. a separate field for the status code, …), it would belong there.

This helps in having more predictable useEffect, useMemo etc. dependency arrays. You can have an Effect depending on [query] as a whole, and this makes complete semantic sense. It would be very hard to predict its behaviour if you mashed multiple queries or whatever-state-you-can-think-of in there.
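As a sketch: such an Effect can react to the whole query lifecycle with a single, semantically clear dependency (showErrorToast and processData are made-up placeholders):

useEffect(() => {
    if (query.pending) {
        return;                       // still loading, nothing to do yet
    }
    if (query.error) {
        showErrorToast(query.error);  // hypothetical error handler
    } else {
        processData(query.data);      // hypothetical consumer of the response
    }
}, [query]);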

2. HAVE YOUR EFFECTS ATOMIC & TEAR THEM DOWN

Similarly, it is not super obvious (to the newcomer’s eye at least) that you can have multiple useEffect() calls. You can adhere to the Single Responsibility principle right there: the only good Effects are the ones that you can grasp in the twinkling of an eye. Use one for every single thing you want to achieve; don’t lump multiple different things together in a somewhat-“constructor”-type of thinking. This keeps the dependency arrays small and controllable, and there are fewer cases of the peculiar “But this CANNOT EVEN happen!!”.
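As a sketch of that splitting, assume a component that keeps the document title current and also reports page views (unreadCount, currentPage and logPageView are made up for illustration):

// one Effect per responsibility, each with its own small dependency array
useEffect(() => {
    document.title = `${unreadCount} unread messages`;
}, [unreadCount]);

useEffect(() => {
    logPageView(currentPage); // hypothetical analytics call
}, [currentPage]);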

Moreover, Effects have a function designed to clean them up: the teardown function. If your Effect starts any larger operation and then for some reason your component gets re-rendered before that operation is finished, you are likely to get hit by that Effect in a state where you had forgotten about it already. You can follow this example:

// example: listening to the scroll event
useEffect(() => {
    const handler = (event) => { /* ... */ };
    document.addEventListener('scroll', handler);
    return () => document.removeEventListener('scroll', handler);
}, []);

// or you might do something later in life
useEffect(() => {
    const timeout = setTimeout(() => { /* ... */ }, 5000);
    return () => clearTimeout(timeout);
}, []);

Some asynchronous operations might not have a simple teardown operation, but you can at least tell your Promises to disregard the Effect. Failing to do so is responsible for the very ugly

Warning: Can’t perform a React state update on an unmounted component. This is a no-op, but it indicates a memory leak in your application.

If you are responsible, you keep your Browser Console clean of all of these warnings. They appear if you call a setState-or-similar function at a point where the teardown actually should have happened. This pattern solves that case:

// this example uses a fetch Promise,
// but it also works for stale setTimeout handlers etc.

useEffect(() => {
    let mounted = true;
    fetch('/whatever').then(() => {
        if (mounted) {
            setState(true);
        }
    });
    return () => { mounted = false };
}, []);

// if you do not check for the value of mounted,
// the "memory leak" warning can appear if the
// fetch returns after the component unmounted meanwhile.

Side note: I also cannot recall a single case in which the common React linter rule “exhaustive-deps” was worth ignoring. I have had several occasions on which I believed I could outsmart the stupid machine, only to end up with much larger problems down the road. Sure, things like Redux’ dispatch() might be cumbersome to always include, but I found that if I just make sure that exhaustive-deps never fires, I am happier in the long run.

3. USEEFFECT() IN TOO DEEP COMPONENTS

Especially in the context of data fetching, it might appear alluring to put your useEffect() calls as deep (in the direction of the smallest components) as you can. Even more so if you don’t have a rigid way of state management.

Now, I do get the point that this appears “more modular” and flexible, but for me it has led to situations where way too many requests were sent to our backends. You trade the modularity for the unpredictability of some Effects, so the best way I came to think of it was: Treat useEffect() like a bug.

I’m not saying that using it is wrong. But if you are wary of its appearance, this can help. Sometimes it is just possible to do everything an Effect does completely outside React. Maybe the Effect code can instead live in your index.js (as vanilla JS or otherwise) and just be injected into your Root component, e.g. as props or via other libraries. E.g. with a Redux middleware, some effects can run with a higher degree of control over your state.
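For illustration, a minimal sketch of such a Redux middleware; the action type and endpoint are made up, the point being that the fetching “effect” lives outside any component:

const fetchOnNavigate = (store) => (next) => (action) => {
    const result = next(action);
    // run the effect exactly when this action passes through, not on re-renders
    if (action.type === 'navigation/pageChanged') {
        fetch(`/api/pages/${action.payload}`)
            .then((response) => response.json())
            .then((data) => store.dispatch({ type: 'page/dataLoaded', payload: data }));
    }
    return result;
};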

Remember: Modularity is not bad per se. It’s good. Don’t elevate the most particular effects to the top level of your application, but figure out where they can live well enough so that you know exactly when they need to fire.

So far, there hasn’t been a case where I wished I had stuffed my useEffects further down towards the virtual DOM leaves, but several in which elevating them helped me a lot.

4. USE CUSTOM HOOKS with minimal interface

I consider it helpful, even for React beginners, to always be on the lookout for what could be its own React Hook. A React Hook is any function whose name begins with “use”, and most of the time, these consist of some combination of internal useState, useEffect, useContext and useRef definitions.

But their merit is that they allow for much cleaner, dumber-looking Components themselves. Consider: dumb components are the best!

If they are only needed once, you can have them co-located next to where they are needed, but even just the act of giving them a name of their own makes for much more understandable code.

I use custom hooks for a lot of things, e.g.

  • having a state that is persisted in localStorage / sessionStorage
  • having a state that updates in a debounced / throttled / delayed manner
  • standardizing very basic data fetching
  • accessing the window width at any time (nice for responsive layouts)
  • creating a React ref for an element with a “clicked outside” handler
  • standardized handling of messages from connected websockets

I will mostly spare you the code (below is just a taste of the first item), but if you have questions about any of these, just drop a comment.
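A minimal sketch of that first item, a localStorage-persisted state; JSON parse errors and cross-tab storage events are ignored for brevity, and the hook name is mine:

import { useState, useEffect } from 'react';

export const useLocalStorageState = (key, initialValue) => {
    const [value, setValue] = useState(() => {
        const stored = window.localStorage.getItem(key);
        return stored !== null ? JSON.parse(stored) : initialValue;
    });

    // write back on every change, so the state survives a reload
    useEffect(() => {
        window.localStorage.setItem(key, JSON.stringify(value));
    }, [key, value]);

    return [value, setValue]; // same minimal interface as plain useState
};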

One important point, though: always keep your interface minimal. E.g. if your custom hook has an internal setState(), think hard about whether to pass that function to the outside via the hook’s return value. Even if you are the only developer on a project, treat yourself as two different instances, one “framework designer” and one “framework consumer”, and as the designer, think hard about what havoc the consumer could wreak if you allow him too much.

5. DO NOT DUPLICATE STATE INFORMATION (especially with react-router)

This applies to every state information, but it’s important to recognize that your URL route is just that: a kind of global state. One that your user can edit directly at any time, leaving the synchronization up to you.

So do not go about it by reading the URL parameters into some state that has its own setState! If you assign a certain role to a state parameter in your URL, then it is your obligation to maintain a uni-directional data flow:

  1. From the route, that value flows into your application in a clearly-defined manner,
  2. where you act upon it as you wish, until you need to change it
  3. Then you change the route, and go back to 1 (see the sketch below).
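As a sketch of that flow, assuming react-router v6’s useSearchParams; the “tab” parameter and the hook name are just examples:

import { useSearchParams } from 'react-router-dom';

export const useTabParam = () => {
    const [searchParams, setSearchParams] = useSearchParams();
    const tab = searchParams.get('tab') ?? 'overview'; // 1. the value flows in from the route
    const setTab = (nextTab) => {
        setSearchParams({ tab: nextTab });             // 3. changing it means changing the route
    };
    return [tab, setTab];                              // 2. act upon it like regular state
};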

Of course, one might imagine that in some cases you cannot guarantee that. Then maybe write your own synchronization logic, but I would highly advise you to stash that away in e.g. a custom hook, or a middleware if you use Redux, so that you can test it thoroughly and it won’t break too soon.

Further note: There are situations where it is quite sensible to have two very similar states, if they have a different responsibility. These are not a bug.

E.g. if you GET a value from a server, then edit it in a controlled <input/> field, and PUT it to the server again, you do not wish to do so on every key press. These two are then not meant to be the same:

  1. the value as you currently know it from the server
  2. the value as it exists inside the <input/>

These are semantically different. They can and should be different state entities. But if you have something that is utterly dependent on one other state, then chances are you do not really need another entity.
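A minimal sketch of these two entities (the names are mine, the PUT handling is only hinted at):

const [serverValue, setServerValue] = useState(null); // 1. as last known from the server
const [inputValue, setInputValue] = useState('');     // 2. as currently typed

// <input value={inputValue} onChange={event => setInputValue(event.target.value)} />
// on submit: PUT inputValue to the server, then setServerValue(inputValue)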

All in all,

that turned out longer than I envisioned it to become. But I hope it is of some help to any React coders who have managed the absolute basics and are now prone to the next-level pitfalls.

The good news is that after a certain bunch of hardships, there are rarely even more surprises waiting. So, manage your state and effects responsibly, especially the asynchronous ones, and the rest are practices that apply to any software development.

Or am I misled?

Mutable States can change inside your Browser console log

So we know that web development must be one of the fastest-changing ecospheres humankind has ever seen (not to mention that JavaScript frameworks and their best practices mutate with a frequency and deadliness similar to coronaviruses). While these new developments can also come with great joy and many opportunities, it means that once in a while, we need to take care of older projects which were written in a completely different mindset.

It sounds almost trivial: even when your infrastructure is prone to constant shifts, any software developer holding at least some reputation should strive to write their code to live as long and be as maintainable as originally intended. Or longer.

But once in a while you run into legacy code that you first have to dissect in order to understand its workings. And for JS, this usually means inserting console.log() statements at various places and tracing them during execution (yeah, I know, there’s a plenitude of articles telling you to stop that, but let’s just stay at the most basic level here).

Especially in an architecture with distributed, possibly asynchronous events (which helps in reducing coupling, see e.g. the Mediator and Publish-Subscribe patterns), this can help your bug tracing. But there’s a catch. One which took me some time to actually recognize as quite the villain.

It does not make any sense to me, but for some reason, at least Chrome and Firefox in their current implementations save some effort when console.log() is given an object. That is, they seem to just hold a reference for lazy evaluation. It can then happen that you look back up at your log, maybe even need to scroll there, look at some value and not realize that you are looking at the current state, not the state at the time of logging!

Maybe that was clear to you. Maybe it never occurred to you because you always cared about using your state immutably. But in case you are developing on some legacy code and don’t know what your predecessor did everywhere, you might not be prepared.

You can visualize that difference easily by yourself. Consider this short JS script:

var trustfulObject = {number: 0};
var deceptiveObject = {number: 0};

// let's just increase these numbers once each second
setInterval(() => {
    console.log("let's see...", trustfulObject, deceptiveObject);
    trustfulObject = {number: trustfulObject.number + 1};
    deceptiveObject.number = deceptiveObject.number + 1;
}, 1000);

Let that code run for a while and then open your Browser console. Scroll upwards a bit and click on some of the objects. You will find that the trustfulObject is always enumerated as expected (at the time of logging), while the deceptiveObject will always show the number at the time of clicking. That surely surprised me.

In case you are still wondering why: the trustfulObject is freshly created at each step and then reassigned to your reference variable. It seems the browser has no other choice than logging the old (correct) state, because the reference is lost afterwards. The deceptiveObject holds the same reference during the whole runtime, which somehow makes it look more efficient to the browser to just not evaluate anything until you want to know the value.

And then, it lies to you. ¯\_(ツ)_/¯

Two notes:

  1. If you really have to deal with legacy code of a given size where you cannot easily change that behaviour, you can log your object using JSON.stringify, i.e. console.log("let's see…", trustfulObject, JSON.stringify(deceptiveObject)); avoids that lazy evaluation.
  2. Not to be confused: the JS “const” keyword does exactly the opposite of creating an immutable object. It creates an immutable reference, i.e. you can still manipulate the object’s content afterwards. Exactly what you do not want here.
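You can see that for yourself in two lines:

const frozenReference = { number: 0 };
frozenReference.number = 1;        // perfectly legal: the content is mutable
// frozenReference = { number: 2 }; // TypeError: only the reference is constant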

Of course, in modern times you probably wouldn’t write vanilla JS, and e.g. using React useState definitely reduces that issue. But still. If you don’t want to use React & Co. everywhere, then… pay attention.

You are not safe with Semantic Versioning (right now).

TL;DR: Several recent hijackings of widely spread npm libraries should make you think twice about trusting the package.json semantic-versioning notation using carets and tildes.

So what’s that about?

Version updates are one of the most haunting things any person with any kind of computer ever encounters. On the one hand, it’s good news: something that you use has evolved again, and you are right at the source. Have better tools, fewer bugs, new functionality, usually delivered by just a few clicks. But there are always these question marks: do I want to know what happened there? Do I want that now? Wasn’t that old version working fine?

So we all know that dilemma. I mean, as we speak, every Windows user is kindly reminded that one could now let that version 11 live on her system. But is it compatible with your system? Will this work out of the box? Is it worth your time to try now, or do you wait until the waters have settled? And while Microsoft is asking that question only once every few years, there’s software like Notepad++ that wants me to have its updates on every single start. Because apparently, text editors can grow up so fast..?

Now imagine this problem, times a few googol, and you are in the everyday world of every web developer. In the npm universe, you have nearly total modular flexibility, which comes with so many small packages that update for whatever reason. One likes to believe that mostly these are bugfixes. Or might these be breaking changes? Did JavaScript evolve again? Do you want that stuff? Are you state-of-the-art, or are you in dependency hell?

Issues like these are the source of semantic versioning. The idea is that you place a certain level of trust in the usual three-membered “major.minor.patch” version label. Version “3.10.1” means:

  • Major version 3; the API of your software is promised to be compatible within each major version (except for the initial version 0).
  • Minor version 10; a new minor version is used for new, backwards-compatible API functionality, i.e. there have been 10 occasions on which version 3.0.0 was improved without breaking backwards compatibility.
  • Patch version 1, which is incremented for bugfixes within that API specification, i.e. backwards compatible within the whole major version.

This is done in good faith: only when the package maintainer re-thinks his API is the major version incremented, and that mediates a level of trust in that you, in return, only have to re-think your usage of that API if you switch major versions. Therefore, dependency management systems like the npm / yarn package.json allow for the convenient notation to specify e.g.

"dependencies": {
    /* ... */
    "styled-components": "^5.1.1",
    "websocket": "~1.0.31"
  },

The caret (^) notation tells us that when the styled-components package was added to our project, we installed version 5.1.1, but we trust the npm universe far enough that every future execution of “npm install” / “yarn install” may increase this version within the same major version, e.g. if version 5.2.0 was released in the meantime, it updates to its new content. As we speak, we are at version 5.3.3, so this project is well up-to-date with whatever the good folks put in there.

Similarly, the tilde (~) notation only allows that behavior within the minor version, e.g. at the moment any call of “… install” would retrieve the current version 1.0.34, but would not get version 1.1.0 whenever that is released.

The opposite of using these is called dependency pinning, and there is lots of further reading available, e.g. here.
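Pinned, the dependencies block from above would simply drop the caret and tilde, so that “… install” always resolves to exactly these versions:

"dependencies": {
    /* ... */
    "styled-components": "5.1.1",
    "websocket": "1.0.31"
  },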

There is a certain misconception that “… install” will only update any of these versions if no “package-lock.json” (for npm) or “yarn.lock” (for yarn) is around. That is not the case, see below. But first, my actual point.

So the point of semantic versioning is the establishment of trust between the package developer and the user: “This update only changes about that much”.

Problem: You cannot trust the npm universe right now.

Now the last weeks showed us not only a hijacking of the npm package ua-parser-js at the end of October, but also another one of the packages coa and rc 11 days ago. These appeared somewhat correlated and came with a mixture of password stealing and secret installation of crypto-mining tools, all in all the result of some bad folks getting access to these package repositories and making them execute malicious code in their install scripts (note that install scripts are not uncommon for widely spread npm packages). This means that while you can complain that these hackers did not really adhere to the Semantic Versioning code (oh..??), and also that these breaches were noticed within a couple of hours each, think about this:

anyone with a certain caret or tilde in her package.json might have infected herself just by an unluckily-timed call of “npm install”.

Think of an automated script. Think of CI. Think of anyone who just wants to build his project and be as up-to-date as one can get. A survey of npm developers last year showed that usage of two-factor authentication is just below 10%, and while this doesn’t mean that the other 90% are completely irresponsible, there just is no system in place that would promise us that such attacks will go away soon.

So of course we cannot write every dependency of our projects ourselves, especially if they are not direct dependencies. But think of it as Russian Roulette: at least you can minimize the number of times you pull the trigger.

You cannot know which package is affected next. You’d better pin your versions to exactly the ones you can trust right now, and if you are ever in need of updating, at least do a quick googling first to see whether there’s some sh*t going down right now.

Do you have further ideas on how to isolate your development / CI environments from whatever just happens in the outer rims of the npm universe? Please feel free to share.

How to make npm / yarn respect their respective lockfiles (package-lock.json / yarn.lock)

In principle, you can even live with the caret / tilde if you make sure that you never actually call “npm / yarn install” itself, but make these tools actually treat their so-called lockfiles as lockfiles. In their current versions, these calls lead to that behaviour:

# instead of npm install
npm ci

# instead of yarn install
# for yarn 1.x:
yarn install --frozen-lockfile
# for yarn 2:
yarn install --immutable

As you can see from the npm call, this is especially suited for CI environments. It also means you have to make sure the package-lock.json / yarn.lock is part of your repository.

One disadvantage of this approach is that npm really likes to notify you about being not very up-to-date, producing lots of noise that you will want to get rid of. Just be sure to pay some amount of attention when you do update.

Redux-Toolkit & Solving “ReferenceError: Access lexical declaration … before initialization”

Last week, I had a really annoying error in one of our React-Redux applications. It started with a believed-to-be-minor cleanup in our code, culminated in four developers staring at our code in disbelief and quite some research, and resulted in some rather feasible solutions that, in hindsight, look quite obvious (as is usually the case).

The tech landscape we are talking about here is a React webapp that employs state management via Redux-Toolkit / RTK, the abstraction layer above Redux to simplify the majority of standard use cases one has to deal with in current-day applications. Personally, I happen to find that useful, because it means a perceptible reduction of boilerplate Redux code (and some dependencies that you would use all the time anyway, like redux-thunk) while maintaining compatibility with the really useful Redux DevTools, and not requiring many new concepts. As our application makes good use of URL routing in order to display very different subparts, we implemented our own middleware that does the data fetching upfront in one major step (sometimes called “hydration”).

One of the basic ideas in Redux-Toolkit is the management of your state in substates called slices that aim to unify the handling of actions, action creators and reducers, essentially what was previously described as Ducks pattern.

We provide unit tests with the jest framework, and generally speaking, it is more productive to test general logic instead of React components or Redux state updates (although we sometimes make use of that, too). Jest is very modular in the sense that you can add tests for any JavaScript function from wherever in your testing codebase; the only thing, of course, is that these functions need to be exported from their respective files. This means that a single jest test only needs to resolve the imports that it depends on, recursively (i.e. the dependency tree), not the full application.

Now my case was as follows: I wrote a test that essentially was just testing a small switch/case decision function. I noticed there was something fishy when this test resulted in errors of the kind

  • Target container is not a DOM element. (pointing to ReactDOM.render)
  • No reducer provided for key “user” (pointing to node_modules redux/lib/redux.js)
  • Store does not have a valid reducer. Make sure the argument passed to combineReducers is an object whose values are reducers. (also …/redux.js)

This meant there was too much going on. My unit test should know of neither React nor Redux, and as the culprit, I found that one of the imports in the test file used another import that marginally depended on a slice definition, i.e.

///////////////////////////////
// test.js
///////////////////////////////
import {helper} from "./Helpers.js"
...

///////////////////////////////
// Helpers.js
///////////////////////////////
import {SOME_CONSTANT} from "./state/generalSlice.js"
...

Now I only needed some constant located in generalSlice, so one could easily move that to some “./const.js”. Or so I thought.

When I removed the generalSlice.js dependency from Helpers.js, the React application broke. That is, in a place totally unrelated:

ReferenceError: can't access lexical declaration 'loadPage' before initialization

./src/state/loadPage.js/</<
http:/.../static/js/main.chunk.js:11198:100
./src/state/topicSlice.js/<
C:/.../src/state/topicSlice.js:140
> [loadPage.pending]: (state, action) => {...}

From my past failures, I instantly recalled: This is a problem with circular dependencies.

Alas, topicSlice.js imports loadPage.js and loadPage.js imports topicSlice.js, and while some such circles can be handled by webpack or similar bundlers, in general, import loops can cause problems. And while I knew that before, this case was just difficult for me because of the very nature of RTK.

So this is a thing with the RTK way of organizing files:

  • Every action that clearly belongs to one specific slice can be defined directly in this state file as a property of the “reducers” in createSlice().
  • Every action that is shared across files or consumed in more than one reducer (in more than one slice) can be defined as one of the “extraReducers” in that call.
  • Async logic like our loadPage is defined in thunks via createAsyncThunk(), which gives you a place suited for data fetching etc. and always comes with three action creators: loadPage.pending, loadPage.fulfilled and loadPage.rejected.
  • This looks like
///////////////////////////////
// topicSlice.js
///////////////////////////////
import {loadPage} from './loadPage.js';

const topicSlice = createSlice({
    name: 'topic',
    initialState,
    reducers: {
        setTopic: (state, action) => {
            state.topic = action.payload;
        },
        ...
    },
    extraReducers: {
        [loadPage.pending]: (state, action) => {
              state.topic = initialState.topic;
        },
        ...
    },
});

export const { setTopic, ... } = topicSlice.actions;

And loadPage itself was a rather complex action creator (thunk), as it could cause state dispatches as well; it was built, in simplified form, as:

///////////////////////////////
// loadPage.js
///////////////////////////////
import {setTopic} from './topicSlice.js';

export const loadPage = createAsyncThunk('loadPage', async (args, thunkAPI) => {
    const response = await fetchAllOurData();

    if (someCondition(response)) {
        await thunkAPI.dispatch(setTopic(SOME_TOPIC));
    }

    return response;
});

You clearly see that import loop: loadPage needs setTopic from topicSlice.js, topicSlice needs loadPage from loadPage.js. This was rather old code that had worked before, so it appeared to me that this is no problem per se; but solving that completely different dependency in Helpers.js (SOME_CONSTANT from generalSlice.js) made something break.

That was quite weird. It looked like this not-really-required import of SOME_CONSTANT made ./generalSlice.js load first, along with a certain set of imports including some of the dependencies of either loadPage.js or topicSlice.js, so that by the time their own dependencies would have been loaded, no import loop was required anymore. However, it did not appear advisable to trace that to its core, because the application has grown quite a bit already. We needed a solution.

As I said before, it required the brainstorming of multiple developers to find a way of dealing with this. After all, RTK appeared mature enough for me to dismiss “that thing just isn’t fully thought through yet”. Still, code-splitting is such a basic feature that one would expect some answer to that. What we did come up with was:

  1. One could address the action creators like loadPage.pending (which is created as a result of RTK’s createAsyncThunk) by their string equivalent, i.e. ["loadPage/pending"] instead of [loadPage.pending] as key in the extraReducers of topicSlice. This will be a problem if one ever renames the action from “loadPage” to whatever (and your IDE and linter can’t help you as much with errors), which could be solved by writing one’s own action name factory that can be stashed away in a file with no own imports.
  2. One could re-think the idea that setTopic should be among the normal reducers in topicSlice, i.e. being created automatically. Instead, it can be created in its own file and then be referenced by loadPage.js and topicSlice.js in a non-circular manner as export const setTopic = createAction('setTopic'); and then accessed in extraReducers as [setTopic]: ... .
  3. One could think hard about the construction of loadPage. This whole thing is actually a hint that loadPage does too many things on too many different levels (i.e. it violates at least the principles of Single Responsibility and Single Level of Abstraction).
    1. One fix would be to at least do away with the automatically created loadPage.pending / loadPage.fulfilled / loadPage.rejected actions and instead define custom createAction("loadPage.whatever") with whatever describes your intention best, and put all these in your own file (as in idea 2).
    2. Another fix would be splitting the parts of loadPage into separate thunks, and then being able to react to the automatically created pending / fulfilled / rejected actions of each.

I chose idea 2 because it was the quickest, while allowing myself to let idea 3.1 rest a bit. I guess that next time, I should favor that one, because it makes the developer’s intention (as in… mine) clearer and the Redux DevTools output even more descriptive.

In case you’re still lost, my solution looks as follows:

///////////////////////////////
// sharedTopicActions.js
///////////////////////////////
import {createAction} from "@reduxjs/toolkit";
export const setTopic = createAction('topic/set');
//...

///////////////////////////////
// topicSlice.js
///////////////////////////////
import {setTopic} from "./sharedTopicActions";
const topicSlice = createSlice({
    name: 'topic',
    initialState,
    reducers: {
        ...
    },
    extraReducers: {
        [setTopic]: (state, action) => {
            state.topic = action.payload;
        },

        [loadPage.pending]: (state, action) => {
              state.topic = initialState.topic;
        },
        ...
    },
});

///////////////////////////////
// loadPage.js, only change in this line:
///////////////////////////////
import {setTopic} from "./sharedTopicActions";
// ... Rest unchanged

So there’s a simple tool to break circular dependencies in more complex Redux-Toolkit slice structures. It is weird that it never occurred to me before, i.e. up until this day, I was always able to solve circular dependencies by shuffling other parts of the imports.

My problem is fixed. The application works as expected, all the tests now work as they should, everything is modular enough, and the required change was not a major structural redesign. It required hard thinking but had a rather simple solution. I have trust in RTK again, and one can rest safe in the assumption that JavaScript imports are at least deterministic. Although I will never do the work of analysing what it actually was about my SOME_CONSTANT import that unknowingly fixed the problem beforehand.

Is there any reason to disfavor idea 3.1, though? Feel free to comment your own thoughts on that issue 🙂

Flexible React-Redux Hook Mocks in jest & React Testing Library

Best practices in mocking React components aren’t entirely unheard of, even in connection with a Redux state, and even in connection with the quite convenient Hooks notation ({ useSelector, useDispatch }).

So, of course the knowledge of a proper approach is at hand. And in many scenarios, it makes total sense to follow their principle of exactly arranging your Redux state in your test as you would in your real-world app.

Nevertheless, there are reasons why one wants to introduce a quick, non-overwhelming unit test of a particular component, e.g. when your system is in a state of high fluctuation because multiple parties are still converging on their interfaces and requirements; a complete mock would then be quite premature, more of a major distraction than of any help.

Proponents of strict TDD would now object, of course. Anyway: fortunately, the combination of jest with React Testing Library is flexible enough to give you the tools to drill into any of your state-connected components without much knowledge of the rest of your React architecture.*

(*) Of course, these tests presume knowledge of your current Redux store structure. In the long run, I’d also consider this bad style, but during highly fluctuating phases of development, I’d favour the explicit “this is how the store is intended to look” as safety by documentation.

On a basic test frame, I want to show you three things:

  1. Mocking useSelector in a way that allows for multiple calls
  2. Mocking useDispatch in a way that allows expecting a specific action creator to be called.
  3. Mocking useSelector in a way that allows for mocking a custom selector without its actual implementation

(Upcoming in a future blog post: Mocking useDispatch in a way to allow for async dispatch-chaining as known from Thunk / Redux Toolkit. But I’m still figuring out how to exactly do it…)

So consider your component, e.g., as simple as:

import {useDispatch, useSelector} from "react-redux";
import {importantAction} from "./place_where_these_are_defined";

const TargetComponent = () => {
    const dispatch = useDispatch();
    const simpleThing1 = useSelector(store => store.thing1);
    const simpleThing2 = useSelector(store => store.somewhere.thing2);

    return <>
        <div>{simpleThing1}</div>
        <div>{simpleThing2}</div>
        <button title={"button title!"} onClick={() => dispatch(importantAction())}>Do Important Action!</button>
    </>;
};

export default TargetComponent;

Multiple useSelector() calls

If we had a single call to useSelector, we’d be done as easily as making useSelector a jest.fn() with a mockReturnValue(). But we don’t want to constrain ourselves to that. So what works in our example is to actually construct a mock store as a plain object, and give our mocked useSelector a mockImplementation that applies its argument (which, as a selector, is a function of the store) to that store.

Note that for this simple example, I did not concern myself with useDispatch() much. It just returns a dispatch function of () => {}, i.e. it won’t throw an error but also doesn’t do anything else.

import React from 'react';
import { render, screen, fireEvent } from '@testing-library/react';
import TargetComponent from './TargetComponent';
import * as reactRedux from 'react-redux';
import * as ourActions from './actions';

jest.mock("react-redux", () => ({
    useSelector: jest.fn(),
    useDispatch: jest.fn(),
}));

describe('Test TargetComponent', () => {

    beforeEach(() => {
        useDispatchMock.mockImplementation(() => () => {});
        useSelectorMock.mockImplementation(selector => selector(mockStore));
    })
    afterEach(() => {
        useDispatchMock.mockClear();
        useSelectorMock.mockClear();
    })

    const useSelectorMock = reactRedux.useSelector;
    const useDispatchMock = reactRedux.useDispatch;

    const mockStore = {
        thing1: 'this is thing1',
        somewhere: {
            thing2: 'and I am thing2!',
        }
    };

    it('shows thing1 and thing2', () => {
        render(<TargetComponent/>);
        expect(screen.getByText('this is thing1')).toBeInTheDocument();
        expect(screen.getByText('and I am thing2!')).toBeInTheDocument();
    });

});

This is surprisingly simple, considering that one doesn’t find this example scattered all over the internet. If, for some reason, you require more stuff from react-redux, you can always spread it in there:

jest.mock("react-redux", () => ({
    ...jest.requireActual("react-redux"),
    useSelector: jest.fn(),
    useDispatch: jest.fn(),
}));

but remember that in case you want to build full-fledged test suites, why not go the extra mile to construct your own Test store (cf. link above)? Let’s stay simple here.

Assert execution of a specific action

We don’t even have to change much to look for the call of a specific action. Remember, we presume that our action creator already does the right thing; for this example we just want to know that our button actually dispatches it. E.g. you could have connected that to various conditions, the button might be disabled or whatever… so that could be less trivial than our example.

We just need to know what the original action creator looks like. In jest language, this is known as spying. We add the following parts:

// ... next to the other imports...
import * as ourActions from './actions';



    //... and below this block
    const useSelectorMock = reactRedux.useSelector;
    const useDispatchMock = reactRedux.useDispatch;

    const importantAction = jest.spyOn(ourActions, 'importantAction');

    //...

    //... other tests...

    it('dispatches importantAction', () => {
        render(<TargetComponent/>);
        const button = screen.getByTitle("button title!"); // there are many ways to get the Button itself. i.e. screen.getByRole('button') if there is only one button, or in order to be really safe, with screen.getByTestId() and the data-testid="..." attribute.
        fireEvent.click(button);
        expect(importantAction).toHaveBeenCalled();
    });

That’s basically it. Remember that we really disfigured our dispatch() function. What we can not do this way is a form of:

// arrangement
const mockDispatch = jest.fn();
useDispatchMock.mockImplementation(() => mockDispatch);

// test case:
expect(mockDispatch).toHaveBeenCalledWith(importantAction()); // won't work

Because even if we get a mocked version of dispatch() that way, the spied-on importantAction() call is not the same as the one that happened inside render(). So again: in our limited sense, we just don’t do it. dispatch() doesn’t do anything, importantAction just gets called once inside.

Mock a custom selector

Consider now that there are custom selectors which we don’t care about much; we just need them to not throw any error. I.e., below the definition of simpleThing2, this could look like:

import {useDispatch, useSelector} from "react-redux";
import {importantAction, ourSuperComplexCustomSelector} from "./place_where_these_are_defined";

const TargetComponent = () => {
    const dispatch = useDispatch();
    const simpleThing1 = useSelector(store => store.thing1);
    const simpleThing2 = useSelector(store => store.somewhere.thing2);
    const complexThing = useSelector(ourSuperComplexCustomSelector);
    
    //... whatever you want to do with it...
};

Here, we want to keep it open how exactly complexThing is gained. This selector is considered to already be tested in its own unit test; we just want its value to not fail, and we can really do it like this, with the following parts added / changed:

import React from 'react';
import { render, screen, fireEvent } from '@testing-library/react';
import TargetComponent from './TargetComponent';
import * as reactRedux from 'react-redux';
import * as ourActions from './actions';
import {ourSuperComplexCustomSelector} from "./place_where_these_are_defined";

jest.mock("react-redux", () => ({
    useSelector: jest.fn(),
    useDispatch: jest.fn(),
}));

const mockSelectors = (selector, store) => {
    if (selector === ourSuperComplexCustomSelector) {
        return true;  // or whatever we want it to return
    }
    return selector(store);
}

describe('Test TargetComponent', () => {

    beforeEach(() => {
        useDispatchMock.mockImplementation(() => () => {});
        useSelectorMock.mockImplementation(selector => mockSelectors(selector, mockStore));
    })
    afterEach(() => {
        useDispatchMock.mockClear();
        useSelectorMock.mockClear();
    })

    const useSelectorMock = reactRedux.useSelector;
    const useDispatchMock = reactRedux.useDispatch;

    const mockStore = {
        thing1: 'this is thing1',
        somewhere: {
            thing2: 'and I am thing2!',
        }
    };

    // ... rest stays as it is
});

This wasn’t as obvious to me, as you never know what jest is doing behind the scenes. But indeed, you don’t have to spy on anything for this simple test; there really is functional identity between ourSuperComplexCustomSelector inside the TargetComponent and the argument of useSelector.

So, yeah.

The combination of jest with React Testing Library is obviously quite flexible in allowing you to choose what you actually want to test. This was good news for me, as testing frameworks in general might try to impose their opinions on your style, which isn’t always bad; but in a highly changing environment, as is anything that involves React and Redux, sometimes you just want a very basic test case in order to concern yourself with other stuff.

So, without wanting you to lower your style to such basic constructs, I hope this was of some insight for you. In a more production-ready state, I would still go the way of the krawaller.se blog post stated above; it makes sense. I was just here to get you started 😉

Bridging Eons in Web Dev with Polyfills

Indeed, web development is kind of peculiar. On the one hand, there‘s seldom a field in which new technologies overturn each other at that pace, creating very exciting opportunities ranging from quickly sketching out proof-of-concepts to the efficient construction of real-world applications. On the other hand, there is this strange air of browser dependency and with any new technology one acquires, there‘s always the question of whether this is just some temporary fashion or here to stay.

Which is why it happens that one would like to quickly scaffold a web application on the basis of React and its ecosystem, but has the requirement that the customer is, either voluntarily or forced by higher powers, using some legacy browser like Internet Explorer 11, for which Microsoft has recently announced the end of life support for 30th November this year. Which doesn’t sound nice for the… *searching quickly* … 5% of desktop/laptop users that still use this old horse, but then again, how long can you cling to an outdated thing?

For the daily life of a web developer, whose mind is full of the peculiarities that the evolution of the ECMAScript standard (which basically is JavaScript) brings along, there is the practical helper caniuse.com, telling you, for every item of your code you want to know about, which browser / device has support and which doesn’t.

But what about whole frameworks? When I recently had my quest for an IE11-compatible React app, I already feared that at every corner I would need to double-check all my doing, especially given that for the development itself, one is certainly advised to instead use one of the browsers that come with quite some helpful developer tools, like extensions for React, Redux, etc., but also the features of the built-in Console, where it makes a real difference whether you can only log a certain state as a string of “[object Object]” or as a fully interactive display of object properties. Sorry IE11, there are reasons why you have to go.

But actually, then, I figured that my request is maybe not that far outside the range of rather widespread use cases. Thus, the chance that someone has already tried to tackle the problem isn’t so hopeless. And so this works pretty straightforwardly:

  • Install “react-app-polyfill”, e.g. via npm:
npm install react-app-polyfill
  • At the very top of your index.js, add for good measure:
import "react-app-polyfill/ie11";
import "react-app-polyfill/stable";
  • Include “IE 11” (with quotes) in your package.json under the “browserslist” key, as a new entry under both “production” and “development” (see the example below)
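Assuming the default create-react-app entries, that part of package.json could then look like this:

"browserslist": {
    "production": [
      ">0.2%",
      "not dead",
      "not op_mini all",
      "IE 11"
    ],
    "development": [
      "last 1 chrome version",
      "last 1 firefox version",
      "last 1 safari version",
      "IE 11"
    ]
  }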

That should do it. There are people on the internet that advise removing the “node_modules/.cache” directory when doing this in an existing project.

The term polyfill is actually derived from some kind of putty, which is actually a nice picture. It’s all about allowing a developer to use accustomed features while maintaining compatibility with the actual production environment.

Another very useful polyfill in this undertaking was…

// install 
npm install --save-dev @babel/plugin-transform-arrow-functions

// then add to the "babel" > "plugins" config array:
"babel": {
    "plugins": [
      "@babel/plugin-transform-arrow-functions"
    ]
  }

… as I find the new-fashioned arrow function notation quite useful.

So, this seems to bridge (most of) the worries one encounters in this web dev world where use cases span eons of technology evolution. Now, do you know any more useful polyfills that make your life easier?

Changing the keyboard navigation behaviour of form inputs

The default behaviour in HTML forms is that you can move the focus from one input element to the next via the tab key and submit the form via the enter key. This is also how dialogs work on most operating systems when using the native UI components. This behaviour is consistent across all browsers, and changing it messes with the user’s expectations and reduces accessibility. So I would normally advise against changing this behaviour without good reasons.

However, one of our customers wanted a different behaviour for an application developed by us. This application replaced an older application where the enter key did not submit the form, but moved the focus to the next input element. The ‘muscle memory’ effect made users accidentally submit the form by hitting the enter key, causing frustration. Since this application is not a public web site, but merely a web technology based intranet application with a small and specialized user base, changing the default behaviour is acceptable if the users want it.

So here’s how to do it. The following JavaScript function focusNextInputOnEnter takes a form element as a parameter and changes the focus behaviour on the input elements within this form.

function focusNextInputOnEnter(form) {
  var inputs = form.querySelectorAll('input, select, textarea');
  for (var i = 0; i < inputs.length; i++) {
    var input = inputs[i];
    input.addEventListener('keypress', (function(index) {
      return function(event) {
        if (!isEnter(event.which)) {
          return;
        }
        var nextIndex = index + 1;
        while (nextIndex < inputs.length) {
          var nextInput = inputs[nextIndex];
          if (nextInput.disabled) {
            nextIndex++;
            continue;
          }
          nextInput.focus();
          break;
        }
      };
    })(i));
  }

  function isEnter(keyCode) {
    return keyCode === 13;
  }
}

It works by handling the keypress events on the input elements and checking the key code for the enter key (code 13). It has an additional check so that disabled input elements are skipped.

To apply this change in behaviour to a form we have to call the function when the DOM content is loaded:

<form id="demo-form">
  <input type="text">
  <input type="text" disabled="disabled">
  <input type="checkbox">
  <select>
    <option>A</option>
    <option>B</option>
  </select>
  <textarea></textarea>
  <input type="text">
  <input type="text">
</form>

<script>
  document.addEventListener('DOMContentLoaded', function() {
    focusNextInputOnEnter(document.getElementById('demo-form'));
  });
</script>

I want to reiterate my warning that you should definitely not do this for public web sites, and elsewhere only if you know that this is what your users want.

State Management for emotionally overwhelmed React rookies

The topic of React state management is nowhere near new. Just to be safe about what we’re talking about here: consider a multitude of components which, in nice React fashion, are finely interleaved into each other, each one with a Single Responsibility, each one only holding as much information as it needs. Depending on the complexity of your application, there can be a lot of complex dependencies, where one small component somewhere might cause another small component totally-elsewhere to update (re-render), without these having to really know much about each other, because we strive for Low Coupling. In front-end development, this is not only a matter of “cleaner code”, but also of the performance problem of having to re-render stuff that is not actually changing.

So, just a few months back, a new competitor appeared on the question of React state management, open-sourced by Facebook and called Recoil. The older top dog in this field is the widely-used Redux, with smaller interventions by libraries like MobX that also aimed to offer an alternative for managing state in smaller applications; and when React, in version 16.3, opened up the new standard of the Context API, it already advanced quite a step in the direction of an official React solution to these questions.

There’s probably not a single web developer on earth who wouldn’t agree that in our field, one of the most fun…fundamental challenges is the effort of staying afloat on top of the turbulent JavaScript-osphere. If you are the type of person who doesn’t want to jump on every bandwagon, but still doesn’t want to miss out on all the amazing opportunities that all this progress could give you, you’d better start a bunch of side projects (call them “recreational” if you like) and give yourself a chance to dive into particular technologies with confined scope, for some research time.

This is what I’ve done now, and I’ll try to focus completely on the issues that an ambitious developer can experience when having all these choices. This is what I want to outline for you, because as usual: if you have lots of time to study a single technology, you can succeed in spite of many limitations, and you also get used to certain kinds of things you might have to do that you originally didn’t want to do, and so on and so on.

Now with Redux, nobody really appears to say many bad things about it, and there even are some Mariuses who seem to be absolutely in love with the official Redux documentation, which is actually more of a guide to time-tested Best Practices, giving you the opportunity to do things right and have a scalable state container which supports you even if your application grows to large dimensions. Then there’s stuff like a time-travelling state debugger and the flexible middleware integration which I haven’t even touched yet. When your project has a number of unrelated data structures, there’s the Ducks pattern that advises you to organize your required Reducers, Actions and Action Creators in coherently arranged files. Which, however, turned complicated in my one project in which the types of data objects aren’t as unrelated; I had to remove all the combineReducers() logic and ended up with one large global state object. I now have one source file that just consists of everything-related-with-redux, and for my purpose, this seems fine, but I still have to write the rather cumbersome connect(mapStateToProps, mapDispatchToProps) structure in every component in which I want to access the state. I would prefer smaller state containers, but maybe it’s the structure of my project that makes these complicated.

It really is that way: due to the ever-changing recommendations that come with the evolution of React, the question of how to do things best (read: best for your specific purpose) always stays fresh. Since React 16.8 and the arrival of Hooks, there is a progression towards less boilerplate code, favoring functional components with a leaner appearance. In this spirit, I strived for something less Redux-y. E.g. if I want some text in my state to be set, I would have to do something like:

// ./ducks/TextDucks.js
// avoid having to rely on a magical string, therefore save the string to a constant
const SET_TEXT = 'SET_TEXT'; 

// action creator
export const setTextCreator = (text) => ({type: SET_TEXT, payload: {text}});

const Reducer = (state = initialState, {type, payload}) => {
  //... other state stuff
  if (type === SET_TEXT) {
    return {...state, text: payload.text};
  }
  return state;
}

// ================
// Component file
import {setTextCreator} from './ducks/TextDucks.js';
const mapDispatchToProps = (dispatch) => ({
  setText: text => dispatch(setTextCreator(text)),
});
const Component = ({setText, ...}) => {
  // here can I actually use setText()
};
export default connect(..., mapDispatchToProps)(Component);

Which is more organized than passing along some setText(‘‘) function through my whole component tree, but still feels like a bit of overhead.

Then there was MobX, which seemed to be quite lightweight and clearly laid out a coherent use of the Observable pattern, implemented with its own JavaScript decorators that follow a simple structure. Still, the look-and-feel of this code would differ quite a lot from my usual coding style, which kept me from actually using it. This decision was then reinforced by certain statements online which, some years ago, actually predicted that the advancement of React’s own Context API would make any third-party library redundant. Now, to be fair, React’s current Context API, with its useReducer() and useContext(), also makes it possible to imitate a Redux-like structure already, but consider these as ways of thinking: if you write your code in the same style as you would with Redux’ recommendations, why not use it directly? Clearly, the point of avoiding Redux should go in the direction of thinking differently.

The Context API actually supplies the underlying structure on which Redux’ own <Provider> builds. Insofar, it is no big surprise that you can accomplish similar things with it. Using the Context API, you wrap your whole application like:

// myContext.js
import React from "react";
const TextContext = React.createContext();
export default TextContext;

// App.js
import React from 'react';
import TextContext from './myContext';
// provide the [text, setText] pair itself, so consumers can destructure it
const App = () => <TextContext.Provider value={React.useState("initial text")}>{/* actual app components here */}</TextContext.Provider>;

// some subComponent.js
import React from 'react';
import TextContext from './myContext';
const SubComponent = (props) => {
  const [text, setText] = React.useContext(TextContext);
  // now use setText() as you would with a local React useState dispatch function..
}

Personally, I found that arrangement a bit clearer than the Redux structure, if you’re not used to Redux’ way of thinking anyway. However, if your state grows beyond just a text, you would either keep state information in one large object again, or wrap your <App/> in a ton of different Contexts, which I personally disdained already when I had just three different global state variables to manage.

Having all these possibilities at hand, one might wonder why Facebook felt the need to implement a new way of state management with Recoil. Recoil is still in its experimental phase; however, it didn’t take long to find one aspect very promising: the coding style of managing state in Recoil feels a lot like writing React code itself, which makes it very smooth to operate, as you don’t have to treat global state much differently than local state. Our example would look like this:

// textState.js
import * as Recoil from 'recoil';
export const text = Recoil.atom({key: 'someUniqueKey', default: 'initial text'});

// App.js
import React from 'react';
import {RecoilRoot} from 'recoil';
const App = () => <RecoilRoot>{/* here the actual app components */}</RecoilRoot>;

// some Component.js
import * as Recoil from 'recoil';
import * as TextState from './textState';
const Component = () => {
  const [text, setText] = Recoil.useRecoilState(TextState.text);
  // from then on, you can use setText() like a React useState dispatch function
};

Even simpler, with Recoil you have direct access to the single useRecoilValue() function to just read the text value, and the single useSetRecoilState() function to just dispatch a new text. This avoids the complication of having to treat whatever-in-your-state-is-global differently from what is local. Your <App/> component doesn't grow to ugly levels of indentation, and you can neatly organize everything state-related in a separate file.
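A minimal sketch of those two hooks, reusing the imports from the component above:

// read-only: the component re-renders whenever the atom changes
const text = Recoil.useRecoilValue(TextState.text);

// write-only: the component gets a setter without subscribing to changes
const setText = Recoil.useSetRecoilState(TextState.text);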

As someone who considers himself quite eager to learn new technologies, but who also wants to see results quickly without having to build up a lot of fresh basic understanding first, I have to admit I had the most fun trying out Recoil in my projects. However, I am convinced the demise of Redux is not closing in any time soon, given its focus on sustainability. For the future, I aim to see my one Recoil project grow, and I'll keep you updated on how well that goes…

Adding a dynamic React page to your classic Grails multi-page application

We are developing and maintaining a classic multi-page application, based on the Grails web framework, that is more than 10 years old. With the advent of HTML5 and modern browsers with faster JavaScript engines, users expect a more dynamic and pleasant user experience (UX) from web applications. Our application is used by hundreds of users and our customer expects a stable, familiar and feature-rich experience that continues to improve over time. Something like a complete rewrite of the UI is way out of scope time- and budget-wise.

One of the new feature requests would benefit greatly from a client-side JavaScript implementation, so we looked at our options. Fortunately, it is quite easy to integrate a React app with Grails and the Gradle build system. So we implemented the new page almost completely as a React app while leaving all the other pages as normal server-side rendered Groovy Server Pages (GSP). The result is quite convincing and opens up a transition path to more dynamic client-side pages, perhaps even to a complete transformation into a single-page application (SPA) in the distant future.

Integrating a React app into the Grails build process

The Grails react-webpack profile can serve as a great starting point for integrating a React app into an existing Grails project. First you create the React app for the new page in the folder src/main/webapp, for example using the create-react-app scripts. Then you need to add a $GRAILS_PROJECT/webpack.config.js to configure webpack appropriately, like so:

var path = require('path');

module.exports = {
  // entry point of the react app inside the grails source tree
  entry: './src/main/webapp/index.js',
  output: {
    // emit the bundle where the grails asset pipeline picks it up
    path: path.join(__dirname, 'grails-app/assets/javascripts'),
    publicPath: '/assets/',
    filename: 'bundle.js'
  },
  module: {
    rules: [
      {
        // transpile modern JS and JSX with babel
        test: /\.js$/,
        include: path.join(__dirname, 'src/main/webapp'),
        use: {
          loader: 'babel-loader',
          options: {
            presets: ["@babel/preset-env", "@babel/preset-react"],
            plugins: ["transform-class-properties"]
          }
        }
      },
      {
        test: /\.css$/,
        use: [
          'style-loader',
          'css-loader'
        ]
      },
      {
        test: /\.(jpe?g|png|gif|svg)$/i,
        use: {
          loader: 'url-loader?limit=10000&prefix=assets/!img'
        }
      }
    ]
  }
};

The next step is to move the package.json to the $GRAILS_PROJECT directory, because we want Gradle tasks to take care of building and bundling it as a Grails asset. To make this convenient, we add some Gradle tasks employing yarn to our build.gradle:

buildscript {
    dependencies {
        ...
        classpath "com.moowork.gradle:gradle-node-plugin:1.2.0"
    }
}

...

apply plugin:"com.moowork.node"

...

node {
    version = '12.15.0'
    yarnVersion = '1.22.0'
    distBaseUrl = 'https://nodejs.org/dist'
    download = true
}

task bundle(type: YarnTask, dependsOn: 'yarn') {
    group = 'build'
    description = 'Build the client bundle'
    args = ['run', 'bundle']
}

task webpack(type: YarnTask, dependsOn: 'yarn') {
    group = 'application'
    description = 'Build the client bundle in watch mode'
    args = ['run', 'start']
}

bootRun.dependsOn(['bundle'])
assetCompile.dependsOn(['bundle'])

...

Now we have integrated our new React app with the Grails build system and packaging. The webpack task updates the JavaScript bundle on the fly, so we get almost the same hot-reloading support during development as with the rest of Grails.
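Note that both Gradle tasks assume matching script entries in package.json; the exact commands depend on your setup, but a minimal sketch of the relevant excerpt (script names taken from the gradle args above) could look like this:

{
  "scripts": {
    "bundle": "webpack --mode production",
    "start": "webpack --mode development --watch"
  }
}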

Delivering the react app as a page

Now that we have integrated the React app into the build and packaging process of our Grails application, we need to deliver it when the new page is requested by the browser. This is quite straightforward and can be achieved with a GSP like so:

<html>
<head>
    <meta name="layout" content="main"/>
    <title>
        <g:message code="example.header"/>
    </title>
</head>
<body>
    <div id="react-content">
    </div>
    <asset:javascript src="bundle.js"/>
</body>
</html>
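On the React side, the entry point referenced in webpack.config.js then has to mount the app into exactly that div; a minimal sketch:

// src/main/webapp/index.js
import React from 'react';
import ReactDOM from 'react-dom';
import App from './App';

// mount the react app into the container div provided by the GSP
ReactDOM.render(<App />, document.getElementById('react-content'));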

Now you just have to develop the endpoints for the JavaScript app in the form of normal Grails controllers rendering JSON instead of GSP views. This is extremely easy using Groovy maps and the Grails JSON converters:

import grails.converters.JSON

class DataApiController {

    def getData = {
        def responseData = [
            name: 'John',
            age: 37
        ]
        render responseData as JSON
    }
}
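The React app can then consume such an endpoint with a plain fetch(); the URL below follows Grails' default controller URL mappings and is an assumption about your concrete setup:

// inside a React component (assuming the controller is reachable
// under the default URL mapping /dataApi/getData)
const [data, setData] = React.useState(null);

React.useEffect(() => {
  fetch('/dataApi/getData')
    .then(response => response.json())
    .then(setData);
}, []);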

Conclusion

Grails and its build infrastructure are flexible enough to easily integrate SPA pages into an existing traditional web application. This allows you to deliver the modern UX and features today's users expect without completely rewriting your trusty and proven Grails application. The transition can happen gradually, and individual pages/views can be renewed when needed. That way you continually add value for your customer while incrementally modernizing your application.

Configurable React backend in deployment

In my last post I explained how to make your React app configurable, using the backend endpoint as an example. What I did not make clear is that the depicted approach only provides build-time configurability.

If you want deploy-time or runtime configurability, the simplest approach is to provide global variables in your index.html like so:

<!DOCTYPE html>
<html lang="en">
  <head>
    <script>
      window.REACT_APP_BACKEND_API_BASE_URL = 'http://some.other.server:5000';
      window.APPLICATION_CONFIGURATION = {
        settingA: 'aValue',
        anotherSetting: 'anotherValue'
      };
    </script>
  </head>
  <body>
    <noscript>
      You need to enable JavaScript to run this app.
    </noscript>
    <div id="root"></div>
  </body>
</html>

We use (or activate) this configuration similarly to the build-time approach with .env files:

// if a differing backend is configured, replace the global fetch();
// instead of process.env.REACT_APP_BACKEND_API_BASE_URL we now
// read window.REACT_APP_BACKEND_API_BASE_URL
if (window.REACT_APP_BACKEND_API_BASE_URL !== undefined
    && window.REACT_APP_BACKEND_API_BASE_URL !== '') {
  applyBaseUrlToFetch(window.REACT_APP_BACKEND_API_BASE_URL);
}
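applyBaseUrlToFetch() stems from the previous post; for readers landing here directly, a rough sketch of what it might do (an assumption about the earlier implementation, not a verbatim copy):

// wrap window.fetch so relative URLs get the configured base URL prefixed,
// while absolute URLs pass through untouched
const applyBaseUrlToFetch = (baseUrl) => {
  const originalFetch = window.fetch.bind(window);
  window.fetch = (resource, init) => {
    if (typeof resource === 'string' && resource.startsWith('/')) {
      return originalFetch(baseUrl + resource, init);
    }
    return originalFetch(resource, init);
  };
};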

That way an automated process or a human administrator can deploy the same artifact to different servers with customized settings. This approach is briefly explained in the create-react-app documentation. In addition, a server-side application could dynamically replace placeholders in the HTML file, e.g. with data from a configuration database.

I personally like this approach because it allows us to use the same build artifact for internal testing, staging systems and production at the client's site. It also allows the client to do some basic configuration themselves.