Redux-Toolkit & Solving “ReferenceError: Access lexical declaration … before initialization”

Last week, I ran into a really annoying error in one of our React-Redux applications. It started with a believed-to-be-minor cleanup in our code, escalated into four developers staring at the code in disbelief plus quite some research, and resulted in some rather feasible solutions that, in hindsight, look quite obvious (as is usually the case).

The tech landscape we are talking about here is a React webapp that employs state management via Redux-Toolkit (RTK), the abstraction layer above Redux that simplifies the majority of standard use cases one has to deal with in current-day applications. Personally, I happen to find that useful, because it means a perceptible reduction of boilerplate Redux code (and bundles some dependencies that you would use all the time anyway, like redux-thunk) while maintaining compatibility with the really useful Redux DevTools, and without requiring many new concepts. As our application makes heavy use of URL routing in order to display very different subparts, we implemented our own middleware that does the data fetching upfront in one major step (sometimes called “hydration”).

One of the basic ideas in Redux-Toolkit is the management of your state in substates called slices that aim to unify the handling of actions, action creators and reducers, essentially what was previously described as the Ducks pattern.

We provide unit tests with the Jest framework, and generally speaking, it is more productive to test general logic instead of React components or Redux state updates (although we sometimes make use of that, too). Jest is very modular in the sense that you can add tests for any JavaScript function from wherever in your testing codebase; the only requirement, of course, is that these functions are exported from their respective files. This means that a single Jest test only needs to resolve the imports that it depends on, recursively (i.e. the dependency tree), not the full application.

Now my case was as follows: I wrote a test that essentially was just testing a small switch/case decision function. I noticed there was something fishy when this test resulted in errors of the kind

  • Target container is not a DOM element. (pointing to ReactDOM.render)
  • No reducer provided for key “user” (pointing to node_modules redux/lib/redux.js)
  • Store does not have a valid reducer. Make sure the argument passed to combineReducers is an object whose values are reducers. (also …/redux.js)

This meant there was too much going on. My unit test should neither know of React nor Redux, and as the culprit, I found that one of the imports in the test file used another import that marginally depended on a slice definition, i.e.

///////////////////////////////
// test.js
///////////////////////////////
import {helper} from "./Helpers.js"
...

///////////////////////////////
// Helpers.js
///////////////////////////////
import {SOME_CONSTANT} from "./state/generalSlice.js"
...

Now I only needed some constant located in generalSlice, so one could easily move this to some “./const.js”. Or so I thought.

When I removed the generalSlice.js dependency from Helpers.js, the React application broke. That is, in a place totally unrelated:

ReferenceError: can't access lexical declaration 'loadPage' before initialization

./src/state/loadPage.js/</<
http:/.../static/js/main.chunk.js:11198:100
./src/state/topicSlice.js/<
C:/.../src/state/topicSlice.js:140
> [loadPage.pending]: (state, action) => {...}

From my past failures, I instantly recalled: This is a problem with circular dependencies.

Indeed, topicSlice.js imports loadPage.js and loadPage.js imports topicSlice.js, and while some such circles can be handled by webpack or similar bundlers, in general, import loops can cause problems. And while I knew that before, this case was just difficult for me because of the very nature of RTK.

So this is a thing with the RTK way of organizing files:

  • Every action that clearly belongs to one specific slice can be defined directly in this state file as a property of the “reducers” in createSlice().
  • Every action that is shared across files or consumed in more than one reducer (in more than one slice) can be defined as one of the “extraReducers” in that call.
  • Async logic like our loadPage is defined in thunks via createAsyncThunk(), which gives you a place suited for data fetching etc. and always comes with three action creators: loadPage.pending, loadPage.fulfilled and loadPage.rejected.
  • This looks like:
///////////////////////////////
// topicSlice.js
///////////////////////////////
import {loadPage} from './loadPage.js';

const topicSlice = createSlice({
    name: 'topic',
    initialState,
    reducers: {
        setTopic: (state, action) => {
            state.topic = action.payload;
        },
        ...
    },
    extraReducers: {
        [loadPage.pending]: (state, action) => {
              state.topic = initialState.topic;
        },
        ...
    },
});

export const { setTopic, ... } = topicSlice.actions;

And loadPage itself was a rather complex action creator (thunk) that could dispatch state changes itself. In simplified form, it was built as:

///////////////////////////////
// loadPage.js
///////////////////////////////
import {setTopic} from './topicSlice.js';

export const loadPage = createAsyncThunk('loadPage', async (args, thunkAPI) => {
    const response = await fetchAllOurData();

    if (someCondition(response)) {
        await thunkAPI.dispatch(setTopic(SOME_TOPIC));
    }

    return response;
});

You clearly see that import loop: loadPage needs setTopic from topicSlice.js, topicSlice needs loadPage from loadPage.js. This was rather old code that worked before, so it appeared to me that this is no problem per se – but resolving that completely different dependency in Helpers.js (SOME_CONSTANT from generalSlice.js) made something break.

That was quite weird. It looked like this not-really-required import of SOME_CONSTANT made ./generalSlice.js load first, bringing along a certain set of imports, including some of the dependencies of either loadPage.js or topicSlice.js, so that by the time their dependencies were loaded, no import loop was required anymore. However, it did not appear advisable to trace that down to its core because the application has grown quite a bit already. We needed a solution.

As I said before, it required the brainstorming of multiple developers to find a way of dealing with this. After all, RTK appeared mature enough for me to dismiss the idea that “that thing just isn’t fully thought through yet”. Still, code-splitting is such a basic feature that one would expect some answer to it. What we came up with was:

  1. One could address the action creators like loadPage.pending (which is created as a result of RTK’s createAsyncThunk) by their string equivalent, i.e. ["loadPage/pending"] instead of [loadPage.pending] as the key in the extraReducers of topicSlice. This will be a problem if one ever renames the action from “loadPage” to whatever else (and your IDE and linter can’t help you as much with errors), which could be solved by writing one’s own action name factory that can be stashed away in a file with no imports of its own (a sketch follows below this list).
  2. One could re-think the idea that setTopic should be among the normal reducers in topicSlice, i.e. created automatically. Instead, it can be created in its own file and then referenced by loadPage.js and topicSlice.js in a non-circular manner as export const setTopic = createAction('setTopic'); you then access it in extraReducers as [setTopic]: ... .
  3. One could think hard about the construction of loadPage. This whole thing is actually a hint that loadPage does too many things on too many different levels (i.e. it violates at least the principles of Single Responsibility and Single Level of Abstraction).
    1. One fix would be to at least do away with the automatically created loadPage.pending / loadPage.fulfilled / loadPage.rejected actions and instead define custom actions via createAction("loadPage.whatever"), with whatever describes your intention best, and put all of these in your own file (as in idea 2).
    2. Another fix would be splitting the parts of loadPage into separate thunks, and then being able to react to the automatically created pending / fulfilled / rejected actions of each.
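
To illustrate idea 1: a minimal sketch of such an action name factory, with made-up names. The point is that this file has no imports of its own, so it can never take part in an import loop:

///////////////////////////////
// actionNames.js (hypothetical)
///////////////////////////////
// No imports here, so this file can never be part of an import cycle.
export const LOAD_PAGE = 'loadPage';
export const pending = (type) => `${type}/pending`;
export const fulfilled = (type) => `${type}/fulfilled`;
export const rejected = (type) => `${type}/rejected`;

// Usage in topicSlice.js, instead of [loadPage.pending]:
// extraReducers: {
//     [pending(LOAD_PAGE)]: (state, action) => {...},
// }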

I chose idea 2 because it was the quickest, while allowing myself to let idea 3.1 rest a bit. I guess that next time, I should favor that one, because it makes the developer’s intention (as in… mine) clearer and the Redux DevTools output even more descriptive.

In case you’re still lost, my solution looks as follows:

///////////////////////////////
// sharedTopicActions.js
///////////////////////////////
import {createAction} from "@reduxjs/toolkit";
export const setTopic = createAction('topic/set');
//...

///////////////////////////////
// topicSlice.js
///////////////////////////////
import {setTopic} from "./sharedTopicActions";
import {loadPage} from './loadPage.js'; // still imported; the circle is broken because loadPage no longer imports topicSlice
const topicSlice = createSlice({
    name: 'topic',
    initialState,
    reducers: {
        ...
    },
    extraReducers: {
        [setTopic]: (state, action) => {
            state.topic = action.payload;
        },

        [loadPage.pending]: (state, action) => {
              state.topic = initialState.topic;
        },
        ...
    },
});

///////////////////////////////
// loadPage.js, only change in this line:
///////////////////////////////
import {setTopic} from "./sharedTopicActions";
// ... Rest unchanged

So there’s a simple tool to break circular dependencies in more complex Redux-Toolkit slice structures. It was weird that it never occurred to me before; up until this day, I was always able to solve circular dependencies by shuffling other parts of the imports.

My problem is fixed. The application works as expected, all the tests now work as they should, everything is modular enough, and the required change was not a major structural redesign. It required some hard thinking but had a rather simple solution. I have trust in RTK again, and one can be safe again in the assumption that JavaScript imports are at least deterministic. Although I will never do the work to analyse what it actually was about my SOME_CONSTANT import that unknowingly fixed the problem beforehand.

Is there any reason to disfavor idea 3.1, though? Feel free to comment your own thoughts on that issue 🙂

Don’t shoot your messengers

Writing small, focused tests, often called unit tests, is one of the things that look easy at the outset but turn out to be more delicate than anticipated. Writing a three-lines-of-code unit test in the triple-A structure (arrange, act, assert) soon became second nature to me, but there were lots of cases that resisted easy testing.

Using mock objects is the typical next step to accommodate this resistance, which makes the test code more complex. This leads to 5 to 10 lines of test code for easy mock-based tests and up to thirty or even fifty lines of test code where a lot of moving parts are mocked and chained together to test one single method.

So, the first reaction for a more complicated testing scenario is to make the test more complicated.

But even with the powerful combination of mock objects and dependency injection, there are situations where writing suitable tests seems impossible. In the past, I regarded these code blocks as “untestable” and omitted the tests because their economic viability seemed debatable.

I wrote small tests for easy code, long tests for complicated code and no tests for defiant code. The problem always seemed to be the tests that just didn’t cut it.

Until I could recognize my approach in a new light: I was encumbering the messenger. If the message was too harsh, I would outright shoot him.

The tests tried to tell me something about my production code. But I always saw the problem with them, not the code.

Today, I can see that the tests I never wrote because the “test story” at hand was too complicated for my abilities were already telling me something important.

The tests you decide not to write because they’re too much of a hassle tell you that your code structure needs improvement. They already deliver their message to you, even before they exist.

With this insight, I can oftentimes fix the problem where it is caused: in the production code. The test coverage increases and the tests become simpler.

Let’s look at a small example that tries to show the line of thinking without being too extensive:

We developed a class in Java that represents a counter that gets triggered and imposes a wait period on every tenth trigger impulse:

public class CountAndWait {
	private int triggered;
	
	public CountAndWait() {
		this.triggered = 0;
	}
	
	public void trigger() {
		this.triggered++;
		if (this.triggered == 10) {
			try {
				Thread.sleep(1000L);
			} catch (InterruptedException e) {
				Thread.currentThread().interrupt();
			}
			this.triggered = 0;
		}
	}
}

There is a lot going on in the code for such a simple functionality. Especially the try-catch block catches my eye and makes me worried when thinking about tests. Why is it even there? Well, here is a starter link for an explanation.

But even without advanced threading issues, the normal functionality of our code is worrisome enough. How many lines of code will a test contain that covers the sleep? Should I really use a loop in my test code? Will the test really have a runtime of one second? That’s the same amount of time several hundred other unit tests require for much more coverage. Is this an economically sound testing approach?

The test doesn’t even exist and already sends a message: Your production code should be structured differently. If you focus on the “test story”, perhaps a better structure emerges?

The “story of the test” is the description of the production code path that is covered and asserted by the test. In our example, I want the story to be:

“When a counter object is triggered for the tenth time, it should impose a wait. Afterwards, the cycle should repeat.”

Nothing in the story of this test talks about interruption or exceptions, so if this code gets in the way, I should restructure it to eliminate it from my story. The new production code might look like this:

public class CountAndWait {
	private final Runnable waiting;
	private int triggered;
	
	public static CountAndWait forOneSecond() {
		return new CountAndWait(() -> {
			try {
				Thread.sleep(1000L);
			} catch (InterruptedException e) {
				Thread.currentThread().interrupt();
			}			
		});
	}
	
	public CountAndWait(Runnable waiting) {
		this.waiting = waiting;
		this.triggered = 0;
	}
	
	public void trigger() {
		this.triggered++;
		if (this.triggered == 10) {
			this.waiting.run();
			this.triggered = 0;
		}
	}
}

That’s a lot more code than before, but we can concentrate on the latter half. We can now inject a mock object that attests to how often it was run. This mock object doesn’t need to sleep for any amount of time, so the unit test is fast again.

Instead of making the test more complex, we introduced additional structure (and complexity) into the production code. The resulting unit test is rather easy to write:

class CountAndWaitTest {
	@Test
	@DisplayName("Waits after 10 triggers and resets")
	void wait_after_10_triggers_and_reset() {
		Runnable simulatedWait = mock(Runnable.class);
		CountAndWait target = new CountAndWait(simulatedWait);
		
		// no wait for the first 9 triggers
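		// Repeat is a small helper (not part of JUnit or Mockito) that calls the given method the specified number of times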
		Repeat.times(9).call(target::trigger);
		verifyNoInteractions(simulatedWait);
		
		// wait at the 10th trigger
		target.trigger();
		verify(simulatedWait, times(1)).run();
		
		// reset happened, no wait for another 9 triggers
		Repeat.times(9).call(target::trigger);
		verify(simulatedWait, times(1)).run();
	}
}

It’s still different from a simple 3-liner test, but the “and” in the test story hints at a more complex story than “get y for x”, so that might be ok. We could probably simplify the test even more if we got access to the internal trigger count and verify the reset directly.
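
A minimal sketch of that variant, assuming a hypothetical package-private accessor on CountAndWait and AssertJ-style assertions:

	// added to CountAndWait, package-private for tests (hypothetical):
	int triggeredCount() {
		return this.triggered;
	}

	// the reset can then be verified directly:
	@Test
	void resets_counter_after_10th_trigger() {
		CountAndWait target = new CountAndWait(() -> {});	// no-op wait, no mock needed
		Repeat.times(10).call(target::trigger);
		assertThat(target.triggeredCount()).isEqualTo(0);
	}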

I hope the example was clear enough. For me, the revelation that test problems more often than not have their root cause in the production code is a clear message to improve my ability to write code that facilitates testing instead of obstructing it.

I don’t shoot/omit my messengers anymore even if their message means more work for me.

It’s not a bug, it’s a missing test

Recently, I changed some wording I use when talking about code and code problems. In my opinion, the new words correlate better with the root cause of the problem, while the old words were “nearer” to the symptoms.

Let me explain the changes with a small code example that serves no other purpose than to contain a few problems:

public class InMemoryItemRepository {
	private final Map<String, Item> items;
	
	public InMemoryItemRepository() {
		this.items = new HashMap<>();
	}
	
	public Optional<Item> itemFor(String itemNumber) {
		return Optional.of(this.items.get(itemNumber));
	}
}

This class is a concrete implementation of an “item repository” that stores items by their “item number”. You can retrieve items by calling the itemFor method and giving it a valid itemNumber. Otherwise, you’ll end up with an empty Optional.

The first problem

The first problem in this code is a code smell. The data structure used to store the items is a HashMap. In Java, the HashMap implementation is not thread-safe:

Note that this implementation is not synchronized. If multiple threads access a hash map concurrently, and at least one of the threads modifies the map structurally, it must be synchronized externally.

https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/util/HashMap.html

With our current implementation, we leak this limitation onto our clients, who will be totally unaware of it, because nothing in the class design hints towards a HashMap. In most production environments, our in-memory implementation might even be swapped out for a database-based one. And the database implementation hopefully is thread-safe.
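
One possible remedy, sketched under the assumption that clients really do access the repository concurrently, is to swap the HashMap for a ConcurrentHashMap:

	// Only the storage choice changes, the rest of the class stays as above.
	// java.util.concurrent.ConcurrentHashMap is safe for concurrent access.
	private final Map<String, Item> items = new ConcurrentHashMap<>();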

A code smell is a bug

My change in wording calls this type of code problem a bug (instead of a code smell). It might be possible that right now, this class is only used in a single-threaded manner and the implementation flaw doesn’t matter. In that case, it’s a bug in hibernation. Right now, it sleeps, but the day will come when it wakes up. That doesn’t change its bug-like features, just its immediate damage potential.

If you label a code smell as a bug, you highlight the damage potential instead of the current “nice to have” prioritization.

The second problem

Let’s return to the code example. There is another problem with this implementation: If you ask for an item number that is not stored in the repository, you don’t get an empty Optional, you’ll get a nice and unexpected NullPointerException.

A small unit test that asks for any item number on an empty repository reveals the problem:

	@Test
	void returnsEmptyIfNotStored() {
		var target = new InMemoryItemRepository();
		assertThat(
			target.itemFor("non-existent")
		).isEmpty();
	}

This test will run red with a NullPointerException, pointing to this line in the implementation:

		return Optional.of(this.items.get(itemNumber));

Of course, this is “just a typo” in the way we construct the Optional. Because our HashMap returns null if a given key is not stored in it, we need to call Optional.ofNullable() and have it convert null to Optional.empty:

		return Optional.ofNullable(this.items.get(itemNumber));

A bug is a missing test

This “little fix” makes our unit test pass and the repository useful. My change in wording calls every bug a “missing test”. This points out that the test you write as you fix the bug could have prevented the bug’s damage altogether.

If you were surprised by my assumption that you write tests when you fix bugs, please consider adopting this as a habit. It is the last possible moment to improve your test coverage at a place that has proven to be important enough to have tests and undertested enough to exhibit bugs. If you want your project to be antifragile (and there are good reasons why it should be), start by strengthening it at every place it breaks.

Tests for every code smell?

The combination of both changes would make every code smell a missing test. Right now, I’m not sure if this needs to be an automatism. The existence of a code smell hints towards a more fragile part of the system, but I’m not sure if “potential fragility” is a valid indicator for additional tests.

Of course, if you program completely test-driven, this kind of thought probably seems pointless to you.

What are your thoughts on this topic and specifically on compulsory testing for every code smell?

Flexible React-Redux Hook Mocks in jest & React Testing Library

Best practices in mocking React components aren’t entirely unheard of, even in connection with a Redux state, and even in connection with the quite convenient hooks ({ useSelector, useDispatch }).

So, of course, the knowledge of a proper approach is at hand. And in many scenarios, it makes total sense to follow their principle of arranging your Redux state in your test exactly as you would in your real-world app.

Nevertheless, there are reasons why one wants to introduce a quick, non-overwhelming unit test of a particular component, e.g. when your system is in a state of high fluctuation because multiple parties are still converging on their interfaces and requirements; a complete mock would be quite premature, more of a major distraction than any help.

Proponents of strict TDD would now object, of course. Anyway – fortunately, the combination of jest with React Testing Library is flexible enough to give you the tools to drill into any of your state-connected components without much knowledge of the rest of your React architecture*

(*) of course, these tests presume knowledge of your current Redux store structure. In the long run, I’d also consider this bad style, but during highly fluctuating phases of development, I’d favour the explicit “this is how the store is intended to look” as safety by documentation.

On a basic test frame, I want to show you three things:

  1. Mocking useSelector in a way that allows for multiple calls
  2. Mocking useDispatch in a way that allows expecting a specific action creator to be called.
  3. Mocking useSelector in a way that allows for mocking a custom selector without its actual implementation

(Upcoming in a future blog post: Mocking useDispatch in a way that allows for async dispatch-chaining as known from Thunk / Redux Toolkit. But I’m still figuring out how exactly to do it…)

So consider your component, e.g., as simple as this:

import {useDispatch, useSelector} from "react-redux";
import {importantAction} from "./place_where_these_are_defined";

const TargetComponent = () => {
    const dispatch = useDispatch();
    const simpleThing1 = useSelector(store => store.thing1);
    const simpleThing2 = useSelector(store => store.somewhere.thing2);

    return <>
        <div>{simpleThing1}</div>
        <div>{simpleThing2}</div>
        <button title={"button title!"} onClick={() => dispatch(importantAction())}>Do Important Action!</button>
    </>;
};

export default TargetComponent;

Multiple useSelector() calls

If we had a single call to useSelector, we’d be done as easily as making useSelector a jest.fn() with a mockReturnValue(). But we don’t want to constrain ourselves to that. So, what works in our example is to actually construct a mock store as a plain object, and give our mocked useSelector a mockImplementation that applies its argument (which, as a selector, is a function of the store) to that store.

Note that for this simple example, I did not concern myself with useDispatch() that much. It just returns a dispatch function of () => {}, i.e. it won’t throw an error but also doesn’t do anything else.

import React from 'react';
import { render, screen, fireEvent } from '@testing-library/react';
import TargetComponent from './TargetComponent';
import * as reactRedux from 'react-redux';
import * as ourActions from './actions';

jest.mock("react-redux", () => ({
    useSelector: jest.fn(),
    useDispatch: jest.fn(),
}));

describe('Test TargetComponent', () => {

    beforeEach(() => {
        useDispatchMock.mockImplementation(() => () => {});
        useSelectorMock.mockImplementation(selector => selector(mockStore));
    })
    afterEach(() => {
        useDispatchMock.mockClear();
        useSelectorMock.mockClear();
    })

    const useSelectorMock = reactRedux.useSelector;
    const useDispatchMock = reactRedux.useDispatch;

    const mockStore = {
        thing1: 'this is thing1',
        somewhere: {
            thing2: 'and I am thing2!',
        }
    };

    it('shows thing1 and thing2', () => {
        render(<TargetComponent/>);
        expect(screen.getByText('this is thing1')).toBeInTheDocument();
        expect(screen.getByText('and I am thing2!')).toBeInTheDocument();
    });

});

This is surprisingly simple considering that one doesn’t find this example scattered all over the internet. If, for some reason, you require more stuff from react-redux, you can always spread it in there:

jest.mock("react-redux", () => ({
    ...jest.requireActual("react-redux"),
    useSelector: jest.fn(),
    useDispatch: jest.fn(),
}));

but remember: in case you want to build full-fledged test suites, why not go the extra mile and construct your own test store (cf. the link above)? Let’s stay simple here.

Assert execution of a specific action

We don’t even have to change much to look for the call of a specific action. Remember, we presume that our action creator is doing the right thing already; for this example we just want to know that our button actually dispatches it. E.g. you could have connected the dispatch to various conditions, the button might be disabled, or whatever else, … so that could be less trivial than our example.

We just need to know what the original action creator looked like. In jest language, this is known as spying. We add the following parts:

// ... next to the other imports...
import * as ourActions from './actions';

    //... and below this block
    const useSelectorMock = reactRedux.useSelector;
    const useDispatchMock = reactRedux.useDispatch;

    const importantAction = jest.spyOn(ourActions, 'importantAction');

    //...

    //... other tests...

    it('dispatches importantAction', () => {
        render(<TargetComponent/>);
        const button = screen.getByTitle("button title!"); // there are many ways to get the button itself, e.g. screen.getByRole('button') if there is only one button, or, in order to be really safe, screen.getByTestId() with the data-testid="..." attribute.
        fireEvent.click(button);
        expect(importantAction).toHaveBeenCalled();
    });

That’s basically it. Remember that we really disfigured our dispatch() function. What we cannot do this way is something of the form:

// arrangement
const mockDispatch = jest.fn();
useDispatchMock.mockImplementation(() => mockDispatch);

// test case:
expect(mockDispatch).toHaveBeenCalledWith(importantAction()); // won't work

Because even if we get a mocked version of dispatch() that way, the spied-on importantAction() call is not the same as the one that happened inside render(). So again: in our limited sense, we just don’t do it. dispatch() doesn’t do anything, and importantAction just gets called once inside.

Mock a custom selector

Consider now that there are custom selectors which we don’t care about much; we just need them to not throw any error. E.g. below the definition of simpleThing2, this could look like:

import {useDispatch, useSelector} from "react-redux";
import {importantAction, ourSuperComplexCustomSelector} from "./place_where_these_are_defined";

const TargetComponent = () => {
    const dispatch = useDispatch();
    const simpleThing1 = useSelector(store => store.thing1);
    const simpleThing2 = useSelector(store => store.somewhere.thing2);
    const complexThing = useSelector(ourSuperComplexCustomSelector);
    
    //... whatever you want to do with it....
};

Here, we want to keep it open how exactly complexThing is gained. This selector is considered to already be tested in its own unit test; we just want its value to not fail. We can really do it like this, with the added and changed parts shown below:

import React from 'react';
import { render, screen, fireEvent } from '@testing-library/react';
import TargetComponent from './TargetComponent';
import * as reactRedux from 'react-redux';
import * as ourActions from './actions';
import {ourSuperComplexCustomSelector} from "./place_where_these_are_defined";

jest.mock("react-redux", () => ({
    useSelector: jest.fn(),
    useDispatch: jest.fn(),
}));

const mockSelectors = (selector, store) => {
    if (selector === ourSuperComplexCustomSelector) {
        return true;  // or whatever value we want the selector to return
    }
    return selector(store);
}

describe('Test TargetComponent', () => {

    beforeEach(() => {
        useDispatchMock.mockImplementation(() => () => {});
        useSelectorMock.mockImplementation(selector => mockSelectors(selector, mockStore));
    })
    afterEach(() => {
        useDispatchMock.mockClear();
        useSelectorMock.mockClear();
    })

    const useSelectorMock = reactRedux.useSelector;
    const useDispatchMock = reactRedux.useDispatch;

    const mockStore = {
        thing1: 'this is thing1',
        somewhere: {
            thing2: 'and I am thing2!',
        }
    };

    // ... rest stays as it is
});

This wasn’t as obvious to me, as you never know what jest is doing behind the scenes. But indeed, you don’t have to spy on anything for this simple test: the ourSuperComplexCustomSelector used inside TargetComponent and the argument of useSelector really are the identical function object.

So, yeah.

The combination of jest with React Testing Library is obviously quite flexible in allowing you to choose what you actually want to test. This was good news for me, as testing frameworks in general might try to impose their opinions on your style, which isn’t always bad – but in such a highly changing environment as anything that involves React and Redux, sometimes you just want to have a very basic test case in order to concern yourself with other stuff.

So, without wanting you to lower your style to such basic constructs, I hope this was of some insight for you. In a more production-ready state, I would still go the way of that krawaller.se blog post mentioned above; it makes sense. I was just here to get you started 😉

Last night, an end-to-end test saved my life

but not with a song, only with an assertion failure. Okay, I’ll admit, this is not the only hyperbole in the blog title. Let’s set things right: The story didn’t happen last night, it didn’t even happen at night (because I’m not working night hours). In fact, it was a well-lit sunny day. And the test didn’t save my life. It did save me from some embarrassment, though. And spared a customer some trouble and hotfixing sessions with me.

The true parts of the blog title are: An end-to-end test reported an assertion failure that helped me to fix a bug before it got released. In other words: An end-to-end test did its job and I felt fine. And that’s enough of the obscure song references from the 1980s, hopefully.

Let me explain the setting. We develop a big system that runs in production 24/7-style and is perpetually adjusted and improved. The system should run unattended and only require human attention when a real incident happens, either in the data or the system itself. If you look at the system from space, it looks like this:

A data source (in fact, lots of data sources) occasionally transmits data to the system that gets transformed and sent to a third-party system that does all kinds of elaborate analysis with it. Of course, there is more to the real system than that, but for our story, it is sufficient to know that there are several third-party systems, each with their own data formats. We’ll call them third-party system A (TPS-A) and TPS-B in this story.

Our system is secured by lots and lots of unit tests, a good number of integration tests and a handful of end-to-end tests. All tests run green all the time. The end-to-end tests (E2ET) try to replicate the production environment as closely as possible by running on the target operating system and using data transfer means like in the real case. The test boots up the complete system and simulates the data source and the third-party systems. By configuring the system so that it transfers back to the E2ET, we can write assertions of the kind “given this specific input data, we expect this particular output data from the system, however it chooses to produce it”. This kind of broad-brush test is invaluable because it doesn’t care for specifics in the system. It only cares for output from the system.

This kind of E2ET is also difficult to write, complicated to run, prone to brittleness and obscure in its failure statement. If you write a three-line unit test, it is straightforward, fast and probably pretty specific when it breaks: “This one thing that I’m testing is wrong now”. An E2ET as described here is like the Oracle of Delphi:

  • E2ET: “Somewhere in your system, something doesn’t work the way I like it anymore.”
  • Developer: “Can you be more specific?”
  • E2ET: “No, but here are 2000 lines of debug output that might help you on your quest for the cause.”
  • Developer: “But I didn’t change anything.”
  • E2ET: “Oh, in that case, it could as well be the weather or a slow network. Mind if I run again?”

This kind of E2ET is also rather slow and takes its sweet time resetting the workspace state, booting the system again and using prolonged timeouts for every step. So you wait several minutes for the E2ET to fail again:

  • E2ET: “Yup, there is still something fishy with your current code.”
  • Developer: “It’s the same code you let pass yesterday.”
  • E2ET: “Well, today I don’t like it!”
  • Developer: “Okay, let me dive into the debug output, then.”

Ten to twenty minutes pass.

  • Developer: “Is it possible that you just choke on a non-free network port?”
  • E2ET: “Yes, I’m supposed to wait for the system’s output on this port and cannot bind to it.”
  • Developer: “You cannot bind to it because an instance of yourself is stuck in the background and holding onto the port.”
  • E2ET: “See? I’ve told you there is something wrong with your system!”
  • Developer: “This isn’t a problem with the system’s code. This is a problem with your code.”
  • E2ET: “Hey! Don’t blame me! I’m here to help. You can always choose to ignore or delete me if I’m too much of a burden to you.”

This kind of E2ET cries wolf lots of times for problems in the test code itself, for too optimistic timeouts or just oddities in the environment. If such an E2ET fails, it’s common practice to run it again to see if the problem persists.

How many false alarms can you tolerate before you stop listening?

In my case, I choose to reduce the number of E2ETs to the essential minimum and listen to them every time. If a test raises a false alarm, invest the time to come up with a way to make it more robust without reducing its sensitivity. But listen to the test every time.

Because, in my story, the test insisted that third-party systems A and B both didn’t receive half of their data. I could rule out stray effects in the network or on the hard disk pretty fast, because the other half of the data arrived just fine. So I invested considerable time in understanding the debug output. This led me nowhere – it seemed that the missing data wasn’t sent to the system in the first place. Checking the E2ET disproved this hypothesis. The test sent as much data to the system as ever. But half of it went missing along the way. Now I reviewed all changes I made in the last few days. Some of them affected the data export to the TPS. But all unit and integration tests in this area insisted that everything worked as intended.

It is important to have this kind of “multiple witnesses”. Unit tests are like traces in a criminal investigation: They report one very specific piece of information about one very specific part of the code and are only useful in larger numbers. Integration tests testify to the possibility of combining several code parts. In my case, this meant that the unit tests guaranteed that all parts involved in the data export work correctly on their own (in isolation). The integration tests vouched that, given the right combination of data export parts, they will work as intended.

And this left one area of the code as the main suspect: Something in the combination of export parts must go wrong in the real case. The code that wires the parts together is the only code not tested by unit and integration tests, but by E2ET. Both other test types base their work on the hypothesis “given that everything is wired together like this”. If this hypothesis doesn’t hold true in the real case, no test but the E2ET finds a problem, because the E2ET is based on another hypothesis: “given that the system is started for real”.

I went through all the changes of the wiring code and there it was: A glorious TODO comment, written by myself on a late Friday afternoon, stating: “TODO: Register format X export again after you’ve finished issue Y”. I totally forgot about it over the weekend. I had only added the comment into the code, not as an issue comment. Friday afternoon me wasn’t cooperative with future or current me. In effect, I totally played myself. I don’t remember removing the registering code or writing the comment. It probably was a minor side change for an unimportant aspect of the main issue. The whole thing was surely done in a few seconds and promptly forgotten. And the only guardian that caught it was the E2ET, using the most nonspecific words for it (“somewhere, something went wrong”).

If I had chosen to ignore or modify the E2ET, the bug would have made it to production. It would have caused a loss of current data for the TPS. Maybe they would have detected it. But maybe, they would have just chosen to operate on the most recent data – data that doesn’t get updated anymore. In short, we don’t want to find out.

So what is the moral of the story?
Always have an end-to-end test for your most important functionality in place. Let it deal with your system from the outside. And listen to it, regardless of the effort it takes.

And what can you do if your E2ET has saved you?

  • Give your test a medal. No, seriously, leave a comment in the test code that it was helpful.
  • Write a unit or integration test that replicates the specific cause. You want to know immediately if you make the same error again in the future. Your E2ET is your last line of defense. Don’t let it be your only one. You’ve been shown a weakness in your test setup; remediate it.
  • Blame past you for everything. Assure future you that you are a better developer than this.
  • Feel good that you were able to neutralize your faux pas in time. That’s impressive!

When was the last time a test saved you? Leave the story or a link to it in the comments. I can’t be the only one.

Oh, and if you find the random links to 80s music videos weird: At least I didn’t rickroll you. The song itself is from 1987 and would fit my selection.

Integrating catch2 with CMake and Jenkins

A few years back, we posted an article on how to get CMake, googletest and Jenkins to play nicely with each other. Since then, Phil Nash’s catch testing library has emerged as arguably the most popular thing to write your C++ tests in. I’m going to show how to set up a small sample project that integrates catch2, CMake and Jenkins nicely.

Project structure

Here is the project structure we will be using in our example. It is a simple library that implements left-pad: A utility function to expand a string to a minimum length by adding a filler character to the left.

├── CMakeLists.txt
├── source
│   ├── CMakeLists.txt
│   ├── string_utils.cpp
│   └── string_utils.h
├── externals
│   └── catch2
│       └── catch.hpp
└── tests
    ├── CMakeLists.txt
    ├── main.cpp
    └── string_utils.test.cpp

As you can see, the code is organized in three subfolders: source, externals and tests. source contains your production code. In a real world scenario, you’d probably have a couple of libraries and executables in additional subfolders in this folder.

The source folder

set(TARGET_NAME string_utils)

add_library(${TARGET_NAME}
  string_utils.cpp
  string_utils.h)

target_include_directories(${TARGET_NAME}
  INTERFACE ./)

install(TARGETS ${TARGET_NAME}
  ARCHIVE DESTINATION lib/)

The library is added to the install target because that’s what we typically do with our artifacts.
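
The article doesn’t show the library code itself, so here is a minimal sketch of what string_utils.h might declare; the exact signature is an assumption:

#pragma once
#include <cstddef>
#include <string>

// Expands text to at least minimum_length characters
// by prepending the filler character on the left.
std::string left_pad(std::string const& text,
                     std::size_t minimum_length,
                     char filler);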

I use externals as a place for libraries that go into the project’s VCS. In this case, that is just the catch2 single-header distribution.

The tests folder

I typically mirror the filename and path of the unit under test and add some extra tag, in this case the .test. You should really not need headers here. The corresponding CMakeLists.txt looks like this:

set(UNIT_TEST_LIST
  string_utils)

foreach(NAME IN LISTS UNIT_TEST_LIST)
  list(APPEND UNIT_TEST_SOURCE_LIST
    ${NAME}.test.cpp)
endforeach()

set(TARGET_NAME tests)

add_executable(${TARGET_NAME}
  main.cpp
  ${UNIT_TEST_SOURCE_LIST})

target_link_libraries(${TARGET_NAME}
  PUBLIC string_utils)

target_include_directories(${TARGET_NAME}
  PUBLIC ../externals/catch2/)

add_test(
  NAME ${TARGET_NAME}
  COMMAND ${TARGET_NAME} -o report.xml -r junit)

The list and the loop help me to list the tests without duplicating the .test tag everywhere. Note that there’s also a main.cpp included which only defines catch’s main function:

#define CATCH_CONFIG_MAIN
#include <catch.hpp>

The add_test call at the bottom tells CTest (CMake’s bundled test-runner) how to run catch. The “-o” switch commands catch to direct its output to a file, report.xml. The “-r” switch sets the reporter to JUnit format. We will need both to integrate with Jenkins.

The top-level folder

The CMakeLists.txt in the top-level folder needs to call enable_testing() for our setup. Other than that, it just directs to the subfolders via add_subdirectory().
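
A minimal sketch of what that top-level CMakeLists.txt could look like (the project name is made up):

cmake_minimum_required(VERSION 3.10)
project(left_pad_example CXX)

# Required so that CTest picks up the add_test() calls in the subfolders
enable_testing()

add_subdirectory(source)
add_subdirectory(tests)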

Jenkins

Now all that is needed is to set up Jenkins accordingly. Set up Jenkins to get your code and add a “CMake Build” build step. Hit “Add build tool invocation” and check “Use cmake” to let CMake handle the invocation of your build tool (e.g. make). You also specify the target here, which is typically “install” or “package”, via the “--target” switch.

Now you add another step that runs the tests via CTest. Add another build step, this time “CMake/CPack/CTest Execution”, and pick CTest. The one quirk with this is that it will let the build fail when CTest returns a non-zero exit code – which it does when any tests fail. Usually, you want the build to become unstable, not failed, if that happens. Hence set “1-65535” in the “Ignore exit codes” input.

The final step is to let Jenkins use the report.xml that we had CTest generate so it can generate the test result charts and tables. To do that, add the post-build action “Publish JUnit test result report” and point it to tests/report.xml.

Done!

That’s it. Now you’ve got your CI running nice catch tests. The code for this example is available on our github.

Functional tests for Grails with Geb and geckodriver

Previously we had many functional tests using the selenium-rc plugin for Grails. Many were initially recorded using Selenium IDE, then refactored to be more maintainable. These refactorings introduced “driver” objects used to interact with common elements on the pages and runners which improved the API for walking through a multipage process.

Selenium-rc was deprecated quite a while ago, and support for Firefox broke every once in a while. Finally we were forced to migrate to the current state of the art in Grails functional testing: Geb.

Generally I can say it is really a major improvement over the old selenium API. The page concept is similar to our own drivers, with some nice features (a small page sketch follows below the list):

  • At-Checkers provide a standardized way of checking if we are at the expected page
  • Default and custom per page timeouts using atCheckWaiting
  • Specification of relevant content elements using a jQuery-like syntax and support for CSS selectors
  • The so-called modules ease the interaction with form elements and the like
  • Much better error messages
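
A minimal sketch of such a page class, with made-up content for a hypothetical login page (DashboardPage is assumed to be another page class of ours):

import geb.Page

class LoginPage extends Page {
  static url = "login"
  // At-checker: verifies that we really are at the expected page
  static at = { title == "Login" }
  static content = {
    // jQuery-like selectors for the relevant content elements
    username { $("input", name: "username") }
    password { $("input", name: "password") }
    loginButton(to: DashboardPage) { $("input", type: "submit") }
  }
}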

While Geb is a real improvement over selenium, it comes with some quirks. Here is some advice that may help you in successfully using Geb in the context of your (Grails) web application.

Cross-platform testing in Grails

Geb (or more specifically the underlying webdriver component) requires a geckodriver binary to work correctly with Firefox. This binary is naturally platform-dependent. We have a setup with mostly Windows machines for the developers and Linux build slaves and target systems. So we need binaries for all required platforms and have to configure them accordingly. We simply put them into a folder in our project and added the following configuration to the test environment in Config.groovy:

environments {
  test {
    def basedir = new File(new File('.', 'infrastructure'), 'testing')
    def geckodriver = 'geckodriver'
    if (System.properties['os.name'].toLowerCase().contains('windows')) {
      geckodriver += '.exe'
    }
    System.setProperty('webdriver.gecko.driver', new File(basedir, geckodriver).canonicalPath)
  }
}

Problems with File-Uploads

If you are plagued with file uploads not working, it may be a problem with certain Firefox versions. Even though the fix has landed in Firefox 56, I want to add the workaround in case you still experience problems. Add the following to your GebConfig.groovy:

import org.openqa.selenium.firefox.FirefoxDriver
import org.openqa.selenium.firefox.FirefoxProfile

driver = {
  FirefoxProfile profile = new FirefoxProfile()
  // Workaround for issue https://github.com/mozilla/geckodriver/issues/858
  profile.setPreference('dom.file.createInChild', true)
  new FirefoxDriver(profile)
}

Minor drawbacks

While the Geb DSL is quite readable and allows concise tests, the IDE support is not there. You do not get much code assist when writing the tests and calling functions of the page objects like in our own, code-based solution.

Conclusion

After taking the first few hurdles, writing new functional tests with Geb really feels good and is a breeze compared to our old selenium tests. Converting them will be a lot of work and will only happen on a case-by-case basis, but our coverage with Geb is ever increasing.

Look at the automated tests to diagnose the project ailments

If you have to evaluate a new software project, draw the outline of the existing test types. If it doesn’t look like a pyramid, you might be in trouble. Here are some common non-pyramid outlines and what they mean.

A cornerstone of modern software development is developer testing. That means that developers are the primary authors of automated test code. In theory, that is a good thing and might look like the quality assurance department is out of work soon. In practice, we as a profession tried for nearly twenty years to install a culture of developer testing in our work and still end up with software projects that feature no automated tests at all (Side note: JUnit 1.0 was released in February of 1998).

What we know about automated tests

One piece of common understanding about developer testing is the test pyramid. Let’s quickly iterate what we know about it. There are different kinds of automated tests, and the test pyramid differentiates three of them:

  • Acceptance tests or UI tests are the heaviest type of automated test. They operate on the software from the outside, with the means of a real user and try to assert that real use cases are accomplishable.
  • Integration tests often use several parts of the system in a test scenario that asserts the correct collaboration of the parts. Integration tests may take some time to come to a conclusion and utilize real hardware like network or disks.
  • Unit tests tend to be small and quick and focus on a particular aspect of an “unit” like a class or entity aggregate. Their reach into the system should be short and might be forcefully restricted by employing mocks.

These three types, the A, I and U of automated tests, should come in different numbers. A good rule of thumb is that for every acceptance test, there might be up to one thousand unit tests. If you draw the quantities as areas, they appear in the form of a pyramid. A small top of acceptance tests rests on a broader layer of integration tests that relies on a groundwork of many unit tests. A healthy test pyramid looks like this:

Take this picture as an orientation, not as an absolute scale. But be sure to count your different test types from time to time.

Outlining the tests

This is actually one of the first things I do when I get introduced to a new and unknown code base. This happens quite often when I do consulting work for existing development teams. Have a look at the automated tests, determine their type and count their numbers. If it resembles anything close to the test pyramid, you’ve got a chance. If the resulting shape looks different, you might find this blog entry useful:

The Tower

If you have a hard time finding any tests (because there are none) or you find only some half-assed attempts at a meaningful automated test suite, you are looking at a tower project. The tower is rather small in diameter; in the case of absent tests, it is nothing more than a thin vertical line (the “stick”). If you find a solid number of tests for every type, you’ve found a “block” project. Block projects usually don’t have a problem, just a history of test effort migration, either from unit to acceptance tests or, more commonly, in the other direction. If you find a block, you are fine.

The tower, though, is a case of neglect. The project team might have started serious efforts to automated their tests, but got demotivated by intrinsic or extrinsic influences and abandoned the tests soon after their creation. Nobody has looked after them since and the only reason they still pass green is that they didn’t really test anything to begin with or only cover an area of the system that is as finished as it is boring. Topics like user management or utility classes are usually the first and only things that got tests in a tower scenario.

Don’t get me wrong, the tower indicates the absence of tests, but not the absence of willingness to write automated tests, unless the tower is really a stick. A team willing to invest in automated tests may only lack knowledge and coaching about the topic. Be sure to lead them bottom-up (unit tests first), though.

The Egg

If you’ve categorized and counted the tests and couldn’t find many acceptance or unit tests, you’ve found an egg. The egg consists of mostly integration tests that may lean into unit testing territory by asserting the smallest bits of functionality here and there (often embedded in an overarching test storyline) or dip their toes into GUI-based testing by asserting presentation-specific properties of widget objects. While they provide ample test coverage for the system, they also tie application logic and presentation details together and don’t help to separate domain code from the use cases.

The project team is probably proud of their test coverage and doesn’t see any value in differentiating the automated test types, because “every test improves the situation”. The blindness to test types is the core problem that may be cured with training and coaching (I’ve found the ATRIP-rules to be particularly effective to distinguish integration and unit tests), but the symptoms, especially the lack of separation of concerns, have to be mitigated soon, too.

One way to start there is to break the tests down into their integration and their unit test parts. You can work from assertion to assertion and ask: is this necessary to ensure the current use case? If not, extract a new unit test focussed on only this one assertion.

As soon as you add a pedestal consisting of unit tests to your egg, you are well on your way to a healthy test pyramid.

The Ice Cream Cone

This is the most fearsome automated test outline in existence, even more dramatic than the stick. Usually, the project team is really enthusiastic about writing tests or at least follows orders to do so, but they cannot test parts of the application in isolation. A really tragic case was a complex system that was so entangled with its database, through countless stored procedures that contributed to the application logic, that it was hopeless to think about tests without the database. And because every automated test had to start the whole system including the database, there was really no need to differentiate between application logic and presentation logic. It all became a Gordian knot of dependencies that enforced the habit of writing elaborate automated GUI-based tests to test the smallest logic bits deep inside the core. It felt like eating single rice grains with overly long, flimsy wooden chopsticks that would break often.

The ice cream cone is problematic because the project team needs to realize that their effort was misled and the tests are all telling the bitter truth: the system’s architecture isn’t fit for proper automated tests. It’s not the tests, it’s you (or your architecture)! Nobody wants to hear that, and even more so, nobody wants to untangle the mess (without the help of a proper safety net consisting of automated tests). Pinning tests are probably helpful in this scenario.

But you need to turn the test pyramid around or the project team will suffocate under the overly costly test tax while increasing technical debt.

Epilogue

Please keep in mind that it’s not a problem in itself that your project doesn’t have a normal test pyramid. It’s great that you have automated tests at all! But your current test type distribution might not be as effective as possible, might be more expensive than necessary and might not be the right automated test setup for your development goals.

What are your stories with automated test setups? Care to share it with us in the comments?

Why I’m not using C++ unnamed namespaces anymore

Well okay, actually I’m still using them, but I thought the absolute statement would make for a better headline. But I do not use them nearly as much as I used to. Almost exactly a year ago, I even described them as an integral part of my unit design. Nowadays, most units I write do not have an unnamed namespace at all.

What’s so great about unnamed namespaces?

Back when I still used them, my code would usually evolve gradually through a few different “stages of visibility”. The first of these stages was the unnamed-namespace. Later stages would either be a free-function or a private/public member-function.

Let’s say I identify a bit of code that I could reuse. I refactor it into a separate function. Since that bit of code is only used in that compile unit, it makes sense to put this function into an unnamed namespace that is only visible in the implementation of that unit.
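
For illustration, a minimal sketch of that first stage, with made-up names (SomeUnit is assumed to be declared in the corresponding header):

// some_unit.cpp
#include <string>

namespace {
// First stage of visibility: only this compile unit sees this helper.
std::string quoted(std::string const& text) {
  return "\"" + text + "\"";
}
} // namespace

std::string SomeUnit::description() const {
  return quoted(name_);
}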

Okay great, now we have reusability within this one compile unit, and we didn’t even have to recompile any of the unit’s clients. Also, we can just “hack away” on this code. It’s very local and exists solely to provide for our implementation needs. We can cobble it together without worrying that anyone else might ever have to use it.

This all feels pretty great at first. You are writing smaller functions and classes after all.

Whole class hierarchies are defined this way. Invisible to all but yourself. Protected and sheltered from the ugly world of external clients.

What’s so bad about unnamed namespaces?

However, there are two sides to this coin. Over time, one of two things usually happens:

1. The code is never needed again outside of the unit. Forgotten by all but the compiler, it exists happily in its seclusion.
2. The code is needed elsewhere.

Guess which one happens more often. The code is needed elsewhere. After all, that is usually the reason we refactored it into a function in the first place: its reusability. When this is the case, one of these scenarios usually happens:

1. People forgot about it, and solve the problem again.
2. People never learned about it, and solve the problem again.
3. People know about it, and copy-and-paste the code to solve their problem.
4. People know about it and make the function more widely available to call it directly.

Except for the last, that’s a pretty grim outlook. The first two cases are usually the result of the bad discoverability. If you haven’t worked with that code extensively, it is pretty certain that you do not even know that it exists.

The third is often a consequence of the fact that this function was not initially written for reuse. This can mean that it cannot be called from the outside because it cannot be accessed. But often, there’s some small dependency on the exact place where it’s defined. People came to this function because they want to solve another problem, not to figure out how to make this function visible to them. Call it laziness or pragmatism, but they now have a case for just copying it. It happens and shouldn’t be incentivised.

A Bug? In my code?

Now imagine you don’t care much about such noble long term code quality concerns as code duplication. After all, deduplication just increases coupling, right?

But you do care about satisfied customers, possibly because your job depends on it. One of your customers provides you with a crash dump and the stacktrace clearly points to your hidden and protected function. Since you’re a good developer, you decide to reproduce the crash in a unit test.

Only that does not work. The function is not accessible to your test. You first need to refactor the code to actually make it testable. That’s a terrible situation to be in.

What to do instead.

There are really only two choices. Either make it a public function of your unit immediately, or move it to another unit.

For functional units, it’s usually not a problem to just make them public. At least as long as the function does not access any global data.

For class units, there is a decision to make, but it is simple. Will using it preserve all class invariants? If so, you can move it or make it a public function. But if not, you absolutely should move it to another unit. Often, this actually helps with deciding what to create a new class for!

Note that private and protected functions suffer many of the same drawbacks as functions in unnamed-namespaces. Sometimes, either of these options is a valid shortcut. But if you can, please, avoid them.

Monitoring data integrity with health checks

An important aspect of systems that are backed by database storage is maintaining data integrity. Most relational databases offer the possibility to define constraints in order to maintain data integrity, usually referential integrity and entity integrity. Typical constraints are foreign key constraints, not-null constraints, unique constraints and primary key constraints.

SQL also provides the CHECK constraint, which allows you to specify a condition on each row in a table:

ALTER TABLE table_name ADD CONSTRAINT
   constraint_name CHECK ( predicate )

For example:

CHECK (AGE >= 18)

However, these check constraints are limited. They can’t be defined on views, they can’t refer to columns in other tables and they can’t include subqueries.

Health checks

In order to monitor data integrity on a higher level that is closer to the business rules of the domain, we have deployed a technique that we call health checks in some of our applications.

These health checks are database queries, which check that certain constraints are met in accordance with the business rules. The queries are usually designed to return an empty result set on success and to return the faulty data records otherwise.
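
For example, a hypothetical business rule could state that an order must never be shipped before it was placed. A health check for it (table and column names are made up) could look like this:

-- Returns the faulty records; an empty result set means the check passes.
SELECT o.id, o.ordered_at, o.shipped_at
FROM orders o
WHERE o.shipped_at IS NOT NULL
  AND o.shipped_at < o.ordered_at;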

The health checks are run periodically. For example, we use a Jenkins job to trigger the health checks of one of our web applications every couple of hours. In this case we don’t directly query the database; the application does, and it returns the success or failure states of the health checks in the response to an HTTP GET request.

This way we can detect problems in the stored data in a timely manner and take countermeasures. Of course, if the application is bug-free, these health checks should never fail, and in fact they rarely do. We mostly use the health checks as an addition to regression tests after a bug fix, to ensure and monitor that the unwanted state in the data will never happen again in the future.