You are not safe with Semantic Versioning (right now).

TL;DR: Several recent hijacks of widely used NPM libraries should make you think twice about whether to trust the package.json semantic-versioning notation with carets and tildes.

So what’s that about?

Version updates are one of the most haunting things anyone with any kind of computer ever encounters. On the one hand, it’s good news – something that you use has evolved again, and you are right at the source. Better tools, fewer bugs, new functionality, usually delivered by just a few clicks. But there are always these question marks – do I want to know what happened there? Do I want that now? Wasn’t that old version working just fine?

So we all know that dilemma. I mean, as we speak, every Windows user is kindly reminded that she could now let version 11 live on her system. But is it compatible with your system? Will it work out of the box? Is it worth your time to try now, or do you wait until the waters have settled? And while Microsoft asks that question only once every few years, there’s software like Notepad++ that wants me to have its updates on every single start. Because apparently, text editors can grow up so fast..?

Now imagine this problem, times a few googol, and you are in the everyday world of every web developer. In the npm universe, you have nearly total modular flexibility, which comes with so many small packages that update for whatever reason – one likes to believe that mostly these are bugfixes. Or might these be breaking changes? Did JavaScript evolve again? Do you want that stuff? Are you state-of-the-art, or are you in dependency hell?

Issues like these are the source of semantic versioning. The idea is that you place a certain level of trust in the usual three-part version label “major.minor.patch”. Version “3.10.1” means

  • Major version 3; the API of your software is promised to be compatible within each major version (except for the initial version 0)
  • Minor version 10; the minor version is incremented for new, backwards-compatible API functionality, i.e. there have been 10 occasions on which version 3.0.0 was extended without breaking compatibility
  • Patch version 1, which is incremented for bugfixes within that API specification, i.e. backwards compatible within the whole major version.

This is done in good faith: the major version is only incremented when the package maintainer re-thinks the API, and that mediates a level of trust in that you, in return, only have to re-think your usage of that API when you switch major versions. Therefore, dependency management systems like the npm / yarn package.json allow for the convenient notation to specify e.g.

"dependencies": {
    /* ... */
    "styled-components": "^5.1.1",
    "websocket": "~1.0.31"
  },

The caret (^) notation tells us that when the styled-components package was added to our project, we installed version 5.1.1, but we trust the npm universe far enough that every future execution of “npm install” / “yarn install” may increase this version within the same major version. E.g. if version 5.2.0 was released in the meantime, we update to its new content; and as we speak, we are at version 5.3.3, so this project is well up-to-date with whatever the good folks put in there.

Similarly, the tilde (~) notation only allows that behavior within the same minor version, e.g. at the moment any call of “… install” would retrieve the current version 1.0.34, but would not get version 1.1.0 whenever that is released.
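In semver range terms, a quick sketch of what these two notations resolve to:

^5.1.1   matches  >=5.1.1 <6.0.0   (so 5.2.0 and 5.3.3 are in, 6.0.0 is not)
~1.0.31  matches  >=1.0.31 <1.1.0  (so 1.0.34 is in, 1.1.0 is not)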

The opposite of using these is called dependency pinning, and there is lots of further reading available, e.g. here.
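Pinning the two packages from above is then as simple as dropping the caret and tilde, so exactly these versions are installed:

"dependencies": {
    /* ... */
    "styled-components": "5.1.1",
    "websocket": "1.0.31"
  },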

There is a certain misconception that “… install” will only update any of these versions if there is no “package-lock.json” (npm) or “yarn.lock” (yarn) around. That is not the case, see below, but first, my actual point.

So the point of semantic versioning is the establishment of trust between the package developer and the user: “This update only changes about that much”.

Problem: You cannot trust the npm universe right now.

Now the last weeks showed us not only a hijacking of the npm package ua-parser-js at the end of October, but also another one of the packages coa and rc 11 days ago. These appeared somewhat correlated and came with a mixture of password-stealing and secret installation of crypto-mining tools; all in all, the result of some bad folks getting access to these package repositories and making them execute malicious code in their install scripts – note that install scripts are not uncommon for widely used npm packages. This means that while you can complain that these hackers did not really adhere to the Semantic Versioning code (oh..??), and also that these breaches were noticed within a couple of hours each, think about this:

anyone with a certain caret or tilde in her package.json might have infected herself just by an unluckily-timed call of “npm install”.

Think of an automated script. Think of CI. Think of anyone who just wants to build his project and be as up-to-date as one can get. A survey of npm developers last year showed that usage of two-factor authentication is just below 10%, and while this doesn’t mean that the other 90% are completely irresponsible, there just is no system in place that would promise us that such attacks will go away anytime soon.

So of course, we cannot write every dependency of our projects ourselves, especially the indirect ones. But think of it as Russian Roulette: at least you can minimize the number of times you pull the trigger.

You cannot know which package is affected next. You better pin each version to exactly one you can trust right now, and if you are ever in need of updating, at least do a quick googling first – whether there’s some sh*t going down right now.
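If you want pinning to be the default for every package you add from now on, npm supports that via its save-exact config flag; put it in the project’s .npmrc:

# .npmrc – makes "npm install <package>" save exact versions, without the caret
save-exact=true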

Do you have further ideas on how to isolate your development / CI environments from whatever just happens in the outer rims of the npm universe? Please feel free to share.

How to make npm / yarn respect their respective lockfiles (package-lock.json / yarn.lock)

In principle, you can even live with the caret / tilde if you make sure that you never actually call “npm / yarn install” itself, but make them treat their so-called lockfiles as actual lockfiles. In their current versions, these calls should lead to that behaviour:

# instead of npm install
npm ci

# instead of yarn install
# for yarn 1.x:
yarn install --frozen-lockfile
# for yarn 2:
yarn install --immutable

As you can see from the npm call, this is especially suited for CI environments, which means you have to make sure the package-lock.json / yarn.lock is part of your repository.
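To illustrate, a hypothetical GitHub Actions job (file name, action versions and the node version here are assumptions; adjust to your setup) that installs strictly from the lockfile:

# .github/workflows/build.yml (hypothetical example)
name: build
on: [push]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-node@v2
        with:
          node-version: '16'
      # npm ci installs exactly what package-lock.json specifies
      # and fails if package.json and the lockfile disagree
      - run: npm ci
      - run: npm test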

One disadvantage of this approach is that npm really likes to notify you about not being very up to date, and produces lots of noise for whatever reason, which you will want to get rid of. Just be sure to pay some amount of attention when you update.

WPF: Recipe for customizable User Controls with flexible Interactivity

The most striking feature of WPF is its peculiar understanding of flexibility. Usually you are free to do anything, anywhere, but it instantly hands back to you the responsibility to pay super-close attention to where you actually are.

As projects grow, their user interfaces usually grow with them, and over time there appears the need to re-use a given component of that user interface.

This is not only the DRY principle at work on the code level; consistency is also one of the Nielsen-Norman usability heuristics, i.e. a good plan as to not confuse your users with needless irritations. This establishes trust. Good stuff.

Now say that you have a re-usable custom button that should

  1. Look a certain way at a given place,
  2. Show custom interactivity (handling of mouse events)
  3. Be fully integrated in the XAML workflow, especially accepting Bindings from outside, e.g. inside an ItemsControl or other list-type Control.

As usual, this was a multi-layered problem. It took me a while to find my optimum-for-now solution, but I think I managed, so let me try to break it down a bit. Consider the basic structure:

<ItemsControl ItemsSource="{Binding ListOfChildren}">
	<ItemsControl.ItemTemplate>
		<DataTemplate>
			<Button Style="{StaticResource FancyButton}"
				Command="{Binding SomeAwesomeCommand}"
				Content="{Binding Title}"
				/>
		</DataTemplate>
	</ItemsControl.ItemTemplate>
</ItemsControl>
Quick Note about the Style

We see the Style of FancyButton (defined in some ResourceDictionary, merged together with a lot of other stuff into the App.xaml Application.Resources). I want to define the styling there in order to modify it in some other places, i.e. it could be defined in said ResourceDictionary like

<Style TargetType="{x:Type Button}" x:Key="FancyButton"> ... </Style>

<Style TargetType="{x:Type Button}" BasedOn="{StaticResource FancyButton}" x:Key="SmallFancyButton"> ... </Style>
<Style TargetType="{x:Type Button}" BasedOn="{StaticResource FancyButton}" x:Key="FancyAlertButton"> ... </Style>
... as you wish ...
Quick Note about the Command

We also see SomeAwesomeCommand, defined in the view model of whatever ListOfChildren actually consists of. So SomeAwesomeCommand is a property of a custom ICommand-implementing class, but there’s a catch:

Commands on a Button work on the Click event. There’s no native way to assign them to different events like e.g. DragOver, so it sounds like our new User Control would need quite some code-behind in order to wire up any non-Click event with that Command. Thankfully, there is a surprisingly simple solution, called Interaction.Triggers. Apply it by

  1. installing System.Windows.Interactivity from NuGet
  2. adding the namespace to your XAML namespaces: xmlns:i="clr-namespace:System.Windows.Interactivity;assembly=System.Windows.Interactivity"
  3. Adding the trigger inside:
<Button ...>
    <i:Interaction.Triggers>
        <i:EventTrigger EventName="DragOver">
            <i:InvokeCommandAction Command="{Binding WhateverYouFeelLike}"/>
        </i:EventTrigger>
    </i:Interaction.Triggers>
</Button>

But that only as a side note. Remember that the point of having our separate User Control is still valid, considering that you would want some extra interactivity in your own use cases.

Now: Extracting the functionality to our own User Control

I chose not to derive some class from the Button class itself, because that would couple me closer to the internal workings of Button; i.e. this is an application of Composition over Inheritance. So the first step looks easy: Right click in the VS Solution Explorer -> Add -> User Control (WPF) -> Create under some name (say, MightyButton) -> Move the <Button.../> there -> include the XAML namespace and place the MightyButton in our old code:

// old place
<Window ...
	xmlns:ui="clr-namespace:WhereYourMightyButtonLives"
	>
	...
	<ItemsControl ItemsSource="{Binding ListOfChildren}">
		<ItemsControl.ItemTemplate>
			<DataTemplate>
				<ui:MightyButton Command="{Binding SomeAwesomeCommand}"
						 Content="{Binding Title}"
						 />
			</DataTemplate>
		</ItemsControl.ItemTemplate>
	</ItemsControl>
	...
</Window>

// MightyButton.xaml
<UserControl ...>
	<Button Style="{StaticResource FancyButton}"/>
</UserControl>

But now it gets tricky. This could compile, but still not work, because of several Binding mismatches.

I’ve written a lot already, so let me just define the main problems. I want my call to look like

<ui:DropTargetButton Style="{StaticResource FancyButton}"
		     Command="{Binding OnBranchFolderSelect}"
		     ...
		     />

But again, these are two parts. Let me clarify.

Side Quest: I want the Style to be applied from outside.

Remember the idea of having SmallFancyButton, FancyAlertButton or whatsoever? The problem is that I can’t just pass it to <ui:MightyButton.../> as intended (see the last code block), because FancyButton is defined with TargetType="{x:Type Button}", not TargetType="{x:Type ui:MightyButton}".

Surely I could change that. But I would regret it whenever I change my component again; I would have to adjust the FancyButton definition every time (in several places), even though it always describes a Button.

So let’s keep the Style TargetType to be Button, and just treat the Style as something to be passed to the inner-lying Button.

Main Quest: Passing through Properties from the ListOfChildren members

Remember that any WPF Control inherits a lot of Properties (like Style, Margin, Height, …) from its ancestors like FrameworkElement, and you can always extend that with custom Dependency Properties. Know that Command is actually not one of these inherited Properties – it only exists for several UI elements like the Button, but not in a general sense – so we can easily add it ourselves.

Go to the Code Behind, and at some suitable place make a new Dependency Property. There is a Visual Studio shorthand of writing “propdp” and pressing Tab twice. Then adjust it to read like

public ICommand Command
        {
            get { return (ICommand)GetValue(CommandProperty); }
            set { SetValue(CommandProperty, value); }
        }

        public static readonly DependencyProperty CommandProperty =
            DependencyProperty.Register("Command", typeof(ICommand), typeof(DropTargetButton), new PropertyMetadata(null));

With Style, we have one of these inherited Properties. Nevertheless, I want my Property to be called Style, which is quite straightforward by just employing the new keyword (i.e. we really want to shadow the inherited property, which is tolerable because we already know our FancyButton Style to its full extent).

public new Style Style
        {
            get { return (Style)GetValue(StyleProperty); }
            set { SetValue(StyleProperty, value); }
        }

        public static readonly new DependencyProperty StyleProperty =
            DependencyProperty.Register("Style", typeof(Style), typeof(DropTargetButton), new PropertyMetadata(null));

And then we’re nearly there; we just have to make the Button inside know where to take these Properties from. In an easy setting, this could be accomplished by making the UserControl constructor set DataContext = this; but STOP!

If you do that, you lose easy access to the outer ItemsControl elements. Sure, you could work around that – remember the WPF philosophy of allowing you many ways – but more practicable, imo, is to use an ElementName. Let’s be boring and take “Root”.

<UserControl x:Class="ComplianceManagementTool.UI.DropTargetButton"
	     ...
             xmlns:i="clr-namespace:System.Windows.Interactivity;assembly=System.Windows.Interactivity"
             xmlns:local="clr-namespace:ComplianceManagementTool.UI"
             x:Name="Root"
             >
    <Button Style="{Binding Style, ElementName=Root}"
            AllowDrop="True"
            Command="{Binding Command, ElementName=Root}"
            Content="{Binding Text, ElementName=Root}"
            >
        <i:Interaction.Triggers>
            <i:EventTrigger EventName="DragOver">
                <i:InvokeCommandAction Command="{Binding Command, ElementName=Root}"/>
            </i:EventTrigger>
        </i:Interaction.Triggers>
    </Button>
</UserControl>

As some homework, I’ve left you the Content property to add as a Dependency Property as well. You could go ahead and add as many DPs to your User Control as you like, and inside that Control (which is still quite maiden-like, if we ignore all that DP boilerplate code) you could have as much complex interactivity as you require, without losing the flexibility of passing the corresponding Commands from the outside.
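If you want to check your homework afterwards, here is a minimal sketch, mirroring the Command property above. Note that the XAML above binds the inner Content to a property named Text, so that is what we register (typeof(string) is my assumption; use typeof(object) if you want to pass arbitrary content):

public string Text
{
    get { return (string)GetValue(TextProperty); }
    set { SetValue(TextProperty, value); }
}

// Registered just like CommandProperty above.
public static readonly DependencyProperty TextProperty =
    DependencyProperty.Register("Text", typeof(string), typeof(DropTargetButton), new PropertyMetadata(null));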

Of course, this is just one way of about seventeen, plus or minus thirty-three, add one or two, which is about the usual number of WPF ways of doing things. Nevertheless, this solution now lives in our blog, and maybe it is of some help to you. Or Future-Me.

Redux-Toolkit & Solving “ReferenceError: Access lexical declaration … before initialization”

Last week, I had a really annoying error in one of our React-Redux applications. It started with a believed-to-be-minor cleanup in our code, culminated in four developers staring at our code in disbelief and quite some research, and resulted in some rather feasible solutions that, in hindsight, look quite obvious (as is usually the case).

The tech landscape we are talking about here is a React webapp that employs state management via Redux-Toolkit / RTK, the abstraction layer above Redux to simplify the majority of standard use cases one has to deal with in current-day applications. Personally, I happen to find that useful, because it means a perceptible reduction of boilerplate Redux code (and some dependencies that you would use all the time anyway, like redux-thunk) while maintaining compatibility with the really useful Redux DevTools, and not requiring many new concepts. As our application makes good use of URL routing in order to display very different subparts, we implemented our own middleware that does the data fetching upfront in a major step (sometimes called „hydration“).

One of the basic ideas in Redux-Toolkit is the management of your state in substates called slices that aim to unify the handling of actions, action creators and reducers, essentially what was previously described as Ducks pattern.

We provide unit tests with the jest framework, and generally speaking, it is more productive to test general logic instead of React components or Redux state updates (although we sometimes make use of that, too). Jest is very modular in the sense that you can add tests for any JavaScript function from wherever in your testing codebase; the only requirement, of course, is that these functions are exported from their respective files. This means that a single jest test only needs to resolve the imports it depends on, recursively (i.e. the dependency tree), not the full application.

Now my case was as follows: I wrote a test that essentially was just testing a small switch/case decision function. I noticed that something was fishy when this test resulted in errors of the kind

  • Target container is not a DOM element. (pointing to ReactDOM.render)
  • No reducer provided for key “user” (pointing to node_modules redux/lib/redux.js)
  • Store does not have a valid reducer. Make sure the argument passed to combineReducers is an object whose values are reducers. (also …/redux.js)

This meant there was too much going on. My unit test should neither know of React nor Redux, and as the culprit, I found that one of the imports in the test file used another import that marginally depended on a slice definition, i.e.

///////////////////////////////
// test.js
///////////////////////////////
import {helper} from "./Helpers.js"
...

///////////////////////////////
// Helpers.js
///////////////////////////////
import {SOME_CONSTANT} from "./state/generalSlice.js"
...

Now I only needed some constant located in generalSlice, so one could easily move this to some “./const.js”. Or so I thought.

When I removed the generalSlice.js dependency from Helpers.js, the React application broke. That is, in a place totally unrelated:

ReferenceError: can't access lexical declaration 'loadPage' before initialization

./src/state/loadPage.js/</<
http:/.../static/js/main.chunk.js:11198:100
./src/state/topicSlice.js/<
C:/.../src/state/topicSlice.js:140
> [loadPage.pending]: (state, action) => {...}

From my past failures, I instantly recalled: This is a problem with circular dependencies.

Alas, topicSlice.js imports loadPage.js and loadPage.js imports topicSlice.js, and while some cases allow such a circle to be handled by webpack or similar bundlers, in general, such import loops can cause problems. And while I knew that before, this case was just difficult for me, because of the very nature of RTK.

So this is a thing with the RTK way of organizing files:

  • Every action that clearly belongs to one specific slice, can directly be defined in this state file as a property of the “reducers” in createSlice().
  • Every action that is shared across files or consumed in more than one reducer (in more than one slice), can be defined as one of the “extraReducers” in that call.
  • Async logic like our loadPage is defined in thunks via createAsyncThunk(), which gives you a place suited for data fetching etc. that always comes with three action creators like loadPage.pending, loadPage.fulfilled and loadPage.rejected
  • This looks like
///////////////////////////////
// topicSlice.js
///////////////////////////////
import {loadPage} from './loadPage.js';

const topicSlice = createSlice({
    name: 'topic',
    initialState,
    reducers: {
        setTopic: (state, action) => {
            state.topic= action.payload;
        },
        ...
    },
    extraReducers: {
        [loadPage.pending]: (state, action) => {
              state.topic = initialState.topic;
        },
        ...
    },
});

export const { setTopic, ... } = topicSlice.actions;

And loadPage itself was a rather complex action creator (thunk), as it could cause state dispatches as well; it was built, in simplified form, as:

///////////////////////////////
// loadPage.js
///////////////////////////////
import {setTopic} from './topicSlice.js';

export const loadPage = createAsyncThunk('loadPage', async (args, thunkAPI) => {
    const response = await fetchAllOurData();

    if (someCondition(response)) {
        await thunkAPI.dispatch(setTopic(SOME_TOPIC));
    }

    return response;
});

You clearly see the import loop: loadPage needs setTopic from topicSlice.js, topicSlice needs loadPage from loadPage.js. This was rather old code that worked before, so it appeared to me that this is no problem per se – but solving that completely different dependency in Helpers.js (SOME_CONSTANT from generalSlice.js) made something break.

That was quite weird. It looked like this not-really-required import of SOME_CONSTANT made ./generalSlice.js load first, and along with it a certain set of imports, including some of the dependencies of either loadPage.js or topicSlice.js, so that by the time their dependencies were loaded, there was no import loop required anymore. However, it did not appear advisable to trace that fact to its core, because the application has grown quite a bit already. We needed a solution.

As I said before, it required the brainstorming of multiple developers to find a way of dealing with this. After all, RTK appeared mature enough for me to dismiss “that thing just isn’t fully thought through yet”. Still, code-splitting is such a basic feature that one would expect some answer to it. What we came up with was

  1. One could address the action creators like loadPage.pending (which is created as a result of RTK’s createAsyncThunk) by their string equivalent, i.e. ["loadPage/pending"] instead of [loadPage.pending] as key in the extraReducers of topicSlice (see the sketch after this list). This will be a problem if one ever renames the action from “loadPage” to whatever (and your IDE and linter can’t help you as much with errors), which could be solved by writing one’s own action-name factory that can be stashed away in a file with no imports of its own.
  2. One could re-think the idea that setTopic should be among the normal reducers in topicSlice, i.e. created automatically. Instead, it can be created in its own file and then be referenced by loadPage.js and topicSlice.js in a non-circular manner, as export const setTopic = createAction('setTopic'); which you then access in extraReducers as [setTopic]: ... .
  3. One could think hard about the construction of loadPage. This whole thing is actually a hint that loadPage does too many things on too many different levels (i.e. it violates at least the principles of Single Responsibility and Single Level of Abstraction).
    1. One fix would be to at least do away with the automatically created loadPage.pending / loadPage.fulfilled / loadPage.rejected actions and instead define custom createAction("loadPage.whatever") with whatever describes your intention best, and put all these in your own file (as in idea 2).
    2. Another fix would be splitting the parts of loadPage into other thunks, and then being able to react to the automatically created pending / fulfilled / rejected actions of each.
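For illustration, idea 1 would read like this in topicSlice.js – the string key is what RTK’s createAsyncThunk derives from the 'loadPage' type prefix, so no import of loadPage.js is needed anymore:

///////////////////////////////
// topicSlice.js, idea 1: address the thunk actions by their string types
///////////////////////////////
    extraReducers: {
        ["loadPage/pending"]: (state, action) => {
            state.topic = initialState.topic;
        },
        ...
    },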

I chose idea 2 because it was the quickest, while allowing myself to let idea 3.1 rest a bit. I guess that next time, I should favor the latter, because it makes the developer’s intention (as in… mine) clearer and the Redux DevTools output even more descriptive.

In case you’re still lost, my solution looks like this:

///////////////////////////////
// sharedTopicActions.js
///////////////////////////////
import {createAction} from "@reduxjs/toolkit";
export const setTopic = createAction('topic/set');
//...

///////////////////////////////
// topicSlice.js
///////////////////////////////
import {setTopic} from "./sharedTopicActions";
const topicSlice = createSlice({
    name: 'topic',
    initialState,
    reducers: {
        ...
    },
    extraReducers: {
        [setTopic]: (state, action) => {
            state.topic= action.payload;
        },

        [loadPage.pending]: (state, action) => {
              state.topic = initialState.topic;
        },
        ...
    },
});

///////////////////////////////
// loadPage.js, only change in this line:
///////////////////////////////
import {setTopic} from "./sharedTopicActions";
// ... Rest unchanged

So there’s a simple tool to break circular dependencies in more complex Redux-Toolkit slice structures. It was weird that it never occurred to me before, i.e. up until this day, I was always able to solve circular dependencies by shuffling other parts of the imports.

My problem is fixed. The application works as expected, all the tests now work as they should, everything is modular enough, and the required change was not a major structural redesign. It required some hard thinking but had a rather simple solution. I have trust in RTK again, and one can be safe again in the assumption that JavaScript imports are at least deterministic. Although I will never do the work to analyse what it actually was about my SOME_CONSTANT import that unknowingly fixed the problem beforehand.

Is there any reason to disfavor idea 3.1, though? Feel free to comment your own thoughts on that issue 🙂

CSS: z-index can be weird.

Before I start this post, there are three things I want to state:

  1. If you think the “z-index” is quite simple, you probably never bothered to care.
  2. When in doubt, one can always read the official specification
  3. There are multiple good elaborations available already (see bottom of this post), but I was missing a comprehensive list of the most important points.
Quick Motivation (skip that if you only want the facts)

Yes. We know: the web has become a place it never intended to be. Nowadays, it seems to accommodate everything. You want live control of measurement devices? 3D camera applications? Advanced data wizardry? Or, in my case, a kind of sophisticated layout engine? … Web dev in 2021 gives you the impression that it is all merely a matter of time (or cost).

But then, there are always the caveats. Some semi-suggestive idea turns out not to be that accessible at all. Our user experience considerations made us implement “kind of basic” windows (in the operating system sense) that appear at times and disappear at other times, and give the user maximum information while maintaining minimum clutter.

Very early on, I noticed that I had to implement my own drag’n’drop functionality, because HTML5 isn’t really there yet. But I consider that something advanced, which also has its idiosyncrasies in every conceivable use case, so that’s ok.

But then again, a somewhat-native-feeling windowing system (even if the windows are only rectangles with text) makes use of a seemingly simple thing: that stuff gets drawn over other stuff in the right order. And this comes with certain peculiarities.

The painting order of HTML elements is divided into stacking contexts. Stacking contexts can be stacked above or below each other, and most of the time they behave as expected, but sometimes they don’t. So, for the roundup…

Stacking Context – Essential Rules

(This assumes CSS knowledge, but don’t hesitate to comment if you have any questions.)

  • If you set any z-index, you set that z-index within the current stacking context.
    • You can never enter an outside stacking context, only create new ones inside
  • One stacking context as a whole is always either above or below other stacking contexts as a whole
  • The root stacking order (from the <html> element) is as you expect:
    • Further down in the HTML source means more upfront
  • Higher z-index means “more upfront”, but
    • z-index doesn’t mean a thing if your neighboring elements do not live inside the same stacking context! (see the sketch after this list)
  • Within any parent stacking context, new child stacking contexts are created by
    • Setting CSS “position” to something other than “static”
    • Setting a z-index different from the default value “auto”
      • For clarity: “z-index: 0;” is nearly the same as “z-index: auto;”, but the latter doesn’t open up a child stacking context, while the former does.
    • Setting CSS “display: flex;” or “display: grid;”
    • Setting CSS: “isolation: isolate;” (what is that even?)
    • Setting CSS: “will-change” to something non-initial (what is that even?)
    • Setting one of the “graphically advanced” CSS properties like
      • opacity, transform, filter, mix-blend-mode, clip-path, mask, …
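As a minimal sketch of that z-index gotcha (inline styles just for brevity):

<!-- Both parents are positioned and have a z-index, so each forms its own
     stacking context; the second one (z-index: 2) paints above the first
     one (z-index: 1) as a whole. -->
<div style="position: relative; z-index: 1;">
  <!-- z-index: 9999 only competes inside its parent's stacking context... -->
  <div style="position: absolute; z-index: 9999;">I still lose.</div>
</div>
<div style="position: relative; z-index: 2;">I win, despite the smaller z-index.</div>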

There is more, but maybe this can help you in bug-tracing. And two meta-points:

  • CSS evolves, so with new features, always have stacking context in mind
  • In a framework context (like the React biosphere), you might not know what your imported dependencies do under the hood. Maybe better isolate them.
Reading recommendations:


If everything fails, go back to square one:

https://www.w3.org/TR/CSS2/visuren.html#propdef-z-index

Be aware.

Pattern Matching and the SLAP

You probably know that effect: one starts writing a lot of code in a new language, and after a while gains a decent appreciation of this or that goodness, and a decent annoyance about this or that oddity… Then you do some other project in other languages (the Softwareschneiderei projects are quite diverse in this respect), and each time you switch languages there’s that small moment of pondering certain design decisions.

Then after a while, sometimes there’s that feeling of a deep “ooooooh”, and you get an understanding of a fundamental mindset that probably must have been influential there. This is always interesting, because you can start to try using similar patterns in other languages, just to find out whether they are generally useful or not.

Now, as I’ve been writing quite some Rust code lately, I somehow started to like the way of pattern matching that it proposes. Suppose you have a structure that is some composition of multiple decisions, that somehow belong together to a sufficient degree that you don’t split it up into multiple pieces. That might be the state of some file handling that was passed to your program, as an example. Such a structure could look like (u32 is Rust’s unsigned 32-bit integer format):

enum InputState {
    Uninitialized,
    PlainText(String),
    ImageData {
        width: u32,
        height: u32,
        base64Data: String,
    },
    Error {
        code: u32,
        message: String
    }
}

Now, in order to read such a thing in a context, there’s the “match” statement, a kind of “switch/select with superpowers”, in that it allows you to simultaneously destructure its content to reduce unnecessary typing. This might look like

fn process_input(state: InputState) {
    match state {
        InputState::Uninitialized => {
            println!("Uninitialized. Nothing to do!");
            std::process::exit(0);
        },
        InputState::PlainText(str) => {
            display_string(str);
        },
        InputState::ImageData { base64Data, .. } => {
            println!("This seems to be an image and is now to be displayed");
            display_image(base64Data);
        }
        _ => (),
    }
}

(I probably forgot several &references in writing that example, but that’s Rust and not my point here :D). Anyway. At first, I was quite irritated by that – why does Rust want me to always include the placeholders (the _)? Why can’t it just let me take care of the stuff I want to take care of right now? Why does the compiler complain instead of always assuming that _ => (), i.e. if nothing else matches, do nothing? But I eventually found out.

To make my point, consider a comparison with a more loosely typed environment like JavaScript, where (as a quick sketch) one could have written that as

// inputState might be null, or {message: "bla"},
// or {width: ..., height:..., base64Data}, or {code: ..., message: "bla"}...

if (!inputState) { return; }
if (inputState.code) { /* Error case */ }
else if (inputState.base64Data) { /* Image case */ }
else if (inputState.message) { /* probably the PlainText case */ }

/* but are you sure you forgot nothing? */

Now my point is, that these are not merely translations of the same idea between different languages. These are structurally different.

The latter example is a very microscopic view. I have a kind of squishy-looking alien thing called inputState lying at the center of my operating table; I take the scalpel and dissect – what do we have here? What color is that? Does this have bones? … Without further ado, you grab into the interior of whatever, and you better hope that you’re not in some kind of sci-fi horror movie… :O

The former Pattern Matching, however, is an approach more true to its original question. It stops you with your scalpel right at the beginning.

We first want to know all eventualities. Then cut where it makes sense, and free your mind from the first decision.

We just wanted to distinguish our proceeding depending on the general nature of our object of interest. We do not want to look into details at that moment, we just want to know where we are.

In the language of Clean Code, this is the Single Level of Abstraction Principle (SLAP). It states that each method you write should explicitly concern itself with a constant degree of looking at a certain problem. E.g. low-level mathematical calculations like milliseconds-vs-system-time conversions should not appear next to high-level server initialization, for the simple reason of understandability: switching levels of abstraction is quite irritating for the human brain (i.e. for everyone who didn’t write that code). It breaks your line of thinking, especially when you are trying to find a bug, or worse.

From my experience, I know that I myself would argue “well, that’s not true; I can indeed hold multiple levels of abstraction in my head simultaneously!”, and this isn’t really a lie. But I still see embarrassing mistakes later, of the type “of course there’s also that case. I should have known.”

Another metaphor: suppose you just ask your friend for the time. She then directly initiates a very detailed lecture about the movement of Sun and Earth, paired with some epistemological considerations about the Heisenberg uncertainty principle, not to neglect the role of time in the concept of entropy. Would you rather respond with “Thanks, that helped” or “… are you on drugs?”?

With pattern matching, this is the same. Look at a single decision, and first lay all the options open: “What is this?” – then go on to another method. “What to do about it?” is another level. “How?” goes deeper, and “how, actually, if you consider these or those additional conditions?” even deeper. And at the bottom are something like components, which just take one input and mangle those bytes without regard for higher morals.

It is a very helpful principle that still sometimes needs a little reminder. For me, I was just happy to see that kind-of-compulsive approach embodied by the Rust match operator.

So, how far do you think one should take it with SLAP? Do you manage to always follow it, or does it work differently for you?

PS: Of course, one could introduce the same caution by defining a similar control flow in JavaScript, too. There’s no need to break the SLAP. But it makes you the one responsible for keeping track, while in my Rust example the annoying linting / compiler does that for you.
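As a quick sketch of what that could look like in JavaScript – the “kind” tag is a made-up convention mirroring the Rust enum above (displayString / displayImage are the same hypothetical functions as before), and the default branch is our manual stand-in for the compiler-enforced exhaustiveness:

// Hypothetical tagged-union convention mirroring the Rust example.
function processInput(inputState) {
    switch (inputState.kind) {
        case "uninitialized":
            console.log("Uninitialized. Nothing to do!");
            return;
        case "plainText":
            displayString(inputState.text);
            return;
        case "imageData":
            displayImage(inputState.base64Data);
            return;
        case "error":
            console.error(inputState.code, inputState.message);
            return;
        default:
            // the one place that keeps us honest about forgotten cases
            throw new Error(`unhandled input state: ${inputState.kind}`);
    }
}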

Flexible React-Redux Hook Mocks in jest & React Testing Library

Best practices in mocking React components aren’t entirely unheard of, even in connection with a Redux state, and even in connection with the quite convenient hooks ({ useSelector, useDispatch }).

So, of course the knowledge of a proper approach is at hand. And in many scenarios, it makes total sense to follow their principle of exactly arranging your Redux state in your test as you would in your real-world app.

Nevertheless, there are reasons why one wants to introduce a quick, non-overwhelming unit test of a particular component, e.g. when your system is in a state of high fluctuation because multiple parties are still converging on their interfaces and requirements; a complete mock would then be quite premature – more of a major distraction than any help.

Proponents of strict TDD would now object, of course. Anyway – fortunately, the combination of jest with React Testing Library is flexible enough to give you the tools to drill into any of your state-connected components without much knowledge of the rest of your React architecture.*

(*) Of course, these tests presume knowledge of your current Redux store structure. In the long run, I’d also consider this bad style, but during highly fluctuating phases of development, I’d favour the explicit “this is how the store is intended to look” as safety by documentation.

On a basic test frame, I want to show you three things:

  1. Mocking useSelector in a way that allows for multiple calls
  2. Mocking useDispatch in a way that allows expecting a specific action creator to be called.
  3. Mocking useSelector in a way that allows for mocking a custom selector without its actual implementation

(Upcoming in a future blog post: Mocking useDispatch in a way to allow for async dispatch-chaining as known from Thunk / Redux Toolkit. But I’m still figuring out how to exactly do it…)

So consider your component, e.g. something as simple as:

import {useDispatch, useSelector} from "react-redux";
import {importantAction} from "./place_where_these_are_defined";

const TargetComponent = () => {
    const dispatch = useDispatch();
    const simpleThing1 = useSelector(store => store.thing1);
    const simpleThing2 = useSelector(store => store.somewhere.thing2);

    return <>
        <div>{simpleThing1}</div>
        <div>{simpleThing2}</div>
        <button title={"button title!"} onClick={() => dispatch(importantAction())}>Do Important Action!</button>
    </>;
};

Multiple useSelector() calls

If we had a single call to useSelector, we’d be done as easily as making useSelector a jest.fn() with a mockReturnValue(). But we don’t want to constrain ourselves to that. So what works, in our example, is to actually construct a mock store as a plain object, and give our mocked useSelector a mockImplementation that applies its argument (which, as a selector, is a function of the store) to that store.

Note that for this simple example, I did not concern myself with useDispatch() that much. It just returns a dispatch function of () => {}, i.e. it won’t throw an error but also doesn’t do anything else.

import React from 'react';
import { render, screen, fireEvent } from '@testing-library/react';
import TargetComponent from './TargetComponent';
import * as reactRedux from 'react-redux';
import * as ourActions from './actions';

jest.mock("react-redux", () => ({
    useSelector: jest.fn(),
    useDispatch: jest.fn(),
}));

describe('Test TargetComponent', () => {

    beforeEach(() => {
        useDispatchMock.mockImplementation(() => () => {});
        useSelectorMock.mockImplementation(selector => selector(mockStore));
    })
    afterEach(() => {
        useDispatchMock.mockClear();
        useSelectorMock.mockClear();
    })

    const useSelectorMock = reactRedux.useSelector;
    const useDispatchMock = reactRedux.useDispatch;

    const mockStore = {
        thing1: 'this is thing1',
        somewhere: {
            thing2: 'and I am thing2!',
        }
    };

    it('shows thing1 and thing2', () => {
        render(<TargetComponent/>);
        expect(screen.getByText('this is thing1')).toBeInTheDocument();
        expect(screen.getByText('and I am thing2!')).toBeInTheDocument();
    });

});

This is surprisingly simple considering that one doesn’t find this example scattered all over the internet. If, for some reason, one would require more stuff from react-redux, you can always spread it in there,

jest.mock("react-redux", () => ({
    ...jest.requireActual("react-redux"),
    useSelector: jest.fn(),
    useDispatch: jest.fn(),
}));

but remember that in case you want to build full-fledged test suites, why not go the extra mile to construct your own Test store (cf. link above)? Let’s stay simple here.

Assert execution of a specific action

We don’t even have to change much to look for the call of a specific action. Remember, we presume that our action creator is doing the right thing already; for this example we just want to know that our button actually dispatches it. E.g. you could have connected that to various conditions, the button might be disabled, or whatever… so that could be less trivial than our example.

We just need to know what the original action creator looks like. In jest language, this is known as spying. We add the following parts:

// ... next to the other imports...
import * as ourActions from './actions';



    //... and below this block
    const useSelectorMock = reactRedux.useSelector;
    const useDispatchMock = reactRedux.useDispatch;

    const importantAction = jest.spyOn(ourActions, 'importantAction');

    //...

    //... other tests...

    it('dispatches importantAction', () => {
        render(<TargetComponent/>);
        const button = screen.getByTitle("button title!"); // there are many ways to get the Button itself. i.e. screen.getByRole('button') if there is only one button, or in order to be really safe, with screen.getByTestId() and the data-testid="..." attribute.
        fireEvent.click(button);
        expect(importantAction).toHaveBeenCalled();
    });

That’s basically it. Remember that we really disfigured our dispatch() function. What we can not do this way is anything of the form

// arrangement
const mockDispatch = jest.fn();
useDispatchMock.mockImplementation(() => mockDispatch);

// test case:
expect(mockDispatch).toHaveBeenCalledWith(importantAction()); // won't work

Because even if we get a mocked version of dispatch() that way, the spied-on importantAction() call is not the same as the one that happened inside render(). So again: in our limited sense, we just don’t do it. dispatch() doesn’t do anything, and importantAction just gets called once inside.

Mock a custom selector

Consider now that there are custom selectors which we don’t care about much; we just need them to not throw any error. I.e., below the definition of simpleThing2, this could look like

import {useDispatch, useSelector} from "react-redux";
import {importantAction, ourSuperComplexCustomSelector} from "./place_where_these_are_defined";

const TargetComponent = () => {
    const dispatch = useDispatch();
    const simpleThing1 = useSelector(store => store.thing1);
    const simpleThing2 = useSelector(store => store.somewhere.thing2);
    const complexThing = useSelector(ourSuperComplexCustomSelector);
    
    //... whathever you want to do with it....
};

Here, we want to keep it open how exactly complexThing is gained. This selector is considered to already be tested in its own unit test; we just want its value to not fail. And we can really do it like this (parts added / changed):

import React from 'react';
import { render, screen, fireEvent } from '@testing-library/react';
import TargetComponent from './TargetComponent';
import * as reactRedux from 'react-redux';
import * as ourActions from './actions';
import {ourSuperComplexCustomSelector} from "./place_where_these_are_defined";

jest.mock("react-redux", () => ({
    useSelector: jest.fn(),
    useDispatch: jest.fn(),
}));

const mockSelectors = (selector, store) => {
    if (selector === ourSuperComplexCustomSelector) {
        return true;  // or whatever value we want it to yield
    }
    return selector(store);
}

describe('Test TargetComponent', () => {

    beforeEach(() => {
        useDispatchMock.mockImplementation(() => () => {});
        useSelectorMock.mockImplementation(selector => mockSelectors(selector, mockStore));
    })
    afterEach(() => {
        useDispatchMock.mockClear();
        useSelectorMock.mockClear();
    })

    const useSelectorMock = reactRedux.useSelector;
    const useDispatchMock = reactRedux.useDispatch;

    const mockStore = {
        thing1: 'this is thing1',
        somewhere: {
            thing2: 'and I am thing2!',
        }
    };

    // ... rest stays as it is
});

This wasn’t as obvious to me, as you never know what jest is doing behind the scenes. But indeed, you don’t have to spy on anything for this simple test; there really is functional identity between the ourSuperComplexCustomSelector inside TargetComponent and the argument of useSelector.

So, yeah.

The combination of jest with React Testing Library is obviously quite flexible in allowing you to choose what you actually want to test. This was good news for me, as testing frameworks in general might try to impose their opinions on your style, which isn’t always bad – but in a highly changing environment as is anything that involves React and Redux, sometimes you just want to have a very basic test case in order to concern yourself with other stuff.

So, without wanting you to lower your style to such basic constructs, I hope this was of some insight for you. In a more production-ready state, I would still go the way of that krawaller.se blog post mentioned above; it makes sense. I was just here to get you started 😉

Pie-Decoration with Python

A while ago, I found out that it’s actually quite easy to write your own decorators in Python. That thing with the “pie” syntax, available since Python 2.4, is still mostly the stuff of frameworks like Django, Flask or PyTango. As to the “what” and especially the “why??”, one might benefit from some thoughts I recently verified.

Motivation: As several of our software projects help our customers support their experimental research with Supervisory Control and Data Acquisition (SCADA) applications, we have built a considerable knowledge base around development with the open-source Tango Controls system. For me as a developer, its Python implementation PyTango makes my life significantly easier, because I can sketch out Tango applications – Tango device servers, clients for data analysis, plotting live data, etc. – with little to no overhead. Less than an hour of development time usually gets you started.

PyTango actually makes it that easy to code the basic functionality of a device in a network structure with all the easy Tango features – well, you have to set up Tango once, but then the programming becomes easy. E.g. a current measurement device which is accessible from anywhere in your network could basically need no more than:

class PowerSupply(Device):
 
    def init_device(self):
        Device.init_device(self)
        self._current = 0
 
    @attribute(label="Current", unit="mA", dtype=float)
    def current(self):
        return self._current
 
    @current.write
    def current(self, current):
        self._current = current

Compare that to the standard C++ Tango implementation, where all of that would be loads of boilerplate code.
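For completeness, a hypothetical client-side counterpart (the device name is made up; tango.DeviceProxy is PyTango’s standard client class):

import tango

# hypothetical device name; use whatever your Tango database has registered
power_supply = tango.DeviceProxy("lab/powersupply/01")
power_supply.current = 5.0    # remotely invokes the @current.write method
print(power_supply.current)   # reads the attribute back: 5.0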

Now, this is not unusual for Python: hiding away the ugly implementation in its libraries or frameworks so you can focus on the content. However, you need to trust it, and depending on what kind of Python code you usually enjoy reading, this can make you think a bit. What is this usage of @ doing there? Isn’t this Java code??

But to the contrary: in Java, the @annotations are a way of basically interfering with the way the compiler itself behaves. For Python, the pie is pure syntactic sugar. In a way, both address a similar need of your source code telling you “the next thing is treated in a specific way”, but whereas you would have to dig quite deep in Java to produce useful custom annotations (if you know otherwise, tell me), decorating pies in Python can easily help you dynamically abstract layers of code away to places where it’s nicer and cleaner.

It turns out: a Python decorator is any function that maps a function to a function, i.e. it takes a function as an argument and returns the same or a different function. Easy enough, as in Python, functions are first-class objects: they can be passed around and extended accordingly.

But… Every programming pattern comes with the question: “What do I gain for learning such a new habit?”

Can and should you write your own decorators?

Under the hood, something like this happens. You define a decorator e.g. like

def my_decorator(func):
    print("the decorator is initialized")

    def whatever():
        print("the function is decorated")
        return func()

    return whatever

giving you instant access to your own decorator magic:

@my_decorator
def my_func():
    print("the decorated function is called")

A function call of my_func() now results in the output

the function is decorated
the decorated function is called

while the first “the decorator is initialized” is only executed at the declaration of the @my_decorator syntax itself.

This pie syntax is pure syntactical sugar. It does nothing more than if you instead had defined:

def my_anonymous_func():
    print("the decorated function is called")
my_func = my_decorator(my_anonymous_func)

But it appears somehow cleaner, avoiding the need for the two “my_anonymous_func” mentions that are not useful anywhere else in your code. You stash away more irrelevant implementation details from the actual meaning of “what is this method/function actually for?”. Now you can pack my_decorator into your own set of utility functions, as you would with any re-usable code block. To make the example more sophisticated, you can also process function arguments (*args, **kwargs) this way, or extend previously decorated functions with further functionality… and while the decorator itself might be a bit more complex to read, you can test it separately, and from then on, the actual __main__ code is tidy and easy to extend.

def my_decorator(func):
    # do setup stuff here

    def whatever(*args, **kwargs):
        print("The keyword argument par is:", kwargs.get('par', None))
        return func(*args, **kwargs)

    def whatever2(func2):
        def some_inner_function():
            print("Call the second function and do something with its result")

            # note that the _actual_ function call of the decorated function would happen in the very core of the decorating function
            return func2()
        return some_inner_function

    result = whatever
    result.other_thing = whatever2
    return result


if __name__ == '__main__':
    print("START")

    @my_decorator
    def my_func(*args, **kwargs):
        print("original my_func", kwargs.get('par', None))

    @my_func.other_thing
    def my_func2():
        pass

    my_func(par='Some keyword argument')
    my_func2()

So really, we are equipped to do anything. Any scenario in which a function call should actually be guided by one or more extra actions might be nicely packaged in a decorator. This gives you a number of good use cases (see the sketch after this list), e.g.

  • Logging (if only for debug reasons)
  • Additional metrics (e.g. performance evaluation)
  • Type checks (making even simple scripts more robust)
  • Error Handling
  • Registering of any service or entity e.g. in one centralized list structure
  • Updates on a user interface
  • Caching, e.g. to ease some expensive operation or asynchronous action
  • or any such meta-pattern that you find yourself repeating in otherwise unrelated scenarios.
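As a minimal sketch of the logging / metrics use cases – the function names here are made up, and functools.wraps (standard library) keeps the decorated function’s name and docstring intact:

import functools
import time

def log_duration(func):
    @functools.wraps(func)  # preserve func.__name__, __doc__ etc.
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return func(*args, **kwargs)
        finally:
            print(f"{func.__name__} took {time.perf_counter() - start:.6f}s")
    return wrapper

@log_duration
def expensive_operation(n):
    return sum(i * i for i in range(n))

expensive_operation(100_000)  # prints something like "expensive_operation took 0.012345s"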

And more than that, a good name choice for your decorator gives your code a nice self-explaining semantic structure: you describe more of the “what” and less of the “how” of a block of code.

I like that pattern, but I always thought the @ looks somewhat cryptic. If you have any objection, want to warn me about unforeseen implications, or generally think this was always super-obvious, let me know 🙂

Applied User Research on my own pile of synthesizer machines

Last year, I moved. Moving into a new apartment is very much akin to a major rewrite of a complex piece of software. A piece of software with a limited number of users, maybe (well, me, my girlfriend, that’s it), but with an immensely sophisticated structure of requirements… despite the decade-long experience of “living somewhere” one usually has at this point.

For example, I have a certain habit of hoarding hardware synthesizers – from Soviet-era monstrosities like the Поливокс to modern machines. They have a few things in common, as they

  1. take quite some space (due to their extensive wiring) and some are heavy
  2. idle around for most of the time (I mean, I have a job)
  3. are versatile enough not to have a clear-cut function in any workflow.

These are somewhat essential. All in all, I had some unassigned space in my new home, and my machines instantly colonized it. Like the original version of “work expands to fill the time available”, there seems to be another Parkinson’s law correlating hardware synthesizers and the space you allow them to have. So basically, they now occupied one room of their own.

Which could be the end of the story and everyone would live happily ever after. If you have, like, a Googol amount of rooms. And considering Point 2 above – this solution feels quite like a waste.

Now – also last year, I began to deep-dive into the techniques of user experience. And basically, my problem is pretty much comparable to that of a customer with a few quite diverging requirements; who wants his problem solved, but hasn’t yet figured out his actual needs to begin with.

Ergo, I should be able to use my insights from the field of UX, or interaction design, directly to my advantage, by applying techniques of user research to my own behaviour.

And the first question should always be: by which approach do we get the largest understanding with the least effort? But the zeroth question is actually: how “wicked” is my problem? Do I have an absolutely ill-defined, ever-changing, non-testable set of challenges? Or is my problem-solving rather a technical feat of implementing the best patterns, clearest details, best documentation?

Indeed. Point 3 above makes my problem rather ill-defined. On some wickedness scale, with 1 being a quick YouTube search for a how-to, and 10 being a problem that would require a week-long mountain retreat with daily meditation sessions… I would give it a 6-7.

This feels like it should be in the range of difficult, but solvable with a kind of generalized abstract thinking. I chose to opt for the technique of the Five Whys: the goal is to find a consecutive number of deeper questions behind my problem. But generally, I understand this as

  • “Five” is a general number one can aim for. There can be more, less, and there can be branches in the questioning.
  • “Why” is a placeholder for any qualifier that goes to a more abstract level. It can also be a „What do you want…“, „How about…“, „What‘s wrong with…“ serving that purpose

So. I consider myself sitting on opposite sides of a table and asking:

  1. What do you want to accomplish, and why?
    • I want to have a truckload of synthesizers, fully functional, and also not to waste my space.
  2. Why do you need your space?
    • Not only do I also have other stuff. Less clutter is generally a way to improve life quality.
  3. Why don‘t you just get rid of the machines, i.e. selling, basement, …?
    • Well. I do want to have access to spontaneous synthesizer jams. The pandemic makes it hard to get that creative input anywhere else.
  4. Why does this need to be spontaneous?
    • Because musical flow, like any creative flow, comes spontaneously. I don‘t want to have a great idea slip away because of long efforts in setting up.
  5. Why would setup times have to be long?
    • Because in a strictly ordered system I need to collect stuff from various spaces, i.e. the synthesizer itself, the power plug from a box of power plugs, cables for MIDI, audio, control voltages, …
  6. Why would these have to be in their ordered boxes, for apart from the synthesizer?
    • Hm. Well, I guess they wouldn‘t have to be. That‘s just what one would do..?

Now basically, this example did not unfold as straightforwardly as I just made it seem, but it helped me channel my focus onto a rule that is actually quite well known in the design of enjoyable user interfaces: “Group things according to their usage, not their nature”. This is a form of reducing cognitive load. UI designers sometimes make this mistake, e.g. grouping “everything that is a filter” in one section, “everything that is a configuration” in another, and so on – focussing more on the technical implementation of what these things are than on their proximity in the actual domain. This gets worse the more technical a domain is in its essence, and that is exactly the mistake I made.

Wiring hardware synthesizers together is a very technical task, but there is much more use in grouping things by how one uses them together than by how one defines them.
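Translated into interface terms, a purely hypothetical settings structure may illustrate the difference (all names here are made up for the sake of the example):

// Grouped by nature: technically tidy, but a user preparing a recording
// has to visit three different sections.
const settingsByNature = {
  filters:        ["inputFilter", "exportFilter"],
  configurations: ["inputDevice", "exportFormat"],
  toggles:        ["inputMonitoring", "exportNormalize"],
};

// Grouped by usage: everything needed for one task sits together.
const settingsByUsage = {
  recording: ["inputDevice", "inputFilter", "inputMonitoring"],
  exporting: ["exportFormat", "exportFilter", "exportNormalize"],
};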

So now I have it. I, as a user, am happy that I, as a researcher, actually asked stupid questions like “Why don’t you sell all the garbage?” in order to channel my focus towards a system in which setting everything up is rather quick – reducing the cognitive load, or rather shifting it to the cleanup process. Which is fine, because I can now trust the system 😉

Hacking one’s wetware by sleeping slantwise

Over the turn of the year I took some weeks off, in order to work on some private projects, finish some books that I had left halfway-finished for far too long, and of course, to reflect on what concepts like New Year could actually mean. As usual, the short answer is… not much per se, but one can seize the occasion to contrive a few goals for the year. You know, not these mundane, marginal resolutions like “on February 20th I will definitely go for a run”, or ambiguous abstractions like “in 2021 I’m finally gonna be a people person!”, but a more profound search for something new – a kind of hunt for undiscovered instruments in the toolkit of one’s being, the juicy stuff.

Now we’ve seen for quite some years that the world of self-improvement likes to border on the superstitious. From particularly fine-tuned compositions of one’s diet plan to the sheer religious belief in certain routines, there’s no shortage of shady suggestions that draw their data from unique success stories, which makes me wonder: even if there were a kind of biological truth to the underlying claims, how could I survive the cringing of my heart that I would experience by reading these articles?

A special downside of solutions of the “collection of very intricate details” kind is that they aim to intervene in your life at a very incessant level. That is, you have to think about them all day in fear of breaking the patterns: you not only distract yourself from the stuff you actually want to do, you also get no certainty of feedback – if something appears to help, you don’t know what exactly it is that helps; and if there’s no effect to be noticed, you don’t know which detail might have been done wrong, blasting away all the other efforts. I guess I would only resort to such methods if I had the impression that something was seriously wrong with me, and I currently don’t notice people telling me this more than once a week, so it’s fine.

Then there are the kinds of solutions that I consider just minor variations of the stuff one already sort of knows. E.g. I don’t consider “doing some physical movement once in a while is quite ok” a real form of “bio-hack”, as that is just common knowledge. Ok, you still have to actually apply it, fair enough, but from an intellectual standpoint it’s boring.

So anyway, I’m still convinced there ought to be some ways of modifying your lifestyle at quite a beginner-suitable level – ones that can easily be opted into, with a low requirement of risk or investment – and that’s where I usually start listening.

A few years back, for instance, that was the discovery of intermittent fasting, which in my case happened to show nearly instantaneous effects, mostly positive (e.g. for my subjective impressions of attention and overall well-being). This is something I’d easily recommend trying out a few times; as for me, though, I’m not really inclined to implement it right now.

One can probably be a bit ambivalent about all these over-the-(online-)counter supplement prescriptions that are listed everywhere. I’m not going to recommend the regular use of any substance here, but there are plenty of articles about nootropics and other cognitive enhancers; some of these articles also border on the quasi-religious realm, others appear more scientific (the usage of coffee lives in the same domain, and I like that a lot) – so I’ll leave it to the reader to form his own opinion.

Having said that, I can now finally reveal what this post is all about, as I seem to have found another intriguing way, which I’ve been trying out for a few weeks now. I came across the claim that it is advantageous to sleep on an inclined bed. That certainly was outside my current scope; the underlying claims are about an improved flow of the glymphatic system, as well as pressure regulation. Be that as it may, it doesn’t sound harmful and it’s easily attainable: just raise your bed’s head end about 10-15 cm with some suitable, stable risers, available for below 20€. I found it only weird for the first night, and I seemed to wake up in a better mood. Together with such nice add-ons as a wake-up light (if you have one), this certainly qualifies as a “why didn’t anyone tell me earlier?” moment for me.

So, if you have more intriguing bio-hacks that you consider definitely on the non-quacky side, I’m interested 🙂

Bridging Eons in Web Dev with Polyfills

Indeed, web development is kind of peculiar. On the one hand, there’s seldom a field in which new technologies supersede each other at such a pace, creating very exciting opportunities ranging from quickly sketching out proofs of concept to the efficient construction of real-world applications. On the other hand, there is this strange air of browser dependency, and with any new technology one acquires, there’s always the question of whether it is just some temporary fashion or here to stay.

Which is why it happens that one would like to quickly scaffold a web application on the basis of React and its ecosystem, but has the requirement that the customer is – either voluntarily or forced by higher powers – using some legacy browser like Internet Explorer 11, for which Microsoft has recently announced the end of support for 30th November this year. Which doesn’t sound nice for the… *searching quickly* … 5% of desktop/laptop users that still ride this old horse, but then again, how long can you cling to an outdated thing?

For the daily life of a web developer, whose mind is full of the peculiarities that the evolution of the ECMAScript standard (which basically is JavaScript) keeps bringing along, there is the practical helper of caniuse.com, telling you, for every item of your code you want to know about, which browsers / devices have support and which don’t.

But what about whole frameworks? When I recently had my quest for an IE11-compatible React app, I already feared that at every corner I would need to double-check all my doings – especially given that for the development itself, one is certainly advised to use one of the browsers that come with quite some helpful developer tools, like extensions for React, Redux, etc., but also the features of the built-in console, where it makes your life a lot easier whether a logged state shows up as an opaque “[object Object]” string or as a fully interactive display of object properties. Sorry, IE11, there are reasons why you have to go.

But actually, then, I figured that my request is maybe not that far outside the range of rather widespread use cases. Thus, the chances that someone had already tried to tackle the problem weren’t so hopeless. And indeed, this works pretty straightforwardly:

  • Install “react-app-polyfill”, e.g. via npm:
npm install react-app-polyfill
  • At the very top of your index.js, add for good measure:
import "react-app-polyfill/ie11";
import "react-app-polyfill/stable";
  • Include “IE 11” (with quotes) in your package.json under “browserslist” (note the spelling), as a new entry under both “production” and “development” – see the sketch right after this list
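For orientation, this is roughly how the relevant part of package.json could look afterwards – a minimal sketch, assuming the default queries that create-react-app generates (an existing project may list different ones):

"browserslist": {
    "production": [
      ">0.2%",
      "not dead",
      "not op_mini all",
      "IE 11"
    ],
    "development": [
      "last 1 chrome version",
      "last 1 firefox version",
      "last 1 safari version",
      "IE 11"
    ]
  }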

That should do it. There are people on the internet who advise removing the “node_modules/.cache” directory when doing this in an existing project – presumably so that no stale, un-polyfilled build output is served from the cache.
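If you want to be on the safe side there, something along these lines from the project root should suffice (assuming a Unix-like shell):

# clear the cached build output before the next start
rm -rf node_modules/.cache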

The term “polyfill” is actually derived from a brand of wall putty (“Polyfilla”), which is quite a nice image: it is all about smoothly filling in the feature gaps of a browser, allowing a developer to use accustomed, modern features while maintaining support for the actual production environment.
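To illustrate the principle, here is a hand-rolled miniature – not production code; real polyfills, like the core-js ones that react-app-polyfill pulls in, are far more thorough about the spec’s edge cases:

// Only patch if the feature is actually missing – otherwise the
// native implementation stays untouched.
if (!String.prototype.includes) {
  String.prototype.includes = function (search, start) {
    // Simplified: delegate to indexOf, which even old browsers support.
    return this.indexOf(search, start || 0) !== -1;
  };
}

// From here on, the code can use .includes() regardless of the browser:
console.log("legacy".includes("gac")); // true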

Another very useful helper in this undertaking – strictly speaking a Babel transform rather than a polyfill, since it rewrites syntax at build time instead of patching APIs at runtime – was…

// install (as a dev dependency)
npm install --save-dev @babel/plugin-transform-arrow-functions

// then add to the "babel" > "plugins" config array in package.json:
"babel": {
    "plugins": [
      "@babel/plugin-transform-arrow-functions"
    ]
  }

… as I find the newfangled arrow function notation quite useful.
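For illustration, this is roughly what the transform does to one’s code – a simplified sketch; the actual Babel output additionally handles details such as the lexical binding of this:

// Input: an ES2015 arrow function, which IE11 cannot even parse:
const add = (a, b) => a + b;

// Output (roughly): plain old ES5 that IE11 understands:
// var add = function (a, b) {
//   return a + b;
// };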

So, this seems to bridge (most of) the worries one encounters in this web dev world, where use cases span eons of technological evolution. Now, do you know any more useful polyfills that make your life easier?