Composition of C# iterator methods

Iterator methods in C# are one of my favorite features of that language. I do not use them all that often, but it is nice to know they are there. If you are not sure what they are, here’s a little example:

public IEnumerable<int> Iota(int from, int count)
{
  for (int offset = 0; offset < count; ++offset)
    yield return from + offset;
}

They allow you to lazily generate any sequence directly in code. For example, I like to use them when generating a list of errors on a complex input (think compiler errors). The presence of the yield contextual keyword transforms the function body into a state machine, allowing you to pause it until you need the next value.
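
Just to illustrate the laziness, here is a hypothetical consumer (Console.WriteLine stands in for whatever processing you actually do):

foreach (var value in Iota(5, 3))
  Console.WriteLine(value); // prints 5, 6, 7 – each value is produced only when the loop asks for it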

However, this makes it a little more difficult to compose such iterator methods, and in reverse, refactor a complex iterator method into several smaller ones. It was not obvious to me right away how to do it at all in a ‘this always works’ manner, so I am sharing how I do it here. Consider this slightly more complex iterator method:

public IEnumerable<int> IotaAndBack(int from, int count)
{
  for (int offset = 0; offset < count; ++offset)
    yield return from + offset;

  for (int offset = 0; offset < count; ++offset)
    yield return from + count - offset - 1;
}

Now we want to extract both loops into their own functions. My one-size-fits-all solution is this:

public IEnumerable<int> AndBack(int from, int count)
{
  for (int offset = 0; offset < count; ++offset)
    yield return from + count - offset - 1;
}

public IEnumerable<int> IotaAndBack(int from, int count)
{
  foreach (var x in Iota(from, count))
     yield return x;

  foreach (var x in AndBack(from, count))
     yield return x;
}

As you can see, a little ‘foreach harness’ is needed to compose the parts into the outer function. Of course, in a simple case like this, the LINQ version Iota(from, count).Concat(AndBack(from, count)) also works. But that only works when the outer function is sufficiently simple.
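
For reference, that LINQ variant of the simple case could be written like this (a sketch; it needs a using System.Linq directive and replaces the foreach harness entirely):

// requires: using System.Linq;
public IEnumerable<int> IotaAndBack(int from, int count)
  => Iota(from, count).Concat(AndBack(from, count));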

You are not safe with Semantic Versioning (right now).

TL;DR: Several recent hijacks of widely used npm libraries should make you think twice about trusting the package.json semantic-versioning notation using carets and tildes.

So what’s that about?

Version updates are one of the most haunting things anyone with any kind of computer ever encounters. On the one hand, it’s good news – something that you use has evolved again, and you are right at the source: better tools, fewer bugs, new functionality, usually delivered by just a few clicks. But there are always these question marks – do I want to know what happened there? Do I want that now? Wasn’t that old version working just fine?

So we all know that dilemma. I mean, as we speak, every Windows user is kindly reminded that version 11 could now live on her system. But is it compatible with your system? Will this work out of the box? Is it worth trying now, or do you wait until the waters have settled? And while Microsoft asks that question only once every few years, there’s software like Notepad++ that wants to push its updates on me at every single start. Because apparently, text editors can grow up so fast…?

Now imagine this problem, times a few googol, and you are in the everyday world of every web developer. In the npm universe, you have nearly total modular flexibility, which comes with a myriad of small packages that update for whatever reason – one likes to believe these are mostly bugfixes. Or might they be breaking changes? Did JavaScript evolve again? Do you want that stuff? Are you state-of-the-art, or are you in dependency hell?

Issues like these are the motivation behind semantic versioning. The idea is that you place a certain level of trust in the usual three-part “major.minor.patch” version label. Version “3.10.1” means

  • Major version 3; the API of your software is promised to be compatible within each major version (except for the initial version 0)
  • Minor version 10; a new minor version is used for new, backwards-compatible API functionality, e.g. version 3.0.0 has been extended ten times without breaking backwards compatibility
  • Patch version 1, which is incremented for bugfixes within that API specification, i.e. backwards compatible within the whole major version.
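
In short, sticking with that example: a pure bugfix takes you from 3.10.1 to 3.10.2, a new backwards-compatible feature to 3.11.0, and a breaking change to 4.0.0.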

This is done in good faith: only when the package maintainer “re-thinks” his API is the major version incremented, and that establishes a level of trust – in return, you only have to re-think your usage of that API if you switch the major version. Therefore, dependency management systems like the npm / yarn package.json allow for a convenient notation to specify e.g.

"dependencies": {
    /* ... */
    "styled-components": "^5.1.1",
    "websocket": "~1.0.31"
  },

The caret (^) notation tells us that when the styled-components package was added to our project, we installed version 5.1.1, but we trust the npm universe enough that every future execution of “npm install” / “yarn install” may increase this version within the same major version, e.g. pick up version 5.2.0 if it was released in the meantime. As we speak, we are at version 5.3.3, so this project stays well up-to-date with whatever the good folks put in there.

Similarly, the tilde (~) notation only allows that behavior within the given minor version, e.g. at the moment any call of “… install” would retrieve the current version 1.0.34, but would never get version 1.1.0, whenever that is released.

The opposite of using these is called dependency pinning, and there is lots of further reading available, e.g. here.
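
Pinned, the dependency block from above would simply drop the caret and tilde, so every install resolves to exactly these versions (same example packages as before):

"dependencies": {
    /* ... */
    "styled-components": "5.1.1",
    "websocket": "1.0.31"
  },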

There is a certain misconception that “… install” will only update any of these versions if there is no “package-lock.json” (for npm) or “yarn.lock” (for yarn) around. That is not the case – see below – but first, my actual point.

So the point of semantic versioning is the establishment of trust between the package developer and the user: “This update only changes about that much”.

Problem: You cannot trust the npm universe right now.

Now the last weeks showed us not only a hijacking of the npm package ua-parser-js at the end of October, but also another one of the packages coa and rc 11 days ago. These appeared somewhat correlated and came with a mixture of password stealing and secret installation of crypto-mining tools, all in all the result of some bad folks getting access to these package repositories and making them execute malicious code in their install scripts – note that install scripts are not uncommon for widely used npm packages. So while you can complain that these hackers did not really adhere to the Semantic Versioning code (oh..??), and while these breaches were noticed within a couple of hours each, think about this:

anyone with a certain caret or tilde in her package.json might have infected herself just by an unluckily-timed call of “npm install”.

Think of an automated script. Think of CI. Think of anyone who just wants to build his project and be as up-to-date as one can get. A survey of npm developers last year showed that usage of two-factor authentication is just below 10%, and while this doesn’t mean that the other 90% are completely irresponsible, there just is no system in place that promises us that such attacks will go away any time soon.

So of course we cannot write every dependency of our projects ourselves, especially the ones that are not direct dependencies. But think of it as Russian roulette: at least you can minimize the number of times you pull the trigger.

You cannot know which package is affected next. You better make sure to pin each version to exactly a version you can trust right now, and if you are ever in need of updating, at least do a quick search first – to see whether there’s some sh*t going down right now.

Do you have further ideas on how to isolate your development / CI environments from whatever just happens in the outer rims of the npm universe? Please feel free to share.

How to make npm / yarn respect their respective lockfiles (package-lock.json / yarn.lock)

In principle, you can even live with the caret / tilde if you make sure that you never actually call “npm / yarn install” itself, but instead make them treat their so-called lockfiles as actual lockfiles. In their current versions, these calls should lead to that behaviour:

# instead of npm install
npm ci

# instead of yarn install
# for yarn 1.x:
yarn install --frozen-lockfile
# for yarn 2:
yarn install --immutable

As you can see from the npm call, this is especially suited for CI environments. It also means you have to make sure the package-lock.json / yarn.lock is part of your repository.

One disadvantage of this approach is that npm really likes to notify you that you are not very up to date, and produces lots of noise that you will want to get rid of. Just be sure to pay some amount of attention when you do update.

Pattern Matching and the SLAP

You probably know that effect: you start writing a lot of code in a new language, after a while gain a decent appreciation of this or that goodness and a decent annoyance about this or that oddity… Then you do some other project in other languages (the Softwareschneiderei projects are quite diverse in this respect), and each time you switch languages there’s that small moment of pondering certain design decisions.

Then after a while, sometimes there’s that feeling of a deep “ooooooh”, and you get an understanding of a fundamental mindset that probably must have been influential there. This is always interesting, because you can start to try using similar patterns in other languages, just to find out whether they are generally useful or not.

Now, as I’ve been writing quite some Rust code lately, I somehow started to like the style of pattern matching it proposes. Suppose you have a structure that is a composition of multiple decisions, which belong together closely enough that you don’t split it up into multiple pieces. That might be the state of some file handling that was passed to your program, as an example. Such a structure could look like this (u32 is Rust’s unsigned 32-bit integer type):

enum InputState {
    Uninitialized,
    PlainText(String),
    ImageData {
        width: u32,
        height: u32,
        base64Data: String,
    },
    Error {
        code: u32,
        message: String
    }
}

Now, in order to read such a thing in a context, there’s the “match” statement, a kind of “switch/select with superpowers”, in that it allows you to simultaneously destructure its content to reduce unnecessary typing. This might look like

fn process_input(state: InputState) {
    match state {
        InputState::Uninitialized => {
            println!("Uninitialized. Nothing to do!");
            std::process::exit(0);
        },
        InputState::PlainText(str) => {
            display_string(str);
        },
        InputState::ImageData { base64Data, .. } => {
            println!("This seems to be an image and is now to be displayed");
            display_image(base64Data);
        }
        _ => (),
    }
}

(I probably forgot several &references in writing that example, but that’s Rust and not my point here :D). Anyway. At first, I was quite irritated with that – why does Rust want me to always include the placeholders (the _)? Why can’t it just let me take care of the stuff I want to take care of right now? Why does the compiler complain instead of just assuming _ => (), i.e. if nothing else matches, do nothing? But I eventually found out.

To make my point, here is a comparison with a more loosely typed environment like JavaScript, where (as a quick sketch) one could have written that as

// inputState might be null, or {message: "bla"},
// or {width: ..., height:..., base64Data}, or {code: ..., message: "bla"}...

if (!inputState) { return; }
if (inputState.code) { /* Error case */ }
else if (inputState.base64Data) { /* Image case */ }
else if (inputState.message) { /* probably the PlainText case */ }

/* but are you sure you forgot nothing? */

Now my point is that these are not merely translations of the same idea between different languages. They are structurally different.

The latter example is a very microscopic view. I have a kind of squishy-looking alien thing called inputState lying in the center of my operating table; I take the scalpel and dissect – what do we have here? What color is that? Does this have bones? … Without further ado, you grab into the interior of whatever it is, and you’d better hope you’re not in some kind of sci-fi horror movie… :O

The former Pattern Matching, however, is an approach more true to its original question. It stops you with your scalpel right at the beginning.

We first want to know all eventualities. Then cut where it makes sense, and free your mind from the first decision.

We just wanted to distinguish our proceeding depending on the general nature of our object of interest. We do not want to look into details at that moment, we just want to know where we are.

In the language of Clean Code, this is the Single Level of Abstraction Principle (SLAP). It states that each method you write should concern itself with a single, constant level of looking at a certain problem. E.g. low-level mathematical calculations like millisecond vs. system time conversions should not appear next to high-level server initialization – for the simple reason of understandability: switching levels of abstraction is quite irritating for the human brain (i.e. for everyone who didn’t write that code). It breaks your line of thinking, especially when you are trying to find a bug, or worse.

From my experience, I know that I myself would argue “well that’s not true; I can indeed hold multiple levels of abstraction in my head simultaneously!”, and this isn’t really a lie. But I still see embarrassing mistakes later, of the type “of course there’s also that case. I should have known.”

Another metaphor: suppose you just ask your friend for the time. She then directly launches into a very detailed lecture about the movement of Sun and Earth, paired with some epistemological considerations about the Heisenberg uncertainty principle, not to neglect the role of time in the concept of entropy. Would you rather respond with “Thanks, that helped” or “… are you on drugs?”?

With pattern matching, it is the same. Look at a single decision, and first lay all the options open: “What is this?” – then go on to another method. “What to do about it?” is another level. “How?” goes deeper, and “how, actually, if you consider this or that additional condition?” even deeper. And at the bottom there is something like components that just take one input and mangle its bytes without regard for higher morals.
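
Transferred to the Rust example above, such a layering could look like this sketch – the match only answers “what is this?”, and each arm hands the “what to do about it?” over to its own (here hypothetical) handler function:

fn process_input(state: InputState) {
    // top level: only decide what kind of input we are looking at
    match state {
        InputState::Uninitialized => handle_uninitialized(),
        InputState::PlainText(text) => handle_text(text),
        InputState::ImageData { base64Data: data, .. } => handle_image(data),
        InputState::Error { code, message } => handle_error(code, message),
    }
}

// one level deeper: what to do about a plain text input
fn handle_text(text: String) {
    display_string(text);
}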

It is a very helpful principle that still sometimes needs a little reminder. For me, I was just happy to see that kind-of-compulsive approach embodied by the Rust match operator.

So, how far do you think one should take it with SLAP? Do you manage to always follow it, or does it work differently for you?

PS: Of course, one could introduce the same caution by defining a similar control flow also in JavaScript. There’s no need to break the SLAP. But it makes you the one responsible for keeping track, while in my Rust example you have the annoying linting / compiler do that for you.

Flexible React-Redux Hook Mocks in jest & React Testing Library

Best practices for mocking React components aren’t entirely unheard of, even in connection with a Redux state, and even in connection with the quite convenient hooks notation ({ useSelector, useDispatch }).

So of course, knowledge of a proper approach is at hand. And in many scenarios, it makes total sense to follow the principle of arranging the Redux state in your test exactly as you would in your real-world app.

Nevertheless, there are reasons why one wants to introduce a quick, non-overwhelming unit test of a particular component, e.g. when your system is in a state of high fluctuation because multiple parties are still converging on their interfaces and requirements; a complete mock would then be quite premature – more of a major distraction than any real help.

Proponents of strict TDD would now object, of course. Anyway – fortunately, the combination of jest with React Testing Library is flexible enough to give you the tools to drill into any of your state-connected components without much knowledge of the rest of your React architecture.*

(*) of course, these tests presume knowledge of your current Redux store structure. In the long run, I’d also consider this bad style, but during highly fluctuating phases of development, I’d favour the explicit “this is how the store is intended to look” as safety by documentation.

On a basic test frame, I want to show you three things:

  1. Mocking useSelector in a way that allows for multiple calls
  2. Mocking useDispatch in a way that allows expecting a specific action creator to be called.
  3. Mocking useSelector in a way that allows for mocking a custom selector without its actual implementation

(Upcoming in a future blog post: mocking useDispatch in a way that allows for async dispatch-chaining as known from Thunk / Redux Toolkit. But I’m still figuring out how exactly to do it…)

So consider your component to be e.g. as simple as:

import {useDispatch, useSelector} from "react-redux";
import {importantAction} from "./place_where_these_are_defined";

const TargetComponent = () => {
    const dispatch = useDispatch();
    const simpleThing1 = useSelector(store => store.thing1);
    const simpleThing2 = useSelector(store => store.somewhere.thing2);

    return <>
        <div>{simpleThing1}</div>
        <div>{simpleThing2}</div>
        <button title={"button title!"} onClick={() => dispatch(importantAction())}>Do Important Action!</button>
    </>;
};

export default TargetComponent;

Multiple useSelector() calls

If we had a single call to useSelector, we’d be done as easily as making useSelector a jest.fn() with a mockReturnValue(). But we don’t want to constrain ourselves to that. So what works, in our example, is to actually construct a mock store as a plain object and give our mocked useSelector a mockImplementation that applies its argument (which, as a selector, is a function of the store) to that store.

Note that for this simple example, I did not concern myself with useDispatch() that much. It just returns a dispatch function of () => {}, i.e. it won’t throw an error but also doesn’t do anything else.

import React from 'react';
import { render, screen, fireEvent } from '@testing-library/react';
import TargetComponent from './TargetComponent';
import * as reactRedux from 'react-redux';
import * as ourActions from './actions';

jest.mock("react-redux", () => ({
    useSelector: jest.fn(),
    useDispatch: jest.fn(),
}));

describe('Test TargetComponent', () => {

    beforeEach(() => {
        useDispatchMock.mockImplementation(() => () => {});
        useSelectorMock.mockImplementation(selector => selector(mockStore));
    })
    afterEach(() => {
        useDispatchMock.mockClear();
        useSelectorMock.mockClear();
    })

    const useSelectorMock = reactRedux.useSelector;
    const useDispatchMock = reactRedux.useDispatch;

    const mockStore = {
        thing1: 'this is thing1',
        somewhere: {
            thing2: 'and I am thing2!',
        }
    };

    it('shows thing1 and thing2', () => {
        render(<TargetComponent/>);
        expect(screen.getByText('this is thing1')).toBeInTheDocument();
        expect(screen.getByText('and I am thing2!')).toBeInTheDocument();
    });

});

This is surprisingly simple, considering that one doesn’t find this example scattered all over the internet. If, for some reason, you require more stuff from react-redux, you can always spread the actual module in there,

jest.mock("react-redux", () => ({
    ...jest.requireActual("react-redux"),
    useSelector: jest.fn(),
    useDispatch: jest.fn(),
}));

but remember that in case you want to build full-fledged test suites, why not go the extra mile to construct your own Test store (cf. link above)? Let’s stay simple here.

Assert execution of a specific action

We don’t even have to change much to look for the call of a specific action. Remember, we presume that our action creator is doing the right thing already; for this example we just want to know that our button actually dispatches it. E.g. you could have tied that to various conditions, the button might be disabled or whatever… so in practice that could be less trivial than our example.

We just need to know what the original action creator looks like. In jest language, this is known as spying. We add the following parts:

// ... next to the other imports...
import * as ourActions from './actions';



    //... and below this block
    const useSelectorMock = reactRedux.useSelector;
    const useDispatchMock = reactRedux.useDispatch;

    const importantAction = jest.spyOn(ourActions, 'importantAction');

    //...

    //... other tests...

    it('dispatches importantAction', () => {
        render(<TargetComponent/>);
        const button = screen.getByTitle("button title!"); // there are many ways to get the button itself, e.g. screen.getByRole('button') if there is only one button, or, to be really safe, screen.getByTestId() with a data-testid="..." attribute.
        fireEvent.click(button);
        expect(importantAction).toHaveBeenCalled();
    });

That’s basically it. Remember that we really disfigured our dispatch() function. What we cannot do this way is something like

// arrangement
const mockDispatch = jest.fn();
useDispatchMock.mockImplementation(() => mockDispatch);

// test case:
expect(mockDispatch).toHaveBeenCalledWith(importantAction()); // won't work

Because even if we get a mocked version of dispatch() that way, the spied-on importantAction() call is not the same object as the one that happened inside render(). So again, in our limited scope, we just don’t do it: dispatch() doesn’t do anything, and importantAction just gets called once inside.

Mock a custom selector

Consider now that there are custom selectors which we don’t care about much; we just need them to not throw any error. E.g. below the definition of simpleThing2, this could look like

import {useDispatch, useSelector} from "react-redux";
import {importantAction, ourSuperComplexCustomSelector} from "./place_where_these_are_defined";

const TargetComponent = () => {
    const dispatch = useDispatch();
    const simpleThing1 = useSelector(store => store.thing1);
    const simpleThing2 = useSelector(store => store.somewhere.thing2);
    const complexThing = useSelector(ourSuperComplexCustomSelector);
    
    //... whatever you want to do with it....
};

Here, we want to keep it open how exactly complexThing is computed. This selector is considered to already be tested in its own unit test; we just want its value to not fail, and we can really do it like this (the following parts are added / changed):

import React from 'react';
import { render, screen, fireEvent } from '@testing-library/react';
import TargetComponent from './TargetComponent';
import * as reactRedux from 'react-redux';
import * as ourActions from './actions';
import {ourSuperComplexCustomSelector} from "./place_where_these_are_defined";

jest.mock("react-redux", () => ({
    useSelector: jest.fn(),
    useDispatch: jest.fn(),
}));

const mockSelectors = (selector, store) => {
    if (selector === ourSuperComplexCustomSelector) {
        return true;  // or whatever value we want it to return
    }
    return selector(store);
}

describe('Test TargetComponent', () => {

    beforeEach(() => {
        useDispatchMock.mockImplementation(() => () => {});
        useSelectorMock.mockImplementation(selector => mockSelectors(selector, mockStore));
    })
    afterEach(() => {
        useDispatchMock.mockClear();
        useSelectorMock.mockClear();
    })

    const useSelectorMock = reactRedux.useSelector;
    const useDispatchMock = reactRedux.useDispatch;

    const mockStore = {
        thing1: 'this is thing1',
        somewhere: {
            thing2: 'and I am thing2!',
        }
    };

    // ... rest stays as it is
});

This wasn’t obvious to me, as you never know what jest is doing behind the scenes. But indeed, you don’t have to spy on anything for this simple test: the ourSuperComplexCustomSelector inside TargetComponent and the argument passed to useSelector really are the identical function.

So, yeah.

The combination of jest with React Testing Library is obviously quite flexible in allowing you to choose what you actually want to test. This was good news for me, as testing frameworks in general might try to impose their opinions on your style, which isn’t always bad – but in a fast-changing environment, as anything involving React and Redux is, sometimes you just want to have a very basic test case in order to concern yourself with other stuff.

So, without wanting you to lower your style to such basic constructs, I hope this gave you some insight. In a more production-ready state, I would still go the way the krawaller.se blog post mentioned above suggests – it makes sense. I was just here to get you started 😉

Pie-Decoration with Python

A while ago, I found out that it’s actually quite easy to write your own decorators in Python – that thing called the “pie” syntax, available since Python 2.4 but still mostly the domain of frameworks like Django, Flask or PyTango. As to the “what” and especially the “why??”, one might benefit from some thoughts I recently verified.

Motivation: As several of our software projects help our customers support their experimental research with Supervisory Control and Data Acquisition (SCADA) applications, we have built a considerable knowledge base around development with the open-source Tango Controls system. For me as a developer, its Python implementation PyTango makes my life significantly easier, because I can sketch out Tango applications – Tango device servers, clients for data analysis, plotting live data etc. – with little to no overhead. Less than an hour of development time usually gets you started.

PyTango actually makes it that easy to code the basic functionality of a device in a network structure with all the usual Tango features – well, you have to set up Tango once, but then the programming becomes easy. E.g. a current measurement device which is accessible from anywhere in your network could basically need no more than:

from tango.server import Device, attribute


class PowerSupply(Device):
 
    def init_device(self):
        Device.init_device(self)
        self._current = 0
 
    @attribute(label="Current", unit="mA", dtype=float)
    def current(self):
        return self._current
 
    @current.write
    def current(self, current):
        self._current = current

Compare that to the standard C++ Tango implementation, where all that would be loads of boilerplate code.

Now, this is not unusual for Python: hiding away the ugly implementation in its libraries or frameworks so you can focus on the content. However, you need to trust it, and depending on what kind of Python code you usually enjoy reading, this can make you pause for a moment. What is this usage of @ doing there? Isn’t this Java code??

But to the contrary: in Java, the @annotations are a way of basically interfering with the way the compiler itself behaves. For Python, the pie is pure syntactical sugar. In a way, both are your source code’s way of telling you “the next thing is treated in a specific way”, but whereas you would have to dig quite deep in Java to produce useful custom annotations (if you know otherwise, tell me), decorating pies in Python can easily help you to dynamically abstract layers of code away to places where they are nicer and cleaner.

It turns out: A Python decorator is any function that maps a function to a function; i.e. it takes a function as an argument and returns the same or a different function. Easy enough; as in Python, functions are first-class objects. They can be passed around and extended accordingly.

But… Every programming pattern comes with the question: “What do I gain for learning such a new habit?”

Can and should you write your own decorators?

Under the hood, something like this happens. You define a decorator e.g. like

def my_decorator(func):
    print("the decorator is initialized")

    def whatever():
        print("the function is decorated")
        return func()

    return whatever

giving you instant access to your own decorator magic:

@my_decorator
def my_func():
    print("the decorated function is called")

A function call of my_func() now results in the output

the function is decorated
the decorated function is called

while the first line, “the decorator is initialized”, is only executed once, at the point where the @my_decorator syntax is applied to the function definition.

This pie syntax is pure syntactical sugar. It does nothing more than if you instead had defined:

def my_anonymous_func():
    print("the decorated function is called")
my_func = my_decorator(my_anonymous_func)

But it appears somewhat cleaner, avoiding the extra “my_anonymous_func” name that is not useful anywhere else in your code. You stash away more irrelevant implementation details from the actual meaning of “what is this method/function actually for?”. Now you can pack my_decorator into your own set of utility functions as you would with any re-usable code block. To make the example more sophisticated, you can also process function arguments (*args, **kwargs) this way, or extend previously decorated functions with further functionality… and while the decorator itself might be a bit more complex to read, you can test it separately, and from then on, the actual __main__ code is tidy and easy to extend.

def my_decorator(func):
    # do setup stuff here

    def whatever(*args, **kwargs):
        print("The keyword argument par is:", kwargs.get('par', None))
        return func(*args, **kwargs)

    def whatever2(func2):
        def some_inner_function():
            print("Call the second function and do something with its result")

            # note that the _actual_ function call of the decorated function would happen in the very core of the decorating function
            return func2()
        return some_inner_function

    result = whatever
    result.other_thing = whatever2
    return result


if __name__ == '__main__':
    print("START")

    @my_decorator
    def my_func(*args, **kwargs):
        print("original my_func", kwargs.get('par', None))

    @my_func.other_thing
    def my_func2():
        pass

    my_func(par='Some keyword argument')
    my_func2()

So really, we are equipped to do anything. Any scenario in which a function call should be guided by one or more extra actions might be nicely packaged in a decorator. That gives you a number of good use cases, e.g.

  • Logging (if only for debug reasons)
  • Additional metrics (e.g. performance evaluation)
  • Type checks (making even simple scripts more robust)
  • Error Handling
  • Registering of any service or entity e.g. in one centralized list structure
  • Updates on a user interface
  • Caching, e.g. to ease some expensive operation or asynchronous action
  • or any such meta-pattern that you find yourself repeating in otherwise unrelated scenarios.

And more than that, a good name choice of your decorator gives your code a nice self-explaining semantic structure. You describe more “what”, less “how” of a block of code.
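
As a minimal sketch of the first use case from that list (logging), a hypothetical log_calls decorator could look like this – functools.wraps keeps the name and docstring of the decorated function intact:

import functools

def log_calls(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        print(f"calling {func.__name__} with {args} {kwargs}")
        result = func(*args, **kwargs)
        print(f"{func.__name__} returned {result!r}")
        return result
    return wrapper

@log_calls
def add(a, b):
    return a + b

add(2, 3)  # logs the call, logs the returned 5, and still returns it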

I like that pattern, but I always thought the @ looks somewhat cryptic. If you have any objection, want to warn me about unforeseen implications or generally think this was always super-obvious, let me know 🙂

Who watches the FileSystemWatcher?

One of the ways we sometimes implement communication with legacy systems is via the file-system. The legacy system will write files about some events into a predefined directory, and the other system watches this directory for changes. In C#/.NET, the handy FileSystemWatcher class is a great tool for that.

We had a working solution using that in production for the last two years. Then it suddenly stopped working. There was no apparent change in the related code, so I suspect our upgrade from .NET Core 2.1 to .NET 5.0 triggered the change in behavior. The code looked something like this:

public void BeginWatching()
{
    var filter = NotifyFilters.LastAccess | NotifyFilters.LastWrite
        | NotifyFilters.FileName | NotifyFilters.DirectoryName;

    var watcher = new FileSystemWatcher
    {
        Path = directory,
        NotifyFilter = filter,
        Filter = "*.txt",
        EnableRaisingEvents = true,
    };
    /* Hook up some events here... */
}

And after some debugging, it turned out none of the attached event handlers were firing anymore. The solution was to not let go of the watcher instance, and keep it around in the enclosing class.

this.watcher = new FileSystemWatcher

This makes sense, of course. Before, the watcher was only a local variable and could be collected by the garbage collector at any moment. That, in turn, shut the watcher down, so no more events were emitted.
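
A minimal sketch of the fixed variant, with the watcher held as a field so it stays reachable for the lifetime of the enclosing object (the class and field names here are just placeholders):

// requires: using System.IO;
public class LegacyFileWatcher
{
    private readonly string directory;
    private FileSystemWatcher watcher; // keeps the watcher alive as long as this object lives

    public LegacyFileWatcher(string directory) => this.directory = directory;

    public void BeginWatching()
    {
        this.watcher = new FileSystemWatcher
        {
            Path = directory,
            Filter = "*.txt",
            EnableRaisingEvents = true,
        };
        /* Hook up some events here... */
    }
}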

Interestingly, a few days later, my student asked about his FileSystemWatcher also no longer working. I immediately suspected the same problem, but when we looked at the code, he had already moved the watcher into a property of the enclosing class. Turns out, for him the problem was just one level up: the enclosing class was only created as a local variable, and the contained watcher stopped after that went out of scope.

Now the only question is: why did we never observe this before? Either something in the GC changed, or something in the implementation of the watcher changed. Can anyone enlighten the situation?

Applied User Research on my own pile of synthesizer machines

Last year, I moved. Moving into a new apartment is very much akin to a major rewrite of a complex piece of software. A piece of software with a limited number of users, maybe (well, me, my girlfriend, that’s it), but with an immensely sophisticated structure of requirements… despite the decade-long experience of “living somewhere” one usually has at this point.

For example, I have a somewhat peculiar habit of hoarding hardware synthesizers – from Soviet-era monstrosities like the Поливокс to modern machines, they have a few things in common, as they

  1. take quite some space (due to their extensive wiring) and some are heavy
  2. idle around for most of the time (I mean, I have a job)
  3. are versatile enough not to have a clear-cut function in any workflow.

These are somewhat essential. All in all, I had some unassigned space in my new home, and my machines instantly colonized it. Just like the original “work expands so as to fill the time available”, there seems to be another Parkinson’s law correlating hardware synthesizers and the space you allow them to have. So basically, they now occupied a room of their own.

Which could be the end of the story, and everyone would live happily ever after. If you have, like, a googol of rooms. And considering point 2 above – this solution feels quite like a waste.

Now – also last year, I began to deep-dive into the techniques of User Experience. And basically, my problem is pretty much comparable to a customer with a few quite diverging requirements, who wants his problem solved but hasn’t yet figured out his actual needs to begin with.

Ergo, I should be able to use my insights from the field of UX, or Interaction Design, directly to my advantage, by applying techniques of User Research to my own behaviour.

And the first question should always be: by which approach do we gain the most understanding with the least effort? But the zeroth question is actually: how “wicked” is my problem? Do I have an absolutely ill-defined, ever-changing, non-testable set of challenges? Or is my problem-solving rather a technical feat of implementing the best patterns, clearest details, best documentation?

Indeed. Point 3 above makes my problem rather ill-defined. On some wickedness scale, with 1 being a quick YouTube search for a how-to, and 10 being a problem for which I would need to go on a week-long mountain retreat with daily meditation sessions… I would give it a 6-7.

This feels like it should be in the range of difficult, but solvable with a kind of generalized abstract thinking. I opted for the technique of the Five Whys: the goal is to ask a chain of ever deeper questions behind my problem. Generally, I understand this as:

  • “Five” is a general number one can aim for. There can be more, less, and there can be branches in the questioning.
  • “Why” is a placeholder for any qualifier that goes to a more abstract level. It can also be a „What do you want…“, „How about…“, „What‘s wrong with…“ serving that purpose

So I imagine myself sitting on opposite sides of a table and asking:

  1. What do you want to accomplish, and why?
    • I want to have a truckload of synthesizers, fully functional, and also not to waste my space.
  2. Why do you need your space?
    • Not only do I also have other stuff. Less clutter is generally a way to improve life quality.
  3. Why don‘t you just get rid of the machines, i.e. selling, basement, …?
    • Well. I do want to have access to spontaneous synthesizer jams. The pandemic makes it hard to get that creative input anywhere else.
  4. Why does this need to be spontaneous?
    • Because musical flow, like any creative flow, comes spontaneously. I don’t want a great idea to pass by because of a long setup effort.
  5. Why would set-up times have to be long?
    • Because in a strictly ordered system I need to collect stuff from various spaces, i.e. the synthesizer itself, the power plug from a box of power plugs, cables for MIDI, audio, control voltages, …
  6. Why would these have to be in their ordered boxes, far apart from the synthesizer?
    • Hm. Well, I guess they wouldn‘t have to be. That‘s just what one would do..?

Now basically, this example did not unfold as straightforwardly as I just made it seem, but it helped me to channel my focus on a rule that is actually quite well known in the design of enjoyable user interfaces: “Group things according to their usage, not their nature”. This is a form of reducing cognitive load. UI designers sometimes make this mistake, e.g. grouping “everything that is a filter” in one section, “everything that is a configuration” in another section, and so on; focussing more on the technical implementation of what these things are, not on their proximity in the actual domain. This gets worse the more technical a domain is in its essence, which is exactly the mistake I made.

Wiring hardware synthesizers together is a very technical task, but there is much more use in grouping what belongs together when one wants to use it, not when one looks at its definition.

So now I have it. I, as a user, am happy that I, as a researcher, actually asked stupid questions like “Why don’t you sell all the garbage?” in order to channel my focus towards a system in which setting everything up is rather quick, reducing the cognitive load – or rather shifting it to the cleanup process, which is fine because I can now trust the system 😉

Bridging Eons in Web Dev with Polyfills

Indeed, web development is kind of peculiar. On the one hand, there’s seldom a field in which new technologies overturn each other at such a pace, creating very exciting opportunities ranging from quickly sketching out proofs of concept to the efficient construction of real-world applications. On the other hand, there is this strange air of browser dependency, and with any new technology one acquires, there’s always the question of whether this is just some temporary fashion or here to stay.

Which is why it happens that one would like to quickly scaffold a web application on the basis of React and its ecosystem, but has the requirement that the customer is – either voluntarily or forced by higher powers – using some legacy browser like Internet Explorer 11, for which Microsoft has recently announced the end of life support for 30th November this year. Which doesn’t sound nice for the… *searching quickly* … 5% of desktop/laptop users that still ride this old horse, but then again, how long can you cling to an outdated thing?

For the daily life of a web developer, his mind full of the peculiarities that the evolution of the ECMAScript standard (which basically is JavaScript) brings with it, there is the practical helper caniuse.com, telling you for every feature you want to know about which browser / device supports it and which doesn’t.

But what about whole frameworks? When I recently went on my quest for an IE11-compatible React app, I already feared that at every corner I would need to double-check everything I did, especially given that for development itself, one is certainly advised to use one of the browsers that come with quite some helpful developer tools – extensions for React, Redux, etc., but also the features of the built-in console, where it makes a big difference whether a logged state shows up as the string “[object Object]” or as a fully interactive display of object properties. Sorry IE11, there are reasons why you have to go.

But then I figured that my request is maybe not that far outside the range of rather widespread use cases. Thus, the chances that someone has already tried to tackle the problem aren’t so hopeless. And indeed, this works pretty straightforwardly:

  • Install “react-app-polyfill”, e.g. via npm:
npm install react-app-polyfill
  • At the very top of your index.js, add for good measure:
import "react-app-polyfill/ie11";
import "react-app-polyfill/stable";
  • Include “IE 11” (with quotes) in your package.json under “browserslist”, as a new entry under both “production” and “development” (see the sketch below)
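
In a create-react-app project, that section might then look roughly like this – the other entries stand for whatever your project already had (these happen to be typical create-react-app defaults):

"browserslist": {
    "production": [
      ">0.2%",
      "not dead",
      "not op_mini all",
      "IE 11"
    ],
    "development": [
      "last 1 chrome version",
      "last 1 firefox version",
      "last 1 safari version",
      "IE 11"
    ]
  }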

That should do it. There are people on the internet that advise removing the “node_modules/.cache” directory when doing this in an existing project.

The term “polyfill” is actually derived from a brand of putty, which is a nice picture. It’s all about allowing a developer to use accustomed features while still supporting the actual production environment.

Another very useful addition in this undertaking (strictly speaking a Babel syntax transform rather than a polyfill) was…

// install 
npm install --save-dev @babel/plugin-transform-arrow-functions

// then add to the "babel" > "plugins" config array:
"babel": {
    "plugins": [
      "@babel/plugin-transform-arrow-functions"
    ]
  }

… as I find the new-fashioned arrow function notation quite useful.

So, this seems to bridge (most of) the worries one encounters in this web dev world where use cases span eons of technology evolution. Now, do you know any more useful polyfills that make your life easier?

What is it with Software Development and all the clues to manage things?

As someone who started programming a long time ago (roughly 20 years, now that I think about it), but only entered the world of real software development in recent years, I find that mastering the day-to-day challenges consists of two main topics: first, in our rapidly evolving field we never run out of new technologies to learn, and second, there’s a certain underlying engineering aspect – how to do things in a certain manner – with lots of new input every year.

So after I recently shared some of these ideas with my friends (I indeed still have a few of those), I wondered: how is it that in the modern software development world, most of the information about managing things actually comes from the field itself, which rather feeds its ideas of project management, quality, etc. back into the non-software subspaces of the world? (Ideas like the Agile movement, Software Craftsmanship, and the calls to do things Lean and Clean have nowadays prospered so much that you see their application or modification in several other industries – advertising, just as an example.)

I see a certain kind of brain food in this question. What sets software development apart from other current fields, so that there is such a broad discussion and considerable input at its base level? After all, if you plan on becoming someone who builds houses, makes cars, or manages cities, you won’t engage in such a vivid culture of “how” to do things, but will rather focus on the “what”.

Of course, I might be mistaken in this view. But by asking what actually sets software development apart from these other fields of producing something, I see a certain kind of brain food, helpful for approaching everyday tasks and for valuing better tips over worse ones.

So, what can that be?

1. Quite peculiar is the low entry threshold for being able to call yourself a “programmer”. With the lots of resources you get at relatively little cost (assumption: you have a computer with a working internet connection), you have many channels by which you can learn the “what” of software development first and save the “how” for later. If you plan on building a house, there aren’t a bazillion books, tutorials, and videos, after all.

2. Similarly, there’s the rather low cost of failure when drafting a quick hobby project. Not every piece of code that you write in your free time will tell you “hey man, have you ever thought about some better kind of architecture?” – which is why bad habits can stick and even feel right. If you choose the “wrong” mindset, you don’t always lose heaps of money, and if you switch your strategy once in a while, you don’t automatically lose either (you probably will, though, if you are too careless in this process).

3. Furthermore, there’s the dynamic extension of how your project is going to be used (“scope creep”). One would build a skyscraper in a different way than a bungalow (I’m not an expert, though), but with software, it often feels like you keep adding a simple feature here, extending the scope there, until you hit a point where all its interdependencies are in a complex state of conflict…

4. Then, it’s a matter of transparency: if you sit in a badly designed car, it becomes rather obvious when it constantly exhausts clouds of black smoke. Or when your house always smells faintly of the toilet. Of course, a well-designed piece of software will come with a great user experience, but as you can see in many commercial products, there is also quite a presence of lower-than-average-but-still-somehow-doing-what-it-should software. Probably users are more tolerant with software than with cars?

5. Also, as in most technical fields, “pure consultants” are not widely received in a positive light. Among most nerds, you don’t get a lot of credibility if you talk about best practices without having gotten your hands dirty over a longer period of time. Ergo, it takes experienced software programmers to advise less experienced software programmers… though it’s surely questionable whether this is a good thing.

6. After all, the requirements for someone who develops a project might be very different in each field. From my academic past in computational physics, I know that there is quite some demand for “quick & dirty” solutions. Need to add some dark matter to your model here? Well, plug this formula in and check the results. Not every user has the budget or liberty for creating a solid program structure. If you want a new laboratory building, of course, you very much want it to be designed as well as it can be.

All in all, these observations somehow boil down to the question whether software development is to be seen more as a set of various engineering skills, as a handcraft, an art, or a complex program of study. It is the question whether the “crack” in this field is the one who does complex arithmetic in their head, or the one who just gets what the customer wants. I like thinking about such peculiar modes of thought, as they help me understand what kinds of things I should learn next.

Or is there something completely else to it?

Compiling Agda 2.6.2 on Fedora 32

Agda is a dependently typed functional programming language that I like very much. Its latest versions have some special features which are not supported by any better-known language, for example higher inductive types. Since I want to use all the special features of Agda, I regularly compile the latest version. This is a procedure which comes with a few surprises more often than not, so this post is about saving you the time it took me to figure out what to do.

I like to compile Agda using the Haskell tool Stack, which can be installed with

curl -sSL https://get.haskellstack.org/ | sh

The sources of Agda have to be checked out with all submodules – otherwise there will be some weird “cabal” (another Haskell tool, more basic than Stack) errors which I was not able to understand. This can be done with:

git clone --recurse-submodules https://github.com/agda/agda.git

Now if you go to the Agda folder

cd agda

There should be files called “stack-8.8.4.yaml” and similar. Those files tell Stack how to build Agda. The number is the version of the Haskell compiler which is used to build Agda. I usually use the latest; you do not have to figure out which version (if any) is installed on your system, since Stack will just download the appropriate compiler for you.

However, just running Stack failed for me on Fedora 32 due to some linking problem at the end. It turned out that “libtinfo.so” was not found by the linker. “libtinfo.so.6” is available on Fedora 32, so adding a symlink to it fixed the problem:

sudo ln -s /usr/lib64/libtinfo.so.6 /usr/lib64/libtinfo.so

Now, you can tell Stack to get all necessary things and compile and install Agda with:

stack install --stack-yaml stack-8.8.4.yaml

This also installs a binary into “~/.local/bin”, which is in PATH on Fedora 32 by default, so you should be able to call agda from the command line. Also, you can use “agda-mode” to configure Emacs for Agda.
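
The usual call for that (it appends the required setup lines to your Emacs configuration; just a note from my own setup) is:

agda-mode setup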