Don’t just useCallback() with higher-order-functions

This is a small thing that once took me longer to debug than necessary, which is why it might be useful to some of you out there.

From time to time, we have that situation in a React application where it’s just not really avoidable that a small component has to perform a rather expensive computation. That’s what memoization is for, i.e. reusing the results of previous computations when we know that these are still applicable.

React, in its functional approach, has three ways of memoizing things: for whole components there is React.memo(), while for usage inside a component we have the hooks React.useMemo(), most commonly used for values or value-like objects, and React.useCallback() for functions. Because JavaScript is quite a functional language, there is a rough equivalence between the latter two – and that equivalence is exactly what I want to look into here.

// rather trivial function – these are equal
React.useMemo(() => () => x, [x]);
React.useCallback(() => x, [x]);

// higher-order function – they are not!
React.useMemo(() => higherOrderFunction(x), [x]);
React.useCallback(higherOrderFunction(x), [x]);

There are various such higher-order functions available for developers to reuse existing logic. One such case is debouncing, i.e. when you expect state changes to sometimes come in very large batches; the most common case is probably an <input/> field whose value triggers a server request or something like that. Other common cases would be drag’n’drop interactions or window resizing.

With a useRef(), one can rather easily write such debouncing oneself (google it or ask in the comments), but there is lodash.debounce which takes care of that with such a higher-order function.
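
For illustration, such a self-made variant could look roughly like this – note that the hook name is made up and the sketch skips any teardown on unmount, so treat it as a starting point rather than a finished utility:

// rough sketch of a self-made debounce hook based on useRef()
const useDebouncedCallback = (callback, delay) => {
    const timeoutRef = React.useRef(null);

    return React.useCallback((...args) => {
        clearTimeout(timeoutRef.current);
        timeoutRef.current = setTimeout(() => callback(...args), delay);
    }, [callback, delay]);
};

With lodash.debounce, on the other hand, the usage inside a component looks like this: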

const MILLISEC = 500;

const Component = () => {
  const [value, setValue] = React.useState("");

  const handle = React.useMemo(() => debounce(event => { ... }, MILLISEC), []);

  return <input onChange={handle} value={value}/>;
};

Now I don’t want to talk about the specific case of debounce() (but one can look at the source code to guess what it’s doing), this is just an example. Third-party logic is helpful for not-reinventing-the-wheel, but you can’t be that sure about its computational costs, especially when some of your dependencies might update in the future – so that might be a good point to use memoization without actually seeing the benefit at the time of developing. (*)

As Dmitri Pavlutin states nicely for that specific case, you can not just write useCallback(debounce(...), []) here in place of useMemo. It is rather trivial, but you need to take care: The JavaScript engine has no other option than to execute debounce() on creation of the callback; it can not know that this is something to be evaluated later.

Anything that is not an arrow function () => { ... } or an old-school function() { ... } will be evaluated when the corresponding line is reached. The syntax does not allow anything to be wrapped around it in order to delay that execution to the first call.

So. Debounce might not be the most expensive thing, and in general one might not even need memoization, but if you do – always remember that something has to be a function in order for any of that to work.

(*) This is not a call for premature optimization.

It cannot be stressed enough that one shouldn’t wrap every single computation into a memoization anyway. Sure, one should care about useless computations as stated above, but always know that the memoization itself is not free. So when in doubt, think about how to quantify your specific gain, e.g. via the React DevTools Profiler, the performance API or at least the logging of Date.now() timestamps.

Also, this should only ever be about performance. If there is any case of “my application actually behaves differently” when using useMemo / useCallback, this is a red flag – drop the thought of optimization instantly and care about your overall architecture first.

A Purpose of Domain-Driven-English-German-Language-Mumbo-Jumbo

Disclaimer: Due to its nature, this blog article needs to make some use of the German language. This is part of its essence and could not be avoided, sorry to all international readers.

Since its conception in 2003, the expression “Domain-Driven Design” might have been tossed around a bit, together with all the other XYZ-Driven Designs that are out there. As usual with such terms, I only try to gather the core points of these ideas; I do not like sticking to any such concept with religious fervor or otherwise dogmatic understanding. Moreover, these concepts are usually not of the type “you either use them or you don’t”, but you have some control over the degree to which you employ them, depending on your requirements as a whole.

This is why in a new project, I might implement a handful of ideas and see where it goes, always prepared to call it a day and toss any rule out when it endangers my progress. On the other hand, if I only follow principles that instantly convince me, I risk missing out on some practice that just is unusual, but not bad in itself.

Domain-Driven Design, in my understanding, aims at aligning the architectural details of your code base with the domain model, i.e. the technical peculiarities of your (customer’s) specific use case. Which doesn’t sound hard or bad per se, but as usual, takes some practice to shed some light on.

Enter the idea of using German words in your code. For variables, methods, classes, and such stuff – even with Umlauts and the Eszett (“ß”). If one is not used to that, such code might instantly induce some sort of digestive sickness, or at least that’s what it did to me, because of its sheer look, e.g.

// just some example to look at

var sortedZuordnungen = szenario.SortedZeitplanForArbeitsplatz(arbeitsplatz.Id)
.ToList();
var gesperrteHalbtage = sperrungen.Where(s => s.AufArbeitsplatz(arbeitsplatz.Id)).Select(s => s.Halbtag);

var nächsteZuordnung = sortedZuordnungen.FirstOrDefault();
Halbtag tryStart = Constants.HeuteVormittag;

while (nächsteZuordnung != default)
{
    tryStart.CreateListFromHere(anzahlHalbtage, gesperrteHalbtage);
    nächsteZuordnung = FindNächsteZuordnung();
}

(replace “German” with any other language your customer might use; if you’re living in a completely English-speaking environment, this article should be of limited insight for you. Sorry again.)

Now code like this – at first – what is this!? That’s not proper! It looks like the sound of some older German politician who never really bothered learning the English language, with some crazy dialect and whatnot!

The advantage behind this concept becomes especially apparent when dealing with a lot of very generic terms. E.g. the word “component” might just mean a button on your UI, or it might mean something very specific for your customer – or even worse, you might mean something very specific for your customer, but in reality, he would never refer to that entity with that word, so… you’re left with a chance of awkward bewilderment in every single meeting with the guy.

So, despite its weird look – this is one of the concepts that I haven’t tossed out the window yet. The key point is the overall reduction of friction in your thoughts. When communicating across languages, one always has to do some minor translations in one’s head. These can be faulty or misleading either way – the nature of the language itself is secondary.

What works for me is:

  • Pure code fabrications that are close to the programming language get English names like usual
  • Things that a customer might talk about in German should get a German name
  • German and English can be mixed in a single word without any shame
  • Thus, words can be long, but you have an IDE that can help with that
  • German compound words get the correct German capitalization, i.e. the equivalent of “componentNumber” would be “komponentennummer”, not “komponentenNummer”
  • The linking of two German parts happens with the correct grammatical standards, i.e. a “workPlace” becomes an “arbeitsplatz” with the “s” in between (Fugen-s).

For some reason, this by now resulted in quite an uninterrupted workflow for me. The last two rules were an interesting finding because I noticed that without them, I really made a noticeable pause in my thinking process whenever I thought about these entities. This pause is now gone.

E.g. by now, the cognitive load of talking about a “KomponentenController” – something that is a Controller from a software engineering point of view and deals with components from a domain point of view – appears easier for me than having to talk about a “ComponentController” with the extra translation between Component and Komponente. Mind you, there are enough words that do not sound that similar in our two languages.

I will not use this concept in every single project I start from now on. I.e. for hobby projects (where I’m my own customer), I would still prefer the 100%-English-language solution. But depending on your project, this is worth a try, and I’m positively amazed at how well that can work.

X Forwarding from Linux to Windows

This is a very short statement of joy: I found that something I thought would be very complicated actually turned out to be quite easy.

We have a multitude of clients with a multitude of infrastructures. Then there is home office and still a Corona pandemic (according to individual voices..?), so all in all, one does not always have a Linux system at hand when working on a Linux project.

SSH access is usually easy, and even if you need graphical UIs, that is not a problem on a Linux client, because the X Window System (X11) that is commonplace for the graphical display of Linux, together with the standard ssh client, allows X11 forwarding via the command line option ssh -X out of the box.

Now Windows is a different story, but it turns out that the right tools… just exist. This is the short version:

  1. There is the Xming Public Domain version (last release in 2016) which you can get e.g. from here and is straightforward to install. This plays the role of an X Server, i.e. a software-side display that can receive data via ssh.
  2. After the installation, call XLaunch to set things up.
  3. The “Multiple Windows” option is fine, as is using “Display number 0”. I then opted to “start no client” and ignored the other options.
  4. I already use PuTTY for everything else (including SSH tunnels to various remote networks), and while this has a somewhat objectionable user interface, one can manage. If you have an existing session, make sure to select that first, then click Load, then adjust the settings as follows.
  5. Go to Connection > SSH > X11
  6. Enable X11 Forwarding
  7. In “X display location”, enter “localhost:0” if you chose “Display number 0” in step 3. Or choose accordingly.
  8. You might now save (or don’t) and open the connection.

I was more than surprised just to be able to start any GUI application on our client’s remote machine and see the result.

Sure you might get a few seconds delay, but compared to the hassle I expected – this was a walk in the park.

A big shoutout to the creators of Xming and PuTTY, well deserved.

5 Not-so-Beginner’s React Pitfalls

React, in my opinion, has become quite a useful tool over the years. I admit I haven’t given the other major frameworks a try, but from the look of the resulting code, I would only give Svelte a real chance in the nearer future (in fact, you’d really have to pay me real big money to convince me about Angular).

Now, as with many of the more useful JS libraries, React is in a state where not only has it survived quite some time (reaching v18 only a few weeks ago), it is also breeding a community that harbors a lot of valuable knowledge, enabling one to avoid the most common pitfalls at the beginning of the journey. There are lots of resources you can easily find online, from few-hour courses to several posts in other blogs about the most common traps.

However, in our daily life it appears that there still are some very good points to make about how not to go about React’s unopinionatedness. So these are some of our own findings that I’ve not yet seen overly emphasized, and maybe they are of some advantage to you.

1. Have your states atomic

It might happen that one migrates an older React component from a time when functional programming wasn’t the norm yet, or that, out of whatever habit, one declares something like a greedy React state as

const [state, setState] = useState({this: ..., that: ... , ..., ...});

Now your state profits much from immutability (think of this as “your machine then knows that its content is clear and unique at any given time”) and therefore you do not need to care about the same-or-not-sameness of state.that when evaluating state.this. Therefore, it is usually advised to split that up into several independent states as

const [this, setThis] = useState(...);
const [that, setThat] = useState(...);
...

That is more readable and everything. However, the most useful rule for building your states is not to split everything up as small as possible, but rather to have your states atomic. By that, we mean “not needlessly large, but containing everything that might change at the same time”.

One common example is basic data fetching – that is, if you don’t choose to reach for react-query, which I personally like. But if you do e.g. a simple GET request yourself, you usually do not only have “data” (some response), but also at least a “pending” (has the request finished yet?) and an “error” (is this response even usable?) field. These all change at the same time. Thus, they belong to the same entity. That state, designed atomically, looks like this:

const [query, setQuery] = useState({
    pending: false,
    data: null,
    error: null,
});

Side note: you might choose not to use null as an initial value here because of the known ambiguity problems with that value. For this illustration, it will suffice.

So, this query state now is atomic – not to be split further without serious consequences. If you had another, unrelated query, you would not just put it right into the same state entity; but if you had another property of that query (e.g. a separate field for the status code, …), it would belong there.

This helps in having more predictable useEffect, useMemo etc. dependency arrays. You can have an Effect depending on [query] as a whole, and this makes complete semantic sense. It would be very hard to predict its behaviour if you mashed multiple queries or whatever-state-you-can-think-of in there.
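
To illustrate, a hedged sketch of such an Effect – showErrorToast() here is just a hypothetical helper, not part of the original example:

// one Effect, depending on the whole atomic query state
React.useEffect(() => {
    if (!query.pending && query.error) {
        showErrorToast(query.error); // hypothetical helper
    }
}, [query]);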

2. Have your effects atomic & tear them down

Similarly, it is not super obvious (to the newcomer’s eye at least) that you can have multiple useEffect() calls. You can adhere to the Single Responsibility Principle right there – the only good Effects are the ones that you can grasp in a twinkling of an eye. Use one for every single thing you want to achieve; don’t lump multiple different things together in a somewhat-“constructor”-type of thinking. This keeps the dependency arrays small and controllable, and there are fewer cases of a peculiar “But this CANNOT EVEN happen!!”.

Moreover, Effects have a function designed to clean them up, the so-called teardown function. If your Effect starts any larger operation and then for some reason your component gets re-rendered before that operation is finished, you are likely to get hit by that effect in a state where you have forgotten about it already. You can follow these examples:

// example: listening to the scroll event
useEffect(() => {
    const handler = (event) => { /* ... */ };
    document.addEventListener('scroll', handler);
    return () => document.removeEventListener('scroll', handler);
}, []);

// or you might do something later in life
useEffect(() => {
    const timeout = setTimeout(() => { /* ... */ }, 5000);
    return () => clearTimeout(timeout);
}, []);

Some asynchronous operations might not have a simple teardown operation, but you can at least tell your Promises to disregard their result. Failing to do so is responsible for the very ugly

Warning: Can’t perform a React state update on an unmounted component. This is a no-op, but it indicates a memory leak in your application.

If you are responsible, you clean your Browser Console of all of these warnings. It appears if you call a setState-or-similar function at a point where the teardown actually should have happened. This pattern solves that case:

// this example uses a fetch Promise,
// but it also works for stale setTimeout handlers etc.

useEffect(() => {
    let mounted = true;
    fetch('/whatever').then(() => {
        if (mounted) {
            setState(true);
        }
    });
    return () => { mounted = false };
}, []);

// if you do not check for the value of mounted,
// the "memory leak" warning can appear if the
// fetch returns after the component has unmounted in the meantime.

Side note: I also can not recall a single case in which the common React linter rule “exhaustive-deps” was worth ignoring. I had several occasions in which I believed I could outsmart the stupid machine, only to end up in much larger problems down the road. Sure, things like Redux’ dispatch() might be cumbersome to always include, but I found that if I just make sure that exhaustive-deps never fires, I am happier in the long run.

3. useEffect() in too deep functions

Especially in the context of data fetching, it might appear tempting to put your useEffect() calls as deep (in the direction of the smallest components) as you can. Even more so if you don’t have a rigid way of state management.

Now, I see the point that this appears “more modular” and flexible, but for me it has led to situations where way too many requests were sent to our backends. You trade the modularity for the unpredictability of some Effects, so the best way I came to think of it was: Treat useEffect() like a bug.

I’m not saying that using it is wrong. But if you are wary of its appearance, this can help. Sometimes it is just possible to do everything an Effect does – just completely outside React. Maybe the Effect code can instead live in your index.js (as vanilla JS or otherwise) and just be injected into your Root component, e.g. as props or via other libraries. E.g. with a Redux middleware, some effects can run with a higher degree of control over your state.
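
As a rough sketch of that last idea – the action types and the endpoint are made up here – such an effect could live in a Redux middleware instead of a deeply nested useEffect():

// data fetching as a Redux middleware, outside the component tree
const queryMiddleware = (store) => (next) => (action) => {
    const result = next(action);
    if (action.type === 'filter/changed') {
        fetch('/api/items?filter=' + encodeURIComponent(action.payload))
            .then((response) => response.json())
            .then((data) => store.dispatch({ type: 'items/received', payload: data }));
    }
    return result;
};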

Remember: Modularity is not bad per se. It’s good. Don’t elevate the most particular effects to the top level of your application, but figure out where they can live well enough so you exactly know when they need to fire.

So far, there hasn’t been a case where I wished that I stuffed my useEffects further down to the virtual DOM leaves, but several, in which elevating them helped me a lot.

4. Use custom hooks with minimal interface

I consider it helpful, even for React beginners, to always be on the lookout for what could be its own React hook. A React Hook is any function that has a name beginning with “use”, and most of the time these consist of some combination of internal useState, useEffect, useContext and useRef definitions.

But their merit is that they allow for much cleaner, dumber looking Components themselves – consider: dumb components are the best!

If they are only needed once, you can have them co-located next to where they are needed, but even just the act of giving them a name of their own makes for much more understandable code.

I use custom hooks for a lot of things, e.g.

  • having a State that is persisted in the localStorage / sessionStorage
  • having a State that updates in a debounced / throttled / delayed manner
  • standardizing very basic data fetching
  • accessing the window width at any time (nice for Responsive layout)
  • creating a React ref for an element with a “clicked outside” handler
  • standardized handling of messages from connected websockets

I will now spare you the code, but if you have questions about any of these, just drop a comment.

One important point, though: Always have your interface minimal. E.g. if your custom hook has an internal setState(), think hard about whether you pass that function to the outside via the hook return value. Even if you are the only developer on a project, treat yourself as two different instances, one “framework designer” and one “framework consumer”, and as the designer, think hard about what havoc the consumer could do if you allow him too much.
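
To make that point concrete, here is a hedged sketch (hook name and storage handling made up) of a persisted counter that keeps its setState strictly inside:

// the internal setState never leaves the hook – the consumer only gets a minimal interface
const usePersistedCounter = (storageKey) => {
    const [count, setCount] = React.useState(
        () => Number(window.localStorage.getItem(storageKey)) || 0
    );

    const increment = React.useCallback(() => {
        setCount((current) => {
            const next = current + 1;
            window.localStorage.setItem(storageKey, String(next));
            return next;
        });
    }, [storageKey]);

    return { count, increment };
};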

5. Do not duplicate state information (especially with react-router)

This applies to every state information, but it’s important to recognize that your URL route is just that: a kind of global state. One that your user can edit directly at any time, leaving the synchronization up to you.

So do not go about it by reading the URL parameters into some state that has its own setState! If you define a certain role of a state parameter in your URL, then it is your obligation to have a uni-directional data flow:

  1. From the route, that value flows into your application in a clearly-defined manner,
  2. where you act upon it as you wish, until you need to change it
  3. Then you change the route. Then go back to 1

Of course, one might imagine that in some cases you can not guarantee that. Then maybe do your own synchronization logic, but I would highly advise you to stash that away into e.g. a custom hook, or middleware if you use Redux, so that you can test it thoroughly and it won’t break too soon.
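
For the straightforward case, such a route-driven value could be wrapped in a custom hook like this – a sketch assuming react-router v6 and its useSearchParams hook; the parameter name is made up:

// the URL stays the single source of truth; the hook only reads it and changes the route
import { useSearchParams } from 'react-router-dom';

const usePageParameter = () => {
    const [searchParams, setSearchParams] = useSearchParams();

    // 1. the value flows from the route into the application
    const page = Number(searchParams.get('page') ?? '1');

    // 3. it is only ever changed by changing the route itself
    // (note: this simple call replaces other search params – fine for a sketch)
    const setPage = (nextPage) => setSearchParams({ page: String(nextPage) });

    return { page, setPage };
};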

Further note: There are situations where it is quite sensible to have two very similar states, if they have a different responsibility. These are not a bug.

E.g. if you GET a value from a server, then edit it in a controlled <input/> field, and PUT it to the server again, you do not wish to do so on every key press. Then these are not meant to be the same:

  1. the value as you currently know it from the server
  2. the value as it exists inside the <input/>

These are semantically different. They can and should be a different state entity. But if you have something that is utterly dependent on one other state, then chances are you do not really need another entity.
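
In code, the distinction can be as simple as this (names made up):

// two semantically different states – deliberately not unified
const [serverValue, setServerValue] = React.useState(null); // 1. the value as last known from the server
const [inputValue, setInputValue] = React.useState("");     // 2. the value as currently typed into the <input/>

// the GET fills both, typing only touches inputValue,
// and the PUT sends inputValue and afterwards updates serverValue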

All in all,

that turned out longer than I envisioned it to become. But I hope it is of some help to any React coders who have managed the absolute basics and are now prone to the next-level pitfalls.

The good news is that after a certain amount of hardship, there are rarely even more surprises down the road. So, manage your state and effects responsibly, especially the asynchronous ones, and the rest are practices that apply to any software development.

Or am I misled?

Forced Acronyms are not that S.M.A.R.T.

A while back, I noticed that quite a lot of people are following the trend of unifying a bunch of talking points into a more or less memorizable acronym. Sometimes, this is a great mnemonic device to make the essence of a thing clear in seconds – but for some reason, there are few acknowledged stories in which such attempts actually fail.

However, one of the most prominent acronyms in project management is the idea of S.M.A.R.T. goals. That easily dissolves into S for Specific, M for Measurable, and… hm… T is… something about Time, and then there are A and R, and they very clearly… well well, let’s consult Wikipedia… span up a multidimensional vector space out of {Achievable, Attainable, Assignable, Agreed, Action-oriented, Ambitious, Aligned with corporate goals, Realistic, Resourced, Reasonable, Results-based}.

Now this is the point where it gets hard to follow. These are somehow too many possibilities, with no clear assignment. There are probably lots of people out there with their very specific memorization and their very specific interpretation of these letters; and it might very well be true that this forced acronym holds some value. In their specific case.

But why shouldn’t we be honest about it? If you have such a situation, you are not communicating clearly anymore. You have gone beyond that point. There is not a clear, concise meaning anymore.

These are the points where it would be honest to leave your brilliant acronym behind. If you ever sit in a seminar where someone wants to teach you some “easily memorizable acronym” with lots of degrees of freedom, open to interpretation and obviously changing over time, just – complain. Of course, everyone is entitled to using their own memory hook (“Eselsbrücke”) in order to remember whatever his or her goal is. That is not my point.

My issue is with “official” acronyms that are not clear and constant. We as software developers have a responsibility to treat such inconsistencies as very dangerous and more harmful than helpful. With this post, I want to bring the idea out there that one should rather complain about a bad acronym more often than just think “weeeeell, but I really like how it sounds and I don’t care that it’s somewhat tainted.”

Or am I completely bullheaded in that regard? What is your opinion?

PS: If you are German and remember the beginning of 2021, a similar laziness happened there when our government tried to make their Covid rules clear and well-known. Note that this remark has nothing to do with politics. Anyway: they invented the acronym “AHA” (which, in German, is also the sound of having a light bulb appear over your head). Not that bad of an idea. However, one of the “A”s originally meant “you just need a non-medical mask (Alltagsmaske) everywhere” – until some day, it was changed to “you need a medical face mask in everyday life (im Alltag)”. They just thought it clever to keep the acronym, but change one letter to mean its near opposite.

This is dangerous. Grossly negligent. Just for the sake of liking your old acronym too much, you needlessly fail to communicate clearly. Which is, for a government as much as for a software developer, usually your job.

Naming things 😉

Always apply the Principle Of Least Astonishment to yourself, too

Great principles have the property that while they can be stated in a concise form, they have far-reaching consequences one can fully appreciate after many years of encountering them.

One of these things is what is known as the Principle of Least Astonishment / Principle of Least Surprise (see here or here). As stated there, in a context of user interface design, its upshot is “Never surprise the user!”. Within that context, it is easily understandable for everyone who has ever used any piece of software and noticed that not once was he glad that the piece didn’t work as suggested. Or did you ever feel that way?

Surprise is a tool for willful suspension, for entertainment, a tool of unnecessary complication; exactly what you do not want in the things that are supposed to make your job easy.

Now we can all agree about that, and go home. Right? But of course, there’s a large difference between grasping a concept in its most superficial manifestation, and its evasive, underlying sense.

Consider any software project that cannot be simplified to a mere single-purpose-module with a clear progression, i.e. what would rather be a script. Consider any software that is not just a script. You might have a backend component with loads of requirements, you have some database, some caching functionality, then you want a new frontend in some fancy fresh web technology, and there’s going to be some conflict of interests in your developer team.

There will be some rather smart ways of accomplishing something and there will be rather nonsmart ways. How do you know which will be which? So there, follow your principle: Never surprise anyone. Not only your end user. Do not surprise any other team member with something “clever”. In most situations,

  1. it’s probably not clever at all
  2. the team member being fooled by you is yourself

Collaboration is a good tool to let that conflict naturally arise. I mean the good kind of conflict, not the mistrust, denial of competency, “Ctrl+A and Delete everything you ever wrote!”-kind of conflict. Just the one where someone would tell you “hm. that behaviour is… astonishing.”

But you don’t have a team member in every small project you do. So just remember to admit the factor of surprise in every thing you leave behind. Do not think “as of right now, I understand this thing, ergo this is not of any surprise to anyone, ever”. Think, “when I leave this code for two months and return, will there be anything… of surprise?”

This principle has many manifestations. As one of Jakob Nielsen’s usability heuristics, it’s called “Recognition rather than Recall”. In a more universal way of improving human performance and clarity, it’s called “Reduce Cognitive Load”. It has a wide range of applicability from user interfaces to state management, database structures, or general software architecture. I like the focus of “Surprise”, because it should be rather easy for you to admit feeling surprised, even by your own doing.

Mutable States can change inside your Browser console log

So we know that web development must be one of the fastest-changing ecospheres humankind has ever seen (not to say that JavaScript frameworks and their best practices mutate with a frequency and deadliness similar to Coronaviruses). While these new developments can also come with great joy and many opportunities, this means that once in a while, we need to take care of older projects which were written in a completely different mindset.

It’s somehow trivial: Even when your infrastructure is prone to constant shifts, any Software Developer holding at least some reputation should strive to write their code as long-living and maintainable as originally intended. Or longer.

But once in a while you run into legacy code that you first have to dissect in order to understand its workings. And for JS, this usually means inserting console.log() statements at various places and tracing them during execution (yeah, I know, there’s a plenitude of articles telling you to stop that, but let’s just stay at the most basic level here).

Especially in an architecture with distributed, possibly asynchronous events (which helps in reducing coupling, see e.g. Mediator and Publish-Subscribe patterns), this can help your bugtracing. But there’s a catch. One which took me some time to actually understand as quite the villain.

It does not make any sense to me, but for some reason, at least Chrome and Firefox in their current implementations save some effort when using console.log() for object entities. As in, they seem to just hold a reference for lazy evaluation. It can then happen that you look upwards at your log, maybe even need to scroll there, look at some value and not realize that you are looking at the current state, not the state at the time of logging!

Maybe that was clear to you. Maybe it never occurred to you because you always cared about using your state immutably. But in case you are developing on some legacy code and don’t know what your predecessor did everywhere, you might not be prepared.

You can visualize that difference easily by yourself. Consider that short JS script:

var trustfulObject = {number: 0};
var deceptiveObject = {number: 0};

// let's just increase these numbers once each second
setInterval(() => {
    console.log("let's see...", trustfulObject, deceptiveObject);
    trustfulObject = {number: trustfulObject.number + 1};
    deceptiveObject.number = deceptiveObject.number + 1;
}, 1000);

Let that code run for a while and then open your Browser console. Scroll upwards a bit and click on some of the objects. You will find that the trustfulObject is always enumerated as supposed (at the time of logging), while the deceptiveObject will always show the number at the time of clicking. That surely surprised me.

In case you are still wondering why: The trustfulObject is freshly created each step and then reassigned to your reference variable. It seems the Browser has no other choice than logging the old (correct) state, because the reference is lost afterwards. The deceptiveObject holds the same reference during the whole runtime, which somehow makes it look more efficient to the Browser to just not evaluate anything until you want to know the value.

And then, it lies to you. ¯\_(ツ)_/¯

Two notes:

  1. If you really have to deal with legacy code of a given size where you cannot easily change that behaviour, you can log your object using JSON.stringify, i.e. console.log("let's see…", trustfulObject, JSON.stringify(deceptiveObject)); avoids that lazy evaluation.
  2. Note: not to be confused – the JS “const” keyword does exactly the opposite of creating an immutable object. It creates an immutable reference, i.e. you can still manipulate its content afterwards. Exactly what you do not want here.

Of course, in modern times you probably wouldn’t write vanilla JS, and e.g. using React useState definitely reduces that issue. But still. If you don’t want to use React & Co. everywhere, then… pay attention.

The ever-connecting WebSocket

This is another of these „funny how we live in a time, where we take connectivity for granted“-posts. But what is taken for granted, usually still is somewhat cumbersome under the hood. As in our current episode.

Admittedly, the arrival of WebSockets in the last decade was one of the more significant steps towards a fluid internet experience. The WebSocket protocol is an advancement over the old „some client asks some server to handle some stuff“ way in that it is bi-directional: After mutual agreement („hand shake“), the connection stays open for the server to send data to the client, without the client having to ask first. Consider the server to be a complex application which processes lots of tasks and from time to time creates some „news“ for the client, which the user might want to read in real time.

Nowadays, the WebSocket itself is long established. What surprised us a few weeks ago, however – and what made us invest several days in actual research – was its behaviour when paired with a loss of internet connection. That held quite a few surprises for us.

Now, this is a real scenario for one of our customers. You have a web application running on a mobile device, and this device moves in and out of WiFi-accessible areas all the time. The application should just show this circumstance and attempt to reconnect. Now the straightforward thing was to use the native WebSocket API class, or the “websocket” npm package (which acts as a small wrapper around that API); this comes with a small enough set of event handlers (onopen, onclose, onerror, onmessage). But the less obvious thing was: How is “connection lost” actually noticed? Is it onerror? Is it onclose?

In reality, this is not clear at all. Depending on the type of internet loss, there might be a delay of several minutes until onclose fires, and onerror alone seems not to imply any closing at all. The exact behaviour really depended on the type of internet loss – and how do you even simulate “mobile device walked away from WiFi” as accurately as possible? While disconnecting our WiFi seemed to register with almost no delay, this was too far from the real scenario. It was only after switching to an ethernet cable and then unplugging it that we saw the effect. And we found that the onclose event is actually quite confused if we reconnect our cable before it has fired. It could then happen that an old onclose did not fire until a new WebSocket was already opened, i.e. it is not a good indicator of “no connection” at all.

This confusion made it clear that the WebSocket technology is not as well defined as we thought it was. We actually resorted to one of the most basic ideas in order to notice our “(dis)connected” state: Continuously checking for it. Indeed – as low-level as it sounds.

We found the following solution to work quite well:

  • The server continuously sends a “heart beat” over the WebSocket. We are aware that there is a websocket.ping() method but we didn’t want to run into more surprises here.
  • WebSocket handling is done inside our own module which
    • wraps the WebSocket onmessage event in order to expect that heart beat or else “the watchdog gets angry”
    • has its own onclose event which communicates the problem to the outside as early as possible
    • also, instantly tries to reconnect
    • wraps the WebSocket onclose event in order to make it quiet if the watchdog gets angry and it would fire too late; but otherwise fire (if the watchdog is happy and the WebSocket is closed normally).

The latter implements Loose Coupling / the Principle of Least Knowledge / Separation of Concerns. We do not want our module to have a much larger interface than the original WebSocket implementation. In fact, the only information from our application to our new module is “is the user logged in?”. In our application, this is part of the Redux state, but we want our module to know neither of React, Redux nor other magic; it should be vanilla TypeScript in order to be testable, or even better, so straightforward that any tests would be trivial.

So there we have it. If you are interested in the code, I’d be glad to share that, but the actual deed here was in finding out what we actually need.
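
To give a rough idea nonetheless, here is a heavily simplified sketch – all names are made up, and login handling, message parsing and error cases are left out:

// a watchdog-wrapped WebSocket, roughly along the lines described above
const HEARTBEAT_TIMEOUT_MS = 10000;

class WatchdogWebSocket {
    constructor(url, onDisconnected) {
        this.url = url;
        this.onDisconnected = onDisconnected;
        this.connect();
    }

    connect() {
        this.socket = new WebSocket(this.url);
        this.socket.onmessage = () => this.resetWatchdog();   // every message counts as a heart beat
        this.socket.onclose = () => this.handleDisconnect();  // a normally closed socket is also a disconnect
        this.resetWatchdog();
    }

    resetWatchdog() {
        clearTimeout(this.watchdogTimeout);
        this.watchdogTimeout = setTimeout(() => this.handleDisconnect(), HEARTBEAT_TIMEOUT_MS);
    }

    handleDisconnect() {
        clearTimeout(this.watchdogTimeout);
        this.socket.onclose = null;   // a late onclose of the old socket stays quiet
        this.socket.close();
        this.onDisconnected();        // communicate the problem to the outside as early as possible...
        this.connect();               // ...and instantly try to reconnect
    }
}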

I have no idea why the WebSocket specification is the way it is, but if you ever encounter such a problem, that would be my advice – take the thing, put it in your own thing, and couple the things loosely.

But anyway, it was fun to realize that even in 2021, a two-way-connected client-server system still might need a small guardian that tells you whether everything’s fine.

Addendum: Monkey-patching an existing class in TypeScript

I leave that here for quick reference. As stated above, we needed to equip our websocket instances with a flag to ignore their onclose events. Now some sources might readily give you the quick advice to do it as:

const socket = new w3cwebsocket(...);
(socket as any).silent = false;

But why use TypeScript if you want to work around the type system anyway? Just extend it.

class CustomWebSocket extends w3cwebsocket {
    silent: boolean = false;
    constructor(url: string) {
        super(url);
    }
}

const socket = new CustomWebSocket(...);

You are not safe with Semantic Versioning (right now).

TL;DR: Several recent hijacks of widely spread NPM libraries should make you double-think whether to trust the package.json-semantic-versioning notation using carets and tildes.

So what’s that about?

Version updates are one of the most haunting things any person with any kind of computer ever encounters. On the one hand, it’s good news – something that you use has evolved again, and you are right at the source. Better tools, fewer bugs, new functionality, and usually delivered by just a few clicks. But there are always these question marks – do I want to know what happened there? Do I want that now? Wasn’t that old version totally working?

So we all know that dilemma. I mean, as we speak, every Windows user is kindly reminded that one could now let that version 11 live on her system. But is it compatible with your system? Will this work out of the box? Is your time worth trying now or do you wait until the waters have settled? And while Microsoft is asking that question only once every few years, there’s software like Notepad++ that wants me to have its updates at every single start. Because apparently, text editors can grow up so fast..?

Now imagine this problem, times a few googol, and you are in the everyday world of every web developer. In the npm universe, you have nearly total modular flexibility, which comes with so many small packages that update for whatever reason – one likes to believe that mostly these are bugfixes. Or might these be breaking changes? Did JavaScript evolve again? Do you want that stuff? Are you state-of-the-art or are you in dependency hell?

Issues like these are the source of semantic versioning. The idea is that you apply a certain level of trust to the usually three-membered version “major.minor.patch” label. Version “3.10.1” means

  • Major version 3; the API of your software is promised to be compatible within each major version (except for the initial version 0)
  • Minor version 10; a new minor version is released for new, backwards compatible API functionality – e.g. there have been 10 times when version 3.0.0 was improved without breaking backwards compatibility
  • Patch version 1, which is incremented for bugfixes within that API specification, i.e. backwards compatible within the whole major version.

This is done in good faith, because the major version is only incremented when the package maintainer “re-think”s his API, and that mediates a level of trust: in return, you only have to re-think your usage of that API if you switch the major version. Therefore, dependency management systems like the npm / yarn package.json allow for the convenient notation to specify e.g.

"dependencies": {
    /* ... */
    "styled-components": "^5.1.1",
    "websocket": "~1.0.31"
  },

The caret (^) notation tells us that when the styled-components package was added to our project, we installed version 5.1.1, but we trust the npm universe far enough that every future execution of “npm install” / “yarn install” may increase this version within the same major version – e.g. if version 5.2.0 was released in the meantime, it would update to that content. As we speak, we are at version 5.3.3, so this project is well up-to-date with whatever the good folks put in there.

Similarly, the tilde (~) notation only allows that behavior within this minor version, e.g. at the moment any call of “… install” would retrieve the current version 1.0.34 but would not get version 1.1.0 whenever that was released.

The opposite of using these is called dependency pinning, and there is lots of further reading available, e.g. here.
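
Pinned, the dependencies from above would simply name exact versions, so no install run can silently move them:

"dependencies": {
    /* ... */
    "styled-components": "5.1.1",
    "websocket": "1.0.31"
  },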

There is a certain misconception that “… install” will only update any of these versions if there is no “package-lock.json” for npm or no “yarn.lock” for yarn around. That is not the case, see below, but first, my actual point.

So the point of semantic versioning is the establishment of trust between the package developer and the user: “This update only changes about that much”.

Problem: You cannot trust the npm universe right now.

Now the last weeks showed us not only a hijacking of the npm package ua-parser-js at the end of October, but also another one of the packages coa and rc 11 days ago – these appeared somewhat correlated and came with a mixture of password-stealing and secret installing of crypto-mining tools, all in all the result of some bad folks getting access to these package repositories and making them execute malicious code in their install scripts – note that install scripts are not uncommon for widely spread npm packages. This means that while you can complain that these hackers did not really adhere to the Semantic Versioning code (oh..??), and while these breaches were each noticed within a couple of hours, think about this:

anyone with a certain caret or tilde in her package.json might have infected herself just by an unluckily-timed call of “npm install”.

Think of an automated script. Think of CI. Think of anyone who just wants to build his project and be as up-to-date as one can get. A survey of npm developers last year showed that usage of two-factor authentication is just below 10%, and while this doesn’t mean that the other 90% are completely irresponsible, there just is no system in place that would promise us that such attacks will just go away soon.

So of course we can not write every dependency of our projects ourselves, especially if they are not direct dependencies. But think of it as Russian Roulette: At least you can minimize the number of times you pull the trigger.

You can not know which package is affected next. You better make sure to pin every version to exactly one you can trust right now, and if you are ever in need of updating it, at least do a quick googling first – to see whether there’s some sh*t going down right now.

Do you have further ideas on how to isolate your development / CI environments from whatever just happens in the outer rims of the npm universe? Please feel free to share.

How to make npm / yarn respect their respective lockfiles (package-lock.json / yarn.lock)

In principle, you can even live with the caret / tilde, if you make sure that you never actually call “npm / yarn install” itself, but make them actually consider their so-called-lockfiles as lockfiles. In their current versions, these calls should lead to that behaviour:

# instead of npm install
npm ci

# instead of yarn install
# for yarn 1.x:
yarn install --frozen-lockfile
# for yarn 2:
yarn install --immutable

As you can see from the npm call, this is especially suited for CI environments; it also means you have to make sure the package-lock.json / yarn.lock is part of your repository.

One disadvantage of our approach is that npm really likes to notify you of being not very up to date, and produces lots of noise for whatever reason – noise that you will want to get rid of. Just be sure to pay some amount of attention when you update.

WPF: Recipe for customizable User Controls with flexible Interactivity

The most striking feature of WPF is its peculiar understanding of flexibility. Which means that usually you are free to do anything, anywhere, but it instantly hands back to you the responsibility to pay super-close attention to where you actually are.

As projects grow, their user interfaces usually grow with them, and over time there appears the need to re-use a given component of that user interface.

This is not only the DRY principle at work on the code level; Consistency is also one of the Nielsen-Norman usability heuristics, i.e. a good plan so as not to confuse your users with needless irritations. This establishes trust. Good stuff.

Now say that you have a re-usable custom button that should

  1. Look a certain way at a given place,
  2. Show custom interactivity (handling of mouse events)
  3. Be fully integrated in the XAML workflow, especially accepting Bindings from outside, as inside an ItemsControl or other list-type Control.

As usual, this was a multi-layered problem. It took me a while to find my optimum-for-now solution, but I think I managed, so let me try to break it down a bit. Consider the basic structure:

<ItemsControl ItemsSource="{Binding ListOfChildren}">
	<ItemsControl.ItemTemplate>
		<DataTemplate>
			<Button Style="{StaticResource FancyButton}"
				Command="{Binding SomeAwesomeCommand}"
				Content="{Binding Title}"
				/>
		</DataTemplate>
	</ItemsControl.ItemTemplate>
</ItemsControl>

Quick Note about the Style

We see the Styling of FancyButton (defined in some ResourceDictionary, merged together with a lot of other stuff in the App.xaml Application.Resources), and I want to define the styling there so that I can modify it for some other places, i.e. this could be defined in said ResourceDictionary like

<Style TargetType="{x:Type Button}" x:Key="FancyButton"> ... </Style>

<Style TargetType="{x:Type Button}" BasedOn="{StaticResource FancyButton}" x:Key="SmallFancyButton"> ... </Style>
<Style TargetType="{x:Type Button}" BasedOn="{StaticResource FancyButton}" x:Key="FancyAlertButton"> ... </Style>
... as you wish ...

Quick Note about the Command

We also see SomeAwesomeCommand, defined in the view model of what ListOfChildren actually consists of. So, SomeAwesomeCommand is a Property of a custom ICommand-implementing class, but there’s a catch:

Commands on a Button work on the Click event. There’s no native way to assign that to different events like e.g. DragOver, so this sounds like our new User Control would need quite some Code Behind in order to wire up any Non-Click-Event with that Command. Thankfully, there is a surprisingly simple solution, called Interaction.Triggers. Apply it as

  1. installing System.Windows.Interactivity from NuGet
  2. adding the namespace to your XAML namespaces: xmlns:i="clr-namespace:System.Windows.Interactivity;assembly=System.Windows.Interactivity"
  3. Adding the trigger inside:
<Button ...>
    <i:Interaction.Triggers>
        <i:EventTrigger EventName="DragOver">
            <i:InvokeCommandAction Command="WhateverYouFeelLike"/>
        </i:EventTrigger>
    </i:Interaction.Triggers>
</Button>

But that is only a side note; remember that the point of having our separate User Control is still valid, considering that you would want to have some extra interactivity in your own use cases.

Now: Extracting the functionality to our own User Control

I chose not to derive some class from the Button class itself because it would couple me closer to the internal workings of Button; i.e. an application of Composition over Inheritance. So the first step looks easy: Right click in the VS Solution Explorer -> Add -> User Control (WPF) -> Create under some name (say, MightyButton) -> Move the <Button.../> there -> include the XAML namespace and place the MightyButton in our old code:

// old place
<Window ...
	xmlns:ui="clr-namespace:WhereYourMightyButtonLives
	>
	...
	<ItemsControl ItemsSource="{Binding ListOfChildren}">
		<ItemsControl.ItemTemplate>
			<DataTemplate>
				<ui:MightyButton Command="{Binding SomeAwesomeCommand}"
						 Content="{Binding Title}"
						 />
			</DataTemplate>
		</ItemsControl.ItemTemplate>
	</ItemsControl>
	...
</Window>

// MightyButton.xaml
<UserControl ...>
	<Button Style="{StaticResource FancyButton}"/>
</UserControl>

But now it gets tricky. This could compile, but still not work because of several Binding mismatches.

I’ve written a lot already, so let me just define the main problems. I want my call to look like

<ui:DropTargetButton Style="{StaticResource FancyButton}"
		     Command="{Binding OnBranchFolderSelect}"
		     ...
		     />

But again, these are two parts. Let me clarify.

Side Quest: I want the Style to be applied from outside.

Remember the idea of having SmallFancyButton, FancyAlertButton or whatsoever? The problem is, that I can’t just pass it to <ui:MightyButton.../> as intended (see last code block), because FancyButton has its definition of TargetType="{x:Type Button}". Not TargetType="{x:Type ui:MightyButton}".

Surely I could change that. But I will regret this when I change my component again; I would always have to adjust the FancyButton definition every time (at several places) even though it always describes a Button.

So let’s keep the Style TargetType to be Button, and just treat the Style as something to be passed to the inner-lying Button.

Main Quest: Passing through Properties from the ListOfChildren members

Remember that any WPF Control inherits a lot of Properties (like Style, Margin, Height, …) from its ancestors like FrameworkElement, and you can always extend that with custom Dependency Properties. Know that Command actually is not one of these inherited Properties – it only exists for several UI Elements like the Button, but not in a general sense, so we can easily extend this.

Go to the Code Behind, and at some suitable place make a new Dependency Property. There is a Visual Studio shorthand of writing “propdp” and pressing Tab twice. Then adjust it to read like

public ICommand Command
        {
            get { return (ICommand)GetValue(CommandProperty); }
            set { SetValue(CommandProperty, value); }
        }

        public static readonly DependencyProperty CommandProperty =
            DependencyProperty.Register("Command", typeof(ICommand), typeof(DropTargetButton), new PropertyMetadata(null));

With Style, we have one of these inherited Properties. Nevertheless, I want my Property to be called Style, which is quite straightforward by just employing the new keyword (i.e. we really want to shadow the inherited property, which is tolerable because we already know our FancyButton Style to its full extent.)

public new Style Style
        {
            get { return (Style)GetValue(StyleProperty); }
            set { SetValue(StyleProperty, value); }
        }

        public static readonly new DependencyProperty StyleProperty =
            DependencyProperty.Register("Style", typeof(Style), typeof(DropTargetButton), new PropertyMetadata(null));

And then we’re nearly there, we just have to make the Button inside know where to take these Properties. In an easy setting, this could be accomplished by making the UserControl constructor set DataContext = this; but STOP!

If you do that, you lose easy access to the outer ItemsControl elements. Sure you could work around – remember the WPF philosophy of allowing you many ways – but more practicable imo is to have an ElementName. Let’s be boring and take “Root”.

<UserControl x:Class="ComplianceManagementTool.UI.DropTargetButton"
	     ...
             xmlns:i="clr-namespace:System.Windows.Interactivity;assembly=System.Windows.Interactivity"
             xmlns:local="clr-namespace:ComplianceManagementTool.UI"
             x:Name="Root"
             >
    <Button Style="{Binding Style, ElementName=Root}"
            AllowDrop="True"
            Command="{Binding Command, ElementName=Root}"
            Content="{Binding Text, ElementName=Root}"
            >
        <i:Interaction.Triggers>
            <i:EventTrigger EventName="DragOver">
                <i:InvokeCommandAction Command="{Binding Command, ElementName=Root}"/>
            </i:EventTrigger>
        </i:Interaction.Triggers>
    </Button>
</UserControl>

As some homework, I’ve left the Content property for you to add as a Dependency Property as well. You could go ahead and add as many DPs to your User Control as you like, and inside that Control (which is quite maiden-like still, if we ignore all that DP boilerplate code) you could have as much complex interactivity as you require, without losing the flexibility of passing the corresponding Commands from the outside.

Of course, this is just one way of about seventeen plusminus thirtythree, add one or two, which is about the usual number of WPF ways of doing things. Nevertheless, this solution now lives in our blog, and maybe it is of some help to you. Or Future-Me.