A few heuristics for rejecting Merges

The title of this post is somewhat of a working title. It is based on the observation that in larger projects, a day might come where you have to outsource some amount of work (an “issue”, if you will) to someone who is not you, and maybe not even someone you already know well. Or at least not know how well they fit your image of good code.

That concept is probably not new for you, and neither is the idea that sooner or later, the foreign code has to be merged, and you are the reviewer. (Depending on your version control system, the lingo might differ, but let’s call this whole process a Merge Request for now, like GitLab does.)

Now, every issue is different, so it is not upon me to give you hard rules about what a Merge Request should do. These rules would be as diverse as the issues themselves, and there is no clear answer to “as a reviewer, what should I actually look for?”.

The Best-Case Scenario (forget about it)

The best-case scenario might be:

  • You understand the issue well enough
  • You understand the code base well enough
  • You understand the language well enough
  • The changes are few enough that you can understand them quickly enough.

In that case, you do not need some blog post to help you.

So you could try to draw up rules for your project under which every issue leads to a Best-Case Merge Request, but as you will fail anyway, it might be wiser to lower your expectations. I mean, if all issues matched that case, you would probably be faster writing all the code yourself.

Rather go for the Not-Good-Enough Scenarios

So, what do you do to reconcile the requirements that a) the collaboration should really simplify your life and b) you do not want to compromise your standards too heavily?

This thought process is not finished yet, but at least I would go for some heuristics – if these are violated too heavily, chances are that everyone involved might actually profit from having this Merge Request rejected.

  • Can the issue be grasped within a few minutes?
  • Can the changes be grasped in about half an hour or less?
  • Can the problematic code pieces be described in a few short points?
  • Does this increase the feeling of trust in the collaboration?

Can the issue be grasped within a few minutes?

If it can’t, or you can’t, how come you are the reviewer? This is probably the easiest heuristic to ignore, because real-life problems and real-life programmers might be so imperfect that this is utopian, but still, if it happens, be extra wary.

Can the changes be grasped in about half an hour or less?

The half-an-hour is only a rough figure, of course, but that’s not the point.

If you ever thought “my brain is great”, remember that this was your brain speaking. Your attention span is just not good. If you think otherwise, how come you are the reviewer?

It is easy to think that longer Merge Requests just take proportionally longer to process. But it doesn’t scale like that. You will use up your willpower, motivation and attention span quicker than you would like to admit; and if you then go for “well, at least I want to finish the thing now, otherwise I have to do it all over again”, you will probably let too many mistakes slip.

It is just not responsible. If you want to show willpower, put it into admitting that some Merge Requests are just too big, and reject them.

One can always back up a branch, create a new one, cherry-pick commits onto it, even commit only portions of the same file – this might feel like a punishment at first, but chances are that this process actually surfaces several problems that were hidden before.
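As a rough sketch of that splitting process (branch names and commit hashes are placeholders, of course):

# keep the original work safe under a backup name
git branch backup/feature-xyz feature-xyz

# start a fresh, smaller branch from the target branch
git checkout -b feature-xyz-part1 main

# pick only the commits that belong to one logical step
git cherry-pick <commit-1> <commit-2>

# or stage only portions of a file interactively
git add -p some/file.js
git commit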

To be continued…

I already spoiled my two other heuristics above, but figure that this post is long enough already. Even if you don’t agree with e.g. the time spans given above – the main idea is: do not let a large Merge Request through just because you believe that you are strong enough to handle it. There’s no one to impress.

The impressive part of good work is that it doesn’t look impressive.

I will continue this, and – maybe you have some ideas from your experience? What would you do when a Merge Request is too complex?

Unexpected: Web Sockets close when downloading a file

Browsers work in mysterious ways.

That is, they mostly have their reasons for behaving the way they do, but many of the details are too intricate to have been explicitly defined in some specification (…yet).

Recently, we came across a new problem that was surprisingly non-googleable.

Many of our interactive Web GUIs are based on Web Sockets for real-time updates, as is usual. Now, one live system showed a strange effect: our Web Socket always broke down when the user clicked on an <a>...</a> link to download a file.

This was strange for multiple reasons, but it boiled down to:

  • It happened under the current Firefox, but not under Chrome
  • It did not happen when the link had target="_blank" (i.e. the link was opening in a new window)
  • The Web Socket closed with the “going away” status code 1001, indicating no further error

And to boil it further down, the solution was to definitely include the download attribute for this particular <a> element.

The Web Socket stayed open in both browsers in the first two of these cases:

<a href="..." target="_blank" rel="noopener noreferrer">
  This leaves the Web Socket intact
</a>

<a href="..." download>
  This also leaves the Web Socket intact
</a>

<a href="...">
  This might close the Web Socket, or not, just as the browser feels today
</a>

(For the rel="noopener noreferrer", see also here.)

The data from the href endpoint was served with the corresponding Content-Disposition and Content-Type HTTP headers. For Chrome, this is enough to identify that link as a download. Firefox was not so sure – it believed that you were actually “going away”.
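For illustration, a minimal sketch of such an endpoint – here assuming a Node.js/Express backend (endpoint and file names are made up):

const express = require("express");
const app = express();

app.get("/files/report", (req, res) => {
    // Content-Type plus "attachment" mark the response as a file download
    res.setHeader("Content-Type", "application/pdf");
    res.setHeader("Content-Disposition", 'attachment; filename="report.pdf"');
    res.sendFile("/data/report.pdf"); // sendFile expects an absolute path
});

app.listen(3000);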

How to get honest UX Feedback from the Technically Adept User? – Part 1: the problem.

After some consideration, I decided to indeed type this title with the Question Mark at its end, because it is an ongoing process, not a particular conclusion. (…And yes, this is a common theme with everything UX.)

We have a wide variety of possible customers and usually an even wider variety of possible users. Sometimes, the mission statement is quite well defined, and sometimes, we just start from the acknowledgement that our customer has some pain point and some trust that we might be able to help.

Of course, we always carefully evaluate whether we believe our uncertainties will stay manageable, or whether we should rather iterate over the uncertainties / must-have requirements / minimum viable product with the potential customer upfront. And if, in these cases, we find common ground for a collaboration, it is absolutely crucial to always keep a hand on the steering wheel, always reconsidering what it is the user wants.

From very early on, this affects the aspect of User Experience, which is not something one can apply at the end of the project – some lip gloss, a little treat if everything went well before. Wherever possible, it has to be ingrained in the backbone of the software, because sooner or later you risk driving the project into limbo.

UX Limbo is when your product is good enough that a potential user will never openly complain about certain design choices, but still too flawed for them to ever actually engage with your software, just because… there’s no flow. No dopamine. They won’t tell you – it’s just difficult.

Enter the “Technically Adept User”. If you have a project in which the end users are actually experts in the domain itself, this is next-level difficult. Here, the problem is not that the user has an underdeveloped mental model, but that they have a time-tested one which has real value for them. But this might differ from the one that you actually implement; and

  • it might be that you “just” need to listen more accurately
  • it might be that you learned something about the problem that is new to them
  • it might be that there is more than one Technically Adept User, and they do not notice that they hold incompatible models in their heads
  • or they do notice, but due to their different roles they try to one-up each other
  • etc.

So this is the problem. If you play your cards well, you might be able to solve a problem in a way that has never been done before, the customers will feel their burdens taken away, replaced by nothing but overwhelming bliss and (…you get the point) – but it doesn’t take much perceived imperfection, and the users will start making mistakes, feel unsatisfied, stupid even, never understanding why you “cannot implement the easiest ideas” (most likely your fault), always pretending, always seeming obsessed, wasting their money, just not “getting” it.

And what’s worse, you cannot even separate yourself from the product that easily. You can always start a UX session, like A/B testing, with the Non-Technically-Adept User, let them tell you their emotions, difficulties, misunderstandings – and due to the large gap between their mental model and yours, communication is not that hard. But with too-similar mental models, they will always think A if you present B. If they don’t understand why something is implemented as B, they can’t stop their thoughts from rationalizing, maybe even nodding their heads, but what you really need is the open discussion of how a common-ground solution C might look.

They might invite you to a meeting and not even see the points you are seeing; they might consider their problems solved from minute one – and spend the meeting talking about several possible steps ahead.

The Technically Adept User is a blessing in that their knowledge can take some of the onus of understanding the real problem away from you – but also a challenge, because they need to invest more energy in understanding your differences of understanding. There is no universal solution for how to make them do that.

I am writing this blog post to keep my own thoughts rolling on that topic. There must be some ways of communicating this gap. It should be done in a way that neither lets the user feel dumb, insulted, or their pride taken away, but also in a way that lets them know that you can be both: knowledgeable about their problem AND flexible in iterating through different solutions, trying what works until something sticks. Time and patience are an issue in this, too.

If you have any suggestions or insights to share, please feel free to do so. I have some ideas in mind but will continue with Part 2. Let’s see what we can learn from this 🙂

Why we can’t dispose of every Manager, even in Tech fields

The world is in a place in which we gradually learn to question old patterns of societal behaviour and to look deeply into the meaning of what it is that we actually do. Especially in the technical / digital field, we see that our hands are not tied – most of the things we do, we can decide to do for an actual reason, or find a way of disposing of exactly that – at whatever speed or level of disruption we consider appropriate.

While this holds for any level of detail, it is especially fruitful with those patterns where we do something for historical reasons; the patterns that emerged because they seemed to hold true for generation after generation – so why should the present day be any different?

Now, I do not hate any tradition just for being a tradition. That stance is not useful, as a significant number of traditions have underlying reasons why they work. Sometimes these are not obvious at all, and worse, sometimes everyone around us thinks they can tell us these reasons, while what they tell us are mostly empty phrases, placeholders, tranquilizers, red herrings, tautologies. But I digress.

One of the largest volumes of “what we do” revolves around our work. Not only in a “this is a German cliché” sense, but in the universal sense of “what we deliver to sustain our livelihood”. And some of the most profitable discussions are those at the intersection of that work and our inner state of being – our emotions, attitudes, values, interpersonal relations. The stuff that plays no role in traditional work.

To clarify, the world has not yet ascended to a magical sphere of enlightenment where we’re just so incredibly smarter than our ancestors, but at least the reasoning takes place, and is within reach to be acted upon.

But there is little insight in only asking “what did they get wrong?”. You also have to ask “what did they get right?” to get to the next level.

The Question: “What is Management?”

It’s not so simple to tell, because if you just look at what most Managers do, you can only answer “What is Mismanagement?”. Think of any fictional CEO, and chances are you will imagine one of these movie stereotypes: the people who put more work into buttering their insecure egos with layers of self-importance and complacency until the whole package somewhat sticks.

As such, there appears to be a current wave of Management Consultants who actually do God’s work in trying to correct that view. Do any internet search; there are plenty of quotable verdicts in that field – “As a manager, you should trust your people and get out of their way. They work with you for a reason.” – And even if they might not convince many Executives to change their ways, more lower-ranked employees are losing their tolerance for such mannerisms.

This means that in a job interview, a Newcomer can very well ask: “Tell me, what do we need you for? Under the assumption that I’m the right person for this job, what could I possibly need an Executive for without invalidating the former assumption?” (…which is not something your average applicant would say, but it’s the core of that idea that matters.)

So, there’s the obvious bulk of organisational overhead: outside representation, legal matters, ensuring everyone gets paid on time, duties around Datenschutz (data protection) & friends – i.e. tasks that are most efficiently handled by as few hands as possible. (Depending on how American you feel, you might also need to constantly insult your competitors publicly.) But other than that, do you just hang around and ask your employees about their favorite variety of coffee once in a while?

  • Yes, in some instances that is the most preferable conduct for your Manager,
  • Generally, it’s not.

Where you can really reap the crops

Basically, you are deployed to optimize the flow of every relevant currency. These might change over time, but it is not only the regular appearance of some money in some accounts, and it’s also not only getting out of the way to let the recipients of said money do their magic. It’s also not about everyone feeling absolutely smart and awesome on every occasion. Neither the opposite.

You can be of actual use. Here are three points that come to my mind.

  • 1.) Define the Culture, especially its Boundaries
  • 2.) Evaluate the Direction of Motion
  • 3.) Advertise that Words can have an Actual Meaning

Now I guess I have written a lot already, so I will have to delve into details in some upcoming posts. But let me summarize.

@1.) Defining the culture of your workplace is something you either do consciously or fail at miserably, and it gets exponentially more important over time.

  • Some obvious things you might not need to define (e.g. “no physical assault”)
  • Then there’s the area of potential micromanagement or personal overreach. These things you should never interfere with, or only decide case by case.
  • But some decisions in between have many options, and still someone needs to break the symmetry. E.g. “use this particular tool for XYZ”. This can change, and you should always be able to give actual reasons, but if you don’t limit this choice, your people will waste their focus on the wrong questions – and one can only handle a certain number of question marks in a day. It’s about focus.

@2.) Evaluating the direction of motion of your business also sounds obvious. It more or less is. But there is not only motion in the sense of business output, e.g. “we focus on software that…”, but also inner direction, e.g. “we aim to recruit people who…”. These will change due to factors outside your control, and if you choose not to change course, you should do so on purpose, not by not thinking about it.

@3.) Knowing what you do (see 2) and how you do it (see 1) is quite nice already. Some focus is even nicer, but even more important than that is guaranteeing a level of consistency which ensures that the aforesaid boundaries are not subject to momentary reassessment. Do not think that one can ever act on a purely rational basis – that never works – but treat your word very deliberately. This does not mean every actual word choice, but their fundamental underpinning. Not every attitude can hold forever, either. Neither can you be perfect and never forget a thing. But if you appear careless with your word, people will notice that you consider them merely pawns in a game, and they might see that you don’t actually know how to play.

Now this became longer than intended. Anyway, I wish you a happy <legal reason why there’s no work next Monday>!

PS: this blog post is not sponsored or otherwise favorably promoted by the Softwareschneiderei management level 😉

Web Components, Part 2: Encapsulating and Reusing common Element Structure

In my previous post, I gave you some first impressions about custom HTML Web Components. I cut myself short there to make my actual point, but one can surely extend this experiment.

The thing about the DOM is that it is one large, global block of information. In order to achieve loose coupling, you need to exert that discipline yourself: using document.getElementById() and friends, you can easily couple the most distant components to each other’s inner workings, which can make the whole thing very unsafe to change.

For that problem, Web Components offer the Shadow DOM. I.e. if you define your component as previously,

class CustomIcon extends HTMLElement {
    connectedCallback() {
        this.innerHTML = `
            <svg id="icon">
                <!-- some content -->
            </svg>
        `;
        const element = document.getElementById("icon");
        element.addEventListener(...);
        // don't forget to removeEventListener(...) in disconnectedCallback()! - but that is not the point here
    }
}

it becomes possible to document.getElementById("icon") from anywhere globally, too. Especially with such generic identifiers, you really do not want to leak your inner workings. (Yes, in a very custom application there might be valid cases of desired behaviour, but then the IDs are usually named e.g. __framework_global_timeout, custom--modal-dialog, … so as to avoid accidental clashes.)
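To illustrate the leak (a contrived sketch):

// somewhere completely unrelated in the application:
const icon = document.getElementById("icon");
icon.remove(); // silently reaches into CustomIcon's internals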

Attaching the Shadow DOM is done as easily as

class CustomIcon extends HTMLElement {
    constructor() {
        super();
        this.attachShadow({ mode: 'open' });
    }

    connectedCallback() {
        this.shadowRoot.innerHTML = ... // your HTML here
    }
}

Two points:

  • The attachShadow() can also be called in the connectedCallback(), even if that is usually not required. Generally, there is some debate between these two options, and I think I’ll write another episode of this post when I have some further insight on that.
  • The { mode: 'open' } is what you actually use, because 'closed' does not give you that much benefit, as outlined in this blog here. Just keep in mind that yes, it’s still JavaScript – you can access the shadowRoot object from the outside and then still do your shenanigans, but at least you can’t claim to have done so by accident (see the snippet below).
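For illustration (a minimal sketch):

const icon = document.querySelector("custom-icon");

// with { mode: "open" }, the shadow root is reachable from outside:
icon.shadowRoot.getElementById("icon"); // works - but now it is clearly deliberate

// with { mode: "closed" }, icon.shadowRoot would just be null -
// which sounds safer, but can be worked around anyway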

This encapsulation makes it easier to write reusable code, i.e. decrease duplication.

As with my case of the MagicSparkles icon – I might want to implement some other (e.g. Font Awesome) icons and have all of these carry the same “size” attribute. It might look like:

// SvgIconProps is not spelled out in the original - this is the shape the function implies
interface SvgIconProps {
    children: string;
    viewBox?: string;
    defaultColor?: string;
}

export const addSvgPathAsShadow = (element: HTMLElement, { children, viewBox, defaultColor }: SvgIconProps) => {
    const shadow = element.attachShadow({ mode: "open" });
    const size = element.getAttribute("size") || 24;
    const color = defaultColor || "currentcolor";
    viewBox ||= `0 0 ${size} ${size}`;
    shadow.innerHTML = `
            <svg
                xmlns="http://www.w3.org/2000/svg"
                width="${size}"
                height="${size}"
                viewBox="${viewBox}"
                fill="${color}"
            >
                ${children}
            </svg>
        `;
};

export class PlayIcon extends HTMLElement {
    connectedCallback() {
        addSvgPathAsShadow(this, {
            viewBox: "0 0 24 24",
            children: "<path fill-rule=\"evenodd\" d=\"M4.5 5.653c0-1.427 1.529-2.33 2.779-1.643l11.54 6.347c1.295.712 1.295 2.573 0 3.286L7.28 19.99c-1.25.687-2.779-.217-2.779-1.643V5.653Z\" clip-rule=\"evenodd\" />"
        });
    }
}

// other elements can be defined similarly

// don't forget to actually define the element tag somewhere top-level, as with:
// customElements.define("play-icon", PlayIcon);

Note that

  • this way, children is required as a fixed string. My experiments haven’t yet worked out how to use the "<slot></slot>" here (to pass the children given by e.g. <play-icon>Play!</play-icon>)
  • Also, I specifically use the || operator for the default values – not the ?? – as an attribute given as an empty string would not be defaulted otherwise (?? only checks for undefined or null). See the snippet below.
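For illustration of that last point:

const size = "" || 24;  // 24 - the empty string is falsy, so it gets defaulted
const size2 = "" ?? 24; // "" - ?? only substitutes null and undefined

(Remember that getAttribute("size") returns an empty string when the attribute is present but empty, and null only when it is absent entirely.)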
Conclusion

As concluded in my first post, we see that one tends to recreate the same patterns already known from the existing frameworks, or from software architecture in general. The tools are there to increase encapsulation, decrease coupling and decrease duplication, but there’s still no real reason not to just use one of the frameworks.

There might be at some point, when framework fatigue is too much to bear, but try to decide wisely.

Web Components – Reusable HTML without any framework magic, Part 1

Lately, I decided to do the frontend for a very small web application while learning something new, and, for a while, tried doing everything without any framework at all.

This only worked for so long (not very long), but along the way, I found some joy in figuring out sensible workflows without the well-worn standards that React, Svelte and the like give you. See the last paragraph for a quick judgement.

Now many anything-web-dev-related people might have heard of Web Components, with their custom HTML elements that are mostly supported in the popular browsers.

Has anyone used them, though? I personally hadn’t, and now I have. My use case was pretty simple – I wanted several icons, and I wanted to be able to style them in a unified fashion.

It shouldn’t be too ugly, so why not take something like Font Awesome or heroicons? These give you pure SVG elements, so now I have the Font Awesome “Magic Sparkles Wand” like

<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 576 512"><!--!Font Awesome Free 6.5.1 by @fontawesome - https://fontawesome.com License - https://fontawesome.com/license/free Copyright 2024 Fonticons, Inc.--><path d="M234.7 42.7L197 56.8c-3 1.1-5 4-5 7.2s2 6.1 5 7.2l37.7 14.1L248.8 123c1.1 3 4 5 7.2 5s6.1-2 7.2-5l14.1-37.7L315 71.2c3-1.1 5-4 5-7.2s-2-6.1-5-7.2L277.3 42.7 263.2 5c-1.1-3-4-5-7.2-5s-6.1 2-7.2 5L234.7 42.7zM46.1 395.4c-18.7 18.7-18.7 49.1 0 67.9l34.6 34.6c18.7 18.7 49.1 18.7 67.9 0L529.9 116.5c18.7-18.7 18.7-49.1 0-67.9L495.3 14.1c-18.7-18.7-49.1-18.7-67.9 0L46.1 395.4zM484.6 82.6l-105 105-23.3-23.3 105-105 23.3 23.3zM7.5 117.2C3 118.9 0 123.2 0 128s3 9.1 7.5 10.8L64 160l21.2 56.5c1.7 4.5 6 7.5 10.8 7.5s9.1-3 10.8-7.5L128 160l56.5-21.2c4.5-1.7 7.5-6 7.5-10.8s-3-9.1-7.5-10.8L128 96 106.8 39.5C105.1 35 100.8 32 96 32s-9.1 3-10.8 7.5L64 96 7.5 117.2zm352 256c-4.5 1.7-7.5 6-7.5 10.8s3 9.1 7.5 10.8L416 416l21.2 56.5c1.7 4.5 6 7.5 10.8 7.5s9.1-3 10.8-7.5L480 416l56.5-21.2c4.5-1.7 7.5-6 7.5-10.8s-3-9.1-7.5-10.8L480 352l-21.2-56.5c-1.7-4.5-6-7.5-10.8-7.5s-9.1 3-10.8 7.5L416 352l-56.5 21.2z"/></svg>

Say I want to have multiple of these, and I want them to have different sizes. And I have no framework for that. I might, of course, write JavaScript functions that create an SVG element, equip it with the right attributes and children, and use those throughout my code, like

// HTML part
<div class="magic-sparkles-container">
</div>

// JS part
for (const element of [...document.getElementsByClassName("magic-sparkles-container")]) {
    element.innerHTML = createMagicSparklesWand({size: 24});
}

// note that you need the array destructuring [...] to convert the HTMLCollection to an Array

// also note that the JS part would need to be global, and to be executed each time a "magic-sparkles-container" gets constructed again

But one of the main advantages of React’s JSX is that it can give you a smooth look at your components, especially when the components have quite speaking names. And what I ended up with is way smoother to read in the HTML itself:

// HTML part
<magic-sparkles></magic-sparkles>
<magic-sparkles size="64"></magic-sparkles>

// global JS part (somewhere top-level)
customElements.define("magic-sparkles", MagicSparklesIcon);

// JS class definition
class MagicSparklesIcon extends HTMLElement {
    connectedCallback() {
        // take "size" attribute with default 24px
        const size = this.getAttribute("size") || 24;
        const path = `<path d="M234.7..."/>`;
        this.innerHTML = `
            <svg
                xmlns="http://www.w3.org/2000/svg"
                viewBox="0 0 576 512"
                width="${size}"
                height="${size}"
            >
                ${path}
            </svg>
        `;
    }
}

The customElements.define needs to be called at the very top level, once, and this thing can be improved, e.g. by using a shadow root (attachShadow()) and by implementing attributeChangedCallback() etc., but this is good enough for a start. I will return to some refinements in upcoming blog posts, but if you’re interested in details, just go ahead and ask 🙂
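As a quick sketch of that attribute-watching refinement (untested in this exact form):

class MagicSparklesIcon extends HTMLElement {
    // declare which attributes should trigger the callback below
    static get observedAttributes() {
        return ["size"];
    }

    attributeChangedCallback(name, oldValue, newValue) {
        // re-render when "size" changes after the initial rendering
        if (name === "size" && oldValue !== newValue) {
            this.render();
        }
    }

    connectedCallback() {
        this.render();
    }

    render() {
        const size = this.getAttribute("size") || 24;
        this.innerHTML = `
            <svg
                xmlns="http://www.w3.org/2000/svg"
                viewBox="0 0 576 512"
                width="${size}"
                height="${size}"
            ><!-- path as above --></svg>
        `;
    }
}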

I figured out that there are some attribute names that cause problems, which I haven’t really found documented much yet. Don’t call your attributes “value”, for example – this gave me one hard-to-solve conflict.

But other than that, this gave my No-Framework-Application quite a good start with readable code for re-usable icons.

To be continued…

In the end – should you actually go framework-less?

In short, I wouldn’t ever do it for a customer project. The above was a hobby experience, just to have gone down that road once, but it feels like there’s not much to gain in avoiding all frameworks.

I didn’t even have hard constraints like “performance”, “bundle size” or “memory usage”, but even if I did, there are posts like Framework Overhead is bikeshedding that question the notion that something like React is inherently slower.

And you pay dearly for such “lightweightness” in terms of readability, understandability, code duplication, trouble with code separation, type checks, violation of Single-Level-of-Abstraction; not to mention that you make it harder for your IDE to actually help you.

Don’t reinvent the wheel more often than necessary. Not for a customer who just wants their product soon.

But it can be fun, for a while.

JavaScript – some less known Gems

I guess we can all agree that the most fun part of anything related to JavaScript is reading foreign JavaScript code 🙂

But while most of the difficulty in understanding foreign (especially older) code lies in the year-to-year fluctuations of common style – I can’t really tell why I have this urge to change each “function” to a “const” declaration nowadays either – once in a while you stumble across some feature that is just too arcane.

Which means that it’s at least worth knowing about – after that, you can decide for yourself whether these enter your active vocabulary.

So these are some of my recent findings. Feel free to add.

Labelled Loops

JavaScript allows you to label any statement with a unique identifier. This is probably not a surprise for any Svelte developer (the $:… syntax is exactly that), but it is most useful in loops, because you can break or continue an outer loop using this:

// "outer" is the label here

outer: for (...) {
  for (...) {
    ...
    if (weAreDone()) break outer;
  }
}

It might indeed have some old-school “GOTO” vibes, and one can argue that in most cases there might be a more concise solution right around the corner, but especially if you have a tricky lookup algorithm, this might come in handy one day.

Comma Operator

While I wondered for some years who on earth actually uses the Comma operator in C/C++, just a few weeks ago I found out that JavaScript actually has the same thing.

It allows you to evaluate some expression, ignore its return value and directly evaluate the next expression in the same statement.

// (expr1, expr2) evaluates expr1 and expr2, and returns expr2

let b;
const a = (b = 5, 3);
// a is now 3, but b = 5

let x, y;
for (x = 0, y = 0; x + y < 3; x++, y++) { console.log(x, y) }
// output:
// 0 0
// 1 1

However, I would not advise ever using this thing. I mean, if you have some complicated expression where you decide to just do some step before evaluating some crucial other step – maybe it’s time to refactor the whole method.

void Operator

The void operator is like a special case of the comma operator in that it evaluates something and then returns undefined.

const a = calculateStuff(); // a might be whatever
const b = void calculateStuff(); // b is undefined

const c = (calculateStuff(), undefined); // c is identical to b, see above

This is e.g. an expression that I found while having to read some minified React code. So it might be of a certain use if you want to minimize the number of letters in your code, but a more readable way would be to just define a function that evaluates calculateStuff() and then returns without a return value.
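I.e. something like this trivial sketch:

// instead of: const b = void calculateStuff();
function runCalculation() {
    calculateStuff();
    // no return statement - callers receive undefined anyway
}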

However, the MDN web docs (referenced again here) give some real use cases of that operator, so if you are into that cryptic knowledge, go right ahead.

Bitwise NOT as a “found in List” check

Recently, I was weirded out by having to look at a list inclusion check in the likes of:

const list = ["a", "b"];
if (~list.indexOf("a")) {
  alert("found");
}

And this piece of code will actually reach the alert, while it might leave you wondering what the “~” is doing there. Unless you are used to very low-level bit arithmetic, in which case you’d directly recognize the bitwise NOT. It’s the operator that inverts every bit of the binary representation of a number to its opposite.

And the whole magic here lies in that for normal numbers (or BigInt), this is mathematically the same as

~a = -a - 1;

especially:
~-1 = 0
~0 = -1

and that for JavaScript, 0 is a falsy value while any other number is a truthy one. So this code above is used just to explicitly distinguish whether indexOf() returned “-1”, which it does if the object in question was not found.

So there you have it, but I’d rather use the way more readable

if (list.includes("a")) {...}

Bonus: console.log() styling

Now, this does not really fit with the other operations above, but nevertheless I keep hearing of people who did not know this before, so I’ll just drop it here.

If any argument of console.log() starts with “%c”, it will take the next argument as a styling instruction instead of printing it normally. That is, the next argument needs to be a valid string that could also appear in an HTML style="…" attribute, as in

console.log("%cSo Big!", "font-size: 100pt; color: magenta");

Now, considering that console.log() is really only the most rudimentary way to output some statements for debugging (and one usually neglects the other ones like console.time(), console.timeEnd(), console.table()), this is not the next biggest thing that your imaginary Crypto Blockchain AI SaaS startup just needed, but it’s nevertheless good to know if you need some distinction in your logs.

Conclusion: Do whatever you like with that knowledge

While there are many things that one might love or hate about JavaScript – e.g. you might like the boolean coercion via !!, or you might hate the ??=, or you might, with a mission, peddle generator expressions with function*/yield to your team – or or or… – there are always some more things that even after some years just look weird to my trained eye.

I guess it’s not completely wrong to think that such expressions are occult for a specific reason, in that there are only a handful of legit use cases for them, and forcing occult expressions into code that exceeds script size might cause some serious pain in the future.

But nevertheless, knowledge is power, so have a nice powerful evening.

Nonreligious Guidance for the JavaScript vs. TypeScript Debate

It’s always fun times when developers on the internet get heated over some discussion about their tool stack. One current case seems to be that some developers experienced their cases of “TypeScript is not giving me an adequate return on investment”, and there are several articles which boil down to “I just don’t like it” – just google something along the lines of abandoning / ditching / dropping TypeScript and the resulting discussions on Reddit.

Now – bad news for anyone enjoying online arguments: Never has wisdom been reached by stating advantages and disregarding the disadvantages. I took some time to reflect, as several of my projects at Softwareschneiderei as well as my private ones use different tech stacks, to note the cases where I was happy about each choice of language, and to note where I wished to have the other one.

Of course, there is some kind of rule that if there are two quite similar things, most humans will pick one of these things to embrace and caress, and the other one to hate with a passion, and then insult each other’s intelligence. That is, of course, very helpful and generally awesome – and absolutely not productive in actually changing anyone’s mind.

Now, I would mostly suggest that people try to get used to TypeScript, in order to have that tool at hand. But I have also cherished the flexibility of JavaScript and seen cases where I would prefer it, at least for the current state of the corresponding project.

Let me elaborate.

Quick Note: TypeScript is not really a superscript of JavaScript

This has to be stated beforehand. It is to be said that if you write your TypeScript in a fashion where you “as any” your types at will, I would not call it by that name. Yes, the language allows doing so, but the choice of any language is also the choice of a certain mindset that goes with it. Several linting presets even disallow the explicit any. Which makes sense, because if you love the “as any”, you are not thinking TypeScript anyway.

Yes, doing it sparingly is rather a code smell than a red flag. But the mindset of TypeScript does not include the mindset of JavaScript as a subset, therefore TypeScript cannot be anything like a superscript of JavaScript.
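A contrived sketch of what the “as any” throws away (the House type and the typo are made up for illustration):

interface House {
    address: string;
}

const data = JSON.parse('{"adress": "Elm Street"}') as any; // note the typo in the key
const house: House = data;         // compiles fine - the type system has given up
console.log(house.address.length); // and this explodes at runtime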

When I would use TypeScript

So when was I most happy about TypeScript?

  • Where I already had a rather clear model of my domain and then had to extend or change the current functionality.
  • When writing new methods that work with clear types – the support in knowing what these things are gives you a real boost in productivity.
  • When my last episode of development was some considerable time ago, or was done by a different developer.
  • When the smaller parts / submodules / … interface with each other in a clear fashion and most development is focussed only on particular parts. Then, if an API changes for a particular reason, having to redesign your types avoids dangerous regressions down the line.
  • Also, if you have a clear use case with many different, similar types of data. If it is not clear from seeing an object (“oh, this is a house, not an animal” vs. “oh, this is a house per se, not an offer for a house for sale”), the type hints alone will speed up your thought processes.
  • … also, if you don’t have an IDE which does some type analysis anyway.

And when did I prefer plain JavaScript?

  • I experienced my largest annoyance with TypeScript in cases where our development aimed at clarifying the domain model itself – as in very experimental stages, in which it is more important to scaffold a basis for discussion; i.e. changes were not just renaming a field or changing its type, but a fundamentally updated understanding.
  • Interfacing large modules where data gets serialized in between anyway (e.g. server-client interactions) – remember that TypeScript does not guarantee real type safety; any object can still be whatever it likes to be (see the sketch after this list). If I have to double-check the content anyway, I’d rather do so without the extra boilerplate.
  • When doing lots of functional programming. TypeScript is just plain ugly when you pass function types as arguments, and I have not yet seen the case where that really prevented any mistakes.
  • Mostly, when I do “library” code as opposed to “application” code, especially when dealing with many intermediate types. Your code can become bloated by verbose type definitions which do not contain any real value. The extra work of having to think up names for these does not make one a hero.
  • Especially when having to deal with Redux or some of the React querying / web request / caching libraries that aim to make your life easier – sometimes these don’t even export all their types, which makes it quite a hassle to write utility functions for them.
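Regarding the serialization point above, a small sketch of what I mean (endpoint and type are made up):

interface House {
    rooms: number;
}

async function loadHouse(): Promise<House> {
    const response = await fetch("/api/house");
    // an assertion, not a runtime check: if the server sends { rooms: "3" },
    // TypeScript will not notice - the real validation has to happen anyway
    return (await response.json()) as House;
}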

In short, forcing oneself to use TypeScript can lead to problems similar to the “wrong abstraction” problem. If you are in a state of development where you can thoroughly define your types, and these are mostly smaller, clearly cut types, it’s likely that you gain traction by doing this work beforehand.

Conclusion: Don’t be too religious about it.

I consider it just not true that one cannot write large, safe projects in plain JavaScript. And one is still able to write monstrous, unmaintainable projects with TypeScript. Sometimes the type definitions are just not the main concern in the current stage of development.

Think about it deliberately, and know each one’s advantages.

Also, some people currently propagate JSDoc as the one way superior to all others. I have not yet given it a proper chance, mostly because of its ugly aesthetics – but I’m open to trying it some day.

Addendum: JavaScript for flexible React Components

While this is a special case of my suggestion to “rather use JavaScript for functional-programming-heavy cases”, you might run into TypeScript trouble a lot in cases where you want to use flexible React Components like

import ComponentA from ...;
import ComponentB from ...;

const FlexibleComponent = ({conditionProp, ...props}) => {

    const Component = React.useMemo(() =>
        conditionProp 
            ? ComponentA
            : ComponentB
        , [conditionProp]);

    return <Component {...props}/>;
};

While you can argue that usually this hints at “you need a better pattern for ComponentA and ComponentB if they share so many similarities”, such a construct might be useful when patching together several external dependencies.

I have not yet found a way to cleanly match this distinction using TypeScript, especially since external dependencies might come – see above – with closed type definitions. Of course, you might go the “any” route here as well…

When custom React Hooks do not rerender Components on their own – make them.

Depending on who you ask, custom React Hooks are

  • a great way to stash away detailed inner workings of your application, making a) them reusable and b) your components cleaner and less complex
  • a horrible invention that hides away all the dreadful complexities one can think of, and by just making them invisible, does not reduce any complexity at all

As usual, one has to strike that balance depending on the use case, but in most cases I prefer my components to have a rather manageable lines-of-code count (because this makes it easier to visually analyze their actual JSX structure, i.e. their semantics – what they are supposed to do).

However, sometimes an app grows over time and reaches a level of intricacy that seems to “outsmart” React itself, therefore breaking it. I do not know how to describe it otherwise:

I had a case of nested custom Hooks, in which one inner hook was executing a database query, returning a result and also a function to invalidate() and thus re-execute the query. It had been my understanding that…

const useOurHook = () => {
    const query = useInnerHookWhichExecutesSomeQuery();

    console.log("query returned", query);

    return {
        result: query.result,
        invalidate: query.invalidate
    };
};

const Component1 = () => {
    const {result} = useOurHook();

    return <div>{JSON.stringify(result)}</div>;
};

const Component2 = () => {
    const {invalidate} = useOurHook();

    return (
        <button onClick={() => invalidate()}>
            invalidate
        </button>
    );
};

… pressing the button in Component2 will update the return value of the inner query hook, thus update the return value of the outer hook and finally update Component1.

However, that just did not happen. Even stranger, I could see my updated query result in the console.log statement within useOurHook(), but Component1 was staying as it was.

It took me several attempts at the inner workings of both my hooks. I tried to wrap the return values inside React.useMemo(), or to specifically put them inside a React.useState() that was explicitly set by a React.useEffect() – which should rather have the same outcome, but then again, I do not know the actual React source code by heart – and there was just nothing that helped.

If you have any explanation for me that excels “yeah, React was broken” in its level of insight, please tell me. (Maybe I have to read some docs; my best guess so far is that hook state lives per component instance – so Component1 and Component2 each got their own instance of the inner query hook, and invalidating one did not touch the other – but it wasn’t obvious…)

So this is what helped. Rather than passing the invalidate function to my components, I decided to use the update functionality of the Redux useSelector() hook in the following way:

const useOurHook = () => {
    const lastRequestAt = useSelector(state => state.somewhere.lastRequestAt); // get timestamp from Redux store
    const query = useInnerHookWhichExecutesSomeQuery();

    React.useEffect(() => {
        if (lastRequestAt > 0) {
            query.invalidate();
        }
    }, [lastRequestAt, query.invalidate]);

    console.log("query returned", query);

    return {
        result: query.result,
    };
};

const Component1 = () => {
    const {result} = useOurHook();

    return <div>{JSON.stringify(result)}</div>;
};

const Component2 = () => {
    const dispatch = useDispatch();

    return (
        <button onClick={() => dispatch(updateRequest())}>
            invalidate
        </button>
    );
};

//////// and somewhere in a Redux slice:

...
reducers: {
    updateRequest: (state) => {
        state.lastRequestAt = Date.now();
    }
}
...

and this brought me the desired results. Now, I saw the update of query.result not only in the console.log, but also in Component1.

Now, I agree that it appears quite wasteful to employ something as overbearing as Redux just to work around my weird situation, but I had Redux in my project anyway. I guess you could also use another state management or a custom useContext() solution to work around this – just to give you the idea.

But I found it quite remarkable. It went against what I knew about React that you can have a hook update (visible in the console.log) without actually having React update a component that uses its return value.

Please, please – if any of you has any hint for insight, or is just curious about my concrete use case – I’ll be happy to discuss.

Interacting with SVG files inside your React applications

So, for some reason you have an SVG file that somehow resembles a part of the application you are currently developing.

This might be only a sketch that you want to prepare as a click dummy, or it might be that you need to display a somewhat unique, complicated structure that is best laid out in an SVG editor. This is somewhat expected when you have customers in the technical / scientific research sector.

So now you want to fill it with life.

Now, the SVG format is quite close to the <svg> structure that one can embed into HTML, but there are some steps in between. Most importantly, most SVG editors fill their .svg files up with metadata or information only required by the editor itself, in case you want to edit the files again.

Thus, you have three choices to integrate the SVG component in a React application

  • Re-Build your SVG with custom React Components that, via JSX, render their <svg>, <g>, <path>, etc. accordingly
  • Convert your SVG to valid JSX – this is possible in many cases, but you need to take care that e.g. the style attribute is a string in SVG and an object in JSX (see the snippet below); also, the result can still be way too large to be readily put into a single React Component
  • Import your .svg as its own React Component and then wrap that into an own Component that takes care about the interaction part
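For the style conversion mentioned above, a minimal example:

// in the .svg file:
//   <rect style="fill:#000000;stroke-width:2" />

// the JSX equivalent - the style attribute becomes a camelCased object:
const rect = <rect style={{ fill: "#000000", strokeWidth: 2 }} />;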

While I have also written a small converter that does the SVG-JSX conversion just fine for any file that comes out of Inkscape (probably an idea for my next blog post), we had the case of some files with about 16000 lines of SVG each, so I chose the third method in our case.

In my eyes, it is fair to mention that the following way somewhat goes against the React mindset, in which you render all your components yourself in order to attach the required mouse event handlers to them, never having to interact with an HTML “id” or any document.getElementById() or document.getElementsByClassName().

In any React application, these should be avoided, but the idea here is to have a singular point – a SvgWrapper component – where you allow these functions, because you’d agree there is a need to somehow target the specific SVG elements.

The gist:

import {ReactComponent as OurHorrificSvgMonster} from "/src/monster.svg";

const OurBeautifulComponent = () => {

    useOurCarefulSvgSynchronizationEffect(); // more on this below

    return (
        <SvgWrapper>
             <OurHorrificSvgMonster/>
        </SvgWrapper>
    )
};

Quick note: you can target the embedded <svg> element itself via <OurHorrificSvgMonster ref={...}/>, and you could use this (ref.current holds that HTML element) to traverse all the children – so if you know a lot about the structure of your SVG, you could even live without the <SvgWrapper>. But say someone else made the horrific SVG monster, and all you have are the IDs or class names of the individual SVG elements inside.

Then

const WithVanillaHandlersConnected = ({children}) => {
    const dispatch = useDispatch();

    React.useEffect(() => {

        const onClick = (event) => {
            dispatch(awesomeAction(event.target.id));
        };

        const awesomeElements = [...document.getElementsByClassName("awesome")];

        awesomeElements.forEach(elem => {
            elem.addEventListener("click", onClick);
        });

        // clean up the listeners again when unmounting
        return () => awesomeElements.forEach(elem => {
            elem.removeEventListener("click", onClick);
        });
    }, [dispatch]);

    return children;
};

I chose this dispatch() as a placeholder for any interaction with the surrounding web application; it could also be a simple React state or something. You can register any event listener you want here (also “mouseover”, “mouseout”, “contextmenu”, …), but think of removing it again in the effect return function.

By the way, document.getElementsByClassName(…) returns an HTMLCollection, which does not offer .forEach() – thus the […destructuring] to turn it into an Array.

We now have the first part – our elements (in our case, everything that has the class “awesome”) have got a click handler that allows us to dispatch anything to the application state. But now they need to change, too.

In a purely React-y way, this could be done by an svg element that chooses its fill={isActive ? "magenta" : "black"}, but as we chose not to render our components ourselves, we need to once again grab deeply into the DOM and dare to manipulate it by hand.

As mentioned above – this is a step towards very ugly problems, as React cannot guarantee that your visual layer matches your application state. You, on your own, have to guarantee that you do what’s right.

This is where this comes in:

/*
 for this example, imagine that the redux selector selectSomethingFromTheState returns something like:

result = [
  {elementId: "elem1", isActive: true},
  ...
];

and isActive could be the thing that was toggled by our awesomeAction() above

*/

const useOurCarefulSvgSynchronizationEffect = () => {
    const elementStates = useSelector(selectSomethingFromTheState);

    React.useEffect(() => {
        for (const state of elementStates) {
            const element = document.getElementById(state.elementId);
            element.style.fill = state.isActive ? "magenta" : "black";
            // ... do other stuff with the DOM element
        }
    }, [elementStates]);
};

There we have it – we have the SvgWrapper and the use…SynchronizationEffect() that both stray from the React mindset by accessing the DOM directly, but we do it in a fashion where it is concisely encapsulated inside <OurBeautifulComponent>, and there is no direct knowledge about the IDs inside the SVG, or class manipulations, CSS selectors, etc. elsewhere.

In my opinion, one can indeed go against the rules if it’s necessary, but I also see the option for a Stockton Rush quotation here… so, if you know of any more elegant way, please feel free to share.

PS: by the way, if you use vite, you might get an “Uncaught SyntaxError” when trying import { ReactComponent ... } – I’ve written about this before.