Web Components, Part 2: Encapsulating and Reusing Common Element Structure

In my previous post, I gave you some first impressions about custom HTML Web Components. I cut myself short there to make my actual point, but one can surely extend this experiment.

The thing about the DOM is that it is one large, global block of information. Loose coupling is a discipline you have to exert yourself: with document.getElementById() and friends, you can easily couple the most distant components to each other's inner workings. Which can make it very risky to change anything.

For that problem, Web Components offer the Shadow DOM. I.e. if you define, as previously, your component as

class CustomIcon extends HTMLElement {
    connectedCallback() {
        this.innerHTML = `
            <svg id="icon">
                <!-- some content -->
            </svg>
        `;
        const element = document.getElementById("icon");
        element.addEventListener(...);
        // don't forget to removeEventListener(...) in disconnectedCallback()! - but that is not the point here
    }
}

it becomes possible to document.getElementById("icon") from anywhere globally. Especially with such generic identifiers, you really do not want to leak your inner workings. (Yes, in a very custom application, there might be valid cases of desired behaviour, but then the IDs are usually named e.g. __framework_global_timeout, custom--modal-dialog, … so as to avoid accidental clashes.)

Fixing this is as easy as

class CustomIcon extends HTMLElement {
    constructor() {
        super();
        this.attachShadow({ mode: 'open' });
    }

    connectedCallback() {
        this.shadowRoot.innerHTML = ... // your HTML here
    }
}

Two points:

  • The attachShadow() can also be called in the connectedCallback(), even if that is usually not required. Generally, there is some debate between these two options, and I think I’ll write another episode of this post when I have some further insight on that.
  • The {mode: 'open'} is what you actually use, because ‘closed’ does not give you that much benefit, as outlined in this blog here. Just keep in mind that yes, it’s still JavaScript – you can access the shadowRoot object from the outside and still do your shenanigans (see the sketch below), but at least you can’t claim to have done so by accident.
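
A minimal sketch of that difference – the element names here are made up:

class OpenIcon extends HTMLElement {
    constructor() {
        super();
        this.attachShadow({ mode: "open" });
    }
}

class ClosedIcon extends HTMLElement {
    constructor() {
        super();
        // in 'closed' mode, this return value is the only reference to the shadow root
        this._shadow = this.attachShadow({ mode: "closed" });
    }
}

customElements.define("open-icon", OpenIcon);
customElements.define("closed-icon", ClosedIcon);

const open = document.createElement("open-icon");
const closed = document.createElement("closed-icon");

open.shadowRoot;   // ShadowRoot - the outside "shenanigans" remain possible
closed.shadowRoot; // null - but the element itself could still leak this._shadow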

This encapsulation makes it easier to write reusable code, i.e. decrease duplication.

As with my case of the MagicSparkles icon – I might want to implement some other icons (e.g. from Font Awesome) and have all of them carry the same “size” attribute. It might look like:

interface SvgIconProps {
    children: string;
    viewBox?: string;
    defaultColor?: string;
}

export const addSvgPathAsShadow = (element: HTMLElement, { children, viewBox, defaultColor }: SvgIconProps) => {
    const shadow = element.attachShadow({ mode: "open" });
    const size = element.getAttribute("size") || 24;
    const color = defaultColor || "currentcolor";
    viewBox ||= `0 0 ${size} ${size}`;
    shadow.innerHTML = `
            <svg
                xmlns="http://www.w3.org/2000/svg"
                width="${size}"
                height="${size}"
                viewBox="${viewBox}"
                fill="${color}"
            >
                ${children}
            </svg>
        `;
};

export class PlayIcon extends HTMLElement {
    connectedCallback() {
        addSvgPathAsShadow(this, {
            viewBox: "0 0 24 24",
            children: "<path fill-rule=\"evenodd\" d=\"M4.5 5.653c0-1.427 1.529-2.33 2.779-1.643l11.54 6.347c1.295.712 1.295 2.573 0 3.286L7.28 19.99c-1.25.687-2.779-.217-2.779-1.643V5.653Z\" clip-rule=\"evenodd\" />"
        });
    }
}

// other elements can be defined similarly

// don't forget to actually define the element tag somewhere top-level, as with:
// customElements.define("play-icon", PlayIcon);

Note that

  • this way, children is required as a fixed string. In my experiments, I have not yet worked out how to use the "<slot></slot>" here (to pass the children given by e.g. <play-icon>Play!</play-icon>)
  • Also, I specifically use the || operator for the default values – not ?? – since an attribute given as an empty string would not be defaulted otherwise (?? only checks for undefined or null), as the snippet below shows.
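
A quick sketch of that difference:

const el = document.createElement("play-icon");
el.setAttribute("size", "");          // e.g. <play-icon size="">

el.getAttribute("size") || 24;        // 24 - the empty string is falsy
el.getAttribute("size") ?? 24;        // "" - ?? only replaces null and undefined
el.getAttribute("missing") ?? 24;     // 24 - an absent attribute yields null
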
Conclusion

As concluded in my first post, we see that one tends to recreate the same patterns already known from the existing frameworks, or from software architecture in general. The tools are there to increase encapsulation, decrease coupling and decrease duplication, but there is still no compelling reason not to just use one of the frameworks.

There might be at some point, when framework fatigue is too much to bear, but try to decide wisely.

Web Components – Reusable HTML without any framework magic, Part 1

Lately, I decided to do the frontend for a very small web application while learning something new, and, for a while, tried doing everything without any framework at all.

This worked only for so long (not very), but along the way, I found some joy in figuring out sensible workflows without the well-worn standards that React, Svelte and the likes give you. See the last paragraph for a quick comment about some judgement.

Now many anything-web-dev-related people might have heard of Web Components, with their custom HTML elements that are mostly supported in the popular browsers.

Has anyone used them, though? I personally hadn’t, and now I have. My use case was pretty simple – I wanted several icons, and I wanted to be able to style them in a unified fashion.

It shouldn’t be too ugly, so why not take something like Font Awesome or heroicons? These give you pure SVG elements, so now I have the Font Awesome “Magic Sparkles Wand” like

<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 576 512"><!--!Font Awesome Free 6.5.1 by @fontawesome - https://fontawesome.com License - https://fontawesome.com/license/free Copyright 2024 Fonticons, Inc.--><path d="M234.7 42.7L197 56.8c-3 1.1-5 4-5 7.2s2 6.1 5 7.2l37.7 14.1L248.8 123c1.1 3 4 5 7.2 5s6.1-2 7.2-5l14.1-37.7L315 71.2c3-1.1 5-4 5-7.2s-2-6.1-5-7.2L277.3 42.7 263.2 5c-1.1-3-4-5-7.2-5s-6.1 2-7.2 5L234.7 42.7zM46.1 395.4c-18.7 18.7-18.7 49.1 0 67.9l34.6 34.6c18.7 18.7 49.1 18.7 67.9 0L529.9 116.5c18.7-18.7 18.7-49.1 0-67.9L495.3 14.1c-18.7-18.7-49.1-18.7-67.9 0L46.1 395.4zM484.6 82.6l-105 105-23.3-23.3 105-105 23.3 23.3zM7.5 117.2C3 118.9 0 123.2 0 128s3 9.1 7.5 10.8L64 160l21.2 56.5c1.7 4.5 6 7.5 10.8 7.5s9.1-3 10.8-7.5L128 160l56.5-21.2c4.5-1.7 7.5-6 7.5-10.8s-3-9.1-7.5-10.8L128 96 106.8 39.5C105.1 35 100.8 32 96 32s-9.1 3-10.8 7.5L64 96 7.5 117.2zm352 256c-4.5 1.7-7.5 6-7.5 10.8s3 9.1 7.5 10.8L416 416l21.2 56.5c1.7 4.5 6 7.5 10.8 7.5s9.1-3 10.8-7.5L480 416l56.5-21.2c4.5-1.7 7.5-6 7.5-10.8s-3-9.1-7.5-10.8L480 352l-21.2-56.5c-1.7-4.5-6-7.5-10.8-7.5s-9.1 3-10.8 7.5L416 352l-56.5 21.2z"/></svg>

Say I want to have multiple of these, with different sizes. And I have no framework for that. I might, of course, write JavaScript functions that create an SVG element, equip it with the right attributes and children, and use that throughout my code, like

// HTML part
<div class="magic-sparkles-container">
</div>

// JS part
for (const element of [...document.getElementsByClassName("magic-sparkles-container")]) {
    element.innerHTML = createMagicSparklesWand({size: 24});
}

// note that you need the array spread [...] to convert the HTMLCollection to an Array

// also note that the JS part would need to be global, and to be executed each time a "magic-sparkles-container" gets constructed again

But one of the main advantages of React’s JSX is that it gives you a smooth look at your components, especially when the components have quite descriptive names. And what I ended up with is way smoother to read in the HTML itself

// HTML part
<magic-sparkles></magic-sparkles>
<magic-sparkles size="64"></magic-sparkles>

// global JS part (somewhere top-level)
customElements.define("magic-sparkles", MagicSparklesIcon);

// JS class definition
class MagicSparklesIcon extends HTMLElement {
    connectedCallback() {
        // take "size" attribute with default 24px
        const size = this.getAttribute("size") || 24;
        const path = `<path d="M234.7..."/>`;
        this.innerHTML = `
            <svg
                xmlns="http://www.w3.org/2000/svg"
                viewBox="0 0 576 512"
                width="${size}"
                height="${size}"
            >
                ${path}
            </svg>
        `;
    }
}

The customElements.define needs to be called once, very top-level. This thing can be improved, e.g. by using a shadow root and by implementing the attributeChangedCallback etc., but this is good enough for a start. I will return to some refinements in upcoming blog posts, but if you’re interested in details, just go ahead and ask 🙂
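
For the curious, a hedged sketch of how that attributeChangedCallback refinement could look – this re-renders whenever the “size” attribute changes:

class MagicSparklesIcon extends HTMLElement {
    // only attributes listed here trigger attributeChangedCallback
    static get observedAttributes() {
        return ["size"];
    }

    connectedCallback() {
        this.render();
    }

    attributeChangedCallback(name, oldValue, newValue) {
        if (oldValue !== newValue) {
            this.render(); // re-render with the updated "size"
        }
    }

    render() {
        const size = this.getAttribute("size") || 24;
        this.innerHTML = `
            <svg
                xmlns="http://www.w3.org/2000/svg"
                viewBox="0 0 576 512"
                width="${size}"
                height="${size}"
            >
                <path d="M234.7..."/>
            </svg>
        `;
    }
}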

I figured out that there are some attribute names that cause problems, which I haven’t really found documented anywhere yet. Don’t call your attributes “value”, for example – this gave me one hard-to-solve conflict.

But other than that, this gave my No-Framework-Application quite a good start with readable code for re-usable icons.

To be continued…

In the end – should you actually go framework-less?

In short, I wouldn’t ever do it for any customer project. The above was a hobby experience, just to have gone down that road once, but it feels like there’s not much to gain in avoiding all frameworks.

I didn’t even have hard constraints like “performance”, “bundle size” or “memory usage”, but even if I did, there are posts like Framework Overhead is bikeshedding that question the notion that something like React is inherently slower.

And you pay dearly for such “lightweightness”, in terms of readability, understandability, code duplication, trouble with code separation, type checks and violation of the Single Level of Abstraction principle; not to mention that you make it harder for your IDE to actually help you.

Don’t reinvent the wheel more often than necessary. Not for a customer who just wants their product soon.

But it can be fun, for a while.

JavaScript – some less known Gems

I guess we can all agree that the most fun part of anything related to JavaScript is reading foreign JavaScript code 🙂

But while most of the difficulty in understanding foreign (especially older) code lies in the year-to-year fluctuations in common style – I can’t really tell why I have this urge to change each “function” to a “const” declaration nowadays – once in a while you stumble across some feature that is just too arcane.

Which means that it’s at least worth knowing about – after that, you can decide for yourself whether these enter your active vocabulary.

So these are some of my recent findings. Feel free to add.

Labelled Loops

JavaScript allows you to label any statement with a unique identifier. This is probably not a surprise for any Svelte developer (the $:… syntax is exactly that), but it is most useful in loops, because you can break or continue an outer loop using this:

// "outer" is the label here

outer: for (...) {
  for (...) {
    ...
    if (weAreDone()) break outer;
  }
}

It might have some old-school “GOTO” vibes indeed, and one can argue that in most cases there is a more concise solution right around the corner, but especially if you have a tricky lookup algorithm, this might come in handy one day.
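
For a concrete (made-up) example, here is a lookup over a nested array that stops both loops at the first match:

const matrix = [[1, 5], [40, 2], [3, 4]];
let found = null;

search: for (const row of matrix) {
    for (const value of row) {
        if (value === 40) {
            found = value;
            break search; // leaves both loops at once
        }
    }
}

console.log(found); // 40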

Comma Operator

While I wondered for some years who on earth actually uses the Comma operator in C/C++, just a few weeks ago I found out that JavaScript actually has the same thing.

It allows you to evaluate some expression, ignore its return value and directly evaluate the next expression within the same statement.

// (expr1, expr2) does evaluate expr1 and expr2 and return expr2.

let b;
const a = (b = 5, 3);
// a is now 3, but b = 5

let x, y;
for (x = 0, y = 0; x + y < 3; x++, y++) { console.log(x, y) }
// output:
// 0 0
// 1 1

However, I would not advise using this thing ever. I mean, if you have some complicated expression where you decide just to do some step before evaluating some crucial other step – maybe it’s time to refactor the whole method.

void Operator

The void operator is like a special case of the comma operator in that it evaluates something and then returns undefined.

const a = calculateStuff(); // a might be whatever
const b = void calculateStuff(); // b is undefined

const c = (calculateStuff(), undefined); // c is identical to b, see above

This is an expression that I found while having to read some minified React code. So it might be of a certain use if you want to minimize the number of characters in your code, but a more readable way would be just defining a function that evaluates calculateStuff() and then returns without a return value, as below.
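
I.e. something like this – calculateStuff() is just the stand-in from above:

// the minified pattern:
const onAction = () => void calculateStuff();

// the more readable equivalent:
function onActionReadable() {
    calculateStuff(); // result deliberately ignored
}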

However, the MDN web docs (referenced again here) give some real use cases of that operator, so if you are into that cryptic knowledge, go right ahead.

Bitwise NOT as a “found in List” check

Recently, I was weirded out by having to look at a list inclusion check in the likes of:

const list = ["a", "b"];
if (~list.indexOf("a")) {
  alert("found");
}

And this piece of code will actually reach the alert, while it might leave you wondering what the “~” is doing there. Unless you are used to very low-level bit arithmetic, in which case you’d directly recognize the bitwise NOT. It’s the operator that inverts every bit of the binary representation of a number to its opposite.

And the whole magic here lies in that for normal numbers (or BigInt), this is mathematically the same as

~a = -a - 1;

especially:
~-1 = 0
~0 = -1

and that in JavaScript, 0 is a falsy value while any other number is truthy. So the code above is used just to distinguish whether indexOf() returned “-1”, which it does if the object in question was not found.

So there you have it, but I’d rather use the way more readable

if (list.includes("a")) {...}
Bonus: console.log() styling

Now this does not really fit to the other operations above, but nevertheless I hear of people who did not know that before, so I’ll just drop it.

If any argument of console.log() starts with “%c”, the next argument will be taken as a styling instruction instead of being printed normally. That is, the next argument needs to be a valid string that could also appear in an HTML style="…" attribute, as in

console.log("%cSo Big!", "font-size: 100pt; color: magenta");

Now, considering that console.log() is really only the most rudimentary way to output some statements for debugging (and one usually neglects the other ones like console.time(), console.timeEnd(), console.table()), this is not the next biggest thing that your imaginary Crypto Blockchain AI SaaS startup just needed, but it’s nevertheless good to know if you need some distinction in your logs.
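
For completeness, a quick sketch of those neglected ones:

console.time("query");            // starts a named timer
console.table([
    { id: 1, name: "alpha" },
    { id: 2, name: "beta" },
]);                               // renders an actual table in the dev tools
console.timeEnd("query");         // prints the elapsed time since console.time("query")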

Conclusion: Do whatever you like with that knowledge

While there are many things that one might love or hate about JavaScript – e.g. you might like the boolean coercion via !! or you might hate the ??= or you might, with a mission, peddle generator functions with function*/yield to your team – or or or… – there are always some more things that even after some years just look weird to my trained eye.

I guess it’s not completely wrong to think that such expressions are occult for a specific reason, in that there are only a handful of legit use cases for them, and forcing occult expressions into code that exceeds script size might cause some serious pain in the future.

But nevertheless, knowledge is power, so have a nice powerful evening.

Using JSON-Schema for data exchange

Several years ago XML was a quite popular document format – mostly due to its schema validation possibilities and clearly defined structure. Many libraries made working with such data documents possible (not really nice or a pleasure…) and humans could read them if need be. XML as a text format is programming language agnostic and processable in practically all useful programming environments.

Working with XML was always more of a pain for me. Fortunately, a lot of time has passed since then, and alternatives like JSON, YAML and TOML arose. All of them have their strengths and weaknesses and can fill similar roles as XML.

In general they have 2 things in common compared to XML:

  1. superior readability
  2. lacking validation compared to XML schema

Nowadays, JSON is very widespread due to the popularity of JavaScript and is perhaps the most used data exchange format across the internet. Despite some syntactic quirks like its strictness about commas and its forbidding of comments, it is imho quite a good format. It is concise, human-readable, flexible and relatively simple. Many languages treat it like nested dictionaries, so understanding and working with JSON is easy.

The main drawback is missing documentation and validation.

Enter JSON schema

JSON schema is a specification with accompanying libraries to fix the major issues with JSON. You define a schema for your data documents in JSON and put it in separate files. This adds the missing features I complained about to your JSON data documents: documentation and the possibility of automatic validation.

What does a simple JSON schema file look like? Let us have a look:

{
  "$schema": "https://json-schema.org/draft/2020-12/schema",
  "$id": "https://cars.softwareschneiderei.com/car.schema.json",
  "title": "Car",
  "description": "Describing some important properties of a car",
  "type": "object",
  "properties": {
    "manufacturer": {
      "description": "The company producing the car.",
      "type": "string"
    },
    "model": {
      "description": "The name of the car model.",
      "type": "string"
    },
    "engineType": {
      "description": "One of the available engine types.",
      "enum": [
        "gasoline",
        "diesel",
        "hybrid",
        "electric"
      ]
    },
    "availableColors": {
      "description": "The colors the car is available in. Some colors may increase the price.",
      "type": "array",
      "items": {
        "type": "string"
      },
      "minItems": 1,
      "uniqueItems": true
    },
    "price": {
      "description": "The price tag in € including VAT. The price is optional.",
      "type": "number"
    }
  },
  "required": [
    "manufacturer",
    "model",
    "engine",
    "availableColors"
  ]
}

A valid data document could look like the one below:

{
  "manufacturer": "Porsche",
  "model": "911 Turbo",
  "engineType": "gasoline",
  "availableColors": [ "black", "blue", "red", "yellow", "white" ],
  "price": 150000
}

I think we can easily see how the documentation helps to understand the data, for example regarding the price property. The Java code to validate the data may look similar to this:

public void processCarData(String carFileName) { 
  var schemaDefinition = Json.decodeValue(readFile("car.schema.json"));
  JsonSchema schema = JsonSchema.of((JsonObject) schemaDefinition);
  var schemaValidator = Validator.create(schema, new JsonSchemaOptions()
      .setDraft(Draft.DRAFT202012)
      .setBaseUri("https://cars.softwareschneiderei.com"));
  var car = Json.decodeValue(readFile(carFileName));

  if (!schemaValidator.validate(car).getValid()) {
    throw new IllegalArgumentException("The format of the car data is invalid.");
  }
  // Data valid, work with it...
}
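
For contrast, a document like the following would fail validation against the schema above – “steam” is not one of the engineType enum values, and the empty availableColors array violates minItems:

{
  "manufacturer": "Porsche",
  "model": "911 Turbo",
  "engineType": "steam",
  "availableColors": []
}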

Conclusion

As depicted above, JSON schema fills some important gaps when using JSON as a data exchange format. It offers helpful tools to document the data structure and to validate data against a definition written in JSON itself.

That way we can safely work with the data, document its structure and still maintain the other good properties of JSON like interoperability, human-readability and data efficiency.

Nonreligious Guidance for the JavaScript vs. TypeScript Debate

It’s always fun times when developers on the internet get heated over some discussion about their tool stack. One current case seems to be that some developers experienced their cases of “TypeScript is not giving me an adequate return on investment”, and there are several articles which boil down to “I just don’t like it” – just google something along the lines of abandoning / ditching / dropping TypeScript and the resulting discussions on Reddit.

Now – bad news for anyone enjoying online arguments: wisdom has never been reached by stating advantages and disregarding disadvantages. I took some time to reflect – several of my projects at Softwareschneiderei as well as my private ones use different tech stacks – to note the cases where I was happy about each choice of language, and where I wished to have chosen the other one.

Of course, there is some kind of rule that if there are two quite similar things, most humans will pick one of them to embrace and caress, and the other one to hate with a passion, and then insult each other’s intelligence. That is, of course, very helpful and generally awesome – and not at all productive in actually changing anyone’s mind.

Now, I would mostly suggest that people try to get used to TypeScript in order to have that tool at hand. But I have also cherished the flexibility of JavaScript, and I have seen cases where I would prefer it, at least for the current state of the corresponding project.

Let me elaborate.

Quick Note: TypeScript is not really a superset of JavaScript

This has to be stated beforehand. If you write your TypeScript in a fashion where you “as any” your types at will, I would not call it by that name. Yes, the language allows doing so, but the choice of a language is also the choice of a certain mindset that goes with it. Several linting presets even disallow the explicit any. Which makes sense, because if you love the “as any”, you are not thinking in TypeScript anyway.

Yes, doing it sparingly is rather a code smell than a red flag. But the mindset of TypeScript does not include the mindset of JavaScript as a subset; therefore TypeScript cannot be anything like a superset of JavaScript.
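
A small sketch of why the “as any” mindset hurts – the compiler is perfectly happy here, the runtime is not:

type House = { rooms: number };

const fromServer = JSON.parse('{"roons": 3}') as any; // the typo slips through
const house: House = fromServer;                      // compiles without complaint
console.log(house.rooms);                             // undefined at runtime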

When I would use TypeScript

So when was I most happy about TypeScript?

  • Where I already had a rather clear model of my domain and then had to extend or change the current functionality.
  • When writing new methods that work with clear types – knowing what these things are gives you a real boost in productivity.
  • When my last episode of development was some considerable time ago, or was done by a different developer.
  • When the smaller parts / submodules / … interface each other in a clear fashion and most development is focussed on only particular parts. Then, if an API changes for a particular reason, having to redesign your types avoids dangerous regressions down the line.
  • Also, if you have a clear use case with many different but similar types of data. If it is not clear from seeing an object (“oh, this is a house, not an animal” vs “oh, this is a house per se, not an offer for a house for sale”), the type hints alone will speed up your thought processes.
  • … also, if you don’t have an IDE which does some type analysis anyway.
And when did I prefer plain JavaScript?
  • I experienced my largest annoyance with TypeScript in cases where our development aimed at clarifying the domain model itself – as in, very experimental stages in which it is more important to scaffold a basis for discussion. I.e. changes that were not just renaming a field or changing its type, but a fundamentally updated understanding.
  • Interfacing large modules where data gets serialized in between anyway (e.g. server-client interactions) – remember that TypeScript does not grant you real type safety. Any object can still be whatever it likes to be. If I have to double-check the content anyway, I’d rather do so without the extra boilerplate.
  • When doing lots of functional programming. TypeScript is just plain ugly when you pass function types as an argument and I have not yet seen the case where that really prevented any mistakes.
  • Mostly, when I do “library” code as opposed to “application” code, especially when you deal with many intermediate types. Your code can become bloated by verbose type definitions which do not contain any real value. The extra work of having to think up a name for these does not make one a hero then.
  • Especially when having to deal with Redux or some of the React querying / web request / caching libraries that aim to make your life easier – sometimes these don’t even export all their types, which makes it quite a hassle to write utility functions for them.

In short, forcing oneself to use TypeScript can lead to problems similar to the “wrong abstraction” problem. If you are in a state of development where you thoroughly define your types, and these are (mostly small) clearly cut types, it’s likely that you gain traction by doing this work beforehand.

Conclusion: Don’t be too religious about it.

I consider it just not true that one cannot write large, safe projects in plain JavaScript. And one is still able to write monstrous, unmaintainable projects with TypeScript. Sometimes the type definitions are just not the main concern in the current stage of development.

Think about it deliberately, and know each one’s advantages.

Also, some people currently propagate JSDoc as the way most superior to all. I have not yet given it a proper chance, mostly because of its ugly aesthetics – but I’m open to trying it some day.

Addendum: JavaScript for flexible React Components

While this is a special case of my suggestion “rather use JavaScript for functional-programming-heavy cases”, you might run into TypeScript trouble a lot in cases where you want to use flexible React Components like

import ComponentA from ...;
import ComponentB from ...;

const FlexibleComponent = ({conditionProp, ...props}) => {

    const Component = React.useMemo(() =>
        conditionProp 
            ? ComponentA
            : ComponentB
        , [conditionProp]);

    return <Component {...props}/>;
};

While you can argue that usually this hints at “you need a better pattern for ComponentA and ComponentB, if they share so many similarities”, such a construct might be useful when patching together several external dependencies.

I have not yet found a way to cleanly match this distinction using TypeScript, especially since external dependencies might come – see above – with closed type definitions. Of course, you might go the “any” route here as well…

When custom React Hooks do not rerender Components on their own – make them.

Depending on who you ask, custom React Hooks are

  • a great way to stash away detailed inner workings of your application, making a) them reusable and b) your component cleaner and less complex
  • a horrible invention that hides away all the dreadful complexities one can think of, and by just making them invisible, does not reduce any complexity at all

As usual, one has to strike that balance depending on the use case, but in most cases I prefer my components to have a rather manageable lines-of-code count (because this makes it easier to visually analyze their actual JSX structure, i.e. their semantics – what they are supposed to do).

However, sometimes an app grows over time and reaches a level of intricacy that seems to “outsmart” React itself, therefore breaking it. I do not know how to describe it otherwise:

I had a case of nested custom Hooks, in which one inner hook was executing a database query, giving a result and also a function to invalidate() and thus re-execute the query. It had been my understanding, that…

const useOurHook = () => {
    const query = useInnerHookWhichExecutesSomeQuery();

    console.log("query returned", query);

    return {
        result: query.result,
        invalidate: query.invalidate
    };
};

const Component1 = () => {
    const {result} = useOurHook();

    return <div>{JSON.stringify(result)}</div>;
};

const Component2 = () => {
    const {invalidate} = useOurHook();

    return (
        <button onClick={() => invalidate()}>
            invalidate
        </button>
    );
};

… pressing the button in Component2 will update the return value of the inner query hook, thus update the return value of the outer hook and finally update Component1.

However, that just did not happen. Even stranger, I could see my updated query result in the console.log statement within useOurHook(), but Component1 was staying as it was.

It took me several attempts at the inner workings of both my hooks. I tried to wrap the return values inside React.useMemo(), or to specifically put them inside a React.useState() that was explicitly set by a React.useEffect() – which should have roughly the same outcome, but then again, I do not know the actual React source code by heart – and there was just nothing that helped.

If you have any explanation for me that surpasses “yeah, React was broken” in its level of insight, please tell me. (Maybe I have to read some docs, but it wasn’t obvious…)

So this is what helped. Rather than passing the invalidate function to my components, I decided to use the update functionality of the Redux useSelector() hook in such a way:

const useOurHook = () => {
    const lastRequestAt = useSelector(state => state.somewhere.lastRequestAt); // get timestamp from Redux store
    const query = useInnerHookWhichExecutesSomeQuery();

    React.useEffect(() => {
        if (lastRequestAt > 0) {
            query.invalidate();
        }
    }, [lastRequestAt, query.invalidate]);

    console.log("query returned", query);

    return {
        result: query.result,
    };
};

const Component1 = () => {
    const {result} = useOurHook();

    return <div>{JSON.stringify(result)}</div>;
};

const Component2 = () => {
    const dispatch = useDispatch();

    return (
        <button onClick={() => dispatch(updateRequest())}>
            invalidate
        </button>
    );
};

//////// and somewhere in a Redux slice:

...
reducers: {
    updateRequest: (state) => {
        state.lastRequestAt = Date.now();
    }
}
...

and this brought me the desired results. Now, I saw the update of query.result not only in the console.log, but also in Component1.

Now I agree that it appears quite wasteful to employ something as overbearing as Redux just to work around my weird situation, but I had Redux in my project anyway. I guess you could also use another state management or a custom useContext() solution to work around this – see the sketch below, just to give you an idea.
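
For illustration, a hedged sketch of how such a custom useContext() solution might look, without Redux:

const InvalidationContext = React.createContext({
    lastRequestAt: 0,
    requestUpdate: () => {},
});

const InvalidationProvider = ({ children }) => {
    const [lastRequestAt, setLastRequestAt] = React.useState(0);
    const requestUpdate = React.useCallback(() => setLastRequestAt(Date.now()), []);

    return (
        <InvalidationContext.Provider value={{ lastRequestAt, requestUpdate }}>
            {children}
        </InvalidationContext.Provider>
    );
};

// useOurHook would then read React.useContext(InvalidationContext).lastRequestAt,
// and Component2 would call requestUpdate() instead of dispatching a Redux action.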

But I found it quite remarkable. It went against what I knew about React that you can have a hook update (visible in the console.log) without actually having React update a component that uses its return value.

Please, please – if any of you has any hint for insight, or is just curious about my concrete use case – I’ll be happy to discuss.

Interacting with SVG files inside your React applications

So, for some reason you have an SVG file that somehow resembles a part of the application you are currently developing.

This might be only a sketch that you want to prepare as a click dummy, or it might be that you need to display a somewhat unique, complicated structure that is best laid out in an SVG editor. This is somewhat expected when you have customers in the technical / scientific research sector.

So now you want to fill it with life.

Now, the SVG format is quite close to the <svg> structure that one can embed into HTML, but there are some steps in between. Most importantly, most SVG editors fill their .svg files up with metadata or information only required by the editor itself, in case you want to edit the files again.

Thus, you have three choices to integrate the SVG component in a React application

  • Re-Build your SVG with custom React Components that, via JSX, render their <svg>, <g>, <path>, etc. accordingly
  • Convert your SVG to valid JSX – this is possible in many cases, but you need to take care e.g. that the style attribute is a string in the SVG and an object in JSX; also, it can still be way too large to be readily put in a single React Component
  • Import your .svg as its own React Component and then wrap that into an own Component that takes care about the interaction part

While I have also written a small converter that does the SVG-JSX conversion just fine for me for any file that comes out of Inkscape (probably an idea for my next blog post), we had the case of some files with about 16000 lines of SVG each, so I chose the third method in our case.

In my eyes, it is fair to mention that the following approach somewhat goes against the React mindset, in which you render all your components yourself in order to attach the required mouse event handlers, never having to interact with an HTML “id” or any document.getElementById() or document.getElementsByClassName().

In any React application, these should be avoided, but the idea here is to have a singular point – a SvgWrapper Component – where you allow these functions because you’d agree about the need to somehow target the specific SVG elements.

The gist:

import {ReactComponent as OurHorrificSvgMonster} from "/src/monster.svg";

const OurBeautifulComponent = () => {

    useOurCarefulSvgSynchronizationEffect(); // more on this below

    return (
        <SvgWrapper>
             <OurHorrificSvgMonster/>
        </SvgWrapper>
    )
};

Quick note: you can target the embedded <svg> element itself by <OurHorrificSvgMonster ref={...}/> and you could use this (ref.current holds that HTML element) to traverse all the children, so if you know enough about the structure of your svg, you could even live without the <SvgWrapper>. But say someone else made the horrific svg monster and all you have are the ids or class names of the individual svg elements inside.

Then

const SvgWrapper = ({children}) => {
    const dispatch = useDispatch();

    React.useEffect(() => {

        const onClick = (event) => {
            dispatch(awesomeAction(event.target.id));
        };

        const awesomeElements = [...document.getElementsByClassName("awesome")];

        awesomeElements.forEach(elem => {
            elem.addEventListener("click", onClick);
        });

        return () => awesomeElements.forEach(elem => {
            elem.removeEventListener("click", onClick);
        });
    }, []);

    return children;
};

I chose this dispatch() as a placeholder for any interaction with the surrounding web application, it could also be a simple React state or something. You can register any event listener you want here (also “mouseover”, “mouseout”, “contextmenu”, …), but think of removing it again in the effect return function.

By the way, document.getElementsByClassName(…) returns an HTMLCollection, which does not offer the usual array methods, thus the [...spread] to make the .forEach() possible.

We now have the first part – our elements (in our case, everything that has class “awesome”) have got a click handler that allows dispatching anything to the application state. But now they need to change, too.

In a purely React-y way, this could be done by an svg element that chooses its fill = {isActive ? "magenta" : "black"} (see the snippet below), but as we chose not to render our components ourselves, we need to once again grab deeply into the DOM and dare to manipulate it by hand.
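
For contrast, the purely React-y version could look like this, had we rendered the elements ourselves:

const AwesomePath = ({ isActive, d }) => (
    <path d={d} fill={isActive ? "magenta" : "black"} />
);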

As mentioned above – this is a step towards very ugly problems, as React cannot guarantee that your visual layer matches your application state. You, on your own, have to guarantee to do what’s right.

This is where this comes in:

/*
 for this example, assume the redux selector selectSomethingFromTheState returns something like:

result = [
  {elementId: "elem1", isActive: true},
  ...
];

and isActive could be the thing that was toggled by our awesomeAction() above

*/

const useOurCarefulSvgSynchronizationEffect = () => {
    const elementStates = useSelector(selectSomethingFromTheState);

    React.useEffect(() => {
        for (const state of elementStates) {
            const element = document.getElementById(state.elementId);
            element.style.fill = state.isActive ? "magenta" : "black";
            // ... do other stuff with the DOM element
        }
    }, [elementStates]);
};

There we have it – we have the SvgWrapper and the use…SynchronizationEffect() that both stray from the React mindset by accessing the DOM directly, but we do it in a fashion where it is concisely encapsulated inside <OurBeautifulComponent> and there is no direct knowledge about the IDs inside the SVG, or Class manipulations, CSS Selectors, etc. elsewhere.

In my opinion, one can indeed go against the rules if it’s necessary, but I also see the option for a Stockton Rush quotation here… so, if you know of any more elegant way, please feel free to share.

PS: by the way, if you use vite, you might get an “Uncaught SyntaxError” when trying import { ReactComponent ... } – I’ve written about this before.

How to migrate a create-react-app project to vite

It seems that the React community is finally accepting that their old way of scaffolding new projects, create-react-app (CRA in short), has outlived its usefulness. While there is no official statement about that, there has been no update on npm in about a year, which in the JS universe screams “TOXIC WASTE” in very clear words, and CRA has meanwhile also vanished from the official “Start a new React Project” docs.

In search of possibilities, one can do some quick google searches (e.g. this or that or maybe this) and at the moment, I’m giving vite a chance, and it has not disappointed me yet – quite the opposite:

  • the build definitely feels faster (as the French would say: plus vite), but I never quantified it
  • the over 9000 deprecation warnings one was accustomed to with CRA – gone, down to zero
  • and the biggest point: no dependency on webpack. Webpack has this weird custom of introducing brutally breaking changes between versions, and then you have to polyfill Node JS core modules or whatever floats their boat, giving users no choice – i.e. making it highly TOXIC in itself

But still, the react-scripts which CRA employs have played quite a role in development, serving as the “npm start” development server and also as a test runner – so generally, if you have developed your project over some years, you might have relied on them quite a bit, and now you don’t want to recreate everything from scratch.

I recently migrated one of our projects and this is what worked for me. There were three main concerns:

  • switch the general infrastructure to vite, so we can develop and build again
  • introduce vitest as a test runner
  • migrate Redux store tests specifically

Let’s focus today on everything except the tests; I will come back to those next time.

Migrate the infrastructure to vite

This was actually surprisingly concise, I just had to

npm install -D vite @vitejs/plugin-react
npm uninstall react-scripts

(when in doubt, remove the node_modules folder and run npm install again, but I didn’t have to), then I adjusted package.json to:

  "scripts": {
    "start": "vite",
    "build": "vite build", 
  },

You might prefer to call your dev server via “npm run dev” instead of “npm start”, in that case just replace the "start": "vite" with "dev": "vite" above.

The Vite templates prefer to include a script "preview": "vite preview" but I do not use it, so I didn’t copy that.

It also was required to set this package.json entry:

  // somewhere top-level, i.e. next to "version" or somewhere like that
  "type": "module",

(I’m not entirely sure whether we can now safely remove the “browserslist” or “babel” entries from the package.json because they might be useless now, but I will have to think about that another time.)

Now, some real code changes. One of the larger todos here might be to make sure that every JSX-containing source file ends with .jsx – there have been discussions about this, and beforehand it was still possible to just place your <App/> etc. inside an App.js, but vite does not like that anymore, so this is a thing you have to do.

So the code changes amount to:

  • Rename every .js file which has some JSX in it to .jsx – pro tip: do it via the IDE so you do not have to care for every import / require-Statement manually!
  • move the template in ./public/index.html directly to ./index.html and in there, replace every mention of %PUBLIC_URL% with just a single slash /
  • In the index.html <body>, include your index.jsx e.g. like:
  <body>
    <noscript>You need to enable JavaScript to run this app.</noscript>
    <div id="root"></div>
    <script type="module" src="/src/index.jsx"></script>
  </body>

It might be said that the vite templates like to call their index file “main.jsx”, but it’s not important – just match whatever you put inside the <script src="..."/>.

Now in order not to change your habits too much, i.e. keep your CI build as it is, plus maybe some Docker Dev Containers or even browser bookmarks, you can use this vite.config.js – see docs:

import { defineConfig } from 'vite';
import react from '@vitejs/plugin-react';

export default defineConfig({
  plugins: [react()],
  server: {
    port: 3000,
    host: true
  },
  build: {
    outDir: './build'
  },
});

otherwise, vite prefers to run its dev server on port 5173 (guess it’s Leetspeak) and build in ./dist – just so you know.

Addon: Using ReactComponents from SVGs with Vite. Also with refs.

Since this morning, when I wrote this article, I have already learned something new. In another project we were importing SVG files via the approach

import {ReactComponent as Bla} from "./bla.svg";

const ExampleUsage = () => {
  return <Bla />;
};

Doing so now results in

Uncaught SyntaxError: ambiguous indirect export: ReactComponent

This can be solved by npm install vite-plugin-svgr and then updating vite.config.js:

import {defineConfig} from "vite";
import react from "@vitejs/plugin-react";
import svgr from "vite-plugin-svgr";

export default defineConfig({
    plugins: [
        svgr({
            svgrOptions: {
                ref: true,
            },
        }),
        react(),
    ],
    server: {
        port: 3000,
        host: true,
    },
    build: {
        outDir: "./build",
    },
});

The { svgrOptions: {ref: true} } was a specific requirement for our use case; it is necessary if you ever want to access the imported ReactComponent’s ref, i.e. in our ExampleUsage we needed a specification <Bla ref={...}/>. Leaving the svgrOption ref at false (its default) gives us the error:

Warning: Function components cannot be given refs. Attempts to access this ref will fail. Did you mean to use React.forwardRef()?

Then, make the tests work again

As mentioned above, these were a bit trickier, and while I found a way to leave most tests untouched, there was some specific tweaking to be done with Redux store tests, and also with mocking a foreign class (GraphQLClient from “graphql-request” in my case).

But as also mentioned above, I guess this might be a topic for my next blog post. In case you urgently need that knowledge, drop us a mail or something.. 🙂

Using Docker Containers in Development with WebStorm: Next Iteration

We are always in pursuit of improving our build and development infrastructures. Who isn’t?

At Softwareschneiderei, we have about five times as many projects as we have developers (without being overworked, by the way) and each of them comes with its own requirements, so it is important to be able to switch between different projects as easily as cloning a git repository, avoiding meticulous configuration of your development machines that might break on any change.

This is the main advantage of the development container (DevContainer) approach (with Docker being the major contestant at the moment), and last November, I tried to outline my then-current understanding of integrating such an approach with the JetBrains IDEs. E.g. for WebStorm, there is some kind of support for dockerized run configurations, but that does some weird stuff (see below), and JetBrains did not care enough yet to make that configurable, or at least to communicate the sense behind that.

Preparing our Dev Container

In our projects, we usually have at least two Docker build stages:

  • one to prepare the build platform (this will be used for the DevContainer)
  • one to execute the build itself (only this stage copies actual sources)

There might be more (e.g. for running the build in production, or for further dependencies), but the basic distinction above helps us to speed up the development process already. (Further reading: Docker cache management)

For one of our current React projects (in which I chose to try Vite in favor of the outdated Create-React-App, see also here), the Dockerfile might look like

# --------------------------------------------
FROM node:18-bullseye AS build-platform

WORKDIR /opt
COPY package.json .
COPY package-lock.json .

# see comment below
RUN npm install -g vite

RUN npm ci --ignore-scripts
WORKDIR /opt/project

# --------------------------------------------
FROM build-platform AS build-stage

RUN mkdir -p /build/result
COPY . .
CMD npm run build && mv dist /build/result/app

The “build platform” stage can then be used as our Dev Container, from the command line as follows (assuming this Dockerfile resides inside your project directory where also src/ etc. are chilling)

docker build -t build-platform-image --target build-platform .
docker run --rm -v ${PWD}:/opt/project build-platform-image <command_for_starting_dev_server>

Some comments:

  • The RUN step to npm install -g vite is required for a Vite project because our chosen base image node:18-bullseye does not know about the vite binaries. One could improve that by adding another step beforehand, only preparing a vite+node base image and taking advantage of Docker caching from then on.
  • We specifically have to take the WORKDIR /opt/project because our mission statement is to integrate the whole thing with WebStorm. If you are not interested in that, that path is for you to choose.

Now, if we are not working against any idiosyncrasies of an IDE, the preparation step “npm ci” gives us all our node dependencies in the current directory inside a node_modules/ folder. Because this blog post is going somewhere, we already chose to place that node_modules in the parent folder of the actual WORKDIR. This works because, lacking its own node_modules, node will look for one further up the directory tree (this fact might change with future Node versions, but for now it holds true).

The Challenge with JetBrains

Now, the current JetBrains IDEs allow you to run your project with the node interpreter (containerized within the build-platform image) in the “Run/Debug Configurations” window via

“+” ➔ “npm” ➔ Node interpreter “Add…” ➔ “Add Remote” ➔ “Docker”

then choose the right image (e.g. build-platform-image:latest).

Now enters that strange IDE behaviour that is not really documented or changeable anywhere. If you run this configuration, your current project directory is going to be mounted in two places inside the container:

  • /opt/project
  • /tmp/<temporary UUID>

This mounting behaviour explains why we cannot install our node_modules dependencies inside the container in the /opt/project path – mounting external folders always overrides anything that might exist in the corresponding mount points, e.g. any /opt/project/node_modules will be overwritten by force.

As we took care of that by using the /opt parent folder for the node_modules installation, and we set the WORKDIR to be /opt/project, one could think that now we can just call the development server (written as <command_for_starting_dev_server> above).

But we couldn’t!

For reasons that made us question our reality way longer than it made us happy, it turned out that the IDE somehow always chose the /tmp/<uuid> path as WORKDIR. We found no way of changing that. JetBrains doesn’t tell us anything about it, and the “docker run -w / --workdir” parameter did not help either. We really had to use the less-than-optimal hack of modifying the package.json “scripts” options:

 "scripts": {
    "dev": "vite serve",
    "dev-docker": "cd /opt/project && vite serve",
    ...
  },

The “dev” line was there already (if you use create-react-app or something else, this calls that something else accordingly). We added another script with an explicit “cd /opt/project“. One can then select that script in the new Run Configuration from above, and now it really works.

We do not like this way because it couples a bad IDE behaviour with hard-coded paths inside our source files – but at least we separate it enough from our other code that it doesn’t destroy anything – e.g. in principle, you could still run this thing with npm locally (after running “npm install” on your machine etc.), as shown below.
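
Putting the pieces together, the manual (non-IDE) invocation of the dev server now looks something like this, with the port taken from our vite.config.js:

docker run --rm -v ${PWD}:/opt/project -p 3000:3000 build-platform-image npm run dev-docker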

Side note: Dealing with the “@esbuild/linux-x64” error

The internet has not widely adopted Vite as a scaffolding / build tool for React projects yet, and one of the problems on our way was a nasty error of the likes of

Error: The package "esbuild-linux-64" could not be found, and is needed by esbuild

We found the best solution for that problem was to add the following to the package.json:

"optionalDependencies": {
    "@esbuild/linux-x64": "0.17.6"
}

… using “optionalDependencies” rather than the other dependency entries because this way, we still allow local installation on a Windows machine. If the dependency was not optional, npm install would just throw a wrong-OS error.

(Note that as a rule, we do not like the default usage of SemVer ^ or ~ inside the package.json – we rather pin every dependency, and do our updates specifically when we know we are paying attention. That makes us less vulnerable to sudden npm-hacks or sneaky surprises in general.)

I hope, all this information might be useful to you. It took us a considerable amount of thought and research to come to this conclusion, so if you have any further tips or insights, I’d be glad to hear from you!

Web Security for Frontend and Backend

The web is everywhere and we use it for tons of important tasks like online banking, shopping and communication. So it becomes increasingly important to implement proper security. As attacks like cross-site scripting (XSS) or cross-site request forgery (CSRF) are widespread, browsers, web standards designers and web application developers implement more and more mechanisms to make such attacks harder or even impossible. This puts a certain burden on both frontend and backend developers.

Since security is hard and should not be an afterthought, I would like to give you some advice for implementing a web app using a Javascript frontend and a backend service written in one of the common languages/frameworks like .NET, Micronaut, Javalin, Flask or the like.

Frontend advice

I prefer traditional cookie-based sessions to JWT-based approaches for interactive web frontends because of simplicity, browser support and the possibility of using them without Javascript. For service-to-service communication, bearer tokens of some kind may be more appropriate. Your Javascript client has to include the credentials in its fetch() calls to make the browser send the cookie, as in the snippet below.
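
A minimal sketch (the endpoint is hypothetical):

// send the session cookie along with the request
fetch("https://api.example.com/profile", {
    credentials: "include", // or "same-origin" if frontend and backend share the host
});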

Unfortunately, incorrect use of cookies may be insecure, so be sure to check up-to-date advice on cookies; see some hints below in the backend part because cookies are configured and issued there.

Backend advice

Modern web security requires additional measures on the server side to ensure secure authentication and communication with web clients. You should use https wherever possible to gain at least transport security and avoid many cases of sniffing credentials or changing content between client and backend.

Improving security of cookies

First of all, cookies should be HttpOnly so that scripts cannot access their contents. Furthermore, you should set the SameSite and Secure attributes appropriately and use https whenever possible. That way you have mitigated the most common attacks on your session handling and authentication. A session cookie could then be issued as sketched below.
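
The cookie name and value here are just examples:

Set-Cookie: SESSIONID=abc123; HttpOnly; Secure; SameSite=Lax; Path=/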

Another bonus of cookies is that browsers can inform you about problems with your cookie setup.

Configuring Cross-Origin Resource Sharing (CORS)

Nowadays it is common for a web app to be served from a different host than the backend API. This is a potential problem because attackers may sneak scripts into the browser of a user and use the existing session to access resources in an illegal way. Therefore, another means of improving the security of web apps running in browsers was introduced with access control using CORS.

For browsers to be able to prevent or allow requests to certain resources, the backend has to provide appropriate Access-Control headers, most notably Access-Control-Allow-Origin and Access-Control-Allow-Credentials. Make sure to set these values correctly, or your frontend will have trouble accessing your backend – or you introduce a potential security hole. A backend response might carry headers like the ones below.
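
For example, allowing a (hypothetical) frontend origin to send its session cookie along:

Access-Control-Allow-Origin: https://app.example.com
Access-Control-Allow-Credentials: true

Note that when allowing credentials, the origin must be explicit – the wildcard * is not permitted in that combination.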

Fortunately many web frameworks make it easy to configure CORS, see Micronaut documentation for example.

Conclusion

Security is always important and browser vendors keep implementing additional measures to mitigate problems in the current web environment. Make sure you keep up with the latest advice and measures and implement them in your applications.