Make your users happy by not caring about how they would enter data

As many of our customers operate somewhere in the technical or scientific field, we happen to get requests of roughly this form:

Well, so we got that spreadsheet managing our most important data and well…
… it’s growing over our heads with 1000+ entries…
… here are, sadly, multiple conflicting versions of it …
… no one understands these formulae that Kevin designed …
… we just want ONE super-concise number / graph from it …
… actually, all of the above.

And while that scenario might be exaggerated for the sole sake of entertainment, at its core there’s something quite universal. It’s one thing to feel like the only adult in the room, throw one’s hands up into the air, and then calculate the minimum expenses for a self-contained solution. While doing so, we still strive for a simple proposal, and that’s always quite a tightrope walk between “this really doesn’t need to be fancy” and “but we cannot violate certain standards either”.

And in such a case, your customer might happily agree with you – they do not want to rely on their spreadsheets, it just was too easy to start with. And now they’re addicted.

The point of this post is this: If your customer has tabular data, there’s almost no “simple” data input interface you can imagine that is not absolutely outgunned by spreadsheets, or even text files. Given the sheer amount of conventions that nearly everyone already knows, don’t try to compete with a handful of input fields, “Add entity” buttons and HTML <form>s.

This is no praise of spreadsheet software at all. But consider the hidden costs that can lurk behind a well-intended “simple data input form”.

For the very first draft of your software

  • do not build any form of data input interface unless you know exactly that it will quickly empower the user to do something they cannot do already.
  • Do very boldly state: Keep your spreadsheets, CSV, text files. I will work with them, not against them.
  • E.g. offer something as “uncool” as an “(up)load file” button, or a field to paste generic text (see the sketch after this list). You do not have to parse .xls, .xlsx or .ods files either. The message should be: Use whatever software you want to use, up to the point where it stops being your friend and starts being a liability.
  • Do not be scared of “but what if they have wrong formatting”. Find the middle ground instead – what kind of data do you really need to sanitize, and what can you outright reject?
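
As a minimal sketch of that “paste generic text” idea (plain JavaScript; names are illustrative): cells copied from a spreadsheet arrive as tab-separated lines, so a first draft of “working with their data” can be as small as this:

const parsePastedTable = (pastedText) =>
    pastedText
        .split(/\r?\n/)                      // one line per spreadsheet row
        .filter(line => line.trim() !== "")  // drop trailing empty lines
        .map(line => line.split("\t"));      // one cell per tab

// e.g. parsePastedTable("Name\tValue\nFoo\t42")
//      yields [["Name", "Value"], ["Foo", "42"]]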

Make it clear that you prioritize the actual problem of your customer. Examples:

  1. if that problem is “too many sources-of-not-really-truth-anymore”, then your actual solution should focus on merging such data sources. But if that is the main problem of your customer, expect them to always come up with some new long-lost data source on some floppy disk, stashed in their comic book collection, or something.
  2. if that problem is “we cannot see through that mess”, then your actual solution should focus on analyzing, on visualizing the important quantities.
  3. if the problem is that Kevin starts writing new crazy formulae every day and your customer is likely to go out of business whenever he chooses to quit, then yes, maybe your actual solution really is constructing a road block – but then rather via a long discussion about the actual use cases, not by just translating the crazy logic into Rust, because “that’s safer”.

It is my observation that a technically-accustomed customer is usually quite flexible in their ways. In some cases it really is your job to impose a static structure, and these are the cases in which they will probably approve of having that done to them. In other cases, e.g. when their problem is a shape-shifting menace that they cannot even grasp, do not start with arbitrary abstractions, templates and schemas that will just slow them down.
I mean – either you spend your days migrating their data and putting new input elements on their screen, or you spend them solving their actual troubles.

Integrating API Key Authorization in Micronaut’s OpenAPI Documentation

In a Java Micronaut application, endpoints are often secured using @Secured(SecurityRule.IS_AUTHENTICATED), along with an authentication provider. In this case, authentication takes place using API keys, and the authentication provider validates them. If you also provide Swagger documentation for users to test API functionalities quickly, you need a way for users to specify an API key in Swagger that is automatically included in the request headers.
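
For context, a secured endpoint in such an application might look like this minimal sketch (controller and route are illustrative, not from a concrete project):

import io.micronaut.http.annotation.Controller;
import io.micronaut.http.annotation.Get;
import io.micronaut.security.annotation.Secured;
import io.micronaut.security.rules.SecurityRule;

@Secured(SecurityRule.IS_AUTHENTICATED)
@Controller("/items")
public class ItemController {

    // only reachable with a valid API key in the Authorization header
    @Get
    public String list() {
        return "a secret list";
    }
}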

For a general guide on setting up a Micronaut application with OpenAPI Swagger and Swagger UI, refer to this article.

This article focuses on how to integrate API key authentication into Swagger so that users can authenticate and test secured endpoints directly within the Swagger UI.

Accessing Swagger Without Authentication

To ensure that Swagger is always accessible without authentication, update the application.yml file with the following settings:

micronaut:  
  security:
    intercept-url-map:
      - pattern: /swagger/**
        access:
          - isAnonymous()
      - pattern: /swagger-ui/**
        access:
          - isAnonymous()
    enabled: true

These settings ensure that Swagger remains accessible without requiring authentication while keeping API security enabled.

Defining the Security Scheme

Micronaut supports various Swagger annotations to configure OpenAPI security. To enable API key authentication, use the @SecurityScheme annotation, placed e.g. on your Application class:

import io.swagger.v3.oas.annotations.security.SecurityScheme;
import io.swagger.v3.oas.annotations.enums.SecuritySchemeIn;
import io.swagger.v3.oas.annotations.enums.SecuritySchemeType;

@SecurityScheme(
    name = "MyApiKey",
    type = SecuritySchemeType.APIKEY,
    in = SecuritySchemeIn.HEADER,
    paramName = "Authorization",
    description = "API Key authentication"
)

This defines an API key security scheme with the following properties:

  • Name: MyApiKey
  • Type: APIKEY
  • Location: Header (Authorization field)
  • Description: Explains how the API key authentication works

Applying the Security Scheme to OpenAPI

Next, we configure Swagger to use this authentication scheme by adding it to @OpenAPIDefinition:

import io.swagger.v3.oas.annotations.OpenAPIDefinition;
import io.swagger.v3.oas.annotations.info.*;
import io.swagger.v3.oas.annotations.security.SecurityRequirement;

@OpenAPIDefinition(
    info = @Info(
        title = "API",
        version = "1.0.0",
        description = "This is a well-documented API"
    ),
    security = @SecurityRequirement(name = "MyApiKey")
)

This ensures that the Swagger UI recognizes and applies the defined authentication method.

Conclusion

With these settings, your Swagger UI will display an Authorization field in the top-left corner.

Users can enter an API key, which will be automatically included in all API requests as a header.
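
For illustration, a request would then carry the key roughly like this curl equivalent (host and key are made up):

curl -H "Authorization: my-api-key" https://api.example.com/items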

This is just one way to implement authentication. The @SecurityScheme annotation also supports more advanced authentication flows like OAuth2, allowing seamless token-based authentication through a token provider.

By setting up API key authentication correctly, you enhance both the security and usability of your API documentation.

How React components can know their actual dimensions

Every once in a while, styling a Web Application can be quite frustrating (let’s say: quite interesting), because stuff that appears easy does not actually comply with any of your suggestions. There are some fields that ambush me more often than I’d like to admit, and with each application there appears some unique quirk that makes a universal solution hard.

Right now, I’m thinking about a CSS-only nested layout of several areas on your available screen that need to make good use of the available space, but still be somewhat dynamic in order to be maintainable.

Web Apps are especially delicate in layout matters because, if you ask most customers, a fully responsive layout is never the goal (as in: way too expensive for their use case). But no matter how often you have them assure you that there is only a small set of target resolutions, there will be one day where something changed and well-yeah-these-ones-too-of-course.

It is also commonly stated that “looking good” is “not that important”, but as the project progresses, everyone still knows that that was a pure lie.

Of course, this is a manifestation of Feature Creep, but one that is hard to argue about. And we do not want to argue with customers anyway, we want to solve their problems with as little friction as possible.

So by now, one would have thought that CSS had evolved quite enough to at least place dynamic content somewhat predictably. There are flexbox and grid displays, and these are useful as hell, but still.

And while, for some reason or another, the width of dynamic nested content can usually be accounted for by some pure-CSS solution that one can find in under a day’s work, getting the height quite right is a problem that is officially harder than all the multi-order corrections I ever encountered in my studies of quantum field theory. It is only solvable in some oversimplified use cases.

The limits of “height: 100%;” are reached in cases where elements are dominated by their content instead of their container, as with nested <svg> elements that love to disagree about the meaning of “100%”. Dynamic SVG content is especially cumbersome because you want neither distorted nor cut-off content; you can try to get along with viewBox and preserveAspectRatio, but even then.
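
For reference, those two attributes live on the <svg> element itself; a minimal sketch with arbitrary values:

<svg viewBox="0 0 400 300" preserveAspectRatio="xMidYMid meet" width="100%" height="100%">
    <!-- the drawing uses a 400x300 coordinate system and is scaled
         to fit its container without distortion ("meet") -->
</svg>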

Maybe it won’t budge, and maybe that’s the point where I find it acceptable to read the actual DOM elements even from within a React component – an approach that is usually as dangerous as it is intrusive. But is it a code smell if it is rather concise and reliably does the job?

import { useEffect, useRef, useState } from "react";

const useHeightAwareRef = () => {
    const [height, setHeight] = useState({
        initialized: false,
        value: null,
    });
    const ref = useRef(null);

    useEffect(() => {
        const adjustHeight = () => {
            if (!ref.current) {
                return;
            }
            const rect = ref.current.getBoundingClientRect();
            setHeight({
                value: rect.height,
                initialized: true,
            });
        };

        adjustHeight();
        // keep the measured value up to date; the listener stays attached
        // for the lifetime of the component
        window.addEventListener("resize", adjustHeight);
        return () => {
            window.removeEventListener("resize", adjustHeight);
        };
    }, []);

    return {
        height: height.initialized ? height.value : null,
        ref,
    };
};

// then use this like:

const SomeNestedContent = () => {
    const {height, ref} = useHeightAwareRef();

    return (
        <div ref={ref}>
        {
            height &&
            <svg height={height} width={"100%"}>
                { /* ... Dragons be here ... */ }
            </svg>
        }
        </div>
    );
};

I find this worthwhile to have in your toolbox. If you manage your super-dynamic* content in some other super-responsive** fashion in a way that is super-arguable*** to your customer, sure, go with it. But remember that, at some point, possibly,

  • (*) your customer might have data outside the mutually agreed use cases,
  • (**) your customer might have screens outside the mutually agreed ones,
  • (***) your customer might have less patience / time than originally intended,

so maybe move the idea of “there must be one super-elegant pure-CSS solution in the year 2025” back into your dreams and shoehorn that <svg> & friends into where they belong :´)

Four-way Navigation in UIs

Just yesterday, I was working on the task of enabling gamepad navigation of a graphical UI. I had implemented this before in my game abstractanks, but have since forgotten how exactly I did it. So I opened the old code and tried to decipher it, and I figured that would make a nice topic to write about.

Basic implementation

Let’s break down the simple version of the problem: You have a bunch of rectangular controls, and given a specific one, figure out the next one with an input of either left, up, right or down.

This sketch shows a control setup with a possible solution. It also contains an interesting situation: going ‘down’ from box B goes to C, but going ‘up’ from there goes to A!

The key to creating this solution is a metric that weights the gap for a specific input direction, e.g. neighbor_metric(box<> const& from, box<> const& to, navigation_direction direction). To implement this, we need to convert this gap into numbers we can use. I’ve used a variant of Arvo’s algorithm for that: for both axes, get the difference of the rectangles’ intervals along that axis and store those in a 2d-vector. In code:

template <int axis> inline float difference_on_axis(
    box<> const& from, box<> const& to)
{
    if (to.min[axis] > from.max[axis])
        return to.min[axis] - from.max[axis];
    else if (to.max[axis] < from.min[axis])
        return to.max[axis] - from.min[axis];
    return 0.f;
}

v2<> arvo_vector(box<> const& from, box<> const& to)
{
    return {
        difference_on_axis<0>(from, to),
        difference_on_axis<1>(from, to) };
}

That sketch shows the resulting vectors from the box in the top-left going to two other boxes. Note that these vectors are quite different from the difference of the boxes’ centers. In the case of the two top boxes, the vector connecting the centers would tilt down slightly, while this one is completely parallel to the x axis.

Now armed with this vector, let’s look at the metric I was using. It results in a 2d ‘score’ that is later compared lexicographically to determine the best candidate: the first number determines the ‘angle’ with the selected axis, the other one the distance.

template <int axis> auto metric_on_axis(box<> const& from, box<> const& to)
{
    auto delta = arvo_vector(from, to);
    delta[0] *= delta[0];
    delta[1] *= delta[1];
    auto square_distance = delta[0] + delta[1];

    float cosine_squared = delta[axis] / square_distance;
    return std::make_pair(-cosine_squared, delta[axis]);
}

std::optional<std::pair<float, float>> neighbor_metric(
    box<> const& from, box<> const& to, navigation_direction direction)
{
    switch (direction)
    {
    default:
    case navigation_direction::right:
    {
        if (from.max[0] >= to.max[0])
            return {};
        return metric_on_axis<0>(from, to);
    }
    case navigation_direction::left:
    {
        if (from.min[0] <= to.min[0])
            return {};
        return metric_on_axis<0>(from, to);
    }
    case navigation_direction::up:
    {
        if (from.max[1] >= to.max[1])
            return {};
        return metric_on_axis<1>(from, to);
    }
    case navigation_direction::down:
    {
        if (from.min[1] <= to.min[1])
            return {};
        return metric_on_axis<1>(from, to);
    }
    }
}

In practice this means that the algorithm will favor connections that best align with the input direction, with ties resolved by using the closest candidate. The metric ‘disqualifies’ candidates going backward, e.g. when going right, the next box cannot start left of the ‘from’ box.

Now we just need to loop through all candidates and select the one with the lowest metric.
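
A minimal sketch of that loop might look like this (assuming the controls live in a std::vector; the function name and signature are illustrative, not from the original code):

std::optional<std::size_t> find_neighbor(
    std::vector<box<>> const& controls, std::size_t from_index,
    navigation_direction direction)
{
    std::optional<std::size_t> best;
    std::optional<std::pair<float, float>> best_metric;
    for (std::size_t i = 0; i < controls.size(); ++i)
    {
        if (i == from_index)
            continue;
        auto metric = neighbor_metric(controls[from_index], controls[i], direction);
        if (!metric)
            continue; // disqualified, e.g. it lies backward
        // std::pair compares lexicographically: angle first, then distance
        if (!best_metric || *metric < *best_metric)
        {
            best_metric = metric;
            best = i;
        }
    }
    return best;
}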

This algorithm does not make any guarantees that all controls will be accessible, but that is a property that can easily be tested by traversing the graph induced by this metric, and the UI can be designed appropriately. It also does not try to be symmetric, e.g. going down and then up does not always return to the previous control. As we can see in the first sketch, this is not always desirable. I think it’s nice to be able to go from B to C via ‘down’, but it’d be weird to go ‘up’ back there instead of to A. Instead, going ‘right’ from A to B does make sense.

Hard cases

But there can be ambiguities that this algorithm does not quite solve. Consider the case where C is wider, so that it is also under B:

The algorithm will connect both A and B down to C, but the metric will be tied for A and B going up from C. The metric could be extended to also include the ‘cross’-axis min-point of the box, e.g. favoring left over right for westerners like me. But going from B down to C and then up to A would feel weird. One idea to resolve this is to use the history to break ties, e.g. when coming from B to C, going back up would return to B.

Another hard case is scroll-views. In fact, they seem to change the problem domain. Instead of treating the inputs as boxes in a flat plane, navigating in a scroll-view requires navigating to potentially only partially visible or even invisible boxes and bringing them into view. I’ve previously solved this by treating every scroll-view as its own separate plane and navigating only within that if possible. Only when no target was found within the scroll-view did the algorithm try to navigate to items outside.

Don’t Just Blink at Me

I have a lot of electronic devices that essentially only do one thing. A flashlight, a hair trimmer, a razor, a toothbrush, some smoke detectors and a handful of power banks and other energy storage devices. They have a few things in common:

  • They have their own rechargeable battery
  • They store a (more or less) complex internal state
  • They have exactly one status LED
  • They try to communicate with me using the LED

The razor is an exception, because it has a lot more signal lights than just the LED, but I include it as the positive example. The LED of the razor changes its color to show what it wants from me.

All other devices try to convey meaning by different styles of blinking. Let’s take the hair trimmer as the negative example: it blinks when it is happy because it is being fed energy. The blinking stops to indicate that it is fully charged.

The flashlight blinks when it is unhappy and hungry. It will stop blinking when it is fed. It never indicates that it has enough, you have to guess.

The smoke detectors flash shortly once in a while to show that they are still on duty. They might flash more vigorously if they get hungry. But they have another means to get the message across: they sound a short alarm. You’ll feed them instantly and always have a spare battery at hand.

The point of this story is that it is impossible to decipher a blinking LED. What does it mean? The device is busy and I don’t need to do anything? The device is on the verge of starvation and I should intervene? It’s the most ambiguous signal I can think of.

If there were a standard that blinking means the desire for human intervention, I would learn the signal and adhere to it. But nearly half of my devices blink when they are busy and don’t need anything from me. And some try to talk to me in Morse code.

I’m not going to learn dozens of LED signal dialects. If a device wants to be understood, it should be designed in an understandable way. Give it a color-changing LED and we can talk:

  • Steady green: I’m happy.
  • Steady orange: I’m content, but it could be better. You might allocate some care for me.
  • Steady red: Please care for me.
  • Blinking red: Attention! I need urgent care.

What does this interaction design rant have to do with software development?

I often see the same categorical error in the design of software user interfaces. And while manufacturers of cheap consumer electronics can argue that a multi-color LED is more expensive, we software developers can’t really hide behind costs.

Anything that pops up, slides in or just blinks on my screen had better have a damn good reason to grab my attention. Grabbing not only my attention but also my input focus is the utmost interruption, comparable to the alarm signal of the smoke detector. I recently blogged about a blocking dialog that shows up unprompted and informs about some random unprompted background work.

But there is another type of ambiguous signal that I see more often than I like. It is silent and tries to shift the blame for the inevitable frustration onto you: the vague selection field:

Sorry for the German text; I didn’t find it on an English website. The German website this is from is still online, so if you can find it, you can try it yourself. I won’t link it here, though. (Maybe search for “steckdosenleiste klemmbar” and roam the results.)

Let me describe your experience: you can select the color of a product, either white or black. The selected color is stated above, inexplicably written in blue. The selected field is “highlighted” black. Or is it? Maybe the white field is the highlighted one? And why is the selection field for the color black presented mostly in white?

The selection is not communicating properly, and it makes me feel intimidated and stupid about a simple choice between two colors.

You can probably think of dozens of improvements and that’s fine. But the designer was certainly restricted by a corporate design rulebook and had to use three colors only: blue, white and black. You decide if he used his potential successfully.

Which brings me back to the introductory example: give an engineer a single-color LED and he will implement an elaborate Morse code scheme to make the most of it. The problem is the initial restriction, not the shenanigans that follow.

How to design a dynamic website with flex containers

Every frontend developer will know this situation: You design a frontend so that all components fit together perfectly. You invest your time, heart, and soul, working tirelessly and frequently testing to see the results. And then you deliver it to a customer, only to find that their display presents something different. The beautiful design looks bad, or some parts are no longer accessible.

To avoid these unpleasant surprises, you should make your layout dynamic. One possibility for achieving this is by using the flex container. In the following, I will demonstrate how to dynamically build an example page using flex containers.

Example page

My example page has a header spanning the entire content. The content is divided into a navigation area on the left and the content page on the right. The content page itself has a header, main content and a button bar. Now let’s cast that into HTML.

<!doctype html>
<html>
    <body>
        <div class="whole-page">
            <div class="header">
                Header<br>
                ...
            </div>
            <div class="content">
                <div class="navigation">Navigation</div>
                <div class="content-page">
                    <div class="content-header">
                        Title<br>
                        ...
                    </div>
                    <div class="main-content">
                        Content
                    </div>
                    <div class="button-bar">
                        Button Bar<br>
                        <button>Save</button>
                        <button>Cancel</button>
                    </div>
                </div>
            </div>
        </div>
    </body>
</html>

And now for the design: I want my page to take up the entire screen. The navigation area should take up 20% of the width, and the page should take up the rest. I want the button bar to be at the bottom of the screen and the main content to take up as much space as possible between the heading and the bar.

Let’s implement this with CSS:

html, body {
    font-family: sans-serif;
    font-size: 20pt;
    height: 100%;
    margin: 0;
}
.whole-page{
    display: flex;
    flex-direction: column;
    height: 100%;
    width: 100%;
}

.content {
    display: flex;
    flex-direction: row;

    flex-grow: 1;
    min-height: 0;
}
.navigation {
    min-width: 20%;
    overflow: auto;
}
.content-page {
    display: flex;
    flex-direction: column;

    flex-grow: 1;
    min-width: 0;
}

.main-content {
    flex-grow: 1;
    overflow: auto;
}

First, we configure html and body to take up the entire screen. To do this, we set the height to 100% and remove margins.

In the next step, we declare that the whole page should lay out its children flexibly among each other in a column, and we set its size to the full size of the screen.

For the parts with fixed sizes, like headers and the button bar, no further settings are needed.

The content also gets a dynamic layout, but side-by-side (row). Since row is the default, you would not have to write it explicitly, but I did so here for illustration. With “flex-grow: 1” we make the content take the maximum remaining height. With “min-height: 0” we prevent it from hogging more space than is actually available and overflowing the screen.

Our navigation area gets a min-width of 20%. In most cases, width is sufficient, but, for example, with tables with large text cells, the table steals more space from the navigation than allowed. With min-width, this does not happen. As a bonus, the area gets a scrollbar via “overflow: auto” if the content is too big.

The content page arranges the elements in columns again and takes the maximum available width. The main content takes all the space left next to the header and button bar and gets a scrollbar if there is not enough space available.

When I fill my containers with background color and example content, I get the following views:

Conclusion

Flex containers are an easy way to create page layouts dynamically. Thus, applications can run independently of the screen to a certain extent. Of course, the whole thing also has its limits, and for very small devices such as smartphones, completely different operating concepts and layouts must be developed.

Interacting with SVG files inside your React applications

So, for some reason you have an SVG file that somehow resembles a part of the application you are currently developing.

This might be only a sketch that you want to prepare as a click dummy, or it might be that you need to display a somewhat unique, complicated structure that is best laid out in an SVG editor. This is somewhat expected when you have customers in the technical / scientific research sector.

So now you want to fill it with life.

Now, the SVG format is quite close to the <svg> structure that one can embed into HTML, but there are some steps in between. Most importantly, most SVG editors fill their .svg files up with metadata or editor-specific information that is only required in case you want to edit the files again.

Thus, you have three choices to integrate the SVG component in a React application:

  • Re-Build your SVG with custom React Components that, via JSX, render their <svg>, <g>, <path>, etc. accordingly
  • Convert your SVG to valid JSX – this is possible in many cases, but you need to take care e.g. that the style attribute is a string in SVG and an object in JSX; also, the result can still be way too large to be readily put into a single React Component
  • Import your .svg as its own React Component and then wrap it in a Component of your own that takes care of the interaction part

While I have also written a small converter that does the SVG-to-JSX conversion just fine for any file that comes out of Inkscape (probably an idea for my next blog post), we had the case of some files with about 16000 lines of SVG each, so I chose the third method here.

In my eyes, it is only fair to mention that the following way somewhat goes against the React mindset, in which you render all your components yourself in order to attach the required mouse event handlers to them, never having to interact with an HTML “id” or any document.getElementById() or document.getElementsByClassName().

In any React application, these should be avoided, but the idea here is to have a singular point – an SvgWrapper Component – where you allow these functions, because you’d agree about the need to somehow target the specific SVG elements.

The gist:

import {ReactComponent as OurHorrificSvgMonster} from "/src/monster.svg";

const OurBeautifulComponent = () => {

    useOurCarefulSvgSynchronizationEffect(); // more on this below

    return (
        <SvgWrapper>
             <OurHorrificSvgMonster/>
        </SvgWrapper>
    )
};

Quick note: you can target the embedded <svg> element itself with <OurHorrificSvgMonster ref={...}/> and use that (ref.current holds the HTML element) to traverse all its children; so if you know a lot about the structure of your SVG, you could even live without the <SvgWrapper>. But say someone else made the horrific SVG monster and all you have are the ids or class names of the individual SVG elements inside.

Then the wrapper – here spelled out as WithVanillaHandlersConnected, which is the <SvgWrapper> from the gist above – looks like this:

import React from "react";
import { useDispatch } from "react-redux";

const WithVanillaHandlersConnected = ({children}) => {
    const dispatch = useDispatch();

    React.useEffect(() => {

        const onClick = (event) => {
            // awesomeAction is a placeholder, see below
            dispatch(awesomeAction(event.target.id));
        };

        const awesomeElements = [...document.getElementsByClassName("awesome")];

        awesomeElements.forEach(elem => {
            elem.addEventListener("click", onClick);
        });

        return () => awesomeElements.forEach(elem => {
            elem.removeEventListener("click", onClick);
        });
    }, []);

    return children;
};

I chose this dispatch() as a placeholder for any interaction with the surrounding web application; it could also be a simple React state or something. You can register any event listener you want here (also “mouseover”, “mouseout”, “contextmenu”, …), but remember to remove it again in the effect’s cleanup function.

By the way, document.getElementsByClassName(…) returns an HTMLCollection, which is only array-like and has no .forEach() of its own, thus the […destructuring] into a real array to make the .forEach() possible.

We now have the first part – our elements (in our case, everything that has the class “awesome”) have got a click handler that allows us to dispatch anything to the application state. But now they need to change, too.

In a purely React-y way, this could be done by an svg element that chooses its fill={isActive ? "magenta" : "black"}, but as we chose not to render our components ourselves, we need to once again grab deeply into the DOM and dare to manipulate it by hand.

As mentioned above, this is a step towards very ugly problems, as React cannot guarantee that your visual layer matches your application state. You, on your own, have to guarantee to do what’s right.

This is where this comes in:

/*
 for this example, assume that the redux selector selectSomethingFromTheState returns something like:

result = [
  {elementId: "elem1", isActive: true},
  ...
];

and isActive could be the thing that was toggled by our awesomeAction() above

*/

import React from "react";
import { useSelector } from "react-redux";

const useOurCarefulSvgSynchronizationEffect = () => {
    const elementStates = useSelector(selectSomethingFromTheState);

    React.useEffect(() => {
        for (const state of elementStates) {
            const element = document.getElementById(state.elementId);
            if (!element) {
                continue;
            }
            element.style.fill = state.isActive ? "magenta" : "black";
            // ... do other stuff with the DOM element
        }
    }, [elementStates]);
};

There we have it – the SvgWrapper and the use…SynchronizationEffect() both stray from the React mindset by accessing the DOM directly, but we do it in a fashion where it is concisely encapsulated inside <OurBeautifulComponent>, and there is no direct knowledge about the ids inside the SVG, class manipulations, CSS selectors, etc. anywhere else.

In my opinion, one can indeed go against the rules if it’s necessary, but I also see the option for a Stockton Rush quotation here… so, if you know of a more elegant way, please feel free to share.

PS: by the way, if you use vite, you might get an “Uncaught SyntaxError” when trying import { ReactComponent ... }. I’ve written about this before.

Using the File System as an Interaction Device

In a recent project, my job was to build a scientific data processing pipeline for a new algorithm that wasn’t set in stone yet. Part of my work would be to explore different mathematical formulas interactively with the customer.

My usual approach to projects is a “risk first” strategy. I try to identify the riskiest or most demanding part of the project and deal with it first. This approach essentially resembles the “fail fast” mindset, just that we haven’t failed yet.

In the case of the calculation pipeline, the riskiest part, and at the same time the functionality that matters most to the customer, was the pipeline itself. If we were able to implement a system that could transform the given entry data into the desired results, we had an end-to-end prototype and the means to explore different mathematical approaches.

The pipeline consists of different steps that can be described as a complex transformation each. The first step/transformation takes a proprietary data format file and converts it into a big JSON file. The main effort of this step is a deep physical analysis of the data contained in the proprietary format. This analysis requires a lot of thought, exploration and work, but can be seen as a black box that the data traverses on its way from proprietary format to JSON.

The next step takes the JSON input and extracts the necessary information required by the following step. It is essentially a data reduction operation.

The third step feeds the analyzed, reduced data into the formulas and stores the calculation result.

The fourth step aggregates the calculation results into a daily time series report in a format that can be read by a spreadsheet application. This report is the end product of the pipeline and will be used to make decisions and to rule out certain environmental hazards.

The main difference of this project from virtually every project before it is that I didn’t write any user interface code. The application’s main window is still blank. The whole interaction of the system with other systems that provide the entry data, of the pipeline steps among each other, and with the human user is based on files in the file system.

The system periodically checks for the existence of new entry data. If some is found, it is copied into the “inbox” directory of the first step. The first step periodically checks for the existence of files in its inbox and processes them into its “outbox”, which conveniently serves as the inbox of the second step. You probably get the idea by now. All the steps in the system, including the upstream data fetching routine, are actors in a file-based actor model. The files serve as messages from one actor to another. The file system and its directory structure is the common communication channel that passes the messages around.

Each processing step is an actor node with input and output storages

One advantage of this approach is that the file system viewer application of the operating system can be used as the (graphical) user interface. By opening the appropriate directories and viewing their content, the user can supervise the operating state of the system. The system can report problems by moving the incoming message not into the step’s “done” directory, but into its “failed” or “problem” directory. If several directories are on display at once, the user can follow a specific piece of data through the pipeline and view the intermediate results. For domain-specific reasons, the actors in this project also have the result directory “omitted” for data that will not be processed any further because some domain rule has determined a cancellation.
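
To make this concrete, the directory structure might look roughly like this sketch (names are illustrative, not the project’s real ones):

pipeline/
    1-analysis/
        inbox/      <- new files in the proprietary format arrive here
        done/       <- successfully processed input files
        failed/     <- files the step could not process
        omitted/    <- cancelled by domain rules
    2-reduction/
        inbox/      <- at the same time the outbox of step 1
        ...
    3-calculation/
        ...
    4-report/
        ...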

A user can even manipulate the data’s flow by moving files away or into a specific directory. Let’s say we want to calculate a certain amount of data again: we can just copy the files from the “done” directory of the first step into its “inbox”, and the system will process them again.

Because the analysis step takes some time while the calculation step is surprisingly fast, we can perform just the calculation again by not moving the initial data files, but the analyzed and reduced entry files for the calculation step. Using this approach, we can try different mathematical formulas by stopping the system, swapping the calculation step with a new version, starting the system again and moving the desired entry files into its inbox.

Using the file system as an interaction device for the user and the system’s parts has many immediate advantages, but some drawbacks, too. One drawback is performance. Using the hard disk for data transfer is the slowest possible way to bring data from step X to step X+1. If your system is required to have high throughput or low latency, this approach isn’t suitable. My project has a low, forecastable throughput and a latency requirement that is measured in minutes or seconds, not in milliseconds or even nanoseconds. It can spend some time in the file system, because the first step alone takes several seconds for each file.

Another drawback is a certain fragility of the communication medium, the file system. You have to account for concurrent reads, writes or even deletes. The target platform of my system (Microsoft Windows) exhibits signs of exhaustion if the number of files in one directory grows too large. This means that your file selection, already a costly operation, becomes even more costly if the system is put under pressure. If your throughput is usually steady, which is the case in my project, this won’t be a problem. Until you manually copy 100k files into an inbox for swift recalculation and discover that the file copy process alone takes several minutes.

Of course, the system cannot operate without a graphical user interface forever. But some basic interactions with the system will probably just result in some files being copied from one directory to another one in the background.

WPF: Recipe for customizable User Controls with flexible Interactivity

The most striking feature of WPF is its peculiar understanding of flexibility. Usually you are free to do anything, anywhere, but WPF instantly hands the responsibility back to you to pay super-close attention to where you actually are.

As projects grow, their user interfaces usually grow with them, and over time there appears the need to re-use a given component of that user interface.

This is not only the DRY principle at work at the code level; Consistency is also one of the Nielsen-Norman usability heuristics, i.e. a good plan so as not to confuse your users with needless irritations. This establishes trust. Good stuff.

Now say that you have a re-usable custom button that should

  1. Look a certain way at a given place,
  2. Show custom interactivity (handling of mouse events)
  3. Be fully integrated in the XAML workflow, especially accepting Bindings from outside, e.g. inside an ItemsControl or other list-type Control.

As usual, this was a multi-layered problem. It took me a while to find my optimum-for-now solution, but I think I managed, so let me try to break it down a bit. Consider the basic structure:

<ItemsControl ItemsSource="{Binding ListOfChildren}">
	<ItemsControl.ItemTemplate>
		<DataTemplate>
			<Button Style="{StaticResource FancyButton}"
				Command="{Binding SomeAwesomeCommand}"
				Content="{Binding Title}"
				/>
		</DataTemplate>
	</ItemsControl.ItemTemplate>
</ItemsControl>
Quick Note about the Style

We see the Styling of FancyButton (defined in some ResourceDictionary, merged together with a lot of other stuff into the Application.Resources in App.xaml). I want to define the styling there in order to modify it in some other places, i.e. it could be defined in said ResourceDictionary like

<Style TargetType="{x:Type Button}" x:Key="FancyButton"> ... </Style>

<Style TargetType="{x:Type Button}" BasedOn="{StaticResource FancyButton}" x:Key="SmallFancyButton"> ... </Style>
<Style TargetType="{x:Type Button}" BasedOn="{StaticResource FancyButton}" x:Key="FancyAlertButton"> ... </Style>
... as you wish ...
Quick Note about the Command

We also see SomeAwesomeCommand, defined in the view model of what ListOfChildren actually consists of. So, SomeAwesomeCommand is a Property of a custom ICommand-implementing class, but there’s a catch:

Commands on a Button work on the Click event. There’s no native way to assign them to different events like e.g. DragOver, so this sounds like our new User Control would need quite some code-behind in order to wire up any non-Click event with that Command. Thankfully, there is a surprisingly simple solution, called Interaction.Triggers. Apply it by

  1. installing System.Windows.Interactivity from NuGet
  2. adding the namespace to your XAML namespaces: xmlns:i="clr-namespace:System.Windows.Interactivity;assembly=System.Windows.Interactivity"
  3. Adding the trigger inside:
<Button ...>
    <i:Interaction.Triggers>
        <i:EventTrigger EventName="DragOver">
            <i:InvokeCommandAction Command="{Binding WhateverYouFeelLike}"/>
        </i:EventTrigger>
    </i:Interaction.Triggers>
</Button>

But that only as a side note; remember that the point of having our separate User Control is still valid, considering that you might want some extra interactivity in your own use cases.

Now: Extracting the functionality to our own User Control

I chose not to derive a class from the Button class itself because that would couple me closer to the internal workings of Button; i.e. an application of Composition over Inheritance. So the first step looks easy: right-click in the VS Solution Explorer -> Add -> User Control (WPF) -> create it under some name (say, MightyButton) -> move the <Button.../> there -> include the XAML namespace and place the MightyButton in our old code:

<!-- old place -->
<Window ...
	xmlns:ui="clr-namespace:WhereYourMightyButtonLives"
	>
	...
	<ItemsControl ItemsSource="{Binding ListOfChildren}">
		<ItemsControl.ItemTemplate>
			<DataTemplate>
				<ui:MightyButton Command="{Binding SomeAwesomeCommand}"
						 Content="{Binding Title}"
						 />
			</DataTemplate>
		</ItemsControl.ItemTemplate>
	</ItemsControl>
	...
</Window>

<!-- MightyButton.xaml -->
<UserControl ...>
	<Button Style="{StaticResource FancyButton}"/>
</UserControl>

But now it gets tricky. This could compile but still not work, because of several Binding mismatches.

I’ve written a lot already, so let me just define the main problems. (By now, our MightyButton has also received its proper name: DropTargetButton.) I want my call to look like

<ui:DropTargetButton Style="{StaticResource FancyButton}"
		     Command="{Binding OnBranchFolderSelect}"
		     ...
		     />

But again, there are two parts to this. Let me clarify.

Side Quest: I want the Style to be applied from outside.

Remember the idea of having SmallFancyButton, FancyAlertButton or whatsoever? The problem is that I can’t just pass it to <ui:MightyButton.../> as intended (see the last code block), because FancyButton has its definition of TargetType="{x:Type Button}", not TargetType="{x:Type ui:MightyButton}".

Surely I could change that. But I would regret it when I change my component again; I would have to adjust the FancyButton definition every time (at several places), even though it always describes a Button.

So let’s keep the Style TargetType to be Button, and just treat the Style as something to be passed to the inner-lying Button.

Main Quest: Passing through Properties from the ListOfChildren members

Remember that any WPF Control inherits a lot of Properties (like Style, Margin, Height, …) from its ancestors like FrameworkElement, and you can always extend that with custom Dependency Properties. Know that Command actually is not one of these inherited Properties – it only exists for several UI Elements like the Button, but not in a general sense, so we can easily extend this.

Go to the Code Behind, and at some suitable place make a new Dependency Property. There is a Visual Studio shorthand of writing “propdp” and pressing Tab twice. Then adjust it to read like

public ICommand Command
{
    get { return (ICommand)GetValue(CommandProperty); }
    set { SetValue(CommandProperty, value); }
}

public static readonly DependencyProperty CommandProperty =
    DependencyProperty.Register("Command", typeof(ICommand), typeof(DropTargetButton), new PropertyMetadata(null));

With Style, we do have one of these inherited Properties. Nevertheless, I want my Property to be called Style, which is quite straightforward by just employing the new keyword (i.e. we really want to shadow the inherited property, which is tolerable because we already know our FancyButton Style to its full extent).

public new Style Style
{
    get { return (Style)GetValue(StyleProperty); }
    set { SetValue(StyleProperty, value); }
}

public static readonly new DependencyProperty StyleProperty =
    DependencyProperty.Register("Style", typeof(Style), typeof(DropTargetButton), new PropertyMetadata(null));

And then we’re nearly there; we just have to make the Button inside know where to take these Properties from. In an easy setting, this could be accomplished by making the UserControl constructor set DataContext = this; – but STOP!

If you do that, you lose easy access to the outer ItemsControl elements. Sure, you could work around that – remember the WPF philosophy of allowing you many ways – but more practicable, imo, is to use an ElementName. Let’s be boring and take “Root”.

<UserControl x:Class="ComplianceManagementTool.UI.DropTargetButton"
	     ...
             xmlns:i="clr-namespace:System.Windows.Interactivity;assembly=System.Windows.Interactivity"
             xmlns:local="clr-namespace:ComplianceManagementTool.UI"
             x:Name="Root"
             >
    <Button Style="{Binding Style, ElementName=Root}"
            AllowDrop="True"
            Command="{Binding Command, ElementName=Root}"
            Content="{Binding Text, ElementName=Root}"
            >
        <i:Interaction.Triggers>
            <i:EventTrigger EventName="DragOver">
                <i:InvokeCommandAction Command="{Binding Command, ElementName=Root}"/>
            </i:EventTrigger>
        </i:Interaction.Triggers>
    </Button>
</UserControl>

As some homework, I’ve left you the Text property (the inner Button’s Content is bound to it above) to add as a Dependency Property as well. You could go ahead and add as many DPs to your User Control as you like, and inside that Control (which is still quite maiden-like, if we ignore all that DP boilerplate code) you could have as much complex interactivity as you require, without losing the flexibility of passing the corresponding Commands from the outside.
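
If you want to check your homework against something: a possible solution might look like this sketch, mirroring the two properties above (the property is deliberately named Text so it doesn’t clash with the UserControl’s own Content):

public string Text
{
    get { return (string)GetValue(TextProperty); }
    set { SetValue(TextProperty, value); }
}

public static readonly DependencyProperty TextProperty =
    DependencyProperty.Register("Text", typeof(string), typeof(DropTargetButton), new PropertyMetadata(null));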

Of course, this is just one way of about seventeen plus-minus thirty-three, add one or two, which is about the usual number of WPF ways of doing things. Nevertheless, this solution now lives on our blog, and maybe it is of some help to you. Or Future-Me.

CSS: z-index can be weird.

Before I start this post, there are three things I want to state:

  1. If you think the “z-index” is quite simple, you probably never bothered to care.
  2. When in doubt, one can always read the official specification
  3. There are multiple good elaborations available already (see bottom of this post), but I was missing a comprehensive list of the most important points.
Quick Motivation (skip that if you only want the facts)

Yes, we know: the web has become a place it never intended to be. Nowadays, it seems to accommodate everything. You want live control of measurement devices? 3D camera applications? Advanced data wizardry? Or, in my case, a kind of sophisticated layout engine? … Web dev in 2021 gives you the impression that it is all merely a matter of time (or cost).

But then, there are always the caveats. Some semi-suggestive idea turns out to be not that accessible at all. Our user experience considerations made us implement “kind of basic” windows (in the operating system sense) that appear at times and disappear at other times, and give the user maximum information while maintaining minimum clutter.

Very early on, I noticed that I had to implement my own drag’n’drop functionality, because HTML5 isn’t really there yet. But I consider that as something advanced, which also has its idiosyncrasy in every conceivable use case, so that’s ok.

But then again, a somewhat-native-feeling windowing system (even if it is only rectangles with text) makes use of a seemingly simple thing: that stuff gets drawn over other stuff in the right order. And this comes with certain peculiarities.

The painting order of HTML elements is divided into stacking contexts. Stacking contexts can be stacked above or below each other, and most of the time they behave as expected; but sometimes, they don’t. So, for the roundup…

Stacking Context – Essential Rules

(This assumes CSS knowledge, but don’t hesitate to comment if you have any questions.)

  • If you set any z-index, you set that z-index within the current stacking context.
    • You can never enter an outside stacking context, only create new ones inside
  • One stacking context as a whole is always either above or below other stacking contexts as a whole
  • The root stacking order (from the <html> element) is as you expect:
    • Further down in the HTML source means more upfront
  • Higher z-index means “more upfront”, but
    • z-index doesn’t mean a thing if your neighboring elements do not live inside the same stacking context!
  • Within any parent stacking context, new child stacking contexts are created by
    • Setting CSS “position: fixed;” or “position: sticky;” (these always create one)
    • Setting CSS “position: relative;” or “position: absolute;” combined with a z-index different from the default value “auto”
      • For clarity: “z-index: 0;” is nearly the same as “z-index: auto;”, but the latter doesn’t open up a child stacking context, while the former does.
    • Being a child of a “display: flex;” or “display: grid;” element while having a z-index different from “auto”
    • Setting CSS: “isolation: isolate;” (what is that even?)
    • Setting CSS: “will-change” to something non-initial (what is that even?)
    • Setting one of the “graphically advanced” CSS properties like
      • opacity (below 1), transform, filter, mix-blend-mode, clip-path, mask, …
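
To illustrate the “z-index doesn’t mean a thing” point, consider this minimal sketch: the tooltip’s z-index of 9999 only counts inside its parent’s stacking context, and that context as a whole sits below the overlay:

<div style="position: relative; z-index: 1;">
    <div style="position: absolute; z-index: 9999;">tooltip</div>
</div>
<div style="position: fixed; z-index: 2;">overlay</div>
<!-- the overlay paints above the tooltip, despite 2 < 9999 -->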

There is more, but maybe this can help you with bug-tracing. And two meta-points:

  • CSS evolves, so with new features, always have stacking context in mind
  • In a framework context (like the React biosphere), you might not know what your imported dependencies do under the hood. Maybe better isolate them.
Reading recommendations:

They have nice illustrations, too.

If everything fails, go back to square one:

https://www.w3.org/TR/CSS2/visuren.html#propdef-z-index

Be aware.