Windows 10 quality of life features

Starting with Windows 10, Microsoft switched from big-bang releases of its operating system to so-called rolling releases: new features and improvements ship at regular intervals – once or twice a year – without changing the product name.

The great thing is that users get the improvements made by Microsoft's engineers much sooner than in the past, when you had to wait several years for a big “service pack” to arrive or even a new major release of Windows like 2000, XP, Vista, 7, 8, 8.1 and finally 10 (I am leaving out the dark times on purpose 😉).

The bad thing is that it is harder to see what version or release you are running. Of course there is a (less visible) name for every Windows release. This version or codename sometimes gets mentioned on support pages or in blog posts, because the functionality of Windows 10 can change significantly between these rolling feature updates. And sometimes you may run some app or tool that tells you that you need Windows 10 2004 or higher.

What version of Windows 10 am I running?

I know of two simple ways to find out what version of Windows 10 you are currently running:

  1. Running the tool winver
  2. Opening the Settings -> System -> About page (labeled “Info” on German systems)
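
Both can also be run from a command prompt; the build number in the sample output below is only an example, yours will differ:

rem opens the “About Windows” version dialog
winver

rem prints e.g.: Microsoft Windows [Version 10.0.19042.630]
ver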

Why does it matter?

Another downside is that users often are not aware of new stuff added to their operating system. And Microsoft does an awful job promoting the changes and improvements!

Of course there are announcements about the big things after upgrading your operating system to the next feature level. And Microsoft uses these for marketing its own apps and services. They slap their new Edge browser in your face on every occasion and try to trick you into creating a Microsoft account. It is absolutely not obvious how to use Windows without a Microsoft account like in the decades before. Skip the process here, continue without one and risk your life…

On the other hand they really improve their software and slowly but steadily round off the rough edges of their system. The UIs for environment variables, for example, are finally quite usable.

Now back to the main theme of the post: There are some hidden gems built into Windows 10 that I learned of only lately and I think are vastly underadvertised – unlike Microsoft’s marketing of their big products.

Built-in screen recorder

Ok, many gamers may know it because Windows briefly displays the shortcut Win+G when starting a game. But it is not only usable for games: you can record any window, capture application sounds and record your voice. You can easily record your own screencasts and video tutorials using this built-in solution.

Built-in clipboard manager

How often did you wish to be able to copy multiple items and choose one of the last few copied elements when pasting? While such clipboard managers have been around for a long time and sometimes provide tons of useful features, Windows 10 has a simple one built in. Just press Win+V instead of Ctrl+V to paste and you will get a list of the copied items to choose from.

Built-in screenshot/snipping tool

Many people may know the old way of making screenshots using the oddly labeled PrtScr key (sometimes also PrntScrn or simply Print), opening a painting application like MS Paint and pasting the image using Ctrl+V. Well, Microsoft improved this workflow a lot by including a snipping tool that you can activate using Win+Shift+S. This tool lets you select either a rectangular or free-form region, a window or an entire screen to capture. After doing that you get a notification allowing you to make modifications to the capture and save it to disk.

On-screen emoji-keyboard

Just a little helper in these modern social media times is the on-screen emoji keyboard. Using Win+. you can activate it, browse tons of common emojis and enter them into your messages and texts 🐱‍🏍🤘.

Windows Terminal

Ok, this last but not least one is not (yet) built-in and mostly interesting for developers and power users. Nevertheless, I think it is noteworthy that Microsoft finally built a capable terminal application with modern features like multiple tabs, full Unicode and font support, customizable background with blur and the ability to host different shells like the old and trusted command prompt CMD, the newer PowerShell and WSL. You can find it in the Microsoft Store for free.
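
Its configuration lives in a JSON file, too. Here is a trimmed sketch of a profile list hosting the three shells mentioned above – keys and values are illustrative, the details vary by version:

{
  "profiles": {
    "list": [
      { "name": "Command Prompt", "commandline": "cmd.exe" },
      { "name": "PowerShell", "commandline": "powershell.exe" },
      { "name": "Ubuntu (WSL)", "commandline": "wsl.exe -d Ubuntu" }
    ]
  }
}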

Conclusion

While releases of Windows 10 are more subtle than the big new Windows releases of the past, many things change, both under the hood and user-visible. Every once in a while something you missed for years or installed third-party tools for may be added without you knowing. That’s another reason why talking to colleagues and friends and practices like pair programming and brown-bag meetings are so valuable for sharing knowledge and experience.

I hope there is something for you in my findings of hidden Windows gems. If you have some Windows 10 features you discovered and really like, feel free to leave them in the comments. I will gladly try them out!

The spell that reveals your onboarding decade

Every one of us has started somewhere. By telling you what my first computer was, I also convey a lot about the place and time my journey in IT started. For many of my fellows, it was a Commodore C64 or an Atari 500. But even if I don’t tell you about my first machine, there is a simple “magic spell” that you can cast to at least get a hint about the decade my first working days started, 15 years after my first contact with computers.

The spell is just one word: “container”. What a container is and how to use it is bound to the decades. Let me guide you through some typical answers.

Pre-2010 answer

If you entered the industry around the year 2000, a container was a big chunk of software that you preferably installed on an even bigger machine, the infamous “application server”. The container, or “servlet container”, “application container”, or, if you were with the right folks, “enterprise bean container” (in short: EJB-Container) was the central hub to host all of your web applications. If you deployed your application into the container, it handled the rest, like unpacking the web archive, providing resources and publishing to the internet. Typical names of containers were Tomcat, Jetty, JBoss or WildFly. You can probably see them around even today, because the concept itself is appealing. Some aspects of it inevitably lead to problems, though. Resource management was a big topic. Your application wasn’t expected to care for a database connection, a logging context or, sometimes, even security features, because the container provided those things to it. As you can probably imagine, that left your application crippled and unable to function outside a container.
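
Deployment itself usually amounted to copying the web archive into a folder watched by the container; with Tomcat, for example:

# Tomcat notices the new archive, unpacks it and publishes the application
cp myapp.war $CATALINA_HOME/webapps/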

Pre-2010 containers

So if you onboarded more than ten years ago, your first thoughts reacting to the word “container” will be “big machine”, “slow startup” and “logging framework”. There cannot reasonably be more than one container per machine. Maintaining a cluster of containers would be the work of luminaries. Being asked to start a container on your developer machine is a dreadful endeavour. “Booting the container” is a reason to visit the coffee machine.

Post-2010 answer

But if you started your career less than ten years ago, your reaction to the word “container” will be different. Starting in 2013, a technology named “Docker” reinvented an old practice to isolate processes and package them into a transport format. Simplified enough, a container is just the RAM-based projection of an application image. You boot a container by loading the image into RAM. That’s one of the fastest things you can do on a computer (not really, but it fits the story better). Even better, because each container ideally contains just one small application or part of it, you don’t boot one container per machine, you can run dozens at the same time. Each container brings everything it needs with it and only relies on three common external resources being provided: networking, persistent storage and a facility to dump logging output.
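
To make that tangible: starting a few isolated containers from one image is a matter of seconds (nginx is just a popular example image):

# two isolated web servers on the same machine, each booted in seconds
docker run -d --name web1 -p 8080:80 nginx
docker run -d --name web2 -p 8081:80 nginx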

Post-2010 containers

It is good practice to partition your application into several containers of the post-2010 kind. It is good practice to have them talk to each other over the network, either real or simulated. The lines between actual computers get blurry real fast with this kind of containerization.

As a youngster, your first thoughts reacting to the word “container” will be “just one?”, “scale up” and “log output management”. You see an opportunity to maintain a cluster of containers. Being asked to start a container on your developer machine is a no-brainer. “Booting the container” is a reason to automate your container infrastructure.

The reactions to the word “container” are very different, based on socialization period. In the old days, pre-2010 containers were boss fight adversaries. Nowadays, post-2010 containers are helpful spirits that just need to be controlled.

Post-2020 answer?

What better way to control the helpful spirits than to deploy them to an environment that handles unpacking, wiring, providing resources and publishing to the internet? Your application isn’t expected to care for topics like scalability, cluster robustness or load balancing. The environment, your container cluster platform, handles those things for you. There can only be one cluster platform per cloud. Being asked to start a cluster platform on your developer machine – well, that’s just not possible, sorry. Best we can do is a minified version of it. Our applications tend to function poorly outside a cluster platform.
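
Taking Kubernetes as a concrete example of such a cluster platform (an assumption on my part, the observation holds for others as well), the minified developer-machine version looks like this:

# a single-node stand-in for a real cluster, for development only
minikube start
kubectl get nodes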

As you hopefully can see, developers of all decades crave a thing they tend to call “container” that they can throw their software into to have it perform well without all the hassle of operations. But as soon as they give away responsibility for the environment, they also give away the possibility of comfortable “developer machine” operations. The goal is the same, just the technicality of what exactly a “container” happens to be changes over time.

What is your “spell” that reveals a lot about the responder?

React for the algebra enthusiast – Part 2

In Part 1, I explained how algebra can shed some light on a quite restricted class of react-apps. Today, I will lift one of the restrictions. This step needs a new kind of algebraic structure:

Categories

Category theory is a large branch of pure mathematics, with many facets and applications. Most of the latter are internal to pure mathematics. Since I have a very special application in mind, I will give you a definition which is less general than the most common ones.

Categories can be thought of as generalized monoids. At the same time, a category is a labelled, directed multigraph with some extra structure. Here is a picture of a labelled directed multigraph – its nodes are labelled with upper case letters and its edges are labelled with lower case letters:

If such a graph happens to be a category, the nodes are called objects and the edges morphisms. The idea is that the objects are changed, or morphed, into other objects by the morphisms. We will write h:A\to B for a morphism from object A to object B.

But I said something about extra structure and that categories generalize monoids. This extra structure is essentially a monoid structure on the morphisms of a category, except that there is a unit for each object called identity and the operation “\_\circ\_” can only be applied to morphisms, if they form “a line”. For example, if we have morphisms like k and i in the picture below, in a category, there will be a new morphism “k\circ i“:

Note that “i” is on the right in “k\circ i” but it is the first morphism if you follow the direction indicated by the arrows. This comes from function composition in mathematics, which suffers from the same weirdness by some historical accident. For now that just means that chains of morphisms have to be read from right to left to make sense of them.

For the identities and the operation “\_\circ\_”, we can ask for the same laws to hold as in a monoid, which will complete the definition of a category:

Definition (not as general as it could be…)

A category consists of the following data:

  • A set of objects A,B,…
  • A set of morphisms f : A_1\to B_1, g:A_2\to B_2,\dots
  • An operation “\_\circ\_” which for all (consecutive) pairs of morphisms f:A\to B and g:B\to C returns a morphism g \circ f : A \to C
  • For any object A, a morphism \mathrm{id}_A : A\to A

Such that the following laws hold:

  • “\_\circ\_” is associative: For all morphisms f : A \to B, g : B\to C and h : C\to D, we have: h \circ (g \circ f) = (h \circ g) \circ f
  • The identities are left and right neutral: For all morphisms f: A\to B we have: f \circ \mathrm{id}_A=\mathrm{id}_B \circ f = f

Examples

Before we go to our example of interest, let us look at some examples:

  • Any monoid is a category with one object O and for each element m of the monoid a morphism m:O\to O. “m\circ n” is defined to be m\cdot n.
  • The graph below can be extended to a category by adding the morphisms ef: B\to B, fe: A\to A, efe: A\to B, fef: B\to A, \dots and an identity for A and B. The operation “\_\circ\_” is defined as juxtaposition, where we treat the identities as empty sequences. So for example, ef\circ efe is efefe: A\to B.
  • More generally: Let G be a labelled directed graph with edges e_1,\dots,e_r and nodes n_1,\dots,n_l. Then there is a category C_G with objects n_1,\dots,n_l and morphisms all sequences of consecutive edges – including the empty sequence for any node.

Action Categories

So let’s generalize Part 1 with our new tool. Our new scope is react-apps which have actions without parameters, but now actions cannot necessarily be applied in any order. Whether an action can be fired may now depend on the state of the app.

The smallest example I can think of where we can see what’s new is an app with two states, let’s call them ON and OFF, and two actions, let’s say SWITCH_ON and SWITCH_OFF:
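
In redux terms, this two-state app could be modelled by a reducer that simply ignores actions that are invalid in the current state – a made-up sketch, not code from an actual app:

const initialState = "OFF";

function reducer(state = initialState, action) {
  switch (action.type) {
    case "SWITCH_ON":
      return state === "OFF" ? "ON" : state; // only valid in OFF
    case "SWITCH_OFF":
      return state === "ON" ? "OFF" : state; // only valid in ON
    default:
      return state;
  }
}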

Let us also say that the action SWITCH_ON can only be fired in state OFF and SWITCH_OFF only in state ON. The category for that graph has as its morphisms the possible sequences of actions. Now, if we follow the path of part 1, the obvious next step is to say that SWITCH_ON after SWITCH_OFF (and the other way around) is the same as the empty action-sequence — which leads us to…

Quotients

We made a pretty hefty generalization from monoids to categories, but the theory for quotients remains essentially the same. As we defined equivalence relations on the elements of a monoid, we can define equivalence relations on the morphisms of a category. As last time, this is problematic in general, but it turns out to just work if we only ever replace sequences of morphisms in the action category with sequences that have matching source and target.

So in the example above, it is ok to say that SWITCH_ON SWITCH_OFF is the empty sequence on ON and SWITCH_OFF SWITCH_ON is the empty sequence on OFF (keep in mind that the first action to be executed is on the right). Then any action sequence can be reduced to simply SWITCH_ON, SWITCH_OFF or an empty sequence (not the empty sequence, because we have two of them with different source and target). And in this case, the quotient category will be what we drew above, but as a category.

Of course, this is not an example where any high-powered math is needed to get insights. So far, these posts were just about understanding how the math works. For the next part of this series, my plan is to show how existing tools can be used to calculate larger examples.

3 good uses for the C++ preprocessor in 2020

As this weird year, 2020, comes to a close, I noticed that I am still using the preprocessor in my C++ programs. And not just for #includes which might, at last, slowly fade away with C++20’s modules. The preprocessor’s got a pretty bad rep, and new C++ programmers are usually taught to stay as far away as possible. Justifiably so – some things, like the dreaded X-Macros really should go the way of the dinosaurs.

But there are still some good uses left in the thing, and here’s my top 3 of those:

0. Commenting out big chunks of code

I’ve often seen people comment out big parts of code with block comments: /* this is not active */. However, that will only work as long as the code does not contain any other block comments, let alone a stray */ in a string. A great alternative is to use the preprocessor:

#if 0
auto i_do_not_want_to_compile_this() -> auto
{
  std::vector<std::deque<std::mutex>> baz{};
  return baz;
}
#endif

This can easily be wrapped multiple times around bigger parts of code, which is very helpful when refactoring large chunks of legacy code. It can very easily be toggled on and off while in this state. And the IDE will usually still show a dimmed version of syntax highlighting in the disabled region.

1. Conditionally throw away “cross-cutting” concerns

Some aspects of programs are “cross-cutting”, which means they cannot easily be separated from the rest of the code-base by putting them in a separate module. The most prominent example is probably logging. While you can typically modularize the actual implementation, the log calls themselves will be all over your code. Another of those concerns is profiling. This is also something that you typically want to take out of your application when deploying it, because users will rarely profile the end-product. Again, the preprocessor comes to the rescue. For example, in the excellent Optick, most of the code you insert consists of macros that can be completely eliminated with a simple compile-time switch. Consider this “tag” that adds an additional metric to your profile:

OPTICK_TAG("CoolMetric", compute_cool_metric());

When Optick is turned off via the aforementioned compile-time switch, compute_cool_metric() is never called. The call is not even compiled. Just turning Optick off will completely remove it from the compiled program. Now this can be potentially dangerous if the function has a side effect, but you wouldn’t do that anyways, would you?
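
A minimal sketch of how such a compile-time switch typically works – ENABLE_PROFILING and profile_tag are made-up names for illustration, not Optick’s actual API:

#ifdef ENABLE_PROFILING
  #define PROFILE_TAG(name, value) profile_tag((name), (value))
#else
  #define PROFILE_TAG(name, value) ((void)0)
#endif

// With ENABLE_PROFILING undefined, the macro expands to a no-op
// and the arguments are never evaluated or even compiled:
PROFILE_TAG("CoolMetric", compute_cool_metric());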

2. Making forward declarations more visible

Presumably owing to its history as a continuously evolved language, C++ has a very limited set of reserved keywords, often avoiding the introduction of new keywords so as not to interfere with any working software out there. Do not get me wrong, that is a great reason. But because of this, some language constructs can be a bit cryptic, for example forward declarations: class will_be_defined;. If you ever worked with a big, old or big and old code-base with lots of those, you probably know that maintaining them can be a chore and prone to error. So I think it is a great idea to at least make them more visible with your own macro “keyword”:

#define FORWARD_DECL(x) class x

FORWARD_DECL(will_be_defined);

That FORWARD_DECL immediately stands out visually and helps me keep track of those subtle declarations.

Grammar as a leaky abstraction

Internationalisation, or i18n for short, is the process of making the user interface of a program ready for translation into multiple languages. This usually means factoring out texts from the program source code into separate files, often called translation bundles. These files have a key-value structure. The program code then only refers to keys that are resolved into the actual texts from the translation bundle for the selected target language.

Here’s a simple example of two translation bundles, one for English and one for German:

# translations_en.properties
quit_confirm_message=Do you really want to quit?
yes_option=Yes
no_option=No
# translations_de.properties
quit_confirm_message=Wollen Sie die Anwendung wirklich beenden?
yes_option=Ja
no_option=Nein

The actual source code might look like this:

var answer = showDialog(
  t("quit_confirm_message"),
  t("yes_option"),
  t("no_option")
);

Here the function t looks up the key in the currently active translation bundle and returns the translated message as a string.
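
A minimal sketch of such a lookup function, assuming the bundles were already parsed into plain objects:

const bundles = {
  en: { quit_confirm_message: "Do you really want to quit?", yes_option: "Yes", no_option: "No" },
  de: { quit_confirm_message: "Wollen Sie die Anwendung wirklich beenden?", yes_option: "Ja", no_option: "Nein" }
};

let activeLanguage = "en";

function t(key) {
  // fall back to the key itself so missing translations stay visible
  return bundles[activeLanguage][key] ?? key;
}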

How not to do it

Last month I discovered an amusing attempt at internationalisation in a third-party code base. It looked similar to this:

# translations_en.properties
a=A
an=An
is=is
not=not
available=available
article=article
# translations_de.properties
a=Ein
an=Ein
is=ist
not=nicht
available=verfügbar
article=Artikel

These translation keys were used like this:

"${t('an')} ${t('article')} ${t('is')}
${available ? '' : t('not')} ${t('available')}."

to produce messages like

"An article is available."
"An article is not available."

… or in German:

"Ein Artikel ist verfügbar."
"Ein Artikel ist nicht verfügbar."

Why is this a clumsy attempt at internationalisation?

Because it uses single words as translation units, and it relies on the fact that English and German have the same sentence structure in this particular case. In general, of course, languages do not have the same sentence structure, not even related languages like English and German.

The author of the code also introduced separate translation keys for “a” and “an”. The German translation for both keys was “ein”. The author was lucky so far that all texts with “a” or “an” in this particular program translated to “ein” in German, not “eine”, “einen”, “einem”, “einer”, or “eines”.

How to do it

So what would be the correct way to do it? The internationalisation should have looked like this:

# translations_en.properties
article_available=An article is available.
article_not_available=An article is not available.
# translations_de.properties
article_available=Ein Artikel ist verfügbar.
article_not_available=Ein Artikel ist nicht verfügbar.

The code then simply selects between the two whole-sentence keys:

available ? t("article_available")
          : t("article_not_available")

By using whole phrases and sentences as translation units the translations into various languages have the freedom to use their own word orders and grammatical structures.

Keeping in touch with your pipeline Jenkins jobs

We have been using continuous integration (CI) at the Softwareschneiderei for many years now. Our CI platform of choice is, historically, Jenkins, which was called Hudson back in the day.

Things have moved on since then, and the integration with GitLab got a lot better with the advent of multibranch pipeline jobs. This kind of job allows you to automatically build branches and merge requests within the same job and to keep the builds separate.

Another cool feature of Jenkins is job configuration as code, defined in a Jenkinsfile and used in pipeline jobs. That way it is easy to create and maintain a job configuration alongside your project’s source code inside your repository. No need anymore to click through pages of web UIs to configure your job. As an additional benefit you also get the complete change history of the job configuration.

I prefer using scripted instead of declarative pipelines for Jenkinsfiles because they give me more control, freedom and power. But like always, this power and flexibility comes at a price…

Sending out build notifications

In my case I wanted to always send out build notifications regardless of the job result. This is quite easy if you have plugins like the Mattermost Notification Plugin or one of the mail plugins. Since our pipeline script consists of Groovy code this seems quite straightforward: put the notification code into a try-finally block:

node {
    try {
        stage ('Checkout and build') {
            checkout scm
            // Do something to build our project
        }
        // Maybe some additional stages like testing, code-analysis, packaging and deployment
    } finally {
        stage ('Notify') {
            mattermostSend "${env.JOB_NAME} - ${currentBuild.displayName} finished with Status [${currentBuild.currentResult}] (<${env.BUILD_URL}|Open>)"
        }
    }
}

Unfortunately, this pipeline script will always return SUCCESS as the build result! Even if someone aborts the job execution or a stage in the try-block fails…

Managing build status

So the seasoned programmer probably already knows the fix: Setting the build result in appropriate catch-blocks:

node {
    try {
        stage ('Checkout and build') {
            checkout scm
            // Do something to build our project
        }
        // Maybe some additional stages like testing, code-analysis, packaging and deployment
    } catch (Exception e) {
        if (e in org.jenkinsci.plugins.workflow.steps.FlowInterruptedException) {
            currentBuild.result = 'ABORTED'
        } else {
            echo "Exception: ${e.class}, message: ${e.message}"
            currentBuild.result = 'FAILURE'
        }
    } finally {
        stage ('Notify') {
            mattermostSend "${env.JOB_NAME} - ${currentBuild.displayName} finished with Status [${currentBuild.currentResult}] (<${env.BUILD_URL}|Open>)"
        }
    }
}

You can control the granularity and the exceptions thrown by your build steps at will and implement exactly the status reporting that you want. The available statuses are defined in hudson.model.Result, so feel free to implement the build status management that best fits your project.
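
For example, a non-critical stage could mark the build as merely unstable instead of failed. In this sketch, runCodeAnalysis() is a made-up placeholder; besides UNSTABLE, hudson.model.Result also defines SUCCESS, FAILURE, NOT_BUILT and ABORTED:

try {
    stage ('Optional code analysis') {
        runCodeAnalysis()
    }
} catch (Exception e) {
    // do not fail the whole build over a non-critical tool
    currentBuild.result = 'UNSTABLE'
}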

Bridging Eons in Web Dev with Polyfills

Indeed, web development is kind of peculiar. On the one hand, there‘s seldom a field in which new technologies overturn each other at such a pace, creating very exciting opportunities ranging from quickly sketching out proof-of-concepts to the efficient construction of real-world applications. On the other hand, there is this strange air of browser dependency, and with any new technology one acquires, there‘s always the question of whether this is just some temporary fashion or here to stay.

Which is why it happens that one would like to quickly scaffold a web application on the base of React and its ecosystem, but has the requirement that the customer is – either voluntarily or forced by higher powers – using some legacy browser like Internet Explorer 11, for which Microsoft has recently announced the end of life support for 30th November this year. Which doesn’t sound nice for the… *searching quickly* … 5% of desktop/laptop users that still use this old horse, but then again, how long can you cling to an outdated thing?

For the daily life of a web developer, his mind full of the peculiarities that the evolution of the ECMAScript standard (which basically is JavaScript) brought along, there is the practical helper caniuse.com, telling you for every item of your code you want to know about which browsers / devices support it and which don’t.

But what about whole frameworks? When I recently had my quest for an IE11-compatible React app, I already feared that at every corner I needed to double-check all my doing, especially given that for the development itself, one is certainly advised to instead use one of the browsers that come with quite some helpful developer tools, like extensions for React, Redux, etc. – but also the features of the built-in console, where it makes your life a lot easier whether a logged state shows up as the string “[object Object]” or as a fully interactive display of object properties. Sorry IE11, there are reasons why you have to go.

But actually, then, I figured that my request is maybe not that far outside the range of rather widespread use cases. Thus, the chances that someone already tried to tackle the problem aren’t so hopeless. And so it works pretty straightforwardly:

  • Install “react-app-polyfill”, e.g. via npm:
npm install react-app-polyfill
  • At the very top of your index.js, add for good measure:
import "react-app-polyfill/ie11";
import "react-app-polyfill/stable";
  • Include “IE 11” (with quotes) in your package.json under the “browserslist” key, as a new entry both under “production” and “development” (see the sketch below)
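
A sketch of the resulting package.json section, assuming the create-react-app defaults were in place before:

"browserslist": {
  "production": [
    ">0.2%",
    "not dead",
    "not op_mini all",
    "IE 11"
  ],
  "development": [
    "last 1 chrome version",
    "last 1 firefox version",
    "last 1 safari version",
    "IE 11"
  ]
}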

That should do it. There are people on the internet that advise removing the “node_modules/.cache” directory when doing this in an existing project.

The term “polyfill” is actually derived from a kind of putty, which is actually a nice picture. It’s all about allowing a developer to use accustomed features while maintaining the actual production environment.

Another very useful polyfill in this undertaking was…

// install 
npm install --save-dev @babel/plugin-transform-arrow-functions

// then add to the "babel" > "plugins" config array:
"babel": {
    "plugins": [
      "@babel/plugin-transform-arrow-functions"
    ]
  }

… as I find the new-fashioned arrow function notation quite useful.

So, this seems to bridge (most of) the worries one encounters in this web dev world where use cases span eons of technology evolution. Now, do you know any more useful polyfills that make your life easier?

React for the algebra enthusiast – Part 1

When I learned to use the react framework, I always had the feeling that it is written in a very mathy way. Since simple googling did not give me any hints whether this was a consideration in its design, I thought it might be worth sharing my thoughts on that. I should mention that I am sure others have made the same observations, but it might help algebraists to understand react faster and mathy computer scientists to remember some algebra.

Free monoids

In abstract algebra, a monoid is a set M together with a binary operation “\cdot” satisfying these two laws:

  • There is a neutral element “e”, such that: \forall x \in M: x \cdot e = e \cdot x = x
  • The operation is associative, i.e. \forall x,y,z \in M: x \cdot (y\cdot z) = (x\cdot y) \cdot z

Here are some examples:

  • Any set with exactly one element together with the unique choice of operation on it.
  • The natural numbers \mathbb{N}=\{0,1,2,\dots \} with addition.
  • The one-based natural numbers \mathbb{N}_1=\{1,2,3,\dots\} with multiplication.
  • The Integers \mathbb Z with addition.
  • For any set M, the set of maps from M to M is a monoid with composition of maps.
  • For any set A, we can construct the set List(A), consisting of all finite lists of elements of A. List(A) is a monoid with concatenation of lists. We will denote lists like this: [1,2,3,\dots]

Monoids of the form List(A) are called free. With “of the form” I mean that the elements of the sets can be renamed so that sets and operations are the same. For example, the monoid \mathbb{N} with addition and List({1}) are of the same form, witnessed by the following renaming scheme:

0 \mapsto []

1 \mapsto [1]

2 \mapsto [1,1]

3 \mapsto [1,1,1]

\dots

— so addition and appending lists are the same operation under this identification.

With the exception of \mathbb{N}_1, the integers and the monoid of maps on a set, all of the examples above are free monoids. There is also a nice abstract definition of “free”, but for the purpose at hand to describe a special kind of monoid, it is good enough to say, that a monoid M is free, if there is a set A such that M is of the form List(A).

Action monoids

A react-app (and by that I really mean a react+redux app) has a set of actions. An action always has a type, which is usually a string, and a possibly empty list of arguments.
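
For illustration, such an action and its dispatch might look like this (the names are made up for the example):

const increment = { type: "INCREMENT" };
store.dispatch(increment);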

Let us stick to a simple app for now, where each action just has a type and nothing else. And let us further assume that actions can appear in arbitrary sequences. That means any action can be fired in any state. The latter simplification will keep us clear of more advanced algebra for now.

For a react-app, sequences of actions form a free monoid. Let us look at a simple example: Suppose our app is a counter which starts with “0” and has an increment (I) and a decrement (D) action. Then the sequences of actions can be represented by strings like

ID, IIDID, DDD, IDI, …

which form a free monoid with juxtaposition of strings. I have to admit, so far this is not very helpful for a practitioner – but I am pretty sure the next step has at least some potential to help in a complicated situation:

Quotients

Quotients of sets by an equivalence relation are a very basic tool of modern math. For a monoid, it is not clear if a quotient of its underlying set will still be a monoid with the “same” operations.

Let us look at an example, where everything goes well. In the example from above, the counter should show the same integer if we decrement and then increment (or the other way around). So we could say that the two action sequences

  • ID and
  • DI

do really nothing and should be considered equivalent to the empty action sequence. So let’s say that any sequence of actions is equivalent to the same sequence with any occurrence of “DI” or “ID” deleted. So for example we get:

IIDIIDD \sim I

With this rule, we can reduce any sequence to an equivalent one that is a sequence of Is, a sequence of Ds or empty. So the quotient monoid can be identified with the integers (in two different ways, but that’s ok) and addition corresponds to juxtaposition of action sequences.

The point of this example and the moral of this post is that we can take a syntactic description (the monoid of action sequences), which is easy to derive from the source code, and look at a quotient of the action monoid by a reasonable relation to arrive at some algebraic structure which has a lot to do with the semantics of the app.

So the question remains if this just works well for an example or if we have a general recipe.

Here is a problem in the general situation: Let x,y,z\in M be elements of a monoid M with operation “\cdot” and let \sim be an equivalence relation such that x is identified with y. Then, denoting equivalence classes with [\_], it is not clear if [x] \cdot [z] should be defined to be [x\cdot z] or [y\cdot z].

Fortunately problems like that disappear for free monoids like our action monoid and for equivalence relations constructed in a specific way. As you can see on wikipedia, it is always ok to take the equivalence relation generated by the same kind of identifications we made above: Pick some pairs of sequences which are known “to do the same” from a semantic point of view (like “ID” and “DI” did the same as the empty sequence) and declare two sequences to be equivalent if they arise from each other by replacing subsequences known to be the same.

So the approach is that general: It works for apps, where actions do not have parameters and can be fired in any order and for equivalence relations generated by defining finitely many action sequences to do the same. The “any order” is a real restriction, but this post also has a “Part 1” in the title…

Crashes when returning references to vector elements

Recently, I was experiencing a strange crash that I traced to a piece of C++ code looking more or less like this:

template <class T>
class container
{
public:
  std::vector<T> values_;
  T default_;

  T const& get() const
  {
    if (values_.empty())
      return default_;
  return values_.front();
  }
};

This was crashing when calling get(), with a non-empty values_ member. It looks fairly innocent, and it had run in production for a couple of years already. So what changed?

I had, in fact, never instantiated this template with T = bool before. And that was causing the crash, while still compiling without any errors. Now if you’re a little versed in the C++ standard library you might know that std::vector<bool> is a special snowflake indeed. In an effort to save space, and, I suspect, prove the usefulness of template specializations, it is not really a “normal” container holding bool values. Instead, it holds some integer type and packs each pseudo-bool into one of its bits. The consequence is that the accessor functions like operator[], front() and back() cannot return a reference to a bool. Instead, they return a “proxy” object that supports assignment to and from a bool.

Back to the get() function: it tries to return a reference to a bool. Of course, that bool doesn’t really exist except as a temporary, and so this results in a dangling reference that causes a segmentation fault when used.

I suspect there could have been a warning about a dangling reference somewhere there. I have seen clang-tidy especially report things like this (with a few false positives too), but it did not show up for me. To fix it, I am now just returning a bool instead of a bool const& for T = bool. A special case in my case to work around a special case in std::vector.
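
The exact fix is not shown above, but a minimal sketch of such a special case could use a conditional return type:

#include <type_traits>
#include <vector>

template <class T>
class container
{
public:
  std::vector<T> values_;
  T default_;

  // Return by value for T = bool (the proxy converts to a plain bool),
  // by const& for every other T
  auto get() const -> std::conditional_t<std::is_same_v<T, bool>, T, T const&>
  {
    if (values_.empty())
      return default_;
    return values_.front();
  }
};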

Co-Variant methods on C# collections

C# offers a powerful API for working with collections, and especially LINQ offers lots of functional goodies to work with them. Among them is the Concat()-method which allows concatenating two IEnumerables.

We recently had the use-case of concatenating two collections with elements of a common super-type:

class Animal {}
class Cat : Animal {}
class Dog : Animal {}

public IEnumerable<Animal> combineAnimals(IEnumerable<Cat> cats, IEnumerable<Dog> dogs)
{
  // XXX: This does not work because Concat is invariant!!!
  return cats.Concat(dogs);
}

The above example does not work because Concat() requires both sequences to have the same element type and returns a combined sequence of this type. If we do not care about the specifics of the subclasses, we can build a Concatenate()-method ourselves which makes the whole thing possible, because instances of both subclasses can be put into a collection of their common parent class.

private static IEnumerable<TResult> Concatenate<TResult, TFirst, TSecond>(
  this IEnumerable<TFirst> first,
  IEnumerable<TSecond> second)
    where TFirst : TResult where TSecond : TResult
{
  IList<TResult> result = new List<TResult>();
  foreach (var f in first)
  {
    result.Add(f);
  }
  foreach (var s in second)
  {
    result.Add(s);
  }
  return result;
}

The above method is a bit clunky to call but works as intended:

public IEnumerable<Animal> combineAnimals(IEnumerable<Cat> cats, IEnumerable<Dog> dogs)
{
  // Works great!
  return cats.Concatenate<Animal, Cat, Dog>(dogs);
}
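
As a side note, an iterator block would preserve the lazy evaluation that the built-in LINQ operators provide – a sketch of the same method using yield return:

private static IEnumerable<TResult> Concatenate<TResult, TFirst, TSecond>(
  this IEnumerable<TFirst> first,
  IEnumerable<TSecond> second)
    where TFirst : TResult where TSecond : TResult
{
  // elements are only produced when the result is enumerated
  foreach (var f in first)
  {
    yield return f;
  }
  foreach (var s in second)
  {
    yield return s;
  }
}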

A variant of the above Concatenate()-method can be useful if you use a collection of the parent class to collect instances of subclass collections:

private static IEnumerable<TResult> Concatenate<TResult, TIn>(
  this IEnumerable<TResult> first,
  IEnumerable<TIn> second)
    where TIn : TResult
{
  IList<TResult> result = first.ToList();
  foreach (var item in second)
  {
    result.Add(item);
  }
  return result;
}

public IEnumerable<Animal> combineAnimals(IEnumerable<Cat> cats, IEnumerable<Dog> dogs)
{
  IEnumerable<Animal> result = new List<Animal>();
  result = result.Concatenate(cats);
  return result.Concatenate(dogs);
}

Maybe the above examples can serve as an inspiration for more utility methods that may improve working with collections in C#.