Improving Windows Terminal

As mentioned in my earlier post about hidden gems in the Windows 10 ecosystem, a very welcome addition is Windows Terminal. Finally we get a well-performing and capable terminal program that not only supports our beloved tabs and Unicode/UTF-8 but also a whole bunch of shells: CMD, PowerShell, WSL and even Git Bash.

See this video of a small ASCII-art code golf written in Julia and executed in a Windows Terminal PowerShell. The really curious may try running the code in the standard CMD-Terminal or the built-in PowerShell-Terminal…

But now on to some more productive tips for getting even more out of the already great Windows Terminal.

Adding a profile per shell

One great thing about Windows Terminal is that you can define a different profile for each of the shells you want to use in it. That means you can provide visual cues like icons, fonts and color schemes to instantly recognize which shell you are in (or which shell hides behind which tab). You can also set a whole bunch of other parameters like transparency, starting directory and the behaviour of the tab title.

Nowadays most of this profile stuff can simply be configured using the built-in Windows Terminal settings GUI, but you also have the option to edit the JSON configuration file directly or copy it to a new machine for a faster setup.

Here is my settings.json provided for inspiration. Feel free to use and modify it as you like. You will have to fix some paths and provide icons yourself.
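If you want to edit the file by hand, a single profile entry looks roughly like the following sketch. This is not my actual configuration, just a minimal example for orientation; the name, paths and color scheme are placeholders you would replace with your own:

{
    "profiles": {
        "list": [
            {
                "name": "Git Bash",
                "commandline": "C:/Program Files/Git/bin/bash.exe -i -l",
                "startingDirectory": "C:/Users/you/projects",
                "icon": "C:/Users/you/icons/git-bash.png",
                "colorScheme": "One Half Dark",
                "tabTitle": "Git Bash"
            }
        ]
    }
}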

Pimping it up with oh-my-posh

If that is still not enough for you, you can install a prompt theme engine like oh-my-posh using a command like

Install-Module oh-my-posh -Scope CurrentUser

and then try different themes with Set-PoshPrompt -Theme <name>. To use your customized settings for a specific Windows Terminal profile, you can specify a commandline that executes the expressions defined in a file:

powershell.exe -noprofile -noexit -command "invoke-expression '. ''C:/Users/mmv/Documents/PowerShell/PoshGit.ps1'''"

where PoshGit.ps1 contains the commands to set up the prompt:

Import-Module oh-my-posh
$DefaultUser = 'Your Name'
Set-PoshPrompt -Theme blueish

Even Microsoft has some tutorials for highly customized shells and prompts.

What does my Windows Terminal look like?

Because seeing is believing, take a look at my setup below, which is based on the instructions and settings.json above:

I hope you will give Windows Terminal a try and wish you a lot of fun customizing it to fit your needs. I feel it makes working with a command prompt on Windows much more enjoyable than before and speeds you up when using many terminal windows/tabs.

A final hint

You may think that you cannot run Windows Terminal as an administrator, but the option appears if you click the downward arrow in the start menu:

My own little Y2K22 bug

Ever since the year 2000 (or Y2K), software developers have dreaded the start of a new year. You never know which arbitrary limit will affect the fitness of your projects. Sometimes it isn’t even the new year (see the year 2038 problem, which will manifest itself in late January of that year). But more often than not, the first day of a new year is a risky time.

Welcome, 2022!

The year 2022 started with Microsoft Exchange quarantining lots of e-mails for no apparent reason other than that it was no longer 2021. I was amused by this “other people’s problem” until my phone rang.

A customer reported that one of my applications wouldn’t start anymore, even though it had run perfectly a few days before – in 2021. My mind began to race:

The application in question hadn’t been updated recently. It had to be something in the code that parses the current date with an unfortunate date/time format. My search for all format strings in the application source code (my search term was “MMddHH” without the quotes) turned up some expected instances like “yyyyMMddHHmmss” and one of a very suspicious kind: “yyMMddHHmm”.

The place where this suspicious format was used read a version information file and reported a version number, some other data and a build number. The build number was defined as an integer (32 bits). Let me explain why this can be a problem:

2G should be enough for everyone!

A signed 32-bit integer cannot hold values of 2³¹ = 2.147.483.648 or above. If you represent the last minute of 2021 in the format above, you get 2.112.312.359, which is beneath the limit, but quite close.

If you add one minute and the year counts up, you’ll be at 2.201.010.000, which is clearly above the value limit and results in either an integer overflow ending in a very negative number or an arithmetic exception.

In my case, it was the arithmetic exception which halted the program in its very first steps while figuring out what, where and when it is.
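To make the failure mode tangible, here is a minimal sketch, assuming a Java-like setting (the post doesn’t show the original code; class and method names are made up). It formats a timestamp with the suspicious pattern and parses the result back into an int; in Java the overflowing parse fails with a NumberFormatException, which plays the role of the arithmetic exception described above:

import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;

public class BuildNumberDemo {
    // reconstruction of the problematic conversion, not the original code
    static int toBuildNumber(LocalDateTime timestamp) {
        String digits = timestamp.format(DateTimeFormatter.ofPattern("yyMMddHHmm"));
        return Integer.parseInt(digits); // fails once the value exceeds Integer.MAX_VALUE
    }

    public static void main(String[] args) {
        // last minute of 2021: 2112312359, still below the limit of 2.147.483.647
        System.out.println(toBuildNumber(LocalDateTime.of(2021, 12, 31, 23, 59)));
        // first minute of 2022: "2201010000" exceeds the limit, the parse throws
        System.out.println(toBuildNumber(LocalDateTime.of(2022, 1, 1, 0, 0)));
    }
}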

This is a rookie mistake that can only be explained by “it evolved that way”. The mistake has been in the source code since the year 2004. I wrote it myself, so it is my mistake. But I didn’t just dream up a weird date format that wouldn’t spark joy 18 years later. I started with a build number from continuous integration. The first build of the project is “build 1”, the next is “build 2”, and so on. You really have to commit early, commit often (and trigger builds) to reach the integer limit that way.

This is true for a linear series of builds. But what if you decide to use feature branches? The branches can happen in parallel and each has its own build number series. So “build 17” could be the 17th build of your main branch that goes into production, or it could be a fleeting build result on a feature branch that gets merged and deleted a few days later. If you want to use the build number for chronological ordering, perhaps to look for updates, you cannot rely on the CI build numbering. Why not use time for your chronological ordering?

Time as an integer

And how do you capture time in an integer? You invent a clever format that captures the essence of “now” in a string that can be parsed as an integer. The infamous “yyMMddHHmm” is born. The year 2022 is a long time down the road if you apply a quick and clever fix in 2004.

But why did the application crash in 2022 without any update? The build number had to be from 2021 and would still pass the conversion. Well, it turned out that this specific application had no build number set, because we changed our build system and deemed this information not important for this application. So the string in the version file was empty. How does an empty string get interpreted as today?

Well, there was another clever piece of code, written by another developer in 2008, that took a string that was null or empty and replaced it with the current date/time. The commit message said “Quickfix for new version format”.
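Put as code, that quickfix probably looked something like this (a hedged reconstruction in the same assumed Java setting, with invented names; imports as in the sketch above):

// hypothetical reconstruction of the 2008 "Quickfix for new version format"
static String buildTimestampOrNow(String buildNumberFromVersionFile) {
    if (buildNumberFromVersionFile == null || buildNumberFromVersionFile.isEmpty()) {
        // no build number set? just fall back to "now" in the same clever format
        return LocalDateTime.now().format(DateTimeFormatter.ofPattern("yyMMddHHmm"));
    }
    return buildNumberFromVersionFile;
}

Combined with the parsing above, an application without a build number works fine in 2021 and throws on its very first step in 2022.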

Combined cleverness

Combine these three things and you have the perfect timebomb:

  1. A clever way to store a date/time as an integer
  2. A clever way to interpret missing settings
  3. A lazy way to introduce a new build process

The problem described above was present in a total of five applications. Four applications had fixed build numbers/dates and would have broken with the next version in 2022 or later. The fifth application had an empty build number and failed exactly as programmed after 01.01.2022.

Lessons learnt

What can we learn from this incident?

First: clever code or a quick fix is always a bad idea.

Second: cleverness doesn’t stack. One clever workaround can neutralize another clever hack even if both “solutions” would work on their own.

Third: If your solution relies on a certain limit to never be reached, it is only a temporary solution. The limit will be reached eventually. At least leave an automated test that warns about this restriction.

Fourth: Don’t mitigate a hack with another hack. You only make your situation worse in the long run.

The fourth take-away is important. You could fix the problem described above in at least two ways:

  • Replace the integer with a long (64 bit) and hope that your software isn’t in production anymore when the long wraps around. Replace the date/time format with the usual “yyyyMMddHHmmss”.
  • Leave the integer in place and change the date/time format to “yyDDDHHmm” with “DDD” being the day of the year. With this approach, you shorten the string by one digit and keep it below the limit. You also make the build number even less readable and leave a timebomb for the year 2100 (both options are sketched below).
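In the assumed Java setting from above, the two options would look roughly like this (a sketch, not the actual fix; the snippet belongs inside a main method with the same imports as before):

DateTimeFormatter longFormat = DateTimeFormatter.ofPattern("yyyyMMddHHmmss");
DateTimeFormatter shortFormat = DateTimeFormatter.ofPattern("yyDDDHHmm");
LocalDateTime firstMinuteOf2022 = LocalDateTime.of(2022, 1, 1, 0, 0);

// option 1: widen to 64 bits and keep the full year
long widenedBuildNumber = Long.parseLong(firstMinuteOf2022.format(longFormat));
// 20220101000000 fits comfortably into a long

// option 2: keep the int, but save one digit by using the day of the year
int shortenedBuildNumber = Integer.parseInt(firstMinuteOf2022.format(shortFormat));
// 220010000 has nine digits and stays below the limit until "yy" wraps around in 2100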

You can probably guess which route I took, even if it was a lot more work than expected. The next blog entry about this particular code can be expected on 01.01.10000.

Mutable States can change inside your Browser console log

So we know that web development must be one of the fastest-changing ecospheres humankind has ever seen (not to mention that JavaScript frameworks and their best practices mutate with a frequency and deadliness similar to coronaviruses). While these new developments can come with great joy and many opportunities, it also means that once in a while we need to take care of older projects that were written in a completely different mindset.

In a way, it’s trivial: even when your infrastructure is prone to constant shifts, any software developer with at least some reputation should strive to write code that stays alive and maintainable for as long as originally intended. Or longer.

But once in a while you run into legacy code that you first have to dissect in order to understand its workings. And for JS, this usually means inserting console.log() statements at various places and tracing them during execution (yeah, I know, there’s a plethora of articles telling you to stop doing that, but let’s just stay at the most basic level here).

Especially in an architecture with distributed, possibly asynchronous events (which helps in reducing coupling, see e.g. the Mediator and Publish-Subscribe patterns), this can help your bug tracing. But there’s a catch. One that took me some time to recognize as quite the villain.

It does not make any sense to me, but for some reason, at least Chrome and Firefox in their current implementations save some effort when console.log() is called with an object: they seem to just hold a reference for lazy evaluation. It can then happen that you look upwards at your log, maybe even scroll there, inspect some value and not realize that you are looking at the current state, not the state at the time of logging!

Maybe that was clear to you. Maybe it never occurred to you because you always cared about using your state immutably. But in case you are developing on some legacy code and don’t know what your predecessor did everywhere, you might not be prepared.

You can easily visualize that difference yourself. Consider this short JS script:

var trustfulObject = {number: 0};
var deceptiveObject = {number: 0};

// let's just increase these numbers once each second
setInterval(() => {
    console.log("let's see...", trustfulObject, deceptiveObject);
    trustfulObject = {number: trustfulObject.number + 1};
    deceptiveObject.number = deceptiveObject.number + 1;
}, 1000);

Let that code run for a while and then open your browser console. Scroll upwards a bit and click on some of the logged objects. You will find that the trustfulObject is always shown as expected (with its value at the time of logging), while the deceptiveObject will always show the number at the time of clicking. That surely surprised me.

In case you are still wondering why: the trustfulObject is freshly created in each step and then reassigned to your reference variable. It seems the browser has no choice but to log the old (correct) state, because the reference is lost afterwards. The deceptiveObject keeps the same reference during the whole runtime, which apparently makes it more attractive for the browser to not evaluate anything until you actually want to know the value.

And then, it lies to you. ¯\_(ツ)_/¯

Two notes:

  1. If you really have to deal with legacy code of a certain size where you cannot easily change that behaviour, you can log your object using JSON.stringify: console.log("let's see…", trustfulObject, JSON.stringify(deceptiveObject)); avoids that lazy evaluation.
  2. Note: don’t be confused, the JS “const” keyword does exactly the opposite of creating an immutable object. It creates an immutable reference, i.e. you can no longer reassign it, but you can still manipulate its content. Exactly what you do not want here.

Of course, in modern times you probably wouldn’t write vanilla JS, and using e.g. React’s useState definitely reduces that issue. But still. If you don’t want to use React & Co. everywhere, then… pay attention.

The boy scout rule and git in practice

There’s a dilemma when applying the boy scout rule to programming: cleaning up code that you happen to come across ‘pollutes’ your merge/pull requests, making them harder to review and therefore less likely to be accepted.

One way to cope with this is to submit the clean-up and the feature/task-related changes separately and merge them back into upstream in separate steps. But oftentimes it is much easier to just fix a small problem right away instead of switching back to your main branch and doing it there. In fact, that hassle might prevent a developer from doing the improvement at all, which I want to avoid. Quite the opposite: I want to encourage my fellow developers to make improvements.

So one thing we do about this is to mark the changes that are unrelated (or only tangentially related) to the task with their own commits and a special prefix in the commit message, like:

BSR: More consistent function signatures

As you might have guessed, BSR stands for boy scout rule. This does not change the fact that the diffs get larger than necessary, but it makes it possible to ‘filter out’ the pure refactorings. In some cases, these commits can later be cherry-picked onto the main branch before the review. Of course, this only works for small refactorings, but this is exactly where the boy scout rule applies.
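In terms of git commands, the workflow looks roughly like this (a sketch; the second commit message, the branch name and the commit hash are placeholders):

# on the feature branch: keep the unrelated clean-up in its own, prefixed commit
git commit -m "BSR: More consistent function signatures"
git commit -m "Implement the actual task"

# later, before the review: bring only the refactoring onto the main branch
git switch main
git cherry-pick <hash-of-the-BSR-commit>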

The four stages of automation – Part II

One of the core concepts of software development and IT in general is “automation”. By delegating work to machines, we hope to reduce costs and save time while maintaining the quality of results. But automation is not an all-or-nothing endeavor; there are at least four different stages of automation that can be distinguished.

In the first part of this blog series, we looked at the first two stages, namely “documentation” and “recurring reminders”. Both approaches are low-tech, but high-impact. Machines only played a minor role – this will change in this part of the series. Let’s look at the remaining two stages of automation:

Stage 3: Semi-automatic

If you have a process that is properly documented and you are reminded of it in a regular fashion, like once a month, you’ll soon find that some steps of the process could be done by a machine, while you as the “human on duty” still pull all the strings that orchestrate the whole thing.

You might know the term “semi-automatic” from firearms: a semi-automatic firearm doesn’t aim or shoot by itself, it just reloads automatically after each shot. The shooter still has to pull (and release) the trigger for every single shot. The shooter is in full control of the weapon; the mechanism just automates the mundane and repetitive task of chambering the next round.

This is the kind of automation we are talking about for stage 3. It is the most common type of automation. We know it from our cars, our coffee machines and other consumer electronics. The car manages a lot of different tasks under the hood while we are still in control of the overall task of driving from A to B.

What does this look like for business processes? One class of stage 3 utilities are reporting tools that gather and aggregate data from different sources and present the result in a suitable manner. In our company, these tools make up the majority of stage 3 services. There are reporting tools for the most important numbers (the key performance indicators – KPIs) and even some for less important, but cumbersome-to-acquire data. Most tools just present a nice website with the latest results, while others send e-mails or create pages in our wiki. If you need a report, just press a button or visit a URL and the machine comes up with the answer. I tend to call this class of tools “sensors”, because they acquire and process data, but don’t decide on the results.

The other common class of stage 3 utilities are “actuators”, in the sense that they perform tasks on command. We have scripts in place to shut down whole clusters of computers, clear wiki spaces or reset custom fields on important data objects, but those scripts are only triggered by humans.

A stage 3 actuator could even be something as small as a mailto link. Let’s say you have to send a standardized e-mail to a known recipient as part of a monthly process. Sure, you can save a draft in your e-mail application, but you can also prepare the whole mail in a URL directly in the documentation of the process:

mailto:nobody@softwareschneiderei.de?subject=The%20schneide%20blog%20rocks!&body=I%20read%20your%20blog%20post%20about%20automation%20and%20tried%20the%20mailto%20link.%20This%20thing%20is%20awesome%2C%20thank%20you!

If you click the link above, your e-mail application will prompt you to send an e-mail to us. You don’t need to follow through – we won’t read it at that address.

You can read about the format of mailto links here, but you probably want to create working mailto links right away, which is possible with this nifty stage 3 service utility written by Michael McKeever (buy him a beer!).

Be aware that this is a classic example of chaining stage 3 tools together: You use a tool to create the mailto link that you use subsequently to write, but not send, e-mails. You, as the human coordinator, decide when to write the e-mail, if you want to adapt it to current circumstances and when to send it. The tools only speed you up, but don’t act or decide on their own.

An important aspect of this type of automation is the human duty of orchestration (which service does its thing when) and the possibility of inspection and adaption. The mailto link doesn’t send an e-mail, it just prepares it for you to send. You have the final word on the things that happen.

If you require this level of control, stage 3 automation is where your automation journey ends. It still needs a competent human operator (what, when, why) – but given decent documentation (as outlined in stage 1), this competence can be delegated quickly. It is also the first automation stage that enables higher effectiveness through speedup and error reduction. The speedup is capped by the maximum speed of the human operator, though.

Stage 4: Full automation

The last stage of automation is “full automation”, which means that a machine gathers the data on its own, comes up with a decision based on the data and acts on its own. This is a powerful tool, but a dangerous one, too.

It is powerful because you just employed an additional worker. Not a human worker, but a machine. It doesn’t go on holiday, it doesn’t lose interest and won’t ask for a bonus.

It is dangerous because your additional worker does exactly what it is told (programmed) to do, even if it doesn’t make sense or needs just the slightest adaptation to circumstances.

Another peril lies in the fact that the investment to reach the fully automated stage is maximized. As with nearly everything related to IT, there is a relevant xkcd comic for this:

https://xkcd.com/1319/

The problem is that machines are not aware of their context. They don’t deal well with slight deviations (like “1,02” instead of “1.02”) and cannot weigh the consequences of task failure. All these things are done by a competent human operator, even without specific training. You need to train a machine for every eventuality, down to the dots.

This means that you can’t just program the happy path, as you do in stage 3, where a human operator will notice the error and act accordingly. You have to implement special-case behaviour, failure detection, failure handling and problem reporting. You have to adapt the program to changes in the environment in a timely manner (this work is also present in stage 3, but can be delayed more often).

If the process consists mostly of routine and recurs often enough to warrant full automation, it is a rewarding investment that pays off quickly. It will take your human-based work to a new level: designing and maintaining an automation platform that is cost-efficient, scalable and adjustable. The main problem will be time-critical adjustments and their overall effects on the whole system. You don’t need routine workers anymore, but you’ll need competent technicians on stand-by.

Examples of fully automated processes in our company are data backups, operating system upgrades, server monitoring and the recurring reminder system that creates the issues for our stage 2 automation. All of these processes have increased reporting capabilities that highlight problems or just anomalies in a direct manner. They all have one thing in common: They are small, work on only one thing and try to do so with minimal dependencies and interaction.

Conclusion

There are four distinguishable stages of automation:

  1. documentation
  2. recurring reminders
  3. semi-automatic
  4. full automation

The amount of human work for the actual process decreases with each stage, while the amount of human work for the automation increases. For most processes in an organization, there will be a sweet spot between process cost and automation cost somewhere on that spectrum. Our job as automators is to find that sweet spot and not apply too much automation.

If you have a good story about not enough automation or too much automation or even about automation being just right – tell us in the comments!

The four stages of automation – Part I

One of the core concepts of software development and IT in general is “automation”, the “creation and application of technologies to produce and deliver goods and services with minimal human intervention” (definition from techopedia).

The problem is that “minimal human intervention” is often misunderstood as “no human intervention”, which is the most laborious and expensive stage of automation and might not have the best economic return on investment. It might be more efficient to leave some degree of intervention in place while investing only a fraction of the automation work and duration.

In order to decide “how much” automation is the most profitable for the foreseeable future, I’ve established a model with four stages of automation that I can quickly check against the circumstances. In this blog post, I describe the first two stages and give some ideas how to implement them.

Stage 1: Documentation

The first step to automation is to just describe the process in a manner that can be repeated. The documentation itself does nothing, but it enables repetition and scalability, two fundamental aspects of automation.

Think about baking a pie. If you just mix some ingredients and put it in the oven for an arbitrary amount of time, you might produce the most delicious pie ever, but you cannot do it again if you don’t remember all details and, even more tragic, nobody else can bake your pie. In order to give others the secret to your special pie, you have to give them the recipe – the documentation of its production process. Once the recipe is written down (and published), it can be read by many bakers in parallel and enables all of them to recreate your invention (to some degree at least, there are probably still some tricks and secrets left out of the recipe).

While the pie baking process still needs human intervention (the bakers that read the recipe and transform it into a series of actions), it is automated in the sense that it can be repeated with roughly the same result and these repetitions, given enough bakers and ovens, can be performed in parallel.

The economic evaluation of documentation shows that it is really easy to create, fast to change and, given some quality of content, nearly universally understood. If you don’t want to invest a lot of time and money, documenting your processes is the first and most important step towards automation. For a lot of your processes, it will also be the last possible stage of automation, at least until artificial intelligence learns your tricks and interpretations.

Documenting your processes is (no surprises here) the foundation of most quality assurance standards. But it is surprisingly hard to start with. This is not a matter of tools – pen and paper will do in the beginning. It is a shift in your mindset. The goal is no longer to bake a pie. It is to write a recipe while you bake the pie as a reference piece for it. If you want to start documenting your processes, here are three tips that might help you:

  • Choose a digital tool that doesn’t obstruct you. It should be digital because this facilitates distribution and collaboration. It should not hinder you because every time you need to think about the tool, you lose the focus on your process. I’m using a Wiki that lets me type the things I want to say without interference. In my case, that’s Confluence, but Obsidian or other tools are just as good.
  • Try to adopt a narrative structure to describe your processes. Think about the established structure of a baking recipe. For example, there is an ingredients list separate from the preparation instructions. If you find a structure that works for you, repeat and evolve it. It helps you and your readers to stay on track and not scatter the information all over the place. In my case, the structure consists of four paragraphs:
    1. Event/Trigger – The circumstance(s) that should be present at the beginning of the process
    2. Actions/Steps – The things you have to do, described in the necessary details for the target audience. This is often the paragraph with the most content.
    3. Result – Description of the circumstance(s) that should be present once you’ve done all steps. In recipes, this is often a photo of the meal/pastry. For first-time performers, this description is important to be able to declare success.
    4. Report – Who needs to be informed? This paragraph is often missing in descriptions, but crucial for collaboration. If nobody knows there is a fresh and delicious pie in the kitchen, it will not be eaten. Ok, that’s a bad example: Pies in the oven announce themselves with their smell. Digital products often have no smell – inform your peers!
  • Iterate over your documentation any chance you get. It is easy to bake your signature pie from memory. But is the recipe still accurate? Are there details that are important, but missing from the description? Your digital tool probably allows immediate modification of your documentation and maybe even informs interested readers about your update. Unchanged documentation is dead documentation. In my case, I always open the process description on a secondary monitor whenever I perform the process. Sometimes, I invite others to perform the process for me to review the accuracy and fidelity of the documentation.

If you can open the process description of many of your routine tasks, you have reached the first stage of automation for your work. Of course, there will be lots of things you do that are not “routine” – yet. With good documentation, you can even think about delegation – the art of maximizing the amount of work done by others – without sacrificing essential quality.

In later stages, the delegation target (the “others”) will be machines.

Stage 2: Recurring reminders

If you’ve documented a process with a structure similar to mine, you specified a trigger or event that requires the process to be performed. Perhaps it’s the first day of the month and you need to update your timesheet or send out the appointment overview for the next weeks. Maybe your office plants silently thirst for some water. Whatever it is, if your process is recurring, you might think about recurring reminders.

This will not automate the performance of the process, but it unburdens you from thinking about the triggering event. The machines will now remind you about certain tasks. This can be a simple series of reminders in your scheduler app or, like in my case, the automated creation of issues (or todo items, tickets) in your work planning application.

For example, once every few weeks, a friendly machine creates an issue for me to write a blog entry on this blog. It does the same for my colleagues and even sets a “due date” (The due date for this post is today). With this simple construct, some discipline and coordination, we’ve managed to write one blog post every week for more than ten years now.

The machine that creates the issues doesn’t check them. It doesn’t supervise their progress and isn’t offended if we “won’t fix” issues because we are on holiday or the plants are still wet. It will just create the next issue according to the rhythm. It is our duty as humans to check if that rhythm fits or if it should be sped up or slowed down.

If you want to employ really elaborate triggers for your reminders, a platform like “If this then that (IFTTT)” might be the right choice. Just keep in mind that with complexity, there often comes rigidity, which isn’t always desired.

By automating the aspect of reminding us about routine tasks, we can concentrate on doing them. We don’t forget to write blog posts or to water the plants because the machine doesn’t forget. Another improvement is that this clearly distinguishes between routine (has a recurring reminder) and anomaly. If a special one-time task occurs again, we give it a recurring reminder and adopt it as a new routine task. If a reminder about a routine task is “won’t fixed” often enough without any indication that it will be required again, we delete the reminder.

Conclusion for part I

If you combine automated recurring reminders with structured documentation, you already gain a lot of advantages and can free your mind from the mundane details and intervals of your routine tasks. You haven’t automated any aspect of your real work yet, which means that these two stages can be applied to most if not all workplaces.

In the next part of this series, we will look at the two stages that become integrated with your actual work. Stay tuned!