return first example

It seems my “return first” post was not as enlightening as I had hoped. It was posted on reddit, and while the majority of commenters completely missed the point, it apparently wasn’t really clear even for those who read more than just the title. Either way, I am to blame for that – the examples and my reasoning were not very conclusive. So let me try to clear up the confusion with a better example.

First things first, here’s the mantra again: Whenever you want to call a function, ask yourself:

Can I return first?

But now to the example:

Parsing array braces

The task was to parse a string with a data-type in it. This was already working for single-value types, so we could parse "int", "double", "string" etc, via the function from_input_type. Now I was to extend it to also parse array definitions with one or two fixed dimensions, like "int[5]" or "double[4,7]".

My first attempt, implementing it as a constructor taking the definition string, looked like this:

auto suffix_begin = type_code.find('[');
if (suffix_begin == std::string::npos)
{
  this->type = from_input_type(type_code);
  return;
}

auto suffix_end = type_code.find(']', suffix_begin);
if (suffix_end == std::string::npos)
{
  throw std::invalid_argument("Malformed attribute type suffix: no end brace.");
}

auto type_tag = type_code.substr(0, suffix_begin);
this->type = from_input_type(type_tag);
auto in_brackets = type_code.substr(suffix_begin+1, suffix_end-suffix_begin-1);

auto separator = in_brackets.find(',');
if (separator == std::string::npos)
{
  this->rank = attribute_rank_t::one_d;
  this->dim[0] = parse_size(in_brackets);
  return;
}
  
auto first = in_brackets.substr(0, separator);
auto second = in_brackets.substr(separator+1);

this->rank = attribute_rank_t::two_d;
this->dim[0] = parse_size(first);
this->dim[1] = parse_size(second);

It’s not pretty, but it passed all the tests I set up for it. And this was before the refactoring – I already knew there was something else coming up: In a different constructor, we wanted to parse type definitions that look similar, but are not quite the same. Instead of one or two fixed dimensions, the brackets have to be empty there, e.g. "float[]" or "string[]". Note that the brackets are still optional; plain single-value types are allowed as well.
Now I wanted to reuse the code that locates the brackets, but the current structure wasn’t really well suited for that, with the member initialization spread all over the function. Obviously, the code parsing the contents of the brackets (from the auto separator = ... line down) was of no use for the second case; the first half is the interesting bit here. So I was looking at the calls to from_input_type in the upper half and asked myself: Can I return first, before calling this? The answer is, of course, yes.

struct type_with_brackets_t
{
  std::string_view type;
  std::string_view in_brackets;
  // There's a difference between empty brackets (e.g. string[])
  // and no brackets (e.g. string)
  bool has_brackets = false;
};

type_with_brackets_t split_type(std::string_view const& type_code)
{
  auto suffix_begin = type_code.find('[');
  if (suffix_begin == std::string::npos)
  {
    return {type_code, {}, false};
  }

  auto suffix_end = type_code.find(']', suffix_begin);
  if (suffix_end == std::string::npos)
  {
    throw std::invalid_argument("Malformed attribute type suffix: no end brace.");
  }

  auto type_tag = type_code.substr(0, suffix_begin);
  auto suffix = type_code.substr(suffix_begin+1, suffix_end-suffix_begin-1);
  return {type_tag, suffix, true};
}

With this, we can replace the upper half of the first function with:

auto [tag, in_brackets, has_brackets] = split_type(type_code);
this->type = from_input_type(tag);
if (!has_brackets)
  return;

/* continue parsing in_brackets */

The other “int[]” case can obviously be implemented very easily now:

auto [tag, _, has_brackets] = split_type(type_code);
this->type = from_input_type(tag);
this->is_array = has_brackets;

Of course, when just extracting the code as a function, you could be tempted to also call from_input_type in that function, but return first guided us away from that. I think this is a very good outcome, as it clearly separates splitting the string from interpreting the parts, naturally eliminating the duplicated from_input_type call. You can still have a function that does both, if you want, by adding a small facade around split_type that also does the conversion.
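
For illustration, such a facade could look roughly like the following sketch. Here, attribute_type_t stands in for whatever type from_input_type actually returns (the original code only ever stores it in this->type), so treat the exact names as placeholders:

struct parsed_type_t
{
  attribute_type_t type;         // already converted via from_input_type
  std::string_view in_brackets;  // bracket content, still unparsed
  bool has_brackets = false;
};

parsed_type_t split_and_convert(std::string_view const& type_code)
{
  auto [tag, in_brackets, has_brackets] = split_type(type_code);
  return {from_input_type(tag), in_brackets, has_brackets};
}

Both constructors could then start from split_and_convert and only differ in how they interpret in_brackets.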

I hope this example cleared up the method a bit more. One reason why deeply nested function calls are so common is that most languages make it easier to pass parameters than to return multiple values. You will often find that this style requires more custom data-types that are just used as function return values. But functions will compose more naturally, because you bundle smaller pieces – e.g. in this case, you can use split_type without from_input_type – and I believe that will pay off in the end.

JDBC’s wasNull method pitfall

Java’s java.sql package provides a general API for accessing data stored in relational databases. It is part of JDBC (Java Database Connectivity). The API is relatively low-level, and is often used via higher-level abstractions based on JDBC, such as query builders like jOOQ, or object–relational mappers (ORMs) like Hibernate.

If you choose to use JDBC directly you have to be aware that the API is relatively old. It was added as part of JDK 1.1 and predates later additions to the language such as generics and optionals. There are also some pitfalls to be avoided. One of these pitfalls is ResultSet’s wasNull method.

The wasNull method

The wasNull method reports whether the database value of the last ‘get’ call for a nullable table column was NULL or not:

int height = resultSet.getInt("height");
if (resultSet.wasNull()) {
    height = defaultHeight;
}

The wasNull check is necessary, because the return type of getInt is the primitive data type int, not the nullable Integer. This way you can find out whether the actual database value is 0 or NULL.

The problem with this API design is that the ResultSet type is very stateful. Its state does not only change with each row (by calling the next method), but also with each ‘get’ method call.

If any other ‘get’ method call is inserted between the original ‘get’ method call and its wasNull check, the code becomes wrong. Here’s an example. The original code is:

var width = rs.getInt("width");
var height = rs.getInt("height");
var size = new Size(width, rs.wasNull() ? defaultHeight : height);

A developer now wants to add a third dimension to the size:

var width = rs.getInt("width");
var height = rs.getInt("height");
var depth = rs.getInt("depth");
var size = new Size(width, rs.wasNull() ? defaultHeight : height, depth);

It’s easy to overlook the wasNull call, or to wrongly assume that adding another ‘get’ method call is a safe code change. But the wasNull check now refers to “depth” instead of “height”, which breaks the original intention.

Advice

So my advice is to wrap the ‘get’ calls for nullable database values in their own methods that return an Optional:

Optional<Integer> getOptionalInt(ResultSet rs, String columnName) throws SQLException {
    final int value = rs.getInt(columnName);
    if (rs.wasNull()) {
        return Optional.empty();
    }
    return Optional.of(value);
}

Now the default value fallback can be safely applied with the orElse method:

var width = rs.getInt("width");
var height = getOptionalInt(rs, "height").orElse(defaultHeight);
var depth = rs.getInt("depth");
var size = new Size(width, height, depth);

Running Tango-Servers in Docker

Containers are a great way of running your software in an environment that you defined and no one else needs to maintain or even know about. This is especially true if your software provides its service using a network port. All that operators have to provide is the execution platform for the container and the required resources.

Tango servers fit quite well into this type of software: essentially, they provide a service over the network. Unfortunately, they need a Tango control system to be fully usable, so that means a few additional services. Luckily, there are docker images for this, so building a stack of containers to run your Tango server in the scope of a control system is quite easy (sample docker-compose.yml):

version: '3.7'
services:
  my-tango-server:
    container_name: my-tango-server
    image: my-ts
    environment:
      - TANGO_HOST=tango-cs:10000
    depends_on:
      - tango-cs
  tango-cs:
    container_name: tango-database
    image: tangocs/tango-cs:9.3.2-alpha.1-no-tango-test
    ports:
      - "10000:10000"
    environment:
      - TANGO_HOST=localhost:10000
      - MYSQL_HOST=tango-db:3306
      - MYSQL_USER=tango
      - MYSQL_PASSWORD=tango
      - MYSQL_DATABASE=tango
    links:
      - "tango-db:localhost"
    depends_on:
      - tango-db
  tango-db:
    container_name: mysql-database
    image: tangocs/mysql:9.2.2
    environment:
      - MYSQL_ROOT_PASSWORD=root
    volumes:
      - tango_mysql_data:/var/lib/mysql
volumes:
  tango_mysql_data:

Example Dockerfile for the image my-ts used for the my-tango-server container:

FROM debian:buster

WORKDIR /tango-server
COPY ./my-tango-server .

RUN DEBIAN_FRONTEND=noninteractive apt-get install -y ./*.deb

CMD my-tango-server myts

Unfortunately this naive approach has the following issues:

  • The network port of our Tango server changes with each restart and is not available from outside the docker stack
  • The Tango server communicates the internal IP address of its container to the Tango database, so connection attempts from the outside fail even if the port is correct
  • Even though the Tango server container depends on the Tango control system container, the Tango server might start up before the Tango control system is available

Let us tackle the issues one by one.

Expose a fixed port for the Tango server

This is probably the easiest issue to fix because it is well documented for OmniORB and widely used. So let us change docker-compose.yml to expose a port and pass it into the container as an environment variable:

version: '3.7'
services:
  my-tango-server:
    container_name: my-tango-server
    image: my-ts
    ports:
      - "45450:45450"
    environment:
      - TANGO_HOST=tango-cs:10000
      - TANGO_SERVER_PORT=45450
    depends_on:
      - tango-cs
  tango-cs:
 ...

And use it in our Dockerfile for the my-ts image:

FROM debian:buster

WORKDIR /tango-server
COPY ./my-tango-server .

RUN DEBIAN_FRONTEND=noninteractive apt-get install -y ./*.deb

CMD my-tango-server myts -ORBendPoint giop:tcp::$TANGO_SERVER_PORT

Fine, now our Tango server always listens on a defined port and that same port is exposed to the outside of our docker stack.

Publish the correct IP/hostname of the Tango server

I found it hard to find documentation about this and how to put it on the command line correctly, but here is the solution to the problem. The crucial command line parameter is -ORBendPointPublish. We need to pass a hostname of the host machine into the container and let OmniORB publish that name (tango-server.local in our example):

version: '3.7'
services:
  my-tango-server:
    container_name: my-tango-server
    image: my-ts
    ports:
      - "45450:45450"
    environment:
      - TANGO_HOST=tango-cs:10000
      - TANGO_SERVER_PORT=45450
      - TANGO_SERVER_PUBLISH=tango-server.local:45450
    depends_on:
      - tango-cs
  tango-cs:
 ...

And the corresponding changes to our Dockerfile:

FROM debian:buster

WORKDIR /tango-server
COPY ./my-tango-server .

RUN DEBIAN_FRONTEND=noninteractive apt-get install -y ./*.deb

CMD my-tango-server myts -ORBendPoint giop:tcp::$TANGO_SERVER_PORT -ORBendPointPublish giop:tcp:$TANGO_SERVER_PUBLISH

Waiting for availability of a service

This one is not specific to our Tango server and the whole stack, but a general problem when building stacks where one service depends on another one being up and listening on a network port. Docker Compose takes care of the startup order of the containers, but does not check for the readiness of services running in the containers. In Linux containers this can easily be achieved by using the small shell utility wait-for-it. Here we only need to make some changes to our Dockerfile:

FROM debian:buster
RUN DEBIAN_FRONTEND=noninteractive apt-get update && apt-get install -y wait-for-it

WORKDIR /tango-server
COPY ./my-tango-server .

RUN DEBIAN_FRONTEND=noninteractive apt-get install -y ./*.deb

CMD wait-for-it $TANGO_HOST -- my-tango-server myts -ORBendPoint giop:tcp::$TANGO_SERVER_PORT -ORBendPointPublish giop:tcp:$TANGO_SERVER_PUBLISH

Depending on your distribution/base image, the installation of the tool may differ; there are also similar alternatives like dockerize or docker-compose-wait.

Summing it up

After moving some rocks out of the way it is dead easy to deploy your Tango servers together with a complete Tango control system to any host platform providing a container runtime. You do not need to deliver everything in one single stack if you do not want to. Most of the stuff described above works the same when deploying the Tango control system and the Tango servers in independent stacks or as separate containers, even on multiple hosts.

Hacking one’s wetware by sleeping slantwise

Over the turn of the year I took some weeks off, in order to work on some private projects, finish some books that I had left halfway-finished for far too long, and of course, to reflect about what concepts like New Year could actually mean. As usual, the short answer is… not much per se, but one can seize the occasion to contrive a few goals for the year. You know, not these mundane, marginal resolutions like “on February 20th I will definitely go for a run”, or ambiguous abstracts like “in 2021 I’m finally gonna be a people person!”, but a more profound search for something new, a kind of evaluation of undiscovered instruments in the toolkit of one’s being, the juicy stuff.

Now we’ve seen for quite some years that the world of self-improvement likes to border on the superstitious. From particularly fine-tuned compositions of one’s diet plan to the sheer religious belief in certain routines, there’s no shortage of shady suggestions that draw their data from unique success stories, which makes me wonder: even if there was a kind of biological truth to these underlying claims, how could I survive the cringing of my heart that I would experience by reading these articles?

A special downside of solutions of the “collection of very intricate details” kind is that they aim to intervene in your life at a very incessant level. That is, you have to think about them all day in fear of breaking the patterns, i.e. you not only distract yourself from the stuff you actually want to do, but you also don’t get any certainty of feedback: if something appears to help, what exactly is it that helps; and if there’s no effect to be noticed, which detail was maybe done wrong and blasted away all the other efforts? I guess I would only resort to such methods if I had the impression that something’s seriously wrong with me, and I currently don’t notice people telling me this more than once a week, so it’s fine.

Then there are the kinds of solutions that I consider just minor variations of the stuff that one already sort of knows. E.g. I don’t consider “doing some physical movement once in a while is quite ok” a real form of “bio-hack”, as that is just common knowledge. Ok, you still have to actually apply it, fair enough, but from an intellectual standpoint it’s boring.

So anyway, I’m still convinced there ought to be some ways of modifying your lifestyle at quite a beginner-suitable level, ones that can easily be opted into, with a low requirement of risk or investment – and that’s where I usually start listening.

A few years back, for instance, that was the discovery of intermittent fasting, which in my case happened to show nearly instantaneous effects, mostly positive (e.g. for subjective impressions of my attention and overall well-being). This is something that I’d at least easily recommend trying out a few times, but as for me, I’m not really inclined to implement it again right now.

One can probably be a bit ambivalent about all these over-the-(online-)counter supplement prescriptions that are listed everywhere. I’m not going to recommend the regular use of any substance here, but there’s plenty of articles about nootropics or other cognitive enhancers; some of these articles also border on the quasi-religious realm, others appear more scientific (the usage of coffee lives in the same domain, and I like that a lot) – so I’ll leave it to the reader to form his own opinion.

Having said that, I can now finally reveal what this post is all about, as I seem to have found another intriguing way which I’ve been trying out for a few weeks now. I found the claim that it is advantageous to sleep on an inclined bed. That certainly was outside my current scope; the underlying claims are about an improved flow of your glymphatic system, as well as pressure regulation. Be that as it may, it doesn’t sound harmful and it’s easily obtainable: by raising your bed’s head end about 10-15cm with some suitable, stable risers, available for below 20€. I only found it weird for the first night and seemed to wake up in a better mood. Together with such a nice add-on as a wake-up light (if you have one), this certainly qualifies as a “why didn’t anyone tell me earlier?”-moment for me.

So, if you have more intriguing bio-hacks that you consider definitely on the non-quacky side, I’m interested 🙂

Literate formal math

In my spare time, I like to develop formal mathematics using the proof assistant and programming language Agda. The best impression of what my latest code looks like is given by the HTML rendering of its latest version, hosted here:

https://felix-cherubini.de/sag/Cubical.AlgebraicGeometry.Spec.html

My post is not about the mathematical content – that can safely be ignored for the matter of this post. What I want to show you is what the code (and the HTML generated from it) looks like. If you want to look at the source of this file on GitHub, you have to use the ‘raw’ view, since GitHub decides from the file ending that it should be rendered like a markdown file:

https://raw.githubusercontent.com/felixwellen/cubical/basic-algebraic-geometry/Cubical/AlgebraicGeometry/Spec.lagda.md

One thing I am constantly dissatisfied with when writing mathematics as formal code is that the result is usually far less readable than mathematical articles or textbooks. Most things I wrote recently in Agda just look like functional source code and are probably not too inviting for everyone in my target audience. Of course, when thinking about my math code, I always ask myself if something that works there carries over to practical programming. I guess in this case I am just rediscovering the fun of something quite old and well known in practical programming.

Until about a month ago, the best remedies for my not-so-readable math code were to extensively use Agda’s mixfix syntax, math symbols via unicode, and large comments, sometimes even with diagrams in ASCII art. While my mixfix use turned out to be a bit too much, the other two things work quite ok. However, above I showed you the results of something I tried quite late: literate programming!

I learnt how to use Agda’s limited literate programming capabilities together with some pandoc post-processing here:

https://jesper.sikanda.be/posts/literate-agda.html

I knew about the literate programming features, but I underestimated the difference they make. What I use so far amounts to not much more than having comments that are rendered to HTML in some way. The real difference is that I start with the text explaining what the code is about.

I am pretty sure, though, that I would probably not have started to use it so enthusiastically if I hadn’t been writing math. This is because literate math coding lets you mix prose and math code in very much the same way as it is usually done when writing math the traditional way. So one could say that I already learnt how to write literate code in my mathematical education, which allows me to do it now without too much thinking and discipline.

I do not think that it is a good idea to write all code like that. In the example above, my goal is to write an interesting math story which hides away a lot of details. In a practical project, it might help to write, say, the main method in this style to give a top-level explanation of how things are organized.

Let me state my main point once more: It helps a lot to have a text file which also contains code, instead of a code file which also contains text (i.e. comments). It makes me feel like I am editing a text file when editing the literate code. Which is great, because it helps against my main problem with comments: I forget to change them, because they are not a part of the code.

return first

Let me introduce the “return first” method. Fear not, this is not a treatise on guard-clauses. What it is, is both a code design approach and a refactoring method. It starts with a simple question to ask yourself whenever you want to call a function:

Can I return first?

That’s supposed to be catchy, but it is probably not terribly enlightening. Let me demonstrate with an example. I’ll be using C++, but I dare say that this method can be used in all imperative languages.

void process_all(
  std::vector<input_info_t>& input_list,
  context_t const& context,
  target_t const& target)
{
  for (auto& each : input_list)
  {
    if (some_filter_applies(each, context))
    {
      hand_off_to(each, target);
      continue;
    }
    
    process_one(each, context);
    hand_off_to(each, target);
  }
}

For the sake of this example, the three functions some_filter_applies, process_one and hand_off_to are given and cannot be changed. Let us try to improve process_all by extracting a function:

void maybe_process_and_hand_off(
  input_info_t& input,
  context_t const& context,
  target_t const& target)
{
  if (some_filter_applies(input, context))
  {
    hand_off_to(input, target);
    return;
  }
  
  process_one(input, context);
  hand_off_to(input, target);
}

void process_all(
  std::vector<input_info_t>& input_list,
  context_t const& context,
  target_t const& target)
{
  for (auto& each : input_list)
  {
    maybe_process_and_hand_off(each, context, target);
  }
}

So what do we have now:

  1. 26 instead of 17 lines. 22 instead of 13 if we do not count lines with just braces, a 70% increase.
  2. A pretty clumsy name for the function called in the loop. Can you do better?
  3. We have to pass the target all the way to hand_off_to without really using it directly in maybe_process_and_hand_off.
  4. The complexity is more or less the same.

So that was not great. So let us try to use return first and focus on hand_off_to. What if instead of calling hand_off_to, we just return first, and then do it? In this case, it’s pretty easy, since hand_off_to is a tail-call in each case:

void maybe_process(
  input_info_t& input,
  context_t const& context)
{
  if (some_filter_applies(input, context))
  {
    return;
  }
  
  process_one(input, context);
}

void process_all(
  std::vector<input_info_t>& input_list,
  context_t const& context,
  target_t const& target)
{
  for (auto& each : input_list)
  {
    maybe_process(each, context);
    hand_off_to(each, target);
  }
}

Now we no longer have to pass target through a function that does not need it, which makes both the call site and the function declaration simpler. A few other refactorings become available as well. Let’s assume some_filter_applies is pure and process_one only changes its input parameter, as the signatures suggest. We can use loop fission, and inline the function again:

void process_all(
  std::vector<input_info_t>& input_list,
  context_t const& context,
  target_t const& target)
{
  for (auto& each : input_list)
  {
    if (some_filter_applies(each, context))
    {
      continue;
    }
  
    process_one(each, context);
  }
  for (auto const& each : input_list)
  {
    hand_off_to(each, target);
  }
}

“return first” actually works for all control structures, not just functions. So in this case we returned from the first loop before starting to call hand_off_to multiple times. Oftentimes, the code will not be as easy to refactor, because there is actually some data flowing between the function we’re in and the one we’re calling. The simple solution then is to pack all the parameters into a struct and return that, aka using data as the interface instead.

void hand_off_individually(
  std::vector<input_info_t>& input_list)
{
  for (auto& each : input_list)
  {
    auto target = compute_target(each);
    if (!target.valid())
      continue;

    hand_off_to(each, target);
  }
}

That can be turned into this:

void hand_off_individually(
  std::vector<input_info_t> const& input_list)
{
  struct targeted
  {
    input_info_t info;
    target_t target;
  };
  std::vector<targeted> valid;
  for (auto const& each : input_list)
  {
    auto target = compute_target(each);
    if (!target.valid())
      continue;
    valid.push_back({each, target});
  }

  for (auto const& each : valid)
  {
    hand_off_to(each.info, each.target);
  }
}

This is definitely longer, and probably not worth the hassle for this contrived example – it is more to show how to do it, not that it is always effective.

Evaluation

In real programs, applying “return first” is often worthwhile. It makes the code flow “wider instead of deeper”, which is often easier to follow, especially if you try to debug or measure/profile your code. It is also a golden recipe for enabling batching, which is crucial whenever you’re dealing with latency, e.g. when accessing RAM or building web requests – see the sketch below. Have you tried this technique before? Do you, maybe, know it by another name? Do tell!
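
As a rough illustration of the batching remark, here is a minimal sketch that builds on the hand_off_individually example from above. It assumes a hypothetical batch variant hand_off_batch that ships all valid items in one go (e.g. as a single web request); the other types and functions are the same as before:

struct targeted
{
  input_info_t info;
  target_t target;
};

// Assumed batch variant of hand_off_to: ships a whole batch at once.
void hand_off_batch(std::vector<targeted> const& batch);

void hand_off_individually(std::vector<input_info_t> const& input_list)
{
  std::vector<targeted> valid;
  for (auto const& each : input_list)
  {
    auto target = compute_target(each);
    if (!target.valid())
      continue;
    valid.push_back({each, target});
  }

  // All hand-offs now happen in one place, so they can be batched.
  hand_off_batch(valid);
}

Because “return first” moved all hand-offs into one spot, switching from per-item calls to one batched call becomes a local change.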

Key ingredients for the home office

Since March 2020, we have transformed from an “on-site” company to a remote company – not because we particularly wanted to, but because the corona pandemic forced us to. Our office is not suitable to ensure transmission safety, so I decided that working from home was the lesser problem. When I say “transformed” and “decided”, please bear in mind that these are notions in retrospect. The decision was made on Monday, March 16th 2020, and the transformation happened within the next two days.

But there is a real difference between being operable and fully equipped for the situation. We were operable in the remote situation within 48 hours. But we still work on improving our equipment to match the situation that seems to linger a lot longer than initially anticipated. This blog post tries to summarize what we’ve learned since March 2020 in regards to equipping home office workplaces past the makeshift phase.

The fundamentals

The most fundamental ingredients of any office workplace are the table and the chair. If either of those lacks the necessary ergonomic features, your comfort will never be the same as in the office (provided that your equipment there is adequate). And this constant discomfort will permeate everything you do.

The chair problem was easier to detect because the chair shows up in video calls; it is part of the “zoom room”. Still, it took some time to order new chairs or transport existing office chairs to the home offices. If you experience back pain or mechanically induced headaches, review your chair thoroughly.

The table was more tricky, because it is typically invisible during video calls. My approach was to retrieve a photo of every home office space and talk about the possible improvements. During these talks, we came up with two solution categories that I want to present.

The notebook workplace

We all had work notebooks as secondary computers before the pandemic. So it was not a problem to start working from home on those notebooks; we’ve done it before. But if all you do is work directly at a notebook, your body posture will be suboptimal, even with the best chair and table. We equipped the notebook-based workplaces with the following extras:

  • An external keyboard
  • An external computer mouse
  • One (or better: two) additional monitors
  • A matching docking station, at least to connect to the monitors
  • A notebook riser stand

The last item, the notebook riser stand, was the game changer when it came to multi-monitoring (two or three monitors). It elevates your notebook to the same height as the other displays and might even change its angle. This transforms your notebook from being the CPU unit with a cumbersome monitor into a secondary monitor with a CPU. The riser stand doesn’t cost much (if you don’t go overboard with its design) and provides you with more table space and improved displays.

The existing computer workplace

Because we are software developers, we mostly have a very decent computer already fully equipped at home. The only problem: The computer is for private tasks and should stay that way. We want to separate our work environment from our leisure environment as much as possible. But several developers wanted to use their usual “battlestation” for home office work, too.

In this case, we bought a lavish SSD as an additional boot drive for the home PC. This separates the work operating system from the leisure drive as much as the notebook approach does. And the existing hardware can be used for both timeslots, much to the comfort of the developer.

The video conference equipment

But regardless of your approach (notebook or existing computer), there are still some things missing that improve the quality of work of yourself and your colleagues tremendously:

  • A good headset, preferably with top-notch comfort and active noise cancelling (ANC)
  • An at least decent webcam

Most notebooks provide a mediocre webcam and a low quality microphone. Do yourself (and your communication partners) a favor and invest in a good microphone. Oftentimes, it is coupled with good headphones. The difference between a good audio setup and an echo-prone makeshift solution is the deciding factor when essentially all communication with your colleagues goes through this channel.

The webcam is not as essential as the audio equipment, because it “only” affects your communication partners, but it adds a nice touch to your other equipment. You don’t have to go overboard on it, a model for one hundred euros is already an improvement.

The bottleneck

One thing that can really invalidate most of the other improvements is a slow internet connection. This is probably the hardest thing to fix in a timely manner, but give it some thought. If your internet connection is too slow for your daily work and communication pattern, it will be a constant annoyance. Just because it takes some time to improve doesn’t mean you shouldn’t try.

We will probably remain in this situation long enough to still reap the profit of our effort. And even if not (at least we can hope), nobody ever complained about an internet connection that is a bit oversized.

TL;DR

If you didn’t read the article, here are the major take-away points neatly summarized:

  • Ergonomic chair and table
  • Notebook with external keyboard, mouse and monitor
  • Notebook riser stand
  • SSD for dual boot systems
  • Good headset
  • External webcam
  • “Broadband” internet

If you miss an item from this list for your home office, give it a thought. And if you plan to only think about one item, think about your chair first.

What are your experiences with working from home? What accessory makes your work life better? Give us a hint by writing a comment below. Thank you!

Windows 10 quality of life features

Starting with Windows 10, Microsoft switched from big-bang releases of its operating system to so-called rolling releases: they release new features and improvements in regular intervals – once or twice a year – without changing the product name.

The great thing is that users get the improvements made by Microsoft’s engineers much sooner than in the past, when you had to wait several years for a big “service pack” to arrive or even a new major release of Windows like 2000, XP, Vista, 7, 8, 8.1 and finally 10 (I am leaving out the dark times on purpose 😉).

The bad thing is that it is harder to see what version or release you are running. Of course there is a (less visible) name for every Windows release. This version or codename sometimes gets mentioned on support pages or in blog posts, because functionality of Windows 10 can change significantly between these rolling feature updates. And sometimes you may run some app or tool that tells you that you need Windows 10 2004 or higher.

What version of Windows 10 am I running?

I know of two simple ways to find out what version of Windows 10 you are running currently:

  1. Running the tool winver
  2. Opening the Settings -> System -> Info page

Why does it matter?

Another downside is that users often are not aware of new stuff added to their operating system. And Microsoft does an awful job promoting the changes and improvements!

Of course there are announcements about the big things after upgrading your operating system to the next feature level. And Microsoft uses these for marketing its own apps and services. They slap their new Edge browser in your face on every occasion and try to trick you into creating a Microsoft account. It is absolutely not obvious how to use Windows without a Microsoft account like in the decades before. Skip the process here, continue without and risk your life…

On the other hand they really improve their software and slowly but steadily round the rough edges of their system. The UIs for environment variables are finally quite usable.

Now back to the main theme of the post: There are some hidden gems built into Windows 10 that I learned of only lately and I think are vastly underadvertised – unlike Microsoft’s marketing of their big products.

Built-in screen recorder

Ok, many gamers may know it because Windows briefly displays the shortcut Win+G when starting a game. It is not only usable for games, though: you can record any window, capture application sounds and record your voice. You can easily record your own screencasts and video tutorials using this built-in solution.

Built-in clipboard manager

How often did you wish to be able to copy multiple items and choose one of the last few copied elements when pasting? While such clipboard managers have been around for a long time and sometimes provide tons of useful features, Windows 10 has a simple one built in. Just press Win+V instead of Ctrl+V to paste your clipboard entry and you will get a list of the copied items to choose from.

Built-in screenshot/snipping tool

Many people may know the old way of making screenshots using the oddly labeled PrtScr-key (sometimes also PrntScrn or simply Print), opening a painting application like MS paint and pasting the image using Ctrl+V. Well, Microsoft improved this workflow a lot by including a snipping tool that you can activate using Win+Shift+S. This tool lets you select either a rectangular or free-form region, a window or an entire screen to capture. After doing that you get a notification allowing you to make modifications to the capture and save it to disk.

On-screen emoji-keyboard

Just a little helper in these modern social media times is the on-screen emoji-keyboard. Using Win+. you can activate it, browse tons of common emojis and enter them into your messages and texts 🐱‍🏍🤘.

Windows Terminal

Ok, this last but not least one is not (yet) built-in and mostly interesting for developers and power users. Nevertheless, I think it is noteworthy that Microsoft finally built a capable terminal application with modern features like multiple tabs, full unicode and font support, customizable background with blur and the ability to host different shells like the old and trusted command prompt CMD, the newer PowerShell and WSL. You can find it in the Microsoft store for free.

Conclusion

While releases of Windows 10 are more subtle than past new Windows releases, many things change both under the hood and visibly for the user. Every once in a while something you missed for years or installed third-party tools for may be added without you knowing. That’s another reason why talking to colleagues and friends and practices like pair-programming and brown-bag meetings are so valuable for sharing knowledge and experience.

I hope there is something for you in my findings of hidden Windows gems. If you have some Windows 10 features you discovered and really like, feel free to leave them in the comments. I will gladly try them out!

The spell that reveals your onboarding decade

Every one of us has started somewhere. By telling you what my first computer was, I also convey a lot about the place and time my journey in IT started. For many of my fellows, it was a Commodore C64 or an Atari 500. But even if I don’t tell you about my first machine, there is a simple “magic spell” that you can cast to at least get a hint about the decade my first working days started, 15 years after my first contact with computers.

The spell is just one word: “container”. What a container is and how to use it is bound to the decades. Let me guide you through some typical answers.

Pre-2010 answer

If you entered the industry around the year 2000, a container was a big chunk of software that you preferably installed on an even bigger machine, the infamous “application server”. The container, or “servlet container”, “application container”, or, if you were with the right folks, “enterprise bean container” (in short: EJB-Container) was the central hub to host all of your web applications. If you deployed your application into the container, it handled the rest, like unpacking the web archive, providing resources and publishing to the internet. Typical names of containers were Tomcat, Jetty, JBoss or WildFly. You can probably see them around even today, because the concept itself is appealing. Some aspects of it inevitably lead to problems, though. Resource management was a big topic. Your application wasn’t expected to care for a database connection, a logging context or, sometimes, even security features, because the container provided those things to it. As you can probably imagine, that left your application crippled and unable to function outside a container.

Pre-2010 containers

So if you onboarded more than ten years ago, your first thoughts reacting to the word “container” will be “big machine”, “slow startup” and “logging framework”. There cannot reasonably be more than one container per machine. Maintaining a cluster of containers would be the work of luminaries. Being asked to start a container on your developer machine is a dreadful endeavour. “Booting the container” is a reason to visit the coffee machine.

Post-2010 answer

But if you started your career less than ten years ago, your reaction to the word “container” will be different. Starting in 2013, a technology named “Docker” reinvented an old practice to isolate processes and package them into a transport format. Simplified enough, a container is just the RAM-based projection of an application image. You boot a container by loading the image into RAM. That’s some of the fastest things you can do on a computer (not really, but it fits the story better). Even better, because each container ideally contains just one small application or part of it, you don’t boot one container per machine, you can run dozens at the same time. Each container brings everything it needs with it and only relies on three common external resources being provided: Networking, persistent storage and a facility to dump logging output.

Post-2010 containers

It is good practice to partition your application into several containers of the post-2010 kind. It is good practice to have them talk to each other over network, either real or simulated. The lines between actual computers get blurry real fast with this kind of containering.

As a youngster, your first thoughts reacting to the word “container” will be “just one?”, “scale up” and “log output management”. You see an opportunity to maintain a cluster of containers. Being asked to start a container on your developer machine is a no-brainer. “Booting the container” is a reason to automate your container infrastructure.

The reactions to the word “container” are very different, based on socialization period. In the old days, pre-2010 containers were boss fight adversaries. Nowadays, post-2010 containers are helpful spirits that just need to be controlled.

Post-2020 answer?

What better way to control the helpful spirits but to deploy them to an environment that handles unpacking, wiring, providing resources and publishing to the internet? Your application isn’t expected to care for topics like scalability, cluster robustness or load balancing. The environment, your container cluster platform, handles those things for you. There can only be one cluster platform per cloud. Being asked to start a cluster platform on your developer machine – well, that’s just not possible, sorry. Best we can do is a minified version of it. Our applications tend to function poorly outside a cluster platform.

As you hopefully can see, developers of all decades crave a thing they tend to call “container” that they can throw their software into to have it perform well without all the hassle of operations. But as soon as they give away responsibility for the environment, they also give away the possibility of comfortable “developer machine” operations. The goal is the same, just the technicality of what exactly a “container” happens to be changes over time.

What is your “spell” that reveals a lot about the responder?

Three programming languages the world isn’t ready for yet

The year 2020 is coming to an end and we can finally relax a bit. In order to lighten up your mood, this blog entry is comprised entirely of humor, satire and plain silliness. Nothing in it has any resemblance with reality and you should not try any of this at work. But if, for whatever reason, you find something useful in here and go to revolutionize the world of software development, remember that we’ve called it first.

There are many programming languages for all sorts of purposes. If you’ve developed software for some decades, you saw them appear, getting useful and being forgotten over the span of time. But what will the future bring? Here are the descriptions of three programming languages that have their purpose, but the world is not ready for them. They aren’t even invented yet!

A programming language for long-lived projects

Most of today’s world source code is categorized as “legacy code”. This derogatory term describes code that is old, unwieldy or just too clever for current programmers. Typical programming languages that have lots of legacy code include Cobol, C and Java. Most programmers don’t associate themselves with that code. It’s “other people’s” code. But there is one programming language that not only embraces the notion of “legacy code”, but in fact imposes it. This programming language is “Legacy”, the most productive one to produce heaps and heaps of, well, legacy code in short amounts of time.

A unique feature of Legacy is that the code can be written at nearly the speed of thought, but is impossible to decipher even minutes later. A typical Legacy project doesn’t employ version control to differentiate between new and old code, but line numbers: lower numbers indicate older code, while higher numbers are written more recently. To really drive this point home, every line of code needs to start with its line number, just like the good old BASIC did. A useful convention in Legacy is to choose the line number based on your current timestamp like 20201221181736 (the moment this text got written). Modern Legacy IDEs do this automatically for you.

(A cool but seldom used syntax feature that is based on the timestamp convention is the time-relative jump: You can address your jump target by absolute or relative line number, but even cooler is the relative amount of time: “jump -3d” resumes code execution at the line you wrote three days ago. Just remember: “jump +3d” is equivalent to undefined behaviour for most practical use cases. Only Legacy wizards can pull the “just-in-time jump” off in a useful manner.)

The most pressing issue with Legacy is its third-party dependencies: There are none. All dependencies are second-party dependencies, meaning they are much more involved in your project than usual. In order to compile or deploy a Legacy project, you need to have the exact version, down to the patch and oh-crap-i-forgot-hotfix number, of

  • the Legacy SDK
  • the compiler
  • the IDE
  • the Legacy runtime
  • and your text encoding

The last point might be surprising, but given the different versions of Unicode and even UTF-8, the Legacy ecosystem has chosen to follow the ideal of Python that dictates the indentation, but redirects it to the parts outlined above. You don’t get to choose the compiler version, the compiler version chooses you, based on your Unicode level. By the way, indentation is a no-brainer in Legacy: Each line starts with its line number; that is enough indentation already.

If you want to deploy a Legacy project to a production server, you need, by the rules above, the exact same machine with a perfect replication of all installations used for development. Because this is a painful endeavour, most developers have adopted the best practice of “one machine per project” and develop directly on the production server. Most of the time, this is a surprisingly powerful machine, making programming even faster (remember, the goal is to produce the most amount of code in the least time). It also shortens the delivery pipeline and facilitates communication between business and development departments, even if not of the pleasant type.

A curiosity that novice Legacy programmers often don’t grok at first is the IMPOSE keyword. It is a variant of the IMPORT functionality of other languages, but doesn’t extend the capabilities of your code. Instead, it limits the ability of the developers in this project by the given imposition. A typical example would be the line

IMPOSE variable name length <= 3

That, as you can read in clear text, limits your variable names to three characters or less. You can often find Legacy code with variable names like “usr” instead of “user”, “pwd” instead of “password” and “idx” instead of “index”. They all follow the imposition above, increase your typing speed and speed up the compilation, which counts as a triple win.

So, if you want to impress your customer with huge amounts of important looking code and build a certain reputation among peers, Legacy might be your new favorite language. And if anybody calls your work result “legacy code” in the future, you should feel validated and proud.

A programming language for mission-critical software

Software written for high-stake contexts like flight control, medical supervision and power plant management needs to meet extreme requirements in regard to correctness, robustness and resilience. Most mainstream programming languages have reacted by providing additional complexity to address the situation. For example, the demand for correct software has led to the rise of testing frameworks that introduce additional syntax and require additional source code that is, by definition, untested in itself.

This is the problem the inventors of “Untested” try to solve. By writing your code in “Untested”, you can forgo all the extra effort of trying to prove it right. Untested code is, by definition, good enough without test. Remember the definition of Michael Feathers?

To me, legacy code is simply code without tests.

Michael Feathers in his book “Working Effectively with Legacy Code”

If you are ok with “Legacy”, you probably also enjoy “Untested”. The language makes it impossible to write tests for your code, so you can fend off the demand for them more easily. Your boss cannot ask for things that are impossible to do.

One interesting way in which “Untested” wards off calls from test routines is to couple every statement with a side effect in the hardware (oftentimes the TRAP flag on the CPU is flipped). Most programmers in traditional languages find those lines not testable and try to factor them away in order to test the rest. Untested factors away the rest. You don’t need to feel guilty about your lacking test coverage – it’s a feature, not a bug.

If your boss asks if a certain module is thoroughly tested, you can respond “yes” in good faith. It’s tested in the best manner possible with “Untested”. If you need to give an overview of your system, you can write “Untested” beside every module and your reviewers will accept it as accurate.

Oh, the problem of long and tedious code reviews is taken into account, too. Because “Untested” code is just “Legacy” code (see Michael Feathers’ definition above), it is impossible to read and understand anywhere except on the developer machine (aka the production server). If a thing is impossible to do, why even start trying? This will give you more time to produce “Untested” code.

And if problems arise in production? Well, you are already on the machine, so you can just hotfix it. Nobody can blame you, you’ve stated again and again that it’s untested code.

By the way: “Hotfix” is another promising programming language worth speaking about, but that would go beyond the scope of this blog entry. Might add it later, though.

A programming language for non-programmers

The central tragedy of software development is that the people that CAN program don’t know what they SHOULD program and the people that know exactly what SHOULD be programmed CANNOT do it. The latter group is mostly managers and people with million-dollar ideas.

The new kid on the programming language block tries to solve this problem by utilizing state-of-the-art artificial intelligence in the compiler AND the runtime. We are talking, of course, about “Straightforward”. It’s a programming language with a natural syntax that’s so easy and lenient, you can call it, well – you probably get the joke by now.

Remember the last time a stunned manager tried to explain the new feature to you and, when you came up with an estimate encompassing weeks for the implementation, shouted out “but it’s straightforward!”. He talked about his preferred programming language and you probably misunderstood him again.

“Straightforward” is so popular with the business folks because the compiler is in the “do what I mean” category of compilers. By using natural language recognition, it infers your most probable meaning of the code, looks it up on the internet and translates it into machine code. The first versions used sites like stackoverflow.com for the translation step, but that didn’t work out, because the site is filled by developers, not business people. Newer versions just access the cloud and find the answer there.

The machine code of “Straightforward” is not actual binary code, but an intermediate representation, much like Java’s bytecode, but for non-technical concepts. Because these concepts are subject to interpretation and the zeitgeist, they are really interpreted again at execution time by another artificial intelligence. This approach might be a bit demanding with processing power, but that’s just a financial problem. The big advantage is that code like “Make the colors more lively!” is both compilable and executable and yields the correct results regarding the current fashion every time. Your color scheme doesn’t age as fast – or virtually not at all – with this straightforward code.

The only problem that prohibits widespread adoption of “Straightforward” in the business right now is the unsolved equation:

Do what I mean != Do what I want

This is a fundamental theoretical problem in the field of management, much like P = NP in computer science. The race has already started; whoever solves this equation first gets the prize. It is rumored that quantum computing is the key to both. But I suspect that if quantum computing is available for everyday use, other programming languages like “ASAP” will take over the market.

Your turn

I hope this blog post has entertained (and maybe inspired) you. Now, it’s your turn. What is the programming language you always wanted to use? Be silly, be creative, be vocal. Write a comment below and tell us!