Key ingredients for the home office

In March 2020, we transformed from an “on-site” company into a remote company, not particularly because we wanted to, but because the corona pandemic forced us to. Our office is not suited to ensuring transmission safety, so I decided that working from home was the lesser problem. When I say “transformed” and “decided”, please bear in mind that these are retrospective notions. The decision was made on Monday, March 16th, 2020, and the transformation happened in the next two days.

But there is a real difference between being operable and being fully equipped for the situation. We were operable in the remote setup within 48 hours. But we still work on improving our equipment to match a situation that seems to linger a lot longer than initially anticipated. This blog post tries to summarize what we’ve learned since March 2020 with regard to equipping home office workplaces past the makeshift phase.

The fundamentals

The most fundamental ingredients of any office workplace are the table and the chair. If either of them lacks the necessary ergonomic features, your comfort will never be the same as in the office (provided that your equipment there is adequate). And this constant discomfort will permeate everything you do.

The chair was the easier one to assess because it shows up in video calls; it is part of the “zoom room”. Still, it took some time to order new chairs or transport existing office chairs to the home offices. If you experience back pain or mechanically induced headaches, review your chair thoroughly.

The table was trickier because it is typically invisible during video calls. My approach was to ask for a photo of every home office space and talk about possible improvements. During these talks, we came up with two solution categories that I want to present.

The notebook workplace

We all had work notebooks as secondary computers before the pandemic. So it was no problem to start working from home on those notebooks; we had done it before. But if all you do is work directly at a notebook, your body posture will be suboptimal even with the best chair and table. We equipped the notebook-based workplaces with the following extras:

  • An external keyboard
  • An external computer mouse
  • One (or better: two) additional monitors
  • A matching docking station, at least to connect to the monitors
  • A notebook riser stand

The last item, the notebook riser stand, was the game changer when it came to multi-monitoring (two or three monitors). It elevates your notebook to the same height as the other displays and might even change its angle. This transforms your notebook from being a CPU unit with a cumbersome monitor into a secondary monitor with a built-in CPU. The riser stand doesn’t cost much (if you don’t go overboard with its design) and provides you with more table space and a better display setup.

The existing computer workplace

Because we are software developers, we mostly have a very decent computer already fully equipped at home. The only problem: The computer is for private tasks and should stay that way. We want to separate our work environment from our leisure environment as much as possible. But several developers wanted to use their usual “battlestation” for home office work, too.

In this case, we bought a lavish SSD as an additional boot drive for the home PC. This separates the work operating system from the leisure drive as cleanly as the notebook approach does. And the existing hardware can be used for both timeslots, much to the comfort of the developer.

The video conference equipment

But regardless of your approach (notebook or existing computer), there are still some things missing that improve the quality of work for you and your colleagues tremendously:

  • A good headset, preferably with top-notch comfort and active noise cancelling (ANC)
  • An at least decent webcam

Most notebooks provide a mediocre webcam and a low-quality microphone. Do yourself (and your communication partners) a favor and invest in a good microphone. Oftentimes, it is coupled with good headphones. The difference between a good audio setup and an echo-prone makeshift solution is the deciding factor when essentially all communication with your colleagues goes through this channel.

The webcam is not as essential as the audio equipment, because it “only” affects your communication partners, but it adds a nice touch to your other equipment. You don’t have to go overboard on it, a model for one hundred euros is already an improvement.

The bottleneck

One thing that can really invalidate most of the other improvements is a slow internet connection. This is probably the hardest thing to fix in a timely manner, but give it some thought. If your internet connection is too slow for your daily work and communication pattern, it will be a constant annoyance. Just because it takes some time to improve doesn’t mean you shouldn’t try.

We will probably remain in this situation long enough to still reap the benefits of our efforts. And even if not (at least we can hope), nobody ever complained about an internet connection that is a bit oversized.

TL;DR

If you didn’t read the article, here are the major take-away points neatly summarized:

  • Ergonomic chair and table
  • Notebook with external keyboard, mouse and monitor
  • Notebook riser stand
  • SSD for dual boot systems
  • Good headset
  • External webcam
  • “Broadband” internet

If you miss an item from this list for your home office, give it a thought. And if you plan to only think about one item, think about your chair first.

What are your experiences with working from home? What accessory makes your work life better? Give us a hint by writing a comment below. Thank you!

The spell that reveals your onboarding decade

Every one of us has started somewhere. By telling you what my first computer was, I also convey a lot about the place and time my journey in IT started. For many of my fellows, it was a Commodore C64 or an Atari 500. But even if I don’t tell you about my first machine, there is a simple “magic spell” that you can cast to at least get a hint about the decade my first working days started, 15 years after my first contact with computers.

The spell is just one word: “container”. What a container is and how to use it depends on the decade you started in. Let me guide you through some typical answers.

Pre-2010 answer

If you entered the industry around the year 2000, a container was a big chunk of software that you preferably installed on an even bigger machine, the infamous “application server”. The container, or “servlet container”, “application container”, or, if you were with the right folks, “enterprise bean container” (in short: EJB-Container) was the central hub to host all of your web applications. If you deployed your application into the container, it handled the rest, like unpacking the web archive, providing resources and publishing to the internet. Typical names of containers were Tomcat, Jetty, JBoss or WildFly. You can probably see them around even today, because the concept itself is appealing. Some aspects of it inevitably lead to problems, though. Resource management was a big topic. Your application wasn’t expected to care for a database connection, a logging context or, sometimes, even security features, because the container provided those things to it. As you can probably imagine, that left your application crippled and unable to function outside a container.

Pre-2010 containers

So if you onboarded more than ten years ago, your first thoughts reacting to the word “container” will be “big machine”, “slow startup” and “logging framework”. There cannot reasonably be more than one container per machine. Maintaining a cluster of containers would be the work of luminaries. Being asked to start a container on your developer machine is a dreadful endeavour. “Booting the container” is a reason to visit the coffee machine.

Post-2010 answer

But if you started your career less than ten years ago, your reaction to the word “container” will be different. Starting in 2013, a technology named “Docker” reinvented an old practice: isolating processes and packaging them into a transport format. Simplified a lot, a container is just the RAM-based projection of an application image. You boot a container by loading the image into RAM. That’s one of the fastest things you can do on a computer (not really, but it fits the story better). Even better, because each container ideally contains just one small application or part of it, you don’t boot one container per machine, you can run dozens at the same time. Each container brings everything it needs with it and only relies on three common external resources being provided: networking, persistent storage and a facility to dump logging output.
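For readers who never cast the post-2010 spell themselves, those three external resources show up directly on the command line. The nginx image and the volume name are arbitrary examples here, not part of the story:

docker run --name web -d -p 8080:80 -v web-content:/usr/share/nginx/html nginx
docker logs web

The -p option wires up the networking, the -v option attaches persistent storage and docker logs collects the dumped logging output.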

Post-2010 containers

It is good practice to partition your application into several containers of the post-2010 kind. It is good practice to have them talk to each other over the network, either real or simulated. The lines between actual computers get blurry real fast with this kind of containering.

As a youngster, your first thoughts reacting to the word “container” will be “just one?”, “scale up” and “log output management”. You see an opportunity to maintain a cluster of containers. Being asked to start a container on your developer machine is a no-brainer. “Booting the container” is a reason to automate your container infrastructure.

The reactions to the word “container” are very different, depending on the socialization period. In the old days, pre-2010 containers were boss fight adversaries. Nowadays, post-2010 containers are helpful spirits that just need to be controlled.

Post-2020 answer?

What better way to control the helpful spirits than to deploy them to an environment that handles unpacking, wiring, providing resources and publishing to the internet? Your application isn’t expected to care about topics like scalability, cluster robustness or load balancing. The environment, your container cluster platform, handles those things for you. There can only be one cluster platform per cloud. Being asked to start a cluster platform on your developer machine – well, that’s just not possible, sorry. Best we can do is a minified version of it. Our applications tend to function poorly outside a cluster platform.

As you hopefully can see, developers of all decades crave a thing they tend to call a “container” that they can throw their software into to have it perform well without all the hassle of operations. But as soon as they give away responsibility for the environment, they also give away the possibility of comfortable “developer machine” operations. The goal is the same; just the technical detail of what exactly a “container” happens to be changes over time.

What is your “spell” that reveals a lot about the responder?

Three programming languages the world isn’t ready for yet

The year 2020 is coming to an end and we can finally relax a bit. In order to lighten up your mood, this blog entry consists entirely of humor, satire and plain silliness. Nothing in it has any resemblance to reality and you should not try any of this at work. But if, for whatever reason, you find something useful in here and go on to revolutionize the world of software development, remember that we called it first.

There are many programming languages for all sorts of purposes. If you’ve developed software for some decades, you’ve seen them appear, become useful and be forgotten over time. But what will the future bring? Here are descriptions of three programming languages that have their purpose, but the world is not ready for them. They aren’t even invented yet!

A programming language for long-lived projects

Most of the world’s source code today is categorized as “legacy code”. This derogatory term describes code that is old, unwieldy or just too clever for current programmers. Typical programming languages that have lots of legacy code include Cobol, C and Java. Most programmers don’t associate themselves with that code. It’s “other people’s” code. But there is one programming language that not only embraces the notion of “legacy code”, but in fact imposes it. This programming language is “Legacy”, the most productive one for producing heaps and heaps of, well, legacy code in short amounts of time.

A unique feature of Legacy is that the code can be written at nearly the speed of thought, but is impossible to decipher even minutes later. A typical Legacy project doesn’t employ version control to differentiate between new and old code, but line numbers: Lower numbers indicate older code, while higher numbers are written more recently. To really drive this point home, every line of code needs to start with its line number, just like in good old BASIC. A useful convention in Legacy is to choose the line number based on your current timestamp, like 20201221181736 (the moment this text got written). Modern Legacy IDEs do this automatically for you.

(A cool but seldom used syntax feature that is based on the timestamp convention is the time-relative jump: You can address your jump target by absolute or relative line number, but even cooler is the relative amount of time: “jump -3d” resumes code execution at the line you wrote three days ago. Just remember: “jump +3d” is equivalent to undefined behaviour for most practical use cases. Only Legacy wizards can pull the “just-in-time jump” off in a useful manner.)

The most pressing issue with Legacy is its third-party dependencies: There are none. All dependencies are second-party dependencies, meaning they are much more involved in your project than usual. In order to compile or deploy a Legacy project, you need to have the exact version, down to the patch and oh-crap-i-forgot-hotfix number, of

  • the Legacy SDK
  • the compiler
  • the IDE
  • the Legacy runtime
  • and your text encoding

The last point might be surprising, but given the different versions of Unicode and even UTF-8, the Legacy ecosystem has chosen to follow the ideal of Python that dictates the indentation, but redirects it to the parts outlined above. You don’t get to choose the compiler version, the compiler version chooses you, based on your Unicode level. By the way, indentation is a no-brainer in Legacy: Each line starts with its line number, which is enough indentation already.

If you want to deploy a Legacy project to a production server, you need, by the rules above, the exact machine with a perfect replication of all installations used for development. Because this is a painful endeavour, most developers have adopted the best practice of “one machine per project” and develop directly on the production server. Most of the time, this is a surprisingly powerful machine, making programming even faster (remember, the goal is to produce the most code in the least time). It also shortens the delivery pipeline and facilitates communication between the business and development departments, even if not of the pleasant type.

A curiosity that novice Legacy programmers often don’t grok at first is the IMPOSE keyword. It is a variant of the IMPORT functionality of other languages, but doesn’t extend the capabilities of your code. Instead, it limits the ability of the developers in this project by the given imposition. A typical example would be the line

IMPOSE variable name length <= 3

That, as you can read in clear text, limits your variable names to three characters or less. You can often find Legacy code with variable names like “usr” instead of “user”, “pwd” instead of “password” and “idx” instead of “index”. They all follow the imposition above, increase your typing speed and speed up the compilation, which counts as a triple win.

So, if you want to impress your customer with huge amounts of important looking code and build a certain reputation among peers, Legacy might be your new favorite language. And if anybody calls your work result “legacy code” in the future, you should feel validated and proud.

A programming language for mission-critical software

Software written for high-stakes contexts like flight control, medical supervision and power plant management needs to meet extreme requirements with regard to correctness, robustness and resilience. Most mainstream programming languages have reacted by providing additional complexity to address the situation. For example, the demand for correct software has led to the rise of testing frameworks that introduce additional syntax and require additional source code that is, by definition, untested in itself.

This is the problem the inventors of “Untested” try to solve. By writing your code in “Untested”, you can forgo all the extra effort of trying to prove it right. Untested code is, by definition, good enough without tests. Remember the definition by Michael Feathers?

To me, legacy code is simply code without tests.

Michael Feathers in his book “Working Effectively with Legacy Code”

If you are ok with “Legacy”, you probably also enjoy “Untested”. The language makes it impossible to write tests for your code, so you can fend off the demand for them more easily. Your boss cannot ask for things that are impossible to do.

One interesting way in which “Untested” wards off calls from test routines is to couple every statement with a side effect in the hardware (oftentimes the TRAP flag on the CPU is flipped). Most programmers in traditional languages consider such lines untestable and try to factor them away in order to test the rest. Untested factors away the rest. You don’t need to feel guilty about your lacking test coverage – it’s a feature, not a bug.

If your boss asks if a certain module is thoroughly tested, you can respond “yes” in good faith. It’s tested in the best manner possible with “Untested”. If you need to give an overview of your system, you can write “Untested” beside every module and your reviewers will accept it as accurate.

Oh, and the problem of long and tedious code reviews is taken into account, too. Because “Untested” code is just “Legacy” code (see Michael Feathers’ definition above), it is impossible to read and understand anywhere except on the developer machine (aka the production server). If a thing is impossible to do, why even start trying? This will give you more time to produce “Untested” code.

And if problems arise in production? Well, you are already on the machine, so you can just hotfix it. Nobody can blame you, you’ve stated again and again that it’s untested code.

By the way: “Hotfix” is another promising programming language worth speaking about, but that would go beyond the scope of this blog entry. Might add it later, though.

A programming language for non-programmers

The central tragedy of software development is that the people that CAN program don’t know what they SHOULD program and the people that know exactly what SHOULD be programmed CANNOT do it. The latter group is mostly managers and people with million-dollar ideas.

The new kid on the programming language block tries to solve this problem by utilizing state-of-the-art artificial intelligence in the compiler AND the runtime. We are talking, of course, about “Straightforward”. It’s a programming language with a natural syntax that’s so easy and lenient, you can call it, well – you probably get the joke by now.

Remember the last time a stunned manager tried to explain the new feature to you and, when you came up with an estimate encompassing weeks for the implementation, shouted out “but it’s straightforward!”? He was talking about his preferred programming language and you probably misunderstood him again.

“Straightforward” is so popular with the business folks because the compiler is in the “do what I mean” category of compilers. By using natural language recognition, it infers your most probable meaning of the code, looks it up on the internet and translates it into machine code. The first versions used sites like stackoverflow.com for the translation step, but that didn’t work out, because the site is filled by developers, not business people. Newer versions just access the cloud and find the answer there.

The machine code of “Straightforward” is not actual binary code, but an intermediate representation, much like Java’s bytecode, but for non-technical concepts. Because these concepts are subject to interpretation and the zeitgeist, they are interpreted again at execution time by another artificial intelligence. This approach might be a bit demanding in terms of processing power, but that’s just a financial problem. The big advantage is that code like “Make the colors more lively!” is both compilable and executable and yields the correct results with regard to the current fashion every time. Your color scheme ages more slowly, or virtually not at all, with this straightforward code.

The only problem that prohibits widespread adoption of “Straightforward” in the business right now is the unsolved equation:

Do what I mean != Do what I want

This is a fundamental theoretical problem in the field of management, much like P = NP in computer science. The race has already started; whoever solves this equation first gets the prize. It is rumored that quantum computing is the key to both. But I suspect that if quantum computing becomes available for everyday use, other programming languages like “ASAP” will take over the market.

Your turn

I hope this blog post has entertained (and maybe inspired) you. Now, it’s your turn. What is the programming language you always wanted to use? Be silly, be creative, be vocal. Write a comment below and tell us!

From multiplayer Pac-Man to a twenty year old company

This blog post does not contain big insights. It’s just the story of the very first days of our company, which happens to celebrate its 20th anniversary this month. And because most stories begin a lot earlier than when the narrator begins to tell them, I’ll try to tell this one from the start.

It starts with an eight-year-old boy that has access to his very first personal computer, a Tandon 8088 with 8 MHz. Just to put this glorified pocket calculator in today’s perspective: A basic Arduino board has more power. But back in the day, this personal computer was a magical tool that could act as all kinds of things, including a gaming machine. One of the first games on this machine was Pac-Man, in 80×25 character ASCII “graphics” and without any scoreboard or competitive element. It was strictly single-player and the computer-controlled ghosts acted strictly by their algorithms, so it became a repetitive chore rather soon. The boy would play the usual route, add some new steps at the end and watch the ghosts react. After some time, the boy could predict the ghosts’ reactions and plan the new steps with accuracy, clearing level after level. The ghosts never adapted.

By the age of twelve, the boy knew that he would become a “computer engineer”. Every occupational counselor (two in total) advised against this decision, not because it was bad, but because the counselors didn’t know anything about the profession. But the boy stuck to his decision and began his studies in computer science immediately after school was over. This was in 1997, when the internet still made sounds and you could ruin an hour-long download just by picking up the phone.

The boy, now a young man far from home, studied basic computer science for six months until the semester break arrived. Most other students returned home, but he stayed and teamed up with other students still on campus. They planned to program a computer game. A Pac-Man game, but with multiplayer abilities. One team would be “the players” or “pac-men”, the other team would be “the ghosts”. If there weren’t a dozen human players in front of the keyboard, the computer would control the remaining avatars. Game controls worked with a split keyboard and – planned for later versions – over the network.

The only way the students knew how to organize the project was to transform one room into a computer-ridden workshop and hack away. Every horizontal platform in the room became a desk. The project was supposed to happen within a span of 24 hours. Today, this would be called a “game jam”. After 24 hours, all we had was a map. No game, no players, nothing exciting – just the future game’s map. But we agreed to continue working until the game was finished.

It took the three students a whole week. A week with little sleep, sloppy food and lots of source code. Because we didn’t know about version control yet (nobody told us and we didn’t set up a local network anyway), we had to structure the code in a way that would allow us to work on different parts without collisions and transfer them from computer to computer using floppy disks. We had to maintain a list of files that were modified and did so on a central whiteboard that the young man had bought at the beginning of his studies. This whiteboard became the planning area where we would keep track of our modifications, tasks and concepts, including the stereotypical post-it notes. In hindsight, you could call it a chaotic storyboard. Without the whiteboard, we probably would have failed.

But after the week, the game was finished. We had developed a multiplayer Pac-Man in Java, complete with graphics, sounds and multi-threading. It was playable! We named it “Hubert 2D”, a reference to both “Duke Nukem 3D”, a very 90s game, and to one of our most famous fellow students. The game was blazingly fast – so fast, in fact, that you often lost track of your avatar. The unofficial motto of the game turned out to be “where am i?”. It was crammed with features. Just a Pac-Man where you could gobble up little pills and evade the ghosts was not enough for us. First, there was Hubert, the boss ghost. He appeared randomly and could not be player-controlled. He had a rocket launcher. If you defeated Hubert, you could grab the rocket launcher and, well, launch rockets. How can you defeat a rocket-launching ghost in Pac-Man? With your chainsaw, evidently. Players could pick up chainsaws to defend themselves against the ghosts. Ghosts could pick up energy shields to defend themselves against the chainsaws. Players could place mines to blow up ghosts that didn’t pay attention. Ghosts could place bombs to create new passageways to evade the mines or blow up the players. Sheep wandered around cluelessly, being blown up by mines, bombs, chainsaws or rockets and generally acting like a mobile roadblock. Teleporters added to the confusion by instantly teleporting you to either another teleporter or a random place on the map (leading to the infamous “where am i?”). But above all, you could poison and heal other avatars with various potions. Taking everything into account, this wasn’t Pac-Man anymore. This was a team deathmatch that lasted until all the pills on the map were gobbled up accidentally.

Two funny moments during development and testing (aka playing) will always stay in my memory:

  • You could poison an avatar, but also heal it with medicine. Being healed was indicated by a “hallelujah” sound effect. But, because every new avatar on the map would be created in the “healed” state, we had a serious “hallelujah” epidemic going on. It took us way longer than it should have to connect the dots and eliminate the sound effect during creation.
  • Every avatar on the map moved with the same speed. Some avatars like bombs or mines decided not to move at all, others like sheep and Hubert only moved sometimes, but rockets flew twice as fast. So it was not possible to outrun a rocket. Because of this imbalance in power, we deemed the Hubert boss to be invincible in close combat. You could not walk up to him without facing a rocket that reached you at least a tile before you could employ your chainsaw. We were proven wrong, when one player used an energy shield in combination with a chainsaw and a hallway corner to sneak up on Hubert, neutralize the first rocket with the energy shield and defeat Hubert with the chainsaw before the second rocket could be fired. Because we thought that Hubert would be invincible, this move didn’t gain any in-game points. But the moment turned legendary immediately.

This week of intensive teamwork, combined with the result of an actual game, provided us with the trust and groundwork for future collaboration. So it was no wonder that, just a few semesters later, we came up with the idea of selling this collaboration ability in the form of a software development company. We were more knowledgeable, better equipped and had practiced working together multiple times. What better time than now?

So we founded our company, the Softwareschneiderei (“software tailoring”), in late 2000, twenty years ago. Because we really meant it, we invested the money to create a limited liability company and had to learn all the topics and obligations that come with such a creation in a very short time. We were still studying at university, but working for our own company, in a rented office, in every free minute. Our primary goal was to finish our studies with a degree. Our secondary goal was to let the company survive long enough to make it the primary goal after graduation. The plan worked out and here we are, twenty years later.

Statistics say that only one out of ten companies survives its first five years. Even after that, keeping a company afloat is not smooth sailing. Somehow, we made it. Despite all our mistakes and misconceptions (and there were many, most on a more serious level than deeming Hubert to be invincible), we developed our company in a way that provides benefit for our customers and profit for our employees.

And in a corner of my desk drawer, there is still a 3.5″ floppy disk labelled “Hubert 2D”. Because that’s the source code that got this company started, 23 years ago.

Make yourself comfortable

If you have to add a new feature to an existing code base, you’ve likely already experienced an uncomfortable truth: Nobody has thought about your use case. Nothing in the existing code base fits your goals. This isn’t because everybody wanted you to fail, but because your new feature is in fact brand new to the software and responsible software developers stop working on code as soon as their work is done (while obeying the project’s definition of done).

So you try to shoehorn your functionality into the existing code. It’s not neat, but you get it working. Are you done? In my opinion, you haven’t even started yet. Your first attempt to combine your idea of the best implementation of your functionality with the existing system will be cumbersome and painful. Use it as a learning experience about how the system behaves and throw it away once you get a working prototype. Yes, you’ve read that right. Throw it away, as in undo your commits or changes. This was the learning/exploration phase of your implementation. You’ve applied your idea to reality. It didn’t work well. Now is the time to apply reality to your idea. Commence your second attempt.

For your second attempt, you should make use of your refactoring skills on the existing code. Bend it to your anticipated and tried needs. And once the code base is ready, drop your new feature into the new code “socket”. Your work doesn’t need to be cumbersome and painful. Make yourself comfortable, then make it work.

Here is an example, based on a real case:

An existing system was in development for many years and worked with a lot of domain objects. One domain object was a price tag that looked something like this:

interface PriceTag {
    PriceCategory category();
    TaxGroup taxGroup();
    Euro nettoAmount();
    Product forProduct();
}

Well, it was a normal domain object giving back other normal domain objects. The new feature was to be an audio module that could read price tags out loud. The team used a text-to-speech synthesizing library that takes a string and outputs an audio stream. No big deal and pretty independent from the already existing code base.

But the code that takes a price tag and converts it into a string, aka the connection point between the unbound library code and the existing system, was ugly and undecipherable:

String priceTagToText(PriceTag price) {
    return price.forProduct().getDenotation()
        + " for only "
        + CurrencyFormatter.format(price.nettoAmount())
        + " with "
        + String.valueOf(price.taxGroup().percentage())
        + " % VAT in the "
        + price.category().getDenotation()
        + " section.";
}

This is how it looks if somebody tries to combine two building blocks that aren’t meant for each other. To test this method, you’ll have to mock deep into the domain objects.

If two building blocks aren’t matching naturally, maybe it’s an idea to add some lubrication code between them. This code isn’t exactly doing anything new, but adds a requirement seam that points towards the existing system:

interface ReadablePriceTag {
    String denotation();
    String netto();
    String vatPercentage();
    String category();
}

You can probably already see where this is heading. Just in case you cannot, I will take you through all parts of the code.

First we can write a priceTagToText() method that reads a lot nicer:

String priceTagToText(ReadablePriceTag price) {
    return price.denotation()
        + " for only "
        + price.netto()
        + " with "
        + price.vatPercentage()
        + " VAT in the "
        + price.category()
        + " section.";
}

The second and complementary part is the implementation of the ReadablePriceTag interface that is given a PriceTag object and translates the data for the new methods:

class PriceTagBasedReadablePriceTag implements ReadablePriceTag {
    private final PriceTag price;

    PriceTagBasedReadablePriceTag(PriceTag price) {
        this.price = price;
    }

    @Override
    public String denotation() {
        return this.price.forProduct().getDenotation();
    }

    @Override
    public String netto() {
        return CurrencyFormatter.format(this.price.nettoAmount());
    }

    @Override
    public String vatPercentage() {
        return String.valueOf(
                this.price.taxGroup().percentage()) + " %";
    }

    @Override
    public String category() {
        return this.price.category().getDenotation();
    }
}
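
To see the adapter in action, here is a minimal usage sketch of the new audio feature. The TextToSpeech type and its speak() method are placeholders for whatever the synthesizing library actually offers; only the types above come from the example:

void readOutLoud(PriceTag priceTag, TextToSpeech audio) {
    // Wrap the existing domain object so the new code never touches PriceTag directly.
    ReadablePriceTag readable = new PriceTagBasedReadablePriceTag(priceTag);
    // The library only ever receives plain text.
    audio.speak(priceTagToText(readable));
}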

Basically, you have a lot of existing code that is using PriceTag objects and some new code that wants to use ReadablePriceTag objects. The PriceTagBasedReadablePriceTag class is the connector between both worlds (at least in one direction). We can definitely argue about the name, but that’s a detail, not the main point. The main points of all this effort are two things:

  1. The new code does not suffer in quality and readability from decisions made at a different time in a different context.
  2. The code clearly models these contexts. If you are aware of Domain Driven Design, you probably see the “Bounded Context” border that crosses right between PriceTag and ReadablePriceTag. The PriceTagBasedReadablePriceTag class is the bridge across that border.

If you express your context borders explicitly like in this example, your code reads fine on either side of the border. There is no notion of “old and fitting” and “new and awkward” code. It seems like additional work and it surely is, but it pays off in the long run because you can play this game indefinitely. A code base that gets more muddied and forced over time will reach a breaking point after which any effort requires knowledge in archaeology and cryptanalysis.

So, my advice boils down to one thing: Make yourself comfortable when adding new code to an existing code base.
And then, think about your type names. PriceTagBasedReadablePriceTag is most likely not the best name for it. But that’s a topic for another blog post. What would be your name for this class?

The Bird in the Jungle

When I was a young boy and attending school, there were art classes that I kind of liked. You either were told cool stories about lunatic painters or painted cool pictures by yourself. I still remember one particular assignment that sticks out to me: draw a bird!

In case you ask what this blog entry is all about: It is about project management and expectation management. It will stray a little bit into process territory. But mostly, it’s about a picture of a bird in a jungle.

When the teacher told us the task for the next weeks, my imagination ran wild. Draw a bird! I envisioned a stork-like bird with long legs and a long beak, fishing by a river in a tropical setting. The river would flow through a full-blown jungle with lots of plants and insects and things to discover. The bird would be in the center of the picture, drawing your attention to its eyes as it hovered over a fish in the water, ready for the strike that would provide its supper. But if your eyes follow the water, you would see that the bird is just a piece in a complex ecosystem with lots of untold stories. My pen would tell some of those stories!

We had roughly two hours of art class, so I had to move fast. Because the bird was the essential piece of my picture, I had to draw it last, on top of the background jungle and aligned with all the secondary motifs. In the first hour, the jungle grew on my sheet of paper, following the outlined river. If I grew weary of drawing plant leaves, I added a bug or a little snake. Somewhere, a happy little jungle squirrel peeked through the green (OK, that’s not true. I didn’t know about Bob Ross back then and my squirrel was probably not happy).

The thing was, the art class was suddenly over. My jungle wasn’t completed yet. Worse, the entire river, fish and bird were still missing. The teacher asked me if I misunderstood the assignment. What a joke! Of course I understand, my bird will be majestic – once it is finished next week.

I peeked at the pictures of my classmates. Most of them had some bird-like outlines, a beak or a foot. Nobody had finished their drawing. As far as I was concerned, I was well within expectations. Many classmates took their picture home to work on it in the afternoon. I waited until next week, I was certain about my plan.

Next week came and a sudden realization followed: Most of my classmates had indeed worked on their bird at home. Their drawings were energetic, strong and defined. They looked like black-and-white copies of bird photographs and nothing like the awkward lines from last week. They had traced an image through their sheet and called it a day. Two classmates had even traced the same image and produced essentially identical birds. Those cheaters. Not me!

When half the class had already turned in the assignment, the pressure began to rise for the strugglers. My plans were in jeopardy and I skipped several trees and mountains in the outskirts of the picture to save time. This also meant I had to fade out the picture at the edges to make it seem deliberate.

This art class went over so fast, I still hadn’t drawn my bird when the bell rang. The teacher wanted to collect our pictures, but I had to bargain for a week of work at home. With that much time at my disposal, I could implement my plans. I drew six evenings and half the weekend. My jungle was never right. My river wasn’t as dynamic as envisioned. But most of all, my bird was nowhere near majestic. It was a stork-like bird, but way too small. It didn’t hover over the fish, I had to make it dive into the water to even be near its prey. In the end, I had spent around 20 hours on a two-hour assignment that produced a drawing of a jungle that happened to have a hungry, desperate bird in the middle. Somewhere in the outskirts, a squirrel laughed.

The teacher gave me a mediocre grade out of pity. It was clear I didn’t copy from anywhere. But it was also clear as the water of the river in my jungle that I didn’t follow the instructions at all.

Looking back now, I can smile about the naive boy that set out to tackle a task that nobody asked for while getting lost and receiving an average grade for an enormous effort. You can probably already see that the return on investment was awful.

The interesting thing for me to see clearly now is that the project was doomed from the start. Mostly because I didn’t work on the teacher’s assignment, I worked on my own assignment. And my own assignment was much bigger than the original one, but the time budget was not. If you don’t see it as clearly, just count the words in the teacher’s assignment (“draw a bird” – 3 words) and just the nouns in my plan. Calculate one hour for every noun and you can see how this would never have worked. So this is the first take-away: Always make sure that you work on your customer’s project and not your own. If the customer didn’t use a specific noun in the mission statement, why did we include it in our project plan? In my story, the teacher never even mentioned a “background”. Just a bird. Draw it and be done. Go home and draw the jungle in your spare time.

The next thing that sticks out now for me is that I delayed the essential work (the bird) as much as possible. This is exactly the wrong approach if you want to act at least a little bit agile. Cover the essential things first and fill in the details later. In my work today, we call it the “risk first approach”. The most critical part comes first. It might not be pretty, it might not be complete, but it already works. Applying this to my story: If I had drawn the bird first (big and majestic in the middle), I could have called it quits anytime afterwards. Nobody asked for more and my teacher wouldn’t grade the background – only the bird.

The third thing is about expectation management: The teacher expected us to try, fail and cheat at home. Those clearly traced pictures matched the assignment better than my work. I just refused to fail and start over. I didn’t set out to match the expectations of my teacher, I wanted to fulfill my own expectations that were not grounded in reality. I even ignored the grading scheme of my teacher (my customer, but I couldn’t see it that way at that time) and got upset when all my effort just yielded a mediocre grade.

Today, I see all these things and can handle them successfully. As a young boy, those were foreign concepts for me. Between the two points in time, I acquired concepts, terms and words to reflect on the process. But I’m still learning. Perhaps you see something in the story that I haven’t seen yet? Tell me in a comment below!

Nevertheless, I often think about my bird in the jungle as a reminder to keep on track.

Your most precious resource

When I was a young student, living on my own for the first time, my scarcest resource seemed to be money. “Money’s too tight to mention” was (and probably still is) a motto that every student could understand. So we traded our time for money and participated in experiments and underpaid student assistant jobs.

Soon after I graduated, money began to accumulate. I have a rather frugal lifestyle, so my expenses didn’t suddenly surge. Instead, my perception of time and money shifted: Money isn’t the bottleneck (anymore), time is. Suddenly, time was much more valuable than money and I would gladly pay money if that meant some hours of additional leisure time or one less problem to tend to. It seems that time is the most precious thing there is.

The traditional economic wisdom supports this idea: “Time is money” is true, but the reverse is not: “money is time” doesn’t cut it. The richest man on earth still only has as much time available to him as anybody else.

If time is the most precious resource, the drive to automation as a time-saving effort can be understood directly. Automation also reduces learning costs if you scale horizontally by parallelizing production.

But soon after I had enough money to optimize my time, I hit another resource bottleneck. Suddenly, I had more time on hand than attention to spare. It turns out that attention is the most valuable resource you can spend. It is just entangled enough with time that it’s hard to distinguish which runs out first. If you reflect a bit, it becomes obvious. The term “to pay attention” is pretty spot on.

Let me take up the thought of automation again, this time in the domain of software development, in the form of automated tests. Here, automation is not in its most profitable shape. You don’t gain much from scaling your tests horizontally. If you don’t change the code, it doesn’t matter if your tests run once or a thousand times in parallel, the result will be the same (except if you run hardware-dependent tests, but even then you probably don’t gain much after covering all hardware variations).

You also don’t gain much from scaling your tests vertically by making them run faster and faster. It sure helps to have them run continuously in the background (think of a user-modded compiler – look into Continuous Test Runners if you are interested), but after one test run per meaningful change, the profit hits a limit.

So why else is automated testing an economically sound practice? My take on it is delegated attention. You write a small piece of software (your test) that extends your attention area onto code that will probably fade from your own attention pretty soon. Automated tests provide automated attention in a sustainable manner (except for those tests that cry wolf for no good reason; those are attention sinks and should be removed from your portfolio). Because of the automation, this delegated attention never fades – even after many years, the test keeps a close eye on “its” code.

If you are a developer, you have automation and zero-cost copying (aka parallelization without upfront costs) intrinsically in your solution portfolio. Look for ways to make money with those superpowers. Or even better, look for ways to save time. But if you want the best return on investment for your efforts, you should look for ways to expand your attention area.

Do you agree that attention is our most precious resource? What do you do to lower your attention expenses? Perhaps you have experience in the Ops/DevOps area that resonates with these thoughts? Share your opinion by commenting below or writing your own blog entry!

We will pay attention to you.

The ALARA principle in software engineering

The ALARA principle originates in radiation protection and means “As Low As Reasonably Achievable”. It means that you have to weigh the purpose of an action dealing with radiation against its disadvantages, like radiation damage or long-term risks. The word “reasonably” means that while some disadvantages are not avoidable, a practicable amount of protection should be in place to lower them. The principle calls for a balancing act: Not without safety measures, but don’t overextend your means by trying to achieve a safety level that isn’t helpful anymore.
To put the ALARA principle into practice with an example: You shouldn’t need an X-ray every time you go to the dentist, but given enough time since the last one and reasonable doubt about a tooth, the X-ray examination will benefit your dental (and overall) health more than if you deny it. It isn’t healthy by itself, but the information gained by it will be used to improve your health.

I learnt about the ALARA principle when my father (a nuclear physicist at heart) explained it to me in the context of the current corona pandemic: Use protection like face masks and distance, but don’t stress yourself too much over that one time when you grabbed a pen in the post office. While preparation and watchfulness are helpful, fear is detrimental to your mental health. And even the most resilient mind has bounded resources that can better be spent on constructive things instead of fear.

A fun fact from radiation protection is that at least three of the four main rules of protection can be applied to corona, too:

  • Distance yourself from the radiation source
  • Use appropriate protection gear
  • Avoid incorporation (keep the thing outside your body)
  • Limit your exposure time (this doesn’t fit as nicely, because the virus is probably not cumulative)

But how can we apply the ALARA principle to software engineering? I was instantly reminded of the “Thorough” rule of unit testing. In the book “Pragmatic Unit Testing” by Andy Hunt and Dave Thomas, the two original Pragmatic Programmers, good unit tests have to follow the A-TRIP rules. The T stands for “Thorough”, which is often misinterpreted as “test everything at least twice”. In reality, the rule states that:

  • all mission critical functionality needs to be tested
  • for every occurring bug, there needs to be an additional test that ensures that the bug cannot happen again

The first thing that meets the eye is that the rule doesn’t define a bug as a failure of your testing effort. It takes a bug that probably happened in production and caused some damage as a motivation to strengthen your test coverage in that particular area. The second part of the rule calls for directed, well-aimed testing effort. It is easy to follow because it has a clear trigger: A bug happened, now you have to write a test.
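
For illustration, such a bug-pinning test could look like the following minimal sketch. JUnit 5 is assumed, and both the bug report and the class under test are made up:

import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

class InvoiceNumberTest {

    // Hypothetical bug report: invoice numbers below 1000 lost their leading zeros.
    // This test pins the fix so the bug cannot silently return.
    @Test
    void keepsLeadingZerosForSmallNumbers() {
        assertEquals("INV-000042", InvoiceNumber.format(42));
    }
}

class InvoiceNumber {
    static String format(int number) {
        return String.format("INV-%06d", number);
    }
}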

The first part of the rule is more complicated: What is mission critical functionality? And what does “tested thoroughly” mean in this context? And here, the ALARA principle can help us. The bug rate in the important parts of your code should be as low as reasonably achievable. “Reasonably achievable” is defined by the resources at your disposal (like time to market), your expertise in testing and the potential damage that could happen if something in your code goes wrong.

If the potential damage is high or even life-threatening, your reasonable effort should be much higher than if the most critical thing that happens is a 15 minute downtime while you restart the server. There are use cases where even 15 minutes mean subsequent damage, but most software is written for a more relaxed context.

I’ve always found the “Thorough” rule of good unit tests pleasant and comforting: If you made reasonable effort to test your most important code and write a test for every bug you or your users encounter, you can say that your bug rate is ALARA – “As Low As Reasonably Achievable”. And that is good enough for most cases.

What was your first thought when you heard about the ALARA principle? Tell us in the comment section!

Contain me if you can – the art of tunneling

This blog post briefly explains the basics of SSH tunneling and dives into the detail of reverse tunneling. If you are already familiar with all these concepts, this might be a boring read. If you heard “reverse tunneling” for the first time now, you’re about to discover the dark side of a network administrator’s might.

SSH basics

SSH (secure shell) is a tool to connect to a remote computer. In this blog post, the remote computer is always “the server” and your local computer is “the notebook”. That’s just nomenclature; in reality, each computer (or “endpoint”, which is a misleading term, as we will see) can be any device you want, as long as “the server” runs an SSH server daemon and “the notebook” can run an SSH client program. Both computers can live in different local area networks (LAN). As long as it is possible to send network packets to the specific port the SSH server daemon listens on and you have some means to authenticate on the server, you have free rein over both LANs by utilizing SSH tunnels.

The trick to bypass the remote LAN firewall is to have a port forwarded to the server. This allows you to access the server’s port 22 by sending packets to the internet gateway’s worldwide address on a port of your choice, maybe 333. So the forwarding rule is:

gateway:333 -> server:22

If you can comprehend this, you’ve basically grasped the whole foundation of SSH tunneling, even though it has nothing to do with SSH at all.
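
Expressed as an OpenSSH command on the notebook, connecting through that forwarded port looks like this (the user name and the gateway’s worldwide address are placeholders):

ssh -p 333 user@gateway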

SSH tunneling basics

Now that we have a connection to the server, we can enjoy the fun of forward tunneling. You didn’t just connect to the server, you opened up every computer in the remote LAN for direct access. By using the SSH connection, you can define an SSH tunnel to connect to another computer in the remote LAN by saying:

localhost:8000 -> another:80

This is what you give your SSH client, but what actually happens is:

localhost:8000 -> gateway:333 -> server:22 -> another:80

You open a local network port (8000) on your notebook that uses your SSH connection over the gateway and the server to send packets to the “another” computer in the remote LAN. Every byte you send to your port 8000 will be sent to port 80 of the “another” computer. Every byte it answers will be relayed all the way back to your client, speaking to your local port 8000. For your client, it will look as if port 80 of “another” and port 8000 of your notebook are the same thing (if you don’t mind a small delay). And the best thing: Your client can be anywhere in your local LAN. Given the right configuration, “notebook2” in your LAN can speak with port 80 of “another” in the remote LAN by using your SSH tunnel. If you’ve ever played the computer game “Portal”, SSH tunneling is like wielding the portal gun in networks, but with unlimited portals at the same time. Yes, you can have many SSH tunnels open at once. That’s pretty neat and powerful, but it’s essentially still only half of the story.
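
With an OpenSSH client, this forward tunnel is a single command (user name and gateway address are placeholders again):

ssh -p 333 -L 8000:another:80 user@gateway

The -L option reads as: listen on local port 8000 and forward everything through the SSH connection to another:80, where “another” is resolved from the server’s point of view.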

SSH reverse tunneling basics

What you can also do is to open a network port on the server that acts like a portal in the other direction. You probably noticed the arrows in the tunnel descriptions above. The arrows really give a direction. If your tunnel is

localhost:8000 -> another:80

you can send packets from you to another (and receive their answer), but it doesn’t work the other way around. The software listening at another:80 could not initiate a data transfer to localhost:8000 on its own. So SSH tunnels always have a direction, even if the answers flow back. It’s like a door phone: You, inside the building, can initiate a call to the entrance door phone at any time. The guest in front of the door can speak back, but is not able to “call you back” once you hang up. He can not strike up a conversation with you over the phone. (In reality he still has the doorbell to annoy you. Luckily, the doorbell is missing in networking.)

SSH reverse tunnels change the direction of your tunnels to do exactly that: Give somebody on the remote LAN the ability to send network packets to the server that will get “teleported” into your local LAN. Let’s try this:

localhost:8070 <- server:8080

If you set up a reverse tunnel, any computer that can send packets to port 8080 on the server will in fact speak with your notebook on port 8070. This is true for the server itself, any computer in the remote LAN and any computer with an SSH tunnel to port 8080 on the server. Yes, you’ve just discovered that you can send a data packet from your LAN to the remote LAN and have it reverse tunneled to a third LAN. And if you haven’t thought about it yet: Every “endpoint” in your three-LAN hop can be a portal to yet another hop, with or without your knowledge. This is why “endpoint” is misleading: It implies an “end” to the data transfer. That might just be your perception of the situation. Every endpoint can be another portal until it’s portals all the way down.
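
The corresponding OpenSSH command, issued on the notebook, would be (user name and gateway address are placeholders once more):

ssh -p 333 -R 8080:localhost:8070 user@gateway

The -R option is the mirror image of -L: the server listens on port 8080 and forwards everything back through the tunnel to port 8070 on the notebook. One caveat: by default, OpenSSH binds such remote forwards to the server’s loopback interface only, so other machines in the remote LAN can only use the portal if the sshd option GatewayPorts allows it (or if you specify a bind address explicitly).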

SSH reverse tunneling fun

You can use this back and forth teleporting of network traffic to create all kinds of complexity.

But in our case, we want to use our new might for something fun. Let’s imagine the following scenario:

  • On your notebook, you can start a SSH connection to the server. You can open forward and reverse tunnels.
  • On your notebook, you run a VNC (Virtual Network Computing) server. VNC is basically a simple remote desktop technology that lets you see the display content of a remote computer and control its keyboard and mouse. You running the server means somebody else can connect to your notebook using a VNC client and control it as if it was local (if you can ignore a small delay).
  • Somebody else (let’s call her Eve), on her notebook, can also start a SSH connection to the server. She can open forward and reverse tunnels.
  • On her notebook, she has a VNC client software installed. She can connect and control VNC server machines.

And now the fun begins:

  • VNC servers typically listen on network port 5900. In the client, this is known as “display 0”, which means that a client connects to the port “display + 5900” on the server machine.
  • Using this knowledge, you create a reverse tunnel for your SSH connection to the server: localhost:5900 <- server:5901. Notice how we changed the port. This is not necessary, but possible. (The concrete OpenSSH commands for both tunnels are shown right after this list.)
  • Now, anybody with direct access to your network port 5900 (in your LAN) or access to the network port 5901 on the server (in the remote LAN) can remote control your desktop.
  • Eve creates a forward tunnel for her SSH connection to the server or any other machine in the remote LAN: localhost:5902 -> server:5901. Notice the port change again. Also just a possibility, not a requirement. Notice also that Eve doesn’t need to connect to the same server as you. Her server just needs to be able to initiate a data transfer with the server. This might be through multiple portals that don’t matter in our story.
  • Now, Eve starts her VNC client and lets it connect to localhost:2 (2 is the display number that gets resolved to local port 5902).
  • Eve can now see your screen, control your inputs and even transfer files (depending on the capabilities of your VNC setup).
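
Assuming Eve connects to the same server as you, the two tunnels from the list above map to these OpenSSH commands (user names are placeholders):

ssh -R 5901:localhost:5900 you@server
ssh -L 5902:localhost:5901 eve@server

Because Eve’s forward tunnel ends at the server’s own loopback interface, where your remote forward listens by default, this combination works even without the GatewayPorts setting mentioned earlier.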

This is the point where LAN containment doesn’t really matter anymore. Sure, you cannot reign freely over any machine anywhere, but you can always establish a combination of reverse and forward tunnels to connect two computers as if they were connected by a cable. Which is, in fact, a good analogy to think about SSH tunnels: cables with two different connector types that span between computers.

Wrap up

You are probably aware of the immediate security risks that follow from all that complexity. If you want to mitigate some of the risks, have a look at stunnel. It is effectively a more secure (and constrainable) way to create portals between LANs.

And if you just want to have fun: Please be aware that most of the actions here are probably not legal if done without consent of all participants. Stay safe!

The emoji checksum

This blog article is a story about an idea, not an actual project report. If it were a movie, it would feature the “based on real events” disclaimer.

The warehouse

Imagine the warehouse of a medium-sized company. You would expect a medium-sized warehouse, but in reality, the number of items in this warehouse is nearly as big as in a big company. The difference might be the storage count of each item, but the item count is a big number. So big that each item has its own “item ID”, which is also used as the location identifier in the warehouse. Let’s look at three (contrived) examples:

  • 211 725: Retaining screw, 8 mm
  • 413 114: Power transformer, 5 A
  • 413 115: Power transformer, 10 A

As you can see, different item groups have numbers with a large numerical distance while similar items are numerically close. This makes sense for the engineers using these numbers by muscle memory and for the warehouse navigation. If you read the first three digits, you already know where to turn to in the large hall. If you’ve arrived in the general area, the next three digits lead you to the exact storage space.

The operators

But that’s not how it works. The warehouse workers cannot read. Yes, you’ve read that right. The warehouse is operated by humans and the workers are not familiar with digits and numbers. They decipher each digit on their own and cannot cross-check with the article name. They navigate the warehouse with a best-effort approach. The difference between item 413114 and item 413115 is negligible for them. It’s the same thing anyway – unless you can read (and understand) that one of them blows up above 5 Ampere and the other one doesn’t. And this is a problem for the engineers. The difference between a “Power transformer able to take 10 Ampere” and a “Power transformer (5 A), aka molten copper lump” is the difference between a successful and a failed project.

So what can you do? Teach the warehouse workers how to read and deal with numbers? Would be a good approach if the turnover rate among them wasn’t so high. What else can we do? We can abstract the problem at hand, apply a suitable solution approach and see if it works.

The abstraction

If you think about the situation in abstract terms, you deal with an unreliable data transmission. You send your item list to the warehouse and receive a collection of loosely related items. That’s similar to sending data over a faulty cable. To mitigate transmission errors, we’ve invented checksums. Each suitable part of the transmission is validated (or invalidated) by a checksum.

In our case, the “suitable part of the transmission” is each single item. We should add a checksum to the item list! Instead of requesting item 413114, we request 413114/7, while item 413115 is requested as 413115/1. Now, we have a clear indicator for wrong or right. But it is still an indicator in a foreign alphabet. If you ignore the difference between 4 and 5, why not also ignore the difference between 7 and 1?

The emojification

But what if we don’t rely on numbers or characters, but on something every human can understand, regardless of literacy level? What if we transpose the numbers into an emoji alphabet? Let 413114 be πŸ˜„πŸŒ΅β˜οΈπŸŒ΅πŸŒ΅πŸ˜„ and 413115 is written as πŸ˜„πŸŒ΅β˜οΈπŸŒ΅πŸŒ΅πŸ . But more important: The checksum is in emoji, too:

πŸ˜„πŸŒ΅β˜οΈπŸŒ΅πŸŒ΅πŸ˜„ (413114)

πŸš— (7)
vs.
πŸ˜„πŸŒ΅β˜οΈπŸŒ΅πŸŒ΅πŸ  (413115)

🌵 (1)

Even if you only glance at the emoji series (and fail to notice the difference at the end), you still have to acknowledge that your checksum doesn’t fit. A cactus is no car, regardless of your literacy.

This transposition of numbers into the iconographic realm plays right into every human’s built-in ability to distinguish concrete objects. Numbers, digits and characters are (more) abstract concepts and objects, but a cloud is recognizable as a cloud even if you draw it by hand and without care. The transposition is reversible quite easily – you only have to remember ten number/emoji pairs (or eleven, if your checksum has an extra character). And nobody stops you from printing both on the item list and the warehouse storage boxes.
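
A minimal sketch of such a transposition might look like this in Java. The digit/emoji pairs for 1, 3, 4, 5 and 7 are taken from the examples above, the remaining pairs are made up for this sketch, and the checksum function (a plain digit sum modulo 10) is only a stand-in, because the real scheme is left open here:

import java.util.stream.Collectors;

public class EmojiChecksum {

    // Digit-to-emoji alphabet. The pairs for 1, 3, 4, 5 and 7 follow the examples
    // in the text; the remaining ones are invented for this sketch.
    private static final String[] EMOJI = {
            "⭐", "🌵", "🍎", "☁️", "😄", "🏠", "🐟", "🚗", "🌳", "🔔"
    };

    static String emojify(String itemNumber) {
        return itemNumber.chars()
                .filter(Character::isDigit)
                .mapToObj(digit -> EMOJI[digit - '0'])
                .collect(Collectors.joining());
    }

    // Stand-in checksum: a plain digit sum modulo 10, rendered as an emoji.
    static String checksumEmoji(String itemNumber) {
        int sum = itemNumber.chars()
                .filter(Character::isDigit)
                .map(digit -> digit - '0')
                .sum();
        return EMOJI[sum % 10];
    }

    public static void main(String[] args) {
        System.out.println(emojify("413114") + " / " + checksumEmoji("413114"));
        System.out.println(emojify("413115") + " / " + checksumEmoji("413115"));
    }
}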

And the best thing? You don’t even have to invent the transposition yourself. Just use the existing work of others by checking out emojisum by Vincent Batts or ecoji by Keith Turner.

The only thing that is stopping you is that ancient dot matrix printer that prints the item lists on continuous paper.