It’s not a bug, it’s a missing test

Recently, I changed some wording when I talk about code and code problems. In my opinion, the new words correlate more closely with the root cause of the problem, while the old words stayed “nearer” to the symptoms.

Let me explain the changes with a small code example that serves no other purpose than to contain a few problems:

import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

public class InMemoryItemRepository {
	private final Map<String, Item> items;
	
	public InMemoryItemRepository() {
		this.items = new HashMap<>();
	}
	
	public Optional<Item> itemFor(String itemNumber) {
		return Optional.of(this.items.get(itemNumber));
	}
}

This class is a concrete implementation of an “item repository” that stores items by their “item number”. You can retrieve items by calling the itemFor method and giving it a valid itemNumber. Otherwise, you’ll end up with an empty Optional.

The first problem

The first problem in this code is a code smell. The data structure used to store the items is a HashMap. In Java, the HashMap implementation is not thread-safe:

Note that this implementation is not synchronized. If multiple threads access a hash map concurrently, and at least one of the threads modifies the map structurally, it must be synchronized externally.

https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/util/HashMap.html

With our current implementation, we leak this limitation onto our clients, who will be totally unaware of it, because nothing in the class design hints towards a HashMap. In most production environments, our in-memory implementation might even be swapped out for a database-based one. And the database implementation hopefully is thread-safe.
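
If concurrent access is a realistic scenario for the in-memory variant, one possible fix, sketched here under that assumption, is a thread-safe map implementation:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class InMemoryItemRepository {
	// ConcurrentHashMap is safe for concurrent reads and writes, so the
	// thread-safety limitation no longer leaks onto our clients.
	private final Map<String, Item> items = new ConcurrentHashMap<>();
	
	// itemFor() stays exactly as before
}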

A code smell is a bug

My change in wording calls this type of code problem a bug (instead of a code smell). It might be that right now, this class is only used in a single-threaded manner and the implementation flaw doesn’t matter. In that case, it’s a bug in hibernation. Right now, it sleeps, but the day will come when it wakes up. That doesn’t change its bug-like features, just its immediate damage potential.

If you label a code smell as a bug, you highlight the damage potential instead of the current “nice to have” prioritization.

The second problem

Let’s return to the code example. There is another problem with this implementation: If you ask for an item number that is not stored in the repository, you don’t get an empty Optional, you get a nice and unexpected NullPointerException.

A small unit test that asks for any item number on an empty repository reveals the problem:

	@Test
	void returnsEmptyIfNotStored() {
		var target = new InMemoryItemRepository();
		assertThat(
			target.itemFor("non-existent")
		).isEmpty();
	}

This test will run red with a NullPointerException, pointing to this line in the implementation:

		return Optional.of(this.items.get(itemNumber));

Of course, this is “just a typo” in the way we construct the Optional. Because our HashMap returns null if a given key is not stored in it, we need to call Optional.ofNullable() and have it convert null to Optional.empty:

		return Optional.ofNullable(this.items.get(itemNumber));

A bug is a missing test

This “little fix” makes our unit test pass and the repository useful. My change in wording calls every bug a “missing test”. This points out that the test you write as you fix the bug could have prevented the bug’s damage altogether.

If you were surprised by my assumption that you write tests when you fix bugs, please consider adopting this as a habit. It is the last possible moment to improve your test coverage at a place that has proven to be important enough to have tests and undertested enough to exhibit bugs. If you want your project to be antifragile (and there are good reasons why it should be), start by strengthening it at every place it breaks.

Tests for every code smell?

The combination of both changes would make every code smell a missing test. Right now, I’m not sure if this needs to happen automatically. The existence of a code smell hints towards a more fragile part of the system, but I’m not sure if “potential fragility” is a valid indicator for additional tests.

Of course, if you program completely test-driven, this kind of thought probably seems pointless to you.

What are your thoughts on this topic and specifically on compulsory testing for every code smell?

Characteristics of a Good Merge Request

If you happen to use Merge Requests as a basic mechanism of your software development workflow (and there are compelling arguments that you should), you’ve probably encountered some merge requests that could be handled in a straightforward manner and some that are just painful to work with. Some merge requests “spark joy” and others elicit nothing but displeasure.

What differentiates the good, joyful merge requests from the dreadful ones? We’ve found six (or seven, depending on your categorization) characteristics that good merge requests exhibit and bad ones don’t.

Good merge requests are:

  • Atomic – Only one topic
  • Minimal – Only necessary changes
  • Curated – Double-checked by the developer
  • Finished – Free of deactivated code, debug code, etc.
  • Explained – Provided with commentary in the merge request (not the code)
  • Manageable – Small in changeset size
  • Separated – Either domain-based or technology-based, not intermingled

Bad merge requests fall short in some or even all of these categories. In my experience, the first three aspects are ignored the most, which is why they are at the top of the list. A lot of trouble vanishes simply by staying on course, being succinct and caring about the next pair of eyes.

Let’s look at each characteristic in some depth:

Atomic

Let’s say you watch a romantic comedy movie and halfway through, all of a sudden, it changes into a war movie with gruesome scenes. You would feel betrayed. OK, we don’t work in entertainment, our work is serious. Let’s say you read a newspaper article about the latest stock market movement and in the third paragraph, it plainly changes into a car advertisement. This is called “bait and switch” and is a diversionary tactic or a feint. If you do it unwittingly, you might mean no harm, but you still cause irritation.

An atomic merge request tells only one story, which means it contains changes for only one issue. If you happen to make “just a quick fix” along the way, you break the atomicity property of your changeset and force your reviewer to follow your train of thought, but without the context.

There is nothing wrong with a quick fix or a small refactoring, a little improvement here and there. But it’s not part of your story, so make it a story of its own. Bundling the tiny change with the overarching story makes it harder for the reviewer to differentiate topical changes from opportunistic ones and difficult to accept your main changes while rejecting your secondary ones. If you open up a second merge request for your peripheral changes, you retain the atomicity and the goodwill of your reviewer.

Minimal

A good merge request contains only changes that are essential to the story and consciously modified by the developer. Because we use a plethora of development tools that all want to store their metadata somewhere in our project, we often end up with involuntary modifications in files we don’t know, never looked at and don’t feel responsible for.

The minimalistic aspect of a good merge request puts you, the developer, in the position of responsibility for all content in your changeset. It doesn’t matter that “a tool made that change”. The tool acted on your command. It is not the responsibility of the reviewer to figure out what that change means. It is your duty. Explain the change to your reviewer and if you can’t, revert the change. If the rest of your changes work without it, it wasn’t necessary and shouldn’t have been included anyway. If your changes don’t work anymore, you are about to learn the inner workings of your development tools.

A minimal changeset also implies a reduced number of modified files. That’s true, but you shouldn’t design your system to have an artificially low number of files (or parts). If you don’t lay out your code according to domain structures, you’ll end up with small changesets, but lots of merge conflicts and an ongoing dissonance between the domain model and your software model.

Curated

If you can follow the concept of a merge request telling a story, then you are the storyteller. You need to take care of your narration. In a good story, only necessary bits are told. If you muddy the water by introducing meaningless details and “red herrings”, you essentially irritate your listeners for no good reason.

At least have a second look at the changeset of your merge request. Is there any change present by mistake? That happens often and is not a sign of bad development. It is a sign of neglected care for your teammates if you let it show up in the changeset review, though.

Bring yourself into a position where you are fully aware of your story and the bits that tell it.

Finished

Your merge request should tell a complete story, not a fragment of it. It also shouldn’t show the auxiliary constructs you employ while developing your changes. These constructs might be debug output statements, commented out code, comments in the code that serve as a reminder for yourself (TODO comments play a big role in this regard) or extra code that ultimately proved to be unnecessary.

Your merge request should not contain any of that. After the merge, the resulting code base is regarded as “finished”. Any temporary content will stay forever.

Explained

Your story is probably very clear and recognizable. If not, or if parts of it are more obscure than you hoped, it is good practice to provide an explanation. It is your decision whether to explain yourself in the code (in the form of an inline comment) or in the merge request itself (in the form of a merge request comment). My preference tends to be the latter, because the comment will not become part of the “eternal” code base. The comment is a tool to help the reviewer grasp the details. Your approach may vary, but there needs to be some form of extra explanation from your side if your code isn’t crystal clear.

Manageable

The best stories are short. Humans aren’t good at keeping large numbers of details in their heads and your reviewer is no exception. Keep your merge request small to make the review bearable. A good rule of thumb is a merge request containing no more than 10 files or 250 lines of changed code. While these numbers are arbitrary, they serve the purpose of giving you a threshold you can check. You can’t check the real threshold of your reviewer, so try to stay below it.

If you find yourself in a position where a lot more files are touched and/or the changed lines of code are way above 250, you should think about what led you there. These changes didn’t happen on their own. Your work produced them. Can you adjust your workflow so that smaller milestones are possible? What was the developer action that produced most of these changes? Are we looking at shotgun surgery?

It is easier to review several small merge requests than one big one. Big, unmanageable merge requests may happen, but they should be the (painful) exception. Learn to work in episodes.

Separated

One aspect of software development is that we tend to mix two types of development into one stream of actions: There is development that produces new domain functionality and is inherently domain-based. And then, there is development that is required by technology but probably can’t be explained to the domain expert because it has nothing to do with the domain. Both types of code work together to provide the domain functionality.

But if you look at it from a storytelling aspect, then domain code is a novel while the technical code is more like a manual. Your domain code is probably unique, while your technical code very much looks the same as in the next project. You should try to separate both types of code in your code base. And you should definitely separate them in your merge requests.

Keeping your domain related changes separate from your technical changes gives your reviewer a chance to ask the right questions. With domain code, some aspects are that way because the expert says so. There is no “universally correct way” to do these things. It might look strange or even wrong, but that’s the rule of the domain.

Technical changes, on the other hand, tend to look the same, no matter the domain. You can apply technical experience to these kinds of changes regardless of context. Refactorings are the typical member of these changes. Renaming a variable or method will inevitably lead to a lot of changes in a lot of files. But it won’t affect the domain functionality (that’s the definition of a refactoring!). Keep your refactorings out of domain changesets to keep your reviewers happy.

What next?

There is a lot of adaptation to be done from a “normal” development practice to one that tends to produce merge requests with these characteristics. These changes in behaviour don’t happen overnight.

My proposal would be to start with one aspect and focus on it for one month. Starting with “Atomic” will have the most effect, but you can choose your starting point according to your preference. Just keep it up for one month. After that, you can choose to focus on another aspect for a month, or add another aspect to your focus group and keep two aspects in mind. The main goal of this approach is to form a habit that enables you to produce better merge requests without really thinking about it. It might take some time, let’s say a year or even two, to incorporate all aspects into your day-to-day working style.

After that, you’ll probably look back on your earlier merge requests and smile because you see proof that you can do better now.

The IT architect, Part III: Improve your environment

If you happen to work on a system that scales to the size of an IT landscape, your worst bet is to let it evolve by circumstance. You want to have a plan and act upon that plan. The base for your plan could be a landscape map, which we talked about in the first part of this series. Upon drawing the map, you want to interpret it in order to find the strong points and weak spots. We’ve talked about assessing the map in the second part of this series.

In this blog article, we look at ways to improve our IT landscape towards the goal of overall stability.

Our mission statement

If we want to improve things, we need to know in what aspect the improvement should occur. At the scale of an IT landscape, overall stability is a commonly desired trait. This doesn’t mean rigidity, where you cannot change a thing in the landscape lest the whole thing breaks. It also doesn’t mean that every part of our landscape needs to be stable itself. Overall stability means that even with the inevitable outage or replacement of a part, the whole system still works. The system is resilient to change and failure, at least resilient enough for the organization working with the system.

If our mission is to improve towards overall stability, we need to work on the relationships between our services (or assets, as we called them earlier, because “service” is a greatly overloaded term) more than we need to work on the services themselves.

This doesn’t mean that individual stability of an asset isn’t important. It certainly is, but more often than not, you cannot improve this single value that much. What you can iterate on with recognizable effect is limiting the consequences of lacking individual stability.

Our mantra

The fundamental rule that brings overall stability is the “dependency rule” of the clean architecture that is meant for internal software application architecture. But if we see our IT landscape as one big application (software or not might make less of a difference than thought), we can apply the rule without modification:

All dependencies point towards the center (inside) and never in the opposite direction (outside).

That’s it. You define a center of your map and have all dependencies point towards it. This results in a structure of “rings” around the center that denote different levels of stability. The dependency rule can be rewritten as such:

All dependencies point from the less stable asset to the more stable asset and never in the opposite direction.

If you think of stability only in terms of “service availability”, the percentage of time you can utilize the service without degradation, you’re not thinking far enough. Stability also means a stable interface and a stable implementation. You can have a really rock-solid ISDN internet connection at the center of your IT landscape map, but if your ISP discontinues the technology, the lack of implementation stability will force you to change the asset and hope that all dependent assets (basically your whole map) are not affected by the change.

Planning for obsolescence

Trying to bring the relationships between your assets in congruence with their significance for your IT landscape is the central work of an IT architect. The main question is always: What happens if this asset needs to be replaced?

In IT, there is no such thing as an “eternally working asset”. I’m not well-versed enough in more physical domains like mechanical engineering to say whether this is a universal invariant, but in my field of specialty, everything changes eventually.

If you create an IT landscape where every asset can be replaced with manageable effort and predictable consequences, you’ve created an overall stable system. You can probably improve the availability of parts of it, but you won’t need to overhaul the whole thing over and over again. Your IT landscape is ready to grow, evolve and change, but it does so in a controlled manner and without compromising the mission.

Anti-obsolescence patterns

On your way from your current map to your anticipated one, you’ll recognize recurring patterns that you employ to solve dependency problems or improve the longevity of overall structures. Here are three patterns that have helped me in my endeavours:

Protected variation

If you have more than one implementation for basically the same service (like the example of an internet connection), you probably want the rest of the map to not know about the multiplicity. In this case, you introduce an additional asset that acts as a router between the implementations. Think of the router (or service interface) as a guarding wall for your service implementations. It acts as a “portal” to the real service and can be paper-thin (at least for now). If you want to improve the runtime availability of your service, the router can also act as a load balancer and a circuit breaker. The important rule is that all outside relationships only point to the router, not the actual implementations.
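
Translated into code, the pattern might look like this minimal sketch (all names are hypothetical). The important part is that the rest of the landscape only ever references the router, never a concrete implementation:

import java.util.List;

interface InternetConnection {
	void transmit(byte[] payload);
}

public class ConnectionRouter implements InternetConnection {
	private final List<InternetConnection> implementations;
	
	public ConnectionRouter(List<InternetConnection> implementations) {
		this.implementations = implementations;
	}
	
	@Override
	public void transmit(byte[] payload) {
		// Paper-thin for now: delegate to the first implementation.
		// Load balancing or a circuit breaker could live here later.
		implementations.get(0).transmit(payload);
	}
}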

Opinionated interface

If you find an asset that has a lot of incoming dependencies, you’ve found a change risk. If you swap the service for a newer version with a similar, but not quite equivalent interface, you’ll find that you have to adjust lots of dependent assets in oftentimes surprising fashion. You can reduce your surprise by introducing a “portal” interface, just like you did in the protected variations, but without the variations. The portal or “opinionated service” interface offers everything your other assets require of the original service, but nothing more. It captures the “opinion” of your organization towards the service. When you introduce such a portal, it is nothing more than a forwarding service that maybe handles authentication itself. If you plan to swap the service, the portal becomes your requirement list and its new implementation will convert data back and forth.
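
As a code-level analogy (again, all names are hypothetical): the portal offers only the operations our organization actually requires and forwards them to the much bigger real service.

// Hypothetical stand-in for the real service with its huge API surface.
interface FullBlownEcmSystem {
	void upload(String id, byte[] content, String mimeType, boolean versioned);
	byte[] download(String id);
	// ... dozens of operations our organization never uses
}

// The opinionated interface: everything we require, nothing more.
public class DocumentArchivePortal {
	private final FullBlownEcmSystem ecm;
	
	public DocumentArchivePortal(FullBlownEcmSystem ecm) {
		this.ecm = ecm;
	}
	
	public void store(String documentId, byte[] content) {
		// Our "opinion" hides the parameters we don't care about.
		ecm.upload(documentId, content, "application/octet-stream", false);
	}
	
	public byte[] retrieve(String documentId) {
		return ecm.download(documentId);
	}
}

If the underlying service gets swapped, only the portal’s internals change; its method list doubles as the requirement list mentioned above.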

If you find that your portal gets too big, you could think about multiple portals with their own separate “opinion” about the service, all forwarding to the same “source of truth”:

Now you need to maintain several interface services, but they separate different concerns or contexts into separate assets, which might help with future migrations. Chances are, if there are separate concerns, that they will be provided by separate assets in the future.

Circular portals

The nastiest thing to occur on your map is the ring (see part II of the series). In its smallest form, it’s just two assets requiring each other:

There are no easy ways out of this situation. But we can make some steps in the right direction and see where it takes us. The first step is introducing buffer assets that act as stand-ins for the real asset:

This doesn’t break the ring yet, but it gives us a chance to do so later. The service interfaces are opinionated and maybe even tailored specifically to the service using them. This reduces the “area of dependency” to its minimum. With a little luck, we find that Service A requires things of Service B that, if isolated, don’t require things of Service A themselves. If that’s the case, we can work on splitting Service B in two parts: one dependent on Service A, but not required by it, and one independent of Service A, but needed by it. This would break the ring and give us a long chain that is much easier to work with. The problem is: none of this is guaranteed. In the worst case, you’ll still end up with your circular dependency with extra steps and nothing can be done about it.
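
In code, the buffer assets might be nothing more than two narrow, tailored interfaces (hypothetical names again). Each service only sees the stand-in of the other, which shrinks the area of dependency and prepares the later split:

// What Service A requires of Service B - and nothing more.
interface BillingView {
	String invoiceNumberFor(String orderId);
}

// What Service B requires of Service A - and nothing more.
interface OrderView {
	String customerFor(String orderId);
}

// If BillingView turns out to be implementable without any call back into
// Service A, the billing part of Service B can be extracted behind it and
// the ring becomes a chain.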

Conclusion

When you begin to work with your IT landscape map, you begin to see your assets not in terms of what they provide, but in terms of what others actually require from them. Minimizing the relationships between assets, if not in number then at least in scope, is an appreciable improvement that gives you leeway to make changes in your actual IT setup without compromising the overall structure.

If you accompany your journey towards the best-fitting IT landscape with your map, you always have a plan at hand that you can show to people to form a shared understanding of the current state and the desired outcome. And if you keep old versions of your map in the archives, you can sometimes look back and see how far you’ve come.

The IT architect, Part II: Assess your situation

If you want to work on the scale of an IT landscape, you need to have a plan in the form of a map. In the first part of this series, we talked about creating such a map. This blog entry will give you the basic tools to make sense of all the things on it and how to convey meaning to other people while using the map.

The third part will talk about actionable steps that are a result of our interpretation of the map.

Making sense of the map

You’ve drawn the map of all your IT assets and given all the boxes names that you find useful. You’ve asked around to find relationships between your assets, represented by arrows between the boxes. You’ve moved the boxes around a bit to reduce arrow intersections. The map seems to be as “clean” as it can get at the moment.

Now is the time to apply meaning to the structures you see.

Interpreting loners

The first thing you want to look for is boxes without any relationships. These entities don’t interact with other things on your map and are not required by anything, either. Let’s think of them as independent value sources. If such an asset brings your organization a describable and current advantage, you’ve found the ideal asset.

An example could be the blue box “L” in our example map. It isn’t coupled to any other asset. Let’s say it is a “customer relationship management” (CRM) system. Remember, boxes are not labeled by their actual implementation (in this case, maybe a vTiger or SugarCRM), but by the value they provide for the organization. If your organization needs a CRM (or benefits from its presence), then you have a “loner”, which is a good thing.

If the CRM stops working, the humans in the organization will be unhappy about it, but the outage itself will be limited to the CRM and not spread to other parts of your IT landscape (given that your map reflects reality). If the outage lasts longer, your employees will adapt their work processes to circumvent the pothole in your IT. There will be a lot of post-it notes, at least for some time.

If the CRM is updated to a new version, you need to train your employees, but it won’t require other IT entities in your organization to match that update. The CRM can run on ancient hardware and software, as long as the human requirements are met. A loner on your map is a good thing.

Interpreting relicts

If you find a lonely box without a current use case, you’ve found a relict. Be glad that you’ve found it, because relicts tend to remain hidden and not show up on architect maps. If you can make sure that the relict serves no purpose for the organization anymore, you can eliminate it. Removing an asset from the map (and your real IT infrastructure) is a good thing, because you reduce complexity, costs and risks. There is no IT asset without associated costs and risks.

If, for example, the yellow box “P” represents a computer that provides a service that nobody uses anymore, the computer itself is still present in the network and can be used as a stepping stone for malicious intents. If the computer is, say, a Raspberry Pi that isn’t included in the first tier of workhorse computers, its operating system might be outdated and susceptible to attacks. It doesn’t provide value for the organization anymore, but it increases the organization’s risk.

Revealing this kind of “dead weight” in your IT landscape is a real advantage, because you can cut it out rather easily.

Interpreting rings

A typical structure on your map could be a circular dependency. In its smallest form, it is just two boxes that both depend on each other. The more elaborate ring consists of several boxes that are connected without a clear start and end. This is the worst thing to find.

A ring in your entities means that you have to consider all elements in the ring as one big entity. You cannot modify them independently, neither on the technological level nor in the temporal dimension. A ring is basically a Mexican standoff between all included entities. You can also call it a deadlock. Whatever you call it, it is bad news. You probably want to break the ring as soon as possible.

Breaking a ring would warrant its own blog post altogether. A basic starting point might be the Acyclic dependencies principle of software design. You probably need to split at least one of your entities into smaller parts or introduce a new entity. The least favorable move would be to merge all entities into one bigger entity, creating a monolith. You will regret this move when the inevitable modernization pressure rises.

Interpreting chains

If your entities form “deep” dependency lines where A depends on B, B depends on C, C depends on D and so forth, you have discovered a chain. This structure is less troublesome than the ring, but worth a worry nonetheless. In terms of operational risk, the chain creates a meta-system with a failure rate that is roughly the sum of the failure rates of the chain elements. To make a long story short, you’ll never get a reliable infrastructure with long chains.
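
A short back-of-the-envelope example shows the effect: the availability of a chain is the product of the availabilities of its links. Five services that are each available 99% of the time yield

	0.99 × 0.99 × 0.99 × 0.99 × 0.99 ≈ 0.951

so the chain as a whole is down about 18 days per year, compared to less than 4 days for any single link.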

The longer your chains are, the more ripple effects an outage will have on your IT landscape. Remember that a chain always breaks at its weakest link, but this link will bring down the whole line.

You can reduce the length of a chain of entities in your IT landscape by inserting buffer elements like read-only copies of central data sources. But more important is to think (and talk) about why the dependencies are there in the first place. Maybe your data storage strategy is too decentralized and you would gain some favorable dependency structures by pooling data together (essentially creating a data monolith if you overdo it).

Introducing zones

Recognizing the basic shapes on your map is important, but you also have to look at the forest and not only at the trees. The basic layout of your boxes already tells you a lot about your IT landscape zones.

A zone on your map is a region of boxes that you can encircle and give a superordinate name. The basic rule of a zone is that all entities in it should share a common property. The less technology-based this property is, the better your zoning. A zone for “Java web services” or “bare-metal computers” might be useful, but won’t stand the test of time. Sooner or later, some Java services are replaced by other programming languages and some real machines get virtualized. Do you move them to other zones on your map? What really changed for the users of your IT landscape?

If you concentrate on your users, you might be able to come up with properties that really affect them. Look at this example that takes our initial example and separates it into three zones:

And now, we find a user-oriented name for each zone. In our example, we’ve grouped the entities by user role and are now able to label our zones:

This grouping has the added advantage that the target audience for each modification to the map can be identified nearly immediately. It makes it easier to anticipate the effects of outages or problems and to identify non-cohesive usage of the same tool/entity.

In our example, each box in the “Both” zone is essential to the functioning of the organization. But just because a specific service is used by both other groups doesn’t mean they have overlapping requirements. Maybe it is better for everybody involved to actually divide an entity into two separate boxes in the respective zones, even if both boxes are implemented with the exact same tool/technology at the moment.

Identifying the zones takes your map to the next level. You end up with fewer, but bigger boxes and their dependencies. It’s the same IT landscape, but with less detail. Now you can start your discovery process again.

Conclusion

Your IT landscape map can be interpreted by looking for common structures (like loners, rings and chains) and by defining zones. This allows us to gather a list of problem points that we want to improve. It also allows us to evaluate the expectable ramifications of changes to entities in our IT landscape. And there will be changes. The one (and probably only) constant in IT is that all things change.

In the next part of this series, we look at ways to transform the map from the current state towards a better one. Stay tuned!

Upgrade with a twist

A few weeks ago, I heard a nice story about the hidden cost of new features. Imagine a website, driven by a content management system, consisting of text, pictures and fancy styling. When the content management system gets an update, the website developer takes a look at the release notes and finds that a lot of new and cool features are included that you’ll get for free once you update.

So he updates the site, tries it out and publishes it onto the web. A few days later, the customer and owner of the website sends a bug report about some arbitrarily flipped images. There are just short of a hundred images on the website and a handful of them now show up upside down.

Who would update a website and randomly rotate some images?

Why would a content management system decide that exactly these images need a spin?

The answer is not as obvious as one might think.

The immediate cause of the effect was a change of the imaging library that the content management system uses to deliver the image content. It got upgraded to a new engine that essentially does the same thing as the old one: take the image file content and put it on the web. But it does so more thoroughly.

One feature of JPEG images is the EXIF metadata properties. Examples of useful properties are the photography time, the geolocation or the camera model. Some cameras add even more information to the metadata, like exposure time or the camera’s orientation (rotation) during the photographing process. There are cameras that notice if you hold them upside down and store this circumstance in the picture.

Then, there are imaging libraries that just take the pixels and put them on the screen. And there are libraries that know about their domain, read the EXIF metadata, interpret the rotation data and account for it. Because, who would like to look at pictures that are displayed totally wrong?

The first version of the content management system’s imaging library didn’t care much about metadata. The new version takes rotation into account.
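
If you want to inspect the orientation flag yourself, a few lines suffice. This sketch uses the third-party metadata-extractor library (the file name is made up):

import java.io.File;

import com.drew.imaging.ImageMetadataReader;
import com.drew.metadata.Metadata;
import com.drew.metadata.exif.ExifIFD0Directory;

public class OrientationProbe {
	public static void main(String[] args) throws Exception {
		Metadata metadata = ImageMetadataReader.readMetadata(new File("photo.jpg"));
		ExifIFD0Directory ifd0 = metadata.getFirstDirectoryOfType(ExifIFD0Directory.class);
		if (ifd0 != null && ifd0.containsTag(ExifIFD0Directory.TAG_ORIENTATION)) {
			// 1 = normal, 3 = upside down, 6 and 8 = rotated by 90 degrees.
			// This is the value the new imaging library suddenly honors.
			System.out.println("orientation: " + ifd0.getInt(ExifIFD0Directory.TAG_ORIENTATION));
		}
	}
}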

So, the cause of the suddenly rotated pictures originates with a photographer who happened to work during a workout session or in Australia. This fact was registered and stored by the camera and promptly ignored by the picture editing software and the earlier content management system. It was rediscovered only when the new version went live.

For the customer, this is a random regression. It worked just fine all those years! For the developer, this is a minefield. Every picture could contain an evil rotation information that gets applied someday.

For a security engineer, this is a harmless but perfect example of a persistence attack. You embed malicious payloads into data that do nothing for a long time, but are activated suddenly, without outside intervention, by an unrelated change of system parts towards a “lucky” constellation.

Guess what you can embed into EXIF metadata, too? JavaScript or any other form of executable code. And then you wait.

To end this blog entry on a light note, sometimes the payload may just happen to be your last name – True!

The IT architect, Part I: Map your assets

When I’m tasked with commenting on a software architecture, my first step is to request or draw a map of all distinguishable elements of the software system and give them relationships to each other. This inevitably results in a boxes-and-arrows type of diagram that serves as a base for all future communication about the subject. Having a shared representation about a system is a great way to pinpoint discussions and focus on a particular area without forgetting the rest completely.

When I’m tasked with commenting on IT infrastructure, my first step is to request or draw a map of all distinguishable elements of the IT architecture (or “IT landscape”, a term that I actually prefer because it conveys better that a lot of things on this scale happen unplanned) and give them relationships to each other. Once again, we are drawing boxes and connecting them with arrows.

Being able to rely on this map is an essential base for all communication about IT architecture. And if you know how to read the map, it directs your efforts of consolidating your IT architecture nearly intuitively.

In this blog entry, we talk about drawing the map. The second part goes into interpretation of the map, the third part emphasizes actionable steps based on the map and our interpretation of it. Based on questions and discussion, there might even be a fourth part, but that’s not planned yet.

Your initial boxes

Beginning the IT architecture map is easy: Draw a box and give it a name. The name should correspond to an element of your work environment that is distinguishable from other elements. Note how I don’t say “system” or “service” or “server”. For an IT architecture, these words describe an implementation, a particular manifestation of the architecture. They don’t belong on the map (or this map). If you cannot see the difference yet, think about the floor plan of a house. It doesn’t tell you about the material the house is made of and you can use the same floor plan for a wooden cabin or a marble mansion (barring some pesky statics limitations that I don’t have a clue about). In our IT architecture map, each box represents a “thing” that will, at the latest, get a name the minute it stops working.

There will be boxes in your IT architecture map that don’t relate to anything else. That’s fine and not a problem, as long as the box relates to humans. If you cannot find a meaningful relationship between the box and humans or other boxes, you’ve found a relict. This is in fact one of the hardest tasks in IT architecture analysis, so congratulations!

Adding relationships

Every other box interacts with its environment in some manner. Again, the concrete implementation of that interaction is not important for our map. For our current view on the landscape, it makes no difference if a software system uses HTTP calls to a server or a computer transfers bytes over an RS232 wire to an appliance box. The fact that one box relies on the availability of another box is all that matters. That’s the essence of our arrows: Box 1 requires box 2 to be “online” in order to perform its duties. Without box 2, the functionality offered by box 1 will be limited, down to a point where it is no longer useful to others. Our arrows denote dependencies between boxes. If you happen to be a software developer: we don’t talk about code dependencies here. Also, even if closely related, we don’t mean format or protocol dependencies. We just state that if box 2 “goes down”, box 1 will follow closely.

This is the base for a rule of thumb about dependency arrows: Don’t draw them bidirectionally. Each arrow has one clear direction (like box 1 -> box 2). If you find that box 2 also depends on box 1, you should draw two arrows in opposite directions. As a preview of the interpretation step: This dependency cycle is a sore spot in your current architecture. It means that your two boxes appear as one to the outside. It means that you cannot replace one part without the other. The replaceability of single boxes is an important aspect of your landscape’s health.

Making it readable

When you’ve placed your boxes and drawn the arrows, it’s time to improve the map’s layout. A guideline for the layout is that arrows shouldn’t intersect each other. Another guideline is that boxes that are semantically related should be near each other on the map. These two requirements alone often result in a lot of movement and experiments. You might want to use software that allows for these experiments without much effort.

You’ll recognize a fitting layout when you see it. The map corresponds to your internal landscape representation enough to be useful in discussions. It might look like this real example:

The first thing you’ll notice is that the names are replaced by denotations with zero meaning. In a real map, the box “C” might be named “time tracking” and box “D” could be labelled “issue tracking”. The name should indicate the responsibility of the element/box. You can also add the current implementation of that responsibility, if that makes things clearer. In our example, box “D”, indicating “issue tracking”, might have “(JIRA)” added to the description. Just be aware that your organization probably needs another issue tracking system in that place even if JIRA falls out of favour. Following your arrows backwards, you’ll know which other elements of your landscape will be affected by this replacement. More on that in the next part about interpreting the map.

Evolving the map

Another thing you probably scoffed at are the intersecting arrows in the example. The map’s author came up with this layout as the best representation when the map had fewer boxes. With each subsequently added box, another arrow or two tried to reach the “center”. The intersections are a direct consequence of the emergence of a “center”. This is an important finding of your map: being able to identify your map’s center and deduce meaning from it. To spoil a bit: If your center is “time tracking” and “issue tracking”, you probably charge money per hour to solve other people’s problems.

Conclusion

You’ve probably seen how drawing an IT landscape map can benefit your organization and your discussions about its present and future. One thing you should keep in mind is that the map should reflect the current state and not your desired state of your organization’s IT architecture. That’s what will be addressed in part 3 of this series. Stay tuned!

Want to read more? Head over to part II of this series.

Key ingredients for the home office

Since March 2020, we have transformed from an “on-site” company to a remote company, not particularly because we wanted to, but because the corona pandemic forced us to. Our office is not suitable to ensure transmission safety, so I decided that working from home was the lesser problem. When I say “transform” and “decided”, please bear in mind that these are retrospective notions. The decision was made on Monday, March 16th, 2020 and the transformation happened over the next two days.

But there is a real difference between being operable and fully equipped for the situation. We were operable in the remote situation within 48 hours. But we still work on improving our equipment to match the situation that seems to linger a lot longer than initially anticipated. This blog post tries to summarize what we’ve learned since March 2020 in regards to equipping home office workplaces past the makeshift phase.

The fundamentals

The most fundamental ingredients of any office workplace are the table and the chair. If either of them lacks necessary ergonomic features, your comfort will never be the same as in the office (provided that your equipment there is adequate). And this constant discomfort will permeate everything you do.

The chair was easier to detect because it shows up in the video calls; it is part of the “zoom room”. Still, it took some time to order new chairs or transport existing office chairs to the home offices. If you experience back pain or mechanically induced headaches, review your chair thoroughly.

The table was more tricky, because it is typically invisible during video calls. My approach was to retrieve a photo of every home office space and talk about the possible improvements. During these talks, we came up with two solution categories that I want to present.

The notebook workplace

We all had work notebooks as secondary computers before the pandemic. So it was not a problem to start working from home on that notebook; we’ve done it before. But if all you do is work directly at a notebook, your body posture will be suboptimal even if you have the best chair and table. We equipped the notebook-based workplaces with the following extras:

  • An external keyboard
  • An external computer mouse
  • One (or better: two) additional monitors
  • A matching docking station, at least to connect to the monitors
  • A notebook riser stand

The last item, the notebook riser stand, was the game changer when it came to multi-monitoring (two or three monitors). It elevates your notebook to the same height as the other displays and might even change its angle. This transforms your notebook from being the CPU unit with a cumbersome monitor to a secondary monitor with a CPU. The riser stand doesn’t cost much (if you don’t go overboard with its design) and provides you with more table space and improved displays.

The existing computer workplace

Because we are software developers, we mostly have a very decent computer already fully equipped at home. The only problem: The computer is for private tasks and should stay that way. We want to separate our work environment from our leisure environment as much as possible. But several developers wanted to use their usual “battlestation” for home office work, too.

In this case, we bought a lavish SSD as an additional boot drive for the home PC. This separates the work operating system from the leisure drive as much as the notebook approach does. And the existing hardware can be used for both timeslots, much to the comfort of the developer.

The video conference equipment

But regardless of your approach (notebook or existing computer), there are still some things missing that improve the quality of work of yourself and your colleagues tremendously:

  • A good headset, preferably with top-notch comfort and active noise cancelling (ANC)
  • An at least decent webcam

Most notebooks provide a mediocre webcam and a low-quality microphone. Do yourself (and your communication partners) a favor and invest in a good microphone. Oftentimes, it is coupled with good headphones. The difference between a good audio setup and an echo-prone makeshift solution is the deciding factor when essentially all communication with your colleagues goes through this channel.

The webcam is not as essential as the audio equipment, because it “only” affects your communication partners, but it adds a nice touch to your other equipment. You don’t have to go overboard on it, a model for one hundred euros is already an improvement.

The bottleneck

One thing that can really invalidate most of the other improvements is a slow internet connection. This is probably the hardest thing to fix in a timely manner, but give it some thought. If your internet connection is too slow for your daily work and communication pattern, it will be a constant annoyance. Just because it takes some time to improve doesn’t mean you shouldn’t try.

We will probably remain in this situation long enough to still reap the profit of our effort. And even if not (at least we can hope), nobody ever complained about an internet connection that is a bit oversized.

TL;DR

If you didn’t read the article, here are the major take-away points neatly summarized:

  • Ergonomic chair and table
  • Notebook with external keyboard, mouse and monitor
  • Notebook riser stand
  • SSD for dual boot systems
  • Good headset
  • External webcam
  • “Broadband” internet

If an item from this list is missing in your home office, give it a thought. And if you plan to think about only one item, think about your chair first.

What are your experiences with working from home? What accessory makes your work life better? Give us a hint by writing a comment below. Thank you!

The spell that reveals your onboarding decade

Every one of us has started somewhere. By telling you what my first computer was, I also convey a lot about the place and time my journey in IT started. For many of my fellows, it was a Commodore C64 or an Amiga 500. But even if I don’t tell you about my first machine, there is a simple “magic spell” that you can cast to at least get a hint about the decade my first working days started, 15 years after my first contact with computers.

The spell is just one word: “container”. What a container is and how to use it is bound to the decades. Let me guide you through some typical answers.

Pre-2010 answer

If you entered the industry around the year 2000, a container was a big chunk of software that you preferably installed on an even bigger machine, the infamous “application server”. The container, or “servlet container”, “application container”, or, if you were with the right folks, “enterprise bean container” (in short: EJB-Container) was the central hub to host all of your web applications. If you deployed your application into the container, it handled the rest, like unpacking the web archive, providing resources and publishing to the internet. Typical names of containers were Tomcat, Jetty, JBoss or WildFly. You can probably see them around even today, because the concept itself is appealing. Some aspects of it inevitably lead to problems, though. Resource management was a big topic. Your application wasn’t expected to care for a database connection, a logging context or, sometimes, even security features, because the container provided those things to it. As you can probably imagine, that left your application crippled and unable to function outside a container.
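
A small sketch of what that felt like in practice (the JNDI resource name is made up): the application never manages its own connection pool, it asks the container for one.

import java.io.IOException;

import javax.naming.InitialContext;
import javax.naming.NamingException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.sql.DataSource;

public class ItemServlet extends HttpServlet {
	@Override
	protected void doGet(HttpServletRequest request, HttpServletResponse response) throws IOException {
		try {
			// The container provides the database connection pool under an
			// agreed-upon JNDI name; the application just looks it up.
			DataSource dataSource = (DataSource) new InitialContext()
					.lookup("java:comp/env/jdbc/ItemDB");
			response.getWriter().println("pool available: " + (dataSource != null));
		} catch (NamingException e) {
			// Outside a container, this lookup fails - the application is
			// crippled without its host, exactly as described above.
			throw new IOException(e);
		}
	}
}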

Pre-2010 containers

So if you onboarded more than ten years ago, your first thoughts reacting to the word “container” will be “big machine”, “slow startup” and “logging framework”. There cannot reasonably be more than one container per machine. Maintaining a cluster of containers would be the work of luminaries. Being asked to start a container on your developer machine is a dreadful endeavour. “Booting the container” is a reason to visit the coffee machine.

Post-2010 answer

But if you started your career less than ten years ago, your reaction to the word “container” will be different. Starting in 2013, a technology named “Docker” reinvented an old practice to isolate processes and package them into a transport format. Simplified enough, a container is just the RAM-based projection of an application image. You boot a container by loading the image into RAM. That’s one of the fastest things you can do on a computer (not really, but it fits the story better). Even better, because each container ideally contains just one small application or part of it, you don’t boot one container per machine, you can run dozens at the same time. Each container brings everything it needs with it and only relies on three common external resources being provided: networking, persistent storage and a facility to dump logging output.

Post-2010 containers

It is good practice to partition your application into several containers of the post-2010 kind. It is good practice to have them talk to each other over the network, either real or simulated. The lines between actual computers get blurry real fast with this kind of containering.

As a youngster, your first thoughts reacting to the word “container” will be “just one?”, “scale up” and “log output management”. You see an opportunity to maintain a cluster of containers. Being asked to start a container on your developer machine is a no-brainer. “Booting the container” is a reason to automate your container infrastructure.

The reactions to the word “container” are very different, based on socialization period. In the old days, pre-2010 containers were boss fight adversaries. Nowadays, post-2010 containers are helpful spirits that just need to be controlled.

Post-2020 answer?

What better way to control the helpful spirits but to deploy them to an environment that handles unpacking, wiring, providing resources and publishing to the internet? Your application isn’t expected to care for topics like scalability, cluster robustness or load balancing. The environment, your container cluster platform, handles those things for you. There can only be one cluster platform per cloud. Being asked to start a cluster platform on your developer machine – well, that’s just not possible, sorry. Best we can do is a minified version of it. Our applications tend to function poorly outside a cluster platform.

As you hopefully can see, developers of all decades crave a thing they tend to call “container” that they can throw their software into to have it perform well without all the hassle of operations. But as soon as they give away responsibility for the environment, they also give away the possibility of comfortable “developer machine” operations. The goal is the same, just the technicality of what exactly a “container” happens to be changes over time.

What is your “spell” that reveals a lot about the responder?

Three programming languages the world isn’t ready for yet

The year 2020 is coming to an end and we can finally relax a bit. In order to lighten up your mood, this blog entry is composed entirely of humor, satire and plain silliness. Nothing in it has any resemblance to reality and you should not try any of this at work. But if, for whatever reason, you find something useful in here and go on to revolutionize the world of software development, remember that we called it first.

There are many programming languages for all sorts of purposes. If you’ve developed software for some decades, you’ve seen them appear, become useful and get forgotten over the span of time. But what will the future bring? Here are the descriptions of three programming languages that have their purpose, but the world is not ready for them. They aren’t even invented yet!

A programming language for long-lived projects

Most of today’s source code is categorized as “legacy code”. This derogatory term describes code that is old, unwieldy or just too clever for current programmers. Typical programming languages that have lots of legacy code include Cobol, C and Java. Most programmers don’t associate themselves with that code. It’s “other people’s” code. But there is one programming language that not only embraces the notion of “legacy code”, but in fact imposes it. This programming language is “Legacy”, the most productive one to produce heaps and heaps of, well, legacy code in short amounts of time.

A unique feature of Legacy is that the code can be written at nearly the speed of thought, but is impossible to decipher even minutes later. A typical Legacy project doesn’t employ version control to differentiate between new and old code, but line numbers: lower numbers indicate older code, while higher numbers are written more recently. To really drive this point home, every line of code needs to start with its line number, just like in good old BASIC. A useful convention in Legacy is to choose the line number based on your current timestamp, like 20201221181736 (the moment this text got written). Modern Legacy IDEs do this automatically for you.

(A cool but seldom used syntax feature that is based on the timestamp convention is the time-relative jump: You can address your jump target by absolute or relative line number, but even cooler is the relative amount of time: “jump -3d” resumes code execution at the line you wrote three days ago. Just remember: “jump +3d” is equivalent to undefined behaviour for most practical use cases. Only Legacy wizards can pull the “just-in-time jump” off in a useful manner.)

The most pressing issue with Legacy is its third-party dependencies: there are none. All dependencies are second-party dependencies, meaning they are much more involved in your project than usual. In order to compile or deploy a Legacy project, you need to have the exact version, down to the patch and oh-crap-i-forgot-hotfix number, of

  • the Legacy SDK
  • the compiler
  • the IDE
  • the Legacy runtime
  • and your text encoding

The last point might be surprising, but given the different versions of Unicode and even UTF-8, the Legacy ecosystem has chosen to follow the ideal of Python that dictates the indentation, but redirects it to the parts outlined above. You don’t get to choose the compiler version, the compiler version chooses you, based on your Unicode level. By the way, indentation is a no-brainer in Legacy: each line starts with its line number, which is enough indentation already.

If you want to deploy a Legacy project to a production server, you need, by the rules above, the exact machine with a perfect replication of all installations used for development. Because this is a painful endeavour, most developers have adopted the best practice of “one machine per project” and develop directly on the production server. Most of the time, this is a surprisingly powerful machine, making programming even faster (remember, the goal is to produce the most code in the least time). It also shortens the delivery pipeline and facilitates communication between business and development departments, even if not of the pleasant type.

A curiosity that novice Legacy programmers often don’t grok at first is the IMPOSE keyword. It is a variant of the IMPORT functionality of other languages, but doesn’t extend the capabilities of your code. Instead, it limits the ability of the developers in this project by the given imposition. A typical example would be the line

IMPOSE variable name length <= 3

That, as you can read in clear text, limits your variable names to three characters or fewer. You can often find Legacy code with variable names like “usr” instead of “user”, “pwd” instead of “password” and “idx” instead of “index”. They all follow the imposition above, increase your typing speed and speed up the compilation, which counts as a triple win.

So, if you want to impress your customer with huge amounts of important looking code and build a certain reputation among peers, Legacy might be your new favorite language. And if anybody calls your work result “legacy code” in the future, you should feel validated and proud.

A programming language for mission-critical software

Software written for high-stakes contexts like flight control, medical supervision and power plant management needs to meet extreme requirements in regard to correctness, robustness and resilience. Most mainstream programming languages have reacted by providing additional complexity to address the situation. For example, the demand for correct software has led to the rise of testing frameworks that introduce additional syntax and require additional source code that is, by definition, untested in itself.

This is the problem the inventors of “Untested” try to solve. By writing your code in “Untested”, you can forgo all the extra effort of trying to prove it right. Untested code is, by definition, good enough without tests. Remember Michael Feathers’ definition?

To me, legacy code is simply code without tests.

Michael Feathers in his book “Working Effectively with Legacy Code”

If you are ok with “Legacy”, you probably also enjoy “Untested”. The language makes it impossible to write tests for your code, so you can fend off the demand for them more easily. Your boss cannot ask for things that are impossible to do.

One interesting way in which “Untested” wards off calls from test routines is to couple every statement with a side effect in the hardware (oftentimes the TRAP flag on the CPU is flipped). Most programmers in traditional languages find such lines untestable and try to factor them away in order to test the rest. Untested factors away the rest. You don’t need to feel guilty about your lacking test coverage – it’s a feature, not a bug.

If your boss asks whether a certain module is thoroughly tested, you can respond “yes” in good faith. It’s tested in the best manner possible with “Untested”. If you need to give an overview of your system, you can write “Untested” beside every module and your reviewers will accept it as accurate.

Oh, the problem of long and tedious code reviews is taken into account, too. Because “Untested” code is just “Legacy” code (see Michael Feathers’ definition above), it is impossible to read and understand anywhere except on the developer machine (aka the production server). If a thing is impossible to do, why even start trying? This will give you more time to produce “Untested” code.

And if problems arise in production? Well, you are already on the machine, so you can just hotfix it. Nobody can blame you, you’ve stated again and again that it’s untested code.

By the way: “Hotfix” is another promising programming language worth speaking about, but that would go beyond the scope of this blog entry. Might add it later, though.

A programming language for non-programmers

The central tragedy of software development is that the people who CAN program don’t know what they SHOULD program, and the people who know exactly what SHOULD be programmed CANNOT do it. The latter group consists mostly of managers and people with million-dollar ideas.

The new kid on the programming language block tries to solve this problem by utilizing state-of-the-art artificial intelligence in the compiler AND the runtime. We are talking, of course, about “Straightforward”. It’s a programming language with a natural syntax that’s so easy and lenient, you can call it, well – you probably get the joke by now.

Remember the last time a stunned manager tried to explain a new feature to you and, when you came up with an estimate of weeks for the implementation, shouted “but it’s straightforward!”? He was talking about his preferred programming language, and you probably misunderstood him again.

“Straightforward” is so popular with the business folks because its compiler is in the “do what I mean” category of compilers. Using natural language recognition, it infers the most probable meaning of your code, looks it up on the internet and translates it into machine code. The first versions used sites like stackoverflow.com for the translation step, but that didn’t work out, because the site is filled with developers, not business people. Newer versions just access the cloud and find the answer there.

The machine code of “Straightforward” is not actual binary code, but an intermediate representation, much like Java’s bytecode, only for non-technical concepts. Because these concepts are subject to interpretation and the zeitgeist, they are interpreted again at execution time by another artificial intelligence. This approach might be a bit demanding on processing power, but that’s just a financial problem. The big advantage is that code like “Make the colors more lively!” is both compilable and executable and yields the correct results regarding the current fashion every time. Your color scheme ages slower, or virtually not at all, with this straightforward code.

The only problem that prohibits widespread adoption of “Straightforward” in the business right now is the unsolved equation:

Do what I mean != Do what I want

This is a fundamental theoretical problem in the field of management, much like P vs. NP in computer science. The race has already started; whoever solves this equation first gets the prize. It is rumored that quantum computing is the key to both. But I suspect that if quantum computing ever becomes available for everyday use, other programming languages like “ASAP” will take over the market.

Your turn

I hope this blog post has entertained (and maybe inspired) you. Now, it’s your turn. What is the programming language you always wanted to use? Be silly, be creative, be vocal. Write a comment below and tell us!

From multiplayer Pac-Man to a twenty year old company

This blog post does not contain big insights. It’s just the story of the very first days of our company, which happens to celebrate its 20th anniversary this month. And because most stories begin a lot earlier than when the narrator begins to tell them, I’ll try to tell this one from the start.

It starts with an eight-year-old boy that has access to his very first personal computer, a Tandon 8088 with 8 MHz. Just to put this glorified pocket calculator in today’s perspective: A basic Arduino board has more power. But back in the day, this personal computer was a magical tool that could act as all kinds of things, including a gaming machine. One of the first games on this machine was Pac-Man, in 80×25 character ASCII “graphics” and without any scoreboard or competitive element. It was strictly single-player and the computer-controlled ghosts acted strictly by their algorithms, so it became a repetitive chore rather soon. The boy would play the usual route, add some new steps at the end and watch the ghosts react. After some time, the boy could predict the ghosts’ reactions and plan the new steps with accuracy, clearing level after level. The ghosts never adapted.

By the age of twelve, the boy knew that he would become a “computer engineer”. Every occupational counselor (two in total) advised against this decision, not because it was a bad one, but because the counselors didn’t know anything about the profession. But the boy stuck to his decision and began his studies in computer science immediately after school was over. This was in 1997, when the internet still made sounds and you could ruin an hour-long download just by picking up the phone.

The boy, now a young man far from home, studied basic computer science for six months until the semester break arrived. Most other students returned home, but he stayed and teamed up with other students still on campus. They planned to program a computer game. A Pac-Man game, but with multiplayer abilities. One team would be “the players” or “pac-men”, the other team would be “the ghosts”. If there weren’t a dozen human players in front of the keyboard, the computer would control the remaining avatars. Game controls worked with a split keyboard and – planned for later versions – over the network.

The only way the students knew how to organize the project was to transform one room into a computer-ridden workshop and hack away. Every horizontal surface in the room became a desk. The project was supposed to happen in a span of 24 hours. Today, this would be called a “game jam”. After 24 hours, all we had was a map. No game, no players, nothing exciting – just the future game’s map. But we agreed to continue working until the game was finished.

It took the three students a whole week. A week with little sleep, hasty food and lots of source code. Because we didn’t know about version control yet (nobody told us, and we didn’t set up a local network anyway), we had to structure the code in a way that would allow us to work on different parts without collisions and transfer them from computer to computer using floppy disks. We had to maintain a list of modified files and did so on a central whiteboard that the young man had bought at the beginning of his studies. This whiteboard became the planning area where we would keep track of our modifications, tasks and concepts, including the stereotypical post-it notes. In hindsight, you could call it a chaotic storyboard. Without the whiteboard, we probably would have failed.

But after the week, the game was finished. We had developed a multiplayer Pac-Man in Java, complete with graphics, sounds and multi-threading. It was playable! We named it “Hubert 2D”, a reference both to “Duke Nukem 3D”, a very 90s game, and to one of our most famous fellow students. The game was blazingly fast – so fast, in fact, that you often lost track of your avatar. The unofficial motto of the game turned out to be “where am i?”.

It was crammed with features. Just a Pac-Man where you could gobble up little pills and evade the ghosts was not enough for us. First, there was Hubert, the boss ghost. He appeared randomly and could not be player-controlled. He had a rocket launcher. If you defeated Hubert, you could grab the rocket launcher and, well, launch rockets. How can you defeat a rocket-launching ghost in Pac-Man? With your chainsaw, evidently. Players could pick up chainsaws to defend themselves against the ghosts. Ghosts could pick up energy shields to defend themselves against the chainsaws. Players could place mines to blow up ghosts that didn’t pay attention. Ghosts could place bombs to create new passageways to evade the mines or blow up the players. Sheep wandered around cluelessly, getting blown up by mines, bombs, chainsaws or rockets and generally acting like mobile roadblocks. Teleporters added to the confusion by instantly teleporting you to either another teleporter or a random place on the map (leading to the infamous “where am i?”). But above all, you could poison and heal other avatars with various potions. Taking everything into account, this wasn’t Pac-Man anymore. This was team deathmatch that lasted until all the pills on the map were accidentally gobbled up.

Two funny moments during development and testing (aka playing) will always stay in my memory:

  • You could poison an avatar, but also heal it with medicine. Being healed was indicated by a “hallelujah” sound effect. But, because every new avatar on the map was created in the “healed” state, we had a serious “hallelujah” epidemic going on. It took us way longer than it should have to connect the dots and eliminate the sound effect during creation (a sketch of the bug pattern follows after this list).
  • Every avatar on the map moved at the same speed. Some avatars like bombs or mines decided not to move at all, others like sheep and Hubert only moved sometimes, but rockets flew twice as fast. So it was not possible to outrun a rocket. Because of this imbalance in power, we deemed the Hubert boss invincible in close combat. You could not walk up to him without facing a rocket that reached you at least a tile before you could employ your chainsaw. We were proven wrong when one player used an energy shield in combination with a chainsaw and a hallway corner to sneak up on Hubert, neutralize the first rocket with the energy shield and defeat Hubert with the chainsaw before the second rocket could be fired. Because we thought that Hubert was invincible, this move didn’t gain any in-game points. But the moment turned legendary immediately.
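
The bug behind the “hallelujah” epidemic is a classic pattern: a state transition that carries a side effect gets reused during object creation. Here is a purely hypothetical reconstruction in Java – the original source lives on a floppy disk (see below), so every name in this sketch is invented:

class SoundBoard {
	static void play(String file) {
		// stand-in for the real sound system
		System.out.println("playing " + file);
	}
}

public class Avatar {
	private boolean healed;

	public Avatar() {
		// Bug: creation reuses the heal() transition, so every
		// new avatar announced itself with a "hallelujah".
		heal();
	}

	public void heal() {
		this.healed = true;
		SoundBoard.play("hallelujah.wav"); // side effect on every transition
	}

	public boolean isHealed() {
		return this.healed;
	}
}

Our actual fix back then was simply to eliminate the sound effect during creation; setting the field directly in the constructor instead of calling heal() achieves the same.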

This week of intensive teamwork, combined with the result of an actual game, provided us with the trust and groundwork for future collaboration. So it was no wonder that, just a few semesters later, we came up with the idea of selling this ability to collaborate in the form of a software development company. We were more knowledgeable, better equipped and had practiced working together multiple times. What better time than now?

So we founded our company, the Softwareschneiderei (“software tailoring”), in late 2000, twenty years ago. Because we really were earnest about it, we invested the money to create a limited liability company and had to learn, in a very short time, all the topics and obligations that follow from such a creation. We were still studying at university, but working for our own company, in a rented office, in every free minute. Our primary goal was to finish our studies with a degree. Our secondary goal was to let the company survive long enough to make it the primary goal after graduation. The plan worked out, and here we are, twenty years later.

Statistics say that only one out of ten companies survives its first five years. Even after that, keeping a company afloat is not all smooth sailing. Somehow, we made it. Despite all our mistakes and misconceptions (and there were many, most on a more serious level than deeming Hubert invincible), we developed our company in a way that provides benefit for our customers and profit for our employees.

And in a corner of my desk drawer, there is still a 3.5″ floppy disk labelled “Hubert 2D”. Because that’s the source code, written 23 years ago, that got this company started.