Transposition as a programming technique

If you have been programming for a while, you will probably, and hopefully, agree that it is preferable to have a sequence of functions as opposed to the same number of functions nested. In other words, call-graph breadth is better than depth. Among other reasons, a “linear” set of instructions is often easier to follow, which is better for humans, and also tends to not go haywire with what memory it touches, which is better for computers.
However, deep call hierarchies occur much more often than I would like. I have seen call stacks well beyond 200 functions deep. But this need not be – one can often be turned into the other by transposition. Transposition derives from the Latin transponere, which roughly means “to put across”. With matrices, it means swapping rows and columns. Similarly, we can swap call-hierarchy depth for breadth.

The example

A couple of months ago, I was tasked with programming a standing-wave display of power-line voltage curves. As you might know, the signal is roughly sine-wave shaped at about 50Hz. The signal is captured in time windows of 200ms, i.e. there’s a new packet of data 5 times a second with 10 sine cycles in it. However, the frequency value jitters just enough to make the signal drift a bit in the 200ms window, i.e. the wave moves forwards and backwards a little bit. The standing-wave feature tries to remove that drift and make it seemingly stationary in our fixed time window, so changes in amplitude become more visible.

Algorithm 1

The idea seems simple enough for just one signal:

  1. In the previous wave, search backwards to find the spot where the wave crosses from positive to negative.
  2. Take the previous wave from that point on and stitch it together with the current one, and cut that off at 200ms of data (see the sketch below).
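As a rough sketch of steps 1 and 2 for a single signal – assuming the samples arrive as a plain std::vector<double>; the function names and the cut-off parameter are my illustration, not the original code:

#include <cstddef>
#include <vector>

// Step 1: search backwards through the previous wave packet and return the
// index of the last sample where the signal crosses from positive to negative.
// Returns 0 if no such crossing is found.
std::size_t last_downward_crossing(std::vector<double> const& samples)
{
  for (std::size_t i = samples.size(); i > 1; --i)
  {
    if (samples[i - 2] >= 0.0 && samples[i - 1] < 0.0)
      return i - 1;
  }
  return 0;
}

// Step 2: stitch the tail of the previous packet (from the crossing on) to the
// current packet and cut the result off at max_samples (i.e. 200ms worth of data).
std::vector<double> stitch(std::vector<double> const& previous,
                           std::vector<double> const& current,
                           std::size_t max_samples)
{
  std::vector<double> result(previous.begin() + last_downward_crossing(previous),
                             previous.end());
  result.insert(result.end(), current.begin(), current.end());
  if (result.size() > max_samples)
    result.resize(max_samples);
  return result;
}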

But there is not just one signal, there can be hundreds. And they should all be aligned to one designated “master” signal. So now we add extra steps:

  3. For all other signals, find the wave packets overlapping (in time) with our new stitched wave packet.
  4. Order them, and stitch them to a new wave packet covering exactly the same time window.

Now even in this version, finding the right packets for a time interval can be more tricky than it seems, because the values for the signals come in irregularly and can be shifted significantly. So you just buffer the last N (5?) packets for each signal and search in there. Still, one more requirement remains. For the display of archived data, the algorithm should work on batches of waves, i.e. many seconds worth, which made step 3 harder by extending the search space. So add:

  0. For each previous and current pair in a given time-interval:

Now the whole thing was pretty much implemented with each of steps 0 to 4 being a function calling into the next step, with major loops on the 0th and 3rd steps. The wave data flows through these implementation layers vertically, i.e. from step 0 to step 4 and back, but the control flow of the program does not. It flows perpendicular to it, horizontally, solely controlled by the outermost loop. It is intuitive to write it this way – after all, the control flow follows the flow of time in the data we are processing – but the code was not particularly easy to follow, especially with the search in step 3 becoming unnecessarily complex.

Algorithm 2

Now let us try transposing this, and match the flow of data with our control flow:

  1. Gather all relevant signals for the time interval and sort their packets.
  2. Extract all the “stitching” time codes from the master signal.
  3. For all signals, traverse pairs together with the time codes and stitch accordingly.

The whole process becomes more digestible, and processing the data in stages made it obvious that sorted data makes using a “merge” type algorithm very easy.
Both algorithms use the same data, but the second makes it explicit, while the first just passes it through the call-stack in chunks.
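To make that difference tangible, here is a toy C++ sketch that contrasts the two control flows. The “steps” are placeholder arithmetic, not the actual wave-stitching code; the point is only the shape of the call graph:

#include <iostream>
#include <vector>

// Depth-oriented version: each step calls into the next one, so the data
// flows vertically through the call stack, driven by the outermost loop.
int process_one_nested(int value)
{
  auto step2 = [](int v) { return v * v; };          // innermost step
  auto step1 = [&](int v) { return step2(v + 1); };  // calls into step 2
  return step1(value);
}

std::vector<int> process_nested(std::vector<int> const& input)
{
  std::vector<int> output;
  for (int v : input)
    output.push_back(process_one_nested(v)); // the outer loop drives everything
  return output;
}

// Transposed version: the same work as a flat sequence of stages, each stage
// processing the whole batch before the next one starts (akin to loop fission).
std::vector<int> process_transposed(std::vector<int> input)
{
  for (int& v : input) v += 1;  // stage 1 over all data
  for (int& v : input) v *= v;  // stage 2 over all data
  return input;
}

int main()
{
  std::vector<int> const data{1, 2, 3};
  std::cout << std::boolalpha
            << (process_nested(data) == process_transposed(data)) << '\n'; // true
}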

Conclusion

I have since used this idea of “transposition” a few times to clean up and simplify my designs. It seems especially helpful when trying to decouple messaging from bulk processing.
The idea of looking at the data flow and adapting the control flow to match it is central to data-oriented design. I argue that while this can be used to optimize programs, transposition is mainly a tool to make programs simpler, which can then lead to optimization. Separating processing into stages is also very similar to loop fission.
Have you used a technique like this before? Do you, perhaps, know it by another name? Let me know!

Don’t use wrapper

It’s a bad word for a piece of code, and you should feel bad for using it. Here is why:

1. It is easily phonetically confused with “rapper”

Well, this one is actually funny. Really the only redeeming quality. So if someone tells me that they “made a wrapper”, I immediately giggle a bit inside.

2. Wrapping things is a programmer’s job

As programmers, we are in the business of abstractions, and a function clearly is an abstraction. A function that calls something else wraps that something else. So isn’t everything a wrapper?
Who would say that the following function is a wrapper?

template <class T>
T multiplication_wrapper(T a, T b)
{
  return a * b;
}

It does wrap the multiplication operator, does it not? Of course, the example is contrived, but many people call equally simple functions “wrapper functions”.

3. It is often a bad analogy

When you wrap something, like a present, you first need to unwrap it to actually use it. So in that case, it acts more like a kind of envelope. This is clearly not the case for what most people call wrappers. You could wrap some data in a .zip file – that would make sense! But no one uses it like that.
Another use of the word wrap implies something that goes around something else, forming a fixture of sorts. Like a wraparound baby sling. So I guess this could work for some uses, like a protection layer. Again, it is not used like that.
Finally, there’s wrapping up something, as in finishing something. Well, maybe if you’re wrapping your main function around the rest of your code, you can finish writing your program. A very monolithic approach.

There are plenty of better alternatives

In most cases, what people should rather use is either facade or adapter. Both names convey a lot more meaning than wrapper. A facade is something that wraps code to make the interface nicer. An adapter wraps an interface to integrate with some other piece of code. Both are structural design patterns. Both wrap something. But then again, that could be said for all of the structural design patterns. Or, most code. Except maybe assembler?
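To illustrate how much more the name conveys, here is a tiny, made-up adapter – the Logger and VendorConsole interfaces are invented for this example:

#include <iostream>
#include <string>

// The interface our application expects.
struct Logger
{
  virtual void log(std::string const& message) = 0;
  virtual ~Logger() = default;
};

// A third-party class with an incompatible interface.
struct VendorConsole
{
  void write_line(char const* text) { std::cout << text << '\n'; }
};

// The name says why it exists: it adapts VendorConsole to the Logger interface.
class VendorConsoleAdapter : public Logger
{
public:
  explicit VendorConsoleAdapter(VendorConsole& console) : console_(console) {}

  void log(std::string const& message) override
  {
    console_.write_line(message.c_str());
  }

private:
  VendorConsole& console_;
};

int main()
{
  VendorConsole console;
  VendorConsoleAdapter logger(console);
  logger.log("calling this an adapter tells you why it wraps");
}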

So please, calling something a wrapper is not enough. You might as well just call it a function/object/abstraction. Use adapter, facade, decorator, proxy etc. The why is more important than the what.

The four archetypes of cloud users – part 1 of 2

In the occupational field of accounting, a strong trend towards cloud services is noticeable. Everything needs to be digital, and by digital, they mean online, and by online, they mean in the cloud. Every expense voucher needs to be scanned and uploaded, because in many cases, it can then be booked automatically. In the new era of accounting, human intervention is only needed for special cases.

I see this as a good example of how digital online services can transform the world. Every step in the process would have technically been possible for the last twenty years, but only the cloud could unify the different participants enough so that a streamlined end-to-end process is marketable to the masses. And in this marketing ecstasy, the stakeholders that profit the most (the accountants) often forget that their benefits are just a part of the whole picture. In order to assess the perceived and actual benefits for all stakeholders, you at least need to assign an archetype to each participant.

The four archetypes

In my opinion, there are four different archetypes of cloud users. Let’s have a look at them and then assess the risks and potentials when selling a digital online service to them. I’ll list the archetypes in the order from biggest risk to biggest potential.

Archetype 1: The tinfoil hat

A person that could be identified as a “tinfoil hat” doesn’t need to be a conspiracy weirdo or paranoid maniac. In fact, the person probably has deep and broad knowledge about technology and examines new technologies in detail. The one distinctive feature of the tinfoil hats is that they take security, including IT security, very seriously. They don’t take security for granted, don’t trust asseverations and demand proof. You can’t convince a tinfoil hat by saying that the data transfer is “encrypted”, you need to specify the actual encryption algorithm. Using RC4 ciphers for the SSL protocol isn’t good enough for the tinfoil hat. You need at least to prove that you understood the last sentence and took actions to mitigate the problem. Even then, the tinfoil hat will hesitate to let any data out of his hands and will often choose the cumbersome way in order to stay safe. “Better safe than sorry” is his everyday motto.

Tinfoil hats always search for scenarios that could compromise their data or infrastructure. They are paranoid by default and actively invest in security. “On premises” is the only way they deploy their own services, and “on premises” is how they prefer to keep their data.

Typical signs of a tinfoil hat archetype include:

  • self-hosted applications
  • physical servers
  • lack of (open) wireless network
  • physically separated networks
  • signed and encrypted e-mails

Trying to sell a cloud service to a tinfoil hat is like trying to sell a flight to an aviophobe (somebody with a fear of flying). There is always another way to get from A to B, seemingly safer and more controllable. If you are selling cloud services, tinfoil hats are your worst nightmare. If you can convince a tinfoil hat, your product is probably made of fairy dust and employs lots of unicorns.

Archetype 2: The clipboard

Clipboard people are wary of new technologies, but assess them in the context of usability. They demand high security, but will compromise if the potential of the new technology far exceeds the risk. Unlike the tinfoil hat, the clipboard sees his role as an enabler, but he will not rest in his efforts to increase the perceived or actual safety of the product. You can appease a clipboard by giving evidence of security audits from a third party. They will trust known authorities, because it means that they can always deflect blame to these authorities in case of an incident.

Clipboards run on checklists, safety protocols and recurring audits. They don’t try to avert every possibility of a security breach, but will examine each incident in detail and update their checklists. They don’t care about “on premises” or “off premises” as long as the service is reachable, safe enough and reliable. If a cloud service has a higher availability than the local counterpart, the clipboard will think about a migration.

Typical signs of a clipboard archetype include:

  • Virtual Private Networks (VPN)
  • Two-Factor Authentication
  • Token-Based Authentication
  • Strong Encryption

The clipboard will listen if you pitch your cloud service and can be enticed by the new or better capabilities. But in the very next sentence, he will ask about security and be insistent until you provide proof – first-hand or by credible third parties. You can convince a clipboard if your product is designed with safety in mind. As long as the safety is state-of-the-art, you’ll close the deal.

Outlook on the second part

In the second part of this blog entry, we will look at the remaining two archetypes, namely the “combination lock” and the “smartphone”. Stay tuned.

Did you identify with one of the archetypes? What are your most important aspects of cloud services? I would love to hear from you.

A procedure to deal with big amounts of email

The problem

You probably know the problem already: a day with less than 500 emails feels like your internet connection might be lost. The amount of emails you receive can accurately predict the time of day. In my case, I’ll always receive my 300th email of the day right before lunch. If I had spent just one minute on each message, I would have done nothing but read emails so far. And by the time I return from lunch, more emails have found their way into my inbox. My job description is not “email reader”. It is actually one of the lesser prioritized activities of my job. But I keep most response times low and always know the content of my inbox. You’ll seldom hear “sorry, I haven’t seen your email yet” from me. How I keep the email flood in check is the topic of this blog entry. It’s my personal procedure, so nothing fancy with a big name, but you might recognize some influence from well-known approaches like “Getting Things Done” by David Allen.

The disclaimer

Disclaimer: You might entirely disagree with my approach. That’s totally acceptable. But keep in mind that it has worked for at least one person for a long time now, even if it doesn’t fit your style. Email processing seems to be a delicate topic, so please keep your comments constructive. By the way, I’d love to hear about your approach. I’m always eager to learn and improve.

The analogy

Let me start with a common analogy: Your email inbox is like your mailbox. All letters you receive go through your mailbox. All emails you receive go through your inbox. That’s where this analogy ends, and it was never useful to begin with. Your postman won’t show up every five minutes to stuff more commercial mailings, letters, postcards and post-it notes into your mailbox (raising that little flag again that indicates the presence of mail). He also won’t announce himself by ringing your door bell (every five minutes, mind you) and proclaiming the first line of three arbitrarily chosen letters. Also, I’ve rarely seen mailboxes that contain hundreds if not thousands of letters, some read, some years old, in different states of decay. It’s a common sight for inboxes whose owners gave up on keeping up. I’ve seen high stacks of unanswered correspondence, but never in the mailbox. And this brings me to my new analogy for your email inbox: Your email inbox is like your desk. The stacks of decaying letters and magazines? Always on desks (and around them in extreme cases). The letters you answer directly? You bring them to your desk first. Your desk is usually clear of pending work documents, and this should be the case for your inbox, too.


The rules

My procedure to deal with the continuous flood of emails is based on three rules:

  • The inbox is the only queue of emails that needs attention. It is only filled with new emails (which require activity from my side) or emails that require my attention in the foreseeable future. The inbox is therefore only filled with pending work.
  • Email processing is done manually. I look at each email once and hopefully only once. There are no automatic filters that sort emails into different queues before I’ve seen them.
  • For every email, interaction results in an activity or decision on my side. No email gets “left there”.

Let me explain the context of the rules in a bit more detail:

My email account has lots and lots of folders to store all emails until eternity. The folders are organized in a hierarchy, but that doesn’t really matter, because every folder can hold emails. The hierarchy of folders isn’t pre-planned, it emerges from the urge to group emails together. It’s possible that I move specific emails from one folder to another because the hierarchy has changed. I will use automatic filters to process emails I’ve already read. But I will mark every email as read myself and not move unread emails around automatically. This narrows the place to look for new emails down to one: the inbox. Every other folder is only for archiving, not for processing.

The sweeps

The amount of unread emails in my inbox is the amount of work I need to do to return to the “only pending work” state. Let’s say that I opened my email reader and it shows 50 new messages in the inbox. Now I’ll have to process and archive these 50 emails to be in the same state as before I opened it. I usually do three separate processing steps:

  • The first sweep is to filter out any spam messages by immediate tell-tale signs. This is the only automatic filter that I’ll allow: the junk filter. To train it, I mark any remaining junk mail as spam and let the filter deal with it. I’m still not sure if the junk filter really makes things easier for me, because I need to scan through the junk regularly to “rescue” false positives (legitimate mails that were wrongfully sorted out), but the junk filter in combination with my fast spam sweep lowers the message count significantly. In our example, we now have 30 mails left.
  • The second sweep picks every email that is for information purposes only. Usually those mails are sent by software tools like issue trackers, wikis, code review tools or others. Machines don’t feel the effort of writing an email, so they’ll write a lot. Most of the time, the message content is only a few lines of text. I grab each of these mails and drag them into their corresponding folder. While I’m dragging, I read (and memorize) the content of the email. The problem with this kind of information is that it’s a lot of very small chunks of data for a lot of different contexts. In order to understand those messages, you have to switch your mental context in a matter of seconds. You can do it, but only if you aren’t interrupted by different mental states. So ignore any email that requires more than a few seconds of focused attention from you. Let them sit in your inbox along with the emails that require an answer. The only activity for mails included in your second sweep is “drag to folder & memorize”. Because machines write often, we now have 10 mails left in our example.
  • The third sweep now attends to each remaining mail independently. Here, the three-minutes rule applies: If it takes less than three minutes to reply to the email right now, then do it right now and archive the mail in a suitable folder (you might even create a new folder for it). Remember: if you’ve processed an email, it leaves the inbox. If it takes more than three minutes, you need to schedule an attention slot for this particular message on your to-do list. This is the only case in which the email remains in the inbox, because it’s a signal of pending work. In our example, 7 emails could be answered with short replies, but 3 require deep concentration or some more text for the answer.

After the three sweeps, only emails that indicate pending work remain in the inbox. They only leave the inbox after I’ve dealt with them. I need to schedule a timebox to work on them, but after that, they’ll find themselves out of the inbox and in a folder. As soon as an email is in a folder, I forget about its existence. I need to remember the information that was in it, but not the message itself.

The effects

While dealing with each email manually sounds painfully slow at first, it becomes routine after only a short while. The three sweeps usually take less than five minutes for 50 emails, excluding the three major correspondence tasks that make their way as individual items on my work schedule. Depending on your ratio of spam to information messages to real correspondence, your results may vary.

The big advantage of dealing manually with each email is that I’ve seen each message with my own eyes. Every email that an automatic filter grabs and hides before you can see it should not have been sent in the first place – it simulates a pull notification scheme (you decide when to receive it by opening the folder) rather than leveraging the push notification scheme (you need to deal with it right now, not later) that email inherently is. Things like timelines, activity streams or message boards are pull-oriented presentations of presumably the same information; perhaps that’s what you should replace your automatically hidden emails with.

You’ll have an ever-growing archive with lots of folders for different things (think of a shelf full of document files in real life), but you’ll never look into them as long as you aren’t desperately searching for “that one mail from 3 months ago”. You’ll also have a clean inbox, preferably in “blank slate” condition or at least with only emails that require actions from you. So you’ll have a clear overview of your pending work (the things in your inbox) and the work already done (the things in your folders).

The epilogue

That’s when you discover that part of your work can be described as “look at each email and move it to a folder”, as if you were an official in charge of virtual paper. We have virtualized our paperwork, letters, desks, shelves and document files. But the procedures to deal with them are still the same.

Your turn now

How do you process your emails? What are your rules and habits? What are your experiences with folders vs. tags? I would love to hear from you – in a comment, not an email.

The rule of additive changes

Change is in the nature of software development. Most difficult aspects of the craft revolve around dealing with change. How does one keep software extensible? How do you adapt to new business requirements?

With experience comes the intuition that some kinds of changes are more volatile than others. For example, it is often safer to add a new function or type to an application than to change an existing one.

This is because adding something new means that it is not already strongly connected to the rest of the application. Or at least that’s the assumption. You have yet to decide how the new component interacts with the rest of the application. Usually this is done by a, preferably small, incision in the innards of your software. The first change, the adding, should not break anything. If anything, the small incision should be the only dangerous aspect of the change.

This is a very important concept: adding should not break things! This is so important, I want to give it a name:

The Rule of Additive Changes

Adding something to a well-designed software system should not break existing functionality. Exceptions should be thoroughly documented and communicated.

Systems should always be designed and taught so that the rule of additive changes holds. Failure to do so will lead to confusing surprises in the best case, and well-hidden bugs in worse cases.

The rule is nothing new, however: it’s a foundation, an axiom, to many other rules, such as the Liskov Substitution Principle:

Inheritance

Quoting from Wikipedia:

“If S is a subtype of T, then objects of type T in a program may be replaced with objects of type S without altering any of the desirable properties of that program”

This relies on subtyping being an additive change: S works at least as well as any T, so it is an extension, an addition. You should therefore design your systems in a way that the Liskov Substitution Principle, and therefore the rule of additive changes, both hold: an addition of a new type to a hierarchy cannot break anything.
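As a contrived sketch of what such an additive change looks like – the shape hierarchy is only an illustration, not from any particular codebase:

#include <iostream>
#include <memory>
#include <vector>

struct Shape
{
  virtual double area() const = 0;
  virtual ~Shape() = default;
};

struct Rectangle : Shape
{
  Rectangle(double w, double h) : width(w), height(h) {}
  double area() const override { return width * height; }
  double width, height;
};

// The additive change: a new subtype. Code that only relies on the Shape
// interface keeps working without modification.
struct Circle : Shape
{
  explicit Circle(double r) : radius(r) {}
  double area() const override { return 3.141592653589793 * radius * radius; }
  double radius;
};

// Existing functionality, unaffected by the addition of Circle.
double total_area(std::vector<std::unique_ptr<Shape>> const& shapes)
{
  double sum = 0.0;
  for (auto const& shape : shapes)
    sum += shape->area();
  return sum;
}

int main()
{
  std::vector<std::unique_ptr<Shape>> shapes;
  shapes.push_back(std::make_unique<Rectangle>(2.0, 3.0));
  shapes.push_back(std::make_unique<Circle>(1.0)); // the addition
  std::cout << total_area(shapes) << '\n';
}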

Whitelists vs. Blacklists

Blacklists will often violate the rule of additive changes. Once you add a new element to the domain, the domain behind the blacklist will change as well, while the domain behind a whitelist will be unaffected. Ultimately, both can be what you want, but usually, the more contained change will break less – and you can still change the whitelist explicitly later!

Note that systems that filter classes from a hierarchy via RTTI or, even more subtly, via ask-interfaces, are blacklists. Those systems can break easily when new types are introduced into a hierarchy. Extra care needs to be taken to make sure the rule of additive changes holds for these systems.
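Here is a small, invented example of such an RTTI blacklist next to its whitelist counterpart; the event types are placeholders:

#include <iostream>

struct Event { virtual ~Event() = default; };
struct DebugEvent : Event {};
struct UserEvent  : Event {};

// Blacklist: drop everything considered "internal". If someone later adds a
// TraceEvent, it silently slips through until this list is updated - the
// additive change alters the filtering behavior.
bool keep_blacklist(Event const& e)
{
  return dynamic_cast<DebugEvent const*>(&e) == nullptr;
}

// Whitelist: only let through what is explicitly known. A new subtype is
// simply ignored until the list is extended deliberately.
bool keep_whitelist(Event const& e)
{
  return dynamic_cast<UserEvent const*>(&e) != nullptr;
}

int main()
{
  DebugEvent debug;
  UserEvent user;
  std::cout << keep_blacklist(debug) << ' ' << keep_whitelist(user) << '\n'; // 0 1
}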

Introspection and Reflection

Without introspection and reflection, programs cannot know when you are adding a new type or a new function. However, with introspection, they can. Any additive change can also be an incision point. Therefore, you need to be extra careful when designing systems that use introspection: they should not break existing functionality when something is merely added.

For example, adding a function to enable a specific new functionality is okay. A common case of this would be adding a function to a controller in a web framework to add a new action. This will not interfere with existing functionality, so it is fine.

On the other hand, adding a member to a controller should not disable or change functionality. Adding a special member for “filtering” or some kind of security setting falls into this category. You think you’re merely adding something, but in fact you are modifying. A system that relies on such behavior therefore violates the rule of additive changes. Decorating the member is a much better alternative, as that makes it clear that you are indeed modifying something, which might break existing functionality.

Not all languages or frameworks provide this possibility though. In that case, the only alternative is good communication and documentation!

Refactoring

Many engineers implicitly assume that the rule of additive changes holds. In his book “Working Effectively With Legacy Code”, Michael Feathers proposes the sprout and wrap techniques to change legacy software. The underlying technique is the same for both: formulating a potentially breaking change as a mostly additive one, with only a small incision point. In the presence of systems that do not follow the rule of additive changes, such risk minimization does not work at all. For example, adding an additional function can break a system that relies heavily on introspection – which goes against all intuition.
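A rough sketch of the sprout idea – all names are invented; the point is that the new behavior lives in a new, separately testable function, and the legacy code only receives one small incision:

#include <string>
#include <vector>

// Sprouted function: the new behavior, easy to test in isolation.
std::vector<std::string> filter_valid(std::vector<std::string> const& entries)
{
  std::vector<std::string> result;
  for (auto const& entry : entries)
    if (!entry.empty())
      result.push_back(entry);
  return result;
}

// Legacy function: the only modification is the single call below.
void process_entries(std::vector<std::string>& entries)
{
  entries = filter_valid(entries); // <- the incision
  // ... the untouched legacy processing continues here ...
}

int main()
{
  std::vector<std::string> entries{"a", "", "b"};
  process_entries(entries); // entries are now {"a", "b"}
}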

Conclusion

This rule is not a new concept. It is something that many programmers have in their head already, but possibly fractured into lots of smaller guidelines. But it is one overarching concept and it needs a name to be accessible as such. For me, that makes things a lot clearer when reasoning about systems at large.

At your service, master!

One of the most important lessons that I had to learn in my job is that you have to be aware of the client. As a service provider, it is my duty to satisfy my client’s needs – and without knowing him, I will not be able to succeed. In this blog post I describe some insights that helped me to gain a better understanding of my clients.

The main connection between the service provider and his client is the communication between them. In an ideal world, the two parties would be able to understand each other perfectly; however, humans and their language are fallible, and to me, this seems to be the root of most problems. Of course, both parties are responsible; nevertheless, the service provider should not only deal with his own deficiencies, but also with his client’s, in order to attract and keep clients. Next, I will list five guidelines that can reduce or sometimes even prevent misunderstandings in communication.

Be prepared

This is maybe the clearest rule: Before meeting a client, you should know the basics of the domain he is working in and of the problem he wants to solve. It is not the client’s job to explain his request, but rather the service provider’s job to comprehend it. Besides, if a client feels understood, he will also feel that you can solve his problem – and at that stage this matters more than whether you can actually solve his problem.

Be attentive

You and your client are different persons and, as a result, have a different understanding of the same things. Your client might quickly gloss over some little details in the software he wants, so you could assume that they are of no importance – but you will be unpleasantly surprised when they turn out to be a critical aspect of the program. And this is not necessarily a flaw in the customer’s communication: maybe to a domain expert – and your customer might be one – the importance of these details is totally obvious.

Furthermore, sometimes even language will lead you nowhere. For example, people do not always realize why a system is hard to use or where they make mistakes and hence cannot tell you about it, but by watching them you might find the problems. In such a situation, it is crucial to grasp not only the words the client is saying, but also other signals he is emitting.

Be without bias

As soon as I start listening to a client’s problem, sometimes I can literally watch myself constructing a solution in my head. I create a mental model composed of the components the customer is talking about, think about their relationships – and suddenly, I find myself thrown a curve because the client added a thought that conflicts with my conception.

Of course, a model can improve the understanding of a client’s demands; however, one has to constantly question the validity of the model and – in case it is disproved – one must drop it without hesitation. Do not become attached to a model just because it is so elegant – in most cases, you will be betrayed. On the other hand, it will probably become easier to adapt your mental models if you stay open-minded.

And even if your view seems to suit the customer’s requirements perfectly, you should hesitate to present it to him; you should not ask for confirmation early on. In fact, the better the concept seems, the more careful you should be: you might lead your client into thinking that it is an adequate solution, and by focusing on the conformity between the concept and his problem, you and your client may fail to see its flaws.

Instead, you should try to ditch your assumptions, try to listen without bias. You still have to prepare yourself before you meet your client, but you should be willing to scrutinize your knowledge and to discard incorrect information.

Be concrete

The human language is a wonderful medium, but unfortunately terribly inaccurate. If, instead of writing, you can talk with your customer, you should usually choose the latter. Even better, if you can meet him in person, do it – there are so many more options to communicate if you are in one room that you will almost surely benefit from it.

For instance, if your client wants a feature with some user interface, you can sketch it or build a paper prototype; you could even prepare a real prototype consisting only of the user interface. This allows your customer to play with it and facilitates communication. And do not be abstract, do not fill your widgets with text such as “Lorem ipsum” – it does not matter if the content is made up, but it should be realistic.

User interface design is a neat example since it is graphical; nevertheless, you can apply this principle to other tasks. It does not matter if you talk about a process, some architecture, domain models or other structures: even though most of them have no inherent graphical representation, it is usually easier to describe them graphically than by using text.

Seek for the why, not the what

Often, I tend to ask my clients about the problem they wish to solve – I ask what they wish to solve, not why. Usually, this is sufficient; the client knows his situation and is able to express his needs. Unfortunately, it also happens that although the customer’s problem is solved according to his description, his actual needs are not satisfied – and the reason for this is that even he did not know what he actually needed. Even worse, sometimes I get caught up in the “how”, that is, I quickly find a nice solution for some part of a client’s problem, so I stick to it, maybe even implement it – and in the end I realize that it actually prevents me from solving the complete problem.

Hence, it is not only important to find out what your client wants to achieve, but also why he wants to achieve it; you have to understand his motivation. This can enable you to correct your client’s mistakes and to lead him to the question he actually wants to answer. Furthermore, this is a great handle to control the effort of a project: it becomes easier to identify indispensable core functionality and to find features whose usefulness is questionable, and hence, one can discuss with the client whether some of the latter might be dropped. Simon Sinek gave an interesting TED talk on a similar topic, found here.

Conclusion

Understanding your customers is difficult, but not impossible. I think that actively directing the attention at your counterpart, being open for input and questioning your assumptions and knowledge can strongly improve the communication with your clients.

My motto: Make it visible

Nearly ten years ago, I read the wonderful book “Behind Closed Doors. Secrets of Great Management” by Johanna Rothman and Esther Derby. They shared a lot of valuable insights and tips for my management career, but more importantly, gave a name to a trend I had been pursuing for much longer. In their book, they introduce the central aspect of the “Big Visible Chart”, a whiteboard that contains all the important work. This term combined several lines of thought that lingered in my head at the time without me being able to fully express them. Let me reiterate some of them:

  • Extreme Feedback Devices (XFD) were a new concept back then. The aspect of physical interaction with a purely virtual software project thrilled me. Given a sensible choice of feedback device, it represents project state in an intuitive manner.
  • Scrum and Kanban boards got popular around the same time. I always rationally regarded them as a poor man’s issue tracker, but the ability to really move things around instead of just clicking had a certain appeal.
  • My father always mentioned his Project Cockpit that he used in his company to maintain an overview of all upcoming and present projects. This cockpit is essentially a Scrum board at project granularity. We use our own variation with great success.
  • A lot of small everyday aspects required my attention much too often. Things like whether the dishwasher in a shared apartment contains dirty or clean dishes always needed careful examination.

It was about time to weave all these motivations into one overarching motto that could guide my progress. The “Big Visible Chart” was the first step towards this motto, but not the last. A big chart is really just a big information radiator and totally unsuited for the dishwasher use case. The motto needed to contain even more than “put all information on a central whiteboard”. I wasn’t able to word my motto until Bret Victor came along and gave his talk “Inventing On Principle” (if you don’t know it, go and watch it now, I’ll be waiting). He talks about the personal mission statement that you should find and arrange your actions around. That was the magical moment when everything fell into place for me. I had known my motto all along, but couldn’t spell it out. And then, it was clear: “Make it visible”. My personal mission is to make things visible.

Let me try to give you a few examples where I applied my principle of making information visible:

  • I built a lot of Extreme Feedback Devices that range from single lamps over multi-colored displays to speech synthesis and even a little waterfall that gets switched on when things are “in a state of flux”, like being built on the CI server. All the devices are clearly perceivable and express information that would otherwise need to be actively pulled from different sources. I even wrote a book chapter about this topic and talk about it at conferences.
  • A lot of recurring tasks in my team are handled by paper tokens that get passed on when the job is done. Examples are the blog token (yes, it’s currently on my desk) for blog entries or the backup token as a reminder to bring in the remotely stored backup device and sync it. These tokens not only remind the next owner of his duty, but also act as a sign that you’ve accomplished your job, just like with task cards on the Scrum board.
  • If we need to work directly on a client server, we put on our “live server hat” so that we are reminded to be extra careful (in German, there’s the idiom “auf der Hut sein” – to be on one’s guard). But the hat is also a plainly visible sign to everybody else to be a tad more silent and refrain from disturbing. Don’t talk to the hat! A lesser grade of “do not disturb” sign is a pair of headphones worn on both ears.
  • Of course I built my own variation of my father’s Project Cockpit. It’s a great tracking device that ensures I never forget about any project, however sparse the actual activity might be.
  • And I solved the dishwasher case: The last action when clearing out the clean dishes should be to insert the next dishwasher tab right away. That way, whenever you open the dishwasher door, there are two possible states: if the tab case is empty, the dishes are clean (or somebody forgot to re-arm). If the tab case is still closed, you can be sure there are dirty dishes in the machine. The case gets re-opened during the next washing cycle.
  • An extra example might be the date of opening that we write on milk and juice cartons, so you know how long they have already been open.

All of these examples make information visible in a place where it would otherwise have to be collected by sampling, measuring or asking around. Information radiators are typically big objects that do that job for you and present you with the result. I’ve come to find that an information radiator can be as small as a dishwasher tab in the right spot. The important aspect is to think about a way to make the information visible without much effort.

So if you repeatedly invest effort to gather all the data necessary for a piece of information, ask yourself: how could you automate or just formalize things so that you don’t have to gather the data, but have the information right before your eyes whenever you need it? It can be as simple as a little flag on your mailbox that gets raised by the mailman or as complicated as a multi-colored LED in your faucet indicating the water temperature. The overarching principle is always to make information visible. It’s a very powerful motto to live by.