Children behind the wheel

A few weeks ago, I read a funny news article about an 11-year-old boy who stole a bus and drove its normal route. When the police stopped him, he had already picked up some passengers and had somehow managed to inflict only minor damage along the way. The whole story (in German) can be read here.

Image: Vitaly Volkov, Волков Виталий Сергеевич (user Kneiphof), https://commons.wikimedia.org/wiki/User:Kneiphof

This blog entry is not about the unwitting passengers, it’s about the little boy and his mindset. This mindset exists in the business world, too. It’s the mixture of “what could possibly go wrong?” with “I’m totally able to pull this off” and a large dose of “everybody else is surely faking it, too”. In the context of the Dunning-Kruger effect, this mixture is called “unskilled and unaware of it”. It’s a dangerous situation for both the employee and the employer, because neither of them can properly evaluate the actual risk.

The Dunning-Kruger effect

Let’s start with the known theory. The Dunning-Kruger effect describes a cognitive bias in which unskilled individuals assess their ability (in the context of a given skill) much higher than it really is and highly skilled individuals tend to underestimate their (relative) competence. The problem is not that real experts tend to be modest about their expertise. The real problem is that “if you’re incompetent, you can’t know you’re incompetent. The skills you need to produce a right answer are exactly the skills you need to recognize what a right answer is.”

So in short: Being unskilled in a certain area probably means you don’t really know that you are unskilled.

Or, translated to our young bus driver: if you don’t know anything about driving a bus, you may well think you’re as good a bus driver as the man behind the wheel. It looks easy enough from the passenger seat.

The effect in practice

Why should this bother us in software development? Our education system ensures that we are exposed to enough development practice to counter the “unaware of it” part of the Dunning-Kruger effect. But it isn’t always effective, at least that’s what I observe from time to time.

Every once in a while, I have the opportunity to evaluate an existing code base. Most of the time, it produces a working, profitable application, so it cannot be called a failure. Sometimes, however, the code base itself is so convoluted, bloated and riddled with poor implementation choices that it absolutely cannot be developed any further without a high risk of regression bugs and/or absurd amounts of developer time for little changes.

These hopeless source codes have one thing in common: they were developed by one person and one person alone. This person developed the application for months or even years, showing progress and reporting no problems, and then suddenly resigned, often shortly before a major milestone in the project like going live with the first version or announcing the next version. The code base now lies abandoned and needs to be adopted. And while no code base is perfect (or should even try to be), this one reeks of desperation and frustration. Often, the application itself is not very demanding, but the source code makes it appear to be.

A good example of this kind of project is my scrap metal tale (in three parts) from five years ago:

A more recent case is littered with inline comments that celebrate small victories of the developer:

  • 5 lines of convoluted, contradictory statements
  • 3 lines commented out
  • 1 comment line stating “YES! This is finally working! Super!”
  • Still three obvious bugs, resource leaks or security flaws in these few lines alone

The most outrageous (and notorious) case might be the Brillant Paula Bean from the Daily WTF, but this code base is at least readily comprehensible.

The origins of the effect

I think a lot of the frustration, desperation, anxiety and outright fear that I can sense through the comments and code structure was really felt by the original author. It must have been incredibly hard to stay on course, work hard and come up with solutions in the face of imminent deadlines, ever-changing requirements and the lingering fear that you’re just not up to the task. Except that we’ve just learned that developers in the “unskilled and unaware of it” state won’t feel the fear. That’s the origin of all the bad code: the absolute conviction that “it’s not me, it’s the problem, the domain, the language, the compiler, the weather and everything else”. Programming is just hard. Nobody else could do this better. It’ll work in the end. Those “minor problems” (like never actually speaking to the hardware, hard-coded paths and addresses, etc.) will be fixed at the last moment. There’s nothing wrong with an occasional exception stack trace in the logfile, and if it bothers you, I can always make the catch-block empty.

The most obvious problem is that these developers think that this is how everybody else develops software, too. That we all don’t bother with concurrency correctness, resource lifecycle management, data structures, graphical user interface design, fault tolerance or even just basic logging. That all programming is hard and frustrating. That finding out where to insert a sleep statement to quench that pesky exception is the pinnacle of developer ingenuity. That things like automated tests, code metrics, continuous integration or even version control are eccentric fads that will pass by and be forgotten soon, so there is no need to deal with them. That we all just fake it and dread the moment our software is used in the wild.

A possible remedy

The “unskilled and unaware of it” developer isn’t dumb or hopeless, he’s just unskilled. The real tragedy is that he doesn’t have a mentor who can alleviate the biggest problems in the code and show better approaches. The unskilled lonesome developer cannot train himself. A good explanation for this novice “lock-in” can be found in the Dreyfus Model of Skill Acquisition (I recommend you watch the excellent talk on this topic by Dave Thomas): novices lack the skills for proper self-assessment and cannot learn from their mistakes (as stated by the Dunning-Kruger effect, too). They cannot even recognize them as “their” mistakes, or as “mistakes” at all. They need outside feedback (and guidance) to advance. A mentor’s role is to give exactly that.

Conclusion

If you find yourself in the position of being a “skilled and fairly aware of it” developer, be aware that you’ve probably been mentored at some point in the past. Pass it on! Be the mentor for an aspiring junior developer.

If you suspect that you might fall into the “unskilled” category of developers, don’t despair! Being aware of this is the first and most important step. Now you can act strategically to improve your skills. There is a whole book giving you invaluable advice: Apprenticeship Patterns: Guidance for the Aspiring Software Craftsman by Dave Hoover and Adewale Oshineye. Two prominent pieces of advice from the book are “Be the Worst (of your team)” and “Find Mentors”. And my most prominent advice? “Don’t stick it out alone”.

Programming is (or should be) fun after all.

A procedure to deal with large amounts of email

The problem

You probably know the problem already: a day with fewer than 500 emails feels like your internet connection might be down. The number of emails you receive can accurately predict the time of day. In my case, the 300th email of the day arrives right before lunch. Had I spent just one minute on each message, I would have done nothing but read email all morning. And by the time I return from lunch, more emails have found their way into my inbox. My job description is not “email reader”. Reading email is actually one of the lesser prioritized activities of my job. Yet I keep most response times low and always know the content of my inbox. You’ll seldom hear “sorry, I haven’t seen your email yet” from me. How I keep the email flood in check is the topic of this blog entry. It’s my personal procedure, so nothing fancy with a big name, but you might recognize some influence from well-known approaches like “Getting Things Done” by David Allen.

The disclaimer

Disclaimer: You might entirely disagree with my approach. That’s totally acceptable. But keep in mind that it has worked for at least one person for a long time now, even if it doesn’t fit your style. Email processing seems to be a delicate topic, so please keep your comments constructive. By the way, I’d love to hear about your approach. I’m always eager to learn and improve.

The analogy

Let me start with a common analogy: your email inbox is like your mailbox. All letters you receive go through your mailbox. All emails you receive go through your inbox. That’s where this analogy ends, and it was never useful to begin with. Your postman won’t show up every five minutes to stuff more commercial mailings, letters, postcards and post-it notes into your mailbox (raising that little flag again that indicates the presence of mail). He also won’t announce himself by ringing your doorbell (every five minutes, mind you) and proclaiming the first line of three arbitrarily chosen letters. Also, I’ve rarely seen mailboxes that contain hundreds if not thousands of letters, some read, some years old, in different states of decay. That’s a common sight for inboxes whose owners gave up on keeping up. I’ve seen high stacks of unanswered correspondence, but never in the mailbox. And this brings me to my new analogy for your email inbox: your email inbox is like your desk. The stacks of decaying letters and magazines? Always on desks (and around them in extreme cases). The letters you answer directly? You bring them to your desk first. Your desk is usually clear of pending work documents, and this should be the case for your inbox, too.

(c) Fotolia, file #87397590 | author: thodonal

The rules

My procedure to deal with the continuous flood of emails is based on three rules:

  • The inbox is the only queue of emails that needs attention. It is only filled with new emails (which require activity from my side) or emails that require my attention in the foreseeable future. The inbox therefore only contains pending work.
  • Email processing is done manually. I look at each email once and hopefully only once. There are no automatic filters that sort emails into different queues before I’ve seen them.
  • Every email results in an activity or decision on my side. No email gets “left there”.

Let me explain the context of the rules in a bit more detail:

My email account has lots and lots of folders to store all emails until eternity. The folders are organized in a hierarchy, but that doesn’t really matter, because every folder can hold emails. The hierarchy of folders isn’t pre-planned, it emerges from the urge to group emails together. It’s possible that I move specific emails from one folder to another because the hierarchy has changed. I will use automatic filters to process emails I’ve already read. But I will mark every email as read myself and not move unread emails around automatically. This narrows the place to look for new emails down to one: the inbox. Every other folder is only for archiving, not for processing.

The sweeps

The number of unread emails in my inbox is the amount of work I need to do to return to the “only pending work” state. Let’s say that I opened my email reader and it shows 50 new messages in the inbox. Now I’ll have to process and archive these 50 emails to return to the same state as before I opened it. I usually do three separate processing steps:

  • The first sweep is to filter out any spam messages by immediate tell-tale signs. This is the only automatic filter that I’ll allow: the junk filter. To train it, I mark any remaining junk mail as spam and let the filter deal with it. I’m still not sure if the junk filter really makes it easier for me, because I need to scan through the junk regularly to “rescue” false positives (legitimate mails that were wrongfully sorted out), but the junk filter in combination with my fast spam sweep lowers the message count significantly. In our example, we now have 30 mails left.
  • The second sweep picks every email that is for information purposes only. Usually those mails are sent by software tools like issue trackers, wikis, code review tools or others. Machines don’t feel the effort of writing an email, so they’ll write a lot. Most of the time, the message content is only a few lines of text. I grab each of these mails and drag them into their corresponding folder. While I’m dragging, I read (and memorize) the content of the email. The problem with this kind of information is that it’s a lot of very small chunks of data from a lot of different contexts. In order to understand those messages, you have to switch your mental context in a matter of seconds. You can do it, but only if you aren’t interrupted by different mental states. So skip, for now, any email that requires more than a few seconds of focused attention from you. Let them sit in your inbox along with the emails that require an answer. The only activity for mails in your second sweep is “drag to folder & memorize”. Because machines write often, we now have 10 mails left in our example.
  • The third sweep attends to each remaining mail individually. Here, the three-minutes rule applies: if it takes less than three minutes to reply to the email right now, then do it right now and archive the mail in a suitable folder (you might even create a new folder for it). Remember: if you’ve processed an email, it leaves the inbox. If it takes more than three minutes, you need to schedule an attention slot for this particular message on your todo-list. This is the only case in which an email remains in the inbox, because it’s a signal of pending work. In our example, 7 emails could be answered with short replies, but 3 require deep concentration or some more text for the answer.

After the three sweeps, only emails that indicate pending work remain in the inbox. They only leave the inbox after I’ve dealt with them. I need to schedule a timebox to work on them, but after that, they’ll find themselves out of the inbox and in a folder. As soon as an email is in a folder, I forget about its existence. I need to remember the information that was in it, but not the message itself.
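
If you like to think of the three sweeps as a small triage algorithm, here is a minimal sketch in Python. The Mail fields, the classification flags and the folder scheme are all invented for illustration, not a real mail API:

    from dataclasses import dataclass

    # A toy model of the three-sweep triage. The fields and rules are
    # hypothetical placeholders; a real mail client provides the data.
    @dataclass
    class Mail:
        sender: str
        subject: str
        looks_like_spam: bool = False
        information_only: bool = False  # e.g. sent by an issue tracker
        reply_minutes: int = 5          # estimated effort for an answer

    def process_inbox(inbox: list[Mail]) -> tuple[list[Mail], dict, list[Mail]]:
        folders: dict[str, list[Mail]] = {}
        todo_list: list[Mail] = []

        # First sweep: junk is filtered out (and trains the junk filter).
        inbox = [m for m in inbox if not m.looks_like_spam]

        # Second sweep: information-only mails are read once, memorized
        # and dragged into their folder in one motion.
        for mail in [m for m in inbox if m.information_only]:
            folders.setdefault(mail.sender, []).append(mail)
            inbox.remove(mail)

        # Third sweep: the three-minutes rule decides between answering
        # right now and scheduling an attention slot on the todo-list.
        for mail in list(inbox):
            if mail.reply_minutes <= 3:
                # reply immediately, then archive in a suitable folder
                folders.setdefault(mail.sender, []).append(mail)
                inbox.remove(mail)
            else:
                todo_list.append(mail)  # stays in the inbox: pending work

        return inbox, folders, todo_list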

The effects

While dealing with each email manually sounds painfully slow at first, it becomes routine after only a short while. The three sweeps usually take less than five minutes for 50 emails, excluding the three major correspondence tasks that make their way onto my work schedule as individual items. Depending on your ratio of spam to information messages to real correspondence, your results may vary.

The big advantage of dealing with each email manually is that I’ve seen every message with my own eyes. Every email that an automatic filter grabs and hides before you can see it should not have been sent in the first place – it simulates a pull notification scheme (you decide when to receive it by opening the folder) instead of leveraging the push notification scheme (you need to deal with it right now, not later) that email inherently is. Things like timelines, activity streams or message boards are pull-oriented presentations of presumably the same information; perhaps that’s what you should replace your automatically hidden emails with.

You’ll have an ever-growing archive with lots of folders for different things (think of a shelf full of document files in real life), but you’ll never look into them as long as you don’t desperately search for “that one mail from 3 months ago”. You’ll also have a clean inbox, preferably in “blank slate” condition or at least containing only emails that require action from you. So you’ll have a clear overview of your pending work (the things in your inbox) and the work already done (the things in your folders).

The epilogue

That’s when you discover that part of your work can be described as “look at each email and move it to a folder”, as if you were an official in charge of virtual paper. We have virtualized our paperwork, letters, desks, shelves and document files. But the procedures to deal with them are still the same.

Your turn now

How do you process your emails? What are your rules and habits? What are your experiences with folders vs. tags? I would love to hear from you – in a comment, not an email.

The whole company under version control

by Sashkin / Fotolia

A minor fact about the Softwareschneiderei that always evokes surprised reactions is that everything we do is under version control. This should be no surprise for our software development work, as version control has been a best practice there for about twenty years now. If you aren’t a software developer or are unfamiliar with the concept of version control for whatever reason, here’s a short explanation of its main features:

Summary of version control

Version control systems are used to track the change history of a file or a bunch of files in a way that makes it possible to restore previous versions if needed. Each noteworthy change of a file (or a bunch of files) is stored as a commit, a new savepoint that can be restored. Each commit can be provided with a change note, a short comment that describes the changes made. This results in a timeline of noteworthy changes for each file. All the committed changes are immutable, so you get revision safety of your data for almost no cost.
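
To make the terms concrete, here is a minimal sketch in Python of such a timeline of immutable savepoints. It only illustrates the concept; real systems like Git store their data very differently:

    from dataclasses import dataclass
    from datetime import datetime

    @dataclass(frozen=True)  # frozen: a commit is immutable once created
    class Commit:
        message: str            # the change note
        timestamp: datetime
        snapshot: dict          # file name -> file content at commit time

    class Repository:
        def __init__(self) -> None:
            self.commits: list[Commit] = []  # the ever-growing timeline

        def commit(self, message: str, files: dict) -> None:
            """Store a new savepoint with a change note."""
            self.commits.append(Commit(message, datetime.now(), dict(files)))

        def restore(self, index: int) -> dict:
            """Return the file state exactly as it was at commit `index`."""
            return dict(self.commits[index].snapshot)

In this picture, the invoice example further below is a single commit whose snapshot contains both the new invoice document and the extended list of invoices.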

Usual work style for developers

In software development, each source code file has to be “in a repository”, the repository being the central database for the version control system. The repository is accessible over the network and holds the commits for the project. One of the first lessons a developer has to learn is that source code that isn’t committed to a version control system just doesn’t exist. You have to commit early and you have to commit often. In modern development, commit cycles of a few minutes are usual and necessary. Each development step results in a commit.

What we’ve done is to adopt this work style for our whole company. Every document that we process is stored under version control. If we write you a quote or an invoice, it is stored in our company data repository. If we send you a letter, it was first committed to the repository. Every business analysis spreadsheet, all lists and inventories, everything is stored in a repository.

Examples of usage scenarios

Let me show you two examples:

We have a digital list of all the invoices we sent. It’s nothing but a spreadsheet with the most important data for each invoice. Every time we write an invoice, there is another digital document with all the necessary text and an additional line in the list of invoices. Both changes, the new invoice document and the extended list, are included in one commit with a comment that hints at the invoice number and the project number. These changes are now part of the ever-growing timeline of our company data.

We also have a liquidity analysis spreadsheet that needs to be updated often. Every time somebody makes a change to the spreadsheet, it’s a new commit with a comment describing what was updated. If the update was wrong for whatever reason, we can always backtrack to the spreadsheet content right before that faulty commit and try again. We don’t only have the spreadsheet, but also the whole history of how it was filled out, by whom and when.

Advantages of version controlled files

Before we switched to a version controlled work style, we had network shares as the place to store all company data. This is probably the de-facto standard of how important files are handled in many organizations. Adding version control has some advantages:

  • While working with network shares, everybody works on the same file. Most programs show a warning that another user has write access to a file and open it in read-only mode. But not every program does that, and that’s where edit collisions occur without anybody noticing. With version control, you work on a local copy of the file. You can always change the file, but you will get a “merge conflict” when another user has altered the file in the repository after your last synchronization. These merge conflicts are usually minor inconveniences with source code, but a major pain with binary file formats like spreadsheets. So you’ll know about edit collisions and you’ll try to avoid them. How do you avoid them? By planning and communicating your work better. Version control emphasizes the collaborative work setting we all live in. (A toy sketch of conflict detection follows this list.)
  • Version controlled data is always traceable. You can pinpoint exactly who did what at which time and why (as stated in the commit comment). There is no doubt about any number in a spreadsheet or any file in your repository. This might sound like a surveillance nightmare, but it’s more of a protection against mishaps and honest errors.
  • Version control lets you review your edits. Every time you commit your work, you’ll see a list of files that you’ve changed. If there is a file that you didn’t know you’ve changed, the version control just saved your ass. You can undo the erroneous change with a simple click. If you’d worked with network shares, this change would have gone unnoticed. With version control, you have to double-check your work.
  • There are no accidental deletions with version control. Because you have every file stored in the repository, you can always undo every delete operation. With network shares, every file lives in the constant fear of the delete key. With version control, you catch your mishap in the commit step and just restore the file.
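
As promised in the first bullet point, here is a toy sketch in Python of the idea behind merge conflicts: a commit is only accepted if it is based on the repository’s latest version. The class and the spreadsheet content are invented for illustration; real version control systems are far more sophisticated:

    # A toy illustration of how a merge conflict is detected.
    class VersionedFile:
        def __init__(self, content: str) -> None:
            self.version = 0
            self.content = content

        def checkout(self) -> tuple[int, str]:
            return self.version, self.content  # your local working copy

        def commit(self, base_version: int, new_content: str) -> None:
            if base_version != self.version:
                raise RuntimeError("merge conflict: file changed since checkout")
            self.version += 1
            self.content = new_content

    shared = VersionedFile("cell A1 = 100")
    base, my_copy = shared.checkout()     # I start editing at version 0
    shared.commit(base, "cell A1 = 150")  # a colleague commits first

    try:
        shared.commit(base, "cell A1 = 120")  # my commit, still based on version 0
    except RuntimeError as conflict:
        print(conflict)  # -> merge conflict: file changed since checkout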

Summary of the adoption

When we switched to version control for all company data, we just committed our network shares to the repository and started. The work style is a bit inconvenient at first, because it is additional work and needs frequent pauses for the commits, but everybody got used to it very fast. Soon, the advantages began to outweigh the inconvenience, and now working with our company data is free of fear because we have the safety net of version control.

You want to know more about version control? Feel free to ask!

My motto: Make it visible

Nearly ten years ago, I read the wonderful book “Behind Closed Doors. Secrets of Great Management” by Johanna Rothman and Esther Derby. They shared a lot of valuable insights and tips for my management career and, more importantly, gave a name to a trend I had been pursuing much longer. In their book, they introduce the central aspect of the “Big Visible Chart”, a whiteboard that contains all the important work. This term combined several lines of thought that had lingered in my head at the time without me being able to fully express them. Let me reiterate some of them:

  • Extreme Feedback Devices (XFD) were a new concept back in the day. The aspect of physical interaction with a purely virtual software project thrilled me. Given a sensible choice of the feedback device, it represents the project state in an intuitive manner.
  • Scrum and Kanban boards got popular around the same time. I always rationally regarded them as a poor man’s issue tracker, but the ability to really move things around instead of just clicking had something to it.
  • My father always mentioned his Project Cockpit that he used in his company to maintain an overview of all upcoming and present projects. This cockpit is essentially a Scrum Board on project granularity. We use our variation with great success.
  • A lot of small everyday aspects required my attention much too often. Whether the dishwasher in a shared apartment contained dirty or clean dishes, for example, always needed careful examination.

It was about time to weave all these motivations into one overarching motto that could guide my progress. The “Big Visible Chart” was the first step towards this motto, but not the last. A big chart is really just a big information radiator and totally unsuited for the dishwasher use case. The motto needed to contain even more than “put all information on a central whiteboard”. I wasn’t able to word my motto until Bret Victor came along and gave his talk “Inventing On Principle” (if you don’t know it, go and watch it now, I’ll be waiting). He talks about the personal mission statement that you should find and arrange your actions around. That was the magical moment when everything fell into place for me. I knew my motto all along, but couldn’t spell it out. And then, it was clear: “Make it visible”. My personal mission is to make things visible.

Let me try to give you a few examples where I applied my principle of making information visible:

  • I built a lot of Extreme Feedback Devices that range from single lamps over multi-colored displays to speech synthesis and even a little waterfall that gets switched on when things are “in a state of flux”, like being built on the CI server. All the devices are clearly perceivable and express information that would otherwise need to be actively pulled from different sources. I even wrote a book chapter about this topic and talk about it at conferences. (A minimal code sketch of such a device follows this list.)
  • A lot of recurring tasks in my team are handled by paper tokens that get passed on when the job is done. Examples are the blog token (yes, it’s currently on my desk) for blog entries or the backup token as a reminder to bring in the remotely stored backup device and sync it. These tokens not only remind the next owner of his duty, but also act as a sign that you’ve accomplished your job, just like with task cards on the Scrum board.
  • If we need to work directly on a client server, we put on our “live server hat” so that we are reminded to be extra careful (in German, there is the idiom “auf der Hut sein”, to be on one’s guard). But the hat is also a plainly visible sign to everybody else to be a tad more silent and refrain from disturbing. Don’t talk to the hat! A lesser grade of “do not disturb” sign is a pair of fully applied headphones.
  • Of course I built my own variation of my father’s Project Cockpit. It’s a great tracking device that ensures no project is ever forgotten, however sparse the actual activity might be.
  • And I solved the dishwasher case: the last action when clearing out the clean dishes should be to apply the next dishwasher tab right away. That way, whenever you open the dishwasher door, there are two possible states: if the tab case is open and empty, the dishes are clean (or somebody forgot to re-arm it). If the tab case is closed, you can be sure to have dirty dishes in the machine. The case pops open again during the next washing cycle.
  • An extra example might be the date of opening we write on milk and juice cartons, so you know how long each one has been open.
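
As promised above, here is a minimal sketch of an Extreme Feedback Device in Python. The status URL, the reported states and the lamp driver are assumptions for illustration; real CI servers and devices come with their own APIs:

    import time
    import urllib.request

    STATUS_URL = "http://ci.example.com/status"  # hypothetical endpoint

    COLORS = {"success": "green", "building": "yellow", "failed": "red"}

    def set_lamp_color(color: str) -> None:
        # Placeholder for the real device driver (USB lamp, LED strip, ...)
        print(f"lamp -> {color}")

    def main() -> None:
        while True:
            try:
                with urllib.request.urlopen(STATUS_URL, timeout=5) as response:
                    state = response.read().decode().strip()
                set_lamp_color(COLORS.get(state, "blue"))  # blue = unknown
            except OSError:
                set_lamp_color("off")  # CI unreachable: show no false signal
            time.sleep(30)  # the lamp polls so that humans don't have to

    if __name__ == "__main__":
        main()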

All of these examples make information visible in place, information that would otherwise require you to collect it by sampling, measuring or asking around. Information radiators are typically big objects that do that job for you and present you the result. I’ve come to find that an information radiator can be as little as a dishwasher tab in the right spot. The important aspect is to think about a way to make the information visible without much effort.

So if you repeatedly invest effort to gather all the data necessary for a piece of information, ask yourself: how could you automate or just formalize things so that you don’t have to gather the data, but have the information right before your eyes whenever you need it? It can be as simple as a little indicator on your mailbox that gets raised by the mailman or as complicated as a multi-colored LED in your faucet indicating the water temperature. The overarching principle is always to make information visible. It’s a very powerful motto to live by.

Transparent software: Making complexity understandable

“I don’t think that’s feasible”

Those are not the first words we wanted to hear from our client after our presentation.

“I have seen several engineers working over a year just for the concept”

This is complex.

The world tells you that everything should be simple. Make it simple. Keep it simple. It is just that simple.

Only when it is not.

Look around you. Nature is beautiful. And complex. The human is beautiful. And complex. Many systems and contexts are complex.

Look inside you. Your thoughts and emotions. Your relationships.

Look at your computer, your phone, your work. Many problems we have and solve everyday are not simple.

Sometimes we think “Oh, why can’t this be simple”. “The software should be simpler”. But KISS won’t save you. Software is broken. But not because it is not simple. We do not want simplicity. We want clarity. We want to understand. Let me elaborate.

I create software for engineers. Engineers are the people who take problems that are fine in theory and work perfectly in a controlled environment like a lab, and translate them to the real world. But the real world isn’t simple or controlled. It is messy. The smallest things can blow up your house of cards of theory. These people need software to understand what happens. Through their education and their experience they know what should happen. They are the experts. But the systems and problems they work with are complex, mostly invisible to the human eye and incomprehensible to the human brain. Yet today’s engineering software looks like this:

[Screenshot: options all over the place]

Make no mistake, this isn’t constrained to engineering problems. Take a look around you. Nowadays there are a million sensors collecting masses of data. Your phone. Your thermostat. Your shoes. Even your toothbrush. Sensors are everywhere. We are collecting more data than ever before. This data gives us a glimpse of the complex underlying system. So we think. But why do we collect it in the first place?
Because we can. We are seeking the holy grail of wisdom. More data creates more information. More information creates more knowledge. And finally we hope that more knowledge gets us a spark of wisdom. But we are just starting out.

The course of technology

The usual course of technology goes something like this: first we are constrained. We try to push the borders. When the field is wide and open, we do everything that’s possible. After a while we become more mature and use the technology to serve a purpose. Software is like that. Collecting data is like that. It is like an addiction. Think about it: do you influence the data or does the data influence you? Who is in control?

But there’s hope. In order to reason about a system and come to our own decisions, we need transparent software. The dictionary defines transparent as:

transparent (adjective)

  • (of a material or article) allowing light to pass through so that objects behind can be distinctly seen
  • easy to perceive or detect
  • having thoughts, feelings, or motives that are easily perceived
  • (of an organization or its activities) open to public scrutiny
  • Physics: transmitting heat or other electromagnetic rays without distortion.
  • Computing: (of a process or interface) functioning without the user being aware of its presence.

Transparent is a tricky word. It seems to be a paradox: on the one hand it means invisible and on the other hand it means easily perceived. Both uses of the word apply to what software needs to be.

No more magic

Software has to help us understand systems and concepts. What happens and what happened. It has to make them clear, comprehensible and detectable. We need to see how the software comes to its conclusions. We need the option to overrule it. The last decision is ours. Software can help us form a decision, but it should never decide on our behalf.
Also: it gets out of our way. We don’t need any more rituals to please the software into doing our bidding. Software is a tool. To be a great tool, it needs to fit the problem and the person. No one wants to cut with a knife that is all blade. It should adapt to our capabilities. It should fit like a glove. It amplifies our capabilities instead of crippling them. It is made for us. It is transparent.

That’s the goal. But how do we get there?

Maximalistic design or design with ‘Betthupferl’

Minimalistic design is a misnomer. Reducing a complex issue needs more design, not less. Designing is about thinking, about taking care. If we want to make complex systems understandable, we need to think hard. What is the essence of the problem? What information does the expert need to evaluate a situation? All of this expertise is hidden in the heads and the daily routine of the people we design for. So we need to ask, watch and listen to them. Without leading them, and with a free mind. Throw your preconceptions overboard. Remove your ego. First just observe. Collect. Challenge your assumptions. When you have a good amount of information (with experience you will know when you can start, but do not believe you will ever have enough), distill. Distill the essence. And then add. That little extra. The details. The cues which foster understanding.
When you stay the night in a hotel and find a little sweet on the pillow of your clean white bed, you are delighted. In German, we call this a ‘Betthupferl’.
This little extra you add is just that. The user feels cared for. He sees that someone has gone the extra mile, has thought deeply about him. The essence is not enough. You need some details. To weave the parts together into a whole. This can be extra information when the user needs it. This can be a shortcut when the context is right. Or an animation which guides the eye. Or, or, or…
What is important is that it does not confuse or blur the essence. It should support it. Silently, almost invisibly.

Having a plan

As a software developer, I quickly learnt that having a plan is essential for the successful realization of a project. Of course, there are projects which seem to run by themselves – either because no problems occur or because their solution is trivial. However, the larger the project, the tighter the deadlines, the more you need a plan to retain control over it. And it is particularly easy to lose control over a project if you not only manage it, but also participate in its implementation.

A project leader is responsible for the outcome of a project. They have to keep track of its goals and must know its current state and how far it has progressed. You can, for example, regularly step back from present actions and problems and check whether they bring you closer to your objective or whether you are getting side-tracked. Useful techniques are time boxes as in the Pomodoro technique: for a fixed time, you concentrate on a problem, and afterwards, you recapitulate your results and, if necessary, adjust your approach. Yet, this is probably not enough to reveal how far your project has advanced – and for this purpose you can employ a plan.

Such a plan can show you the exact condition of a project: it will tell you which milestones have already been reached, where you are at the moment and which tasks have to be tackled next – and most importantly, it will tell you whether you are on time. Basically, a plan is a monitoring tool for a project. By measuring the project against the plan, it is possible to see whether everything is alright, to expose potential pitfalls and, in the worst case, to recognize early that the project is failing.

Moreover, a plan can also improve the communication related to the project. On the one hand, you will be able to brief your clients on the course of the project. Even better, if you can publish your progress regularly in a form allowing your clients to verify what you did so far, you will create a feedback loop that helps you meet the clients’ demands. On the other hand, a plan will give you a handle to communicate with the people involved in the realization of the project. The knowledge about the state of the project is spread, which will permit you to spend less time talking about the required actions and more time talking about the actual solution.

How to plan

Now, I want to enumerate a few hints for constructing a plan. First, even though a plan is crucial for a project, it is not necessary to develop the perfect plan right from the start, and it is presumably disadvantageous to stick with it at all costs. Instead, it is completely fine to launch the project following a rough draft; you can adapt it to your needs anytime.

Next, you should think about the unit of time you use to organize a project. If its life span amounts to a few weeks, it might be appropriate to plan single days, but if it covers several months or even years, you should not bother to work out the duties of the final month before the project has even started. However, you should keep in mind that you may adjust the granularity arbitrarily: you can, for example, plan the first few days of a project in detail, while sketching later actions in terms of weeks and months. In this step, it is also important to identify possible deadlines which have to be met.

Furthermore, you have to divide the project into sub-goals that are easier to operationalize. Just as with time units, you do not need to split your project into equally sized tasks, but upcoming issues should be specified in higher detail. At this point, you must also estimate the resources required to solve the issues; this could be as simple as time or money, but also something more specific such as hardware or software. If you have tight deadlines, it is vital to check whether there are tasks blocking other tasks: they cannot be parallelized, and hence the required resource is not just time, but rather calendar time.
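
To illustrate the last point about blocking tasks, here is a small sketch in Python with made-up tasks and efforts: the calendar time of a project is the length of the longest dependency chain, not the sum of all efforts:

    # task -> (effort in days, list of tasks blocking it); made-up numbers
    tasks = {
        "design":   (3, []),
        "backend":  (5, ["design"]),
        "frontend": (4, ["design"]),
        "testing":  (2, ["backend", "frontend"]),
    }

    def calendar_days(task: str) -> int:
        effort, blockers = tasks[task]
        return effort + max((calendar_days(b) for b in blockers), default=0)

    total_effort = sum(effort for effort, _ in tasks.values())
    critical_path = max(calendar_days(t) for t in tasks)
    print(f"sum of efforts: {total_effort} days")                  # 14 days
    print(f"calendar time (critical path): {critical_path} days")  # 10 days

Here, backend and frontend can be worked on in parallel once the design is done, so fourteen days of effort fit into ten days of calendar time, but no amount of staffing gets below the critical path.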

Finally, even if you are managing a project, you are probably not alone – and you should exploit that circumstance extensively. When you know that there is someone who is able to perform a task more efficiently, you should delegate it. This is not restricted to the actual work in the project, but also includes management tasks such as the estimation of efforts. By this means, you will distribute the knowledge about the project to your team and facilitate the take-over of responsibility if it becomes necessary. In fact, perhaps the best way to successfully lead a project is to render oneself superfluous.

Zoom out early, zoom out often

Dennis Jarvis [CC BY-SA 3.0 (http://creativecommons.org/licenses/by-sa/3.0)], via Wikimedia Commons

A common pitfall of working long hours at high concentration or stress levels is the “yak shaving” effect. You start with a clear goal, dive down into the details and encounter an unforeseen obstacle. No big problem, you just need to adjust focus for a moment and fix this little… but wait, in order to fix it, you first need to change this minor circumstance. And this change is prohibited by this effect, which needs to be adjusted, but relies on that. Much later, you’ll wake up from your dive and find yourself happily shaving a yak. But how exactly did you get there?

Avoid the yak

The best approach to counter yak shaving is “zooming out” of your current work in regular, externally triggered intervals and rehashing three aspects of your current work:

  • What do I want to achieve? (“Goal”)
  • What is my current task? (“Task”)
  • How does my task relate to my goal? (“Relationship”)

This “Goal/Task-Relationship” shouldn’t get too complicated. To describe your current Goal/Task-Relationship to a random person who just arrived at the scene (ok, let’s be clear: I’m talking about your boss), you should need at most two simple sentences. Any longer description is a sign of an unclear goal or inefficient steps (tasks) towards it.

To make sure that your Goal/Task-Relationship stays explainable, you could use the Pomodoro technique, which partitions your concentrated work into intervals of half an hour (including the rehash phase).
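
If you want the external trigger to be truly external, a timer is enough. Here is a minimal sketch in Python; the interval length and the console prompt are just assumptions, any reminder mechanism will do:

    import time

    WORK_MINUTES = 25  # classic Pomodoro default, adjust to taste

    REHASH_QUESTIONS = [
        "What do I want to achieve? (Goal)",
        "What is my current task? (Task)",
        "How does my task relate to my goal? (Relationship)",
    ]

    def zoom_out_timer() -> None:
        while True:
            time.sleep(WORK_MINUTES * 60)  # one interval of focused work
            print("\n*** Zoom out! Rehash your Goal/Task-Relationship: ***")
            for question in REHASH_QUESTIONS:
                print(" -", question)
            input("Press enter to start the next interval...")

    if __name__ == "__main__":
        zoom_out_timer()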

Target fixation

The approach above helps against yaks, but not against target fixation. Target fixation occurs when you are so sure about your goal that you don’t question it even when the cost of achieving it rises to obscene levels. There are many stories I could tell about target fixation, but one sticks out for me because it happened to me personally, and it happened recently.

The tragedy

In the midst of winter, during a cold period, the gas heater for my whole apartment broke down – on a late Saturday evening. No amount of reading the manual, turning it off and on again or running maintenance routines could bring it back to life. The rooms grew colder. A long, cold weekend lay before me, but I couldn’t just sit it out – I had work to do with a tight deadline. So I frantically contacted one “24h emergency service” after the other, with no success at all (this is in a rather big city; the experience really shocked me). My efforts to reach anybody who could help consumed time and nerves until I finally gave up. The backup option was to move to a hotel for two nights, and I was ready to pack my things.

The remedy

But before I made the final decision to temporarily abandon the place, I called a friend to congratulate him on his birthday, totally unrelated to the heating disaster (it was on my todo list and needed to be done, so why not now?). After he asked me what’s up (he always senses misery) and I told him the whole catastrophe, he laughed and said: “Your problem can be solved with some money and a DIY store: just buy an oil radiator and plug it in – voilà, heating for one room”. I was baffled and excited: ten minutes later and the store would have been closed. I bought the radiator at the last minute and had enough heat until the gas heater could be fixed during normal working days.

My target fixation is easily explained: “The gas heater broke down, so I need to repair it or have it repaired”. The solution is also easy: “You need heating, but not necessarily from that broken gas heater”. It’s the same problem, just at different zoom levels. By zooming out of the narrow problem space (or rather, being zoomed out by somebody else providing the external view), I could see the whole picture and solve the real problem, not my perceived one.

Bird’s eye view

To counter target fixation, you have to zoom out regularly. But you need to zoom out even more and ask a different set of questions:

  • What problem do I want to solve? (“Problem”)
  • Can I think of a related, more generalized problem? (“Root”)
  • Do both problems have the same cause? (“Cause”)

The “Problem Root Cause” approach helps to find a more abstract formulation of the problem at hand. You basically ask whether you are really solving the problem or merely a symptom of a hidden cause. In my story, I wanted to solve the problem of the broken gas heater. The generalized problem was the lack of heating, regardless of which device provides it. The cause was identical: cold weather without proper heating. Now I own an oil radiator as a reserve.

Zoom out often

You really need to zoom out of your current work, take a few steps back and broaden your view to be sure about your path to the best solution. So my advice is to “zoom out early, zoom out often” (adapted from “commit early, commit often”). If you can manage the bird’s eye view of your path to the goal yourself, you’ll less often fall prey to yak shaving and target fixation.