Do most languages make false promises?

Some years ago I stumbled upon this interesting article about C being the most effective programming language and the one making the fewest false promises. Essentially, Damien Katz argues that the simplicity of C and its flaws lead to simple, fast and easy-to-reason-about code.

C is the total package. It is the only language that’s highly productive, extremely fast, has great tooling everywhere, a large community, a highly professional culture, and is truly honest about its tradeoffs.

-Damien Katz about the C Programming language

I am a Java developer most of the time but I also have reasonable experience in C, C++, C#, Groovy and Python and some other languages to a lesser extent. Damien’s article really made me think for quite some time about the languages I have been using. I think he is right in many aspects and has really good points about the tools and communities around the languages.

After quite some thought I do not completely agree with him.

My take on C

At one time I really liked the simplicity of C. I wrote gtk2hack in my spare time as an exercise and definitely see interoperability and a quick “build, run, debug”-cycle as big wins for C. On the other hand I think that while it has its place in hardware and systems programming, many other applications have completely different requirements.

  • A standardized ABI means nothing to me if I am writing a service with a REST/JSON interface or a standalone GUI application.
  • Portability means nothing to me if the target system(s) are well defined and/or covered by the runtime of choice.
  • Startup times mean nothing to me if the system is only started once every few months and development is still fast because of hot-code replacement or other means.
  • etc.

But I really miss more powerful abstractions and better error handling and resource management features. Data structures and memory management are a lot more painful in C than in other languages. And this is not (only) about garbage collection!
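To make this concrete, here is a small illustrative sketch (not from Damien’s article) of the kind of resource management features I mean. The file-summing task and the function names are made up for the example: the C-style version has to release the file handle by hand on every path, while the C++ version relies on RAII to do it automatically.

    #include <cstdio>
    #include <memory>
    #include <stdexcept>
    #include <vector>

    // C style: every exit path has to release the file handle by hand.
    int sum_of_file_c_style(const char* path) {
        std::FILE* file = std::fopen(path, "r");
        if (!file) return -1;                    // error signalling via magic value
        int value = 0;
        int sum = 0;
        while (std::fscanf(file, "%d", &value) == 1) {
            sum += value;
        }
        std::fclose(file);                       // forget this on any path and the handle leaks
        return sum;
    }

    // Modern C++: RAII releases the handle automatically on every path.
    int sum_of_file_raii(const char* path) {
        std::unique_ptr<std::FILE, int (*)(std::FILE*)> file(std::fopen(path, "r"), &std::fclose);
        if (!file) throw std::runtime_error("cannot open file");
        std::vector<int> values;                 // memory is managed automatically as well
        int value = 0;
        while (std::fscanf(file.get(), "%d", &value) == 1) {
            values.push_back(value);
        }
        int sum = 0;
        for (int v : values) sum += v;
        return sum;                              // fclose runs here, even on exceptional paths
    }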

C++ especially has made big steps in the right direction in the last few years. Each new standard release provides additional features that make code more readable and less error-prone. With zero-cost abstractions at the core of language evolution and ease of use as a secondary aim, I really like what is coming to C++ in the future. And it has a very professional community, too.

Aims for the C++11 effort:

  • Make C++ a better language for systems programming and library building
  • Make C++ easier to teach and learn

-Bjarne Stroustrup, A Tour of C++

What we can learn from C

Instead of looking down at C and pointing at its flaws we should look at its strengths and our own weaknesses/flaws. All languages and environments I have used to date have their own set of annoyances and gotchas.

Java people should try building simple things and keep a keen eye on dependencies, especially because the ecosystem is so rich and crowded. They should also take care of resource management – the garbage collector is only half the deal.

Scala and C++ people should take a look at ABI stability and interoperability in general. Their compile times and “build, run, debug”-cycle have much room for improvement, to say the least.

C# people may look at simplicity instead of wildly adding new features, which creates a language without opinion and a plethora of ways to implement the same stuff. Either you ban features or you have to know them all to understand the code in a larger project.

Conclusion

My personal answer to the question in the title of this post: Yes, they make false promises. But they have a lot to offer, too.

So do not settle for the status quo of your language environment or code style of choice. Try to maintain an objective perspective and be aware of the weaknesses of the tools you are using. Most platforms improve over time and sometimes you have to re-evaluate your opinion of a technology.

I have preferred C++ to C for some time now and have not looked back yet. But I also constantly try different languages, platforms and frameworks and try to maintain a balanced view. There are often good reasons to choose one over the other for a particular project.

 

4 questions you need to ask yourself constantly while programming

Most of today’s general-purpose programming languages come with a plethora of features. Often there are different levels of abstraction and different intended use cases. Some features are primarily for library designers, others ease the implementation of domain-specific languages, and application developers mostly use yet another feature set.

Some language communities are discussing “language profiles / levels” to ban certain potentially harmful constructs. The typical audience, such as application programmers, does not need them, but removing them from the language would limit its usefulness in other cases. Examples are the Scala levels (a bit dated), the Google C++ Style Guide or the Profiles in the C++ Core Guidelines.

In the wild

When reading other people’s code I often see novices dealing with low-level threading, or going overboard with templates, reflection or metaprogramming.

I have even seen custom ClassLoaders in Java written by ordinary application programmers. People use threads when workers, tasks, actors or other higher-level abstractions would fit much better.
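As an illustration of what such a higher-level abstraction can look like, here is a small C++ sketch with a made-up work function (in Java, an ExecutorService would play a similar role): instead of managing raw threads, mutexes and shared result buffers, the futures returned by std::async carry the results back.

    #include <future>
    #include <iostream>
    #include <vector>

    // Made-up work function standing in for the real computation.
    int expensive_computation(int input) {
        return input * input;
    }

    int main() {
        // Let the task abstraction handle scheduling and result transfer.
        std::vector<std::future<int>> results;
        for (int i = 0; i < 8; ++i) {
            results.push_back(std::async(std::launch::async, expensive_computation, i));
        }
        for (auto& result : results) {
            std::cout << result.get() << '\n';   // blocks until the task has finished
        }
    }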

Novices especially seem unable to recognize their limits and to stay away from inappropriate and potentially dangerous features.

How do you decide what is appropriate in your situation?

Well, that is a difficult question. If the task at hand seems hard, you should probably take a step back, because:

There are two hard things in computer science: cache invalidation, naming things, and off-by-one errors.

-Jeff Atwood

Then ask yourself some simple questions:

  1. Someone must have done it before. Have I searched thoroughly for hints or solutions?
  2. Is there a (better) library, data structure or abstraction?
  3. Do I really have to do this? There must be a better/easier way!
  4. What do I gain using feature/library/tool X and what are its costs? What about the alternatives?

Conclusion

You need some experience to recognize that you are on the wrong path, solving problems you would not even have had if you had done the right thing in the first place.

Experience is what you got by not having it when you needed it.

-Author Unknown

Try to know and admit your limits – there is nothing wrong with struggling to get things working but it helps to frequently check your direction by taking a step back and reflecting.

Explicit types – and when to use them

Many modern programming languages offer a way to declare variables without an explicit type if the type can be inferred, either dynamically or statically. Many also allow variables to be explicitly defined with a type. For example, Scala and C# let you omit the explicit variable type via the var keyword, but both also allow defining variables with explicit types. I’m coming from the C++ world, where “auto” has been available for this purpose since the relatively recent C++11. However, people are still debating whether you should actually use it.

Pros

Herb Sutter popularised the almost-always-auto style. He advocates that using more type inference is good because it is roughly equivalent to programming against interfaces instead of implementations. He says that “Overcommitting to explicit types makes code less generic and more interdependent, and therefore more brittle and limited.” However, he also mentions that you might sometimes want to use explicit types.
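To illustrate the point, here is a minimal sketch of my own (not from Sutter’s article), assuming a simple word-count map:

    #include <map>
    #include <string>

    void example(const std::map<std::string, int>& wordCounts) {
        // Committing to the concrete iterator type ties this line to std::map;
        // switching the container to std::unordered_map later would break it.
        std::map<std::string, int>::const_iterator explicitIt = wordCounts.find("auto");

        // With auto the code only commits to "whatever find() returns and can be compared to end()".
        auto inferredIt = wordCounts.find("auto");

        (void)explicitIt;
        (void)inferredIt;
    }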

Now what exactly is overcommitting here? When is the right time to use explicit types?

Cons

Opponents to implicit typing, many of them experienced veterans, often state that they want the actual type visible in the source code. They don’t want to rely on type inference being right. They want the code to explicitly state what’s going on.

At first, I figured that was just conservatism in the face of a new “scary” feature that they did not fully understand. After all, IDEs can usually infer the type on-the-fly and you can hover on a variable to let it show you the type.

For C++, the function signature is a natural boundary where you often insert explicit types, unless you want to commit to the compile time and physical dependency cost that comes with templates. Other languages, such as Groovy, do not have this trade-off and let you skip explicit types almost everywhere. After working with Groovy/Grails for a while, where the dominant style seems to be to omit types wherever possible, it dawned on me that the opponents of implicit typing have a point. Not only does the IDE often fail to show me the inferred type (even though it still works far more often than I would have anticipated), but I also found it harder to follow and modify code that did not mention explicit types. Seemingly contrary to Herb Sutter’s argument, that code felt more brittle than I would have liked.

Middle-ground

As usual, the truth seems to be somewhere in the middle. I propose the following rule for when to use explicit types:

  • Explicit typing for domain-types
  • Implicit typing everywhere else

Code using types from the problem domain should be as specific as possible. There’s no need for it to be generic – it’s actually counter-productive, as otherwise the code model would be inconsistent with the model of the problem domain. This is also the most important aspect to grok when reading code, so it should be explicit. The type is as important as the action on it.

On the other hand, for pure-fabrication types that do not represent a concept in the domain, the action is important, while the type is merely a means to achieve this action. Typically, most of the elements from a language’s standard library fall into this category. All your containers, iterators, callables. Their types are merely implementation details: an associative container could be an array, a hash-map or a tree structure. Exchanging it rarely changes the meaning of the code in the problem domain – it just changes its performance characteristics.

Containers will occasionally contain domain-types in their type. What do you do about those? I think they belong in the “everywhere else” category, but you should take extra care to name the contained type when working with it – for example when declaring the variable of the for-each loop on it, or when inserting something into it. This way, the “collection of domain-type” aspect will become clear, but the specific container implementation will stay implicit – like it should.
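Here is a small sketch of how this rule might look in C++ code; the domain types Invoice and Customer are hypothetical, invented purely for the example:

    #include <string>
    #include <unordered_map>
    #include <vector>

    // Hypothetical domain types, invented for this example.
    struct Customer { std::string name; };
    struct Invoice  { Customer customer; double total; };

    double outstandingTotal(const std::vector<Invoice>& invoices,
                            const std::unordered_map<std::string, double>& alreadyPaid) {
        double sum = 0.0;
        // Domain type spelled out: the reader sees immediately what is being iterated over.
        for (const Invoice& invoice : invoices) {
            // Pure fabrication: an iterator into some associative container.
            // auto is enough here, the concrete container type is an implementation detail.
            auto paid = alreadyPaid.find(invoice.customer.name);
            double paidAmount = (paid != alreadyPaid.end()) ? paid->second : 0.0;
            sum += invoice.total - paidAmount;
        }
        return sum;
    }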

What do you think? Is this a useful proposition for your code?

Declare war on your software

If we believe Robert Greene, life is dominated by fierce war – and he does not only refer to obvious events such as World War II or the Gulf Wars, but also to politics, jobs and even the daily interactions with your significant other.

The book

Leaving aside whether or not his notion corresponds to reality, it is indeed possible to apply many of the strategies traditionally employed in warfare to other fields, including software development. In his book The 33 Strategies of War, Robert Greene explains his extended conception of the term war, which is not restricted to military conflicts, and describes various methods that may be used not only to win a battle, but also to gain an advantage in everyday life. His advice is backed by detailed historical examples originating from famous military leaders like Sun Tzu, influential politicians like Franklin D. Roosevelt and even successful movie directors like Alfred Hitchcock.

Examples

While it is clear that Greene’s methods are applicable to diplomacy and politics, their application in the field of software development may seem slightly odd. Hence, I will give two specific examples from the book to explain my view.

The Grand Strategy

Alexander the Great became king of Macedon at the young age of twenty, and one of his first actions was to propose a crusade against Persia, the Greeks’ nemesis. He was warned that the Persian navy was strong in the Mediterranean Sea and that he should strengthen the Greek navy so as to attack the Persians both by land and by sea. Nevertheless, he boldly set off with an army of 35,000 Greeks and marched straight into Asia Minor – and in the first encounter, he inflicted a devastating defeat on the Persians.

Now, his advisors were delighted and urged him to head into the heart of Persia. However, instead of delivering the finishing blow, he turned south, conquering some cities here and there, leading his army through Phoenicia into Egypt – and by taking Persia’s major ports, he prevented them from using their fleet. Furthermore, the Egyptians hated the Persians and welcomed Alexander, so that he was free to use their wealth of grain to feed his army.

Still, he did not move against the Persian king, Darius, but started to engage in politics. By building on the Persian government system, changing merely its unpopular characteristics, he was able to stabilize the captured regions and consolidate his power. It was not until 331 B.C., two years after the start of his campaign, that he finally marched on the main Persian force.

While Alexander might have been able to defeat Darius right from the start, this success would probably not have lasted for a long time. Without taking the time to bring the conquered regions under control, his empire could easily have collapsed. Besides, the time worked in his favor: Cut off from the Egyptian wealth and the subdued cities, the Persian realm faltered.

One of Greene’s strongest points is the notion of the Grand Strategy: If you engage in a battle which does not serve a major purpose, its outcome is meaningless. Like Alexander, whose actions were all targeted on establishing a Macedonian empire, it is crucial to focus on the big picture.

It is easy to see that these guidelines are not only useful in warfare, but in any kind of project work – including software projects. While one has to tackle the main tasks at some point, it is important to approach them in a reasoned manner, not rashly. If an action is not directed towards the aim of the project, one will be distracted and endanger its execution by wasting resources.

The Samurai Musashi

Miyamoto Musashi, a renowned warrior and duellist, lived in Japan during the late 16th and early 17th century. Once, he was challenged by Matashichiro, another samurai whose father and brother had already been killed by Musashi. In spite of warnings from friends that it might be a trap, he decided to face his enemy; however, he did prepare himself.

For his previous duels, he had arrived exceedingly late, making his opponents lose their temper and, hence, their control over the fight. This time, instead, he appeared at the scene hours before the agreed time, hid behind some bushes and waited. And indeed, Matashichiro arrived with a small troop to ambush Musashi – but using the element of surprise, Musashi was able to defeat them all.

Some time later, another warrior caught Musashi’s interest. Shishido Baiken used a kusarigama, a chain-sickle, to fight and had been undefeated so far. The chain-sickle seemed to be superior to swords: The chain offered greater range and could bind an enemy’s weapon, whereupon the sickle would deal the finishing blow. But even Baiken was thrown off his guard; Musashi showed up armed with a shortsword along with the traditional katana – and this allowed him to counter the kusarigama.

A further remarkable opponent of Musashi was the samurai Sasaki Ganryu, who wore a nodachi, a sword longer than the usual katanas. Again, Musashi changed his tactics: He faced Ganryu with an oar he had turned into a weapon. Exploiting the unmatched range of the oar, he could easily win the fight.

The characteristic that distinguished Musashi from his adversaries most was not his skill, but that he excelled at adapting his actions to his surroundings. Even though he was an outstanding swordsman, he did not hesitate to follow different paths, if necessary. Education and training facilitate becoming successful, but one has to keep an open mind to change.

Relating this to software development, it does not mean that we have to start afresh every time we begin a new project. Nevertheless, it is dangerous to cling to outdated technologies and procedures; sometimes it may be helpful to regard a situation like a child would, without any assumptions. In this manner, it is probably possible to learn along the way.

Summary

Greene’s book is a very interesting read and even though in my view one should take its content with a pinch of salt, it is a nice opportunity to broaden one’s horizon. The book contains far more than I addressed in this article and I think most of its findings are indeed in one way or another applicable to everyday life.

The web is for documents

The web is intended to help a person find and understand relevant information. The primary container of information is the document. Therefore web applications should be centered around a document metaphor, not an app one.

In 1990 Tim Berners-Lee and Robert Cailliau wrote a proposal for what we call the web today:

HyperText is a way to link and access information of various kinds as a web of nodes in which the user can browse at will.

The web is a linked information system. Bret Victor states:

Information software serves the human urge to learn. A person uses information software to construct and manipulate a model that is internal to the mind — a mental representation of information.

The web is built around information. More information than we can handle. What we need to make sense of it all is understanding. The power of technology can be used to transfer and gain understanding. Understanding needs to be a first class citizen. The applications we build must be centered around it.

One way to foster understanding is to interact, to play with information. Technology can simulate a system of information so that we can form hypotheses and ask questions. Bret Victor coined the term “explorable explanations” to describe such systems.

I believe the web is perfectly suited for building explorable explanations.

The web’s container for information is the document. A document combines different forms of media (text, images, video, …) into a whole. Fortunately for us the web does not stop there. With scripting we have the possibility to interact with and manipulate the information in order to gain further insight.

Most of the tools we need to create for understanding are already at our hands. What we need is a fundamental change in focus. Right now (a large part of) the web industry tries to play catch-up with native. Whole frameworks try to mimic native applications as if this were a virtue. Current developments want to abstract the document away as far as possible. This is not what the web was intended for. Why build an application that tries so hard to recreate a native feeling on something other than the native platform itself? Web applications should be built on the strengths of the web. We should not chase a foreign metaphor.
Right now the web seems to be torn. Torn between the print era of passive documents and the shiny new world of native applications. But the web has the capability to do so much more. To concentrate on its purpose, to fill the niche. A massive niche. Understanding is a core endeavor of mankind. To quote Stephen Anderson and Karl Fast in introducing their upcoming book From Information to Understanding:

In all areas of life, we are surrounded by understanding problems.

Doug Engelbart shares a similar vision for the purpose of the personal computer per se:

By “augmenting human intellect” we mean increasing the capability of a man to approach a complex problem situation, to gain comprehension to suit his particular needs, and to derive solutions to problems.

The web is ready. The tools are ready. But are we?

Where to start: foundations

Transparent software: Making complexity understandable

Software is broken. Not because it is not simple. Because it is not transparent. Let me elaborate.

“I don’t think that’s feasible”

Those are not the first words we wanted to hear from our client after our presentation.

“I have seen several engineers working over a year just for the concept”

This is complex.

The world tells you that all should be simple. Make it simple. Keep it simple. It is just that simple.

Only when it is not.

Look around you. Nature is beautiful. And complex. The human is beautiful. And complex. Many systems and contexts are complex.

Look inside you. Your thoughts and emotions. Your relationships.

Look at your computer, your phone, your work. Many problems we have and solve everyday are not simple.

Sometimes we think “Oh, why can’t this be simple”. “The software should be simpler”. But KISS won’t save you. Software is broken. But not because it is not simple. We do not want simplicity. We want clarity. We want to understand. Let me elaborate.

I create software for engineers. Engineers are the people who take problems that are fine in theory and work perfectly in a controlled environment like a lab and translate them into the real world. But the real world isn’t simple or controlled. It is messy. The smallest things can blow up your house of cards of theory. These people need software to understand what happens. Through their education and their experience they know what should happen. They are the experts. But the systems and problems they work with are complex and mostly invisible to the human eye and incomprehensible to the human brain. Yet today’s engineering software looks like this:

[Image: options all over the place]

Make no mistake. This isn’t confined to engineering problems. Take a look around you. Nowadays there are a million sensors collecting masses of data. Your phone. Your thermostat. Your shoes. Even your toothbrush. Sensors are everywhere. We are collecting more data than ever before. This data gives us a glimpse of the complex underlying system. Or so we think. But why do we collect it in the first place?
Because we can. We are seeking the holy grail of wisdom. More data creates more information. More information creates more knowledge. And finally we hope that more knowledge gets us a spark of wisdom. But we are just starting out.

The course of technology

The normal course of technology goes something like this: first we are constrained, and we try to push the boundaries. When the field is wide open we do everything that’s possible. After a while we become more mature and use the technology to serve a purpose. Software is like that. Collecting data is like that. It is like an addiction. Think about it: do you influence the data or does the data influence you? Who is in control?

But there’s hope. In order to reason about it all and come to our own decisions, we need transparent software. The dictionary defines transparent as:

transparent (adjective)

  • (of a material or article) allowing light to pass through so that objects behind can be distinctly seen
  • easy to perceive or detect
  • having thoughts, feelings, or motives that are easily perceived
  • (of an organization or its activities) open to public scrutiny
  • Physics: transmitting heat or other electromagnetic rays without distortion.
  • Computing: (of a process or interface) functioning without the user being aware of its presence.

Transparent is a tricky word. It seems to be a paradox: on the one hand it means invisible and on the other hand it means easily perceived. Both uses of the word apply to what software needs to be.

No more magic

Software has to help us understand systems and concepts. What happens and what happened. It has to make them clear, comprehensible and detectable. We need to see how the software comes to its conclusions. We need the option to overrule it. The last decision is ours. Software can help us form a decision but it should never decide on our behalf.
Also: it gets out of our way. We don’t need any more rituals to persuade the software to do our bidding. Software is a tool. To be a great tool it needs to fit the problem and the person. No one wants to cut with a knife that is all blade. It should adapt to our capabilities. It should fit like a glove. It amplifies our capabilities instead of crippling them. It is made for us. It is transparent.

That’s the goal. But how do we get there?

Maximalistic design or design with ‘Betthupferl’

Minimalistic design is a misnomer. Reducing a complex issue needs more design, not less. Designing is about thinking, taking care. If we want to make complex systems understandable we need to think hard. What is the essence of the problem? What information does the expert need to evaluate a situation? All of this expertise is hidden in the heads and the daily routine of the people we design for. So we need to ask, watch and listen to them. Without directing them, and with an open mind. Throw your preconceptions overboard. Remove your ego. First just observe. Collect. Challenge your assumptions. When you have a good amount of information (with experience you will know when you can start, but do not believe you will ever have enough), distill. Distill the essence. And then add. That little extra. The details. The cues which foster understanding.
When you stay the night in a hotel and find a little sweet on your pillow in your clean, white bed, you are delighted. In German we call this a ‘Betthupferl’.
The little extra you add is just that. The user feels cared for. He sees that someone has gone the extra mile, has thought deeply about him. The essence is not enough. You need some details to weave the parts together into a whole. This can be extra information when the user needs it. This can be a shortcut when the context is right. Or an animation which guides the eye. Or, or, or…
What is important is that it does not confuse or blur the essence. It should support it. Silently, almost invisibly.

Successful patterns for software developers

Some lessons I learned

Be good at everything but better at something

Diversify. Knowing and working in different domains, using different programming languages and tools and handling diverse tasks enriches your creativity as a developer. It keeps your mind flexible and inspires you. Many can agree on that. But on the other hand you should find your niche, your personal joy, your home ground. Different developers have different personalities, different talents and different preferences. People shine when they work with something they like. They are happier, more productive and more creative. But not all developers in a team or company have the same favorites. Your company should encourage you to get better at your specialty. Work on your strengths.
I like dynamic languages, others prefer static languages. Some developers like to craft desktop software, some web applications. Others concentrate on the UI or controlling robots or sensors. Some ponder over algorithms while others are happy with designing visualizations.
When your team has a common ground but everybody has strengths in different fields everybody can learn from and supplement each other. Synergy is created.

Be support, project manager, admin, …

Developers in our company wear many hats. Sometimes even literally (but that’s another story). If a developer is responsible for or takes part in other roles of the project, his view is widened. When he talks with customers he not only understands their needs better but can also suggest different ways or solutions. His domain knowledge grows and he can identify pain points. Working with the platforms where the application is deployed and the systems involved also strengthens his grasp of the environment the application lives in. As a project manager he learns to juggle time and scope. He learns to work with constraints and the different forces pulling on not only the code but the whole project.

Optimize for forgiveness

A user deleted a post accidentally. What do we normally do? We introduce a confirmation question to establish a barrier for deletion. What is this? We make the UX worse because we think it is the user’s fault and we need to protect him from himself. A better way would be not to delete the post but to make it invisible. This way he can undo the deletion if he removed the post by mistake. But what about updates? Updating a post overwrites the old content. What if this happened unintentionally? A better way would be to record the last state of the post and undo the update if necessary. Today’s computers have so much memory and are so powerful that an application can afford to be merciful. It can forgive its users when they did something wrong and want to revert it.
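As a rough sketch of the idea (in C++, with a made-up Post type): “deleting” only hides the post, and updating remembers the previous state so that it can be undone.

    #include <optional>
    #include <string>

    // Hypothetical post type, invented for this sketch.
    struct Post {
        std::string content;
        std::optional<std::string> previousContent; // last state, kept around for undo
        bool hidden = false;                        // "deleted" posts are only hidden
    };

    void deletePost(Post& post)  { post.hidden = true; }   // reversible at any time
    void restorePost(Post& post) { post.hidden = false; }

    void updatePost(Post& post, const std::string& newContent) {
        post.previousContent = post.content;               // remember the old state
        post.content = newContent;
    }

    void undoUpdate(Post& post) {
        if (post.previousContent) {
            post.content = *post.previousContent;
            post.previousContent.reset();
        }
    }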

It is not about you

This is one of the hardest lessons to learn. Your work is not about you. Yes, it reflects you in a way. But it does not define you, you define the work. If your code has bugs or you make a mistake and accidentally delete important data, take a deep breath and think. Get help. Plan your steps, discuss with others what to do. Tell them you made a mistake. Everybody does. Do not try to fix it yourself. Do not hide it. Focus on the problem at hand. Not your shame or feeling of guilt. (This goes hand in hand with optimizing for forgiveness)

Talk and listen

Feedback. One of the most important things in software development. Talk to your users. Listen. Again and again. Often talking for 10 minutes about a task or problem can save hours of work. The bug you couldn’t reproduce? It was in a different area. The feature you thought was nearly impossible? With a slight change it is much easier. This also goes for everything else. Deployments and commits. Design decisions. You are the expert in your domain, the customer in his. Act accordingly. Tell him the options he has and what the consequences will be. Don’t let him guess. He shouldn’t have to do your work.

Web, your users deserve better

The web has come a long way since its inception. But nevertheless many applications fail to serve the user appropriately. We talk a lot about new presentation styles, approaches and enhancements. These are all good endeavors but we should not neglect the basics. Say you have crafted a beautiful application. It is fast, reliable and has all the features the client, user or product manager has envisioned. But is it usable? Is its design up to the task? How would you know? You are no designer. But you can evaluate whether your application has the fundamental building blocks, the basics. How?
Fortunately there is an ISO standard about the proper behaviour of information systems: ISO 9241-110. It defines seven principles for dialogues (in a wider sense):

  • Suitability for the task: the dialogue is suitable for a task when it supports the user in the effective and efficient completion of the task.
  • Self-descriptiveness: the dialogue is self-descriptive when each dialogue step is immediately comprehensible through feedback from the system or is explained to the user on request.
  • Controllability: the dialogue is controllable when the user is able to initiate and control the direction and pace of the interaction until the point at which the goal has been met.
  • Conformity with user expectations: the dialogue conforms with user expectations when it is consistent and corresponds to the user characteristics, such as task knowledge, education, experience, and to commonly accepted conventions.
  • Error tolerance: the dialogue is error tolerant if despite evident errors in input, the intended result may be achieved with either no or minimal action by the user.
  • Suitability for individualization: the dialogue is capable of individualization when the interface software can be modified to suit the task needs, individual preferences, and skills of the user.
  • Suitability for learning: the dialogue is suitable for learning when it supports and guides the user in learning to use the system.

This sounds pretty abstract so let’s take a look at each principle in detail.

Suitability for the task

[Image: a bloated application]

Simple and easy. You all know the bloated applications from the desktop with their myriad functions, operations, options, settings, preferences, … These are easy to spot. But often the details are overlooked. Many applications try to collect too much information. Or in the wrong order. Scattered over too many dialogues. This is such a big problem in today’s information systems that there’s even a German word for preventing it: Datensparsamkeit (data minimisation). Your application should only collect and ask for the information it needs to fulfill its tasks.
But collecting information is not the only problem. Helping with little things like placing the focus on the first input field or prefilling fields with meaningful values that can be derived automatically improves the efficiency of task completion. Today’s applications have a lot of context information available and can help the user fill out this data from the context she is in, like the current date, her location, selected contexts in the application or previous values.
Above all you have to talk to your users and understand them to adequately support their goals. Communication is key. This is hard work. They might not know what is important to them. Then watch them using your application, look at how they reached their goals before your application was there. What were their problems? What went well? What (common) mistakes did they make? How can your application avoid those?

Self-descriptiveness

In every part of your application the user needs to know what the function of every item on the screen is. A recent trend in design produces widgets on the screen that are too ambiguous. Is this a link, a button or just text? What is clickable? Or editable? UX calls this an affordance:

“a situation where an object’s sensory characteristics intuitively imply its functionality and use”

Just from looking at a control, the user has to get an idea of what it is for. So when you look at the following input field, what is the format of the date you need to enter?

[Image: a date input field]

So if your application accepts a set of formats, you should tell the user beforehand. The same goes for required fields or constraints like maximum or minimum length or value ranges. But nowadays applications can go a step further: you can tell the user while she enters her data that her input contradicts another input or a value in your database. You can tell her that the username she wants is already taken or that the date of the appointment is already blocked.

[Image: “username is already taken” validation message]

Controllability

Everybody has seen this dreaded message:

Item was deleted

No matter how complex the confirmations needed to delete an item, items still get deleted accidentally. What now? Adding levels of confirmation or complex rituals to delete an item does not value the users and their time. Some applications only mark an item as deleted and remove this flag if necessary. That is not enough. What if the user does not delete an item but overwrites one of its values by mistake? Your application needs an undo mechanism. A global one. Users, like all humans, make mistakes. The technology is able to cope with that and should not make them feel bad about it. It can be forgiving. So every action a user takes must be revocable. Long-running processes must be cancelable. Updates must be undoable.
I know there are exceptions to this. Actions which cause processes in the real world to start can sometimes be irrevocable. Sometimes. Nobody thought that sending an email could be undone. Google did it. How? They delay the sending and offer an option to cancel this process. Think about it. Maybe the actions taken in your application can be undone as well.
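A minimal sketch of the delay-and-cancel idea (in C++, with a hypothetical sendMail function): the action only runs after a grace period, during which it can still be cancelled.

    #include <atomic>
    #include <chrono>
    #include <functional>
    #include <future>
    #include <thread>

    // Runs an action after a grace period unless it is cancelled in the meantime.
    class DelayedAction {
    public:
        DelayedAction(std::function<void()> action, std::chrono::seconds delay)
            : cancelled_(false),
              task_(std::async(std::launch::async, [this, action, delay] {
                  std::this_thread::sleep_for(delay);
                  if (!cancelled_) action();
              })) {}

        void cancel() { cancelled_ = true; }        // the "undo" within the grace period

    private:
        std::atomic<bool> cancelled_;
        std::future<void> task_;                    // destructor waits for the task to finish
    };

    // Usage sketch with a hypothetical sendMail function:
    //   DelayedAction pending([] { /* sendMail(message); */ }, std::chrono::seconds(10));
    //   pending.cancel();   // "undo send", as long as the grace period has not passed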
Your application should not only allow the user to reverse a process but also to start a process and complete it. This sounds obvious, but many applications put so many obstacles in the way of finding out how to start an action. Show the actions that can be started. Provide shortcuts for the user to start and to advance. If your process has multiple steps, make it easy for the user to return to where she left off.

Conformity with user expectations

Especially in web design, where there is so much freedom in how your application looks: avoid fanciness or cleverness.

[Image: fanciness – a blog post without borders and title]

There are certain standards for how widgets look; stick to them. If the user clicks a button on a form, she expects the content she entered to be submitted. If she wants to upload a file, the button should be labelled accordingly. Use clear words. Not only conventions determine how something is worded, but also the task at hand. If the user expects to see a chart of her data, “calculate” or “generate” might not be the right button label even if that is what the application does. So again: talk to your users, understand them and their experience. Choose clarity over cleverness. Make it obvious. Your application might look “boring”, but if the user knows where to go and what to do, that is worth so much more.

Error tolerance

Oh! Your application accepts scientific notation. Entering 9e999999999… and

boom!

Users don’t enter malicious data on purpose (at least not always). But mistakes happen. Your application should plan for that. Constrain your input values. Don’t blow up when the user attaches a 100 GB file. Tell them what values you accept and when and why the information they entered does not comply. Help them by showing fuzzy matches if their search term doesn’t yield an exact match. Even if the data submitted by the user is correct, data from other sources might not be. Your application needs to be robust. Take the problem and error cases into account, not just the happy path.
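As a small illustrative sketch (C++, with invented names and limits) of constraining input and explaining what is accepted instead of blowing up:

    #include <cstddef>
    #include <exception>
    #include <optional>
    #include <string>

    // Parse a quantity between 1 and 1000. On failure, explain what is accepted
    // instead of letting a malformed or absurdly large value blow up later on.
    std::optional<int> parseQuantity(const std::string& input, std::string& message) {
        try {
            std::size_t consumed = 0;
            long value = std::stol(input, &consumed);
            if (consumed != input.size()) {              // rejects "12abc" and "9e999999999"
                message = "Please enter a whole number, e.g. 42.";
                return std::nullopt;
            }
            if (value < 1 || value > 1000) {
                message = "Please enter a quantity between 1 and 1000.";
                return std::nullopt;
            }
            return static_cast<int>(value);
        } catch (const std::exception&) {                // not a number at all, or out of range
            message = "'" + input + "' is not a number we can read.";
            return std::nullopt;
        }
    }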

Suitability for individualisation

Users are different. They differ in skills, education, knowledge, experience and other characteristics. Some might need visual assistance like a color-blind mode. Your application needs to provide this. Due to the different levels of experience and the different approaches users take, your application should provide options to define how much information is presented and how it is shown. Take a look at the following table of values. Do you see what is shown?

[Image: a table of sine values]

Now take a look at a graph with the same values.

[Image: the same sine values as a graph]

Sometimes one representation is better than another. Again, talk to your users; they might prefer different presentations.

Suitability for learning

You know your application. You know where to start an action and where to click. You know how the search is used and what the filters are. You know where to find the report generation. You built it. But for first-time users it is like entering a foreign city. Some things might be familiar and some strange. You need to think about the entry point of your application. Users need help. Think about the blank slate, when your user or your application does not have any data yet. How do you guide the user to create her first project or enter information for the first item? She needs help finding the appropriate buttons and links to start the processes. She might not recognize the function behind an icon at first glance. Sometimes a tooltip helps. Sometimes you need a legend. And sometimes you should use text instead of an icon.

[Image: icons in all their glory]

A coder’s manifesto

A personal manifesto about what I value in developing software and what I think needs to change how we develop software.

Remember the Agile manifesto? What was the most important principle behind it?

User value first

Our highest priority is to satisfy the customer
through early and continuous delivery
of valuable software.
– The agile manifesto

If it doesn’t benefit the user it should not be done. This can be a new feature, a better user interface, a clearer wording, better performance, robustness, … Not all improvements have immediate value but they must have value at some point. If you look at the craftsman project priorities, timely delivery has the most user value for me. (Personal footnote: I think this is where the software craftsmanship movement sets the wrong focus: quality is not the most important thing, user value is.) But how do you know what the user might need?

Communication is key

Communication in all parts of software development is a must. You need to talk to your users, your fellow developers and other project partners. What is often neglected is the communication aspect of the code and the documentation. The primary measure of good code is how well it communicates its intent. How clear it is. Many measure code quality with things like testability, low coupling, coverage, metrics. But clarity and communication are by far the most important. If you can understand what the code does and how and why, you are able to change it. I personally believe that not only the statements but also the formatting of the code and the individual expression, the style, are important. Just as the typography in novels and poetry emphasizes the meaning, the formatting can do this for the code.
Sometimes the why of decisions cannot be expressed in code; this is where documentation comes in. If you feel the need to document the what or the how, refactor your code. The importance of communication cannot be overstated. Often talking with the user first can save you days and weeks of coding. How?
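A tiny illustrative sketch of that difference (the function and the business rule are invented for the example): the what belongs in the code itself, the why can live in a comment.

    // The "what" is expressed by a well-named function instead of a comment.
    bool isEligibleForDiscount(int orderCount) {
        // The "why" belongs in a comment: the threshold comes from a
        // (hypothetical) business rule, not from anything visible in the code.
        return orderCount >= 3;
    }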

KISS and YAGNI

Simple and easy. As computer scientists we are trained to solve a problem correctly in all its glory. Every corner case is handled and secured by a test. I think to be more successful as developers we have to unlearn this. (Don’t get me wrong: there are places and kinds of software where we need this, but the majority of software doesn’t.) How often have I implemented a complete version of a solution only to see later that just 80% of it was used? The remaining 20% occur once in a lifetime but consumed most of the development time. If we constrain the problem we can save orders of magnitude in development time and invest it in more user value. And in…

Developer happiness

Some developers trade development productivity for runtime performance. But we don’t want to code in assembler or C. I happily trade runtime performance for productivity. Productivity means happiness. I am happy when I get things done. If I need to improve the performance I can focus on the parts which need it (remember YAGNI?). Programming languages, frameworks and platforms are not just tools, they are, and shape, our ways of thinking. The tools we use have to help us reach our goals, so…

Testing is important but just a tool

Testing gives me the confidence to change code without breaking things. It helps me to avoid regressions. Tests are a great tool, but only a means to an end. The program code is more important than the tests, so tests should not force me to make compromises in the clarity or communication level of the code. Ideally, testing environments should be easy to set up and execute. One thing the recent TDD debate showed me is that we need to focus on the goals we have and the problems we want to solve with our tools. The tools we use should not take a front row seat. If they need more focus or effort than the benefit they bring, something is wrong. We need to constantly assess whether the tools help us to reach our goals. So we have to…

Reflect

The last principle of the agile manifesto is to reflect regularly. What can be done better? How to improve? What did we learn? Often this sounds like a chore. Many times the reflection is omitted. Clean code tries to prevent this with daily reflection. But the set of practices and principles is questionable and carved into stone. So what can be done to make reflection more attractive? One of my suggestions is to keep a

Developer handbook

Reading Smalltalk Best Practice Patterns (by Kent Beck) I could not help but feel that it is like a personal developer handbook. In it Kent touches on many important aspects of programming like composition of methods, naming and formatting. On top of that he describes his personal experiences and opinions in many of the patterns. I found this really valuable. I think, starting with some of these patterns and my own experiences, it would be helpful to record them in a personal developer handbook. I can use this book to remember past solutions and reflect on them. I can add my experiences from different projects and contexts. The goal is to get better at writing clear code and to improve the communication of my code. These records could also help me to make my common habits, patterns, mistakes and approaches to problems explicit and to learn from them. This is a personal book, but it could also be a team effort or help others. The last part, the conclusion, of the TDD debate held a special insight for me: I should not lean on the masters of software development to advance our field and my development skills in particular. I need to find my own way, and many things I take for granted I have to…

Re-think

Kent told an anecdote about how he discovered TDD and then it struck me: he had a goal (or a need) in mind and had to find his own way. In hindsight it sounds easy, but it takes courage and persistence to push through. I think many more problems currently exist in developing software, and often we take them as a given, we adjust ourselves, we accept them. We cannot imagine a solution for these problems because, like the elephant with the rope, we have never experienced a time without them. But this needs to change. We have to rethink the unsolved or inadequately solved problems and ignore some conventions and habits we have formed as developers and as an industry. We have to rethink some of our approaches and assumptions. I believe we can find new ways that improve how we develop software, and for this we need to rethink.

How the most interesting IT debate is revealing our values as software developers

TDD is dead. Is TDD dead? A question that seems to divide our profession.
On the one side: developers who write their tests first and let them drive their code. They prefer the mockist approach to testing. Code should be tested in isolation, under lab-like conditions. Clean Code is their book. Practices and principles guide their thinking. An application should not be bound to frameworks and should have a hexagonal architecture. The GOOS book showed how it can be done.
On the other side: developers who focus on readability and clarity. They use their experience and gut feeling to drive their decisions. Because of past experiences they test their code the classical way. They are pragmatic. Practices and principles are used when they improve the understanding of the code. Code is there to be refactored. Just like a gardener trims bushes and a writer edits his prose, they work with their code.

What are your values?

What does this debate have to do with you?

Ask yourself:
What if you could write a proof of your program that costs 10 or just 5 times as much as the implementation? It would prove that your code works correctly under all possible circumstances. Would you do it?

Or would you rather improve the existing architecture, design or clarity of your code? So that you remove technical debt and are better positioned for future changes.

Or would you write new features and improve your application for the people using it?

What are your values?

History

At the beginning of my developer life in the late 80s/early 90s I remember that the industry was focussed on one goal: code reuse. Modules, components, libraries and frameworks were introduced. Then patterns came. All of that worked towards one side of the equation: low coupling.
High cohesion was neglected in pursuit of a noble goal. But what happened? The imbalance produced layer after layer, indirection after indirection, over-separation and over-abstraction. You had to deal with dependency injection (containers), configuration, class hierarchies, interfaces, event buses, callbacks, … just to understand a hello world.
Today we have more computing power and are solving more and more complex things. We think in higher abstractions. Many more people benefit from our skills and our work.
On the user-facing side, design focusses on simplicity and usability. Even complex relationships can be made understandable and manageable. A wise man once said: design is about intent.
The same goes for code: code is about intent. Intent should be the measure of the quality of our code. Not testability, not coupling: intent. If the code (and this includes the code comments) revealed its intent, you could fix bugs in it, improve it, change it, refactor it. Tests would be your safety net to ensure you are not breaking that intent.
You might say: but this is what TDD is all about! But I think we got it all backwards. The code and its intention-revealing nature is more important than the tests. The tests support. But tests should never replace or even harm the clarity of the code.
The quality of the code is important. But most important are the people using your application.
My goal is to delight the people who use my software and my way there is writing intention revealing software. I am not there and I am learning every day but I take step after step.

What are your values?