My Favorite Pattern

It has become somewhat of an internal meme that I do not like it when programmers use the word “wrapper”. When someone does say it, I usually get a cue from one of the others to start complaining about it. Do not get me wrong, though. I am very much in favor of wrapping things, but with purpose. And my favorite one is the façade.

When simple becomes complex

Many times, APIs start out simple and elegant. This usually works for a while and the API gets used a lot precisely because of its beauty and simplicity. But eventually, a new use case comes along that demands more of the API than it can currently serve. It has to be extended. This usually takes the form of an additional method or function parameter, or an additional function that needs to be called. Using the API now becomes more complex for all its users.

Do not underestimate this effect. I have only anecdotal evidence, but in my experience, a lot of unnecessary software complexity can be attributed to this¹. The Pareto principle applies here: a single use case causes all the users of the previously simple API to deal with new complexity (e.g. 10% of the use cases cause 90% of the complexity in the user-/call-sites).

Façades make it look beautiful

Luckily, it can be dealt with beautifully: using the façade pattern. This pattern abstracts a complex API behind a simple API. The trade-off, of course, is that it is less powerful than the “full API”. In our example though, all of the previous use-cases can keep using the simple API via a façade.

When to apply this

The aforementioned example, extending an API, is a very nice opportunity to apply the façade. Just keep the interface of the old API around, and re-implement it using the new, extended API, which is usually created by modifying the old API’s implementation. Now all the old call-sites can stay the same, yet you can have a more powerful API for those rare cases that need it.
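
As a small illustration, here is a minimal sketch in Java. The report-rendering API and all of its names are made up for this example; the point is only the shape of the façade:

public class ReportRenderer {

	public record Report(String title) {}

	public record RenderOptions(boolean landscape) {
		public static RenderOptions defaults() {
			return new RenderOptions(false);
		}
	}

	/* The extended API: it gained an options parameter that only rare use cases need. */
	public byte[] render(Report report, RenderOptions options) {
		// stand-in for the real rendering logic
		return report.title().getBytes();
	}

	/* The façade: it keeps the old, simple signature alive for all existing call-sites. */
	public byte[] render(Report report) {
		return render(report, RenderOptions.defaults());
	}
}

All the previous call-sites keep calling render(report) and never see the new parameter; only the rare use case that needs the options has to deal with it.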

Of course, you can also identify common usage patterns and refactor them using a façade, but that’s usually much harder to do.

What exactly are façades made of?

Façades do not hide the more complex API in the sense that the API's users are not allowed to use it. Yes, façades make APIs look beautiful, but that is where the metaphor ends. You can still access what is behind the façade. You can even write more façades for what lies behind it. Many APIs have multiple common cases and only very few complex ones.

So… Classes? Functions? Data? Any of those, in fact. Whenever you enable writing something in a simpler way for a common case, you have a façade. Very often, a small function with a simple signature is all the façade you need.

But it makes all the difference.

Now can someone please tell me what that little hook under the c is called?

  1. Façades can, of course, also contribute to creating complexity by growing the codebase and creating ‘variants’. But they rarely do. ↩︎

Optional polymorphism by delegation

A code design pattern I’ve used a lot in recent times is the “optional-based polymorphism” that looks like a delegation to another type that might not be available. It might be an implementation of the FCoI-principle (Favour Composition over Inheritance).

Let’s look at an example: An application has several different engines that move stuff around. Some engines are based on limit switches. They move until they are stopped by a physical switch. The application can make these engines move from one predefined position to the next, but not anywhere in between. Another type of engine is based on relative positioning: you give the engine the new target position and it positions itself there, without any limit switches or predefined positions.

Traditional approach

A typical implementation using inheritance would be a common supertype “Engine” that provides the functionality both engine types exhibit. From there, we would define two subtypes that extend the functionality in their desired way. One subtype would be the “LimitSwitchEngine”, the other one the “PositionableEngine”.

Our client code that wants to use a particular engine has two possibilities: Either it only requires the common functionality of an engine and can work with the supertype, or it needs to perform a downcast after checking the actual type of the engine.

Cast methods

The optional-based polymorphism guides the client code towards the specific subtype by providing all possibilities in the common interface:

public interface Engine {

	/* Common functionality */
	
	boolean isMoving();
	
	void emergencyStop();
	
	/* optional-based polymorphism */
	
	Optional<LimitSwitchEngine> boundToLimitSwitches();
	
	Optional<PositionableEngine> freelyPositionable();
}

The client code uses the Engine interface only as a stepping stone to the specific engine type required for its use case. If the engine object cannot provide that functionality, you get an empty Optional. Otherwise, you retrieve your reference to the specific type and work with it.
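
A sketch of such a call-site could look like this (the moveTo() method and the Position type are assumptions made up for this example):

void moveToLoadingDock(Engine engine, Position loadingDock) {
	engine.freelyPositionable().ifPresentOrElse(
		positionable -> positionable.moveTo(loadingDock),
		() -> { throw new IllegalArgumentException("engine cannot move to arbitrary positions"); });
}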

Disadvantages

One disadvantage of this approach is the fact that the supertype is aware of and even dependent on the different subtypes. You limit the scope of your type hierarchy to the types offered in the “entrance interface”. You can still use the traditional downcast way as described in the introduction for all other types, but that separates them into “featured” and “non-featured” subtypes. So this approach violates the Open/Closed principle by not being open to extension without modification.

Another disadvantage is that your typical navigation in the IDE doesn’t work as well anymore. If you want to know about all the different types of engines in the system, you can’t just look at the type hierarchy of the Engine type anymore. This is because of the first advantage this pattern brings:

Advantages

Not only does this style get rid of the downcast, it also frees up your type system in two different dimensions: The LimitSwitchEngine and PositionableEngine don’t need to be subtypes of Engine. They can be totally independent types with no real connection to the Engine. And they can be different instances. Of course, there is no need to use any of these freedoms. You can still inherit PositionableEngine from Engine and implement both types in the same object. But it isn’t mandatory anymore.

Another advantage is discoverability. Your typical type hierarchy lookup in the IDE is replaced with code completion lookup. If you get the names right, this pattern feels like writing code on rails, because your code completion proposals will lead you to the correct place.

Your opinion

What is your opinion on this pattern? What would you expect from a code design that provides those “casting” methods? Tell us in the comments!

Even better automated instance construction in C++

In the previous articles on automated instance construction (first and second) I showed how you can use constructor-argument deduction to automatically do dependency injection. While that approach worked nicely in general, one little detail was still nagging me: since construction of the actual objects happens at the end of a recursion, the stack depth in some of those constructions could get quite deep. In fact, there are an additional (Max − actual number of c’tor parameters) functions on the stack before the c’tor is called. This effect is even worse when resolving long dependency chains, where those functions are there for each of the dependencies currently being resolved.

The previous code uses an std::index_sequence of exactly the right length to inject the same number of mimic parameters that are then used to locate dependencies. If we knew the right length, there wouldn’t have to be any recursion around the construction. And that’s actually easy to refactor out: we can figure out the std::index_sequence first and return it, and then use it outside of the recursion:

template <class T, std::size_t Head, std::size_t... Rest>
constexpr auto
injection_parameter_sequence(std::index_sequence<Head, Rest...>,
  decltype(T{ mimic<T>{ Head }, mimic<T>{ Rest }... })* = nullptr)
{
  return std::index_sequence<Head, Rest...>{};
}

template <class T>
constexpr auto injection_parameter_sequence(std::index_sequence<>)
{
  return std::index_sequence<>{};
}

template <class T, std::size_t... Rest>
constexpr auto
injection_parameter_sequence(std::index_sequence<Rest...>)
{
  return injection_parameter_sequence<T>(std::make_index_sequence<sizeof...(Rest) - 1>{});
}

Starting with a “long” index sequence, this overload set returns the smaller index sequence for the construction. We can use a small tool function to actually create the instance:

template <class T, std::size_t... Params>
constexpr auto make_unique_injected_with_sequence(service_provider const& p, std::index_sequence<Params...>)
{
  return std::make_unique<T>(mimic<T>(p, Params)...);
}

Which can be called like this:

template <class T, std::size_t Max = 16> auto make_unique_injected(service_provider const& p)
{
  return make_unique_injected_with_sequence<T>(p,
    injection_parameter_sequence<T>(std::make_index_sequence<Max>{}));
}

Only these last two functions will be added to the call stack for each constructor call, which is not a whole lot. This construction has the additional advantage that only these two need to be changed to support different kinds of construction, e.g. using std::make_shared instead of std::make_unique.

Avoid special values of the result type for error indication

As many of you may know, we work with a variety of programming languages and ecosystems and with very different code bases. Sometimes it may be a modern green-field project using state-of-the-art frameworks. At other times it may be a dreaded legacy project initially written many years ago (either by us or someone we do not even know) using ancient languages and frameworks like really old Java stuff (pre JDK 7) or C++ (pre C++11), for example.

These old projects could not use features of modern incarnations of these languages/compilers/environments – and that is fine with me. We usually gradually modernize such systems and try to update the places where we come along to fix some issues or implement new features.

Over the years I have come across a pattern that I think is dangerous and easily leads to bugs and harder-to-maintain code:

Special values of the resulting type of a function to indicate errors

The examples are so numerous and not confined to a certain programming environment that they urged me to write this article. Maybe some developers using this practice will change their mind and add a few tools to their box to write safer and more expressive code.

A simple example

Let us imagine a function that returns a simple integer number like this:

/**
 * Here we talk to a hardware sensor. If everything works, we should
 * get a value between -50 °C and +50 °C.
 * If something goes wrong, we return -9999.
 */
int readAmbientTemperature();

Given the documentation, clients can surely use this kind of function, and if every use site interprets the result correctly, nothing will ever go wrong. The problem here is that we need a lot of domain knowledge and that we have to check for the special value.

If we use this pattern for other values where the value range is not that clearly bounded, we may either run into problems or invent other “impossible values” for each use case.

If we forget to check for the special value, the users may see it and be confused, or, even worse, it could be used in calculations.

The problem gets even worse with more flexible types like floating point numbers or strings, where it is harder to compare values and to separate valid results from failure indicators.

Classic error message that mixes technical code and error message in a confusing, albeit funny sentence (Source: Interface Hall Of Shame)

Of course, there are slightly better alternatives, like negative numbers for a function with a positive-only domain, or MAX_INT, NaN or the like provided by most languages.

I do not find any of the above satisfying and good enough for production use.

Better alternatives

Many may argue that their environment lacks features to implement distinct error indicators and values, but I tend to disagree and would like to name a few widely used alternatives for very different languages and environments:

  • Return codes and out-parameters for C-like languages like in the unix and win32 APIs (despite all their other flaws… 😀 )
  • Exceptions for Java, Python, .NET and maybe in some cases even C++ with sufficiently specific type and details to differentiate different failures
  • Optional return types when the failures do not need special handling and absence of a value is enough (a small sketch follows this list)
  • HTTP status code (e.g. 400 or 404) and a JSON object containing reason and details instead of a 2xx status with the value
  • A result struct or object containing execution status and either a value on success or error details on failure
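
To make the optional-return alternative concrete, here is a minimal Java sketch of the temperature example from above (the private sensor-access methods are placeholders):

import java.util.OptionalInt;

public class AmbientTemperatureSensor {

	/**
	 * Instead of returning -9999 on failure, an empty OptionalInt
	 * forces every caller to handle the error case explicitly.
	 */
	public OptionalInt readAmbientTemperature() {
		if (!sensorAvailable()) {
			return OptionalInt.empty();
		}
		return OptionalInt.of(readRawValue());
	}

	/* placeholder hardware access, assumed for this sketch */
	private boolean sensorAvailable() { return true; }

	private int readRawValue() { return 21; }
}

A caller can then write sensor.readAmbientTemperature().ifPresent(this::display), and there is no magic value left to forget.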

Conclusion

I am aware that I probably spent way too many words on such a basic topic, but I think the number of times I have encountered such a style – especially in code of autodidacts, but also professionals – justifies such an article. I hope I provided some inspiration for those who do not know better or those who want to help others improve.

What else can we do?

A common code structure to implement a decision is the if-statement, or in its complete form, the if-else-statement:
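
Here is a minimal Java sketch of that form; condition, doThis() and doThat() are placeholders:

if (condition) {
	doThis();
} else {
	doThat();
}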

By using the explicit if-else-statement, you essentially partition a part of your code into two “execution lanes” that are used mutually exclusively. Instead of writing them one above the other, we could, if our code editors supported it, write them side by side.

There are some graphical code editors that tried this tabular approach. It certainly looks unfamiliar to the eye trained on the first notation, but it makes one thing clear: The code flow will go through only one of the columns, not both.

Dependence on explicit conditionals

Using the if-else-statement became so second-nature to most developers that they acted confused and helpless when presented with a simple restriction:

“Don’t use the else keyword”

Jeff Bay, Object Calisthenics, 2008

The restriction is imposed as the second of nine rules from the object calisthenics by Jeff Bay. In the explanation of the rule, he stated that the rule should act as a first step towards implicit conditional statements. Paraphrased: There are 99 ways to express an else statement without using the keyword, but the average developer knows none of them.

In my opinion, the rule is merely the warm-up phase to a bigger challenge, as stated by the “anti-if campaign”: to get rid of if-statements (and else-statements, for that matter) in all contexts where alternatives prove more effective.

In order to decide when not to use if-statements, we should learn about the alternatives. There are plenty to choose from! (refer to slide #4)

But we should also learn about the if-statement itself. The goal isn’t to abandon it, but to use it when appropriate and then use it to its full potential.

An interesting thought about the “else”

Do we already know everything about the if and the else? I had the opportunity to learn something new not long ago. The hint came from Kevlin Henney in one of his talks (Non-Functional Coding).

The talk is fairly recent and has some traditional “Kevlin parts” in it. The part I highlighted is unusually aggressive for him. The reasoning is sound, but the nearly personal attack towards the audience (to “piss them off”) is uncalled for.

But the “volume up to 200 %” style works more often than not, and the bit got me thinking. The culprit in question is a piece of code from the talk in which the else-block is dropped in favor of an early return.

According to Kevlin, this style “is just wrong”. Let’s try to find out why.

There is one principle that is mentioned by Kevlin in passing: The “Single Level of Abstraction” principle that states that you should not mix different levels of abstraction in one block of code (the principle talks about methods). It is a foundation for the first rule in the object calisthenics: “Only one level of indentation per method”.

If you look at the if-code and else-code, they operate on the same level of abstraction. Maybe not on the same level of probability, but they deal with the same topic. Elevating one part by eliminating the else-block in favor of an early return means that this part is more important. It also designates the if-code, and in fact the whole if-statement, to be a guard clause. Guard clauses typically deal with invalid state and don’t complement the desired functionality. They act as gatekeepers and keep the invalid state from entering the method’s main body. As a metaphor: the bouncers in front of a club are like guard clauses. To say that being denied entry by a bouncer is comparable fun to being in the club is probably not a widespread opinion.
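
To make the difference concrete, here is a hedged Java sketch (not Kevlin’s original code; the order-shipping names are made up). In the first variant, both branches sit on the same level of abstraction; in the second, the else is eliminated via an early return, which makes the if read like a guard clause although both branches are equally valid functionality:

// Variant A: both outcomes are regular functionality on the same level of abstraction.
void shipBalanced(Order order) {
	if (order.isExpress()) {
		shipToday(order);
	} else {
		shipWithinAWeek(order);
	}
}

// Variant B: the else is gone, but the early return makes express shipping
// read like a guard clause, as if standard shipping were the "real" functionality.
void shipWithEarlyReturn(Order order) {
	if (order.isExpress()) {
		shipToday(order);
		return;
	}
	shipWithinAWeek(order);
}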

Unfinished reflection

I still reflect on other clues that are name-dropped by Kevlin, like the stated reduction of refactoring opportunities, but that’s probably because I don’t have enough comparison material.

There is one thing that I haven’t got a proper hold on yet and that’s the term “control state“. My google kung-fu is not mighty enough to reach past some obscure ASP.NET concepts from ten years ago. I haven’t heard the term in books – at least I don’t remember it.

So here is my call for help: Can you provide some source or explanation about what Kevlin Henney means by “control state“?

And what else do you think about the whole discussion?

Arrow Anti-Pattern

When you write code, it can happen that you nest some ifs or loops inside each other; the elephant-classification method shown below is an example.

Because of the shape of the indentation, this code smell is called the arrow anti-pattern. The deepest indentation depth is the tip of the arrow. In my opinion, such a style is detrimental to readability and comprehension.

In the following, I would like to present a simple way of resolving such arrow anti-patterns.

Extract Method

First we extract the arrow pattern as a new method. This allows us to use return values instead of variable assignments and makes the code clearer.

public void PrintElephantMessage(Animal animal)
{
    Console.WriteLine(IsAElephant(animal));
}
public string IsAElephant(Animal animal)
{
    if (animal.IsMammal())
    {
        if (animal.IsGrey())
        {
            if (animal.IsBig())
            {
                if (animal.LivesOnLand())
                {
                    return "It is an elephant";
                }
                else
                {
                    return "It is not an elephant. Elephants live on land";
                }
            }
            else
            {
                return "It is not an elephant. Elephants are big";
            }
        }
        else
        {
            return "It is not an elephant. Elephants are grey";
        }
    }
    else
    {
        return "It is not an elephant. Elephants are mammals";
    }
}

Turn over ifs

A quick way to eliminate the arrow anti-pattern is to invert the if conditions. This will make the code look like this:

public string IsAElephant(Animal animal)
{
    if (!animal.IsMammal())
    {
        return "It is not an elephant. Elephants are mammals";
    }
    if (!animal.IsGrey())
    {
        return "It is not an elephant. Elephants are grey";
    }
    if (!animal.IsBig())
    {
        return "It is not an elephant. Elephants are big";
    }
    if (!animal.LivesOnLand())
    {
        return "It is not an elephant. Elephants live on land";
    }
    return "It is an elephant";
}

Some IDEs like Visual Studio can help with flipping the ifs.

Conclusion

Arrow anti-patterns are code smells and make your code less legible. Fortunately, you can refactor the code with a few simple steps.

Have we made things too easy?

One of the old mantras for API design is “Make doing the right thing easy and the wrong thing hard”. This, of course, applies to much broader topics as well, such as software development or UX.

For software development specifically, are we maybe making “doing the wrong thing” too easy as well? Here are two examples:

Web Requests

In the old times, requesting data from a web server required first setting up the request, sending it, and then getting the result back to your application either via polling or callbacks. Dave Mark once aptly called this solving the “waiting problem”. It was cumbersome, to say the least. It was clear that making such a request was something to be avoided. You did it when you had to, but you implicitly avoided setting up too many different kinds of requests.

Nowadays, with the advent of anonymous functions/lambdas in most mainstream programming languages, continuations became the new way to handle these things: do_request(...).then(result -> ...). This already made things a lot easier. And even better, now we have some form of coroutines in many languages where you can just write result = await do_request(...). It almost looks like a normal function call.

With this, programmers can just do requests one after the other. Need one thing from a server? Do one request. Need ten things from a server? Do ten requests. Of course, this is horribly wasteful: each request will incur the full overhead of HTTP/HTTPS and a server round trip. In the old times, doing a request was painful, so you automatically looked for ways to avoid doing more of them and to bundle your asks into one request, arguably leading to a better program.
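
As a small illustration of the difference, here is a hedged Java sketch; the CatalogClient interface and its methods are made up for this example:

import java.util.List;

interface CatalogClient {
	Item fetchItem(String id);               // one HTTP round trip per call
	List<Item> fetchItems(List<String> ids); // one round trip for all ids
}

record Item(String id) {}

class ItemLoading {
	// Easy to write, but wasteful: ten ids cause ten round trips.
	static List<Item> loadOneByOne(CatalogClient client, List<String> ids) {
		return ids.stream().map(client::fetchItem).toList();
	}

	// The bundled version needs a moment of thought, but pays the overhead only once.
	static List<Item> loadBundled(CatalogClient client, List<String> ids) {
		return client.fetchItems(ids);
	}
}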

Dependencies

Before nice package managers were a thing, handling dependencies was a huge pain. You would have to manually get, unpack, configure and install the dependency for each developer and/or consumer system. As a consequence, libraries were big and often duplicated foundational things. But it also made developers groom their library selections carefully.

Now, with package managers, libraries have started to become small. Duplication within libraries certainly seems to have decreased, and so has the average library size. But this also made developers much less cautious when adopting a dependency, with package managers handling thousands of dependencies that no single developer can possibly understand fully. And this then leads to things like the left-pad disaster.

Better or worse?

I am pretty sure that both having nice abstractions to deal with asynchronicity and package managers are good things. But if they make certain things too easy, how can we deal with that? The only thing I can currently think of is figuratively sticking warning-labels on these things during review time, but because those things are now so easy and subtle, it is also easy to miss them.

Are there other examples where we maybe made the wrong thing too easy? Do you have any ideas on how to deal with this problem?

Grails Domain update optimisation

As many readers may know, we have been developing and maintaining some Grails applications for more than 10 years now. One of the main selling points of Grails is its domain model and object-relational mapper (ORM) called GORM.

In general ORMs are useful for easy and convenient development at the cost of a bit of performance and flexibility. One of the best features of GORM is the availability of several flexible APIs for use-cases where dynamic finders are not enough. Let us look at a real-world example.

The performance problem

In one part of our application we have personal messages that are marked as read after viewing. For some users there can be quite a lot of messages, so we implemented a “mark all as read”-feature. The naive implementation looks like this:

def markAllAsRead() {
    def user = securityService.loggedInUser
    def messages = Messages.findAllByAddresseeAndRead(user, false)
    messages.each { message ->
        message.read = true
        message.save()
    }
    Messages.withSession { session -> session.flush() }
}

While this is both correct and simple, it only works well for a limited number of messages per user. Performance will degrade because all the messages are loaded from the database into domain objects, then modified and saved one-by-one to the session. Finally, the session is persisted to the database. In our use case this could take several seconds, which is much too long for a good user experience.

DetachedCriteria to the rescue

GORM offers a much better solution for such use-cases that does not sacrifice expressiveness: a succinct API called “Where Queries” that creates DetachedCriteria and supports batch updates.

def markAllAsRead() {
    def user = securityService.loggedInUser
    def messages = Messages.where {
        read == false
        addressee == user
    }
    messages.updateAll(read: true)
}

This implementation takes only a few milliseconds to execute with the same dataset as above, which is de facto native SQL performance.

Conclusion

Before cursing GORM for bad performance, you should have a deeper look at the alternative querying APIs like Where Queries, Criteria, DetachedCriteria, SQL projections and restrictions to enhance your ORM toolbox. Compared to dynamic finders and GORM methods on domain objects, they offer better composability and performance without resorting to HQL or plain SQL.

The Optional Wildcast

This blog post presents a particular programming technique that I happen to use more often in recent months. It doesn’t state that this technique is superior or more feasible than others. It’s just a story about a different solution to an old programming problem.

Let’s program a class hierarchy for animals, in particular for mammals and birds. You probably know where this leads up to, but let’s start with a common solution.

Both mammals and birds behave like animals, so they are subclasses of it. Birds have the additional behaviour of laying eggs for reproduction. We indicate this feature by implementing the Egglaying interface.

Mammals feed their offspring by giving them milk. There are two mammals in our system, a cow and the platypus. The cow behaves like the typical mammal and gives a lot of milk. The platypus also feeds its young with milk, but only after they hatched from their egg. Yes, the platypus is a rare exception in that it is both a mammal and egglaying. We indicate this odd fact by implementing the Egglaying interface, too.

If our code wants to access the additional methods of the Egglaying interface, it has to check if the given object implements it and then cast it accordingly. I call this type of cast a “wildcast” because it seems to appear out of nowhere when reading the code and seemingly doesn’t lead up or down the typical type hierarchy. Why would a mammal lay eggs?

One of my approaches that I happen to use more often recently is to indicate the existence of a real wildcast with an Optional return type. In theory, you can wildcast from anywhere to anywhere you want. But only some of these jumps have a purpose in the domain. And an explicit casting method is a good way to highlight this purpose:

public abstract class Mammal {
	public Optional<Egglaying> asEgglaying() {
		return Optional.empty();
	}
}

The “asEgglaying()” method might return an Egglaying object, or it might not. As you can see, by default it returns only an empty Optional. This means that no cow, horse, cat or dog has to think about laying eggs, they just aren’t into it by default.

public class Platypus extends Mammal implements Egglaying {
	@Override
	public Optional<Egglaying> asEgglaying() {
		return Optional.of(this);
	}
}

The platypus is another story. It is the exception to the rule and knows it. The code “Optional.of(this)” is typical for this coding technique.

A client that iterates over a collection of mammals can now incorporate the special case with more grace:

for (Mammal each : List.of(mammals())) {
	each.lactate();
	each.asEgglaying().ifPresent(Egglaying::breed);
}

Compare this code with a more classic approach using a wildcast:

for (Mammal each : List.of(mammals())) {
	each.lactate();
	if (each instanceof Egglaying) {
		((Egglaying) each).breed();
	}
}

My biggest grief with the classic approach is that the instanceof is necessary for the functionality, but not guided by the domain model. It comes as a surprise and has no connection to the Mammal type. In the Optional wildcast version, you can look up the callers of “asEgglaying()” and see all the special code that is written for the small number of mammals that lay eggs. In the classic approach, you need to search for conditional casts or distinguish between code for birds and special mammal code when looking up the callers.

In my real-world projects, this “optional wildcast” style facilitates domain discovery by code completion and seems to lead me to more segregated type systems. These impressions are personal and probably biased, so I would like to hear from your experiences or at least opinions in the comments.

Unit-Testing Deep-Equality in C#

In the suite of redux-style applications we are building in C#, we are making extensive use of value-types, which implies that a value compares as equal exactly if all of its contents are equal. This is also known as “deep equality”, as opposed to “reference equality” or “shallow equality”. Both of those imply deep equality, but the other way around is not true. The same object is of course equal to itself, no matter how deep you look. And an object that references the same data as another object also has equal content. But a simple object that contains different lists with equal content will be unequal under shallow comparison, yet equal under deep comparison.

Though init-only records already provide a per-member comparison as Equals by default, this fails for collection types such as ImmutableList<> that, against all intuition but in accordance with their documented behavior, only provide reference-equality. For us, this means that we have to override Equals for any value type that contains a collection. And this is where the trouble starts. Once Equals is overridden, it’s extremely easy to forget to also adapt it when adding a new property. Since our redux-style machinery relies on a proper “unequal”, this would manifest in the application as a sporadically missing UI update.

So we devised a testing strategy for those types, using a little bit of reflection:

  1. Create a sample instance of the value type with no member retaining its default value
  2. Test, by going over all properties and comparing to the same property in a default instance, if indeed all members in the sample are non-default
  3. For each property, compare (via Equals) the sample instance to a modified sample instance with that property set to the value from a default instance.

If step 2 fails, it means there’s a member that’s still at its default value in the sample instance, e.g. the test wasn’t updated after a new property was added. If step 3 fails, the sample was updated, but the new property is not considered in Equals – and it can even tell which property is missing.

The same problems of course arise with GetHashCode, but are usually less severe. Forgetting to add a property just makes collisions more likely. It can be tested much in the same way, but can potentially lead to false positives: collisions can occur even if all properties are correctly considered in the function. In that case, however, the sample can usually be altered to remove the collision – and it is really unlikely. In fact, we never had a false positive.