Crashes when returning references to vector elements

Recently, I was experiencing a strange crash that I traced to a piece of C++ code looking more or less like this:

template <class T>
class container
{
public:
  std::vector<T> values_;
  T default_;

  T const& get() const
  {
    if (values_.empty())
      return default_;
    return values_.front();
  }
};

This was crashing when calling get() with a non-empty values_ member. The code looks fairly innocent, and it had been running in production for a couple of years already. So what changed?

I had, in fact, never instantiated this template with T = bool before. And that was causing the crash, while still compiling without any errors. Now if you’re a little versed in the C++ standard library you might know that std::vector<bool> is a special snowflake indeed. In an effort to save space, and, I suspect, to prove the usefulness of template specializations, it is not really a “normal” container holding bool values. Instead, it holds some integer type and packs each pseudo-bool into one of its bits. The consequence is that accessor functions like operator[], front() and back() cannot return a reference to a bool. Instead, they return a “proxy” object that supports assignment to and from a bool.
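
To see the proxy in action, here is a minimal sketch, assuming C++17 (the static_assert is my illustration, not part of the original code):

#include <type_traits>
#include <vector>

int main()
{
  std::vector<bool> bits{true, false, true};

  // front() does not hand out a bool&, but a proxy object
  static_assert(!std::is_same_v<decltype(bits.front()), bool&>);

  bits.front() = false;     // assignment through the proxy works...
  bool copy = bits.front(); // ...and so does conversion to bool
  return copy ? 1 : 0;
}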

Back to the get() function: for T = bool it tries to return a reference to a bool. Of course, that bool doesn’t really exist except as a temporary created by converting the proxy, and so this results in a dangling reference that causes a segmentation fault when used.

I suspect there could have been a warning about a dangling reference somewhere in there. I have seen clang-tidy report things like this (with a few false positives, too), but it did not show up for me. To fix it, I am now just returning a bool instead of a bool const& for T = bool. A special case on my side to work around a special case in std::vector.
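
One way to spell out such a fix generically is to select the return type with std::conditional_t; a minimal sketch, again assuming C++17 (the result_type alias is my naming, not from the original code):

#include <type_traits>
#include <vector>

template <class T>
class container
{
public:
  std::vector<T> values_;
  T default_;

  // by value for bool, by const reference for everything else
  using result_type =
      std::conditional_t<std::is_same_v<T, bool>, T, T const&>;

  result_type get() const
  {
    if (values_.empty())
      return default_;
    return values_.front();
  }
};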

Co-Variant methods on C# collections

C# offers a powerful API for working with collections, and especially LINQ offers lots of functional goodies to work with them. Among them is the Concat()-method, which allows you to concatenate two IEnumerables.

We recently had the use-case of concatenating two collections with elements of a common super-type:

class Animal {}
class Cat : Animal {}
class Dog : Animal {}

public IEnumerable<Animal> combineAnimals(IEnumerable<Cat> cats, IEnumerable<Dog> dogs)
{
  // XXX: This does not compile – type inference cannot unify Cat and Dog!
  return cats.Concat(dogs);
}

The above example does not compile because Concat() requires both sequences to have the same element type and returns a combined sequence of that type – and type inference cannot decide between Cat and Dog here. If we do not care about the specifics of the subclasses, we can build a Concatenate()-method ourselves which makes the whole thing possible, because instances of both subclasses can be put into a collection of their common parent class.

private static IEnumerable<TResult> Concatenate<TResult, TFirst, TSecond>(
  this IEnumerable<TFirst> first,
  IEnumerable<TSecond> second)
    where TFirst: TResult where TSecond : TResult
{
  IList<TResult> result = new List<TResult>();
  foreach (var f in first)
  {
    result.Add(f);
  }
  foreach (var s in second)
  {
    result.Add(s);
  }
  return result;
}

The above method is a bit clunky to call but works as intended:

public IEnumerable<Animal> combineAnimals(IEnumerable<Cat> cats, IEnumerable<Dog> dogs)
{
  // Works great!
  return cats.Concatenate<Animal, Cat, Dog>(dogs);
}
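
For completeness: since IEnumerable<out T> is covariant, the built-in Concat() does accept both sequences once the type argument is given explicitly – it is only the type inference that fails, not the conversion. A sketch:

public IEnumerable<Animal> combineAnimals(IEnumerable<Cat> cats, IEnumerable<Dog> dogs)
{
  // covariance converts IEnumerable<Cat> and IEnumerable<Dog>
  // to IEnumerable<Animal> once the type argument is explicit
  return cats.Concat<Animal>(dogs);
}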

A variant of the above Concatenate()-method can be useful if you use a collection of the parent class to collect instances from several subclass collections:

private static IEnumerable<TResult> Concatenate<TResult, TIn>(
  this IEnumerable<TResult> first,
  IEnumerable<TIn> second)
    where TIn : TResult
{
  IList<TResult> result = first.ToList();
  foreach (var item in second)
  {
    result.Add(item);
  }
  return result;
}

public IEnumerable<Animal> combineAnimals(IEnumerable<Cat> cats, IEnumerable<Dog> dogs)
{
  IEnumerable<Animal> result = new List<Animal>();
  result = result.Concatenate(cats);
  return result.Concatenate(dogs);
}
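
Both Concatenate() helpers above build their result list eagerly. If you care about the lazy evaluation that the built-in LINQ operators provide, the first helper can be written as an iterator instead; a sketch, meant as a replacement since the signature is identical:

private static IEnumerable<TResult> Concatenate<TResult, TFirst, TSecond>(
  this IEnumerable<TFirst> first,
  IEnumerable<TSecond> second)
    where TFirst : TResult where TSecond : TResult
{
  foreach (var f in first)
  {
    yield return f;
  }
  foreach (var s in second)
  {
    yield return s;
  }
}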

Maybe the above examples can serve as an inspiration for more utility methods that may improve working with collections in C#.

Make yourself comfortable

If you have to add a new feature to an existing code base, you’ve likely already experienced an uncomfortable truth: Nobody has thought about your use case. Nothing in the existing code base fits your goals. This isn’t because everybody wanted you to fail, but because your new feature is in fact brand new to the software, and responsible software developers stop working on code as soon as their work is done (while obeying the project’s definition of done).

So you try to shoehorn your functionality into the existing code. It’s not neat, but you get it working. Are you done? In my opinion, you haven’t even started yet. Your first attempt to combine your idea of the best implementation of your functionality with the existing system will be cumbersome and painful. Use it as a learning experience about how the system behaves and throw it away once you have a working prototype. Yes, you’ve read that right. Throw it away, as in undo your commits or changes. This was the learning/exploration phase of your implementation. You’ve applied your idea to reality. It didn’t work well. Now is the time to apply reality to your idea. Commence your second attempt.

For your second attempt, you should make use of your refactoring skills on the existing code. Bend it to your anticipated and tried needs. And once the code base is ready, drop your new feature into the new code “socket”. Your work doesn’t need to be cumbersome and painful. Make yourself comfortable, then make it work.

Here is an example, based on a real case:

An existing system had been in development for many years and worked with a lot of domain objects. One domain object was a price tag that looked something like this:

interface PriceTag {
    PriceCategory category();
    TaxGroup taxGroup();
    Euro nettoAmount();
    Product forProduct();
}

Well, it was a normal domain object giving back other normal domain objects. The new feature was to be an audio module that could read price tags out loud. The team used a text-to-speech synthesis library that takes a string and outputs an audio stream. No big deal, and pretty independent from the already existing code base.

But the code that takes a price tag and converts it into a string, aka the connection point between the unbound library code and the existing system, was ugly and undecipherable:

String priceTagToText(PriceTag price) {
    return price.forProduct().getDenotation()
        + " for only "
        + CurrencyFormatter.format(price.nettoAmount())
        + " with "
        + String.valueOf(price.taxGroup().percentage())
        + " % VAT in the "
        + price.category().getDenotation()
        + " section.";
}

This is how it looks if somebody tries to combine two building blocks that aren’t meant for each other. To test this method, you’ll have to mock deep into the domain objects.

If two building blocks don’t match naturally, maybe it’s an idea to add some lubrication code between them. This code isn’t exactly doing anything newfound, but adds a requirement seam that points towards the existing system:

interface ReadablePriceTag {
    String denotation();
    String netto();
    String vatPercentage();
    String category();
}

You can probably already see where this is heading. Just in case you cannot, I will take you through all parts of the code.

First we can write a priceTagToText() method that reads a lot nicer:

String priceTagToText(ReadablePriceTag price) {
    return price.denotation()
        + " for only "
        + price.netto()
        + " with "
        + price.vatPercentage()
        + " VAT in the "
        + price.category()
        + " section.";
}
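
A nice side effect is that this method is now trivial to test without mocking deep into the domain objects. A minimal sketch, assuming JUnit and a throwaway inline implementation (the values are made up):

@Test
void readsAPriceTagOutLoud() {
    ReadablePriceTag tag = new ReadablePriceTag() {
        public String denotation() { return "Coffee"; }
        public String netto() { return "4,99 EUR"; }
        public String vatPercentage() { return "7 %"; }
        public String category() { return "beverages"; }
    };
    assertEquals(
            "Coffee for only 4,99 EUR with 7 % VAT in the beverages section.",
            priceTagToText(tag));
}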

The second and complementary part is the implementation of the ReadablePriceTag interface that is given a PriceTag object and translates the data for the new methods:

class PriceTagBasedReadablePriceTag implements ReadablePriceTag {
    private final PriceTag price;

    PriceTagBasedReadablePriceTag(PriceTag price) {
        this.price = price;
    }

    @Override
    public String denotation() {
        return this.price.forProduct().getDenotation();
    }

    @Override
    public String netto() {
        return CurrencyFormatter.format(this.price.nettoAmount());
    }

    @Override
    public String vatPercentage() {
        return String.valueOf(
                this.price.taxGroup().percentage()) + " %";
    }

    @Override
    public String category() {
        return this.price.category().getDenotation();
    }
}

Basically, you have a lot of existing code that uses PriceTag objects and some new code that wants to use ReadablePriceTag objects. The PriceTagBasedReadablePriceTag class is the connector between both worlds (at least in one direction). We can definitely argue about the name, but that’s a detail, not the main point. The main points of all this effort are two things:

  1. The new code does not suffer in quality and readability from decisions made at a different time in a different context.
  2. The code clearly models these contexts. If you are aware of Domain Driven Design, you probably see the “Bounded Context” border that crosses right between PriceTag and ReadablePriceTag. The PriceTagBasedReadablePriceTag class is the bridge across that border.

If you express your context borders explicitly like in this example, your code reads fine on either side of the border. There is no notion of “old and fitting” and “new and awkward” code. It seems like additional work, and it surely is, but it pays off in the long run because you can play this game indefinitely. A code base that gets more muddied and forced over time will reach a breaking point after which any effort requires knowledge in archeology and cryptanalysis.

So, my advice boils down to one thing: Make yourself comfortable when adding new code to an existing code base.
And then, think about your type names. PriceTagBasedReadablePriceTag is most likely not the best name for it. But that’s a topic for another blog post. What would be your name for this class?

What is it with Software Development and all the clues to manage things?

As someone who started programming a long time ago (roughly 20 years, now that I think about it), but only entered the world of real software development in recent years, I find that mastering the day-to-day challenges consists of two main topics: First, in our rapidly evolving field we never run out of new technologies to learn. And second, there is a certain engineering aspect underlying it all – how to do things in a certain manner – with lots of input every year.

So after I recently shared some of these ideas with my friends (I indeed still have a few of those), I wondered: How is it that in the modern software development world, most of the information about managing things actually comes from the field itself, feeding its ideas of project management, quality, etc. back into the non-software subspaces of the world? (Ideas like the Agile movement, Software Craftsmanship, and the calls for doing things Lean and Clean have nowadays prospered so much that you see their application or modification in several other industries. Like advertisement, just as an example.)

I see a certain kind of brain food in this question. What sets software development apart from other current fields, so that there is such a broad discussion and considerable input at its base level? After all, if you plan on becoming someone who builds houses, makes cars, or manages cities, you wouldn’t engage in such a vivid culture of “how” to do things, focusing rather on the “what”.

Of course, I might be mistaken in this view. But asking what actually sets software development apart from these other fields of producing something is helpful for approaching everyday tasks and for valuing better tips over worse ones.

So, what can that be?

1. Quite peculiar is the low entry threshold to calling yourself a “programmer”. With the many resources you get at relatively little cost (assumption: you have a computer with a working internet connection), you have a lot of channels by which you can learn the “what” of software development first, saving the “how” for later. If you plan on building a house, there isn’t a bazillion of books, tutorials, and videos, after all.

2. Similarly, there’s the rather low cost of failure when drafting a quick hobby project. Not every piece of code that you write in your free time will tell you “hey man, have you ever thought about a better kind of architecture?” – which is why bad habits can stick and even feel right. If you choose the “wrong” mindset, you don’t automatically lose heaps of money, and neither do you if you switch your strategy once in a while (you probably will, though, if you are too careless in this process).

3. Furthermore, there’s the dynamic extension of how your project is going to be used (“scope creep”). One would build a skyscraper in a different way than a bungalow (I’m not an expert, though), but with software it often feels like adding a simple feature here, extending the scope there, until you hit a point where all its interdependencies are in a complex state of conflict…

4. Then, it’s a matter of transparency: If you sit in a badly designed car, it becomes rather obvious when it constantly exhausts clouds of black smoke. Or when your house always smells like a freshly used toilet. Of course, a well-designed piece of software will come with a great user experience, but as you can see in many commercial products, there is also quite some presence of lower-than-average-but-still-somehow-doing-what-it-should software. Probably users are more tolerant of software than of cars?

5. Also, as in most technical fields, “pure consultants” are not widely received in a positive light. For most nerds, you don’t get a lot of credibility if you talk about best practices without having got your hands dirty over a longer period of time. Ergo, it takes experienced software programmers to advise less experienced software programmers… but surely, it’s questionable whether this is a good thing.

6. After all, the requirements for someone who develops a project might be very different in each field. From my academic past in computational physics, I know that there is quite some demand for “quick & dirty” solutions. Need to add some Dark Matter to your model here? Well, plug this formula in and check the results. Not every user has the budget or liberty for creating a solid program structure. If you want a new laboratory building, of course, you very much want it to be designed as well as it can get.

All in all, these observations somehow boil down to the question whether software development is to be seen more as a set of various engineering skills, or rather as a handcraft, an art, or a complex program of study. It is the question whether the “crack” in this field is the one who does complex arithmetic in their head, or the one who just gets what the customer wants. I like thinking about such peculiar modes of thought, as they help me understand what kinds of things I should learn next.

Or is there something else to it entirely?

Unintentionally hidden application state

What do libraries like React and Dear ImGui or paradigms like data-oriented design and data-driven programming have in common?
One aspect is that they all rely on explicitly modeling an application’s state as data.

What do I mean by hidden state?

Now you will probably think: of course, I do that too. What other way is there, really? Let me give you an example from the widely used Qt library:

QMessageBox msgBox;
msgBox.setText("explicitly modeled data!");
msgBox.exec();

So this appears to be harmless, but it is actually one of the most common cases where state is modeled implicitly. But we’re explicitly setting data there, you say. That is not the problem. The exec() call is the problem. It blocks until the message box is closed again, thereby implicitly modeling the “this message box is shown” state via this thread’s instruction pointer and its position on the call stack. The instruction pointer is now tied to this state: as long as it does not move out of that function, the message box is still shown.

This is usually not a big problem, but it demonstrates nicely how state can be hidden unintentionally in an application. And even a small thing like this already prevents you from doing certain things, for example easily serializing your complete UI state.
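
For contrast, here is a sketch of the same message box without parking the thread: QDialog’s non-blocking open() returns immediately, so the “shown” state lives in an inspectable object instead of a suspended stack frame (parentWidget is assumed to exist; the names are illustrative):

auto* box = new QMessageBox(parentWidget);
box->setText("explicitly modeled data!");
box->setAttribute(Qt::WA_DeleteOnClose);
QObject::connect(box, &QMessageBox::finished,
                 [](int result) { /* react to the result; no thread was blocked */ });
box->open(); // returns immediately, the event loop keeps running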

Is it bad?

This cannot really be avoided while keeping some kind of conceptual separation between code and data. It can surely be minimized. But is it bad? Well, it is certainly good to know when you’re using implicit state like that. And there is one criterion for when it should most certainly be minimized as much as possible: highly interactive programs, like user interfaces, games, machine control systems and AI agents.
These kinds of programs usually have some spooky and spontaneous interactions between different, seemingly unrelated objects, and weird transitions between states. And using the call stack and instruction pointer to model those states makes them particularly unsuitable for being interacted with.

Alternatives?

So what can you use instead? The tried and tested alternative is a state machine. There are also related alternatives like behavior trees, which are actually quite similar to a “call stack in data” but much more flexible. Hybrid solutions that move only bits of the code into the realm of data are promises/futures and coroutines. Both effectively allow a “linear” function to be decoupled from its call stack and treated more like data. And if their current popularity is any indicator, that is enough for many applications.
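
As a minimal sketch of the state-machine idea, the “message box is shown” state from above could become a plain enum value – all names here are illustrative:

enum class UiState { Idle, ShowingMessageBox };

struct AppState
{
  UiState ui = UiState::Idle;
  // ...more explicit state; the whole struct is trivially serializable
};

void onMessageBoxClosed(AppState& state)
{
  state.ui = UiState::Idle; // a plain data transition, no blocked thread
}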

What do you think? Should hidden state like this always be avoided?

Changing the keyboard navigation behaviour of form inputs

The default behaviour in HTML forms is that you can move the focus from one input element to the next via the tab key and submit the form via the enter key. This is also how dialogs work on most operating systems when using the native UI components. This behaviour is consistent across all browsers, and changing it messes with the user’s expectations and reduces accessibility. So I would normally advise against changing this behaviour without good reasons.

However, one of our customers wanted a different behaviour for an application developed by us. This application replaced an older application where the enter key did not submit the form, but moved the focus to the next input element. The ‘muscle memory’ effect made users accidentally submit the form by hitting the enter key, causing frustration. Since this application is not a public web site, but merely a web technology based intranet application with a small and specialized user base, changing the default behaviour is acceptable if the users want it.

So here’s how to do it. The following JavaScript function focusNextInputOnEnter takes a form element as a parameter and changes the focus behaviour on the input elements within this form.

function focusNextInputOnEnter(form) {
  var inputs = form.querySelectorAll('input, select, textarea');
  for (var i = 0; i < inputs.length; i++) {
    var input = inputs[i];
    input.addEventListener('keypress', (function(index) {
      return function(event) {
        if (!isEnter(event.which)) {
          return;
        }
        // the enter key should move focus, not submit the form
        event.preventDefault();
        var nextIndex = index + 1;
        while (nextIndex < inputs.length) {
          var nextInput = inputs[nextIndex];
          if (nextInput.disabled) {
            nextIndex++;
            continue;
          }
          nextInput.focus();
          break;
        }
      };
    })(i));
  }

  function isEnter(keyCode) {
    return keyCode === 13;
  }
}

It works by handling the keypress events on the input elements and checking the key code for the enter key (code 13). The preventDefault() call keeps the enter key from triggering a form submission, and an additional check skips disabled input elements.
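
As an aside, on current browsers the deprecated keypress event and numeric key codes can be replaced by keydown and event.key; here is a sketch of the same handler in that style, as a drop-in replacement:

function focusNextInputOnEnter(form) {
  const inputs = form.querySelectorAll('input, select, textarea');
  inputs.forEach((input, index) => {
    input.addEventListener('keydown', (event) => {
      if (event.key !== 'Enter') {
        return;
      }
      event.preventDefault();
      // focus the next input element that is not disabled
      for (let next = index + 1; next < inputs.length; next++) {
        if (!inputs[next].disabled) {
          inputs[next].focus();
          break;
        }
      }
    });
  });
}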

To apply this change in behaviour to a form we have to call the function when the DOM content is loaded:

<form id="demo-form">
  <input type="text">
  <input type="text" disabled="disabled">
  <input type="checkbox">
  <select>
    <option>A</option>
    <option>B</option>
  </select>
  <textarea></textarea>
  <input type="text">
  <input type="text">
</form>

<script>
  document.addEventListener('DOMContentLoaded', function() {
    focusNextInputOnEnter(document.getElementById('demo-form'));
  });
</script>

I want to reiterate my warning that you should definitely not do this for public web sites, and elsewhere only if you know that this is what your users want.

Using (elastic)search with .NET Core

Many modern applications require powerful search mechanisms to become useful and make their users more productive. That is in large part due to the amount of data available to work with. Thankfully there are already powerful tools to index your data and make it searchable.

One of the most well-known state-of-the-art solutions is ElasticSearch, and it has a .NET API called NEST. While the documentation is ok, I want to give a quick rundown on how to add search capabilities to your .NET Core application. Some ideas are borrowed from the great post “Using Elasticsearch with ASP.NET Core and Docker”.

Getting ElasticSearch running

The easiest way to get up and running with ElasticSearch is to use their docker images and just run the container on your development machine. I like to use a docker compose file like the following to get ElasticSearch and its tooling application Kibana up and running fast:

version: '3.8'
services:
    elasticsearch:
        image: docker.elastic.co/elasticsearch/elasticsearch:7.8.0
        container_name: elastic
        environment:
            - node.name=elastic
            - cluster.initial_master_nodes=elastic
        ports:
            - "9200:9200"
            - "9300:9300"
        volumes:
            - type: volume
              source: esdata
              target: /usr/share/elasticsearch/data
        networks:
            - esnetwork
    kibana:
        image: docker.elastic.co/kibana/kibana:7.8.0
        ports:
            - "5601:5601"
        networks:
            - esnetwork
        depends_on:
            - elasticsearch
volumes:
    esdata:
networks:
    esnetwork:
        driver: bridge

After you run it with docker compose you can talk to the search service on http://localhost:9200/ and to the Kibana management GUI on http://localhost:5601/. On the Kibana UI, especially the Dev Tools and their console are interesting for experimenting with search queries.

Access ElasticSearch from your .NET Core app

I find it quite elegant to write an extension method for the IServiceCollection that configures the ElasticClient and registers it as a singleton with the dependency injection framework of .NET Core, like so:

    public static class ElasticSearchExtension
    {
        public static void AddElasticsearch(
            this IServiceCollection services, IConfiguration configuration)
        {
            var url = configuration["elasticsearch:url"];
            var settings = new ConnectionSettings(new Uri(url))
                    .DefaultMappingFor<SearchableDevice>(deviceMapping => deviceMapping
                        .IndexName("devices")
                        .IdProperty(dev => dev.Id)
                    )
                ;
            var client = new ElasticClient(settings);
            var response = client.Indices.Create("devices", creator => creator
                .Map<SearchableDevice>(device => device
                    .AutoMap()
                )
            );
            // maybe check response to be safe...
            services.AddSingleton<IElasticClient>(client);
        }
    }

The configuration block looks like the following

  "ElasticSearch": {
    "url": "http://localhost:9200/"
  },

and allows for future extension.

Our search service has to be registered in Startup of course:

public void ConfigureServices(IServiceCollection services)
{
    ...
    services.AddElasticsearch(Configuration);
}

Indexing our objects

In the above example we built a SearchableDevice class whose public properties are going to be indexed by ElasticSearch. The API allows much more fine-grained control over what and how to index, but we want to keep things simple without having to worry about excluding properties and so on. If you have set it up that way, indexing a SearchableDevice is merely one simple call:

// SearchClient is the injected IElasticClient
// mySearchableDevice is an instance of SearchableDevice
SearchClient.IndexDocument(mySearchableDevice);
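
If you want to be a bit more defensive here, the returned response can be inspected – a sketch (the logger is assumed to exist):

var response = SearchClient.IndexDocument(mySearchableDevice);
if (!response.IsValid)
{
    // IsValid is false e.g. on connection or mapping problems;
    // DebugInformation carries the details
    logger.LogError(response.DebugInformation);
}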

Searching for objects

When developing a search query I like to try it in the Kibana Dev Tools and then transform it into a NEST call. A simple query that looks in all device properties for terms starting with “needle” looks like this:

{
  "query": {
    "multi_match": {
      "type": "phrase_prefix", 
      "fields": ["*"],
      "query": "needle"
    }
  }
}

The nice thing about the Kibana Dev Tools console is that it can display and complete the possible values for fields like “type” in your multi_match query.

That search query can then be translated into a NEST call in a straightforward way and looks like this:

var response = SearchClient.Search<SearchableDevice>(sd => sd
    .Query(q => q
        .MultiMatch(query => query
            .Type(TextQueryType.PhrasePrefix)
            .Fields("*")
            .Query("needle")
        )
    )
);

The search response contains the hits with their source objects and some metadata like the score and result count.

Bear in mind that ElasticSearch only returns the first/best 10 matches by default, so explicitly specifying the result size will often be closer to what you want.
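
With NEST, the result size can be set right on the search descriptor; a sketch extending the query from above:

var response = SearchClient.Search<SearchableDevice>(sd => sd
    .Size(100) // return up to 100 hits instead of the default 10
    .Query(q => q
        .MultiMatch(query => query
            .Type(TextQueryType.PhrasePrefix)
            .Fields("*")
            .Query("needle")
        )
    )
);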

Wrapping it up

Getting started with ElasticSearch in .NET Core does not require too much boilerplate and setup work if you use tools like docker and the NEST library. Making it usable and tuning the indexing and querying may require a lot of work to achieve the best results. On the other hand, smaller applications can start off with a simple search setup like the one shown above and simply evolve it when the need arises.

Shooting Troubles With Toys

As software grows, one of the typical challenges is keeping track of the quirks and subtleties of all the languages, third-party libraries, frameworks, IDEs / toolchains or whatnot you, in places, need to maneuver in order not to construct everything yourself. That’s often just a matter of familiarization, and after you have stumbled across a particular type of problem once or a few times, it gets assimilated as a trivial thing. It sometimes goes so far as keeping certain warnings or harmless exceptions in your software – technical debt. (Alas, your customer doesn’t usually care for the perfect product, or, let’s say, won’t wait that long and pay for the perfect product…)

Now, once in a while, all the third-party implementations you rely on interact in an odd manner. These are the cases where you get a “Cannot update a component while rendering a different component” in a React/Recoil application, a NoClassDefFoundError in a Java / Grails application, a general segfault in a C / C++ program, or your database does weird stuff. You name it.

So even when you encounter such a problem, time is a critical thing. So what do you do? Google it. Find fellow victims on Stack Overflow, GitHub, etc. – but this only goes so far, depending on how common your problem is.

Now you should always have a version control system at hand, of course. This has the huge advantage of letting you simplify your problem. Just open a new branch that no one cares about, and you can completely get rid of all the confusing mess that is your reliance on third-party content. Of course, this is a possibility one always knows about in principle. The point here is: do it as a habit. Learn it as a habit. Even if it’s only “deactivating this probably useless flag” or “hardcoding this localhost into this URL in order to make progress quickly” – you do not want to risk carrying this into production code. Just know that keeping experimental branches open for a longer time is a bad habit, too. So think of them e.g. as “in two hours, I will either merge or delete this branch, there’s no way around it”. There’s nothing in there that you wouldn’t be able to reproduce if required.

And sometimes, it’s more effective to work from the other end. Instead of starting from “very close to where we want to be”, start from a place completely unpolluted by your technical debt. Start a toy project. Use exactly the dependencies that you have in your real project, and try to set up your error scenario. In our case, this method helped us understand a completely meaningless NoClassDefFoundError, because suddenly – with exactly the same JDK and Grails packages that we had in the real code – the IntelliJ IDE felt more like telling us verbosely what the actual problem was. Which you can then see without all the clutter.

Even more, this procedure helps with your own rubber ducking – after all, you want to describe a scenario to yourself along the lines of “actually I don’t get it, I am just doing this and that and…” – well, are you? Or is there more to it that your eyes don’t see? Just find out.

Of course, this is just the precursor to a more test-driven approach. Toy projects aren’t really anything else; they are just isolated environments in which you completely see what is going on, with an essential setup and a clear expectation. These are tests. Now if you already wrote them, why not think about including them in your projects as tests? Especially if you’re kind of new to test-driven development, you can make this habit of toy boxing a guide on the road to a more test-driven way of thinking.

Or maybe, just don’t make errors. If you ever have the option – just choose that. [I guess then you have time to fix mine, too? :)]

Leibniz would have known how to override equals

Equality is a subtle and thorny business, in programming as well as in pure mathematics, physics and philosophy. Probably every software developer has been annoyed at some point by the unexpected behaviour of some ‘equals’ method or corresponding operators and assertions. There are lots of questions that depend on context, and answering them for some particular context might cost some pain and time – here is a list of examples:

  • What about objects that come with database-ids? Should they be equal for the objects to be equal?
  • Are dates with time zones equal if they represent the same instant but have a different time zone?
  • What about numbers represented by functions that compute digits up to a given precision?

Leibniz’ Law

This post is about applying an idea of Leibniz that I like to the problem of finding good answers to the questions above. It is called “Leibniz’ law” and can be phrased as a definition or characterization of equality:

Two objects are equal, if and only if, they agree in all properties.

If you are not familiar with the phrase “if and only if”: it comes from mathematics and is a shorthand for saying that two things are true:

  • If two objects are equal, then they agree in all properties.
  • If two objects agree in all properties, then they are equal.

Leibniz’ law is sometimes stated using mathematical symbols, like “∀”, but this would be beside the point of this post – what those properties are will not be defined in a formal mathematical way. If I am in doubt about equality while programming, I am concerned about properties relevant to the problem I want to solve. For example, in almost all circumstances I can imagine, a relevant property of a list would be its length, but not the place in the computer’s memory where it is stored.

But what are relevant properties in general? For me, such a property is the result of running some piece of meaningful code. And what meaningful code is, depends on your judgement how the object in question should be used. So in total, this boils down to the following:

Two instances of a type are equal, if and only if, they yield the same results in any meaningful piece of code.

Has this gotten us anywhere? My answer is yes, since the question about equality was reduced to a question about the use cases of a type, which might be a starting point for defining a new type anyway.
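
To make this concrete, here is a sketch in Java for the time-zone question from the list above: if all meaningful code for our type only ever asks which instant it represents, Leibniz’ law suggests equality on the instant alone. The class and its design decision are purely illustrative:

import java.time.ZonedDateTime;

final class Appointment {
    private final ZonedDateTime at;

    Appointment(ZonedDateTime at) {
        this.at = at;
    }

    @Override
    public boolean equals(Object other) {
        if (!(other instanceof Appointment)) {
            return false;
        }
        // equal if and only if they agree on the one relevant
        // property: the instant in time, regardless of time zone
        return at.toInstant().equals(((Appointment) other).at.toInstant());
    }

    @Override
    public int hashCode() {
        return at.toInstant().hashCode();
    }
}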

Turtles all the way down

Please take a moment to note what a sneaky beast equality can be: Above I explained equality by using equality – right where I said “same results”. It is really hard to make statements about anything at all without using some notion of equality in some way. Even in programming, where you can freely define when two objects are equal, you can very well forget that you are using a system, namely your programming language, which usually already comes with an intricate notion of equality defined on the syntax you are using to define your notion of equality…

On a more practical note, that means that messed up notions of equality usually propagate if you define new kinds of objects from known ones.

Relation to Liskov’s Principle

With our above definition, we are very close to an informal interpretation of Liskov’s Substitution Principle, which we can rephrase as:

In all meaningful code for a type, an instance of a subtype has to behave the same way.

For comparison, the message of this post stated in the same tongue:

In all meaningful code for a type, two equal instances of the type should behave the same way.

Mastering programming like a martial artist

Gichin Funakoshi, who is sometimes called the “father of modern karate”, issued a list of twenty guiding principles for his students, called the “Shōtōkan nijū kun”. While these principles as a whole are directed at karate practitioners, many of them are very useful for other disciplines as well.
In my understanding, lifelong learning is a fundamental pillar of both karate and programming, and many of those principles focus on “learning” as a more fundamental action. I’d like to focus on a particular one now:

Formal stances are for beginners; later, one stands naturally.

While, at first, this seems to focus on stances, the more important concept is progression, and how it relates to formalities.

Shuhari

It is a variation of the concept of Shuhari, the three stages of mastery in martial arts. I think they map rather beautifully to mastery in programming too.

The first stage, shu, is about learning traditions and movements, and how to apply them strictly and faithfully. For programming, this is learning to write your first programs with strict rules, like coding conventions, programming patterns and all the processes needed to release your programs to the world. There is no room for innovation; this stage is about absorbing what knowledge and practices already exist. While learning these rules may seem like a burden, the restrictions are also a gift: it is always clear what is right and what is wrong, and decisions are easy.

The second stage, ha, is about breaking away from these rules and traditions. Coding conventions, programming patterns etc. are still followed. But more and more, exceptions are allowed, and even encouraged, when they serve a greater purpose. A hack might no longer seem so bad when you consider how much time it saved. Technical debt is no longer just avoided, but instead managed. Decisions are a little harder here, but there are always the established conventions to fall back on.

The final stage, ri, is about transcendence. Rules lose their inherent meaning to purpose. Coding conventions, best practices, and patterns can still be observed, but they are seen for what they are: merely tools to achieve a goal. As such, all conventions are subject to scrutiny here. They can be ignored, changed or even abandoned completely if necessary. This is the stage for true innovation, but also for mastery. To make decisions on this level, a lot of practice and knowledge and a bit of wisdom are certainly required.

How to use this for teaching

When I am teaching programming, I try to find out what stage my student is in, and adapt my style appropriately (although I am not always successful in this).

Are they beginners? Then it is better to teach rigid concepts. Do not leave room for options, do not try to explore alternatives or trade-offs. Instead, take away some of the complexity and propose concrete solutions. Set up rigid guidelines for how to code, how to use the IDEs, how to use tools, how to communicate. Explain exactly how they are to fulfill all their tasks. Taking decisions away will make things a lot easier for them.

Students in the second, or even the final, stage are much more receptive to freedom. While students on the second stage will still need guidance in the form of rules and conventions, those in the final stage will naturally adapt or reject them. It is much more useful to talk about goals and purpose with advanced students.