Unit-Testing Deep-Equality in C#

In the suite of redux-style applications we are building in C#, we make extensive use of value types, which implies that a value compares as equal exactly if all of its contents are equal. This is also known as “deep equality”, as opposed to “reference equality” or “shallow equality”. Both of those imply deep equality, but not the other way around: the same object is of course equal to itself, no matter how deep you look, and an object that references the same data as another object also has equal content. But a simple object that contains different lists with equal content will be unequal under shallow comparison, yet equal under deep comparison.

Though init-only records already provide a per-member comparison as Equals by default, this fails for collection types such as ImmutableList<>, which, against all intuition but in accordance with their documentation, only provide reference equality. For us, this means that we have to override Equals for any value type that contains a collection. And this is where the trouble starts. Once Equals is overridden, it is extremely easy to forget to also adapt it when adding a new property. Since our redux-style machinery relies on a correct “unequal”, this would manifest in the application as a sporadically missing UI update.
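As an illustration, here is a hedged sketch of such an override; the Settings type and its members are made up for this example, and the null checks only exist to keep the hand-written Equals total:

using System;
using System.Collections.Immutable;
using System.Linq;

// Hypothetical value type with a collection member. The hand-written Equals
// replaces the synthesized record equality with an element-wise comparison.
public sealed record Settings(string Name, ImmutableList<string> Tags)
{
    public bool Equals(Settings? other) =>
        other is not null
        && Name == other.Name
        && (Tags is null
            ? other.Tags is null
            : other.Tags is not null && Tags.SequenceEqual(other.Tags));

    public override int GetHashCode()
    {
        var hash = new HashCode();
        hash.Add(Name);
        foreach (var tag in Tags ?? ImmutableList<string>.Empty)
            hash.Add(tag);
        return hash.ToHashCode();
    }
}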

So we devised a testing strategy for those types, using a little bit of reflection:

  1. Create a sample instance of the value type with no member retaining its default value
  2. Test, by going over all properties and comparing each to the same property in a default instance, that all members in the sample are indeed non-default
  3. For each property, run Equals on the sample instance against a modified sample instance where that property is reset to the value from a default instance (see the test sketch below)

If step 2 fails, there is a member that is still at its default value in the sample instance, i.e. the test wasn’t updated after a new property was added. If step 3 fails, the sample was updated, but the new property is not considered in Equals – and the test can even tell which property is missing.
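Here is a minimal sketch of such a test, assuming NUnit and the hypothetical Settings record from above. Note that reflection can set init-only properties at runtime even though the compiler forbids it in normal code:

using System.Collections.Immutable;
using NUnit.Framework;

public class SettingsEqualityTests
{
    [Test]
    public void EqualsConsidersEveryProperty()
    {
        // Step 1: a sample where no member keeps its default value
        var sample = new Settings("example", ImmutableList.Create("a", "b"));
        var defaults = new Settings(null!, null!);

        foreach (var property in typeof(Settings).GetProperties())
        {
            // Step 2: detect members that were forgotten when the sample was written
            Assert.That(property.GetValue(sample), Is.Not.EqualTo(property.GetValue(defaults)),
                $"sample not updated for {property.Name}");

            // Step 3: resetting exactly one property must make the instances unequal
            var modified = sample with { };  // non-destructive copy
            property.SetValue(modified, property.GetValue(defaults));  // bypasses 'init'
            Assert.That(modified, Is.Not.EqualTo(sample),
                $"Equals does not consider {property.Name}");
        }
    }
}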

The same problems of course arise with GetHashCode, but they are usually less severe: forgetting to add a property merely makes collisions more likely. It can be tested in much the same way, but this can potentially lead to false positives, because collisions can occur even if all properties are correctly considered in the function. In that case, however, the sample can usually be altered to remove the collision – and such a collision is really unlikely. In fact, we never had a false positive.

How comments get you through a code review

Code comments are a big point of discussion in software development. How and where should you use comments? Should you comment at all? Isn’t well-written code documentation enough on its own? Here I would like to share my own experience with comments.

In the last months I had some code reviews where colleagues looked over my merge requests and gave me feedback. And it happened again and again that they asked why I did something a certain way or why I decided to take a particular approach.
Often the decisions had a specific reason, for example because it was a customer requirement, a special case that had to be covered, or because the technology stack had to be kept small.

That is all metadata that would be tedious and time-consuming for reviewers to gather. And at some point, it is no longer a reviewer, it is a software developer 20 years from now who has to maintain the code and cannot ask you questions any more. The same applies if you yourself adjust the code again some time later and cannot remember your thoughts from months ago. This often happens faster than you think. To highlight how fast details disappear, here is a current example: this week I set up a new laptop because the old one had a hardware failure. I did all the steps only half a year ago. But without documentation, I would not have been able to reconstruct everything. And where the documentation was missing or incomplete, I had to invest effort to rediscover the required steps.

Example

Here is an example of such a comment. In the code, I want to check whether the mixer volume has changed after the user has made changes in the setup dialog.

var setup = await repository.LoadSetup(token);

var volumeOld = setup.Mixers.Contents.Select(mixer => mixer.Volume).ToList();

setup = Setup.App.RunAsDialog(setup, configuration);

var volumeNew = setup.Mixers.Contents.Select(mixer => mixer.Volume).ToList();
if (volumeNew.SequenceEqual(volumeOld))  // '==' would only compare the list references
{
    break;
}

ResizeToMixerVolume(setup, volumeOld);

Why do I save the volumes in an additional variable instead of just writing the setup into a new variable in the third line? That would be much easier and more elegant. I quickly change it – and the program is broken.

This little comment would have prevented that, and everyone would have understood why this way was chosen at the time.

// We need to copy the volumes, because the original setup is partially mutated by the Setup App.
var volumeOld = setup.Mixers.Contents.Select(mixer => mixer.Volume).ToList();

If you annotate such prominent places, into which a lot of brain work has gone, you make the code more comprehensible to everyone, including yourself. This way, a reviewer can understand the code without questions and the code becomes more maintainable in the long run.



Simple abstractions are good abstractions

I think that a lot of accidental complexity in software is produced by not picking the simplest abstraction for the job. Let me lead with an example: Consider this code from a code generator that generates C++ code:

std::ostringstream extra_properties;
if (!attribute.unit.empty())
{
  extra_properties << fmt::format("\n      properties.set_unit(\"{0}\");", attribute.unit);
}
if (!attribute.min_value.empty())
{
  extra_properties << fmt::format("\n      properties.set_min_value(\"{0}\");", attribute.min_value);
}
if (!attribute.max_value.empty())
{
  extra_properties << fmt::format("\n      properties.set_max_value(\"{0}\");", attribute.max_value);
}

It has a lot of ugly duplication: basically everything but the method names and values. So, how do we get rid of the duplication? Just a couple of years ago, I would probably have used a function for that:

void property_snippet(std::ostringstream& str, std::string const& method_name, std::string const& value)
{
  if (value.empty())
    return;
  str << fmt::format("\n      properties.{0}(\"{1}\");", method_name, value);
}

And then turn the call site code into:

property_snippet(extra_properties, "set_unit", attribute.unit);
property_snippet(extra_properties, "set_min_value", attribute.min_value);
property_snippet(extra_properties, "set_max_value", attribute.max_value);

Back then, I would have said that this is a definite improvement, but nowadays I am not so sure anymore. The call site is a lot more concise, but we still have about half of its code duplicated: the first half of each line. And the additional function adds complexity that is not necessarily offset by the gain at the call site: a declaration with all its parameters. The code also gets separated, which is only really good if the function does a little more than this one does.

This variant can, however, be made simpler with lambdas that capture extra_properties instead of passing it each time. While that is a better solution, I would argue that function objects and capturing are not necessarily simple either, so it only takes second place.

Nowadays, my first go-to abstraction is an in-place list and a loop:

std::tuple<char const*, std::string> methods_and_values[] = {
  {"set_unit", attribute.unit},
  {"set_min_value", attribute.min_value},
  {"set_max_value", attribute.max_value},
};

for (auto [method_name, value] : methods_and_values)
{
  if (value.empty())
    continue;
  extra_properties << fmt::format("\n      properties.{0}(\"{1}\");", method_name, value);
}

For me, this has the added benefit that it clearly separates the ‘inert’ data part of the code from the ‘active’ transformation. While this example is C++, the pattern works in almost all languages that I know of, even such arcane beasts as Xbase++.
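To illustrate that claim, here is the same data-plus-loop shape sketched in C#; the names are carried over from the C++ example, and a StringBuilder called extraProperties is assumed as the output:

var methodsAndValues = new (string MethodName, string Value)[]
{
    ("set_unit", attribute.Unit),
    ("set_min_value", attribute.MinValue),
    ("set_max_value", attribute.MaxValue),
};

foreach (var (methodName, value) in methodsAndValues)
{
    if (string.IsNullOrEmpty(value))
        continue;
    // the 'active' transformation, applied uniformly to the 'inert' data above
    extraProperties.Append($"\n      properties.{methodName}(\"{value}\");");
}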

A Purpose of Domain-Driven-English-German-Language-Mumbo-Jumbo

Disclaimer: Due to its nature, this blog article needs to make some use of the German language. This is part of its essence and could not be avoided, sorry to all international readers.

Since its conception in 2003, the expression “Domain-Driven Design” might have been tossed around a bit, together with all the other XYZ-Driven Designs that are out there. As usual with such terms, I only try to gather the core points of these ideas; I do not like sticking to any such concept with religious fervor or otherwise dogmatic understanding. Moreover, these concepts are usually not of the type “you either use them or you don’t”, but you have some control over the degree in which you employ them, depending on your requirements as a whole.

This is why in a new project, I might implement a handful of ideas and see where it goes, always prepared to call it a day and toss any rule out when it endangers my progress. On the other hand, if I only follow principles that instantly convince me, I risk missing out on some practice that just is unusual, but not bad in itself.

Domain-Driven Design, in my understanding, aims at aligning the architectural details of your code base with the domain model, i.e. the technical peculiarities of your (customer’s) specific use case. That doesn’t sound hard or bad per se, but as usual, it takes some practice to shed light on what it really means.

Enter the idea of using German words in your code. For variables, methods, classes, and the like – even with umlauts and the eszett (“ß”). If one is not used to that, such code might instantly induce some sort of digestive sickness, or at least that’s what it did to me, because of its sheer look, e.g.:

// just some example to look at

var sortedZuordnungen = szenario.SortedZeitplanForArbeitsplatz(arbeitsplatz.Id)
    .ToList();
var gesperrteHalbtage = sperrungen.Where(s => s.AufArbeitsplatz(arbeitsplatz.Id)).Select(s => s.Halbtag);

var nächsteZuordnung = sortedZuordnungen.FirstOrDefault();
Halbtag tryStart = Constants.HeuteVormittag;

while (nächsteZuordnung != default)
{
    tryStart.CreateListFromHere(anzahlHalbtage, gesperrteHalbtage);
    nächsteZuordnung = FindNächsteZuordnung();
}

(replace “German” with any other language your customer might use; if you’re living in a completely English-speaking environment, this article should be of limited insight for you. Sorry again.)

Now code like this – at first – what is this!? That’s not proper! It looks like the sound of some older German politician who never really bothered learning the English language, with some crazy dialect and whatnot!

The advantage behind this concept becomes especially apparent when dealing with a lot of very generic terms. E.g. the word “component” might just mean a button on your UI, or it might mean something very specific for your customer – or even worse, you might mean something very specific for your customer, but in reality, he would never refer to that entity with that word, so… you’re left with a chance of awkward bewilderment in every single meeting with the guy.

So, despite its weird look – this is one of the concepts that I haven’t tossed out the window yet. The key point is the overall reduction of friction in your thoughts. When communicating across languages, one always has to do some minor translation in one’s head. These translations can be faulty or misleading either way – the nature of the language itself is secondary.

What works for me is:

  • Pure code fabrications that are close to the programming language get English names like usual
  • Things that a customer might talk about in German should get a German name
  • German and English can be mixed in a single word without any shame
  • Thus, words can be long, but you have an IDE that can help with that
  • German compound words get the correct German capitalization, i.e. the equivalent of “componentNumber” would be “komponentennummer”, not “komponentenNummer”
  • The linking of two German parts happens according to the correct grammatical standards, i.e. a “workPlace” becomes an “arbeitsplatz” with the “s” in between (Fugen-s).

For some reason, this has by now resulted in quite an uninterrupted workflow for me. The last two rules were an interesting finding, because I noticed that without them, I made a noticeable pause in my thinking process whenever I thought about these entities. This pause is now gone.

For example, by now the cognitive load of talking about a “KomponentenController” – something that is a controller from a software engineering point of view and deals with components from a domain point of view – appears lower to me than having to talk about a “ComponentController” with the extra translation between Component and Komponente. Mind you, there are enough words that do not sound that similar in our two languages.

I will not use this concept in every single project I start from now on. For hobby projects (where I’m my own customer), I would still prefer the 100%-English-language solution. But depending on your project, this is worth a try, and I’m positively amazed at how well it can work.

Always apply the Principle Of Least Astonishment to yourself, too

Great principles have the property that while they can be stated in a concise form, they have far-reaching consequences one can fully appreciate after many years of encountering them.

One of these things is what is known as the Principle of Least Astonishment / Principle of Least Surprise (see here or here). As stated there, in the context of user interface design, its upshot is: “Never surprise the user!” Within that context, it is easy to understand for everyone who has ever used any piece of software: not once were you glad that the software didn’t work the way it suggested it would. Or did you ever feel that way?

Surprise is a tool for willful suspension, for entertainment – a tool of unnecessary complication; exactly what you do not want in the things that are supposed to make your job easy.

Now we can all agree about that, and go home. Right? But of course, there’s a large difference between grasping a concept in its most superficial manifestation, and its evasive, underlying sense.

Consider any software project that cannot be simplified to a mere single-purpose module with a clear progression – i.e. anything that is not just a script. You might have a backend component with loads of requirements, you have some database, some caching functionality, then you want a new frontend in some fancy fresh web technology, and there’s going to be some conflict of interests in your developer team.

There will be some rather smart ways of accomplishing something, and there will be rather non-smart ways. How do you know which is which? So, follow the principle: never surprise anyone. Not only your end user. Do not surprise any other team member with something “clever”. In most situations,

  1. it’s probably not clever at all
  2. the team member being fooled by you is yourself

Collaboration is a good tool to let that conflict naturally arise. I mean the good kind of conflict, not the mistrust, denial of competency, “Ctrl+A and Delete everything you ever wrote!”-kind of conflict. Just the one where someone would tell you “hm. that behaviour is… astonishing.”

But you don’t have a team member on every small project you do. So just remember to admit the factor of surprise in everything you leave behind. Do not think “as of right now, I understand this thing, ergo it is not of any surprise to anyone, ever”. Think: “when I leave this code for two months and return, will there be anything… of surprise?”

This principle has many manifestations. As one of Jakob Nielsen’s usability heuristics, it’s called “Recognition rather than Recall”. In a more universal way of improving human performance and clarity, it’s called “Reduce Cognitive Load”. It has a wide range of applicability from user interfaces to state management, database structures, or general software architecture. I like the focus of “Surprise”, because it should be rather easy for you to admit feeling surprised, even by your own doing.

My favorite C++20 feature

As I evolved my programming style away from mutating long-lived “big” objects and structures and towards a more functional and data-oriented style based mainly on pure functions, I found myself needing a lot more structs. These naturally occur as return types for functions with ‘richer’ output, if you do not want to use std::tuple or other ad-hoc types everywhere. If you see a program as a sequence of data transformations, I guess the structs are the intermediate representations encoded in the type system.

Let me first clarify what I mean by structs, as opposed to what the language says: a type that has all public data members, obeys the rule of zero, and is valid in any configuration. A typical 3D vector like struct v3 { float x{}, y{}, z{}; }; is a struct; std::vector is not.

These types are great. You can copy them around, use them with structured bindings, they correctly propagate constness, and they are a great fit for passing through layers of function calls. When used as function parameters, they are great for evolving your program over time, because you can just change the single struct instead of every function call that uses this parameter combination. You can also easily batch, or otherwise ‘delay’, calls by recording the function parameters: just throw the parameters into a container and execute the code later.

And with C++20, they got even better, because now you can use them with my favorite new feature: designated initializers, which allow you to use the member names at the initialization site while still getting RAII. E.g., for a struct that represents an http request: struct http_request { http_method method; std::string url; std::vector<header_entry> headers; }; You can now initialize it like this:

auto request = http_request{
  .method = http_method::get,
  .url = "localhost:7634",
  .headers = { { .name = "Authorization", .value = "Bearer TOKEN" } },
};

You can even use this directly as a parameter without repeating the type name, de facto giving you named parameters for the price of a pair of extra curlies:

run_request({
    .method = http_method::get,
    .url = "localhost:7634",
    .headers = { { .name = "Authorization", .value = "Bearer TOKEN" } },
});

You can, of course, combine this named-parameter-style struct with other function parameters in your API, but like with lambdas, I think it is most readable as the last parameter. Hence, also like with lambdas, you probably never want more than one at each call site. I’m very happy with this new feature, and it’s already making the code using my APIs a lot more readable.

Mutable States can change inside your Browser console log

So we know that web development must be one of the fastest-changing ecospheres humankind has ever seen (not to say that JavaScript frameworks and their best practices mutate with a frequency and deadliness similar to coronaviruses). While these new developments can come with great joy and many opportunities, it also means that once in a while, we need to take care of older projects that were written in a completely different mindset.

It’s somewhat trivial: even when your infrastructure is prone to constant shifts, any software developer with at least some reputation should strive to write code that lives as long and stays as maintainable as originally intended. Or longer.

But once in a while you run into legacy code that you first have to dissect in order to understand how it works. And for JS, this usually means inserting console.log() statements at various places and tracing them during execution (yeah, I know, there’s a plenitude of articles telling you to stop doing that, but let’s just stay at the most basic level here).

Especially in an architecture with distributed, possibly asynchronous events (which helps in reducing coupling, see e.g. the Mediator and Publish-Subscribe patterns), this can help your bug tracing. But there’s a catch – one which took me some time to recognize as quite the villain.

It does not make any sense to me, but for some reason, at least Chrome and Firefox in their current implementations save some effort when console.log() is called with an object: they seem to just hold a reference for lazy evaluation. It can then happen that you look upwards at your log, maybe even need to scroll there, look at some value, and not realize that you are looking at the current state, not the state at the time of logging!

Maybe that was clear to you. Maybe it never occurred to you because you always cared about using your state immutably. But in case you are developing on some legacy code and don’t know what your predecessor did everywhere, you might not be prepared.

You can visualize the difference easily by yourself. Consider this short JS script:

var trustfulObject = {number: 0};
var deceptiveObject = {number: 0};

// let's just increase these numbers once each second
setInterval(() => {
    console.log("let's see...", trustfulObject, deceptiveObject);
    trustfulObject = {number: trustfulObject.number + 1};
    deceptiveObject.number = deceptiveObject.number + 1;
}, 1000);

Let that code run for a while, then open your browser console. Scroll upwards a bit and click on some of the objects. You will find that the trustfulObject is always displayed as expected (with its values from the time of logging), while the deceptiveObject will always show the number at the time of clicking. That surely surprised me.

In case you are still wondering why: the trustfulObject is freshly created each step and then reassigned to the reference variable. The browser has no choice but to log the old (correct) state, because the reference is replaced afterwards. The deceptiveObject keeps the same reference during the whole runtime, which somehow makes it look more efficient to the browser to not evaluate anything until you actually want to know the value.

And then, it lies to you. ¯\_(ツ)_/¯

Two notes:

  1. If you really have to deal with legacy code of a given size where you cannot easily change that behaviour, you can log your object using JSON.stringify, i.e. console.log("let's see…", trustfulObject, JSON.stringify(deceptiveObject)); avoids the lazy evaluation.
  2. Note: not to be confused, the JS “const” keyword does exactly the opposite of creating an immutable object. It creates an immutable reference, i.e. you cannot reassign it, but you can still manipulate its contents afterwards. Exactly what you do not want here.

Of course, in modern times you probably wouldn’t write vanilla JS, and e.g. using React useState definitely reduces that issue. But still. If you don’t want to use React & Co. everywhere, then… pay attention.

Composition of C# iterator methods

Iterator methods in C# are one of my favorite features of that language. I do not use them all that often, but it is nice to know they are there. If you are not sure what they are, here’s a little example:

public IEnumerable<int> Iota(int from, int count)
{
  for (int offset = 0; offset < count; ++offset)
    yield return from + offset;
}

They allow you to lazily generate any sequence directly in code. For example, I like to use them when generating a list of errors on a complex input (think compiler errors). The presence of the yield contextual keyword transforms the function body into a state machine, allowing you to pause it until you need the next value.
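As a small usage sketch: the loop below drives that state machine one yield at a time, so each value is only computed when the consumer asks for it.

foreach (var value in Iota(10, 3))
    Console.WriteLine(value);  // prints 10, 11, 12 – each produced on demand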

However, this makes it a little more difficult to compose such iterator methods, and in reverse, refactor a complex iterator method into several smaller ones. It was not obvious to me right away how to do it at all in a ‘this always works’ manner, so I am sharing how I do it here. Consider this slightly more complex iterator method:

public IEnumerable<int> IotaAndBack(int from, int count)
{
  for (int offset = 0; offset < count; ++offset)
    yield return from + offset;

  for (int offset = 0; offset < count; ++offset)
    yield return from + count - offset - 1;
}

Now we want to extract both loops into their own functions. My one-size-fits-all solution is this:

public IEnumerable<int> AndBack(int from, int count)
{
  for (int offset = 0; offset < count; ++offset)
    yield return from + count - offset - 1;
}

public IEnumerable<int> IotaAndBack(int from, int count)
{
  foreach (var x in Iota(from, count))
     yield return x;

  foreach (var x in AndBack(from, count))
     yield return x;
}

As you can see, a little ‘foreach harness’ is needed to compose the parts into the outer function. Of course, in a simple case like this, the LINQ version Iota(from, count).Concat(AndBack(from, count)) works just as well. But that only helps when the outer function is sufficiently simple.
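To illustrate where Concat stops being enough, here is a made-up variant with additional control flow between the composed parts; the ‘foreach harness’ still composes cleanly:

public IEnumerable<int> IotaAndMaybeBack(int from, int count, bool back)
{
  foreach (var x in Iota(from, count))
    yield return x;

  // the condition between the two parts rules out a single Concat expression
  if (back)
  {
    foreach (var x in AndBack(from, count))
      yield return x;
  }
}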

Pattern Matching and the SLAP

You probably know the effect: one starts writing a lot of code in a new language, and after a while gains a decent appreciation of this or that goodness and a decent annoyance about these or those oddities… Then you do some other project in other languages (the Softwareschneiderei projects are quite diverse in this respect), and each time you switch languages there’s that small moment of pondering certain design decisions.

Then after a while, sometimes there’s that feeling of a deep “ooooooh”, and you gain an understanding of a fundamental mindset that probably must have been influential there. This is always interesting, because you can start trying similar patterns in other languages, just to find out whether they are generally useful or not.

Now, as I’ve been writing quite some Rust code lately, I have come to like its way of pattern matching. Suppose you have a structure that is some composition of multiple decisions, which somehow belong together to a sufficient degree that you don’t split it up into multiple pieces. That might be, as an example, the state of some file handling that was passed to your program. Such a structure could look like (u32 is Rust’s unsigned 32-bit integer type):

enum InputState {
    Uninitialized,
    PlainText(String),
    ImageData {
        width: u32,
        height: u32,
        base64Data: String,
    },
    Error {
        code: u32,
        message: String,
    },
}

Now, in order to read such a thing in a context, there’s the “match” statement, a kind of “switch/select with superpowers”, in that it allows you to simultaneously destructure its content to reduce unnecessary typing. This might look like

fn process_input(state: InputState) {
    match state {
        InputState::Uninitialized => {
            println!("Uninitialized. Nothing to do!");
            std::process::exit(0);
        },
        InputState::PlainText(str) => {
            display_string(str);
        },
        InputState::ImageData { base64Data, .. } => {
            println!("This seems to be an image and is now to be displayed");
            display_image(base64Data);
        },
        _ => (),
    }
}

(I probably forgot several &references in writing that example, but that’s Rust and not my point here :D). Anyway. At first, I was quite irritated with that – why does Rust want me to always include the placeholders (the _)? Why can’t it just let me take care of the stuff I want to take care of right now? Why does the compiler complain instead of always assuming that _ => (), i.e. if nothing else matches, do nothing? But I eventually found out.

To make my point, here is a comparison with a more loosely typed environment like JavaScript, where (as a quick sketch) one could have written that as

// inputState might be null, or {message: "bla"},
// or {width: ..., height:..., base64Data}, or {code: ..., message: "bla"}...

if (!inputState) { return; }
if (inputState.code) { /* Error case */ }
else if (inputState.base64Data) { /* Image case */ }
else if (inputState.message) { /* probably the PlainText case */ }

/* but are you sure you forgot nothing? */

Now my point is, that these are not merely translations of the same idea between different languages. These are structurally different.

The latter example is a very microscopic view. I have a kind of squishy-looking alien thing called inputState lying in the center of my operating table; I take the scalpel and dissect – what do we have here? What color is that? Does this have bones? … Without further ado, you grab into the interior of whatever it is, and you better hope that you’re not in a kind of sci-fi horror movie… :O

The former Pattern Matching, however, is an approach more true to its original question. It stops you with your scalpel right at the beginning.

We first want to know all eventualities. Then cut where it makes sense, and free your mind from the first decision.

We just wanted to distinguish our proceeding depending on the general nature of our object of interest. We do not want to look into details at that moment, we just want to know where we are.

In the language of Clean Code, this is the Single Level of Abstraction Principle (SLAP). It states that each method you write should concern itself with a constant degree of looking at a certain problem. E.g. low-level mathematical calculations like milliseconds-vs-system-time conversions should not appear next to high-level server initialization; for the simple reason of understandability – switching levels of abstraction is quite irritating for the human brain (i.e. for everyone who didn’t write that code). It breaks your line of thinking, especially when you are trying to find a bug, or worse.

From my experience, I know that I myself would argue “well, that’s not true; I can indeed hold multiple levels of abstraction in my head simultaneously!” – and this isn’t really a lie. But I still see embarrassing mistakes later, of the type “of course there’s also that case. I should have known.”

Another metaphor: suppose you just ask your friend for the time. She then directly launches into a very detailed lecture about the movement of Sun and Earth, paired with some epistemological considerations about the Heisenberg uncertainty principle, not to neglect the role of time in the concept of entropy. Would you rather respond with “Thanks, that helped” or “… are you on drugs?”?

With pattern matching, it is the same. Look at a single decision, and first lay all the options open: “What is this?” – then go on to another method. “What to do about it?” is another level. “How?” goes deeper, and “How, actually, considering these or those additional conditions?” deeper still. And at the bottom are something like components that just take one input and mangle the bytes without regard for higher morals.

It is a very helpful principle that still sometimes needs a little reminder. For me, I was just happy to see that kind-of-compulsive approach embodied by the Rust match operator.

So, how far do you think one should take it with SLAP? Do you manage to always follow it, or does it work differently for you?

PS: Of course, one could introduce the same caution by defining a similar control flow in JavaScript, too. There’s no need to break the SLAP. But then you are the one responsible for keeping track, while in my Rust example the annoying linting / compiler does that for you.

Characteristics of a Good Merge Request

If you happen to use Merge Requests as a basic mechanics of your software development workflow (and there are compelling arguments that you should), you’ve probably encountered some merge requests that could be handled in a straight-forward manner and some that are just painful to work with. Some merge requests “spark joy” and others elicit nothing but displeasure.

What differentiates the good, joyful merge requests from the dreadful? We’ve found six (or seven, based on your categorization) characteristics that good merge requests exhibit, while bad ones don’t.

Good merge requests are:

  • Atomic – Only one topic
  • Minimal – Only necessary changes
  • Curated – Double-checked by the developer
  • Finished – Free of deactivated code, debug code, etc.
  • Explained – Provided with commentary in the merge request (not the code)
  • Manageable – Small in changeset size
  • Separated – Either domain-based or technology-based, not intermingled

Bad merge requests lack some or even all of these characteristics. The first three aspects are ignored the most, in my experience, which is why they are at the top of the list. A lot of trouble vanishes simply by staying on course, being succinct, and caring about the next pair of eyes as a developer.

Let’s look at each characteristic in some depth:

Atomic

Let’s say you watch a romantic comedy movie and halfway through, all of a sudden, it changes into a war movie with gruesome scenes. You would feel betrayed. Ok, we don’t work in entertainment, our work is serious. Let’s say you read a newspaper article about the latest stock market movement and in the third paragraph, it plainly changes into a car advertisement. This is called “bait and switch” and is a diversionary tactic, a feint. If you do it unwittingly, you might mean no harm, but you still cause irritation.

An atomic merge request tells only one story which means it contains changes for only one issue. If you happen to make “just a quick fix” at the wayside of your task, you break the atomicity property of your changeset and force your reviewer to follow your train of thought, but without the context.

There is nothing to be said against a quick fix or a small refactoring, a little improvement here and there. But it’s not part of your story, so make it a story of its own. Bundling the tiny change with the overarching story makes it harder for the reviewer to differentiate topical changes from opportunistic ones, and difficult to accept your main changes while rejecting your secondary ones. If you open up a second merge request for your peripheral changes, you retain the atomicity and the goodwill of your reviewer.

Minimal

A good merge request contains only changes that are essential to the story and consciously modified by the developer. Because we use a plethora of development tools that all want to store their metadata somewhere in our project, we often end up with involuntary modifications in files we don’t know, never looked at and don’t feel responsible for.

The minimalistic aspect of a good merge request puts you, the developer, into the position of responsibility of all content in your changeset. It doesn’t matter that “a tool made that change”. The tool acted on your command. It is not the responsibility of the reviewer to figure out what that change means. It is your duty. Explain the change to your reviewer and if you can’t, revert the change. If the rest of your changes work without it, it wasn’t necessary and shouldn’t have been included anyway. If your changes don’t work anymore, you are about to learn the inner workings of your development tools.

A minimal changeset also implies a reduced number of modified files. That’s true, but you shouldn’t design your system to have an artificially low number of files (or parts). If you don’t space out your code according to domain structures, you’ll end up with small changesets, but lots of merge conflicts and an ongoing dissonance between the domain model and your software model.

Curated

If you can follow the concept of a merge request telling a story, then you are the storyteller. You need to take care of your narration. In a good story, only necessary bits are told. If you muddy the water by introducing meaningless details and “red herrings”, you essentially irritate your listeners for no good reason.

At least have a second look at the changeset of your merge request. Is there any change present by mishap? That happens often and is not a sign of bad development. It is a sign of neglected care for your teammates if you let it show up in the changeset review, though.

Bring yourself in a position where you are fully aware of your story and the bits that tell it.

Finished

Your merge request should tell a complete story, not a fragment of it. It also shouldn’t show the auxiliary constructs you employ while developing your changes. These constructs might be debug output statements, commented out code, comments in the code that serve as a reminder for yourself (TODO comments play a big role in this regard) or extra code that ultimately proved to be unnecessary.

Your merge request should not contain any of that. After the merge, the resulting code base is regarded as “finished”. Any temporary content will stay forever.

Explained

Your story is probably very clear and recognizable. If not, or if parts of it are more obscure than you hoped, it is good practice to provide an explanation. It is your decision whether to explain yourself in the code (in the form of an inline comment) or in the merge request itself (in the form of a merge request comment). My preference tends to be the latter, because the comment will not become part of the “eternal” code base. The comment is a tool to help the reviewer grasp the details. Your approach may vary, but there needs to be some form of extra explanation from your side if your code isn’t crystal clear.

Manageable

The best stories are short. Humans aren’t good at keeping large numbers of details in their head and your reviewer is no exception. Keep your merge request small to make the review bearable. A good rule of thumb is a merge request containing no more than 10 files or 250 lines of changed code. While these numbers are arbitrary, they serve the purpose to give you a threshold you can check. You can’t check the real threshold of your reviewer, so try to stay below it.

If you find yourself in a position where a lot more files are touched and/or the changed lines of code are way above 250, you should think about what led you there. These changes didn’t happen on their own. Your work produced them. Can you adjust your workflow so that smaller milestones are possible? What was the developer action that produced most of these changes? Are we looking at shotgun surgery?

It is easier to review several small merge requests than one big one. Big, unmanageable merge requests may happen, but they should be the (painful) exception. Learn to work in episodes.

Separated

One aspect of software development is that we tend to mix two types of development into one stream of actions: There is development that produces new domain functionality and is inherently domain-based. And then, there is development that is required by technology but probably can’t be explained to the domain expert because it has nothing to do with the domain. Both types of code work together to provide the domain functionality.

But if you look at it from a story-telling aspect, then domain code is a novel while the technical code is more like a manual. Your domain code is probably unique while your technical code very much looks the same as in the next project. You should try to separate both types of code in your code base. And you should definitely separate them in your merge requests.

Keeping your domain related changes separate from your technical changes gives your reviewer a chance to ask the right questions. With domain code, some aspects are that way because the expert says so. There is no “universally correct way” to do these things. It might look strange or even wrong, but that’s the rule of the domain.

Technical changes, on the other hand, tend to look the same, no matter the domain. You can apply technical experience to these kind of changes regardless of context. Refactorings are the typical member of these changes. Renaming a variable or method will inevitably lead to a lot of changes in a lot of files. But it won’t affect the domain functionality (that’s the definition of a refactoring!). Keep your refactorings out of domain changesets to keep your reviewers happy.

What next?

There is a lot of adaptation needed to go from a “normal” development practice to one that tends to produce merge requests with these characteristics. These changes in behaviour don’t happen overnight.

My proposal would be to start with one aspect and focus on it for one month. Starting with “Atomic” will have the most effect, but you can choose your starting point according to your preference. Just keep it up for one month. After that, you can choose to focus on another aspect for a month, or add another aspect to your focus group and keep two aspects in mind. The main goal of this approach is to form a habit that enables you to produce better merge requests without really thinking about it. It might take some time, let’s say a year or even two, to incorporate all aspects into your day-to-day working style.

After that, you’ll probably look back on your earlier merge requests and smile because you see proof that you can do better now.