3 rules for projects under version control

Checking out a project under version control should be easy and repeatable. Here are a few tips on how to achieve that:

1. Self-containment

You should not need a specially configured machine to start working on a project. Ideally, you clone the project and get started. Many things can get in the way of that: maybe your project needs a specific operating system, installed dependencies, or a hardware or database setup to run. I personally draw the line at this:
You get a manual with the code that helps you set up your development environment once, and this setup should be as automated and easy as possible. You should always be able to run your projects without periphery, i.e. specific hardware that has to be plugged in or databases that need to be installed.
To achieve this for hardware, you can often fake it via polymorphic interfaces and dependency injection, much like mocking it for testing. The same can be done with databases – or you can use in-memory databases as a fallback.
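
To make this concrete, here is a minimal sketch of faking hardware behind a polymorphic interface (the Sensor device and its methods are made up for illustration, not taken from a real project):

struct Sensor {
  virtual ~Sensor() = default;
  virtual double read() = 0;
};

// Talks to the real, plugged-in device.
struct HardwareSensor : Sensor {
  double read() override { /* query the device */ return 0.0; }
};

// Stand-in for development machines without the device.
struct FakeSensor : Sensor {
  double read() override { return 42.0; }
};

// The consuming code receives its dependency and never knows the difference.
double measure(Sensor& sensor) {
  return sensor.read();
}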

2. Separate build artifacts

Building the project into an executable form should be clearly separated from the source-controlled files. For example, the build should never ever modify a file that is under version control. Ideally, the “build” directory is completely independent of the source – enabling a true out-of-source build. However, it is often a good middle ground to allow building in a few dedicated directories inside your source repository – but these need to be listed in .gitignore.
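
For a setup with such dedicated build directories, the matching .gitignore entries could look like this (directory names are just an example):

# keep build artifacts out of version control
build/
dist/
*.o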

3. Separate runtime data

In the same way, running your project should not touch any source-controlled files. Ideally, the project can be run out-of-source. This is trivial for small programs that do not have data, but once some data needs to be managed by the source control system, it gets a little more tricky for the executables to find that data. For data that needs to be changed by the program (we call these “stores”), it is advisable to maintain templates in the VCS or in code, and copy them to the runtime directory during the build process. For data that is not changed by running the program, such as images, videos, translation tables etc., you can copy them as well, or make sure the program finds them in the source repository.
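
With CMake, for instance, copying a store template into the runtime directory during the build might look like this (target and file names are made up):

# copy the template store next to the executable after each build
add_custom_command(TARGET my_app POST_BUILD
  COMMAND ${CMAKE_COMMAND} -E copy_if_different
          ${CMAKE_SOURCE_DIR}/templates/settings.json
          $<TARGET_FILE_DIR:my_app>/settings.json)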

Following these guidelines will make it easier to work with version control, especially when multiple people are involved.

Consistency over magic, please

The Groovy programming language is a JVM-based scripting language. It is used by the Grails web framework and the Gradle build automation system.

Groovy has a language feature called named argument constructors. This means that, given a class with properties, for example

class Example {
  String text
}

you can initialize the properties directly when calling the constructor:

def example = new Example(text: ' This is an example. ')
assert example.text == ' This is an example. '

This is basically a shortcut for initializing the properties via explicit assignment:

def example = new Example()
example.text = ' This is an example. '
assert example.text == ' This is an example. '

So far so good.

Enter Grails

We use the aforementioned Grails framework for some of our web application projects. It is advertised on its website as featuring “convention-over-configuration” and “sensible defaults”. Grails uses the Groovy programming language, and a simple domain class looks just like a plain old Groovy class, except that it lives under the grails-app/domain directory (this is one of the convention-over-configuration aspects):

class Example {
  String text
}

As expected, you can initialize the property via regular assignment:

def example = new Example()
example.text = ' This is an example. '
assert example.text == ' This is an example. '

So one might expect that you can initialize it via a named argument constructor call as well:

def example = new Example(text: ' This is an example. ')
assert example.text == ' This is an example. '

And indeed, you can. But what’s this? Our assertion fails:

assert example.text == ' This is an example. '
               |    |
               |    false
               This is an example.

It is not directly obvious from the assertion failure output, but the property value is indeed no longer equal to the expected text: the leading and trailing spaces got trimmed!

I was surprised, but after some research in the Grails documentation it turned out that it’s not a bug, but a feature. In the section on Data Binding, you can find the following sentence:

The mass property binding mechanism will by default automatically trim all Strings at binding time. To disable this behavior set the grails.databinding.trimStrings property to false in grails-app/conf/application.groovy.
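
In application.groovy, that documented switch is a one-liner:

// grails-app/conf/application.groovy
grails.databinding.trimStrings = false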

Groovy’s named argument constructor feature is used as a data binding mechanism by Grails to bind web request parameters to a domain object. For this the default behavior was modified, so that strings are automatically trimmed. I can only guess that this is considered to be an instance of the “sensible defaults” mentioned on the Grails homepage.

To me personally, this kind of surprising behavior is not a sensible default, and I think it goes against the principle of least astonishment. I prefer consistency over “magic”.

The lorem ipsum in development

We’ve all been guilty of this: showing a new UI mockup to the user with lorem ipsum or typing fake data into forms for testing. That’s bad because fake data does not have the same characteristics as real data. Real data has specific lengths, formats and structure; fake data is arbitrary. It is even more embarrassing if a lorem ipsum makes it into production. But fake data is not only a problem in the user interface.

We see the same problem in programming, in names of classes like *Impl, *Manager or *Holder. These names do not communicate well. Why not name classes with intent? Take a look at domain-driven design and its ubiquitous language for a start.

Using only fake data in tests is bad, too. Yes, you need to test the extreme cases, but even there you should inform your selection of data from the real world. Build in the constraints that the domain provides; without the domain you build highly flexible software, which is a nightmare to maintain. And the test data should not be just arbitrary either: you need the corner cases, and to find them you have to look into the domain, the real world, again. To do this, gather real data from users, domain experts or existing systems and reports. Real data has constraints, and constraints drive creativity and decisions. The problem with deferring decisions too long is that you have to maintain all your flexibility until then. So make decisions, but base them on real data, not fake data.

Developer Experience means nothing

Always choose User Experience over Developer Experience.

You’ve probably already heard about the concept of User Experience (UX) and the matching virtue of User Experience Design. If not, you might want to go check it out. I would suggest you start with the excellent book “The Design of Everyday Things” by Donald Norman. It has nothing to do with software development, so you won’t mix up User Experience Design with Graphical User Interface Design or even Graphics Design.

In short, User Experience is what a user of your software will experience while using it. If your software makes them smile, you’re some kind of UX god. If your software makes them curse repeatedly, you’re probably not. You can try to improve the experiences of your users by making changes to the software. This is called User Experience Design and is an important part of software development. Most developers know nothing about User Experience Design.

What developers naturally know about is Developer Experience (DX), a concept that isn’t really defined in the literature. I made it up to explain my point to you. Every software developer has a favorite programming language and IDE. Remember, though, that solutions to the same problem in different programming languages ultimately end up as machine code, and the machine is totally agnostic about your preference for a certain programming language. Your choice of a programming language for a certain problem is more about your Developer Experience than anything else. Developer Experience is everything you feel and think during software development. A bad Developer Experience lets you swear about the tools, the code and everything and everybody else around you. A good Developer Experience gives you a sense of accomplishment and safety while coding and makes your work no harder than necessary.

Because you can’t read elsewhere about the concept of Developer Experience, I want to give two more examples and show how it affects the User Experience. The first example is a big, long-lasting software system that is in production and still developed further. If you have a software system in production, everything requires an additional thought about the matching upgrade strategy. You can’t just modify the database structure, you need to provide migrations. You can’t just add a configuration entry, you need to make it optional or consider the least harmful default value. In the project of our example, the developers had three possible approaches to a new configuration entry:

  • They could add it to the code but leave the configuration files unchanged. This required the code to handle the “absent from configuration” case in a useful manner. It required extra effort from the developer in the code. The user would not know about the new configuration entry unless it was stated in some external documentation.
  • They could add it to the code and write a configuration migration script that added it to the existing configuration files on the customer’s installation. The code could now expect the entry to be present, but the developer had to write and test the migration script code. The user would see the new configuration entry with the default value.
  • They could introduce a new configuration file to the system and place the new configuration entry in it. The code could expect the entry to be present, because new configuration files were added to an existing installation during the upgrade process. The user would see the new configuration file and the new configuration entry with the default value.

You can probably guess which of the three options got used so excessively that users complained about the configuration being all over the place, in a myriad of little one-liner configuration files with ominous names. The developers chose the option that was best for them and, in the short view, for the users. But in the long run, the User Experience declined.

The second example is from a computer game in the mid-2000s. It was a massive multiplayer online shooter with a decent implementation ahead of its time. But one thing was still from the last century: after each update of the game, the key bindings were reset to the default, and so was every other locally modified setting. After each update, you had to configure your video, audio, controls and everything else, like your in-game equipment, again and again. The game didn’t offer any means to copy or reload your settings. It was up to you to maintain a recent backup of the settings files, just in case. And if the file format had changed, you needed to combine their changes with yours. WinMerge is a decent tool for that task. But the problem is clear: the game developers couldn’t be bothered to think about how their upgrade strategy would affect their users. They ignored the problem and let the users figure it out. They chose better Developer Experience, free from complexities like user-side modifications, over good User Experience, like a game that can be upgraded without drawbacks for the gamers.

Sadly, this is a usual formula in software development: Developer Experience is more important than User Experience. Just look at it from a utilitarian point of view: if you burden thousands or even millions of users with just an easy one-minute task that you could have fixed during development, you waste a budget of anywhere from two workdays up to ten person-years. Do the math yourself: one million users, each spending one minute of work on your software, equals over 16,000 hours or more than 2,000 workdays lost on a thing you could probably fix for everybody in an hour or two.

And this brings me to my central statement: Developer Experience, as opposed to User Experience, means nothing. It’s just not important. At least not for the users. It is important to the small group of developers and should not be forgotten. But a decision should never lead to better Developer Experience at the expense of User Experience. It’s a small inconvenience for us developers to think about smooth upgrades, meaningful and consistent control element titles or an easy installation. It’s a whole new game for our users.

Always choose User Experience over Developer Experience.

Call to action: If you have another good example of developers being lazy at the expense of their users, please share it in a comment.

Call to action 2: The initial picture is linked from the excellent website badhtml.com. If you ever feel like an imposter for your latest design, go and visit this website.

luabind deboostified tips and tricks

luabind deboostified is a fork of the luabind project that helps you expose C++ APIs to Lua. As the name implies, it replaces the boost dependency with modern C++, which makes it a lot more pleasant to work with.

Here are a few tips and tricks I learned while working with it. Some tricks might be applicable to the original luabind – I do not know.

1. Splitting module registration

You can split the registration code for different classes. I usually add a register function per class, like this:

struct A {
  void doSomething();
  static luabind::scope registerWithLua();
};

struct B {
  void goodStuff();
  static luabind::scope registerWithLua();
};

You can then combine their registration code into a single module on the Lua side:

void registerAll(lua_State* L) {
  luabind::module(L)[
    A::registerWithLua(),
    B::registerWithLua()];
}

The implementation of a registration function looks like this:

luabind::scope A::registerWithLua()
{
  return luabind::class_<A>("A")
    .def("doSomething", &A::doSomething);
}

2. Multiple policies and multiple return values

Unlike C++, Lua has real multiple return values. You can make use of them via the return value policies that luabind offers. Let’s say you want to write this in Lua:

local x, y = a.getPosition()

The C++ side could look like this:

void getPosition(A const& a, float& x, float& y);

The deboostified fork needs its policies supplied in a type list. Let’s use a small helper meta-function to build that:

template <typename... T>
using joined = 
  typename luabind::meta::join<T...>::type;

Once you have that, you can expose it like this:

luabind::def("getPosition", &getPosition,
              joined<
                luabind::pure_out_value<2>,
                luabind::pure_out_value<3>
              >());

3. Specialized data structures using luabind::object

Using the converters in luabind is not the only way to make Lua values from C++. Almost everything you can do in Lua itself, you can do with luabind::object. Here is a somewhat contrived example:

luabind::object repeat(luabind::object what,
                       int count) {
  // Create a new table object
  auto result = luabind::newtable(
    what.interpreter());
  // Fill it as an array [1..N]
  for (int i = 1; i <= count; ++i)
    result[i] = what;
  return result;
}

This function can then be exported via luabind::def and used just like any other function. This is just the tip of the iceberg, though. For example, you can also write functions that, at runtime, behave differently when a number is passed in than when a table is passed in. You can find out the Lua type of an object with luabind::type(myObject).
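
As a small sketch of such a type dispatch (the function and its behavior are made up; LUA_TNUMBER and LUA_TTABLE are the standard Lua type constants):

luabind::object normalize(luabind::object value) {
  if (luabind::type(value) == LUA_TNUMBER) {
    // wrap a single number in a table [1]
    auto result = luabind::newtable(value.interpreter());
    result[1] = value;
    return result;
  }
  // tables (and everything else) pass through unchanged
  return value;
}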

Of course, as soon as you want to create new objects to return to Lua, you need the lua_State pointer in that function. Using the interpreter from a passed-in luabind::object is one way, but I have yet to find another pleasant way to do this. It is probably possible to use the policies to do this, and have them pass that in as a special parameter, but for now I am using some complicated machinery to bind lambda functions that capture the Lua interpreter.

That’s it for now.

Keep in mind that these are not thoroughly researched best-practices, but patterns I have used to solve actual problems. There might be better solutions out there – if you know any, please let me know. Hope this helped!

Using Ansible vault for sensitive data

We like using Ansible for our automation because it has minimal requirements for the target machines and the surrounding infrastructure: you need nothing more than ssh and Python with some libraries. In contrast to alternatives like Puppet and Chef, you do not need special server and client programs running all the time and communicating with each other.

The problem

When setting up remote machines and deploying software systems for your customers, you will often have to use sensitive data like private keys, passwords and maybe machine or account names. On the one hand, you want to put your automation scripts and their data under version control and use them from your continuous integration infrastructure. On the other hand, you do not want to spread the secrets of your customers all around your infrastructure, and definitely never ever put them in your source code repository.

The solution

Ansible supports encrypting sensitive data and using it in playbooks with the concept of vaults and the accompanying commands. Setting it up requires some work, but then usage is straightforward and works seamlessly.

The high-level conversion process is the following:

  1. create a directory for the data to be substituted on a host or group basis
  2. extract all sensitive variables into vars.yml
  3. copy vars.yml to vault.yml
  4. prefix variables in vault.yml with vault_
  5. use vault variables in vars.yml

Then you can encrypt vault.yml using the ansible-vault command providing a password.

All you have to do subsequently is to provide the vault password along with your usual playbook commands. Decryption for playbook execution is done transparently on-the-fly for you, so you do not need to care about decryption and encryption of your vault unless you need to update the data in there.
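
When you do need to change the secrets later, ansible-vault handles the decrypt/edit/re-encrypt cycle for you:

ansible-vault edit host_vars/ceres/vault.yml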

The step-by-step guide

Suppose we want to work on a target machine that is run by our customer but to which we have ssh access. We do not want to store the ssh user name and password in our repository, but we do want to be able to run the automation scripts unattended, e.g. from a Jenkins job. Let us call the target machine ceres.

So first we set up the directory structure by creating a directory for the target machine called $ansible_script_root$/host_vars/ceres.

To log into the machine we need two sensitive variables: ansible_user and ansible_ssh_pass. We put them into a file called $ansible_script_root$/host_vars/ceres/vars.yml:

ansible_user: our_customer_ssh_account
ansible_ssh_pass: our_target_machine_pwd

Then we copy vars.yml to vault.yml and prefix the variables with vault_, resulting in $ansible_script_root$/host_vars/ceres/vault.yml with the following content:

vault_ansible_user: our_customer_ssh_account
vault_ansible_ssh_pass: our_target_machine_pwd

Now we use these new vault variables in our vars.yml like this:

ansible_user: "{{ vault_ansible_user }}"
ansible_ssh_pass: "{{ vault_ansible_ssh_pass }}"

Now it is time to encrypt the vault using the following command, which will prompt you for the vault password:

ansible-vault encrypt host_vars/ceres/vault.yml

resulting in an encrypted vault that can be put into source control. It looks something like this:

$ANSIBLE_VAULT;1.1;AES256
35323233613539343135363737353931636263653063666535643766326566623461636166343963
3834323363633837373437626532366166366338653963320a663732633361323264316339356435
33633861316565653461666230386663323536616535363639383666613431663765643639383666
3739356261353566650a383035656266303135656233343437373835313639613865636436343865
63353631313766633535646263613564333965343163343434343530626361663430613264336130
63383862316361363237373039663131363231616338646365316236336362376566376236323339
30376166623739643261306363643962353534376232663631663033323163386135326463656530
33316561376363303339383365333235353931623837356362393961356433313739653232326638
3036

Using your playbook looks similar to before; you just need to provide the vault password using one of several options: a password file, the ANSIBLE_VAULT_PASSWORD_FILE environment variable, or interactive input. In our example we simply ask for it interactively:

ansible-playbook --ask-vault-pass -i inventory work-on-customer-machines.yml

After setting up your environment appropriately, with a password file and the ANSIBLE_VAULT_PASSWORD_FILE environment variable, your playbook commands are exactly the same as without using a vault.
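
Such a setup could look like this (file locations are just an example; make sure the password file itself stays out of version control):

echo "ourpwd" > ~/.ansible/vault_pass.txt
chmod 600 ~/.ansible/vault_pass.txt
export ANSIBLE_VAULT_PASSWORD_FILE=~/.ansible/vault_pass.txt
ansible-playbook -i inventory work-on-customer-machines.yml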

Conclusion

The Ansible vault feature allows you to safely store and use sensitive data in your infrastructure without changing much about the way you use your automation scripts.

Client-side web development: Drink the Kool-Aid or be cautious?

Client-side web development is a fast-changing world. JavaScript libraries and frameworks come and go monthly. A couple of years ago jQuery was a huge thing, then AngularJS, and nowadays people use React or Vue.js with a state container like Redux. And so do we for new projects. Unfortunately, these modern client-side frameworks are based on the npm ecosystem, which is notorious for its dependency bloat. Even if you only have a couple of direct dependencies, the package manager lock file will list hundreds of indirect dependencies. Experience has shown that lots of dependencies result in a maintenance burden as time passes, especially when you have to do major version updates. Also, as mentioned above, frameworks come and go out of fashion, and the maintainers of a framework move on to their next pet project, leaving you and your project sitting on a barely or no longer maintained base. And frameworks can’t easily be replaced, because they tend to permeate every aspect of your application.

With this frustrating experience in mind, we recently did an experiment for a new medium-sized web project. We avoided frameworks and the npm ecosystem and only used JavaScript libraries that were really necessary and had no or very few indirect dependencies. Browsers have become better at conforming to web standards, at least regarding the basics. Libraries like jQuery and polyfills that paper over the incompatibilities can mostly be avoided — an interesting resource is the website You Might Not Need jQuery.
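
To give a flavor of what this means in practice, here are two illustrative snippets in the spirit of that website (not taken from our project):

// jQuery: $('.item').addClass('active');
document.querySelectorAll('.item')
  .forEach(element => element.classList.add('active'));

// jQuery: $.getJSON('/api/data', handleData);
fetch('/api/data')
  .then(response => response.json())
  .then(handleData);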

We still organised our views as components, and they communicate via a very simple event dispatcher. Some things had to be done by hand, but not too many. It works, although the result is not as pure as it would have been with declarative views as facilitated by React and a functional state container like Redux. We’re still fans of the React+Redux approach and are using it happily (at least for now) for other projects, but we’re also skeptical regarding the long-term costs, especially those of relying on the npm ecosystem. Which approach will result in less maintenance burden? We don’t know yet. Time will tell.
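
Such an event dispatcher can be surprisingly small. Here is a sketch of the general idea (illustrative, not our actual project code):

class EventDispatcher {
  constructor() {
    this.handlers = {};
  }
  // Register a handler for a named event.
  on(event, handler) {
    (this.handlers[event] = this.handlers[event] || []).push(handler);
  }
  // Notify all handlers registered for the event.
  emit(event, payload) {
    (this.handlers[event] || []).forEach(handler => handler(payload));
  }
}

// Components communicate without knowing each other:
const bus = new EventDispatcher();
bus.on('item-selected', item => console.log('selected', item.id));
bus.emit('item-selected', { id: 42 });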

Decoding non-utf8 server responses using the Fetch API

The new JavaScript Fetch API is a really nice addition to the language and my preferred, and in fact the only bearable, way to do server requests.
The Promise-based API is a lot nicer than the older, purely callback-based approaches.

The usual approach to get a text response from a server using the Fetch API looks like this:

let request = fetch(url)
  .then(response => response.text())
  .then(handleText);

But this has one subtle problem:

I was building a client application that reads weather data from a small embedded device. We did not have direct access to changing the functionality of that device, but we could upload static pages to it, and use its existing HTML API to query the amount of registered rainfall and lightning strikes.

Using the Fetch API, I quickly got the data and extracted it from the HTML, but some of the identifiers contained screwed-up characters that looked like decoding problems. So I checked whether the HTTP Content-Type was set correctly in the response. To my surprise, it was correctly set as Content-Type: text/html; charset=iso-8859-1.

So why did my JavaScript application not get that? After some digging, it turned out that Response’s text() function always decodes the payload as UTF-8. The mismatch between that and the charset explained the problem!

Obviously, I had to do the decoding myself. The solution I picked was to use the TextDecoder class. It can decode an ArrayBuffer with a given encoding. Luckily, that is easy to get from the response:

let request = fetch(url)
  .then(response => response.arrayBuffer())
  .then(buffer => {
    let decoder = new TextDecoder("iso-8859-1");
    let text = decoder.decode(buffer);
    handleText(text);
  });

Since I only had to support that single encoding, this worked well for me. Keep in mind that TextDecoder is still an experimental technology. However, we had a specific browser as a target, and it works there. Lucky us!
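
If you had to support arbitrary responses, you could derive the charset from the Content-Type header instead of hard-coding it. A sketch, with an intentionally simplistic header parser:

let request = fetch(url)
  .then(response => {
    // extract the charset from e.g. "text/html; charset=iso-8859-1"
    const contentType = response.headers.get("Content-Type") || "";
    const match = /charset=([^;]+)/i.exec(contentType);
    const charset = match ? match[1].trim() : "utf-8";
    return response.arrayBuffer()
      .then(buffer => new TextDecoder(charset).decode(buffer));
  })
  .then(handleText);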

For what the javascript!

The setting

We are developing and maintaining an important web application for one of our clients. Our application scrapes a web page and embeds our own content into that page frame.

One day our client told us of an additional block of elements at the bottom of each page. The block had a heading “Image Credits” and a broken image link strangely labeled “inArray”. We had not changed anything on our side, and the new blocks were not part of the HTML code of the pages.

OK, so some new JavaScript code had to be the source of these strange elements on our pages.

The investigation

I started the investigation using the development tools of the browser (F12). A search for the string “Image Credits” instantly brought me to the right place: a JavaScript function called on document.ready(). The code was basically collecting all images with a copyright attribute and putting the findings into an array, with the copyright text as key and the image URL as value. Then it iterated over the array and added the copyright information at the bottom of each page.

But wait! Our array was empty, and we had no images with copyright attributes. Still, the block was put out. I verified all this using the debugger in the browser and was a bit puzzled at first, especially by the strange name “inArray” that sounded more like code than copyright information.

The cause

Then I looked at the iteration and it struck me like lightning: the code used for (name in copyrightArray) to iterate over the elements. Sounds correct, but it is not! Now we have to elaborate a bit, especially for all you folks without a special degree in JavaScript coding:

In JavaScript there is no distinct notion of associative arrays, but you can access all enumerable properties of an object using square brackets. The following example is taken from Mozilla’s JavaScript docs:

var string1 = "";
var object1 = {a: 1, b: 2, c: 3};

for (var property1 in object1) {
  string1 = string1 + object1[property1];
}

console.log(string1);
// expected output: "123"

In the case of an array, the indices are “just enumerable properties with integer names and are otherwise identical to general object properties”.

So in our case we had an array object with a length of 0 and a property called inArray. Where did that come from? Digging further revealed that one of our third-party libraries added a function to the Array prototype like so:

Array.prototype.inArray = function (value) {
  var i;
  for (i = 0; i < this.length; i++) {
    if (this[i] === value) {
      return true;
    }
  }
  return false;
};

The solution

Usually you would iterate over an array using the integer index (old school) or, better, using the more modern and readable for…of (which also works on other iterable types). In this case that does not work, because we do not use integer indices but string properties. So you have to use Object.keys().forEach(), or check with hasOwnProperty() in your for…in loop whether the property is inherited, to avoid picking up unwanted properties from prototypes.
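
Applied to our case, the fix could look like this (addCopyright stands in for the original code that built the credits block):

// Object.keys() only yields own, enumerable properties,
// so the inherited inArray function is skipped:
Object.keys(copyrightArray).forEach(function (name) {
  addCopyright(name, copyrightArray[name]);
});

// Alternatively, guard the for...in loop:
for (var name in copyrightArray) {
  if (copyrightArray.hasOwnProperty(name)) {
    addCopyright(name, copyrightArray[name]);
  }
}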

The takeaways

Iteration in JavaScript is hard! See this lengthy discussion… The different constructs are named quite similarly and all have subtle differences in behaviour. In addition, libraries can mess with the objects you think you know. So, finally, some advice from me:

  • Arrays are only true arrays with non-negative integer indices/property names!
  • Do not mess with the prototypes of well-known objects, like our third-party library did…
  • Use for…of to iterate over true arrays or other iterables.
  • Do not use associative arrays if other options are available. If you do, make sure to check whether the properties are own properties and enumerable.


The Pure Fabrication Tax

Last week, I attended the Maexle event of the C++ user group in Karlsruhe. The Maexle event is basically a programming contest where your program plays the Mia dice game against other programs. You have to implement a simple network protocol to join games, announce rolls and call bluffs. Your program earns points for every game it has participated in and not lost. So there is a strong emphasis on starting early and staying in the game, even if your program doesn’t perform best.

Since it was an event of the C++ user group, the programming language to be used was C++. I’m certainly no C++ hero and knew I couldn’t compete, so I joined the fun with an espionage role and programmed an observing bot that doesn’t play, but gathers data on the players. I chose Java for the task. My observer was online after two minutes, the first real player joined the server after 20 minutes. It turned out to be written in Python. The first real C++ bot was online after 35 minutes, the last one played its first round after two hours.

I listened closely to the problems the teams around me tackled and noticed something strange: Nobody talked about the actual game (Maexle/Mia). Every task was a technical one. Let’s talk about why that’s a problem.

Three Definitions

Before I dive into the subject, I want to define some terms that I’ll use to help you understand my point. It’s entirely possible to look at the story above and see a bunch of engineers having fun with some engineering tasks.

  • First, I value the economics of my customer. In this case, the customer is a lonely server on the LAN that wants to host some games of Maexle for bots. Like, lots of games. Thousands of games. The customer gives points for early market entry, so time to market is an economic factor (or a key performance indicator, some might say). You can roughly say that being online early means you can make bucks longer. The second key performance indicator is uptime. You want to stay in the game as long as you don’t lose all the time. There are some more KPIs, but the two I’ve listed should have a major impact on your programming approach – if you value the economics.
  • Second, I don’t care about tools. A programming language is a tool. A compiler is a tool. Your IDE or text editor is a tool. Use your preferred tooling as long as it suits your needs. That means, explicitly: as long as it doesn’t actively work against your other values, like the customer’s economics. This blog post is definitely not about Java or Python being “better” or “better suited” than C++. They aren’t. The first two bots (observer and player) were programmed by participants who had prior experience with the event. It wasn’t the tool that made them fast, it was the absence of rookie errors in the domain and its technical structure.
  • Third, I will explain my point with the concept of “pure fabrication”. Pure fabrication is everything that is not specified by the customer, but necessary to fulfill the specification. It’s the code you write because your customer wants to persist some data. He never ordered you to write SQL statements or “open a connection to the database”; maybe he didn’t even know what a database was. Your customer wanted the data stored somehow. The code that enables you to actually program the storage is “pure fabrication” in terms of the domain. Think of it as a scaffolding holding your domain code in place. If you hire a painter to color your house, he will scaffold the walls to reach every spot with ease. You didn’t hire him to set those structures up, they are just necessary for the task. The difference to most of our code is that the painter removes the scaffolding afterwards.

Pure Fabrication vs. Domain

So, if I had been a customer at the Maexle event, paying for a competitive Maexle bot, I would have been very surprised about the actual construction process. Up to two hours into a three-hour event, my programmer would be solving apparently hard and important problems, but not my problems. In fact, I wouldn’t even understand the relation between the attempted problems and my required solution. And I would have to have blind faith, for more than half my money, that something usable would come out of this.

This is the effect of too much pure fabrication in the programming approach. I’m all for solving hard programming problems, but I’m not interested in solving them over and over again. After some iterations, they become solved problems or, essentially, tools. And I don’t care about tools as long as they get their job done. If your domain problem requires a better tool, then we can put the programming problem on our todo list again. Otherwise, we are not valuing our customer’s economics, we are showing off to our peers.

If you program a simple game of Maexle with a heavy emphasis on time to market, and even after the initial ramp-up aren’t able to reason about your code using language from the domain (like game, dice, roll, bluff, double and, of course, mia), you are staying in pure fabrication land. That’s the level of programming where it matters whether you used an integer or freed that memory. That’s when you pay the Pure Fabrication Tax to the fullest. Because your code now does something valuable in the domain, but the distance between your customer’s language and your code’s language is a hindrance. And this distance will demand its tax with every new feature, every change request and every bug.

Bugs are another area where the distance is measurable. If you can’t explain your bugs to the customer, you’ve made them in the pure fabrication part of your code. If you can never explain your bugs, your domain code is hidden between lines and lines of source code with lots of special characters, brackets and magic numbers. Just imagine your hired painter trying to tell you why your house is now pink instead of white or yellow: “It was a small mishap in the way we constructed the scaffolding, we used an E5 steel beam instead of a rail clamp and forgot to perform a hammer check on it”. The last part is totally made up, but I’m sure that’s how we sound to non-programmers.

Exemptions from the Tax?

What solution would I suggest? I don’t think there is a definite solution to the problem. You can’t go full Domain-Driven Design on a three-hour Maexle event. By the time you’ve built your fancy Domain-Specific Language to write code with the customer beside you, everybody else has gathered their game points and gone home. If you switch to a language that has a string tokenizer in its standard library, you can speed up your programming, but maybe just produce a bigger heap of slightly less low-level pure fabrication code.

I don’t want to advocate a solution. My attempt is to highlight the problem: The Pure Fabrication Tax. Given the right (or wrong) amount of extrinsic (or intrinsic) motivation, we are able to produce a mess in just a few hours without really connecting to the domain we produce the mess for. If we didn’t program a Maexle bot that night, but a poker bot or a chat bot, most if not all of the problems and bugs would have been the same. This is not a domain-specific problem. It’s our problem. We probably just like to pay the tax.

What are your thoughts on the Pure Fabrication Tax? Can you see it? Do you have an idea for a solution approach? Leave your comment below!

Disclaimer

And to counter everybody who thinks I’m just bashing the other participants on the event: I was the first one online on the server, with a task that requires virtually no effort and doesn’t even participate directly in the competition, with tools that solved nearly all my pure fabrication problems and still managed to create a program that contained less than five domain terms and was useless for its intended purpose. I said I value the economics of my customer (even if there was none), so I know that I failed hardest on the event. And I had prior knowledge. There was just nobody to compare my mess to.