Meet my Expectations!

A while ago I came across a particularly irritating piece of code in a rather harmless-looking mathematical vector class. C++ is one of the few mainstream languages with operator overloading, which makes it a good fit for multi-dimensional calculations, so vector classes are common and I had already seen quite a few of them in my career. It looked something like this:

template <typename T>
class vec2
{
public:
  /* A few member functions.. */
  bool operator==(vec2 const& rhs) const;

  T x;
  T y;
};

Not many surprises here, except that maybe the operator==() should be a free function instead. Whether the data members of the class are an array or named individually is often a point of difference between vector implementations. Both certainly have their merits. But I digress…
What really threw me off was the implementation of the operator==(). How would you implement it? Intuitively, I would have expected pretty much this code:

template <typename T>
bool vec2<T>::operator==(vec2 const& rhs) const
{
  return x==rhs.x && y==rhs.y;
}

However, what I found instead was this:

template <typename T>
bool vec2<T>::operator==(vec2 const& rhs) const
{
  if (x!=rhs.x || y!=rhs.y)
  {
    return false;
  }

  return true;
}

What is wrong with this code?

Think about that for a moment! Can you swiftly verify whether this boolean logic is correct? You actually need to apply De Morgan’s laws to get to the expression from the first implementation!
This code was not technically wrong. In fact, for all practical purposes, it was working fine. And it is functionally identical to the first version! Still, I think it is wrong on at least two levels.

Different relations

Firstly, it bases the vector’s equality on the inequality of its contained type, T. I found this quite surprising, so it already violated the POLA (principle of least astonishment) for me. I immediately asked myself: why did the author choose to implement this in terms of operator!=() and not operator==()? After all, in templated C++ code it is common to supply equality and derive inequality from it. In a way, this is also more intuitive: inequality already has the negation in its name, while equality is the “original” relation. And why base the vector’s equality on a different relation of the contained type instead of the same one? This can actually become a problem when the vector is instantiated with a type that supplies operator==() but not operator!=() – though such a type would be equally surprising. It turned out that the vector was only used with built-in types, so those particular concerns were moot. At least, until it is later used with a custom type.
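
To make that convention concrete, here is a minimal sketch (not the original vector code) of how inequality can be derived from the equality defined above:

template <typename T>
bool operator!=(vec2<T> const& lhs, vec2<T> const& rhs)
{
  // inequality is expressed in terms of equality, never the other way around
  return !(lhs == rhs);
}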

Too many negations

Secondly, there is the pattern of immediately returning a boolean constant after a condition. This alone is often considered a code smell. It could be argued that the explicit if/return is more readable, but I am not arguing in favor of pure brevity. I am arguing in favor of clarity! In this case, the construct is basically used to negate the boolean expression, further obscuring the result of the whole function.
So the function essentially performs a double negation (not unequal) to express a positive concept (equal). And negations are a big source of errors and often lead to confusion.

Conclusion

Make your code as simple and clear as possible and avoid any surprises, especially when dealing with the relatively unconstrained context of C++ templates. In other words: meet the expectations of the naive reader as well as possible!

Transferring commits via Git bundles

Sometimes you want to send (e.g. by e-mail) a set of new Git commits to someone else who has the same repository at an older state, without transferring the whole repository and without sharing a common remote repository.

One feature that might come to your mind is Git patches. Patches, however, don’t work when there are branches and merge commits in the commit history: git format-patch creates patches for the commits across the various branches in the order of their commit times and doesn’t create patches for merge commits.

Git bundles

The solution to the problem is Git bundles. A Git bundle contains a partial excerpt of a Git repository in a single file.

This is how to create a bundle, including branches, merge commits and tags:

$ git bundle create my.bundle <base commit>..HEAD --branches --tags

<base commit> must be replaced with the last commit (i.e. a commit hash or tag) that is already contained in the old state of the repository.

A Git bundle can be imported into a repository via git pull:

$ git pull /path/to/my.bundle

Get it up and running

West Side-project story

I have seen quite a few projects that spent a ton of calendar/developer time and budget on building components and frameworks instead of getting something running. Running in this sense does not mean being able to show an application started from a developer’s IDE on their development notebook. By running I mean versioned artifacts deployed on some sort of staging infrastructure the client/customer has access to. Let me elaborate on that:

Walking skeleton

I really like the notion of the walking skeleton described by Steve Freeman and Nat Pryce in “Growing Object-Oriented Software, Guided by Tests“. While I do not want to emphasize TDD and/or automated end-to-end testing, I see great benefits in producing a walking skeleton that touches all important parts of a system and works with a minimal set of functionality. For me that means that all of the following elements are in place, albeit in a primitive way which can be refined along the way:

  • The code is hosted in a source code repository accessible to the project members
  • A build system is chosen and able to produce a runnable artifact
  • A continuous integration server (CI) is triggered on changes in the repository and produces the runnable artifact
  • The artifact is easily installable on the target machines and/or installed on a staging system that resembles the target system as closely as reasonable
  • If there are components, they talk to one another using minimal requests and stubbed replies that can be refined over time

I usually aim to have that walking skeleton within the first few hours of a project, as soon as the base technologies and requirements allow to start coding. That may take up to several days when the system is more complex, but it should not take weeks. Connect the different parts of the system as early as possible, even if responses are minimal, hardcoded or “wrong”. It shows that the parts are able to communicate, and mismatches become visible either somewhere along the build process or at the latest when running the application. Why should you choose such an approach?

Benefits

  • Most people I know are better at evolving and improving existing stuff than at creating new stuff from scratch in a focused and efficient way. If you have a skeleton of the system it is much easier to talk about the interfaces and responsibilities of the different parts of the system.
  • You get to define APIs which you can evolve over time instead of specifying the complete API up front and experiencing implementation and mismatch problems much later when the components need to be integrated.
  • Similarly, it is much more difficult to package a complex system after it is finished than to package a simple minimal system and evolve all aspects – building, implementation, packaging/deployment, maintenance – when necessary.
  • You see much earlier whether things work as expected or whether there is some inherent problem in the whole design. Essentially you evolve your proof-of-concept into something with real value for your clients.
  • As soon as your system provides some value you can deploy the working stuff long before the whole project is finished.
  • It is much easier to discuss a working system than to reason about a system not yet existing.
  • You are eating your own dogfood from early on and can address pain points in development, user experience, deployment and running the application.
  • You have something to show more or less right from the beginning and progress will be visible throughout the project.
  • No “Works on my machine!” syndrome.

Potential Problems

Of course there is the challenge of continuously refactoring and extending existing stuff. Later on, data migration or migration of configuration may become additional tasks. But hey, you are skilled, agile developers who embrace the idea of changing requirements and the ability to move fast. You have to pay attention not to accumulate too much technical debt, as it will slow you down and hurt you in the long run.

Summary

Running systems providing value are what your clients often care about the most. If you can deliver something like that early on, communication with your stakeholders tends to be more relaxed, as they have better possibilities of steering in the right direction and they steadily see progress. Running software should be the primary goal.

Thinking in immutability

The way I learned programming is dictated by objects and state. In that way of thinking, data is packed into objects which are later modified to reflect changes over time. State and modification are a central modelling technique. For me, programming, and OOP in particular, revolved around this common theme. Mutable objects pervade my thinking, reaching beyond the code into the database and even the architecture of the whole system.
Despite the advantages and the ongoing efforts in the industry, I couldn’t help thinking: immutability is nice. I can use it in some cases and keep it quietly stored in a corner.
But it didn’t remain silent.
So I asked myself: How do you construct programs that build upon immutability? How do you (mostly) avoid mutable objects? How do you think in immutability?
The first step was to unlearn. No updates. No modifications. Read, create, copy. That’s about it. No more CRUD, only CR. No more SQL updates, only inserts.

Events and logs

To illustrate I use a simple example. Creating, moving, translating and deleting a point. In the traditional OO way it looks like this:

Point p = new Point(40, 30);
p.translateXBy(5);
p.moveTo(10, 20);
p.delete();

Or using SQL might be something like this (omitting primary keys and where clauses here):

insert into points (x, y) values (40, 30)
update points p set p.x = p.x + 5
update points p set p.x = 10, p.y = 20
delete from points

In our memory (or database if we use one) every line updates our point:

Point p = new Point(40, 30); // p = {x: 40, y: 30}
p.translateXBy(5); // p = {x: 45, y: 30}
p.moveTo(10, 20); // p = {x: 10, y: 20}
p.delete(); // p = ?

But what if we do not store the results of the operations but the operations themselves? The events.
Imagine your state changes as a series of events. Just imagine.

new PointCreated(40, 30); // pointEvents = [{created[x: 40, y:30]}]
new PointTranslatedXBy(5); // pointEvents = [{created[x: 40, y:30]}, {translated[x: 5]}]
new PointMovedTo(10, 20); // pointEvents = [{created[x: 40, y:30]}, {translated[x: 5]}, {moved:[x: 10, y:20]}]
new PointDeleted(); // pointEvents = [{created[x: 40, y:30]}, {translated[x: 5]}, {moved:[x: 10, y:20]}, {deleted}]

Even in the database we would just use inserts, no more updates and no more deletes. The events are stored in a log (ironically, the database itself works the same way internally). A log is a fully ordered, append-only queue. Once we use and store events we get some extras besides immutability: an audit trail, an undo stack, recovery, …
We could externalize the event stream in a message queue and could monitor it, replay it to reproduce bugs, distribute it. The possibilities are endless.

But. That’s all nice and fine. I have one more question: what’s the current state? A user should see the current state, and other parts of the system need it, too (not to mention that I – coming from a mutable-state way of thinking – would also feel better seeing it).

So what’s the current state?

All events applied in order.

OK. But isn’t it expensive to do this all the time?

Yes!

Here another concept from databases helps us: materialized views. We can easily translate in our minds between the new immutable, event-driven way and the old in-place update way. It is just the same data in different representations (if we are only interested in the current state). If we store the current state as a materialized view (or cache) alongside the event log, we can have both.
Every part of the program which needs the current state gets an immutable copy of it. If this part needs to know when something changes, it can observe the events and act accordingly. This way mutability is pushed to the borders, to the parts where the current state is shown (like the UI layer).
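
To make the replay idea concrete, here is a minimal sketch in C++ (the event and state types are made up to mirror the point example above, not taken from any particular framework): the current state is simply a fold over the event log.

#include <optional>
#include <type_traits>
#include <variant>
#include <vector>

struct Point { double x; double y; };

// the events from the example above
struct PointCreated       { double x, y; };
struct PointTranslatedXBy { double dx; };
struct PointMovedTo       { double x, y; };
struct PointDeleted       { };

using PointEvent = std::variant<PointCreated, PointTranslatedXBy, PointMovedTo, PointDeleted>;

// the "materialized view": all events applied in order
std::optional<Point> currentState(std::vector<PointEvent> const& log)
{
  std::optional<Point> state;
  for (auto const& event : log)
  {
    std::visit([&](auto const& e)
    {
      using E = std::decay_t<decltype(e)>;
      if constexpr (std::is_same_v<E, PointCreated>)
        state = Point{e.x, e.y};
      else if constexpr (std::is_same_v<E, PointTranslatedXBy>)
        { if (state) state->x += e.dx; }
      else if constexpr (std::is_same_v<E, PointMovedTo>)
        { if (state) *state = Point{e.x, e.y}; }
      else if constexpr (std::is_same_v<E, PointDeleted>)
        state.reset();
    }, event);
  }
  return state;
}

If replaying the whole log becomes too expensive, this view can be kept as a cache and updated incrementally whenever a new event is appended.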

Quantities in C++ and User Defined Literals

Some weeks ago one of my colleagues wrote about the use and implementation of physical quantities in C#. If you are writing an application in the technical or scientific domain, chances are high that you should follow his advice and use a suitable representation of physical quantities instead of plain primitive values. The good news is that you can easily port/implement quantities in modern C++ or use existing libraries like Boost.Units.

With C++11 you can go one step further using the so-called user-defined literals. This feature allows the definition of suffixes for integer, floating-point, character and string literals to produce objects of the desired (quantity) type. While there is nothing wrong with using the multiplication operator to produce quantity instances, user-defined literals provide just a little bit more syntactic sugar:

// Your quantity classes...
class Angle;
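// ...plus a unit constant 'degrees' (assumed here), so that 'value * degrees' yields an Angle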

// operators for user-defined literals
constexpr Angle operator "" _deg(long double deg)
{
    return deg * degrees;
}

constexpr Angle operator "" _deg(unsigned long long int deg)
{
    return deg * degrees;
}

constexpr Angle operator "" _rad(long double rad)
{
    return (rad * 180 / M_PI) * degrees;
}

// add more if needed

This allows you to write code like:

Angle rightAngle = 90_deg;
Angle halfCircle = 3.141_rad;
Angle fullCircle = 4 * 90_deg;

In many cases this looks a tad simpler and cleaner than using the multiplication operator in conjunction with a unit, especially in more complex formulas. There are a few things about quantities and user-defined literals in C++ I find noteworthy:

  • These literals are only supported for the built-in literal types. If exact calculation with better than floating-point precision is needed, raw literals (instead of the cooked ones explained above) and decimal libraries have to be used. For raw literals you have to parse the characters of the literal yourself (see the sketch after this list).
  • User-defined literal suffixes need to start with an underscore (_) to avoid clashes with current and future standard library literals. There are, for example, some nice literals for durations in the <chrono> date and time standard library.
  • If you implement your literal operators as constexpr they can be evaluated at compile time, meaning slightly increased compile times and zero runtime overhead.
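
As a rough sketch of such a raw literal operator (the exact-number type ‘Decimal’ is only a placeholder for whatever decimal library you use, and ‘degrees’ is the unit constant assumed above):

// raw literal operator: the characters of the literal, e.g. "90.000001", arrive unparsed
Angle operator "" _deg(const char* digits)
{
    // parse the digits yourself, e.g. into an exact decimal type, then build the quantity
    return Decimal(digits) * degrees;
}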

For some more in-depth discussion of user-defined literals, have a look at the blog series by Andrzej Krzemieński.

What’s your time, database?

Time is a difficult subject. Especially time zones and daylight saving time. Sounds easy? Well, take a look.
Adding layers in software development complicates the issue, and every layer has its own view of time. Let’s start with an example: we write a simple application which stores time-based data in a SQL database, e.g. Oracle. The table has a column named ‘at’. Since we don’t want to mess around with timezones, we use a column type without timezone information; in Oracle this would be ‘Date’ if we do not need milliseconds and ‘Timestamp’ if we need them. In Java with plain JDBC we can extract it with a call to ‘getTimestamp’:

Date timestamp = resultSet.getTimestamp("at");

The problem: now we have a timestamp in our local timezone. But where was it converted? Oracle itself has two timezone settings: one for the database and one for the session. We can query them with:

select DBTIMEZONE from dual;

and

select SESSIONTIMEZONE from dual;

First Oracle uses the time zone set in the session, then the database one. The results of those queries are interesting though: some return a named timezone like ‘Europe/Berlin’, others return an offset like ‘+01:00’. Here a first subtle detail is important: a named timezone uses the offset and the daylight saving rules of that timezone, while an offset setting uses only the fixed offset and no daylight saving at all. So ‘+01:00’ would just add 1 hour to UTC regardless of the date.
In our example changing these two settings does not change our time conversion. These timezone settings only apply to other column types: timestamp with (local) timezone.
Going up one layer the JDBC API reveals an interesting tidbit:

Timestamp getTimestamp(int columnIndex) throws SQLException

Retrieves the value of the designated column in the current row of this ResultSet object as a java.sql.Timestamp object in the Java programming language.

Sounds about right, but wait, there’s another method:

Timestamp getTimestamp(int columnIndex, Calendar cal) throws SQLException

Retrieves the value of the designated column in the current row of this ResultSet object as a java.sql.Timestamp object in the Java programming language. This method uses the given calendar to construct an appropriate millisecond value for the timestamp if the underlying database does not store timezone information.

Just as in Oracle we can use a named timezone or an offset:

Date timestamp = resultSet.getTimestamp("at", Calendar.getInstance(TimeZone.getTimeZone("GMT+1:00")));

This way we have control over what is extracted from the database and how. The next time you work with time-based information, take a close look. And if you work with Java, use Joda-Time.

Keep your ovens clean

If you have build dependencies that require the build system to be altered, like registered DLLs, there is a workaround that might save you from snowflaking all your machines.

Let’s assume for a moment that you are a baker, producing different types of pastries in your small bakery. The production process is always the same: prepare the dough, put it in the oven, wait some time and retrieve the most delicious buns or bread. If we can abstract the real baking process to these steps, it’s the same as with software: prepare the source code, put it in the compiler, wait some time and retrieve the most delicious binary or executable. There is only one difference: the oven of the baker is a self-contained, closed system, while our compilers require a distinct system setup around them in order to produce anything edible. The oven is independent of the kitchen around it, the compiler is dependent on its environment. To finish the analogy, what would a baker say if he couldn’t bake bread in his oven unless he nurtured a certain type of yeast in his kitchen?

A most unpleasant case

While developing a platform-dependent application recently, we met a most unpleasant case of a build dependency on a third-party library. It was an old dynamic link library (DLL) that required registration in the Windows registry. There was no other way than to register the DLL using the regsvr32 utility. If you didn’t do this, the build process would abort with an error stating unmet dependencies. If you ran the resulting program on a machine without the registered DLL, it would crash with a runtime error complaining about the missing registry entry. And by the way, there are two totally independent regsvr32 utilities on a 64-bit Windows system, one for 32-bit and one for 64-bit registrations. No, the name of the latter one isn’t regsvr64, that would be way too easy.

We accepted the fact that you need to prepare your system if you want to run the program, but we quarreled a lot with the nuisance that you need to alter your system just to build the software. This process of alteration is called snowflaking in the DevOps mentality and it’s not a desired activity. We would need to alter every build machine in our continuous integration cluster that comes into contact with the project. And we would need to de-snowflake them again afterwards, because this kind of tinkering adds up to inscrutable side-effects very fast.

A practicable workaround

We found a way around the abovementioned snowflaking for our build servers. It’s not a solution, it’s only a workaround, as it solves the immediate problem but produces some lesser problems on the way. Let’s look at what we did.

At first, our situation could be described with this module diagram:

[Module diagram: our system depends directly on the problematic DLL.]

We couldn’t modify the problematic DLL itself – it was a given binary. But we could wrap it in our own DLL. Wrapping less pleasant things into something you can control is a proven technique, even in baking, by the way. We now had a system layout that looks like this:

[Module diagram: our system now depends on our own wrapper DLL, which forwards to the problematic DLL.]

Nothing gained so far, just that we now have a layer outside our system that can provide the functionality of the DLL and is actually under our control. The wrapper really does nothing on its own but forward each call to the DLL. To profit from this indirection, we need to introduce another module, like this:

[Module diagram: a second, stub DLL offering the same interface as the wrapper, but without any dependencies.]

The second module provides the same interface as the first, but does nothing, not even forwarding anywhere. It’s a complete stub, just there to be uncomplicated during the build process. The goal is to build the system using this “empty” DLL and then replace it with the “problematic” wrapper DLL afterwards. The only question is: how do we build that problematic wrapper DLL? Here’s the workaround part of the solution: we actually had to compile it once on a snowflaked system and add the binary to the project repository. Good thing our target system’s specification is known, so we only need to do this for one platform. Because we are reasonably sure that the DLL interface will not change over time (it had every opportunity in the last ten years and didn’t use it), we can assume that the interface of our two wrapper DLLs also won’t change. So it’s not too problematic to check in a precompiled binary that needs to satisfy an interface that’s reproduced with every build cycle. Still, we need to keep an eye on the method signatures of our two wrapper DLLs. If one of them changes, the modification needs to be replicated on the other wrapper, too. It’s a classic duplication.
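
As a sketch of how these two interchangeable DLLs could look (the function name, the DLL name and the forwarding mechanism are made up for illustration, the real interface was of course different):

// wrapper.cpp – the “problematic” wrapper: forwards each call to the registry-dependent DLL
#include <windows.h>

using CalculateFn = int (*)(int);

extern "C" __declspec(dllexport) int CalculateSomething(int input)
{
  // load the problematic DLL lazily and forward the call to it
  static HMODULE module = ::LoadLibraryA("problematic.dll");
  static auto forward = reinterpret_cast<CalculateFn>(::GetProcAddress(module, "CalculateSomething"));
  return forward(input);
}

// stub.cpp – same exported interface, no dependencies at all
extern "C" __declspec(dllexport) int CalculateSomething(int /*input*/)
{
  return 0; // never called in production; only there so the build links cleanly
}

Which of the two DLLs ends up next to the executable is then a packaging decision, not a build-time requirement.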

When we balanced the duplication in the interfaces of the two wrapper DLLs against the snowflaking of every CI and developer machine, we found our aversion to snow outweighing the other negative aspects. Your mileage may vary.

Conclusion

We kept our build ovens clean by introducing a wrapping layer around the problematic dependency and then using the benefits of indirection by switching to a non-problematic stub during the build cycle. The technique is very old, but still useful and powerful.

Physical Quantities in C#

Scientific applications usually perform lots of calculations with physical quantities. If you do not represent them properly in your code you run the risk of mixing them up. For example, it’s easy to add a value in meters to a value in kilometers if you simply work with variables of primitive types like double or decimal. It can help to encode the unit in the variable name, e.g. massInKilogram, but it’s much better to let the type system handle it.

So here’s some C# code which models a generic physical quantity and its unit:

public abstract class Quantity<T> where T : Quantity<T>, new()
{
    private decimal value;

    public decimal In(Unit<T> unit)
    {
        return value / unit.Factor;
    }

    public string ToStringIn(Unit<T> unit)
    {
        return string.Format("{0} {1}", In(unit), unit.Text);
    }

    public class Unit<Q> where Q : Quantity<Q>, new()
    {
        public decimal Factor { get; private set; }
        public string Text { get; private set; }

        public Unit(string representation, decimal factor)
        {
            Text = representation;
            Factor = factor;
        }

        public static Q operator *(decimal value, Unit<Q> unit)
        {
            var quantity = new Q();
            quantity.value = value * unit.Factor;
            return quantity;
        }
    }
}

With this base class we can easily implement some quantities:

class Duration : Quantity<Duration>
{
    public static readonly Unit<Duration> Millisecond = new Unit<Duration>("ms", 1e-3M);
    public static readonly Unit<Duration> Second = new Unit<Duration>("s", 1M);
    public static readonly Unit<Duration> Minute = new Unit<Duration>("min", 60M);
}

class Mass : Quantity<Mass>
{
    public static readonly Unit<Mass> Milligram = new Unit<Mass>("mg", 1e-3M);
    public static readonly Unit<Mass> Gram = new Unit<Mass>("g", 1M);
    public static readonly Unit<Mass> Kilogram = new Unit<Mass>("kg", 1e+3M);
}

And this is how they are used:

var t = 30 * Duration.Second;
Console.WriteLine(t.ToStringIn(Duration.Minute));
Console.WriteLine(t.ToStringIn(Duration.Second));
Console.WriteLine(t.ToStringIn(Duration.Millisecond));

var m = 10 * Mass.Gram;
Console.WriteLine(m.ToStringIn(Mass.Milligram));
Console.WriteLine(m.ToStringIn(Mass.Gram));
Console.WriteLine(m.ToStringIn(Mass.Kilogram));

The Unit class overloads the multiplication operator. Instead of calling a constructor directly, we use multiplication with a unit to create new instances of a quantity.

The type system prevents nonsense like this:

// does not compile
(10 * Mass.Gram).In(Duration.Minute);

You will probably want to overload more operators of the quantity classes depending on your use case. You can also overload operators to produce instances of new quantities:

Velocity v = (3.5M * Length.Kilometer) / (10 * Duration.Minute);

class Velocity : Quantity<Velocity>
{
    public static readonly Unit<Velocity> MeterPerSecond = new Unit<Velocity>("m/s", 1M);
}

class Length : Quantity<Length>
{
    public static readonly Unit<Length> Meter = new Unit<Length>("m", 1M);
    public static readonly Unit<Length> Kilometer = new Unit<Length>("km", 1e+3M);

    public static Velocity operator /(Length s, Duration t)
    {
        return (s.In(Length.Meter) / t.In(Duration.Second)) * Velocity.MeterPerSecond;
    }
}

If you do not want to hand craft your quantities you might want to check out existing libraries for working with quantities like QuantityTypes.

Packaging kernel modules/drivers using DKMS

Hardware drivers on Linux need to fit the running kernel. When the drivers you need are not part of the distribution in use, you need to build and install them yourself. While this may be OK to do once or twice, it soon becomes tedious doing it after every kernel update.

The Dynamic Kernel Module Support (DKMS) may help in such a situation: the module source code is installed on the target machine and can be rebuilt and installed automatically when a new kernel is installed. While veterans may be willing to manually maintain their hardware drivers with DKMS, end users do not care about the underlying system that keeps their hardware working. They want to manage their software updates using the tools of their distribution and everything should be working automagically.

I want to show you how to package a kernel driver as an RPM package, hiding all of the complexities of DKMS from the user. This requires several steps:

  1. Preparing/patching the driver (aka kernel module) to include dkms.conf and follow the required conventions of DKMS
  2. Creating an RPM spec file to install the source and tool chain and integrate the module source with DKMS

While there is native support for RPM packaging in DKMS, I found the following procedure more intuitive and flexible.

Preparing the module source

You need at least a small file called dkms.conf to describe the module source to the DKMS system. It usually looks like this:

PACKAGE_NAME="menable"
PACKAGE_VERSION=3.9.18.4.0.7
BUILT_MODULE_NAME[0]="menable"
DEST_MODULE_LOCATION[0]="/extra"
AUTOINSTALL="yes"

Also make sure that the source tarball extracts into the directory /usr/src/$PACKAGE_NAME-$PACKAGE_VERSION! If you do not like /usr/src as the location for your kernel module sources you can configure it in /etc/dkms/framework.conf.

Preparing the spec file

Since we are not building and packaging a binary but installing source code that is registered, built and installed on the target machine, the spec file looks a bit different than usual: we have no build step; instead we just install the source tree and potentially additional files like udev rules or documentation, and perform all DKMS work in the post-install and pre-uninstall scripts. All that means that we build a noarch RPM and depend on dkms, the kernel sources and a compiler.

Preparation section

Here we unpack and patch the module source, e.g.:

Source: %{module}-%{version}.tar.bz2
Patch0: menable-dkms.patch
Patch1: menable-fix-for-kernel-3-8.patch

%prep
%setup -n %{module}-%{version} -q
%patch0 -p0
%patch1 -p1

Install section

Basically we just copy the source tree to /usr/src in our build root. In this example we have to install some additional files, too.

%install
rm -rf %{buildroot}
mkdir -p %{buildroot}/usr/src/%{module}-%{version}/
cp -r * %{buildroot}/usr/src/%{module}-%{version}
mkdir -p %{buildroot}/etc/udev/rules.d/
install udev/10-siso.rules %{buildroot}/etc/udev/rules.d/
mkdir -p %{buildroot}/sbin/
install udev/men_path_id udev/men_uiq %{buildroot}/sbin/

Post-install section

In the post-install script of the RPM we add our module to the DKMS system, then build and install it:

occurrences=$(/usr/sbin/dkms status | grep "%{module}" | grep "%{version}" | wc -l)
if [ "$occurrences" -eq 0 ];
then
    /usr/sbin/dkms add -m %{module} -v %{version}
fi
/usr/sbin/dkms build -m %{module} -v %{version}
/usr/sbin/dkms install -m %{module} -v %{version}
exit 0

Pre-uninstall section

If the user uninstalls our package, we need to remove our module from DKMS to leave the system in a clean state. So we need a pre-uninstall script like this:

/usr/sbin/dkms remove -m %{module} -v %{version} --all
exit 0

Conclusion

Packaging kernel modules using DKMS and RPM is not really hard and provides huge benefits to your users. There are some little quirks like the post-install and pre-uninstall scripts, but after you get that working, you (and your users) are rewarded with a great, fully integrated experience. You can use the full spec file of the driver in the above example as a template for your own driver packages.

Universal skills every software developer can benefit from

Disclaimer: I develop software. Professionally for almost 15 years. These are some skills that helped and help me and I think they could help any software developer.

Debug

I cannot tell you how many times debugging saved me. I debug with print statements, with IDEs, with command line debuggers and with my brain. Understanding how a system works is crucial. Which parts are connected and which are not. Asking what if. And asking what happened.

Profile

If things go slowly, I need to know why. Users and stakeholders expect a certain speed. And rightly so. But beware: if you optimize for one scenario, others might suffer. Profiling and optimization are a question of priorities: which tasks should be fast and which can be slow.

Sketch

If I work with or for others, understanding each other and the models and concepts they use is essential. Sketching helps me to illustrate my view, my understanding of their view and the misunderstandings between us. Even when not communicating with others I can communicate with myself. Sketching a model of what is in my head or what I plan helps me reason about it. You don’t need to be a master artist, simple shapes like lines, rectangles, circles and arrows get you a long way.

Concepts (domain and technical)

Everybody thinks in concepts and models. May they be from a technical or a user domain. In my daily work I need to understand, to develop, to extract and to communicate concepts. Concepts come from very different places: code has concepts, domains have concepts, our profession has concepts and all kinds of people have concepts. Concepts form the base for my communication.

Budgeting

Time is limited. Concentration is limited. Constraints in a project help me to focus. To be pragmatic. But they also push me to plan and to estimate. I need to develop a notion of how long a feature takes, how important it is, how risky.

Evaluate

In my work I constantly evaluate. From small scale: which implementation is better, to large scale: which technology, which architecture should I use. To evaluate, I need to know the goals and the criteria. My experience helps me and hinders me. I know that no evaluation can be objective. Everyone has their personal favorites (and dislikes). Some things can only be seen afterwards. So I have to remind myself not to take too long with the evaluation and to start using the result.

Talk

With other developers I can talk in IT lingo. With designers I need to use words from design. With users and stakeholders I speak so that they understand. My job is not only writing code. My job is to explain my job to others. If they do not understand me, it is my fault, not theirs. I do not need to bother them with every detail, but sometimes they are the only ones who can decide. I need to tell them what their options are and what the consequences of each of them are – in their words.

Plan and prepare

There are two kinds of people: the ones who like to prepare and the ones who like to improvise. I am in the middle. Some things can be prepared and planned. It is useful when you can move work ahead of time or when you have (more) options when you need to improvise. Don’t overplan. Remember: a plan is there to be changed.

Improvise

During my career I face new situations every now and then. I cannot plan for them, or I didn’t. When I am in a client meeting, in a demo presentation or at the production system and something does not go as planned, I need to do something. Sometimes right now. It helps me to have an emergency mode. In these situations I focus on what I have: my brain and my voice, and on what I can (maybe) get: help from others, a pencil and paper, time. And sometimes I need to say: I am sorry.

Lead / own

If I work on an issue, I need to own it. If I lead a project, I need to own it. My career: I need to own it. I am the one who is responsible. That does not mean I have to perform the work myself. I also need to know when I am not the right person for the job. But I need to decide. The work, the project, the career is not a boat which drifts in a giant ocean; I need to take the paddle and use it.

Collaborate

I do not work alone. I have teammates. I have clients. My goal is to work with them towards a common goal. For this I need to collaborate. To delegate. To talk and to ask. To lead and to follow.

Define goals

Goals are measurable. I can ask: did I reach that goal? And answer with yes or no. Often we define something like “I want to get better at X” as a goal. But that isn’t a goal. Think of a goal as a destination, not a direction. It is important to strike the balance between focusing too little and too much on our goals. But without even knowing what the goals are, we just wander around.

Reflect and how to get feedback

I confess: I cannot live without feedback for too long. Am I on my way to the goal? Is this code any good? Was this the right decision? Do I make progress? Does it work? Reflection and feedback are stepping stones for me. A base from which I can move to higher mountains.

Ask

Asking is hard. Asking for help is hard. Asking about things you don’t (and maybe should) know is hard. But it helps immensely in learning. Be curious. Asking in a way that the other person understands your question and you get the answers you need takes practice. So: feel free to ask 🙂