Arbitrary 2D curves with Highcharts

Highcharts is a versatile JavaScript charting library for the web. The library supports all kinds of charts: scatter plots, line charts, area charts, bar charts, pie charts and more.

A data series of type line or spline connects the points in the order of the x-axis when rendered. It is possible to invert the axes by setting the property inverted: true in the chart options.

var chart = $('#chart').highcharts({
  chart: {
    inverted: true
  },
  series: [{
    name: 'Example',
    type: 'line',
    data: [
      {x: 10, y: 50},
      {x: 20, y: 56.5},
      {x: 30, y: 46.5},
      // ...
    ]
  }]
});

A line chart with inverted axes

Connecting points in an arbitrary order

Connecting the points in an arbitrary order, however, is not supported by default. I couldn’t find a Highcharts plugin which supports this either, so I implemented a solution by modifying the getGraphPath function of the series object:

var series = $('#chart').highcharts().series[0];
var oldGetGraphPath = series.getGraphPath;
Object.getPrototypeOf(series).getGraphPath = function(points, nullsAsZeroes, connectCliffs) {
  points = points || this.points;
  points.sort(function(a, b) {
    return a.sortIndex - b.sortIndex;
  });
  return oldGetGraphPath.call(this, points, nullsAsZeroes, connectCliffs);
};

The new function sorts the points by a new property called sortIndex before the line path of the chart gets rendered. This new property must be assigned to each point object of the series data:

series.setData([
  {x: 10, y: 50, sortIndex: 1},
  {x: 20, y: 56.5, sortIndex: 2},
  {x: 30, y: 46.5, sortIndex: 3},
  // ...
], true);

Now we can render charts with points connected in an arbitrary order like this:

A line chart with points connected in arbitrary order

Modern developer #3: Framework independent JavaScript architecture

Usually small JavaScript projects start with simple wiring of callbacks onto DOM elements. This works fine while the project is in its initial state, but in a short time it gets out of hand: we end up with spaghetti wiring and callback hell. Often at this point we look for help by adopting a framework, hoping that its codified best practices will pull us out of the mud. But now our project is tied to the new framework.
In search of another, framework-independent way I stumbled upon the scalable architecture by Nicholas Zakas.
It starts by defining modules as independent units. This means:

  • Modules keep their JavaScript and DOM elements separate from the rest of the application
  • Modules must not reference other modules
  • Modules may not register callbacks on or reference DOM elements outside their own DOM tree
  • To communicate with the outside world, modules may only call the sandbox

The sandbox is a central hub. We use a pub/sub system:

sandbox.publish({type: 'event_type', data: {}});

sandbox.subscribe('event_type', this.callback.bind(this));

Besides being an event bus, the sandbox is responsible for access control and provides the modules with a consistent interface.
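
A minimal sandbox covering only the pub/sub part could be sketched like this (my own illustration, not Zakas’ reference implementation):

function Sandbox() {
  this.subscribers = {};
}

Sandbox.prototype.subscribe = function(type, callback) {
  (this.subscribers[type] = this.subscribers[type] || []).push(callback);
};

Sandbox.prototype.publish = function(event) {
  (this.subscribers[event.type] || []).forEach(function(callback) {
    callback(event);
  });
};
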
Modules are started and stopped (in case of misbehaving) in the application core. You could also use the core as an anti corruption layer for third party libraries.
This architecture gives a frame for implementation. But implementing it raises other questions:

  • How do the modules update their state?
  • Where do we call the backend?

Handling state

A global model resides in, or is referenced by, the application core. In addition, every module has its own model. Updates are always done in application event handlers, not directly in DOM event handlers.
Let me illustrate. Say we have a module which keeps track of selected entries:

function Module(sandbox) {
  this.sandbox = sandbox;
  this.selectedEntries = [];
}

Traditionally our DOM event handler would update our model:

button.on('click', function(e) {
  // entry would be derived from the clicked DOM element
  this.selectedEntries.push(entry);
}.bind(this));

A better way would be to publish an application event, subscribe the module to this event and handle it in the application event handler:

this.sandbox.subscribe('entry_selected', this.entrySelected.bind(this));

Module.prototype.entrySelected = function(event) {
  this.selectedEntries.push(event.entry);
};

button.on('click', function(e) {
  this.sandbox.publish({type: 'entry_selected', entry: entry});
}.bind(this));

Other modules can now listen for entry selection. The module itself does not need to know who selected the entry, and all internal communication about the selection is visible. This makes it possible to use event sourcing.

Calling the backend

No module should directly call the backend. For this, a special kind of module called an extension is used. Extensions encapsulate cross-cutting concerns and shield the application from communication with other systems.
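
For illustration, a hypothetical extension could subscribe to application events and translate them into backend calls; the endpoint and the 'selection_saved' event below are made-up names:

function BackendExtension(sandbox) {
  this.sandbox = sandbox;
  this.sandbox.subscribe('entry_selected', this.saveSelection.bind(this));
}

BackendExtension.prototype.saveSelection = function(event) {
  var sandbox = this.sandbox;
  // hypothetical endpoint; publish a follow-up event once the call succeeds
  $.post('/api/selections', {entry: event.entry}).done(function() {
    sandbox.publish({type: 'selection_saved', entry: event.entry});
  });
};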

Summary

This architecture keeps UI parts together with their corresponding code, flattens callbacks, centralizes communication with the help of application events, and encapsulates communication with the outside world. On top of that it is simple and small.

Remote development with PyCharm

PyCharm is a fantastic tool for Python development. One cool feature that I quite like is its support for remote development. We have quite a few projects that need to interact with special hardware, and that hardware is often not attached to the computer we’re developing on.
In order to test your programs, you still need to run them on that computer, and doing this without tool support can be especially painful. You need to use a tool like scp or rsync to transmit your code to the target machine and then execute it using ssh. This all results in painfully long and error-prone iterations.
Fortunately, PyCharm offers tool support for this in its professional edition. After some setup, it allows you to develop just as you would on a local machine. Here’s a small guide on how to set it up with an Ubuntu Vagrant virtual machine, connecting over ssh. It works just as nicely on remote computers.

1. Create a new deployment configuration

In Tools->Deployment->Configurations, click the small + in the top left corner. Pick a name and choose the SFTP type.
The add server dialog

In the “Connection” Tab of the newly created configuration, make sure to uncheck “Visible only for this project”. Then set up your host and login information. The root path is usually a central location you have access to, like your home folder. You can use the “Autodetect” button to set this up.

The connection settings for my VM

On the “Mappings” Tab, set the deployment path for your project. This would be the specific folder of your project within the root you set on the previous page. Clicking the associated “…” button here helps, and even lets you create the target folder on the remote machine if it does not exist yet.

2. Activate the upload

Now check “Tools->Deployment->Automatic Upload”. This will do an upload when you change a file, so you still need to do the initial upload manually via “Tools->Deployment->Upload to “.

3. Create a project interpreter

Now the files are synced up, but the runtime environment is not on the remote machine. Go to the “Project Interpreter” page in File->Settings and click the little gear in the top-right corner. Select “Add Remote”.

The remote interpreter configuration dialog
It should have the Deployment configuration you just created already selected. Once you click ok, you’re good to go! You can run and debug your code just like on a local machine.

Have fun developing Python applications remotely.

The migration path for Java applets

Java applets have been a deprecated technology for a while. It has become increasingly difficult to run Java applets in modern browsers. Browser vendors are phasing out support for browser plugins based on the Netscape Plugin API (NPAPI), such as the Java plugin, which is required to run applets in the browser. Microsoft Edge and Google Chrome have already stopped supporting NPAPI plugins, and Mozilla Firefox will stop supporting them at the end of 2016. This effectively means that Java applets will be dead by the end of the year.

However, it does not necessarily mean that Java applet based projects and code bases, which can’t be easily rewritten to use other technologies such as HTML5, have to become useless. Oracle recommends the migration of Java applets to Java Web Start applications. This is a viable option if the application is not required to be run embedded within a web page.

To convert an applet to a standalone Web Start application you put the applet’s content into a frame and call the former applet’s init() and start() methods from the main() method of the main class.
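
A minimal sketch of such a main class could look like this (the applet class name DemoApplet and the frame size are just assumptions for illustration):

import javax.swing.JFrame;
import javax.swing.SwingUtilities;

public class WebStartMain {

    public static void main(String[] args) {
        SwingUtilities.invokeLater(new Runnable() {
            @Override
            public void run() {
                DemoApplet applet = new DemoApplet(); // the former applet class (assumed name)
                JFrame frame = new JFrame("Webstart Demo Project");
                frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
                frame.getContentPane().add(applet);
                applet.init();   // call the former applet's lifecycle methods
                applet.start();
                frame.setSize(800, 600);
                frame.setVisible(true);
            }
        });
    }
}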

Java Web Start applications are launched via the Java Network Launching Protocol (JNLP). This involves the deployment of an XML file on the web server with a “.jnlp” file extension and content type “application/x-java-jnlp-file”. This file can be linked on a web page and looks like this:

<?xml version="1.0" encoding="UTF-8"?>
<jnlp spec="1.0+" codebase="http://example.org/demo/" href="demoproject.jnlp">
 <information>
  <title>Webstart Demo Project</title>
  <vendor>Softwareschneiderei GmbH</vendor>
  <homepage>http://www.softwareschneiderei.de</homepage>
  <offline-allowed/>
 </information>
 <resources>
  <j2se version="1.7+" href="http://java.sun.com/products/autodl/j2se"/>
  <jar href="demo-project.jar" main="true"/>
  <jar href="additional.jar"/>
 </resources>
 <application-desc name="My Project" main-class="com.schneide.demo.WebStartMain">
 </application-desc>
 <security>
  <all-permissions/>
 </security>
 <update check="always"/>
</jnlp>

The JNLP file describes among other things the required dependencies such as the JAR files in the resources block, the main class and the security permissions.

The JAR files must be placed relative to the URL of the “codebase” attribute. If the application requires all permissions like in the example above, all JAR files listed in the resources block must be signed with a certificate.
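
Signing can be done with the jarsigner tool that ships with the JDK, for example (the keystore file and key alias below are placeholders):

jarsigner -keystore codesign.jks demo-project.jar mykey
jarsigner -keystore codesign.jks additional.jar mykey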

deployJava.js

The JNLP file can be linked directly in a web page and the Web Start application will be launched if Java is installed on the client operating system. Additionally, Oracle provides a JavaScript API called deployJava.js, which can be used to detect whether the required version of Java is installed on the client system and redirect the user to Oracle’s Java download page if not. It can even create an all-in-one launch button with a custom icon:

<script src="https://www.java.com/js/deployJava.js"></script>
<script>
  deployJava.launchButtonPNG = 'http://example.org/demo/launch.png';
  deployJava.createWebStartLaunchButton('http://example.org/demo/demoproject.jnlp', '1.7.0');
</script>

Conclusion

The Java applet technology has now reached its “end of life”, but valuable applet-based projects can be saved with relatively little effort by converting them to Web Start applications.

C++ Coroutines on Windows with the Fiber API

Last week, I had the chance to try out coroutines as a way to cooperatively interleave long tasks with event-processing. Unlike threads, where interaction between threads can happen at any time, coroutines need to yield control explicitly, which arguably makes synchronisation a little simpler, especially in (legacy) systems that are not designed for concurrency. Of course, since coroutines do not run at the same time, you do not get the perks of concurrency either.

If you don’t know coroutines, think of them as functions that can be paused and resumed.

Unlike many other languages, C++ does not have built-in support for coroutines just yet. There are, however, several alternatives. On Windows, you can use the Fiber API to implement coroutines easily.

Here’s some example code of how that works:

auto coroutine=make_shared<FiberCoroutine>();
coroutine->setup([](Coroutine::Yield yield)
{
  for (int i=0; i<3; ++i)
  {
    cout << "Coroutine " 
         << i << std::endl;
    yield();
  }
});

int stepCount = 0;
while (coroutine->step())
{
  cout << "Main " 
       << stepCount++ << std::endl;
}

Somewhat surprisingly, at least if you have never seen coroutines, this will produce the two outputs alternately:

Coroutine 0
Main 0
Coroutine 1
Main 1
Coroutine 2
Main 2

Interface

Since fibers are not the only way to implement coroutines, and since we want to keep our client code nicely insulated from the Windows API, there’s a pure-virtual base class as an interface:

class Coroutine
{
public:
  using Yield = std::function<void()>;
  using Run = std::function<void(Yield)>;

  virtual ~Coroutine() = default;
  virtual void setup(Run f) = 0;
  virtual bool step() = 0;
};

Typically, creation of a Coroutine type allocates all the resources it needs, while setup “primes” it with an inner “Run” function that can use an implementation-specific “Yield” function to pass control back to the caller, which is whoever calls step.

Implementation

The implementation using fibers is fairly straight-forward:

class FiberCoroutine
  : public Coroutine
{
public:
  FiberCoroutine()
  : mCurrent(nullptr), mRunning(false)
  {
  }

  ~FiberCoroutine()
  {
    if (mCurrent)
      DeleteFiber(mCurrent);
  }

  void setup(Run f) override
  {
    if (!mMain)
    {
      mMain = ConvertThreadToFiber(NULL);
    }
    mRunning = true;
    mFunction = std::move(f);

    if (!mCurrent)
    {
      mCurrent = CreateFiber(0,
        &FiberCoroutine::proc, this);
    }
  }

  bool step() override
  {
    SwitchToFiber(mCurrent);
    return mRunning;
  }

  void yield()
  {
    SwitchToFiber(mMain);
  }

private:
  void run()
  {
    while (true)
    {
      mFunction([this]
                { yield(); });
      mRunning = false;
      yield();
    }
  }

  static VOID WINAPI proc(LPVOID data)
  {
    reinterpret_cast<FiberCoroutine*>(data)->run();
  }

  static LPVOID mMain;
  LPVOID mCurrent;
  bool mRunning;
  Run mFunction;
};

LPVOID FiberCoroutine::mMain = nullptr;

The idea here is that the caller and the callee are both fibers: lightweight threads without concurrency. Running the coroutine switches to the callee’s fiber, i.e. the one executing the run function. Yielding switches back to the caller. Note that it is currently assumed that all callers are on the same thread, since each thread that participates in the switching needs to be converted to a fiber initially, even the caller. The current version only keeps the fiber for the initial thread in a single static variable. However, it should be possible to support multiple threads by replacing the single static fiber pointer with a map that maps each thread to its associated fiber.
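
As a sketch of that idea (using a thread_local here instead of an explicit map; this is not part of the original implementation), each thread could lazily convert itself and remember its own main fiber:

// Sketch: per-thread "main" fiber instead of a single static pointer
static LPVOID mainFiberForCurrentThread()
{
  thread_local LPVOID mainFiber = nullptr;
  if (!mainFiber)
    mainFiber = ConvertThreadToFiber(NULL);
  return mainFiber;
}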

Note that you cannot return from the fiberproc – that will just terminate the whole thread! Instead, just yield back to the caller and either re-use or destroy the fiber.

Assessment

Fiber-based coroutines are a nice and efficient way to model non-linear control flow explicitly, but they do not come without downsides. For example, while this example worked flawlessly when compiled with Visual Studio, under Cygwin it just terminates without even an error. If you’re used to working with the Visual Studio debugger, it may surprise you that the caller gets hidden completely while you’re in the run function. The run function’s stack completely replaces the caller’s stack until you call yield(). This means that you cannot find out who called step(). On the other hand, if you’re actually doing a lot of processing in the run function, this is quite nice for profiling, as the “processing” call tree seemingly has its own root in the call tree.

I just wish the Visual Studio debugger had a way to view the states of the different fibers like it has for threads.

Alternatives

  • On Linux, you can use the ucontext API.
  • Visual Studio 2015 also has another, newer, implementation.
  • Coroutines can be implemented using threads and condition-variables.
  • There’s also Boost.Coroutine, if you need an independent implementation of the concept. From what I gather, they only use Fibers optionally, and otherwise do the required “trickery” themselves. Maybe this even keeps the caller-stack visible – it is certainly worth exploring.
Using passwords with Jenkins CI server

    For many of our projects the Jenkins continuous integration (CI) server is one important cornerstone. The well known “works on my machine” means nothing in our company. Only code in repositories and built, tested and packaged by our CI servers counts. In addition to building, testing, analyzing and packaging our projects we use CI jobs for deployment and supervision, too. In such jobs you often need some sort of credentials like username/password or public/private keys.

    If you are using username/password they do not only appear in the job configuration but also in the console build logs. In most cases this is undesirable but luckily there is an easy way around it: using the Environment Injector Plugin.

    In the plugin configuration you can “inject passwords to the build as environment variables” for use in your commands and scripts.
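
    For example, a hypothetical shell build step could then use the injected variable (here assumed to be called DEPLOY_PASSWORD) without it ever showing up in the output:

    curl --fail -u deploy:"$DEPLOY_PASSWORD" -T target/app.war https://deploy.example.org/upload/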

    The nice thing about this is that the passwords are not only masked in the job configuration but also in the console logs of the builds!

    Another alternative doing mostly the same is the Credentials Binding Plugin.

    There is a lot more to explore when it comes to authentication and credential management in Jenkins: you can define credentials at the global level, use public/private key pairs and ssh agents, connect to an LDAP directory and much more. Just do not sit back and provide security-related data in plaintext in job configurations or your deployment scripts!

    Generating a spherified cube in C++

    In my last post, I showed how to generate an icosphere, a subdivided icosahedron, without any fancy data-structures like the half-edge data-structure. Someone in the reddit discussion on my post mentioned that a spherified cube is also nice, especially since it naturally lends itself to a relatively nice UV-map.

    The old algorithm

    The exact same algorithm from my last post can easily be adapted to generate a spherified cube, just by starting on different data.

    The initial cube mesh

    After 3 steps of subdivision with the old algorithm, that cube will be transformed into this:

    The cube after three steps of four-way subdivision

    Slightly adapted

    If you look closely, you will see that the triangles in this mesh are a bit uneven. The vertical lines on the yellow side seem to curve around a bit. This is because, unlike in the icosahedron, the triangles in the initial box mesh are far from equilateral. The four-way split does not work very well with this.

    One way to improve the situation is to use an adaptive two-way split instead:
    The adaptive two-way split

    Instead of splitting all three edges, we’ll only split one. The adaptive part here is that the edge we’ll split is always the longest that appears in the triangle, therefore avoiding very long edges.

    Here’s the code for that. The only tricky part is the modulo-counting to get the indices right. The vertex_for_edge function does the same thing as last time: providing a vertex for subdivision while keeping the mesh connected in its index structure.

    TriangleList
    subdivide_2(ColorVertexList& vertices,
      TriangleList triangles)
    {
      Lookup lookup;
      TriangleList result;
    
      for (auto&& each:triangles)
      {
        auto edge=longest_edge(vertices, each);
        Index mid=vertex_for_edge(lookup, vertices,
          each.vertex[edge], each.vertex[(edge+1)%3]);
    
        result.push_back({each.vertex[edge],
          mid, each.vertex[(edge+2)%3]});
    
        result.push_back({each.vertex[(edge+2)%3],
          mid, each.vertex[(edge+1)%3]});
      }
    
      return result;
    }
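
    The longest_edge helper is not shown above; a possible sketch (assuming the Triangle type and a glm::vec3 position member on the vertices, as in the previous post) returns the corner index at which the longest edge of the triangle starts:

    int longest_edge(ColorVertexList& vertices, Triangle const& triangle)
    {
      int longest=0;
      float maxLength=0.f;
      for (int i=0; i<3; ++i)
      {
        auto const& a=vertices[triangle.vertex[i]].position;
        auto const& b=vertices[triangle.vertex[(i+1)%3]].position;
        float length=glm::length(b-a);
        if (length>maxLength)
        {
          maxLength=length;
          longest=i;
        }
      }
      return longest;
    }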
    

    Now the result looks a lot more even:
    The spherified cube after adaptive two-way splits

    Note that this algorithm only doubles the triangle count per iteration, so you might want to execute it twice as often as the four-way split.
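
    For example (a sketch, assuming a subdivision count as in the previous post):

    for (int i=0; i<2*subdivisions; ++i)
      triangles=subdivide_2(vertices, triangles);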

    Alternatives

    Instead of using this generic triangle-based subdivision, it is also possible to generate the six sides as subdivided patches, as suggested in this article. That approach works naturally if you want to have seams between your six sides. However, it is more specialized towards this particular geometry and will require extra “stitching” if you don’t want seams.

    Code

    The code for both the icosphere and the spherified cube is now on github: github.com/softwareschneiderei/meshing-samples.

    My favorite Unix tool

    Awk is a little language designed for the processing of lines of text. It is available on every Unix (since Version 7) or Linux system. The name is an acronym of the names of its creators: Aho, Weinberger and Kernighan.

    Since spending a couple of minutes learning awk, I have found it quite useful in my daily work. It is my favorite tool in the base set of Unix tools due to its simplicity and versatility.

    Typical use cases for awk scripts are log file analysis and the processing of character-separated value (CSV) formats. Awk allows you to easily filter, transform and aggregate lines of text.

    The idea of awk is very simple. An awk script consists of a number of patterns, each associated with a block of code that gets executed for an input line if the pattern matches:

    pattern_1 {
        # code to execute if pattern matches line
    }
    
    pattern_2 {
        # code to execute if pattern matches line
    }
    
    # ...
    
    pattern_n {
        # code to execute if pattern matches line
    }
    

    Patterns and blocks

    The patterns are usually regular expressions:

    /error|warning/ {
        # executed for each line, which contains
        # the word "error" or "warning"
    }
    
    /^Exception/ {
        # executed for each line starting
        # with "Exception"
    }
    

    There are some special patterns, namely the empty pattern, which matches every line …

    {
        # executed for every line
    }
    

    … and the BEGIN and END patterns. Their blocks are executed before and after the processing of the input, respectively:

    BEGIN {
        # executed before any input is processed,
        # often used to initialize variables
    }
    
    END {
        # executed after all input has been processed,
        # often used to output an aggregation of
        # collected values or a summary
    }
    

    Output and variables

    The most common operation within a block is the print statement. The following awk script outputs each line containing the string “error”:

    /error/ { print }
    

    This is basically the functionality of the Unix grep command, which is filtering. It gets more interesting with variables. Awk provides a couple of useful built-in variables. Here are some of them:

    • $0 represents the entire current line
    • $1 to $n represent the 1st to n-th field of the current line
    • NF holds the number of fields in the current line
    • NR holds the number of the current line (“record”)

    By default awk interprets whitespace sequences (spaces and tabs) as field separators. However, this can be changed by setting the FS variable (“field separator”).
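
    For example, to process a file with semicolon-separated fields, the separator can be set in a BEGIN block (or with the -F command-line option):

    BEGIN { FS = ";" }

    { print $1 }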

    The following script outputs the second field for each line:

    { print $2 }
    

    Input:

    John 32 male
    Jane 45 female
    Richard 73 male
    

    Output:

    32
    45
    73
    

    And this script calculates the sum and the average of the second fields:

    {
        sum += $2
    }
    
    END {
        print "sum: " sum ", average: " sum/NR
    }
    

    Output:

    sum: 150, average: 50
    

    The language

    The language that can be used within a block of code is based on C syntax without types and is very similar to JavaScript. All the familiar control structures like if/else, for, while, do and operators like =, ==, >, &&, ||, ++, +=, … are there.

    Semicolons at the end of statements are optional, like in JavaScript. Comments start with a #, not with //.

    Variables do not have to be declared before usage (no ‘var’ or type). You can simply assign a value to a variable and it comes into existence.

    String concatenation does not have an explicit operator like “+”. Strings and variables are concatenated by placing them next to each other:

    "Hello " name ", how are you?"
    # This is wrong: "Hello" + name + ", how are you?"
    

    print is a statement, not a function. Parentheses around its parameter list are optional.

    Functions

    Awk provides a small set of built-in functions. Some of them are:

    length(string), substr(string, index, count), index(string, substring), tolower(string), toupper(string), match(string, regexp).

    User-defined functions look like JavaScript functions:

    function min(number1, number2) {
        if (number1 < number2) {
            return number1
        }
        return number2
    }
    

    In fact, JavaScript adopted the function keyword from awk. User-defined functions can be placed outside of pattern blocks.

    Command-line invocation

    An awk script can be either read from a script file with the -f option:

    $ awk -f myscript.awk data.txt
    

    … or it can be supplied in-line within single quotes:

    $ awk '{sum+=$2} END {print "sum: " sum " avg: " sum/NR}' data.txt
    

    Conclusion

    I hope this short introduction helped you add awk to your toolbox if you weren’t familiar with awk yet. Awk is a neat alternative to full-blown scripting languages like Python and Perl for simple text processing tasks.

    Introduction to Debian packaging

    In former posts I wrote about packaging your software as RPM packages for a variety of use cases. The other big binary packaging system on Linux systems is DEB for Debian, Ubuntu and friends. Both serve the purpose of convenient distribution, installation and update of binary software artifacts. They define their dependencies and describe what the package provides.

    How do you provide your software as DEB packages?

    The master guide to Debian packaging can be found at https://www.debian.org/doc/manuals/maint-guide/. It is a very comprehensive guide spanning many pages and providing loads of information. Building a usable package of your software for your clients can be a matter of minutes if you know what to do. So I want to show you the basic steps; for refinement you will need to consult the guide or other resources specific to the software you want to package. For example, there are several guides specific to the packaging of software written in Python. I find it hard to determine what the current and recommended way to build Debian packages for Python is, because the guides from the last ten years or so differ. You may of course say “Just use pip for Python” 🙂

    Basic tools

    The basic tools for building Debian packages are debhelper and dh-make. To check the resulting package you will need lintian in addition. You can install them and the basic software build tools using:

      sudo apt-get install build-essential dh-make debhelper lintian
    

    The Debian packaging system uses make and several scripts under the hood to help build the binary packages out of source tarballs.

    Your first package

    First you need a tar archive of the software you want to package. With Python setuptools you could use python setup.py sdist to generate the tarball. Then you run dh_make on it to generate metadata and the package build environment for your package. Now you have to edit the metadata files, namely control, copyright and changelog. Finally you run dpkg-buildpackage to generate the package itself. Here is an example of the necessary commands:

    mkdir hello-deb-1.0
    cd hello-deb-1.0
    dh_make -f ../hello-deb-1.0.tar.gz
    # edit deb metadata files
    vi debian/control
    vi debian/copyright
    vi debian/changelog
    dpkg-buildpackage -us -uc
    lintian -i -I --show-overrides hello-deb_1.0-1_amd64.changes
    

    The control file roughly resembles RPM’s SPEC file. Package name, description, version and dependency information belong there. Note that Debian is very strict when it comes to the naming of packages, so make sure you use the pattern ${name}-${version}.tar.gz for the archive and that it extracts into a corresponding directory without the extension, e.g. ${name}-${version}.
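
    A stripped-down control file for the example above could look roughly like this (maintainer, section and description are placeholders):

    Source: hello-deb
    Section: utils
    Priority: optional
    Maintainer: Jane Doe <jane@example.org>
    Build-Depends: debhelper (>= 9)
    Standards-Version: 3.9.6

    Package: hello-deb
    Architecture: any
    Depends: ${shlibs:Depends}, ${misc:Depends}
    Description: short summary of hello-deb
     A longer description follows on continuation lines,
     each indented by one space.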

    If everything went OK, several files were generated in your base directory:

    • The package itself as .deb file
    • A file containing the changelog and checksums between package versions and revisions ending with .changes
    • A file with the package description ending with .dsc
    • A tarball with the original sources renamed according to debian convention hello-deb_1.0.orig.tar.gz (note the underscore!)

    Going from here

    Of course there is a lot more to the tooling and workflow when maintaining debian packages. In future posts I will explore additional means for improving and updating your packages like the quilt patch management tool, signing the package, symlinking, scripts for pre- and post-installation and so forth.

    Getting better at programming without coding

    Almost two decades ago a programming book was published that had a big impact on my thinking as a software engineer: The Pragmatic Programmer. Most of its tips and practices are still fundamental to my work. If you haven’t read it, give it a try.
    Over the years I refined some practices and began to focus on additional topics. One of the most important topics, both in the original tips and in my profession, is to care and think about my craft.
    In this post I collected a list of tips and practices which helped and still help me in my daily work.

    Think about production

    Since I develop software to be used, thinking early about the production environment is key.

    Deploy as early as possible
    Deployment should be a non-event. Create an automatic deployment process to keep it that way, and deploy as early as possible to remove the risk of unpleasant surprises.

    Master should always be deployable
    Whether you use master or another branch, you need a branch which could always be deployed without risk.

    Self containment
    Package (as many as possible of) your dependencies into your deployment. Keep surprises from missing repositories or dependencies to a minimum, ideally none.

    Use real data in development
    Real data has characteristics, gaps and inconsistencies you cannot imagine. During development use real data to experience problems before they get into production.

    No data loss
    Deploying should not result in a loss of data. Your application should shut down gracefully. Deployment often deletes the application directory or uses a fresh one, so no files or in-memory state should be used for persistence by the application. Applications should be stateless processes.

    Rollback
    If anything goes wrong or the newly deployed application has a serious bug, you need to revert it to the last version. Make rollback a requirement.

    No user interruption
    Users work with your application. Even if they do not lose data or their current work when you deploy, they do not like surprises.

    Separate one-off tasks
    Software should be running and available to the user. Do not delay startup with one-off admin tasks like migration, cache warm-up or search index creation. Make your application start in seconds.

    Manage your runs
    Problems, performance degradation and bugs should be visible. Monitor your key metrics, log important things and detect problems in the application’s data. Make it easy to combine, search and graph your recordings.

    Make it easy to reproduce
    When a bug occurs or your user has a problem, you need to retrace how the system arrived at its current state. Store the users’ actions so that they can be easily replayed.

    Think about users

    Software is used by people. In order to craft successful applications I have to consider what these people need.

    No requirements, just jobs
    Users use the software to get stuff done. Features and requirements confuse solutions with problems. Understand what situation the user is in and what he needs to reach his goal. This is the job you need to support.

    Work with the user
    In order to help the user with your software you need to relate to his situation. Observing, listening, talking to and working alongside him helps you see his struggles and where software can help.

    Speak their language
    Users think and speak in their domain, not in the domain of software. When you want to help them, you and the user interface of your software need to speak like a user, not like software.

    Value does not come from effort
    The most important things your software does are not the ones which need the most effort. Your users value things which help them the most. Find these.

    Think about modeling

    A model is at the core of your software. Every system has a state. How you divide and manage this state is crucial to evolving and understanding your creation.

    Use the language of the domain
    In your core you model concepts from the user’s domain. Name them accordingly, and reasoning about them, and with the users, becomes easier.

    Everything has one purpose
    Divide your model by the purpose of its parts.

    Separate read from write
    You won’t get the model right from the start. It is easier to evolve the model if read and write operations have their own model. You can even have different read models for different use cases. (see also CQRS and Turning the database inside out)

    Different parts evolve at different speeds
    Not all parts of a model are equal. Some stand still, some change frequently. Some are specified, about some others you learn step by step. Some need to be constant, some need to be experimented with. Separating parts by their speed of change will help you deal with change.

    Favor immutability
    State is hard. State is needed. Isolating state helps you understand a running system. Isolating state helps you remove coupling.

    Keep it small
    Reasoning about a large system is complicated. Keep effects at bay and models small. Separating and isolating things gives you a chance to keep an overview of the whole system.

    Think about approaches

    Getting to all this is a journey.

    When thinking use all three dimensions
    Constraining yourself to a computer screen for thinking deprives you of one of your best thinking tools: spatial reasoning. Use whiteboards, walls, paper and more to remove the boundaries from your thoughts.

    Crazy 8
    Usually you think in your old ways. Getting out of your (mental) box is not easy. Crazy 8 is a method to create 8 solutions (sketches for UI) in a very short time frame.

    Suspend judgement
    As a programmer you are fast to assess proposals and solutions. Don’t do that. Learn to suspend your judgement. Some good ideas are not so obvious; you will kill them with your judgement.

    Get out
    Thinking long and hard about a problem can put you into blindfold mode. After stating the problem, get out. Take a walk. Do not actively think or talk about the problem. This simulates the “shower effect”: getting the best ideas when you do not actively think about the problem.

    Assume nothing
    Assumptions bear risks. They can make your project fail. Approach your project with what is certain. Choose your direction to explore and find your assumptions. Each assumption is an obstacle, a question that needs an answer. Ask your users. Design hypotheses and experiments to prove them. (see From agile to UX for a detailed approach)

    Pre-mortem
    Another way to find blind spots in your thinking is to frame for failure. Construct a scenario in which your project has failed. Then reason about what made it fail. Where are your biggest risks? (see How to map your fears for details)

    MVA – Minimum, valuable action
    Every step, every experiment should be as lightweight as possible. Do not craft a beautiful prototype if a sketch would suffice. Choose the most efficient method to get further to your goal.

    Put it into a time box
    When you need to experiment, constrain it. Define a time in which you want to have an answer. You do not need to go the whole way to get an impression.