Prioritizing: order of tasks

This entry extends the post on micro-projects with two additional prioritizing strategies.

Strategy: Cover your ass

This is the strategy suggested by the previous post. After preparing a list of milestones and their estimates, you pick the next most problematic milestone and work on it. A list of tasks ordered by this strategy helps you to “fail fast”: in less than half of the estimated time you will know whether you will succeed or bust the budget – even when little is known about the concrete implementation or the estimates are off by some amount.

Strategy: Most value first

In a lot of projects, this is the strategy used by customers. All features not absolutely necessary to achieve the goal are cut or declared optional. If you look at minesweeper: you can play it without the highscore, the timer, the modifiable field size or probably even without the random component (i.e. make 99 fields), but not without the mines. After you have determined that your budget is too small, you know what the customer can live without, and if you have the option to cut features, then this is probably the strategy for you.

Strategy: Most painful, when omitted

This is the strategy best applied before the pain is real. In contrast to the other two strategies, it contains hard-to-quantify criteria like:

  • Quality
  • Security
  • Performance

The cost to implement them is non-linear and not directly visible. The temptation is great to spend the time and money on more profitable features instead. They can be prioritized by:

  • probability of occurrence
  • damage in the case of occurrence
  • implementation cost
  • growth of the above factors with time

This is a lot of work for a single task – most likely you will set up project-wide guidelines and default scenarios that will be reviewed by recurring audits.
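
To make these factors tangible, here is a rough scoring sketch; the class name, the 1-5 scale and the weighting are all invented for illustration, not taken from any guideline:

// Purely illustrative: scores are rough gut estimates on a 1-5 scale.
public class ConcernPriority {
    private final int probabilityOfOccurrence; // 1 = unlikely, 5 = almost certain
    private final int damageOnOccurrence;      // 1 = annoying, 5 = fatal
    private final int implementationCost;      // 1 = cheap, 5 = expensive
    private final int growthOverTime;          // 1 = stays constant, 5 = grows fast

    public ConcernPriority(int probability, int damage, int cost, int growth) {
        this.probabilityOfOccurrence = probability;
        this.damageOnOccurrence = damage;
        this.implementationCost = cost;
        this.growthOverTime = growth;
    }

    // Higher score = tackle earlier: likely, costly-to-ignore and fast-growing
    // concerns win, a high implementation cost pushes a concern down the list.
    public int score() {
        return (probabilityOfOccurrence * damageOnOccurrence * growthOverTime) - implementationCost;
    }
}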

From ugly to pretty – Three steps is all it takes

A story about what can happen if you challenge your students to improve inferior code. With just three simple steps, the code becomes beautiful.

I have been giving lectures in software engineering for over a decade now. One major topic is testing, specifically unit tests. Other cornerstones are refactorings and code readability. So whenever I have the chance to challenge my students in cross-topic aspects of software development, it’s almost always a source of insight for them and especially for me. But one golden moment holds a special place in my memory. This is the (rather elaborate, sorry) story of this moment.

During a lecture about unit tests with JUnit, my students had the task to develop tests for a bank account class. That’s about as boring as testing can be – the account was related to a customer and had a current balance. The customer can withdraw money, but only some customers can overdraw their account. To spice things up a bit, we also added the mock object framework EasyMock to the mix. While I would recommend other mock frameworks for production usage, the learning curve of EasyMock is just about right for first time exposure in a “sheep dip” fashion.

Our first test dealt with drawing money from an empty account that can be overdrawn:

@Test
public void canWithdrawOnCredit() {
  Customer customer = EasyMock.createMock(Customer.class);
  EasyMock.expect(customer.canOverdraw()).andReturn(true);
  EasyMock.replay(customer);
  Account account = new Account(customer);
  Euro required = new Euro(30);

  Euro cash = account.withdraw(required);

  assertEquals(new Euro(30), cash);
  assertEquals(new Euro(-30), account.balance());
  EasyMock.verify(customer);
}

The second test made sure that this withdrawal behaviour only works for customers with sufficient credit standing. We decided to pay out nothing (0 Euro) if the customer tries to withdraw more money than his account currently holds:

@Test
public void cannotTakeUpCredit() {
  Customer customer = EasyMock.createMock(Customer.class);
  EasyMock.expect(customer.canOverdraw()).andReturn(false);
  EasyMock.replay(customer);
  Account account = new Account(customer);
  Euro required = new Euro(30);

  Euro cash = account.withdraw(required);

  assertEquals(Euro.ZERO, cash);
  assertEquals(Euro.ZERO, account.balance());
  EasyMock.verify(customer);
}

As you can tell, a lot of copy and paste was going on in the creation of this test. Just look at the name of the local variable “required” – it’s misleading now. Right up to this point, my main topic was the usage of the mock framework, not perfect code. So I explained the five stages of normalized mock-based unit tests (initialize, train mocks, execute tested code, assert results, verify mocks) and then changed the topic by expressing my displeasure about the duplication and the inferior readability of the code (it even tries to trick you with the “required” variable!). Now it was up to my students to improve our situation (this trick works only a few times for every course before they preventively become even pickier than me). A student accepted the challenge and gave advice:

First step: Extract Method refactoring

The obvious first step was to extract the duplication in its own method and adjust the calls by their parameters. This is an easy refactoring that will almost always improve the situation. Let’s see where it got us. Here is the extracted method:

protected void performWithdrawalTestWith(
    boolean customerCanOverdraw,
    Euro amountOfWithdrawal,
    Euro expectedCash,
    Euro expectedBalance) {
  Customer customer = EasyMock.createMock(Customer.class);
  EasyMock.expect(customer.canOverdraw()).andReturn(customerCanOverdraw);
  EasyMock.replay(customer);
  Account account = new Account(customer);

  Euro cash = account.withdraw(amountOfWithdrawal);

  assertEquals(expectedCash, cash);
  assertEquals(expectedBalance, account.balance());
  EasyMock.verify(customer);
}

And the two tests, now really concise:

@Test
public void canWithdrawOnCredit() {
  performWithdrawalTestWith(
      true,
      new Euro(30),
      new Euro(30),
      new Euro(-30));
}

 

@Test
public void cannotTakeUpCredit() {
  performWithdrawalTestWith(
      false,
      new Euro(30),
      Euro.ZERO,
      Euro.ZERO);
}

Well, that did resolve the duplication indeed. But the test methods now lacked any readability. They appeared as if somebody had extracted all the semantics out of the code. We were unhappy, but decided to interpret the current code as an intermediate step to the second refactoring:

Second step: Introduce Explaining Variable refactoring

In the second step, the task was to re-introduce the semantics back into the test methods. All parameters were nameless, so that was our angle of attack. By introducing local variables, we gave the parameters meaning again:

@Test
public void canWithdrawOnCredit() {
  boolean canOverdraw = true;
  Euro amountOfWithdrawal = new Euro(30);
  Euro payout = new Euro(30);
  Euro resultingBalance = new Euro(-30);

  performWithdrawalTestWith(
      canOverdraw,
      amountOfWithdrawal,
      payout,
      resultingBalance);
}

 

@Test
public void cannotTakeUpCredit() {
  boolean canOverdraw = false;
  Euro amountOfWithdrawal = new Euro(30);
  Euro payout = Euro.ZERO;
  Euro resultingBalance = Euro.ZERO;

  performWithdrawalTestWith(
      canOverdraw,
      amountOfWithdrawal,
      payout,
      resultingBalance);
}

That brought back the meaning to the test methods, but didn’t improve readability. The code wasn’t intentionally cryptic any more, but still far from being intuitively understandable – and that’s what really readable code should be. If even novices can read your code fluently and grasp the main concepts in the first pass, you’ve created expert code. I challenged the student to further transform the code, without any idea how to carry on myself. My student hesitated, but came up with the decisive refactoring within seconds:

Third step: Rename Variable refactoring

The third step doesn’t change the structure of the code, but its approachability. Instead of naming the local variables after their usage in the extracted method, we name them after their purpose in the test method. A first time reader won’t know about the extracted method (and preferably shouldn’t need to know), so it’s not in the best interest of the reader to foreshadow its details. Instead, we concentrate on telling the reader a coherent story:

@Test
public void canWithdrawOnCredit() {
  boolean aCustomerThatCanOverdraw = true;
  Euro heWithdraws30Euro = new Euro(30);
  Euro receivesTheFullAmount = new Euro(30);
  Euro andIsNow30EuroInTheRed = new Euro(-30);

  performWithdrawalTestWith(
      aCustomerThatCanOverdraw,
      heWithdraws30Euro,
      receivesTheFullAmount,
      andIsNow30EuroInTheRed);
}

 

@Test
public void cannotTakeUpCredit() {
  boolean aCustomerThatCannotOverdraw = false;
  Euro heTriesToWithdraw30Euro = new Euro(30);
  Euro butReceivesNothing = Euro.ZERO;
  Euro andStillHasABalanceOfZero = Euro.ZERO;

  performWithdrawalTestWith(
      aCustomerThatCannotOverdraw,
      heTriesToWithdraw30Euro,
      butReceivesNothing,
      andStillHasABalanceOfZero);
}

If the reader is able to ignore some crude verbalization and special characters, he can read the test out loud and instantly grasp its meaning. The first lines of every test method are a bit confusing, but necessary given Java’s lack of named parameters.

The result might remind you a lot of Behavior Driven Development notation and that’s probably not by chance. In a few minutes during that programming exercise, my students taught themselves to think in scenarios or stories when approaching unit tests. I couldn’t have taught it any better – instead, I got enlightened by this exercise, too.

How to use partial mocks in real life

Partial mocks are an advanced feature of modern mocking libraries like mockito. They retain the original code of a class, stubbing only the methods you specify. If you build your system largely from scratch, you will most likely not need them. But sometimes there is no easy way around them when working with dependencies that were not designed for testability. Let us look at an example:

/**
 * Evil dependency we cannot change
 */
public final class CarvedInStone {

    public CarvedInStone() {
        // may do unwanted things
    }

    public int thisHasSideEffects(int i) {
        return 31337;
    }

    // many more methods
}

public class ClassUnderTest {

    public int computeSomethingInteresting() {
        // some interesting stuff
        int intermediateResult = new CarvedInStone().thisHasSideEffects(42);
        // more interesting code
        return intermediateResult * 1337;
    }
}

We want to test the computeSomethingInteresting() method of our ClassUnderTest. Unfortunately we cannot replace CarvedInStone, because it is final and does not implement an interface containing the methods of interest. With a small refactoring and partial mocks we can still test almost the complete class:

public class ClassUnderTest {
    public int computeSomethingInteresting() {
        // some interesting stuff
        int intermediateResult = intermediateResultsFromCarvedInStone(42);
        // more interesting code
        return intermediateResult * 1337;
    }

    protected int intermediateResultsFromCarvedInStone(int input) {
        return new CarvedInStone().thisHasSideEffects(input);
    }
}

We refactored the call to our dependency into a protected method that we can stub out with partial mocking, so the class can be tested like this:

public class ClassUnderTestTest {
    @Test
    public void interestingComputation() throws Exception {
        ClassUnderTest cut = spy(new ClassUnderTest());
        doReturn(1234).when(cut).intermediateResultsFromCarvedInStone(42);
        assertEquals(1649858, cut.computeSomethingInteresting());
    }
}

Caveat: Do not use the usual when-thenReturn-style:

when(cut.intermediateResultsFromCarvedInStone(42)).thenReturn(1234);

with partial mocks because the real method will get called once!

So the only untested code left is a simple delegation. Measures like this refactoring and partial mocking generally serve as a first step, not the destination.

Where to go from here

To go the whole way, we would encapsulate all unmockable dependencies in wrapper objects that provide the functionality we need and inject them into our ClassUnderTest. Then we can replace our wrapper(s) easily using regular mocking, as sketched below.
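
A minimal sketch of how that could look, assuming the usual static JUnit and Mockito imports; the wrapper name and its method are invented for illustration:

// Hypothetical wrapper around the unmockable dependency
public class CarvedInStoneWrapper {
    public int intermediateResultsFor(int input) {
        return new CarvedInStone().thisHasSideEffects(input);
    }
}

// The class under test gets the wrapper injected instead of creating CarvedInStone itself
public class ClassUnderTest {
    private final CarvedInStoneWrapper carvedInStone;

    public ClassUnderTest(CarvedInStoneWrapper carvedInStone) {
        this.carvedInStone = carvedInStone;
    }

    public int computeSomethingInteresting() {
        int intermediateResult = carvedInStone.intermediateResultsFor(42);
        return intermediateResult * 1337;
    }
}

The test then needs no spy any more, just regular stubbing:

public class ClassUnderTestTest {
    @Test
    public void interestingComputation() throws Exception {
        // the wrapper is a plain mock, so when-thenReturn is safe here
        CarvedInStoneWrapper wrapper = mock(CarvedInStoneWrapper.class);
        when(wrapper.intermediateResultsFor(42)).thenReturn(1234);

        assertEquals(1649858, new ClassUnderTest(wrapper).computeSomethingInteresting());
    }
}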

Doing all this can be a lot of work and/or risk depending on the situation, so the depicted process serves as a low-risk intermediate step for getting as much important code under test as possible.

Note that the wrappers themselves stay largely untestable like our protected delegating method.

JavaScript for Java developers

Although JavaScript and Java sound and look similar they are very different in their details and philosophies. Here I try to compare the two languages regardless of their libraries and frameworks. The goal is that you as a Java developer get an understanding of what JavaScript is and how it differs from Java.

One hint: you can use jsfiddle.net to try out some of the snippets here or any JavaScript.
Note: right now this document discusses JavaScript 1.4; if there is enough interest I will try to update it to a newer version (preferably ES5).

Primitives

Java – char, boolean, byte, short, int, long, float, double
JavaScript – none

Primitives are elements of the language which aren’t objects and therefore have no methods defined on them. JavaScript has no primitives.

Immutable types

Java – String (16bit), Character, Boolean, Byte, Short, Integer, Long, Float, Double, BigDecimal, BigInteger
JavaScript – String (16bit), Number(double, 64bit floating point), Boolean, RegExp

The next special kind of object are immutable objects, objects which represent values and cannot be changed.
JavaScript has four value objects: String (16bit like in Java), Number (64bit floating point like a double in Java), Boolean (like in Java) and RegExp (similar to Java). Java differentiates the number types further and introduces a Character.
Strings in JavaScript can be in single or double quotes and the sign to escape is the backslash (‘\’) just like in Java.
A regexp can be created via new RegExp or with ‘/’ like:

/a*/

Arrays

Java – special
JavaScript – normal object

Another base type in every language is the array. In Java the array is treated as a special kind of object: it has a length property and is the only object which supports the bracket ‘[]’ operator. In Java you create and access an array in the following way:

// creation
String[] empty = new String[2]; // an empty array with length 2
String[] array = new String[] {"1", "2"};

// read
empty[0]; // => null
empty[5]; // => ArrayIndexOutOfBoundsException

// write
empty[0] = "Test"; // empty is now ["Test", null]
empty[2] = "Test";  // => ArrayIndexOutOfBoundsException

JavaScript handles creation and access in a different way:

// creation
var empty = new Array(2); // an empty array with length 2
var array = ["1", "2"];

// read
empty[0]; // => undefined
empty[5]; // => undefined

// write
empty[0] = "Test"; // empty is now ["Test", undefined]
empty[2] = "Test"; // empty is now ["Test", undefined, "Test"]

The reason for these strange patterns is that an array in JavaScript is just an object with the indexes as properties: reading an undefined property returns undefined, whereas setting an undefined property creates the property on the object. More on this under objects.

Comments

Java – // and /**/
JavaScript – // and /**/

Both languages allow the line ‘//’ and the block ‘/* */’ comments whereas the line comment is preferred in JavaScript because commenting out a regular expression can lead to syntax errors:

/a*/

Commenting out this regular expression with the block comment would result in

/* /a*/ */

which is a syntax error.

Boolean Truth

Java – true: true, false: false
JavaScript – false: false, null, undefined, '' (the empty string), 0, NaN; true: all other values

Another stumbling block for Java developers is the handling of expressions in a boolean context. JavaScript not only treats false as false but also treats null, undefined, the empty string, 0 and NaN as falsy values. All other values evaluate to true.

Literals

Java – “, ‘, numbers, booleans
JavaScript – “, ‘, [], {}, /, numbers, booleans

Literals are a shorthand for constructing objects inside the language. Java only supports string, number and boolean creation with literals; everything else needs a new operator. In JavaScript you can create strings, numbers, booleans, arrays, objects and regular expressions with literals:

"A string";
'Another string';
var number = 5;
var whatif = true;
var array = [];
var object = {};
var regexp = /a*b+/;

Operators

Java – postfix (expr++ expr--), unary (++expr --expr +expr -expr ~ !), multiplicative (* / %), additive (+ -), shift (<< >> >>>), relational (< > <= >= instanceof), equality (== !=), bitwise AND (&), bitwise exclusive OR (^), bitwise inclusive OR (|), logical AND (&&), logical OR (||), ternary (?:), assignment (= += -= *= /= %= &= ^= |= <<= >>= >>>=)
JavaScript – object creation (new), function call (()), increment/decrement (++ --), unary (+expr -expr ~ !), typeof, void, delete, multiplicative (* / %), additive (+ -), shift (<< >> >>>), relational (< > <= >= in instanceof), equality (== != === !==), bitwise AND (&), bitwise exclusive OR (^), bitwise inclusive OR (|), logical AND (&&), logical OR (||), ternary (?:), assignment (= += -= *= /= %= &= ^= |= <<= >>= >>>=)

Java and JavaScript have many operators in common, but JavaScript has some additional ones. ‘void’ is an operator that returns undefined and is rarely useful. ‘delete’ removes properties from objects and hence also elements from arrays. ‘in’ tests for a property of an object but does not work on literal strings and numbers.

var string = "A string";
"length" in string // => error
var another = new String('Another string');
"length" in another // => true

The unary operators ‘+’ and ‘-’ try to convert their operands to numbers, and if the conversion fails they return NaN:

+'5' // => 5
-'2' // => -2
-'a' // => NaN

Typeof returns the type of its operand as a string. Beware the difference between literal creation and creation via new for numbers and strings.

typeof undefined // => "undefined"
typeof null // => "object"
typeof true // => "boolean"
typeof 5 // => "number"
typeof new Number(5) // => "object"
typeof 'a' // => "string"
typeof new String('a') // => "object"
typeof document // => Implementation-dependent
typeof function() {} // => "function"
typeof {} // => "object"
typeof [] // => "object"

All host environment specific objects like window or the HTML elements in a browser have implementation-dependent return values.
Note that typeof also returns “object” for an array; if you need to identify an array you must dig deeper:

Object.prototype.toString.call([]) // => "[object Array]"

The two pairs of equality operators (== != and === !==) behave differently. The shorter ones ‘==’ and ‘!=’ use type coercion which produces strange results and breaks transitivity:

'' == '0' // => false
0 == '' // => true
0 == '0' // => true

‘===’ and ‘!==’ work as expected: if both operands are of the same type and have the same value, they compare as equal. Having the same value means they are either the same object, or they are a literal string, a literal number or a literal boolean with the same value regardless of length or precision.

5 === 5 // => true
5 === 5.0 // => true
'a' === "a" // => true
5 === '5' // => false
[5] === [5] // => false
new Number(5) === new Number(5) // => false
var a = new Number(5);
a === a  // => true
false === false // => true

Declaration

Java – type
JavaScript – var

Since JavaScript is a dynamically typed language you do not specify types when declaring parameters, fields or local variables; you just use var:

var a = new Number(5);

Scope

Java – block
JavaScript – function

Scope is a common pitfall in JavaScript. Scope defines the code area in which a variable is valid and defined. Java has block scope, which means a variable is defined and valid only inside the block in which it is declared.

int a = 2;
int b = 1;
if (a > b) {
	int number = 5;
}
// no number defined here

JavaScript on the other hand has function scope which can lead to some confusion for developers coming from block scoped languages.

var f = function() {
  var a = 2;
  var b = 1;
  if (a > b) {
	var number = 5;
  }
  alert(number); // number is valid here
};
// but not here

One thing to remember is that closures hold a reference to, not a copy of, the variables from their outer scope.

for (var i = 0; i < 3; i++) {
  setTimeout(function() {
    i; // => always 3
  }, 200);
}

How can you fix this? You need to add a wrapper function and pass the values you need.

for (var i = 0; i < 3; i++) {
  (function(i) {
    setTimeout(function() {
      i; // => 0, 1, 2
    }, 200);
  })(i);
}

Statements

Java – conditional (switch, if/else), loop (while, do/while, for), branch (return, break, continue), exception (throw, try/catch/finally)
JavaScript – conditional (switch (uses ===), if/else), loop (while, do/while, for, for in (beware of prototype chain)), branch (break, continue, return), exception (throw, try/catch/finally), with

The statements which can be used in Java and JavaScript are largely the same, but since JavaScript is dynamically typed you can use them with any types. See the section about boolean truth for the statements which need an expression to evaluate to false or true. Switch uses the ‘===’ operator to match the cases and has the same fall-through pitfall as Java. ‘For in’ iterates over the names of all properties of an object, including those which are inherited via the prototype chain. ‘With’ can be used to shorten the access to objects.

with (object) {
  a = b
}

The problem here is that you don’t know from looking at the code whether a and/or b is a property of object or a global variable. Because of this ambiguity ‘with’ should be avoided.

Object creation

Java – new
JavaScript – new or functional creation / module pattern

In Java you just declare your class

public class Person {
  private final String name;
  
  public Person(String name) {
    this.name = name;
  }
  
  public String getName() {
    return this.name;
  }
}

and instantiate it via new.

Person john = new Person("John");

In JavaScript there is no class keyword but you can create objects via ‘{}’ or ‘new’. Let’s take a look at the functional approach first. The so-called module pattern supports encapsulation (read: private members).

var person = function(name) {
  var private_name = name;
  return {
    get_name: function() {
      return private_name;
    }
  };
};

Now person holds a reference to a factory method and calling it will create a new person.

var john = person('John');

Another more classical and familiar way is to use ‘new’.

var Person = function(name) {
  this.name = name;
};

Person.prototype.get_name = function() {
  return this.name;
};

var john = new Person('John');

But what happens when we leave out the new?

var john = Person('John'); // bad idea!

Now this is bound to window (the global context) and a name property is defined on window. But we can avoid this:

var Person = function(name) {
  if (!(this instanceof Person)) {
    return new Person(name);
  }
  this.name = name;
};

Now you can call Person with or without new and both behave the same. If you don’t want to repeat this for every class you can use the following pattern (adapted from John Resig to make it ES5 strict compatible).

// adapted from makeClass - By John Resig (MIT Licensed) - http://ejohn.org/blog/simple-class-instantiation/
var makeClass = function() {
  var internal = false;
  var create = function(args) {
    if (this instanceof create) {
      if (typeof this.init == "function") {
        this.init.apply(this, internal ? args : arguments);
      }
    } else {
      internal = true;
      return new create(arguments);
    }
  };
  return create;
};

This creates a function which can create classes. You can use it similar to the classical pattern.

var Person = makeClass();
Person.prototype.init = function(name) {
  this.name = name;
};
Person.prototype.get_name = function() {
  return this.name;
};

var john = new Person('John');

But name is now a public member of Person. What if we want it to be private? If we take another look at the functional pattern above, we can use the same mechanism.

var Person = function(name) {
  if (!(this instanceof Person)) {
    return new Person(name);
  }
  var private_name = name;
  this.get_name = function() {
    return private_name;
  };
  this.set_name = function(new_name) {
    private_name = new_name;
  };
};

Now name is also a private member of the Person class. Using makeClass you can achieve it in the following way.

var Person = makeClass();
Person.prototype.init = function(name) {
  var private_name = name;
  this.get_name = function() {
    return private_name;
  };
};

var john = new Person('John');

Encapsulation

Java – visibility modifiers (public, package, protected, private)
JavaScript – public or private (via closures)

As we have seen in the previous section we can have private variables and also methods via the encapsulation of a closure. All other variables and members are public.

Accessing properties

Java – .
JavaScript – . or []

Besides the dot you can also use an object like a hash.

var a = {b: 1};
a.b = 3;
a['b'] = 5;

Accessing non existing properties

Java – prevented by the compiler
JavaScript – get returns undefined, set creates

In Java accessing a property or method of an object which does not exist is prevented by the compiler. In JavaScript the following compiles and runs fine.

var a = {};
a.b;
a.b = 5;

When you access non-existing members of an object you get undefined in return. Setting a non-existing property creates it on the object.

Invocation and this

Java – method
JavaScript – method, function, constructor, apply

JavaScript knows four kinds of invocations: method, function, constructor and apply. A function on an object is called a method and calling it will bind this to the object.

var john = {
  name: "John",
  get_name: function() {
    return this.name; // => this is bound to john
  }
};
john.get_name(); // => John

But there is a potential pitfall: it is not which function you call that matters, but how you call it! This problem can be worked around with the apply/call pattern below.

var john = {
  name: "John",
  get_name: function() {
    return this.name; // => this is bound to the global context
  }
};
var fn = john.get_name;
fn(); // => NOT John

A function which is not a property of an object is just a function and this is bound to the global context (in a browser the global context is the window object).

var get_name = function() {
  return this.name; // this is bound to the global context
};
get_name();

Calling a function with ‘new’ constructs a new object and binds this to it.

var Person = function(name) {
  this.name = name; // => this is bound to john
};
var john = new Person("John");
john.name; // => John

JavaScript is a functional language (some even call it Lisp in C’s clothing) and therefore functions have methods, too. ‘Apply’ and ‘call’ are both methods to call a function with ‘this’ bound explicitly.

var john = {
  name: "John"
};
var get_name = function() {
  return this.name; // this is bound to the john
};
get_name.apply(john); // => John
get_name.call(john); // => John

The difference between ‘apply’ and ‘call’ is just how they take their additional parameters: ‘apply’ needs an array whereas ‘call’ takes them explicitly.

var john = {
  name: "John"
};
var set_name = function(name) {
  this.name = name; // this is bound to the john
};
set_name.apply(john, ["Jack"]); // => Jack
set_name.call(john, "John"); // => John

Variable arguments

Java – …
JavaScript – arguments

In Java you can use variable argument lists via ‘…’. In JavaScript you do not need to declare them: all parameters of a function call are available via arguments, regardless of which parameters are declared.

var sum = function() {
  var result = 0;
  for (var i = 0; i < arguments.length; i++) {
    result += arguments[i];
  }
  return result;
};
sum(1); // => 1
sum(1, 2); // => 3

Although arguments looks like an array, it isn’t one; if you need a real array of the arguments you can use slice to convert it.

var array = Array().slice.call(arguments);

Inheritance

Java – extends, implements
JavaScript – prototype chain

Java can easily inherit types or implementation via implements or extends. JavaScript has no classes and uses another approach called the prototype chain. If you want to create a new object User which inherits from Person you use the prototype attribute.

var Person = function(name) {
  this.name = name;
};

var User = function(username) {
  Person.call(this, username); // emulating call to super
  this.username = username;
};

User.prototype = new Person();

var john = new User('John');
john.name; // => John
john.username; // => John

If I left something out or got something wrong please leave a comment. Also if you think a topic discussed here should be explored in more depth feel free to comment.

Got issues? Treat them like micro-projects

Issues should be the smallest work unit available. But what if it is still larger than you can manage? Here’s a standard process for self-management while working on an issue.

Every professional software developer organizes his work into separable work tasks. These tasks are called issues and are often managed in an issue tracker like Bugzilla or JIRA. In bigger teams, there is a separate project role for assigning and supervising work on the issue level, namely the project manager. But below the level of a single issue, external interference would be micro-management, a state that every sane manager tries to avoid at all costs.

Underneath the radar

But what if a developer isn’t that proficient with self-management? He will struggle on a daily basis, but underneath the radar of good project management. And there is nearly no good literature that deals specifically with these short-range management habits. A good developer will naturally exhibit all traits of a good project manager and apply these traits to every aspect of his work. But to become a good developer, most people (myself included!) need to go through a phase of bad project management and learn from their mistakes (provided they are able to recognize and reflect on them).

An exhaustive framework for issue processing

This blog entry outlines a complete set of rules to handle a work task (issue) like a little project. The resulting process is meant for the novice developer who hasn’t established his successful work routine yet. It is exhaustive, in the sense that it will cover all the relevant aspects, and in the sense that it contains too much management overhead to be efficient in the long run. It should serve as a starting point to adopt the habits. After a while, you will probably adjust and improve it on your own.

A set of core values

The Schneide standard issue process was designed to promote a set of core values that our developers should adhere to. The philosophy of the value set itself contains enough details to provide another blog entry, so here are the values in descending order without further discussion:

  • Reliability: Your commitments need to be trustworthy
  • Communication: You should notify openly of changes and problems
  • Efficiency: Your work needs to make progress after all

As self-evident as these three values seem to be, we often discuss problems that are directly linked to these values.

The standard issue process

The aforementioned rules consist of five steps in a process that needs to be worked through in the given order. Let’s have a look:

  1. Orientation
  2. Assessment
  3. Development
  4. Feedback
  5. Termination

Steps three and four (development and feedback) actually happen in a loop with fixed iteration time.

Step 1: Orientation phase

In this phase, you need to get accustomed to the issue at hand as quickly as possible. Read all information carefully and try to build a mental model of what’s asked of you. Try to answer the following questions:

  • Do I understand the requirements?
  • Does my mental model make sense? Can I explain why the requirements are necessary?
  • Are there aspects missing or not sufficiently specified?

The result of this phase should be the assignment of the issue to you. If you don’t feel up to the task or are unfamiliar with the requirements (e.g. they don’t make sense in your eyes), don’t accept the issue. This is your first and last chance to bail out without breaking a commitment.

Step 2: Assessment phase

You have been assigned to work on the issue, so now you need a plan. Evaluate your mental model and research the existing code for provisions and obstacles. Try to answer the following questions:

  • Where are the risks?
  • How can I partition the work into intermediate steps?

The result of this phase should be a series of observable milestones and a personal estimate of work effort. If you can’t divide the issue and your estimate exceeds a few hours of work, you should ask for help. Communicate your milestones and estimates by writing them down in the issue tracker.

Step 3: Development phase

You have a series of milestones and their estimates. Now it’s time to dive into programming. This is the moment when most self-management effort ends, because the developer never “zooms out” again until he is done or hopelessly stuck. You need periodic breaks to assess your progress and reflect on your work so far. Try to work for an hour (set up an alarm!) and continue with the next step (you will come back here!). Try to answer the following questions:

  • What is the most risky milestone/detail?
  • How long will the milestone take?

The result of this phase should be a milestone list constantly reordered for risk. We suggest a “cover your ass” strategy for novices by tackling the riskiest aspects first. After each period of work (when your alarm goes off), you should make a commit to the repository and run all the tests.

Step 4: Feedback phase

After you’ve done an hour of work, it’s time to back off and reflect. You should evaluate the new information you’ve gathered. Try to answer the following questions:

  • Is my estimate still accurate?
  • Have I encountered unforeseen problems or game-changing information?
  • What crucial details have I discovered just now?

The result of this phase should be an interim report to your manager and to your future self. A comment in the issue tracker is sufficient if everything is still on track. Your manager wants to know about your problems. Call him directly and tell him honestly. The documentation for your future self should be in the issue tracker, the project wiki or the source code. Imagine you have to repeat the work in a year. Write down everything you would want written down.
If your issue isn’t done yet, return to step three and begin another development iteration.

Step 5: Termination phase

Congratulations! You’ve done it. Your work is finished and your estimation probably holds true (otherwise, you would have reported problems in the feedback phases). But you aren’t done yet! Take your time to produce proper closure. Try to answer the following questions:

  • Is the documentation complete and comprehensible?
  • Have you thought about all necessary integration work like update scripts or user manual changes?

The result of this phase should be a merge to the master branch in the repository and complete documentation. When you leave this step, there should be no necessity to ever return to the task. Assume that your changes are immediately published to production. We are talking “going gold” here.

Recapitulation

That’s the whole process. Five steps with typical questions and “artifacts”. It’s a lot of overhead for a change that takes just a few minutes, but can be a lifesaver for any task that exceeds an hour (the timebox of step three). The main differences to “direct action” processes are the assessment and feedback phases. Both are mainly about self-observation and introspection, the most important ingredient of efficient learning. You might not appreciate at first what these phases reveal about yourself, but try to see it this way: The revelations set a low bar that you won’t fall short of ever again – guaranteed.

Object Calisthenics: Change the way you think

Some time ago I spoke with my colleague about skill sharpening and training the brain to come up with new solutions. He proposed a two hour session at the weekend implementing a small game using object calisthenics.

Rules

The rules are described in The ThoughtWorks Anthology book. Here is the list for quick reference.

  1. Use only one level of indentation per method.
  2. Don’t use the else keyword.
  3. Wrap all primitives and strings.
  4. Use only one dot per line.
  5. Don’t abbreviate.
  6. Keep all entities small.
  7. Don’t use any classes with more than two instance variables.
  8. Use first-class collections.
  9. Don’t use any getters/setters/properties.

Most of the rules seemed simple enough. Rules 2 and 5 are standard in the Softwareschneiderei, 1, 4, 6 and 8 are stricter versions of common sense, and 3 is tedious object wrapping. The rules I was anxious about were 7 and 9. To increase the learning effect, I added an extra rule to the list that is critical in real-life programming:

  10. Write tests for your code.

It doesn’t matter whether you write tests first, tests after or even test-driven. Only then is the code “value added”.

Experiences

The game was minesweeper. It contains a nice mix of algorithms, data structures and UI. I concentrated the efforts on the algorithmic part. My first step was to analyse and create the needed data structures.

  • The smallest unit is the cell.
  • A cell can be either hidden or revealed, have a mine or be empty.
  • The game field contains such cells in rows and columns.
  • The position of a cell in a field is defined by its coordinate that contains the x and y position.

To associate anything with coordinates, the coordinates had to be comparable to each other. Rule 9 forbids exposure of internal state, so the Coordinate class got its own equals() and hashCode(). Only the creator of a coordinate had the knowledge about the number of dimensions and the values of the positions. Even the tests had no access to the inner state and tested only those two methods.
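
A minimal sketch of what such a Coordinate could look like (reconstructed for illustration; a stricter reading of rule 3 would wrap the x and y values themselves):

// Reconstruction, not the original code from the session.
public final class Coordinate {
    private final int x;
    private final int y;

    public Coordinate(final int x, final int y) {
        this.x = x;
        this.y = y;
    }

    @Override
    public boolean equals(final Object other) {
        if (!(other instanceof Coordinate)) {
            return false;
        }
        final Coordinate that = (Coordinate) other;
        return this.x == that.x && this.y == that.y;
    }

    @Override
    public int hashCode() {
        return 31 * x + y;
    }
}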

Since the revealed flag concept and a mine flag concept had similar properties, I decided not to track cells but to track their flags. Through this architectural decision, I had a field with two flag containers, one for revealed cells and one for cells with mines. An additional benefit was that it was enough to put only the coordinate into the container to mark a cell as a mine.
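
The flag containers themselves could be first-class collections (rule 8) along these lines; again a reconstruction, not the original code:

// Hypothetical first-class collection: tracks which coordinates carry a flag
// (either "revealed" or "has a mine") without exposing the underlying set.
public class FlagContainer {
    private final Set<Coordinate> flaggedCoordinates = new HashSet<>();

    public void mark(final Coordinate coordinate) {
        flaggedCoordinates.add(coordinate);
    }

    public boolean isMarkedAt(final Coordinate coordinate) {
        return flaggedCoordinates.contains(coordinate);
    }
}

A stricter reading of rule 9 would replace the isMarkedAt() query with a visitor, which is exactly where the closure in the reveal method below comes from.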

The next step was to link the parts together and add some behaviour: setting a mine, then revealing a cell and also obtaining the number of mines. Setting a mine and marking the cell as revealed is a simple task with the containers. Testing that the revealed cell contained the mine was trickier. To achieve that, the reveal method got an additional parameter, a closure with a hasMine parameter.

public void reveal(final Coordinate coordinate, final CellContainerVisitor revealedCellsVisitor) {
    revealedCells.mark(coordinate);
    visit(coordinate, revealedCellsVisitor);
}

private void visit(final Coordinate coordinate, final CellContainerVisitor revealedCellsVisitor) {
    revealedCellsVisitor.visit(coordinate, hasMineAt(coordinate));
}

@Test
public void containsMines() {
    final CellContainer target = new CellContainer();
    target.placeMineAt(someCoordinate());

    final List<Coordinate> mineCells = new ArrayList<Coordinate>();
    target.reveal(someCoordinate(), (coordinate, hasMine) -> {
        if (hasMine.equals(new HasMine(true))) {
           mineCells.add(coordinate);
        }
    });

    assertThat(mineCells, hasSize(1));
    assertThat(mineCells, contains(someCoordinate()));
}

The next game rule consumed the rest of the session: calculating the number of mines in the neighborhood. The main obstacle was to compute the coordinates of the neighbours. To do this it is necessary to add an offset to a position in a coordinate without exposing its internal structure. In the end I resorted to using more closures.
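
One possible shape of that idea (a reconstruction, not the original code): the coordinate enumerates its neighbours itself and hands each one to a callback, so its x and y values never leak outside.

// Hypothetical callback interface, analogous to the CellContainerVisitor above.
public interface CoordinateVisitor {
    void visit(Coordinate neighbour);
}

// inside Coordinate:
public void visitNeighbours(final CoordinateVisitor visitor) {
    for (int offsetX = -1; offsetX <= 1; offsetX++) {
        for (int offsetY = -1; offsetY <= 1; offsetY++) {
            if (offsetX != 0 || offsetY != 0) {
                visitor.visit(new Coordinate(x + offsetX, y + offsetY));
            }
        }
    }
}

Note that rule 1 (only one level of indentation per method) would still force the nested loops apart into smaller methods.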

Conclusion

To achieve my goal I had to reverse the order in which I normally develop business logic: rule 9 seems to support a top-down approach, since the interfaces of domain objects are nearly completely dominated by the way they are used by their containers.

Most of the time in this two-hour session was spent staring at the screen and thinking about how to write readable code and readable tests without exposing internal details of the objects. Time well spent.

Guide to better Unit Tests: Focused Tests

Every now and then we stumble over unit tests with a lot of setup and numerous checked aspects. These tests easily become a maintenance nightmare. While J.B. Rainsberger advocates getting rid of integration tests in his somewhat lengthy but very insightful talk at Agile 2009, he gives some advice I would like to use as a guide to better unit tests. His goal is basic correctness, achieved by means of what he aptly calls focused tests. Focused tests test exactly one interesting behaviour.

The proposed way to write these focused tests is to look at three different topics for each unit under test:

  1. Interactions (Do I ask my collaborators the right questions?)
  2. Do I handle all answers correctly?
  3. Do I answer questions correctly?

Conventional unit testing emphasizes the third topic, which works fine for leaf classes that do not need collaborators. Usually, your programming world is not as simple, so you need mocking and stubbing to check all these aspects without turning your unit test into a large integration test that is slow to run and potentially difficult to maintain.
I will try to show you the approach using a simple and admittedly a bit contrived example. Hopefully, it illustrates Rainsberger’s technique well enough. Assume the IllustrationController below is our unit under test:

public class IllustrationController {
    private final PermissionService permissionService;
    private final IllustrationAction action;

    public IllustrationController(PermissionService permissionService, IllustrationAction action) {
        super();
        this.permissionService = permissionService;
        this.action = action;
    }

    /**
     * @return true, if the action was executed, false otherwise
     */
    public boolean performIfAllowed(Role r) {
        if (!permissionService.allowed(r)) {
            return false;
        }
        this.action.execute();
        return true;
    }
}

It has two collaborators: PermissionService and IllustrationAction. The first thing to check is:

Do I ask my collaborators the right questions?

In this case this is quite simple to answer, as we only have a few cases: Do we pass the right role to the PermissionService? This results in tests like below:

@Test
public void asksForPermissionWithCorrectRole() throws Exception {
    PermissionService ps = mock(PermissionService.class);
    IllustrationAction action = mock(IllustrationAction.class);

    IllustrationController ic = new IllustrationController(ps, action);
    ic.performIfAllowed(Role.User);
    // this question needs a test in PermissionService
    verify(ps, atLeastOnce()).allowed(Role.User);
    ic.performIfAllowed(Role.Admin);
    // this question needs a test in PermissionService
    verify(ps, atLeastOnce()).allowed(Role.Admin);
}

Do I handle all answers correctly?

In our example only the PermissionService provides two different answers, so we can easily test that:

@Test
public void interactsWithActionBecausePermitted() {
    PermissionService ps = mock(PermissionService.class);
    IllustrationAction action = mock(IllustrationAction.class);
    // there has to be a case when PermissionService returns true, so write a test for it!
    when(ps.allowed(any(Role.class))).thenReturn(true);

    IllustrationController ic = new IllustrationController(ps, action);
    ic.performIfAllowed(Role.Admin);

    verify(ps, atLeastOnce()).allowed(any(Role.class));
    verify(action, times(1)).execute();
}

@Test
public void noActionInteractionBecauseForbidden() {
    PermissionService ps = mock(PermissionService.class);
    IllustrationAction action = mock(IllustrationAction.class);
    // there has to be a case when PermissionService returns false, so write a test for it!
    when(ps.allowed(any(Role.class))).thenReturn(false);

    IllustrationController ic = new IllustrationController(ps, action);
    ic.performIfAllowed(Role.User);

    verify(ps, atLeastOnce()).allowed(any(Role.class));
    verify(action, never()).execute();
}

Note here that not only return values are answers but also exceptions. If our action may throw exceptions on execution that we can handle, we have to test that too!
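
With the controller shown above such an exception would simply propagate to the caller. A sketch of a test documenting this could look as follows (the concrete exception type is an assumption for illustration):

@Test(expected = IllegalStateException.class)
public void propagatesExceptionFromAction() {
    PermissionService ps = mock(PermissionService.class);
    IllustrationAction action = mock(IllustrationAction.class);
    when(ps.allowed(any(Role.class))).thenReturn(true);
    // assumption: execute() may fail with an unchecked exception
    doThrow(new IllegalStateException()).when(action).execute();

    new IllustrationController(ps, action).performIfAllowed(Role.Admin);
}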

Do I answer questions correctly?

Our controller answers the question whether the operation was performed or not by returning a boolean from its performIfAllowed() method, so let’s check that:

@Test
public void handlesForbiddenExecution() throws Exception {
    PermissionService ps = mock(PermissionService.class);
    IllustrationAction action = mock(IllustrationAction.class);
    when(ps.allowed(any(Role.class))).thenReturn(false);

    IllustrationController ic = new IllustrationController(ps, action);
    assertFalse("Perform returned success even though it was forbidden.", ic.performIfAllowed(Role.User));
}

@Test
public void handlesSuccessfulExecution() throws Exception {
    PermissionService ps = mock(PermissionService.class);
    IllustrationAction action = mock(IllustrationAction.class);
    when(ps.allowed(any(Role.class))).thenReturn(true);

    IllustrationController ic = new IllustrationController(ps, action);
    assertTrue("Perform returned failure even though it was allowed.", ic.performIfAllowed(Role.Admin));
}

Conclusion

What we are doing here is essentially splitting different aspects of interesting behaviour into their own tests. The first two questions define the contract between our unit under test and its collaborators. For every question we ask and therefore stub using our mocking framework, there has to be a test that verifies that this question is answered like we expect. If we handle all the answers correctly, our interaction is deemed to be correct, too. And finally, if our class implements its contract correctly by answering the third question, our clients also know what to expect and can rely on us.

Because each test focuses on only one aspect, it tends to be simple and should only break if that aspect changes. In many cases these kinds of tests can make your integration tests obsolete, as Rainsberger states. I think there are cases in modern frameworks like Grails where you do not want to mock all the framework magic because it is too easy to make wrong assumptions about the behaviour of the framework. So imho integration tests provide some additional value there, because the behaviour of the platform stays part of the tests without being tested explicitly.

Automatic deployment of (Grails) applications

What was your most embarrassing moment in your career as a software engineer? Mine was when I deployed an application to production and it didn’t even start. Stop using manual deployment and learn how to automate your (Grails) deployment.

Early in my career, deploying an application usually involved a fair bunch of manual steps: logging in to a remote server via ssh and executing various commands. After a while the repetitive steps were bundled into shell scripts. But mistakes happened. That’s normal. The solution is to automate as much as we can. So here are the steps to automatic deployment happiness.

Build

One of the oldest requirements for software development mentioned in The Joel Test is that you can build your app in one step. With Grails that’s easy: just create a build file (we use Apache Ant here but others will do) in which you call grails clean, grails test and then grails war:

<project name="my_project" default="test" basedir=".">
  <property name="grails" value="${grails.home}/bin/grails"/>
  
  <target name="-call-grails">
    <chmod file="${grails}" perm="u+x"/>
    <exec dir="${basedir}" executable="${grails}" failonerror="true">
      <arg value="${grails.task}"/><arg value="${grails.file.path}"/>
      <env key="GRAILS_HOME" value="${grails.home}"/>
    </exec>
  </target>
  
  <target name="-call-grails-without-filepath">
    <chmod file="${grails}" perm="u+x"/>
    <exec dir="${basedir}" executable="${grails}" failonerror="true">
      <arg value="${grails.task}"/><env key="GRAILS_HOME" value="${grails.home}"/>
    </exec>
  </target>

  <target name="clean" description="--> Cleans a Grails application">
    <antcall target="-call-grails-without-filepath">
      <param name="grails.task" value="clean"/>
    </antcall>
  </target>
  
  <target name="test" description="--> Run a Grails applications tests">
    <chmod file="${grails}" perm="u+x"/>
    <exec dir="${basedir}" executable="${grails}" failonerror="true">
      <arg value="test-app"/>
      <arg value="-echoOut"/>
      <arg value="-echoErr"/>
      <arg value="unit:"/>
      <arg value="integration:"/>
      <env key="GRAILS_HOME" value="${grails.home}"/>
    </exec>
  </target>

  <target name="war" description="--> Creates a WAR of a Grails application">
    <property name="build.for" value="production"/>
    <property name="build.war" value="${artifact.name}"/>
    <chmod file="${grails}" perm="u+x"/>
    <exec dir="${basedir}" executable="${grails}" failonerror="true">
      <arg value="-Dgrails.env=${build.for}"/><arg value="war"/><arg value="${target.directory}/${build.war}"/>
      <env key="GRAILS_HOME" value="${grails.home}"/>
    </exec>
  </target>
  
</project>

Here we call Grails via the shell scripts but you can also use the Grails ant task and generate a starting build file with

grails integrate-with --ant

and modify it accordingly.

Note that we specify the environment for building the war because we want to build two wars: one for production and one for our functional tests. The environment for the functional tests mimics the deployment environment as closely as possible, but in practice there are small differences. These can be things like having no database cluster or no SMTP server.
Now we can put all this into our continuous integration tool Jenkins, and every time a checkin is made our Grails application is built.

Test

Unit and integration tests are already run when building and packaging. But we also have functional tests which deploy to a local Tomcat and test against it. Here we fetch the test war of the last successful build from our CI:

<target name="functional-test" description="--> Run functional tests">
  <mkdir dir="${target.base.directory}"/>
  <antcall target="-fetch-file">
    <param name="fetch.from" value="${jenkins.base.url}/job/${jenkins.job.name}/lastSuccessfulBuild/artifact/_artifacts/${test.artifact.name}"/>
    <param name="fetch.to" value="${target.base.directory}/${test.artifact.name}"/>
  </antcall>
  <antcall target="-run-tomcat">
    <param name="tomcat.command.option" value="stop"/>
  </antcall>
  <copy file="${target.base.directory}/${test.artifact.name}" tofile="${tomcat.webapp.dir}/${artifact.name}"/>
  <antcall target="-run-tomcat">
    <param name="tomcat.command.option" value="start"/>
  </antcall>
  <chmod file="${grails}" perm="u+x"/>
  <exec dir="${basedir}" executable="${grails}" failonerror="true">
    <arg value="-Dselenium.url=http://localhost:8080/${product.name}/"/>
    <arg value="test-app"/>
    <arg value="-functional"/>
    <arg value="-baseUrl=http://localhost:8080/${product.name}/"/>
    <env key="GRAILS_HOME" value="${grails.home}"/>
  </exec>
</target>

Stopping and starting Tomcat and deploying our application war in between fixes the perm gen space errors which are thrown after a few hot deployments. The baseUrl and selenium.url parameters tell the functional test plugin to run against an externally running Tomcat. When you omit them, the tests start Tomcat and the Grails application themselves in their own process.

Release

Now all tests have passed and you are ready to deploy. So you fetch the last build … but wait! What happens if you have to redeploy and in the meantime new builds have happened in the CI? To prevent this we introduce a step before deployment: a release. This step just copies the artifacts from the last build and gives them the correct version. It also fetches the lists of issues fixed in this version from our issue tracker (Jira) as a PDF. These lists can be sent to the customer after a successful deployment.

Deploy

After releasing we can now deploy. This means fetching the war from the release job in our CI server and copying it to the target server. Then the procedure is similar to the functional test one, with some slight but important differences. First we make a backup of the old war in case anything goes wrong and we have to roll back. Second we also copy the context.xml file which Tomcat needs for the JNDI configuration. Note that we don’t need to copy over local data files like PDF reports or search indexes which were produced by our application. These live outside our web application root.

<target name="deploy">
  <antcall target="-fetch-artifacts"/>

  <scp file="${production.war}" todir="${target.server.username}@${target.server}:${target.server.dir}" trust="true"/>
  <scp file="${target.server}/context.xml" todir="${target.server.username}@${target.server}:${target.server.dir}/${production.config}" trust="true"/>

  <antcall target="-run-tomcat-remotely"><param name="tomcat.command.option" value="stop"/></antcall>

  <antcall target="-copy-file-remotely">
    <param name="remote.file" value="${tomcat.webapps.dir}/${production.war}"/>
    <param name="remote.tofile" value="${tomcat.webapps.dir}/${production.war}.bak"/>
  </antcall>
  <antcall target="-copy-file-remotely">
    <param name="remote.file" value="${target.server.dir}/${production.war}"/>
    <param name="remote.tofile" value="${tomcat.webapps.dir}/${production.war}"/>
  </antcall>
  <antcall target="-copy-file-remotely">
    <param name="remote.file" value="${target.server.dir}/${production.config}"/>
    <param name="remote.tofile" value="${tomcat.conf.dir}/Catalina/localhost/${production.config}"/>
  </antcall>

  <antcall target="-run-tomcat-remotely"><param name="tomcat.command.option" value="start"/></antcall>
</target>

Different Environments: Staging and Production

If you look closely at the deployment script you notice that it uses the context.xml file from a directory named after the target server. In practice you have multiple deployment targets, not just one. At the very least you have what we call a staging server. This server is used for testing the deployment and the deployed application before interrupting or corrupting the production system. It can even be used to publish a pre-release version for the customer to try. We use a separate job in our CI server for this. We separate the configurations needed for the different environments into directories named after the target server. What you shouldn’t do is include all those configurations in your development configuration. You don’t want to corrupt a production application when using the staging one, when your tests run or even while you are developing. So keep the configurations needed for the deployment environments separate from development and from each other.

Celebrate

Now you can deploy over and over again with just one click. This is something to celebrate. No more headaches, no more bitten fingernails. But nevertheless you should take care when you access a production system, even if it happens automatically. Something you didn’t foresee in your process could go wrong, or you could make a mistake when you try out the application via the browser. Since we need to be aware of this responsibility, everybody who interacts with a production system has to wear our cowboy hats. This is a conscious step to remind oneself to take care, and it also reminds everybody else that you shouldn’t disturb someone interacting with a production system. So don’t mess with the cowboy!

Designing an API? Good luck!

An API Design Fest is a great opportunity to gather lasting insights into what API design is really about. And it will remind you why there are so few non-disappointing APIs out there.

If you’ve developed software to some extent, you’ve probably used dozens if not hundreds of APIs, so-called Application Programming Interfaces. In short, APIs are the visible part of a library or framework that you include in your project. In reality, the last sentence is a complete lie. Every programmer at some point got bitten by some obscure behavioural change in a library that wasn’t announced in the interface (or at least in the change log of the project). There’s a lot more to developing and maintaining a proper API than keeping the interface signatures stable.

A book about API design

A good book to start exploring the deeper meanings of API development is “Practical API Design” by Jaroslav Tulach, founder of the NetBeans project. Its subtitle is “Confessions of a Java Framework Architect”, and the content lives up to it. There are quite some confessions to make if you develop relevant APIs for several years. In the book, a game is outlined to effectively teach API design. It’s called the API Design Fest and sounds like a lot of fun.

The API Design Fest

An API Design Fest consists of three phases:

  • In the first phase, all teams are assigned the same task. They have to develop a rather simple library with a usable API, but are informed that it will have to “evolve” in the future, in a way that is not yet disclosed. The resulting code of this phase is kept and tagged for later inspection.
  • The second phase begins with the revelation of the additional use case for the library. Now the task is to include the new requirement into the existing API without breaking previous functionality. The resulting code is kept and tagged, too.
  • The third phase is the crucial one: The teams are provided with the results of all other teams and have to write a test that works with the implementation of the first phase, but breaks if linked to the implementation of the second phase, thus pointing out an API breach.

The team that manages to deliver an unbreakable implementation wins. Alternatively, points are assigned for every breach a team can come up with.
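How such a breaking test can look is easiest to show with an invented example – the Counter interface below is made up for illustration and has nothing to do with the actual assignment. A test that implements a phase-one interface itself will no longer compile once phase two adds a method to that interface, which proves the breach:

// Counter.java - invented phase-one API: an interface users may implement
public interface Counter {
  int next();
}

// CounterCompatibilityTest.java - a third-phase test: it passes against the
// phase-one API, but if phase two adds a method to Counter (e.g. void reset()),
// this anonymous implementation no longer compiles and the breach is proven.
import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class CounterCompatibilityTest {
  @Test
  public void userImplementationStillWorks() {
    Counter counter = new Counter() {
      private int value = 0;

      @Override
      public int next() {
        return ++value;
      }
    };
    assertEquals(1, counter.next());
  }
}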

The event

This sounded like too much fun to pass up without trying it out. So, a few weeks ago, we held an API Design Fest at the Softwareschneiderei. The game mechanics require a prepared moderator who cannot participate and at least two teams, so that there is something to break in the third phase. We tried to cram the whole event into one day of 8 hours, which proved to be quite exhausting.

In a short introduction to the fundamental principles of API design that can withstand requirement evolution, we summarized five rules to avoid the most common mistakes:

  •  No elegance: Most developers are obsessed with the concept of elegance. In API design, there is no such thing as beauty in the sense of elegance, only beauty in the sense of evolvability.
  •  The API includes everything a user depends on: Your API isn’t defined by you, it’s defined by your users. Everything they rely on is a fixed fact, whether you like it or not. Be careful about leaky abstractions.
  •  Never expose more than you need to: Design your API for specific use cases. Support those use cases, but don’t bother to support anything else. Every additional item the user can get a hold on is essentially accidental complexity and will sabotage your evolution attempts.
  •  Make exposed classes final and their constructor private: That’s right. Lock your users out of your class hierarchies and implementations. They should only use the types you explicitly grant them (see the sketch right after this list).
  •  Extendable types cannot be enhanced: The danger of inheritance in API design is that you suddenly have to support the whole class contract instead of “only” the interface/protocol contract. Read about the Liskov Substitution Principle if you need a hint why this is a major hindrance.
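To illustrate the last two rules, here is a minimal sketch – the Money class and its details are invented for this example and are not part of any of the discussed APIs. The exposed type is final, its constructor is private, and instances are only obtainable through a factory method, so the API author keeps full control over the contract:

// Invented example: a final, immutable type with a private constructor.
// Users can obtain instances only through the factory method and cannot
// subclass the type.
public final class Money {
  private final long cents;

  private Money(long cents) {
    this.cents = cents;
  }

  public static Money ofCents(long cents) {
    return new Money(cents);
  }

  public long cents() {
    return cents;
  }
}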

The introduction closed with the motto of the day: “Good judgement comes from experience. Experience comes from bad judgement.” The API Design Fest day was dedicated to bad judgement. Then, the first phase started.

The first phase

No team had problems grasping the assignment or finding a feasible approach. But soon, eager discussions started as the teams assessed how breakable their current design was. It was very interesting to listen to their reasoning.

After two hours, the first phase ended with complete implementations of the simple use cases. All teams were confident that they were prepared for the extensions that would come next. But as soon as the moderator revealed the additional use case for the API, they went quiet and anxious. Nobody saw this new requirement coming. That’s a very clever move by Jaroslav Tulach: the second assignment resembles a real-world scenario in the very best manner. It’s a nightmare change for every serious implementation of the first phase.

The second phase

But the teams accepted the new assignment and went to work, extending their implementations to the best of their abilities. The discussions revolved around possible breaches with every attempt to change the code. The burden of an already existing API was palpable even for bystanders.

After another two hours of paranoid and frantic development, all teams had a second version of their implementation and we gathered for a retrospective.

The retrospective

In this discussion, all teams laid down their arms and confessed that their own API could already be broken by simple means and that they would accept defeat. So we called off the third phase and prolonged the discussion about our insights from the two phases. What a result: everybody was a winner that day, no losers!

Some of our insights were:

  • Users as opponents: While designing an API, you tend to think about your users as friends that you grant a wish (a valid use case). During the API Design Fest, the developers feared the other teams as “malicious” users and tried to anticipate their attack vectors. This led to the rejection of a lot of design choices simply because “they could exploit that”. To a certain degree, this attitude is probably healthy when designing a real API.
  • Enum is a dead end: Most teams used Java Enums in their implementation. Every team discovered in the second phase that Enums are a dead end with regard to design evolution. It’s probably a good choice to thin out their usage in an API context.
  • The most helpful concepts were interfaces and factories (see the sketch after this list).
  • If some object needs to be handed over to the user, make it immutable.
  • Use all the modifiers! No, really. During the event, Java’s package-private (default) access level experienced a glorious revival. When designing an API, you need to know about all your possibilities to express restrictions.
  • Forbid everything: But despite your enhanced expressibility, it’s a safe bet to disable every use case you don’t want to support, if only to minimize the target area for other teams during an API Design Fest.
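As a small illustration of these insights – all names are invented for this example and each type would live in its own file of the same package – the public surface is just an interface plus a factory, the implementation stays package-private, and the object handed to the user is immutable:

// Temperature.java - the only abstraction API users see
public interface Temperature {
  double celsius();
}

// Temperatures.java - the factory through which instances are obtained
public final class Temperatures {
  private Temperatures() {}

  public static Temperature ofCelsius(double value) {
    return new ImmutableTemperature(value);
  }
}

// ImmutableTemperature.java - package-private and final: invisible and
// unchangeable for API users, and immutable once handed out
final class ImmutableTemperature implements Temperature {
  private final double celsius;

  ImmutableTemperature(double celsius) {
    this.celsius = celsius;
  }

  @Override
  public double celsius() {
    return celsius;
  }
}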

The result

The API Design Fest was a great way to learn about the most obvious problems and pitfalls of API design in the shortest possible manner. It was fun and exhausting, but also a great motivator to read the whole book (again). Thanks to Jaroslav Tulach for his great idea.

Build remote controllable applications – expose APIs!

With the advent of mobile computing and (almost) always-available network access we should think about the way we build our applications. We often think in terms of client and server applications. In my opinion we should expand on this and start building “schizophrenic” applications that are clients and servers at the same time, even if they run on the same machine with only one client at first. Let me elaborate on this:

Exposing an API and thus acting as a “server” is important for many applications to make integration into other software systems possible. It is obvious on the web, where you often integrate widgets or services into your page, for example like-buttons, maps, avatars and so on. It lets others use your software in their programs via scripting and possibly other means. All that broadens the use cases for your application. It becomes more valuable because there are more possibilities for your clients or others to use it. Often your application also acts as a client to different services while providing value of its own. So it serves two purposes and provides double or even multiplied value!

I would like to make some more specific examples:

  • Many vendors of some piece of hardware provide a user program for their hardware as a Windows application. There is no (easy) way to remote control the hardware or to use it on different platforms. Sometimes you get a driver library and can build a service around it, but that usually takes a lot of work.
  • Applications provide only one interface, usually a platform-specific GUI, and you are stuck with it. Would it not be nice to have specialized views for your mobile device of choice? In the future it is perhaps your brand-new smartwatch on which you would like to see the status.

If your application is built from the ground up with other software as a client in mind, you (or others) can add new ways of interacting with your application and the value it provides. This comes with a side effect that should not be underestimated: exposing an API helps your design!

Added benefits

Not only do you open your application to a plethora of use cases, you will also build better software. Thinking about the boundaries of the system, designing an API, and using it in different scenarios and with different front ends will make your system better structured and much more clearly separated into modules. When you create the possibility of different clients for your system, you remove most of the danger of mixing UI code with business logic. If your clients have to use a defined API, which is not necessarily the same for all clients, they have to depend on the specified behaviour exposed as a service. It does not matter whether the API is Java, CORBA, REST/JSON, SOAP or whatnot – its pure existence will make you define boundaries for your system. Your application will become part of one or more other systems, forcing you to put thought into modularisation and separation at a larger scale than classes or packages. All this will help you with the design, and overall your application will handle more use cases than you might have imagined and will be prepared for changes in computing that are unforeseen and yet to come.
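As a minimal sketch of this idea – the endpoint path, the port and the StatusService are invented for the example – even a tiny embedded HTTP endpoint that only delegates to the business logic forces you to keep that logic free of UI code:

import com.sun.net.httpserver.HttpExchange;
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

public class StatusApi {

  // The business logic stays behind its own interface, independent of any UI.
  interface StatusService {
    String currentStatus();
  }

  public static void main(String[] args) throws Exception {
    StatusService service = () -> "{\"status\":\"running\"}"; // hypothetical implementation

    // Expose the service as a tiny HTTP/JSON API on port 8080.
    HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
    server.createContext("/api/status", (HttpExchange exchange) -> {
      byte[] body = service.currentStatus().getBytes(StandardCharsets.UTF_8);
      exchange.getResponseHeaders().add("Content-Type", "application/json");
      exchange.sendResponseHeaders(200, body.length);
      try (OutputStream out = exchange.getResponseBody()) {
        out.write(body);
      }
    });
    server.start();
  }
}

Any client – a browser, a mobile app or another service – can now query the status, while the StatusService itself knows nothing about how it is presented.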

Adding new front ends or APIs and exchanging different parts of the system becomes comparatively easy, in contrast to many conventionally built monolithic applications.