Summary of the Schneide Dev Brunch on 2012-10-14

Two weeks ago, we held another Schneide Dev Brunch. The Dev Brunch is a regular brunch on a Sunday, only that all attendees want to talk about software development and various other topics. If you bring a software-related topic along with your food, everyone has something to share. The brunch was so well attended that we had trouble finding a chair for everyone and fitting around the table. We had to stay inside, as the weather was rainy and too cold for prolonged outdoor sessions. Let’s have a look at the main topics we discussed:

Work hard, play hard

The first topic was a summary of the contents of the documentary movie “Work Hard Play Hard” about our modern workplaces. The documentary is a recommended watch for everyone thinking about joining this side of the industry. It’s beautiful at times and very painful to watch most of the time. You might cherish some of the rougher edges of your own workplace afterwards. The DVD is out now.

Dual Monitoring

A short discussion about the efficiency increase that happens just by adding another monitor to your desk. There was no dispute: if you don’t at least try it, you waste money. That’s what I meant when I blogged about the second monitor being a profitable investment. Just one caveat: it shouldn’t end like this.

Management by Directive

Another discussion about the management of large departments. The “directive issuer” manager style is a common sight in this environment. I won’t repeat the discussion itself, but rather add an amusing story about an ex-military commander running a software development company. Enjoy!

Review of the Sneak Preview “Quality Assurance Best Practices in Karlsruhe”

There was a “sneak preview” organised by the VKSI, a local association of software engineers, a few weeks ago. The topic of the whole event was “Quality Assurance Best Practices in Karlsruhe”. The event was divided into three independent presentations on different topics:

  • “Non-Functional Software Tests” by Gebhard Ebeling: The talk was about realistic load and performance testing of complex applications (and websites). While the presentation omitted tools and code completely, there were some take-aways even for developers who had never performed these types of tests before. This was arguably the best presentation of the event.
  • “Contracts im Software Engineering” by Ben Romberg and Stefan Schürle: This talk was about the benefits of software contracts (think of checked method or class invariants) and the presentation of a particular implementation in Java, namely C4J. The perceived problem with this solution was the rather clumsy source code necessary to define the contracts.
  • “MoDisco Software Modernization & Analysis” by Benjamin Klatt: MoDisco builds a model out of source code that is detailed enough to apply meaningful transformations to it and have the exact same source code (plus transformed code) as output. The idea looked very promising, but the presentation lacked actual source code examples. Nonetheless, MoDisco proves that there is a future for model-driven analysis.

We had a lengthy discussion about software contracts and Design by Contract (DbC) in general. One tool that got mentioned several times was “CoFoJa”, which came (at least initially) from Google.
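To illustrate what annotation-based contracts look like, here is a minimal sketch in the style of CoFoJa. The @Requires/@Ensures annotations and the old() expression reflect my recollection of its API and should be treated as assumptions, not as an excerpt from the talk:

import com.google.java.contract.Ensures;
import com.google.java.contract.Requires;

public class Account {
  private int balance;

  // precondition: checked before the method body runs
  @Requires("amount > 0")
  // postcondition: checked afterwards, old() refers to the value on entry
  @Ensures("balance == old(balance) + amount")
  public void deposit(int amount) {
    balance += amount;
  }
}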

Book review: Java Application Architecture: Modularity Patterns with Examples Using OSGi

In this rather new book from the Robert C. Martin signature series, Kirk Knoernschild tackles the hard task of teaching software architecture through a book. One participant read the book and is very happy about the experience and the insight he got from it. The book itself is repetitive at times, but that adds to the accessibility of the topic at hand when you jump right into a chapter. In addition to the modularity and architecture aspects, you’ll learn OSGi through the code examples. This book gets a recommendation.

Book review: ATDD by Example

Another new book is by Markus Gärtner, this time from the Kent Beck signature series. It takes the reader by the hand and shows a way to use Cucumber, FitNesse and of course Behavior-Driven Development as a tool-and-process framework to implement (Acceptance) Test-Driven Development. None of our participants has read the book fully yet, but it’s already a promising start. If you are looking for a new book about testing (after having read the great GOOS book), don’t hesitate. Another recommendation to read.

Visitor design pattern breaks modularization

One participant brought up a problem: he wanted strict modularization in his application layout, but used the visitor design pattern in one central place. This breaks modularization, as too much type information is exposed. We discussed the problem with some diagrams and sketches and came up with several alternatives, each with its own advantages and drawbacks. That was a great code design session among seasoned professionals.
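To sketch why the visitor pattern works against modularization (a minimal illustration, not the participant’s actual code): the central visitor interface has to name every concrete element type, so every module that contributes an element type leaks into that one central definition.

// The visitor interface names all concrete types ...
interface DocumentVisitor {
  void visit(TextBlock block);
  void visit(ImageBlock block); // ... so each module's type shows up here
}

interface Block {
  void accept(DocumentVisitor visitor);
}

class TextBlock implements Block {
  public void accept(DocumentVisitor visitor) {
    visitor.visit(this);
  }
}

class ImageBlock implements Block {
  public void accept(DocumentVisitor visitor) {
    visitor.visit(this);
  }
}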

Why are services included into Grails?

Another discussion was about the Grails web framework and the necessity of an explicit service layer or service classes. We sketched out the fundamental architecture of a Grails application and discussed several possible alternatives to a dedicated service layer. There are some nice features to Grails services (like injection by convention, transaction handling and scoping), but nothing really sophisticated enough to distinguish them from POGOs. The discussion was open-ended, as usual with complex topics.

Review of a workshop on agile software engineering

Lately, a participant visited a workshop on agile software engineering, focussing a lot on Scrum and XP. The workshop ran for several days and included lots of hands-on exercises. The workshop itself provided not much new content for seasoned agile developers, but served as an accurate and thorough introduction for younger developers. A major part of the workshop was the social aspects of agile environments: concepts like team empowerment are usually not taught in technical workshops. Important additional topics included agile planning, estimation and proper retrospectives. The workshop itself was more of an entry-level introduction to agile development, but very effective in that regard.

Epilogue

As usual, the Dev Brunch contained a lot more chatter and talk than listed here. The high number of attendees makes for a unique experience every time. We are looking forward to the next Dev Brunch at the Softwareschneiderei. And as always, we are open to guests and future regulars. Just drop us a notice and we’ll invite you over next time.

Checking preconditions in advance vs. on demand vs. exceptions

Usually, it is good practice to check certain preconditions before applying operations to input data. This is often referred to as defensive programming. Many people are used to lines like:

public void performOn(String foo) {
  if (!myMap.containsKey(foo)) {
    // handle it correctly
    return;
  }
  // do something with the entry
  myMap.get(foo).performOperation();
}

While there is nothing wrong with this kind of “in advance checking”, it may have performance implications – especially when I/O is involved.
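For the map example above, the check can also be folded into the access itself by examining the result “on demand” instead of in advance. A small sketch (the value type Entry is hypothetical; the original snippet leaves it unnamed) that also avoids looking the key up twice:

public void performOn(String foo) {
  // fetch first, check the result: one map lookup instead of two
  Entry entry = myMap.get(foo);
  if (entry == null) {
    // handle it correctly
    return;
  }
  entry.performOperation();
}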

We had a problem some time ago when working with some thousand wrappers for File objects. The wrappers checked in the constructor whether the given File object actually is a file, using the innocent-looking isFile() method, which caused a hard disk access each time. So building our collection of wrapped files took quite some time (dozens of seconds) and our client complained (rightfully so!) about the performance. Once the collection was built, the operations were fast because no checking was needed anymore.

Our first optimization step was deferring the check to the point where the file was actually used. This sped up the creation of the wrappers so much that it was barely noticeable, but processing a bunch of elements took longer because of the additional disk accesses. Even though this approach may work in a plethora of situations, for our typical use cases the effect of this optimization was not enough.

So we looked at our problem from another perspective: the vast majority of file handles were actually existing and readable files and directories; foreign/unknown files were the exception. Because of this fact, we chose to simply leave out any kind of check and handle the exceptions! Exception handling is often referred to as slow, but if exceptions are rare it can make a difference of some orders of magnitude. Our speed-up using this approach was enormous, and the client was happy about sub-second responsiveness for his typical operations. In addition, we think that the code now expresses more clearly that irregular files really are the exception and not the rule for this particular code.
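A minimal sketch of the resulting structure (simplified, not our actual wrapper code):

import java.io.BufferedReader;
import java.io.File;
import java.io.FileReader;
import java.io.IOException;

public class FileContent {
  private final File file;

  public FileContent(File file) {
    // no isFile() check here: constructing thousands of wrappers stays cheap
    this.file = file;
  }

  public String firstLine() {
    try (BufferedReader reader = new BufferedReader(new FileReader(file))) {
      return reader.readLine();
    } catch (IOException e) {
      // the rare case: a directory, a missing file or an unreadable file
      return handleIrregularFile();
    }
  }

  private String handleIrregularFile() {
    return null; // placeholder for the actual handling
  }
}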

Conclusion

There are different approaches to handling parameters and input data. Depending on the cost of the check and the frequency of special input, different strategies may prove beneficial, both in expressing your intent and in the perceived performance of your application.

Solutions to common Java enum problems

Say, you have an enum representing a state:

enum State {
  A, B, C, D;
}

And you want to know if a state is a final state. In our example C and D should be final.
An initial attempt might be to use a simple method:

public boolean isFinal() {
  return State.C == this || State.D == this;
}

When there are two states this might seem reasonable, but adding more states to this condition makes it unreadable pretty fast.
So why not use the enum hierarchy?

A(false), B(false), C(true), D(true);

private final boolean isFinal;

private State(boolean isFinal) {
  this.isFinal = isFinal;
}

public boolean isFinal() {
  return isFinal;
}

This was and still is a good approach in some cases, but it gets cumbersome if you have more than one attribute in your constructor.
Another attempt I’ve seen:

public boolean isFinal() {
  for (State finalState : State.getFinalStates()) {
    if (this == finalState) {
      return true;
    }
  }
  return false;
}

public static List<State> getFinalStates() {
  List<State> finalStates = new ArrayList<State>();
  finalStates.add(State.C);
  finalStates.add(State.D);
  return finalStates;
}

This code gets one thing right: the separation of the final attribute from the states. But it can be written in a clearer way:

private static final List<State> FINAL_STATES = Arrays.asList(C, D);

public boolean isFinal() {
  return FINAL_STATES.contains(this);
}

Another common problem with enums is constructing them from an external representation, e.g. a text.
The classic dispatch looks like this:

public static State createFrom(String text) {
  if ("A".equals(text) || "FIRST".equals(text)) {
    return State.A;
  } else if ("B".equals(text)) {
    return State.B;
  } else if ("C".equals(text)) {
    return State.C;
  } else if ("D".equals(text) || "LAST".equals(text)) {
    return State.D;
  } else {
    throw new IllegalArgumentException("Invalid state: " + text);
  }
}

Readers of “Refactoring” will sense a code smell here and promptly want to refactor to a dispatch using the enum hierarchy.

A("A", "FIRST"),
B("B"),
C("C"),
D("D", "LAST");

private final List<String> representations;

private State(String... representations) {
  this.representations = Arrays.asList(representations);
}

public static State createFrom(String text) {
  for (State state : values()) {
    if (state.representations.contains(text)) {
      return state;
    }
  }
  throw new IllegalArgumentException("Invalid state: " + text);
}

Much better.
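A quick usage sketch of this final version:

State first = State.createFrom("FIRST"); // yields State.A
State last = State.createFrom("D");      // yields State.D
State.createFrom("E");                   // throws IllegalArgumentException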

A mindset for inherited source code

One field of expertise our company provides is the continuation of existing software projects. While this sounds very easy to accomplish, in reality there are a few prerequisites that a software project has to meet to be continuable. The most important one is, obviously, the source code of the system. If the source code is accessible (this is a problem more often than you might think!), the biggest hurdle is the mindset and initial approach of the developers who inherit it.

The mindset

Most developers have a healthy “greenfield” project mindset: there is a list of requirements, so start coding and fulfill them. If the code obstructs the way to your goal, you reshape it in a meaningful manner. The more experience you have with developing software, the better the resulting design and architecture of the code will be. Whether (and when) you apply automatic tests to your software is entirely your decision. In short: you are the master of the code and forge it after your vision. This is a great mindset for projects in the early phases of development. But it will actively hinder you in later phases of your project or in case you inherit foreign code.

For your own late-phase projects and for source code written by another team, another mindset provides more value. The “brownfield” metaphor doesn’t describe the mindset exactly. I have three metaphors that describe parts of it for me: you’ll need to be an archeologist, a forensicist (as in “securer of criminal evidence”) and a minefield clearer. If you hear the word archeologist, don’t think of Indiana Jones, but of somebody sitting in the scorching desert, clearing a whole football field of sand with only a shaving brush and his breath. If you think about being a forensicist, don’t think of your typical hero criminalist who rearranges the photos of the crime scene to reveal a hidden hint, but of the guy in a white overall who has to take all the photos without disturbing the surroundings (and without being disturbed by them). If you think about the minefield clearer: yes, you are spot on. He has to rely on his work and shouldn’t move too fast in any direction.

The initial approach

This sets the scene for your initial journey through foreign source code: don’t touch anything, or at least be extra careful; only dust it off in the slightest possible manner. Watch where you step and don’t get lost. Take a snapshot, mental or written, of anything suspicious you encounter. There will be plenty of temptation to lose focus and instantly improve the code. Don’t fall for it. Remember the forensicist: what would the detective in charge of the case say if you “improved the scenery a bit” to get better photos? This process reminds me so much of a common approach to the game “Minesweeper” that I included the minefield clearer in the analogy: you start somewhere on the field and mark every mine you indirectly identify without ever actually uncovering it.

Most likely, you won’t find any tests or an issue tracker where you can learn about the development history. With some luck, you’ll have a commit history with meaningful comments. Use the blame view as often as you can. These are your archeological skills at work: separating layer after layer of code, all mingled in one place. A good SCM system can clear up a total mess for you and reveal the author’s intent behind it. Without tests, issues and versioning, you cannot distinguish between a problem and a solution, accidental and deliberate complexity, or a bug and a feature. Everything could mean something and even be crucial for the whole system, or just be useless excess code (so-called “live weight”, because the code will be executed, but with no effect in terms of features). To name an example: if you encounter a strange sleep() call (or multiple calls in a row), don’t eliminate or change them! The original author probably “fixed” a nasty bug with it that will come back sooner than you know it.

Walking on broken glass

And this is what you should do: Leave everything in its place, broken, awkward and clumsy, and try to separate your code from “their” code as much as possible. The rationale is to be able to differentiate between “their” mess and “your” mess and make progress on your part without breaking the already existing features. If you cannot wait any longer to clean up some of the existing code, make sure to release into production often and in a timely manner, so you still know what changed if something goes wrong. If possible, try to release two different kinds of new versions:

  • One kind of new version only incorporates refactorings to the existing code. If anything goes wrong or seems suspicious, you can easily bail out and revert to the previous version without losing functionality.
  • The other kind only contains new features, added with as little change to existing code as possible. Hopefully, this release will not break existing behaviour. If it does, you should double-check your assumptions about the code. If reasonably achievable, do not assume anything or at least write an automatic test to validate your assumption.

Personally, I call this approach the “tick-tock” release cycle, modelled after the release cycle of Intel for its CPUs.

Changing gears

A very important aspect of software development is to know when to “change gears” and switch from greenfield to brownfield, or from development to maintenance mode. The text above describes the approach with inherited code, where the gear change is externally triggered by transferring the source code to a new team. But in reality, you need to apply most of these practices to your own source code, too. As soon as your system is in production, used in the wild and filled with precious user data, it changes into maintenance mode. You cannot change existing aspects as easily as before.
In his book “Implementation Patterns” (2008), Kent Beck describes the development of frameworks among other topics. One statement is:

While in conventional development reducing complexity to a minimum is a valuable strategy for making the code easy to understand, in framework development it is often more cost-effective to add complexity in order to enhance the framework developer’s ability to improve the framework without breaking client code.
(Chapter 10, page 118)

I not only agree with this statement but think that it partly applies to “conventional development” in maintenance mode, too. Sometimes, the code needs additional complexity to cope with existing structures and data. This is the moment when you’ve inherited your own code.

Class names with verbs enforce the Single Responsibility Principle (SRP)

I’ve been experimenting with fluent code for a while now. Fluent code is code that everybody can read out loud and understand immediately. I’ve blogged on this topic already and it’s not big news, but I’ve just recently had a revelation about why this particular style of programming works so well in terms of code design.

The basics

I don’t expect you to read all my old blog entries on fluent code or to know anything about fluent interfaces, so I’m giving you a little introduction.

Let’s assume that you want to find all invoice documents inside a given directory tree. A fluent line of code reads like this:


Iterable<Invoice> invoices = FindLetters.ofType(
    AllInvoices.ofYear("2012")).beneath(
        Directory.at("/data/documents"));

While this is very readable, it’s also a bit unusual for a programmer without prior exposure to this style. But if you are used to it, the style works wonders. Let’s see: the implementation of the FindLetters class looks like this (don’t mind all the generic stuff going on, concentrate on the methods!):

public final class FindLetters<L extends Letter> {
  private final LetterType<L> parser;

  private FindLetters(LetterType<L> type) {
    this.parser = type;
  }

  public static <L extends Letter> FindLetters<L> ofType(LetterType<L> type) {
    return new FindLetters<L>(type);
  }

  public Iterable<L> beneath(Directory directory) {
    ...
  }
}

Note: If you are familiar with fluent interfaces, then you will immediately notice that this isn’t even a full-fledged one. It’s more of a (class-level) factory method and a single instance method.

If you can get used to typing in what you want to do as the class name first (and forget about constructors for a while), the code completion functionality of your IDE will guide you through the rest: the only public static method available in the FindLetters class is ofType(), which happens to return an instance of FindLetters, where again the only method available is the beneath() method. One thing leads to another and you’ll end up with exactly the Iterable of Invoices you wanted to find.

To assemble all the parts in the example, you’ll need to know that Invoice is a subtype of Letter and AllInvoices is a subtype of LetterType<Invoice>.
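For completeness, here is a minimal sketch of how these supporting types could look. The bodies are hypothetical (each class would live in its own file); the original example only shows FindLetters:

public interface Letter {
}

public final class Invoice implements Letter {
}

public interface LetterType<L extends Letter> {
  // hypothetical: a real implementation would declare methods
  // for recognizing and parsing letters of this type
}

public final class AllInvoices implements LetterType<Invoice> {
  private final String year;

  private AllInvoices(String year) {
    this.year = year;
  }

  public static AllInvoices ofYear(String year) {
    return new AllInvoices(year);
  }
}

public final class Directory {
  private final java.io.File location;

  private Directory(java.io.File location) {
    this.location = location;
  }

  public static Directory at(String path) {
    return new Directory(new java.io.File(path));
  }
}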

The magical part

One thing that has always surprised me when programming in this style is how everything seems to find its place in a natural manner. The different parts fit together really well, especially when the fluent line of code is written first. Of course they do, because you’ll design your classes to make everything fit. And that’s when I had the revelation. In hindsight, it seems rather obvious to me (a common occurrence with revelations) and you’ve probably already seen it yourself.

The revelation

It struck me that all the pieces you assemble a fluent line of code with are small and single-purposed (other descriptions would be “focussed”, “opinionated” or “determined”). Well, if you obey the Single Responsibility Principle (SRP), every class should have only one responsibility and therefore only limited purposes. But now I know how these two things are related: you can only cram so much purpose (and responsibility) into a class named FindLetters. When the class name contains the action (verb) and the subject (noun), the purpose is very much set. The only thing that can be adjusted is the context of the action on the subject, a task at which fluent interfaces excel. The main reason to use a fluent interface is to change distinct aspects of the context of an object without losing track of the object itself.

The conclusion

If the action+subject class names enforce the Single Responsibility Principle, then it’s no wonder that the resulting code is very flexible in terms of changing requirements. The flexibility isn’t a result of the fluency or the style itself (as I initially thought), but an effect predicted and caused by the SRP. Realizing that doesn’t invalidate the other positive effects of fluent code for me, but makes it a bit less magical. Which isn’t a bad thing.

Triggering Jenkins from git with a common post-receive hook

The standard way of triggering Jenkins jobs from a git repository used to be issuing a GET request to the “build now” URL of the job in the post-receive hook, e.g.

curl http://my_ci_server:8080/job/my_job/build?delay=0sec

The biggest problem of this approach is that you have to hardcode the job name into the URL. This prevents sharing the hook between repositories and requires you to put an adjusted post-receive hook script into each new repository. Also, additional work has to be done to trigger jobs only for certain branches and the like.
Fortunately, Jenkins has offered a new way of triggering jobs from a git repository for quite a while now. Essentially, you have to notify Jenkins of the commit in your repository and configure the job for SCM polling.

To trigger jobs for the repository git@my_repository_server:my_project.git you can use the following script:

# derive the repository name from the hook's working directory (the bare repository)
GIT_REPO_URL=git@my_repository_server:`pwd | sed 's:.*\/::'`
curl "http://my_ci_server:8080/git/notifyCommit?url=$GIT_REPO_URL"

Notice the absence of any repository- or job-specific parts in the post-receive hook. Such a hook can be placed in a central location and shared between repositories using symbolic links.

RubyMotion: Ruby for iOS development

RubyMotion is a new (commercial) way to develop apps for iOS, this time with Ruby. So why do I think this is better than the traditional way using Objective-C, or than other alternatives?

Advantages over other alternatives

Other alternatives often use a wrapper or a different runtime. The problem is that you have to wait for the library/wrapper vendor to include new APIs when iOS gets an update. RubyMotion instead has a static compiler which compiles to the same code as Objective-C. So you can use the myriad of Objective-C libraries or even the Interface Builder. You can even mix your RubyMotion code with existing Objective-C programs. Also, the static compilation gives you the performance advantages of real native code, so you don’t suffer the penalties of an additional layer. So you could write your programs just like you would in Objective-C, with the same performance and the same libraries. Why, then, choose RubyMotion?

Advantages over the traditional way

First: Ruby. The Ruby language has a very nice foundation: everything is an expression. And everything can be evaluated with logic operators (only nil and false are falsy).
In Objective-C you would write:

  cell = [tableView dequeueReusableCellWithIdentifier:reuseId];
  if (!cell) {
    cell = [[UITableViewCell alloc] initWithStyle:cellStyle reuseIdentifier:reuseId];
  }

whereas in Ruby you can write:

cell = tableView.dequeueReusableCellWithIdentifier(@reuse_id) ||
  UITableViewCell.alloc.initWithStyle(@cell_style, reuseIdentifier: @reuse_id)

As you can see, you can use the Cocoa APIs right away. But what excites me even more is the community which is building around RubyMotion. RubyMotion is only a few months old, but many libraries and even award-winning apps have already been written with it. Some libraries wrap so-called boilerplate code and make it more pleasant to use. Others introduce new metaphors which change the way apps are written entirely.
I see a bright future for RubyMotion. It won’t replace Objective-C for everyone, but it is a great alternative.