Web apps: Security is more than you think

Security in web apps is an increasingly important topic: in this post we take a look at injection attacks, especially SQL injection, the number one security risk according to OWASP.

Security in web apps is an increasingly important topic. Besides securing the machine and the web/application containers your apps run on, you need to deal with some security issues in your own apps. In this article we take a look at the number one risk in web apps (according to OWASP):

Injection attacks

Every web app takes some kind of user input (usually through web forms) and works with it. If the web app does not handle this input properly, malicious entries can lead to severe problems like data theft or data loss. But how do you identify problems in your code? Take a look at a naive but not uncommon implementation of a SQL query:

query("select * from user_data where username='" + username + "'")

Using the input of the user directly in a query like this can be devastating: attackers can drop tables or change data. Even if your library prevents you from using more than one statement in a query, an attacker can change this query to return other users' data.
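For illustration: if an attacker enters x' or '1'='1 as the username, the concatenated statement becomes

select * from user_data where username='x' or '1'='1'

a condition that is true for every row, so the query returns all users.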
Blacklisting special characters is not a solution: either you need some of them in legitimate input, or there are ways to circumvent your blacklist.
The solution here is to properly escape your input using your library's mechanisms (e.g. with Groovy and Spring JDBC):

query("select * from user_data where username=:username", [username: username])

But even when you escape everything, you need to take care what you put into your query. In this example, all data is stored with a key of the form username.data:

query("select * from user_data where key like :username '.%' ", [username: username])

In this case everything will be escaped correctly, but what happens when a user names himself %? He gets the data of all users, because % is a wildcard in LIKE patterns.
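One way out is to escape the LIKE wildcards in the user input before building the pattern. A minimal sketch in Java – the helper name is ours, not from any library:

public static String escapeLikeWildcards(String input) {
  // Escape the escape character itself first, then the LIKE wildcards,
  // so that user input is always matched literally.
  return input.replace("\\", "\\\\")
              .replace("%", "\\%")
              .replace("_", "\\_");
}

The query then declares the escape character (key like :pattern escape '\') and the parameter is built as escapeLikeWildcards(username) + ".%". A user named % now only matches himself.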

Is SQL the only vulnerable part of your app? No, every part that interprets your input and executes it is vulnerable. Examples include shell commands or JavaScript, which we will look at in a future blog post.

As the last query showed: besides using proper escaping, developing a mindset for security problems is the first and foremost step towards a secure app.

A small test saves the day

You think a method is too trivial to write a test for it? Think again if the method is mission-critical!

Just recently, I had to write a connection between an existing application and a new hardware unit. This is a fairly common job for our company, even under the circumstances that I'd never seen the hardware, let alone been able to connect to it. The hardware unit itself was rather big and installed in a security-sensitive area with restricted access. So I only got a specification of the protocol to use and a description of the hardware's features.

Our common procedure to include hardware-dependent modules into an application is to write two implementations of the module: One implementation is the real deal and interacts with the hardware over ethernet, USB, serial port or whatever proprietary communication device is used. This version of the module can only work as intended if the hardware is present. The other implementation acts as an emulation of the hardware, without any dependencies. If you are familiar with unit tests, think of a big test mock. The emulation version is used during development to test and run the application without any requirements on the hardware. There are a lot of subtle pitfalls to consider and avoid, but on a bird's-eye level of abstraction, these interchangeable implementations of a module enable us to develop software with hardware dependencies without needing the actual hardware.
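In code, the approach boils down to one interface and two implementations. A minimal sketch – the interface and its methods are made up for illustration:

// The application only ever talks to this interface.
public interface HardwareModule {
  void connect();
  void disconnect();
}

// Speaks the client side of the protocol with the actual device.
class RealHardwareModule implements HardwareModule {
  public void connect() { /* open the ethernet/USB/serial connection */ }
  public void disconnect() { /* close the connection */ }
}

// Stands in for the device during development, without any dependencies.
class EmulatedHardwareModule implements HardwareModule {
  public void connect() { /* nothing to open, maybe show a debug GUI */ }
  public void disconnect() { }
}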

The first piece of code of a module that gets used is a factory/builder class that chooses between the available implementations, based on some configuration entry (or hardware availability, etc.). A typical implementation of the responsible method might look like this:


public HardwareModule createFor(ModuleConfiguration configuration) {
  if (configuration.isHardwarePresent()) {
    new RealHardwareModule();
  }
  return new EmulatedHardwareModule();
}

If the configuration object says that the hardware is present, the real implementation is used, subsequently opening a connection to the hardware and talking the client side of the given protocol. Otherwise, the emulation is created and returned, maybe opening a debug GUI window to display certain internal states and values and providing controls to mess with the application during development.

The method itself looks very innocent and meager. There is not much going on, so what could possibly go wrong?

I’m not the most eager test-driven developer in the world, I have to admit. But I see the value of tests (and unit tests in particular) and adhere to the A-TRIP rules defined by Andy Hunt and (pragmatic) Dave Thomas:

  • Automatic
  • Thorough
  • Repeatable
  • Independent
  • Professional

For a complete definition of the rules, read the linked blog entry or, even better, buy the book. It’s small and cheap, but contains a lot of profound basic knowledge about unit testing.

The “Thorough” rule is more of a rule of thumb than a hard scientific formula for good unit tests: Always write a test if you’ve found a bug or if the code you’re writing is mission-critical. This was when my gut feeling told me that while the method above might seem trivial, it is definitely essential for the hardware module. So I wrote a test:

  @Test
  public void providesEmulationIfUnspecified() {
    HardwareModuleFactory factory = new HardwareModuleFactory();
    HardwareModule hardware = factory.createFor(configuration(""));
    assertEquals("not the hardware emulation", EmulatedHardwareModule.class, hardware.getClass());
  }

  @Test
  public void providesEmulationIfHardwareAbsent() {
    HardwareModuleFactory factory = new HardwareModuleFactory();
    HardwareModule hardware = factory.createFor(configuration("hardware.present=false"));
    assertEquals("not the hardware emulation", EmulatedHardwareModule.class, hardware.getClass());
  }

  @Test
  public void providesRealImplementationIfHardwarePresent() {
    HardwareModuleFactory factory = new HardwareModuleFactory();
    HardwareModule hardware = factory.createFor(configuration("hardware.present=true"));
    assertEquals("not the real hardware implementation", RealHardwareModule.class, hardware.getClass());
  }

To my surprise, the test immediately went red for the third test method. After double-checking the test code, I was certain that the test was correct: it had discovered a bug in the production code. And being a mostly independent unit test, it pointed to the problematic lines right away: the method implementation above. The helper method named configuration(), omitted from the code sample, was very unlikely to contain a bug.
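For the curious, the omitted helper might look like this minimal sketch (hypothetical – it assumes ModuleConfiguration can be built from java.util.Properties):

private ModuleConfiguration configuration(String entry) {
  // Parses a single "key=value" entry; an empty string yields an empty
  // configuration, so isHardwarePresent() defaults to false.
  java.util.Properties properties = new java.util.Properties();
  if (!entry.isEmpty()) {
    String[] pair = entry.split("=", 2);
    properties.setProperty(pair[0], pair[1]);
  }
  return new ModuleConfiguration(properties);
}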

After a short moment of reading the code again, I corrected it (note the added return statement in line 3):


public HardwareModule createFor(ModuleConfiguration configuration) {
  if (configuration.isHardwarePresent()) {
    return new RealHardwareModule();
  }
  return new EmulatedHardwareModule();
}

This might not seem like the most disastrous bug ever, but it would have made for a nasty start when I finally tried the application with the real hardware. There is nothing more valuable than being able to keep your cool “in the wild” and work on the real problems, like faulty protocol specifications or unexpected/undocumented hardware behaviour. So my gut feeling (and the Thorough rule) was right, and my brain, which kept telling me “skip this petty test” for longer than I like to admit, was wrong. A small test for a small method paid off immediately and saved the day, at least for me.

Testing on .NET: Choosing NUnit over MSTest

We sometimes do smaller .NET projects for our clients even though we are mostly a Java/JVM shop. Our key infrastructure stays the same for all projects – regardless of the platform. That means the .NET projects get integrated into our existing continuous integration (CI) infrastructure based on Jenkins. This works surprisingly well, even though you need a Windows slave and the MSBuild plugin.

One point you should think about is which testing framework to use. MSTest is part of Visual Studio and provides nice integration into the IDE. Using it in conjunction with Jenkins is possible, since there is an MSTest plugin for our favorite CI server. One downside is that you need either Visual Studio itself or the Windows SDK (a 500MB download, 300MB install) installed on the build server in addition to .NET. Another one is that it does not work with the “Express” editions of Visual Studio. Usually that is not a problem for companies, but it raises the entry barrier for open source or other non-profit projects by requiring relatively expensive Visual Studio licences.

In our scenarios, NUnit proved much lighter and friendlier in installation and usage. You can easily bundle it with your sources to improve the self-containment of the project and lessen the burden on the system and tools. If you plug the NUnit tool into the external tools section of Visual Studio (which also works with Express), the integration is acceptable, too.

Conclusion

If you are not completely on the full Microsoft stack for your project infrastructure, using Visual Studio, TeamCity, SourceSafe et al., it is worth considering NUnit over MSTest because of its leaner footprint and looser coupling to the Microsoft stack.

Antipatterns: Convenience Constructors

Lately I've been stumbling a lot upon code I wrote four or more years ago. In the light of introducing new features, the old code gets put to the test regarding its quality. One antipattern I've found, which I had used in the past but which is really hard to extend, is convenience constructors.

Lately I've been stumbling a lot upon code I wrote four or more years ago. In the light of introducing new features, the old code gets put to the test regarding its quality. One antipattern I've found, which I had used in the past but which is really hard to extend, is convenience constructors. Take a constructor for a command object as an example:

    public SetProperty(String filename, String key, String value) {
        this(filename, key, value, null);
    }

    public SetProperty(String filename,
            String key, String value, String comment) {
        this(filename, ReferenceTo.key(key), value, comment);
    }

    public SetProperty(String filename,
            String sectionType, String sectionName,
            String key, String value) {
        this(filename, sectionType, sectionName, key, value, null);
    }

    public SetProperty(String filename,
            String sectionType, String sectionName,
            String key, String value, String comment) {
        this(filename, ReferenceTo.sectionAndKey(sectionType, sectionName, key), value, comment);
    }

    public SetProperty(String filename,
            AdvancedPropertyReference propertyReference,
            String value, String comment) {
        super(filename);
        this.propertyReference = propertyReference;
        this.value = value;
        this.comment = comment;
    }

We need to add a new feature which enables us to append to properties, not just set and replace them. One way would be to extend the class, but that is overkill: just adding a new parameter flag should suffice. This would blow up the number of constructors though, because you would need a version with and without the new parameter for each (used) constructor. Here an old friend comes to the rescue: design patterns. A look into the GoF book shows a good solution to the problem: the builder pattern.

public class SetPropertyBuilder {
    private final String filename;
    private String sectionType;
    private String sectionName;
    private String referenceKey;
    private String value;
    private String comment;
    private boolean append;

    public SetPropertyBuilder(String filename) {
        super();
        this.filename = filename;
    }

    public SetPropertyBuilder set(String key, String newValue) {
        this.referenceKey = key;
        this.value = newValue;
        return this;
    }

    public SetPropertyBuilder append(String key, String additionalValue) {
        set(key, additionalValue);
        this.append = true;
        return this;
    }

    public SetPropertyBuilder inSection(String type, String name) {
        this.sectionType = type;
        this.sectionName = name;
        return this;
    }

    public SetProperty build() {
        AdvancedPropertyReference reference = ReferenceTo.key(this.referenceKey);
        if (this.sectionType != null && this.sectionName != null) {
            reference = ReferenceTo.sectionAndKey(this.sectionType, this.sectionName, this.referenceKey);
        }
        return new SetProperty(this.filename, reference, this.value, this.comment, this.append);
    }
}

Now we can eliminate all but one constructor from the SetProperty command. Adding a new parameter now yields just one new method in the builder.
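Call sites now read fluently and only mention what they actually need (file and property names are made up):

// Setting a property in a section:
SetProperty set = new SetPropertyBuilder("settings.ini")
    .inSection("server", "production")
    .set("timeout", "30")
    .build();

// Appending to a property instead of replacing it:
SetProperty addition = new SetPropertyBuilder("settings.ini")
    .append("plugins", "backup")
    .build();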

Summary of the Schneide Dev Brunch at 2012-10-14

If you couldn’t attend the Schneide Dev Brunch in October 2012, here are the main topics we discussed neatly summarized.

Two weeks ago, we held another Schneide Dev Brunch. The Dev Brunch is a regular brunch on a Sunday, only that all attendees want to talk about software development and various other topics. If you bring a software-related topic along with your food, everyone has something to share. The brunch was so well attended that we had trouble finding a chair for everyone and fitting around the table. We had to stay inside, as the weather was rainy and too cold for prolonged outdoor sessions. Let’s have a look at the main topics we discussed:

Work hard, play hard

The first topic was a summary of the contents of the documentary movie “Work Hard Play Hard” about our modern work places. The documentary is a recommended watch for everyone thinking about joining this side of the industry. It’s beautiful sometimes and very painful to watch most times. You might cherish some of the rougher edges of your workplace afterwards. The DVD is out now.

Dual Monitoring

A short discussion about the efficiency increase that happens just by adding another monitor to your desk. There was no dispute: if you don’t at least try it, you are wasting money. That’s what I meant when I blogged about the second monitor being a profitable investment. Just one caveat: it shouldn’t end like this.

Management by Directive

Another discussion about the management of large departments. The “directive issuer” manager style is a common sight in this environment. I won’t repeat the discussion itself, but rather add an amusing story about an ex-military commander running a software development company. Enjoy!

Review of the Sneak Preview “Quality Assurance Best Practices in Karlsruhe”

There was a “sneak preview” organised by the VKSI, a local association of software engineers a few weeks ago. The topic of the whole event was “Quality Assurance Best Practices in Karlsruhe“. The event was divided into three independent presentations with different topics:

  • “Non-Functional Software Tests” by Gebhard Ebeling: The talk was about realistic load and performance testing of complex applications (and websites). While the presentation omitted tools and code completely, there were some take-aways even for developers who had never performed these types of tests before. This was arguably the best presentation of the event.
  • “Contracts im Software Engineering” by Ben Romberg and Stefan Schürle: This talk was about the benefits of software contracts (think of checked method or class invariants) and the presentation of a particular implementation in Java, namely C4J. The perceived problem with this solution was the rather clumsy source code necessary to define the contracts.
  • “MoDisco Software Modernization & Analysis” by Benjamin Klatt: MoDisco builds a model out of source code that is detailed enough to apply meaningful transformations to it and have the exact same source code (plus transformed code) as output. The idea looked very promising, but the presentation lacked actual source code examples. Nonetheless, MoDisco proves that there is a future for model-driven analysis.

We had a lengthy discussion about software contracts and Design By Contract (DBC) in general. One tool that got mentioned several times was “CoFoJa” from (at least initially) Google.

Book review: Java Application Architecture: Modularity Patterns with Examples Using OSGI

In this rather new book from the Robert C. Martin signature series, Kirk Knoernschild tackles the hard task of teaching software architecture through a book. One participant read the book and is very happy about the experience and the insights he got from it. The book itself is repetitive at times, but that adds to the accessibility of the topic at hand when you jump right into a chapter. In addition to the modularity and architecture aspects, you’ll learn OSGi through the code examples. This book gets a recommendation.

Book review: ATDD by Example

Another new book is from Markus Gärtner, of the Kent Beck signature series this time. It takes the reader by the hand and shows a way to use Cucumber, FitNesse and of course Behavior-Driven Development as a tool-and-process framework to implement (Acceptance) Test Driven Development. None of our participants has read the book fully yet, but it’s already a promising start. If you are looking for a new book about testing (after having read the great GOOS book), don’t hesitate. Another recommendation to read.

Visitor design pattern breaks modularization

One participant brought up the problem that he wanted absolute modularization in his application layout, but used a visitor design pattern at some central place. This breaks modularization, as the type information is exposed too much. We discussed the problem with some diagrams and sketches and came up with several alternatives, each with their own advantages and drawbacks. That was a great code design session among seasoned professionals.

Why are services included into Grails?

Another discussion was about the Grails web framework and the necessity of an explicit service layer or service classes. We sketched out the fundamental architecture of a Grails application and discussed different possible alternatives to a dedicated service layer. There are some nice features about Grails services (like injection by convention, transactions and scopes), but nothing really sophisticated enough to distinguish them from POGOs. The discussion was open-ended, as usual with complex topics.

Review of a workshop on agile software-engineering

Lately, a participant visited a workshop on agile software engineering, focussing a lot on Scrum and XP. The workshop ran for several days and included lots of hands-on exercises. It provided not much new content for seasoned agile developers, but served as an accurate and thorough introduction for younger developers. A major part of the workshop covered the social aspects of agile environments; concepts like team empowerment are usually not taught in technical workshops. Important additional topics included agile planning, estimation and proper retrospectives. The workshop itself was more of an entry-level introduction to agile development, but very effective in that regard.

Epilogue

As usual, the Dev Brunch contained a lot more chatter and talk than listed here. The high number of attendees makes for a unique experience every time. We are looking forward to the next Dev Brunch at the Softwareschneiderei. And as always, we are open for guests and future regulars. Just drop us a notice and we’ll invite you over next time.

Checking preconditions in advance vs. on demand vs. exceptions

Usually, it is good practice to check certain preconditions before applying operations to input data. This is often referred to as defensive programming. Many people are used to lines like:

public void performOn(String foo) {
  if (!myMap.containsKey(foo)) {
    // handle it correctly
    return;
  }
  // do something with the entry
  myMap.get(foo).performOperation();
}

While there is nothing wrong with this kind of “in advance checking”, it may have performance implications – especially when I/O is involved.

We had a problem some time ago when working with some thousand wrappers for File objects. The wrappers checked in the constructor whether the given File object actually is a file, using the innocent-looking isFile() method, which caused a hard disk access each time. So building our collection of wrapped files took quite some time (dozens of seconds) and our client complained (rightfully so!) about the performance. Once the collection was built, the operations were fast because no checking was needed anymore.

Our first optimization step was deferring the check to the point where the file was actually used. This sped up the creation of the wrappers so much that it was barely noticeable, but processing a bunch of elements took longer because of the additional disk accesses. Even though this approach may work in a plethora of situations, for our typical use cases the effect of this optimization was not enough.

So we looked at our problem from another perspective: The vast majority of file handles were actually existing and readable files and directories, and foreign/unknown files were the exception. Because of this fact we chose to simply leave out any kind of checks and handle the exceptions instead! Exception handling is often referred to as slow, but if exceptions are rare it can make a difference of some orders of magnitude. The speed-up using this approach was enormous and the client was happy about sub-second responsiveness for his typical operations. In addition, we think that the code now expresses more clearly that irregular files really are the exception and not the rule for this particular code.
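A minimal sketch of the final approach with hypothetical names (the real wrapper does more, of course):

import java.io.File;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;

public class LazyFileWrapper {
  private final File file;

  public LazyFileWrapper(File file) {
    // No isFile() check here: it would cost one disk access per wrapper.
    this.file = file;
  }

  public String content() {
    try {
      return new String(Files.readAllBytes(file.toPath()), StandardCharsets.UTF_8);
    } catch (IOException e) {
      // Irregular files really are the exception for us, so the cost of
      // exception handling is only paid in the rare failure case.
      return "";
    }
  }
}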

Conclusion

There are different approaches to handling parameters and input data. Depending on the cost of the check and the frequency of irregular input, different strategies may prove beneficial, both in expressing your intent and in the perceived performance of your application.

Solutions to common Java enum problems

More readable solutions to using enums with attributes for categorization or representation.

Say, you have an enum representing a state:

enum State {
  A, B, C, D;
}

And you want to know if a state is a final state. In our example C and D should be final.
An initial attempt might be to use a simple method:

public boolean isFinal() {
	return State.C == this || State.D == this;
}

When there are two states this might seem reasonable but adding more states to this condition makes it unreadable pretty fast.
So why not use the enum hierarchy?

A(false), B(false), C(true), D(true);

private boolean isFinal;

private State(boolean isFinal) {
  this.isFinal = isFinal;
}

public boolean isFinal() {
  return isFinal;
}

This was and still is a good approach in some cases, but it gets cumbersome if you have more than one attribute in your constructor.
Another attempt I’ve seen:

public boolean isFinal() {
  for (State finalState : State.getFinalStates()) {
    if (this == finalState) {
      return true;
    }
  }
  return false;
}

public static List<State> getFinalStates() {
  List<State> finalStates = new ArrayList<State>();
  finalStates.add(State.C);
  finalStates.add(State.D);
  return finalStates;
}

This code gets one thing right: the separation of the final attribute from the states. But it can be written in a clearer way:

private static final List<State> FINAL_STATES = Arrays.asList(C, D);

public boolean isFinal() {
	return FINAL_STATES.contains(this);
}
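As a side note, java.util.EnumSet offers the same readability with a set implementation optimized for enums (our variant, not from the original attempts):

private static final Set<State> FINAL_STATES = EnumSet.of(C, D);

public boolean isFinal() {
  return FINAL_STATES.contains(this);
}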

Another common problem with enums is constructing them via an external representation, e.g. a text.
The classic dispatch looks like this:

public static State createFrom(String text) {
  if ("A".equals(text) || "FIRST".equals(text)) {
    return State.A;
  } else if ("B".equals(text)) {
    return State.B;
  } else if ("C".equals(text)) {
    return State.C;
  } else if ("D".equals(text) || "LAST".equals(text)) {
    return State.D;
  } else {
    throw new IllegalArgumentException("Invalid state: " + text);
  }
}

Readers of “Refactoring” sense a code smell here and promptly want to refactor towards a dispatch using the hierarchy.

A("A", "FIRST"),
B("B"),
C("C"),
D("D", "LAST");

private List<String> representations;

private State(String... representations) {
  this.representations = Arrays.asList(representations);
}

public static State createFrom(String text) {
  for (State state : values()) {
    if (state.representations.contains(text)) {
      return state;
    }
  }
  throw new IllegalArgumentException("Invalid state: " + text);
}

Much better.
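A quick usage sketch:

State.createFrom("FIRST"); // returns State.A
State.createFrom("D");     // returns State.D
State.createFrom("E");     // throws IllegalArgumentException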

A mindset for inherited source code

This article outlines a mindset for developers to deal with existing, probably inherited code bases. You’ll have to be an archeologist, a forensicist and a minefield clearer all at once.

One field of expertise our company provides is the continuation of existing software projects. While this sounds very easy to accomplish, in reality, there are a few prerequisites that a software project has to provide to be continuable. The most important one is the source code of the system, obviously. If the source code is accessible (this is a problem more often than you might think!), the biggest hurdle is now the mindset and initial approach of the developers that inherit it.

The mindset

Most developers have a healthy “greenfield” project mindset. There is a list of requirements, so start coding and fulfill them. If the code obstructs the way to your goal, you reshape it in a meaningful manner. The more experience you have with developing software, the better the resulting design and architecture of the code will be. Whether you apply automatic tests to your software (and when) is entirely your decision. In short: You are the master of the code and forge it after your vision. This is a great mindset for projects in the early phases of development. But it will actively hinder you in later phases of your project or in case you inherit foreign code.

For your own late-phase projects and source code written by another team, another mindset provides more value. The “brownfield” metaphor doesn’t describe the mindset exactly. I have three metaphors that describe parts of it for me: You’ll need to be an archeologist, a forensicist (as in “securer of criminal evidence”) and a minefield clearer. If you hear the word archeologist, don’t think of Indiana Jones, but of somebody sitting in the scorching desert, clearing a whole football field of sand with only a shaving brush and his breath. If you think about being a forensicist, don’t think of your typical hero criminalist who rearranges the photos of the crime scene to reveal a hidden hint, but of the guy in a white overall who has to take all the photos without disturbing the surroundings (and without being disturbed by them). If you think about the minefield clearer: Yes, you are spot on. He has to rely on his work and shouldn’t move too fast in any direction.

The initial approach

This sets the scene for your initial journey inside foreign source code: Don’t touch anything or at least be extra careful, only dust it off in the slightest possible manner. Watch where you step in and don’t get lost. Take a snapshot, mental or written, of anything suspicious you’ll encounter. There will be plenty of temptation to lose focus and instantly improve the code. Don’t fall for it. Remember the forensicist: what would the detective in charge of this case say if you “improved the scenery a bit” to get better photos? This process reminds me so much of a common approach to the game “Minesweeper” that I included the minefield clearer in the analogy. You start somewhere on the field and mark every mine you indirectly identify without ever really revealing them.

Most likely, you won’t find any tests or an issue tracker where you can learn about the development history. With some luck, you’ll have a commit history with meaningful comments. Use the blame view as often as you can. These are your archeological skills at work: separating layers and layers of code, all mingled in one place. A good SCM system can clear up a total mess for you and reveal the author’s intent behind it. Without tests, issues and versioning, you cannot distinguish between a problem and a solution, accidental and deliberate complexity, or a bug and a feature. Everything could mean something and even be crucial for the whole system, or just be useless excess code (so-called “live weight”, because the code will be executed, but with no effect in terms of features). To name an example: if you encounter a strange sleep() call (or multiple calls in a row), don’t eliminate or change them! The original author probably “fixed” a nasty bug with it that will come back sooner than you know it.

Walking on broken glass

And this is what you should do: Leave everything in its place, broken, awkward and clumsy, and try to separate your code from “their” code as much as possible. The rationale is to be able to differentiate between “their” mess and “your” mess and make progress on your part without breaking the already existing features. If you cannot wait any longer to clean up some of the existing code, make sure to release into production often and in a timely manner, so you still know what changed if something goes wrong. If possible, try to release two different kinds of new versions:

  • One kind of new version only incorporates refactorings to the existing code. If anything goes wrong or seems suspicious, you can easily bail out and revert to the previous version without losing functionality.
  • The other kind only contains new features, added with as little change to existing code as possible. Hopefully, this release will not break existing behaviour. If it does, you should double-check your assumptions about the code. If reasonably achievable, do not assume anything or at least write an automatic test to validate your assumption.

Personally, I call this approach the “tick-tock” release cycle, modelled after the release cycle of Intel for its CPUs.

Changing gears

A very important aspect of software development is to know when to “change gears” and switch from greenfield to brownfield or from development to maintenance mode. The text above describes the approach with inherited code, where the gear change is externally triggered by transferring the source code to a new team. But in reality, you need to apply most of the practices to your own source code, too. As soon as your system is in “production”, used in the wild and being filled with precious user data, it changes into maintenance mode. You cannot change existing aspects as easily as before.
In his book “Implementation Patterns” (2008), Kent Beck describes the development of frameworks among other topics. One statement is:

While in conventional development reducing complexity to a minimum is a valuable strategy for making the code easy to understand, in framework development it is often more cost-effective to add complexity in order to enhance the framework developer’s ability to improve the framework without breaking client code.
(Chapter 10, page 118)

I not only agree with this statement but think that it partly applies to “conventional development” in maintenance mode, too. Sometimes, the code needs additional complexity to cope with existing structures and data. This is the moment when you’ve inherited your own code.

Class names with verbs enforce the Single Responsibility Principle (SRP)

When using fluent code and fluent interfaces, I noticed an increased flexibility in the code. On closer inspection, this is the effect of a well-known principle that is inherently enforced by the coding style.

I’m experimenting with fluent code for a while now. Fluent code is code that everybody can read out loud and understand immediately. I’ve blogged on this topic already and it’s not big news, but I’ve just recently had a revelation why this particular style of programming works so well in terms of code design.

The basics

I don’t expect you to read all my old blog entries on fluent code or to know anything about fluent interfaces, so I’m giving you a little introduction.

Let’s assume that you want to find all invoice documents inside a given directory tree. A fluent line of code reads like this:


Iterable<Invoice> invoices = FindLetters.ofType(
    AllInvoices.ofYear("2012")).beneath(
        Directory.at("/data/documents"));

While this is very readable, it’s also a bit unusual for a programmer without prior exposure to this style. But if you are used to it, the style works wonders. Let’s see: the implementation of the FindLetters class looks like this (don’t mind all the generic stuff going on, concentrate on the methods!):

public final class FindLetters<L extends Letter> {
  private final LetterType<L> parser;

  private FindLetters(LetterType<L> type) {
    this.parser = type;
  }

  public static <L extends Letter> FindLetters<L> ofType(LetterType<L> type) {
    return new FindLetters<L>(type);
  }

  public Iterable<L> beneath(Directory directory) {
    ...
  }
}

Note: If you are familiar with fluent interfaces, then you will immediately notice that this isn’t even a full-fledged one. It’s more of a (class-level) factory method and a single instance method.

If you can get used to typing in what you want to do as the class name first (and forget about constructors for a while), the code completion functionality of your IDE will guide you through the rest: The only public static method available in the FindLetters class is ofType(), which happens to return an instance of FindLetters, where again the only method available is the beneath() method. One thing leads to another and you’ll end up with exactly the Iterable of Invoices you wanted to find.

To assemble all parts in the example, you’ll need to know that Invoice is a subtype of Letter and AllInvoices is a subtype of LetterType<Invoice>.
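If you want to reconstruct the example, the supporting types might look like this minimal sketch (hypothetical – the original post doesn’t show them, and the parse method name is made up):

public interface Letter {
}

public final class Invoice implements Letter {
}

public interface LetterType<L extends Letter> {
  L parseFrom(java.io.File file);
}

public final class AllInvoices implements LetterType<Invoice> {
  private final String year;

  private AllInvoices(String year) {
    this.year = year;
  }

  public static AllInvoices ofYear(String year) {
    return new AllInvoices(year);
  }

  @Override
  public Invoice parseFrom(java.io.File file) {
    // A real implementation would parse the document and check the year.
    return new Invoice();
  }
}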

The magical part

One thing that always surprised me when programming in this style is how everything seems to find its place in a natural manner. The different parts fit together really well, especially when the fluent line of code is written first. Of course they do, because you’ll design your classes to make everything fit. And that’s when I had the revelation. In hindsight, it seems rather obvious to me (a common occurrence with revelations) and you’ve probably already seen it yourself.

The revelation

It struck me that all the pieces that you assemble a fluent line of code with are small and single-purposed (other descriptions would be “focussed”, “opinionated” or “determined”). Well, if you obey the Single Responsibility Principle (SRP), every class should only have one responsibility and therefore only limited purposes. But now I know how these two things are related: You can only cram so much purpose (and responsibility) into a class named FindLetters. When the class name contains the action (verb) and the subject (noun), the purpose is very much set. The only thing that can be adjusted is the context of the action on the subject, a task at which fluent interfaces excel. The main reason to use a fluent interface is to change distinct aspects of the context of an object without losing track of the object itself.

The conclusion

If the action+subject class names enforce the Single Responsibility Principle, then it’s no wonder that the resulting code is very flexible in terms of changing requirements. The flexibility isn’t a result of the fluency or the style itself (as I initially thought), but an effect predicted and caused by the SRP. Realizing that doesn’t invalidate the other positive effects of fluent code for me, but makes it a bit less magical. Which isn’t a bad thing.

Triggering jenkins from git with common post-receive hook

The standard way of triggering Jenkins jobs from a git repository used to be issuing a GET request to the “build now” URL of the job in the post-receive hook, e.g.

curl http://my_ci_server:8080/job/my_job/build?delay=0sec

The biggest problem of this approach is that you have to hardcode the job name into the URL. This prevents sharing the hook between repositories and requires you to put an adjusted post-receive hook script into each new repository. Also, additional work has to be done to trigger jobs only for certain branches and the like.
Fortunately, Jenkins has offered a better way of triggering jobs from a git repository for quite a while now: essentially, you notify Jenkins of the commit in your repository and configure the job for SCM polling.

To trigger jobs for the repository git@my_repository_server:my_project.git you can use the following script:

# A post-receive hook runs inside the repository, so the repository name
# can be derived from `pwd` by stripping everything up to the last slash.
GIT_REPO_URL=git@my_repository_server:`pwd | sed 's:.*\/::'`
curl "http://my_ci_server:8080/git/notifyCommit?url=$GIT_REPO_URL"

Notice the absence of any repository- or job-specific parts in the post-receive hook. Such a hook can be placed in a central location and shared between repositories using symbolic links.