Book review: “Java by Comparison”

I need to start this blog entry with a full disclosure: One of the authors of the book I’m writing about contacted me and asked if I could write a review. So I bought the book and read it. Apart from that initial request, this review is independent of the book and its authors.

Let me start this review with two types of books that I identified over the years: The first are toilet books, denoting books that can be read in small chunks that only need a few minutes each time. This makes it possible to read one chapter at each sitting and still grasp the whole thing.

The second type are prequel books, meaning books I wish had been published before another book I read, because they pave the road to their “sequel” perfectly.

Prequel books

An example of a typical prequel book is “Apprenticeship Patterns”, which sets out to help the “aspiring software craftsman” reach the “journeyman” stage faster. It is a perfect preparation for the classic “The Pragmatic Programmer”, as even indicated by its subtitle “From Journeyman to Master”. But the Pragmatic Programmer was published in 1999, whereas the “Apprenticeship Patterns” book wasn’t available until a decade later in 2009.

If you plan to read both books in 2019 (or onwards), read them in the prequel -> sequel order for maximized effect.

Pragmatic books

The book “The Pragmatic Programmer” was not only a groundbreaking work that affected my personal career like no other book since, it also spawned the “Pragmatic Bookshelf”, a publisher that gives authors all over the world the opportunity to create software development books that convey practical knowledge. In software development, rapid change is inevitable, so books about practical knowledge and specific technologies have a half-life measured in months, not years or even decades. Nevertheless, the Pragmatic Bookshelf has published at least half a dozen books that I consider timeless classics, like the challenging “Seven Languages in Seven Weeks” by Bruce A. Tate.

A prequel to Refactoring

A more recent publication from the Pragmatic Bookshelf is “Java by Comparison” by Simon Harrer, Jörg Lenhard and Linus Dietz. When I first heard about the book (before the author contacted me), I was intrigued. I categorized it as a “toilet book” with lots of short, rather independent chapters (70 of them, in fact). It fits that category well, so if you’re looking for a book suited for brief idle times like a short commute by tram or bus, put it on your list.

But when I read the book, it dawned on me that this is a perfect prequel book. Only that the sequel was published 20 years ago (yes, you’ve read that right). In 1999, the book “Refactoring” by Martin Fowler established an understanding of “better code” that holds true to this day. There was never a second edition – well, until now! Last week, the second edition of “Refactoring” became available. It caters to a younger generation of developers and replaces all Java code with JavaScript.

But what if you are an aspiring Java developer today? Your first steps in the language will be as clumsy as mine were back in 1997. For me, the first “Refactoring” was perfectly timed, because I had already ironed out most of my quirks and got a kickstart “from journeyman to master” out of it. But what if you are still an apprentice in Java programming? Then you should read “Java by Comparison” as the prequel book to the original “Refactoring”.

The book works by showing you actual Java code and discussing its bad and ugly parts. Then it proposes a better solution in actual code – something many software development books leave out as an easy exercise for the reader. You will see this pattern again and again: Java code with problems, a review of the code, and a revised version of the same code. Each topic is condensed into two pages, making it a perfect 5-minute read (repeated 70 times).

If you read one chapter each morning on your commute to work and another one on your way back, you’ll advance from apprentice level to journeyman level in less than two months. And you can apply the knowledge from each chapter to your daily code right away. Imagine spending your commute with a friendly mentor who shows you actual code (before and after) instead of only dropping wise quotes that tell you what’s wrong but never show you a specific example of “right”.

All topics and chapters in the book are thoroughly researched and carefully edited. You can feel that the authors have explained each improvement over and over again to their students, and you might notice the little hints for further reading. They start small and slow, but speed up and don’t shy away from harder and more complex topics later in the book. You’ll learn about tests, immutability, concurrency and naming (the best part of the book, in my opinion), as well as how to use code and API comments to your advantage and how not to express conditional logic.

Overall, the book provides a solid groundwork for good code. I don’t necessarily agree with all tips and rules, but that is to be expected. It is a collection of guidelines and rules for beginners, and a very good one. Follow these guidelines until you know them by heart; they are the widely accepted common denominator of Java programming, and rightfully so. You can reflect, adapt, improve and iterate based on your experience later on. But it is important to start that journey from the “green zone”, and this book will show you that green zone inside and out.

My younger self would have benefited greatly had this book been around in 1997. It fills the gap between your first steps and your first dance in code.

It’s a beginner’s world

According to Robert C. Martin, the number of software developers worldwide doubles every five years. So my advice to the 20+ million beginners of the next five years out there is to read this book right before “Refactoring”. And reading “Refactoring” at least once is a pleasure you owe to yourself.

The inevitable emergence of domain events

Even if you’ve read the original Domain-Driven Design book by Eric Evans, you’ve probably still not heard about domain events (or DDD Domain Events), as he didn’t include them in the book. He has talked about them a lot since then, for example in the first 30 minutes of a talk he gave in 2009.

Domain Events

In short, domain events are occurrences of “something that domain experts care about”. You should always be on the lookout for these events, because they are integral parts of the interface between the technical world and the domain world. In your source code, both worlds condense into the same constructs, so it isn’t easy (sometimes downright impossible) to tell them apart. But if you are familiar with the concept of “pure fabrication”, you probably know that a single line of code can clearly belong to the technical fabric and still be relevant for the domain. Domain events are one way to separate the two again. But you have to listen to your domain experts – and they probably still won’t tell you the full story about what they care about.
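A minimal sketch of what such an event could look like in Java (the name ItemAdded and its fields are illustrative assumptions, not prescribed by DDD):

import java.time.Instant;

// A domain event: a small, immutable record of something that happened,
// named in the language of the domain experts.
public final class ItemAdded {
  private final String itemId;
  private final Instant occurredAt;

  public ItemAdded(String itemId, Instant occurredAt) {
    this.itemId = itemId;
    this.occurredAt = occurredAt;
  }

  public String itemId() { return itemId; }
  public Instant occurredAt() { return occurredAt; }
}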

Revealed by Refactoring

To underline my point, I want to tell you a story about a software project in a big organization. The software was already in production when my consulting job brought me into contact with the source code. We talked about a specific part of the code that screamed “pure fabrication”, with just a few lines of domain code in between. Our goal was to refactor the code into two parts: one for the domain code and another, bigger one for the technical aspects. In the technical part, some texts got logged into the logfile, like “item successfully written to the database” and “database connection closed”. These were clearly technical aspects of the code that got logged.

One of the texts had a spelling error in it, and I set out to correct it. A developer stopped me: “Don’t do that! They filter for that exact phrase.” That surprised me. Nothing in the code indicated the relevance of that log statement, least of all the necessity of the typo. And I didn’t know who “they” were, or that the logfiles were being searched at all. So I asked a lot of questions and finally understood the situation:

Implicit Domain Events

The developers had implemented the requirements of the domain experts as given in the specification documents. Nothing in there specified the exact text or even the presence of logfile entries. But after the software was done and in production, the business side (including the domain experts) needed to know how many items were added to the system in a given period. And because they needed the information right away and couldn’t wait for the next development cycle, they contacted the operations department (which is separate from the development department) and got copies of the logfiles. They scanned the logfiles with some crude regular expression magic for the entries (like “item written to the database”) and got their result. The question was answered, the problem solved, and the solution even worked a second time – and a third time, and so on. The one-time makeshift script was used permanently and repeatedly; in fact, it ran every hour and scanned for new items, because it became apparent that the business not only needed the statistics, but wanted to start a business process for each new item (like an editorial review of sorts) in a timely manner.

Pinned Code

Over the course of a few weeks, the purely technical logfile entry in the source code got pinned and converted into a crucial domain requirement, without any formal specification or even notification. Nothing in the source code hinted at the importance of this line or its typo. No test, no comment, no code structure. The line looked exactly the same as before, but suddenly played in another league. Every modification to this line or its surrounding code could hamper the business. A well-intended refactoring could be seen as outright sabotage. The code was sacred, but in an unspoken way. The code had become a minefield.

Extracting the Domain Event

The whole hostage situation could be resolved by revealing the domain event and making it explicit. Let’s say that we model an “item added” domain event and post it in addition to the logfile entry. How we post it is up to the requirements or capabilities of the business department. An HTTP request to another service for every new item is a simple and viable solution. A line of text in a dedicated event log file would be another option. Even an e-mail sent to a human recipient could be discussed. Anything that separates the technical logfile from the business view on the system is better than forbidden code. After the separation, we can refactor the technical parts to our liking and still have tests in place that ensure the domain event gets posted.
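A sketch of what that separation could look like, reusing the ItemAdded event from above (ItemService and the Consumer-based channel are illustrative assumptions; the channel could be backed by an HTTP call, an event log writer or an e-mail sender):

import java.time.Instant;
import java.util.function.Consumer;

public class ItemService {
  private final Consumer<ItemAdded> domainEvents;

  public ItemService(Consumer<ItemAdded> domainEvents) {
    this.domainEvents = domainEvents;
  }

  public void addItem(String itemId) {
    // ... write the item to the database (omitted) ...
    // The logfile entry stays purely technical and free to change:
    System.out.println("item successfully written to the database");
    // The domain event is posted explicitly and can be tested for:
    domainEvents.accept(new ItemAdded(itemId, Instant.now()));
  }
}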

Domain Events are important

These domain events are important parts of your system, because they represent things (or actions) that the business cares about. Even if the business only remembers them after the fact, try to incorporate them in an explicit manner into your code. Not only will you be able to tell domain code and technical code apart easily, but you’ll also get this precious interface between business and tech. Make a list of all your domain events and you’ll know how your system is seen in the domain expert world. Everything else is more or less just details.

What story about implicit domain events comes to your mind? Tell us in a comment or write a blog entry about it. We want to hear from you!

Transforming C-style arrays in Java

Every now and then some customer asks us to fix or improve some important legacy application other people have written. Usually, such projects are fun and it is rewarding to see the improvements both in code and value for the users.

In one of these projects there is a Java GUI application that uses C-style arrays for some of its central data structures:

public class LegoBox  {
  public LegoBrick[] bricks = new LegoBrick[8000];
  public int brickCount = 0;
}

The array length is a constant upper bound and does not reflect the actual number of elements in the array. Elements are added dynamically, so it looks like a typical job for an automatically growing Collection like java.util.ArrayList. Most operations simply iterate over all elements and perform some calculations. But changing such a central part of a performance-sensitive application is not only a lot of work but also risky.

We decided to take an incremental approach to improve code readability and maintainability and measured performance with a large, representative dataset between refactorings. There are two easy alternative APIs that improve working with the above data structure.

Imperative API

A smooth migration from the existing imperative “ask” code (see the “Tell, don’t ask” principle) can be realized by providing a java.util.Iterable over the underlying array.

public int countRedBricks() {
  int redBrickCount = 0;
  for (int i = 0; i < box.brickCount; i++) {
    if (box.bricks[i].isRed()) {
      redBrickCount++;
    }
  }
  return redBrickCount;
}

Code like above is easily transformed to much clearer code like below:

public class LegoBox  {
  public LegoBrick[] bricks = new LegoBrick[8000];
  public int brickCount = 0;

  public Iterable<LegoBrick> allBricks() {
    return Arrays.stream(bricks, 0, brickCount).collect(Collectors.toList());
  }
}

public int countRedBricks() {
  int redBrickCount = 0;
  for (LegoBrick brick : box.allBricks()) {
    if (brick.isRed()) {
      redBrickCount++;
    }
  }
  return redBrickCount;
}
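Note that the allBricks() implementation above copies the live elements into a fresh list on every call. If that copy ever shows up in the performance measurements, the Iterable can also be provided lazily over the array, for example:

public Iterable<LegoBrick> allBricks() {
  // Iterable has a single abstract method, so a lambda suffices;
  // no intermediate list is allocated.
  return () -> Arrays.stream(bricks, 0, brickCount).iterator();
}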

Functional API

A nice alternative to the imperative solution above is a functional interface to the array. In Java 8 and newer we can provide that easily and encapsulate the iteration over our array:

public class LegoBox  {
  public LegoBrick[] bricks = new LegoBrick[8000];
  public int brickCount = 0;

  public <R> R forAllBricks(Function<LegoBrick, R> operation, R identity, BinaryOperator<R> reducer) {
    return Arrays.stream(bricks, 0, brickCount).map(operation).reduce(identity, reducer);
  }

  public void forAllBricks(Consumer<LegoBrick> operation) {
    Arrays.stream(bricks, 0, brickCount).forEach(operation);
  }
}

public int countRedBricks() {
  return box.forAllBricks(brick -> brick.isRed() ? 1 : 0, 0, (sum, current) -> sum + current);
}

The functional methods can be tailored to your specific needs, of course. I just provided two examples of possible functional interfaces and their implementation.

The function + reducer case is a very general interface, used here to implement our “count the red bricks” use case. Alternatively, you could implement this use case with a more specific but easier-to-use filter + count interface:

public class LegoBox  {
  public LegoBrick[] bricks = new LegoBrick[8000];
  public int brickCount = 0;

  public long countBricks(Predicate<LegoBrick> filter) {
    return Arrays.stream(bricks, 0, brickCount).filter(filter).count();
  }
}

public int countRedBricks() {
  return (int) box.countBricks(brick -> brick.isRed());
}

The consumer case is very simple and appears a lot in this specific project, because mutating the array elements is a typical operation here and happens all over the place. A pass over all bricks might look like the sketch below.
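A tiny sketch (setColor and LegoColor are hypothetical, for illustration):

// repaint every brick without exposing the underlying array
box.forAllBricks(brick -> brick.setColor(LegoColor.RED));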

The functional API avoids duplicating the iteration all the time and removes the need for clients to access the array or iterable/collection directly. It is therefore much more in the spirit of “tell”.

Conclusion

The new interfaces allow for much simpler and more maintainable client code and remove a lot of duplicated iteration on the client side. They can be introduced along the way while implementing features requested by the customer.

That way, we invested only minimal effort in cleaner, more maintainable and more error-proof code. When someday all accesses to the public array are encapsulated, we can use the newfound freedom to internalize the array and change it to a better-fitting data structure like an ArrayList.
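A sketch of where this could end up, assuming all client access goes through the new APIs by then:

import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;
import java.util.function.Predicate;

public class LegoBox {
  // The C-style array and the manual element count are gone for good.
  private final List<LegoBrick> bricks = new ArrayList<>();

  public void add(LegoBrick brick) {
    bricks.add(brick);
  }

  public long countBricks(Predicate<LegoBrick> filter) {
    return bricks.stream().filter(filter).count();
  }

  public void forAllBricks(Consumer<LegoBrick> operation) {
    bricks.forEach(operation);
  }
}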

Ten books that shaped me as a software developer – Part I (Books 0 to 4)

Last week, I did a question-and-answer event with students, where the question came up which books had influenced me most as a software developer. I couldn’t answer the question right away, but promised to compile the list with short descriptions of each book’s influence. And here it is – my list of books that left a big mark on my day-to-day work. Others have done the list-of-books thing before me, and most lists contain the same books over and over again. I take that as an indicator that my list isn’t too far off.

Prologue

Before I start the list, I want to say a few things. The list isn’t ordered or ranked. I describe the effects of each book from my current standpoint, sometimes 20 years after the fact. I have read a lot more good, interesting and inspiring books in the last 20 years, and they all added to my work personality. But with all the books on my list, I felt enlightened and vibrant with new ideas. They didn’t just inspire me, they elevated my thinking. And because of this criterion of immediate improvement, one book is missing from the list. It’s the first “serious” software development book I ever read, back in 1998: “Design Patterns”. The book was just too much for me (and my study group peers) to handle so early in our careers. We were in our first year of study and had a lot of other battles to fight. I crossed it off my reading list and moved on. Years later, I re-read it and saw so much insight I had plainly missed the first time, but had gathered elsewhere since. If you want to read this classic, don’t hesitate! If you “only” want to know about design patterns, there’s a better book for that: “Head First Design Patterns“.

The Pragmatic Programmer

More by chance, my co-founder stumbled upon “The Pragmatic Programmer” in 1999 and devoured it. Then he gave the book to me, and it shattered me to my core. I thought I was a decent software developer, and here were Dave Thomas and Andy Hunt, talking about things I didn’t even know existed. A healthy dose of the Dunning-Kruger effect is crucial in everybody’s upbringing, but this book ended my overestimation once and for all and gave my studies a focus and direction I wouldn’t have thought possible before. I owe my whole career to this book, at least in terms of work ethic. I cannot fathom how my professional life would have played out otherwise.

Refactoring

Also in 1999, Martin Fowler wrote his instant classic “Refactoring“. We bought this book at the first chance we got and raced through the pages. I was a Java developer back then, and with most of the examples being in Java, the book needed neither explanation nor translation. It was directly applicable knowledge that gave me years of experience virtually for free. This book is a must-read even 20 years later, and a second edition has just recently been announced, this time with code examples in JavaScript. I thought it was a joke at first, but I guess it makes sense.

Working Effectively With Legacy Code

In 2004, Michael Feathers wrote a book that contains his 20+ years of experience with software development and named it “Working Effectively With Legacy Code“. Well, joke’s on you – I don’t write legacy code, my code is perfect. That hadn’t been my attitude since 1999 (see list entry #1), and I took this book everywhere. It’s a heavy one, but I read it in the tram, right before the movie started in the cinema, during breakfast, lunch and dinner, and in virtually every other circumstance. I realized that reading this book would gain me experience a lot faster than actually writing code, so I just stopped coding for a few weeks. This book answered a lot of mysteries of the form “is there really no better way to do this?” for me. And it introduced me to the concept of code seams, which has permeated my work ever since. I can clearly remember the day when I looked at my existing code again and saw the seams for the first time. It was truly eye-opening.

Analysis Patterns

Martin Fowler was a very productive author in the late nineties. I’ve read most of his books from this period, if with a few years’ delay. “Analysis Patterns” from 1996 arrived on my bookshelf in the early 2000s and was my wake-up call to seeing models instead of actualities. I’ve given this book to many peers, but never witnessed the reaction I had to it: being taught a language (with a graphical notation) that can express actual problems in terms of an overarching solution. Since then, I’ve seen the same solutions applied in many different forms, under many different names and with a lot of different special requirements. But they all derive from the same model. This effect was promised by the “Design Patterns” book but, for me, delivered by “Analysis Patterns”. Even Martin Fowler admits that the book is showing its age, but for me, it’s timeless.

Peopleware

Since the late ’80s, Tom DeMarco and Timothy Lister have written one book after the other. Each book describes a common business-oriented problem and at least one working solution for it. And yet the very same problems still persist in the business world. It’s as if nobody reads books. “Peopleware” was written in 1987, 30 years ago, and discovered by me and my peers in the late 1990s. We talked about this book a lot, as it described a (business) world that we didn’t want to work in. We wanted to do better. In a way, this book was a spark to found our own company and not repeat the mistakes that seemed to be prevalent in our industry. If you’ve ever shaken your head about “the management”, do yourself a favor and read this book. It will pinpoint the precise problem you’ve felt and give you the words to describe it. And if you’ve read “Peopleware”, liked it and want more, there is good news: there is a whole series waiting for you (not just Vienna).

Epilogue to Part I

These are the first five books from my list, with the last entry being more of a catch-all for a whole series. Remember that this isn’t a generic “go and read these books if you want to call yourself a professional software developer” list. I’m not gatekeeping, and it would be useless to even try. These books helped further my career over the last 20 years; they won’t necessarily help you for the next 20. Good books are published every year, you just have to read them.

I’m looking forward to sharing the second part of my list in the next blog post of this series. Stay tuned!

The rule of additive changes

Change is in the nature of software development. The most difficult aspects of the craft revolve around dealing with change. How do you keep software extensible? How do you adapt to new business requirements?

With experience comes the intuition that some kinds of changes are riskier than others. For example, it is often safer to add a new function or type to an application than to change an existing one.

This is because something newly added is not yet strongly connected to the rest of the application. Or at least that’s the assumption. You have yet to decide how the new component interacts with the rest of the application. Usually this is done by a, preferably small, incision into the innards of your software. The first change, the adding itself, should not break anything. If anything, the small incision should be the only dangerous part of the change.

This is a very important concept: adding should not break things! It is so important that I want to give it a name:

The Rule of Additive Changes

Adding something to a well-designed software system should not break existing functionality. Exceptions should be thoroughly documented and communicated.

Systems should always be designed and thought out so that the rule of additive changes holds. Failure to do so will lead to confusing surprises in the best case, and well-hidden bugs in worse cases.

The rule is nothing new, however: it’s a foundation, an axiom, to many other rules, such as the Liskov Substitution Principle:

Inheritance

Quoting from Wikipedia:

“If S is a subtype of T, then objects of type T in a program may be replaced with objects of type S without altering any of the desirable properties of that program”

This relies on subtyping being an additive change: S works at least as well as any T, so it is an extension, an addition. You should therefore design your systems in a way that the Liskov Substitution Principle, and thereby the rule of additive changes, holds: adding a new type to a hierarchy cannot break anything.
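In Java terms, a constructed example (Shape, Rectangle and Circle are made up for illustration):

abstract class Shape {
  abstract double area();
}

class Rectangle extends Shape {
  private final double width, height;

  Rectangle(double width, double height) {
    this.width = width;
    this.height = height;
  }

  double area() {
    return width * height;
  }
}

// Written once, against the supertype only:
public double totalArea(Iterable<Shape> shapes) {
  double sum = 0;
  for (Shape shape : shapes) {
    sum += shape.area(); // works for every current and future Shape
  }
  return sum;
}

// Adding this subtype later is purely additive: totalArea and all
// other clients of Shape keep working without being touched.
class Circle extends Shape {
  private final double radius;

  Circle(double radius) {
    this.radius = radius;
  }

  double area() {
    return Math.PI * radius * radius;
  }
}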

Whitelists vs. Blacklists

Blacklists often violate the rule of additive changes. Once you add a new element to the domain, the domain behind the blacklist changes as well, while the domain behind a whitelist is unaffected. Ultimately, either can be what you want, but usually the more contained change breaks less – and you can still change the whitelist explicitly later!

Note that systems that filter classes from a hierarchy via RTTI or, even more subtly, via ask-interfaces, are blacklists. Those systems can break easily when new types are introduced to a hierarchy. Extra care needs to be taken to make sure the rule of additive changes holds for these systems.
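Sticking with the Shape sketch from above (again a constructed illustration):

// Blacklist via RTTI: everything NOT listed passes. Adding a new
// subtype like Circle silently widens this set – a non-additive effect.
public boolean passesBlacklist(Shape shape) {
  return !(shape instanceof Rectangle);
}

// Whitelist: only the listed types pass. Adding Circle elsewhere leaves
// this set untouched; Circle has to be opted in explicitly later.
public boolean passesWhitelist(Shape shape) {
  return shape instanceof Rectangle;
}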

Introspection and Reflection

Without introspection and reflection, programs cannot know when you add a new type or a new function. With introspection, however, they can. Any additive change can then also be an incision point. Therefore, you need to be extra careful when designing systems that use introspection: they should not break existing functionality when something is added.

For example, adding a function to enable a specific new piece of functionality is okay. A common case of this would be adding a function to a controller in a web framework to create a new action. This will not interfere with existing functionality, so it is fine.

On the other hand, adding a member to a controller should not disable or change functionality. Adding a special member for “filtering” or some kind of security setting falls into this category. You think you’re merely adding something, but in fact you are modifying. A system that relies on such behavior therefore violates the rule of additive changes. Decorating the member is a much better alternative, as that makes it clear that you are indeed modifying something, which might break existing functionality.

Not all languages or frameworks provide this possibility though. In that case, the only alternative is good communication and documentation!
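To make the controller example concrete, here is a sketch – the framework and its naming conventions are invented for illustration:

public class OrderController {
  // Additive: a reflective framework that registers one action per public
  // method simply gains a new route here; existing actions are untouched.
  public void list() { /* ... */ }

  public void archive() { /* ... */ } // newly added action, safe

  // Not additive: if the framework picks up a field by naming convention,
  // this seeming "addition" silently modifies all existing actions.
  public String filter = "authenticatedUsersOnly";
}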

Refactoring

Many engineers implicitly assume that the rule of additive changes holds. In his book “Working Effectively With Legacy Code”, Michael Feathers proposes the sprout and wrap techniques for changing legacy software. The underlying idea is the same for both: formulate a potentially breaking change as a mostly additive one, with only a small incision point. In the presence of systems that do not follow the rule of additive changes, such risk minimization does not work at all. For example, adding a function can break a system that relies heavily on introspection – which goes against all intuition.
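For illustration, a minimal sketch of the sprout method technique, in the spirit of Feathers’ own example (Entry is a placeholder type):

import java.util.ArrayList;
import java.util.List;

public class TransactionGate {
  public void postEntries(List<Entry> entries) {
    // ... untested legacy code before ...
    List<Entry> entriesToAdd = uniqueEntries(entries); // the single, small incision
    // ... untested legacy code after, now working on entriesToAdd ...
  }

  // The sprout: brand-new, additive and fully testable in isolation.
  List<Entry> uniqueEntries(List<Entry> entries) {
    List<Entry> result = new ArrayList<>();
    for (Entry entry : entries) {
      if (!result.contains(entry)) {
        result.add(entry);
      }
    }
    return result;
  }
}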

Conclusion

This rule is not a new concept. It is something that many programmers already have in their heads, though possibly fractured into lots of smaller guidelines. But it is one overarching concept, and it needs a name to be accessible as such. For me, it makes reasoning about systems at large a lot clearer.

From ugly to pretty – Three steps is all it takes

I have been holding lectures in software engineering for over a decade now. One major topic is testing, specifically unit tests. Other cornerstones are refactoring and code readability. So whenever I have the chance to challenge my students with cross-topic aspects of software development, it’s almost always a source of insight for them and especially for me. But one golden moment holds a special place in my memory. This is the (rather elaborate, sorry) story of that moment.

During a lecture about unit tests with JUnit, my students had the task of developing tests for a bank account class. That’s about as boring as testing can get – the account was related to a customer and had a current balance. The customer can withdraw money, but only some customers may overdraw their account. To spice things up a bit, we also added the mock object framework EasyMock to the mix. While I would recommend other mock frameworks for production use, the learning curve of EasyMock is just about right for first-time exposure in a “sheep dip” fashion.

Our first test dealt with drawing money from an empty account that can be overdrawn:

@Test
public void canWithdrawOnCredit() {
  Customer customer = EasyMock.createMock(Customer.class);
  EasyMock.expect(customer.canOverdraw()).andReturn(true);
  EasyMock.replay(customer);
  Account account = new Account(customer);
  Euro required = new Euro(30);

  Euro cash = account.withdraw(required);

  assertEquals(new Euro(30), cash);
  assertEquals(new Euro(-30), account.balance());
  EasyMock.verify(customer);
}

The second test made sure that this withdrawal behaviour only works for customers with sufficient credit standing. We decided to pay out nothing (0 Euro) if the customer tries to withdraw more money than his account currently holds:

@Test
public void cannotTakeUpCredit() {
  Customer customer = EasyMock.createMock(Customer.class);
  EasyMock.expect(customer.canOverdraw()).andReturn(false);
  EasyMock.replay(customer);
  Account account = new Account(customer);
  Euro required = new Euro(30);

  Euro cash = account.withdraw(required);

  assertEquals(Euro.ZERO, cash);
  assertEquals(Euro.ZERO, account.balance());
  EasyMock.verify(customer);
}

As you can tell, a lot of copy and paste went into the creation of this test. Just look at the name of the local variable “required” – it’s misleading now. Right up to this point, my main topic had been the usage of the mock framework, not perfect code. So I explained the five stages of normalized mock-based unit tests (initialize, train mocks, execute tested code, assert results, verify mocks – see the annotated test below) and then changed the topic by expressing my displeasure about the duplication and the inferior readability of the code (it even tries to trick you with the “required” variable!). Now it was up to my students to improve our situation (this trick only works a few times per course before they preemptively become even pickier than me). A student accepted the challenge and gave advice.
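For reference, here is the first test again with the five stages marked as comments (a restatement of the code above, nothing changed):

@Test
public void canWithdrawOnCredit() {
  Customer customer = EasyMock.createMock(Customer.class); // 1. initialize
  EasyMock.expect(customer.canOverdraw()).andReturn(true); // 2. train mocks
  EasyMock.replay(customer);
  Account account = new Account(customer);                 // (initialization continues)
  Euro required = new Euro(30);

  Euro cash = account.withdraw(required);                  // 3. execute tested code

  assertEquals(new Euro(30), cash);                        // 4. assert results
  assertEquals(new Euro(-30), account.balance());
  EasyMock.verify(customer);                               // 5. verify mocks
}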

First step: Extract Method refactoring

The obvious first step was to extract the duplicated code into its own method and adjust the call sites via parameters. This is an easy refactoring that will almost always improve the situation. Let’s see where it got us. Here is the extracted method:

protected void performWithdrawalTestWith(
    boolean customerCanOverdraw,
    Euro amountOfWithdrawal,
    Euro expectedCash,
    Euro expectedBalance) {
  Customer customer = EasyMock.createMock(Customer.class);
  EasyMock.expect(customer.canOverdraw()).andReturn(customerCanOverdraw);
  EasyMock.replay(customer);
  Account account = new Account(customer);

  Euro cash = account.withdraw(amountOfWithdrawal);

  assertEquals(expectedCash, cash);
  assertEquals(expectedBalance, account.balance());
  EasyMock.verify(customer);
}

And the two tests, now really concise:

@Test
public void canWithdrawOnCredit() {
  performWithdrawalTestWith(
      true,
      new Euro(30),
      new Euro(30),
      new Euro(-30));
}

@Test
public void cannotTakeUpCredit() {
  performWithdrawalTestWith(
      false,
      new Euro(30),
      Euro.ZERO,
      Euro.ZERO);
}

Well, that did indeed resolve the duplication. But the test methods now lacked any readability. They looked as if somebody had extracted all the semantics out of the code. We were unhappy, but decided to interpret the current code as an intermediate step towards the second refactoring:

Second step: Introduce Explaining Variable refactoring

In the second step, the task was to reintroduce the semantics into the test methods. All arguments were nameless, so that was our angle of attack. By introducing local variables, we gave the arguments meaning again:

@Test
public void canWithdrawOnCredit() {
  boolean canOverdraw = true;
  Euro amountOfWithdrawal = new Euro(30);
  Euro payout = new Euro(30);
  Euro resultingBalance = new Euro(-30);

  performWithdrawalTestWith(
      canOverdraw,
      amountOfWithdrawal,
      payout,
      resultingBalance);
}

@Test
public void cannotTakeUpCredit() {
  boolean canOverdraw = false;
  Euro amountOfWithdrawal = new Euro(30);
  Euro payout = Euro.ZERO;
  Euro resultingBalance = Euro.ZERO;

  performWithdrawalTestWith(
      canOverdraw,
      amountOfWithdrawal,
      payout,
      resultingBalance);
}

That brought the meaning back to the test methods, but didn’t improve readability. The code wasn’t intentionally cryptic any more, but still far from being intuitively understandable – and that’s what really readable code should be. If even novices can read your code fluently and grasp the main concepts on the first pass, you’ve created expert code. I challenged the student to transform the code further, without any idea how to carry on myself. My student hesitated, but came up with the decisive refactoring within seconds:

Third step: Rename Variable refactoring

The third step doesn’t change the structure of the code, but its approachability. Instead of naming the local variables after their usage in the extracted method, we name them after their purpose in the test method. A first-time reader won’t know about the extracted method (and preferably shouldn’t need to), so it’s not in the reader’s best interest to foreshadow its details. Instead, we concentrate on telling the reader a coherent story:

@Test
public void canWithdrawOnCredit() {
  boolean aCustomerThatCanOverdraw = true;
  Euro heWithdraws30Euro = new Euro(30);
  Euro receivesTheFullAmount = new Euro(30);
  Euro andIsNow30EuroInTheRed = new Euro(-30);

  performWithdrawalTestWith(
      aCustomerThatCanOverdraw,
      heWithdraws30Euro,
      receivesTheFullAmount,
      andIsNow30EuroInTheRed);
}

@Test
public void cannotTakeUpCredit() {
  boolean aCustomerThatCannotOverdraw = false;
  Euro heTriesToWithdraw30Euro = new Euro(30);
  Euro butReceivesNothing = Euro.ZERO;
  Euro andStillHasABalanceOfZero = Euro.ZERO;

  performWithdrawalTestWith(
      aCustomerThatCannotOverdraw,
      heTriesToWithdraw30Euro,
      butReceivesNothing,
      andStillHasABalanceOfZero);
}

If the reader is able to ignore some crude verbalization and special characters, he can read the test out loud and instantly grasp its meaning. The first lines of every test method are a bit confusing, but necessary given Java’s lack of named parameters.

The result might remind you a lot of Behavior-Driven Development notation, and that’s probably no coincidence. Within a few minutes of that programming exercise, my students taught themselves to think in scenarios or stories when approaching unit tests. I couldn’t have taught it any better – instead, I got enlightened by this exercise, too.

Summary of the Schneide Dev Brunch at 2012-03-25

This summary is a bit late, and my only excuse is that the recent weeks were packed with action. But the good news is: the Schneide Dev Brunch is still alive and gaining traction, with an impressive number of participants at the most recent event. The Schneide Dev Brunch is a regular brunch in the sense that we gather for a late breakfast or early dinner on a Sunday, except that all attendees want to talk about software development (and various other topics). If you bring a software-related topic along with your food, everyone has something to share. We were able to sit in the sun on our roof garden and enjoy the first warm spring weekend.

We had to do a round of introductions because there were quite a few new participants this time. And they brought good topics and insights with them. Let’s have a look at the topics we discussed:

Checker Framework

This isn’t your regular Java framework, meant to reside alongside all the other jar files in your dependency folder. The Checker Framework enhances Java’s type system with “pluggable types”. You have to integrate it into your runtime, your compiler and your IDE for best results, but after that you’re nothing less than a superhero among regulars. Imagine pluggable types as additional layers on your class hierarchy, along the z-axis: you have multiple layers of type hierarchies and can include them in your code to aid your programming tasks. A typical use case is compiler-based null checking, while something like Perl’s taint mode is just around the corner.
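A tiny sketch of the null-checking use case (annotation package names as in current Checker Framework releases; the Greeter class is made up for illustration):

import org.checkerframework.checker.nullness.qual.NonNull;
import org.checkerframework.checker.nullness.qual.Nullable;

class Greeter {
  String greet(@NonNull String name) {
    return "Hello, " + name;
  }

  String maybeGreet(@Nullable String input) {
    // The Nullness Checker would reject calling greet(input) directly,
    // because input might be null here. After the explicit check, the
    // checker narrows the type and accepts the call:
    return (input == null) ? "Hello, stranger" : greet(input);
  }
}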

But, as our speaker pointed out, after a while the rough edges of the framework show up. It is still somewhat academic and sometimes lacks integration. It’s a great help until it eventually becomes a burden.

Hearing about the Checker Framework left us excited to try it sometime. At the very least, it’s impressive to see what you can do with a little tweaking at the compiler level.

Getting Stuck

A blog entry by Jeff Wofford inspired one of us to talk about the notion of “being stuck” in software development. Jeff Wofford himself wrote a sequel to the blog entry, differentiating four kinds of stuck. We could relate to the concept, having seen it in the wild before. The notion of “yak shaving” soon entered the discussion. In summary, we discussed the different types of being stuck and getting stuck and what we think about them. While there was no definite result, everyone could take some insight away from the debate.

Zen to Done

One topic was a review of the book Zen to Done, on self-organization and productivity improvement. The methodology can be compared to “Getting Things Done“, but is easier to start with. It defines a bunch of positive habits to try to establish in your everyday life. Once you’ve tried them all, you’ll probably know what works best for you and what just doesn’t resonate at all. On a conceptual level, you can compare Zen to Done to the Clean Code Developer; both implement the approach of “little steps” and continuous improvement. Very interesting, and readily available for your own survey. There even exists a German translation of the book.

Clean Code Developer mousepads

Speaking of the Clean Code Developer: we at the Softwareschneiderei just published our mousepads for the Clean Code Developer on our blog. During the Dev Brunch, we reviewed the mousepads and recognized the need for an English version. Stay tuned for it!

Book: Making software

The book “Making Software” is a collection of essays from experienced developers, managers and scientists describing the habits, beliefs and fallacies of modern software development. Typical of a book by many different authors is the wide range of topics and the varying quality in terms of content, style and originality. The book gets a recommendation because there should be some interesting reads in it for everyone. One essay was particularly interesting to the reviewer: “How effective is Test-Driven Development?” by Burak Turhan and others. The article treats TDD like a medicine in a clinical trial, trying to determine its primary effects, the most effective dosage and the unwanted side effects. Great fun for every open-minded developer, and the origin of a little joke: if there were a pill you could take to improve your testing, would a placebo pill work, too?

Book: Continuous Delivery

This book is the starting point of this year’s hype: “Continuous Delivery” by Jez Humble and others. Does it live up to the hype? In the opinion of our reviewer: yes, mostly. It’s a solid description of all the practices and techniques that followed continuous integration. The Clean Code Developer listed them as “Continuous Integration II” until the book appeared and gave them a name. The book is a highly recommended read for the coming years. Hopefully, the practices become state of the art for most projects in the near future, just as happened with CI. The book has a lot of content, but doesn’t shy away from repetition either. You should read it in one piece, because later chapters tend to refer to earlier content quite often.

Three refactorings to grace

The last topic was the beta version of an article about the difference that three easy refactorings can make to test code. The article was, in a way, an answer to one participant’s statement that he doesn’t follow the DRY principle in test code. It is only available in German right now, but will probably be published on this blog in a proper English translation soon.

Epilogue

This Dev Brunch was a lot of fun and had a lot more content than listed here. Some of us even got sunburnt in the first really sunny weather of the year. We are looking forward to the next Dev Brunch at the Softwareschneiderei. And as always, we are open to guests and future regulars. Just drop us a note and we’ll invite you over next time.