Gesture Touchscreens might render Paper Prototyping useless

With the advent of highly dynamic, gesture-controlled user interfaces for touchscreens, Paper Prototyping seems to lose its applicability.

Paper Prototyping is a highly effective tool to examine the usability of software, even before it is written. The basic idea of Paper Prototyping is that you perform real (software) tasks with real users, but replace everything technical with low-fi substitutes.

Adventures in Low-fi

A typical Paper Prototyping session looks like this:

  • The user gets his task description and has to perform it with the software
  • The computer screen is replaced by an arrangement of paper sheets on a desk
  • The graphical user interface is replaced by hand-drawn copies on paper
  • The computer itself is replaced by a human, mimicking the software responses to input
  • The user operates by finger-pointing or writing with a pen

Advantages of Paper Prototyping

The whole situation described above seems awkward at first glance, but is really rewarding for a project in its early stages. The customer has to provide real end user tasks and enough details of the solution to make up a prototype. The team has to produce reasonable drafts of the software GUI and come up with enough understanding of the processes and tasks involved to survive the session without major outages.

The result of a Paper Prototyping session can be used in various ways:

  • Detailed specification of the GUI
  • Use Case or User Story (acted out already)
  • Mock screenshots for the user manual
  • Data for initial acceptance tests

Classical user interface vs. gestures

This approach worked very well as long as the user only had a mouse (pointing, clicking) and a keyboard (writing) at hand. Even then, advanced features like grabbing (drag&drop) or automatic scrolling challenged the creativity of the prototypers. But most GUIs were rather dull and static. The perfect playground for Paper Prototyping.

With the advent of touchscreens, we soon realized that pointing and clicking only needs one finger out of ten. Gestures were introduced to keep all our fingertips busy and to enrich the interaction between user and GUI. We instantly understand the “zoom in” or “scroll down” gestures because they resemble natural behaviour (at least for some of us).

In the wake of gestures, the GUI of our software gets more and more dynamic. The GUI has to be minimalistic so we can control it even with stubby fingers (the new handicap of our generation; compare cell phone keypads). Detailed information has to be provided on demand and only temporarily. Everything can be manipulated. The classical approach of tabbing through a form (via carefully designed tab orders) isn’t that suitable anymore.

Gestures vs. Paper Prototyping

When using a Paper Prototype, the throughput of scribbled paper is enormous even with the classical GUIs. The more dynamic a dialog is, the more different parts need to be prepared in various states and locations (depending on the fragmentation of the paper screen). With gestures on a touchscreen, the user needs to be able to perform them on the (paper) screen. Most touchscreen interfaces heavily depend on the (simulated) physical interaction between the fingers and some drawn “objects” on the interface. This is the moment when Paper Prototyping falls short of resembling the real interaction. You just can’t fiddle that fast with all the paper shreds.

No solution yet

I observed this effect when performing a Paper Prototyping workshop with my students. The interfaces with classical mouse/keyboard handling performed well in the sessions. Interfaces for touchscreens (iPhone apps were the big newcomer here) just didn’t work out well, especially when downsized to palm size. We weren’t able to come up with a viable solution to make Paper Prototyping work again for touchscreens and gestures.

Any ideas out there?

Readability of Guard Clauses in Methods

A little story about two opinions on readability of methods containing if-clauses.

Browsing through the code base of one of our customers I frequently stumbled over methods that were roughly structured like this:

void theMethod()
{
  if (some_expression)
  {
    // rest of the method body
    // ...
  }
  // no more code here!
}

And most of the time I was tempted to refactor the method using a guard clause, like so:

void theMethod()
{
  if (!some_expression)
  {
    return;
  }
  // rest of the method body
  // ...
}

because this is far more readable for me. When I noticed that the methods were all written by the same guy I told him about my refactoring ideas, in absolute certainty that he would agree with me. It came as quite a surprise that, in fact, he didn’t agree with me at all. Even something like this:

void theMethod()
{
  if (some_expression)
  {
    // some code
    // ...
    if (another_expression)
    {
      // some more code
      // ...
    }
    // no more code here ..
  }
  // ... and here
}

was in his eyes far more readable than the refactored version with guard clauses. His rationale was that guard clauses make it harder to see the program flow through the method, and that a nested if(…) structure like the one above is well suited to express slightly more complicated flows.

All my talks about crappy methods and the downsides of highly indented code were not able to change his mind.

I admit that I can somewhat understand his point about the visibility of the program flow through the method. And sure, the (nested) ifs increase indentation and the number of possible code paths, but since there are no elses and no code after the if-blocks, does that really increase the overall complexity?

Well, I still would prefer smaller methods with guard clauses, but as you can see, to a great extent readability lies in the eye of the beholder.

What do you find readable?

Start with the core

What’s the most important feature you cannot live without? Start with it; you can stop anytime, because after that you have a minimal yet usable system.

If you begin your new product, start from the core functionality. Not the core of the system or architecture.
Ask yourself: What’s the most important feature you cannot live without? Just name one, only one. Build this first and only this. Refine or redo it later if it doesn’t suit your needs. Then continue with the next. You always have a minimal but usable product and can stop anytime.
We once had to develop a web-based system where users can file an application which can be reviewed, changed, rated and finally accepted or declined. We started with a web page where you could download a PDF and an email address to send it to when complete. Minimal? Yes. Useless? No. All the functionality which was needed was there. All other stuff could be done via email or phone. It was readily available and usable.
Start with the core.

Verbosity is not Java’s fault

One of Java’s most often heard flaws (verbosity) isn’t really tied to the language itself; it has a deeper source: it comes from the way we use it.

Quiz: What’s one of the most often heard flaws of Java compared to other languages?

Bad performance? That’s a myth that has long been debunked. Slow startup? OK, this can be improved… It’s verbosity, right? Right but wrong. Yes, it is one of the most mentioned flaws, but is it really inherent to the language Java? Do you really think closures, annotations or any other newly introduced language feature will significantly reduce the clutter? Don’t get me wrong here: closures are a mighty construct and I like them a lot. But the source of the problem lies elsewhere: the APIs. What?! You will tell me Java has some great libraries, that these are the ones that let Java stand out! I don’t talk about the functionality of the libraries here, I mean the design of their APIs. Let me elaborate on this.

Example 1: HTML parsing/manipulation

Say you want to parse an HTML page, remove all infoboxes and add your link to a blog box:

        DOMFragmentParser parser = new DOMFragmentParser();
        parser.setFeature("http://xml.org/sax/features/namespaces", false); 
        parser.setFeature("http://cyberneko.org/html/features/balance-tags", false);
        parser.setFeature("http://cyberneko.org/html/features/balance-tags/document-fragment", true);
        parser.setFeature("http://cyberneko.org/html/features/scanner/ignore-specified-charset", true);
        parser.setFeature("http://cyberneko.org/html/features/balance-tags/ignore-outside-content", true);
        HTMLDocument document = new HTMLDocumentImpl();
        DocumentFragment fragment = document.createDocumentFragment();
        parser.parse(new InputSource(new StringReader(html)), fragment);
        XPathFactory factory = XPathFactory.newInstance();
        XPath xpath = factory.newXPath();
        Node infobox = (Node) xpath.evaluate("//*/div[@class='infobox']", fragment, XPathConstants.NODE);
        infobox.getParentNode().removeChild(infobox);
        Node blog = (Node) xpath.evaluate("//*[@id='blog']", fragment, XPathConstants.NODE);
        while (blog.hasChildNodes()) {
            blog.removeChild(blog.getFirstChild());
        }
        blog.appendChild(/*create Elementtree*/);

What you really want to say is:

HTMLDocument document = new HTMLDocument(url);
document.at("//*/div[@class='infobox']").remove();
document.at("//*[@id='blog']").setInnerHtml("<a href='blog_url'>Blog</a>");

Much more concise, easier to read, and it communicates its purpose clearly. The functionality is the same, but what you need to do is vastly different.

  The library behind the API should do the heavy lifting, not the API's user.
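
To make this concrete, here is a minimal sketch of how such a facade could be layered on top of the verbose code above. The names SimpleDocument and NodeHandle are invented for this example; only the standard org.w3c.dom and javax.xml.xpath APIs are assumed:

import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathConstants;
import javax.xml.xpath.XPathExpressionException;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Node;

// Hypothetical facade: wraps a parsed DOM node and hides the XPath machinery.
public class SimpleDocument {
  private final Node root;
  private final XPath xpath = XPathFactory.newInstance().newXPath();

  public SimpleDocument(Node root) {
    this.root = root;
  }

  // Returns a handle to the first node matching the XPath expression.
  public NodeHandle at(String expression) throws XPathExpressionException {
    Node node = (Node) xpath.evaluate(expression, root, XPathConstants.NODE);
    return new NodeHandle(node);
  }

  public static class NodeHandle {
    private final Node node;

    NodeHandle(Node node) {
      this.node = node;
    }

    // Detaches the matched node from its parent.
    public void remove() {
      node.getParentNode().removeChild(node);
    }
  }
}

All the parser feature flags and XPath plumbing stay inside the facade; client code shrinks to document.at("//*/div[@class='infobox']").remove().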

Example 2: HTTP requests

Take this example of sending a POST request to a URL:

HttpClient client = new HttpClient();
PostMethod post = new PostMethod(url);
for (Map.Entry<String, String> param : params.entrySet()) {
    post.setParameter(param.getKey(), param.getValue());
}
try {
    return client.executeMethod(post);
} finally {
    post.releaseConnection();
}

and compare it with:

HttpClient client = new HttpClient();
client.post(url, params);

Yes, there are cases where you want to specify additional attributes or options, but mostly you just want to send some params to a URL. This is the default functionality you want to use, so why not:

  Make the easy and most used cases easy,
    the difficult ones not impossible to achieve.
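
As a sketch of this principle, the Commons HttpClient 3.x calls from above could be wrapped in a convenience method like the following. The Http class and its post method are made-up names, not part of the library:

import java.io.IOException;
import java.util.Map;
import org.apache.commons.httpclient.HttpClient;
import org.apache.commons.httpclient.methods.PostMethod;

// Hypothetical convenience wrapper: the easy, most used case in one call.
public final class Http {

  private Http() {
    super();
  }

  // Posts the given parameters to the URL and returns the status code.
  public static int post(HttpClient client, String url, Map<String, String> params) throws IOException {
    PostMethod post = new PostMethod(url);
    for (Map.Entry<String, String> param : params.entrySet()) {
      post.setParameter(param.getKey(), param.getValue());
    }
    try {
      return client.executeMethod(post);
    } finally {
      post.releaseConnection();
    }
  }
}

The difficult cases remain possible: a second overload could accept a preconfigured PostMethod for the rare situations where you need additional options.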

Example 3: Swing’s JTable

So what happens when you designed your API for one purpose, but people usually use it for another?
The following code displays a JTable filled with attachments, showing their names and additional actions:
(Disclaimer: this one makes heavy use of our internal frameworks)

        JTable attachmentTable = new JTable();
        TableColumnBinder<FileAttachment> tableBinding = new TableColumnBinder<FileAttachment>();
        tableBinding.addColumnBinding(new StringColumnBinding<FileAttachment>("Attachments") {
            @Override
            public String getValueFor(FileAttachment element, int row) {
                return element.getName();
            }
        });
        tableBinding.addColumnBinding(new ActionColumnBinding<FileAttachment>("Actions") {
            @Override
            public IAction<?, ?>[] getValueFor(FileAttachment element, int row) {
                return getActionsFor(element);
            }
        });
        tableBinding.bindTo(attachmentTable, this.attachments);

Now imagine you had to implement this using bare Swing. You need to create a TableModel which is unfortunately based on row and column indexes instead of elements, you need to write your own renderers and editors, not to mention the different listeners which need to map the passed indexes to the corresponding element.
JTable was designed as a spreadsheet-like grid, but most of the time people use it as a list of items. This change in use case needs a change in the API. Now indexes are not a good way to reference a cell; you want a list of elements and a column property. When the usage pattern changes you can write a new library or component, or you can:

  Evolve your API.
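
As a rough sketch of what such an evolution could look like, bare Swing’s index-based TableModel can be wrapped in an element-based one. The ListTableModel name and its getValueFor method are invented here, loosely mirroring the binding framework above:

import java.util.List;
import javax.swing.table.AbstractTableModel;

// A minimal element-based table model: rows map to elements of a list,
// so subclasses never deal with raw row indexes.
public abstract class ListTableModel<T> extends AbstractTableModel {
  private final List<T> elements;
  private final String[] columnNames;

  protected ListTableModel(List<T> elements, String... columnNames) {
    this.elements = elements;
    this.columnNames = columnNames;
  }

  // Subclasses answer in terms of the element, not the row index.
  protected abstract Object getValueFor(T element, int column);

  @Override
  public int getRowCount() {
    return elements.size();
  }

  @Override
  public int getColumnCount() {
    return columnNames.length;
  }

  @Override
  public String getColumnName(int column) {
    return columnNames[column];
  }

  @Override
  public Object getValueAt(int row, int column) {
    return getValueFor(elements.get(row), column);
  }
}

Renderers, editors and listeners can be layered on the same idea, which is essentially what binding frameworks like the one above provide.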

Designed to be used

So why is one API design better than another? The better ones are designed to be used. They have a clearly defined purpose: to get the task done in a simple way. Just that. They don’t want to satisfy a standard or a specification. They don’t need to open up a huge new world of configuration options or preference tweaks.

Call to action

So I argue that to make Java (or your language of choice) a better language and environment, we have to design better APIs. Well-designed APIs help an environment more than just another new language feature. Don’t jump on the next super duper language bandwagon because it has X or Y or any other cool language feature. Improve your (API) design skills! It will help you in every language/environment you use now and will use in the future. Learning new languages is good to give you new viewpoints, but don’t just flee to them.

Speed up your buildbox, Part IV: Beyond the box

This is the fourth and last part of a series on how to boost your build box without much effort. This episode talks about possible measures to increase the build performance when a single box isn’t enough.

In the first three parts of our effort to speed up our buildbox, we replaced the hard disk with a RAM disk, upgraded the CPU to the top-notch model and installed plenty of fast RAM. This brought the build time down from 03:30 minutes to around 02:00 minutes. The CPU frequency was the biggest time saving factor in our case study. Two minutes is as fast as the build can get for our project without fiddling with the actual build process. It’s sufficient for our case, but it may not be for yours.

Even top speed is too slow

Let’s assume we maxed out the hardware and still have a build duration far beyond the magical ten-minute mark. What can we do now? There are two viable options at hand, provided you can exclude the possibility that your build process itself is really inefficient and needs optimization. If it is, it would be better to revise the process instead of the build infrastructure.

Two ways to speed up your build infrastructure

You can go down one or both of two general paths to speed up your build process. To understand the examples, let’s assume the build takes 20 minutes to run on your top-notch build box.

  • Add more build boxes. This is the classical “parallelize it!” approach. It won’t speed up the individual build process, but it enables more builds to run at the same time. This approach won’t change anything if your team checks in seldom, which in itself is an anti-pattern in continuous integration. But if your team commits changes every ten minutes, having at least two build boxes will prevent the second committer from waiting 30 minutes for the CI results. Instead, the results will always be there after 20 minutes. You haven’t exactly sped up your build process, but reduced the maximum waiting time of your committers. For details on the implementation, see below at “Growing a build park“.
  • Chop up your build process. This is known as “staging” or “pipelining” your build. This won’t speed up the individual build process either, but it delivers certain partial results of your build earlier. Let’s assume you can split your build process into four distinct stages: compile, unit test, integration test, package. Whenever a stage yields a result, the committer gets feedback immediately. In our example, this might be every 5 minutes. This has several disadvantages, as discussed for example in the article “The pipeline of doom” by Julian Simpson, but it can lower the waiting time for specific aspects of your build drastically. You haven’t exactly sped up your build process, but the response time for partial results and therefore the average waiting time of your committers. For details on the implementation, see below at “Installing a build pipeline“.

Growing a build park

If you want to reduce the initial waiting delay of a build before it gets processed, or increase the throughput of builds, the build farm pattern is the way to go. By adding slave build machines to your build master, you can distribute the workload onto more shoulders. The best way to set up your infrastructure is to introduce a dedicated master box that only delegates actual builds to its slaves. The master box handles the archiving of build artifacts and deals with the web server requests, while the slaves only perform build tasks. The master box can be of average power, with increased storage size, while the slaves should be ultra-fast, without the need for big disks. Solid state disks or even RAM disks of the slaves can be sized to actual workspace sizes, as that is all that needs to be stored there.

Distributed builds with Hudson

The Hudson continuous integration server is particularly strong at setting up these master/slave scenarios. It’s ridiculously easy to set up a build slave. You basically only need to click on a link to start the slave process. If you happen to have a standard build, everything needed gets downloaded automatically. If you want your slaves to operate automatically, you can install a Windows service, provide an SSH account or write your own script. Usually, slaves are set up in a matter of minutes without hassle. A great idea is to turn powerful colleague boxes into build slaves (aka CI zombies) by booting them from a USB stick. The best way to start with master/slave builds is to turn your current PC into a Hudson slave right now by using the Java Web Start method.

Installing a build pipeline

If you are interested in early but incomplete feedback from your build box, staging your build will help you out. If partitioned right, you’ll receive a series of answers to specific questions from your build process. The questions may be like:

  1. Will it compile?
  2. Will it pass the unit tests?
  3. Will it function (pass the integration tests)?
  4. Will it blend?

OK, the last question is rather unlikely to be answered by your build box. The overall build process will not be any faster, but basic safety test results are reported earlier. If you combine this approach with distributed builds, you can designate specifically tuned machines to different stages. The Hudson continuous integration server has the ability to tag a slave with different labels. You can then configure your build to only run on slaves with the desired label assigned.

Staged builds with Hudson

Staging with the Hudson continuous integration server isn’t as easy as the master/slave feature, but there are some plugins that allow for more complex setups. You might experience some functionality that’s still under development, but basic staging is possible even today. In combination with specialized slave build boxes, this approach can lower your build duration. It is a complex endeavour, though.

Conclusion

Once your single build box is maxed out but still not fast enough, you enter a different realm of continuous integration infrastructure setups. Speeding up a build process beyond the single box isn’t as easy as installing more RAM. But with a fair amount of planning, you have a fair chance to improve the situation. Note that you won’t primarily lower build duration, but increase throughput and utilize partitioning and specialization. These are different measures and might not affect the wall clock time of your build. The combination of staging and distribution is the most powerful setup, but will result in the most complex infrastructure to maintain. Before entering this realm, be sure to apply every possible optimization to your build process, because you won’t leave that realm again soon.

What’s your story on build optimization beyond the box? Drop us a comment.

Aligning the Abstraction Level with constant booleans

Constant booleans can help to maintain a single level of abstraction in one method. They are less expensive than a separate method and a big improvement over a mere comment.

If you have ever done consulting, mentoring or teaching on programming techniques, I’m sure you have experienced joy as well as disappointment: joy when your “students” took your advice and followed it in their day-to-day work, disappointment when they just did what you said as long as you sat next to them but forgot all about it the next day. The disappointing behavior often comes from them not fully appreciating, or not being able to fully recognize, the advantages of your solution. (And as you are the mentor/consultant/teacher, the latter might also be your fault.)

One example for that is the principle to operate on only one level of abstraction within a method or function. See here for a detailed explanation. I have been applying this technique more or less unconsciously already for a long time now and was reminded of it as the Single-Level-of-Abstraction-Principle in Robert C. Martin’s Clean Code.

I have been trying to put this principle into people’s minds for some time now, but often with little success. Sure, they often do see the advantage of arriving at much more readable code, but they often ignore the principle in their own code. Most of the time they just don’t see the necessity to create another method with a meaningful name, or they content themselves with putting a comment above some chunk of lower abstraction code. (The resulting loud screams for an Extract-Method refactoring often remain unheard, too.)

Lately, I did have fairly good success with one little sub-technique of this principle: constant booleans for if-statements. That is, instead of (C++ code):

void someMethod()
{
   if (hard_to_read_boolean_expression_using_lower_abstractions)
   {
      // do stuff
   }
}

you write:

void someMethod()
{
   const bool expressive_name = 
      hard_to_read_boolean_expression_using_lower_abstractions;
   if (expressive_name)
   {
      // do stuff
   }
}

I guess the main reason for the success of this sub-technique is that it increases readability a lot at a cost that is only a tiny bit greater than a simple comment.

True, in many cases it may be even more readable to put the whole if-statement in another method, but using a boolean constant like above is already a big improvement.
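
For comparison, here is a minimal Java sketch of that extract-method variant (the class and the condition are invented for illustration):

// The condition moves into a query method whose name states the abstraction.
public class OrderProcessor {
  private int itemCount;
  private boolean paymentReceived;

  public void process() {
    if (isReadyForShipping()) {
      // do stuff on the higher abstraction level
    }
  }

  // The lower-abstraction details live one level down.
  private boolean isReadyForShipping() {
    return itemCount > 0 && paymentReceived;
  }
}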

FindBugs-driven bughunting in legacy projects

I have been working on a >100k lines legacy project for a while now. We have to juggle customer requests, bug fixes and refactorings, so it is hard to improve the quality and employ new techniques or tools while keeping the software running and the clients happy. Initially there were no unit tests and most of the code had a gigantic cyclomatic complexity. Over the course of time we managed to put the system under continuous integration, added quite a few unit tests and analyzed code “hotspots” and our progress with crap4j.

Normally we get bug reports from our user base or have to test manually to find bugs, but many bugs lurk in parts of the application which are seldom used or only appear in hard-to-reproduce circumstances. A few weeks ago I tried a new approach to bug hunting in legacy projects using FindBugs. Many of you surely know this useful tool, so I just want to describe my experiences using FindBugs in that project. First, a short list of what I encountered and how I dealt with it.

Interesting bugs found in the project

  • There was a calculation using an integer division but returning a double, so the actual computation result was wrong. Yet the error would have been hard to catch, because people rarely recheck a computer’s results (see the sketch after this list). When writing the test associated with the bugfix I found a StackOverflowError, too!
  • There were quite a few null dereferences found, often in constructs like

    if (s == null && s.length() == 0)

    instead of

    if (s == null || s.length() == 0)

    which could be simplified or rewritten anyway. Sometimes there were possible null dereferences on some paths despite several null checks in the code.

  • Many performance bugs which may or may not have an effect on the overall performance of the system, like: new String(), new Integer(12), string concatenation in loops, inefficient usage of java.util.Map.keySet() instead of java.util.Map.entrySet(), etc.
  • Some dead stores of local variables and statements without effect, which could be thrown away or corrected to do the intended thing.
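
Here is a hypothetical reconstruction of the integer division bug from the first item above (the actual project code looked different, of course):

public class Average {

  // Both operands are ints, so the division truncates
  // before the result is widened to double:
  public static double brokenAverage(int sum, int count) {
    return sum / count; // brokenAverage(7, 2) == 3.0
  }

  // Widening one operand first yields the intended result:
  public static double fixedAverage(int sum, int count) {
    return (double) sum / count; // fixedAverage(7, 2) == 3.5
  }
}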

Things you may want to ignore

There are of course some bugs that you may ignore for now, because you know they stem from a common pattern in the team, so abuse and thus errors are extremely unlikely. I, for example, opted to ignore some dozens of “may expose internal representation” findings regarding arrays in interfaces or accessible via getters, because it is a common pattern on the team not to tamper with existing arrays: they are treated as immutable by the team members. Fixing all of those would have taken too much time for too little benefit.

You may opt to ignore the performance bugs too, but they are usually easy to fix.

Tips

  • If you have many findings, fix the easy ones first to be able to see the important ones more easily.
  • Ignore certain bug categories for now, fix them later, when you stumble upon them.
  • Concentrate on the ones that lead to wrong behaviour and crashes of your application.
  • Try to reproduce the problem with a unit test and then fix the code whenever feasible! Tests are great to expose the bug and fix it without unwanted regressions!
  • Many bugs appear in places which need refactoring anyway, so here is your chance to kill two birds with one stone.

Conclusion

With FindBugs you can find common programming errors sprinkled across the whole application in places where you probably would not have looked for years. It can help you to understand some common patterns of your team members and help you all to improve your code quality. Sometimes it even finds some hard to spot errors like the integer computation or null dereferences on certain paths. This is even more true in entangled legacy projects without proper test coverage.

A blind spot of Continuous Integration

Imagine you haven’t changed a continuously integrated project for months when suddenly a unit test breaks. Here’s why CI has a blind spot with flawed tests.

In the early days of April 2008, we updated our Hudson continuous integration (CI) server to a new version. This was no unusual action, as there was a new version every day back then, bringing new features at a rapid rate. What was unusual after the upgrade was that one of the surveilled projects suddenly failed to build.

Sudden (test) failure without a change

The build was started manually, without a code change. The project itself was inactive back then, meaning that no changes had been made for months. And suddenly, a unit test broke. The test had been in the project for two whole years without ever going off. What happened?

Good unit tests

There are rules for good unit tests. A basic set are the A-TRIP rules formulated in the excellent beginner’s book “Pragmatic Unit Testing” by Andy Hunt and Dave Thomas. The failed test clearly disobeyed the “repeatable” rule (the R in A-TRIP): It didn’t produce the same result as before even though the code under test hadn’t changed.

Write repeatable tests or your CI will be blind (partially)

The cause of the failed test was the clock change at the start of the daylight-saving time part of the year. The unit test secured some date calculations by taking the current date and comparing it to future and past dates that got calculated. The calculation went wrong when the daylight-saving mode changed within the calculated period, which was a bug. Repeatability of the unit test was lost when “the current date” entered the code – whether on the unit test or the production code side.

Two years of blindness

How could this bug survive two years without being noticed? The project was under CI surveillance since the beginning, and the unit test to detect the bug was present along with the buggy code. The answer is: we never worked on this project during the weeks of the year when the clocks are adjusted and the bug occurs. This was a coincidence influenced by the customer’s schedule. So every time CI (or we) ran the unit test, it passed. Until that day right after the clock change.

How to avoid this blind spot

There are two things you can do to avoid this scenario:

  • Always inject a fixed “current date” into your code when dealing with date calculations, and only use absolute dates in your unit tests (see the sketch after this list). Time isn’t a healer for your tests, it’s a beast to be tamed.
  • Set up a nightly build for your project that runs once a day even when no changes have been made. It would have caught this bug a year and a half earlier.
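
A minimal sketch of the first measure, with a hand-rolled Clock interface (all names are invented; the point is to keep new Date() out of the calculation code so tests can pin an absolute date):

import java.util.Date;

public class DueDateCalculator {

  // The only source of "the current date" in the calculation code.
  public interface Clock {
    Date now();
  }

  private final Clock clock;

  public DueDateCalculator(Clock clock) {
    this.clock = clock;
  }

  // Production code injects a clock backed by new Date();
  // a unit test injects a clock that returns a fixed, absolute date.
  public Date dueDateInDays(int days) {
    long dayInMillis = 24L * 60 * 60 * 1000;
    return new Date(clock.now().getTime() + days * dayInMillis);
  }
}

With the clock injected, a test can pin dates right before and after a daylight-saving switch and expose such a bug deterministically, independent of the day it runs on.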

To sum it up:

CI only spots bugs when they move (i.e. when the code is changed). Nightly builds provide a (fuzzy) security layer against non-repeatable unit tests. And flawed unit tests provide only illusory security.

Additional background information

After fully understanding the circumstances, we were curious why the customer didn’t notice the bug and asked him about it. The answer was delightful: “Our computers don’t adjust their clocks. Daylight-saving time only causes trouble.” What a wise decision!

For a good comparison of CI vs. Nightly Builds see this blog entry.

The fallacy of “the right tool”

There is a fallacy around Polyglot Programming, especially the term “the right tool for the job”: Programming languages aren’t tools.

Let me start this blog post with a disclaimer: I’m really convinced of the value of multilingual programming and also think that applying the “right tool for the job” is a good thing. But there is a fallacy around this concept in programming that I want to point out here. The fallacy doesn’t invalidate the concept, keep that in mind.

Polyglot cineasts

Let me start with an odd thought: What if there was a movie, a complicated international thriller around a political intrigue, set in more than half a dozen countries? The actors of each country speak their native tongue and no subtitles are provided. Who would be able to follow the plot? Only a chosen few truly polyglot cineasts would ever appreciate the movie. Most of us wouldn’t want to see it.

Polyglot programming

Our last web application project comprised half a dozen languages (Groovy, Java, HTML, CSS, HQL/SQL, Ant). We could easily include more programming languages if we felt the need to. Adding Clojure, Scala or Ruby/JRuby doesn’t sound absurd to us. A programmer capable of knowing and switching between numerous programming languages is called a “Polyglot Programmer“.

The main justification for heterogeneous (polyglot) projects often is the concept of “using the right tool for the job”. The job often is a subtask of the whole project, like building the project, accessing the database, implementing the ever-changing business logic. For each subtask, some other language might outshine the competitors. Besides some reasonable doubt concerning the hidden cost of this approach, there is a misconception of the term “tool”.

Programming languages aren’t tools

If you use a tool in (basic or advanced) engineering, let’s say a hammer to drive some nails into a wooden plate or a screwdriver to decompose your computer, you’ll put the tool aside as soon as “the job” is finished. The resulting product (a new wooden cabinet or a collection of circuit boards) doesn’t include the tool. Most of the time, your job is really finished, without “change requests” to the product.

If your tool happens to be a programming language, you’ll produce source code bound to the tool. Without the tool, the product isn’t working at all. If you regard your compiled binaries as “the product”, you can’t deal with “change requests”, a concept that programmers learn early and painfully. The product of a programmer obviously is source code. And programming languages don’t act as tools, but as materials in this respect. Tools go away, materials stick.

Programming languages are materials

As source code is tied to its programming language, they form a conceptual union. So I suggest changing the term to “using the right material for the job” when speaking about programming languages. This is a more profound decision to make than choosing between a Phillips style or a TORX screwdriver. Materials need to last long after the tools are put aside.

But there are tools, too

In my web application example above, we used a lot of tools. Grails is our framework of choice, Jetty our web container to deploy to, the Spring Framework provides mighty utilities and we used IDEA to bolt it all together. We could easily exchange Jetty for Tomcat or IDEA for Eclipse without changing the source code (the example doesn’t work that easily for Grails and Spring, though). Tools need to be replaceable or even disposable.

Summary

The term “the right tool for the job” cannot easily be applied to programming languages, as they aren’t tools, but materials. This is why polyglot programming is dangerous when used heavily in a single project. It’s easy to end up with a tangled “amalgam project”.

Two more disclaimers:

  • If chosen right, “composite construction” is a powerful concept that unifies the advantages of two materials instead of adding up their drawbacks.
  • Being multilingual is advantageous for a programmer. Just don’t show it all off in one project.

A more elegant way to equals in Java

Implementing equals and hashCode in Java is a basic part of your toolbox. Here I describe a cleaner and less error-prone way to use in your code.

— Disclaimer: I know this is pretty basic stuff but many, many programmers are doing it still wrong —
As a Java programmer you know how to implement equals and that hashCode has to be implemented as well. You use your favorite IDE to generate the necessary code, use common wisdom to help you code it by hand, or use annotations. But there is a fourth way: introducing EqualsBuilder (not the Apache Commons one, which has some drawbacks compared to this one), which implements the general rules for equals and hashCode:

import java.lang.reflect.Array;

public class EqualsBuilder {

  public static interface IComparable {
    public Object[] getValuesToCompare();
  }

  private EqualsBuilder() {
    super();
  }

  public static int getHashCode(IComparable one) {
    if (null == one) {
      return 0;
    }
    final int prime = 31;
    int result = 1;
    for (Object o : one.getValuesToCompare()) {
      result = prime * result
                + EqualsBuilder.calculateHashCode(o);
    }
    return result;
  }

  private static int calculateHashCode(Object o) {
    if (null == o) {
      return 0;
    }
    return o.hashCode();
  }

  public static boolean isEqual(IComparable one,
                                              Object two) {
    if (null == one || null == two) {
      return false;
    }
    if (one.getClass() != two.getClass()) {
      return false;
    }
    return compareTwoArrays(one.getValuesToCompare(),
              ((IComparable) two).getValuesToCompare());
  }

  private static boolean compareTwoArrays(Object arrayOne, Object arrayTwo) {
    if (Array.getLength(arrayOne) != Array.getLength(arrayTwo)) {
      return false;
    }
    for (int i = 0; i < Array.getLength(arrayOne); i++) {
      if (!EqualsBuilder.areEqual(Array.get(arrayOne, i), Array.get(arrayTwo, i))) {
        return false;
      }
    }
    return true;
  }

  private static boolean areEqual(Object objectOne, Object objectTwo) {
    if (null == objectOne) {
      return null == objectTwo;
    }
    if (null == objectTwo) {
      return false;
    }
    if (objectOne.getClass().isArray() && objectTwo.getClass().isArray()) {
        return compareTwoArrays(objectOne, objectTwo);
    }
    return objectOne.equals(objectTwo);
  }

}

The interface IComparable ensures that equals and hashCode are based on the same instance variables.
To use it, your class needs to implement the interface and call the appropriate methods from EqualsBuilder:

public class MyClass implements EqualsBuilder.IComparable {
  private int count;
  private String name;

  public Object[] getValuesToCompare() {
    return new Object[] {Integer.valueOf(count), name};
  }

  @Override
  public int hashCode() {
    return EqualsBuilder.getHashCode(this);
  }

  @Override
  public boolean equals(Object obj) {
    return EqualsBuilder.isEqual(this, obj);
  }
} 

Update: If you want to use isEqual directly, one test should be added at the start:

  if (one == two) {
    return true;
  }

Thanks to Nyarla for this hint.

Update 2: Thanks to a hint by Alex I fixed a bug in areEqual: when an array (especially a primitive one) was passed, equals would return a wrong result.

Update 3: The newly added compareTwoArrays method had a bug: it returned true if arrayTwo was bigger than arrayOne but started with the same elements. Thanks to Thierry for pointing that out.