When it comes to multithreading, better be safe than sorry

Writing multithreaded applications in Java is hard. Here are five problems and how to avoid them without much effort (mostly).

Recently, I attended a code review of the core parts of a web application written in Java. The application is used by a large customer base and occasionally there are error reports and exceptions in the log files. Some of these exceptions are the dreaded ConcurrentModificationExceptions, indicating conflicting read/write access on an unsynchronized collection data structure. In the code review, we found several threading flaws, but not after an exhaustive reading of the whole module. Here, I want to present the flaws and give some advice on how to avoid them:

The public lock

In some parts of the code, methods were defined as synchronized through the method declaration keyword:

public synchronized String getLastReservation() { [...]

While there is nothing wrong with this approach in itself, it can be highly dangerous in combination with synchronized blocks. The code above effectively wraps a synchronized block that uses the object instance (this) as its lock. No information of an object is more publicly visible than the object reference (this), so you have to check all direct or indirect clients of this object to see whether they synchronize on this instance, too. If they do, you have chained two code blocks together, probably without any mention of this fact. The least harmful consequence will be lost performance, because your code isn’t locked as fine-grained as it could be.

The easiest way to avoid these situations is to always hide the locks. Try not to share one object’s locks with other objects. If you choose publicly accessible locks, you can never be sure that nobody else is using them.
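
A minimal sketch of what a hidden lock might look like (class and field names are invented for illustration): the lock is a private final field that no client can ever synchronize on by accident.

public class ReservationService {
    // private lock object: invisible to clients, so nobody can chain onto it
    private final Object reservationLock = new Object();
    private String lastReservation;

    public String getLastReservation() {
        synchronized (reservationLock) {
            return lastReservation;
        }
    }

    public void noteReservation(String reservation) {
        synchronized (reservationLock) {
            this.lastReservation = reservation;
        }
    }
}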

The subtle lock change

In one class, there were both instance and class (static) methods, using the synchronized keyword:

public synchronized String getOrderNumberOf(String customerID) { [...]
public synchronized static int getTotalPendingOrders() { [...]

And while they were both accessing the same collection data structure (a static hashmap), they were using different locks. The lock of the instance method is the instance itself, while the lock of the static method is the class object of the type. This is very dangerous, as it can be easily missed when writing or altering the code.

The best way to prevent this problem is to avoid the synchronized modifier on methods completely. State your locks explicitly, all the time.
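
One possible shape of such explicit locking, sketched with invented names: both the instance method and the static method guard the shared static map with the same explicitly named lock.

import java.util.HashMap;
import java.util.Map;

public class OrderRegistry {
    // one explicit lock for the shared static map, used by instance and static methods alike
    private static final Object ORDERS_LOCK = new Object();
    private static final Map<String, String> PENDING_ORDERS = new HashMap<String, String>();

    public String getOrderNumberOf(String customerID) {
        synchronized (ORDERS_LOCK) {
            return PENDING_ORDERS.get(customerID);
        }
    }

    public static int getTotalPendingOrders() {
        synchronized (ORDERS_LOCK) {
            return PENDING_ORDERS.size();
        }
    }
}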

Partial locking

In a few classes, collection data types like lists were indeed synchronized by internal synchronized blocks in the methods, using the private collection instance as the lock. The synchronized blocks were applied to the altering methods like putX(), removeX() and getX(). But the toString() method, which builds a comma-separated list of the textual list entries, wasn’t synchronized on the list. The method contained the following code:

public String toString() {
    StringBuilder result = new StringBuilder();
    for (String entry : this.list) {
        result.append(entry);
        result.append(",");
    }
    [...]
    return result.toString();
}

I’ve left out some details and special cases, as they aren’t relevant here. The problem with the foreach loop is that it uses an anonymous Iterator over the list, which relentlessly monitors the list for any changes and throws a ConcurrentModificationException as soon as one of the properly synchronized sections changes it. The toString() method was used to store the list to a session-dependent data storage. Every once in a while, the foreach loop threw an exception and failed to properly persist the list data, resulting in data loss.

The most straightforward solution to this problem might be to add the missing synchronized block in the toString() method. If you don’t want to block the user session while writing to disk, you might traverse the list without an Iterator (and be careful with your assumptions about valid indices) or work on a copy of the list, given that an in-memory copy of the list would be cheap. In an ACID system scenario, you should probably choose to complete your synchronized block guards.
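
A sketch of the copy-based variant, assuming the altering methods synchronize on the private list instance (as they did in the reviewed code):

public String toString() {
    java.util.List<String> snapshot;
    synchronized (this.list) {   // same lock the altering methods use
        snapshot = new java.util.ArrayList<String>(this.list);
    }
    // iterating the private copy cannot throw a ConcurrentModificationException,
    // and the user session isn't blocked while the string is built or written to disk
    StringBuilder result = new StringBuilder();
    for (String entry : snapshot) {
        result.append(entry);
        result.append(",");
    }
    return result.toString();
}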

Locking loophole

Another problem was a collection that was synchronized internally but could be accessed through a getter method. No client could safely modify or traverse the collection, because they had the collection, but not the lock object (which happened to be the collection itself, too, but who can really be sure about that in the future?). Since it would be ridiculous to also provide a getter for the lock object (always hide your locks, remember?), the better solution is to refactor the client code to a “tell, don’t ask” style.

To prevent a scenario where a client can access a data structure but not its lock, clients shouldn’t be able to get hold of the data structure at all; instead, they pass “command objects” to it. This is a perfect use case for closures. Effectively, you’ll end up with something like Function or Operation instances that are applied to every element of the collection within a synchronized block and perform your functionality on them. Have a look at op4j for inspirational syntax.
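
A sketch of that “tell, don’t ask” style in plain Java (the interface and class names are invented; op4j or real closures would make this more concise):

interface Operation<T> {
    void executeOn(T element);
}

class NameRegistry {
    private final java.util.List<String> names = new java.util.ArrayList<String>();

    // clients never see the list or its lock; they only hand in what should be done with each element
    public void forEachName(Operation<String> operation) {
        synchronized (this.names) {
            for (String name : this.names) {
                operation.executeOn(name);
            }
        }
    }
}

A client then passes in an Operation instance and never needs to know how (or even that) the traversal is synchronized.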

Local locking

This was the worst of all problems and the final reason for this blog entry: In some methods, the lock objects were local variables. In summary, these methods looked like this:

public String getData() {
    Object lock = new Object();
    synchronized (lock) {
        [...]
    }
}

Of course, it wasn’t that obvious. The lock objects were propagated to other methods, stored in data structures, removed from them, and so on. But in the end, each caller of the method got his own lock and could henceforth wreak havoc in code that appeared very well synchronized at first glance. In its plain form, the error is too obviously wrong to be widespread. The real problem was the obfuscation around it. It took us some time to really understand what was going on and where all those lock objects actually came from.
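
The fix itself is trivial once the obfuscation is removed; a minimal sketch (class and field names invented) of what the code should have looked like:

public class DataHolder {
    // one shared lock field instead of a fresh new Object() per call
    private final Object dataLock = new Object();
    private String data = "";

    public String getData() {
        synchronized (dataLock) {
            return data;
        }
    }

    public void setData(String newData) {
        synchronized (dataLock) {
            data = newData;
        }
    }
}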

My final advice is: If you have to deal with multithreading, don’t outsmart yourself and the next fellow programmer by building complex code structures or implicit relationships. Be as concise and explicit as you can be. Less clutter is more when dealing with threads. The core problem is the all-or-none law of thread synchronization: Either you’ve got it all right or you’ve got it all wrong – you just don’t know yet.

Hide your locks, name your locks explicitly, reduce the scope of necessary locking so that you can survey it easily, never hand out your locked data, and, most importantly, remove all clutter around your locking structures. This might make the difference between “just works” and endless ominous bug reports.

Embedding Python into C++

In one of our projects the requirement to run small user-defined Python scripts inside a C++ application arose. Thanks to Python’s C-API, nicknamed CPython, embedding (really) simple scripts is pretty straightforward:

Py_Initialize();
const char* pythonScript = "print 'Hello, world!'\n";
int result = PyRun_SimpleString(pythonScript);
Py_Finalize();

Yet, this approach neither allows running extensive scripts nor provides a way to exchange data between the application and the script. The result of this operation merely indicates whether the script was executed properly, by returning 0, or -1 otherwise, e.g. if an exception was raised. To overcome these limitations, CPython offers another, more versatile way to execute scripts:

PyObject* PyRun_String(const char* pythonScript, int startToken, PyObject* globalDictionary, PyObject* localDictionary)

Besides the actual script, this function requires a start token, which should be set to Py_file_input for larger scripts, and two dictionaries containing the exchanged data:

PyObject* main = PyImport_AddModule("__main__");
PyObject* globalDictionary = PyModule_GetDict(main);
PyObject* localDictionary = PyDict_New();
PyObject* result = PyRun_String(pythonScript, Py_file_input, globalDictionary, localDictionary);

Communication between the application and the script is done by inserting entries to one of the dictionaries prior to running the script:

PyObject* value = PyString_FromString("some value");
PyDict_SetItemString(localDictionary, "someKey", value);

Doing so makes the variable “someKey” and its value available inside the Python script. Accessing the produced data after running the Python script is just as easy:

char* result = PyString_AsString(PyDict_GetItemString(localDictionary, "someKey"));

If a variable is created inside the Python script, this variable also becomes accessible from the application through PyDict_GetItemString (or PyDict_GetItem), even if it was not entered into the dictionary beforehand.

The following example shows the complete process of defining variables as dictionary entries, running a small script and retrieving the produced result in the C++ application:

Py_Initialize();
//create the dictionaries as shown above
const char* pythonScript = "result = multiplicand * multiplier\n";
PyDict_SetItemString(localDictionary, "multiplicand", PyInt_FromLong(2));
PyDict_SetItemString(localDictionary, "multiplier", PyInt_FromLong(5));
PyRun_String(pythonScript, Py_file_input, globalDictionary, localDictionary);
long result = PyInt_AsLong(PyDict_GetItemString(localDictionary, "result"));
cout << result << endl;
Py_Finalize();

Readability of Boolean Expressions

Readability of boolean expressions lies in the eyes of the beholder.

Following up on various previous posts on code readability and style I want to provide two more examples today – this time under the common theme of “handling of boolean values”.

Consider this (1a):

bool someMethod()
{
  if (expression) {
    return true;
  } else {
    return false;
  }
}

Yes, there are people who consider this more readable than (1b)

bool someMethod()
{
  return (expression);
}

Another example is this (2a):

  if (someExpression() == true)
    ...

versus my preferred version (2b):

  if (someExpression())
    ...

So what could be the reason for these different viewpoints? One explanation I thought of is as follows: Let’s say you have a background in C and you are therefore used to doing something like:

#define FALSE (0)
#define TRUE (!FALSE)

In other words, you may not see boolean as a type of its own, like int and double, with a well-defined value range. Instead you see it more like an enumerated type, which makes it feel very natural to do an expression == true comparison.

At the same time, it doesn’t feel very natural to see the result of a boolean expression as being of type bool, with all the consequences – e.g. being able to return it immediately, as in the first example.

Another explanation is that 1a and 2a are as verbose as can be. You don’t have to make any mental effort to understand what the code does.

While these may be possible explanations, my guess is that most of you, like me, still see 1a and 2a as unnecessary visual clutter and consider 1b and 2b far more readable.

Tests may remember the spec better than the customer or yourself

We have had an application in maintenance mode for some years now. One part of the app displays messages in a certain format. They contained %-characters, which have a special meaning. Both we and our customer thought they had something to do with encoding line endings or some such. One day our customer reported missing parts within these messages. We dove into the issue, analysed the raw messages containing a few %-signs and noticed some weird-looking code:

public String parse(String message) {
    StringChunker tok = new StringChunker(message, Text.PERCENT);
    DirectChunkBuffer result = new DirectChunkBuffer(Text.NEWLINE);
    if (tok.hasMoreChunks()) {
        result.add(tok.getNextChunk());
    }
    return result.toString();
}

The if-statement feels unusual here, as most would expect a while loop essentially splitting the original message by % and putting it together again with newlines in between. Almost immediately we suspected a bug, triggered by malformed raw messages, that had never occurred in production until now.

But our unit tests clearly documented the current behaviour as correct. So we decided to talk to our customer again. He asked his experts, and they confirmed the behaviour and explained their workflow. The %-characters were used as comment characters to hide text blocks the expert workers used as templates. Nothing after the first %-character should be displayed. They also confirmed that the displayed message was correct and that the whole error report was in fact some kind of communication problem somewhere in the organisation.

The tests saved us from breaking specified and correctly working behaviour.
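
A unit test in this spirit might look like the following sketch (the class name MessageParser and the sample strings are invented for illustration; the real tests pinned down the behaviour in just this way):

import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class MessageParserTest {

    @Test
    public void hidesEverythingAfterTheFirstPercentCharacter() {
        MessageParser parser = new MessageParser();
        // the % acts as a comment character: only the text before it is displayed
        assertEquals("Visible part", parser.parse("Visible part%hidden template%more hidden text"));
    }
}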

After the clarification by the experts, we improved the situation by refactoring the code to communicate its intent more clearly. We also documented the message format in the javadocs and on a wiki page in addition to the tests.

Grails Gems: Command Objects

A series about the (little) gems found in Grails which can help many projects out there.

Besides domain objects, command objects are another way to get validation and data binding of parameters. But why (or when) should you use them?
First, when you do not want to persist the data, like when validating parameters for a search query.
Second, when you just want a subset of the parameters which has no corresponding domain object, for example to keep malicious data away from your domain objects.
Third, when you get a delta of the new data, i.e. when you just want to add to a list and do not want to check whether you get a single or multiple values for a parameter.

Usage

Usually you put the class of the command in the same file as the controller you use it in. The command object is declared as a parameter of the action closure. You can even use multiple ones:

class MyController {
  def action = { MyCommand myCommand, YourCommand yourCommand ->
    ...
  }
}

Grails automatically binds the request parameters to the commands you supply and validates them. Then you can just call command.hasErrors() to see whether validation found any errors.

Separate your code domains

You can improve your code reusability by separating the technical domain code from the business domain code. This article tries to explain how to start.

When you develop software, you most likely have to think in two target domains at the same time. One domain will be the world of your stakeholder. He might talk about business rules and business processes and business everything, so let’s call it the business domain. The other domain is the world you own exclusively with your colleagues; it’s the world of computers, programming languages and coding standards. Let’s call it the technical domain. It’s the world where your stakeholders will never follow you.

Mixing the domains

Whenever you create source code, you probably try to solve problems in the business domain with the means of your technical domain – e.g. the programming language you’ve chosen on the hardware platform you anticipate the software to run on. Inevitably, you’ll mix parts of the business domain with parts of the technical domain. The main question is – will it blend? Most of the time, the answer is yes. Like milk in coffee, the parts of two domains will blend into an inseparable mixture. Which isn’t necessarily a bad thing – your solution works just fine.

The hard part comes when you want to reuse your code. It’s like reusing the milk in your coffee, but without the coffee. You’ve probably done it, too (reusing domain-blended code, not extracting the milk from your coffee) and it wasn’t the easy “just copy it over here and everything’s fine” reusability you’ve dreamt of.

Separating the domains

One solution for this task begins by realizing which code belongs to which domain. There isn’t a clear set of rules that you can just check and be sure, but we’ve found two rules of thumb helpful for this decision:

  • If you have a strong business domain data type model in your code (that is, you’ve modelled many classes to directly represent concepts and items from your stakeholder’s world), you can look at a line of code and scan for words from the business domain. If there aren’t any, chances are that you’ve found a line belonging to the technical domain. If you prefer to model your data structures with lists and hashmaps containing strings and integers, you’re mostly out of luck here. Hopefully, you’ve chosen explicit names for your variables, so you don’t end up with a line stating map.get(key), when in fact, you’re looking up orders.getFor(orderNumber).
  • For every line of code, you can ask yourself “do I want to write it or do I have to write it?”. This question assumes that you really want to solve the problems of the business domain. Every line of code you just have to write because otherwise the compiler, the QA department, your colleagues or, ultimately, your coder idol of choice would be disappointed is a line from the technical domain. Every line of code that would only disappoint your stakeholder if it were missing is a line from the business domain. Most likely, everything that your business-driven tests assert is code from the business domain.

Once you have categorized your lines of code into their associated domains, you can increase the reusability of your code by separating these lines. Effectively, you try to avoid the blending of the parts, much like in a good latte macchiato. If you achieve a clear separation of the different code parts, chances are that you have come a long way towards the anticipated “copy and paste” reusability.

Example one: Local separation

Well, all theory is nice and shiny, but what about the real (coding) life? Here are two examples that show the mechanics of the separation process.

In the first example, we’re given a compressed zip file archive as an InputStream. Our task is to write the archive entries to disk, given that certain rules apply:

public void extractEntriesFrom(InputStream in) {
    ZipInputStream zipStream = new ZipInputStream(in);
    try {
         ZipEntry entry = null;
         while ((entry = zipStream.getNextEntry()) != null) {
             if (rulesApplyFor(entry)) {
                 File newFile = new File(entry.getName());
                 writeEntry(zipStream,
                      getOutputStream(basePath(), newFile));
             }
             zipStream.closeEntry();
         }
    } catch (IOException e) {
        e.printStackTrace();
    } finally {
        IOHandler.close(zipStream);
    }
}

This is fairly common code, nothing to be proud of (we could argue that the method signature isn’t as explicit as it should be, the exceptions are poorly handled, etc.), but that’s not the point of this example. Try to focus your attention on the domain of each code line. Is it from the business or the technical domain? Let me refactor the example to a form where the code from both domains is separated, without touching the other flaws of the code:

public void extractEntriesFrom(InputStream in) {
    ZipInputStream zipStream = new ZipInputStream(in);
    try {
         ZipEntry entry = null;
         while ((entry = zipStream.getNextEntry()) != null) {
             handleEntry(entry, zipStream);
             zipStream.closeEntry();
         }
    } catch (IOException e) {
        e.printStackTrace();
    } finally {
        IOHandler.close(zipStream);
    }
}

protected void handleEntry(ZipEntry entry,
        ZipInputStream zipStream) throws IOException {
    if (rulesApplyFor(entry)) {
        File newFile = new File(entry.getName());
        writeEntry(zipStream,
            getOutputStream(basePath(), newFile));
    }
}

In this version of the same code, the method extractEntriesFrom(…) doesn’t know anything about rules or how to write an entry to the disk. Everything that’s left in the method is part of the technical domain – code you have to write in order to perform something useful within the business domain. The new method handleEntry(…) is nearly free of technical domain stuff. Every line in this method depends on the specific use case, given by your business domain.

Example two: Full separation

Technically, the first example only consisted of a simple refactoring (Extract Method). But by separating the code domains, we’ve taken the first step of a journey towards code reusability. It begins with a simple refactoring and ends with separated classes in separated packages in two separated project parts, named something like “application” and “framework”. Even if you only find a class named “Tools” or “Utils” in your project, you’ve taken intermediate steps towards the goal: separating your technical domain code from your business domain code in order to reuse the former (because no two businesses are alike).

The next example shows a full separation in action:

WriteTo.file(target).using(new Writing() {
    @Override
    public void writeTo(PrintWriter writer) {
        writer.println("Hello world!");
        writer.println("Hello second line.");
        // more business domain code here
    }
});

Everything other than the first line (and the necessary Java boilerplate) is business domain code. In the first line, only the specified target file isn’t technical. Everything related to opening the file output stream, handling exceptions, closing all resources and all the other fancy stuff you want to do when writing to a file is encapsulated in the WriteTo class. The equivalent of the handleEntry(…) method from the first example is the writeTo(…) method of the Writing interface. Everything within this method is purely business domain related code. The best thing is: you can nearly forget about the technical domain when filling out the method, as it is embedded in a reusable “code clamp” that provides the proper context.
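
One possible shape of the technical-domain side, sketched here under the same interface (this is not the original implementation, just a minimal version to make the idea concrete):

import java.io.File;
import java.io.FileWriter;
import java.io.IOException;
import java.io.PrintWriter;

interface Writing {
    void writeTo(PrintWriter writer);
}

class WriteTo {
    private final File target;

    private WriteTo(File target) {
        this.target = target;
    }

    public static WriteTo file(File target) {
        return new WriteTo(target);
    }

    public void using(Writing writing) {
        PrintWriter writer = null;
        try {
            writer = new PrintWriter(new FileWriter(this.target));
            writing.writeTo(writer);   // the only line that touches the business domain
        } catch (IOException e) {
            throw new RuntimeException("Could not write to " + this.target, e);
        } finally {
            if (writer != null) {
                writer.close();
            }
        }
    }
}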

Conclusion

If you want to write reusable code, consider separating your two major code domains: the technical domain and the business domain. The first step is to be aware of the domains and distinguish between them. Your separation process then can start with simple extractions and finally lead to a purely technical framework where you “only” have to fill in the business domain code. By the way, it’s a variation of the classic “separation of concerns” principle, if you want to read more.

A shot at definitions beyond “unit test”

When doing research on which kinds of programmatic tests different developers and companies utilize and how they handle them, I realized that there is no common definition of terms and concepts. While most sources agree on what is and what is not a unit test, there are various contradictory definitions of what a test is, if it is not a unit test. In this blog post I’d like to present a brief overview of the definitions we are currently using. Since we steadily try to enhance and refine our development process and tools, the terms and concepts presented here are almost certain to change in the future.

Please note that this post is not intended to fully describe all the details of the different test approaches, but rather to give an idea and first impression on how we distinguish them.

Unit Tests

The most basic kind of programmatic test, the unit test, is likely also the most commonly used kind of test. Unit tests help to determine that a small piece of code, e.g. a single method or class, behaves as intended by its developer. If properly applied, unit tests provide a solid foundation to build an application upon. Figure 1 schematically depicts the scope of a unit test in an exemplary software system.

Depending on the complexity of the tested system, techniques like mocking of dependencies may be required. System resources in particular need to be replaced by mocks, since unit tests have to be completely independent from them (Michael Feathers describes this and some other requirements of unit tests in his blog post “A Set of Unit Testing Rules”). Furthermore, unit tests are not meant to be long running, but instead have to execute within a split second.
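
A minimal example of such a unit test (JUnit 4; all class names are invented, and the unit under test is defined inline only to keep the sketch self-contained): the resource-backed dependency is replaced by a trivial in-memory stub, so the test stays independent and fast.

import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class OrderStatisticsTest {

    // invented interface standing in for a resource-backed dependency (e.g. a database repository)
    interface OrderRepository {
        int countPendingOrders();
    }

    // invented unit under test
    static class OrderStatistics {
        private final OrderRepository repository;

        OrderStatistics(OrderRepository repository) {
            this.repository = repository;
        }

        int pendingOrders() {
            return repository.countPendingOrders();
        }
    }

    @Test
    public void reportsPendingOrdersFromTheRepository() {
        // the stub replaces the real resource, so the test runs fast and in isolation
        OrderRepository stub = new OrderRepository() {
            @Override
            public int countPendingOrders() {
                return 3;
            }
        };
        assertEquals(3, new OrderStatistics(stub).pendingOrders());
    }
}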

Figure 1: Schematic view of a unit test of a component in an exemplary system

Integration Tests

A more sophisticated approach to testing is the integration test, which challenges a part or sub-system of an application made up of several units in order to determine whether these units cooperate properly. In contrast to unit tests, integration tests may include system resources and may also determine the test’s outcome by checking the state of these resources. This larger scope, and the fact that the tested functionality is typically made up of several actions, leads to integration tests taking a multiple of the time taken by unit tests. Figure 2 schematically illustrates an integration test’s view of an exemplary system.
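
As a toy illustration of the difference (an invented example, not taken from a real project): the following test exercises storing and reloading a configuration through a real file on disk and checks the outcome via that resource, which already makes it an integration test rather than a unit test.

import static org.junit.Assert.assertEquals;

import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.util.Properties;

import org.junit.Test;

public class ConfigurationRoundtripIntegrationTest {

    @Test
    public void storedConfigurationCanBeLoadedAgain() throws Exception {
        // a real system resource (the file system) is part of the test
        File file = File.createTempFile("config", ".properties");
        try {
            Properties stored = new Properties();
            stored.setProperty("greeting", "Hello world");
            FileOutputStream out = new FileOutputStream(file);
            stored.store(out, "integration test sample");
            out.close();

            Properties loaded = new Properties();
            FileInputStream in = new FileInputStream(file);
            loaded.load(in);
            in.close();

            assertEquals("Hello world", loaded.getProperty("greeting"));
        } finally {
            file.delete();
        }
    }
}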

Figure 2: Schematic of an integration test in an exemplary system

Acceptance Tests

By far the most involved technique to test the behavior of an application is the use of acceptance tests. While the other approaches challenge only parts of an application, acceptance tests are meant to challenge the application as a whole from a user’s point of view. This includes using system resources, as well as controlling the application and verifying its proper function the way a user would: through its (G)UI and without knowing anything about the internals of the software.

Figure 3: Schematic of an acceptance test in an exemplary system

Conclusion

While some developers only distinguish between unit tests and other tests, defining the latter more clearly proved very useful when creating and using them and when explaining them to other developers and customers. Yet, these definitions are not carved in stone and will certainly need to be refined over time. Thus, I would like to hear your opinion on these definitions. Do you agree, or do you have a completely different way of distinguishing between test approaches? How many kinds of tests do you distinguish? And why do you do so?

Prepare for the unexpected

In most larger projects there are many details which cannot be foreseen by the development team. Specifications turn out to be wrong, incomplete or not precise enough for your implementation to work without further adjustments. New features have to work with production data that may not be available in your development or testing environment.

The result I have often observed is that everything works fine in your environment, including great automated tests, but nevertheless fails when deployed to production systems. Sometimes minor differences in the operating system version or configuration, the locale for example, cause your software to fail. Another common problem is real production data containing unexpected characters, inconsistencies in the data (sometimes due to bugs) or its sheer size.

What can we do to better prepare for unexpected issues after deployment?

The key is to expect such issues and to implement certain countermeasures to cope with them better. This may conflict with the KISS principle but is usually worth a bit of added complexity. I want to provide some advice which proved useful for us in the past and may help you in the future, too:

  1. Provide good, detailed and persistent debug output for certain features: Once we added a complex rule system which operated on existing domain objects. To check every possible combination of domain object states would have been a ton of work, so we wrote tests for the common cases and difficult cases we could think of. Since the correctness of the functionality was not critical we decided to rather display slightly incorrect information instead of failing and thus breaking the feature for the user. We did however provide extensive and detailed logs whenever our rule system detected a problem.
  2. Make certain parts of your communication interface to third party systems configurable: Often your system communicates with different kinds of users and other systems. Common examples are import/export functionality, web service APIs or text protocols. Even if details like date and number formats, data separators, line endings, character encoding and so forth are specified most of the time, it often proves valuable to make them configurable. Many times the specification changes or is incorrect, some communication partner implements the protocol slightly differently or a format deviates from your assumptions, breaking your application. It is great if you can change that with a smile in front of your client and make the whole thing work in minutes instead of walking home frustrated to fix the issues (see the sketch after this list).
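
A small sketch of the second point (the property key and class name are invented for illustration): the documented format stays the default, but a deviating partner system can be accommodated with a configuration change instead of a new release.

import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.Properties;

public class ExportFormats {

    private final Properties configuration;

    public ExportFormats(Properties configuration) {
        this.configuration = configuration;
    }

    public String formatTimestamp(Date date) {
        // the specified format is the default; a deviating partner system only needs a config change
        String pattern = configuration.getProperty("export.timestamp.format", "yyyy-MM-dd HH:mm:ss");
        return new SimpleDateFormat(pattern).format(date);
    }
}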

The above does not mean building applications with ultimate flexibility and configurability while ignoring automated tests or realistic test environments. It just means that there are typical aspects of an application where you can prepare for otherwise unexpected deviations between theory and practice.

Grails: Beware of the second level cache

Know your caches!

Recently we were hunting a strange bug. Take the following domain model:

class Computer {
  Coder coder
}

class Coder {
  static hasMany = [projects:Project]
}

Querying the computer and iterating over the respective coder and projects sometimes resulted in a strange number of projects: 1. Looking into the underlying database, we quickly found out that the number 1 was not correct. It got even stranger: getting the coder in question via Coder.get inside the loop yielded the correct results. What was the problem?
After some code reading and debugging, another query, which was called after the first one but before accessing the coder in the loop, gave some insight:

  Coder.withCriteria {
    projects {
      idEq(projectId)
    }
  }

This second query also queried the Coder, but constrained the projects to a specific one. These coders were put into the second level cache, and when we called computer.coder, the second level cache returned the previously queried coder. But this coder had only one project!
Since we only needed the number of coders with this project, we changed the second query to a count, so no Coder instances are returned and thus none end up in the second level cache. Bug fixed.

The Great Divide

There is a great divide in the C++ developer community between “normal” developers that use only basic language features and very savvy ones that know every little corner of the language. The upcoming C++ standard deepens this divide even more.

Recently, I had two very contrary conversations about C++ which illustrate the great divide in the C++ developer community very well.

The first was with the technical lead of a team that writes and maintains drivers and control software for a scientific institution. These systems run 24/7 and have to be very stable and reliable.

I had discovered that they use a self-written toolbox library containing classes like SharedPtr<T> and Thread, and immediately suspected a classical case of NIH syndrome. I asked him about it and why they don’t use well-established libraries like boost. He told me that they are indeed only using the standard library and their own toolbox.

The reason he gave was that, despite boost being the most elegant C++ library out there, it requires very good knowledge of the most advanced C++ mechanisms, and that his team was not on this level … I should probably mention here that his team does a very good job of running their systems. So, apparently, they get along very well using only basic C++ features and no “fancy” boost stuff.

The other conversation was with a friend of mine with whom I chat regularly about all sorts of programming-related stuff. This time the topic was the upcoming C++ standard and all its exciting new features. He has lots of experience with C++ and knows the language very well. But even someone like him had a hard time really understanding what rvalue references are all about. I had not looked at them in detail yet, so he tried to explain them to me. During our discussion I was wondering whether teams like the one introduced before will ever use rvalue references or other C++0X features in their production code, other than maybe the auto keyword for type inference or constructor delegation.

Honestly, I don’t think features like rvalue references will be used often by “standard industry” teams, because they add a lot of complexity to an already complex language. Even easy-to-grasp additions like the new keywords override, constexpr and final, or additional initialization means like std::initializer_list<T>, will take a long time to be used regularly by most C++ teams.

Instead, most of C++0X will greatly increase the divide between “normal” C++ developers who get along well with using only basic language features, and experts that know every little corner of the language. And this is simply because there is so much more to know with C++0X.

But let’s not paint this picture overly black. I, for one, am looking forward to the new standard, and I will certainly spread the word about the new possibilities and features in every C++ team I work with.