A review of the year 2011 at Softwareschneiderei

This is a review of the year 2011 for the Softwareschneiderei, a software development company from Karlsruhe, Germany.

The current year 2011 is coming to an end. This is the traditional time to pause and reflect on what has happened. This blog post tries to sum up our year in software development at Softwareschneiderei. It was an interesting, entertaining and successful year for us, that’s for sure.

The official parts

Our developer blog was alive throughout the hardest times of the year, when everyone was under full project load. Every week, one of our developers shares a little posting with the world. The blog is still managed by token only. Looking at the visitor statistics, we fully appreciate your attention. The first blog post of this year looked at the remains of a failed project: “A tale of scrap metal code” was a detailed vivisection in three parts. Over the course of the year, we wrote about bogus error messages, Groovy, Grails, GORM and some confessions about coding style and multithreading. If you have the time, spend a few minutes browsing our blog post archive for this year.

Our “official” company blog, written in German, had no activity this year. We can certainly do better than that and put it on the list for 2012.

The company homepage, also written in German, received continuous updates and extensions this year. We coupled the “Open Source Love Day” (OSLD) with our “Homepage Committee”: every employee has to improve the homepage in some aspect and present the change to the “committee”. Unfortunately, this somehow led to fewer OSLDs this year.

The ongoing Dev Brunch sessions thinned out a bit in the second half of the year, a concession to the ever-growing workload. We strive to establish a tighter schedule with accompanying blog posts next year.

The internal parts

We were under heavy development load this year. This isn’t a bad thing in itself, but it impacts the internal communication and team building process. We tried to cope by restructuring the Open Source Love Day into a “Team Day”, when the whole team meets and works on various internal or hobby projects. Some of these days were spent on Code Camps and other training events. This means less love for the open source community, but crucial together time for us.

We picked up several “new” programming languages this year. You can tell by the blog posts that we worked with Python, Ruby, Flex/ActionScript and even VisualBasic on real projects. The VisualBasic experience was a little epiphany that it’s really the developer and not the language that leads to shitty software.

One method of continuous improvement is our “creativity budget”. Basically, it’s money every developer can spend to improve his workplace. This budget wasn’t used at all this year, as the workplaces seem to be optimal. This cannot be true, so we bought brand new computers or a big RAM upgrade for everyone. And every computer has a big enough SSD now. We took our own advice seriously and invested in our productivity.

Our developer crew grew again this year. We are beginning to think about the remaining space in the new office. But as usual, we grow slowly and deliberately. There’s nothing worse than a team of strangers.

Conclusion

The year 2011 was great! We’re looking forward to the year 2012, with our motto of Christmas 2011: “cheery and spry” (the original motto is the German “froh und munter”; I hope the translation caught the original spirit).

Have a great turn of the year, everyone. We’d love to see you again next year.

Python in C++: Rerouting Python’s stdout

A few weeks ago I published a post that showed how to embed Python into C++ and how to exchange data between the two languages. Today, I want to present a simple practice that comes in handy when embedding Python into C++: rerouting Python’s standard output using CPython.

After initializing Python, the new destination of the output stream needs to be created using PyFile_FromString(…) and set to be the new standard output:

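// "CONOUT$" is the Windows console device; any writable file name works here as well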
PyObject* pyStdOut = PyFile_FromString("CONOUT$", "w+");
PyObject* sys = PyImport_ImportModule("sys");
PyObject_SetAttrString(sys, "stdout", pyStdOut);

Basically, that’s all there is to it. When executing a Python script via PyRun_String(…), all calls to print(…) will write their data directly to pyStdOut.
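For example, a script could then be executed like this (a sketch; the script text is made up):

PyObject* mainModule = PyImport_AddModule("__main__");   // borrowed reference
PyObject* globals = PyModule_GetDict(mainModule);        // borrowed reference
// run a snippet in __main__; its print output now ends up in pyStdOut
PyObject* result = PyRun_String("print('hello from python')", Py_file_input, globals, globals);
Py_XDECREF(result);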

After the Python script has finished, the data in pyStdOut can be retrieved and processed further in C++ by converting it with PyFile_AsFile(…):

FILE* pythonOutput = PyFile_AsFile(pyStdOut);
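If the new stdout was backed by a regular file instead of the Windows console device "CONOUT$" used above, the captured output can then be read back with standard C file I/O (a sketch; needs <cstdio> and <iostream>):

std::rewind(pythonOutput);   // the file was opened "w+", so rewind before reading
char line[256];
while (std::fgets(line, sizeof(line), pythonOutput)) {
    std::cout << line;       // hand the captured output over to C++ code
}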

Breakpad and Your CI – A Strong Team

Google’s breakpad together with your CI system can prepare you for the worst.

If your C++ software has to run 24/7 on some server rack at your customer’s data center, it has to meet not only all the user requirements, but also requirements that come from you as developer. When your customer calls you about some “problems”, “strange behaviours”, or even crashes, you must be able to detect what went wrong. Fast!

One means to this end is of course logging. But if your application crashes, nothing beats a decent stacktrace 🙂

Google’s breakpad library comes in very handy here because it provides very easy crash reporting. Even if your process has 2 gigs of virtual memory, breakpad shrinks that ‘core dump’ down to a couple of megs.

Breakpad pulls that trick off by using so-called symbol files that you have to generate for each compiled binary (executable or shared library). These symbol files together with the breakpad dump file that is created at crash time are then used to recreate the stacktrace.

Because every compilation creates different binaries, dump file and symbol files need to be ‘based on’ exactly the same binaries.

This is where you can let your CI system do some work for you. At one of our customers we use Jenkins not only for the usual automatic builds and tests after each check-in but also for release builds that go into production.

At the end of each build, breakpad’s symbol dumper runs over all compiled executables and libraries and generates the symbol files. These are then archived together with the compiled binaries.
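On Linux, such a build step could look roughly like the following sketch. The binary name and the symbol store layout are assumptions here; the layout follows the module/id/module.sym convention that minidump_stackwalk expects:

#!/bin/sh
# generate the symbol file for the freshly built binary
dump_syms ./myapp > myapp.sym

# the first line of a symbol file identifies the build:
# "MODULE Linux x86_64 <id> myapp" - field 4 is the unique id
ID=$(head -n 1 myapp.sym | cut -d' ' -f4)

# archive it where minidump_stackwalk will look for it later
mkdir -p symbols/myapp/$ID
mv myapp.sym symbols/myapp/$ID/

# at analysis time: minidump_stackwalk customer.dmp ./symbols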

Now we are prepared. Whenever a customer sends us a dump file, we can simply pull out the symbol files corresponding to the software version running at that site and let breakpad do its magic…


Deployment with the Play! framework

Play! is a great framework for Java-based development of modern web applications. Unfortunately, the documentation about deployment options is not very detailed in certain respects. I want to describe a way to automatically build a self-contained zip archive that contains no source code. The documentation states that using the standalone web server is the preferred way to deploy, so we will use that option.

Our goal is:

  • an artifact with the executable application
  • no sources in the artifact
  • startup scripts for different platforms and environments
  • CI integration with execution of the tests

Fortunately, the Play framework makes most of this quite easy if you know a few small tricks.

The first very important step towards our goal is embedding the whole Play! framework somewhere in your project directory. I like to put it into lib/play-x.y.z (x.y.z being the framework version). That way you can perform all necessary calls to the play scripts using relative paths and provide a self-contained artifact which developers or clients may download and execute on their machine. You can also be sure everyone is using the correct (read: same) framework version.

The next important thing is to write some small start scripts so you can demo the software easily on any machine with Java installed. Your clients may even try it out themselves if the project policy is open enough. Here are small examples for Linux

#!/bin/sh
# start our app in the "demo" environment with precompiled classes
python lib/play-1.2.3/play run --%demo -Dprecompiled=true

and Windows

REM start our app in the "demo" environment
lib\play-1.2.3\play run --%%demo -Dprecompiled=true

The last ingredient for a great deployment and demoing experience is the build script which builds, tests and packages the software. We do not want to include the sources in the artifact, so there is a bit of work to do. We perform the following steps in the script:

  1. delete old artifacts to ensure a clean build
  2. call play to precompile our application
  3. call play to execute all our automatic tests
  4. copy all needed files into our distribution directory ready to be packed together
  5. pack the artifacts into a zip archive

Our sample build script is written for the Linux shell, but you can easily translate it into the scripting environment of your choice, be it Apache Ant, Gradle or Windows batch, depending on your needs and preferences:

#!/bin/sh

# remove old build output to guarantee a clean build (-f: do not fail on first run)
rm -rf dist
rm -rf test-result
rm -rf precompiled

# precompile the application and run all automatic tests
python lib/play-1.2.3/play precompile
python lib/play-1.2.3/play auto-test

# collect everything the artifact needs: no sources, only the views from the app directory
TARGET=dist/my_project
mkdir -p $TARGET/app
cp -r app/views $TARGET/app
cp -r conf lib modules precompiled public $TARGET
cp programs/my_project* $TARGET

# pack the distribution directory into a single zip archive
cd dist && zip -r my_project.zip my_project

Now we can hook the project into a continuous integration server like Jenkins and let it archive the build artifact containing an executable installation of our web application. You could grant your client direct access to the artifact, use it for demos, or feed it into further deployment steps like a triggered upload to a staging server.

HTTP Get: The problem with Percent Encoded Parameters

Encoding problems are commonplace in software development, but sometimes you get them in unexpected places.

About the setup: we have a web application written in Grails (though the choice of framework doesn’t really matter here) running on Tomcat. A Flash application sends an HTTP GET request to this web application.
As you might know, parameters in a GET request are encoded into the URL with so-called percent encoding: %20 for a space, for example. But which character encoding is used for the bytes? UTF-8?
Looking at our Tomcat configuration, all GET parameters are decoded as UTF-8. Great. But looking at the output of what the Flash app sends us, we see scrambled umlauts. Hmmm, clearly the Flash app does not use UTF-8. But wait! There’s another option in Tomcat for decoding GET parameters: look into the header and use the encoding specified there. One restart later: nothing changed. So Flash does not send its encoding in the HTTP header. Well, let’s take a look at the URI standard (RFC 3986):

If a reserved character is found in a URI component and no delimiting role
is known for that character, then it must be interpreted as representing the
data octet corresponding to that character's encoding in US-ASCII.

Ah, US-ASCII. And what about non-ASCII characters? Wikipedia states:

For a non-ASCII character, it is typically converted to its byte sequence
in UTF-8, and then each byte value is represented as above.

Typically? Not in our case. So we tried ISO-8859-1, and finally the umlauts are correct! But currency signs like the euro are still garbage. So which encoding is similar to Latin-1, but not quite the same?
Yes, guess what: cp1252, the Windows native encoding.
And we tested all this on a Mac?!
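To see the difference yourself, here is a small Java sketch. The byte values are illustrative: 0xFC is ‘ü’ in both Latin-1 and cp1252, while 0x80 is ‘€’ in cp1252 only:

import java.net.URLDecoder;

public class PercentEncodingDemo {
    public static void main(String[] args) throws Exception {
        String encoded = "%FC%80";
        System.out.println(URLDecoder.decode(encoded, "UTF-8"));        // invalid byte sequence: replacement characters
        System.out.println(URLDecoder.decode(encoded, "ISO-8859-1"));   // 'ü' plus an invisible control character
        System.out.println(URLDecoder.decode(encoded, "windows-1252")); // 'ü' and the euro sign, both correct
    }
}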

The Story of a Multithreading Sin

The story of a bug that was caused by a common multithreading pitfall, the dreaded liquid lock.

In my last blog entry, I wrote about multithreading pitfalls (in Java), and ironically, this was the week when we got a strange bug report from one of our customers. This blog entry tells the story of the bug and adds another multithreading pitfall to the five I’ve already listed in my blog entry “When it comes to multithreading, better be safe than sorry”.

The premise

We developed software that runs on several geographically distant, independent “stations” that collect a multitude of environmental measurement data. This data is preprocessed and stuffed into data packages, which are periodically transferred to a control center. The software of this control center, also developed by us, receives the data packages, stores them on disk and in a huge database, and extracts the overall state of the measurement network from the raw data. Described on this level, the main task of the network sounds nearly trivial. But the real functional requirements are manifold, and the project grew large.

We kept the whole system as modular as necessary to maintain an overall grasp of what is going on where, and established sufficient automatic test coverage for the most important parts. The system is still under active development, but the main parts of the network have been in production use without real changes for years now.

The symptoms

This might explain why we were very surprised when our customer told us that the control center had lost some data packages. Very soon, it turned out that the control center would randomly enter a state of “denial”. In this state, it would still accept data packages from the stations and even acknowledge their arrival (so the stations wouldn’t retry the transmission), but would write only parts of the package, or nothing at all, to disk and database. Once the control center entered this state, it would never recover from it. But when we restarted the software manually, everything would run perfectly fine for several days, and then revert into denial without apparent trigger.

We monitored the control center with every means at our disposal, but its memory consumption, CPU footprint and threading behaviour showed no noticeable problems, even when the instance was in its degraded state. There was no exception or uncommon entry logged in the logfiles. As the symptom happened randomly, without external cause and with no chance of reversal once it occurred, we soon suspected some kind of threading issue.

The bug

The problem with a threading issue is that you can’t just reproduce the bug with a unit or system test. We performed several code reviews until we finally had a trace. When a data package arrives, a global data processing lock is acquired (so that no two data packages can be processed in parallel) and the content of the package is inspected. This might trigger several network status changes. These change events are propagated through the system with classic observer/listener structures, using synchronous calls (normal delegation). The overall status of the network is translated into a human-readable status message and again forwarded to a group of status message listeners, once more with a synchronous call. One of these status message listeners was the software driver for an LED ticker display. This module was a recent addition to the control center’s hardware outfit, used to display the status message prominently to the operators. Inside this LED software driver, some bytes are written to a socket stream and then the driver awaits an answer from the hardware device. To avoid two messages being sent to the device at the same time, a lock is acquired just before the message is sent. This code attracted our attention. Let’s have a look at it:

private Message lastMessage = new Message();

public void show(Message message) {
    synchronized (this.lastMessage) {
        writeCommandAndWaitForResponse(Command.SHOW_TEXT, message.asBytes());
        this.lastMessage = message;
    }
}

The main problem here is the object the lock is acquired upon: the reference of lastMessage is mutable! We call this a liquid lock, because the lock isn’t as solid as it should be. It’s one of the more hideous multithreading pitfalls, as everything looks fine at first glance. But this lock doesn’t have a complete “locking” effect, because each caller may acquire the lock of a different instance. And a lock with flawed locking behaviour is guaranteed to fail (in production). The liquid lock is like the bigger brother of the local lock. It isn’t local, but its mutability causes the same problems.

The bug finally turned out to be caused by the liquid lock in the LED display driver, which got notified of status message changes whenever a data package arrived. The write attempt would fail to return only under specific circumstances: if multiple messages were sent to the device at once, discarding some of the expected answers, or if the connection to the LED hardware failed in the midst of a transmission. If one thread didn’t return to the data package processor, the global data processing lock was never freed (read the start of this chapter again: this is the most important lock in the system!). While the data processing lock was still held, all other data packages would still be received, but pile up waiting to obtain the lock. And the lock would never be released by the thread waiting for an answer from a hardware device that had no intention of sending one. This was the state in which the control center appeared healthy but didn’t process any data packages anymore.

The conclusion

If you want to avoid this category of multithreading bug, make sure that all your lock instance references are immutable. Being final is an important property of lock instance references. Avoid retrieving your locks from notoriously mutable data structures like collections or arrays. The best thing you can do to avoid liquid locks is to “freeze” all your lock instances.
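Applied to the LED driver from above, a fixed version could look like this sketch (the dedicated final lock object is our addition):

private final Object messageLock = new Object(); // final: nobody can ever swap this lock

private Message lastMessage = new Message();

public void show(Message message) {
    synchronized (this.messageLock) {
        writeCommandAndWaitForResponse(Command.SHOW_TEXT, message.asBytes());
        this.lastMessage = message; // mutating this field is harmless now
    }
}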

Another insight from this story is that software modules have to be separated thread-wise, too. It was a major design flaw to let the data processing thread, while holding the main processing lock, descend into the deep ends of the LED driver, eventually getting stuck there for eternity. Some simple mechanism like asynchronous listener notification or a producer/consumer queue for pending transmission requests would have confined the effects of the liquid lock bug to the LED module. Without proper thread separation, it took down the whole software instance.
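For illustration, here is a minimal sketch of such a producer/consumer decoupling, reusing the names from the driver code above (BlockingQueue and LinkedBlockingQueue come from java.util.concurrent):

private final BlockingQueue<Message> pendingMessages = new LinkedBlockingQueue<Message>();

// called by the data processing thread: enqueue and return immediately
public void show(Message message) {
    this.pendingMessages.offer(message);
}

// runs on a dedicated display thread: only this thread talks to the hardware
private void displayLoop() throws InterruptedException {
    while (!Thread.currentThread().isInterrupted()) {
        Message next = this.pendingMessages.take(); // blocks only the display thread
        writeCommandAndWaitForResponse(Command.SHOW_TEXT, next.asBytes());
    }
}

This way, a stuck LED transmission would only ever stall the display thread and its queue, never the central data processing.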

Debug Output

Crafting debug output from std::istream data can be dangerous!

Sometimes, writing a blog post can be useful to get a face-palm kind of programming error out of one’s system.

Putting such an error into written words then serves a couple of purposes:

  • it helps oneself remember
  • it helps others who read it not to do the same thing
  • it serves as error log for future reference

So here it comes:

In one project we use JSON to serialize objects in order to send them over HTTP (we use the very nice JSON Spirit library, btw).

For each object we have serialize/deserialize methods which do the heavy lifting. After having developed a new deserialize method, I wanted to test it together with the HTTP request handling. Using curl, I issued a command like this:

curl -X PUT http://localhost:30222/some/url -d @datafile

This command issues a PUT request to the given URL and uses the data in ./datafile, which contains the JSON, as the request body.

The request came through, but the deserializer wouldn’t do its work. WTF? Let’s see what’s going on – let’s put some debug output in:

MyObject MyObjectSerializer::deserialize(std::istream& jsonIn)
{
   // debug output starts here
   std::string stringToDeserialize;
   Poco::StreamCopier::copyToString(jsonIn, stringToDeserialize);
   std::cout << "The String: " << stringToDeserialize << std::endl;
   // debug output ends here

   json_spirit::Value value;
   json_spirit::read(jsonIn, value);
   ...
}

I’ll give you some time to spot the bug…. 3..2..1.. got it? Please check the Poco::StreamCopier documentation if you are not familiar with the POCO libraries.
What’s particularly misleading is the “Copier” part of the name StreamCopier, because it does not exactly copy the bytes from the stream into the string – it moves them. This means that after the debug output code, the istream is empty.

Unfortunately, I did not immediately recognize the change in the error output of the JSON parser, which might have given me a hint at the real problem. Instead, I spent the next half hour searching for errors in the JSON I was sending.

When I finally realized it …
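For the record, one possible fix is to parse from the string that the debug output already extracted, instead of the now-empty stream (a sketch):

MyObject MyObjectSerializer::deserialize(std::istream& jsonIn)
{
   std::string stringToDeserialize;
   Poco::StreamCopier::copyToString(jsonIn, stringToDeserialize);
   std::cout << "The String: " << stringToDeserialize << std::endl;

   // parse the string we already extracted - the istream is empty by now
   json_spirit::Value value;
   json_spirit::read(stringToDeserialize, value);
   ...
}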

Inconsistent usage of type definitions kills portability in C/C++

C/C++ are nice low-level, high-performance languages if you need to be “close to the metal” due to performance or memory constraints. Nevertheless, C/C++ are portable languages, because they provide datatypes that abstract from the underlying hardware, and compilers exist for virtually every hardware platform. At the same time, it is very easy to kill portability, because the specification allows platform-dependent sizes for the built-in datatypes: an int can be 16 or 32 bits, a long 32 or 64 bits, and so on. Many programmers mitigate the issue by defining their own datatypes like WORD or uint32. If you are using some libraries, it is very likely that several such type definitions are available and often interchangeable.

It is absolutely crucial to be consistent when using these type definitions, and it is usually good advice not to use the built-in types like int or long directly, because their sizes may differ between platforms.

In one of our projects we are working on Tango device servers written in C++ that use the YAT (Yet Another Toolbox) library. In this small project alone, there are at least three ways to define a (most of the time) unsigned 32-bit word:

  • yat::uint32
  • unsigned long
  • tango::DevUlong

To make matters a bit more interesting, tango::DevUlong is defined by the CORBA C++ mapping. Code using all of these definitions may work on a certain platform, so you will not notice the problem right away, but we had several compilation problems and even program crashes when compiling or running on Linux/x86, Linux/AMD64, Windows 7 32-bit and Windows 7 64-bit. Using types that guarantee their size on all platforms, and using them consistently, will make your code compile and run flawlessly on many platforms.
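A sketch of the consistent alternative: use the fixed-width integer types from stdint.h (standardized by C99 and picked up by C++11 as <cstdint>) and define one project-wide alias:

#include <stdint.h>

// one project-wide definition, used consistently at every interface:
typedef uint32_t word32;

word32 packetCounter = 0; // exactly 32 bits, unsigned, on every platform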

Readable Code Needs Time and Care

A few weeks ago I was about to write an acceptance test involving socket communication. Since I was only interested in a particular sequence of exchanged data, I needed to wait for the start command and ignore all information sent prior to that command. In this blog post I’d like to present the process of enhancing the readability of the tiny piece of code responsible for this task.

The first version, written without thinking much about readability, looked something like the following:

private void waitForStartCommand(DataInputStream inputStream) throws IOException {
  String content = inputStream.readUTF();
  while (!START_COMMAND.equals(content)) {
    content = inputStream.readUTF();
  }
}

The aspect that disturbed me most about this solution was calling inputStream.readUTF() twice (Remember: DRY). So I refactored and came up with:

private void waitForStartCommand(DataInputStream inputStream) throws IOException {
  String content = null;
  do {
    content = inputStream.readUTF();
  } while (!START_COMMAND.equals(content));
}

In this version, the need to declare and initialize a variable grants far too much meaning to an unimportant detail. So, a little more refactoring resulted in the final version:

private void waitForStartCommand(DataInputStream inputStream) throws IOException {
  while (startCommandIsNotReadOn(inputStream)) {
    continue;
  }
}

private boolean startCommandIsNotReadOn(DataInputStream inputStream) throws IOException {
  return !START_COMMAND.equals(inputStream.readUTF());
}

This example shows pretty well how even rather simple code may need to be refactored several times in order to be highly readable and understandable. Especially code that handles more or less unimportant side aspects should be as easy to understand as possible, in order to avoid conveying the impression of being of major importance.

Hibernate Objects in HTTP Sessions

While discussing common problems and pitfalls in web development, we stumbled on the pitfall of storing Hibernate objects in an HTTP session.

I do not recommend doing this, mainly because of the common pitfalls with detached objects.
Usually, the lifecycle of a Hibernate session is bound to a single request/response cycle, so after the end of the response the Hibernate object is detached. Accessing not-yet-loaded associations of a detached object causes a LazyInitializationException. Even trying to update or save the detached object will throw an exception if the object isn’t in the current Hibernate session already. You could re-attach/merge the detached object, but this writes any changes on the detached object to the database.
A better way is to store only the identifier of the Hibernate object in the HTTP session and load the object from the database when you need it.
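A minimal sketch of this approach; User, its id and the attribute name are made-up names for illustration:

import javax.servlet.http.HttpSession;

public class CurrentUserHolder {

    // store only the identifier in the HTTP session
    public void rememberCurrentUser(HttpSession httpSession, User user) {
        httpSession.setAttribute("currentUserId", user.getId());
    }

    // load a fresh, attached instance from the current Hibernate session
    public User loadCurrentUser(HttpSession httpSession, org.hibernate.Session hibernateSession) {
        Long userId = (Long) httpSession.getAttribute("currentUserId");
        return (User) hibernateSession.get(User.class, userId);
    }
}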