Dependencies: Let’s have just one more

A few days back, I evaluated some libraries to be used in a C++-based application. At first glance, one of them looked particularly good: lots of well-written and detailed documentation, a nice-to-use API and quite a few success stories. I decided to give it a try, which was exactly when I ran into the first minor inconvenience. While the library was available as a pre-built release for a wide range of operating systems, both 32 bit and 64 bit, there was no official pre-built release for the required 64 bit Windows. Thus, I had to compile it myself. No problem so far.

For this case, the documentation even had a section listing the dependencies required for building the library. Since this list was not very long, I was rather enthusiastic about finally being able to try the library.

To cut it short, that list was anything but complete.

I spent the better part of the afternoon trying to get the build done. All in vain. Whenever I resolved one dependency, another, until then unknown, dependency appeared, then the next one, and so on. When I reached the point where a dependency required Fortran to be built (no joke!), I eventually decided to abandon the nice-looking library in favor of another one, which isn’t nearly as all-embracing and nice, but at least won’t take me even deeper into dependency hell.

This rather frustrating experience made me wonder whether the authors of the library ever tried to build it on a machine other than their development machine. And if so, why didn’t they bother to include a complete list of the dependencies in the documentation?

Clang, The Friendly Compiler

The Clang C/C++ compiler truly deserves to be called The Friendly Compiler, since it makes it much easier to find and understand compile errors and potential bugs in your code. Go use it!

A while back I suggested making friends with your compiler as a basis for developing high-quality code. My focus then was GCC, since it was and still is the compiler I use most of the time. Well, it turns out that although GCC may be a reasonably good companion on the C/C++ development road, there are better alternatives.

Enter Clang: I had heard about Clang a few times in the past but never gave it a real shot. That changed after I watched Chandler Carruth’s talk at GoingNative 2012.

First of all, I was stunned by the quote from Richard Stallman about GCC being deliberately designed to make it hard to use in non-free software. I had always wondered why IDEs like KDevelop keep reinventing the wheel by implementing their own C/C++ parsers instead of using the already existing and free GCC code. This was the answer: THEY SIMPLY COULDN’T!

One main point of Chandler’s talk was the quality of Clang’s diagnostic messages. GCC is the kind of friend that, although telling you exactly what’s wrong with your code, often does so in complicated sentences hidden in walls of text.

Clang, on the other hand, tries very hard to comprehend what you really wanted to write; it speaks in much more understandable words and shows you the offending code locations with nice graphics.

You could say that compared to Clang, which is empathic, understanding, pragmatic and always tries to be on the same page with you, GCC comes across more like an arrogant, self-pleasing, I’m-more-intelligent-than-you kind of guy.

Where GCC says: “What? That should be a template instantiation? Guess what, you’re doing it WRONG!”, Clang is more like: “Ok my friend, now let’s sit down together and analyse step by step what the problem is here. I’ll make us tea.”

You’ll find many examples of Clang’s nice diagnostic output in Chandler’s talk. Here is another one, as a little teaser:

#include <string>
#include <tr1/functional>
#include <tr1/unordered_map>

struct A
{
  std::string _str1;
  std::string _str2;
};

struct AHasher
{
  std::size_t operator() (const A& a)
  {
    return std::tr1::hash<std::string>()(a._str1) ^
      std::tr1::hash<std::string>()(a._str2);
  }
};
...
typedef std::tr1::unordered_map<A, int, AHasher> AMap;
...

What’s wrong with this code? Yes, exactly: the operator in AHasher must be const. Errors with const correctness are a typical, easy-to-overlook kind of problem in day-to-day programming. GCC’s reaction to something like that is that something “discards qualifiers”. This may be perfectly correct, and after a while you even get used to it. But as you can see with Clang, you can do much better.

The following two screenshots directly compare GCC’s and Clang’s output when compiling the code above. Because there is a template instantiation involved, GCC buries you in its typical wall of text before it actually tells you what’s wrong (in the last line).

Clang’s output is much better formatted; it shows you the template instantiation steps much more cleanly, and in the last line it tells you to the point what is really wrong: …but method is not marked const. Yeah!
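And the fix is just as much to the point: const-qualify the call operator. A minimal sketch of the corrected hasher:

struct AHasher
{
  std::size_t operator() (const A& a) const  // now const, so the container may call it
  {
    return std::tr1::hash<std::string>()(a._str1) ^
      std::tr1::hash<std::string>()(a._str2);
  }
};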


Refactor now


Sometimes we write code or a test and think: make it work first and refactor it later. But this ‘later’ may not come for a while.
What would you say about a mechanic or a craftsman who does his work and does not clean up afterwards? Would you drive your car with stuff lying around in the engine bay? Or use your bathroom with dirt all over?
Certainly not.
So don’t wait until someone else has to clean up your code, or until you come back after a while and the first thing you have to do is clean up.
I know that when things get tough and deadlines are near, refactoring does not have top priority. So why not use an iteration, or even just a few hours, to clean up afterwards? It will help you in the future.

Take your programming course with a grain of salt, please

If you are cursed with silly rules in your programming course, we offer you some words of encouragement: find a mentor and keep your mental sanity and your programming habits.

Lately, we had a talk with one of our former interns, who now happens to study informatics at university. He presented some code he had written for his programming course and we did a team code review. The review itself was a lot of fun and sparked quite a few discussions. At one point, we discussed different implementation styles for a method, changing the rather complex single-return code into an early-return version. Our former intern (now student) listened to the solution and stated: “I am not allowed to do that.”

There was a sudden silence, as everyone tried to comprehend what that meant.

The student explained: “My course instructor prefers the single return approach over the early return style.” Well, that’s one thing; we can handle different opinions. “And”, he continued, “he announced there will be a painful deduction of points if we don’t comply with this style.” When the class tried to discuss this point, the explanation given was: “the single return style is superior because the other style is frowned upon.”

We couldn’t believe it. But, as it turns out, there are many rules like the one above in this programming course. And nearly every rule is highly debatable, if not plain wrong (in our perception).

There is no problem with the presentation of certain rules in a beginner’s programming course. Novices need clear and precise rules to learn, according to the Dreyfus Model of Skill Acquisition. The concept just doesn’t work for students who aren’t on the Novice level anymore. These students are explicitly forbidden from creating more advanced solutions. They are discouraged from looking into different programming styles because it would only harm their grades.

We can think of a possible explanation for this scenario: the assignments have to be evaluated by the course instructors. It takes a lot of hard work (and time) to evaluate hundreds of totally different solutions to the same problem. If the solutions are mostly similar in style and concepts, the evaluation is a lot easier and can be done without fully understanding the code.

This is a rather poor explanation. It says: “don’t be too advanced in your field of study or you will be too troublesome to attend to”. This is essentially anesthetization by decree. But the real problem arises when you realize that there won’t be any follow-up programming courses. Nobody will ever teach you the more advanced concepts or rectify the silly rules that were meant to get you along as a beginner. After you’ve successfully mastered this course, your studies focus on the more academic topics of our field. The next opportunity to develop your programming skills in a professional setting is your first software development job.

We don’t have a practical solution to this problem. One obvious solution would be to have more instructors evaluate fewer assignment solutions in the same time, enabling them to dive deeper into the code and give better personalized feedback. This scenario lacks, at the very least, enough capable instructors. In reality, Novice level students (in the sense of the Dreyfus Model) are often taught by Advanced Beginner level instructors (called “tutors”).

But we have a word of encouragement for all you students out there feeling dumbed down by your instructors: it’s not your fault! Take your programming course rules with a (big) grain of salt and talk to other developers. If you don’t already know anybody in the industry, try to make contact with a fellow open source developer on the web. It’s really just the advice “Find a Mentor” from the book Apprenticeship Patterns (highly recommended for aspiring software developers) applied in real life.

Because if you don’t actively unlearn all these arbitrary rules, or at least put them into perspective, you’ll start your professional developer career with the burden of some really antiquated code quirks.

Good luck and tell us your story, if you want.

The Impatient Acceptance Test

When implementing new features it is always a good idea to test them – preferably with automated acceptance tests. Yet, there is a vast number of pitfalls to be avoided when doing so in order to put the testing effort to proper use. Just last week, I encountered a new one (at least for me): The impatient test.

The new feature was basically a long-running background operation which presents its results in a table on the GUI. If the operation succeeds, date and time are displayed in the corresponding cell of the table; if not, the cell is left untouched (i.e. left empty if there was no successful run yet, or, if there was, still showing the date of the last successful run).

During the test, four tasks were to be completed by the background operation, two of which were supposed to succeed and the others to fail. So, the expected result was something like:

[Figure: the expected result]

Having obeyed the “fail-first” guideline and having seen the test pass later on, I was quite sure the test and the feature worked as intended. Yet, manual testing proved otherwise: with the exact same scenario, task 4 always succeeded.

In fact, there was a bug that caused task 4 to produce a false positive. But why did the automated test not uncover this flaw? Let’s recall what the test does:

  1. Prepare the test environment and the application
  2. Start the background operation comprised of tasks 1-4
  3. Wait
  4. Evaluate/assert the results
  5. Clean up

Some investigation revealed that the problem was caused by the fact that the test simply did not wait long enough for the background operation to finish properly, i.e. the results were evaluated before task 4 had finished. Thus, the false positive occurred just after the test had checked for it:

[Figure: timeline of the test – the results are evaluated before everything is finished]

After spotting the source of the problem, several possible solutions came to mind, ranging from “wait longer in the test” to “explicitly display unfinished runs in the application”. The most elegant and practical of these is to have the test wait for the background operation to actually finish instead of just waiting for a fixed period of time, even though this required some more infrastructure.
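A minimal sketch of that idea: instead of sleeping for a fixed period, the test polls for completion with a safety timeout. The waitUntilFinished() helper and the isFinished() predicate are hypothetical; they stand in for whatever the test infrastructure uses to query the application’s job status.

#include <chrono>
#include <functional>
#include <stdexcept>
#include <thread>

// Blocks until isFinished() reports true or the timeout expires.
void waitUntilFinished(const std::function<bool()>& isFinished,
                       std::chrono::seconds timeout)
{
  const auto deadline = std::chrono::steady_clock::now() + timeout;
  while (!isFinished()) {
    if (std::chrono::steady_clock::now() > deadline) {
      throw std::runtime_error("background operation did not finish in time");
    }
    std::this_thread::sleep_for(std::chrono::milliseconds(100));
  }
}

The timeout keeps a hanging operation from blocking the test run forever, while a successful run proceeds as soon as the work is actually done, without guessing how long is “long enough”.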

Though acceptance testing is a great tool for developing software, this experience reminded me that flawed, or not completely correct, tests can lull you into a false sense of security if you pay too little attention. Manual testing in addition to automated testing may help to avoid these pitfalls.

Don’t mix C++ smart pointers with references

This post will teach by example that mixing smart pointers with references in C++ is not a particularly good idea.

As I have done in the past, I will use this post as a means to remember and to push the following principle deeper into my head – and hopefully into yours as a reader and C++ programmer:

Do not mix smart pointers with references in your C++ programs.

Of course, I knew that before I created this little helper library that was supposed to make it easier to send data asynchronously over an existing connection. Here is the situation (simplified):

class A
{
  ...
  void doStuff();

  private:
     // a private shared_ptr to B
    boost::shared_ptr<B> _bPointer;
};

class C
{
  public:
    C(B& b) : _bRef(b)
    {}

    void someMethodThatDoesSomethingWithB();

    ~C()
    {
      _bRef.resetSomeValueToDefault();
    }

  private:
     // a private reference to B which is set in the ctor
    B& _bRef;
};

void A::doStuff()
{
  createBpointerIfNotExisting();
  C myC(*_bPointer);
  myC.someMethodThatDoesSomethingWithB();
  if (someCondition) {
    // Delete this B instance.
    // A new instance will be created next time
    _bPointer.reset();
  }
}

So class A holds a shared pointer to B, which is handed as a reference to an instance of class C in method A::doStuff. Class C stores the B instance as a reference and interacts with it during its lifetime, which ends at the end of A::doStuff.

The last interaction occurs at the very end of its life – in the destructor.

Those are the most important facts, but I’ll give you a few more moments …

The following happens (in A::doStuff):

  • In createBpointerIfNotExisting(): if no instance of B exists yet (i.e. _bPointer is null), a new B instance is created and held in _bPointer
  • An instance myC of C is created on the stack; a reference to the B instance is passed as constructor parameter
  • If someCondition is true, _bPointer is reset, which means that the B instance gets deleted
  • A::doStuff() ends and myC goes out of scope
  • The destructor of C is called and _bRef is accessed
  • Since the B instance does not exist anymore … memory corruption!

The most annoying thing about this kind of error is that the program crashes somewhere, but almost never where the error actually occurred. This means that you get stack traces pointing you right into some rock-solid third-party library which has never failed in all the time you have known and used it, or into some completely unrelated part of your code that had worked without any problems before and hasn’t been changed in years.

I even had these classes unit tested before I integrated them. But for some strange reason – maybe because everything gets reset after each test method – the bug never occurred in the tests.

So always be very cautious when you mix smart pointers with references, and when you do, make sure you have your object lifetimes completely under control!
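One way to get those lifetimes under control (a sketch, not the original code) is to let C share ownership of the B instance instead of holding a bare reference:

class C
{
  public:
    explicit C(const boost::shared_ptr<B>& b) : _b(b)
    {}

    ~C()
    {
      // _b keeps the B instance alive at least until this point,
      // even if A::doStuff() has already reset its own pointer
      _b->resetSomeValueToDefault();
    }

  private:
    boost::shared_ptr<B> _b;
};

With this variant, _bPointer.reset() in A::doStuff() merely drops A’s reference; the B instance is destroyed only after myC has gone out of scope.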

Upgrading your app to Grails 2.0.0? Better wait for 2.0.1

Grails 2.0.0 is a major step forward for this popular and productive, JVM-based web framework. It has many great new features that make you want to migrate existing projects to this new version.

So I branched our project and started the migration process. Everything went smoothly and I only had to fix some minor compilation problems to get our application running again. Soon the first runtime errors occurred and approximately 30 out of over 70 acceptance tests failed. Some analysis showed three major issue categories causing the failures:

  1. Saving domain objects with belongsTo() associations may fail with a NULL not allowed for column "AUTHOR_ID"; SQL statement: insert into book (id, version, author_id, name) values (null, ?, ?, ?) [90006-147] message due to Grails issue GRAILS-8337. Setting the other direction of the association manually can act as a workaround:
    book.author.book = book
  2. When using the MarkupBuilder with the img tag in your TagLibs, your images may disappear. This is due to a new img closure defined in ApplicationTagLib. The correct fix is using
    delegate.img

    in your MarkupBuilder closures. See GRAILS-8660 for more information.

  3. Handling of null and the Groovy NullObject seems to be broken in some places. We got org.codehaus.groovy.runtime.typehandling.GroovyCastException: Cannot cast object 'null' with class 'org.codehaus.groovy.runtime.NullObject' to class 'Note' when using Groovy collections’ find() and casting the result with as:
     Note myNote = notes?.find {it.title == aTitle} as Note

    Removing the type information and the cast may act as a workaround. Unfortunately, we were not able to reproduce this issue in plain Groovy and did not have time to extract a small Grails example exhibiting the problem.

These bugs and some other changes may make you reconsider the migration of a bigger project at this point in time. Some of them are already resolved, so 2.0.1 may be the release to wait for if you are planning a migration. We will keep an eye on the next releases and try to switch to 2.0.x when our biggest show stoppers are resolved.

Even though I would advise against migrating bigger existing applications to Grails 2.0.0, I would start new projects on this – otherwise great – new platform release.

Grails 2.0.0 Update: Test Problems

Recently we tried to upgrade to Grails 2.0.0, but problems with mocks kept our tests from passing.

Grails 2 has some nice improvements over the previous 1.3.x versions, and we thought we’d give it a try. Upgrading our application and its 18 plugins went smoothly (we already used the database migration plugin). The application started and ran without problems. The better console output and stack traces are a welcome improvement. All in all, a pleasant surprise!
Just a quick run of the tests for verification and we could commit to our upgrade branch. Boom!

junit.framework.AssertionFailedError:
No more calls to 'method' expected at this point. End of demands.

Looking at the failing unit test showed that we did not use any mock object for this method call. Running the test alone let it pass. Hmm, seems like we hit GRAILS-8530. The problem even exists between unit and integration tests: when you mock something in a unit test, it is also mocked in the integration tests, which run after the unit tests.
Even mocking via an Expando metaclass or the map notation did not work reliably. So upgrading is not viable for us at the moment.

Separate Master Data and Variable Data

If you come across an accumulation of data fields, you might want to split them into master data and variable data. This could at least help when dealing with storage issues.

In the design of data structures or objects, there are two different kinds of data, namely “Master Data” (German: Stammdaten) and “Variable Data” (German: Bewegungsdaten). The first kind, master data, are data fields that seldom change over time and can sometimes be used to “identify” an object. The second kind are data fields that capture the current value of one of the object’s aspects, but are expected to change in the future. If you can categorize your data fields in this manner, think about separating them into different objects.

Let me give an actual example. An application we develop has a central instance (the “center”) that distributes situational data to several operation desks, powered by client applications, called the “clients”. Each client instance is registered in the center to enable supervision and administration of the clients. The data for each client is stored in a ClientInformation object that is mapped to a database relation. Let’s have a look at some of the data fields of ClientInformation:

  • int internalIdentifier – the database primary key for the record
  • String type – some type of the client application
  • String instanceName – the given readable denotation of the operation desk
  • String version – the currently installed version of the client application
  • Date connectionDate – the last time this client application established a connection
  • Date lastActionDate – the last time this client application issued an action command (“was active”)

We could start all kinds of (justified) discussions about primitive obsession, too much information in one place and so on, but for this blog entry, only the categorization into master data and variable data is of interest. My opinion on the example is that the first three data fields (internalIdentifier, type and instanceName) are definitely in the master data category. The last two data fields are clearly variable data, while the version field is somewhere in between. My gut tells me to categorize the version as master data, because it won’t change on a daily schedule.

When separating the two categories of data, the ClientInformation object may turn into a mere reference holder. In this case, ClientInformation holds two references: one to a new ClientMasterData object (holding internalIdentifier, type, instanceName and version) and another to a new ClientVariableData object (holding connectionDate and lastActionDate).
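A minimal sketch of that split, written here in C++ with simplified field types since the original field list is language-neutral, could look like this:

#include <ctime>
#include <string>

// Master data: seldom-changing, partly identifying fields
struct ClientMasterData
{
  int internalIdentifier;      // database primary key
  std::string type;            // type of the client application
  std::string instanceName;    // readable denotation of the operation desk
  std::string version;         // currently installed client version
};

// Variable data: fields that are expected to change frequently
struct ClientVariableData
{
  std::time_t connectionDate;  // last connection of the client application
  std::time_t lastActionDate;  // last action command issued by the client
};

// ClientInformation now merely aggregates the two parts
// (by value here; references or pointers would work just as well)
struct ClientInformation
{
  ClientMasterData masterData;
  ClientVariableData variableData;
};

Whether the two parts are embedded by value, referenced, or mapped to separate tables is a storage decision; the point is that the frequently changing fields now live in their own, small object.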

A less radical modification would be to let the master data remain in the ClientInformation object and only extract the variable data into a new ClientConnectionData object. If a client connects, only the referenced ClientConnectionData object has to change.

If you separate your master data from the variable data, you can very easily concentrate on the variable data for performance optimizations. This is where the data changes will happen and where a tuned storage strategy will pay off. The master data should be designed more carefully with regard to type information, so if we really start the discussion about primitive obsession, I would first turn to the master data fields and argue that the type shouldn’t be a String but an Enum, and that the version should be a more sophisticated Version type. This can be modelled even with a slow object/relational mapper, because the data is only written/read once.

The next time you come across one of your data model objects that contains more than two data fields, have a look at how they categorize into master and variable data. Perhaps you will see a good reason to split the object.

Python in C++: Rerouting Python’s stdout

A few weeks ago I published a post that showed how to embed Python into C++ and how to exchange data between the two languages. Today, I want to present a simple practice that comes in handy when embedding Python into C++: rerouting Python’s standard output using CPython.

After initializing Python, the new destination of the output stream needs to be created using PyFile_FromString(…) and set to be the new standard output:

PyObject* pyStdOut = PyFile_FromString("CONOUT$", "w+");
PyObject* sys = PyImport_ImportModule("sys");
PyObject_SetAttrString(sys, "stdout", pyStdOut);

Basically, that’s all it takes. When executing a Python script via PyRun_String(…), all calls to print(…) will write their data directly to pyStdOut.
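As an illustration of that step (the script string and the dictionary handling here are assumptions for the sketch, not taken from the original post), running a small Python 2 snippet through PyRun_String(…) could look like this:

// Execute a small script; its print output now goes to pyStdOut
PyObject* mainModule = PyImport_AddModule("__main__");   // borrowed reference
PyObject* mainDict = PyModule_GetDict(mainModule);       // borrowed reference
PyObject* result = PyRun_String("print 'hello from embedded python'",
                                Py_file_input, mainDict, mainDict);
if (!result) {
  PyErr_Print();  // report script errors instead of failing silently
}
Py_XDECREF(result);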

After the Python script has finished, the data in pyStdOut can be retrieved and further processed in C++ by converting it using PyFile_AsFile(…):

FILE* pythonOutput = PyFile_AsFile(pyStdOut);
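From here on, it is plain C stdio. Here is a minimal sketch of reading the captured output back into a std::string, assuming the file object was opened on a regular, seekable file (e.g. a temporary file path) rather than the console device CONOUT$, and that <cstdio> and <string> are included:

// Note: the FILE* is owned by the Python file object, so do not fclose() it
std::fflush(pythonOutput);
std::rewind(pythonOutput);

std::string captured;
char buffer[4096];
std::size_t bytesRead = 0;
while ((bytesRead = std::fread(buffer, 1, sizeof(buffer), pythonOutput)) > 0) {
  captured.append(buffer, bytesRead);
}
// 'captured' now holds everything the script wrote to standard output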