Building Windows C++ Projects with CMake and Jenkins

A short and easy way to build Windows C++ software with CMake in Jenkins (with the restriction of having to support Visual Studio 8).

The C++ programming environment where I feel most comfortable is GCC/Linux (lately with some Clang here and there). In terms of build systems I use CMake whenever possible. This environment also makes it easy to use Jenkins as CI server and RPM for deployment and distribution tasks.

So when I was presented with the task of setting up a Windows C++ project in Jenkins, I tried to do it the same way as much as possible.

The Goal:

A Jenkins job should be set up that builds a Windows C++ project on a Windows 7 build slave. For reasons that I will not get into here, compatibility with Visual Studio 8 is required.

The first step was to download and install the correct Windows SDK. This provides everything that is needed to build C++ software under Windows.

Then, after installing CMake, the first naive try looked like this (in an “execute Windows Batch file” build step):

cmake . -DCMAKE_BUILD_TYPE=Release

Of course this cannot work, because CMake does not find the compiler and the rest of the build environment.

Problem: Build Environment

When I do cmake builds manually, i.e. not in Jenkins, I open the Visual Studio 2005 Command Prompt which is a normal windows command shell with all environment variables set. So I tried to do that in Jenkins, too:

call "c:\Program Files\Microsoft SDKs\Windows\v6.0\Bin\SetEnv.Cmd" /Release /x86

cmake . -DCMAKE_BUILD_TYPE=Release

This also did not work and even worse, produced strange (to me, at least) error messages like:

'Cmd' is not recognized as an internal or external command, operable program or batch file.

The system cannot find the batch label specified - Set_x86

After some digging, I found the solution: a feature of windows batch programming called delayed expansion, which has to be enabled for SetEnv.Cmd to work correctly.

Solution: SetEnv.cmd and delayed expansion

setlocal enabledelayedexpansion

call "c:\Program Files\Microsoft SDKs\Windows\v6.0\Bin\SetEnv.Cmd" /Release /x86

cmake . -DCMAKE_BUILD_TYPE=Release

nmake

Yes! With this little trick it worked perfectly. And it feels almost like GCC/CMake under Linux: nice, short and easy.

Basic Image Processing Tasks with OpenCV

2D detectors and scientific CCD cameras produce many megabytes of image data. The open source library OpenCV is highly recommended as a workhorse for all kinds of image processing tasks.

For one of our customers in the scientific domain we integrate a lot of hardware into the existing measurement and control network. A good part of these devices are 2D detectors and scientific CCD cameras, which come with all sorts of interfaces like Ethernet, FireWire and frame grabber cards. Our task is then to write glue software that makes the camera available and controllable for the scientists.

One standard requirement for us is to do some basic image processing and analytics. Typically, this entails flipping the image horizontally and/or vertically, rotating the image by some multiple of 90 degrees, and calculating statistics like standard deviation.

The starting point is always some image data in memory that has been acquired from the camera. Most of the time the image data consists of either gray values (8 or 16 bit) or RGB(A).

As we generally try not to fall victim to the NIH syndrome, we use open source image processing libraries. The first one we tried was CImg, which is a header-only (!) C++ library for image processing. The header-only part is very cool and handy, since you just have to #include <CImg.h> and you are done. No further dependencies. The immediate downside, of course, is long compile times. We are talking about > 40000 lines of C++ template code!

The bigger issue we had with CImg was that for multi-channel images the memory layout is planar: R1R2R3R4…G1G2G3G4…B1B2B3B4. And since the images from the camera usually come interleaved like R1G1B1R2G2B2… we always had to do tricks to use CImg on these images correctly. These tricks eventually killed us in terms of performance, since some of these 2D detectors produce lots of megabytes of image data that have to be processed in real time.
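To illustrate the kind of trick we needed, here is a sketch (not our production code) of what converting the camera's interleaved layout into CImg's planar layout boils down to – one full extra pass over every frame:

#include <cstddef>

// copy interleaved R1G1B1R2G2B2... data into planar
// R1R2...G1G2...B1B2... layout as expected by CImg
void deinterleaveRGB(const unsigned char* interleaved,
                     unsigned char* planar,
                     std::size_t pixelCount)
{
  for (std::size_t i = 0; i < pixelCount; ++i) {
    planar[i]                = interleaved[3*i];     // R plane
    planar[pixelCount + i]   = interleaved[3*i + 1]; // G plane
    planar[2*pixelCount + i] = interleaved[3*i + 2]; // B plane
  }
}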

So we turned to OpenCV. Its headline alone was very promising:

OpenCV (Open Source Computer Vision) is a library of programming functions for real time computer vision.

Especially the words “real time” look good in there. But let’s see.

Image data in OpenCV is represented by instances of class cv::Mat, which is, of course, short for Matrix. From the documentation:

The class Mat represents an n-dimensional dense numerical single-channel or multi-channel array. It can be used to store real or complex-valued vectors and matrices, grayscale or color images, voxel volumes, vector fields, point clouds, tensors, histograms.

Our standard requirements stated above can then be implemented like this (gray scale, 8 bit image):

void processGrayScale8bitImage(uint16_t width, uint16_t height,
                               const double& rotationAngle,
                               uint8_t* pixelData)
{
  // create cv::Mat instance
  // pixel data is not copied!
  cv::Mat img(height, width, CV_8UC1, pixelData);

  // flip vertically
  // third parameter of cv::flip is the so-called flip-code
  // flip-code == 0 means vertical flipping
  cv::Mat verticallyFlippedImg(height, width, CV_8UC1);
  cv::flip(img, verticallyFlippedImg, 0);

  // flip horizontally
  // flip-code > 0 means horizontal flipping
  cv::Mat horizontallyFlippedImg(height, width, CV_8UC1);
  cv::flip(img, horizontallyFlippedImg, 1);

  // rotation (a bit trickier)
  // 1. calculate center point
  cv::Point2f center(img.cols/2.0F, img.rows/2.0F);
  // 2. create rotation matrix
  cv::Mat rotationMatrix =
    cv::getRotationMatrix2D(center, rotationAngle, 1.0);
  // 3. create cv::Mat that will hold the rotated image.
  // For some rotationAngles width and height are switched
  cv::Mat rotatedImg;
  if (static_cast<int>(rotationAngle / 90.0) % 2 != 0) {
    // switch width and height for rotations like 90, 270 degrees
    rotatedImg =
      cv::Mat(cv::Size(img.size().height, img.size().width),
              img.type());
  } else {
    rotatedImg =
      cv::Mat(cv::Size(img.size().width, img.size().height),
              img.type());
  }
  // 4. actual rotation
  cv::warpAffine(img, rotatedImg,
                 rotationMatrix, rotatedImg.size());

  // save the rotated image into a TIFF file
  cv::imwrite("myimage.tiff", rotatedImg);
}
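A hedged call-site sketch – the frame dimensions and acquireFrame() are my assumptions standing in for the real camera interface:

#include <stdint.h>
#include <vector>

void acquireFrame(uint8_t* buffer); // assumed camera API (hypothetical)

void handleNextFrame()
{
  const uint16_t width = 1024;   // assumed frame dimensions
  const uint16_t height = 1024;

  std::vector<uint8_t> frameBuffer(width * height);
  acquireFrame(&frameBuffer[0]);  // fills the buffer with raw pixel data
  processGrayScale8bitImage(width, height, 90.0, &frameBuffer[0]);
}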

The cool thing is that almost the same code can be used for our other image types, too. The only difference is the image type for the cv::Mat constructor:

8-bit gray scale:  CV_8UC1
16-bit gray scale: CV_16UC1
RGB:  CV_8UC3
RGBA: CV_8UC4
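As a short illustration – pixelData16, rgbPixelData and rgbaPixelData are assumed buffers of matching size, not code from our project – the corresponding constructor calls look like this:

// wrapping existing pixel buffers, again without copying
cv::Mat gray16Img(height, width, CV_16UC1, pixelData16);  // 16-bit gray scale
cv::Mat rgbImg(height, width, CV_8UC3, rgbPixelData);     // interleaved RGB
cv::Mat rgbaImg(height, width, CV_8UC4, rgbaPixelData);   // interleaved RGBA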

Additionally, the whole thing is blazingly fast! All performance problems gone. Yay!

Getting basic statistical values is also a breeze:

void calculateStatistics(const cv::Mat& img)
{
  // minimum, maximum, sum
  double min = 0.0;
  double max = 0.0;
  cv::minMaxLoc(img, &min, &max);
  double sum = cv::sum(img)[0];

  // mean and standard deviation
  cv::Scalar cvMean;
  cv::Scalar cvStddev;
  cv::meanStdDev(img, cvMean, cvStddev);
}
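The cv::Scalar results hold one entry per channel; for a gray scale image, index 0 is the only channel. A hedged continuation that logs the values could look like this:

  // continuing inside calculateStatistics() from above
  // (requires <iostream>)
  std::cout << "min: " << min << ", max: " << max
            << ", sum: " << sum
            << ", mean: " << cvMean[0]
            << ", stddev: " << cvStddev[0] << std::endl;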

All in all, the OpenCV experience was very positive, so far. They even support CMake. Highly recommended!

Use Boost’s Multi Index Container!

Boost’s multi index container is a very cool and useful piece of code. Make it a part of your toolbox. You can start slowly by replacing uses of std::set and std::multiset with simple boost::multi_index_container instances.

Sometimes, after you have used a special library or other special programming tool for a job, you forget about it because the specific use case doesn’t come up anymore. Boost’s multi_index_container can fall into this category, because the need to hold data in memory and access it by different keys does not arise all the time.

Therefore, this post is intended as a reminder for C++ programmers that there exists this pretty cool thing called boost::multi_index_container and that you can use it in more situations than you would think at first.

(If you’re already using it on a regular basis you may stop here, jump directly to the comments and tell us about your typical use cases.)

I remember that when I discovered boost::multi_index_container, I found it quite intimidating at first sight. All those templates that are used in sometimes weird ways can trigger that feeling if you are not a template metaprogramming specialist (i.e. haven’t yet read Andrei Alexandrescu’s book “Modern C++ Design”).

But once you have fought your way through the documentation and the unit test for your first example is green, it doesn’t look that complicated anymore.

My latest use case for boost::multi_index_container was data objects that should be sorted by two different date-times. (For dates and times we use boost::date_time, of course). At first, the requirement was to store the objects sorted by one date time. I used a std::set for that with a custom comparator. Everything was fine.

With changing requirements it became necessary to retrieve objects by another date time, too. I started to use a second std::set with a different comparator, but then I remembered that there was this cool container somewhere in boost for which you can define multiple indices…

After I had set it up with the two date time indices, the code also looked much cleaner because in order to update one object with a new time stamp I could just call container->replace(…) instead of fiddling around with the std::set.

Furthermore, I noticed that setting up a boost::multi_index_container with a specific key makes it much clearer what you intend with this data structure than using a std::set with a custom comparator. It is not that much more typing effort, and you can practice template metaprogramming a little bit 🙂

Let’s compare the two implementations:

#include <boost/shared_ptr.hpp>
#include <boost/date_time/posix_time/posix_time.hpp>
using boost::posix_time::ptime;

// objects of this class should be stored
class MyDataClass
{
  public:
    const ptime& getUpdateTime() const;
    const ptime& getDataChangedTime() const;

  private:
    ptime _updateTimestamp;
    ptime _dataChangedTimestamp;
};
typedef boost::shared_ptr<MyDataClass> MyDataClassPtr;

Now the definition of a multi index container:

#include <boost/multi_index_container.hpp>
#include <boost/multi_index/ordered_index.hpp>
#include <boost/multi_index/mem_fun.hpp>
using namespace boost::multi_index;

typedef multi_index_container
<
  MyDataClassPtr,
  indexed_by
  <
    ordered_non_unique
    <
      const_mem_fun<MyDataClass, 
        const ptime&, 
        &MyDataClass::getUpdateTime>
    >
  >
> MyDataClassContainer;

compared to the std::multiset version:

#include <set>

// we need a comparator first
struct MyDataClassComparatorByUpdateTime
{
  bool operator() (const MyDataClassPtr& lhs, 
                   const MyDataClassPtr& rhs) const
  {
    return lhs->getUpdateTime() < rhs->getUpdateTime();
  }
};
typedef std::multiset<MyDataClassPtr, 
                      MyDataClassComparatorByUpdateTime> 
   MyDataClassSetByUpdateTime;

What I like is that the typedef for the multi index container reads almost like a sentence. Besides, it is purely declarative (as long as you get away without custom key extractors), whereas with std::multiset you have to implement the comparator.
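For completeness, here is a sketch of how my actual use case could look with both date time indices and the replace() call mentioned earlier. The index tags and the little update helper are my own illustrative assumptions, not code from the project:

#include <boost/multi_index/tag.hpp>

struct byUpdateTime {};
struct byDataChangedTime {};

typedef multi_index_container
<
  MyDataClassPtr,
  indexed_by
  <
    ordered_non_unique
    <
      tag<byUpdateTime>,
      const_mem_fun<MyDataClass,
        const ptime&,
        &MyDataClass::getUpdateTime>
    >,
    ordered_non_unique
    <
      tag<byDataChangedTime>,
      const_mem_fun<MyDataClass,
        const ptime&,
        &MyDataClass::getDataChangedTime>
    >
  >
> MyDataClassContainerByBothTimes;

void updateFirstObject(MyDataClassContainerByBothTimes& container,
                       const MyDataClassPtr& objectWithNewTimestamp)
{
  // objects can now also be retrieved sorted by the second date time,
  // e.g. via container.get<byDataChangedTime>()

  // replace() swaps in the updated object and re-sorts all indices
  MyDataClassContainerByBothTimes::iterator it = container.begin();
  if (it != container.end()) {
    container.replace(it, objectWithNewTimestamp);
  }
}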

In addition to being a reminder, I hope this post also serves as motivation to get to know boost::multi_index_container and to make it a part of your toolbox. If you are still hesitant, start small by replacing usages of std::set/std::multiset.

Performance Hogs Sometimes Live in Most Unexpected Places

Surprises when measuring performance are common – but sometimes you just can’t believe it.

When we develop software we always apply the best practice of not optimizing prematurely. This plays together with other best practices like writing the most readable code, or YAGNI.

‘Premature’ means different things in different situations. If you don’t have performance problems it means that there is absolutely no point in optimizing code. And if you do have performance problems it means that Thou Shalt Never Guess which code to optimize because software developers are very bad at this. The keyword here is profiling.

Since we don’t like to be “very bad” at something we always try to improve our skills in this field. The skill of guessing which code has to be optimized, or “profiling in your head”, is no different in this regard.

So in most profiling sessions I have a few unspoken guesses as to which parts of the code the profiler will point me to. Unfortunately, I have to say that I am very often very surprised by the outcome.

Surprises in performance fixing sessions are common, but they come in different qualities. One rather BIG surprise was to find out that std::string::find of the C++ standard library is significantly slower (by a factor > 10) than its C library counterpart strstr (discovered with gcc-4.4.6 on CentOS 6, verified with eglibc-2.13 and gcc-4.7).

Yes, you read that right, and you may not believe it. That was my reaction, too, so I wrote a little test program containing only two strings and calls to std::string::find and std::strstr, respectively. The results were – and I’ve no problem repeating myself here – a BIG surprise.
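To give an idea of what such a test can look like, here is a minimal sketch (not the original program; string contents and iteration counts are arbitrary):

#include <cstring>
#include <ctime>
#include <iostream>
#include <string>

int main()
{
  // a long text without a match, so both functions scan everything
  const std::string haystack(100000, 'a');
  const std::string needle("abcab");
  const int iterations = 10000;

  std::clock_t start = std::clock();
  for (int i = 0; i < iterations; ++i) {
    volatile std::size_t pos = haystack.find(needle);
    (void)pos;
  }
  std::clock_t middle = std::clock();
  for (int i = 0; i < iterations; ++i) {
    volatile const char* hit = std::strstr(haystack.c_str(), needle.c_str());
    (void)hit;
  }
  std::clock_t end = std::clock();

  std::cout << "std::string::find: " << (middle - start) << " clock ticks\n"
            << "std::strstr:       " << (end - middle) << " clock ticks\n";
  return 0;
}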

The reason is that std::strstr uses a highly optimized string matching algorithm, whereas this std::string::find implementation works with a straightforward character-by-character comparison.

So when doing profiling sessions, always be prepared for shaking-your-world-view kind of surprises. They can even come from your beloved and highly regarded standard library.

UPDATE: See this stackoverflow question for more information.

Clang, The Friendly Compiler

The Clang C/C++ compiler can rightfully be called The Friendly Compiler, since it makes it much easier to find and understand compile errors and potential bugs in your code. Go use it!

A while back I suggested to make friends with your compiler as a basis for developing high quality code. My focus then was GCC since it was and still is the compiler I use most of the time. Well, turns out that although GCC may be a reasonably good companion on the C/C++ development road, there are better alternatives.

Enter Clang: I had heard about Clang a few times in the past but never gave it a real shot. That changed after I watched Chandler Carruth’s talk at GoingNative 2012.

First of all I was stunned by the quote from Richard Stallman about GCC being deliberately designed to make it hard to use in non-free software. I always wondered why IDEs like KDevelop keep reinventing the wheel by implementing their own C/C++ parsers instead of using already existing and free GCC code. This was the answer: THEY SIMPLY COULDN’T!!

One main point of Chandler’s talk was the quality of Clang’s diagnostic messages. GCC is the kind of friend that, although telling you exactly what’s wrong with your code, often does so in complicated sentences hidden in walls of text.

Clang, on the other hand, tries very hard to comprehend what you really wanted to write. It speaks in much more understandable words and shows you the offending code locations with nice graphics.

You could say that compared to Clang, which is empathic, understanding, pragmatic and always tries to be on the same page with you, GCC comes across more like an arrogant, self-pleasing and I’m-more-intelligent-than-you kinda guy.

Where GCC says: “What? That should be a template instantiation? Guess what, you’re doing it WRONG!!”, Clang is more like: “Ok my friend, now let’s sit down together and analyse step by step what the problem is here. I’ll make us tea.”

You’ll find many examples of Clang’s nice diagnostic output in Chandler’s talk. Here is another one, as a little teaser:

#include <cstddef>
#include <string>
#include <tr1/functional>
#include <tr1/unordered_map>

struct A
{
  std::string _str1;
  std::string _str2;
};

struct AHasher
{
  std::size_t operator() (const A& a)
  {
    return std::tr1::hash<std::string>()(a._str1) ^
      std::tr1::hash<std::string>()(a._str2);
  }
};
...
typedef std::tr1::unordered_map<A, int, AHasher> AMap;
...

What’s wrong with this code? Yes, exactly: the operator in AHasher must be const. Errors with const correctness are typical, easy-to-overlook problems in day-to-day programming. GCC’s reaction to something like that is that something discards qualifiers. This may be perfectly correct, and after a while you even get used to it. But as you can see with Clang, you can do much better.

The following two screenshots directly compare GCC’s and Clang’s output when compiling the code above. Because there is a template instantiation involved, GCC buries you in its typical wall of text before it actually tells you what’s wrong (last line).

Clang’s output is much better formatted, it shows the template instantiation steps much more cleanly, and in the last line it tells you to the point what is really wrong: …but method is not marked const. Yeah!

Don’t mix C++ smart pointers with references

This post will teach by example that mixing smart pointers with references in C++ is not a particularly good idea.

As I did in the past, I will use this post as a means to remember and to push the following principle deeper into my head – and hopefully into yours as a reader and C++ programmer:

Do not mix smart pointers with references in your C++ programs.

Of course, I knew this before I created a little helper library that was supposed to make it easier to send data asynchronously over an existing connection. Here is the situation (simplified):

class A
{
  ...
  void doStuff();

  private:
     // a private shared_ptr to B
    boost::shared_ptr<B> _bPointer;
};

class C
{
  public:
    C(B& b) : _bRef(b)
    {}

    ~C()
    {
      _bRef.resetSomeValueToDefault();
    }

    void someMethodThatDoesSomethingWithB();

  private:
     // a private reference to B which is set in the ctor
    B& _bRef;
};

void A::doStuff()
{
  createBpointerIfNotExisting();
  C myC(*_bPointer);
  myC.someMethodThatDoesSomethingWithB();
  if (someCondition) {
    // Delete this B instance.
    // A new instance will be created next time
    _bPointer.reset();
  }
}

So class A holds a shared pointer to B, which is handed as a reference to an instance of class C in method A::doStuff. Class C stores the B instance as a reference and interacts with it during its lifetime, which ends at the end of A::doStuff.

The last interaction occurs at the very end of its life – in the destructor.

I highlighted the most important facts, but I’ll give you a few more moments …

The following happens in A::doStuff:

  • if no B instance exists yet (i.e. _bPointer is null), createBpointerIfNotExisting() creates one and stores it in _bPointer
  • instance myC of C is created on the stack, receiving a reference to the B instance as ctor parameter
  • if someCondition is true, _bPointer is reset, which means that the B instance gets deleted
  • A::doStuff() ends and myC goes out of scope
  • the destructor of C is called and accesses _bRef
  • since the B instance does not exist anymore … memory corruption!!!

The most annoying thing with this kind of error is that the program crashes somewhere, but almost never where the error actually occurred. This means that you get stack traces pointing you right into some rock-solid 3rd party library which has never failed since you started using it, or to some completely unrelated part of your code that has worked without any problems before and hasn’t been changed in years.

I even had these classes unit tested before I integrated them. But for some strange reason – maybe because everything gets reset after each test method – the bug never occurred in the tests.

So always be very cautious when you mix smart pointers with references, and when you do, make sure you have your object lifetimes completely under control!
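In our case, one way out – a sketch, assuming C may legitimately share ownership of B – is to hand the shared_ptr itself to C, so the B instance lives at least as long as C does:

class C
{
  public:
    explicit C(const boost::shared_ptr<B>& b) : _b(b)
    {}

    ~C()
    {
      // safe now: _b keeps the B instance alive even if
      // A::doStuff() has already reset its own pointer
      _b->resetSomeValueToDefault();
    }

  private:
    boost::shared_ptr<B> _b;
};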

Breakpad and Your CI – A Strong Team

Google’s breakpad together with your CI system can prepare you for the worst.

If your C++ software has to run 24/7 on some server rack at your customer’s data center, it has to meet not only all the user requirements, but also requirements that come from you as developer. When your customer calls you about some “problems”, “strange behaviours”, or even crashes, you must be able to detect what went wrong. Fast!

One means to this end is of course logging. But if your application crashes, nothing beats a decent stacktrace 🙂

Google’s breakpad library comes in very handy here because it provides very easy crash reporting. Even if your process has 2 gigs of virtual memory, breakpad shrinks that ‘core dump’ down to a couple of megs.
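For illustration, here is a sketch of what the client-side integration can look like on Linux. The exact ExceptionHandler signature varies between breakpad versions (older ones take a plain dump path string instead of a MinidumpDescriptor), so treat this as an assumption-laden outline rather than copy-paste code:

#include <cstdio>
#include "client/linux/handler/exception_handler.h"

// called by breakpad after the minidump has been written
static bool dumpCallback(const google_breakpad::MinidumpDescriptor& descriptor,
                         void* context, bool succeeded)
{
  std::printf("minidump written to: %s\n", descriptor.path());
  return succeeded;
}

int main(int argc, char* argv[])
{
  // write minidumps to /tmp; keep the handler alive for the
  // whole lifetime of the process
  google_breakpad::MinidumpDescriptor descriptor("/tmp");
  google_breakpad::ExceptionHandler handler(descriptor, NULL, dumpCallback,
                                            NULL, true, -1);

  // ... run the actual server application here ...
  return 0;
}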

Breakpad pulls that trick off by using so-called symbol files that you have to generate for each compiled binary (executable or shared library). These symbol files together with the breakpad dump file that is created at crash time are then used to recreate the stacktrace.

Because every compilation creates different binaries, the dump file and the symbol files need to be ‘based on’ exactly the same binaries.

This is where you can let your CI system do some work for you. At one of our customers we use Jenkins not only for the usual automatic builds and tests after each check-in but also for release builds that go into production.

At the end of each build, breakpad’s symbol dumper runs over all compiled executables and libraries and generates the symbol files. These are then archived together with the compiled binaries.

Now we are prepared. Whenever a customer sends us a dump file, we can easily pull out the symbol files corresponding to the software version that runs at this customer and let breakpad do its magic…

Debug Output

Crafting debug output from std::istream data can be dangerous!

Writing a blog post sometimes can be useful to get some face-palm kind of programming error out of one’s system.

Putting such an error into written words then serves a couple of purposes:

  • it helps oneself remember
  • it helps others who read it not to do the same thing
  • it serves as error log for future reference

So here it comes:

In one project we use JSON to serialize objects in order to send them over HTTP (we use the very nice JSON Spirit library, btw).

For each object we have serialize/deserialize methods which do the heavy lifting. After having developed a new deserialize method I wanted to test it together with the HTTP request handling. Using curl for this I issued a command like this:

curl -X PUT http://localhost:30222/some/url -d @datafile

This command issues a PUT request to the given URL and uses data in ./datafile, which contains the JSON, as request data.

The request came through but the deserializer wouldn’t do its work. WTF? Let’s see what goes on – let’s put some debug output in:

MyObject MyObjectSerializer::deserialize(std::istream& jsonIn)
{
   // debug output starts here
   std::string stringToDeserialize;
   Poco::StreamCopier::copyToString(jsonIn, stringToDeserialize);
   std::cout << "The String: " << stringToDeserialize << std::endl;
   // debug output ends here

   json_spirit::Value value;
   json_spirit::read(jsonIn, value);
   ...
}

I’ll give you some time to spot the bug…. 3..2..1..got it? Please check the Poco::StreamCopier documentation if you are not familiar with the POCO libraries.
What’s particularly misleading is the “Copier” part of the name StreamCopier, because it does not exactly copy the bytes from the stream into the string – it consumes the stream while copying. This means that after the debug output code has run, the istream is empty.

Unfortunately, I did not immediately recognize the change in the JSON parser’s error output, which might have given me a hint at the real problem. Instead, I spent the next half hour searching for errors in the JSON I was sending.

When I finally realized it …
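For the record, a sketch of the fixed version: read the stream once into a string, log the string, and hand that same string to the parser (json_spirit::read also accepts a std::string):

MyObject MyObjectSerializer::deserialize(std::istream& jsonIn)
{
   // read the stream ONCE into a string ...
   std::string stringToDeserialize;
   Poco::StreamCopier::copyToString(jsonIn, stringToDeserialize);
   // ... log it for debugging ...
   std::cout << "The String: " << stringToDeserialize << std::endl;
   // ... and parse the string instead of the now-drained stream
   json_spirit::Value value;
   json_spirit::read(stringToDeserialize, value);
   ...
}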

Readability of Boolean Expressions

Readability of boolean expressions lies in the eye of the beholder.

Following up on various previous posts on code readability and style I want to provide two more examples today – this time under the common theme of “handling of boolean values”.

Consider this (1a):

bool someMethod()
{
  if (expression) {
    return true;
  } else {
    return false;
  }
}

Yes, there are people who consider this more readable than (1b):

bool someMethod()
{
  return (expression);
}

Another example is this (2a):

  if (someExpression() == true)
    ...

versus my preferred version (2b):

  if (someExpression())
    ...

So what could be the reason for these different viewpoints? One explanation I can think of is this: let’s say you have a background in C and you are therefore used to doing something like:

#define FALSE (0)
#define TRUE (!FALSE)

In other words, you may not see boolean as a type of its own, like int or double, with a well-defined value range. Instead you see it more like an enumerated type, which makes it feel very natural to do an expression == true comparison.

At the same time, it does not feel very natural to see the result of a boolean expression as being of type bool, with all the consequences – e.g. being able to return it immediately, as in the first example.

Another explanation is that 1a and 2a are as explicit as can be. You don’t have to make any mental effort to understand what the code does.

While these may be possible explanations, my guess is that most of you, like me, still see 1a and 2a as unnecessary visual clutter and consider 1b and 2b far more readable.

The Great Divide

There is a great divide in the C++ developer community between “normal” developers that use only basic language features and very savvy ones that know every little corner of the language. The upcoming C++ standard deepens this divide even more.

Recently, I had two very contrary conversations about C++ which illustrate very well the great divide in the C++ developer community.

The first was with the technical lead of a team that writes and maintains drivers and control software for a scientific institution. These systems run 24/7 and have to be very stable and reliable.

I had discovered that they use a self-written toolbox library containing classes like SharedPtr<T> and Thread, and immediately suspected a classic case of NIH syndrome. I asked him about it and why they don’t use well-established libraries like boost. He told me that they indeed use only the standard library and their own toolbox.

The reason he gave was that despite boost being the most elegant C++ library out there, it requires very good knowledge of the most advanced C++ mechanisms, and his team is not on this level… I should probably mention here that his team does a very good job of running their systems. So, apparently, they get along very well using only basic C++ features and no “fancy” boost stuff.

The other conversation was with a friend of mine with whom I chat regularly about all sorts of programming-related stuff. This time the topic was the upcoming C++ standard and all its exciting new features. He has lots of experience with C++ and knows the language very well. But even someone like him had a hard time really understanding what rvalue references are all about. I had not looked at them in detail yet, so he tried to explain them to me. During our discussion I wondered whether teams like the one introduced above will ever use rvalue references, or other C++0x stuff, in their production code – other than maybe the auto keyword for type inference, or constructor delegation.
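For readers who, like me back then, have not looked at them in detail: here is a minimal sketch of the idea behind rvalue references, using a hypothetical Buffer class that is not tied to any real code. A move constructor can steal the guts of a temporary instead of copying them:

#include <cstddef>

class Buffer
{
  public:
    // move constructor: binds to temporaries (rvalues) only
    Buffer(Buffer&& other)
      : _data(other._data), _size(other._size)
    {
      // leave the source empty but in a valid state
      other._data = 0;
      other._size = 0;
    }

  private:
    char* _data;
    std::size_t _size;
};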

Honestly, I don’t think stuff like rvalue refs will become a feature that is often used by “standard industry” teams, because it adds a lot of complexity to an already complex language. Even easy-to-grasp stuff like the new keywords override, constexpr and final, or additional initialization means like std::initializer_list<T>, will take a long time to be used regularly by most C++ teams.

Instead, most of C++0x will greatly increase the divide between “normal” C++ developers, who get along well using only basic language features, and experts who know every little corner of the language. And this is simply because there is so much more to know with C++0x.

But let’s not paint this picture overly black. I, for one, am looking forward to the new standard, and I will certainly spread the word about the new possibilities and features in every C++ team I work with.