Responsive Qt GUIs – Threading with Qt

Qt4 used to have only primitive threading support. Starting with version 4.4, new classes and functions make your threading life a lot easier. So in case you haven't gotten around to looking at those features yet, do it now – it's worth it.

If you have used Qt4 for some time now, especially since pre-4.4 versions, you may or may not be aware of the latest developments in the threading part of the library. This post shall be a reminder in case you didn't follow the recent versions in detail or just didn't get around to taking a closer look and/or updating.

In pre-4.4 versions, the only way to do threading was to use the QThread class: subclass QThread, implement the run method, and there you had your thread. This looks fine at first but, taking the title of this post as an example, it can get annoying very fast. Sometimes you have just a few lines of code you want to keep away from the GUI thread because, e.g., they could potentially block on some communication socket. Subclassing QThread for every little work package is not something you want to do, so I guess many users just wrote their own thread pool or the like.

Starting with version 4.4, Qt gained two major threading features which, IMHO, the Qt people do not advertise very well. The first is QThreadPool together with QRunnable. All Java programmers, who have had java.lang.Runnable since the beginning, may have their laugh now, I'll wait…
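
Using the pool can be as simple as the following minimal sketch; the task class and the work it does are made up for illustration:

#include <QRunnable>
#include <QThreadPool>

// Hypothetical work package: the class name and the blocking work are
// made up for illustration.
class SocketPollTask : public QRunnable
{
public:
  void run()
  {
    // do the potentially blocking work here, away from the GUI thread,
    // e.g. read from a socket, talk to a device, ...
  }
};

// somewhere in the GUI code:
void startBackgroundWork()
{
  // the pool takes ownership (autoDelete is true by default)
  QThreadPool::globalInstance()->start(new SocketPollTask);
}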

The second new threading feature is the QtConcurrent namespace (from the Qt documentation):

The QtConcurrent namespace provides high-level APIs that make it possible to write multi-threaded programs without using low-level threading primitives such as mutexes, read-write locks, wait conditions, or semaphores

Sounds great! What else?

QtConcurrent includes functional programming style APIs for parallel list processing, including a MapReduce and FilterReduce implementation for shared-memory (non-distributed) systems, and classes for managing asynchronous computations in GUI applications.

This is really great stuff. Functions like QtConcurrent::run together with QFuture<T> and QFutureWatcher<T> can simplify your threading code significantly. So, if you haven't gotten around to looking at those new classes by now, I can only advise you to do it immediately. Allocate a refactoring slot in your next sprint to replace all those old QThread subclasses with shiny new QRunnables or QtConcurrent functions. It's worth it!
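
Here is a minimal sketch of how QtConcurrent::run and QFutureWatcher can be combined; the blocking function and the client class are made up for illustration:

#include <QFuture>
#include <QFutureWatcher>
#include <QObject>
#include <QtConcurrentRun>

// Hypothetical blocking function -- made up for illustration.
int queryRemoteService()
{
  // ... potentially blocking communication ...
  return 42;
}

class ServiceClient : public QObject
{
  Q_OBJECT
  public:
    ServiceClient()
    {
      connect(&watcher_, SIGNAL(finished()),
              this, SLOT(onQueryFinished()));
    }

    void startQuery()
    {
      // run the blocking call in a pool thread; the GUI thread returns immediately
      QFuture<int> future = QtConcurrent::run(queryRemoteService);
      watcher_.setFuture(future);
    }

  private slots:
    void onQueryFinished()
    {
      int result = watcher_.result();  // safe: the computation has finished
      // ... update the GUI with result ...
      (void)result;
    }

  private:
    QFutureWatcher<int> watcher_;
};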

Let’s get back to the responsive GUIs example. In his Qt Quarterly article, Witold Wysota describes in detail every technical possibility to keep your GUI responsive. It’s a very good article which provides a lot of insights. He starts with manual event processing and mentions the QtConcurrent features only at the very end of the article. I suggest the following order of threading-solutions-to-consider:

  1. QtConcurrent functions
  2. QThreadPool + QRunnable
  3. rest

Stay responsive!

Non-trivial Custom Data in QActions

If you want to implement dynamic context menus with non-trivial custom data in your QActions, the Qt4 documentation is not very helpful. This post describes some solutions to this task.

Sometimes I get very frustrated with the online Qt4 documentation. Sure, the API docs are massive but for many parts they provide only very basic information. Unfortunately, many Qt books, too, often stop exactly at the point where it gets interesting.

Context menus are one example of this. The API docs just show you how menus in general are created and how they are connected to the application: basically, all menus are instances of QMenu which are filled with instances of QAction. QActions are used as representations of any kind of action that can be triggered from the GUI.

The standard method to connect QActions to the GUI controlling code is to use one of their signals, e.g. triggered(). This signal can be connected to a slot of your own class where you can then execute the corresponding action. This works fine as long as you have a limited set of actions that are all known at coding time. For example, a menu in your tool bar which contains the actions Undo/Redo/Cut/Copy/Paste can be created very easily.

But there are use cases where you do not know in advance how many actions there will be in your menus. For example, in an application that provides a GUI for composing a complex data structure, you may want to provide the user with assisting context menus for adding new data parts, depending on which parts already exist. Suddenly you have to connect many actions to one slot, and then you somehow have to know which QAction the user actually clicked.

Btw, let’s all recall the Command Pattern for a moment… ok, now on to some solutions.

Method 0 – QAction::setData: The QAction class provides the method setData(), which can be used to store custom data in a QAction instance using QVariant as a data wrapper. If you then use QMenu's triggered(QAction*) signal, which gives you a pointer to the QAction that was clicked, you can extract your data from the QAction. I find this a little bit ugly since you have to wrap your data into a QVariant, which can get messy if you want to provide more than one data element.
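
A small sketch of how that can look; the widget, the slot and the "part type" payload are made up for illustration:

#include <QAction>
#include <QMenu>
#include <QVariant>
#include <QWidget>

// Hypothetical widget -- names are made up for illustration.
class PartEditor : public QWidget
{
  Q_OBJECT
  public:
    void showContextMenu(const QPoint& globalPos)
    {
      QMenu menu;
      QAction* addHeader = menu.addAction("Add header");
      addHeader->setData(QVariant(QString("header")));
      QAction* addFooter = menu.addAction("Add footer");
      addFooter->setData(QVariant(QString("footer")));

      // QMenu::triggered(QAction*) hands us the clicked action
      connect(&menu, SIGNAL(triggered(QAction*)),
              this, SLOT(onActionTriggered(QAction*)));
      menu.exec(globalPos);
    }

  private slots:
    void onActionTriggered(QAction* action)
    {
      QString partType = action->data().toString();
      // ... add the corresponding data part ...
      (void)partType;
    }
};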

Method 1 – Enhancing QAction::triggered(): By sub-classing QAction you can provide your own triggered() signal which you can enhance with all parameters you need in your slot.

class MyAction : public QAction
{
  Q_OBJECT
  public:
    // QAction has no default constructor in Qt4, so a parent has to be passed
    MyAction(const QString& someActionInfo, QObject* parent)
      : QAction(parent),
        someActionInfo_(someActionInfo)
    {
      connect(this, SIGNAL(triggered()),
              this, SLOT(onTriggered()));
    }
  signals:
    void triggered(QString someActionInfo);
  private slots:
    void onTriggered() {
      emit triggered(someActionInfo_);
    }
  private:
    QString someActionInfo_;
};

This is nice and easy but limited to what data types can be transported via signal/slot parameters.

Method 2 – QSignalMapper: From the Qt4 docs on QSignalMapper:

This class collects a set of parameterless signals, and re-emits them with integer, string or widget parameters corresponding to the object that sent the signal.

… which is basically the same as we did in method 1.
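
For completeness, a small sketch of how QSignalMapper could be wired up here; the class and slot names are made up for illustration:

#include <QAction>
#include <QMenu>
#include <QSignalMapper>
#include <QWidget>

// Hypothetical widget -- names are made up for illustration.
class PartEditorWithMapper : public QWidget
{
  Q_OBJECT
  public:
    PartEditorWithMapper()
      : mapper_(new QSignalMapper(this))
    {
      connect(mapper_, SIGNAL(mapped(QString)),
              this, SLOT(addPart(QString)));
    }

    void fillMenu(QMenu* menu)
    {
      QAction* action = menu->addAction("Add header");
      // map the parameterless triggered() to a string parameter
      mapper_->setMapping(action, QString("header"));
      connect(action, SIGNAL(triggered()), mapper_, SLOT(map()));
    }

  private slots:
    void addPart(const QString& partType)
    {
      // ... create the data part identified by partType ...
      (void)partType;
    }

  private:
    QSignalMapper* mapper_;
};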

Method 3 – Separate domain specific action classes: When the context menu is created, you add QActions to the menu using QMenu's addAction methods. Then you create instances of separate Command-like classes (as in the Command Pattern) and connect them to the QActions' triggered() signal:

// Command-like custom action class. No GUI related stuff here!
class MySpecialAction : public QObject
{
  Q_OBJECT
  public:
    MySpecialAction(QObject* parent, <all necessary parameters to execute>);

  public slots:
    void execute();
  ...
};

// create context menu
QAction* specialAction =
  menu->addAction("Special Action Nr. 1");
MySpecialAction* mySpecialAction =
  new MySpecialAction(specialAction, ...);
connect(specialAction, SIGNAL(triggered()),
        mySpecialAction, SLOT(execute()));

As you can see, QAction specialAction is parent of mySpecialAction, thereby taking ownership of mySpecialAction. This is my preferred approach because it is the most flexible in terms of what custom data can be stored in the command. Furthermore, the part that contains the execution code – MySpecialAction – has nothing at all to do with GUI stuff and can easily be used in other parts of the system, e.g. non-GUI system interfaces.

How have you solved this problem?

The C++ Shoot-yourself-in-the-Foot of the Week

I think we can all agree that C++, compared to other languages, provides quite a number of possibilities to screw up. Everybody working with the language at some point probably had problems with e.g. its exception system, operator overloading or automatic type conversions – to name just a few of the darker corners.

There are also many mitigation strategies, which all come down to banning certain aspects of the language and/or defining strict code conventions. If you follow Google's style guide, for example, you cannot use exceptions and you are restricted to a very small set of boost libs.

But developers – being humans – often find creative and surprising ways to thwart even the best intentions. In an external project the following conventions are in place:

  • Use const wherever you possibly can.
  • Use boost::shared_ptr wherever it makes sense.
  • Define typedefs to shared_ptrs in order to make code more readable.
  • typedefs to shared_ptrs are to be defined like this:
typedef boost::shared_ptr<MySuperDuperClass> MySuperDuperClassPtr;
  • typedefs to shared const pointers are to be defined like this:
typedef boost::shared_ptr<const MySuperDuperClass> MySuperDuperClassCPtr;

As you can see, postfixes Ptr and CPtr are the markers for normal shared_ptrs and constant shared_ptrs.

Last week, a compile error about some const to non-const conversion made me nearly pull my hair out. The types of variables that were involved all had the CPtr postfix but still the code didn’t compile. After a little digging I found that one of the typedefs involved was like this:

typedef boost::shared_ptr<  MySuperDuperClass> MySuperDuperClassCPtr;

Somebody had just deleted the const modifier in front of MySuperDuperClass but left the CPtr name untouched. And because non-const to const conversions are allowed, this went undetected for a couple of weeks. Nice going!
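
To illustrate why this stays silent for so long, here is a small self-contained sketch (class names made up): handing a non-const shared_ptr around as a "const" one compiles happily, so the mislabeled typedef only blows up once a genuine const pointer has to be converted back.

#include <boost/shared_ptr.hpp>

class MySuperDuperClass {};

// the real convention ...
typedef boost::shared_ptr<const MySuperDuperClass> MySuperDuperClassCPtr;
// ... and the offending typedef: named like a const pointer, but isn't one
typedef boost::shared_ptr<MySuperDuperClass> BrokenCPtr;

int main()
{
   boost::shared_ptr<MySuperDuperClass> ptr(new MySuperDuperClass);

   // non-const to const is allowed, so the broken typedef goes unnoticed:
   BrokenCPtr broken = ptr;
   MySuperDuperClassCPtr cptr = broken;

   // ... until a real CPtr meets the broken one:
   // BrokenCPtr oops = cptr;   // error: const to non-const conversion

   return 0;
}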

Any suggestions for a decent style checker for C++? Thanks in advance 😉

Wrestling with Qt’s Model/View API – Filtering in Tree Models

Qt4’s model/view API can be kind of a challenge sometimes. Well, prepare for an even harder fight when sorting and filtering come into play.

As I described in one of my last posts, Qt4’s model/view API can be kind of a challenge sometimes. Well, prepare for an even harder fight when sorting and filtering come into play.

Let’s say you finally managed to get the data into your model and to provide correct implementations of the required methods in order for the attached views to display it properly. One of your next assignments after that is very likely something like implementing some kind of sorting and filtering of the model data. Qt provides a simple-at-first-sight proxy architecture for this with API class QSortFilterProxyModel as main ingredient.

A small preliminary side note: last time I checked, it was good OO practice for a class to have only one responsibility. And wasn't that supposed to be even more important for good API design? Well, let's not get distracted by such minor details.

With my model implementation, none of the standard filtering mechanisms, like setting a filter regexp, were applicable, so I had to override the method

bool filterAcceptsRow ( int source_row, const QModelIndex& sourceParent ) const

in order to make it work. Well, the rows disappeared as they should, but unfortunately so did all the columns except the first one. So what to do now? One small part of the documentation of QSortFilterProxyModel made me a little uneasy:

“… This simple proxy mechanism may need to be overridden for source models with more complex behavior; for example, if the source model provides a custom hasChildren() implementation you should also provide one in the proxy model.”

What on earth should I do with that? “… may need to be overridden“? “… for example.. hasChildren()…” Why can’t they just say clearly what methods must be overridden in which cases???

After a lot more trial and error I found that for whatever reason,

int columnCount ( const QModelIndex& parent ) const

had to be overridden in order for the columns to reappear. The implementation looks like what I had thought the proxy model would do already:

int MyFilter::columnCount ( const QModelIndex& parent ) const
{
   return sourceModel()->columnCount(parent);
}
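
For reference, this is roughly how the pieces fit together in my case – a sketch only; the class name matches the snippet above, but the acceptance logic is reduced to a stub and the wiring comments use made-up variable names:

#include <QSortFilterProxyModel>

class MyFilter : public QSortFilterProxyModel
{
  public:
    // forward the column count of the source model, otherwise only
    // the first column shows up
    int columnCount ( const QModelIndex& parent ) const
    {
       return sourceModel()->columnCount(parent);
    }

  protected:
    bool filterAcceptsRow ( int sourceRow,
                            const QModelIndex& sourceParent ) const
    {
       // model-specific acceptance logic goes here; accept everything
       // in this sketch
       (void)sourceRow;
       (void)sourceParent;
       return true;
    }
};

// wiring it up (made-up variable names):
// MyFilter* proxy = new MyFilter;
// proxy->setSourceModel(myTreeModel);
// treeView->setModel(proxy);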

So beware of QSortFilterProxyModel! It’s not as easy to use as it looks, and with that kind of fuzzy documentation it is even harder.

Forced into switch/case – Qt’s Model/View API

During my life as a programmer I have more and more come to dislike switch/case statements. They tend to be hard to grasp and with languages like C/C++ they are often the source of hard-to-find errors. Compilers that have warnings about missing default statements or missing cases for enumerated values can help to mitigate the situation, but still, I try to avoid them whenever I can.

The same holds true for if-elseif cascades or lots of if-elses in one method. They are hard to read, hard to maintain, increase the Crap, etc.

If you share this kind of mindset, I invite you to implement some custom models with Qt4's Model/View API. The design of the Model/View classes is derived from the well-known MVC pattern, which separates data (model), presentation (view) and application logic (controller). In Qt's case, view and controller are combined, supposedly making it simpler to use.

The basic idea of Qt’s implementation of its Model/View design is that views communicate with models using so-called model indexes. Using a table as an example, a row/column pair of (3,4) would be a model index pointing to data element in row 3, column 4. When a view is to be displayed it asks the attached model for all sorts of information about the data.

There are a few model implementations for standard tasks like simple string lists (QStringListModel) or file system manipulation (QDirModel < Qt4.4, QFileSystemModel >= Qt4.4). But usually you have to roll your own. For that, you have to subclass one of the abstract model classes that suits your needs best and implement some crucial methods.

For example, model methods rowCount and columnCount are called by the view to obtain the range of data it has to display. It then uses, among others, the data method to query all the stuff it needs to display the data items. The data method has the following signature:

QVariant data ( const QModelIndex& index, int role ) const

Seems easy to understand: parameter index determines the data item to display, and with QVariant as return type it is possible to return a wide range of data types. Parameter role is used to query different aspects of the data items. Apart from Qt::DisplayRole, which usually triggers the model to return some text, there are quite a lot of other roles. Let's look at a few examples:

  • Qt::ToolTipRole can be used to define a tool tip for the data item
  • Qt::FontRole can be used to define specific fonts
  • Qt::BackgroundRole and Qt::ForegroundRole can be used to set the corresponding colors

So the views call data repeatedly with all the different roles and your model implementation is supposed to handle those different calls correctly. Say you implement a table model with some rows and columns. The design of the data method is forcing you into something like this …

QVariant data ( const QModelIndex& index, int role ) const {
   if (!index.isValid()) {
      return QVariant();
   }

   switch (role)
   {
      case Qt::DisplayRole:
         switch (index.column())
         {
            case 0:
               // return display data for column 0
               break;
            case 1:
               // return display data for column 1
               break;
            ...
         }
         break;

      case Qt::ToolTipRole:
         switch (index.column())
         {
            case 0:
               // return tool tip data for column 0
               break;
            case 1:
               // return tool tip data for column 1
               break;
            ...
         }
         break;
      ...
   }
   return QVariant();
}

… or equivalent if-else structures. What happens here? The design of the data method forces the implementation to “switch” over role and column in one method. But nested switch/case statements? AARGH!! With the mindset outlined at the beginning, this is clearly unacceptable.

So what to do? Well, to tell the truth, I'm still working on the best™ solution to that, but anyway, here is a first easy improvement: handler methods. Define a handler method for each role you want to support and store them in a map, like so:

#include <QAbstractTableModel>
#include <map>

class MyTableModel : public QAbstractTableModel
{
  Q_OBJECT

  typedef QVariant (MyTableModel::*RoleHandler) (const QModelIndex& idx) const;
  typedef std::map<int, RoleHandler> RoleHandlerMap;

  public:
    enum Columns {
      NAME_COLUMN = 0,
      ADDRESS_COLUMN
    };

    MyTableModel() {
      m_roleHandlerMap[Qt::DisplayRole] =
         &MyTableModel::displayRoleHandler;
      m_roleHandlerMap[Qt::ToolTipRole] =
         &MyTableModel::tooltipRoleHandler;
    }

    QVariant displayRoleHandler(const QModelIndex& idx) const {
      switch (idx.column()) {
        case NAME_COLUMN:
          // return name data
          break;

        case ADDRESS_COLUMN:
          // return address data
          break;

        default:
          Q_ASSERT(!"Invalid column");
          break;
      }
      return QVariant();
    }

    QVariant tooltipRoleHandler(const QModelIndex& idx) const {
      ...
    }

    QVariant data(const QModelIndex& idx, int role) const {
      // omitted: check for invalid model index

      if (m_roleHandlerMap.count(role) == 0) {
        return QVariant();
      }

      RoleHandler roleHandler =
        (*m_roleHandlerMap.find(role)).second;
      return (this->*roleHandler)(idx);
    }
  private:
    RoleHandlerMap m_roleHandlerMap;
};

The advantage of this approach is that the supported roles are very well communicated. We still have to switch over the columns, though.

I’m currently working on a better solution which splits the data calls up into more meaningful methods and kind of binds the columns to specific parts of the data items in order to get a more row-centric approach: one row = one element, columns = element attributes. I hope this will get me out of this switch/case/if/else nightmare.
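
Just to give an idea of the direction, here is one possible way such a row-centric binding could look – a sketch only, with a made-up Person row type and member-pointer column bindings, not necessarily the solution I will end up with:

#include <QAbstractTableModel>
#include <QString>
#include <QVariant>
#include <vector>

// Hypothetical row type -- made up for illustration.
struct Person
{
  QString name;
  QString address;
};

class PersonTableModel : public QAbstractTableModel
{
  public:
    // bind each column to one attribute of the row element
    typedef QString Person::*Attribute;

    PersonTableModel()
    {
      columns_.push_back(&Person::name);     // column 0
      columns_.push_back(&Person::address);  // column 1
    }

    int rowCount(const QModelIndex&) const    { return int(rows_.size()); }
    int columnCount(const QModelIndex&) const { return int(columns_.size()); }

    QVariant data(const QModelIndex& idx, int role) const
    {
      if (!idx.isValid() || role != Qt::DisplayRole) {
        return QVariant();
      }
      // one row = one element; the column index selects the bound attribute
      const Person& person = rows_[idx.row()];
      return person.*columns_[idx.column()];
    }

  private:
    std::vector<Person>    rows_;
    std::vector<Attribute> columns_;
};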

What do you think about it? I mean, is it just me, or is an API that forces you into crappy code just not so well done?

How would you solve this?

Evil operator overloading of the day

The other day we encountered a strange stack overflow when cout-ing an instance of a custom class. The stream output operator << was overloaded for the class to get a nice output, but since the class had only two std::string attributes the implementation was very simple:

#include <iostream>
#include <string>

using namespace std;

class MyClass
{
   public:
   ...
   private:
      string stringA_;
      string stringB_;

   friend ostream& operator << (ostream& out, const MyClass& myClass);
};

ostream& operator << (ostream& out, const MyClass& myClass)
{
   return out << "MyClass (A: " << myClass.stringA_ 
              <<", B: " << myClass.stringB_ << ")"  << std::endl;
}

Because the debugger pointed us to a completely separate code part, our first thought was that maybe some old libraries had been accidentally linked or some memory had been corrupted somehow. Unfortunately, all efforts in that direction led to nothing.

That was the time when we noticed that using old-style printf instead of std::cout did work just fine. Hm..

So back to that completely separate code part. Is it really so separate? And what does it do anyway?

We looked closer and after a few minutes we discovered the following code parts. Just look a little while before you read on, it’s not that difficult:

// some .h file somewhere in the code base that somehow got included where our stack overflow occurred:

...
typedef std::string MySpecialName;
...
ostream& operator << (ostream& out, const MySpecialName& name);

// and in some .cpp file nearby

...
ostream& operator << (ostream& out, const MySpecialName& name)
{
   out << "MySpecialName: " << name  << std::endl;
   return out;
}
...

Got it? Yes, right! That overloaded output stream operator << for MySpecialName, together with that innocent looking typedef above, puts your program right into death by segmentation fault. Overloading the output stream operator for a given type can be a good idea – as long as that type is not a typedef of std::string. The code above not only leads to operator << recursively calling itself, it also sucks every other part of the code into its black hole that happens to include the .h file and wants to << a std::string variable.
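
If you still want pretty output for such a name type, one way out is to make it a real class instead of a typedef, so the operator only ever applies to that class – a sketch with made-up names:

#include <iostream>
#include <string>

// A real type instead of "typedef std::string MySpecialName": the
// operator<< below now only applies to MySpecialName, not to every
// std::string in the program.
class MySpecialName
{
public:
   explicit MySpecialName(const std::string& value) : value_(value) {}
   const std::string& value() const { return value_; }
private:
   std::string value_;
};

std::ostream& operator << (std::ostream& out, const MySpecialName& name)
{
   return out << "MySpecialName: " << name.value();
}

int main()
{
   MySpecialName name("foo");
   std::cout << name << std::endl;   // no recursion, no stack overflow
   return 0;
}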

You just have to love C++…

How much boost does a C++ newbie need?

The other day, I talked to a C++ developer, who is relatively new to the language, about the C++ training they just had at his company. The training topics were already somewhat advanced and covered, e.g., STL containers and their peculiarities, STL algorithms and some boost stuff like binders and smart pointers. That got me thinking about how much of the STL and boost a C++ developer simply has to know in order to survive their C++ projects.

There is also another angle to this. There are certain corners of the C++ language, e.g. template metaprogramming, which are just hard to get, even for more experienced developers. And because of that, in my opinion, they have no place in a standard industry C++ project. But where do you draw the line? With template metaprogramming it is obvious that it will probably never be in everyday use by Joe Developer. But what about e.g. boost's multi-index container or its functional programming stuff? One could say that it depends on the skills of the team whether more advanced stuff can be used or not. But suppose your team consists largely of C++ beginners and does not have much experience with the language: would you want to pass on using Boost.Spirit when you had to do some serious parsing? Or would you want to use error codes instead of decent exceptions, because exceptions add a lot more potentially “invisible” code paths? Probably not, but those are certainly not easy decisions.

One of the problems with STL and boost for a C++ beginner can be illustrated with the following easy problem: How do you convert an int into a std::string and back? Having already internalized the stream classes the beginner might come up with something like this:

 int i = 5;
 std::ostringstream out;
 out << i;
 std::string i_string = out.str();  

 int j=0;
 std::istringstream in(i_string);
 in >> j;
 assert(i == j);

But if he had just learned a little boost, he would know that, in fact, it is as easy as this:

 int i=5;
 std::string i_string = boost::lexical_cast<std::string>(i);

 int j = boost::lexical_cast<int>(i_string);

So you just have to know some basic boost stuff in order to write fairly decent C++ code. Besides boost::lexical_cast, which is part of the Boost Conversion Library, here is my personal list of mandatory boost knowledge:

Boost.Assign: Why still bother with std::vector::push_back, std::map::insert and the like, if there is a much easier and more concise syntax to initialize containers?

Boost.Bind (if you use functional programming): No one should be forced to wade through the mud of the STL binders any longer. boost::bind is just so much easier.

Boost.Foreach: Every for-loop becomes a code-smell after your first use of BOOST_FOREACH.

Boost.Member Function: see Boost.Bind

Boost.Smart Pointers: No comment is needed on that one.

As you can see, these are only the most basic libraries. Other extremely useful things for day-to-day programming are e.g. Boost.FileSystem, Boost.DateTime, Boost.Exceptions, Boost.Format, Boost.Unordered and Boost.Utilities.
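
To give a taste of a few of the libraries above in one place, here is a small sketch – nothing project-specific, just made-up values:

#include <iostream>
#include <map>
#include <string>
#include <vector>

#include <boost/assign/list_of.hpp>
#include <boost/foreach.hpp>
#include <boost/shared_ptr.hpp>

int main()
{
   // Boost.Assign: initialize containers without push_back/insert noise
   const std::vector<int> numbers = boost::assign::list_of(1)(2)(3);
   const std::map<std::string, int> ages =
      boost::assign::map_list_of("alice", 30)("bob", 42);

   // Boost.Foreach: loop without iterator boilerplate
   BOOST_FOREACH(int n, numbers) {
      std::cout << n << std::endl;
   }

   // Boost.Smart Pointers: no manual delete
   boost::shared_ptr<std::string> greeting(new std::string("hello"));
   std::cout << *greeting << " " << ages.find("bob")->second << std::endl;

   return 0;
}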

Of course, you don't have to memorize every part of the boost libraries, but boost.org should in any case be the first address to look at for a solution to your daily C++ challenges.

Structuring CppUnit Tests

This post shows how to structure CppUnit tests in non-trivial software systems so that they can easily be executed selectively during the code-compile-test cycle and at the same time are easy to execute as a whole by your continuous integration system.

While unit testing in Java is dominated by JUnit, C++ developers can choose between a variety of frameworks. See here for a comprehensive list. Here you can find a nice comparison of the biggest players in the game.

Being probably one of the oldest frameworks, CppUnit sure has some usability issues but is still widely used. It is criticised mostly because you have to do a lot of boilerplate typing to add new tests. In the following I will not repeat how tests can be written in CppUnit, as this is already described exhaustively (e.g. here or here). Instead I will concentrate on how to structure CppUnit tests in bigger projects. “Bigger” in this case means at least a few architecturally independent parts which are compiled independently, i.e. into different libraries.

Having independently compiled parts in your project means that you want to compile their unit tests independently, too. The goal is then to structure the tests so that they can easily be executed selectively during development time by the programmer and at the same time are easy to execute as a whole during CI time (CI meaning Continuous Integration, of course).

As C++ has no reflection or other meta-programming facilities like Java's annotations, things like automatic test discovery and how to add new tests become a whole topic of their own. See the CppUnit cookbook for how to do that with CppUnit. In my projects I only use the TestFactoryRegistry approach because it provides the most automation in this regard.
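
As a quick reminder, registering a test with the TestFactoryRegistry boils down to one macro per test class; a minimal sketch (the test class is made up):

#include <cppunit/TestFixture.h>
#include <cppunit/extensions/HelperMacros.h>

// Hypothetical test class -- made up for illustration.
class CalculatorTest : public CppUnit::TestFixture
{
  CPPUNIT_TEST_SUITE(CalculatorTest);
  CPPUNIT_TEST(testAddition);
  CPPUNIT_TEST_SUITE_END();

  public:
    void testAddition()
    {
      CPPUNIT_ASSERT_EQUAL(4, 2 + 2);
    }
};

// This line hooks the test into the TestFactoryRegistry, so a test runner
// can discover it without any further wiring.
CPPUNIT_TEST_SUITE_REGISTRATION(CalculatorTest);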

Let's begin with the simplest setup, the Link-Time Trap (see example source code): test runner and result reporter are set up in the “main” function that is compiled into an executable. The actual unit tests are compiled into separate libraries and are all linked to the executable that contains the main function. While this solution works well for small projects, it does not scale. This is simply because every time you change something during the code-compile-test cycle, the unit test executable has to be relinked, which can take a considerable amount of time the bigger the project gets. You fall into the Link-Time Trap!

The solution I use in many projects is as follows: Like in the simple approach, there is one test main function which is compiled into a test executable. All unit tests are compiled into libraries according to their place in the system architecture. To avoid the Link-Time-Trap, they are not linked to the test executable but instead are automatically discovered and loaded during test execution.

1. Automatic Discovery

Applying a little convention over configuration, all testing libraries end with the suffix “_tests.so”. The testing main function can then simply walk over the directory tree of the project and find all shared libraries that contain unit test classes.

2. Loading

If a “_tests.so” library has been found, it simply gets loaded using dlopen (under Unix/Linux). When the library is loaded, the unit tests are automatically registered with the TestFactoryRegistry.

3. Execution

After all unit test libraries have been found and loaded, test execution is the same as in the simple approach above.

Here is my enhanced testmain.cpp (see example source code).

#include <iostream>
#include <fstream>
#include <string>

#include <dlfcn.h>

#include <boost/filesystem.hpp>

#include <cppunit/TestResult.h>
#include <cppunit/TestResultCollector.h>
#include <cppunit/TextOutputter.h>
#include <cppunit/XmlOutputter.h>
#include <cppunit/extensions/TestFactoryRegistry.h>
#include <cppunit/ui/text/TestRunner.h>

using namespace boost::filesystem; 
using namespace std; 

// Recursively walk the directory tree and dlopen every "*_tests.so" found.
void loadPlugins(const std::string& rootPath) 
{
  directory_iterator end_itr; 
  for (directory_iterator itr(rootPath); itr != end_itr; ++itr) { 
    if (is_directory(*itr)) {
      // descend into subdirectories, but skip hidden ones
      string leaf = (*itr).leaf(); 
      if (leaf[0] != '.') { 
        loadPlugins((*itr).string()); 
      } 
      continue; 
    } 
    const string fileName = (*itr).string();
    if (fileName.find("_tests.so") == string::npos) { 
      continue;
    }
    // loading the library runs its static initializers, which register
    // the contained tests with the TestFactoryRegistry
    void * handle = 
      dlopen (fileName.c_str(), RTLD_NOW | RTLD_GLOBAL); 
    cout << "Opening : " << fileName.c_str() << endl; 
    if (!handle) { 
      cout << "Error: " << dlerror() << endl; 
      exit (1); 
    } 
  } 
} 

int main ( int argc, char ** argv ) { 
  string rootPath = "./"; 
  if (argc > 1) { 
    rootPath = static_cast<const char*>(argv[1]); 
  } 
  cout << "Loading all test libs under " << rootPath << endl; 
  string runArg = std::string ( "All Tests" ); 
  // get registry 
  CppUnit::TestFactoryRegistry& registry = 
    CppUnit::TestFactoryRegistry::getRegistry();
  
  loadPlugins(rootPath); 
  // Create the event manager and test controller 
  CppUnit::TestResult controller; 

  // Add a listener that collects test result 
  CppUnit::TestResultCollector result; 
  controller.addListener ( &result ); 
  CppUnit::TextUi::TestRunner *runner = 
    new CppUnit::TextUi::TestRunner; 

  std::ofstream xmlout ( "testresultout.xml" ); 
  CppUnit::XmlOutputter xmlOutputter ( &result, xmlout ); 
  CppUnit::TextOutputter consoleOutputter ( &result, std::cout ); 

  runner->addTest ( registry.makeTest() ); 
  runner->run ( controller, runArg.c_str() ); 

  xmlOutputter.write(); 
  consoleOutputter.write(); 

  return result.wasSuccessful() ? 0 : 1; 
}

As you can see the loadPlugins function uses the Boost.Filesystem library to walk over the directory tree.

It also takes a rootPath argument which you can pass as a parameter when you call the test main executable. This achieves the goal stated above: when you want to execute unit tests selectively during development, you can give the path of the corresponding testing library as parameter. Like so:

./testmain path/to/specific/testing/library

In your CI environment on the other hand you can execute all tests at once by giving the root path of the project, or the path where all testing libraries have been installed to.

./testmain project/root

CMake Builder Plugin for Hudson

Update: Check out my post introducing the newest version of the plugin.

Today I’m pleased to announce the first version of the cmakebuilder plugin for Hudson. It can be used to build cmake based projects without having to write a shell script (see my previous blog post). Using the scratch-my-own-itch approach I started out implementing only those features that I needed for my cmake projects which are mostly Linux/g++ based so far.

Let’s do a quick walk through the configuration:

1. CMake Path:
If the cmake executable is not in your $PATH variable you can set its path in the global Hudson configuration page.

2. Build Configuration:

To use the cmake builder in your Free-style project, just add “CMake Build” to your build steps. The configuration is pretty straightforward. You just have to set some basic directories and the build type.

[Screenshot: cmakebuilder demo config]

The demo config above results in the following behavior (shell pseudocode):

if $WORKSPACE/build_dir does not exist
   mkdir $WORKSPACE/build_dir
end if

cd $WORKSPACE/build_dir
cmake $WORKSPACE/src -DCMAKE_BUILD_TYPE=Debug -DCMAKE_INSTALL_PREFIX=$WORKSPACE/install_dir
make
make install

That’s it. Feedback is very much appreciated!!

Originally the plan was to have the plugin downloadable from the hudson plugins site by now but I still have some publishing problems to overcome. So if you are interested, make sure to check out the plugins site again in a few days. I will also post an update here as soon as the plugin can be downloaded.

Update: After fixing some maven settings I was finally able to publish the plugin. Check it out!

Make friends with your compiler

Suppose you are a C++ programmer on a project and you have the best intentions to write really good code. The one fellow you had better make friends with is your compiler. He will support you in your efforts whenever he can – unless you don't let him. One sure way to reject his help is to switch off all compiler warnings. I know it should be well-known by now that compiling at high warning levels is something you should always do, but it seems that many people just don't.
Taking g++ as an example, high warning levels do not just mean having “-Wall” switched on. Even if its name suggests otherwise, “-Wall” is just the minimum there. If you spend just 5 minutes or so looking at the man page of g++, you will find many more helpful and reasonable -W… switches. For example (g++-4.3.2):


-Wctor-dtor-privacy: Warn when a class seems unusable because all the constructors or destructors in that class are private, and it has neither friends nor public static member functions.

Cool stuff! Let's see what else is there:


-Woverloaded-virtual: Warn when a function declaration hides virtual functions from a base class. Example:

class Base
{
public:
   virtual void myFunction();
};

class Subclass : public Base
{
public:
   void myFunction() const;   // hides Base::myFunction() instead of overriding it
};

I would certainly like to be warned about that, but maybe that's just me.


-Weffc++: Warn about violations of the following style guidelines from Scott Meyers’ Effective C++ book

This is certainly one of the most “effective” weapons in your fight for bug-free software. It causes the compiler to spit out warnings about code issues that can lead to subtle and hard-to-find bugs but also about things that are considered good programming practice.
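
Putting the switches mentioned so far together, a compile command could look roughly like this (the selection of flags and the file name are just an example, not a complete recommendation):

g++ -Wall -Wextra -Wctor-dtor-privacy -Woverloaded-virtual -Weffc++ -c myclass.cpp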

So suppose you read the g++ man page, enabled all warning switches in addition to “-Wall” that seemed reasonable to you, and you plan to compile your project cleanly without warnings. Unfortunately, chances are quite high that your effort will be instantly thwarted by the third-party libraries your code includes. Because even if your code is clean and shiny, the header files of a third-party library may not be. Especially with “-Weffc++” this can result in a massive amount of warning messages in code that you have no control over. Even with the otherwise powerful, easy-to-use and supposedly mature Qt library you run into that problem. Compiling code that includes Qt headers like <QComboBox> with “-Weffc++” switched on is just unbearable.

Leaving aside the fact that my confidence in Qt has declined considerably since I noticed this, the question remains what to do about the shortcomings of other people's code. With GCC you can, for example, add pragmas around offending includes as described here. Or you can create wrapper headers for third-party includes that contain


#pragma GCC system_header

AFAIK Microsoft’s compilers have similar pragmas:


#pragma warning(push, 1)
#include <some_thirdparty_header1.h>
#include <some_thirdparty_header2.h>
#pragma warning(pop)

Warning switches are a powerful tool to increase your code quality. You just have to use them!