SSL with POCO

A short introduction to using SSL support in POCO C++ libraries.

Admittedly, the topic of this post is very specific, but I hope it will still be of some value for some people. The task for today is to set up an SSL server and client with POCO framework classes. I will leave out the whole issue of certificate management and just assume that the right files are at hand.

The SSL-related part of the POCO libraries essentially wraps the OpenSSL library in a nice object-oriented interface. If you know OpenSSL, you can instantly relate to classes like Poco::Net::Context, or the …Handler classes (if you replace “handler” with “callback”).

“SSL” stands for Secure Sockets Layer, so the first thing to discover is the class Poco::Net::SecureServerSocket. As you would expect, this class is derived from Poco::Net::ServerSocket, extending it only with SSL-related functionality. And sure enough, some constructors of Poco::Net::SecureServerSocket take a Poco::Net::Context pointer as argument.

But why only some constructors? Since there is no setContext method, there must be some other mechanism in place by which SecureServerSockets get their SSL context.

Introducing Poco::Net::SSLManager. From the API docs:

SSLManager is a singleton for holding the default server/client Context and handling callbacks for certificate verification errors and private key passphrases.

Proper initialization of SSLManager is critical.

Aha! So all the constructors of SecureServerSocket that do not take Context pointers simply get it from the SSLManager singleton.

But how to initialize SSLManager?

1. The POCO Way:

If you developed your application with POCO from the ground up, there probably already exists a subclass of Poco::Util::Application, and all the configuration is handled by the built-in configuration classes.

With this in place, all you have to do is add the proper SSL configuration elements:

openSSL.server.privateKeyFile = /path/to/key/file
openSSL.server.certificateFile = /path/to/certificate/file
openSSL.server.verificationMode = none
openSSL.server.verificationDepth = 9
openSSL.server.loadDefaultCAFile = false
openSSL.server.cipherList = ALL:!ADH:!LOW:!EXP:!MD5:@STRENGTH
openSSL.server.privateKeyPassphraseHandler.name = KeyFileHandler
openSSL.server.privateKeyPassphraseHandler.options.password = securePassword
openSSL.server.invalidCertificateHandler.name = AcceptCertificateHandler

2. Manually:

Depending on which side you are on – client or server – you have to call SSLManager::initializeClient or SSLManager::initializeServer. Both methods take three arguments:

  1. PrivateKeyPassphraseHandler pointer
  2. InvalidCertificateHandler pointer
  3. Context pointer

This is where it becomes a little bit tricky: if you try to instantiate a Context with a private key file in order to provide it as argument to the initialize… method, a PrivateKeyPassphraseHandler might be needed. This handler is fetched from the SSLManager singleton – which you are just about to initialize!

This circular dependency between Context and SSLManager can be overcome by calling SSLManager::initializeServer first with only a PrivateKeyPassphraseHandler, an InvalidCertificateHandler and a null Context pointer. Then instantiate the Context and call SSLManager::initializeServer a second time, now passing the Context.
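Here is a rough sketch of that two-step initialization (class names and constructor arguments as I recall them from the POCO 1.4-era API – check the headers of your version, especially the Context constructor and the pointer typedefs):

#include "Poco/SharedPtr.h"
#include "Poco/Net/SSLManager.h"
#include "Poco/Net/KeyFileHandler.h"
#include "Poco/Net/AcceptCertificateHandler.h"
#include "Poco/Net/Context.h"

using namespace Poco::Net;

void initializeServerSSL()
{
  // step 1: initialize with the handlers only, no Context yet
  // (KeyFileHandler reads the passphrase from the application configuration)
  Poco::SharedPtr<PrivateKeyPassphraseHandler> keyHandler =
    new KeyFileHandler(true);                   // true = server side
  Poco::SharedPtr<InvalidCertificateHandler> certHandler =
    new AcceptCertificateHandler(true);
  SSLManager::instance().initializeServer(keyHandler, certHandler, 0);

  // step 2: now the Context can be created; it may consult the
  // passphrase handler registered above for the private key password
  Context::Ptr context = new Context(
    Context::SERVER_USE,
    "/path/to/key/file",
    "/path/to/certificate/file",
    "",                                         // CA location
    Context::VERIFY_NONE,
    9,                                          // verification depth
    false,                                      // don't load default CAs
    "ALL:!ADH:!LOW:!EXP:!MD5:@STRENGTH");

  // second call, this time with the Context
  SSLManager::instance().initializeServer(keyHandler, certHandler, context);
}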

Now that the SSLManager is initialized, we can use the Secure… prefixed classes just as we would use their non-SSL counterparts. As with SecureServerSocket, the other Secure… classes are derived from their corresponding non-secure base classes.
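For example (a sketch, assuming default server and client Contexts have been registered with the SSLManager as described above):

// uses the default server Context held by SSLManager
Poco::Net::SecureServerSocket serverSocket(443);

// uses the default client Context held by SSLManager
Poco::Net::SocketAddress address("www.example.com", 443);
Poco::Net::SecureStreamSocket clientSocket(address);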

Conclusion: once you get past the initialization of the SSLManager singleton, using the SSL POCO classes is easy and straightforward. Check it out!

SSD? Don’t think! Just Buy!

SSDs make everything blazingly fast – even Grails + IDEA development

My personal experience with SSDs began with an Intel X25-M that I built into a Lenovo ThinkPad R61. It replaced a Seagate 160 GB 5400 rpm drive which, in combination with Windows Vista … well, let’s just say, it wasn’t that fast.

The SSD changed everything. It was not just faster, it was downright awesome! As if I had a completely new computer.

With that in mind I thought about my desktop PC. It’s a Windows XP box a little over two years old, with an Intel Core 2 Duo at 2.7 GHz, 4 GB RAM and a not-so-slow Samsung HDD. I use it mainly for programming, which most of the time means Grails programming under IntelliJ IDEA.

And let me tell you, the Grails + IDEA combination can get dog slow at times. The start-up time of IDEA alone gives you time to skim over the first three pages of Hacker News and read the latest XKCD.

So the plan was to put an extra SSD into the Windows box and put only programming related stuff on it. This would save me the potential hassle of moving my whole system but would still give me development speed-up.

I had to be a little careful because the default locations for IDEA’s so-called “system path” and “config path” are in the user’s home directory. (Btw, these settings can be changed in the file “idea.properties” which resides in “IDEA_INSTALLATION_DIR\bin”, e.g. C:\Program Files\JetBrains\IntelliJ IDEA 9.0.4\bin.)
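In my case that meant pointing the two relevant keys at the SSD (the paths below are only an illustration):

# idea.properties
idea.config.path=d:/idea-ssd/config
idea.system.path=d:/idea-ssd/system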

I think you already guessed the result. Three words: fast, faster, SSD. It’s just amazing! IDEA start-up is so fast now, I barely have time for a quick look at the newest headlines on InfoQ.

The next step is of course to put the whole system on SSD but that will probably have to wait until we upgrade the whole company to Win7. Can’t wait… 🙂

Active Object with POCO’s Active Methods

POCO’s ActiveMethods require minimal additional code to implement the Active Object design pattern

Active Object is a well known design pattern for synchronizing access to an object and/or resource. The basic idea is to separate method invocation from method execution which is done in a dedicated thread.

Instead of using the object’s interface directly, a client of an Active Object uses some kind of proxy which enqueues a so-called Method Request for later execution. The proxy returns immediately and hands the client some sort of callback, or variable, through which the client can receive the result. These intermediate result variables are also known as Futures.

As always, there are lots of ways to implement this pattern. For example, if you had an interface like this

class MyObject
{
  public:
    virtual int doStuff(const std::string& param) = 0;
    virtual std::string doSomeOtherThing(int i) = 0;
};

applying a straightforward implementation, you would first transform this into proxy and method request classes:

class MyObjectProxy
{
  public:
    MyObjectProxy(MyObject* theObject);
    // proxy methods
    Future<int> doStuff(const std::string& param);
    Future<std::string> doSomeOtherThing(int i);
  private:
    MyObject * _myObject;
};

class MethodRequest_DoStuff :
  public AbstractMethodRequest
{
  public:
    MethodRequest_DoStuff(const std::string& param);
    // all method request classes must implement execute()
    virtual void execute(MyObject* theObject);

  private:
    const std::string _param;
};

… and so on (for more details see this basic paper by Douglas C. Schmidt, or read the Concurrency Patterns chapter in POSA2).

It’s easy to see that this implementation produces a lot of boilerplate code. To solve this, you could either cook up some code generation, or look for some language support to reduce the amount of characters you have to type. In C++, some sort of template solution can be the way to go, or…

Introducing Active Methods

With class ActiveMethod together with support classes ActiveDispatcher and ActiveResult the POCO C++ libraries provide very simple and elegant building blocks for implementing  the Active Object pattern.

ActiveMethod:  this is the core piece. When called, an ActiveMethod executes in its own thread.

ActiveResult: this is what I referred to earlier as a Future. Instances of ActiveResult are used to pass the result of an ActiveMethod call back to the client.

ActiveDispatcher: if you only use ActiveMethods, every ActiveMethod thread can execute in parallel.  With ActiveDispatcher as base class, ActiveMethod calls are serialized, thus implementing real™ Active Object behaviour.

Here is my earlier example using ActiveMethods:

#include <string>

#include "Poco/ActiveMethod.h"
#include "Poco/ActiveResult.h"

using Poco::ActiveMethod;
using Poco::ActiveResult;

class MyObject
{
  public:
    // ActiveMethods are initialized in the ctor's
    // initializer list
    MyObject()
      : doStuff(this, &MyObject::doStuffImpl),
        doSomeOtherThing(this, &MyObject::doSomeOtherThingImpl)
    {}

    ActiveMethod<int, std::string, MyObject> doStuff;
    ActiveMethod<std::string, int, MyObject> doSomeOtherThing;
  private:
    int doStuffImpl(const std::string& param);
    std::string doSomeOtherThingImpl(int i);
};

This is used as follows:

MyObject myObject;
ActiveResult<std::string> result = myObject.doSomeOtherThing(42);
...
result.wait();
std::cout << result.data() << std::endl;
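If you need the strictly serialized execution mentioned above, the owner class additionally derives from ActiveDispatcher, and the ActiveMethods take ActiveStarter<ActiveDispatcher> as a fourth template argument. A sketch along the lines of the POCO documentation:

#include <string>

#include "Poco/ActiveDispatcher.h"
#include "Poco/ActiveMethod.h"

class MySerializedObject : public Poco::ActiveDispatcher
{
  public:
    MySerializedObject()
      : doStuff(this, &MySerializedObject::doStuffImpl)
    {}

    // calls are queued and executed strictly one after another
    Poco::ActiveMethod<int, std::string, MySerializedObject,
                       Poco::ActiveStarter<Poco::ActiveDispatcher> > doStuff;
  private:
    int doStuffImpl(const std::string& param);
};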

Either way, this solution requires a minimal amount of additional code to transform your lame and boring normal object into a full-fledged Active Object. The only downside is that ActiveMethods can currently take only one parameter. If you need more, you have to use tuples or parameter objects, as sketched below.
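For illustration, a hypothetical parameter object for doStuff might look like this (the struct and its fields are made up, not part of POCO):

// made-up parameter object bundling several arguments
struct DoStuffArgs
{
  std::string param;
  int repeatCount;
};

class MyObject
{
  public:
    MyObject() : doStuff(this, &MyObject::doStuffImpl) {}

    Poco::ActiveMethod<int, DoStuffArgs, MyObject> doStuff;
  private:
    int doStuffImpl(const DoStuffArgs& args);
};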

Have fun!

Shrink your dependency list with POCO

POCO is a nice set of C++ libraries which provides elegant solutions for day-to-day tasks.

When you write C++ applications of any sort, you are very likely to need support libraries in addition to what comes with C++ (which is not much, btw). Of course, this holds true for any other language, but with Java and its rich JDK, for example, the need is not as pressing.

Starting at the very beginning, let’s see how fast the need for support arises.

int main(int argc, char** argv)
{
// parsing command line arguments
...

How do you parse those command line arguments in a simple and easy way? How about a little help output when the program is called with -h or --help? OK, we’ve got boost::program_options for this.

Going further in your program, you may want some sort of logging capability. Unfortunately, as of Boost version 1.45 there is nothing to be found there. So you add a nice logging library.

And so on.

But wait! You don’t want to depend on too many 3rd party libraries because, among other things, they add deployment complexity.

Not even Qt, one of the major players in the C++ framework world, provides solutions to both of the previous examples. As of version 4.7, there is no logging and not much support for command line arguments. And you end up having to use QString, one of many non-std::string string classes in C++ frameworks, which can get annoying at times (of course there are reasons why those exist).

I could go on with the list of smaller or larger concerns for which you either roll your own implementation or include yet another library in your project.

Instead I would like to point you to POCO, a nice set of C++ libraries which provide easy solutions for many basic and/or advanced day-to-day tasks. From their website:

Modern, powerful open source C++ class libraries and frameworks for building network- and internet-based applications that run on desktop, server and embedded systems

Besides very basic stuff like logging, date/time handling, threads, memory management, UTF-8, etc. they also provide lots of higher level classes for things like SMTP, POP3, SQL database access and HTTP. They even have a so called C++ Server Page Compiler which is basically something like JSP or Active Server Pages.

And they have no string class of their own! Yay! Instead they provide lots of functions, classes and streams to do string manipulation on good old std::string.
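A few examples of what that looks like (function names taken from the Poco/String.h header; just a small sample):

#include <string>
#include "Poco/String.h"

std::string greeting("  Hello, POCO!  ");
std::string trimmed = Poco::trim(greeting);    // "Hello, POCO!"
std::string shouted = Poco::toUpper(trimmed);  // "HELLO, POCO!"
bool sameText =
  (Poco::icompare(trimmed, "hello, poco!") == 0);  // case-insensitive compare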

One thing I like most about POCO, though, is its clean, well-documented and apparently very high quality code. Although it is not overly functional or template-heavy, as you often see in Boost, it still provides elegant solutions.

Check it out and shrink your dependency list.

Bug hunting fun with std::sort

Small errors in custom comparison functions used with std::sort can lead to hard-to-find bugs.

The other day I came across a nice little C++ shoot-yourself-in-the-foot at one of our customers. Let’s see how fast you can spot the problem. The following code crashes with a segmentation fault sometime, somewhere inside the std::sort call.

#include <algorithm>
#include <cstdlib>
#include <iostream>
#include <vector>
#include <boost/shared_ptr.hpp>
#include <boost/bind.hpp>

using namespace std;
using namespace boost;

enum SORT_ORDER
{
  SORT_ORDER_ASCENDING,
  SORT_ORDER_DESCENDING
};

bool compareValues(const std::string& valueLeft,
                   const std::string& valueRight,
                   SORT_ORDER order)
{
  const bool compareResult = (valueLeft < valueRight);
  if (order == SORT_ORDER_DESCENDING) {
    return !compareResult;
  }
  return compareResult;
}

int main(int argc, char *argv[])
{
  std::vector<std::string> strValues(300);
  std::fill(strValues.begin(), strValues.end(),
            "Hallo");
  std::sort(strValues.begin(), strValues.end(),
            bind(compareValues, _1, _2, SORT_ORDER_DESCENDING));
  return EXIT_SUCCESS;
}

Any ideas? The tricky thing about this bug is that the stack trace output in the debugger gives absolutely no hint at all about its cause. And this is a simplified version of the real code, which has to sort boost::shared_ptrs instead of strings. Believe me, you don’t want to see that stack trace. Because of the use of boost::bind together with boost::shared_ptrs it looks, well, let’s say intimidating.

Still no idea?

I’ll give you a hint. If the SORT_ORDER is set to SORT_ORDER_ASCENDING everything is fine. …

OK, the problem is that the std::sort algorithm must be given a comparison function (object) that defines a strict weak ordering on the elements that are to be sorted. In other words, the comparison function object must implement the ‘<’ (less than) relationship on the elements.

Unfortunately, the negated comparison result breaks this ordering when SORT_ORDER_DESCENDING is given. The initial idea of this code was: well, if compareResult gets returned for ascending sort order, let’s just return its negation when the “negation” of ascending order is requested. This, of course, destroys the strict weak ordering requirement, because whenever valueLeft == valueRight the function now returns true, claiming that valueLeft precedes valueRight, whereas a strict weak ordering requires the comparison of equal elements to yield false. And this somehow wreaks havoc inside std::sort.

A better version of the function could be:

...
bool compareValues(const std::string& valueLeft,
                   const std::string& valueRight,
                   SORT_ORDER order)
{
  // solution: return false independent of sort order
  // whenever valueLeft == valueRight
  if (valueLeft == valueRight) {
    return false;
  }
  const bool compareResult = (valueLeft < valueRight);
  if (order == SORT_ORDER_DESCENDING) {
    return !compareResult;
  }
  return compareResult;
}
...
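An even simpler alternative (not in the original code, but worth noting): instead of negating the result, swap the operands for descending order. That yields a valid strict weak ordering without the extra equality check:

bool compareValues(const std::string& valueLeft,
                   const std::string& valueRight,
                   SORT_ORDER order)
{
  if (order == SORT_ORDER_DESCENDING) {
    // descending order is ascending order with swapped operands,
    // which is still a strict weak ordering
    return valueRight < valueLeft;
  }
  return valueLeft < valueRight;
}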

The really annoying thing about this whole issue is that std::sort just randomly crashes with a stack trace that shows nothing but some weird memory corruption going on. After the initial shock, this sends you down the completely wrong bug hunting road, where you start looking for spots where memory could be overwritten or the like.

So beware of custom comparison functions and function objects. They might look innocent and easy, but they can give you lots of headaches.

Developing Grails Apps – Some Dark Sides

Most of the time, developing Grails apps is a nice experience. But there are also dark sides, one of which is when bugs appear or do not appear depending on how you started your app.

Usually I try to avoid it, but this time a disclaimer is in order: this is not a Grails rant. Most of the time, developing Grails projects is fast and smooth, and using Grails brought many advantages for us. But there are also dark sides…

My main criticism is that Grails’ abstractions are more than leaky! Grails should be at the top of every list of examples for the term Leaky Abstraction. As soon as you leave the tutorial/scaffolding/helloworld level, you have to know a lot about the underlying stack. And with Hibernate and Spring underneath, none of the words small, easy or lightweight apply.

GORM, too, is only easy to use at first sight. The very informative blog series about GORM gotchas should absolutely become part of the user guide or the reference docs.

And there are those times when it gets really unpleasant, for example when a bug appears in your Grails application running in a servlet container (packaged in a .war) but does not appear when the application is started from within the IDE. Our latest specimen of this kind was a naming conflict in a .gsp file. The controller handed a model like this to the .gsp:

...
return [fieldValue: 'THE_VALUE', ...]

The model entry ‘fieldValue’ was used in the .gsp to set the value of a combo box. Unfortunately, ‘fieldValue’ is also the name of a built-in Grails tag.

Admittedly, ‘fieldValue’ was not the wisest choice of names and I would certainly expect to get scolded loudly by Grails for that – ideally with a nice descriptive exception. But what happened instead led to a loud scolding of Grails from us. And to some big question marks: what is the difference between executing Grails from the IDE and within a servlet container with respect to name resolution? Why is there a difference at all?

We had a hard time figuring out this one, not least because the error message was not very telling. And since this was not the first of those works-in-the-IDE-but-not-in-a-real-environment bugs there is always this slightly uneasy feeling…

As I said in the beginning, most of the time developing Grails applications is nice and shiny. I would not support their slogan, though. My personal search for the best web development tool is definitely not over.

How about your search?

Responsive Qt GUIs – Threading with Qt

Qt4 used to have only primitive threading support. Starting with version 4.4, new classes and functions make your threading life a lot easier. So in case you haven’t come around to looking at those features, do it now – it’s worth it!

If you have used Qt4 for some time now, specifically since pre-4.4 versions, you may or may not be aware of the latest developments in the threading part of the library. This post shall be a reminder in case you didn’t follow the versions in detail or just didn’t get around to looking closer and/or updating.

In pre-4.4 versions, the only way to do threading was to use the class QThread: subclass QThread, implement the run method, and there you had your thread. This looks fine at first but, taking the title of this post as an example, it can get annoying very fast. Sometimes you have just a few lines of code you want to keep away from the GUI thread because, for example, they could potentially block on some communication socket. Subclassing QThread for every little work package is not something you want to do, so I guess many users just wrote their own thread pool or the like.

Starting with version 4.4, Qt gained two major threading features which, IMHO, the Qt people do not do a very good job of advertising. The first is QThreadPool together with QRunnable. All Java programmers, who have had java.lang.Runnable since the beginning, may have their laugh now; I’ll wait…

The second new threading feature is the QtConcurrent namespace (from the Qt documentation):

The QtConcurrent namespace provides high-level APIs that make it possible to write multi-threaded programs without using low-level threading primitives such as mutexes, read-write locks, wait conditions, or semaphores

Sounds great! What else?

QtConcurrent includes functional programming style APIs for parallel list processing, including a MapReduce and FilterReduce implementation for shared-memory (non-distributed) systems, and classes for managing asynchronous computations in GUI applications.

This is really great stuff. Functions like QtConcurrent::run together with QFuture<T> and QFutureWatcher<T> can simplify your threading code significantly. So if you haven’t gotten around to looking at those new classes by now, I can only advise you to do so immediately. Allocate a refactoring slot in your next sprint and replace all those old QThread subclasses with shiny new QRunnables or QtConcurrent functions. It’s worth it!
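A rough sketch of what that can look like (fetchData, onDataFetched and MyWidget are made-up names; QtConcurrent::run and QFutureWatcher are the Qt 4.4+ classes mentioned above):

#include <QtConcurrentRun>
#include <QFuture>
#include <QFutureWatcher>

// some potentially blocking work that must stay off the GUI thread
QString fetchData(const QString& url);

void MyWidget::startFetch()
{
  // run fetchData in a thread taken from the global thread pool
  QFuture<QString> future =
    QtConcurrent::run(fetchData, QString("http://example.com"));

  // get notified in the GUI thread when the result is ready
  QFutureWatcher<QString>* watcher = new QFutureWatcher<QString>(this);
  connect(watcher, SIGNAL(finished()), this, SLOT(onDataFetched()));
  watcher->setFuture(future);
}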

Let’s get back to the responsive GUIs example. In his Qt Quarterly article, Witold Wysota describes in detail every technical possibility to keep your GUI responsive. It’s a very good article which provides a lot of insights. He starts with manual event processing and mentions the QtConcurrent features only at the very end of the article. I suggest the following order of threading-solutions-to-consider:

  1. QtConcurrent functions
  2. QThreadPool + QRunnable
  3. rest

Stay responsive!

Non-trivial Custom Data in QActions

If you want to implement dynamic context menus with non-trivial custom data in your QActions, the Qt4 documentation is not very helpful. This article describes some solutions to this task.

Sometimes I get very frustrated with the online Qt4 documentation. Sure, the API docs are massive but for many parts they provide only very basic information. Unfortunately, many Qt books, too, often stop exactly at the point where it gets interesting.

One example of this are context menus. The API docs just show you how menus in general are created and how they are connected to the application: basically, all menus are instances of QMenu which are filled with instances of QAction. QActions are used as representations of any kind of action that can be triggered from the GUI.

The standard method to connect QActions to the GUI controlling code is to use one of their signals, e.g. triggered(). This signal can be connected to a slot of your own class where you can then execute the corresponding action. This works fine as long as you have a limited set of actions that you all know at coding time. For example, a menu in your tool bar which contains actions Undo/Redo/Cut/Copy/Paste can be created very easily.

But there are use cases where you do not know in advance how many actions there will be in your menus. For example, in an application that provides a GUI for composing a complex data structure, you may want to offer the user assisting context menus for adding new data parts, depending on which parts already exist. Suddenly, you have to connect many actions to one slot and then somehow have to know which QAction the user actually clicked.

Btw, let’s all recall the Command Pattern for a moment… ok, now on to some solutions.

Method 0 – QAction::setData: The QAction class provides the method setData(), which can be used to store custom data in a QAction instance using QVariant as data wrapper. If you then use QMenu’s triggered(QAction*) signal, which gives you a pointer to the QAction that was clicked, you can extract your data from the QAction. I find this a little bit ugly since you have to wrap your data into a QVariant, which can get messy if you want to provide more than one data element.
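A small sketch of this approach (menu, partId, MyWidget and onMenuTriggered are made-up names):

// building the menu
QAction* action = menu->addAction("Add part");
action->setData(QVariant(partId));        // wrap the custom data

// one slot handles the whole menu
connect(menu, SIGNAL(triggered(QAction*)),
        this, SLOT(onMenuTriggered(QAction*)));

// ... and unwraps the data again
void MyWidget::onMenuTriggered(QAction* action)
{
  const int partId = action->data().toInt();
  // ...
}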

Method 1 – Enhancing QAction::triggered(): By subclassing QAction you can provide your own triggered() signal, which you can enhance with all the parameters you need in your slot.

class MyAction : public QAction
{
  Q_OBJECT
  public:
    // QAction has no default ctor, so a parent is passed to the base class
    MyAction(const QString& someActionInfo, QObject* parent)
      : QAction(parent),
        someActionInfo_(someActionInfo)
    {
      connect(this, SIGNAL(triggered()),
              this, SLOT(onTriggered()));
    }
  signals:
    void triggered(QString someActionInfo);
  private slots:
    void onTriggered() {
      emit triggered(someActionInfo_);
    }
  private:
    QString someActionInfo_;
};

This is nice and easy but limited to what data types can be transported via signal/slot parameters.

Method 2 – QSignalMapper: From the Qt4 docs on QSignalMapper:

This class collects a set of parameterless signals, and re-emits them with integer, string or widget parameters corresponding to the object that sent the signal.

… which is basically the same as we did in method 1.

Method 3 – Separate domain-specific action classes: At the time the context menu is created, you add QActions to the menu using QMenu’s addAction methods. Then you create instances of separate Command-like classes (as in the Command pattern) and connect them to the QActions’ triggered() signal:

// Command-like custom action class. No GUI related stuff here!
class MySpecialAction : public QObject
{
  Q_OBJECT
  public:
    MySpecialAction(QObject* parent, <all necessary parameters to execute>);

  public slots:
    void execute();
  ...
};

// create context menu
QAction* specialAction =
  menu->addAction("Special Action Nr. 1");
MySpecialAction* mySpecialAction =
  new MySpecialAction(specialAction, ...);
connect(specialAction, SIGNAL(triggered()),
        mySpecialAction, SLOT(execute()));

As you can see, the QAction specialAction is the parent of mySpecialAction, thereby taking ownership of it. This is my preferred approach because it is the most flexible in terms of what custom data can be stored in the command. Furthermore, the part that contains the execution code – MySpecialAction – has nothing at all to do with GUI stuff and can easily be used in other parts of the system, e.g. non-GUI system interfaces.

How have you solved this problem?

The C++ Shoot-yourself-in-the-Foot of the Week

I think we can all agree that C++, compared to other languages, provides quite a number of possibilities to screw up. Everybody working with the language at some point probably had problems with e.g. its exception system, operator overloading or automatic type conversions – to name just a few of the darker corners.

There are also many mitigation strategies, which all come down to banning certain aspects of the language and/or defining strict coding conventions. If you follow Google’s style guide, for example, you cannot use exceptions and you are restricted to a very small set of Boost libs.

But developers – being human – often find creative and surprising ways to thwart every good intention. In an external project the following conventions are in place:

  • Use const wherever you possibly can
  • Use boost::shared_ptr wherever it makes sense.
  • Define typedefs to shared_ptrs in order to make code more readable.
  • typedefs to shared_ptrs are to be defined like this:
    typedef boost::shared_ptr<MySuperDuperClass> MySuperDuperClassPtr;
  • typedefs to shared const pointers are to be defined like this:
    typedef boost::shared_ptr<const MySuperDuperClass> MySuperDuperClassCPtr;

As you can see, postfixes Ptr and CPtr are the markers for normal shared_ptrs and constant shared_ptrs.

Last week, a compile error about some const to non-const conversion nearly made me pull my hair out. The types of the variables involved all had the CPtr postfix, but still the code didn’t compile. After a little digging I found that one of the typedefs involved looked like this:

typedef boost::shared_ptr<  MySuperDuperClass> MySuperDuperClassCPtr;

Somebody had just deleted the const modifier in front of MySuperDuperClass but left the CPtr name untouched. And because non-const to const conversions are allowed, this was not detected for a couple of weeks. Nice going!

Any suggestions for a decent style checker for C++? Thanks in advance 😉

Improved Version of CMake Builder for Hudson

Introducing version 1.5 of cmake builder plugin for Hudson.

Today I just want to give a small round-up of the improvements made to the cmake builder plugin since my last blog post. Back then, version 1.2 was released to support master/slave configurations. As of yesterday, we are at version 1.5, which contains the following improvements and bug fixes:

  • Bug: The drop-down box for selecting the build type didn’t remember its value. This was fixed with a patch by Atte Timonen.
  • Improvement: Also included in Atte’s patch was the propagation of environment variables to the cmake command, which now allows parameterized builds. A big thanks to Atte!
  • Improvement: The install command is only executed when both an install directory and an install command are given. Before, the build either broke or $WORKSPACE was automatically used as install directory. Thanks to Dat Chu for his feedback.
  • Improvement: The one-line ‘Other CMake Arguments’ field can get full pretty quickly, so it was changed to a multi-line text-area.

Thanks again for the feedback, and have fun with the new version!