Enable Capture/Replay for Selenium Flex

One of the missing features of the SeleniumFlexAPI was capture/replay. So I looked into different ways to enable it:

  • Approach 1: Dispatch a DOM event and listen for it in ide-extensions.js

    Problem: Where do I include the name/id of the Flex control?
  • Approach 2: Custom events

    Problem: How do I listen to them in ide-extensions.js?

Solution: Additions to the Selenium IDE code: window.record

The solution is to add a new method, window.record, to the Selenium IDE code, which delegates to recorder.record, so that the Flex code can call this method directly through the ExternalInterface. The clear advantage of this technique is that there is no code pollution in your production code, but you have to change the Selenium IDE code. There is an issue in the Selenium IDE Jira which describes the additions, so go and vote for it!
Additional code is also needed in SeleniumFlexAPI.as, in the applicationCompleteHandler:

private function applicationCompleteHandler(event:FlexEvent):void {
  ...
  registerListeners(appTreeParser.thisApp.parent);
  ...
}
private function registerListeners(subject:*):void {
  subject.addEventListener(MouseEvent.CLICK, recordClick);
  subject.addEventListener(MouseEvent.DOUBLE_CLICK, recordDoubleClick);
  subject.addEventListener(Event.ADDED, childAdded);
  addListenerRecursive(subject);
}

Bubbling events like MouseEvent.CLICK can be added here, but for the non-bubbling ones you have to recursively walk the display object hierarchy:

public function addListenerRecursive(root:*):void {
  for(var i:int = 0; i < root.numChildren; i++) {
    try {
      var child:Object = root.getChildAt(i);
      if (isMenuBar(child)) {
        child.removeEventListener(MenuEvent.ITEM_CLICK, recordMenuItemClick);
        child.addEventListener(MenuEvent.ITEM_CLICK, recordMenuItemClick);
      }
      if (isTextControl(child)) {
        child.removeEventListener(Event.CHANGE, recordTextChange);
        child.addEventListener(Event.CHANGE, recordTextChange);
      }
      if (isDataGrid(child)) {
        child.removeEventListener(DataGridEvent.ITEM_FOCUS_IN, recordItemClick);
        child.addEventListener(DataGridEvent.ITEM_FOCUS_IN, recordItemClick);
      }
      addListenerRecursive(child);
    } catch(e:Error) {}
  }
}

In the event handling functions you just call the record function with the appropriate SeleniumFlex command:

private function recordClick(event:MouseEvent):void {
  ExternalInterface.call("record", "flexClick", "name=" + event.target.name, "");
}

Since Flex Sprites have no IDs, I use the name here to identify the clicked target.
Another pitfall is components that are added dynamically (for example, when a date control opens, it adds a calendar view):

private function childAdded(event:Event):void {
  if (isDate(event.target.parent.parent)) {
    event.target.parent.parent.removeEventListener(CalendarLayoutChangeEvent.CHANGE, recordDate);
    event.target.parent.parent.addEventListener(CalendarLayoutChangeEvent.CHANGE, recordDate);
  }
}

Conclusion

So finally capture/replay in SeleniumFlex becomes a reality! Nonetheless, there is still some work to do to support the different kinds of Flex controls and Selenium commands.

Stacked smartness doesn’t add up

When software is composed of different layers, friction occurs. That's when features turn into bugs.

There is a strong urge to make software smart. Whenever something smart gets built in, it's called a feature. Features of a piece of software are effects you didn't foresee, but find handy for your use case. If your use case is impaired by a feature, you'll likely call it a bug.

Some features of various software

To make my point clear, I have to introduce two features of software that are very practical for their anticipated use case and then change their context by adding another layer:

Ant filesets

If you use Ant as a build script language, you'll find filesets very practical. If you want to modify, copy or delete a bunch of files, you specify a root directory and some similarities between the files (like matching file types or names) and you're done. Let me give you an example to show the use case:

<delete>
    <fileset dir="${basedir}" includes="**/build.xml,**/pom.xml"/>
</delete>

This will very likely delete all Ant build scripts and Maven project files in your project (so please use with care). Notice how the includes attribute is comma-separated for multiple patterns. According to the documentation, a space character can be used instead of the comma.
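
If you want to see this splitting behaviour from Java code, here is a small sketch using Ant's FileSet API (the class name is mine; ant.jar has to be on the classpath):

import java.io.File;
import org.apache.tools.ant.Project;
import org.apache.tools.ant.types.FileSet;

public class SpaceSeparatedPatterns {
    public static void main(String[] args) {
        Project project = new Project();
        project.init();
        FileSet fileSet = new FileSet();
        fileSet.setProject(project);
        fileSet.setDir(new File("."));
        // the space separates the two patterns just like a comma would
        fileSet.setIncludes("**/build.xml **/pom.xml");
        String[] files = fileSet.getDirectoryScanner(project).getIncludedFiles();
        for (String file : files) {
            System.out.println(file);
        }
    }
}

Running it in a project directory lists all build.xml and pom.xml files below it, exactly as the comma-separated variant would.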

Then, there is Hudson, a very powerful continuous integration server. One source of its power is the familiarity of configuration syntax, specifically when accessing a bunch of files:

(Screenshot: Hudson's file pattern text field)

The given text field specifies the includes attribute of an Ant fileset. You immediately inherit all the power of Ant's fileset, but its features, too. Here, it's a feature that two patterns can be separated by just a space character. If your path contains spaces, you cannot express your pattern in this text field. When using the fileset directly in Ant, you can alter the syntax and use multiple nested include tags, but within Hudson, you are stuck with the single includes attribute.

Struts2 internationalization

The second example is fully described in my previous blog entry (“The perils of \u0027”).

As a short summary: The Struts2 framework inherits the power of Java’s MessageFormat when loading language dependent text. As the apostrophe is a special character to MessageFormat, it cannot be used directly in the text entries.

The principle behind the examples

Both examples share a common principle: "Stacked smartness doesn't add up". What's a feature to one piece of software may be a bug to the software that builds on top of it.

Software developers tend to “stack up” different third party software products to compose their own product with even higher-level functionality. There is nothing wrong with this approach, as long as the context of the underlying products doesn’t change much. If it changes, features begin to behave like bugs.

The cost of stacking

Stacked products are likely to increase the ability of skilled users to re-use their knowledge. Every developer familiar with Ant is instantly empowered to use the file patterns of Hudson. Every developer familiar with MessageFormat can produce powerful i18n entries that do most of the formatting automatically. That's a great productivity gain.

But on the other hand, if you aren't familiar with Ant when using Hudson, or know nothing about MessageFormat when just translating the i18n entries of a Struts2 webapp, you'll be surprised by strange effects. And you won't find sufficient documentation of these effects in the first place. There will be a link to some obscure project or class you never heard about, telling you all sorts of details you don't want to hear right now. You can't easily put them into the right context anyway. You will be down to trial and error, frustrated that your use case seems impossible without explanation. That's a great productivity loss.

Often, you can’t blame any part of the stack, not even the topmost, for the occurring bugs. Whether a specific stack maintains and increases productivity depends on how the use case of the topmost layer compares to the anticipated use cases of the underlying ones. If those aren’t documented, it’s hard to notice the displacement.

A metaphor on software stacks

Whenever I hear about a software stack, a picture of a man on a stack of crates comes to my mind. Here is the original photo behind that mental image.

What’s your encounter with a shaky stacking?

The perils of \u0027

Adventures (read: pitfalls) of internationalization with Struts2, concerning the principle “stacked smartness doesn’t add up”.

Struts2 is a framework for web application development in Java. It’s considered mature and feature-rich and inherits the internationalization (i18n) capabilities of the Java platform. This is what you would expect. In fact, the i18n features of Struts2 are more powerful than the platform ones, but the power comes with a price.

Examples of the sunshine path

If you read a book like “Struts 2 in Action” written by Donald Brown and others, you’ll come across a chapter named “Understanding internationalization” (it’s chapter 11). You’ll get a great overview with a real-world example of what is possible (placeholder expansion, for example) and if you read a bit further, there is a word of warning:

“You might also want to further investigate the MessageFormat class of the Java platform. We saw its fundamentals in this chapter when we learned of the native Java support for parameterization of message texts and the autoformatting of date and numbers. As we indicated earlier, the MessageFormat class is much richer than we’ve had the time to demonstrate. We recommend consulting the Java documentation if you have further formatting needs as well. “

If you put this warning aside, you’re doomed. It’s not the fault of the book that its examples are the sunshine case (the best circumstances that might happen). The book tries to teach you the basics of Struts2, not its pitfalls.

A pitfall of Struts2 I18N

You will write a web application in Struts2, using the powerful built-in i18n, just to discover that some entries aren’t printed right. Let’s look at an example i18n entry:

impossible.action.message=You can't do this

If you include this entry in a webpage using Struts2 i18n tags, you’ll find the apostrophe (unicode character \u0027) missing:

You cant do this

What happened? You didn’t read all about MessageFormat. The apostrophe is a special character for the MessageFormat parser, indicating regions of non-interpreted text (Quoted Strings). As there is only one apostrophe in our example, it simply gets swallowed. If there were two of them, both would be removed and any placeholder expansion between them would be suppressed.
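
If you want to watch the parser at work outside of Struts2, here is a minimal Java sketch (class name and sample strings are my own) that reproduces the behaviour:

import java.text.MessageFormat;

public class ApostropheDemo {
    public static void main(String[] args) {
        // a single apostrophe opens a quoted region and is swallowed
        System.out.println(MessageFormat.format("You can't do this"));
        // prints: You cant do this

        // two apostrophes quote everything in between, so the placeholder
        // inside the quoted region is not expanded
        System.out.println(MessageFormat.format("Don't touch {0}'s files", "Bob"));
        // prints: Dont touch {0}s files

        // a doubled apostrophe is the escape sequence for a literal one
        System.out.println(MessageFormat.format("You can''t do this"));
        // prints: You can't do this
    }
}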

How to overcome the pitfall

You’ll need to escape the apostrophe to have it show up. Here’s the relevant paragraph of the MessageFormat API documentation:

Within a String, "''" represents a single quote. A QuotedString can contain arbitrary characters except single quotes; the surrounding single quotes are removed. An UnquotedString can contain arbitrary characters except single quotes and left curly brackets. Thus, a string that should result in the formatted message "'{0}'" can be written as "'''{'0}''" or "'''{0}'''".

That’s bad news. You have to tell your translators to double-type their apostrophes, or else they won’t show up. But only the ones represented by \u0027, not the specialized ones from the higher unicode regions like “grave accent” or “acute accent”. If you already have a large number of translations, you need to check every apostrophe to see whether it was meant to be printed or to control the MessageFormat parser.
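
If you don’t want to review them all by hand, a small throw-away checker can at least point you to the suspects. The following sketch (the file name and the line-based heuristic are my assumptions) prints every line of a resource bundle that contains an apostrophe which is not doubled:

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

public class ApostropheChecker {
    public static void main(String[] args) throws IOException {
        // assumed default file name; pass your own bundle as first argument
        String file = args.length > 0 ? args[0] : "messages.properties";
        BufferedReader reader = new BufferedReader(new FileReader(file));
        String line;
        int lineNumber = 0;
        while ((line = reader.readLine()) != null) {
            lineNumber++;
            for (int i = 0; i < line.length(); i++) {
                if (line.charAt(i) == '\'') {
                    if (i + 1 < line.length() && line.charAt(i + 1) == '\'') {
                        i++; // skip the second apostrophe of an escaped pair
                    } else {
                        System.out.println(file + ":" + lineNumber + ": " + line);
                        break;
                    }
                }
            }
        }
        reader.close();
    }
}

Every reported line is a candidate for either doubling the apostrophe or leaving it as a deliberate MessageFormat control character.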

The underlying principle

This unexpected behaviour of an otherwise powerful functionality is a common sign of a principle I call “stacked smartness doesn’t add up”. I will blog about the principle in the near future, so here’s just a short description: A powerful (smart) behaviour makes sense in the original use case, but when (re-)used in another layer of functionality, it becomes a burden, because strange side-effects need to be taken care of.

Hudson for C++/CMake/CppUnit Revised

A few months ago, in order to use Hudson as a CI server for your C++/CMake/CppUnit projects, you had to do quite a lot of shell scripting. By now the situation has improved considerably, as some very useful plugins have come into existence. To cover the situation described in my previous post you can now use a combination of the CMake plugin and the CppUnit plugin.

With these extensions Hudson gets more and more useful for C/C++ developers. Yet another new plugin that supports this trend is the CCCC Plugin which uses the CCCC tool to generate trend reports for various software metrics including cyclomatic complexity.

How to find the HTML Entity you look for

As a web developer, have you ever wondered how a special character has to be encoded as an HTML entity? There is a nice little tool available online that will answer your call for help.

(Screenshot: EntityLook searching for ‘b’)

What makes the tool really rock is its simplicity and great incremental search. Typing the letter ‘c’ will present you with entities for “cent”, “copyright”, the Greek “sigma” and mathematical entities like “superset”, because the basic shape of the resulting special character is also taken into account. Upon entering a ‘b’ you will get the German ß as one of the results. This kind of search is almost a “do what I mean” feature and very helpful if you do not know the exact substring or meaning of your special character.

There is a Firefox extension, and as a special goodie for our beloved Mac users there is even a dashboard widget available that works without an internet connection and is a bit more convenient to use than the web application.

Followup: Selenium Flex API with Firefox 3 and Selenium IDE 1.0 Beta 2 now working

Because of a bug in the Selenium Flex API, it didn’t work with Firefox 3 and the Selenium IDE 1.0 beta 2.
To fix this bug, add the following lines:

    if (flashObj.wrappedJSObject) {
        flashObj = flashObj.wrappedJSObject;
    }

to ide-extensions.js in Selenium.prototype.callFlexMethod after

var flashObj = selenium.browserbot.findElement(this.flashObjectLocator);

The state of functional testing in Flex

Functional testing of UIs is an important and often neglected way of ensuring quality and preventing regressions. The Flex world of functional testing seems to be at the very beginning. We evaluated some of the available tools using the following criteria:

  • OS independence
    the tool and the created test scripts should run on at least every platform the Flex SDK and the Flash platform are available on
  • Tool changes
    how much we need to change or adapt the tool to suit our needs
  • Code pollution
    how much the actual code needs to be polluted to support this testing tool
  • Capture/Replay
    the tool needs at least an option to capture and replay test scripts
  • Additional License Costs
    whether we need to pay additional license costs (besides the tool itself) for things like FlexBuilder Pro
                        OS ind.  Tool changes  Code pollution  Capture/Replay  Add. costs
Automation based tools     +          0              -               +              -
SeleniumFlex               +          0              +               0              +
FunFx                      -          0              +               -              0
Fluorida                   +          -              +               -              +

Automation based tools (like FlexMonkey, QTP and RIATest) use the Flex automation API and incur additional costs for FlexBuilder Pro ($700 per license). For custom components you have to add automation code to them (code pollution) and introduce them and their events in FlexMonkey’s event map (tool changes).

SeleniumFlex uses the JavaScript bridge (ExternalInterface) of the Flash Player and requires you to add custom components and events to this external interface, which resides in the tool/test code (therefore a 0 for tool changes). You can use the Selenium IDE plugin to spy out the IDs and to replay, but the capture option isn’t working yet (0 for capture/replay).

FunFx also uses the ExternalInterface and is written in Ruby, but runs only on Windows (- for OS independence) because it connects to the Flex application via Win32OLE. I found no capture/replay (-), and the website says you need FlexBuilder (I don’t know why, therefore a 0 for license costs; we use IntelliJ IDEA for Flex development).

Fluorida seems to be at the very beginning and there is very little documentation, so it looks like it needs some investment (- for tool changes). It has no capture/replay (-).

Conclusion

So our tool of choice is SeleniumFlex and we hope to get capture/replay working in the near future.
What experience have you had with functional testing in Flex? Which tool do you use?

Easy code inspection using QDox

Spend five minutes and inspect your code for that aspect you always wanted to know about, using the QDox project.

So, you’ve inspected your Java code in any possible way, using Findbugs, Checkstyle, PMD, Crap4J and many other tools. You know every number by heart and keep a sharp eye on its trend. But what about some simple questions you might ask yourself about your project, like:

  • How many instance variables aren’t final?
  • Are there any setXYZ()-methods without any parameter?
  • Which classes have more than one constructor?

None of these questions is of much relevance to the project as a whole, but its answer might be crucial in one specific situation.

Using QDox for throw-away tooling

QDox is a fine little project making steady progress in being a very intuitive Java code structure inspection API. It’s got a footprint of just one JAR (less than 200k) you need to add to your project and one class you need to remember as a starting point. Everything else can be learnt on the fly, using the code completion feature of your favorite IDE.

Let’s answer the first question of our list by printing out the names of all instance variables that aren’t final. I’m assuming you run this class in your project’s root directory.

import java.io.File;
import com.thoughtworks.qdox.JavaDocBuilder;
import com.thoughtworks.qdox.model.JavaClass;
import com.thoughtworks.qdox.model.JavaField;

public class NonFinalFinder {
    public static void main(String[] args) {
         File sourceFolder = new File(".");
         JavaDocBuilder parser = new JavaDocBuilder();
         parser.addSourceTree(sourceFolder);
         JavaClass[] javaClasses = parser.getClasses();
         for (JavaClass javaClass : javaClasses) {
             JavaField[] fields = javaClass.getFields();
             for (JavaField javaField : fields) {
                 if (!javaField.isFinal()) {
                     System.out.println("Field "
                       + javaField.getName()
                       + " of class "
                       + javaClass.getFullyQualifiedName()
                       + " is not final.");
                }
            }
        }
    }
}

The QDox parser is called JavaDocBuilder for historical reasons. It takes a directory through addSourceTree() and parses all the Java files it finds in there recursively. That’s all you need to program to gain access to your code structure.

In our example, we descend into the code hierarchy using the parser.getClasses() method. From the JavaClass objects, we retrieve their JavaFields and ask each one whether it is final, printing out its name if it isn’t.
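
The second question of the list above can be answered with the same pattern. Here is a rough sketch (again assuming it is run from the project’s root directory) that lists all setters without a parameter:

import java.io.File;
import com.thoughtworks.qdox.JavaDocBuilder;
import com.thoughtworks.qdox.model.JavaClass;
import com.thoughtworks.qdox.model.JavaMethod;

public class ParameterlessSetterFinder {
    public static void main(String[] args) {
        JavaDocBuilder parser = new JavaDocBuilder();
        parser.addSourceTree(new File("."));
        for (JavaClass javaClass : parser.getClasses()) {
            for (JavaMethod method : javaClass.getMethods()) {
                // a setter by naming convention, but without any parameter
                if (method.getName().startsWith("set")
                        && method.getParameters().length == 0) {
                    System.out.println("Method " + method.getName()
                        + " of class " + javaClass.getFullyQualifiedName()
                        + " takes no parameter.");
                }
            }
        }
    }
}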

Praising QDox

The code needed to answer our example question is seven lines in essence. Once you navigate through your code structure, the QDox API is self-explanatory. You only need to remember the first two lines of code to get started.

The QDox project had a long quiet period in the past while making the jump to the Java 5 language spec. Today, it’s a very active project published under the Apache 2.0 license. The developers add features nearly every day, making it a perfect choice for your next five-minute throw-away tool.

What’s your tool idea?

Tell me about the code-specific aspect you always wanted to know about. What would an implementation using QDox look like?

Structuring CppUnit Tests

How to structure CppUnit tests in non-trivial software systems so that they can easily be executed selectively during the code-compile-test cycle and at the same time are easy to execute as a whole by your continuous integration system.

While unit testing in Java is dominated by JUnit, C++ developers can choose between a variety of frameworks. See here for a comprehensive list. Here you can find a nice comparison of the biggest players in the game.

Being probably one of the oldest frameworks, CppUnit sure has some usability issues but is still widely used. It is criticised mostly because you have to do a lot of boilerplate typing to add new tests. In the following I will not repeat how tests can be written in CppUnit, as this is already described exhaustively (e.g. here or here). Instead I will concentrate on the task of how to structure CppUnit tests in bigger projects. “Bigger” in this case means at least a few architecturally independent parts which are compiled independently, i.e. into different libraries.

Having independently compiled parts in your project means that you want to compile their unit tests independently, too. The goal is then to structure the tests so that they can easily be executed selectively during development time by the programmer and at the same time are easy to execute as a whole during CI time (CI meaning Continuous Integration, of course).

As C++ has no reflection or other metaprogramming elements like Java’s annotations, things like automatic test discovery and how to add new tests become a whole topic of their own. See the CppUnit cookbook for how to do that with CppUnit. In my projects I only use the TestFactoryRegistry approach because it provides the most automation in this regard.

Let’s begin with the simplest setup, the Link-Time Trap (see example source code): test runner and result reporter are set up in the main function that is compiled into an executable. The actual unit tests are compiled into separate libraries and are all linked to the executable that contains the main function. While this solution works well for small projects, it does not scale. This is simply because every time you change something during the code-compile-test cycle, the unit test executable has to be relinked, which can take a considerable amount of time as the project gets bigger. You fall into the Link-Time Trap!

The solution I use in many projects is as follows: Like in the simple approach, there is one test main function which is compiled into a test executable. All unit tests are compiled into libraries according to their place in the system architecture. To avoid the Link-Time-Trap, they are not linked to the test executable but instead are automatically discovered and loaded during test execution.

1. Automatic Discovery

Applying a little convention-over-configuration, all testing libraries end with the suffix “_tests.so”. The testing main function can then simply walk over the directory tree of the project and find all shared libraries that contain unit test classes.

2. Loading

If a “_tests.so” library has been found, it simply gets loaded using dlopen (under Unix/Linux). When the library is loaded, the unit tests are automatically registered with the TestFactoryRegistry.

3. Execution

After all unit test libraries have been found and loaded, test execution is the same as in the simple approach above.

Here is my enhanced testmain.cpp (see example source code).

#include <cppunit/extensions/TestFactoryRegistry.h>
#include <cppunit/TestResult.h>
#include <cppunit/TestResultCollector.h>
#include <cppunit/ui/text/TestRunner.h>
#include <cppunit/XmlOutputter.h>
#include <cppunit/TextOutputter.h>
#include <boost/filesystem.hpp>
#include <dlfcn.h>
#include <cstdlib>
#include <fstream>
#include <iostream>
#include <string>

using namespace boost::filesystem; 
using namespace std; 

void loadPlugins(const std::string& rootPath) 
{
  directory_iterator end_itr; 
  for (directory_iterator itr(rootPath); itr != end_itr; ++itr) { 
    if (is_directory(*itr)) {
      string leaf = (*itr).leaf(); 
      if (leaf[0] != '.') { 
        loadPlugins((*itr).string()); 
      } 
      continue; 
    } 
    const string fileName = (*itr).string();
    if (fileName.find("_tests.so") == string::npos) { 
      continue;
    }
    void * handle = 
      dlopen (fileName.c_str(), RTLD_NOW | RTLD_GLOBAL); 
    cout << "Opening : " << fileName.c_str() << endl; 
    if (!handle) { 
      cout << "Error: " << dlerror() << endl; 
      exit (1); 
    } 
  } 
} 

int main ( int argc, char ** argv ) { 
  string rootPath = "./"; 
  if (argc > 1) { 
    rootPath = static_cast<const char*>(argv[1]); 
  } 
  cout << "Loading all test libs under " << rootPath << endl; 
  string runArg = std::string ( "All Tests" ); 
  // get registry 
  CppUnit::TestFactoryRegistry& registry = 
    CppUnit::TestFactoryRegistry::getRegistry();
  
  loadPlugins(rootPath); 
  // Create the event manager and test controller 
  CppUnit::TestResult controller; 

  // Add a listener that collects test result 
  CppUnit::TestResultCollector result; 
  controller.addListener ( &result ); 
  CppUnit::TextUi::TestRunner *runner = 
    new CppUnit::TextUi::TestRunner; 

  std::ofstream xmlout ( "testresultout.xml" ); 
  CppUnit::XmlOutputter xmlOutputter ( &result, xmlout ); 
  CppUnit::TextOutputter consoleOutputter ( &result, std::cout ); 

  runner->addTest ( registry.makeTest() ); 
  runner->run ( controller, runArg.c_str() ); 

  xmlOutputter.write(); 
  consoleOutputter.write(); 

  return result.wasSuccessful() ? 0 : 1; 
}

As you can see the loadPlugins function uses the Boost.Filesystem library to walk over the directory tree.

It also takes a rootPath argument which you can give as a parameter when you call the test main executable. This achieves the goal stated above: when you want to execute unit tests selectively during development, you can give the path of the corresponding testing library as a parameter, like so:

./testmain path/to/specific/testing/library

In your CI environment, on the other hand, you can execute all tests at once by giving the root path of the project, or the path where all testing libraries have been installed.

./testmain project/root

CMake Builder Plugin for Hudson

Update: Check out my post introducing the newest version of the plugin.

Today I’m pleased to announce the first version of the cmakebuilder plugin for Hudson. It can be used to build CMake-based projects without having to write a shell script (see my previous blog post). Using the scratch-my-own-itch approach, I started out implementing only those features that I needed for my own CMake projects, which are mostly Linux/g++ based so far.

Let’s do a quick walk through the configuration:

1. CMake Path:
If the cmake executable is not in your $PATH variable you can set its path in the global Hudson configuration page.

2. Build Configuration:

To use the cmake builder in your Free-style project, just add “CMake Build” to your build steps. The configuration is pretty straightforward. You just have to set some basic directories and the build type.

(Screenshot: cmakebuilder demo config)

The demo config above results in the following behavior (roughly equivalent shell commands):

if [ ! -d "$WORKSPACE/build_dir" ]; then
  mkdir "$WORKSPACE/build_dir"
fi

cd "$WORKSPACE/build_dir"
cmake "$WORKSPACE/src" -DCMAKE_BUILD_TYPE=Debug -DCMAKE_INSTALL_PREFIX="$WORKSPACE/install_dir"
make
make install

That’s it. Feedback is very much appreciated!!

Originally the plan was to have the plugin downloadable from the hudson plugins site by now but I still have some publishing problems to overcome. So if you are interested, make sure to check out the plugins site again in a few days. I will also post an update here as soon as the plugin can be downloaded.

Update: After fixing some maven settings I was finally able to publish the plugin. Check it out!