Grails Gems: Command Objects

A series about the (little) gems found in Grails which can help many projects out there.

Besides domain objects, command objects are another way to get validation and data binding of parameters. But why (or when) should you use them?
First, when you do not want to persist the data, for example when validating the parameters of a search query.
Second, when you only want a subset of the parameters which has no corresponding domain object, for example to keep malicious data away from your domain objects.
Third, when you get a delta of the new data: when you just want to add to a list and do not want to check whether you get a single value or multiple values for a parameter.

Usage

Usually you put the class of the command in the same file as the controller you use it in. The command object is declared as a parameter of the action closure. You can even use multiple ones:

class MyController {
  def action = { MyCommand myCommand, YourCommand yourCommand ->
    ...
  }
}

Grails automatically binds the request parameters to the commands you supply and validates them. Then you can just call command.hasErrors() to check whether validation failed.

Separate your code domains

You can improve your code reusability by separating the technical domain code from the business domain code. This article tries to explain how to start.

When you develop software, you most likely have to think in two target domains at the same time. One domain will be the world of your stakeholder. He might talk about business rules and business processes and business everything, so let's call it the business domain. The other domain is the world you own exclusively with your colleagues; it's the world of computers, programming languages and coding standards. Let's call it the technical domain. It's the world where your stakeholders will never follow you.

Mixing the domains

Whenever you create source code, you probably try to solve problems in the business domain with the means of your technical domain – e.g. the programming language you’ve chosen on the hardware platform you anticipate the software to run on. Inevitably, you’ll mix parts of the business domain with parts of the technical domain. The main question is – will it blend? Most of the time, the answer is yes. Like milk in coffee, the parts of two domains will blend into an inseparable mixture. Which isn’t necessarily a bad thing – your solution works just fine.

The hard part comes when you want to reuse your code. It’s like reusing the milk in your coffee, but without the coffee. You’ve probably done it, too (reusing domain-blended code, not extracting the milk from your coffee) and it wasn’t the easy “just copy it over here and everything’s fine” reusability you’ve dreamt of.

Separating the domains

One solution for this task begins by realizing which code belongs to which domain. There isn’t a clear set of rules that you can just check and be sure, but we’ve found two rules of thumb helpful for this decision:

  • If you have a strong business domain data type model in your code (that is, you've modelled many classes to directly represent concepts and items from your stakeholder's world), you can look at a line of code and scan for words from the business domain. If there aren't any, chances are that you've found a line belonging to the technical domain. If you prefer to model your data structures with lists and hashmaps containing strings and integers, you're mostly out of luck here. Hopefully, you've chosen explicit names for your variables, so you don't end up with a line stating map.get(key) when in fact you're looking up orders.getFor(orderNumber) (see the small sketch after this list).
  • For every line of code, you can ask yourself "do I want to write it or do I have to write it?". This question assumes that you really want to solve the problems of the business domain. Every line of code you only have to write because otherwise the compiler, the QA department, your colleagues or, ultimately, your coder idol of choice would be disappointed is a line from the technical domain. Every line of code that would only disappoint your stakeholder if it were missing is a line from the business domain. Most likely, everything that your business-driven tests assert is code from the business domain.
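
Here is a tiny, made-up illustration of the first rule of thumb: the technical data structure stays, but it is hidden behind names from the business domain. The classes and methods are invented for this example.

import java.util.HashMap;
import java.util.Map;

class Order {
    // details omitted for this example
}

class Orders {

    private final Map<String, Order> byNumber = new HashMap<String, Order>();

    void add(String orderNumber, Order order) {
        byNumber.put(orderNumber, order);
    }

    // At the call site this reads as business domain code: orders.getFor(orderNumber)
    Order getFor(String orderNumber) {
        return byNumber.get(orderNumber);
    }
}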

Once you have categorized your lines of code into their associated domain, you can increase the reusability of your code by separating these lines of code. Effectively, you try to avoid the blending of the parts, much like in a good latte macchiato. If you achieve a clear separation of the different code parts, chances are that you have come a long way towards the anticipated "copy and paste" reusability.

Example one: Local separation

Well, all theory is nice and shiny, but what about the real (coding) life? Here are two examples that show the mechanics of the separation process.

In the first example, we’re given a compressed zip file archive as an InputStream. Our task is to write the archive entries to disk, given that certain rules apply:

public void extractEntriesFrom(InputStream in) {
    ZipInputStream zipStream = new ZipInputStream(in);
    try {
         ZipEntry entry = null;
         while ((entry = zipStream.getNextEntry()) != null) {
             if (rulesApplyFor(entry)) {
                 File newFile = new File(entry.getName());
                 writeEntry(zipStream,
                      getOutputStream(basePath(), newFile));
             }
             zipStream.closeEntry();
         }
    } catch (IOException e) {
        e.printStackTrace();
    } finally {
        IOHandler.close(zipStream);
    }
}

This is fairly common code, nothing to be proud of (we can argue that the method signature isn't as explicit as it should be, the exceptions are poorly handled, etc.), but that's not the point of this example. Try to focus your attention on the domain of each code line. Is it from the business or the technical domain? Let me refactor the example to a form where the code from both domains is separated, without touching the other flaws of the code:

public void extractEntriesFrom(InputStream in) {
    ZipInputStream zipStream = new ZipInputStream(in);
    try {
         ZipEntry entry = null;
         while ((entry = zipStream.getNextEntry()) != null) {
             handleEntry(entry, zipStream);
             zipStream.closeEntry();
         }
    } catch (IOException e) {
        e.printStackTrace();
    } finally {
        IOHandler.close(zipStream);
    }
}

protected void handleEntry(ZipEntry entry,
        ZipInputStream zipStream) throws IOException {
    if (rulesApplyFor(entry)) {
        File newFile = new File(entry.getName());
        writeEntry(zipStream,
            getOutputStream(basePath(), newFile));
    }
}

In this version of the same code, the method extractEntriesFrom(…) doesn’t know anything about rules or how to write an entry to the disk. Everything that’s left in the method is part of the technical domain – code you have to write in order to perform something useful within the business domain. The new method handleEntry(…) is nearly free of technical domain stuff. Every line in this method depends on the specific use case, given by your business domain.

Example two: Full separation

Technically, the first example only consisted of a simple refactoring (Extract Method). But by separating the code domains, we've taken the first step of a journey towards code reusability. It begins with a simple refactoring and ends with separated classes in separated packages from two separated project parts, named something like "application" and "framework". Even if you only find a class named "Tools" or "Utils" in your project, you've already taken intermediate steps towards the goal: separating your technical domain code from your business domain code in order to reuse the former (because no two businesses are alike).

The next example shows a full separation in action:

WriteTo.file(target).using(new Writing() {
    @Override
    public void writeTo(PrintWriter writer) {
        writer.println("Hello world!");
        writer.println("Hello second line.");
        // more business domain code here
    }
});

Everything other than the first line (and the necessary Java boilerplate) is business domain code. In the first line, only the specified target file isn't technical. Everything related to opening the file output stream, handling exceptions, closing all resources and all the other fancy stuff you want to do when writing to a file is encapsulated in the WriteTo class. The equivalent to the handleEntry(…) method from the first example is the writeTo(…) method of the Writing interface. Everything within this method is purely business domain related code. The best thing is: you can nearly forget about the technical domain when filling out the method, as it is embedded in a reusable "code clamp" providing the proper context.
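
The WriteTo class itself isn't shown above. As a rough sketch of how such a reusable "code clamp" could be implemented (the details here are assumptions, not the original implementation):

import java.io.File;
import java.io.FileWriter;
import java.io.IOException;
import java.io.PrintWriter;

interface Writing {
    void writeTo(PrintWriter writer);
}

class WriteTo {

    private final File target;

    private WriteTo(File target) {
        this.target = target;
    }

    static WriteTo file(File target) {
        return new WriteTo(target);
    }

    void using(Writing writing) {
        // Purely technical domain: open the writer, delegate, close and translate exceptions.
        try (PrintWriter writer = new PrintWriter(new FileWriter(target))) {
            writing.writeTo(writer);
        } catch (IOException e) {
            throw new RuntimeException("Could not write to " + target, e);
        }
    }
}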

Conclusion

If you want to write reusable code, consider separating your two major code domains: the technical domain and the business domain. The first step is to be aware of the domains and to distinguish between them. Your separation process can then start with simple extractions and finally lead to a purely technical framework where you "only" have to fill in the business domain code. By the way, it's a variation of the classic "separation of concerns" principle, if you want to read more.

A shot at definitions beyond “unit test”

When doing research on which kinds of programmatic tests different developers and companies utilize and how they handle them, I realized that there is no common definition of terms and concepts. While most sources agree on what is and what is not a unit test, there are various contradictory definitions of what a test is, if it is not a unit test. In this blog post I’d like to present a brief overview of the definitions we are currently using. Since we steadily try to enhance and refine our development process and tools, the terms and concepts presented here are almost certain to change in the future.

Please note that this post is not intended to fully describe all the details of the different test approaches, but rather to give an idea and first impression on how we distinguish them.

Unit Tests

The most basic kind of programmatic test, the unit test, is likely to be the most commonly used kind of test. Unit tests help to determine that a small piece of code, e.g. a single method or class, behaves as intended by its developer. If properly applied, unit tests provide a solid foundation to build an application upon. Figure 1 schematically depicts the scope of a unit test in an exemplary software system.

Depending on the complexity of the tested system, techniques like mocking of dependencies may be required. System resources in particular need to be replaced by mocks, since unit tests need to be completely independent of them (Michael Feathers describes this and some other requirements of unit tests in his blog post "A Set of Unit Testing Rules"). Furthermore, unit tests are not meant to be long-running, but instead have to execute within a split second.

Figure 1: Schematic view of a unit test of a component in an exemplary system
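
To make this a bit more concrete, here is a small illustrative unit test in the spirit described above. The class names are invented for this example, and a hand-rolled stub stands in for a real mocking library.

import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class GreetingServiceTest {

    // The collaborator that would normally hit a database or another system resource.
    interface UserRepository {
        String findNameById(int id);
    }

    // The small piece of code under test.
    static class GreetingService {
        private final UserRepository users;

        GreetingService(UserRepository users) {
            this.users = users;
        }

        String greetingFor(int id) {
            return "Hello, " + users.findNameById(id) + "!";
        }
    }

    @Test
    public void greetsUserByName() {
        // A hand-rolled stub keeps the test independent of any real resource.
        UserRepository stub = id -> "Alice";

        assertEquals("Hello, Alice!", new GreetingService(stub).greetingFor(42));
    }
}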

Integration Tests

Integration tests are a more sophisticated approach to testing: they challenge a part or sub-system of an application made up of several units in order to determine whether these units cooperate properly. In contrast to unit tests, integration tests may include system resources and may also determine the test's outcome by checking the state of these resources. This larger scope, and the fact that the tested functionality is typically made up of several actions, leads to integration tests taking many times as long as unit tests. Figure 2 schematically illustrates an integration test's view on an exemplary system.

Figure 2: Schematic of an integration test in an exemplary system
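
For illustration, here is a deliberately small integration-style test that focuses on the system resource aspect: the component under test cooperates with the real file system, and the test's outcome is determined by checking the resulting state on disk. The names are invented for this example.

import static org.junit.Assert.assertEquals;

import java.io.File;
import java.io.FileWriter;
import java.io.IOException;
import java.nio.file.Files;

import org.junit.Test;

public class ReportExportIntegrationTest {

    // A small component that cooperates with the file system, a real system resource.
    static class ReportExporter {
        void export(String content, File target) throws IOException {
            try (FileWriter writer = new FileWriter(target)) {
                writer.write(content);
            }
        }
    }

    @Test
    public void writesReportToDisk() throws IOException {
        File target = File.createTempFile("report", ".txt");
        target.deleteOnExit();

        new ReportExporter().export("42 items sold", target);

        // The outcome is determined by checking the state of the resource itself.
        assertEquals("42 items sold", new String(Files.readAllBytes(target.toPath())));
    }
}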

Acceptance Tests

By far the most involved technique to test the behavior of an application is the use of acceptance tests. While the other approaches challenge only parts of an application, acceptance tests are meant to challenge the application as a whole from a user's point of view. This includes using system resources as well as controlling the application and verifying its proper function as a user would: through its (G)UI and without knowing anything about the internals of the software.

Figure 3: Schematic of an acceptance test in an exemplary system

Conclusion

While some developers only distinguish between unit tests and other tests, defining the latter more clearly proved very useful when creating and using them and when explaining them to other developers and customers. Yet, these definitions are not carved in stone and certainly need to be refined over time. Thus, I would like to hear your opinion on these definitions. Do you agree, or do you have a completely different way of distinguishing between test approaches? How many kinds of tests do you distinguish? And why do you do so?

Prepare for the unexpected

In most larger projects there are many details which cannot be foreseen by the development team. Specifications turn out to be wrong, incomplete or not precise enough for your implementation to work without further adjustments. New features have to work with production data that may not be available in your development or testing environment.

The result I have often observed is that everything works fine in your environment, including great automated tests, but nevertheless fails when deployed to production systems. Sometimes minor differences in the operating system version or configuration, the locale for example, cause your software to fail. Another common problem is real production data containing unexpected characters, inconsistencies (sometimes due to bugs), or simply its sheer size.

What can we do to better prepare for unexpected issues after deployment?

The key is to expect such issues and to implement certain countermeasures to cope with them better. This may conflict with the KISS principle, but it is usually worth a bit of added complexity. I want to share some advice which has proved useful for us in the past and may help you in the future too:

  1. Provide good, detailed and persistent debug output for certain features: Once we added a complex rule system which operated on existing domain objects. Checking every possible combination of domain object states would have been a ton of work, so we wrote tests for the common cases and the difficult cases we could think of. Since the correctness of the functionality was not critical, we decided to display slightly incorrect information rather than failing and thus breaking the feature for the user. We did, however, provide extensive and detailed logs whenever our rule system detected a problem.
  2. Make certain parts of your communication interface to third-party systems configurable: Often your system communicates with different kinds of users and other systems. Common examples are import/export functionality, web service APIs or text protocols. Even if details like date and number formats, data separators, line endings, character encoding and so forth are specified most of the time, it often proves valuable to make them configurable (see the small sketch after this list). Many times the specification changes or is incorrect, some communication partner implements the protocol slightly differently, or a format deviates from your assumptions, breaking your application. It is great if you can change that with a smile in front of your client and make the whole thing work in minutes instead of walking home frustrated to fix the issues.
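
As a small sketch of the second point, a date format used for an export could be read from a configuration file, with the format from the specification as the default, instead of being hard-coded. The file name and property key are invented for this example.

import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.Properties;

public class ExportDateFormat {

    static String loadPattern() throws IOException {
        Properties settings = new Properties();
        File configFile = new File("export.properties");
        if (configFile.exists()) {
            try (FileInputStream in = new FileInputStream(configFile)) {
                settings.load(in);
            }
        }
        // The format from the specification stays the default; deployments can override it.
        return settings.getProperty("export.dateFormat", "yyyy-MM-dd");
    }

    public static void main(String[] args) throws IOException {
        System.out.println(new SimpleDateFormat(loadPattern()).format(new Date()));
    }
}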

The above does not mean building applications with ultimate flexibility and configurability, or ignoring automated tests and realistic test environments. It just means that there are typical aspects of an application where you can prepare for otherwise unexpected deviations of theory from practice.

A VisualBasic.NET cheat sheet for Java developers

If you want to learn VisualBasic.NET coming from a Java perspective, we’ve prepared a little cheat sheet to ease the transition.

Sometimes, we cannot choose what language to implement a project in. Be it because of environmental restrictions (everything else is programmed in language X) or just because there’s an existing code base that needs to be extended and improved. This is when our polyglot programming mindset will be challenged. In a recent project, we picked up the current incarnation of VisualBasic, a language most of us willfully forgot after brief exposure in the late nineties, more than 10 years ago.

Spaceward Ho!

So we ventured into the land of VisualEverything, installing VisualStudio (without ReSharper at first) and finding out about the changes in VisualBasic.NET compared to VisualBasic 6, the language version we used back in the day. Being heavily trained in Java and "javaesque" languages, we were pleasantly surprised to find a modern, object-oriented language with a state-of-the-art platform SDK (the .NET framework) and only a few reminiscences of the old days. Microsoft did a great job in modernizing the language, perhaps cutting out a bit too much language-specific stuff. VisualBasic.NET feels like C# with an uninspired syntax.

Making the transition

To ease our exploration of the language features of VisualBasic.NET, one of our student workers made a comparison table between Java and VisualBasic.NET. This cheat sheet helped us tremendously to wrap our heads around the syntax and the language. The platform SDK is very similar to the Java API, as you can see in the corresponding sections of the table. And because it helped us, it might also help you to gain a quick overview of VisualBasic.NET when you are coming from Java.

I have to thank Frederik Zipp a lot for his work. My only contribution to this cheat sheet is the translation from German to English. I can only try to imagine his effort of putting everything together. And while you might read the whole comparison in about 21 minutes (as stated in the title), it's worth several hours of searching.

The downloads

And without much further ado, here are the download links for the HTML and PDF versions of the “Java vs. VisualBasic.NET cheat sheet”:

You may use and modify the documents as you see fit. If you redistribute them, please adhere to the Creative Commons Attribution-ShareAlike license. Thank you.

Grails: Beware of the second level cache

Know your caches!

Recently we were hunting a strange bug. Take the following domain model:

class Computer {
  Coder coder
}

class Coder {
  static hasMany = [projects:Project]
}

Querying the computer and iterating over the respective coder and projects sometimes resulted in a strange number of projects: 1. Looking into the underlying database, we quickly found out that the number of 1 was not correct. It got even stranger: getting the coder in question via Coder.get in the loop yielded the correct results. What was the problem?
After some code reading and debugging, another query, which was called after the first one but before accessing the coder in the loop, gave some insight:

  Coder.withCriteria {
    projects {
      idEq(projectId)
    }
  }

This second query also queried the Coder but constrained the projects to a specific one. These coders were put into the second level cache, and when we called computer.coder the second level cache returned the previously queried coder. But this coder had only one project!
Since we only needed the number of coders with this project, we changed the second query to use a count, so no instances of Coder are returned and thus none are saved in the second level cache. Bug fixed.

The Great Divide

There is a great divide in the C++ developer community between “normal” developers that use only basic language features and very savvy ones that know every little corner of the language. The upcoming C++ standard deepens this divide even more.

Recently, I had two very contrary conversations about C++ which illustrate this great divide in the C++ developer community very well.

The first was with the technical lead of a team that writes and maintains drivers and control software for a scientific institution. These systems run 24/7 and have to be very stable and reliable.

I had discovered that they use a self-written toolbox library containing classes like SharedPtr<T> and Thread, and immediately suspected a classic case of NIH syndrome. I asked him about it and why they don't use well-established libraries like boost. He told me that they indeed only use the standard library and their own toolbox.

The reason he gave was that despite boost being the most elegant C++ library out there, it requires very good knowledge of the most advanced C++ mechanisms, and that his team was not on this level … I should probably mention here that his team does a very good job of running their systems. So, apparently, they get along very well using only basic C++ features and no "fancy" boost stuff.

The other conversation was with a friend of mine with whom I chat regularly about all sorts of programming-related stuff. This time the topic was the upcoming C++ standard and all its exciting new stuff. He has lots of experience with C++ and knows the language very well. But even someone like him had a hard time really understanding what rvalue references are all about. I had not looked at them in detail yet, so he tried to explain them to me. During our discussion I was wondering whether teams like the one introduced before will ever use rvalue references, or other C++0X stuff, in their production code, other than maybe the auto keyword for type inference or constructor delegation.

Honestly, I don't think stuff like rvalue refs will become a feature that is often used by "standard industry" teams, because it adds a lot of complexity to an already complex language. Even easy-to-grasp stuff like the new keywords override, constexpr and final, or additional initialization means like std::initializer_list<T>, will take a long time to be used regularly by most C++ teams.

Instead, most of C++0X will greatly increase the divide between “normal” C++ developers who get along well with using only basic language features, and experts that know every little corner of the language. And this is simply because there is so much more to know with C++0X.

But let's not paint the picture overly black. I, for one, am looking forward to the new standard, and I will certainly spread the word about the new possibilities and features in every C++ team I work with.

Summary of the Schneide Dev Brunch at 2011-07-17

A summary of our Dev Brunch on Sunday, 2011-07-17. You'll read mostly about conferences, the GRASP principles and some cool projects to know about.

Last Sunday, the 17th of July 2011, we held another Dev Brunch at our company.

A Dev Brunch is an event that brings three main ingredients together: developers, food and software industry related topics. Given enough time (there is never enough time!), we chat, eat, learn and laugh the whole evening through. Most of the stories and chitchat that are told cannot be summarized and have little value outside their context. But most participants bring a little topic alongside their food bag, something of interest they can talk about for ten minutes or so. This blog post summarizes at least the official topics and gives links to additional resources.

Conference review of the Java Forum Stuttgart 2011

The Java Forum Stuttgart is an annual conference held by the Java User Group Stuttgart. It's the biggest regional Java event and always worth a visit (as long as you understand the German language). This year, the talks stagnated a bit around topics that are mostly well-known.

The best talk was given by Michael Wiedeking from MATHEMA Software GmbH in Erlangen. The talk was titled "The next big (Java) thing" but mostly addressed the history and current state of Java in an entertaining and thought-provoking way. The premise was that you have to know the past and present to anticipate the future. The slides don't represent the talk well enough, but here's a link anyway.

Another session introduced the PatternTesting toolkit, a collection of helper classes and useful features that enrich the development of unit tests. Alongside the other spice you can add to unit tests, this project might be worth a look. My favorite was the @Broken annotation that ignores a test case until a given date. It's like an @Ignore with a best-before date.

There were the usual introductory talks, for example about CouchDB and git/Egit. They were well-executed, but lacked a certain thrill if you heard about the projects before.

As a personal summary, the Java world lacks the "next big thing" a bit. Two buzz products for the next year might be Eclipse Jubula (for UI testing) and Griffon (for desktop application development).

Conference review of the Karlsruhe Entwicklertag (developer day) 2011

The Karlsruhe Entwicklertag is another annual conference, spanning several days and presenting top-notch talks and sessions. It’s the first address for software developers in Karlsruhe that want to stay up to date with current topics and products.

Some topics were presented nearly identically to the Java Forum Stuttgart (but half a year earlier if that matters), while other tracks (like the Pecha Kucha talks) can only be found here.

The buzz products for the next year might be Gerrit (for code review) and, again, Eclipse Jubula (for UI testing).

As a personal summary, even this conference lacked a certain drive towards really new "big picture" topics. But maybe that's just alright, given all the hype of the last years.

The GRASP principles

This topic contained hands-on software development knowledge about the nine principles named "GRASP", or General Responsibility Assignment Software Patterns/Principles. There is nothing really new about the GRASP principles; they only give you common names for otherwise mostly unnamed best practices or fundamental design paradigms and patterns.

We even went through some educational slides that summarize the principles. Most of the discussion arose around the name "Pure Fabrication" for classes without a relation to the problem domain.

If you are an average experienced software developer, spend a few minutes and scan the GRASP principles so you can combine the name with the specific content.

First-hand experiences of combining work and children

We are well within the best age to raise children. So this topic got a lot of attention, specifically the actual tips to survive the first two years with kids and how to interact with the different administrative bodies. Germany is a welfare state, but nobody claimed that welfare should be easy or logical. We've learned a lot about different reference dates and unusual time partitioning.

Another insight was that working less than 40 percent isn’t really worth the hassle. You are mostly inefficient and aware of it.

That’s all, folks

As always, we shared a lot more information and anecdotes. If you want to participate at one of our Dev Brunches, let us know. We are open for guests and really interested in your topics.

Bogus Error Messages with Qt .ui Files

Name your Qt Forms correctly and you will save lots of debugging time.

Bogus errors together with their messages can have a large number of reasons – full hard drives being one of the classics. When it comes to programming, and especially C++, the possibilities for cryptic, meaningless and misleading error messages are infinite.

A nice one bit us at one of our customers the other day. The message was something like

QLayout can only have instances of QWidget as parent

and it appeared as standard error output during program start-up. Needless to say, the whole thing crashed with a segmentation fault after that. The only change that had been made was a header file that was added to the Qt files list in the CMakeLists.txt file. The Qt class in this header file was just in its beginnings and did not yet contain any QLayouts or QWidgets. Even the standard C++ measure of cleaning and recompiling everything didn't help.

So how is it possible that an additional Qt header file that has no references to QLayout or QWidget can cause such an error message?

As all of you experienced C/C++ developers know, for the compiler a code file is not only the stuff it contains directly but also everything that is #included! The offending header file included a header generated from a .ui form description, which you get when you design your windows – or Forms, in Qt terminology – with the Qt designer and use the compile-time form processing approach to incorporate the form into the code base.

But how can that affect anything?

The Qt designer saves the forms into .ui files. From these, the so-called User Interface Compiler (uic) generates a header file containing a C++ class together with inlined code that creates the form. Form components like line edits or push buttons are generated as instance attributes. The name of the class is derived from the name of the form. You can even use namespaces: by naming the form e.g. myproject::BestFormEverDesigned, the generated class is named BestFormEverDesigned and is put into namespace myproject.

So far, so nice, handy and easy to use.

When you create a new form in the Qt designer, the default name is Form. Maybe you can already guess where this leads…

Two forms, for which the respective developers had forgotten to set a proper name, existed in the same subproject and had been compiled and linked into the same shared library. The compiler has no chance to detect this, because it sees only one

class Form
{

at a time. The linker happily links all of this together since it thinks that all Forms are created equal. And then at run-time … Boom!

I will have to look into a little Jenkins helper which breaks the build when a Form form is checked in…

Your own dsl: a primer on operators

When writing your own domain specific language (DSL), a full-fledged parser generator like ANTLR can be very helpful with the nitty-gritty. You may come to a point where you want to use (infix) operators in your language. But beware! A naive solution might look like this:

expr: number '+' expr | number '-' expr | number '*' expr | number '/' expr | …;

If you want to support mathematics-like operators, this solution misses two important traits: operator precedence and associativity.
Precedence can be achieved easily:

expr: term '+' expr |  term '-' expr | …;
term: number '*' term | number '/' term | …;

The operators with the lowest precedence come first, then the next level and so on. Unfortunately, this has one side effect: the operators are now right associative.
This means an expression like 5 - 4 + 3 would evaluate to -2 and not 4, because with right associativity it is parsed like 5 - (4 + 3). So another refinement does the trick:

expr: term (('+'  |  '-' | …) term)*;
term: number (('*' | '/' | …) number)* | …;
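
With the repetition form, the parser hands you a flat sequence of operands and operators that you can fold from left to right, for example in a grammar action or while walking the parse result, which yields the expected left associativity. Here is a rough sketch, independent of any particular parser generator; the class and method names are invented for this example.

import java.util.Arrays;
import java.util.List;

public class LeftAssociativeEvaluation {

    // Fold a flat "term (op term)*" sequence from left to right.
    static double evaluate(List<Double> operands, List<Character> operators) {
        double result = operands.get(0);
        for (int i = 0; i < operators.size(); i++) {
            double next = operands.get(i + 1);
            result = (operators.get(i) == '+') ? result + next : result - next;
        }
        return result;
    }

    public static void main(String[] args) {
        // 5 - 4 + 3 is evaluated as (5 - 4) + 3 = 4, not as 5 - (4 + 3) = -2.
        System.out.println(evaluate(Arrays.asList(5.0, 4.0, 3.0), Arrays.asList('-', '+')));
    }
}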