Hibernate Objects in HTTP Sessions

While discussing common problems and pitfalls in web development
we stumbled on the pitfall of storing Hibernate objects in an HTTP session.

I do not recommend doing this, mainly because of the common pitfalls with detached objects.
Usually the lifecycle of a Hibernate session is bound to a request/response cycle, so after the end of the response the Hibernate object is detached. Accessing not-yet-loaded associations of a detached object causes a LazyInitializationException. Even trying to update or save the detached object throws an exception if the object isn’t already part of the current Hibernate session. You could re-attach/merge the detached object, but this writes the changes on the detached object to the database.
A better way would be to store only the identifier of the Hibernate object in the HTTP session, so you can load the object from the database when you need it.
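
A minimal sketch of what this can look like, assuming a Customer domain class, a "customerId" session attribute and the usual servlet HttpSession and Hibernate Session (none of these names are from the original example):

public void storeInSession(HttpSession httpSession, Customer customer) {
    // store only the identifier, not the Hibernate object itself
    httpSession.setAttribute("customerId", customer.getId());
}

public Customer loadFromSession(HttpSession httpSession, Session hibernateSession) {
    // load a fresh, attached instance whenever it is needed
    Long customerId = (Long) httpSession.getAttribute("customerId");
    return (Customer) hibernateSession.get(Customer.class, customerId);
}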

When it comes to multithreading, better be safe than sorry

Writing multithreaded applications in Java is hard. Here are five problems and how to avoid them without much effort (mostly).

Recently, I attended a code review of the core parts of a web application written in Java. The application is used by a large customer base and occasionally there are error reports and exceptions in the log files. Some of these exceptions are the dreaded ConcurrentModificationExceptions, indicating conflicting read/write access on an unsynchronized collection data structure. In the code review, we found several threading flaws, but only after an exhaustive reading of the whole module. Here, I want to present the flaws and give some advice on how to avoid them:

The public lock

In some parts of the code, methods were defined as synchronized by using the synchronized keyword in the method declaration:

public synchronized String getLastReservation() { [...]

While there is nothing wrong with this approach in itself, it can be highly dangerous in combination with synchronized blocks. The code above effectively wraps a synchronized block using the object instance (this) as a lock. No information of an object is more publicly visible than the object reference (this), so you have to check all direct or indirect clients of this object to see whether they synchronize on this instance, too. If they do, you have chained two code blocks together, probably without any mention of this fact. The least harmful defect will be performance loss because your code isn’t locked as fine-grained as it could be.

The easiest way to avoid these situations is to always hide the locks. Try not to share one object’s locks with other objects. If you choose publicly accessible locks, you can never be sure about that.
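
A hidden lock might look like this minimal sketch (class and member names are made up for illustration):

public class ReservationLog {
    // private lock object: no other object can ever synchronize on it
    private final Object lock = new Object();
    private String lastReservation;

    public String getLastReservation() {
        synchronized (lock) {
            return lastReservation;
        }
    }

    public void addReservation(String reservation) {
        synchronized (lock) {
            this.lastReservation = reservation;
        }
    }
}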

The subtle lock change

In one class, there were both instance and class (static) methods, using the synchronized keyword:

public synchronized String getOrderNumberOf(String customerID) { [...]
public synchronized static int getTotalPendingOrders() { [...]

And while they were both accessing the same collection data structure (a static HashMap), they were using different locks: the lock of the instance method is the instance itself, while the lock of the static method is the class object of the type. This is very dangerous, as it can easily be missed when writing or altering the code.

The best way to prevent this problem is to avoid the synchronized modifier for methods completely. State your locks explicitly, all the time.
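
One possible remedy, sketched with made-up names: both the instance method and the static method guard the shared static map with the same, explicitly named lock object.

import java.util.HashMap;
import java.util.Map;

public class OrderRegistry {
    private static final Map<String, String> ORDER_NUMBERS = new HashMap<String, String>();
    // one explicit lock for every access to the shared map
    private static final Object ORDERS_LOCK = new Object();

    public String getOrderNumberOf(String customerID) {
        synchronized (ORDERS_LOCK) {
            return ORDER_NUMBERS.get(customerID);
        }
    }

    public static int getTotalPendingOrders() {
        synchronized (ORDERS_LOCK) {
            return ORDER_NUMBERS.size();
        }
    }
}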

Partial locking

In a few classes, collection data types like lists were indeed synchronized by internal synchronized blocks in the methods, using the private collection instance as lock. The synchronized blocks were applied to the accessing and altering methods like putX(), removeX() and getX(). But the toString() method, building a comma-separated list of the textual list entries, wasn’t synchronized on the list. The method contained the following code:

public String toString() {
    StringBuilder result = new StringBuilder();
    for (String entry : this.list) {
        result.append(entry);
        result.append(",");
    }
    [...]
    return result.toString();
}

I’ve left out some details and special cases, as they aren’t relevant here. The problem with the foreach loop is that it uses an anonymous Iterator over the list; this iterator is fail-fast and throws a ConcurrentModificationException as soon as one of the properly synchronized sections changes the list during the iteration. The toString() method was used to store the list to a session dependent data storage. Every once in a while, the foreach loop threw an exception and failed to properly persist the list data, resulting in data loss.

The most straightforward solution to this problem might be to add the missing synchronization block in the toString() method. If you don’t want to block the user session while writing to disk, you might traverse the list without an Iterator (and be careful with your assumptions about valid indices) or work on a copy of the list, given that an in-memory copy of the list would be cheap. In an ACID system scenario, you should probably choose to complete your synchronized block guards.
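
The first variant could look like this sketch: the traversal simply joins the synchronization scheme of the other methods by locking on the same private list instance.

public String toString() {
    StringBuilder result = new StringBuilder();
    synchronized (this.list) { // same lock as in putX(), removeX() and getX()
        for (String entry : this.list) {
            result.append(entry);
            result.append(",");
        }
    }
    return result.toString();
}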

Locking loophole

Another problem was a collection that was synchronized internally but could be accessed through a getter method. No client could safely modify or traverse the collection, because they got hold of the collection, but not of the lock object (which happened to be the collection itself, too, but who can be sure that stays this way in the future?). It would be ridiculous to also provide a getter for the lock object (always hide your locks, remember?), so the better solution is to refactor the client code to a “tell, don’t ask” style.

To prevent a scenario where a client can access a data structure but not its lock, clients shouldn’t be able to gain direct access to the data structure at all; instead they pass “command objects” to it. This is a perfect use case for closures. Effectively, you’ll end up with something like Function or Operation instances that are applied to every element of the collection within a synchronized block and perform your functionality on them. Have a look at op4j for inspirational syntax.
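
A minimal sketch of this style with a hand-rolled command interface (all names are made up; op4j or any other functional library would serve the same purpose):

import java.util.ArrayList;
import java.util.List;

interface Operation<T> {
    void applyTo(T element);
}

public class RoomRegistry {
    private final List<String> rooms = new ArrayList<String>();

    // no getter for the list: clients hand in the work to be done instead
    public void forEachRoom(Operation<String> operation) {
        synchronized (rooms) {
            for (String room : rooms) {
                operation.applyTo(room);
            }
        }
    }
}

Clients that used to fetch the list now hand in an Operation instance, and the whole traversal happens inside the synchronized block.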

Local locking

This was the worst of all problems and the final reason for this blog entry: In some methods, the lock objects were local variables. In summary, these methods looked like this:

public String getData() {
    Object lock = new Object();
    synchronized (lock) {
        [...]
    }
}

Of course, it wasn’t that obvious. The lock objects were propagated to other methods, stored in data structures, removed from them, and so on. But in the end, each caller of the method got its own lock and could henceforth wreak havoc in code that appeared very well synchronized at first glance. In its plain form, the error is too obvious to be widespread; the problem was the obfuscation around it. It took us some time to really understand what was going on and where all those lock objects really came from.

My final advice is: if you have to deal with multithreading, don’t outsmart yourself and the next fellow programmer by building complex code structures or implicit relationships. Be as concise and explicit as you can. Less clutter is more when dealing with threads. The core problem is the all-or-none law of thread synchronization: either you’ve got it all right or you’ve got it all wrong – you just don’t know it yet.

Hide your locks, name your locks explicitly, reduce the scope of necessary locking so that you can survey it easily, never hand out your locked data and, most importantly, remove all clutter around your locking structures. This might make the difference between “just works” and endless ominous bug reports.

GORM-Performance with Collections

The other day I was looking to improve the performance of specific parts of our Grails application. I quickly found the typical bottleneck in database-centric Grails apps: too many queries were executed because GORM hides database queries behind its built-in persistence methods for domain objects and the extremely nice dynamic finders. In search of improvements and places to use GORM/Hibernate caching I stumbled upon a very good and helpful presentation on GORM performance in general and collection usage in particular. Burt Beckwith presents some common problems and good patterns to overcome them in his SpringOne 2GX talk. I highly recommend having a thorough look at his presentation.

Nevertheless, I want to summarize his bottom line here: GORM does provide a nice abstraction from relational databases, but this abstraction is leaky at times. So you have to know exactly how the stuff in your domain classes is mapped. Be especially careful if collections tend to become “large”, because performance will suffer extremely. We already observed a significant performance degradation for some dozen elements; your mileage may vary. For many simple modifications on a collection, all its elements have to be loaded from the database!
Instead of using hasMany/belongsTo, just add a back reference to the domain object your object belongs to. Without the collection you lose cascading deletes and some GORM functionality, but you can still use dynamic finders and put the functionality to manage the association into the respective classes yourself. This may be a large gain in specific cases!

Grails upgrade – lessons learned

Grails is a great framework for rapidly developing web applications on top of the proven Java/servlet platform. Especially smaller, short-lived projects can be a real breeze with all the scaffolding, GORM and convention-over-configuration built into Grails.

We have been using it for a quite complex web application project for almost 3 years now. Half a year ago we upgraded from Grails 1.0.x to 1.3.4. That makes 3 major versions in one upgrade step and obviously produced a lot of work and many small bugs. I do not want to put the blame on the Grails guys here, because most of the issues were mentioned in the release notes, and it was a big step we decided to take when the decision was made to continue the project for several years to come.

Our upgrade policy changed due to that experience, and we now try to stay a lot more current to be able to adapt our software to framework changes more incrementally. Some weeks ago we upgraded from 1.3.4 to 1.3.7, and this experience was not pleasant either. Even though we skimmed through the changelogs and release notes and thought the update should be uncritical for us, Grails’ behaviour changed in two aspects which broke things for us:

  1. An API change where GroovyPagesException was changed to GrailsTagException
  2. Behavioural change where no application context and injections are available in functional tests anymore

Item 1 was easy to fix, but you need really good testing to spot it before it slips into production. Such subtle API changes should not happen in micro-version updates, as they can easily break parts of the system without you knowing because of Groovy/Grails’ dynamic nature. No compiler saves you here.

Item 2 produced some amount of work for us because we had built a quite extensive acceptance test suite using services and domain objects to set up the initial environment for each test. Luckily, there is the Grails remote control plugin which you can use for things like that.

Lessons learned

  • You should have extensive automated test suites when developing a Grails application over a longer period, because things can break in unexpected ways without code changes on your side.
  • Try to plan upgrades some time ahead of releases and dedicate time to scanning the release notes and actually performing the upgrade. It may take you significantly longer than the smooth upgrade procedure itself suggests.

The Grails team seems to be increasingly aware of backwards compatibility, but they still have some way to go. We hope and expect to see fewer unexpected breakages in the future.

Developing Grails Apps – Some Dark Sides

Most of the time, developing Grails apps is a nice experience. But there are also dark sides, one of which is that bugs may appear or not appear depending on how you started your app.

Usually I try to avoid it, but this time a disclaimer is in order: this is not a Grails rant. Most of the time developing Grails projects is fast and smooth, and using Grails brought many advantages for us. But there are also dark sides…

My main criticism is that Grails’ abstractions are more than leaky! In every list of examples for the definition of the term “leaky abstraction”, Grails should be at the top. As soon as you leave the tutorial/scaffolding/hello-world level, you have to know a lot about the underlying stack. And with Hibernate and Spring underneath, none of the words small, easy and lightweight applies.

GORM, too, is only easy to use at first sight. The very informative blog series about GORM gotchas should absolutely become part of the user guide or the reference docs.

And there are those times where it gets really unpleasant, e.g. when a bug appears in your Grails application running in a servlet container (packaged in a .war) but does not appear when the application is started from within the IDE. Our last one of those was a naming conflict in a .gsp file. The controller handed a model like this to the .gsp:

...
return [fieldValue: 'THE_VALUE', ...]

The model entry ‘fieldValue’ was used in the .gsp to set the value of a combo box. Unfortunately, ‘fieldValue’ is also the name of a built-in Grails tag.

Admittedly, ‘fieldValue’ was not the wisest choice of names and I would certainly expect to get scolded loudly by Grails for that – ideally with a nice descriptive exception. But what happened instead led to a loud scolding of Grails from us. And to some big question marks: What is the difference between executing Grails from the IDE and within a servlet container with respect to name resolution? Why is there a difference at all?

We had a hard time figuring this one out, not least because the error message was not very telling. And since this was not the first of those works-in-the-IDE-but-not-in-a-real-environment bugs, there is always this slightly uneasy feeling…

As I said in the beginning, most of the time developing Grails applications is nice and shiny. I would not support their slogan, though. My personal search for the best web development tool is definitely not over.

How about your search?

A more elegant way to HTTP Requests in Java

The support for sending and processing HTTP requests has always been very basic in the JDK. There are many, many frameworks out there for sending requests and handling or parsing the response. But IMHO two stand out: HttpClient for sending and HtmlUnit for handling. And since HtmlUnit uses HttpClient under the hood, the two are a perfect match.

This is an example HTTP Post:

HttpClient client = new HttpClient();
PostMethod post = new PostMethod(url);
for (Map.Entry<String, String> param : params.entrySet()) {
    post.setParameter(param.getKey(), param.getValue());
}
try {
    return client.executeMethod(post);
} finally {
    post.releaseConnection();
}

and HTTP Get:

WebClient webClient = new WebClient();
return (HtmlPage) webClient.getPage(url);

Accessing the returned HTML via XPath is also very straightforward:

List<HtmlElement> roomDivs = (List<HtmlElement>) page.getByXPath("//div[contains(@class, 'room')]");
for (HtmlElement div : roomDivs) {
  rooms.add(
    new Room(this,
      ((HtmlElement) div.getByXPath(".//h2/a").get(0)).getTextContent(),
      div.getId())
  );
}

One last issue remains: HttpClient caches its cookies, but HtmlUnit creates an HttpClient of its own. But if you override HttpWebConnection and give it your HttpClient, everything works smoothly:

public class HttpClientBackedWebConnection extends HttpWebConnection {
  private HttpClient client;

  public HttpClientBackedWebConnection(WebClient webClient,
      HttpClient client) {
    super(webClient);
    this.client = client;
  }

  @Override
  protected HttpClient getHttpClient() {
    return client;
  }
}

Just set your custom web connection on your web client:

webClient.setWebConnection(
  new HttpClientBackedWebConnection(webClient, client)
);
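
Put together, one possible usage looks like this sketch: a single HttpClient then carries the cookie state for both the direct requests and the pages fetched through HtmlUnit.

HttpClient client = new HttpClient();
WebClient webClient = new WebClient();
webClient.setWebConnection(
  new HttpClientBackedWebConnection(webClient, client)
);
// requests sent via 'client' (e.g. the login POST from above) and pages
// loaded via 'webClient' now share the same cookie state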

Always be aware of the charset encoding hell

Most developers have already struggled with textual data from some third party system, getting garbage characters and the like because of wrong character encodings. Some days ago we encountered an obscure problem: it was possible to log into one of our apps from the computer running the password database, but not from other machines using the same db. After diving into the problem we found out that the SHA-1 hashes generated by our app differed slightly between the machines. Looking at the code revealed that the platform encoding was used to turn the password into bytes, and that led to different results.

The apps were running on Windows XP and Windows 2k3 Server respectively, and you would not expect that to make much of a difference, but in fact it did!
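
A minimal sketch of the effect (the password value is made up; the point is that any non-ASCII character makes the byte representation, and therefore the hash, depend on the platform encoding):

import java.security.MessageDigest;
import java.util.Arrays;

public class HashDemo {
    public static void main(String[] args) throws Exception {
        String password = "geheimes-Paßwort";
        MessageDigest sha1 = MessageDigest.getInstance("SHA-1");
        // platform-dependent: uses the default encoding of the JVM/OS
        byte[] platformDependent = sha1.digest(password.getBytes());
        // platform-independent: the encoding is stated explicitly
        byte[] explicit = sha1.digest(password.getBytes("UTF-8"));
        // prints false on every machine whose default encoding is not UTF-8
        System.out.println(Arrays.equals(platformDependent, explicit));
    }
}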

Lesson:

Always specify the encoding explicitly when exchanging character data with any other system. Here are some examples:

  • String.getBytes(“utf-8”), new PrintWriter(file, “ascii”) in Java
  • HTML-Forms with attribute accept-charset="ISO-8859-1"
  • In XML headers <?xml version="1.0" encoding="ISO-8859-15"?>
  • In your Database and/or JDBC driver
  • In your file format documentation
  • In LaTeX documents
  • everywhere you can provide that info easily (e.g. as a comment in a config file)

Problems with character encodings seem to appear every once in a while, either as an end user when your umlauts get garbled, or as a programmer who has to deal with third party input like web forms or text files.

The text file rant

After stumbling over an encoding problem *again*, I thought a bit about the whole issue, and some of my thoughts manifested in this rant about text files. I do not want to blame our computer science predecessors for inventing and using restricted charsets like ASCII or ISO 8859. Nobody foresaw the rapid development of computers, their worldwide adoption and use in everyday life, and thus the need for an extensible charset (think of the addition of new symbols like the €), let alone performance and memory considerations. The problem I see with text files is that there is no standard way to describe the used encoding. Most text files just leave it to the user to guess what the encoding might be, whereas almost all binary file formats feature some kind of defined header with metadata about the content, e.g. bit depth and compression method in image files. For text files you usually have to use heuristic tools which work more or less well depending on the input.

A standardized header for text files right from the start would have helped to indicate the encoding and possibly language or encoding version information of the text and many problems we have today would not exist. The encoding attribute in the XML header or the byte order mark in UTF-8 are workarounds for the fundamental problem of a missing text file header.

Grails Web Application Security: XSS prevention

XSS (cross-site scripting) has become a favored attack method in recent years. Several things are possible using an XSS vulnerability, ranging from small annoyances to a complete disaster.
The XSS prevention cheat sheet states 6 rules to prevent XSS attacks. For a complete solution, output encoding is needed in addition to input validation.
Here I take a closer look at how to use the built-in encoding methods in Grails applications to prevent XSS.

Take 1: The global option

There is a global option that specifies how all output is encoded when using ${}. See grails-app/conf/Config.groovy:

// The default codec used to encode data with ${}
grails.views.default.codec="html" // none, html, base64

So every expression inside ${} is encoded. But beware of the standard scaffolds, where fieldValue is used inside ${}: since fieldValue already encodes its output, you get double-escaped output – not a security problem, but the output is garbage.
This leaves the tags from the tag libraries to be reviewed for XSS vulnerabilities. The standard Grails tags all use HTML encoding. If you use a version older than Grails 1.1, beware of a bug in the renderErrors tag. The default encoding for ${} does not help you inside your custom tags – there you should nevertheless encode the output yourself!
But problems arise with other tags like radioGroup, as others have found out.
So the global option does not buy much protection (it only covers ${}), and it brings double escaping and problems with some Grails tags.

Take 2: Tainted strings

Other languages/frameworks (like Perl, Ruby, PHP, …) use a taint mode. There is some research work on this for Java.
Generally speaking, in GSPs three different kinds of output have to be escaped: ${}, <%%> and the output of tags/taglibs. If a tainted String appears, you can issue a warning and disallow or escape it. The problem in Java/Groovy is that Strings are value objects and get copied in every operation, so the tainted flag needs to be transferred, too. The same tainted flag must also be introduced for GStrings.
Since there isn’t any implementation or plugin for Groovy/Grails yet, right now you have to take the classic route:

Take 3: Test suites and reviews

Having a decent test suite, e.g. in Selenium, and reviewing your code for XSS vulnerabilities is still the best option for your Grails apps. Maybe tainted flags can help you in the future to spot places which you didn’t catch in a review.

P.S.: A short overview of Java frameworks and their handling of XSS can be found here.

Don’t trust micro versions

Normally you would think that upgrading a third party dependency where only the micro version (after the second dot, like x in 2.3.x) changes should make your software work (even) better and not break it. Sadly enough, it can easily happen. Some time ago we stumbled over a subtle change in the JNDI implementation of the Jetty web server and servlet container: in version 6.1.11 you specified (or at least could specify) JNDI resources in jetty-env.xml with names like jdbc/myDatabase. After the update to 6.1.12 the specified resource could not be found anymore. Digging through code changelogs and the like provided a solution that finally worked with 6.1.12: java:comp/env/jdbc/myDatabase. The bad thing is that the latter does not work with 6.1.11, so our configuration became micro-version-dependent on Jetty.

It seems that a new feature around JETTY-725, introduced in the update from 6.1.11 to 6.1.12, broke our software.

Conclusion

Always make sure that your dependencies are fixed for your software releases, and test your software every time you upgrade a dependency. Do not trust some automatic dependency update system or the version numbers of a project. In the end they are just numbers: they should indicate the impact of the changes, but you can never be sure the changes do not break something for you.

Enable Capture/Replay for Selenium Flex

One of the missing features of the SeleniumFlexAPI was capture/replay. So I looked into different ways to enable it:

  • Approach 1: Dispatch a DOM event and listen for it in ide-extensions.js

    Problem: Where do I include the name/id of the Flex control?
  • Approach 2: Custom events

    Problem: How do I listen to them in ide-extensions.js?

Solution: Additions to the Selenium IDE code: window.record

The solution is to add a new method, window.record, to the Selenium IDE code, which delegates to recorder.record, so that the Flex code can call this method directly through the ExternalInterface. The clear advantage of this technique is that there is no code pollution in your production code, but you have to change the Selenium IDE code. There is an issue in the Selenium IDE Jira which describes the additions, so go and vote for it!
Additional code is also needed in SeleniumFlexAPI.as, in the applicationCompleteHandler:

private function applicationCompleteHandler(event:FlexEvent):void {
  ...
  registerListeners(appTreeParser.thisApp.parent);
  ...
}
private function registerListeners(subject:*):void {
  subject.addEventListener(MouseEvent.CLICK, recordClick);
  subject.addEventListener(MouseEvent.DOUBLE_CLICK, recordDoubleClick);
  subject.addEventListener(Event.ADDED, childAdded);
  addListenerRecursive(subject);
}

Bubbling events like MouseEvent.CLICK can be added here, but for the non-bubbling ones you have to walk the display object hierarchy recursively:

public function addListenerRecursive(root:*):void {
  for(var i:int = 0; i < root.numChildren; i++) {
    try {
      var child:Object = root.getChildAt(i);
      if (isMenuBar(child)) {
        child.removeEventListener(MenuEvent.ITEM_CLICK, recordMenuItemClick);
        child.addEventListener(MenuEvent.ITEM_CLICK, recordMenuItemClick);
      }
      if (isTextControl(child)) {
        child.removeEventListener(Event.CHANGE, recordTextChange);
        child.addEventListener(Event.CHANGE, recordTextChange);
      }
      if (isDataGrid(child)) {
        child.removeEventListener(DataGridEvent.ITEM_FOCUS_IN, recordItemClick);
        child.addEventListener(DataGridEvent.ITEM_FOCUS_IN, recordItemClick);
      }
      addListenerRecursive(child);
    } catch(e:Error) {}
  }
}

In the event handling functions you just call the record function with the appropriate SeleniumFlex command:

private function recordClick(event:MouseEvent):void {
  ExternalInterface.call("record", "flexClick", "name=" + event.target.name, "");
}

Since Flex Sprites have no ids, I use the name here to identify the clicked target.
Another pitfall is when components are added dynamically (like a date control that adds a calendar view when it opens):

private function childAdded(event:Event):void {
  if (isDate(event.target.parent.parent)) {
    event.target.parent.parent.removeEventListener(CalendarLayoutChangeEvent.CHANGE, recordDate);
    event.target.parent.parent.addEventListener(CalendarLayoutChangeEvent.CHANGE, recordDate);
  }
}

Conclusion

So finally capture/replay in SeleniumFlex becomes a reality! Nonetheless, there is some work left to do to support the different kinds of Flex controls and Selenium commands.