Grails / GORM performance tuning tips

Every situation and every codebase is different, but here are some pitfalls that can cost performance and some tips you can try to improve performance in Grails / GORM.

First things first: never optimize without measuring. This holds even more for Grails, where many layers are involved in running your code: the code itself, its compiled and optimized form, the Grails library stack, HotSpot, the Java VM, the operating system, the C libraries, the CPU… With this many layers and even more interactions, you should not guess where you can improve performance.

Measuring the performance

So how do you measure code? If you have a profiler like JProfiler, you can use it to measure different aspects of your code: CPU utilization, hotspots, JDBC query performance, Hibernate statistics, etc. But even without a dedicated profiler, some custom code snippets can go a long way. Sometimes we use dedicated methods for measuring the runtime:

class Measurement {
  static void runs(String operationName, Closure toMeasure) {
    long start = System.nanoTime()
    toMeasure()  // run the code under measurement
    long end = System.nanoTime()
    println "Operation ${operationName} took ${(end - start) / 1E6} ms"
  }
}

or you can even show the growth in the Hibernate persistence context:

class Measurement {
  static void grown(String operationName, SessionFactory sessionFactory, Closure toMeasure) {
    PersistenceContext pc = sessionFactory.currentSession.persistenceContext
    Map before = numberOfInstancesPerClass(pc)
    toMeasure()  // run the code under measurement
    Map after = numberOfInstancesPerClass(pc)
    println "The operation ${operationName} has grown the persistence context: ${differenceOf(after, before)}"
  }
  // numberOfInstancesPerClass and differenceOf are small helpers, omitted here
}

Improving the performance

So when you have found your badly performing code, what can you do about it? Every situation is different, but here are some common pitfalls and tips:

GORM hotspots

Performance problems with GORM can appear in different areas. A good rule of thumb is to reduce the number of queries hitting the database. This can be achieved by combining results with outer joins, eager fetching of associations, or improved caching. Another hotspot can be long-running queries, which you can often improve by creating indices on the database — but first analyze the query with database-specific tools like EXPLAIN ANALYZE.
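For illustration, here is a sketch of how eager fetching collapses an N+1 query pattern into a single join. The Book domain class and its author association are made up for this example:

```groovy
// N+1 pattern: one query for the list, plus one additional query
// per book when the lazily loaded author association is touched
List<Book> books = Book.list()
books.each { println it.author.name }

// One query with a join instead, using GORM's fetch argument
List<Book> booksEager = Book.list(fetch: [author: 'join'])

// Alternatively, make the association eager for all queries
// in the domain class' mapping block:
// static mapping = { author fetch: 'join' }
```

Whether the join variant is actually faster depends on the data and the database, so measure both.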
A typical problem can also be a large persistence context. Why is this a problem? The default flush mode in Hibernate, and hence in GORM, is AUTO, which means the persistence context is flushed before any query. Flushing means Hibernate checks every property of every instance for changes; the larger the persistence context, the more work there is to do. One option is to clear the session periodically after a flush, but this can also decrease performance, because instances that were already loaded (and therefore cached) need to be reloaded from the database.
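A sketch of the periodic flush-and-clear option for a large batch operation — Book and bookData are hypothetical, and the batch size of 100 is just a starting point to experiment with:

```groovy
Book.withSession { session ->
    bookData.eachWithIndex { data, i ->
        new Book(data).save()
        if (i % 100 == 0) {
            session.flush()  // write pending changes to the database
            session.clear()  // detach instances so dirty checking stays cheap
        }
    }
}
```

Note that clear() detaches every instance in the session, so any instance you still need afterwards has to be reloaded.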
Another option is to identify the parts of your code that only need read access to the instances. There you can use a stateless session, or in Grails the Spring annotation @Transactional(readOnly = true). It can be beneficial for performance to separate read-only and write access to the database. You could also experiment with the flush mode, but beware that this can lead to wrong query results.
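A sketch of a read-only service method — ReportService and the finder are hypothetical; the annotation is the relevant part:

```groovy
import org.springframework.transaction.annotation.Transactional

class ReportService {

    // read-only: Hibernate does not need to flush or dirty-check
    // the instances loaded inside this transaction
    @Transactional(readOnly = true)
    List<Book> findRecentBooks() {
        Book.findAllByReleaseDateGreaterThan(new Date() - 30)
    }
}
```

For single instances, GORM also offers read(id), which loads an instance in read-only mode without dirty checking.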

The thin line: where to stop?

If you measure and improve, you will get big and small improvements. The problem is deciding which of the small ones change the code in a good or at least minimal way: it is a trade-off between performance and code design, as some performance improvements can worsen code quality. Another cup of tea left untouched in this discussion is scalability. Whereas performance concentrates on the actual data and the current situation, scalability looks at how the system performs as the data grows. Some performance improvements can worsen scalability. As with performance: measure, measure, measure.

2 thoughts on “Grails / GORM performance tuning tips”

  1. I like your conclusion and your point that you need to measure the performance before optimization. The reason is that multiple layers make the system complex, such that it behaves chaotically and unpredictably, right?

    Do you use the code you have provided for measurements as is, or just as starting point? In my experience, such measurements can be quite misleading if you do not measure performance in one slice for a large data set:
    – System.nanoTime() has quite a misleading name – its resolution can become as rough as currentTimeMillis(). If you take lots of small measurements, your results can be quite weird (see the next bullets). Do you have any experience using other notions of time?
    – since the system behaves nondeterministically, you should do some statistics over multiple runs
    – the JVM induces quite some nondeterminism that you should consider and try to minimize

    Have you made similar experiences and refined your approach for measurements? Or how do you make sure you have collected enough sensible data for your experiment?

    1. Yes, the layers, external systems and sources, caches, multithreading make it impossible to predict consistently. Sometimes we use the code, sometimes we use tools like JProfiler or JMeter. It really depends on what you want to measure.
      Since our optimizations tend to take an operation from minutes to seconds we can neglect short running code. When profiling database dependent code multiple runs introduce another bunch of problems e.g. with caching.
      The data we use to drive our profiling is reconstructed from real-world usage. We tend to improve the real-world use cases and neglect other ones, because in the past we used scenarios and assumptions and they can lead you in very wrong directions.
