Adding a dynamic React page to your classic Grails multi-page application

We are developing and maintaining a classic multi-page application based on the Grails web framework that is more than ten years old. With the advent of HTML5 and modern browsers with faster JavaScript engines, users expect a more dynamic and pleasant user experience (UX) from web applications. Our application is used by hundreds of users and our customer expects a stable, familiar and feature-rich experience that continues to improve over time. A complete rewrite of the UI is way out of scope time- and budget-wise.

One of the new feature requests would benefit greatly from a client-side JavaScript implementation, so we looked at our options. Fortunately, it is quite easy to integrate a React app with Grails and the Gradle build system. So we implemented the new page almost completely as a React app while leaving all the other pages as normal server-side rendered Groovy Server Pages (GSP). The result is quite convincing and opens up a transition path to more dynamic client-side pages and perhaps even a complete transformation into a single-page application (SPA) in the distant future.

Integrating a React app into the Grails build process

The Grails react-webpack profile can serve as a great starting point for integrating a React app into an existing Grails project. First you create the React app for the new page in the folder src/main/webapp, using the create-react-app scripts for example. Then you need to add a $GRAILS_PROJECT/webpack.config.js to configure webpack appropriately, like so:

var path = require('path');

module.exports = {
  entry: './src/main/webapp/index.js',
  output: {
    path: path.join(__dirname, 'grails-app/assets/javascripts'),
    publicPath: '/assets/',
    filename: 'bundle.js'
  },
  module: {
    rules: [
      {
        test: /\.js$/,
        include: path.join(__dirname, 'src/main/webapp'),
        use: {
          loader: 'babel-loader',
          options: {
            presets: ["@babel/preset-env", "@babel/preset-react"],
            plugins: ["transform-class-properties"]
          }
        }
      },
      {
        test: /\.css$/,
        use: [
          'style-loader',
          'css-loader'
        ]
      },
      {
        test: /\.(jpe?g|png|gif|svg)$/i,
        use: {
          loader: 'url-loader?limit=10000&prefix=assets/!img'
        }
      }
    ]
  }
};

The next step is to move the package.json to the $GRAILS_PROJECT directory because we want Gradle tasks to take care of building and bundling it as a Grails asset. To make this convenient we add some Gradle tasks employing Yarn to our build.gradle:

import com.moowork.gradle.node.yarn.YarnTask

buildscript {
    dependencies {
        ...
        classpath "com.moowork.gradle:gradle-node-plugin:1.2.0"
    }
}

...

apply plugin:"com.moowork.node"

...

node {
    version = '12.15.0'
    yarnVersion = '1.22.0'
    distBaseUrl = 'https://nodejs.org/dist'
    download = true
}

task bundle(type: YarnTask, dependsOn: 'yarn') {
    group = 'build'
    description = 'Build the client bundle'
    args = ['run', 'bundle']
}

task webpack(type: YarnTask, dependsOn: 'yarn') {
    group = 'application'
    description = 'Build the client bundle in watch mode'
    args = ['run', 'start']
}

bootRun.dependsOn(['bundle'])
assetCompile.dependsOn(['bundle'])

...

Now we have integrated our new React app with the Grails build system and packaging. Note that the bundle and webpack Gradle tasks assume matching bundle and start script entries in your package.json. The webpack task updates the JavaScript bundle on the fly, so we get almost the same hot-reloading support during development as with the rest of Grails.

Delivering the React app as a page

Now that we have integrated the React app into the build and packaging process of our Grails application, we need to deliver it when the new page is requested by the browser. This is straightforward and can be achieved with a GSP like so:

<html>
<head>
    <meta name="layout" content="main"/>
    <title>
        <g:message code="example.header"/>
    </title>
</head>
<body>
    <div id="react-content">
    </div>
    <asset:javascript src="bundle.js"/>
</body>
</html>

Now you just have to develop the endpoints for the JavaScript app in the form of normal Grails controllers rendering JSON instead of GSP views. This is extremely easy using Groovy maps and the Grails JSON converters:

import grails.converters.JSON

class DataApiController {

    // a controller action rendering a Groovy map as a JSON response
    def getData() {
        def responseData = [
            name: 'John',
            age: 37
        ]
        render responseData as JSON
    }
}
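
How the action is exposed depends on your URL mappings. As a hedged illustration (this mapping is not from the original setup, the path is hypothetical), the endpoint could be wired up like this:

class UrlMappings {
    static mappings = {
        // let the React app fetch its data via GET /api/data
        get "/api/data"(controller: 'dataApi', action: 'getData')
    }
}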

Conclusion

Grails and its build infrastructure are flexible enough to easily integrate SPA pages into an existing traditional web application. This allows you to deliver the modern UX and features today's users expect without completely rewriting your trusty and proven Grails application. The process can be gradual, and individual pages/views can be renewed when needed. That way you can continually add value for your customer while incrementally modernizing your application.

Getting Shibboleth SSO attributes securely to your application

Accounts and user data are a matter of trust. Single sign-on (SSO) can improve user experience (UX), convenience and security, especially if you are offering several web applications often used by the same users. If you do not want to force your users onto big vendors offering SSO like Google or Facebook, or do not trust them, you can implement SSO for your offerings with open-source software (OSS) like Shibboleth. With Shibboleth it may even be feasible to join an existing federation like SWITCH, DFN or InCommon, thus enabling logins for thousands of users without creating new accounts and login data.

If you are implementing your SSO with Shibboleth you usually have to enable your web applications to deal with Shibboleth attributes. Shibboleth attributes are information about the authenticated user provided by the SSO infrastructure, e.g. the Apache web server and mod_shib in conjunction with associated identity providers (IdP). In general there are two options for accessing these attributes:

  1. HTTP request headers
  2. Request environment variables (not to be confused with system environment variables!)

Using request headers should be avoided as it is less secure and prone to spoofing. Access to the request environment depends on the framework your web application is using.

Shibboleth attributes in Java Servlet-based apps

In Java servlet-based applications like Grails or Java EE, access to the Shibboleth attributes is really easy, as they are provided as request attributes. Simply calling request.getAttribute("AJP_eppn") will provide you the value of the eppn (“EduPersonPrincipalName”) attribute set by Shibboleth, if a user is authenticated and the attribute is made available. There are two caveats though (see the sketch after this list):

  1. Request attributes are prefixed by default with AJP_ if you are using mod_proxy_ajp to connect Apache with your servlet container.
  2. Shibboleth attributes are not contained in request.getAttributeNames()! You have to access them directly, knowing their name.
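
A minimal Grails sketch of such a lookup, assuming mod_proxy_ajp with the default AJP_ prefix (the controller name and response handling are illustrative):

class ProfileController {

    def index() {
        // Shibboleth attributes are plain request attributes, but they do not
        // appear in request.getAttributeNames() - access them by name
        String eppn = request.getAttribute('AJP_eppn')
        if (!eppn) {
            render status: 401   // no authenticated Shibboleth session
            return
        }
        render "Hello ${eppn}"
    }
}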

Shibboleth attributes in WSGI-based apps

If you are using a WSGI-compatible Python web framework for your application, you can get the Shibboleth attributes from the wsgi.environ dictionary that is part of the request. In CherryPy, for example, you can use the following code to obtain the eppn:

eppn = cherrypy.request.wsgi_environ['eppn']

I did not find the name of the WSGI environment dictionary clearly documented in my efforts to make Shibboleth work with my CherryPy application, but after that everything was a breeze.

Conclusion

Accessing Shibboleth attributes in a safe manner is straightforward in web environments like Java servlets and Python WSGI applications. Nevertheless, you have to know the above aspects regarding naming and visibility or you will be puzzled by the behaviour of the Shibboleth service provider.

Explicit types – and when to use them

Many modern programming languages offer a way to declare variables without an explicit type if the type can be inferred, either dynamically or statically. Many also allow variables to be explicitly defined with a type. For example, Scala and C# let you omit the explicit variable type via the var keyword, but both also allow defining variables with explicit types. I’m coming from the C++ world, where “auto” has been available for this purpose since the relatively recent C++11. However, people are still debating whether you should actually use it.

Pros

Herb Sutter popularised the almost-always-auto style. He advocates that using more type inference is good because it is roughly equivalent to programming against interfaces instead of implementations. He says that “Overcommitting to explicit types makes code less generic and more interdependent, and therefore more brittle and limited.” However, he also mentions that you might sometimes want to use explicit types.

Now what exactly is overcommitting here? When is the right time to use explicit types?

Cons

Opponents to implicit typing, many of them experienced veterans, often state that they want the actual type visible in the source code. They don’t want to rely on type inference being right. They want the code to explicitly state what’s going on.

At first, I figured that was just conservatism in the face of a new “scary” feature that they did not fully understand. After all, IDEs can usually infer the type on-the-fly and you can hover on a variable to let it show you the type.

For C++, the function signature is a natural boundary where you often insert explicit types, unless you want to commit to the compile-time and physical-dependency cost that comes with templates. Other languages, such as Groovy, do not have this trade-off and let you skip explicit types almost everywhere. After working with Groovy/Grails for a while, where the dominant style seems to be to omit types wherever possible, it dawned on me that the opponents of implicit typing have a point. Not only does the IDE often fail to show me the inferred type (even though it still works way more often than I would have anticipated), but I also found it harder to follow and modify code that did not mention explicit types. Seemingly contrary to Herb Sutter’s argument, that code felt more brittle than I would have liked.

Middle-ground

As usual, the truth seems to be somewhere in the middle. I propose the following rule for when to use explicit types:

  • Explicit typing for domain-types
  • Implicit typing everywhere else

Code using types from the problem domain should be as specific as possible. There’s no need for it to be generic – that’s actually counter-productive, as otherwise the code model would be inconsistent with the model of the problem domain. This is also the most important aspect to grok when reading code, so it should be explicit. The type is as important as the action on it.

On the other hand, for pure-fabrication types that do not represent a concept in the domain, the action is important, while the type is merely a means to achieve this action. Typically, most of the elements from a language’s standard library fall into this category. All your containers, iterators, callables. Their types are merely implementation details: an associative container could be an array, or a hash-map or a tree structure. Exchanging it rarely changes the meaning of the code in the problem domain – it just changes its performance characteristics.

Containers will occasionally contain domain-types in their type. What do you do about those? I think they belong in the “everywhere else” category, but you should take extra care to name the contained type when working with it – for example when declaring the variable of the for-each loop on it, or when inserting something into it. This way, the “collection of domain-type” aspect will become clear, but the specific container implementation will stay implicit – like it should.
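
A small hypothetical Groovy sketch of the rule in action (the Invoice/LineItem domain is illustrative):

// domain type: declared explicitly, because it carries domain meaning
Invoice invoice = invoiceService.loadInvoice(invoiceId)

// container: implicit, its concrete type is an implementation detail ...
def lineItems = invoice.lineItems

// ... but the contained domain type is named where we work with it
def total = lineItems.sum { LineItem item -> item.price }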

What do you think? Is this a useful proposition for your code?

How to speed up your ORM queries of n x m associations

What solution (without a cache) causes a 45x speedup? Let’s look at the different approaches and how they compare (all numbers are in ms).

Disclaimer: the absolute benchmark numbers are for illustration purposes; what matters are the relationships and speedups between the different approaches (just for the curious: I measured 500 entries per table in a PostgreSQL database, with both Rails 4.1.0 and Grails 2.3.8 running on Java 7 on a recent MBP running OSX 10.8).

Say you have the model classes Book and (Book)Writer which are connected via an n x m table named Authorship:
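
A minimal Grails sketch of what these domain classes might look like (the exact properties are illustrative):

class Book {
    String title
}

class BookWriter {
    String firstname
    String lastname
}

class Authorship {
    Book book
    BookWriter writer
}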

A typical query would be to list all books with their authors, like:

Fowler, Martin: Refactoring

A straightforward way is to query all authorships:

In Rails:

# 1500 ms
Authorship.all.map {|authorship| "#{authorship.writer.lastname}, #{authorship.writer.firstname}: #{authorship.book.title}"}

In Grails:

// 585 ms
Authorship.list().collect {"${it.writer.lastname}, ${it.writer.firstname}: ${it.book.title}"}

This is unsurprisingly not very fast. The problem with this approach is that it causes the famous n+1 select problem. The first option we have is to use eager fetching. In Rails we can use ‘includes’ or ‘joins’. ‘Includes’ loads the associated objects via additional queries, one for authorship, one for writer and one for book.

# 2300 ms
Authorship.includes(:book, :writer).all

‘Joins’ uses SQL inner joins to load the associated objects.

# 1000 ms
Authorship.joins(:book, :writer).all
# returns only the first element
Authorship.joins(:book, :writer).includes(:book, :writer).all

The additional queries with ‘includes’ in our case slow down the whole request, but with ‘joins’ we can more than halve our time. The combination of both directives causes Rails to return just one record and is therefore ruled out.

In Grails using ‘belongsTo’ on the associations speeds up the request considerably.

class Authorship {
    static belongsTo = [book:Book, writer:BookWriter]

    Book book
    BookWriter writer
}

// 430 ms
Authorship.list()

We can also implement eager loading by specifying ‘lazy: false’ in our mapping, which yields a mild performance increase.

class Authorship {
    static mapping = {
        book lazy: false
        writer lazy: false
    }
}

// 416 ms
Authorship.list()

Can we do better? The normal approach is to use ‘has_many’ associations and query from one side of the n x m association. Since we use more properties from the writer, we start from there.

class Writer < ActiveRecord::Base
  has_many :authorships
  has_many :books, through: :authorships
end

Testing the different combinations of ‘includes’ and ‘joins’ yields interesting results.

# 1525 ms
Writer.all.joins(:books)

# 2300 ms
Writer.all.includes(:books)

# 196 ms
Writer.all.joins(:books).includes(:books)

With both options our request is now faster than ever (196 ms), a speedup of about 7x.

What about Grails? Adding ‘hasMany’ and the authorship table as a join table is easy.

class BookWriter {
    static mapping = {
        books joinTable:[name: 'authorships', key: 'writer_id']
    }

    static hasMany = [books:Book]
}

// 313 ms, adding lazy: false results in 295 ms
BookWriter.list().collect {"${it.lastname}, ${it.firstname}: ${it.books*.title}"}

The result is rather disappointing: only a mild speedup (2x), and even slower than Rails.

Is this the most we can get out of our queries? Looking at the benchmark results and the detailed numbers Rails shows us, there are hints that the query per se is not the problem anymore, but the deserialization is. What if we limit the created object graph and use a model class backed by a database view? We can create a view containing all the attributes we need, even with associations to the books and writers.

create view author_views as (
  SELECT "authorships"."writer_id" AS writer_id,
         "authorships"."book_id"   AS book_id,
         "books"."title"           AS book_title,
         "writers"."firstname"     AS writer_firstname,
         "writers"."lastname"      AS writer_lastname
  FROM "authorships"
  INNER JOIN "books"   ON "books"."id"   = "authorships"."book_id"
  INNER JOIN "writers" ON "writers"."id" = "authorships"."writer_id"
)
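
A hedged sketch of the corresponding Grails domain class backed by this view (property and table names follow the view definition above):

class AuthorView {
    Long writerId
    Long bookId
    String bookTitle
    String writerFirstname
    String writerLastname

    static mapping = {
        table 'author_views'                    // map onto the database view
        id composite: ['writerId', 'bookId']    // a view has no surrogate id column
        version false                           // ... and no version column
    }
}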

Let’s take a look at our request time:

# 15 ms
AuthorView.select(:writer_lastname, :writer_firstname, :book_title).all.map { |author| "#{author.writer_lastname}, #{author.writer_firstname}: #{author.book_title}" }

// 13 ms
AuthorView.list().collect {"${it.writerLastname}, ${it.writerFirstname}: ${it.bookTitle}"}

13 ms and 15 ms. This surprised me a lot. Seeing this in comparison shows how much the object mapping impacts the performance of our request.

The lesson here is that sometimes the performance can be improved outside of our code and that mapping database results to objects is a costly operation.

Grails Update from 2.2 to 2.3

An update in the minor version does not seem like a big step, but this one brought a lot of changes, so here is a step-by-step guide which highlights some pitfalls.

First update the version of Grails in your application properties:

app.grails.version=2.3.8

The tomcat and hibernate plugins now carry the versions of their respective frameworks, not the Grails version number:

plugins.tomcat=7.0.52.1
plugins.hibernate=3.6.10.13

Grails 2.3 has a new databinding mechanism. To use the old one, especially if you use custom property editors, you have to add this option to your Config.groovy:

grails.databinding.useSpringBinder = true

But even with the old databinding something changed: the field id is not bound in command objects anymore, so you need to bind it explicitly:

def action = { MyCommand command ->
  command.id = params['id']?.toLong()
}

Besides the databinding mechanism, the dependency resolution also changed. But you can use the old Ivy mechanism by including this in BuildConfig.groovy:

grails.project.dependency.resolver="ivy"

Nonetheless, all dependencies must be declared in application.properties or BuildConfig.groovy. If you have a lib directory with local JARs in your application, you need to add it to your repositories as a local directory:

grails.project.dependency.resolution = {
    repositories {
        flatDir name:'myRepo', dirs:'lib'
    }
}

When you have all dependencies declared your application should start.

Tests

Grails 2.3 features a new test mode: forking. This causes some problems, so it is better to deactivate it in BuildConfig.groovy:

grails.project.fork = [
        test: false,
]

With the new version only JUnit4-style tests are supported. This means that you don’t extend GroovyTestCase or GrailsUnitTestCase anymore. All rules must be public and non-static. All test methods need to be annotated with @Test. Setup methods are annotated with @Before and must be public. The tearDown methods must also be annotated with @After and be public. A bug in Grails prevents you from naming the setup and teardown methods freely: the names must be setUp and tearDown. All test methods must be public void; the old def declaration is not supported anymore. Now, without extending GroovyTestCase, you lose the assertion methods and need to add a static import (a minimal test class following these rules is sketched below):

import static groovy.util.GroovyTestCase.*
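
Putting these rules together, a minimal test class might look like this (class and method names are illustrative):

import static groovy.util.GroovyTestCase.*

import org.junit.After
import org.junit.Before
import org.junit.Test

class MyServiceTests {

    @Before
    public void setUp() {
        // must be public and named exactly setUp
    }

    @Test
    public void testSomething() {
        assertEquals(4, 2 + 2)
    }

    @After
    public void tearDown() {
        // must be public and named exactly tearDown
    }
}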

Unit Tests

All tests should be annotated with @TestMixin([GrailsUnitTestMixin]). If you need to mock domain classes, you change mockDomain to @Mock:

class MyTest {
    public void testThis() {
        mockDomain(MyDomainClass, [mdc])
    }
}

@Mock([MyDomainClass])
class MyTest {
    public void testThis() {
        mdc.save()
    }
}

Configuration is now already mocked and your properties can be added easily:

config.my.property.value.is='123'

Integration tests

As mentioned before, setUp method naming has a bug: you have to name the methods setUp, otherwise the changes to your database aren’t rolled back.

Acceptance Tests with Selenium

You need to patch the Remote Control Plugin because of a ClassNotFoundException. Add an additional constructor to RemoteControl.groovy to support setting the classloader:

RemoteControl(ClassLoader loader) {
  super(new HttpTransport(getFunctionalTestReceiverAddress(), loader), loader)
} 

In your tests you call this new constructor with the classloader of your class:

new RemoteControl(getClass().classLoader)

Using a Groovy Mixin in your application does not work in your tests and needs to be replaced with grails.util.Mixin. But this only works in one direction: the target class can access the mixin, but the mixin cannot access the target class. For this to work you need to let your mixin implement MixinTargetAware and declare a field named target:

class MyMixin implements MixinTargetAware {
	def target
}

Subtle changes and pitfalls

If you have a class with a Controller suffix and a corresponding test, but the class isn’t a Grails controller, Grails nevertheless tries to mock the class in your unit tests. If you rename the test to something without “Controller”, everything works fine.

In our pre-2.3 project we had some select tags in our views and used fieldValue for the selection:

<g:select value="${fieldValue(bean: object, field: 'value')}">

But now the select tag uses equals(), which fails if the values aren’t Strings. Just use the unescaped value:

<g:select value="${object?.value}">

I hope this guide and these hints help others to avoid headaches when upgrading their Grails application.

Automatic deployment of (Grails) applications

What was your most embarrassing moment in your career as a software engineer? Mine was when I deployed an application to production and it didn’t even start.

Early in my career, deploying an application usually involved a fair bunch of manual steps: logging in to a remote server via ssh and executing various commands. After a while the repetitive steps were bundled in shell scripts. But mistakes happened. That’s normal. The solution is to automate as much as we can. So here are the steps to automatic deployment happiness.

Build

One of the oldest requirements for software development, mentioned in The Joel Test, is that you can build your app in one step. With Grails that’s easy: just create a build file (we use Apache Ant here, but others will do) in which you call grails clean, grails test and then grails war:

<project name="my_project" default="test" basedir=".">
  <property name="grails" value="${grails.home}/bin/grails"/>
  
  <target name="-call-grails">
    <chmod file="${grails}" perm="u+x"/>
    <exec dir="${basedir}" executable="${grails}" failonerror="true">
      <arg value="${grails.task}"/><arg value="${grails.file.path}"/>
      <env key="GRAILS_HOME" value="${grails.home}"/>
    </exec>
  </target>
  
  <target name="-call-grails-without-filepath">
    <chmod file="${grails}" perm="u+x"/>
    <exec dir="${basedir}" executable="${grails}" failonerror="true">
      <arg value="${grails.task}"/><env key="GRAILS_HOME" value="${grails.home}"/>
    </exec>
  </target>

  <target name="clean" description="--> Cleans a Grails application">
    <antcall target="-call-grails-without-filepath">
      <param name="grails.task" value="clean"/>
    </antcall>
  </target>
  
  <target name="test" description="--> Run a Grails applications tests">
    <chmod file="${grails}" perm="u+x"/>
    <exec dir="${basedir}" executable="${grails}" failonerror="true">
      <arg value="test-app"/>
      <arg value="-echoOut"/>
      <arg value="-echoErr"/>
      <arg value="unit:"/>
      <arg value="integration:"/>
      <env key="GRAILS_HOME" value="${grails.home}"/>
    </exec>
  </target>

  <target name="war" description="--> Creates a WAR of a Grails application">
    <property name="build.for" value="production"/>
    <property name="build.war" value="${artifact.name}"/>
    <chmod file="${grails}" perm="u+x"/>
    <exec dir="${basedir}" executable="${grails}" failonerror="true">
      <arg value="-Dgrails.env=${build.for}"/><arg value="war"/><arg value="${target.directory}/${build.war}"/>
      <env key="GRAILS_HOME" value="${grails.home}"/>
    </exec>
  </target>
  
</project>

Here we call Grails via the shell scripts but you can also use the Grails ant task and generate a starting build file with

grails integrate-with --ant

and modify it accordingly.

Note that we specify the environment for building the war because we want to build two wars: one for production and one for our functional tests. The environment for the functional tests mimics the deployment environment as closely as possible, but in practice there are small differences. These can be things like having no database cluster or no SMTP server.

Now we can put all this into our continuous integration tool Jenkins, and every time a checkin is made our Grails application is built.

Test

Unit and integration tests are already run when building and packaging. But we also have functional tests which deploy to a local Tomcat and test against it. Here we fetch the test war of the last successful build from our CI:

<target name="functional-test" description="--> Run functional tests">
  <mkdir dir="${target.base.directory}"/>
  <antcall target="-fetch-file">
    <param name="fetch.from" value="${jenkins.base.url}/job/${jenkins.job.name}/lastSuccessfulBuild/artifact/_artifacts/${test.artifact.name}"/>
    <param name="fetch.to" value="${target.base.directory}/${test.artifact.name}"/>
  </antcall>
  <antcall target="-run-tomcat">
    <param name="tomcat.command.option" value="stop"/>
  </antcall>
  <copy file="${target.base.directory}/${test.artifact.name}" tofile="${tomcat.webapp.dir}/${artifact.name}"/>
  <antcall target="-run-tomcat">
    <param name="tomcat.command.option" value="start"/>
  </antcall>
  <chmod file="${grails}" perm="u+x"/>
  <exec dir="${basedir}" executable="${grails}" failonerror="true">
    <arg value="-Dselenium.url=http://localhost:8080/${product.name}/"/>
    <arg value="test-app"/>
    <arg value="-functional"/>
    <arg value="-baseUrl=http://localhost:8080/${product.name}/"/>
    <env key="GRAILS_HOME" value="${grails.home}"/>
  </exec>
</target>

Stopping and starting Tomcat and deploying our application war in between fixes the PermGen space errors which are thrown after a few hot deployments. The baseUrl and selenium.url parameters tell the functional test plugin to target an externally running Tomcat. When you omit them, it starts the Tomcat and Grails application itself in its own process.

Release

Now all tests have passed and you are ready to deploy. So you fetch the last build … but wait! What happens if you have to redeploy and in between new builds have happened in the CI? To prevent this, we introduce a step before deployment: a release. This step just copies the artifacts from the last build and gives them the correct version. It also fetches the list of issues fixed in this version from our issue tracker (Jira) as a PDF. This list can be sent to the customer after a successful deployment.

Deploy

After releasing we can now deploy. This means fetching the war from the release job on our CI server and copying it to the target server. The procedure is then similar to the functional-test one, with some slight but important differences. First, we make a backup of the old war in case anything goes wrong and we have to roll back. Second, we also copy the context.xml file which Tomcat needs for the JNDI configuration. Note that we don’t need to copy over local data files like PDF reports or search indexes which were produced by our application; these live outside our web application root.

<target name="deploy">
  <antcall target="-fetch-artifacts"/>

  <scp file="${production.war}" todir="${target.server.username}@${target.server}:${target.server.dir}" trust="true"/>
  <scp file="${target.server}/context.xml" todir="${target.server.username}@${target.server}:${target.server.dir}/${production.config}" trust="true"/>

  <antcall target="-run-tomcat-remotely"><param name="tomcat.command.option" value="stop"/></antcall>

  <antcall target="-copy-file-remotely">
    <param name="remote.file" value="${tomcat.webapps.dir}/${production.war}"/>
    <param name="remote.tofile" value="${tomcat.webapps.dir}/${production.war}.bak"/>
  </antcall>
  <antcall target="-copy-file-remotely">
    <param name="remote.file" value="${target.server.dir}/${production.war}"/>
    <param name="remote.tofile" value="${tomcat.webapps.dir}/${production.war}"/>
  </antcall>
  <antcall target="-copy-file-remotely">
    <param name="remote.file" value="${target.server.dir}/${production.config}"/>
    <param name="remote.tofile" value="${tomcat.conf.dir}/Catalina/localhost/${production.config}"/>
  </antcall>

  <antcall target="-run-tomcat-remotely"><param name="tomcat.command.option" value="start"/></antcall>
</target>

Different Environments: Staging and Production

If you look closely at the deployment script you notice that it uses the context.xml file from a directory named after the target server. In practice you have multiple deployment targets, not just one. At the very least you have what we call a staging server. This server is used for testing the deployment and the deployed application before interrupting or corrupting the production system. It can even be used to publish a pre-release version for the customer to try. We use a separate job in our CI server for this. We separate the configurations needed for the different environments into directories named after the target server. What you shouldn’t do is include all those configurations in your development configuration. You don’t want to corrupt a production application when using the staging one, or when your tests run, or even when you are developing. So keep the configurations needed for the deployment environments separate from development and from each other.

Celebrate

Now you can deploy over and over again with just one click. This is something to celebrate. No more headaches, no more bitten fingernails. But you should nevertheless take care when you access a production system, even if it happens automatically. Something you didn’t foresee in your process could go wrong, or you could make a mistake when you try out the application via the browser. Since we need to be aware of this responsibility, everybody who interacts with a production system has to wear our cowboy hats. This is a conscious step to remind oneself to take care, and it also reminds everybody else not to disturb someone interacting with a production system. So don’t mess with the cowboy!

A touch of Grails: cache invalidation

In a recent post I introduced a caching strategy. One difficulty with caching is always when and where to invalidate the cache contents. One of the easiest things to do is to use timestamped cache keys like id-lastUpdated. But what if you cache another, related domain object under the same key? One option would be to include its timestamp in the key, too. Another one is to touch the timestamped object.

class A {
  Long id
  Date lastUpdated
}

class B {
  A a
  
  def beforeUpdate = {
    a.lastUpdated = new Date()
  }
}

So you would need this touch in beforeUpdate, afterInsert and beforeDelete. This can get pretty cumbersome. What if I could just declare that I want to touch another object when this object changes, like this:

class A {
  Long id
  Date lastUpdated
}

class B {
  static touch = ['a']

  A a
}

For this we just need to listen to the GORM persistence events.

class TouchPersistenceListener extends AbstractPersistenceEventListener {

  public TouchPersistenceListener(final Datastore datastore) {
    super(datastore)
  }

  @Override
  protected void onPersistenceEvent(AbstractPersistenceEvent event) {
    touch(event.entityObject)
  }

  public void touch(def domain) {
    MetaProperty touch = domain.metaClass.hasProperty(domain, 'touch')
    if (touch != null) {
      Date now = new Date()
      def listOfPropertiesToTouch = touch.getProperty(domain)
      listOfPropertiesToTouch.each { propertyName ->
        if (domain."$propertyName" == null) {
          return
        }
        if (domain."$propertyName" instanceof Collection) {
          domain."$propertyName".findAll {it}*.lastUpdated = now
        } else {
          domain."$propertyName".lastUpdated = now
        }
      }
    }
  }

  boolean supportsEventType(Class eventType) {
    return [PreInsertEvent, PostInsertEvent, PreUpdateEvent].contains(eventType)
  }
}

One caveat here is that all persistence events are fired by Hibernate during a flush. Especially the delete event caused problems, e.g. objects which have a session but whose lazy-loaded associations cannot be loaded. This is a known Hibernate bug which is fixed in 4.0.1, but Grails uses an older version. So we just decorate the save and delete methods of our domain classes and register our listener for the other events.

class BootStrap {
  def grailsApplication

  def init = { servletContext ->
    def ctx = grailsApplication.mainContext
    ctx.getBeansOfType(Datastore).values().each { Datastore d ->
      def touchListener = new TouchPersistenceListener(d)
      ctx.addApplicationListener touchListener
      decorateWith(touchListener)
    }
  }

  private void decorateWith(TouchPersistenceListener touchListener) {
    grailsApplication.domainClasses.each { GrailsClass clazz ->
      if (clazz.hasProperty('touch')) {
        def oldSave = clazz.metaClass.pickMethod('save', [Map] as Class[])
        def oldDelete = clazz.metaClass.pickMethod('delete', [Map] as Class[])
        clazz.metaClass.save = { params ->
          touchListener.touch(delegate)
          oldSave.invoke(delegate, [params] as Object[])
        }
        clazz.metaClass.delete = { params ->
          touchListener.touch(delegate)
          oldDelete.invoke(delegate, [params] as Object[])
        }
      }
    }
  }
}

Now we can just declare that changing this object touches another one, and it also works with 1:n or m:n associations. We don’t have to worry about where to invalidate the cache in our code: we just annotate every object used in the cache with touch if it has an association to the object used in the key, or include it in the key. Changing those objects invalidates the cache automatically.

Scaling your web app: Cache me if you can

Invalidation and transaction-aware caching using memcached, with Grails as an example

One of the biggest problems of caches is: how and when do I invalidate my cache content? When you read outdated data from the cache, you are toast.
For example, say we have a list of children elements inside a parent. Normally you would cache the children under the parent’s id:

cache[parent.id] = children

But how do you know if your cache content is still valid? When one child or the list of children changes, you write the new content into the cache:

cache[parent.id] = newChildren

But when do you update the cache? If you place the update code where the list of children is modified, the cache is updated before the transaction has ended, and you break the isolation. Another option would be to update after the transaction has been committed, but then you have to track all changes. There is a better way: use a timestamp from the database which becomes visible to other transactions only when it is committed. It should also be in the parent object, because you need this object for the cache key anyway. You could use lastUpdated or another timestamp for this, one that is updated when the children collection changes. The cache key is now:

cache[parent.id + '_' + parent.lastUpdated]

Now other transactions read the parent object, get the old timestamp and thus the old cache content until the transaction is committed; the transaction itself gets the new content. In Grails, if you change the collection, lastUpdated is automatically updated, and in Rails, with belongs_to and touch, even a change in a child updates the lastUpdated of the parent – no manual invalidation needed.
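
A minimal usage sketch under this scheme (memcachedService is the service shown below; the parent object and the child conversion are illustrative):

// the timestamp makes stale entries unreachable: old content simply
// expires instead of needing explicit invalidation
def cacheKey = "${parent.id}_${parent.lastUpdated.time}"
def children = memcachedService.get(cacheKey)
if (children == null) {
    children = parent.children.collect { [id: it.id, name: it.name] }
    memcachedService.set(cacheKey, children)
}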

Excursus: using memcached with Grails

If you want to use memcached from the JVM there is a good library which wraps the common calls: spymemcached. If you want to use spymemcached from Grails, you drop the jar into your lib folder and wrap it in a service:

import net.spy.memcached.AddrUtil
import net.spy.memcached.ConnectionFactoryBuilder
import net.spy.memcached.MemcachedClient
import org.springframework.beans.factory.InitializingBean

class MemcachedService implements InitializingBean {
  static final Object NULL = "NULL"
  MemcachedClient memcachedClient

  void afterPropertiesSet() {
    memcachedClient = new MemcachedClient(
      new ConnectionFactoryBuilder().setTranscoder(new CustomSerializingTranscoder()).build(),
      AddrUtil.getAddresses("localhost:11211")
    )
  }

  def connected() {
    return !memcachedClient.availableServers.isEmpty()
  }

  def get(String key) {
    return memcachedClient.get(key)
  }

  def set(String key, Object value) {
    memcachedClient.set(key, 600, value)   // expire after 600 seconds
  }

  def clear() {
    memcachedClient.flush()
  }
}

Spymemcached serializes your cache content, so you need to make all your cached classes implement Serializable. Since Grails uses its own class loaders, we had problems with deserializing and used a custom serializing transcoder to get the right class loader (taken from this issue):

import java.io.ByteArrayInputStream;
import java.io.Closeable;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectStreamClass;

import net.spy.memcached.transcoders.SerializingTranscoder;

public class CustomSerializingTranscoder extends SerializingTranscoder {

  @Override
  protected Object deserialize(byte[] bytes) {
    final ClassLoader currentClassLoader = Thread.currentThread().getContextClassLoader();
    ObjectInputStream in = null;
    try {
      ByteArrayInputStream bs = new ByteArrayInputStream(bytes);
      in = new ObjectInputStream(bs) {
        @Override
        protected Class<ObjectStreamClass> resolveClass(ObjectStreamClass objectStreamClass) throws IOException, ClassNotFoundException {
          try {
            return (Class<ObjectStreamClass>) currentClassLoader.loadClass(objectStreamClass.getName());
          } catch (Exception e) {
            return (Class<ObjectStreamClass>) super.resolveClass(objectStreamClass);
          }
        }
      };
      return in.readObject();
    } catch (Exception e) {
      e.printStackTrace();
      throw new RuntimeException(e);
    } finally {
      closeStream(in);
    }
  }

  private static void closeStream(Closeable c) {
    if (c != null) {
      try {
        c.close();
      } catch (IOException e) {
        e.printStackTrace();
      }
    }
  }
}

With the connected method shown above you can check whether any memcached instances are available, which is better than calling a method and waiting for the timeout.

Now you can inject your Service where you need to and cache along.

Cache the outermost layer

If you use Hibernate you get database-level caching almost for free, so why bother using another cache? In one application we used Hibernate to fetch a large chunk of data from the database, and even with caches it took 100 ms. Measuring the code showed that the processing of the data (conversion for the client) took by far the biggest chunk. Caching the processed data led to 2 ms for the whole request. So one takeaway here is that caching the results of (user-independent) calculations and conversions can speed up your request even further. For static resources you can also use HTTP caching directives.

Summary of the Schneide Dev Brunch at 2013-06-16

If you couldn’t attend the Schneide Dev Brunch in June 2013, here are the main topics we discussed, summarized as well as I remember them.

A week ago, we held another Schneide Dev Brunch. The Dev Brunch is a regular brunch on a Sunday, only that all attendees want to talk about software development and various other topics. If you bring a software-related topic along with your food, everyone has something to share. The brunch was very well-attended this time. We had bright sunny weather and used our roof garden to catch some sunrays. There were lots of topics and chatter, so let me try to summarize a few of them:

Introduction to Dwarf Fortress

The night before the Dev Brunch, we held another Schneide event, an introduction to the sandbox-type simulation game “Dwarf Fortress”. The game thrives on its dichotomy of a ridiculous depth of detail (like simulating the fate of every sock in the game) and a general breadth of visualization, where every character of ASCII art can mean at least a dozen things, depending on context. If you can get used to the graphics and the rather crude controls, it will probably fascinate you for a long time. It fascinated us that night a lot longer than anticipated, but we finally managed to explore the big underground cave we accidentally spudded while digging for gold (literally).

Refactoring Golf

A week before the Dev Brunch, we held yet another Schneide event, a Refactoring Golf contest. Don’t worry, this was a rather coincidental clustering of appointments. This event will get its own blog entry soon, as it was really surprising. We used the courses published by Angel Núñez Salazar and Gustavo Quiroz Madueño and only translated their presentation. We learned that every IDE has individual strong points and drawbacks, even with rather basic usage patterns. And we learned that being able to focus on the “way” (the refactorings) instead of the “goal” (the final code) really shifts perception and frees your thoughts. But so little time! When was real golf ever so time-pressured? It was lots of fun.

Grails: the wrong abstraction?

The discussion soon drifted to the broad topic of web application frameworks and Grails in particular. We discussed its inability to “protect” the developer from the details of HTTP and HTML imperfection and compared it to other solutions like Qt’s QML, JavaFX or EMF. Soon, we revolved around AngularJS and JAX-RS. I’m not able to fully summarize everything here, but one sentence sticks out: “AngularJS is the Grails for Javascript developers”.

Another interesting fact is that we aren’t sure which web application framework we should/would/might use for our next project. Even “write your own” seemed a viable option. How history repeats itself!

If you have to pick a web application framework today, you might want to listen to Matt Raible of AppFuse fame for a while. Also, there is the definition of ROCA-style frameworks out there.

There were a few more mentions of frameworks like RequireJS, leading to Asynchronous Module Definition (AMD)-style systems. All in all, the discussion was very inspiring and an incentive to look at tools and frameworks that might not cross your path on other occasions.

Principle of Mutual Oblivion

The “Principle of Mutual Oblivion” or PoMO is an interesting way to think about dependencies between software components. The blog entries about it are only available in German so far. We discussed the approach for a bit and could see how it leads to “one tool for one job”. But we could also see drawbacks if applied to larger projects. Interesting, nonetheless.

Kanban

We also discussed the project management process Kanban for a while. The best part of the discussion was the question “why Kanban?” and the answer “it has fewer rules than SCRUM”. It is astonishing how processes can produce frustration, or perhaps more specifically, uncover frustration.

Object Calisthenics workshops

Yet another workshop report, this time from two identical workshops applying the Object Calisthenics rules to a limited programming task. The participants were students that had just learned about the rules. This might also be worked up into a full blog entry, because it was very insightful to watch both workshops unfold. The first one ended in cathartic frustration while the second workshop was concluded with joy about working programs. To circumvent the restraining rules of the Object Calisthenics, the approach used most of the time was to move the problem to another class. Several moves and numerous classes later, the rules still formed an impenetrable barrier, but the code was bloated beyond repair.

Epilogue

As usual, the Dev Brunch contained a lot more chatter and talk than listed here. The high number of attendees makes for a unique experience every time. We are looking forward to the next Dev Brunch at the Softwareschneiderei. And as always, we are open for guests and future regulars. Just drop us a notice and we’ll invite you over next time.

Summary of the Schneide Dev Brunch at 2012-10-14

If you couldn’t attend the Schneide Dev Brunch in October 2012, here are the main topics we discussed neatly summarized.

Two weeks ago, we held another Schneide Dev Brunch. The Dev Brunch is a regular brunch on a Sunday, only that all attendees want to talk about software development and various other topics. If you bring a software-related topic along with your food, everyone has something to share. The brunch was so well attended this time that we had trouble finding a chair for everyone and fitting at the table. We had to stay inside, as the weather was rainy and too cold for prolonged outdoor sessions. Let’s have a look at the main topics we discussed:

Work hard, play hard

The first topic was a summary of the contents of the documentary movie “work hard play hard” about our modern workplaces. The documentary is a recommended watch for everyone thinking about joining this side of the industry. It’s beautiful sometimes and very painful to watch most times. You might cherish some of the rougher edges of your workplace afterwards. The DVD is out now.

Dual Monitoring

A short discussion about the efficiency increase that happens just by adding another monitor to your desk. There was no dispute: if you don’t at least try it, you waste money. That’s what I meant when I blogged about the second monitor being a profitable investment. Just one downfall: it shouldn’t end like this.

Management by Directive

Another discussion about the management of large departments. The “directive issuer” manager style is a common sight in this environment. I won’t repeat the discussion itself, but rather add an amusing story about an ex-military commander running a software development company. Enjoy!

Review of the Sneak Preview “Quality Assurance Best Practices in Karlsruhe”

There was a “sneak preview” organised by the VKSI, a local association of software engineers, a few weeks ago. The topic of the whole event was “Quality Assurance Best Practices in Karlsruhe”. The event was divided into three independent presentations on different topics:

  • “Non-Functional Software Tests” by Gebhard Ebeling: The talk was about realistic load- and performance-testing of complex applications (and websites). While the presentation omitted tools and code completely, there were some take-aways even for developers that had never performed these types of tests before. This was arguably the best presentation of the event.
  • “Contracts im Software Engineering” by Ben Romberg and Stefan Schürle: This talk was about the benefits of software contracts (think of checked method or class invariants) and the presentation of a particular implementation in Java, namely C4J. The perceived problem with this solution was the rather clumsy source code necessary to define the contracts.
  • “MoDisco Software Modernization & Analysis” by Benjamin Klatt: MoDisco builds a model out of source code that is detailed enough to apply meaningful transformations to it and have the exact same source code (plus transformed code) as output. The idea looked very promising, but the presentation lacked actual source code examples. Nonetheless, MoDisco proves that there is a future for model-driven analysis.

We had a lengthy discussion about software contracts and Design By Contract (DBC) in general. One tool that got mentioned several times was “CoFoJa” from (at least initially) Google.

Book review: Java Application Architecture: Modularity Patterns with Examples Using OSGI

In this rather new book from the Robert C. Martin signature series, Kirk Knoernschild tackles the hard task of teaching software architecture through a book. One participant read the book and is very happy about the experience and insight he got from it. The book itself is repetitive at times, but that adds to the accessibility of the topic at hand when you jump right into a chapter. In addition to the modularity and architecture aspects, you’ll learn OSGI through the code examples. This book gets a recommendation.

Book review: ATDD by Example

Another new book is from Markus Gärtner, this time of the Kent Beck signature series. It takes the reader by the hand and shows a way to use Cucumber, FitNesse and of course Behavior-Driven Development as a tool-and-process framework to implement (Acceptance-)Test-Driven Development. None of our participants has read the book fully yet, but it’s already a promising start. If you are looking for a new book about testing (after having read the great GOOS book), don’t hesitate. Another recommendation to read.

Visitor design pattern breaks modularization

One participant brought up the problem that he wanted absolute modularization in his application layout, but used a visitor design pattern at some central place. This breaks modularization, as the type information is exposed too much. We discussed the problem with some diagrams and sketches and came up with several alternatives, each with their own advantages and drawbacks. That was a great code design session among seasoned professionals.

Why are services included into Grails?

Another discussion was about the Grails web framework and whether an explicit service layer or dedicated service classes are necessary. We sketched out the fundamental architecture of a Grails application and discussed different possible alternatives to a dedicated service layer. There are some nice features about Grails services (like injection by convention, transactions and scope), but nothing really sophisticated enough to distinguish them from POGOs. The discussion was open-ended, as usual with complex topics.

Review of a workshop on agile software-engineering

Lately, a participant visited a workshop on agile software engineering, focusing a lot on SCRUM and XP. The workshop ran for several days and included lots of hands-on exercises. It provided not much new content for seasoned agile developers, but served as an accurate and thorough introduction for younger developers. A major part of the workshop covered the social aspects of agile environments; concepts like team empowerment are usually not taught in technical workshops. Important additional topics included agile planning and estimation and proper retrospectives. The workshop itself was more of an entry-level introduction to agile development, but very effective in that regard.

Epilogue

As usual, the Dev Brunch contained a lot more chatter and talk than listed here. The high number of attendees makes for a unique experience every time. We are looking forward to the next Dev Brunch at the Softwareschneiderei. And as always, we are open for guests and future regulars. Just drop us a notice and we’ll invite you over next time.