Small cause – big effect (Story 2)

A short story about when Wethern’s Law of Suspended Judgement caused some trouble on the production system.

This is another story about a small misstep during programming that caused a lot of trouble down the line. The core problem has nothing to do with a specific programming language or any technical aspect, but with the concept of assumptions. Let’s have a look at what an assumption really is:

assumption: The act of taking for granted, or supposing a thing without proof; a supposition; an unwarrantable claim.

A whole lot of our daily work is based on assumptions, even about technical details that we could easily prove or refute with a simple test. And that isn’t necessarily a bad thing. As long as the assumptions are valid or the consequences of false ones stay in the harmless spectrum, relying on them saves us valuable time and resources.

A hidden risk

But what if we aren’t aware of our assumptions? What if we believe them to be facts and build functionality on them without proper validation? That’s when little fiascos happen.

In his chapter “Challenge assumptions – especially your own” in the book “97 Things Every Software Architect Should Know”, Timothy High cites the lesser-known Wethern’s Law of Suspended Judgement:

Assumption is the mother of all screw-ups.

Much too often, we only realize afterwards that the crucial fact we relied on wasn’t that trustworthy. It was another false assumption in disguise.

Distributed updates

But now it’s storytime again: Some years ago, a software company created a product with an ever-changing core. The program was used by quite a few customers and could be updated over the internet. If you want a current analogy, think of an anti-virus scanner with its relatively static scanning unit and the virus signature database that gets outdated within days. To deliver the update patches to the customer machines, a single update server was sufficient. The update patches were small and were published weekly or daily, so a dedicated machine could handle the load.

To determine if it needed another patch, the client program contacted the update server and downloaded a single text file that contained the list of all available patches. Because the patches were incremental, the client program needed to determine its patch level, calculate the missing patches and download and apply them in the right order. This procedure did its job for some time, but one day, the size of the patch list file exceeded the size of a typical patch, so that the biggest part of the network traffic was consumed by the initial list download.
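
The client-side calculation reduces to something like this minimal Java sketch (the patch numbering and all names are assumptions for illustration, not the original code):

import java.util.List;
import java.util.stream.Collectors;

// Hypothetical sketch of the client-side patch selection described above.
public class PatchSelector {
  // Incremental patches must be applied in ascending order, so the client
  // filters out everything it already has and sorts the rest.
  public static List<Integer> missingPatches(int currentPatchLevel, List<Integer> availablePatches) {
    return availablePatches.stream()
        .filter(patch -> patch > currentPatchLevel)
        .sorted()
        .collect(Collectors.toList());
  }
}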

Cutting network traffic

A short-term measure was to shorten the patch list by removing ancient versions that were surely out of use by then. But it became apparent that the patch list file would be the system’s Achilles heel sooner or later, especially if the update frequency moved up a gear, as was planned. So the plan was to eliminate the need to download the patch list if it had not changed since the last contact. Instead of downloading the patch list directly, the client would now download a file containing only the modification date of the patch list file. If the modification date was after the last download, it would continue to download the patch list. Otherwise, it would refrain from downloading anything, because no new patches could be present.
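
In essence, the new client logic boiled down to a comparison like this minimal Java sketch (assumed names and types, not the original code):

import java.time.Instant;

// Hypothetical sketch of the cheap pre-check described above.
public class UpdateCheck {
  // Only the tiny modification date file is downloaded on every contact;
  // the large patch list is fetched only if this returns true.
  public static boolean patchListDownloadNeeded(Instant lastSuccessfulDownload,
                                                Instant patchListModificationDate) {
    // the hidden assumption: a changed modification date is treated as
    // equivalent to changed content
    return patchListModificationDate.isAfter(lastSuccessfulDownload);
  }
}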

Discovering the assumption

The new system was developed and put into service shortly before the constant traffic would exhaust the capabilities of the single update server. But the company admins watched in horror as the traffic didn’t abate, but continued on the previous level and even increased a bit. The number of requests to the server nearly doubled. Apparently, the clients now always downloaded the modification date file and the patch list file, regardless of their last contact. The developers must have screwed up their implementation.

But no error was found in the code. Everything worked just fine, if – yes, if (there’s the assumption!) the modification date was correct. A quick check revealed that the date in the modification date file was in fact the modification date of the patch list file. This wasn’t the cause of the problem either. Until somebody discovered that the number in the modification date file on the update server changed every minute. And that in fact, the patch list file got rewritten very often, regardless of changes in its content. A simple cron job, running every minute, pushed the latest patch list file from the development server to the update server, changing its modification date in the process and editing the modification date file accordingly.

The assumption was that if the patch list file’s content did not change, the same would be true for its modification date. This held true during development, but once the cron job got involved, the assumption went into sabotage mode on the live system.

Record and challenge your assumptions

The developers cannot be blamed for the error in this story. But they could have avoided it if they had adopted the habit of recording their assumptions and communicating them to the administrators in charge of the live system. The first step towards this goal is to make the process of relying on assumptions visible to oneself. Once you know about your assumptions, you can record them, test against them (to make them facts in your world) and subsequently communicate them to your users. This whole process won’t even start if you aren’t aware of your assumptions.

Know Your Tools: Why Mockito’s when() works

Some days ago, my colleague asked how Mockito can differentiate between a method invocation outside of an expectation and one inside. If you want to know it too, read on.

The difference

Typically, a mocking framework follows a Record/Replay/Verify model. In the first phase the expectations are recorded, in the second the mocked methods are called by the code under test, and finally the expectations are verified. Consider an example with EasyMock, straight from its documentation:

//record
mock = createMock(Collaborator.class);
mock.documentAdded("New Document");
//replay
replay(mock);
classUnderTest.addDocument("New Document", new byte[0]);
//verify
verify(mock);

Now, with Mockito the difference between the phases is not as clear as with EasyMock:

//record
LinkedList mockedList = mock(LinkedList.class);
when(mockedList.get(0)).thenReturn("first");
//replay
System.out.println(mockedList.get(0));
//verify
verify(mockedList).get(0);

The invocation of get() is evaluated before the invocations of when() or println(), so there is no way to change the phase before the call. There is also no way to tell whether the current expectation is the last one in order to start the replay mode automatically. How does it work then? All the necessary code is contained in the following classes: MockitoCore, MockHandlerImpl, OngoingStubbing and MockingProgressImpl with its wrapper ThreadSafeMockingProgress.

Record

//record
LinkedList mockedList = mock(LinkedList.class);
when(mockedList.get(0)).thenReturn("first");

In the second line, a mock is created via the mock() method. This call is delegated to MockitoCore, which initiates the creation of a proxy and registers MockHandlerImpl as the handler for its invocations.

The third line actually contains three steps. First, the method to stub is invoked on the mock. Because MockHandlerImpl has been registered for all method calls on this proxy, it is now called. It keeps the current invocation, adds it to the list of all recorded invocations and creates the object that collects the expectations, the OngoingStubbing. The OngoingStubbing instance is stored in an instance of MockingProgressImpl. To keep this instance between the calls to the framework, a ThreadLocal member of the singleton ThreadSafeMockingProgress is used. Since no stubbed answer exists yet for the mocked call, a default result is returned. The second step is the invocation of when(), which returns the OngoingStubbing instance previously deposited by MockHandlerImpl in MockingProgressImpl. OngoingStubbing implements the method thenReturn(), which is used to record the expected result in the third step. The result and the cached invocation are then saved together, ready to be retrieved. During this process, the invocation is “consumed” and removed from the list of recorded invocations.
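
To make this mechanism tangible, here is a highly simplified sketch of the underlying idea – this is not Mockito’s actual code, just an illustration of the thread-local hand-off between the mock and when():

import java.util.HashMap;
import java.util.Map;

public class MiniStubbing {
  // parks the last invocation per thread, like ThreadSafeMockingProgress
  private static final ThreadLocal<String> LAST_INVOCATION = new ThreadLocal<>();
  private static final Map<String, Object> STUBBED_ANSWERS = new HashMap<>();

  // called by the mock's invocation handler for every method call
  static Object handleInvocation(String invocation) {
    LAST_INVOCATION.set(invocation);
    return STUBBED_ANSWERS.get(invocation); // default (null) if not stubbed
  }

  // when() ignores its argument; the interesting state sits in the ThreadLocal
  static OngoingStub when(Object ignoredResult) {
    return new OngoingStub(LAST_INVOCATION.get());
  }

  static class OngoingStub {
    private final String invocation;

    OngoingStub(String invocation) {
      this.invocation = invocation;
    }

    void thenReturn(Object answer) {
      STUBBED_ANSWERS.put(invocation, answer);
      LAST_INVOCATION.remove(); // the invocation is "consumed"
    }
  }
}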

Replay

//replay
System.out.println(mockedList.get(0));

In line five, the method get() is called again. Since a result has been defined for it, MockHandlerImpl returns the retrieved result to the caller. The call is recorded and stored for further use.

Verify

//verify
verify(mockedList).get(0);

Verification also consists of multiple steps. The call to verify() marks the end of stubbing and sets the verification mode. In the following call to get(), MockHandlerImpl can tell the phases apart on the basis of the verification mode that has been set, and passes the recorded invocations to the verification code.

Final thoughts

The developers of Mockito achieved a lot with simple constructs like singletons and shared state. The stuff behind the syntactic sugar is sometimes even considered magic. I hope that, after reading this article, you no longer believe in magic but use your knowledge to create similarly great frameworks.

Another point: Since Mockito uses ThreadLocal as storage for its state, is it possible to confuse it by using multiple threads? What do you think?

Special upgrade notes for Grails 1.3.x to 2.2.x

Usually there are quite extensive upgrade notes that should take you from one Grails release to another. Every now and then, though, there are subtle changes in behaviour that may break your application without being mentioned in the notes. We maintain some Grails applications started years ago in the Grails 1.0.x era and have gathered a bucket full of experience upgrading between major releases.

Here are our special upgrade notes for 1.3.x to 2.2.x:

  • domain constructors with default parameters lead to DuplicateMethodErrors. The easy fix is to change code like
    public MyDomain(def number = 0) {
        ...
    }
    

    to

    public MyDomain() {
        this(0)
    }
    
    public MyDomain(def number) {
        ...
    }
    
  • private static classes are disallowed in controllers. So in general avoid visibility modifiers for multiple classes in one file.
  • If you use Apache Shiro with the Grails Shiro Plugin for authentication, you will have to do some work for existing accounts to keep working, because the default CredentialMatcher changed from SHA1 to SHA256. To get the old behaviour, add the following to conf/spring/resources.groovy:
    import org.apache.shiro.authc.credential.Sha1CredentialsMatcher
    
    beans = {
        ...
        credentialMatcher(Sha1CredentialsMatcher) {
            storedCredentialsHexEncoded = true
        }
        ...
    }
    
    
  • A domain class property or even a domain class with the name “environment” clashes with a Spring bean (GRAILS-7851) and leads to unexpected effects. Renaming the property or class is a viable workaround.
  • Namespacing in tag libs is broken, so you cannot name a local variable “properties”:
        def myTag = { attrs, body ->
            String properties = 'some string'
            ...
        }

    This leads to the bogus error

    [groovyc] TagLib.groovy: -1: The return type of java.lang.String getProperties() in TagLib$_closure24_closure87 is incompatible with java.util.Map getProperties() in groovy.lang.Closure.

    Simply renaming the variable to something like props fixes the problem.

  • Migrations need package statements if you organize them in subdirectories.

In addition to the changes mentioned in the official release notes, solving the issues above made our application work again with the latest and greatest Grails release.

Scaling your web app: Cache me if you can

Invalidation- and transaction-aware caching using memcached, with Grails as an example

One of the biggest problems with caches is: how and when do I invalidate my cache content? When you read outdated data from the cache, you are toast.
Take, for example, a list of children elements inside a parent. Normally you would cache the children under the parent’s id:

cache[parent.id] = children

But how do you know if your cache content is still valid? When one child or the list of children changes, you write the new content into the cache:

cache[parent.id] = newChildren

But when do you update the cache? If you place the update code where the list of children is modified, the cache is updated before the transaction has ended, and you break the isolation. Another option would be to update the cache after the transaction has been committed, but then you have to track all changes. There is a better way: use a timestamp from the database which becomes visible to other transactions only when the writing transaction is committed. It should live in the parent object, because you need this object for the cache key anyway. You could use lastUpdated or another timestamp that is updated when the children collection changes. The cache key is now:

cache[parent.id + '_' + parent.lastUpdated]

Now other transactions read the parent object and get the old timestamp, and thus the old cache content, until the writing transaction is committed. The transaction itself gets the new content. In Grails, lastUpdated is automatically updated if you change the collection, and in Rails, with belongs_to and touch, even a change in a child updates the lastUpdated of the parent – no manual invalidation needed.
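
Here is a minimal read-through sketch of this key scheme in Java – Parent, Child and the map-based cache are hypothetical stand-ins, not Grails code:

import java.util.Date;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class ChildrenCache {
  // hypothetical stand-ins for the domain classes
  static class Child {}

  static class Parent {
    long id;
    Date lastUpdated; // bumped by the ORM whenever the children change
    List<Child> children;

    List<Child> loadChildren() { return children; } // would hit the database
  }

  private final Map<String, Object> cache = new HashMap<>(); // stands in for memcached

  @SuppressWarnings("unchecked")
  public List<Child> childrenOf(Parent parent) {
    // the key embeds the timestamp: a committed change produces a new key,
    // so readers automatically miss the stale entry instead of reading it
    String key = parent.id + "_" + parent.lastUpdated.getTime();
    List<Child> children = (List<Child>) cache.get(key);
    if (children == null) {
      children = parent.loadChildren();
      cache.put(key, children);
    }
    return children;
  }
}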

Excursus: using memcached with Grails

If you want to use memcached from the JVM, there is a good library which wraps the common calls: spymemcached. To use spymemcached from Grails, you drop the jar into your lib folder and wrap it in a service:

import net.spy.memcached.AddrUtil
import net.spy.memcached.ConnectionFactoryBuilder
import net.spy.memcached.MemcachedClient
import org.springframework.beans.factory.InitializingBean

class MemcachedService implements InitializingBean {
  static final Object NULL = "NULL"
  MemcachedClient memcachedClient

  void afterPropertiesSet() {
    memcachedClient = new MemcachedClient(
      new ConnectionFactoryBuilder().setTranscoder(new CustomSerializingTranscoder()).build(),
      AddrUtil.getAddresses("localhost:11211")
    )
  }

  def connected() {
    return !memcachedClient.availableServers.isEmpty()
  }

  def get(String key) {
    return memcachedClient.get(key)
  }

  def set(String key, Object value) {
    // expire entries after 600 seconds
    memcachedClient.set(key, 600, value)
  }

  def clear() {
    memcachedClient.flush()
  }
}

Spymemcached serializes your cache content, so you need to make all your cached classes implement Serializable. Since Grails uses its own class loaders, we had problems with deserialization and used a custom serializing transcoder to get the right class loader (taken from this issue):

import java.io.ByteArrayInputStream;
import java.io.Closeable;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectStreamClass;

import net.spy.memcached.transcoders.SerializingTranscoder;

public class CustomSerializingTranscoder extends SerializingTranscoder {

  @Override
  protected Object deserialize(byte[] bytes) {
    // use the webapp's class loader instead of the one spymemcached was loaded with
    final ClassLoader currentClassLoader = Thread.currentThread().getContextClassLoader();
    ObjectInputStream in = null;
    try {
      ByteArrayInputStream bs = new ByteArrayInputStream(bytes);
      in = new ObjectInputStream(bs) {
        @Override
        protected Class<?> resolveClass(ObjectStreamClass objectStreamClass) throws IOException, ClassNotFoundException {
          try {
            return currentClassLoader.loadClass(objectStreamClass.getName());
          } catch (Exception e) {
            return super.resolveClass(objectStreamClass);
          }
        }
      };
      return in.readObject();
    } catch (Exception e) {
      e.printStackTrace();
      throw new RuntimeException(e);
    } finally {
      closeStream(in);
    }
  }

  private static void closeStream(Closeable c) {
    if (c != null) {
      try {
        c.close();
      } catch (IOException e) {
        e.printStackTrace();
      }
    }
  }
}

With the connected() method you can check if any memcached instances are available, which is better than calling a method and waiting for the timeout:

def connected() {
  return !memcachedClient.availableServers.isEmpty()
}

Now you can inject your service wherever you need it and cache away.

Cache the outermost layer

If you use Hibernate, you get database-backed caching almost for free, so why bother using another cache? In one application we used Hibernate to fetch a large chunk of data from the database, and even with caches it took 100 ms. Measuring the code showed that the processing of the data (conversion for the client) took by far the biggest chunk. Caching the processed data led to 2 ms for the whole request. So one takeaway here is that caching the results of (user-independent) calculations and conversions can speed up your requests even further. For static resources you can additionally use HTTP caching directives.

Small cause – big effect (Story 1)

This is a story about a small programming error that caused a company-wide chaos.

Today’s modern software development is very different from the low-level programming you had to endure in the early days of computing. Computer science invented concepts like “high-level programming languages” that abstract away a lot of the menial tasks necessary to perform the desired functionality. Examples of these abstractions are register allocation, memory management and control structures. But even if we step into programming at a “high level”, we build whole cities of skyscrapers (metaphorically speaking) on top of those foundation layers. And still, if we make errors in the lowest building blocks of our buildings, they will come crashing down like a tower in a game of Jenga.

This story illustrates one such error in a small and low building block that had widespread and expensive effects. And while the problem might seem perfectly avoidable in isolation, in reality there are no silver bullets that will solve those problems completely and forever.

The story is altered in some aspects to protect the innocent, but the core message wasn’t changed. I also want to point out that our company isn’t connected to this story in any way other than being able to retell it.

The setting

Back in the days of unreliable and slow dial-up internet connections, there was a big company-wide software system that connected all clients, desktop computers and notebooks alike, with a central server that published important news and documentation updates. The company developing and using the system had numerous field workers who connected from a hotel room somewhere near the next customer, let the updates run and disconnected again. They relied on extensive internal documentation that had to be up to date when working offline at the customer’s site.

Because the documentation was too big to be transferred completely after each change, but not partitioned enough to use a standard tool like rsync, the company developed a custom solution written in Java. A central server kept track of every client and all updates that needed to be delivered. This was done using a Set, more specifically a HashSet, for each client. Imagine a HashMap (or dictionary) with only keys, but no payload values. The Set’s entries were the update command objects themselves. With this construct, whenever a client connected, the server could just iterate over the corresponding Set and execute every object in it. After an execution succeeded, the object was removed from the Set.

Going live

The system went live and worked instantly. All clients were served their updates, and the computational requirements of the central server were low because only few decisions needed to be made. The internet connection was the bottleneck, as expected. But it soon turned out to be too much of a bottleneck. More and more clients didn’t get all their updates. Some were provided with the most recent updates but lacked older ones. Others only got the older ones.

The administrators asked for a bigger line and got it installed. The problems thinned out for a while, but soon returned as strong as ever. Apparently, it wasn’t a problem of raw bandwidth. The developers had a look at their data structure and discovered that a HashSet doesn’t guarantee the order of traversal, so old and new updates could easily get mixed up. But that shouldn’t be a problem, because once the updates were delivered, they would be removed from the Set. And all updates had to be delivered anyway, regardless of age.

Going down

Then the central server instance stopped working with an OutOfMemoryError. The heap space of the Java virtual machine was used up by update command objects, sitting in their HashSets waiting to be executed and removed. It was clear that there were far too many update command objects to come up with a reasonable explanation. The part of the system that generated the update commands was reviewed and double-checked. No errors related to the problem at hand were found.

The next step was a review of the algorithm for iterating, executing and removing the update commands. And there, right in the update command class, the cause was found: the update command objects calculated their hashcode value based on their data fields, including the command’s execution date. Every time the update command was executed, this date field was updated to the most recent value. This caused the hashcode value of the object to change, which in turn had the effect that the update command object couldn’t be removed from the Set, because the HashSet implementation relies on the hashcode value to find its objects. You could ask the Set if the object was still contained and it would answer “no”, but it would still include the object in each loop over the content.

The cause

The Sets with update commands for the clients always grew in size, because once an update command object was executed, it couldn’t be removed but appeared absent. Whenever a client connected, it got served all update commands since the beginning, over and over again, in semi-random order. This explained why sometimes the most recent updates were delivered while older ones were still missing. It also explained why the bandwidth was never enough and all clients lacked updates sooner or later.

The cost of this excessive update orgy was substantial: numerous clients leeched all the (useless) data they could get until the connection was cut, day after day, over expensive long-distance calls. Yet they lacked crucial updates, which caused additional harm and chaos. And all this damage could be tracked down to a simple programming error:

Never include mutable data in the calculation of a hash key.
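
A minimal Java demonstration of the effect (a hypothetical command class, not the original code) – after the mutation, the object hashes into a different bucket, so contains() and remove() fail while iteration still yields it:

import java.util.HashSet;
import java.util.Objects;
import java.util.Set;

public class MutableHashDemo {
  static class UpdateCommand {
    long executionDate; // mutated on every execution

    @Override
    public int hashCode() {
      return Objects.hash(executionDate); // includes mutable data: the bug
    }

    @Override
    public boolean equals(Object other) {
      return other instanceof UpdateCommand
          && ((UpdateCommand) other).executionDate == executionDate;
    }
  }

  public static void main(String[] args) {
    Set<UpdateCommand> pending = new HashSet<>();
    UpdateCommand command = new UpdateCommand();
    pending.add(command);

    command.executionDate = 1; // "executing" the command updates the date field

    System.out.println(pending.contains(command)); // false: looks absent
    System.out.println(pending.remove(command));   // false: cannot be removed
    System.out.println(pending.size());            // 1: but it is still there
  }
}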

Performance considerations with network requests, database queries and other IO

Today’s processors, memory and other subsystems are wicked fast. Nevertheless, many applications feel sluggish. In my experience this is true for client and server applications and not limited to specific scenarios. So the question is: why?

Many developers rush straight into optimizing their code to save CPU cycles. Most of the time that’s not the real problem. The most important rule of performance optimisation stays true: measure first!

Often you will find your application spending the greater part of its running time waiting for input/output (IO). Common sources of IO are database queries, network/HTTP requests and file system operations. Many developers are aware of these facts, but we see this problem very often, whether in in-house or on-site customer projects.

Profile the unresponsive/slow parts of your application and check especially for hidden excess IO. Here are some Java examples:

  • The innocent-looking method File.isFile() typically does a seek on the hard drive on each call. Using it in a loop over several dozens of files will slow you down massively.
  • The java.net.URL class does network requests for hashCode() and equals()! Never use it in collections, especially HashMaps. It is better to use java.net.URI for managing the resource location and only convert to URL when needed (see the sketch after this list).
  • Using an object-relational mapping (ORM) tool like Hibernate, most people default to lazy loading. If your usage pattern requires loading the referenced objects all or most of the time, you will get many additional database requests, at least one for each accessed association. In such cases it is most likely better to use eager fetching, because the network and query overhead is reduced drastically and the data has to be loaded anyway.
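
Illustrating the URL/URI point above, here is a small sketch (hypothetical class and names) that keeps URIs as map keys and converts to URL only at the moment of access:

import java.net.MalformedURLException;
import java.net.URI;
import java.net.URL;
import java.util.HashMap;
import java.util.Map;

public class ResourceRegistry {
  private final Map<URI, String> descriptions = new HashMap<>();

  public void register(URI location, String description) {
    descriptions.put(location, description); // URI.hashCode() does no network IO
  }

  public String describe(URI location) {
    return descriptions.get(location); // lookups stay purely local, too
  }

  public URL locate(URI location) throws MalformedURLException {
    return location.toURL(); // convert only when the resource is actually needed
  }
}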

So if you have performance and/or responsiveness problems, keep an eye on your IO patterns and optimize the algorithms to reduce IO. Usually this will help you much more than micro-optimisation of your application code.

C/C++ pitfalls for Java developers

Java and C/C++ have concepts that are similar enough to get an inexperienced Java developer confused. Here I want to show you some mistakes I found or made myself.

Type conversion rules

A well-known and often-used pattern is the simultaneous assignment of an expression to a variable and its comparison with another value.

if((a = b) != c) {
  // do something
}

In both Java and C, this code behaves the same. The problem arises when a parenthesis is misplaced, resulting in the assignment of a boolean expression to a:

if((a = b != c)) {
  // do something
}

Since a boolean expression can be converted to an integer and the assignment expression is wrapped in parentheses, the compiler may not even issue a warning. In Java this code isn’t legal anymore, while it is perfectly fine in C. The error strikes hardest when the result of the comparison, namely 0 or 1, is a valid value. A good example is a call to socket(), which may return 0 as a file descriptor for stdin. Probably the simplest solution to this problem is separating the assignment from the comparison – even at the cost of a temporary variable.

Memory management

The behaviour of standard containers is sometimes combined with incomplete/misunderstood behaviour of pointers. An example:

#include <vector>

class A {};

class B
{
  public:
  void foo()
  {
    std::vector<A*> theContainer;
    for(int i = 0; i < 100; i++) {
      theContainer.push_back(new A());
    }
  }
};

Every call to foo() results in a memory leak, because the A instances are never deleted. When the vector is destructed, the destructor of each contained item is called. For pointers and other scalar types this is a no-op, so the destructor of the pointed-to class is never invoked. A solution to this problem could be the use of smart pointers wrapping the raw pointers, or the explicit destruction of the pointed-to objects before the vector goes out of scope.

Deterministic destruction

Coming from a language with automatic memory management, there is some uncertainty when it comes to the order of destruction when multiple objects leave the scope. Consider this example:

#include <fstream>
#include <mutex>

std::mutex mutex;

void foo()
{
  std::lock_guard<std::mutex> lock(mutex);
  std::ifstream input ....

  //some operations

  //??
}

In this case, objects are destructed in the reverse order of their construction: the stream is destructed first, guaranteeing that it is already closed when the lock is released. This pattern is exploited by RAII.

Exception handling

This is my personal favourite. Here is a little quiz: what is printed to the screen?

try {
  throw new SomeException();
} catch (SomeException& e) {
  std::cout << "first" << std::endl;
} catch (...) {
  std::cout << "second" << std::endl;
}

As some may already have guessed from the question: the answer is “second”. To make the code work, the reference in the catch block has to be replaced by a pointer. Another, and probably better, alternative is to create the exception on the stack: throw SomeException();. The reason behind this mistake is that in Java any thrown object is constructed with new. Explicit hints or experience are required to avoid such flawed exception handling.

Grails / GORM performance tuning tips

Every situation and every code base is different, but here are some pitfalls that can cost performance in Grails / GORM, and some tips you can try to improve it.

First things first: never optimize without measuring. Even more so with Grails, where many layers are involved when running code: the code itself, the compiler-optimized version, the Grails library stack, HotSpot, the Java VM, the operating system, the C libraries, the CPU… With this many layers and even more possibilities, you shouldn’t guess where you can improve the performance.

Measuring the performance

So how do you measure code? If you have a profiler like JProfiler, you can use it to measure different aspects of your code like CPU utilization, hotspots, JDBC query performance, Hibernate, etc. But even without a decent profiler, some custom code snippets can go a long way. Sometimes we use dedicated methods for measuring the runtime:

class Measurement {
  public static void runs(String operationName, Closure toMeasure) {
    long start = System.nanoTime()
    toMeasure.call()
    long end = System.nanoTime()
    println("Operation ${operationName} took ${(end - start) / 1E6} ms")
  }
}

or you can even show the growth in the Hibernate persistence context:

class Measurement {
  public static void grown(String operationName, def sessionFactory, Closure toMeasure) {
    def pc = sessionFactory.currentSession.persistenceContext
    Map before = numberOfInstancesPerClass(pc)
    toMeasure.call()
    Map after = numberOfInstancesPerClass(pc)
    println "The operation ${operationName} has grown the persistence context: ${differenceOf(after, before)}"
  }

  // numberOfInstancesPerClass() counts the instances per entity class in the
  // persistence context, differenceOf() diffs two of those maps (omitted here)
}

Improving the performance

So when you have found your badly performing code, what can you do about it? Every situation and every code base is different, but here are some pitfalls that can cost performance and some tips you can try to improve it:

GORM hotspots

Performance problems with GORM can lie in different areas. A good rule of thumb is to reduce the number of queries hitting the database. This can be achieved by combining results with outer joins, eager fetching of associations or improved caching. Another hotspot can be long-running queries, which you can improve by creating indices on the database; but first analyze the query with database-specific tools like ANALYZE.
Also, a typical problem can be a large persistence context. Why is this a problem? The default flush mode in Hibernate, and hence in GORM, is auto, which means the persistence context is flushed before any query. Flushing means Hibernate checks every property of every instance for changes. The larger the persistence context, the more work there is to do. One option would be to clear the session periodically after a flush, but this could decrease performance, because once-loaded and therefore cached instances would need to be reloaded from the database.
Another option is to identify the parts of your code which only need read access to the instances. Here you can use a stateless session or, in Grails, the Spring annotation @Transactional(readOnly = true); see the sketch below. It can be beneficial for performance to separate read-only and write access to the database. You could also experiment with the flush mode, but beware that this can lead to wrong query results.
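
As a sketch of that separation (plain Spring/Java with hypothetical names; in Grails the same annotation is used on Groovy services):

import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class ReportService {
  // hypothetical stand-ins for a domain class and its data access
  public static class Report {}

  public interface ReportRepository {
    Report find(long id);
    void save(Report report);
  }

  private final ReportRepository repository;

  public ReportService(ReportRepository repository) {
    this.repository = repository;
  }

  // read-only: Hibernate can skip the expensive dirty checking on flush,
  // so a large persistence context does not slow down every query
  @Transactional(readOnly = true)
  public Report findReport(long id) {
    return repository.find(id);
  }

  // writing access stays in a regular read-write transaction
  @Transactional
  public void saveReport(Report report) {
    repository.save(report);
  }
}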

The thin line: where to stop?

If you measure and improve, you will get big and small improvements. The problem is to decide which of the small ones change the code in a good or at least minimal way. It is a trade-off between performance and code design, as some performance improvements can worsen the code quality. Another cup of tea left untouched in this discussion is scalability. Whereas performance concentrates on the actual data and the current situation, scalability looks at the performance of the system when the data increases. Some performance improvements can worsen scalability. As with performance: measure, measure, measure.

Testing Java with Grails 2.2

We have some projects that consist of both Java and Groovy classes. If you don’t pay attention, you can have a nice WTF moment.

Let us look at the following fictitious example. You want to test a static method of a Java class BlogAction. This method should tell you whether a user can trigger a delete action, depending on a configuration property.

The classes under test:

public class BlogAction { // a Java class
    public static boolean isDeletePossible() {
        return ConfigurationOption.allowsDeletion();
    }
}
class ConfigurationOption { // a Groovy class
    static boolean allowsDeletion() {
        // code of the Option class omitted, here it always returns false
        return Option.isEnabled('userCanDelete');
    }
}

In our test, we mock the method of ConfigurationOption to return a special value and test for it:

@TestMixin([GrailsUnitTestMixin])
class BlogActionTest {
    @Test
    void postCanBeDeletedWhenOptionIsSet() {
        def option = mockFor(ConfigurationOption)
        option.demand.static.allowsDeletion(0..(Integer.MAX_VALUE-1)) { -> true }

        assertTrue(BlogAction.isDeletePossible())
    }
}

As a result, the test runner greets us with a nice message:

junit.framework.AssertionFailedError
    at junit.framework.Assert.fail(Assert.java:48)
    at junit.framework.Assert.assertTrue(Assert.java:20)
    at junit.framework.Assert.assertTrue(Assert.java:27)
    at junit.framework.Assert$assertTrue.callStatic(Unknown Source)
    ...
    at BlogActionTest.postCanBeDeletedWhenOptionIsSet(BlogActionTest.groovy:21)
    ...

Why? There is not much code to mock – what is missing? An additional assert statement adds clarity:

assertTrue(ConfigurationOption.allowsDeletion())

The static method still returns false! The metaclass “magic” provided by mockFor() is not used by my Java class. BlogAction simply ignores it and calls the allowsDeletion() method directly. But there is a solution: we can mock the call to the Option class instead, because that call is made from Groovy code, which does go through the metaclass:

@Test
void postCanBeDeletedWhenOptionIsSet() {
    def option = mockFor(Option)
    option.demand.static.isEnabled(0..(Integer.MAX_VALUE-1)) { String propertyName -> true }

    assertTrue(BlogAction.isDeletePossible())
}

Lessons learned: the more that happens behind the scenes, the more important the context of execution becomes.