It’s only Cores and Caches but I like it

Most of our software development economy is based on a simple promise: The computing power (or “performance”) of a common computer will double every two years. This promise has accompanied us for 40 years now, a time during which our computers got monitors, acquired hard disks and provided RAM beyond the 640 kB that was enough for nobody. In more recent years, we don’t operate systems with one CPU, but four, eight or even twelve of them. So it came as a great irritation when, ten years ago, Herb Sutter predicted that “The free lunch is over”, and even Gordon Moore, the originator of Moore’s Law that forms the basis of our simple promise, said that it would only hold true for ten to fifteen more years. Or, in other words, until today.

Irritation

That’s a bit unsettling, to say the least, and should be motivation enough to have a good look at everything we are doing. Intel, the biggest manufacturer of CPUs for computers, indicated earlier this year that Moore’s Law cannot be fulfilled any longer. So the free lunch really is over. And it turns out to have some hidden costs. One cost is a certain complacency: the conviction that things will continue to be as they were and that coding styles chiseled over years and decades hold an inherent value of experience.

Complacency

Don’t get me wrong – there is great value in experience, but not all knowledge of the past is helpful for the future. Sometimes, fundamental things change. Just as the tables will eventually turn for every optimization trick, we need to reevaluate some axioms of our stance towards performance. Let me reiterate some common knowledge:

There are two types of performance inherently baked into your source code: Theoretical and practical performance.

Performance

The first type is theoretical performance, measured in O(n), O(n²) or even O(n!) and mostly determined by the complexity class of the algorithm you are using. It will translate into runtime behaviour (in the case of O(n!), your software is already dead, you just don’t know it yet), but isn’t concerned with the details of your implementation. Not using an unnecessarily high complexity class for a given problem will continue to be a valuable skill that every developer should master.
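To make the difference tangible, here is a minimal Groovy sketch of my own (the helper names are made up, not taken from any particular codebase): the same question – does a list contain a duplicate? – answered once in O(n²) and once in O(n).

// Hypothetical helpers, only to illustrate the complexity classes above.

// O(n^2): compares every pair of elements.
boolean containsDuplicateQuadratic(List<Integer> values) {
  for (int i = 0; i < values.size(); i++) {
    for (int j = i + 1; j < values.size(); j++) {
      if (values[i] == values[j]) {
        return true
      }
    }
  }
  return false
}

// O(n): a HashSet remembers what has been seen so far.
boolean containsDuplicateLinear(List<Integer> values) {
  Set<Integer> seen = new HashSet<Integer>()
  for (Integer value in values) {
    if (!seen.add(value)) {   // add() returns false if the element was already present
      return true
    }
  }
  return false
}

For a list with a million entries, the quadratic version performs on the order of 5·10¹¹ comparisons while the linear one performs a million set insertions; no amount of micro-optimization closes that gap.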

On the other hand, practical performance is measured in milliseconds (or nanoseconds if you are into micro-benchmarks and can pull off measuring them correctly) and can depend heavily on just a few lines in your source code. Practical performance is the observable runtime behaviour of your software on a given hardware. There are two subtypes of practical performance:

  • Throughput (How many operations are computed by the system in a given unit of time?)
  • Latency (How long does it take one operation to be computed by the system?)

If you run a service, throughput is your main metric for performance. If you use a service, latency is your main concern. Let me explain this with the metaphor of a breakfast egg. If you want to eat your breakfast at a hotel buffet and the eggs are empty, your main concern is how fast you will get your freshly boiled egg (latency). But if you run the hotel kitchen, you probably want to cook a lot of eggs at once (throughput), even if that means that one particular egg might boil slightly longer than if you’d boiled each of them individually.

Latency

Those two subtypes are not entirely independent of each other, but the main concern for most performance-related work done by developers is latency. It is relatively easy to measure and to reason about. If you work on latency-based performance issues, you should know the latency numbers every programmer should know, either in visual form or translated to a more human time scale. Let’s go through some of the numbers and their scaled counterparts here:

  • 1 CPU cycle (0.3 ns): 1 second
  • Level 1 cache access (0.9 ns): 3 seconds
  • Branch mispredict (2.5 ns): 8 seconds
  • Level 2 cache access (2.8 ns): 9 seconds
  • Level 3 cache access (12.9 ns): 43 seconds
  • Main memory access (120 ns): 6 minutes
  • Solid-state disk I/O (50-150 μs): 2-6 days
  • Rotational disk I/O (1-10 ms): 1-12 months

We can discuss any number in detail, but the overall message stands out nonetheless: CPUs are lightning fast and caches are the only system components that can somewhat keep up. As soon as your program hits the RAM, your peak performance is lost. This brings us to the main concept of latency optimization:

Your program’s latency is ultimately decided by your ability to decrease cache misses.

You can save CPU cycles by performing clever hacks, but if you are able to always read your data (and code) from the cache, you’ll be somewhere between one and two orders of magnitude faster than if your program constantly has to read from RAM (compare the 0.9–12.9 ns of a cache access with the 120 ns of a main memory access above). Your source code doesn’t have to change at all for this to happen. A good compiler and/or optimizing runtime can work wonders if you adhere to your programming language’s memory model. In reality, you’ll probably have to rearrange your instructions and align your data structures. That’s the performance optimization of today, not the old cycle counting. The big challenge is that none of these aspects are visible at the source code level of your program. For now, we have to develop our programs somewhat blindfolded.
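As a rough illustration of what access order means in practice, here is a hypothetical Groovy sketch (class and method names are made up): both methods compute the same sum over the same matrix, but they walk memory in different orders.

import groovy.transform.CompileStatic

// Hypothetical illustration: the same sum computed in two different memory-access orders.
@CompileStatic
class TraversalOrder {
  static long sumRowWise(int[][] matrix) {
    long sum = 0
    for (int row = 0; row < matrix.length; row++) {
      for (int col = 0; col < matrix[row].length; col++) {
        sum += matrix[row][col]   // walks each row array front to back: consecutive addresses, cache lines fully used
      }
    }
    return sum
  }

  static long sumColumnWise(int[][] matrix) {
    long sum = 0
    for (int col = 0; col < matrix[0].length; col++) {
      for (int row = 0; row < matrix.length; row++) {
        sum += matrix[row][col]   // touches one element per row array before moving on: far-apart addresses, many misses
      }
    }
    return sum
  }
}

On a sufficiently large matrix, the row-wise variant tends to be noticeably faster because it consumes each cache line completely before moving on; the exact factor depends on matrix size, cache sizes and hardware, so measure it yourself before drawing conclusions.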

Concurrency

One way we’ve held up Moore’s Law in the last ten years was the introduction of multiprocessor computing into normal computers. If you cram two CPUs onto the die, the number of transistors on it doubles. A single-threaded program doesn’t run any faster, though. So we need to look at concurrent programming to unlock the full power of our systems. Basically, there are two types of concurrent programming: deliberate and mechanical.

  • Deliberate concurrent programming means that you as the developer actively introduce threads, fibers or similar concepts into your source code to control parallel computation.
  • Mechanical concurrent programming means that your source code can be parallelized by the compiler and/or runtime and/or the hardware (e.g. hyper-threading) without changing the correctness of your program.

In both types of concurrent programming, you need to be aware of the constraints and limitations of correct concurrency. It doesn’t matter if your program is blazingly fast and utilizes all cores if the result is wrong or only occasionally correct. Once again, the memory model of your programming language is a useful set of rules and abstractions to guide you. Most higher-level concurrency models like actors narrow your possibilities even further, with functional programming being one of the strictest (and most powerful) ones.
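To make the deliberate kind concrete, here is a small hypothetical Groovy sketch using plain java.util.concurrent; the workload and the chunk size are arbitrary example values.

import java.util.concurrent.Callable
import java.util.concurrent.ExecutorService
import java.util.concurrent.Executors
import java.util.concurrent.Future

// Deliberate concurrency: we decide how the work is split and where it runs.
ExecutorService pool = Executors.newFixedThreadPool(Runtime.runtime.availableProcessors())
try {
  def chunks = (1..100000).toList().collate(25000)       // arbitrary example workload, split into chunks
  List<Future> partialSums = chunks.collect { List chunk ->
    pool.submit({ chunk.sum() as long } as Callable)     // one task per chunk
  }
  long total = partialSums.sum { Future f -> f.get() }   // get() blocks until each partial result is ready
  println total
} finally {
  pool.shutdown()
}

Mechanical concurrency, by contrast, happens without such explicit structure. But both kinds only stay correct within the rules of the memory model: here, correctness rests on each chunk being confined to one task and the results only being combined through Future.get(), which establishes the necessary happens-before ordering.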

In the field of software development, we are theoretically well-prepared to take on the task of pervasive concurrent programming. But we need to forget about the good old times of single-core confirmability and embrace the chaotic world of raw computing power, the world of cores and caches.

Cores ‘n’ Caches

This is our life now: We rely on Cores ‘n’ Caches to feed us performance, but the lunch isn’t free anymore. We have to adapt and adjust, to rethink our core axioms and let go of those parts of our experience that are now hindrances. Just like Rock ‘n’ Roll changed the rules in the music business, our new motto changes ours.

Let’s rock.

A touch of Grails: cache invalidation

In a recent post I introduced a caching strategy. One difficulty with caching is always deciding when and where to invalidate the cache contents. One of the easiest approaches is to use timestamped cache keys like id-lastUpdated. But what if you cache another, related domain object under the same key? One option would be to include its timestamp in the key, too. Another one is to touch the timestamped object.

class A {
  Long id
  Date lastUpdated
}

class B {
  A a

  def beforeUpdate = {          // GORM event handler: manually touch the related A whenever B changes
    a.lastUpdated = new Date()
  }
}

So you would need this touch in beforeUpdate, afterInsert and beforeDelete, which can get pretty cumbersome. What if I could just declare that I want to touch another object whenever this object changes, like this:

class A {
  Long id
  Date lastUpdated
}

class B {
  static touch = ['a']   // declarative variant: whenever B changes, also touch the object in property 'a'

  A a
}

For this we just need to listen to the GORM persistence events.

import org.grails.datastore.mapping.core.Datastore
import org.grails.datastore.mapping.engine.event.AbstractPersistenceEvent
import org.grails.datastore.mapping.engine.event.AbstractPersistenceEventListener
import org.grails.datastore.mapping.engine.event.PostInsertEvent
import org.grails.datastore.mapping.engine.event.PreInsertEvent
import org.grails.datastore.mapping.engine.event.PreUpdateEvent

class TouchPersistenceListener extends AbstractPersistenceEventListener {

  public TouchPersistenceListener(final Datastore datastore) {
    super(datastore)
  }

  @Override
  protected void onPersistenceEvent(AbstractPersistenceEvent event) {
    touch(event.entityObject)
  }

  public void touch(def domain) {
    // only domain classes declaring a 'touch' property take part
    MetaProperty touch = domain.metaClass.hasProperty(domain, 'touch')
    if (touch != null) {
      Date now = new Date()
      def listOfPropertiesToTouch = touch.getProperty(domain)
      listOfPropertiesToTouch.each { propertyName ->
        if (domain."$propertyName" == null) {
          return
        }
        if (domain."$propertyName" instanceof Collection) {
          // 1:n or m:n association: touch every non-null element
          domain."$propertyName".findAll {it}*.lastUpdated = now
        } else {
          domain."$propertyName".lastUpdated = now
        }
      }
    }
  }

  boolean supportsEventType(Class eventType) {
    return [PreInsertEvent, PostInsertEvent, PreUpdateEvent].contains(eventType)
  }
}

One caveat here is that all persistence events are fired by Hibernate during a flush. Especially the delete event caused problems, like objects that still have a session but whose lazy-loaded associations cannot be loaded. This is a known Hibernate bug that is fixed in 4.0.1, but Grails uses an older version. So we just decorate the save and delete methods of our domain classes and register our listener for the other events.

import org.codehaus.groovy.grails.commons.GrailsClass
import org.grails.datastore.mapping.core.Datastore

class BootStrap {
  def grailsApplication

  def init = { servletContext ->
    def ctx = grailsApplication.mainContext
    // register the listener for every configured datastore
    ctx.getBeansOfType(Datastore).values().each { Datastore d ->
      def touchListener = new TouchPersistenceListener(d)
      ctx.addApplicationListener touchListener
      decorateWith(touchListener)
    }
  }

  private void decorateWith(TouchPersistenceListener touchListener) {
    grailsApplication.domainClasses.each { GrailsClass clazz ->
      if (clazz.hasProperty('touch')) {
        def oldSave = clazz.metaClass.pickMethod('save', [Map] as Class[])
        def oldDelete = clazz.metaClass.pickMethod('delete', [Map] as Class[])
        // wrap save() and delete() so the touch happens outside the problematic flush events
        clazz.metaClass.save = { params ->
          touchListener.touch(delegate)
          oldSave.invoke(delegate, [params] as Object[])
        }
        clazz.metaClass.delete = { params ->
          touchListener.touch(delegate)
          oldDelete.invoke(delegate, [params] as Object[])
        }
      }
    }
  }
}

Now we can just declare that changing this object touches another one, and it also works with 1:n or m:n associations. We don’t have to worry where to invalidate the cache in our code: we just annotate every object used in the cache with touch if it has an association to the object used in the key, or include it in the key directly. Changing those objects then invalidates the cache automatically.
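For instance, the collection branch of the listener (the findAll {it}*.lastUpdated = now line) covers declarations like the following hypothetical 1:n pair of domain classes:

// Hypothetical domain classes: saving an Invoice touches every one of its positions.
class Invoice {
  static touch = ['positions']                    // 'positions' is a collection, so each element is touched
  static hasMany = [positions: InvoicePosition]
}

class InvoicePosition {
  Date lastUpdated                                // bumped by the listener, feeds the timestamped cache key
  static belongsTo = [invoice: Invoice]
}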

Scaling your web app: Cache me if you can

One of the biggest problems with caches is how and when to invalidate the cache content, because when you read outdated data from the cache, you are toast.
For example, say we have a list of child elements inside a parent. Normally you would cache the children under the parent’s id:

cache[parent.id] = children

But how do you know if your cache content is still valid? When one child or the list of children changes, you write the new content into the cache:

cache[parent.id] = newChildren

But when do you update the cache? If you place the update code where the list of children is modified, the cache is updated before the transaction has ended, and you break isolation. Another option would be after the transaction has been committed, but then you have to track all changes. There is a better way: use a timestamp from the database which only becomes visible to other transactions when the transaction is committed. It should also live in the parent object, because you need this object for the cache key anyway. You could use lastUpdated or another timestamp for this, as long as it is updated whenever the children collection changes. The cache key is now:

cache[parent.id + '_' + parent.lastUpdated]

Now other transactions read the parent object, get the old timestamp and therefore the old cache content until the transaction is committed; the transaction itself gets the new content. In Grails, lastUpdated is automatically updated if you change the collection, and in Rails, with belongs_to and touch, even a change in a child updates the parent’s timestamp – no manual invalidation needed.
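Put together, the read path could look like this hypothetical Groovy helper; the parent, its children association and the map-like cache are placeholders, and I use lastUpdated.time instead of the raw Date only to keep the key short and free of spaces.

// Hypothetical read-through helper built on the timestamped key from above.
def cachedChildren(def parent, def cache) {
  String key = "${parent.id}_${parent.lastUpdated.time}"
  def children = cache[key]
  if (children == null) {                  // miss: first access, or the parent was touched in the meantime
    children = parent.children.toList()    // load (and possibly convert) the current children
    cache[key] = children
  }
  return children
}

Entries under old keys are never read again after a touch; they simply age out of the cache.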

Excursus: using memcached with Grails

If you want to use memcached from the JVM, there is a good library which wraps the common calls: spymemcached. To use spymemcached from Grails, you drop the jar into your lib folder and wrap it in a service:

import net.spy.memcached.AddrUtil
import net.spy.memcached.ConnectionFactoryBuilder
import net.spy.memcached.MemcachedClient
import org.springframework.beans.factory.InitializingBean

class MemcachedService implements InitializingBean {
  static final Object NULL = "NULL"   // placeholder for caching null results
  MemcachedClient memcachedClient

  void afterPropertiesSet() {
    memcachedClient = new MemcachedClient(
      new ConnectionFactoryBuilder().setTranscoder(new CustomSerializingTranscoder()).build(),
      AddrUtil.getAddresses("localhost:11211")
    )
  }

  def connected() {
    return !memcachedClient.availableServers.isEmpty()
  }

  def get(String key) {
    return memcachedClient.get(key)
  }

  def set(String key, Object value) {
    memcachedClient.set(key, 600, value)   // expire after 600 seconds
  }

  def clear() {
    memcachedClient.flush()
  }
}

Spymemcached serializes your cache content, so you need to make all your cached classes implement Serializable. Since Grails uses its own class loaders, we had problems with deserializing and used a custom serializing transcoder to get the right class loader (taken from this issue):

import java.io.ByteArrayInputStream;
import java.io.Closeable;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectStreamClass;

import net.spy.memcached.transcoders.SerializingTranscoder;

public class CustomSerializingTranscoder extends SerializingTranscoder {

  @Override
  protected Object deserialize(byte[] bytes) {
    // resolve classes with the thread's context class loader (the Grails class loader)
    final ClassLoader currentClassLoader = Thread.currentThread().getContextClassLoader();
    ObjectInputStream in = null;
    try {
      ByteArrayInputStream bs = new ByteArrayInputStream(bytes);
      in = new ObjectInputStream(bs) {
        @Override
        protected Class<?> resolveClass(ObjectStreamClass objectStreamClass) throws IOException, ClassNotFoundException {
          try {
            return currentClassLoader.loadClass(objectStreamClass.getName());
          } catch (Exception e) {
            return super.resolveClass(objectStreamClass);
          }
        }
      };
      return in.readObject();
    } catch (Exception e) {
      e.printStackTrace();
      throw new RuntimeException(e);
    } finally {
      closeStream(in);
    }
  }

  private static void closeStream(Closeable c) {
    if (c != null) {
      try {
        c.close();
      } catch (IOException e) {
        e.printStackTrace();
      }
    }
  }
}

With the connected method you can check if any memcached instances are available, which is better than calling a method and waiting for the timeout:

def connected() {
  return !memcachedClient.availableServers.isEmpty()
}

Now you can inject your service wherever you need it and cache along.
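As a sketch of such a consumer, combining the timestamped key from the previous section with the MemcachedService above (the service name ReportService and the conversion method are hypothetical):

// Hypothetical consumer: caches the converted result under a timestamped key.
class ReportService {
  def memcachedService   // injected by Grails via naming convention

  String reportFor(def parent) {
    // lastUpdated.time keeps the key free of spaces, which memcached keys must not contain
    String key = "report_${parent.id}_${parent.lastUpdated.time}"
    String report = memcachedService.connected() ? memcachedService.get(key) : null
    if (report == null) {
      report = convertForClient(parent)
      if (memcachedService.connected()) {
        memcachedService.set(key, report)
      }
    }
    return report
  }

  private String convertForClient(def parent) {
    return parent.toString()   // placeholder for the expensive conversion we want to avoid repeating
  }
}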

Cache the outermost layer

If you use Hibernate, you get database-based caching almost for free, so why bother using another cache? In one application we used Hibernate to fetch a large chunk of data from the database, and even with caches it took 100 ms. Measuring the code showed that the processing of the data (conversion for the client) took by far the biggest chunk. Caching the processed data led to 2 ms for the whole request. So one takeaway here is that caching the result of (user-independent) calculations and conversions can speed up your request even further. For static resources you can also use HTTP caching directives.
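For the HTTP part, a hypothetical Grails controller could serve such an already converted, cached result and additionally let browsers and proxies cache it; the controller, the Parent domain class, the service and the one-hour header value are all example names and numbers, not recommendations for your concrete resources.

// Hypothetical controller: the response comes from our cache and may additionally
// be cached downstream by browsers and proxies.
class ReportController {
  def reportService   // the hypothetical caching service sketched in the memcached section

  def show() {
    String json = reportService.reportFor(Parent.get(params.long('id')))   // Parent is a placeholder domain class
    response.setHeader('Cache-Control', 'public, max-age=3600')            // standard servlet header, example value
    render text: json, contentType: 'application/json'
  }
}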