Data-Oriented Design: Using data as interfaces

A Code Centric World

In mainstream OOP, polymorphism is achieved via virtual functions. To reuse some code, you simply need one implementation of a specific “virtual” interface. Bigger programs are composed of functions calling other functions calling yet other functions. Virtual functions introduce flexibility here: they allow parts of the call tree to be replaced, so that calling functions can be reused by running on different, but homogeneous, callees. This is a very “code centric” view of a program. The data is merely used as context for functions calling each other.

Duality

Let us, for the moment, assume that all the functions and objects such a program runs on are pure. They never have any side effects and communicate solely via parameters to and return values from the function. Now that is not traditional OOP, and more of a functional-programming way of doing things, but it is surely possible to structure (at least large parts of) traditional OOP programs that way. This premise helps in understanding how data-oriented design is in fact dual to the traditional “code centric” view of a program: instead of looking at the functions calling each other, we can also look at how the data is being transformed by each step in the program, because that is exactly what goes into, and comes out of, each function. IS-A becomes “produces/consumes compatible data”.

Cooking without functions

I am using C# in the examples, because LINQ, or any nice map/reduce implementation, makes this really straightforward. But the principle applies to many languages. I have been using the technique in C++, C#, Java and even dBase.
Let’s say we have a recipe of sorts that has a few ingredients encoded in a simple class:

class Ingredient
{
  public string Name { get; set; }
  public decimal Amount { get; set; }
}

We store them in a simple List and have a nice function that can compute the percentage of each ingredient:

public static IReadOnlyList<(string, decimal)>
    Percentages(IEnumerable<Ingredient> ingredients)
{
  var sum = ingredients.Sum(x => x.Amount);
  return ingredients
    .Select(x => (x.Name, x.Amount / sum))
    .ToList();
}

Now things change, and just to make it difficult, we need a new ingredient type that is just a little more complicated:

class IngredientInfo
{
  public string Name { get; set; }
  /* other useful stuff */
}

class ComplicatedIngredient
{
  public IngredientInfo Info { get; set; }
  public decimal Amount { get; set; }
}

And we definitely want to keep using the old, simple one as well. But we need our percentage function to work for recipes that contain both Ingredients and ComplicatedIngredients. Now the go-to OOP approach would be to introduce a common interface that is implemented by both classes, like this:

interface IIngredient
{
  string GetName();
  decimal GetAmount();
}

That is trivial to implement for both classes, but it adds quite a bunch of boilerplate, just about doubling the size of our program. Then we just replace IEnumerable<Ingredient> with IEnumerable<IIngredient> in the Percentages function. That last bit violates the Open/Closed principle, but only because we did not use the interface right away (who thought YAGNI was a good idea?). Also, the new interface is quite the opposite of the Tell, don’t ask principle, but there is no easy way around that, because the “Percentages” function only has meaning on a List<> of them.
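
For illustration, the two implementations might look roughly like this sketch – the explicit getter methods are exactly the boilerplate mentioned above:

class Ingredient : IIngredient
{
  public string Name { get; set; }
  public decimal Amount { get; set; }

  public string GetName() => Name;
  public decimal GetAmount() => Amount;
}

class ComplicatedIngredient : IIngredient
{
  public IngredientInfo Info { get; set; }
  public decimal Amount { get; set; }

  public string GetName() => Info.Name;
  public decimal GetAmount() => Amount;
}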

Cooking with data

But what if we just use data as the interface? In this case, it so happens that we can easily turn a ComplicatedIngredient into an Ingredient for our purposes. In C#’s LINQ, a simple Select() will do nicely:

var simplified = complicated
  .Select(x => new Ingredient
   { 
     Name = x.Info.Name,
     Amount = x.Amount
   });

Now that can easily be passed into the Percentages function, without even touching it. Great!

In this case, one object could neatly be converted into the other, which is often not the case in practice. However, there is often a “common denominator class” that can be found pretty much the same way a common interface would be extracted: just look at the info you can retrieve from that imaginary interface. In this case, that was the same as the original Ingredient class.

Further thoughts

To apply this, you sometimes have to restructure your programs a little bit, which often means going wide instead of deep. For example, you might have to convert your data to a homogeneous form in a preprocessing step, instead of accessing different objects homogeneously directly in your algorithms, or use postprocessing afterwards.
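
For our cooking example, such a preprocessing step could look like this sketch, assuming a List<Ingredient> named ‘simple’ and a List<ComplicatedIngredient> named ‘complicated’:

var homogeneous = simple
  .Concat(complicated.Select(x => new Ingredient
   {
     Name = x.Info.Name,
     Amount = x.Amount
   }))
  .ToList();

var percentages = Percentages(homogeneous);
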
In languages like C++, this can even net you a huge performance win, which is often cited as the greatest thing about data-oriented design. But, first and foremost, I find that this leads to programs that are easier to understand for both machine and people. I have found myself using this data-centric form of code reuse a lot more lately.

Are you using something like this as well or are you still firmly on the override train, and why? Tell me in the comments!

Updating Grails 3.3.x to 4.0.x

We have a long history of maintaining a fairly large Grails application, which we took from Grails 1.0 to 4.0. We sometimes decided to skip intermediate releases of the framework because of problems or missing incentives to upgrade. If you are interested in our experiences of the past, feel free to have a look at our earlier stories.

This is the next installment of our journey to the latest and greatest version of the Grails framework. This time the changes do not seem as intimidating as going from 2.x to 3.x. There are fewer moving parts, at least from the perspective of an application developer, where almost everything stayed the same (gradle build system, YAML configuration, Geb functional tests etc.). Under the hood there are of course some bigger changes, like new major versions of GORM/Hibernate and Spring Boot and the switch to Micronaut as the parent application context.

The hurdles we faced

  • For historical reasons our application uses flush mode “auto”. This still does not work today, see https://github.com/grails/grails-core/issues/11376
  • The most work-intensive change: Hibernate 5 requires you to perform your work in transactions. So we had dozens of places where we needed to add missing @Transactional annotations, especially to make saving domain objects work (see the sketch after this list). As a consequence, we essentially had to retest the whole application.
  • The handling of HibernateProxies again became more opaque, which led to numerous IllegalArgumentExceptions (“object is not an instance of declaring class”). Sometimes we could move from generated hashCode()/equals() implementations to the Groovy annotation @EqualsAndHashCode (actually a good thing), whereas in other places we did manual unwrapping or switched to eager fetching to avoid these problems.
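
In practice, adding the missing transactions meant sprinkling annotations like in this sketch, where RecipeService and Recipe are hypothetical stand-ins for our actual domain:

import grails.gorm.transactions.Transactional

@Transactional
class RecipeService {

    // without a surrounding transaction, save() now fails
    // for us under the new GORM/Hibernate versions
    Recipe updateAmount(long id, BigDecimal amount) {
        def recipe = Recipe.get(id)
        recipe.amount = amount
        recipe.save(flush: true)
    }
}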

In addition we faced minor gotchas like changed configuration entries. The one that cost us some hours was the subtle change of server.contextPath to server.servlet.context-path, but nothing major or blocking.
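
In the application.yml, that change looks like this (the context path itself is just an example):

server:
    servlet:
        context-path: '/myapp'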

We also had to convert many of our unit and integration tests to Spock and the new Grails Testing Support framework. That makes the tests more readable anyway and feels more fruitful than trying to debug the old-style Grails Test Mixin-based tests.

Improvements

One major improvement for us in the Grails ecosystem is the good news that the shiro plugin is again officially available, maintained and cleaned up (see https://github.com/nerdErg/grails-shiro). Now we do not need to use our own poor man’s port anymore.

Open questions

Regarding the proclaimed performance improvements and reduced memory consumption, we do not have final numbers or impressions yet. We will deliver results on this front in the future.

More important is an inconvenience we are still facing regarding hot code reloading: it does not work for us, even using OpenJDK 8 with the old spring-loaded mechanism. The new restart-style reloading of Micronaut/Spring Boot is not really productive for us, because the startup times can easily reach the minute range even on fast hardware.

Pro-Tip

My hottest advice for you is this one:

Create a fresh Grails 4 app and compare central files like application.yml and build.gradle to get up to the state-of-the-art.

Conclusion

While this upgrade was still a lot of work and meant many places had to be touched, it was a lot smoother than many of the previous ones. We hope that things improve further in the future, as the technological stack is now up-to-date and much more mature than in the early days…

Getting started with exact arithmetic and F#

In this blog post, I claimed that some exact arithmetic beyond rational numbers can be implemented on a computer. Today I want to show you how that might be done, by walking you through the beginning of my implementation. I chose F# for the task, since I had been waiting for an opportunity to check it out anyway. So this post is a more practical (first) follow-up to the more theoretical one linked above, with some of my F# development experiences on the side.

F# turned out to be mostly pleasant to use. The only annoying thing that happened to me along the way was some weirdness of F#, or of the otherwise very helpful IDE Rider: F# seems to need a fixed compilation order of the source code files, and I only found out by acts of desperation that this order is supposed to be controlled by drag & drop.

The code I want to (partially) explain is available on github:

https://github.com/felixwellen/ExactArithmetic

I will link to the current commit when I discuss specific sections below.

Prerequisite: Rational numbers and Polynomials

As explained in the ‘theory post’, polynomials will be the basic ingredient to cook up more exact numbers from the rationals. The rationals themselves can be built from ‘BigInteger’s (source). The basic arithmetic operations follow the rules commonly taught in school (here is addition):

static member (+) (l: Rational, r: Rational) =
    Rational(l.up * r.down + r.up * l.down,
             l.down * r.down)

‘up’ and ‘down’ are ‘BigInteger’s representing the numerator and denominator of the rational number. ‘-’, ‘*’ and ‘/’ are defined in the same style and extended to polynomials with rational coefficients (source).

There are two things important for this post that polynomials have and rationals do not: degrees and remainders. The degree of a polynomial is just the number of its coefficients minus one, unless it is the constant zero. The zero polynomial has degree -1 in my code, but the specific value is not too important – it just needs to be smaller than all the other degrees.

Remainders are a bit more work to calculate. For two polynomials P and Q, where Q is not zero, there is always a unique polynomial R with a degree smaller than that of Q, such that

P = Q * D + R

for some polynomial D (the algorithm is here).
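
The computation itself is the long division commonly taught in school. A rough sketch in F# – where ‘Degree’, ‘LeadingCoefficient’ and ‘Monomial’ are assumed helpers, not necessarily the names used in the repository:

let rec remainder (p: Polynomial) (q: Polynomial) =
    if p.Degree < q.Degree then p
    else
        (* eliminate the leading term of p and recurse –
           the degree shrinks, so this terminates *)
        let factor =
            Polynomial.Monomial(p.LeadingCoefficient / q.LeadingCoefficient,
                                p.Degree - q.Degree)
        remainder (p - factor * q) q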

Numberfields and examples

The ingredients are put together in the type ‘NumberField’, which is the name used in algebra, so it is precisely what is described here. Yet it is far from obvious that this is the ‘same’ thing as in my example code.

One source of confusion with this approach to exact arithmetic is that we do not know which solution of a polynomial equation we are using. In the example with the square root, the solutions only differ in sign, but things can get more complicated. This ambiguity is also the reason that you will not find a function in my code that approximates the elements of a numberfield by decimal numbers: in order to do that, we would have to choose a particular solution first.

Now, in the form of unit tests, we can look at a very basic example of a number field: the one from the theory post containing a solution of the equation X²=2:

let TwoAsPolynomial = Polynomial([|Rational(2,1)|])
let ModulusForSquareRootOfTwo = 
     Polynomial.Power(Polynomial.X,2) - TwoAsPolynomial
let E = NumberField(ModulusForSquareRootOfTwo)   
let TwoAsNumberFieldElement = NumberFieldElement(E, TwoAsPolynomial)

[<Fact>]
let ``the abstract solution is a solution of the given equation``() =
    let e = E.Solution in  (* e is a solution of the equation 'X^2-2=0' *)
    Assert.Equal(E.Zero, e * e - TwoAsNumberFieldElement)

There are applications of these numbers which have no obvious relation to square roots. For example, there are numberfields containing roots of unity, which would allow us to calculate with rotations in the plane by rational fractions of a full rotation. This might be the topic of a follow-up post…

C++ pass-thru parameters

So in ye olde days, before C++11 and move semantics, it was common for functions to use mutable references to pass container content to the caller, like this:

#include <random>
#include <vector>

// assumes some random engine is available, e.g.:
std::mt19937 rng{std::random_device{}()};

void random_between(std::vector<int>& out,
  int left, int right, std::size_t N)
{
  std::uniform_int_distribution<>
    distribution(left, right);
  for (std::size_t i = 0; i < N; ++i)
    out.push_back(distribution(rng));
}

and you would often use it like this:

std::vector<int> numbers;
random_between(numbers, 7, 42, 10);

Basically trading expressiveness and convenience for speed/efficiency.

Convenience is king

Now obviously, those days are over. With move semantics and guaranteed copy elision backing us up, it is usually fine to just return the filled container, like this:

std::vector<int> random_between(int left, int right,
  std::size_t N)
{
  std::vector<int> out;
  std::uniform_int_distribution<>
    distribution(left, right);
  for (std::size_t i = 0; i < N; ++i)
    out.push_back(distribution(rng));
  return out;
}

Now you no longer have to initialize the container to use this function, and the function also became (almost) pure, clearly differentiating between its inputs and outputs.

Mostly better?

However, there is a downside: Before, the function could be used to append multiple runs into the same container, like this:

std::vector<int> numbers;
for (int i = 0; i < 5; ++i)
  random_between(numbers, 50*i + 7, 50*i + 42, 10);

That use case suddenly became a lot harder. Also, what if you want to keep your vector around and just .clear() it before calling the function again later, to save allocations? That is also no longer possible. I am not saying that these two use cases should make you prefer the old variant, as they tend not to happen very often. But when they do, it is all the more annoying. So what if we could have our cake and eat it, too?

A Compromise

How about this:

std::vector<int> random_between(int left, int right,
  std::size_t N, std::vector<int> out = {})
{
  std::uniform_int_distribution<>
    distribution(left, right);
  for (std::size_t i = 0; i < N; ++i)
    out.push_back(distribution(rng));
  return out;
}

Now you can use it to just append again:

std::vector<int> numbers;
for (int i = 0; i < 5; ++i)
  numbers = random_between(
    50*i + 7, 50*i + 42, 10, std::move(numbers));

But you can also use it in the straightforward way, for the hopefully more common case:

auto numbers = random_between(7, 42, 10);

Now you should definitely not do this with all your functions returning a container. But it is a nice pattern to have up your sleeve when the need arises. It should be noted that passing a mutable reference can still be faster in some cases, as it saves you two moves. And you can also add a container-returning facade variant as an overload. But I think this pattern is a very nice compromise that can be implemented by moving a single variable to the parameter list and defaulting it. It keeps 99% of the use cases identical to the original container-returning variant, while making the “append” use case slightly more verbose, but also more expressive.
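
For comparison, the facade variant mentioned above might look like this sketch: the mutable-reference overload does the actual work, while the value-returning overload just forwards to it:

void random_between(std::vector<int>& out,
  int left, int right, std::size_t N);

std::vector<int> random_between(int left, int right,
  std::size_t N)
{
  std::vector<int> out;
  random_between(out, left, right, N);
  return out;
}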

The Java Cache API and Custom Key Generators

The Java Cache API allows you to add a @CacheResult annotation to a method, which means that calls to the method will be cached:

import javax.cache.annotation.CacheResult;

@CacheResult
public String exampleMethod(String a, int b) {
    // ...
}

The cache will be looked up before the annotated method executes. If a value is found in the cache it is returned and the annotated method is never actually executed.

The cache lookup is based on the method parameters. By default a cache key is generated by a key generator that uses Arrays.deepHashCode(Object[]) and Arrays.deepEquals(Object[], Object[]) on the method parameters. The cache lookup based on this key is similar to a HashMap lookup.
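
Conceptually, such a generated key behaves like this sketch (not the actual implementation of any particular provider):

import java.util.Arrays;

// equality and hash code are based on deep equality
// of the captured parameter values
public final class ParametersKey {

  private final Object[] parameters;

  public ParametersKey(Object... parameters) {
    this.parameters = parameters;
  }

  @Override
  public int hashCode() {
    return Arrays.deepHashCode(parameters);
  }

  @Override
  public boolean equals(Object other) {
    return other instanceof ParametersKey
        && Arrays.deepEquals(parameters,
                             ((ParametersKey) other).parameters);
  }
}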

You can define and configure multiple caches in your application and reference them by name via the cacheName parameter of the @CacheResult annotation:

@CacheResult(cacheName="examplecache")
public String exampleMethod(String a, int b) {

If no cache name is given the cache name is based on the fully qualified method name and the types of its parameters, for example in this case: “my.app.Example.exampleMethod(java.lang.String,int)”. This way there will be no conflicts with other cached methods with the same set of parameters.

Custom Key Generators

But what if you actually want to use the same cache for multiple methods without conflicts? The solution is to define and use a custom cache key generator. In the following example both methods use the same cache (“examplecache”), but also use a custom cache key generator (MethodSpecificKeyGenerator):

@CacheResult(
  cacheName="examplecache",
  cacheKeyGenerator=MethodSpecificKeyGenerator.class)
public String exampleMethodA(String a, int b) {
    // ...
}

@CacheResult(
  cacheName="examplecache",
  cacheKeyGenerator=MethodSpecificKeyGenerator.class)
public String exampleMethodB(String a, int b) {
    // ...
}

Now we have to implement the MethodSpecificKeyGenerator:

import org.infinispan.jcache.annotation.DefaultCacheKey;

import java.lang.annotation.Annotation;
import java.util.Arrays;
import java.util.stream.Stream;

import javax.cache.annotation.CacheInvocationParameter;
import javax.cache.annotation.CacheKeyGenerator;
import javax.cache.annotation.CacheKeyInvocationContext;
import javax.cache.annotation.GeneratedCacheKey;

public class MethodSpecificKeyGenerator
  implements CacheKeyGenerator {

  @Override
  public GeneratedCacheKey generateCacheKey(
      CacheKeyInvocationContext<? extends Annotation> context) {
    Stream<Object> methodIdentity =
        Stream.of(context.getMethod());
    Stream<Object> parameterValues =
        Arrays.stream(context.getKeyParameters())
              .map(CacheInvocationParameter::getValue);
    return new DefaultCacheKey(
        Stream.concat(methodIdentity, parameterValues).toArray());
  }
}

This key generator not only uses the parameter values of the method call but also the identity of the method to generate the key. The call to context.getMethod() returns a java.lang.reflect.Method instance for the called method, which has appropriate hashCode() and equals() implementations. Both this method object and the parameter values are passed to the DefaultCacheKey implementation, which uses deep equality on its parameters, as mentioned above.

By adding the method’s identity to the cache key we have ensured that there will be no conflicts with other methods when using the same cache.

Adding a dynamic React page to your classic grails multi-page application

We are developing and maintaining a more than ten-year-old classic multi-page application based on the Grails web framework. With the advent of HTML5 and modern browsers with faster JavaScript engines, users expect a more and more dynamic and pleasant user experience (UX) from web applications. Our application is used by hundreds of users and our customer expects a stable, familiar and feature-rich experience that continues to improve over time. Something like a complete rewrite of the UI is way out of scope, time- and budget-wise.

One of the new feature requests would benefit highly from a client-side JavaScript implementation, so we looked at our options. Fortunately, it is quite easy to integrate a react app with grails and the gradle build system. So we implemented the new page almost completely as a react app, while leaving all the other pages as normal server-side rendered Groovy Server Pages (GSP). The result is quite convincing and opens up a transition path to more and more dynamic client-side pages, and perhaps even to a complete transformation into a single-page application (SPA) in a distant future.

Integrating a React-App into Grails build process

The Grails react-webpack profile can serve as a great starting point to integrate a react app into an existing grails project. First you create the react app for the new page in the folder src/main/webapp, using the create-react-app scripts for example. Then you need to add a $GRAILS_PROJECT/webpack.config.js to configure webpack appropriately like so:

var path = require('path');

module.exports = {
  entry: './src/main/webapp/index.js',
  output: {
    path: path.join(__dirname, 'grails-app/assets/javascripts'),
    publicPath: '/assets/',
    filename: 'bundle.js'
  },
  module: {
    rules: [
      {
        test: /\.js$/,
        include: path.join(__dirname, 'src/main/webapp'),
        use: {
          loader: 'babel-loader',
          options: {
            presets: ["@babel/preset-env", "@babel/preset-react"],
            plugins: ["transform-class-properties"]
          }
        }
      },
      {
        test: /\.css$/,
        use: [
          'style-loader',
          'css-loader'
        ]
      },
      {
        test: /\.(jpe?g|png|gif|svg)$/i,
        use: {
          loader: 'url-loader?limit=10000&prefix=assets/!img'
        }
      }
    ]
  }
};

The next step is to move the package.json to the $GRAILS_PROJECT directory because we want gradle tasks to take care of building and bundling it as a grails asset. To make this convenient we add some gradle tasks employing yarn to our build.gradle:

buildscript {
    dependencies {
        ...
        classpath "com.moowork.gradle:gradle-node-plugin:1.2.0"
    }
}

...

apply plugin:"com.moowork.node"

...

node {
    version = '12.15.0'
    yarnVersion = '1.22.0'
    distBaseUrl = 'https://nodejs.org/dist'
    download = true
}

task bundle(type: YarnTask, dependsOn: 'yarn') {
    group = 'build'
    description = 'Build the client bundle'
    args = ['run', 'bundle']
}

task webpack(type: YarnTask, dependsOn: 'yarn') {
    group = 'application'
    description = 'Build the client bundle in watch mode'
    args = ['run', 'start']
}

bootRun.dependsOn(['bundle'])
assetCompile.dependsOn(['bundle'])

...

Now we have integrated our new react app with the grails build system and packaging. The webpack task allows updating the JavaScript bundle on the fly, so that we have almost the same hot-reloading support during development as with the rest of grails.
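
Note that the gradle tasks above run “yarn run bundle” and “yarn run start”, so the package.json needs matching script entries; roughly like this sketch (the exact webpack invocation depends on your setup):

{
  "scripts": {
    "bundle": "webpack --mode production",
    "start": "webpack --mode development --watch"
  }
}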

Delivering the react app as a page

Now that we have integrated the react app in the build and packaging process of our grails application we need to deliver it when the new page is requested by the browser. This is quite simple and straightforward and can be achieved with a GSP like so:

<html>
<head>
    <meta name="layout" content="main"/>
    <title>
        <g:message code="example.header"/>
    </title>
</head>
<body>
    <div id="react-content">
    </div>
    <asset:javascript src="bundle.js"/>
</body>
</html>

Now you just have to develop the endpoints for the JavaScript app in the form of normal grails controllers rendering JSON instead of GSP views. This is extremely easy using groovy maps and the grails JSON converters:

import grails.converters.JSON

class DataApiController {

    def getData = {
        def responseData = [
            name: 'John',
            age: 37
        ]
        render responseData as JSON
    }
}
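
On the client side, the react app can then consume this endpoint. A minimal sketch, assuming the controller above is reachable under the default URL mapping /dataApi/getData:

import React, { useEffect, useState } from 'react';

export default function ExamplePage() {
  const [data, setData] = useState(null);

  useEffect(() => {
    // fetch the JSON rendered by DataApiController
    fetch('/dataApi/getData')
      .then(response => response.json())
      .then(setData);
  }, []);

  if (!data) return <div>Loading…</div>;
  return <div>{data.name} is {data.age} years old.</div>;
}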

Conclusion

Grails and its build infrastructure are flexible enough to easily integrate SPA pages into an existing traditional web application. This allows you to deliver the modern UX and features expected by today’s users without completely rewriting your trusty and proven grails application. The transition can happen gradually, and individual pages/views can be renewed when needed. That way you can continually add value for your customer while incrementally modernizing your application.

Some strings are more equal before your Oracle database

When working with customer code based on ADO.NET, I was surprised by an error message from a DBConcurrencyException.

The German message just tells us that some UpdateCommand had an effect on “0” instead of the expected “1” rows of a DataTable. This happened while writing some changes to a table using an OracleDataAdapter. What really surprised me at this point was that there certainly was no other thread writing to the database during my update attempt. Even more confusing was that my method of changing DataTables and using the OracleDataAdapter to write changes had worked pretty well so far.

In this case, the exception title “DBConcurrencyException” turned out to be quite misleading. The text message was absolutely correct, though.

The explanation

The UpdateCommand is a prepared statement generated by the OracleDataAdapter. It may be used to write the changes a DataTable keeps track of to a database. To update a row, the UpdateCommand identifies the row with a WHERE-clause that matches all original values of the row and then writes the new values. So if we have a table with two columns, a primary id and a number, the update statement would essentially look like this:

UPDATE EXAMPLE_TABLE
  SET ROW_ID =:current_ROW_ID, 
      NUMBER_COLUMN =:current_NUMBER_COLUMN
WHERE
      ROW_ID =:old_ROW_ID 
  AND NUMBER_COLUMN =:old_NUMBER_COLUMN

In my case, the problem turned out to be caused by string-valued columns, and was due to some Oracle weirdness that was already discussed on this blog (https://schneide.blog/2010/07/12/an-oracle-story-null-empty-or-what/): on writing, empty strings (more precisely: empty VARCHAR2s) are transformed to a DBNull. Note, however, that the following two are not equivalent:

WHERE TEXT_COLUMN = ''
WHERE TEXT_COLUMN is null

The first will just never match… (at least with Oracle 11g). So saying that null and empty strings are the same would not be an accurate description.

The WHERE-clauses of the generated UpdateCommands look more complicated for (nullable) columns of type VARCHAR2. But instead of trying to understand the generated code, I just guessed that the problem was a bug or inconsistency in the OracleDataAdapter that caused the exception. And in fact, it turned out that the problem occurred whenever I tried to write an empty string to a column that was DBNull before. This would explain the message of the DBConcurrencyException: the DataTable thinks there is a difference between empty strings and DBNulls, but due to the conversion there is no difference once the corresponding row is updated. Once understood, the problem was easily fixed by transforming all empty strings to null prior to invoking the UpdateCommand.
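
The fix can be as simple as this sketch, run right before the adapter writes the changes (‘table’ and ‘adapter’ stand for the DataTable and OracleDataAdapter in question):

foreach (DataRow row in table.Rows)
{
    if (row.RowState == DataRowState.Deleted)
        continue; // deleted rows cannot be read

    foreach (DataColumn column in table.Columns)
    {
        // normalize empty strings to null, matching
        // what Oracle will store anyway
        if (column.DataType == typeof(string)
            && (row[column] as string) == string.Empty)
        {
            row[column] = DBNull.Value;
        }
    }
}
adapter.Update(table);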