Developing for Cordova + SQLite in a standard Browser environment

As a developer, who doesn’t just love it when a product that has grown over the years suddenly needs to target a new platform (e.g. a different operating system) because some customer demands changed, some dependency broke, or some other totally unexpected thing called “progress” happened?

Fortunately, there are some approaches to cross-platform development, and if one expects such a change of direction, one can adopt a suitable runtime environment early on, such as Apache Cordova, Capacitor/Ionic or similar, which all promise a Write-Once-Run-Anywhere experience, decoupling the application logic from the lower-level OS interactions.

Unfortunately, this promise is a total lie. Usually, soon after starting such a totally platform-agnostic project, you will want to use a dependency that only works on one platform, and then your options are limited.

One such example is a Cordova project we are currently moving from Android to iOS, and in the process we are also building a nice, modern frontend to replace a very outdated (read: unmaintainable) Vanilla JS application. We have set it up smoothly (React + Vite + TypeScript – you name it!), and since we technically do not need anything iOS-specific yet, we can work on our redesign in a pure-browser environment with hot reloading and the like – life is good!

Then comes the realization that our application is quite data-heavy and uses an on-device SQL database to persist its data, and we don’t have that in the browser – so, life turned bad.

What to do? There was once a client-side WebSQL database specification, but it was unofficial, never fully implemented and abandoned in 2010. It is still present in Chrome, but its removal has already been publicly announced, so this is not the future-proof way to go.

We crave a smooth flow of development.

  • It is not an option to re-build the app at every change.
  • It is not an option to have the production system use its SQLite DB and the development environment to use a totally different one like IndexedDB – certain SQLite queries are too ingrained in our application.
  • It is only maybe an option to use an experimental technology like absurd-sql, which aims to fill that gap, but in turn requires advanced API features like Web Workers, SharedArrayBuffer and the Atomics API, which we would not need otherwise.
  • It is possible to use in-memory SQLite via sql.js, but for persistence it wasn’t instantly obvious to me how to couple that with the partially supported Origin Private File System API.

So after all, this is the easiest solution that still gave me back most of my developer smoothness: use sql.js in memory and, for development, display two nice buttons in the UI which let me download the whole DB and upload one from a file again. This is the sketch:

We create a CombinedDatabase class which, depending on the environment, can hand out such a database in a Singleton-like manner:

import initSqlJs, {Database as SqlJsDatabase} from "sql.js";

class CombinedDatabase {

    // This is the Singleton-part

    private static instance: CombinedDatabase;

    public static get = async (): Promise<CombinedDatabase> => {
        if (!this.instance) {
            const {db, type} = await this.createDatabase();
            this.instance = new CombinedDatabase(db, type);
        }
        return this.instance;
    };

    private static createDatabase = async () => {
        if (inProductionEnvironment()) {
            return {
                db: createCordovaSqliteInstance(),
                type: "CordovaSqlite"
             };
        } else {
            const sqlWasmUrl = (await import("../assets/sql-wasm.wasm?url")).default;
            // we extend the window object for reasons I tell you below
            window.sqlJs = await initSqlJs({locateFile: () => sqlWasmUrl});
            const db = new window.sqlJs.Database();
            return {db, type: "InMemorySqlJs"};
        }
    }


    // This is the actual flesh, i.e. a switch of which API to use

    private readonly type: string;
    private cordovaSqliteDb: SQLitePlugin.Database | null = null;
    private inMemorySqlJsDb: SqlJsDatabase | null = null;

    private constructor(db: SQLitePlugin.Database | SqlJsDatabase, type: string) {
        this.type = type;
        switch(type) {
            case "CordovaSqlite":
                this.cordovaSqliteDb = db as SQLitePlugin.Database;
                break;
            case "InMemory":
                this.inMemorySqlJsDb = db as SqlJsDatabase;
                break;
            default:
                throw Error("Invalid CombinedDatabase type: " + type);
        }
    }

   // ... and then there are some methods

}

(This is simplified – in reality, type is an enum for me, and there’s also error handling, but you know – not the point here.)

This structure is nice, because you can now implement low-level methods like some executeQuery(...) which decide, depending on the type, which of the private DB instances to address, and return a unified response format even though the two APIs work differently.
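
For illustration, such a method could look roughly like this (a sketch, not the actual implementation – the unified result shape and the parameter handling are my assumptions):

    public async executeQuery(sql: string, params: any[] = []): Promise<any[]> {
        switch (this.type) {
            case "CordovaSqlite":
                // the Cordova plugin is callback-based, so we wrap it into a Promise
                return new Promise<any[]>((resolve, reject) => {
                    this.cordovaSqliteDb!.executeSql(sql, params,
                        (result) => {
                            const rows = [];
                            for (let i = 0; i < result.rows.length; i++) {
                                rows.push(result.rows.item(i));
                            }
                            resolve(rows);
                        },
                        (error) => reject(error));
                });
            case "InMemorySqlJs": {
                // sql.js returns rows as value arrays plus a column list – map them to objects
                const result = this.inMemorySqlJsDb!.exec(sql, params);
                if (result.length === 0) {
                    return [];
                }
                const {columns, values} = result[0];
                return values.map(row =>
                    Object.fromEntries(columns.map((col, i) => [col, row[i]])));
            }
            default:
                throw Error("Invalid CombinedDatabase type: " + this.type);
        }
    }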

The rest of our application does not know anything about any Cordova-SQLite-dependency, or sql.js, or whatever. Life is good again.

So how do Import / Export work?

I gave the CombinedDatabase some interfacing methods, similar to these:


    public async export() {
        switch (this.type) {
            case "CordovaSqlite":
                throw Error("Not implemented for cordova-sqlite database");
            case "InMemorySqlJs":
                return this.inMemorySqlJsDb!.export();
            default:
                throw Error("DB not initialized, cannot export.");
        }
    }

    public async import(binaryData: Uint8Array) {
        if (this.type !== "InMemorySqlJs") {
            throw Error("DB import only implemented for the in-memory/sql.js database, this is a DEVELOPMENT feature!");
        }
        await this.close();
        this.inMemorySqlJsDb = new window.sqlJs.Database(binaryData);
    }

This is also the reason why I monkey-patched the window object earlier, so that I still have this API around outside the Singleton instantiation (createDatabase). Yes, this is a global variable and kind of a hack, but in my opinion it is something that can safely be done inside the browser, within good measure.

Remember that in TypeScript you need to declare this, e.g. in some global.d.ts file:

import {SqlJsStatic} from "sql.js";

declare global {
    interface Window {
        sqlJs?: SqlJsStatic
    }
}

Or go around the Window interface by casting (window as any).sqlJs – you decide what you prefer.

Anyway, the export() functionality can then be used quite handily: it returns the in-memory database as a binary array, and you can make the browser download it via a Blob URL:

api.db.export().then((array: Uint8Array) => {
    const blob = new Blob([array], {type: "application/x-sqlite3"});
    const link = document.createElement("a");
    link.href = URL.createObjectURL(blob);
    link.download = `bonpland${Date.now()}.db`;
    link.target = "_blank";
    link.click();
});

And similarly, you can use import() by reading a Uint8Array from a temporary <input type="file"> element with a FileReader() – a somewhat common solution, sketched below (just comment if you want more details).
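
If you want a starting point, a minimal sketch (using the same hypothetical api.db handle as above) could look like this:

const input = document.createElement("input");
input.type = "file";
input.accept = ".db";
input.onchange = () => {
    const file = input.files?.[0];
    if (!file) {
        return;
    }
    const reader = new FileReader();
    reader.onload = () => {
        // hand the raw bytes over to the import() method from above
        api.db.import(new Uint8Array(reader.result as ArrayBuffer));
    };
    reader.readAsArrayBuffer(file);
};
input.click();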

To be exact, I don’t even use the import() button anymore, because I pass my development DB as an asset to the dev server. This is nice (and only takes a few seconds on hot reloading, because our DB is around 50 MB in size), but somewhat Vite-specific, which is why I will postpone this topic to a later blog post.

Oracle DB’s Gradual Password Rollover Feature

It is good security practice to change passwords regularly. When changing a database password, however, the problem arises that applications that access this database have to be reconfigured if the password changes. If multiple applications or services use the same database user, then they all need to be reconfigured at once, typically during a scheduled downtime.

Oracle 21c introduced a new feature called Gradual Password Rollover that can help make such a password change less disruptive. The feature was also backported to Oracle 19c. If this feature is switched on for a user profile, a transition time is granted when the password is changed, during which both the old and the new password are valid. The applications can then change their configuration to the new password within this period according to their own schedule.

How to enable it

You must first be logged in as a privileged user who is allowed to manage users. The grace period for which both passwords should be valid after a password change is set via a user profile. A user profile is a set of limits on the database resources and the user password. The profile setting for this feature is called PASSWORD_ROLLOVER_TIME. You either create a new profile and specify this setting as a limit, or you adjust an existing profile. Here are both variants:

-- Create a new profile ...
CREATE PROFILE example_profile LIMIT PASSWORD_ROLLOVER_TIME 1;

-- ... or alter an existing profile
ALTER PROFILE example_profile LIMIT PASSWORD_ROLLOVER_TIME 1;

The unit of this setting is days. The minimum value is one hour (1/24) and the maximum value is 60 (days). You can assign this profile to a user with the following statement:

ALTER USER example_user PROFILE example_profile;
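
If you want to double-check which rollover time a profile currently grants, you can query the dba_profiles view (the limit is shown in days, as configured above):

SELECT profile, limit
  FROM dba_profiles
  WHERE resource_name = 'PASSWORD_ROLLOVER_TIME';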

Now change the user’s password:

ALTER USER example_user IDENTIFIED BY thenewpassword;

Now you should be able to log in as this user with both the old and the new password. You can query the current status from the dba_users table:

SELECT username, account_status, profile
  FROM dba_users
  WHERE username = 'EXAMPLE_USER';

The value of the account_status column should have changed from OPEN to OPEN & IN ROLLOVER. This indicates that the user account is in the password rollover phase, and two passwords are active at the same time. You can end this period early with the following command:

ALTER USER example_user EXPIRE PASSWORD ROLLOVER PERIOD;

A final note: if you change the password again during the rollover period, only the original password (the one before the rollover period was started) and the latest password are valid. This means a user account can’t have more than two valid passwords at the same time.

Materialized views in Oracle

Most relational database management systems (RDBMS) support not only views, but also materialized views. Materialized views and normal views are both database objects used to present data to users, but they work in different ways.

A database view is a virtual table or a named query that presents data from one or more tables in a specific way. Views are not physical tables and do not store data directly; instead, they retrieve data from the underlying tables based on a specified query.

Materialized views are similar to normal views, but they store pre-computed results. When a materialized view is created, the results of the underlying query are computed and stored in the database. One advantage of materialized views is faster query performance by avoiding the need to compute the same results repeatedly. This is especially useful for complex and time-consuming queries, as the results can be stored and accessed quickly.

The syntax for creating a normal view in an Oracle database is as follows:

CREATE VIEW view_name AS SELECT … FROM … WHERE …;

To create a materialized view instead of a normal view you add the MATERIALIZED keyword:

CREATE MATERIALIZED VIEW view_name AS SELECT … FROM … WHERE …;

When creating a materialized view you should think about and decide on three aspects of materialized views:

  1. the refresh method,
  2. the refresh interval,
  3. and the storage properties

The refresh method

The refresh method determines how the data in the materialized view is updated or refreshed to reflect changes in the base tables.

  • COMPLETE: This one completely rebuilds the materialized view from scratch. It drops the existing contents of the materialized view and then re-executes the query to populate it with fresh data. This method can be resource-intensive and slow, especially for large materialized views.
  • FAST: Updates only the rows in the materialized view that have changed since the last refresh. It uses the materialized view logs on the base tables to identify the changed rows and then applies the changes to the materialized view (see the example after this list). It can be much faster than a complete refresh, especially if there are only a few changes to the data.
  • FORCE: Tries to perform a fast refresh if possible, but falls back to a complete refresh if necessary. This method is useful if you want to try to perform a fast refresh, but you’re not sure if it will be possible due to the nature of the data or the query.
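
A fast refresh requires such a materialized view log on each base table involved. Creating one could look like this (a sketch, assuming a base table named orders with a primary key):

CREATE MATERIALIZED VIEW LOG ON orders WITH PRIMARY KEY;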

You can specify the refresh method when creating the materialized view using the REFRESH keyword:

CREATE MATERIALIZED VIEW view_name
  REFRESH FAST
  AS SELECT ...;

If you do not specify a refresh method, FORCE is the default.

The refresh interval

The refresh interval controls how often the materialized view is automatically refreshed. It determines how frequently the materialized view is updated to reflect changes in the underlying data.

Some refresh interval options in Oracle are:

  • ON COMMIT: The materialized view is refreshed automatically every time a transaction that modifies the underlying data is committed. This interval is useful when you need to keep the materialized view up-to-date in near real-time.
  • ON DEMAND: The materialized view is refreshed only when you explicitly request a refresh using the DBMS_MVIEW.REFRESH procedure (see the example after this list).
  • START WITH … NEXT: With this interval, the materialized view is refreshed automatically at regular intervals. It is useful when you want to balance the need for up-to-date data with the resources required to refresh the materialized view.
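
For an ON DEMAND materialized view, such an explicit refresh is a simple PL/SQL call, for example:

BEGIN
  DBMS_MVIEW.REFRESH('view_name');
END;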

You can specify the refresh interval by adding it to the REFRESH clause when creating the materialized view:

CREATE MATERIALIZED VIEW view_name
  REFRESH FAST ON COMMIT
  AS SELECT ...;

The following materialized view gets refreshed every hour:

CREATE MATERIALIZED VIEW view_name
  REFRESH FAST START WITH SYSDATE NEXT SYSDATE + 1/24
  AS SELECT ...;

Storage properties

Storage properties affect how the data in the materialized view is stored and accessed. In Oracle, some of these are:

  • CACHE: The data is stored in the database buffer cache, which is a portion of memory used to cache frequently accessed data. It improves query performance by reducing disk I/O, but it can consume a significant amount of memory.
  • LOGGING: Changes to the materialized view data are logged in the database redo logs. This property ensures that changes to the materialized view can be recovered in case of a system failure but can result in additional overhead.
  • TABLESPACE: Allows you to specify the tablespace where the materialized view data is stored.

Again, you can specify these properties when creating the materialized view:

CREATE MATERIALIZED VIEW view_name
  CACHE
  LOGGING
  TABLESPACE tablespace_name
AS SELECT ... FROM ... WHERE ...;

Now you know the basics for creating materialized views in an Oracle database when needed. There is still more to learn about them; you can find the full reference in the Oracle documentation.

Grails Domain update optimisation

As many readers may know, we have been developing and maintaining some Grails applications for more than 10 years now. One of the main selling points of Grails is its domain model and its object-relational mapper (ORM) called GORM.

In general, ORMs are useful for easy and convenient development, at the cost of a bit of performance and flexibility. One of the best features of GORM is the availability of several flexible APIs for use-cases where dynamic finders are not enough. Let us look at a real-world example.

The performance problem

In one part of our application we have personal messages that are marked as read after viewing. For some users there can be quite a lot of messages, so we implemented a “mark all as read” feature. The naive implementation looks like this:

def markAllAsRead() {
    def user = securityService.loggedInUser
    def messages = Messages.findAllByAddresseeAndRead(user, false)
    messages.each { message ->
        message.read = true
        message.save()
    }
    Messages.withSession { session -> session.flush() }
}

While this is both correct and simple, it only works well for a limited amount of messages per user. Performance will degrade because all matching rows are loaded into domain objects, then modified and saved one by one to the session. Finally the session is persisted to the database. In our use case this could take several seconds, which is much too long for a good user experience.

DetachedCriteria to the rescue

GORM offers a much better solution for such use-cases that does not sacrifice expressiveness. Instead it offers a succinct API called “Where Queries” that creates DetachedCriteria and offers batch-updates.

def markAllAsRead() {
    def user = securityService.loggedInUser
    def messages = Messages.where {
        read == false
        addressee == user
    }
    messages.updateAll(read: true)
}

This implementation takes only a few milliseconds to execute with the same dataset as above, which is de facto native SQL performance.
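
Under the hood this issues a single bulk update statement, roughly equivalent to the following SQL (table and column names are only illustrative and depend on your GORM mapping):

UPDATE messages
  SET read = true
  WHERE read = false
    AND addressee_id = ?;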

Conclusion

Before cursing GORM for bad performance, have a deeper look at the alternative querying APIs like Where Queries, Criteria, DetachedCriteria, SQL Projections and Restrictions to enhance your ORM toolbox. Compared to dynamic finders and GORM methods on domain objects they offer better composability and performance without resorting to HQL or plain SQL.

Partitioning in Oracle Database: Because Who Wants to Search an Endless Table?

As data volumes continue to grow, managing large database tables and indexes can become a challenge. This is where partitioning comes in. Partitioning is a feature of database systems that allows you to divide large tables and indexes into smaller, more manageable parts, known as partitions. This can improve the performance and manageability of your database. Aside from performance considerations, maintenance operations such as backups and index rebuilds can become easier because they can be performed on smaller subsets of data.

This is achieved by reducing the amount of data that needs to be scanned during query execution. When a query is executed, the database can use the partitioning information to skip over partitions that do not contain the relevant data, instead of having to scan the entire table. This reduces the amount of I/O required to execute the query, which can result in significant performance gains, especially for large tables.

There are several types of partitioning available in Oracle Database, including range partitioning, hash partitioning, list partitioning, and composite partitioning. Each type of partitioning is suited to different use cases and can be used to optimize the performance of your database in different ways. In this blog post we will look at range partitioning.

Range partitioning

Here is an example of range-based partitioning in Oracle:

CREATE TABLE books (
  id NUMBER,
  title VARCHAR2(200),
  publication_year NUMBER
)
PARTITION BY RANGE (publication_year) (
  PARTITION p_before_2000 VALUES LESS THAN (2000),
  PARTITION p_2000s VALUES LESS THAN (2010),
  PARTITION p_2010s VALUES LESS THAN (2020),
  PARTITION p_after_2020 VALUES LESS THAN (MAXVALUE)
);

In this example, we have created a table called books that stores book titles, partitioned by the year of publication. We have defined four partitions: p_before_2000, p_2000s, p_2010s, and p_after_2020.

Now, when we insert data into the books table, it will automatically be placed in the appropriate partition based on the year of publication:

INSERT INTO books (id, title, publication_year)
  VALUES (1, 'Nineteen Eighty-Four', 1949);

This book will be inserted into partition p_before_2000, as the year of publication is before 2000. The following book will be placed into partition p_2000s:

INSERT INTO books (id, title, publication_year)
  VALUES (2, 'The Hunger Games', 2008);

When we query the books table, the database will only access the partitions that contain the data we need. For example, if we want to retrieve data for books published in 2015 and 2016, the database will only access partition p_2010s.

SELECT * FROM books WHERE publication_year >= 2015 AND publication_year <= 2016;

However, you should be aware that while partitioning can improve query performance for some types of queries, it can also negatively impact query performance for others, especially if the partitioning scheme does not align well with the query patterns. Therefore, you should tailor the partitioning to your needs and check if it brings the desired effect.
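
One simple way to check is to look at the execution plan of your query; if pruning works, it shows an operation like PARTITION RANGE SINGLE instead of a scan over all partitions:

EXPLAIN PLAN FOR
  SELECT * FROM books WHERE publication_year >= 2015 AND publication_year <= 2016;

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);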

PostgreSQL’s hstore module for semi-structured data

PostgreSQL has an extension module called hstore that allows you to store semi-structured data in a key/value format. The values of an hstore object are stored like in a dictionary, and you can also reference them in SQL queries.

To use the extension, it must first be loaded into the current database:

CREATE EXTENSION hstore;

Now you can use the data type hstore. Here, we create a table with some regular columns and one column of type hstore:

CREATE TABLE animals (
    id     serial PRIMARY KEY,
    name   text,
    props  hstore
);

Literals of type hstore are written in single quotes, containing a set of key => value pairs separated by commas:

INSERT INTO
    animals (name, props)
VALUES
    ('Octopus', 'arms => 8, habitat => sea, color => varying'),
    ('Cat',     'legs => 4, fur => soft'),
    ('Bee',     'legs => 6, wings => 4, likes => pollen');

The order of the pairs is irrelevant. Keys within an hstore are unique: if you declare the same key more than once, only one instance will be kept and the others will be discarded. You can use double quotes to include spaces or special characters:

'"fun-fact" => "Cats sleep for around 13 to 16 hours a day (70% of their life)"'

If the type of the literal can’t be inferred you can append ::hstore as a type indicator:

'legs => 4, fur => soft'::hstore

Both keys and values are stored as strings, so these two are equivalent:

'legs => 4, fur => soft'
'"legs" => "4", "fur" => "soft"'

Another limitation of hstore values is that they cannot be nested, which means they are less powerful than JSON objects.

You can use the -> operator to dereference a key, for example in a SELECT:

SELECT
    name, props->'legs' AS number_of_legs
FROM
    animals;

It returns NULL if the key is not present. Of course, you can also use it in a WHERE clause:

SELECT * FROM animals WHERE props->'fur' = 'soft';

There are many other operators and functions that can be used with hstore objects. Here is a small selection (please refer to the documentation for a complete list):

  • The || operator concatenates (merges) two hstores: a || b
  • The ? operator checks the existence of a key and returns a boolean value: props ? 'fur'
  • The - operator deletes a key from a hstore: props - 'fur'
  • The akeys function returns an array of a hstore’s keys: akeys(hstore)

You can also convert a hstore object to JSON: hstore_to_json(hstore). If you want to learn more about JSON in PostgreSQL you can continue reading this blog post: Working with JSON data in PostgreSQL

Copying and moving rows between tables in PostgreSQL

In this article, I’ll show some helpful tips for copying and moving data between database tables in PostgreSQL.

Copying

The simplest operation is copying rows from one table to another table. The associated SQL query is known to most. You can simply combine an INSERT with a SELECT:

INSERT INTO short_books
  SELECT *
    FROM books
    WHERE pages < 50;

Of course, if you want to copy a complete table, you must first create the target table with the same columns. Instead of just repeating the original CREATE TABLE with all the column definitions with a different name, there is a shortcut in the form of CREATE TABLE … LIKE.

CREATE TABLE books_copy (LIKE books);

If you want the copy to inherit all constraints, indices and defaults of the source table you can add INCLUDING ALL:

CREATE TABLE books_copy (LIKE books INCLUDING ALL);

Instead of executing a CREATE TABLE first and then an INSERT, you can also directly combine CREATE TABLE with a SELECT:

CREATE TABLE books_copy AS
  SELECT * FROM books;

Moving

The direct method of moving specific rows from one table to another table is a bit less known. You can of course first copy the rows into the target table and then delete the rows from the source table. However, this is also possible with just one statement, in one go. To do this, you need to know the RETURNING clause. It can be appended to a DELETE or UPDATE statement and causes the affected rows to be returned as the result set after the respective action:

DELETE FROM books
  WHERE pages < 50
  RETURNING
    title, author, pages;

This can be used in combination with the WITH … AS clause to move rows between tables with just one SQL statement:

WITH selection AS (
  DELETE FROM books
  WHERE pages < 50
  RETURNING *
)
INSERT INTO short_books
  SELECT * FROM selection;

The function of WITH can be thought of as defining a named temporary view that can only be used in the current statement.
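
Since RETURNING also works with UPDATE, the same pattern covers other “move-like” workflows, for example copying every row you modify into a hypothetical books_archive table in one statement:

WITH updated AS (
  UPDATE books
  SET pages = pages + 1
  WHERE pages < 50
  RETURNING *
)
INSERT INTO books_archive
  SELECT * FROM updated;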

SQLite in ASP.NET 6.0: Access your database file via HTTP Endpoint

It is one of our fundamental principles to always choose the most-easy-while-capable tool for a job. For this, we try not to shower our customers with the newest, most hip technology available, but to use a technology stack that

  • we are comfortable with,
  • lets us quickly provide the required minimum of customer value, and
  • keeps enough options open in case anything changes.

One of the most heavily affected aspects in that regard is the choice of data storage. There are a lot of different design paradigms one can choose from, but with the “most easy” aspect at hand, the question mostly revolves around the needs of the customer, not the wants (or “might be useful one day”) of the developer.

If your customer, for example, already has their PostgreSQL databases distributed in their Kubernetes cluster, it might be advisable to aim for that. If the customer does not have any integrated structure yet, I start with the question:

Is anything more necessary than a single-file database?

For one of our ASP.NET 6.0 applications, this was answered with the choice of SQLite, because it is native to the Microsoft universe including Entity Framework, which has many common use cases already answered, i.e. it lets you care more about your application logic than about database abstractions.
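
For context, wiring SQLite into an ASP.NET 6.0 application via Entity Framework Core only takes a few lines in Program.cs – a sketch with a placeholder MyDbContext, using the same sqlite.db file name that appears in the endpoint below:

// Program.cs (ASP.NET 6.0 minimal hosting model)
// requires the Microsoft.EntityFrameworkCore.Sqlite package
using Microsoft.EntityFrameworkCore;

var builder = WebApplication.CreateBuilder(args);

// MyDbContext is a placeholder for your own EF Core context
builder.Services.AddDbContext<MyDbContext>(options =>
    options.UseSqlite("Data Source=sqlite.db"));
builder.Services.AddControllers();

var app = builder.Build();
app.MapControllers();
app.Run();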

(It might be said that for .NET, an interesting project seems to have been LiteDB, which also operates on a single database file, but at the time of this writing it seems to have gone stale in development / support, and therefore it soon fell out of my favour. Sad.)

Now we have a project in which we are closely in touch with the customer and their live system. Very often it has been useful to access their platform and take a snapshot of the database for backup or to verify our logic, and with the technical overhead in that specific case (which required several steps of sequentially granting remote access), I thought to myself:

Why can’t I have a (sufficiently secured) HTTP endpoint that gives me this SQLite file as a File download?

The solution was a bit tricky: either the file was not read-accessible during the HTTP request (having been opened already), or the FileStream was not possible because it was being closed too early, or the encoding of the resulting file would not fit. What finally worked was:

        private readonly static System.Text.Encoding enc1252 =
            CodePagesEncodingProvider.Instance.GetEncoding(1252);

        [HttpGet("database")]
        public ActionResult GetDatabase()
        {
            var dataSource = "sqlite.db";
            if (!System.IO.File.Exists(dataSource))
            {
                return NotFound(dataSource);
            }
            db.SaveChanges();
            // Note: CloseConnection() was not required!

            using var fs = new FileStream(dataSource, FileMode.Open, FileAccess.Read, FileShare.ReadWrite);
            var reader = new StreamReader(fs, enc1252);
            var data = reader.ReadToEnd();
            var ms = new MemoryStream(enc1252.GetBytes(data));
            return new FileStreamResult(ms, "application/octet-stream");
        }

Feel free to comment on that approach, because I found it more than non-trivial to arrive there, but maybe I missed something obvious. Some definite stumbling stones were:

  • The MIME type “application/octet-stream”, which for some reason would not work with the more adequately sounding choice “application/x-sqlite3” – I have no idea why.
  • The encoding, which on our system was the Windows code page 1252 default, and which needed to be specified not only in the interpretation of our byte stream (second location), but also in the definition of the StreamReader itself (first location).
  • Please note that if your database is encoded via CP-1252, you also need the System.Text.Encoding.CodePages package (available via NuGet)
  • What looks like a missing “using” is really intentional: if the StreamReader was opened with “using var reader = …”, it was disposed before the request was handled correctly – I ran into a FileStreamResult error: “Cannot access a closed Stream.” Keeping the StreamReader open solved that, and the internet told me that this is still not a memory leak: the StreamReader gets disposed when the FileStream fs is disposed (see the using in front of that) – but it still feels weird.

If you have any comments on that, I’d be very glad to learn from them, but if you don’t and you just have another use case for that problem – I’m happy to help!

PostgreSQL’s new MERGE command

PostgreSQL version 15 introduces a new SQL command: the MERGE command. This allows merging a table into another table. The MERGE command has existed for some time in other databases such as Oracle or SQL Server.

The principle of this command is that you have a target table in which you want to insert or remove data based on a source table under certain conditions, or you want to update existing entries with data from the source table. The source table doesn’t have to be a real table; it can just as easily be a SELECT query.

How to use it, step-by-step

The command begins with MERGE INTO, followed by the name of the target table. We call it dest here:

MERGE
  INTO dest ...

Then you specify the source table with USING, here we call it src:

MERGE
  INTO dest
  USING src
  ...

If you want to use a SELECT query as the source instead of a real table, you can do it like this:

MERGE
  INTO dest
  USING (SELECT ... FROM ...) AS src
  ...

Now you need a condition that is used to match entries from one table to entries from the other table. This is specified after ON. In this example we simply use the IDs of the two tables:

MERGE
  INTO dest
  USING src
  ON dest.id=src.id
  ...

This is followed by a case distinction that describes what should happen if the condition either applies or not. The possible actions can be: UPDATE, DELETE, INSERT, or DO NOTHING.

The two cases are specified with WHEN MATCHED THEN and WHEN NOT MATCHED THEN:

MERGE
  INTO dest
  USING src
  ON dest.id=src.id
  WHEN MATCHED THEN
    UPDATE SET ...
  WHEN NOT MATCHED THEN
    INSERT (...) VALUES (...);

If a match exists, then reasonable actions are UPDATE, DELETE, or DO NOTHING. If no match exists, then reasonable actions are INSERT or DO NOTHING.

In the WHEN cases, additional conditions can be specified with AND:

MERGE
  INTO dest
  USING src
  ON dest.id=src.id
  WHEN MATCHED AND dest.value > src.value THEN
    DELETE
  WHEN MATCHED THEN
    UPDATE SET ...
  WHEN NOT MATCHED THEN
    DO NOTHING;

A realistic example

Here’s an example demonstrating a use case that might occur in the real world:

MERGE
  INTO account a
  USING transaction t
  ON a.id=t.account_id
WHEN MATCHED THEN
  UPDATE SET balance = a.balance + t.amount
WHEN NOT MATCHED THEN
  INSERT (id, balance) VALUES (t.account_id, t.amount);

This statement processes a table of monetary transactions and applies them to their matching customer accounts by adding the amount of each transaction to the balance of the matching account. If no matching account exists it will be created and the initial balance is the amount of the first transaction.

Speeding up your HQL

Using an object-relational mapper (ORM) to persist your entities, manage their state and query subsets for lists or reports is a widespread practice and may speed up your development.

If not used correctly, it may introduce unexpected performance problems because of inefficient default queries and the overhead this mapping introduces, as most of the time table rows are converted to domain objects. Often this results in many queries and the n+1 query problem.

Nevertheless, the benefits of using an ORM may outweigh the problems and most problems can be mitigated by features and a correct usage of the tool.

Today I want to present a performance problem we had using GORM/Hibernate and how we easily fixed it without major code restructuring or workarounds.

The Problem

We used an HQL query to load quite a lot of entities, which took about 3 seconds. This was acceptable for our customer. If the user tried to narrow down the results using a filter, however, loading a smaller number of the same entities took over a minute. Obviously, this was totally unacceptable and counter-intuitive.

The Analysis

Further analysis revealed that a particular part of the WHERE clause was responsible for the observed slowdown:

FROM Report r
WHERE r.project.proposal.id = p.id

So we filtered the root entity Report on an entity called Proposal, but to evaluate the condition the associated Project entity had to be loaded for every report under consideration. Even though we are only filtering by entity ids, the innocent-looking path r.project.proposal.id leads to loading and mapping of hundreds of Project entities.

The Solution

In our example we can fortunately do a lot better without big changes to our domain model, the application code or the query.

The relevant part of the schema is simple: both a Report and a Proposal are associated with a certain Project, i.e. each of them carries the project’s id as a foreign key. Remember that in Hibernate your entities contain only the id of their one-to-one mapped sub-entities by default. This means that if we change the filter clause to

WHERE r.project.id = p.project.id

we skip loading and mapping of all the Project entities and only load the needed reports and proposals. Since both of them contain the project id, we can use it in our filter. This resulted in a more than 10x speedup with such a simple and non-invasive change.

General Takeaway

ORMs can be a great tool, but it is very easy to shoot yourself in the foot. With enough care you can achieve both simple code and good performance, but you may run into non-obvious problems every now and then.