Organize complex SQL queries with Common Table Expressions

Complex SQL database queries often contain subqueries.

SELECT * FROM ... 
   WHERE name IN (SELECT name 
         FROM ... 
         WHERE ...)

These can quickly become unreadable, especially if multiple subqueries are involved. A nice way to organize queries with multiple subqueries is to use Common Table Expressions (CTEs), or colloquially: “WITH queries”. They are supported by most contemporary SQL databases.

When using Common Table Expressions you list all the subqueries after a WITH keyword in front of the actual query and assign a name to each subquery:

WITH
  <subquery1_name> AS (SELECT ...),
  <subquery2_name> AS (SELECT ...),
  [ ... ]
SELECT ...
FROM ...
[ ... ]

You can now refer to these subqueries in the main query by their names. They are conceptually just like temporary views. You can also refer to a subquery in the main query multiple times, whereas if the subquery was inlined you would have to repeat it. A subquery defined as a Common Table Expression can also refer to the preceding subqueries in the subquery list.
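
For example, a later subquery can build on an earlier one. A minimal sketch, assuming hypothetical orders data:

WITH
  recent_orders AS
    (SELECT * FROM orders
      WHERE order_date >= DATE '2018-01-01'),
  big_recent_orders AS
    (SELECT * FROM recent_orders
      WHERE amount > 1000)
SELECT customer_id, COUNT(*) order_count
FROM big_recent_orders
GROUP BY customer_id;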

Recursion

A Common Table Expression can even refer to itself, which is a recursive definition. In some database systems you have to add the RECURSIVE keyword after WITH, in others you can leave it out. Here’s a recursive CTE that calculates factorial numbers:

WITH RECURSIVE factorials (n, fact) AS 
  (SELECT 0, 1
   UNION ALL
   SELECT n+1, (n+1)*fact FROM factorials
          WHERE n < 5)
SELECT * FROM factorials;
N FACT
0    1
1    1
2    2
3    6
4   24
5  120

You could also use such a recursive query to descend into a tree hierarchy.
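
For example, to collect all transitive subordinates of a given employee in a hypothetical employees table (id, name, manager_id), a sketch could look like this:

WITH RECURSIVE subordinates (id, name, manager_id) AS
  (SELECT id, name, manager_id
     FROM employees
     WHERE id = 1                -- the root of the subtree
   UNION ALL
   SELECT e.id, e.name, e.manager_id
     FROM employees e
     JOIN subordinates s ON e.manager_id = s.id)
SELECT * FROM subordinates;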

While the use cases for recursive queries are less frequent, I find the general concept of Common Table Expressions very useful to make complex queries more readable.

Determining the sizes of Oracle database tables and indexes

For one of our projects we store large amounts of time-series data in an Oracle database. Sometimes we want to get an overview of how big the tables and related indexes are. Some database client tools like Toad for Oracle can show this information directly in their user interface, but if you use other tools like the SQuirreL SQL Client or JetBrains DataGrip you have to gather this information yourself via SQL queries.

DBA_SEGMENTS and DBA_INDEXES

For Oracle databases this meta information is available via the DBA_SEGMENTS and DBA_INDEXES views. To query the sizes of several tables in MB use the following query:

SELECT segment_name, segment_type, bytes/1024/1024 MB
  FROM dba_segments
  WHERE segment_type = 'TABLE'
    AND segment_name IN ('TABLE_NAME_1', 'TABLE_NAME_2');

This SQL query returns the sizes of TABLE_NAME_1 and TABLE_NAME_2.
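
Note that segment names are not necessarily unique across schemas. If the same table name exists in several schemas, you can additionally filter by owner; a sketch:

SELECT owner, segment_name, segment_type, bytes/1024/1024 MB
  FROM dba_segments
  WHERE segment_type = 'TABLE'
    AND owner = 'SCHEMA_NAME'
    AND segment_name IN ('TABLE_NAME_1', 'TABLE_NAME_2');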

If you want to see the sizes of all the indexes that are associated with a table or a set of tables you can use the following query:

SELECT idx.table_name, idx.index_name, SUM(bytes)/1024/1024 MB
  FROM dba_segments seg,
       dba_indexes idx
  WHERE idx.table_owner = 'SCHEMA_NAME'
    AND idx.table_name IN ('TABLE_NAME_1', 'TABLE_NAME_2')
    AND idx.owner       = seg.owner
    AND idx.index_name  = seg.segment_name
  GROUP BY idx.index_name, idx.table_name;

Of course, you have to replace SCHEMA_NAME and TABLE_NAME_x with the actual names in your database.

Unfortunately, access to this kind of meta information is different for each database system, and the queries above only work for Oracle databases.
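
For comparison, PostgreSQL provides built-in functions for the same kind of information; a minimal sketch (the table name is a placeholder):

SELECT pg_size_pretty(pg_total_relation_size('schema_name.table_name')) total_size,
       pg_size_pretty(pg_indexes_size('schema_name.table_name')) index_size;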

Using PostgreSQL for time-series data

The number of sensors and other devices that periodically collect data is ever growing. The advent of the Internet of Things (IoT) demands a way of storing and analyzing all this so-called time-series data. There are many options for such data – the most prominent being specialized time-series databases like InfluxDB or well-suited, nicely scaling databases like Apache Cassandra.

The problem is that you have to tailor your solution to one of these technologies, whereas SQL comes with mature database management systems (DBMS) and drivers/bindings for almost any programming language.

Why not use a plain SQL database?

Relational SQL databases are a mature and well-understood piece of technology, albeit not as sexy as all those new NoSQL databases. Using them for time-series data may not be a problem for smaller datasets, but sooner or later your ingestion and query performance will degrade massively. So in general a traditional relational DBMS (RDBMS) is not a good option for storing all your time-series data.

Why use PostgreSQL with TimescaleDB?

With the PostgreSQL extension TimescaleDB you get the best of both worlds: a well-known query language, robust tools and scalability.

You access and manage your time-series database just like an ordinary PostgreSQL database. Almost everything, including replication and backups, will continue to work as before.

You do not have to deal with limitations of specialized solutions or learn a completely new ecosystem just for one aspect of your solution.
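
For example, a time-series table is created with plain SQL; a single extra call turns it into a TimescaleDB hypertable. A minimal sketch, assuming the extension is installed and a hypothetical measurements table:

CREATE EXTENSION IF NOT EXISTS timescaledb;

CREATE TABLE measurements (
  time       TIMESTAMPTZ       NOT NULL,
  sensor_id  INTEGER           NOT NULL,
  value      DOUBLE PRECISION
);

-- partition the table by time; afterwards it can be queried like any other table
SELECT create_hypertable('measurements', 'time');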

The future

We are successfully using TimescaleDB in one of our projects and will continue to share tips and experiences with this technology, taking its rising importance into account.


Selecting all columns of a database table with an SQL GROUP BY expression

Suppose we have an SQL database table named “temperatures” with the following contents:

LOCATION  TIME        CELSIUS
inside    2018-08-01  24
inside    2018-08-02  28
inside    2018-08-03  21
inside    2018-08-04  28
outside   2018-08-01  29
outside   2018-08-02  31
outside   2018-08-03  25
outside   2018-08-04  30

We want to find the highest temperature for each location. We use the MAX aggregate function and a GROUP BY expression:

SELECT location, MAX(celsius) celsius
FROM temperatures
GROUP BY location;

As expected, the result is:

LOCATION  CELSIUS
outside   31
inside    28

Now suppose we also want to know when each of these extreme temperatures occurred. Naively, we try the following query:

SELECT location, time, MAX(celsius) celsius
FROM temperatures
GROUP BY location;

The response is an error: “not a GROUP BY expression”. In a GROUP BY query all selected columns must either be part of the GROUP BY clause or an aggregate.

To achieve what we want we can use a JOIN:

SELECT
  t.location, t.time, t.celsius
FROM
  temperatures t
JOIN (SELECT location, MAX(celsius) celsius
      FROM temperatures
      GROUP BY location) tmax
ON
  t.location=tmax.location AND t.celsius=tmax.celsius;

This query results in multiple rows per location if the maximum temperature was recorded at different times:

LOCATION  TIME        CELSIUS
outside   2018-08-02  31
inside    2018-08-04  28
inside    2018-08-02  28

If we are only interested in the first occurrence of the maximum temperature per location, we can use the following query:

SELECT
  location,
  MIN(time) KEEP (DENSE_RANK LAST ORDER BY celsius) time,
  MAX(celsius) celsius
FROM
  temperatures
GROUP BY
  location;
LOCATION  TIME        CELSIUS
inside    2018-08-02  28
outside   2018-08-02  31

Here we don’t need a JOIN anymore, because the select clause for the time column is an aggregate as well. Note that the KEEP (DENSE_RANK LAST ...) clause is Oracle-specific.
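
On database systems that support window functions, a roughly equivalent query can be formulated with ROW_NUMBER(); a sketch:

SELECT location, time, celsius
FROM (SELECT location, time, celsius,
             ROW_NUMBER() OVER (PARTITION BY location
                                ORDER BY celsius DESC, time) rn
      FROM temperatures) ranked
WHERE rn = 1;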

Entity Framework migrations with multiple database contexts

The .NET Entity Framework provides functionality for automatic database migrations. Every time your application code requires a change of the database schema you should create a migration, so that the existing database schema is updated when a new version of the application is deployed. Examples of such changes are new entity classes or the addition and removal of properties of existing entity classes. The Entity Framework functionality for database migrations is called Code First Migrations.

Code First Migrations are managed via the so-called Package Manager Console. That’s what it’s called in Visual Studio, because it’s usually used for package management, but it’s basically a general PowerShell command line interface. After you have created the database context class and the entity model classes for your application, you create an initial migration (usually called InitialCreate), which captures the original state of the database schema for your application:

Add-Migration InitialCreate

This will create a new migration class called InitialCreate in the Migrations folder. The filename is prefixed with a timestamp: 201810702207458_InitialCreate.cs. Each migration class has an Up() method, which applies the migration, and a Down() method, which rolls it back.

Each subsequent migration only describes the difference from its predecessor. For example, suppose you add a new string property Email to your User entity class and add a new migration:

Add-Migration AddUserEmail

The tool will scan your entity classes, compare the current state to the state of the previous migration, calculate the difference and create a new migration class, which adds a new column to the database schema.

When the migrations are run on the target system they are tracked in a special database table called __MigrationHistory.
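
You can inspect this table with plain SQL; a sketch, assuming the default layout of the Entity Framework 6 history table:

SELECT MigrationId, ContextKey, ProductVersion
FROM __MigrationHistory
ORDER BY MigrationId;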

Multiple database contexts

The above usage of Code First Migrations is well documented. Here I want to describe a feature that is documented in less detail, because it’s less commonly used: migrations with multiple database contexts.

Let’s assume you have two database contexts: CoreDataContext and MeasurementDataContext. In this case you have to create two migration configuration classes, which inherit from DbMigrationsConfiguration. Create two subdirectories under the Migrations directory, one for each database context, and in each subdirectory create a migrations Configuration class:

namespace Migrations.CoreData
{
  internal sealed class Configuration : DbMigrationsConfiguration<CoreDataContext>
  {
    public Configuration()
    {
      MigrationsDirectory = "Migrations\\CoreData";
      AutomaticMigrationsEnabled = true;
    }
  }
}

and

namespace Migrations.MeasurementData
{
  internal sealed class Configuration : DbMigrationsConfiguration<MeasurementDataContext>
  {
    public Configuration()
    {
      MigrationsDirectory = "Migrations\\MeasurementData";
      AutomaticMigrationsEnabled = true;
    }
  }
}

For each configuration you have to set the MigrationsDirectory property accordingly. The AutomaticMigrationsEnabled property is optional. If it is set, the migrations will be applied automatically at the start of the application.

Now, if you run a migration command like Add-Migration, you have to add the -ConfigurationTypeName option, which specifies the Configuration class for the desired database context:


Add-Migration InitialCreate -ConfigurationTypeName Migrations.CoreData.Configuration

Add-Migration InitialCreate -ConfigurationTypeName Migrations.MeasurementData.Configuration

Add-Migration AddUserEmail -ConfigurationTypeName Migrations.CoreData.Configuration

Add-Migration AddMeasurementTimestamp -ConfigurationTypeName Migrations.MeasurementData.Configuration

The migration classes will now be created in the correct subdirectories.

Handling database warnings with JDBC

Database administrators can set lifetimes for user passwords. This can be considered a security feature, so that passwords get updated regularly. But if one of your software services logs into the database with such an account, you want to know in good time when the password will expire, so that you can update it. Otherwise your service will stop working unexpectedly.

Of course, you can mark the date in your calendar in order to be reminded beforehand, and you probably should. But there is an additional measure you can take. The database administrator can not only set the lifetime of a password, but also a “grace period”. For example:

ALTER PROFILE app_user LIMIT PASSWORD_LIFE_TIME 180 PASSWORD_GRACE_TIME 14;

This SQL command sets the password lifetime to 180 days (roughly six months) and the grace period to 14 days (two weeks). If you log into the database with this user you will see a warning two weeks before the password expires. For Oracle databases the warning looks like this:

ORA-28002: the password will expire within 14 days

But your service logs in automatically, without any user interaction. Is it possible to programmatically detect a warning like this? Yes, it is. For example, with JDBC the following code detects warnings after a connection has been established:

// Error codes for ORA-nnnnn warnings
static final int passwordWillExpireSoon = 28002;
static final int accountWillExpireSoon = 28011;

void handleWarnings(Connection connection) throws SQLException {
    SQLWarning warning = connection.getWarnings();
    while (null != warning) {
        String message = warning.getMessage();
        log.warn(message);

        int code = warning.getErrorCode();
        if (code == passwordWillExpireSoon) {
            System.out.println("ORA-28002 warning detected");
            // handle appropriately
        }
        if (code == accountWillExpireSoon) {
            System.out.println("ORA-28011 warning detected");
            // handle appropriately
        }
        warning = warning.getNextWarning();
    }
}

Instead of just logging the warnings, you can use this code to send an email to your address, so that you will get notified about a soon-to-be-expired password in advance. The error code depends on your database system.

With this in place you should not be unpleasantly surprised by an expired password. Of course, this only works if the administrator sets a grace period, so you should agree on this approach with your administrator.
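
Additionally, if you have the necessary privileges, you can query the expiry date directly, for example from Oracle’s DBA_USERS view (the user name is a placeholder):

SELECT username, account_status, expiry_date
  FROM dba_users
  WHERE username = 'APP_USER';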

Using PostgreSQL with Entity Framework

The most widespread O/R (object-relational) mapper for the .NET platform is the Entity Framework. It is most often used in combination with Microsoft SQL Server as the database, but the architecture of the Entity Framework allows it to be used with other databases as well. A popular and reliable open-source SQL database is PostgreSQL. This article shows how to use a PostgreSQL database with the Entity Framework.

Installing the Data Provider

First you need an Entity Framework data provider for PostgreSQL. It is called Npgsql. You can install it via NuGet. If you use Entity Framework 6 the package is called EntityFramework6.Npgsql:

> Install-Package EntityFramework6.Npgsql

If you use Entity Framework Core for the new .NET Core platform, you have to install a different package:

> Install-Package Npgsql.EntityFrameworkCore.PostgreSQL

Configuring the Data Provider

The next step is to configure the data provider and the database connection string in the App.config file of your project, for example:

<configuration>
  <!-- ... -->

  <entityFramework>
    <providers>
      <provider invariantName="Npgsql"
         type="Npgsql.NpgsqlServices, EntityFramework6.Npgsql" />
    </providers>
  </entityFramework>

  <system.data>
    <DbProviderFactories>
      <add name="Npgsql Data Provider"
           invariant="Npgsql"
           description="Data Provider for PostgreSQL"
           type="Npgsql.NpgsqlFactory, Npgsql"
           support="FF" />
    </DbProviderFactories>
  </system.data>

  <connectionStrings>
    <add name="AppDatabaseConnectionString"
         connectionString="Server=localhost;Database=postgres"
         providerName="Npgsql" />
  </connectionStrings>

</configuration>

Possible parameters in the connection string are Server, Port, Database, User Id and Password. Here’s an example connection string using all parameters:

Server=192.168.0.42;Port=5432;Database=mydatabase;User Id=postgres;Password=topsecret

The database context class

To use the configured database you create a database context class in the application code:

class AppDatabase : DbContext
{
  private readonly string schema;

  public AppDatabase(string schema)
    : base("AppDatabaseConnectionString")
  {
    this.schema = schema;
  }

  public DbSet<User> Users { get; set; }

  protected override void OnModelCreating(DbModelBuilder builder)
  {
    builder.HasDefaultSchema(this.schema);
    base.OnModelCreating(builder);
  }
}

The parameter to the base constructor call is the name of the connection string configured in App.config. In this example the OnModelCreating method is overridden to set the name of the schema to use; here the schema name is injected via the constructor. For PostgreSQL the default schema is called “public”:

using (var db = new AppDatabase("public"))
{
  var admin = db.Users.First(user => user.UserName == "admin");
  // ...
}

The Entity Framework mapping of entity names and properties is case sensitive. To make the mapping work you have to preserve the case when creating the tables by putting the table and column names in double quotes:

create table public."Users" ("Id" bigserial primary key, "UserName" text not null);

With these basics you’re now set up to use PostgreSQL in combination with the Entity Framework.