Oracle database date and time literals

For one of our projects I work with time series data stored in an Oracle database, so I write a lot of SQL queries involving dates and timestamps. Most Oracle SQL queries I came across online use the TO_DATE function to specify date and time literals within queries:

SELECT * FROM events WHERE created >= TO_DATE('2012-04-23 16:30:00', 'YYYY-MM-DD HH24:MI:SS')

So this is what I started using as well. Of course, this is very flexible, because you can specify exactly the format you want to use. On the other hand, it is very verbose.

From other database systems like PostgreSQL I was used to specifying dates in queries as simple string literals:

SELECT * FROM events WHERE created BETWEEN '2018-01-01' AND '2018-01-31'

This doesn’t work in Oracle, but I was happy to find out that Oracle supports short date and timestamp literals in another form:

SELECT * FROM events WHERE created BETWEEN DATE'2018-01-01' AND DATE'2018-01-31'

SELECT * FROM events WHERE created > TIMESTAMP'2012-04-23 16:30:00'

These date/time literals were introduced in Oracle 9i, which isn’t exactly recent. However, since most online tutorials and examples seem to use the TO_DATE function, you may be happy to find out about this little convenience, just like me. Note that the ANSI DATE literal only accepts the format YYYY-MM-DD and carries no time portion, so for values with a time of day you need a TIMESTAMP literal.

Database table naming conventions

Naming things well is an important part of writing maintainable software, and renaming things once their names have become established in a code base can be tedious work. This is just as true for the names in an application’s database schema, where a schema change usually requires a database migration script. That’s why you should take some time beforehand to set up a naming convention.

Many applications use object-relational mappers (ORM), which have a default naming convention to map class and property names to table and column names. But if you’re not using an ORM, you should set up conventions as well. Here are some tips:

  • Be consistent. For example, choose either only plural or singular for table names, e.g. “books” or “book”, and stick to it. Many sources recommend singular for table names.
  • On abbreviations: Some database systems like Oracle have a character limit for names. The limit for Oracle database table names is 30 characters, which means abbreviations are almost inevitable. If you introduce abbreviations, be consistent and document them in a glossary, for example in the project wiki.
  • Separate word boundaries with underscores and form a hierarchy like “namespace_entity_subentity”, e.g. “blog_post_author”. This way you can sort the tables by name and have them grouped by topic.
  • Avoid unnecessary type markers. A table is still a table if you don’t prefix it with “tbl_”, and adding a “_s” postfix to a column of type string doesn’t really add useful information that couldn’t be seen in the schema browser of any database tool. This is similar to Hungarian notation, which has fallen out of use in today’s software development. If you still want to mark special database objects, for example materialised views, then you should prefer a postfix, e.g. “_mv”, over a prefix, because a prefix would mess up the lexicographic hierarchy established by the previous tip.
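
For illustration, a small schema following these conventions might look like this (all names are hypothetical):

CREATE TABLE blog_post (
  id         NUMBER PRIMARY KEY,
  title      VARCHAR2(200) NOT NULL,
  created_at DATE NOT NULL
);

CREATE TABLE blog_post_author (
  id           NUMBER PRIMARY KEY,
  blog_post_id NUMBER NOT NULL REFERENCES blog_post(id),
  name         VARCHAR2(100) NOT NULL
);

-- a materialised view marked with a postfix, so it still sorts next to its topic
CREATE MATERIALIZED VIEW blog_post_stats_mv AS
  SELECT blog_post_id, COUNT(*) author_count
  FROM blog_post_author
  GROUP BY blog_post_id;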

And one final piece of advice: document your conventions so that other team members are aware of them, and make them mandatory.

Organize complex SQL queries with Common Table Expressions

Complex SQL database queries often contain subqueries.

SELECT * FROM ...
  WHERE name IN (SELECT name
                 FROM ...
                 WHERE ...)

These can quickly become unreadable, especially if multiple subqueries are involved. A nice way to organise queries with multiple subqueries is Common Table Expressions (CTE), or colloquially: “WITH queries”. They are supported by most contemporary SQL databases.

When using Common Table Expressions you list all the subqueries after a WITH keyword in front of the actual query and assign a name to each subquery:

WITH
  <subquery1_name> AS (SELECT ...),
  <subquery2_name> AS (SELECT ...),
  [ ... ]
SELECT ...
FROM ...
[ ... ]

You can now refer to these subqueries in the main query by their names. Conceptually they are just like temporary views. You can also refer to a subquery multiple times in the main query, whereas if the subquery were inlined you would have to repeat it. A subquery defined as a Common Table Expression can also refer to the preceding subqueries in the subquery list.
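
As a sketch (table and column names are made up), here is a CTE that is referenced twice in the main query, once directly and once in a subquery:

WITH location_avg AS (
  SELECT location, AVG(celsius) avg_celsius
  FROM temperatures
  GROUP BY location
)
SELECT *
FROM location_avg
WHERE avg_celsius = (SELECT MAX(avg_celsius) FROM location_avg);

Without the CTE, the grouping subquery would have to be written out twice.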

Recursion

A Common Table Expression can even refer to itself, which makes it a recursive definition. In some database systems you have to add the RECURSIVE keyword after WITH (e.g. PostgreSQL), in others you have to leave it out (e.g. Oracle). Here’s a recursive CTE that calculates factorial numbers:

WITH RECURSIVE factorials (n, fact) AS (
  SELECT 0, 1
  UNION ALL
  SELECT n + 1, (n + 1) * fact
  FROM factorials
  WHERE n < 6
)
SELECT * FROM factorials;

N FACT
0    1
1    1
2    2
3    6
4   24
5  120

You could also use such a recursive query to descend into a tree hierarchy.
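
For example, here is a sketch of such a query against a hypothetical employees table, where manager_id points at each employee’s manager:

WITH RECURSIVE employee_tree (id, name, depth) AS (
  SELECT id, name, 1
  FROM employees
  WHERE manager_id IS NULL      -- start at the root of the hierarchy
  UNION ALL
  SELECT e.id, e.name, t.depth + 1
  FROM employees e
  JOIN employee_tree t ON e.manager_id = t.id   -- descend one level per iteration
)
SELECT * FROM employee_tree;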

While the use cases for recursive queries are less frequent, I find the general concept of Common Table Expressions very useful to make complex queries more readable.

Determining the sizes of Oracle database tables and indexes

For one of our projects we store large amounts of time-series data in an Oracle database. Sometimes we want to get an overview of how big the tables and related indexes are. Some database client tools like Toad for Oracle can show this information directly in their user interface, but if you use other tools like the SQuirreL SQL Client or JetBrains DataGrip you have to gather this information yourself via SQL queries.

DBA_SEGMENTS and DBA_INDEXES

For Oracle databases this meta information is available via the DBA_SEGMENTS and DBA_INDEXES data dictionary views. To query the sizes of several tables in MB, use the following query:

SELECT segment_name, segment_type, bytes/1024/1024 MB
  FROM dba_segments
  WHERE segment_type = 'TABLE'
    AND segment_name IN ('TABLE_NAME_1', 'TABLE_NAME_2');

This SQL query returns the sizes of TABLE_NAME_1 and TABLE_NAME_2 in megabytes. Note that the DBA_ views require elevated privileges; without them you can query the USER_SEGMENTS and USER_INDEXES views, which provide the same information for your own schema.

If you want to see the sizes of all the indexes that are associated with a table or a set of tables you can use the following query:

SELECT idx.table_name, idx.index_name, SUM(seg.bytes)/1024/1024 MB
  FROM dba_segments seg,
       dba_indexes idx
  WHERE idx.table_owner = 'SCHEMA_NAME'
    AND idx.table_name IN ('TABLE_NAME_1', 'TABLE_NAME_2')
    AND idx.owner       = seg.owner
    AND idx.index_name  = seg.segment_name
  GROUP BY idx.index_name, idx.table_name;

Of course, you have to replace SCHEMA_NAME and TABLE_NAME_x with the names in your database.

Unfortunately, access to this kind of meta information is different for each database system, and the queries above only work for Oracle databases.
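
For comparison, in PostgreSQL the same kind of information is available through built-in size functions; a minimal sketch (the table name is a placeholder):

SELECT pg_size_pretty(pg_relation_size('table_name_1'))       AS table_size,
       pg_size_pretty(pg_indexes_size('table_name_1'))        AS index_size,
       pg_size_pretty(pg_total_relation_size('table_name_1')) AS total_size;

pg_total_relation_size includes the table itself, its indexes and any TOAST data.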

Using PostgreSQL for time-series data

The number of sensors and other devices that periodically collect data is ever growing. This advent of the internet of things (IoT) demands a way of storing and analyzing all this so-called time-series data. There are many options for such data – the most prominent being specialised time-series databases like InfluxDB or general-purpose databases that scale nicely, like Apache Cassandra.

The problem is that you have to tailor your solution to one of these technologies, whereas SQL already comes with mature database management systems (DBMS) and drivers/bindings for almost any programming language.

Why not use a plain SQL database?

Relational SQL databases are a mature and well-understood piece of technology, albeit not as sexy as all those new NoSQL databases. Using them for time-series data may not be a problem for smaller datasets, but sooner or later your ingestion and query performance will degrade massively. So in general it is not a good option to store all your time-series data in a traditional relational DBMS (RDBMS).

Why use PostgreSQL with TimescaleDB?

With the PostgreSQL extension TimescaleDB you get the best of both worlds: a well-known query language, robust tools and scalability.

You access and manage your time-series database just like your ordinary PostgreSQL database. Almost everything including replication and backups will continue to work like before.

You do not have to deal with limitations of specialized solutions or learn a completely new ecosystem just for one aspect of your solution.
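
To illustrate how little changes, here is a minimal sketch of a TimescaleDB setup, assuming a hypothetical measurements table with a time column:

-- enable the extension once per database
CREATE EXTENSION IF NOT EXISTS timescaledb;

CREATE TABLE measurements (
  time      TIMESTAMPTZ NOT NULL,
  sensor_id INTEGER     NOT NULL,
  value     DOUBLE PRECISION
);

-- turn the plain table into a hypertable partitioned by time
SELECT create_hypertable('measurements', 'time');

From then on the table is queried and managed with ordinary SQL.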

The future

We are successfully using TimescaleDB in one of our projects and will continue to share tips and experiences with this technology, taking its rising importance into account.

Selecting all columns of a database table with an SQL GROUP BY expression

Suppose we have an SQL database table named “temperatures” with the following contents:

LOCATION  TIME        CELSIUS
inside    2018-08-01  24
inside    2018-08-02  28
inside    2018-08-03  21
inside    2018-08-04  28
outside   2018-08-01  29
outside   2018-08-02  31
outside   2018-08-03  25
outside   2018-08-04  30

We want to find the highest temperature for each location. We use the MAX aggregate function and a GROUP BY expression:

SELECT location, MAX(celsius) celsius
FROM temperatures
GROUP BY location;

As expected, the result is:

LOCATION  CELSIUS
outside   31
inside    28

Now suppose we also want to know when each of these extreme temperatures occurred. Naively, we try the following query:

SELECT location, time, MAX(celsius) celsius
FROM temperatures
GROUP BY location;

The response is an error: “not a GROUP BY expression” (ORA-00979 in Oracle). In a query with a GROUP BY clause, every selected column must either be part of the GROUP BY clause or an aggregate.

To achieve what we want we can use a JOIN:

SELECT
  t.location, t.time, t.celsius
FROM
  temperatures t
JOIN (SELECT location, MAX(celsius) celsius
      FROM temperatures
      GROUP BY location) tmax
ON
  t.location=tmax.location AND t.celsius=tmax.celsius;

This query results in multiple rows per location if the maximum temperature was recorded at different times:

LOCATION  TIME        CELSIUS
outside   2018-08-02  31
inside    2018-08-04  28
inside    2018-08-02  28

If we are only interested in the first occurrence of the maximum temperature per location, we can use the following query:

SELECT
  location,
  MIN(time) KEEP (DENSE_RANK LAST ORDER BY celsius) time,
  MAX(celsius) celsius
FROM
  temperatures
GROUP BY
  location;

LOCATION  TIME        CELSIUS
inside    2018-08-02  28
outside   2018-08-02  31

Here we don’t need a JOIN anymore, because the select expression for the time column is an aggregate as well.
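
The KEEP (DENSE_RANK LAST ...) syntax is Oracle-specific. As a portable alternative, a window function gives the same result on most modern SQL databases:

SELECT location, time, celsius
FROM (SELECT location, time, celsius,
             ROW_NUMBER() OVER (PARTITION BY location
                                ORDER BY celsius DESC, time) rn
      FROM temperatures) ranked
WHERE rn = 1;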

Entity Framework migrations with multiple database contexts

The .NET Entity Framework provides functionality for automatic database migrations. Every time your application code requires a change of the database schema, you should create a migration so that the existing database schema is updated when a new version of the application is deployed. Examples of such changes are new entity classes or the addition and removal of properties of existing entity classes. The Entity Framework functionality for database migrations is called Code First Migrations.

Code First Migrations are managed via the so-called Package Manager Console. That’s what it’s called in Visual Studio, because it’s usually used for package management, but it’s basically a general PowerShell command line interface. After you have created the database context class and the entity model classes for your application, you create an initial migration (usually called InitialCreate), which captures the original state of the database schema for your application:

Add-Migration InitialCreate

This will create a new migration class called InitialCreate in the Migrations folder. The filename is prefixed with a timestamp: 201810702207458_InitialCreate.cs. Each migration class has an Up() method, which applies the migration, and a Down() method, which rolls the migration back.

Each subsequent migration only describes the difference from its predecessor migration. For example, you add a new string property Email to your User entity class and then add a new migration:

Add-Migration AddUserEmail

The tool will scan your entity classes, compare the current state to the state of the previous migration, calculate the difference and create a new migration class, which adds a new column to the database schema.
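
The generated migration class might look roughly like this (the table and column names are illustrative):

using System.Data.Entity.Migrations;

public partial class AddUserEmail : DbMigration
{
    public override void Up()
    {
        // applying the migration adds the new column
        AddColumn("dbo.Users", "Email", c => c.String());
    }

    public override void Down()
    {
        // rolling back removes it again
        DropColumn("dbo.Users", "Email");
    }
}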

When the migrations are run on the target system they are tracked in a special database table called __MigrationHistory.

Multiple database contexts

The above usage of Code First Migrations is well documented. Here I want to describe a feature that is documented in less detail, because it’s less commonly used: migrations with multiple database contexts.

Let’s assume you have two database contexts: CoreDataContext and MeasurementDataContext. In this case you have to create two migration configuration classes, which inherit from DbMigrationsConfiguration, and two subdirectories under the Migrations directory, one for each database context.

In each subdirectory you create a migrations Configuration class, one per database context:

using System.Data.Entity.Migrations;

namespace Migrations.CoreData
{
  internal sealed class Configuration : DbMigrationsConfiguration<CoreDataContext>
  {
    public Configuration()
    {
      // note the escaped backslash in the C# string literal
      MigrationsDirectory = "Migrations\\CoreData";
      AutomaticMigrationsEnabled = true;
    }
  }
}

and

using System.Data.Entity.Migrations;

namespace Migrations.MeasurementData
{
  internal sealed class Configuration : DbMigrationsConfiguration<MeasurementDataContext>
  {
    public Configuration()
    {
      MigrationsDirectory = "Migrations\\MeasurementData";
      AutomaticMigrationsEnabled = true;
    }
  }
}

For each configuration you have to set the MigrationsDirectory property accordingly. The AutomaticMigrationsEnabled property is optional. If it is set, the migrations will be applied automatically at the start of the application.

Now, if you run a migration command like Add-Migration, you have to add the -ConfigurationTypeName option, which specifies the Configuration class for the desired database context:

Add-Migration InitialCreate -ConfigurationTypeName Migrations.CoreData.Configuration

Add-Migration InitialCreate -ConfigurationTypeName Migrations.MeasurementData.Configuration

Add-Migration AddUserEmail -ConfigurationTypeName Migrations.CoreData.Configuration

Add-Migration AddMeasurementTimestamp -ConfigurationTypeName Migrations.MeasurementData.Configuration

The migration classes will now be created in the correct subdirectories.
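
The same option is needed when applying the migrations with Update-Database (assuming the configuration classes from above):

Update-Database -ConfigurationTypeName Migrations.CoreData.Configuration

Update-Database -ConfigurationTypeName Migrations.MeasurementData.Configuration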