Partitioning in Oracle Database: Because Who Wants to Search an Endless Table?

As data volumes continue to grow, managing large database tables and indexes can become a challenge. This is where partitioning comes in. Partitioning is a feature of database systems that allows you to divide large tables and indexes into smaller, more manageable parts, known as partitions. This can improve the performance and manageability of your database. Aside from performance considerations, partitioning also simplifies maintenance operations such as backups and index rebuilds, because they can be performed on smaller subsets of data.

This is achieved by reducing the amount of data that needs to be scanned during query execution. When a query is executed, the database can use the partitioning information to skip over partitions that do not contain the relevant data, instead of having to scan the entire table. This reduces the amount of I/O required to execute the query, which can result in significant performance gains, especially for large tables.

There are several types of partitioning available in Oracle Database, including range partitioning, hash partitioning, list partitioning, and composite partitioning. Each type of partitioning is suited to different use cases and can be used to optimize the performance of your database in different ways. In this blog post, we will look at range partitioning.

Range partitioning

Here is an example of range-based partitioning in Oracle:

CREATE TABLE books (
  id NUMBER,
  title VARCHAR2(200),
  publication_year NUMBER
)
PARTITION BY RANGE (publication_year) (
  PARTITION p_before_2000 VALUES LESS THAN (2000),
  PARTITION p_2000s VALUES LESS THAN (2010),
  PARTITION p_2010s VALUES LESS THAN (2020),
  PARTITION p_after_2020 VALUES LESS THAN (MAXVALUE)
);

In this example, we have created a table called books that stores book titles, partitioned by the year of publication. We have defined four partitions: p_before_2000, p_2000s, p_2010s, and p_after_2020.

Now, when we insert data into the books table, it will automatically be placed in the appropriate partition based on the year of publication:

INSERT INTO books (id, title, publication_year)
  VALUES (1, 'Nineteen Eighty-Four', 1949);

This book will be inserted into partition p_before_2000, as the year of publication is before 2000. The following book will be placed into partition p_2000s:

INSERT INTO books (id, title, publication_year)
  VALUES (2, 'The Hunger Games', 2008);
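
You can verify where a row ended up by querying a single partition directly by name:

SELECT * FROM books PARTITION (p_2000s);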

When we query the books table, the database will only access the partitions that contain the data we need. For example, if we want to retrieve data for books published in 2015 and 2016, the database will only access partition p_2010s.

SELECT * FROM books WHERE publication_year >= 2015 AND publication_year <= 2016;
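
You can confirm that partition pruning actually happens by inspecting the execution plan; the Pstart and Pstop columns show which partitions are accessed:

EXPLAIN PLAN FOR
  SELECT * FROM books WHERE publication_year >= 2015 AND publication_year <= 2016;

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);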

However, you should be aware that while partitioning can improve query performance for some types of queries, it can also negatively impact performance for others, especially if the partitioning scheme does not align well with the query patterns. Therefore, you should tailor the partitioning scheme to your query patterns and verify that it actually has the desired effect.

PostgreSQL’s hstore module for semi-structured data

PostgreSQL has an extension module called hstore that allows you to store semi-structured data in a key/value format, similar to a dictionary. You can also reference its values in SQL queries.

To use the extension, it must first be loaded into the current database:

CREATE EXTENSION hstore;

Now you can use the data type hstore. Here, we create a table with some regular columns and one column of type hstore:

CREATE TABLE animals (
    id     serial PRIMARY KEY,
    name   text,
    props  hstore
);

Literals of type hstore are written in single quotes, containing a set of key => value pairs separated by commas:

INSERT INTO
    animals (name, props)
VALUES
    ('Octopus', 'arms => 8, habitat => sea, color => varying'),
    ('Cat',     'legs => 4, fur => soft'),
    ('Bee',     'legs => 6, wings => 4, likes => pollen');

The order of the pairs is irrelevant. Keys within a hstore are unique; if you specify the same key more than once, only one instance is kept and the others are discarded. You can use double quotes to include spaces or special characters:

'"fun-fact" => "Cats sleep for around 13 to 16 hours a day (70% of their life)"'

If the type of the literal can’t be inferred you can append ::hstore as a type indicator:

'legs => 4, fur => soft'::hstore

Both keys and values are stored as strings, so these two are equivalent:

'legs => 4, fur => soft'
'"legs" => "4", "fur" => "soft"'

Another limitation of hstore values is that they cannot be nested, which means they are less powerful than JSON objects.

You can use the -> operator to dereference a key, for example in a SELECT:

SELECT
    name, props->'legs' AS number_of_legs
FROM
    animals;

It returns NULL if the key is not present. Of course, you can also use it in a WHERE clause:

SELECT * FROM animals WHERE props->'fur' = 'soft';

There are many other operators and functions that can be used with hstore objects. Here is a small selection (please refer to the documentation for a complete list):

  • The || operator concatenates (merges) two hstores: a || b
  • The ? operator checks the existence of a key and returns a boolean value: props ? 'fur'
  • The - operator deletes a key from a hstore: props - 'fur'
  • The akeys function returns an array of a hstore’s keys: akeys(hstore)
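
Here is a small combined example using these operators on the animals table from above:

-- add a key/value pair to an existing hstore
UPDATE animals SET props = props || 'tail => long'::hstore WHERE name = 'Cat';

-- find all animals that have wings
SELECT name FROM animals WHERE props ? 'wings';

-- remove a key again
UPDATE animals SET props = props - 'tail'::text WHERE name = 'Cat';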

You can also convert a hstore object to JSON with the hstore_to_json(hstore) function. For example, applied to the animals table:
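
SELECT name, hstore_to_json(props) FROM animals;

If you want to learn more about JSON in PostgreSQL you can continue reading this blog post: Working with JSON data in PostgreSQL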

Copying and moving rows between tables in PostgreSQL

In this article, I’ll show some helpful tips for copying and moving data between database tables in PostgreSQL.

Copying

The simplest operation is copying rows from one table to another. The SQL statement for this is widely known: you simply combine an INSERT with a SELECT:

INSERT INTO short_books
  SELECT *
    FROM books
    WHERE pages < 50;

Of course, if you want to copy a complete table, you must first create the target table with the same columns. Instead of repeating the original CREATE TABLE statement with all its column definitions under a different name, there is a shortcut in the form of CREATE TABLE … LIKE.

CREATE TABLE books_copy (LIKE books);

If you want the copy to inherit all constraints, indices and defaults of the source table you can add INCLUDING ALL:

CREATE TABLE books_copy (LIKE books INCLUDING ALL);

Instead of executing a CREATE TABLE first and then an INSERT, you can also directly combine CREATE TABLE with a SELECT:

CREATE TABLE books_copy AS
  SELECT * FROM books;
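
If you only need the column structure without the rows, PostgreSQL lets you append WITH NO DATA:

CREATE TABLE books_copy AS
  SELECT * FROM books
  WITH NO DATA;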

Moving

The direct method of moving specific rows from one table to another is less well known. You can of course first copy the rows into the target table and then delete them from the source table. However, this is also possible with just one statement, in one go. The key to this is the RETURNING clause. It can be appended to a DELETE or UPDATE statement and causes the affected rows to be returned as the result set after the respective action:

DELETE FROM books
  WHERE pages < 50
  RETURNING
    title, author, pages;

This can be used in combination with the WITH … AS clause to move rows between tables with just one SQL statement:

WITH selection AS (
  DELETE FROM books
  WHERE pages < 50
  RETURNING *
)
INSERT INTO short_books
  SELECT * FROM selection;

The WITH clause, known as a common table expression (CTE), can be thought of as defining a named temporary view that can only be used in the current statement.

PostgreSQL’s new MERGE command

PostgreSQL version 15 introduces a new SQL command: MERGE. It allows you to merge one table into another. The MERGE command has existed for some time in other database systems such as Oracle and SQL Server.

The principle of this command is that you have a target table into which you want to insert data, update existing entries, or remove data, based on a source table and certain conditions. The source table doesn’t have to be a real table; it can just as easily be a SELECT query.

How to use it, step-by-step

The command begins with MERGE INTO, followed by the name of the target table. We call it dest here:

MERGE
  INTO dest ...

Then you specify the source table with USING, here we call it src:

MERGE
  INTO dest
  USING src
  ...

If you want to use a SELECT query as the source instead of a real table, you can do it like this:

MERGE
  INTO dest
  USING (SELECT ... FROM ...) AS src
  ...

Now you need a condition that is used to match entries from one table to entries from the other table. This is specified after ON. In this example we simply use the IDs of the two tables:

MERGE
  INTO dest
  USING src
  ON dest.id=src.id
  ...

This is followed by a case distinction that describes what should happen when the condition matches and when it does not. The possible actions are: UPDATE, DELETE, INSERT, or DO NOTHING.

The two cases are specified with WHEN MATCHED THEN and WHEN NOT MATCHED THEN:

MERGE
  INTO dest
  USING src
  ON dest.id=src.id
  WHEN MATCHED THEN
    UPDATE SET ...
  WHEN NOT MATCHED THEN
    INSERT (...) VALUES (...);

If a match exists, then reasonable actions are UPDATE, DELETE, or DO NOTHING. If no match exists, then reasonable actions are INSERT or DO NOTHING.

In the WHEN cases, additional conditions can be specified with AND:

MERGE
  INTO dest
  USING src
  ON dest.id=src.id
  WHEN MATCHED AND dest.value > src.value THEN
    DELETE
  WHEN MATCHED THEN
    UPDATE SET ...
  WHEN NOT MATCHED THEN
    DO NOTHING;

A realistic example

Here’s an example demonstrating a use case that might occur in the real world:

MERGE
  INTO account a
  USING transaction t
  ON a.id=t.account_id
WHEN MATCHED THEN
  UPDATE SET balance = a.balance + t.amount
WHEN NOT MATCHED THEN
  INSERT (id, balance) VALUES (t.account_id, t.amount);

This statement processes a table of monetary transactions and applies them to their matching customer accounts by adding the amount of each transaction to the balance of the matching account. If no matching account exists, it is created with the transaction amount as its initial balance.
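
If you want to try this out, here is a minimal, self-contained sketch; the table and column definitions are assumed for illustration:

CREATE TABLE account (
  id      integer PRIMARY KEY,
  balance numeric NOT NULL
);

CREATE TABLE transaction (
  account_id integer,
  amount     numeric
);

INSERT INTO account VALUES (1, 900);
INSERT INTO transaction VALUES (1, 100), (2, 50);

MERGE
  INTO account a
  USING transaction t
  ON a.id=t.account_id
WHEN MATCHED THEN
  UPDATE SET balance = a.balance + t.amount
WHEN NOT MATCHED THEN
  INSERT (id, balance) VALUES (t.account_id, t.amount);

-- account now contains (1, 1000) and (2, 50)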

Re-ordering table columns in an Oracle database

In an Oracle database, once a table is created, there is no obvious way to change the order of its columns. Sometimes you add a new column to an existing table and want it to appear at a different position in the default column order used by SELECT * results. Imagine you add an ID column to a table after the fact. Wouldn’t it be nice if it appeared in the first position?

Of course you can drop the whole table and create it again with the new column order. But this is cumbersome and potentially dangerous if the table is already filled with data. However, there is a trick that allows you to rearrange the columns without having to recreate the table.

The key to this is an Oracle feature that allows invisible columns. The feature itself is interesting in its own right, but it has a useful side effect that we’ll exploit. The documentation says:

When you make an invisible column visible, the column is included in the table’s column order as the last column. When you make a visible column invisible, the invisible column is not included in the column order, and the order of the visible columns in the table might be re-arranged.

So the plan is: first make a cleverly chosen set of columns invisible, then make them visible again in the desired order. This is how it works:

First we have a table with the following columns.

CREATE TABLE t (a NUMBER, b NUMBER, c NUMBER, e NUMBER, f NUMBER);

Later we realize that we need a column d that should sit between c and e. So we add it to the table:

ALTER TABLE t ADD (d NUMBER);

This is of course added at the end:

DESC t;
Name Null? Type   
---- ----- ------ 
A          NUMBER 
B          NUMBER 
C          NUMBER 
E          NUMBER 
F          NUMBER 
D          NUMBER 

To get it in the right position, we first hide the columns e and f, and then make them visible again.

ALTER TABLE t MODIFY (e INVISIBLE, f INVISIBLE);
ALTER TABLE t MODIFY (e VISIBLE, f VISIBLE);

And voilà, we have our desired order:

DESC t;
Name Null? Type   
---- ----- ------ 
A          NUMBER 
B          NUMBER 
C          NUMBER 
D          NUMBER 
E          NUMBER 
F          NUMBER 

Note that this doesn’t change the internal, physical layout of the table on the disk. It’s just a cosmetic change.

PostgreSQL’s “DISTINCT ON” clause

Anyone who uses SQL databases knows the DISTINCT modifier for SELECT queries to get result sets without duplicates. However, PostgreSQL has another variant of it that not everyone knows, but which is very useful: the SELECT DISTINCT ON clause. It can be used to query only the first row of each set of rows according to a grouping.

To understand its usefulness, let’s look at an example and solve it in the classical way first.

The complicated way

Given the following table of items, we want to query, for each category, the item with the highest value.

 name │ category │ value
-------------------------
 A    │ X        │ 52
 B    │ X        │ 35
 C    │ X        │ 52
 D    │ Y        │ 27
 E    │ Y        │ 31
 F    │ Y        │ 20

Usually we’d start out with a query like this:

SELECT
  category,
  MAX(value) AS highest_value
FROM items
GROUP BY category;

 category │ highest_value
--------------------------
 X       │ 52
 Y       │ 31

And then use this query as a sub-select:

SELECT * FROM items
WHERE (category, value) IN (
  SELECT
    category,
    MAX(value) AS highest_value
  FROM items
  GROUP BY category
);

 name │ category │ value
-------------------------
 A    │ X        │ 52
 C    │ X        │ 52
 E    │ Y        │ 31

Unfortunately, there are multiple items in category X with the same highest value 52. But we really only want one row for each category. In this case we might use the ROW_NUMBER() function:

SELECT
  name, category, value
FROM (
  SELECT
    items.*,
    ROW_NUMBER() OVER (
      PARTITION BY category
      ORDER BY value DESC, name
    ) AS rownum
  FROM items
) AS ranked
WHERE rownum = 1;

 name │ category │ value
-------------------------
 A    │ X        │ 52
 E    │ Y        │ 31

This is finally our desired result.

The easy way

But I promised it can be easier with the DISTINCT ON clause. How does it work?

SELECT DISTINCT ON (category) *
FROM items
ORDER BY
  category, value DESC, name;

After DISTINCT ON, we specify in parentheses one or more columns by which to group. The ORDER BY clause determines which row will be the first in each group. We get the same result:

 name │ category │ value
-------------------------
 A    │ X        │ 52
 E    │ Y        │ 31

Range Types in PostgreSQL

How do you store ranges in an SQL database? By ranges I mean things like price ranges, temperature ranges, date ranges for scheduling, etc. You’d probably represent them with two columns in a table, like min_price and max_price, min_temperature and max_temperature, start_date and end_date. If you want to represent an unbounded range, you’d probably make one or both columns nullable and then take NULL as +/- infinity.

If you want to test if a value is in a range you can use the BETWEEN operator:

SELECT * FROM products WHERE
  target_price BETWEEN min_price AND max_price;

This doesn’t work as nicely anymore if you work with unbounded ranges as described above. You’d have to add additional checks for NULL. What if you want to test if one of the ranges in the table overlaps with a given range?

SELECT * FROM products WHERE
  max_given >= min_price AND
  min_given <= max_price;

Did I make a mistake here? I’m not sure. What if they should overlap but not cover each other? And again, this becomes even more complicated with unbounded ranges.

Enter range types

PostgreSQL has a better solution for these problems — range types. It comes with these additional built-in data types:

  • int4range: Range of integer
  • int8range: Range of bigint
  • numrange: Range of numeric
  • tsrange: Range of timestamp without time zone
  • tstzrange: Range of timestamp with time zone
  • daterange: Range of date

You can use them as a column type in a table:

CREATE TABLE products (…, price_range numrange);

Construction

You can construct range values for these types like this:

'[20,35]'::int4range
'(5,12]'::int4range
'(6.2,12.5)'::numrange
'[2022-05-01, 2022-05-31]'::daterange
'[2022-05-01 09:30, 2022-05-01 12:00)'::tsrange

As you can see, they use mathematical interval notation. A square bracket means inclusive bound, and a round parenthesis means exclusive bound. They can also be unbounded (infinite) or empty:

'[5,)'::int4range
'(,20]'::int4range
'empty'::int4range

You can get the bounds of a range individually with the lower() and upper() functions:

SELECT * FROM products ORDER BY lower(price_range);

Operators

The range types become really powerful through the range operators. There are a lot, so I will only show some basic examples:

  • The && operator tests if two ranges overlap: range_a && range_b
  • The @> and <@ operators test if the first range contains the second or vice versa: range_a <@ range_b. If used with an element on one side they test if the element is in a range: element <@ range or range @> element.
  • The -|- operator tests if two ranges are adjacent: range_a -|- range_b
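
For example, to find all products whose price range overlaps a given range, or contains a specific price (using the products table from above):

SELECT * FROM products WHERE price_range && numrange(20, 30);
SELECT * FROM products WHERE price_range @> 24.99;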

In addition to these boolean tests, you can also compute new ranges from existing ones:

The + operator computes the union of two overlapping or adjacent ranges: range_a + range_b. The * operator computes the intersection of two ranges, and the - operator the difference.
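
A few examples (remember that these constructors create ranges with an inclusive lower and exclusive upper bound by default):

SELECT int4range(5, 15) + int4range(10, 20);  -- [5,20)
SELECT int4range(5, 15) * int4range(10, 20);  -- [10,15)
SELECT int4range(5, 15) - int4range(10, 20);  -- [5,10)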

Multiranges

There is one more thing I want to mention: For each one of the range types there is also a multirange type: int4multirange, int8multirange, nummultirange, tsmultirange, tstzmultirange, datemultirange. As their names suggest, they store multiple ranges in one value:

'{}'::int4multirange
'{[2,9)}'::int4multirange
'{[2,9), [12,20)}'::int4multirange

The range operators mentioned above work with multiranges as well.

Full-text Search with PostgreSQL

If you want to add simple text search functionality to an application backed by an SQL database one of the first things that may come to your mind is the SQL LIKE operator. The LIKE operator and its case-insensitive sibling ILIKE find substrings in text data via wildcards such as %, which matches any sequence of zero or more characters:

SELECT * FROM book WHERE title ILIKE '%dog%';

However, this approach satisfies only very basic requirements for text search, because it only matches exact substrings. That’s why application developers often use an external search engine like Elasticsearch, which is based on the Apache Lucene library.

With a PostgreSQL database there is another option: it comes with a built-in full-text search. A full-text search analyzes text according to the language of the text, parses it into tokens and converts them into so-called lexemes. These are strings, just like tokens, but they have been normalized so that different forms of the same word, for example “pony” and “ponies”, are made alike. Additionally, stop words are eliminated; these are words that are so common that they are useless for searching, such as “a” or “the”. For this purpose the search engine uses a dictionary of the target language.

In PostgreSQL, there are two main functions to perform full-text search: to_tsvector and to_tsquery. The ts part in the function names stands for “text search”. The to_tsvector function breaks up the input string and creates a vector of lexemes out of it, which can then be searched with a query produced by the to_tsquery function. The two are combined with the @@ (match) operator, which applies a search query to a search vector:

SELECT title
  FROM book
  WHERE to_tsvector(title) @@ to_tsquery('(cat | dog) & pony');

The query syntax of to_tsquery supports boolean operators like | (or), & (and), ! (not) and grouping using parentheses, but also other operators like <-> (“followed by”) and :* (prefix matching).
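
Two quick examples of these operators (assuming the default text search configuration is english):

SELECT to_tsvector('The quick brown fox') @@ to_tsquery('quick <-> brown');  -- true
SELECT to_tsvector('ponies') @@ to_tsquery('pon:*');                         -- true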

You can specify the target language as a parameter of to_tsvector:

# SELECT to_tsvector('english', 'Thousands of ponies were grazing on the prairie.');

'graze':5 'poni':3 'prairi':8 'thousand':1

Here’s another example in German:

# SELECT to_tsvector('german', 'Wer einen Fehler begeht, und ihn nicht korrigiert, begeht einen zweiten (Konfuzius)');

'begeht':4,9 'fehl':3 'konfuzius':12 'korrigiert':8 'wer':1 'zweit':11
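
On larger tables you will usually want an index to speed up these queries. A typical approach is a GIN expression index; here is a minimal sketch, with the index name chosen for illustration:

CREATE INDEX book_title_fts_idx ON book
  USING GIN (to_tsvector('english', title));

-- the index is only used if queries repeat the same expression, e.g.:
-- WHERE to_tsvector('english', title) @@ to_tsquery('english', 'pony')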

PostgreSQL comes with dictionaries for a wide range of languages out of the box.

The examples in this article are just a small glimpse of what is possible with full-text search in PostgreSQL. If you want to learn more you should consult the documentation. The key takeaway is that there is another option between simple LIKE clauses and an external search engine.

Commenting SQL database objects

Did you know that in many SQL database systems you can annotate database objects like tables, views and columns with comments? By that I don’t mean comments in SQL scripts, indicated by double dashes (--), but comments attached to the objects themselves, stored in the database. These can be helpful to the database admin by providing a description of what is stored in these objects.

For PostgreSQL and Oracle databases the syntax is as follows:

COMMENT ON TABLE [schema_name.]table_name IS '...';
COMMENT ON COLUMN [schema_name.]table_name.column_name IS '...';

For example:

COMMENT ON COLUMN books.author IS 'The main author''s last name';
COMMENT ON TABLE books IS 'Contains only the best books';
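
A comment can be removed again by setting it to NULL (in PostgreSQL) or to an empty string (in Oracle):

COMMENT ON TABLE books IS NULL;  -- PostgreSQL
COMMENT ON TABLE books IS '';    -- Oracle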

These comments can be viewed in database tools like SQL Developer.

You can also view the comments in psql:

db=# \d+ books
 Column |  Type   |          Description
--------+---------+------------------------------
id      | integer |
author  | text    | The main author's last name
title   | text    |

And for a table:

db=# \dt+ books
                    List of relations
 Schema | Name  | Type  |     |        Description
--------+-------+-------+ ... +------------------------------
public  | books | table |     | Contains only the best books

In Oracle you can query the comments from the data dictionary views ALL_TAB_COMMENTS and ALL_COL_COMMENTS:

> SELECT * FROM all_col_comments WHERE table_name='BOOKS';
OWNER    TABLE_NAME  COLUMN_NAME  COMMENTS
--------------------------------------------------------------
LIBRARY  BOOKS       ID           (null)
LIBRARY  BOOKS       AUTHOR       The main author's last name
LIBRARY  BOOKS       TITLE        (null)

> SELECT * FROM all_tab_comments WHERE table_name='BOOKS';
OWNER    TABLE_NAME  TABLE_TYPE  COMMENTS
--------------------------------------------------------------
LIBRARY  BOOKS       TABLE       Contains only the best books

In Oracle, comments are limited to tables, views, materialized views, columns, operators and indextypes, but in PostgreSQL you can attach comments to nearly everything. Another good use case for this is documentation comments on database functions:

COMMENT ON FUNCTION my_function IS $$
This function does something important.

Parameters:
...
Example usage:
...
$$;

Note: the $$ delimiters allow multi-line strings (so-called dollar-quoted string constants).

Converting character sets in an Oracle database

I recently had to correct text data stored in an Oracle database with a wrong character set. A bug in one of our applications caused text data to be stored in the ISO 8859-1 West European character encoding. The database, however, is configured to work with UTF-8 encoded strings. The bug was fixed, but the existing data had to be corrected afterwards. If you ever encounter the same problem you can use Oracle’s CONVERT() function to fix the data. The syntax is as follows:

CONVERT(text, to_charset, from_charset)

First you have to find out Oracle’s names for the desired source and target character sets. You can look them up in this table. In my case the character set names are 'UTF8' and 'WE8ISO8859P1' (for Western European 8-bit ISO-8859-1). You can check the function by selecting from DUAL:

SELECT CONVERT('ä ö ü Ä Ö Ü ß', 'UTF8', 'WE8ISO8859P1') FROM DUAL;

If the result shows the same scrambled characters you see in the data, you know that you have chosen the correct character set names:
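
The result might look like this (an illustrative output; the exact characters depend on the client, here assuming the typical UTF-8-read-as-Latin-1 garbling):

Ã¤ Ã¶ Ã¼ Ã„ Ã– Ãœ ÃŸ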

The other direction should unscramble the characters again:
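
Again illustrative:

SELECT CONVERT('Ã¤ Ã¶ Ã¼', 'WE8ISO8859P1', 'UTF8') FROM DUAL;
-- ä ö ü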

Finally, you can convert your data:

UPDATE example_table
  SET text_column=CONVERT(text_column, 'WE8ISO8859P1', 'UTF8');