The number of sensors and other devices that periodically collect data is ever growing. The advent of the Internet of Things (IoT) demands a way of storing and analyzing all this so-called time-series data. There are many options for handling such data – the most prominent being dedicated time-series databases like InfluxDB, or well-suited, horizontally scaling databases like Apache Cassandra.
The problem is that you have to tailor your solution to one of these technologies, whereas SQL already offers mature database management systems (DBMS) and drivers/bindings for almost any programming language.
Why not use a plain SQL database?
Relational SQL databases are a mature and well-understood piece of technology, albeit not as sexy as all those new NoSQL databases. Using them for time-series data may not be a problem for smaller datasets, but sooner or later your ingestion and query performance will degrade massively. So in general, a traditional relational DBMS (RDBMS) is not a good option for storing all your time-series data.
Why use PostgreSQL with TimescaleDB?
With the PostgreSQL extension TimescaleDB you get the best of both worlds: a well-known query language, robust tools, and scalability.
You access and manage your time-series database just like an ordinary PostgreSQL database. Almost everything, including replication and backups, will continue to work as before.
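To illustrate how little changes from a developer's point of view, here is a minimal sketch of setting up TimescaleDB (the table and column names are made up for this example): you create a plain PostgreSQL table and convert it into a hypertable, after which you keep using regular SQL for inserts and queries.

```sql
-- Enable the extension once per database.
CREATE EXTENSION IF NOT EXISTS timescaledb;

-- A regular PostgreSQL table for sensor readings (hypothetical schema).
CREATE TABLE sensor_data (
    time      TIMESTAMPTZ      NOT NULL,
    sensor_id INTEGER          NOT NULL,
    value     DOUBLE PRECISION
);

-- Turn it into a hypertable partitioned by the time column.
SELECT create_hypertable('sensor_data', 'time');

-- From here on, plain SQL works as usual.
INSERT INTO sensor_data (time, sensor_id, value)
VALUES (now(), 1, 23.5);
```

TimescaleDB transparently splits the hypertable into time-based chunks behind the scenes, which is what keeps ingestion and query performance from degrading as the dataset grows.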
You do not have to deal with limitations of specialized solutions or learn a completely new ecosystem just for one aspect of your solution.
We are successfully using TimescaleDB in one of our projects and, given its rising importance, will continue to share tips and experience with this technology.
2 thoughts on “Using PostgreSQL for time-series data”
Interesting space. Timescaledb installation provides numerous options for interfacing. I’d be keen to hear which approaches you are using.
What do you mean by “interfacing”? We are using JDBC for reading and delivering selected data to the clients. Our data collectors are written in C++ and use the libpqxx library for data ingestion.