TimescaleDB sink connector

TimescaleDB is a relational database for time-series data, packaged as a PostgreSQL extension. It is designed to be easy to use, easy to get started with, and easy to maintain, and your existing PostgreSQL knowledge and tools continue to work with it. The Timescale documentation includes resources for starting a first project, for the details of Timescale's products and services, and for learning more about time-series data, and the Timescale community welcomes questions and contributions.

If you are interested in a direct Decodable Connector for Timescale, contact support@decodable.co or join our Slack community and let us know!

Getting started

Sending a Decodable data stream to Timescale is accomplished in two stages: first, create a Decodable sink connector that writes to a technology Timescale can ingest from, and then add that technology as a data source in your Timescale configuration. Decodable and Timescale both support several such technologies, including Apache Kafka.

Configure as a sink

This example demonstrates using Kafka as the sink from Decodable and the source for Timescale. Sign in to Decodable Web and follow the configuration steps provided in the Apache Kafka sink connector topic to create a sink connector. For examples of using the command line tools or scripting, see the How To guides.

Create Kafka data source

You can ingest data into TimescaleDB using the Kafka Connect JDBC sink connector with a JDBC driver. Kafka Connect can run in distributed mode for fault tolerance, ensuring that the connectors stay running and continually keep up with incoming data.
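
For illustration, the sketch below registers a JDBC sink connector through the Kafka Connect REST API. The worker URL, connector name, topic, and TimescaleDB connection details are all placeholder assumptions; substitute values from your own environment.

```python
import json

import requests  # third-party HTTP client: pip install requests

# Placeholder values: adjust the Connect worker URL, topic name, and
# TimescaleDB connection details for your environment.
CONNECT_URL = "http://localhost:8083/connectors"

connector = {
    "name": "timescaledb-sink",
    "config": {
        # Confluent's JDBC sink connector class.
        "connector.class": "io.confluent.connect.jdbc.JdbcSinkConnector",
        "tasks.max": "1",
        "topics": "sensor_readings",
        # TimescaleDB is PostgreSQL, so the standard PostgreSQL JDBC URL applies.
        "connection.url": "jdbc:postgresql://timescale.example.com:5432/tsdb",
        "connection.user": "postgres",
        "connection.password": "<password>",
        # Upsert on the record key so replayed records do not create duplicate rows.
        "insert.mode": "upsert",
        "pk.mode": "record_key",
        "auto.create": "true",
    },
}

resp = requests.post(CONNECT_URL, json=connector, timeout=30)
resp.raise_for_status()
print(json.dumps(resp.json(), indent=2))
```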

You can use the Kafka Connect JDBC source connector to import data from any relational database with a JDBC driver into Kafka topics. You can use the JDBC sink connector to export data from Kafka topics to any relational database with a JDBC driver. The JDBC connector supports a wide variety of databases without requiring custom code for each one. The following are additional guidelines to consider:

  • Use the most recent version of the JDBC 4.0 driver available. The latest version of a JDBC driver supports most versions of the database management system, and includes more bug fixes.

  • Use the correct JAR file for the Java version used to run Connect workers. Some JDBC drivers have a single JAR that works on multiple Java versions. Other drivers have one JAR for Java 8 and a different JAR for Java 10 or 11. Make sure to use the correct JAR file for the Java version in use. If you install and try to use the JDBC driver JAR file for the wrong version of Java, starting any JDBC source connector or JDBC sink connector will likely fail with UnsupportedClassVersionError. If this happens, remove the JDBC driver JAR file you installed and repeat the driver installation process with the correct JAR file.

  • The share/java/kafka-connect-jdbc directory is the default connector location for Confluent Platform. If you are using a different installation, find the location of the Confluent JDBC source and sink connector JAR files, and place the JDBC driver JAR file(s) for the target databases into the same directory.

  • If the JDBC driver specific to the database management system is not installed correctly, the JDBC source or sink connector will fail on startup, typically with the error No suitable driver found. If this happens, reinstall the JDBC driver, following the guidelines above. Both this error and the UnsupportedClassVersionError mentioned earlier show up in the connector's status output, as shown in the sketch after this list.
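
When a connector does fail on startup, the Connect REST API reports the failure, including the underlying Java exception, through its status endpoint. The following sketch assumes the same worker URL and connector name as the registration example above.

```python
import requests  # pip install requests

# Assumes the worker URL and connector name from the registration sketch above.
STATUS_URL = "http://localhost:8083/connectors/timescaledb-sink/status"

status = requests.get(STATUS_URL, timeout=30).json()
print("connector state:", status["connector"]["state"])

for task in status["tasks"]:
    print(f"task {task['id']}: {task['state']}")
    if task["state"] == "FAILED":
        # The trace holds the Java stack trace; driver problems surface here
        # as UnsupportedClassVersionError or "No suitable driver found".
        print(task.get("trace", "(no trace available)"))
```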

For more detailed information, see Ingesting data with Kafka in the Timescale documentation.
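
One practical note: with auto.create enabled, the JDBC sink creates a plain PostgreSQL table rather than a hypertable. To get TimescaleDB's time-series features, you will usually create the target table as a hypertable before starting the connector. Below is a minimal sketch using psycopg2, with an assumed schema matching the hypothetical configuration above.

```python
import psycopg2  # PostgreSQL driver: pip install psycopg2-binary

# Assumed connection details and schema; match them to your JDBC sink config.
conn = psycopg2.connect(
    host="timescale.example.com",
    port=5432,
    dbname="tsdb",
    user="postgres",
    password="<password>",
)

with conn, conn.cursor() as cur:
    # Create the target table with an explicit time column...
    cur.execute("""
        CREATE TABLE IF NOT EXISTS sensor_readings (
            time        TIMESTAMPTZ      NOT NULL,
            sensor_id   TEXT             NOT NULL,
            temperature DOUBLE PRECISION,
            PRIMARY KEY (time, sensor_id)
        )
    """)
    # ...then convert it into a hypertable partitioned on that column.
    cur.execute(
        "SELECT create_hypertable('sensor_readings', 'time', if_not_exists => TRUE)"
    )

conn.close()
```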