InfluxDB is an open source time series database developed by InfluxData. InfluxDB is similar to a SQL database, but different in many ways. It is purpose-built for time series data: while relational databases can handle time series data, they are not optimized for common time series workloads. InfluxDB is designed to store large volumes of time series data and quickly perform real-time analysis on that data. Optimizing for this use case entails some trade-offs, primarily increased performance at the cost of functionality. These include:

  • Simplified conflict resolution increases write performance
  • Restricting access to deletes and updates allows for increased query and write performance
  • Adding data in time-ascending order is significantly more performant (see the line protocol sketch after this list)
  • Multiple clients can write to and query the database at high load, at the cost of a strongly consistent view
  • InfluxDB is good at managing discontinuous data
  • InfluxDB has very powerful tools to deal with aggregate data and large data sets
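
These trade-offs are easiest to see in InfluxDB's line protocol, where each point carries a measurement, optional tags, one or more fields, and a nanosecond timestamp. A short illustration of the time-ascending write pattern that InfluxDB favors (the measurement, tag, and field names here are made up for the example):

  cpu,host=server01 usage_user=10.2,usage_system=3.1 1640995200000000000
  cpu,host=server01 usage_user=11.7,usage_system=2.9 1640995210000000000
  cpu,host=server01 usage_user=12.4,usage_system=3.4 1640995220000000000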

Getting Started

Sending a Decodable data stream to InfluxDB is accomplished in two stages: first, create a sink connector from Decodable to a system that InfluxDB supports as a data source; then, add that system as a data source in your InfluxDB configuration. Decodable and InfluxDB both support several technologies in common, including Apache Kafka.

Configure As A Sink

This example demonstrates using Kafka as the sink from Decodable and as the source for InfluxDB. Sign in to the Decodable Web Console and follow the configuration steps provided for the Kafka Connector to create a sink connector. For examples of using the command line tools or scripting, see the How To guides.
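
The exact commands depend on your setup, but a minimal sketch of creating the Kafka sink connection with the Decodable CLI might look like the following. The connection name, stream id, and property values are placeholders, and the flag and property names are assumptions to verify against the CLI help and the How To guides:

  # Sketch only: flag and property names are assumptions; check
  # `decodable connection create --help` for the exact syntax.
  decodable connection create \
    --name influxdb-feed \
    --type sink \
    --connector kafka \
    --stream-id <your-stream-id> \
    --prop bootstrap.servers=<broker-host>:9092 \
    --prop topic=orders \
    --prop value.format=json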

Create Kafka Data Source

The Kafka Connect InfluxDB Sink connector writes data from an Apache Kafka® topic to an InfluxDB host. When more than one record in a batch has the same measurement, time, and tags, they are combined and written to InfluxDB as a single point. This connector supports Dead Letter Queue (DLQ) functionality and can run one or more tasks; running multiple tasks can improve throughput when consuming from topics with multiple partitions.
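
To make the combining behavior concrete, here is an illustrative example (shown as line protocol; the names are made up). Two records in the same batch that share the measurement cpu, the tag host=server01, and the same timestamp:

  cpu,host=server01 usage_user=10.2 1640995200000000000
  cpu,host=server01 usage_system=3.1 1640995200000000000

are merged into one point carrying both field sets:

  cpu,host=server01 usage_user=10.2,usage_system=3.1 1640995200000000000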

  1. Start the InfluxDB database
  2. Start the Confluent Platform
  3. Create a configuration file for the connector (see the sample configuration after this list)
  4. Run the connector with the configuration you created
  5. Create a record in the desired topic (see the sample record below)
  6. Verify the data in InfluxDB:
    a. Log in to the InfluxDB Docker container
    b. Log in to the InfluxDB shell
    c. Run a SELECT query to verify the records (see the sample query below)
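
For step 3, a minimal connector configuration might look like the following properties file. The connector class and property names follow the Confluent InfluxDB sink connector's conventions, but treat the exact keys (and the database and topic names, which are placeholders) as assumptions to verify against your connector version:

  name=influxdb-sink
  connector.class=io.confluent.influxdb.InfluxDBSinkConnector
  tasks.max=1
  topics=orders
  influxdb.url=http://localhost:8086
  influxdb.db=influxTestDB
  measurement.name.format=${topic}
  value.converter=org.apache.kafka.connect.json.JsonConverter
  value.converter.schemas.enable=true

For step 5, a record can be produced with the Kafka console producer. Because the configuration above enables JSON schemas, the payload carries a schema envelope; the field names are illustrative:

  kafka-console-producer --broker-list localhost:9092 --topic orders
  {"schema":{"type":"struct","fields":[{"field":"count","type":"int64","optional":false}],"optional":false},"payload":{"count":10}}

For step 6, assuming InfluxDB runs in a Docker container named influxdb, the records can be verified from the InfluxDB shell:

  docker exec -it influxdb influx
  > USE influxTestDB
  > SELECT * FROM orders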

For more information, refer to the Kafka Connect documentation for the InfluxDB sink connector.

Reference

Connector name: influxdb
Type: sink
Delivery guarantee: at least once

Additional InfluxDB Support
If you are interested in a direct Decodable Connector for InfluxDB, please contact [email protected] or join our Slack community and let us know!


Apache Kafka, Kafka®, Apache® and associated open source project names are either registered trademarks or trademarks of The Apache Software Foundation.
