InfluxDB is an open source time series database developed by InfluxData. InfluxDB resembles a SQL database in some respects, but it is purpose-built for time series data. While relational databases can store time series data, they are not optimized for common time series workloads. InfluxDB is designed to store large volumes of time series data and to perform real-time analysis on that data quickly. Optimizing for this use case entails trade-offs, primarily gaining performance at the cost of some functionality. These include:
- Simplified conflict resolution increases write performance
- Restricting access to deletes and updates allows for increased query and write performance
- Adding data in time-ascending order is significantly more performant
- Multiple clients can write and query at high load, at the cost of a strongly consistent view of the data
- InfluxDB is good at managing discontinuous data
- InfluxDB provides powerful tools for working with aggregate data and large data sets
Sending a Decodable data stream to InfluxDB is accomplished in two stages: first, create a sink connector in Decodable that writes to a system InfluxDB can ingest from; then add that system as a data source in your InfluxDB configuration. Decodable and InfluxDB mutually support several technologies, including Apache Kafka.
This example demonstrates using Kafka as the sink from Decodable and the source for InfluxDB. Sign in to the Decodable Web Console and follow the configuration steps provided for the Kafka Connector to create a sink connector. For examples of using the command-line tools or scripting, see the How To guides.
The Kafka Connect InfluxDB Sink connector writes data from an Apache Kafka® topic to an InfluxDB host. When more than one record in a batch has the same measurement, time, and tags, the records are combined and written to InfluxDB as a single point. The connector supports Dead Letter Queue (DLQ) functionality and can run one or more tasks, which can significantly improve throughput when writing large volumes of records.
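The connector is configured like any other Kafka Connect sink. The sketch below shows a minimal configuration; the connector name, topic name (`orders`), database name (`mydb`), and URL are placeholder assumptions for this example, and the exact property set may vary by connector version, so consult the connector's reference documentation before use:

```json
{
  "name": "influxdb-sink",
  "config": {
    "connector.class": "io.confluent.influxdb.InfluxDBSinkConnector",
    "tasks.max": "1",
    "topics": "orders",
    "influxdb.url": "http://localhost:8086",
    "influxdb.db": "mydb",
    "value.converter": "org.apache.kafka.connect.json.JsonConverter",
    "value.converter.schemas.enable": "true"
  }
}
```

Increasing `tasks.max` allows the connector to parallelize writes across multiple tasks, which is where the throughput gains mentioned above come from.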
- Start the InfluxDB database
- Start the Confluent Platform
- Create a configuration file for the connector
- Run the connector with the configuration you created
- Create a record in the desired topic
- Verify the data in InfluxDB:
a. Log in to the InfluxDB Docker container
b. Log in to the InfluxDB shell
c. Run a `SELECT` query to verify the records
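The last three steps can be sketched as the commands below. The topic name (`orders`), container name (`influxdb`), and database name (`mydb`) are assumptions for this example; substitute the names from your own setup:

```shell
# Produce a sample JSON record to the topic the connector reads from
# (topic name "orders" and the record's fields are assumptions)
echo '{"schema":{"type":"struct","fields":[{"field":"price","type":"double"}]},"payload":{"price":9.99}}' | \
  kafka-console-producer --bootstrap-server localhost:9092 --topic orders

# Log in to the InfluxDB Docker container (container name is an assumption)
docker exec -it influxdb /bin/bash

# Inside the container, open the InfluxDB shell and verify the records
influx
> USE mydb
> SELECT * FROM "orders"
```

If the connector is running and the record was produced successfully, the `SELECT` query should return a point for each record written to the topic.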
For more information, refer to the Kafka Connect InfluxDB Sink connector documentation.
| Delivery guarantee | at least once |
|--------------------|---------------|