Apache Druid sink integration

This is a supported integration that requires manual configuration.

Contact Decodable support if you are interested in native support via a Decodable connector.

Sending a Decodable data stream to Druid involves two stages:

  1. Creating a sink connector from Decodable to a data source that’s supported by Druid.

  2. Adding that data source to your Druid configuration.

Decodable and Druid share support for several technologies, including the following:

  • Amazon Kinesis

  • Apache Kafka

Add a Kafka sink connector

Follow the configuration steps provided for the Apache Kafka sink connector.

Create a Kafka data source

To ingest event data, also known as message data, from Kafka into Druid, you must submit a supervisor spec. When you enable the Kafka indexing service, you can configure supervisors on the Overlord to manage the creation and lifetime of Kafka indexing tasks. Kafka indexing tasks read events using Kafka’s own partition and offset mechanism to guarantee exactly-once ingestion.
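
As a concrete illustration, the following sketch submits a minimal Kafka supervisor spec to the Overlord’s HTTP API with Python. Everything configuration-specific here is an assumption: the Overlord URL, broker address, topic, datasource name, timestamp column, and dimension names are placeholders to replace with the values from your Decodable Kafka sink connector and your Druid deployment.

```python
import requests  # third-party: pip install requests

# Placeholder values -- adjust to match your deployment.
OVERLORD_URL = "http://localhost:8081"  # Druid Overlord (or Router) endpoint
KAFKA_BROKERS = "kafka-broker:9092"     # brokers your Decodable sink writes to
TOPIC = "decodable-output"              # topic written by the sink connector

# A minimal Kafka supervisor spec: dataSchema describes the datasource,
# ioConfig tells Druid where and how to read, tuningConfig takes defaults.
supervisor_spec = {
    "type": "kafka",
    "spec": {
        "dataSchema": {
            "dataSource": "decodable_events",  # placeholder datasource name
            "timestampSpec": {"column": "timestamp", "format": "iso"},
            "dimensionsSpec": {"dimensions": ["user_id", "action"]},  # placeholders
            "granularitySpec": {
                "type": "uniform",
                "segmentGranularity": "HOUR",
                "queryGranularity": "NONE",
            },
        },
        "ioConfig": {
            "type": "kafka",
            "topic": TOPIC,
            "inputFormat": {"type": "json"},
            "consumerProperties": {"bootstrap.servers": KAFKA_BROKERS},
            "useEarliestOffset": True,
        },
        "tuningConfig": {"type": "kafka"},
    },
}

# Submitting the spec creates the supervisor (or updates an existing one
# for the same datasource). Druid responds with the supervisor ID.
resp = requests.post(
    f"{OVERLORD_URL}/druid/indexer/v1/supervisor", json=supervisor_spec
)
resp.raise_for_status()
print(resp.json())  # e.g. {"id": "decodable_events"}
```

Once the supervisor is running, it spawns and manages the Kafka indexing tasks for you; a GET to /druid/indexer/v1/supervisor lists the active supervisors so you can confirm it was created.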

By default, the Kafka indexing service supports transactional topics, which were introduced in Kafka 0.11.x. The consumer used by the Kafka indexing service is incompatible with older Kafka brokers; if you are using an older version, refer to the Kafka upgrade guide. Additionally, you can set isolation.level to read_uncommitted in consumerProperties (see the sketch after these notes) if either:

  • You don’t need Druid to consume transactional topics.

  • You need Druid to consume from older versions of Kafka. In that case, make sure offsets are sequential, since Druid no longer checks for offset gaps.

If your Kafka cluster enables consumer-group based ACLs, you can set group.id in consumerProperties to override the default auto-generated group ID.
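
For illustration, here is a hedged sketch of how these two overrides might look in the consumerProperties map of the supervisor spec’s ioConfig; the broker address and group ID are placeholders.

```python
# Placeholder consumerProperties for the supervisor spec's ioConfig;
# key names follow standard Kafka consumer configuration.
consumer_properties = {
    "bootstrap.servers": "kafka-broker:9092",  # assumed broker address
    # Only if you don't need transactional topics, or are reading from
    # pre-0.11.x brokers (and offsets are sequential):
    "isolation.level": "read_uncommitted",
    # Only if consumer-group based ACLs are enabled; overrides the
    # auto-generated group ID:
    "group.id": "druid-decodable-ingest",  # hypothetical group ID
}
```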

For more detailed information, see Druid’s Kafka documentation.