Amazon Redshift sink integration

This is a supported integration that requires manual configuration.

Contact Decodable support if you are interested in native support via a Decodable connector.

Sending a Decodable data stream to Redshift is accomplished in two stages:

  1. Creating a sink connector from Decodable to Kinesis.

  2. Materializing a view from a Kinesis stream and merging it into Redshift.

For more detailed information, see the Redshift example in the Decodable GitHub repository or the Amazon Redshift streaming ingestion documentation.

Add a Kinesis sink connector

Follow the configuration steps provided for the Amazon Kinesis sink connector.

Configure streaming ingestion

Previously, loading data from a streaming service such as Amazon Kinesis Data Streams into Amazon Redshift involved several steps: connecting the stream to Amazon Kinesis Data Firehose, waiting for Kinesis Data Firehose to stage the data in Amazon S3 in batches of various sizes and at buffer intervals of varying length, and then having Kinesis Data Firehose issue a COPY command to load the staged data from Amazon S3 into a table in Redshift.
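For reference, the final load in that legacy pipeline was a COPY from the staged S3 objects, roughly as in the sketch below; the table name, bucket path, and IAM role ARN are illustrative placeholders.

    -- Legacy pattern: Kinesis Data Firehose stages records in S3, then a COPY
    -- loads the staged objects into a Redshift table. All names are placeholders.
    COPY my_schema.decodable_events
    FROM 's3://my-firehose-bucket/decodable/'
    IAM_ROLE 'arn:aws:iam::123456789012:role/my-redshift-copy-role'
    FORMAT AS JSON 'auto';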

Streaming ingestion removes the preliminary staging in Amazon S3 and instead provides low-latency, high-speed ingestion of stream data from Kinesis directly into Redshift. Configure it by following these steps; a SQL sketch of steps 3 through 6 follows the list.

  1. Create an IAM role

  2. Assign the IAM role to your Redshift cluster

  3. Define an external schema

  4. Create a materialized view

  5. Refresh the materialized view

  6. Merge the data
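
The exact SQL depends on your stream and table definitions, but a minimal sketch of steps 3 through 6 might look like the following. It assumes the IAM role from steps 1 and 2 is already associated with the Redshift cluster, and that the schema, stream, view, table, and column names used here (kinesis_schema, my-decodable-stream, decodable_events_mv, events_staging, target_events, event_id, event_count) are placeholders to replace with your own.

    -- Step 3: define an external schema backed by Kinesis, using the IAM role
    -- created in step 1 and attached to the cluster in step 2.
    CREATE EXTERNAL SCHEMA kinesis_schema
    FROM KINESIS
    IAM_ROLE 'arn:aws:iam::123456789012:role/my-redshift-streaming-role';

    -- Step 4: create a materialized view over the stream. Each Kinesis record's
    -- payload arrives as VARBYTE in kinesis_data; parse it into a SUPER column.
    CREATE MATERIALIZED VIEW decodable_events_mv AS
    SELECT approximate_arrival_timestamp,
           partition_key,
           shard_id,
           sequence_number,
           JSON_PARSE(FROM_VARBYTE(kinesis_data, 'utf-8')) AS payload
    FROM kinesis_schema."my-decodable-stream";

    -- Step 5: refresh the materialized view to pull new records from the stream.
    REFRESH MATERIALIZED VIEW decodable_events_mv;

    -- Step 6: merge the refreshed data into the target table, staging the
    -- parsed fields first so the MERGE reads from a plain table.
    CREATE TEMP TABLE events_staging AS
    SELECT payload.event_id::VARCHAR AS event_id,
           payload.event_count::INT  AS event_count
    FROM decodable_events_mv;

    MERGE INTO target_events
    USING events_staging
    ON target_events.event_id = events_staging.event_id
    WHEN MATCHED THEN UPDATE SET event_count = events_staging.event_count
    WHEN NOT MATCHED THEN INSERT (event_id, event_count)
        VALUES (events_staging.event_id, events_staging.event_count);

In practice, the refresh and merge are typically run on a schedule (for example, with Redshift scheduled queries) so that the target table keeps pace with the stream.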