Amazon Redshift sink integration

This is a supported integration which requires manual configuration. Contact Decodable support if you are interested in native support with a Decodable connector.

Sending a Decodable data stream to Redshift is accomplished in two stages:

1. Creating a sink connector from Decodable to Kinesis.
2. Materializing a view from the Kinesis stream and merging it into Redshift.

For more detailed information, see the Redshift example in the Decodable GitHub repository, or Redshift's streaming ingestion documentation.

Add a Kinesis sink connector

Follow the configuration steps provided for the Amazon Kinesis sink connector.

Configure streaming ingestion

Previously, loading data from a streaming service like Amazon Kinesis Data Streams into Amazon Redshift involved several steps: connecting the stream to an Amazon Kinesis Data Firehose delivery stream, waiting for Kinesis Data Firehose to stage the data in Amazon S3 using batches of varying size and buffer intervals of varying length, and then having Kinesis Data Firehose trigger a COPY command to load the data from Amazon S3 into a table in Redshift.

Rather than staging the data in Amazon S3 first, streaming ingestion provides low-latency, high-speed ingestion of stream data from Kinesis into Redshift by following these steps (SQL sketches of steps 3 through 6 are shown after the list):

1. Create an IAM role.
2. Assign the IAM role to Redshift.
3. Define an external schema.
4. Create a materialized view.
5. Refresh the materialized view.
6. Merge the data.
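The following is a minimal sketch of steps 3 and 4, run in the Redshift query editor. The schema name, IAM role ARN, and stream name (a stream written by the Decodable Kinesis sink connector) are illustrative placeholders, not values from this integration; substitute your own.

```sql
-- Step 3: define an external schema that maps Redshift onto the Kinesis stream.
-- The IAM role (created and attached to the cluster in steps 1-2) must allow
-- reading from the stream. The ARN below is a placeholder.
CREATE EXTERNAL SCHEMA kinesis_schema
FROM KINESIS
IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-streaming-role';

-- Step 4: create a materialized view over the stream. Kinesis records arrive in
-- the VARBYTE column kinesis_data; JSON_PARSE turns a JSON payload into a SUPER
-- value that can be navigated later. "decodable-orders-stream" is a placeholder
-- for the stream name used by the Decodable Kinesis sink connector.
CREATE MATERIALIZED VIEW orders_stream_view AS
SELECT approximate_arrival_timestamp,
       partition_key,
       shard_id,
       sequence_number,
       JSON_PARSE(kinesis_data) AS payload
FROM kinesis_schema."decodable-orders-stream";
```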
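Steps 5 and 6 could then look like the sketch below. It assumes a hypothetical target table orders keyed by order_id and a JSON payload with order_id, customer_id, and order_total fields; because Redshift's MERGE takes a table as its source, the parsed records are first staged in a temporary table.

```sql
-- Step 5: refresh the materialized view to pull new records from the stream.
REFRESH MATERIALIZED VIEW orders_stream_view;

-- Stage the parsed stream records. Field names and types are illustrative.
CREATE TEMP TABLE orders_staging AS
SELECT payload.order_id::INT              AS order_id,
       payload.customer_id::INT           AS customer_id,
       payload.order_total::DECIMAL(10,2) AS order_total
FROM orders_stream_view;

-- Step 6: merge the staged records into the target table, updating existing
-- rows and inserting new ones.
MERGE INTO orders
USING orders_staging
ON orders.order_id = orders_staging.order_id
WHEN MATCHED THEN
    UPDATE SET customer_id = orders_staging.customer_id,
               order_total = orders_staging.order_total
WHEN NOT MATCHED THEN
    INSERT (order_id, customer_id, order_total)
    VALUES (orders_staging.order_id, orders_staging.customer_id, orders_staging.order_total);
```

In practice, the refresh, staging, and merge statements are typically scheduled to run on an interval so the Redshift table keeps pace with the stream.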