Use in-product walkthroughs to get started with Decodable

To help you get started building your own real-time data processing workflows, Decodable includes in-product walkthroughs that guide you through some of our most popular use cases. Each walkthrough provides step-by-step instructions for setting up connections to your third-party sources and destinations, creating pipelines to filter and process data, and verifying that data is flowing through the system as expected.

The in-product walkthroughs are available on the Decodable Web Dashboard page. Use them as a reference for learning how to build real-time data processing workflows with Decodable; you can swap the data sources and destinations they reference for your platforms of choice.

The following walkthroughs are available:

Walkthrough: Ingest clickstream, orders, or other application events into your data warehouse, data lake, or OLAP databases
Description: This walkthrough guides you through the high-level steps required to ingest event-oriented data from Apache Kafka, process it, and send it to Snowflake. By event-oriented data, we mean product orders, market trades, HTTP request and response events, or other kinds of application events.
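
Decodable pipelines express the processing step as SQL, typically an INSERT INTO ... SELECT statement that reads from an input stream and writes to an output stream. As a rough sketch of the kind of processing this walkthrough covers (the stream and field names below are invented for illustration):

    -- Cleanse raw order events before they reach the stream that a
    -- Snowflake sink connection reads from. Names are illustrative only.
    INSERT INTO orders_cleaned
    SELECT
        order_id,
        customer_id,
        quantity * unit_price AS order_total,
        CAST(order_ts AS TIMESTAMP(3)) AS order_ts
    FROM orders_raw
    WHERE order_status <> 'TEST'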

Walkthrough: Replicate data from a database to your data warehouse or data lake using Debezium-powered Change Data Capture (CDC)
Description: This walkthrough guides you through the high-level steps required to replicate data from PostgreSQL to Snowflake using Decodable and Debezium-powered Change Data Capture. It also shows how to use a Decodable pipeline to filter and mask data before it is sent to Snowflake.
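
The filter-and-mask step in this walkthrough is also a SQL pipeline. A minimal sketch, assuming the replicated table is exposed as a stream with these hypothetical fields:

    -- Drop internal accounts and mask the local part of the email
    -- address before the records are delivered to Snowflake.
    INSERT INTO customers_masked
    SELECT
        customer_id,
        REGEXP_REPLACE(email, '^[^@]+', '*****') AS email,
        country
    FROM customers_cdc
    WHERE is_internal = FALSE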

Walkthrough: Transform data between Kafka topics in SQL (Coming soon)
Description: This walkthrough guides you through the high-level steps required to transform data between two different Apache Kafka topics.
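
A topic-to-topic transformation of this kind is likewise written as a SQL pipeline between two Kafka-backed streams. A minimal, hypothetical sketch (stream and field names are invented):

    -- Reshape events read from one Kafka-backed stream and write them
    -- to another, renaming and casting fields along the way.
    INSERT INTO pageviews_enriched
    SELECT
        LOWER(user_id)           AS user_id,
        CAST(ts AS TIMESTAMP(3)) AS event_time,
        url,
        CASE WHEN device = 'm' THEN 'mobile' ELSE 'desktop' END AS device_type
    FROM pageviews_raw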

Walkthrough: Create a curated stream of data for other teams (Coming soon)
Description: This walkthrough guides you through the high-level steps required to ingest data from Apache Kafka, sanitize it, and send it to Amazon S3. We'll use role-based access control to create one group with the roles and permissions to modify the flow of data from Apache Kafka to Elasticsearch and another group that has only the role and permissions to view it.
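
The group, role, and permission setup described here is configured in Decodable itself rather than in pipeline SQL, so it isn't sketched; the sanitizing step, however, is again a SQL pipeline. A minimal, hypothetical sketch:

    -- Keep only the fields the curated stream should expose before the
    -- sink connection (and the read-only group) consumes it.
    INSERT INTO activity_curated
    SELECT
        user_id,
        event_type,
        event_time
    FROM activity_raw
    WHERE event_type IS NOT NULL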

Walkthrough: Run an existing Apache Flink job written in Java (Coming soon)