One of the first steps in incorporating Decodable into your existing data ecosystem is creating connections to get data in from a data source or send data out to a data destination. Decodable includes connectors that provide read and write support for a variety of data sources and destinations, including Amazon Kinesis, Amazon S3, Apache Kafka, Apache Pulsar, Snowflake, and more. To allow Decodable to access your data, you must configure a connection that contains your credentials for the data source or destination.
Once you've created a connection to your data source or destination of choice, you can activate it to stream data to or from the Decodable platform. A connection to a data source is also known as a source connection, and a connection to a data destination is also known as a sink connection.
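As a concrete sketch, a source connection can be described as a declarative resource. The shape below is illustrative only: the connection name, topic, and property keys are assumptions made for this example rather than values taken from this page, so refer to the topic for your connector for the exact fields it accepts.

```yaml
# Hypothetical declarative definition of a Kafka source connection.
# All names and property keys below are example assumptions.
kind: connection
metadata:
  name: orders-kafka-source        # example connection name
spec:
  connector: kafka                 # which connector to use
  type: source                     # source = get data in; sink = send data out
  properties:
    bootstrap.servers: broker-1.example.com:9092   # must be resolvable and routable from Decodable
    topic: orders                  # example topic to read from
```

A sink connection would look the same with `type: sink` and the credentials for the destination; activating the connection then starts data streaming.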
Decodable creates network connections to the resources that you specify in connections. As a result, hostnames must be resolvable and IP addresses must be routable. See Decodable IP Addresses if you want to connect to a third-party service that requires explicit whitelisting to enable external access. Let us know if you need help managing connectivity to your data infrastructure.
Processing guarantees for connectors
While Decodable's stream processing engine always processes data exactly once internally, not all connectors support this level of guarantee, and some make it configurable.
The processing guarantee varies by connector. Each connector specifies whether it provides at-least-once or exactly-once processing. For the exact guarantee, refer to the topic corresponding to the connector that you are interested in.
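Where a connector makes the guarantee configurable, it is typically exposed as a connection property. A hedged sketch follows, assuming a Kafka sink whose property key resembles Kafka's own delivery-semantics setting; the exact key and accepted values for any given connector are documented in that connector's topic.

```yaml
# Illustrative only: the property key and values are assumptions.
kind: connection
metadata:
  name: orders-kafka-sink
spec:
  connector: kafka
  type: sink
  properties:
    delivery.guarantee: exactly-once   # or at-least-once, depending on connector support
```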
For instructions on how to create a connection to a specific data source or destination, refer to the corresponding topic in the Connect to a data source or Connect to a data destination chapters.