Account

An account is an isolated tenant in Decodable. It exists within a single cloud provider and region (although it can connect to data services anywhere). All resources (connections, streams, pipelines, and so on) are created within an account.

Accounts have one or more identity providers (IdPs), which determine where logins are allowed from. The list of users from those IdPs can be further restricted, as can the privileges of those users.

See Accounts for more information.



Stream

A stream carries records within Decodable and can be read from or written to. It is conceptually similar to a "topic" in Kafka, or a "stream" in AWS Kinesis. Every stream has a schema that specifies its fields. Once a stream is defined, it can be used as an input or output by any number of pipelines or connections.

See Streams for more information.


Batch Analogy

If you're coming from a batch processing background, you can think of a Decodable stream like a database table. A stream's records are like a table's rows, and its fields are like a table's columns.


Connection

A connection allows data to flow between a Decodable stream and an external system such as a database, messaging system, or storage system. Connections include technology-specific configuration that allows them to communicate with these systems, typically including hostnames, ports, authentication information, and other settings.

Once configured, a connection can be activated to stream data to or from the Decodable platform. Connections come in two flavors: source and sink. Source connections read from an external system and write to a Decodable stream, while sink connections read from a stream and write to an external system.

See Connections for more information.


Pipeline

A pipeline is a streaming SQL query that processes data from one or more input streams and writes the results to an output stream. Pipelines must be activated to begin processing. Decodable ensures that your pipeline is fault-tolerant and delivers data exactly once by handling pipeline state management, consistent checkpointing, and recovery behind the scenes.
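For example, a minimal pipeline might filter one stream into another. This is a sketch only: the stream names (`http_events`, `http_errors`) and fields are hypothetical, not part of any account by default.

```sql
-- Read from the input stream http_events, keep only server-error
-- responses, and write the results to the output stream http_errors.
INSERT INTO http_errors
SELECT ip, url, status_code, `timestamp`
FROM http_events
WHERE status_code >= 500
```

Activating this pipeline starts a continuous query: every record arriving on `http_events` is evaluated against the `WHERE` clause, and matches are appended to `http_errors`.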

See Pipelines for more information.


Tasks

Both pipelines and connections are allocated a number of tasks that determines the resources available for processing. Tasks run in parallel, allowing processing to scale out as needed. Decodable's stream processing engine allocates up to the number of tasks you specify, although it may allocate fewer if it determines a task would be idle. The throughput a task can achieve varies with the size of each record, the complexity of the query, and the speed of the source- and sink-connected systems. Typically, a task can process 2-8 MiB of data, or 1,000-10,000 records, per second.
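As a rough sizing sketch, you can divide the expected input rate by a per-task throughput estimate. The 20,000 records-per-second input rate below is an assumed example; 5,000 records per second is the mid-range of the typical per-task figures above, and your workload may differ:

```latex
\text{tasks} \approx \left\lceil \frac{\text{input rate}}{\text{per-task throughput}} \right\rceil
              = \left\lceil \frac{20\,000\ \text{records/s}}{5\,000\ \text{records/s per task}} \right\rceil
              = 4
```

Because the engine may allocate fewer tasks than you specify when some would be idle, it is reasonable to start from an estimate like this and adjust based on observed throughput.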

Resource IDs

All resources in Decodable have a generated resource ID that is guaranteed to be unique within your account and will never change. Decodable uses resource IDs in many places so that you can freely rename resources without breaking your pipelines or other configuration. Resource IDs are short (typically 8-character) strings of letters and numbers, similar to Git SHAs. Just like SHAs, you should treat Decodable resource IDs as opaque UTF-8 strings.

What’s Next