Frequently asked questions

Here are some common questions and answers. Don’t see yours? Join us on Slack and ask.

General

How do I give feedback or request features?

For now, Slack is best. Let us know what you’re looking for. If you get stuck, we’re happy to help.

What kind of service SLA do you have?

Team and Enterprise subscribers can contact support to learn more about their account’s SLA.

What is a task?

A task is a connection or pipeline worker that performs data collection or processing. All connections and pipelines have at least one task, and frequently more, depending on the configured parallelism. You set the maximum number of tasks when you create a connection or pipeline. Each task receives a dedicated CPU and memory allocation in Decodable.

Accounts

Can I share resources between accounts?

No, it’s not currently possible to access resources across accounts, although the same users can have access to multiple accounts. It is possible for two accounts to have connections to the same systems. This, for example, allows pipelines in account A to send data to an Apache Kafka topic that is consumed by a pipeline in account B.

How do I delete an account?

Email support to delete an account.

Plans and billing

How is task usage measured and billed?

Decodable measures task usage once per minute and bills for the average task usage over each hour, rounded to the nearest whole number of tasks.
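For example, a pipeline that runs 5 tasks for 45 minutes and then scales down to 2 tasks for the rest of the hour averages 4.25 tasks and is billed for 4. Here’s a minimal Python sketch of that arithmetic (our own illustration with made-up numbers, not Decodable’s billing code):

```python
# Illustrative only: averages 60 per-minute task samples into an hourly
# bill, rounded to the nearest whole task. Not Decodable's billing code.

def billed_tasks_for_hour(per_minute_samples: list[int]) -> int:
    average = sum(per_minute_samples) / len(per_minute_samples)
    return round(average)

# 5 tasks for 45 minutes, then 2 tasks for 15 minutes:
samples = [5] * 45 + [2] * 15
print(billed_tasks_for_hour(samples))  # average 4.25 -> billed 4 tasks
```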

Do inactive pipelines and connections use tasks?

No! You’re only billed for active connections and pipelines.

Do real-time query previews use tasks?

No. The Developer and Team plans include a limited number of concurrent real-time previews while the Enterprise plan includes an unlimited number of previews.

Can I switch plans?

Yes! You can switch plans at any time. If you select a plan with a lower task limit, any connections or pipelines that put you over that limit will be deactivated. If you move from a paid plan to a free plan, you’ll be charged for any tasks you’ve already used.

Do you offer volume or annual pricing discounts?

If you’re interested in pre-purchasing task capacity or volume discounts, the Enterprise plan is probably for you! Contact us for a custom quote.

Can I purchase additional stream retention under the Team plan?

If you’d like to retain more than 100 GB of data, or keep data for more than 14 days, contact support.

Streams

What is stream retention and why do I want it?

When data is written to a stream, it is retained for a fixed amount of time and up to a fixed total size, whichever limit is reached first. This retention is what allows pipelines to tolerate failures, restarts, slow consumers, and other operational tasks you perform, without losing data. When stream data exceeds the retention time or size, it is automatically deleted, oldest first.

How does stream retention work?

You control both time- and size-based retention settings on a per-stream basis. Accounts on plans with retention maximums cannot exceed those limits. Size limits apply per account, while time limits apply per stream.

Example: Under the Team plan, every stream can retain data for 7 days; however, the total size of all streams cannot exceed 100 GB.
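To make the trimming rule concrete, here’s a small Python sketch of the behavior described above. It models both limits on a single stream for simplicity (in practice, the size budget is shared across all streams in the account); the class and numbers are our own illustration, not Decodable internals.

```python
import time
from collections import deque

# Illustrative sketch only: models the retention behavior described above
# (delete the oldest data once the time window or size budget is exceeded).

class RetainedStream:
    def __init__(self, retention_seconds: float, max_bytes: int):
        self.retention_seconds = retention_seconds
        self.max_bytes = max_bytes
        self.records = deque()  # (timestamp, size_in_bytes), oldest first
        self.total_bytes = 0

    def append(self, size: int, now: float | None = None) -> None:
        now = time.time() if now is None else now
        self.records.append((now, size))
        self.total_bytes += size
        self._trim(now)

    def _trim(self, now: float) -> None:
        # Drop the oldest records until both limits are satisfied again.
        while self.records and (
            now - self.records[0][0] > self.retention_seconds
            or self.total_bytes > self.max_bytes
        ):
            _, size = self.records.popleft()
            self.total_bytes -= size

# A Team-plan-like configuration: 7 days of retention, 100 GB budget
# (modeled here on a single stream for simplicity).
stream = RetainedStream(retention_seconds=7 * 24 * 3600, max_bytes=100 * 1024**3)
```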

Pipelines

What functions are supported in pipeline SQL?

See the Function Reference for the list of supported functions.

Do you support user-defined functions (UDFs)?

No, we do not support user-defined functions for performance and security reasons. Let us know if we’re missing a function you need.

For the most sophisticated cases, we encourage you to use event-driven function-as-a-service offerings combined with messaging. Not only is this more secure, but it also gives you full control over the processing performed. You can insert your function into a larger data flow by having one Decodable pipeline output to a messaging sink your function listens on, and a second Decodable pipeline receive your function’s output via a messaging source.
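As a sketch of that pattern, here’s what the custom-function step in the middle might look like, assuming Apache Kafka as the messaging layer and the confluent-kafka Python client. The broker address, topic names, and the transformation itself are hypothetical placeholders.

```python
import json
from confluent_kafka import Consumer, Producer

# Hypothetical sketch of the custom-function step: consume records that a
# Decodable pipeline wrote to one Kafka topic, transform them, and produce
# the results to a second topic that another pipeline reads as a source.
# Broker address, topic names, and the transform are placeholders.

consumer = Consumer({
    "bootstrap.servers": "broker:9092",
    "group.id": "my-function",
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["pipeline-a-output"])
producer = Producer({"bootstrap.servers": "broker:9092"})

def transform(record: dict) -> dict:
    # Arbitrary custom logic that pipeline SQL alone can't express.
    record["score"] = len(record.get("name", ""))
    return record

try:
    while True:
        msg = consumer.poll(1.0)
        if msg is None or msg.error():
            continue
        result = transform(json.loads(msg.value()))
        producer.produce("pipeline-b-input", value=json.dumps(result))
        producer.poll(0)  # serve delivery callbacks
finally:
    producer.flush()
    consumer.close()
```

A production version would also need error handling and delivery guarantees appropriate to your messaging system; the sketch only shows the shape of the data flow.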

How many tasks does my pipeline need?

We know everyone hates this answer, but it depends. Task throughput varies with the number of records, the size of each record, and the complexity of the pipeline itself (for example, how many functions it uses, whether it performs aggregation, and the number of predicates in a WHERE clause). You can experiment with task allocations and watch pipeline metrics to see how throughput changes. If you need help, let us know.
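As a rough illustration of that experiment, the Python sketch below compares per-task throughput at two allocations. The numbers are made up; substitute the records-per-second values your pipeline metrics actually report.

```python
# Hypothetical back-of-the-envelope comparison: if per-task throughput
# drops sharply as tasks are added, the bottleneck is likely elsewhere
# (for example, the source or sink) and more tasks won't help.

observations = {
    2: 41_000,  # task count -> observed records/sec (made-up numbers)
    4: 72_000,
}

for tasks, records_per_sec in sorted(observations.items()):
    print(f"{tasks} tasks: {records_per_sec / tasks:,.0f} records/sec per task")
```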