Known issues and limitations

Here are some known issues, limitations, and things to be aware of. Should any of these issues get in your way, let us know right away.

  • The app shows only aggregate input metrics; per-stream input metrics aren’t available.

  • List APIs, such as GET /pipelines, GET /streams, and GET /connections, ignore pagination query parameters.

  • If you see frequent errors in an active connection, the task size assigned to the connection might be too small. Try restarting the connection with a larger task size.

  • The REST and Datagen connectors don’t currently expose metrics.

  • If you see frequent errors in an active pipeline, the task size assigned to the pipeline might be too small. Try restarting the pipeline with a larger task size.

  • When you edit the underlying SQL query for a pipeline, Decodable doesn’t check whether the state associated with the new version of the pipeline is compatible with the previous version. If the state isn’t compatible, Decodable shows an error message asking you to discard the state.

  • Previews ignore some errors. If you aren’t receiving preview results, either there are no results or the preview failed. To minimize errors and troubleshooting, do the following:

    • Make sure that there is data actively coming in through the input streams.

    • Change the Starting Position to Earliest and run the preview again. This option is only available when you are previewing a pipeline.

  • Custom stream retention settings aren’t yet user-facing. For now, contact support if you need help adjusting stream retention.

  • Multi-typed maps aren’t supported in Avro schemas.

  • Sliding windows (the hop() function) require the window size to be a multiple of the slide interval. For example, hop(table x, descriptor(ts), interval '1' second, interval '60' seconds) is valid, while hop(table x, descriptor(ts), interval '3' seconds, interval '5' seconds) isn’t.
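
    As a sketch, a valid sliding-window query might look like the following. The stream name orders and the timestamp field ts are hypothetical, for illustration only:

    ```sql
    -- Hypothetical stream (orders) and timestamp field (ts), for illustration.
    -- Valid: the 60-second window size is a multiple of the 1-second slide.
    SELECT window_start, window_end, COUNT(*) AS order_count
    FROM TABLE(
      HOP(TABLE orders, DESCRIPTOR(ts), INTERVAL '1' SECOND, INTERVAL '60' SECOND))
    GROUP BY window_start, window_end
    ```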

  • Session windows aren’t currently available.

  • When performing windowed aggregation, you must group by both the generated window_start and window_end fields.
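
    For example, a tumbling-window count might look like the following. The stream name events and the timestamp field ts are hypothetical, for illustration only:

    ```sql
    -- Hypothetical stream (events) and timestamp field (ts), for illustration.
    SELECT window_start, window_end, COUNT(*) AS event_count
    FROM TABLE(
      TUMBLE(TABLE events, DESCRIPTOR(ts), INTERVAL '1' MINUTE))
    GROUP BY window_start, window_end  -- group by both generated fields
    ```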