Known issues and limitations

Here are some known issues, limitations, and things to be aware of. Should any of these issues get in your way, let us know right away.


  • It’s not possible to see per-stream input metrics in the app, only aggregate metrics.


  • Pagination query parameters are ignored by the List APIs, such as GET /pipelines, GET /streams, and GET /connections.


  • If you see frequent errors in an active connection, the task size assigned to the connection might be too small. Try increasing the task size and restarting the connection.

  • The REST and Datagen connectors do not currently expose metrics.


  • If you see frequent errors in an active pipeline, the task size assigned to the pipeline might be too small. Try increasing the task size and restarting the pipeline.

  • When you edit the underlying SQL query for a pipeline, Decodable does not check whether the state associated with the new version of the pipeline is compatible with the previous version. If the state is not compatible, an error message is shown asking you to discard the state.


  • Previews silently ignore some errors. If you are not receiving preview results, either there are no results or the preview failed. To minimize errors and simplify troubleshooting, do the following:

    • Make sure that there is data actively coming in through the input stream(s).

    • Change the Starting Position to Earliest and run the preview again. This option is only available when you are previewing a pipeline.


  • Custom stream retention settings are not yet user-facing. For now, contact support if you need help adjusting stream retention.

  • Multi-typed maps are not supported in Avro schemas.


  • Sliding windows, that is, the hop() function, require the window size argument to be an integer multiple of the slide interval. For example, hop(table x, descriptor(ts), interval '1' second, interval '60' seconds) is valid, while hop(table x, descriptor(ts), interval '3' seconds, interval '5' seconds) is not.
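The multiple-of-slide rule can be checked before submitting a query. A minimal sketch (plain Python, not part of the product; both intervals expressed in whole seconds):

```python
def hop_args_valid(slide_seconds, size_seconds):
    """Return True if the hop() window size is an integer
    multiple of the slide interval."""
    return size_seconds % slide_seconds == 0

hop_args_valid(1, 60)  # True:  interval '1' second slide, '60' seconds size
hop_args_valid(3, 5)   # False: 5 is not a multiple of 3
```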

  • Session windows are not currently available.

  • When performing windowed aggregation, you must include both the generated window_start and window_end fields in the GROUP BY clause.
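The GROUP BY requirement amounts to keying each aggregate on the (window_start, window_end) pair. An illustrative sketch of that grouping (plain Python, not Decodable code; the row layout is an assumption):

```python
from collections import defaultdict

def count_per_window(rows):
    """Count rows per window, keyed on the (window_start, window_end)
    pair -- mirroring GROUP BY window_start, window_end in SQL."""
    counts = defaultdict(int)
    for row in rows:
        counts[(row["window_start"], row["window_end"])] += 1
    return dict(counts)

rows = [
    {"window_start": 0,  "window_end": 60},
    {"window_start": 0,  "window_end": 60},
    {"window_start": 60, "window_end": 120},
]
count_per_window(rows)  # {(0, 60): 2, (60, 120): 1}
```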