Create pipelines using your own Apache Flink jobs

A pipeline is a set of data processing instructions written in SQL or expressed as an Apache Flink job. When you create a pipeline from an Apache Flink job, you define how your data should be processed in a JVM-based programming language of your choice, such as Java or Scala. For example, you can create a pipeline that enriches incoming data by invoking an API from within the pipeline and then sends the enriched data to a destination of your choosing.
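To illustrate the kind of enrichment logic such a job might contain, here is a minimal sketch in plain Java. The class, method, and field names are hypothetical, and the external API call is stubbed with an in-memory lookup table; in a real Flink job, this logic would typically live inside a `MapFunction` (or an `AsyncFunction` for non-blocking API calls) applied to a `DataStream`.

```java
import java.util.HashMap;
import java.util.Map;

/**
 * Hypothetical enrichment step: appends a country name, looked up from an
 * external source (stubbed here as a static map), to each incoming record.
 */
public class EnrichmentSketch {

    // Stand-in for a call to an external API or reference data service.
    private static final Map<String, String> COUNTRY_LOOKUP = new HashMap<>();
    static {
        COUNTRY_LOOKUP.put("DE", "Germany");
        COUNTRY_LOOKUP.put("US", "United States");
    }

    /** Enriches a record of the form "id,countryCode" to "id,countryCode,countryName". */
    public static String enrich(String record) {
        String[] fields = record.split(",");
        String countryName = COUNTRY_LOOKUP.getOrDefault(fields[1], "unknown");
        return record + "," + countryName;
    }

    public static void main(String[] args) {
        System.out.println(enrich("42,DE"));
    }
}
```

In a real pipeline, the enrichment function would be packaged with the rest of the Flink job into a JAR for upload.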

Create a JVM-based pipeline if you are a developer with a use case that SQL is too inflexible to express, or if you have an existing Apache Flink workload that you would like to migrate to Decodable. Once built, upload the pipeline to Decodable as a JAR file, where it can be managed alongside any SQL-based pipelines that you have.

This image shows the Pipelines page, where both SQL-based and JVM-based pipelines are managed.

This feature is currently available as a Tech Preview. If you would like access to this feature, contact us.