Definitions

This is the reference for the apply input and query output YAML format.

If you are new to declarative resource management in Decodable, see the overview.

Each resource definition takes the form of a YAML document, which begins with a line of three hyphens (---). A YAML file is a series of YAML documents.

New to YAML? This excellent tutorial covers the basics and advanced concepts.
For a complete example with a set of resource definitions that work together, see here.
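
For instance, a single file can define several resources as a series of YAML documents, each introduced by ---. A minimal sketch with placeholder names:

---
kind: stream
metadata:
  name: <a_stream_name>
spec_version: v1
spec:
  <stream_specifications>
---
kind: connection
metadata:
  name: <a_connection_name>
spec_version: v1
spec:
  <connection_specifications>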

Top-level structure

Each resource definition YAML doc has these fields:

---
kind: <resource_type>
metadata:
  name: <name_your_resource>
  description: <resource_description>
spec_version: v1
spec:
  <resource_specifications>
kind (Required)
  The kind of resource. One of: secret, connection, pipeline, stream.

metadata (Required)
  Values that identify or describe the resource, including the name and description text strings. The name value is mapped to an ID when the YAML is applied. The description field is optional.

spec_version (Required)
  Determines the spec format and processing logic for this kind. Currently always v1.

spec (Required, except with kind: secret)
  The resource’s specifications. The format of the spec object is different for each Decodable resource kind, and contains nested fields specific to that resource.

The query output format extends this resource definition structure with a status section. The status contains non-writable fields managed by Decodable for the resource, and is ignored by apply.
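
For example, the query output for a resource repeats the definition fields and appends the status section (contents shown here as placeholders):

---
kind: <resource_type>
metadata:
  name: <resource_name>
  description: <resource_description>
spec_version: v1
spec:
  <resource_specifications>
status:
  <non_writable_fields_managed_by_decodable>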

Resource definitions

A resource definition YAML doc template is given below for each kind, along with a description of the contents of its spec field.

Connection

A template for a connection definition:

---
kind: connection
metadata:
  name: <name_your_connection>
  description: <description>
spec_version: v1
spec:
  connector: <connector_name>
  type: <type of connector>
  properties:
    <property>: <value>
    <property_2>: <value>
  stream_name: <stream>
  schema_v2:
    fields:
      - kind: <kind of field>
        name: <field name>
        type: <data type>
spec.connector
  The name of the connector. For example: mysql-cdc or s3-v2.

spec.type
  The type of connector. Enter source if the connector receives data, or sink if it sends data.

spec.properties
  Each connector has its own set of properties that you can specify. To determine the property names and their valid values for a specific connector, refer to that connector’s documentation page.
  Note: Secret properties, such as passwords, take the name of a secret resource, not the actual password or plaintext secret value.

spec.stream_name
  The name of the stream that the connection sends data to or receives data from.

spec.schema_v2
  The schema of the connection. This must match the schema of the stream that it’s connected to.
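
As an illustrative sketch, a source connection using the mysql-cdc connector might look like the following. The property keys and field types are assumptions for illustration only; take the actual property names and valid values from the mysql-cdc connector documentation page.

---
kind: connection
metadata:
  name: orders-mysql-source
  description: Ingest order events from MySQL via CDC
spec_version: v1
spec:
  connector: mysql-cdc
  type: source
  properties:
    hostname: mysql.example.com   # assumed property key; check the connector docs
    port: 3306                    # assumed property key
    database-name: shop           # assumed property key
    table-name: orders            # assumed property key
    username: decodable           # assumed property key
    password: mysql-password      # name of a secret resource, not the plaintext password
  stream_name: orders
  schema_v2:
    fields:
      - kind: physical
        name: order_id
        type: BIGINT
      - kind: physical
        name: order_ts
        type: TIMESTAMP(3)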

Stream

A template for a stream definition:

---
kind: stream
metadata:
  name: <name_your_stream>
  description: <description>
spec_version: v1
spec:
  schema_v2:
    fields:
      - kind: <kind of field>
        name: <field name>
        type: <data type>
    constraints:  # optional
      primary_key:
        - <field name>
    watermarks:  # optional
      - name: <name>
        expression: <watermark expression>
spec.schema_v2 (Required)
  The schema of the stream. This must match the schema of the connection that it’s connected to.

spec.schema_v2.fields.kind (Required)
  One of: physical, computed, metadata. For more information on these options, see the table in How to use the schema view.

spec.schema_v2.constraints.primary_key (Optional)
  One or more fields to use as a primary key or partition key. Specify a primary key if, and only if, you are sending Change Data Capture (CDC) records. For more information on partition keys and primary keys, see the table in How to use the schema view.

spec.schema_v2.watermarks (Optional)
  A field that can be used to track the progression of event time in a pipeline.
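
Putting these together, a sketch of a stream definition with a primary key and a watermark. Names, types, and the watermark expression are illustrative, and the primary key and watermark are shown side by side purely to illustrate the syntax:

---
kind: stream
metadata:
  name: orders
  description: Order events keyed by order_id
spec_version: v1
spec:
  schema_v2:
    fields:
      - kind: physical
        name: order_id
        type: BIGINT
      - kind: physical
        name: order_ts
        type: TIMESTAMP(3)
      - kind: physical
        name: amount
        type: DECIMAL(10, 2)
    constraints:
      primary_key:
        - order_id
    watermarks:
      - name: order_ts_watermark
        expression: "`order_ts` - INTERVAL '5' SECOND"  # assumed Flink SQL-style expression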

Secret

A template for a secret definition:

---
kind: secret
metadata:
  name: <name_your_secret>
  description: <description>
spec_version: v1

Note that there is no spec for secret, but spec_version: v1 is still required.

Remember that a secret resource is created without a plaintext value (the actual password, for example), which must be set after creation.
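
For example, a secret that will hold a database password; the plaintext value itself is supplied in a separate step once the resource exists:

---
kind: secret
metadata:
  name: mysql-password
  description: Password for the MySQL orders database
spec_version: v1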

Pipeline

SQL Pipeline

A template for a SQL pipeline definition:

---
kind: pipeline
metadata:
  name: <name_your_pipeline>
  description: <description>
spec_version: v1
spec:
  sql: |
    INSERT INTO stream_out
    SELECT ...
    FROM stream_in
spec.sql (Required)
  A SQL statement of the form:

  INSERT INTO <output-stream> SELECT ... FROM <input-stream>

  For example:

  INSERT INTO stream_out SELECT LOWER(col1) FROM stream_in
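
A filled-in sketch, assuming streams named stream_in and stream_out already exist and carry a col1 field:

---
kind: pipeline
metadata:
  name: lowercase-col1
  description: Copy stream_in to stream_out, lowercasing col1
spec_version: v1
spec:
  sql: |
    INSERT INTO stream_out
    SELECT LOWER(col1) AS col1
    FROM stream_in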

Custom Pipeline

The CLI apply command does special local processing for a custom pipeline: it uploads any files named in the YAML spec fields job_file_path and config_file_paths, replacing those spec fields with job_file_sha and config_files.

The query command output reflects the stored file objects via job_file_sha and config_files. These spec fields may also be used directly with apply if you know the file SHAs but don’t have the actual files locally (see the sketch after the field table below).

See the spec field table below for details.

An apply template for a Custom Pipeline:

---
kind: pipeline
metadata:
  name: <name_your_pipeline>
  description: <description>
spec_version: v1
spec:
  type: JAVA
  job_file_path: pipeline.jar
  config_file_paths:
    - config.yaml
    - example.data
  properties:
    secrets:
      - <secret_name>
    flink_version: 1.18-java11
    additional_metrics:
      - some.flink.metric
type (Required)
  The job file type: JAVA or PYTHON.

job_file_path (Used by CLI apply)
  Path to the job file (a JAR for JAVA) in the local file system. The CLI uploads this file as needed, converting this field to job_file_sha.

job_file_sha (Required by the API; returned by query)
  SHA-512 of the contents of the job file. Must match an already-uploaded job file.

job_arguments (Optional)
  Argument string input to the custom pipeline program.

entry_class (Optional)
  Entry class of the custom pipeline. If not provided, the entry class must be specified in the file META-INF/MANIFEST.MF in the pipeline’s JAR file, using the Main-Class property key.

config_file_paths (Used by CLI apply)
  List of strings, each a path to a config file in the local file system. The CLI uploads these files as needed, converting this field to config_files.

config_files (Optional; used by the API; returned by query)
  List of objects with the fields below.

config_files[*].name (Optional)
  Name of the config file as exposed to the custom pipeline program.

config_files[*].sha (Required)
  SHA-512 of the contents of the config file. Must match an already-uploaded config file.

properties (Optional)
  Object with the fields below.

properties.secrets (Optional)
  List of strings, each a secret name (or ID).

properties.flink_version (Optional)
  A specific Decodable-supported Flink version (string).

properties.additional_metrics (Optional)
  List of strings, each a Flink metric to expose in the _metrics stream for this custom pipeline.
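
For example, a sketch of the SHA-based form that query returns, and that apply also accepts when the files are already uploaded. Names and digests are placeholders, and the entry class is illustrative:

---
kind: pipeline
metadata:
  name: <name_your_pipeline>
  description: <description>
spec_version: v1
spec:
  type: JAVA
  job_file_sha: <sha-512_of_uploaded_job_file>
  entry_class: com.example.MyPipelineJob   # illustrative entry class
  config_files:
    - name: config.yaml
      sha: <sha-512_of_uploaded_config_file>
  properties:
    secrets:
      - <secret_name>
    flink_version: 1.18-java11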