Resource definitions

If you are new to declarative resource management in Decodable, see the overview.

This is the reference for the apply input and query output YAML format.

There are four resource types in Decodable:

  • Connection

  • Stream

  • Pipeline

  • Secret

Each resource definition takes the form of a YAML document, which begins with a line of three hyphens (---). A YAML file is a series of one or more YAML documents. If you’re new to YAML, you may find this tutorial, which covers the basics and advanced concepts, useful.
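For instance, a single file can hold several resource definitions, one document per resource, each introduced by `---` (the names and schema below are illustrative):

```yaml
# One file, two YAML documents, each introduced by ---
---
kind: secret
metadata:
  name: db_password        # illustrative name
spec_version: v1
---
kind: stream
metadata:
  name: orders             # illustrative name
spec_version: v1
spec:
  schema_v2:
    fields:
      - kind: physical
        name: order_id
        type: BIGINT
```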

For a complete example with a set of resource definitions that work together, see here.

Specification versions (spec_version)

All resource definitions support the v1 specification version. Connections and pipelines also support v2.

v2 adds support for execution details of the resource, such as state and task size.

Top-level structure

Each resource definition YAML document has these fields:

---
kind: <resource_type>
metadata:
  name: <name_your_resource>
  description: <resource_description>
  tags:
    <tag_key_1>: <tag_value_1>
spec_version: <v1|v2>
spec:
  <resource_specifications>
Field Required? Description

kind

Required

The kind of resource. One of: secret, connection, pipeline, stream.

metadata

Required

Values that identify or describe the resource, including name, description, and tags.

  • The name value is mapped to an ID when the YAML is applied.

  • The description field is optional and provides additional context about the resource.

  • The tags section is optional and consists of key/value pairs that help categorize the resource. Each tag key must be unique and is associated with an optional value.

spec_version

Required

Determines spec format and processing logic for this kind.

  • Must be v1 for streams and secrets.

  • Must be v1 or v2 for connections and pipelines.

spec

Required, except with kind: secret

The resource’s specifications. The format of the spec object is specific to each resource kind and spec_version combination. It contains nested fields specific to that resource.
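Putting the top-level fields together, a minimal complete document might look like this (the name, description, tags, and schema are all illustrative):

```yaml
---
kind: stream
metadata:
  name: orders
  description: Raw order events from the web shop
  tags:
    team: payments
    env: staging
spec_version: v1
spec:
  schema_v2:
    fields:
      - kind: physical
        name: order_id
        type: BIGINT
      - kind: physical
        name: amount
        type: DECIMAL(10, 2)
```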

Status

The query output format extends the above resource definition structure with a status section. The status contains non-writable fields managed by Decodable for the resource, and is ignored by apply.

An example status section looks like this:

status:
  version: 1
  is_latest: true
  active_version: 1
  latest_version: 1
  create_time: 2024-07-23T20:57:58.760+00:00
  last_activated_time: 2024-07-23T21:45:47.276+00:00
  execution:
    state: RUNNING

Connection

A template for a connection definition:

---
kind: connection
metadata:
  name: <name_your_connection>
  description: <description>
  tags:
    <tag_key_1>: <tag_value_1>
spec_version: v2
spec:
  connector: <connector_name>
  type: <type of connector>
  properties:
    <property_1>: <value>
    <property_2>: <value>
  stream_name: <stream>
  schema_v2:
    fields:
      - kind: physical
        name: <field name>
        type: <data type>
      - kind: computed
        name: <field name>
        expression: <expression>
      - kind: metadata
        name: <field name>
        type: <data type>
        key: <per-connector key>
  execution: # v2 spec only
    active: <true|false>
    task_size: <task size>
    task_count: <number of tasks>
Field Description

v1 and v2 spec

spec.connector

The name of the connector. For example: mysql-cdc or s3-v2.

spec.type

The type of connector. Specify source if the connector reads data into Decodable, or sink if it sends data out.

spec.properties

Each connector has its own set of properties that you can specify. To determine the property names and their valid values for a specific connector, refer to the corresponding documentation page for that connector.

Important: Secret properties, such as passwords, use the name of a secret resource, not the actual password or secret plaintext value.
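For example, a properties block that references a Secret by name might look like this (the connector property names vary per connector; the hostnames and names here are illustrative):

```yaml
spec:
  connector: mysql-cdc
  type: source
  properties:
    hostname: db.example.com   # illustrative value
    username: app_user         # illustrative value
    password: mysql-password   # the NAME of a Decodable secret resource,
                               # not the actual password plaintext
```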

The following are only valid for connectors that support a single stream.

spec.stream_name

The name of the stream that you want the connection to send data to or receive data from.

spec.schema_v2

The schema of the connection. This must be compatible with the schema of the stream that it’s connected to.

spec.schema_v2.fields[*].kind

One of: physical, computed, metadata.

For more information on how to use each field kind, see here.

The following are only valid for connectors that support multiple streams.

spec.stream_mappings

One or more mappings between Decodable streams and external resources in the system that the connection reads from or writes to.

Consult the connector documentation for the specific resource specifiers available.

For example:

spec:
  […]
  stream_mappings:
      - stream_name: epos_orders
        external_resource_specifier:
          database-name: camelot
          table-name: orders
      - stream_name: epos_products
        external_resource_specifier:
          database-name: merlin
          table-name: products_v2
      - stream_name: epos_products
        external_resource_specifier:
          database-name: excalibur
          table-name: products

Stream:external resource mappings may be 1:1, 1:N (in the case of a source), or N:1 (in the case of a sink).

That is, many external resources can be mapped to a single stream in a source, and many streams can be mapped to a single external resource in a sink.

v2 spec

spec.execution.active

Whether the connection is active or not.

Valid values: true, false

Default: false

spec.execution.task_size

The task size for the connection.

Valid values: S, M, L

Default: M

spec.execution.task_count

The number of tasks to run for the connection.

Default: 1

spec.execution.initial_start_position

When starting the connection for the first time, or after a reset-state operation, the start position in the source streams from which to read.

Valid values: earliest, latest

Default: latest
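Taken together, a v2 connection’s execution block filled in with the defaults described above looks like this:

```yaml
spec:
  execution:
    active: false                   # default
    task_size: M                    # default; one of S, M, L
    task_count: 1                   # default
    initial_start_position: latest  # default; or earliest
```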

Stream

A template for a stream definition:

---
kind: stream
metadata:
  name: <name_your_stream>
  description: <description>
  tags:
    <tag_key_1>: <tag_value_1>
spec_version: v1
spec:
  schema_v2:
    fields:
      - kind: physical
        name: <field name>
        type: <data type>
      - kind: computed
        name: <field name>
        expression: <expression>
    constraints:  # optional
      primary_key:
        - <field name>
    watermarks:  # optional
      - name: <name>
        expression: <watermark expression>
Field Required? Description

spec.schema_v2

Required

The schema of the stream. This must be compatible with the schema of any connection that it’s connected to, and any pipeline that uses it.

spec.schema_v2.fields[*].kind

Required

One of: physical, computed.

For more information on how to use each field kind, see here.

spec.schema_v2.constraints.primary_key

Optional

Specify one or more fields to use as a primary key or partition key.

Specify a primary key if, and only if, you are sending Change Data Capture (CDC) records. For more information on partition keys or primary keys, see the table in How to use the schema view.

spec.schema_v2.watermarks

Optional

One or more named expressions over a timestamp field, used to track the progression of event time in a pipeline.
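For example, a watermark on a hypothetical event_time timestamp field that tolerates events arriving up to 5 seconds out of order (the expression form shown follows Flink SQL interval arithmetic; check the watermark documentation for the exact syntax supported):

```yaml
spec:
  schema_v2:
    fields:
      - kind: physical
        name: event_time          # hypothetical timestamp field
        type: TIMESTAMP(3)
    watermarks:
      - name: event_time_watermark
        expression: "`event_time` - INTERVAL '5' SECOND"
```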

Pipeline

SQL Pipeline

A template for a SQL Pipeline definition:

---
kind: pipeline
metadata:
  name: <name_your_pipeline>
  description: <description>
  tags:
    <tag_key_1>: <tag_value_1>
spec_version: v2
spec:
  sql: |
    INSERT INTO stream_out
    SELECT ...
    FROM stream_in
  execution: # v2 spec only
    active: <true|false>
    task_size: <task size>
    task_count: <number of tasks>
    initial_start_positions:
      stream_1: earliest
      stream_2: latest

Field Description

v1 and v2 spec

spec.sql

A SQL statement in the form:

INSERT INTO <output-stream>
SELECT ... FROM <input-stream>

For example:

INSERT INTO stream_out
SELECT LOWER(col1) FROM stream_in

v2 spec only

spec.execution.active

Whether the pipeline is active. Boolean.

Default: false

spec.execution.task_size

The task size for the pipeline.

Valid values: S, M, L

Default: M

spec.execution.task_count

The number of tasks to run for the pipeline.

Default: 1

spec.execution.initial_start_positions

For each stream, the start position from which to read when starting the pipeline for the first time, or after a reset-state operation.

Specify the stream name and its starting position.

Valid values: earliest, latest

Default: latest

For example:

spec:
  execution:
    initial_start_positions:
      stream_1: earliest
      stream_2: latest
Can’t be used with initial_snapshot_id.

spec.execution.initial_snapshot_id

When starting the pipeline for the first time, or after a reset-state operation, the snapshot from which to start.

For example:

spec:
  execution:
    initial_snapshot_id: 4e335631
Can’t be used with initial_start_positions.

Custom Pipeline

A template for a Custom Pipeline definition. All referenced files are assumed to be available in the same location as the YAML document:

---
kind: pipeline
metadata:
  name: <name_your_pipeline>
  description: <description>
  tags:
    <tag_key_1>: <tag_value_1>
spec_version: v2
spec:
  type: JAVA
  job_file_path: pipeline.jar
  config_file_paths:
    - config.yaml
    - example.data
  properties:
    secrets:
      - <secret_name>
    flink_version: 1.18-java11
    additional_metrics:
      - some.flink.metric
  execution: # v2 spec only
    active: <true|false>
    task_size: <task size>
    task_count: <number of tasks>
    initial_snapshot_id: <snapshot id>

Field Description Required?

v1 and v2 spec

spec.type

Job file type. Either JAVA or PYTHON.

Required

spec.job_file_path

Path to the job file (a JAR for JAVA, a ZIP for PYTHON) in the local file system. The CLI will upload this file as needed. Not available when invoking the API directly.

Either this or job_file_sha is required

spec.job_file_sha

SHA-512 of contents of job file. Must match an already uploaded job file.

Either this or job_file_path is required
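The SHA-512 of a job file can be computed with standard tooling; a minimal sketch in Python (the file name pipeline.jar is a placeholder):

```python
import hashlib

def sha512_of_file(path: str) -> str:
    """Return the hex-encoded SHA-512 digest of a file's contents."""
    h = hashlib.sha512()
    with open(path, "rb") as f:
        # Read in chunks so large JAR/ZIP files don't need to fit in memory.
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

# Example with in-memory bytes instead of a file:
digest = hashlib.sha512(b"example job file contents").hexdigest()
print(len(digest))  # a SHA-512 hex digest is always 128 characters
```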

spec.job_arguments

Argument string input to the Custom Pipeline program.

Optional

spec.entry_class

Entry class of the Custom Pipeline. If not provided, the entry class must be specified in the file META-INF/MANIFEST.MF in the pipeline’s JAR file, using the Main-Class property key.

Optional
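For a JAVA pipeline without spec.entry_class, the JAR’s manifest must name the entry class. A minimal META-INF/MANIFEST.MF (the class name is hypothetical):

```
Manifest-Version: 1.0
Main-Class: com.example.MyPipelineJob
```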

spec.config_file_paths

List of strings, each a path to a configuration file in the local file system. The CLI will upload these files as needed. Not available when invoking the API directly.

Optional

spec.config_files

List of objects (as follows).

Optional

spec.config_files[*].name

Name of configuration file as exposed to the Custom Pipeline program.

Optional

spec.config_files[*].sha

SHA-512 of contents of configuration file. Must match an already uploaded configuration file.

Required

spec.properties

Object (as follows).

Required

spec.properties.secrets

List of strings, each a Secret name (or ID).

Optional

spec.properties.flink_version

A specific Decodable-supported Flink version (string).

Required

spec.properties.additional_metrics

List of strings, each a Flink metric to expose in the _metrics stream for this Custom Pipeline.

Optional

v2 spec only

spec.execution.active

Whether the pipeline is active. Boolean.

Default: false

Optional

spec.execution.task_size

The task size for the pipeline.

Valid values: S, M, L

Default: M

Optional

spec.execution.task_count

The number of tasks to run for the pipeline.

Default: 1

Optional

spec.execution.initial_start_positions

For each stream, the start position from which to read when starting the pipeline for the first time, or after a reset-state operation.

Specify the stream name and its starting position.

Valid values: earliest, latest

Default: latest

For example:

spec:
  execution:
    initial_start_positions:
      stream_1: earliest
      stream_2: latest
Can’t be used with initial_snapshot_id.

Optional

spec.execution.initial_snapshot_id

When starting the pipeline for the first time, or after a reset-state operation, the snapshot from which to start.

For example:

spec:
  execution:
    initial_snapshot_id: 4e335631
Can’t be used with initial_start_positions.

Optional

Secret

A template for a secret definition:

---
kind: secret
metadata:
  name: <name_your_secret>
  description: <description>
  tags:
    <tag_key_1>: <tag_value_1>
spec_version: v1
spec:
  # Use _one_ of the following value_* fields.
  value_env_var: MY_PASSWORD
  value_file: ./secret.txt
  value_literal: "don't-tell-123"

A Decodable Secret resource stores a plaintext value (such as a password) securely, so that it may be used by Connections or Custom Pipelines. A Connection references a Secret by name from a (secret-type) property value.

The apply CLI command can set this plaintext value from an environment variable, a file, or a literal value, via special spec fields.

These fields can only be used to write the plaintext value: the query command output for a Secret will not include them. The fields aren’t considered part of the Secret resource. Instead they act as directives to the CLI apply command to write the plaintext value to the secure Secret storage in your Decodable account.
Field Description

spec.value_env_var

The name of an environment variable containing the plaintext value.

This is recommended for a simple plaintext value string such as a password.

Note that not all binary values can be stored in an environment variable, and that some shell-based techniques can strip newlines when setting an environment variable.

The apply command will error if no environment variable by this name is set.
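A sketch of the environment-variable flow, assuming a file secret.yaml whose spec sets value_env_var: MY_PASSWORD (the variable value and file name are placeholders):

```shell
# Set the environment variable named by value_env_var before running apply.
export MY_PASSWORD='placeholder-value'

# Then apply the YAML file; the CLI reads the variable and writes the
# plaintext to secure Secret storage. (Commented out here: requires the CLI.)
# decodable apply secret.yaml

echo "$MY_PASSWORD"
```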

spec.value_file

A path (relative to the YAML file’s directory) to a local file containing the plaintext value.

This is recommended for a multiline string (such as a private key) or binary plaintext value.

The file contents are used verbatim. Note that this includes any trailing newlines.

The apply command will error if no such file can be found and read.

spec.value_literal

A string with the literal plaintext value.

Be cautious storing any secret value this way. Care must be taken to ensure that any YAML file that uses this field isn’t committed to source control, emailed, sent over Slack, logged, etc.

This field may be appropriate for private ad-hoc use, and for use with generated YAML. The other two fields are recommended for most CI/CD use.

For API users: These fields are specially handled by the CLI apply command; the underlying /apply API doesn’t directly support them.

Secret definition without plaintext value

When no spec.value_* field is present (as in query command output), no plaintext value is written for the Secret resource on apply.

In this case there is no spec for the Secret definition, but spec_version: v1 is still required.
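For example, a Secret definition with no plaintext directive (the name is illustrative):

```yaml
---
kind: secret
metadata:
  name: my-api-key
spec_version: v1
```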

For a Secret created this way, the plaintext value may be manually set after apply, as a separate step. This can be done via the Decodable UI or CLI (decodable secret write <id> --value '<value>'). It need only be done once per Secret. Alternatively, a spec.value_* field may be added and apply repeated.

If the Secret resource already exists and no spec.value_* field is provided then the existing plaintext value will remain unchanged. This may be appropriate for certain use cases.