# Create and update resources

You can apply Decodable resource definitions to your Decodable account using a YAML file and the `apply` CLI command. If you are new to declarative resource management in Decodable, see the overview.

Applying a set of resources is idempotent and atomic: if anything goes wrong, no changes are made. If an apply succeeds, you can be sure that the new state is valid and that your Decodable resources are in sync with your YAML files. A successful `apply` response indicates which resources were created, updated, or remained unchanged.

When you run the `apply` command to create or manage resources, the order in which those resources appear in the YAML file doesn't matter. The `apply` command always prioritizes dependency order: secret and stream resources are always created before the connection and pipeline resources that depend on them. The response shown after applying always presents the results in the same order as the input resources.

The apply process stops on the first error it finds, saving no changes to any resource. In this case the response provides details about the error, rather than listing results for each input resource. In short, either all changes succeed together, or none are made.

Audit events are always sent to the `_events` stream for any created or updated resources (except in `--dry-run` mode).

## Types of operation

The `apply` command supports creating and updating resources. Resources are identified by `kind` and `metadata.name`. To rename an existing resource, add the `id` of the existing resource to the `metadata` field and change the name as required. See Renaming a resource.

Resources can't be deleted using the `apply` command. To delete resources created by an apply, use the `query` command with the delete operation.

## How to use the apply command

You'll need the Decodable CLI (version 1.20.0 or later).

### Step 1: Create resource definitions

To start, you'll need a YAML file to contain your resource definitions. There are three ways that you might create this:

* Manually create the YAML file. Each resource definition in the file is its own YAML document. It must begin with a line of three hyphens (`---`) and contain the following fields:

  ```yaml
  ---
  kind: <resource_type>
  metadata:
    name: <name_your_resource>
    description: <resource_description>
  spec_version: v1
  spec: <resource_specifications>
  ```

  For a full explanation of each field, see the definition reference.

* For connections and their related streams, generate the resource definition using `connection scan`. This is only available for connectors that support input from or output to multiple streams.

* Export the definition of resources already in your account using `query`:

  ```bash
  decodable query --export > resources.yaml
  ```

A complete example of a YAML file with resource definitions for a Decodable pipeline to move data between two Apache Kafka topics can be found below.

### Step 2: Set secret plaintext values

If the YAML contains secrets, make sure that any required plaintext values are written to the referenced environment variables or files. The next step will fail if you forget this, with an error message that names the environment variable or file to create. See "Secret" for more information.
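For example, a secret can take its plaintext from an environment variable via `value_env_var`, as the Kafka example later on this page does. A minimal sketch (the secret and variable names here are illustrative):

```yaml
---
kind: secret
metadata:
  name: my-api-key
  description: Example secret, with plaintext sourced from an environment variable
spec_version: v1
spec:
  value_env_var: MY_API_KEY   # apply reads the plaintext from this variable
```

```bash
# Set the plaintext before running apply; otherwise apply reports an error
# naming the missing variable.
export MY_API_KEY="<plaintext-value>"
```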
### Step 3: Apply the resource definitions

Call `apply` with the names of one or more files holding the resource definitions:

```bash
decodable apply resources.yaml
```

The `apply` command accepts multiple input files. This can be especially useful with shell globbing (with `*`). For example, `decodable apply resources/*.yaml` applies all resources in all YAML files in the `resources` directory.

If you specify `-` in place of an input file name, `apply` will read from stdin. This can be useful when used with `decodable query` or other shell commands.
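For instance, combining the `query --export` command from Step 1 with `apply` reading from stdin re-applies resources from your account without an intermediate file (a sketch based on the commands shown above):

```bash
# Export resource definitions from the account and pipe them straight into apply
decodable query --export | decodable apply -
```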
#### apply command output

The `apply` command outputs a series of YAML documents, corresponding to the input. Each output document identifies the resource (by `kind`, `name`, and `id`) and shows the apply result for that resource (`created`, `updated`, or `unchanged`). Here is an example of a successful response after applying:

```yaml
---
kind: secret
name: my-secret
id: 3a9ed7ce
result: unchanged
---
kind: connection
name: my-kafka-source-connection
id: a7b322d4
result: created
---
kind: stream
name: kafka-stream
id: b5f8c2de
result: updated
```

Certain Decodable CLI actions (such as activating a resource and setting values for secrets) require a resource ID, so retaining this output is recommended.

#### Optionally preview changes with the --dry-run option

Using the `--dry-run` option with the `apply` command simulates all changes defined in a YAML file without making any actual changes, allowing you to preview potential creations or updates and identify any errors in your resource definitions. For example:

```bash
decodable apply resources.yaml --dry-run
```

The resulting output is the same, except that each output YAML document will also have:

```yaml
dry_run: true
```

### Step 4: Activate connections and pipelines

To start processing data, activate all the connections and pipelines in your resource file, using the `query` command with the activate operation:

```bash
decodable query resources.yaml --operation activate
```

It can take several minutes after activation for some resources to be running. If you want to wait until all these resources are stable (running, since activated), use `query` with the `--stabilize` option:

```bash
decodable query resources.yaml --stabilize
```

When that command finishes, your activated resources will be running and processing data.

## Renaming a resource

1. Get the ID for the resource to be renamed:

   * Using the Decodable CLI `query` command:

     ```bash
     decodable query --keep-ids resources.yaml
     ```

     or

     ```bash
     decodable query --keep-ids --name '<resource-name>'
     ```

   * Using the Decodable CLI non-declaratively:

     ```bash
     decodable <resource-kind> list
     ```

   * Using Decodable Web: find the resource on the Connections, Streams, Pipelines, or Secrets page. Select the ellipsis (…) next to the resource, and select Copy ID.

2. In your YAML, add an `id` field to the `metadata` section:

   ```yaml
   ---
   kind: <resource_type>
   metadata:
     id: <existing_resource_id>
     name: <updated_resource_name>
     description: <resource_description>
   spec_version: v1
   spec: <resource_specifications>
   ```

3. Apply the changes:

   ```bash
   decodable apply resources.yaml
   ```

Once applied, make sure to remove the `metadata.id` field from your YAML file. The new name will now identify the resource.
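For example, to rename the stream from the apply output shown earlier from `kafka-stream` to `kafka-events`, a sketch (the new name is illustrative, and the ID is the one from the example output; yours will differ):

```yaml
---
kind: stream
metadata:
  id: b5f8c2de          # ID of the existing stream, copied from the apply output
  name: kafka-events    # the new name takes effect on apply
  description: An example Kafka stream
spec_version: v1
spec: <resource_specifications>
```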
## Resource execution

Requires use of the v2 specification.

Connection and pipeline execution can be controlled declaratively, including activation, via the YAML resource definitions used with `apply`. Execution intent can thus be stored in source control.

By default, a resource is inactive. The execution definition for a resource includes whether the resource should be active, its task size and count, and its initial start position. You can modify these at any time, even if the connection or pipeline is running. When you run `apply`, Decodable will start, stop, or restart the resource as needed to reflect the changes.

Setting a resource to active (or inactive) via `apply` works the same way as activating (or deactivating) a resource by other means, such as the UI or the `query` activate (or deactivate) operation. However, `apply` can activate a resource as soon as it's created, automatically, whereas other means require separate steps to create and activate.

Here is an example of defining a pipeline that will be activated once created:

```yaml
---
kind: pipeline
metadata:
  name: example_pipe
  description: Perform example processing
spec_version: v2
spec:
  type: SQL
  sql: |
    INSERT INTO example_out SELECT LOWER(s) FROM example_in
  execution:
    active: true
    task_count: 1
    task_size: M
```

If your resource is defined using the v1 specification, then `apply` will make no change to the execution state of the target resource in Decodable. If you use the v2 specification, then `apply` will use default values if the `spec.execution` section of the resource definition is missing. This includes the default `active: false`, and will therefore deactivate the resource even if it's running.

## Example: A Decodable pipeline to move data between two Apache Kafka topics

Let's look at an example YAML file that creates the Decodable resources required to move data between two Apache Kafka topics. It defines four resources: a secret containing our Apache Kafka password, a stream, and two connections (one to receive data from a Kafka topic, and another to send data to another Kafka topic).

1. Create the required resources. In this step, we'll define a secret for our connection, configure a source and a sink connection, and define a stream to transport data between these connections.

   ```yaml
   ---
   kind: secret
   metadata:
     name: kafka-password
     description: Password for Kafka SASL username
   spec_version: v1
   spec:
     value_env_var: KAFKA_PASSWORD
   ---
   kind: connection
   metadata:
     name: My-Kafka-Connection
     description: A connection to my Kafka topic
   spec_version: v1
   spec:
     connector: kafka                  # The name of the connector.
     type: source                      # The type of connector. Enter source if your connector receives data, or sink if it sends data.
     properties:                       # The properties of the connector that you want to use. Refer to the connector's documentation for property names and their valid values.
       bootstrap.servers: <broker_list>
       topic: <source_topic_name>
       value.format: json
       security.protocol: SASL_SSL
       sasl.mechanism: SCRAM-SHA-256
       sasl.username: <username>
       sasl.password: kafka-password   # The name of the secret defined above.
     stream_name: my-kafka-stream      # The name of the stream that you want this connector to send data to.
     schema_v2:                        # The schema of the connection. This must exactly match the schema of the stream that it's connected to, including any constraints such as watermarks or primary key fields.
       fields:
         - kind: physical
           name: field1
           type: string
         - kind: physical
           name: field2
           type: string
         - kind: physical
           name: field3
           type: string
   ---
   kind: stream
   metadata:
     name: my-kafka-stream
     description: An example Kafka stream
   spec_version: v1
   spec:
     schema_v2:
       fields:
         - kind: physical
           name: field1
           type: string
         - kind: physical
           name: field2
           type: string
         - kind: physical
           name: field3
           type: string
   ---
   kind: connection
   metadata:
     name: Sink-Kafka-Connection
     description: A connection to my Kafka topic
   spec_version: v1
   spec:
     connector: kafka
     type: sink
     properties:
       bootstrap.servers: <broker_list>
       topic: <sink_topic_name>
       value.format: json
       security.protocol: SASL_SSL
       sasl.mechanism: SCRAM-SHA-256
       sasl.username: <username>
       sasl.password: kafka-password   # The name of the secret defined above.
     stream_name: my-kafka-stream      # The name of the stream that you want this connector to receive data from.
     schema_v2:
       fields:
         - kind: physical
           name: field1
           type: string
         - kind: physical
           name: field2
           type: string
         - kind: physical
           name: field3
           type: string
   ```

2. Set an environment variable with the password:

   ```bash
   export KAFKA_PASSWORD="<your-password>"
   ```

   See "Secret" for more information.

3. Apply the YAML definitions to create the resources:

   ```bash
   decodable apply resources.yaml
   ```

   A response containing the resources that were created or modified is shown.

4. Connections and pipelines aren't activated by default. Activate them to start processing data.

   * Using Decodable Web: depending on the kind of resource that you want to activate, navigate to the Connections or Pipelines page. Select the resource that you defined earlier. From the resource's Overview page, select Start.

   * Using the Decodable CLI:

     * Connections: `decodable connection activate <connection_id>`
     * Pipelines: `decodable pipeline activate <pipeline_id>`
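Since activation can also be driven declaratively, the whole example can be scripted end to end using the commands covered earlier on this page (a sketch, reusing the same file name throughout):

```bash
export KAFKA_PASSWORD="<your-password>"               # plaintext for the kafka-password secret
decodable apply resources.yaml                        # create or update all four resources
decodable query resources.yaml --operation activate   # activate both connections
decodable query resources.yaml --stabilize            # wait until everything is running
```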