# Tips and tricks

The following are tips and tricks for the Decodable CLI.

## Abbreviations and aliases

While the CLI has descriptive names for commands such as `connection` and `pipeline`, typing them out in full can become tedious once you're accustomed to working with Decodable. Most commands support abbreviations and even aliases to make your life a little easier. All of these are listed in the help output of each command. For example:

```shell
decodable connection -h

# Output:
# Manage connections
#
# Usage:
#   decodable connection [command]
#
# Aliases:
#   connection, conn
#
# ...
```

Note that `conn` is an alias for `connection`. Here are a few others:

| Command or subcommand | Aliases |
|-----------------------|---------|
| `config`              | `conf`  |
| `connection`          | `conn`  |
| `pipeline`            | `pl`    |
| `list`                | `ls`    |
| `create`              | `new`   |
| `delete`              | `rm`    |

## Using standard in/out

A few commands - especially those that deal with SQL statements and other long strings - support reading from standard in and/or writing to standard out. This makes it easy to script complex operations in a "Unix-like" way.

For example, the `pipeline` command uses standard in. While you can provide the SQL directly on the command line,

```shell
# Providing SQL on the command line to `create`:
decodable pipeline create "INSERT INTO ... SELECT ..."
```

you can also use the literal string `-` (a single dash) to tell the CLI to read from standard in instead.

```shell
# A single dash tells the CLI to read from stdin.
cat somefile.sql | decodable pipeline create - --name my-pipeline
```

The `config setup` command is another example. Instead of writing the generated configuration to its default location, use `-o -` to write it to stdout instead!

```shell
decodable config setup my-account -o -

# Output:
# # Decodable configuration file.
# #
# # Generated by esammer, 2021-08-07T18:27:18-07:00
#
# version: 1.0.0
# active-profile: default
#
# profiles:
#   default:
#     account: my-account
```

## Metrics

Want to know how your job is doing?
Pipelines and connections report throughput metrics telling you how many records and bytes they're currently processing, as well as how many they have processed in total since activation.

```shell
decodable pipeline get <pipeline id>

# Output:
#
# parse_envoy_logs
#   id                    <pipeline id>
#   type                  SQL
#   version               1
#   active version        1
#   latest version        1
#   is latest             true
#   target state          RUNNING
#   actual state          RUNNING
#   requested tasks       1
#   actual tasks          1
#   requested task size   M
#   actual task size      M
#   description           Parse and structure Envoy logs for analysis
#   create time           2025-08-13T13:08:50Z
#   update time           2025-08-13T13:08:50Z
#   last activated time   2025-08-18T06:54:18Z
#   last runtime error    -
#   input metrics
#     envoy_raw           2.0 records / 426.1 B per second | 321.0 records / 66.7 KB total
#   output metrics
#     http_events         2.0 records / 573.4 B per second | 321.0 records / 65.8 KB total
#   additional metrics
#     computed_completed_checkpoint_rate   6.000
#     computed_failed_checkpoint_rate      0
#     computed_last_checkpoint_full_size   1201
#     computed_last_checkpoint_size        1201
#   properties            -
#   scheduled snapshots
#     enabled             false
#
# [SQL_QUERY_HERE]
```

For pipelines, metrics are available for each input and output stream. For connections, a single set of metrics describes throughput to the associated stream.
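Because the CLI writes plain text to stdout, its output composes with standard Unix tools just like its stdin support does. Below is a minimal sketch that pulls the per-second record rate for each stream out of a pipeline status report. The report is saved to a hypothetical `pipeline-status.txt` file here (copied from the sample output above) so the snippet is self-contained; in practice you would pipe `decodable pipeline get <pipeline id>` straight into `awk`, and the field positions are an assumption based on the sample layout.

```shell
#!/bin/sh
# Stand-in for `decodable pipeline get <pipeline id> > pipeline-status.txt`,
# using the metrics lines from the sample report above.
cat > pipeline-status.txt <<'EOF'
input metrics
  envoy_raw      2.0 records / 426.1 B per second | 321.0 records / 66.7 KB total
output metrics
  http_events    2.0 records / 573.4 B per second | 321.0 records / 65.8 KB total
EOF

# Print "<stream> <records per second>" for every throughput line.
awk '/records \/ .* per second/ {print $1, $2}' pipeline-status.txt

# Prints:
#   envoy_raw 2.0
#   http_events 2.0
```

The same pattern works for any of the CLI's tabular output: filter on a distinctive substring, then pick the columns you need.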