Postgres sink connector

Features

| Feature | Details |
|---|---|
| Connector name | postgres |
| Delivery guarantee | At least once |
| Supported task sizes | S, M, L |
| Multiplex capability | A single instance of this connector can write to multiple tables in a single schema and database. |

Supported stream types

This connector works with all Postgres-compatible databases. It writes to Postgres tables over JDBC.

There are many Postgres-compatible databases, but some of the main ones include:

  • Amazon Aurora

  • Amazon RDS

  • Azure CosmosDB for PostgreSQL

  • Azure Database for PostgreSQL

  • CockroachDB

  • Google AlloyDB

  • Google Cloud SQL

  • Google Spanner

  • Neon

  • Tembo

  • Timescale

  • YugabyteDB

Configuration properties

| Property | Description | Required | Default |
|---|---|---|---|
| hostname | The IP address or host name of the Postgres database server. | Yes | — |
| port | The port number of the Postgres database server. | No | 5432 |
| database-name | The name of the Postgres database. | Yes | — |
| schema-name | The schema containing your database table. | No | public |
| username | The username to use when connecting to the Postgres database. | Yes | — |
| password | The secret containing the password credentials. This must be provided as a secret resource. | Yes | — |
| properties.jdbc.options | Any additional JDBC options that you want this connection to use. See Connection Parameters in the JDBC documentation for a full list of available JDBC options. | No | — |
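
If you manage connections declaratively with the CLI, the properties above appear in the connection definition. The following is a hypothetical sketch only: the resource names and values are illustrative, and the exact YAML schema may differ, so generate the real definition with decodable connection scan rather than writing it by hand.

```yaml
# Hypothetical sketch of a declarative Postgres sink connection.
# Property names follow the configuration table above; the exact
# resource schema may differ from what is shown here.
kind: connection
metadata:
  name: postgres-sink-example
spec:
  connector: postgres
  type: sink
  properties:
    hostname: db.example.com        # illustrative host
    port: 5432
    database-name: inventory        # illustrative database
    schema-name: public
    username: decodable_user
    password: my-postgres-secret    # name of a secret resource
```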

Prerequisites

Table names

By default, Decodable uses the stream name as the name of the table it writes to. If a table already exists with that name and the schema of the stream matches the schema of the table, Decodable will write to the existing table. If it doesn’t exist, Decodable will create it.

You can change the name of the table that Decodable writes to either in the web interface, or by using the output-resource-name-template option when calling decodable connection scan.

The schema of each stream is automatically translated to Postgres, including:

  • field names

  • data types (see the Data types mapping section below for how Decodable types map to Postgres types)

  • primary keys
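
As a sketch of that translation, consider a hypothetical stream named my_stream with fields id (an INT primary key), name (a STRING), and updated_at (a TIMESTAMP(3)). Field and table names here are illustrative, and the actual DDL is generated by Decodable, but based on the data type mapping the resulting table would look roughly like this:

```sql
-- Hypothetical sketch of the table Decodable might create for a
-- stream named my_stream in the public schema.
CREATE TABLE public.my_stream (
    id         INT NOT NULL,    -- Decodable INT maps to INT
    name       TEXT,            -- Decodable STRING maps to TEXT
    updated_at TIMESTAMP(3),    -- TIMESTAMP(p) keeps its precision
    PRIMARY KEY (id)            -- stream primary keys carry over
);
```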

Writing data to multiple tables

A single instance of this connector can write to multiple tables in a single schema and database.

If you are using the CLI to create or edit a connection with this connector, you must use the declarative approach. You can generate the connection definition for the tables that you want to write to by using decodable connection scan.

Resource specifier keys

When using the decodable connection scan command of the Decodable CLI to create a connection specification, the following resource specifier keys are available:

| Name | Description |
|---|---|
| table-name | The table name |

Connector starting state and offsets

A new sink connection starts reading from the Latest point in the source Decodable stream. This means that only data written to the stream after the connection has started will be sent to the external system. You can override this when you start the connection, choosing Earliest if you want to send all the existing data on the source stream to the target system, along with all new data that arrives on the stream.

When you restart a sink connection, it continues reading from the most recent checkpoint taken before the connection stopped. You can also opt to discard the connection's state and restart it afresh from Earliest or Latest as described above.

Learn more about starting state here.

Data types mapping

The following table describes the mapping of Decodable data types to their Postgres data type counterparts.

| Decodable type | Postgres type |
|---|---|
| CHAR(n) | CHAR(n) |
| VARCHAR(n) | VARCHAR(n) |
| STRING | TEXT |
| BOOLEAN | BOOLEAN |
| DECIMAL(p, s) | DECIMAL(p, s) |
| SMALLINT | SMALLINT |
| INT / INTEGER | INT |
| BIGINT | BIGINT |
| FLOAT | FLOAT |
| DOUBLE [PRECISION] | DOUBLE PRECISION |
| DATE | DATE |
| TIME(p) | TIME(p) |
| TIMESTAMP(p) [WITHOUT TIME ZONE] | TIMESTAMP(p) |
| TIMESTAMP(p) WITH LOCAL TIME ZONE | TIMESTAMP(p) |