
Exporting CloudEvents

CloudEvents is a specification for describing event data in a common way. Its aim is to provide interoperability across services, platforms and systems. Debezium enables you to configure a Db2, Informix, MongoDB, MySQL, Oracle, PostgreSQL, or SQL Server connector to emit change event records that conform to the CloudEvents specification.

Support for CloudEvents is in an incubating state. This means that exact semantics, configuration options, and other details may change in future revisions based on feedback. Please let us know your specific requirements or if you encounter any problems while using this feature.

The CloudEvents specification defines:

  • A set of standardized event attributes

  • Rules for defining custom attributes

  • Encoding rules for mapping event formats to serialized representations such as JSON or Apache Avro

  • Protocol bindings for transport layers such as Apache Kafka, HTTP or AMQP

To configure a Debezium connector to emit change event records that conform to the CloudEvents specification, Debezium provides the io.debezium.converters.CloudEventsConverter, which is a Kafka Connect message converter.

Currently, only structured mapping mode can be used. The CloudEvents change event envelope can be JSON or Avro, and you can use JSON or Avro as the data format for each envelope type. It is expected that a future Debezium release will support binary mapping mode.

For information about using Avro with Debezium, see the Avro serialization documentation.

Example event format

The following example shows what a CloudEvents change event record emitted by a PostgreSQL connector looks like. In this example, the PostgreSQL connector is configured to use JSON as the CloudEvents envelope format and also as the data format.

{
  "id" : "name:test_server;lsn:29274832;txId:565",   (1)
  "source" : "/debezium/postgresql/test_server",     (2)
  "specversion" : "1.0",                             (3)
  "type" : "io.debezium.postgresql.datachangeevent", (4)
  "time" : "2020-01-13T13:55:39.738Z",               (5)
  "datacontenttype" : "application/json",            (6)
  "iodebeziumop" : "r",                              (7)
  "iodebeziumversion" : "2.5.4.Final",        (8)
  "iodebeziumconnector" : "postgresql",
  "iodebeziumname" : "test_server",
  "iodebeziumtsms" : "1578923739738",
  "iodebeziumsnapshot" : "true",
  "iodebeziumdb" : "postgres",
  "iodebeziumschema" : "s1",
  "iodebeziumtable" : "a",
  "iodebeziumlsn" : "29274832",
  "iodebeziumxmin" : null,
  "iodebeziumtxid": "565",                           (9)
  "iodebeziumtxtotalorder": "1",
  "iodebeziumtxdatacollectionorder": "1",
  "data" : {                                         (10)
    "before" : null,
    "after" : {
      "pk" : 1,
      "name" : "Bob"
    }
  }
}
Table 1. Descriptions of fields in a CloudEvents change event record
1. Unique ID that the connector generates for the change event based on the change event’s content.

2. The source of the event, which is the logical name of the database as specified by the topic.prefix property in the connector’s configuration.

3. The CloudEvents specification version.

4. Connector type that generated the change event. The format of this field is io.debezium.CONNECTOR_TYPE.datachangeevent. Valid values for CONNECTOR_TYPE are db2, informix, mongodb, mysql, oracle, postgresql, or sqlserver.

5. Time of the change in the source database.

6. Describes the content type of the data attribute. Possible values are application/json, as in this example, or application/avro.

7. An operation identifier. Possible values are r for read, c for create, u for update, or d for delete.

8. All source attributes that are known from Debezium change events are mapped to CloudEvents extension attributes by using the iodebezium prefix for the attribute name.

9. When transaction metadata is enabled in the connector, each transaction attribute that is known from Debezium change events is mapped to a CloudEvents extension attribute by using the iodebeziumtx prefix for the attribute name. A configuration sketch for enabling transaction metadata follows this table.

10. The actual data change. Depending on the operation and the connector, the data might contain before, after, or patch fields.
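
The transaction extension attributes shown in the example, such as iodebeziumtxid and iodebeziumtxtotalorder, are present only when the connector is configured to emit transaction metadata. The following minimal sketch, which assumes a PostgreSQL connector, shows the relevant connector option alongside the CloudEvents converter settings:

...
"connector.class": "io.debezium.connector.postgresql.PostgresConnector",
"provide.transaction.metadata": true,
"value.converter": "io.debezium.converters.CloudEventsConverter",
"value.converter.serializer.type": "json",
"value.converter.data.serializer.type": "json"
...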

The following example shows another CloudEvents change event record emitted by a PostgreSQL connector. In this example, the connector is again configured to use JSON as the CloudEvents envelope format, but this time it is configured to use Avro for the data format.

{
  "id" : "name:test_server;lsn:33227720;txId:578",
  "source" : "/debezium/postgresql/test_server",
  "specversion" : "1.0",
  "type" : "io.debezium.postgresql.datachangeevent",
  "time" : "2020-01-13T14:04:18.597Z",
  "datacontenttype" : "application/avro",            (1)
  "dataschema" : "http://my-registry/schemas/ids/1", (2)
  "iodebeziumop" : "r",
  "iodebeziumversion" : "2.5.4.Final",
  "iodebeziumconnector" : "postgresql",
  "iodebeziumname" : "test_server",
  "iodebeziumtsms" : "1578924258597",
  "iodebeziumsnapshot" : "true",
  "iodebeziumdb" : "postgres",
  "iodebeziumschema" : "s1",
  "iodebeziumtable" : "a",
  "iodebeziumtxId" : "578",
  "iodebeziumlsn" : "33227720",
  "iodebeziumxmin" : null,
  "iodebeziumtxid": "578",
  "iodebeziumtxtotalorder": "1",
  "iodebeziumtxdatacollectionorder": "1",
  "data" : "AAAAAAEAAgICAg=="                        (3)
}
Table 2. Descriptions of fields in a CloudEvents event record for a connector that uses Avro to format data
1. Indicates that the data attribute contains Avro binary data.

2. URI of the schema to which the Avro data adheres.

3. The data attribute contains base64-encoded Avro binary data.

It is also possible to use Avro for the envelope as well as the data attribute.
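
For example, a configuration along the following lines selects Avro for both the envelope and the data attribute; the registry URL is a placeholder that you would replace with your own:

...
"value.converter": "io.debezium.converters.CloudEventsConverter",
"value.converter.serializer.type": "avro",
"value.converter.data.serializer.type": "avro",
"value.converter.avro.schema.registry.url": "http://my-registry"
...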

Example configuration

Configure io.debezium.converters.CloudEventsConverter in your Debezium connector configuration. The following example shows how to configure the CloudEvents converter to emit change event records that have the following characteristics:

  • Use JSON as the envelope.

  • Use the schema registry at http://my-registry/schemas/ids/1 to serialize the data attribute as binary Avro data.

...
"value.converter": "io.debezium.converters.CloudEventsConverter",
"value.converter.serializer.type" : "json",          (1)
"value.converter.data.serializer.type" : "avro",
"value.converter.avro.schema.registry.url": "http://my-registry/schemas/ids/1"
...
Table 3. Description of fields in CloudEvents converter configuration
1. Specifying the serializer.type is optional, because json is the default.

The CloudEvents converter converts Kafka record values. In the same connector configuration, you can specify key.converter if you want to operate on record keys. For example, you might specify StringConverter, LongConverter, JsonConverter, or AvroConverter.
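
For example, the following sketch keeps record keys as schemaless JSON while the record values are emitted as CloudEvents; the key converter settings are only an illustration and should match the key format that you actually use:

...
"key.converter": "org.apache.kafka.connect.json.JsonConverter",
"key.converter.schemas.enable": false,
"value.converter": "io.debezium.converters.CloudEventsConverter",
"value.converter.serializer.type": "json",
"value.converter.data.serializer.type": "json"
...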

Configuration of sources of metadata and some CloudEvents fields

By default, the metadata.source property consists of three parts, as seen in the following example:

"value,id:generate,type:generate"

The first part specifies the source for retrieving a record’s metadata; the permitted values are value and header. The next parts specify how to obtain the id and type fields of a CloudEvent; the permitted values are generate and header.

Obtaining record metadata

To construct a CloudEvent, the converter requires source, operation, and transaction metadata. Generally, the converter can retrieve the metadata from a record’s value. But in some cases, before the converter receives a record, the record might be processed in such a way that metadata is not present in its value, for example, after the record is processed by the Outbox Event Router SMT. To preserve the required metadata, you can use the following approach to pass the metadata in the record headers.

Procedure
  1. Implement a mechanism for recording the metadata in the record’s headers before the record reaches the converter, for example, by using the HeaderFrom SMT.

  2. Set the value of the converter’s metadata.source property to header.

The following example shows the configuration for a connector that uses the Outbox Event Router SMT, and the HeaderFrom SMT:

...
"tombstones.on.delete": false,
"transforms": "addMetadataHeaders,outbox",
"transforms.addMetadataHeaders.type": "org.apache.kafka.connect.transforms.HeaderFrom$Value",
"transforms.addMetadataHeaders.fields": "source,op,transaction",
"transforms.addMetadataHeaders.headers": "source,op,transaction",
"transforms.addMetadataHeaders.operation": "copy",
"transforms.addMetadataHeaders.predicate": "isHeartbeat",
"transforms.addMetadataHeaders.negate": true,
"transforms.outbox.type": "io.debezium.transforms.outbox.EventRouter",
"transforms.outbox.table.expand.json.payload": true,
"transforms.outbox.table.fields.additional.placement": "type:header",
"predicates": "isHeartbeat",
"predicates.isHeartbeat.type": "org.apache.kafka.connect.transforms.predicates.TopicNameMatches",
"predicates.isHeartbeat.pattern": "__debezium-heartbeat.*",
"value.converter": "io.debezium.converters.CloudEventsConverter",
"value.converter.metadata.source": "header",
"header.converter": "org.apache.kafka.connect.json.JsonConverter",
"header.converter.schemas.enable": true
...
To use the HeaderFrom transformation, it might be necessary to filter tombstone and heartbeat messages. The preceding example does this by disabling tombstones (tombstones.on.delete) and by using the negated isHeartbeat predicate to skip heartbeat records.

The header value of the metadata.source property is a global setting. As a result, if you set the global part of the property to header and omit the id and type parts, the converter obtains the id and type values from headers as well.

Obtaining id and type of a CloudEvent

By default, the CloudEvents converter automatically generates values for id and type fields of a CloudEvent. You can customize the way that the converter populates these fields by changing the defaults and specifying the fields' values in the appropriate headers. For example:

"value.converter.metadata.source": "value,id:header,type:header"

With the preceding configuration in effect, you could configure upstream functions to add id and type headers with the values that you want to pass to the CloudEvents converter.
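
For example, one way to populate those headers, sketched here with the hypothetical value fields event_id and event_type, is to copy fields from the record value into the id and type headers with the HeaderFrom SMT before the record reaches the converter:

...
"transforms": "addIdAndTypeHeaders",
"transforms.addIdAndTypeHeaders.type": "org.apache.kafka.connect.transforms.HeaderFrom$Value",
"transforms.addIdAndTypeHeaders.fields": "event_id,event_type",
"transforms.addIdAndTypeHeaders.headers": "id,type",
"transforms.addIdAndTypeHeaders.operation": "copy",
"value.converter": "io.debezium.converters.CloudEventsConverter",
"value.converter.metadata.source": "value,id:header,type:header"
...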

If you want to provide a value only for the id header, use:

"value.converter.metadata.source": "value,id:header,type:generate"

To provide metadata, id, and type in headers, use the short syntax:

"value.converter.metadata.source": "header"

Configuration options

When you configure a Debezium connector to use the CloudEvents converter, you can specify the following options.

Table 4. Descriptions of CloudEvents converter configuration options

serializer.type
Default: json
The encoding type to use for the CloudEvents envelope structure. The value can be json or avro.

data.serializer.type
Default: json
The encoding type to use for the data attribute. The value can be json or avro.

json. ...
Default: N/A
Any configuration options to be passed through to the underlying converter when using JSON. The json. prefix is removed.

avro. ...
Default: N/A
Any configuration options to be passed through to the underlying converter when using Avro. The avro. prefix is removed. For example, for Avro data, you would specify the avro.schema.registry.url option.

schema.name.adjustment.mode
Default: none
Specifies how schema names should be adjusted for compatibility with the message converter used by the connector. The value can be none or avro.

schema.cloudevents.name
Default: none
Specifies the CloudEvents schema name under which the schema is registered in a Schema Registry. The setting is ignored when serializer.type is json, in which case the value is schemaless. If not set, the default algorithm is used to generate the schema name: ${serverName}.${databaseName}.CloudEvents.Envelope.

extension.attributes.enable
Default: true
Specifies whether the converter includes extension attributes when it generates a CloudEvent. The value can be true or false.

metadata.source
Default: value,id:generate,type:generate
A comma-separated list that specifies the sources from which the converter retrieves metadata (source, operation, transaction), along with the sources of the CloudEvent id and type fields. The first element in the list is a global setting that specifies the source of the metadata; permitted values are value and header. This first element is followed by a set of pairs that specify the name of a CloudEvent field (id or type) and the source for obtaining the field’s value: generate or header. Separate the values in each pair with a colon, for example: value,id:header,type:generate
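
As a closing sketch that combines several of these options, the following configuration emits both the envelope and the data attribute as Avro, adjusts schema names for Avro compatibility, and omits the extension attributes; the registry URL is a placeholder, and the chosen values are only an illustration:

...
"value.converter": "io.debezium.converters.CloudEventsConverter",
"value.converter.serializer.type": "avro",
"value.converter.data.serializer.type": "avro",
"value.converter.avro.schema.registry.url": "http://my-registry",
"value.converter.schema.name.adjustment.mode": "avro",
"value.converter.extension.attributes.enable": false
...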