
Debezium Connector for Oracle

Overview

Debezium ingests change events from Oracle using the native LogMiner database package or the XStream API. While the connector may work with a variety of Oracle versions and editions, only Oracle EE 12 and 19 have been tested.

How the Oracle Connector Works

To optimally configure and run a Debezium Oracle connector, it is helpful to understand how the connector performs snapshots, streams change events, determines Kafka topic names, and uses metadata.

Snapshots

Most Oracle servers are configured to not retain the complete history of the database in the redo logs, so the Debezium Oracle connector would be unable to see the entire history of the database by simply reading the logs. Consequently, the first time the connector starts, it performs an initial consistent snapshot of the database. The default behavior for performing a snapshot consists of the following steps. You can change this behavior by setting the snapshot.mode connector configuration property to a value other than initial.

  1. Determine the tables to be captured

  2. Obtain a ROW SHARE MODE lock on each of the monitored tables to ensure that no structural changes can occur to any of the tables.

  3. Read the current SCN ("system change number") position in the server’s redo log.

  4. Capture the structure of all relevant tables.

  5. Release the locks obtained in step 2, i.e. the locks are held only for a short period of time.

  6. Scan all of the relevant database tables and schemas as valid at the SCN position read in step 3 (SELECT * FROM … AS OF SCN 123), generate a READ event for each row, and write that event to the appropriate table-specific Kafka topic.

  7. Record the successful completion of the snapshot in the connector offsets.

If the connector fails, is rebalanced, or stops after step 1 begins but before step 7 completes, upon restart the connector will begin a new snapshot. After the connector completes its initial snapshot, the Debezium connector continues streaming from the position that it read in step 3. This ensures that the connector does not miss any updates. If the connector stops again for any reason, upon restart, the connector continues streaming changes from where it previously left off.

Table 1. Settings for snapshot.mode connector configuration property
Setting Description

initial

The connector performs a database snapshot after which it will transition to streaming changes.

schema_only

The connector captures the structure of all relevant tables, performing all the steps described above, except it does not create any READ events representing the dataset at the point of the connector’s start-up.
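
For example, to capture only changes that occur after the connector starts, without producing READ events for the existing dataset, you could set the property as follows. This is a minimal sketch that shows only this one property; the remaining required connection properties are omitted.

snapshot.mode=schema_only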

Topic Names

Schema Change Topic

The Debezium Oracle connector stores the history of schema changes in a database history topic. This topic reflects an internal connector state and you should not use it directly. Applications that require notifications about schema changes should obtain the information from the public schema change topic. The connector writes all of these events to a Kafka topic named <serverName>, where serverName is the logical server name that is specified in the database.server.name configuration property.

The schema change topic message format is in an incubating state and may change without notice.

Debezium emits a new message to this topic whenever a new table is streamed from or when the structure of the table is altered. The message contains a logical representation of the table schema.

The following example shows a typical message:

{
  "schema": {
  ...
  },
  "payload": {
    "source": {
      "version": "1.6.0.Final",
      "connector": "oracle",
      "name": "server1",
      "ts_ms": 1588252618953,
      "snapshot": "true",
      "db": "ORCLPDB1",
      "schema": "DEBEZIUM",
      "table": "CUSTOMERS",
      "txId" : null,
      "scn" : "1513734",
      "commit_scn": "1513734",
      "lcr_position" : null
    },
    "databaseName": "ORCLPDB1", (1)
    "schemaName": "DEBEZIUM", (1)
    "ddl": "CREATE TABLE \"DEBEZIUM\".\"CUSTOMERS\" \n   (    \"ID\" NUMBER(9,0) NOT NULL ENABLE, \n    \"FIRST_NAME\" VARCHAR2(255), \n    \"LAST_NAME" VARCHAR2(255), \n    \"EMAIL\" VARCHAR2(255), \n     PRIMARY KEY (\"ID\") ENABLE, \n     SUPPLEMENTAL LOG DATA (ALL) COLUMNS\n   ) SEGMENT CREATION IMMEDIATE \n  PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 \n NOCOMPRESS LOGGING\n  STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645\n  PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1\n  BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)\n  TABLESPACE \"USERS\" ", (2)
    "tableChanges": [ (3)
      {
        "type": "CREATE", (4)
        "id": "\"ORCLPDB1\".\"DEBEZIUM\".\"CUSTOMERS\"", (5)
        "table": { (6)
          "defaultCharsetName": null,
          "primaryKeyColumnNames": [ (7)
            "ID"
          ],
          "columns": [ (8)
            {
              "name": "ID",
              "jdbcType": 2,
              "nativeType": null,
              "typeName": "NUMBER",
              "typeExpression": "NUMBER",
              "charsetName": null,
              "length": 9,
              "scale": 0,
              "position": 1,
              "optional": false,
              "autoIncremented": false,
              "generated": false
            },
            {
              "name": "FIRST_NAME",
              "jdbcType": 12,
              "nativeType": null,
              "typeName": "VARCHAR2",
              "typeExpression": "VARCHAR2",
              "charsetName": null,
              "length": 255,
              "scale": null,
              "position": 2,
              "optional": false,
              "autoIncremented": false,
              "generated": false
            },
            {
              "name": "LAST_NAME",
              "jdbcType": 12,
              "nativeType": null,
              "typeName": "VARCHAR2",
              "typeExpression": "VARCHAR2",
              "charsetName": null,
              "length": 255,
              "scale": null,
              "position": 3,
              "optional": false,
              "autoIncremented": false,
              "generated": false
            },
            {
              "name": "EMAIL",
              "jdbcType": 12,
              "nativeType": null,
              "typeName": "VARCHAR2",
              "typeExpression": "VARCHAR2",
              "charsetName": null,
              "length": 255,
              "scale": null,
              "position": 4,
              "optional": false,
              "autoIncremented": false,
              "generated": false
            }
          ]
        }
      }
    ]
  }
}
Table 2. Descriptions of fields in messages emitted to the schema change topic
Item Field name Description

1

databaseName
schemaName

Identifies the database and the schema that contain the change.

2

ddl

This field contains the DDL responsible for the schema change.

3

tableChanges

An array of one or more items that contain the schema changes generated by a DDL command.

4

type

Describes the kind of change. The value is one of the following:

  • CREATE - table created

  • ALTER - table modified

  • DROP - table deleted

5

id

Full identifier of the table that was created, altered, or dropped. In the case of a table rename, this will be a concatenation of <old>,<new> table names.

6

table

Represents table metadata after the applied change.

7

primaryKeyColumnNames

List of columns that compose the table’s primary key.

8

columns

Metadata for each column in the changed table.

In messages that the connector sends to the schema change topic, the key is the name of the database that contains the schema change. In the following example, the payload field contains the key:

{
  "schema": {
    "type": "struct",
    "fields": [
      {
        "type": "string",
        "optional": false,
        "field": "databaseName"
      }
    ],
    "optional": false,
    "name": "io.debezium.connector.oracle.SchemaChangeKey"
  },
  "payload": {
    "databaseName": "ORCLPDB1"
  }
}

Transaction Metadata

Debezium can generate events that represent transaction metadata boundaries and that enrich data change messages.

Transaction boundaries

Debezium generates events for every transaction BEGIN and END. Every event contains the following fields:

  • status - BEGIN or END

  • id - string representation of unique transaction identifier

  • event_count (for END events) - total number of events emitted by the transaction

  • data_collections (for END events) - an array of pairs of data_collection and event_count that provides the number of events emitted by changes originating from the given data collection

Following is an example of what a message looks like:

{
  "status": "BEGIN",
  "id": "5.6.641",
  "event_count": null,
  "data_collections": null
}

{
  "status": "END",
  "id": "5.6.641",
  "event_count": 2,
  "data_collections": [
    {
      "data_collection": "ORCLPDB1.DEBEZIUM.CUSTOMER",
      "event_count": 1
    },
    {
      "data_collection": "ORCLPDB1.DEBEZIUM.ORDER",
      "event_count": 1
    }
  ]
}

The transaction events are written to the topic named <database.server.name>.transaction.
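
Transaction metadata events are not emitted unless the connector is configured to produce them. The following minimal sketch assumes the common Debezium provide.transaction.metadata option; treat the property name as an assumption, because it is not listed in the property excerpt later in this chapter.

provide.transaction.metadata=true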

Data events enrichment

When transaction metadata is enabled, the data message Envelope is enriched with a new transaction field. This field provides information about every event in the form of a composite of fields:

  • id - string representation of unique transaction identifier

  • total_order - the absolute position of the event among all events generated by the transaction

  • data_collection_order - the per-data collection position of the event among all events that were emitted by the transaction

Following is an example of what a message looks like:

{
  "before": null,
  "after": {
    "pk": "2",
    "aa": "1"
  },
  "source": {
...
  },
  "op": "c",
  "ts_ms": "1580390884335",
  "transaction": {
    "id": "5.6.641",
    "total_order": "1",
    "data_collection_order": "1"
  }
}

Data change events

All data change events produced by the Oracle connector have a key and a value, although the structure of the key and value depend on the table from which the change events originated (see Topic names).

The Debezium Oracle connector ensures that all Kafka Connect schema names are valid Avro schema names. This means that the logical server name must start with a Latin letter or an underscore ([a-zA-Z_]), and the remaining characters in the logical server name and all characters in the schema and table names must be Latin letters, digits, or underscores ([a-zA-Z0-9_]). Any invalid characters are automatically replaced with an underscore character.

This can lead to unexpected conflicts when the logical server name, schema names, and table names contain other characters, and the only distinguishing characters between table full names are invalid and thus replaced with underscores.

Debezium and Kafka Connect are designed around continuous streams of event messages, and the structure of these events may change over time. This could be difficult for consumers to deal with, so to make it easy Kafka Connect makes each event self-contained. Every message key and value has two parts: a schema and payload. The schema describes the structure of the payload, while the payload contains the actual data.

Any change that is performed by the SYS or SYSTEM user accounts will not be captured by the connector.

Change event keys

For a given table, the change event’s key will have a structure that contains a field for each column in the primary key (or unique key constraint) of the table at the time the event was created.

Consider a customers table defined in the inventory database schema:

CREATE TABLE customers (
  id NUMBER(9) GENERATED BY DEFAULT ON NULL AS IDENTITY (START WITH 1001) NOT NULL PRIMARY KEY,
  first_name VARCHAR2(255) NOT NULL,
  last_name VARCHAR2(255) NOT NULL,
  email VARCHAR2(255) NOT NULL UNIQUE
);

If the database.server.name configuration property has the value server1, every change event for the customers table while it has this definition will feature the same key structure, which in JSON looks like this:

{
    "schema": {
        "type": "struct",
        "fields": [
            {
                "type": "int32",
                "optional": false,
                "field": "ID"
            }
        ],
        "optional": false,
        "name": "server1.INVENTORY.CUSTOMERS.Key"
    },
    "payload": {
        "ID": 1004
    }
}

The schema portion of the key contains a Kafka Connect schema describing what is in the key portion, and in our case that means that the payload value is not optional, is a structure defined by a schema named server1.INVENTORY.CUSTOMERS.Key, and has one required field named ID of type int32. If you look at the value of the key’s payload field, you can see that it is indeed a structure (which in JSON is just an object) with a single ID field, whose value is 1004.

Therefore, you can interpret this key as describing the row in the inventory.customers table (output from the connector named server1) whose ID primary key column had a value of 1004.

Change event values

Like the message key, the value of a change event message has a schema section and payload section. The payload section of every change event value produced by the Oracle connector has an envelope structure with the following fields:

  • op is a mandatory field that contains a string value describing the type of operation. Values for the Oracle connector are c for create (or insert), u for update, d for delete, and r for read (in the case of a snapshot).

  • before is an optional field that if present contains the state of the row before the event occurred. The structure will be described by the server1.INVENTORY.CUSTOMERS.Value Kafka Connect schema, which the server1 connector uses for all rows in the inventory.customers table.

Whether or not this field and its elements are available is highly dependent on the Supplemental Logging configuration applying to the table.

  • after is an optional field that if present contains the state of the row after the event occurred. The structure is described by the same server1.INVENTORY.CUSTOMERS.Value Kafka Connect schema used in before.

  • source is a mandatory field that contains a structure describing the source metadata for the event, which in the case of Oracle contains these fields: the Debezium version, the connector name, whether the event is part of an ongoing snapshot or not, the transaction id (not while snapshotting), the SCN of the change, and a timestamp representing the point in time when the record was changed in the source database (during snapshotting, this is the point in time of snapshotting).

The commit_scn field is optional and describes the SCN of the transaction commit that the change event participates within. This field is only present when using the LogMiner connection adapter.

  • ts_ms is optional and if present contains the time (using the system clock in the JVM running the Kafka Connect task) at which the connector processed the event.

And of course, the schema portion of the event message’s value contains a schema that describes this envelope structure and the nested fields within it.

Create events

Let’s look at what a create event value might look like for our customers table:

{
    "schema": {
        "type": "struct",
        "fields": [
            {
                "type": "struct",
                "fields": [
                    {
                        "type": "int32",
                        "optional": false,
                        "field": "ID"
                    },
                    {
                        "type": "string",
                        "optional": false,
                        "field": "FIRST_NAME"
                    },
                    {
                        "type": "string",
                        "optional": false,
                        "field": "LAST_NAME"
                    },
                    {
                        "type": "string",
                        "optional": false,
                        "field": "EMAIL"
                    }
                ],
                "optional": true,
                "name": "server1.DEBEZIUM.CUSTOMERS.Value",
                "field": "before"
            },
            {
                "type": "struct",
                "fields": [
                    {
                        "type": "int32",
                        "optional": false,
                        "field": "ID"
                    },
                    {
                        "type": "string",
                        "optional": false,
                        "field": "FIRST_NAME"
                    },
                    {
                        "type": "string",
                        "optional": false,
                        "field": "LAST_NAME"
                    },
                    {
                        "type": "string",
                        "optional": false,
                        "field": "EMAIL"
                    }
                ],
                "optional": true,
                "name": "server1.DEBEZIUM.CUSTOMERS.Value",
                "field": "after"
            },
            {
                "type": "struct",
                "fields": [
                    {
                        "type": "string",
                        "optional": true,
                        "field": "version"
                    },
                    {
                        "type": "string",
                        "optional": false,
                        "field": "name"
                    },
                    {
                        "type": "int64",
                        "optional": true,
                        "field": "ts_ms"
                    },
                    {
                        "type": "string",
                        "optional": true,
                        "field": "txId"
                    },
                    {
                        "type": "string",
                        "optional": true,
                        "field": "scn"
                    },
                    {
                        "type": "string",
                        "optional": true,
                        "field": "commit_scn"
                    },
                    {
                        "type": "boolean",
                        "optional": true,
                        "field": "snapshot"
                    }
                ],
                "optional": false,
                "name": "io.debezium.connector.oracle.Source",
                "field": "source"
            },
            {
                "type": "string",
                "optional": false,
                "field": "op"
            },
            {
                "type": "int64",
                "optional": true,
                "field": "ts_ms"
            }
        ],
        "optional": false,
        "name": "server1.DEBEZIUM.CUSTOMERS.Envelope"
    },
    "payload": {
        "before": null,
        "after": {
            "ID": 1004,
            "FIRST_NAME": "Anne",
            "LAST_NAME": "Kretchmar",
            "EMAIL": "annek@noanswer.org"
        },
        "source": {
            "version": "1.6.0.Final",
            "name": "server1",
            "ts_ms": 1520085154000,
            "txId": "6.28.807",
            "scn": "2122185",
            "commit_scn": "2122185",
            "snapshot": false
        },
        "op": "c",
        "ts_ms": 1532592105975
    }
}

If we look at the schema portion of this event’s value, we can see the schema for the envelope, the schema for the source structure (which is specific to the Oracle connector and reused across all events), and the table-specific schemas for the before and after fields.

The names of the schemas for the before and after fields are of the form logicalName.schemaName.tableName.Value, and thus are entirely independent from all other schemas for all other tables. This means that when using the Avro Converter, the resulting Avro schemas for each table in each logical source have their own evolution and history.

If we look at the payload portion of this event’s value, we can see the information in the event, namely that it is describing that the row was created (since op=c), and that the after field value contains the values of the newly inserted row’s ID, FIRST_NAME, LAST_NAME, and EMAIL columns.

It may appear that the JSON representations of the events are much larger than the rows they describe. This is true, because the JSON representation must include the schema and the payload portions of the message. It is possible and even recommended to use the Avro Converter to dramatically decrease the size of the actual messages written to the Kafka topics.

Update events

The value of an update change event on this table has exactly the same schema as a create event, and its payload is structured the same way but holds different values. Here is an example:

{
    "schema": { ... },
    "payload": {
        "before": {
            "ID": 1004,
            "FIRST_NAME": "Anne",
            "LAST_NAME": "Kretchmar",
            "EMAIL": "annek@noanswer.org"
        },
        "after": {
            "ID": 1004,
            "FIRST_NAME": "Anne",
            "LAST_NAME": "Kretchmar",
            "EMAIL": "anne@example.com"
        },
        "source": {
            "version": "1.6.0.Final",
            "name": "server1",
            "ts_ms": 1520085811000,
            "txId": "6.9.809",
            "scn": "2125544",
            "commit_scn": "2125544",
            "snapshot": false
        },
        "op": "u",
        "ts_ms": 1532592713485
    }
}

When we compare this to the value in the insert event, we see a couple of differences in the payload section:

  • The op field value is now u, signifying that this row changed because of an update

  • The before field now has the state of the row with the values before the database commit

  • The after field now has the updated state of the row, and here we can see that the EMAIL value is now anne@example.com.

  • The source field structure has the same fields as before, but the values are different since this event is from a different position in the redo log.

  • The ts_ms shows the timestamp that Debezium processed this event.

There are several things we can learn by just looking at this payload section. We can compare the before and after structures to determine what actually changed in this row because of the commit. The source structure tells us information about Oracle’s record of this change (providing traceability), but more importantly this has information we can compare to other events in this and other topics to know whether this event occurred before, after, or as part of the same Oracle commit as other events.

When the columns for a row’s primary/unique key are updated, the value of the row’s key has changed so Debezium will output three events: a DELETE event and a tombstone event with the old key for the row, followed by an INSERT event with the new key for the row.

Delete events

So far we’ve seen samples of create and update events. Now, let’s look at the value of a delete event for the same table. Once again, the schema portion of the value will be exactly the same as with the create and update events:

{
    "schema": { ... },
    "payload": {
        "before": {
            "ID": 1004,
            "FIRST_NAME": "Anne",
            "LAST_NAME": "Kretchmar",
            "EMAIL": "anne@example.com"
        },
        "after": null,
        "source": {
            "version": "1.6.0.Final",
            "name": "server1",
            "ts_ms": 1520085153000,
            "txId": "6.28.807",
            "scn": "2122184",
            "commit_scn": "2122184",
            "snapshot": false
        },
        "op": "d",
        "ts_ms": 1532592105960
    }
}

If we look at the payload portion, we see a number of differences compared with the create or update event payloads:

  • The op field value is now d, signifying that this row was deleted

  • The before field now has the state of the row that was deleted with the database commit.

  • The after field is null, signifying that the row no longer exists

  • The source field structure has many of the same values as before, except the ts_ms, scn and txId fields have changed

  • The ts_ms shows the timestamp that Debezium processed this event.

This event gives a consumer all kinds of information that it can use to process the removal of this row.

The Oracle connector’s events are designed to work with Kafka log compaction, which allows for the removal of some older messages as long as at least the most recent message for every key is kept. This allows Kafka to reclaim storage space while ensuring the topic contains a complete dataset and can be used for reloading key-based state.

When a row is deleted, the delete event value listed above still works with log compaction, since Kafka can still remove all earlier messages with that same key. But only if the message value is null will Kafka know that it can remove all messages with that same key. To make this possible, Debezium’s Oracle connector always follows the delete event with a special tombstone event that has the same key but null value.
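
Emission of these tombstone events is controlled by the tombstones.on.delete property described in the connector properties section. For example, if the downstream topics do not use log compaction and you want to suppress tombstones, a sketch of the relevant setting is:

tombstones.on.delete=false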

Data Type mappings

The Oracle connector represents changes to rows with events that are structured like the table in which the rows exist. The event contains a field for each column value. How that value is represented in the event depends on the Oracle data type of the column. The following sections describe how the connector maps Oracle data types to a literal type and a semantic type in event fields.

  • literal type describes how the value is literally represented using Kafka Connect schema types: INT8, INT16, INT32, INT64, FLOAT32, FLOAT64, BOOLEAN, STRING, BYTES, ARRAY, MAP, and STRUCT.

  • semantic type describes how the Kafka Connect schema captures the meaning of the field using the name of the Kafka Connect schema for the field.

Support for further data types will be added in subsequent releases. Please file a JIRA issue for any specific types that may be missing.

Character types

The following table describes how the connector maps basic character types.

Table 3. Mappings for Oracle basic character types
Oracle Data Type Literal type (schema type) Semantic type (schema name) and Notes

CHAR[(M)]

STRING

n/a

NCHAR[(M)]

STRING

n/a

NVARCHAR2[(M)]

STRING

n/a

VARCHAR[(M)]

STRING

n/a

VARCHAR2[(M)]

STRING

n/a

Binary and Character LOB types

Support for these data types is currently in incubating state, i.e. exact semantics, configuration options etc. may change in future revisions, based on feedback we receive. Please let us know if you encounter any problems while using these data types.

The following table describes how the connector maps binary and character LOB types.

Table 4. Mappings for Oracle binary and character LOB types
Oracle Data Type Literal type (schema type) Semantic type (schema name) and Notes

BLOB

BYTES

The raw bytes.

CLOB

STRING

n/a

LONG

n/a

This data type is not supported.

LONG RAW

n/a

This data type is not supported.

NCLOB

STRING

n/a

RAW

n/a

This data type is not supported.

Numeric types

The following table describes how the connector maps numeric types.

Table 5. Mappings for Oracle numeric data types
Oracle Data Type Literal type (schema type) Semantic type (schema name) and Notes

BINARY_FLOAT

FLOAT32

n/a

BINARY_DOUBLE

FLOAT64

n/a

DECIMAL[(P, S)]

BYTES / INT8 / INT16 / INT32 / INT64

org.apache.kafka.connect.data.Decimal if using BYTES

Handled equivalently to NUMBER (note that S defaults to 0 for DECIMAL).

DOUBLE PRECISION

STRUCT

io.debezium.data.VariableScaleDecimal

Contains a structure with two fields: scale of type INT32 that contains the scale of the transferred value and value of type BYTES containing the original value in an unscaled form.

FLOAT[(P)]

STRUCT

io.debezium.data.VariableScaleDecimal

Contains a structure with two fields: scale of type INT32 that contains the scale of the transferred value and value of type BYTES containing the original value in an unscaled form.

INTEGER, INT

BYTES

org.apache.kafka.connect.data.Decimal

INTEGER is mapped in Oracle to NUMBER(38,0) and hence can hold values larger than any of the INT types could store

NUMBER[(P[, *])]

STRUCT

io.debezium.data.VariableScaleDecimal

Contains a structure with two fields: scale of type INT32 that contains the scale of the transferred value and value of type BYTES containing the original value in an unscaled form.

NUMBER(P, S <= 0)

INT8 / INT16 / INT32 / INT64

NUMBER columns with a scale of 0 represent integer numbers; a negative scale indicates rounding in Oracle, e.g. a scale of -2 will cause rounding to hundreds.

Depending on the precision and scale, a matching Kafka Connect integer type will be chosen:

  • P - S < 3, INT8

  • P - S < 5, INT16

  • P - S < 10, INT32

  • P - S < 19, INT64

  • P - S >= 19, BYTES (org.apache.kafka.connect.data.Decimal).

NUMBER(P, S > 0)

BYTES

org.apache.kafka.connect.data.Decimal

NUMERIC[(P, S)]

BYTES / INT8 / INT16 / INT32 / INT64

org.apache.kafka.connect.data.Decimal if using BYTES

Handled equivalently to NUMBER (note that S defaults to 0 for NUMERIC).

SMALLINT

BYTES

org.apache.kafka.connect.data.Decimal

SMALLINT is mapped in Oracle to NUMBER(38,0) and hence can hold values larger than any of the INT types could store

REAL

STRUCT

io.debezium.data.VariableScaleDecimal

Contains a structure with two fields: scale of type INT32 that contains the scale of the transferred value and value of type BYTES containing the original value in an unscaled form.

Boolean types

Oracle does not natively have support for a BOOLEAN data type; however, it is common practice to use other data types with certain semantics to simulate the concept of a logical BOOLEAN data type.

You can configure the out-of-the-box NumberOneToBooleanConverter custom converter to map all NUMBER(1) columns to BOOLEAN or, if the selector parameter is set, to map only the subset of columns enumerated by a comma-separated list of regular expressions.

Following is an example configuration:

converters=boolean
boolean.type=io.debezium.connector.oracle.converters.NumberOneToBooleanConverter
boolean.selector=.*MYTABLE.FLAG,.*.IS_ARCHIVED

Decimal types

The setting of the Oracle connector configuration property decimal.handling.mode determines how the connector maps decimal types.

When the decimal.handling.mode property is set to precise, the connector uses the Kafka Connect org.apache.kafka.connect.data.Decimal logical type for all DECIMAL and NUMERIC columns. This is the default mode.

However, when the decimal.handling.mode property is set to double, the connector represents the values as Java double values with schema type FLOAT64.

The last possible setting for the decimal.handling.mode configuration property is string. In this case, the connector represents DECIMAL and NUMERIC values as their formatted string representation with schema type STRING.
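
For example, a sketch of trading exact precision for values that are easier for consumers to handle:

decimal.handling.mode=double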

Temporal types

Other than Oracle’s INTERVAL, TIMESTAMP WITH TIME ZONE, and TIMESTAMP WITH LOCAL TIME ZONE data types, how the connector maps the other temporal types depends on the value of the time.precision.mode configuration property.

When the time.precision.mode configuration property is set to adaptive (the default), then the connector will determine the literal and semantic type for the temporal types based on the column’s data type definition so that events exactly represent the values in the database:

Oracle data type Literal type (schema type) Semantic type (schema name) and Notes

DATE

INT64

io.debezium.time.Timestamp

Represents the number of milliseconds past epoch, and does not include timezone information.

INTERVAL DAY[(M)] TO SECOND

FLOAT64

io.debezium.time.MicroDuration

The number of microseconds for a time interval, using the 365.25 / 12.0 formula for the average number of days per month.

INTERVAL YEAR[(M)] TO MONTH

FLOAT64

io.debezium.time.MicroDuration

The number of microseconds for a time interval, using the 365.25 / 12.0 formula for the average number of days per month.

TIMESTAMP(0 - 3)

INT64

io.debezium.time.Timestamp

Represents the number of milliseconds past epoch, and does not include timezone information.

TIMESTAMP, TIMESTAMP(4 - 6)

INT64

io.debezium.time.MicroTimestamp

Represents the number of microseconds past epoch, and does not include timezone information.

TIMESTAMP(7 - 9)

INT64

io.debezium.time.NanoTimestamp

Represents the number of nanoseconds past epoch, and does not include timezone information.

TIMESTAMP WITH TIME ZONE

STRING

io.debezium.time.ZonedTimestamp

A string representation of a timestamp with timezone information.

TIMESTAMP WITH LOCAL TIME ZONE

STRING

io.debezium.time.ZonedTimestamp

A string representation of a timestamp in UTC.

When the time.precision.mode configuration property is set to connect, then the connector will use the predefined Kafka Connect logical types. This may be useful when consumers only know about the built-in Kafka Connect logical types and are unable to handle variable-precision time values. Since Oracle supports precision that exceeds what Kafka Connect’s logical types support, using connect time precision will result in a loss of precision when the database column has a fractional second precision value that is greater than 3:

Oracle data type Literal type (schema type) Semantic type (schema name) and Notes

DATE

INT32

org.apache.kafka.connect.data.Date

Represents the number of days since the epoch.

INTERVAL DAY[(M)] TO SECOND

FLOAT64

io.debezium.time.MicroDuration

The number of microseconds for a time interval, using the 365.25 / 12.0 formula for the average number of days per month.

INTERVAL YEAR[(M)] TO MONTH

FLOAT64

io.debezium.time.MicroDuration

The number of microseconds for a time interval, using the 365.25 / 12.0 formula for the average number of days per month.

TIMESTAMP(0 - 3)

INT64

org.apache.kafka.connect.data.Timestamp

Represents the number of milliseconds since epoch, and does not include timezone information.

TIMESTAMP(4 - 6)

INT64

org.apache.kafka.connect.data.Timestamp

Represents the number of milliseconds since epoch, and does not include timezone information.

TIMESTAMP(7 - 9)

INT64

org.apache.kafka.connect.data.Timestamp

Represents the number of milliseconds since epoch, and does not include timezone information.

TIMESTAMP WITH TIME ZONE

STRING

io.debezium.time.ZonedTimestamp

A string representation of a timestamp with timezone information.

TIMESTAMP WITH LOCAL TIME ZONE

STRING

io.debezium.time.ZonedTimestamp

A string representation of a timestamp in UTC.
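
For example, a sketch of restricting the connector to the built-in Kafka Connect logical types shown in the preceding table:

time.precision.mode=connect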

Setting up Oracle

The following database setup steps are necessary to use the Debezium Oracle connector. These steps assume the use of the multi-tenancy configuration with a container database and at least one pluggable database. If you do not intend to use the multi-tenancy configuration, the following steps may require adjustment.

You can find a template for setting up Oracle in a virtual machine (via Vagrant) in the oracle-vagrant-box/ repository.

Preparing the Database

Configuration needed for Oracle LogMiner
ORACLE_SID=ORCLCDB dbz_oracle sqlplus /nolog

CONNECT sys/top_secret AS SYSDBA
alter system set db_recovery_file_dest_size = 10G;
alter system set db_recovery_file_dest = '/opt/oracle/oradata/recovery_area' scope=spfile;
shutdown immediate
startup mount
alter database archivelog;
alter database open;
-- Should now "Database log mode: Archive Mode"
archive log list

exit;

In addition, supplemental logging must be enabled for captured tables or for the database in order for change events to capture the before state of changed database rows. The following illustrates how to configure this on a specific table, which is the ideal choice to minimize the amount of information captured in the Oracle redo logs.

ALTER TABLE inventory.customers ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;

Minimal supplemental logging must be enabled at the database level and can be configured as follows.

ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
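
Before starting the connector, you may want to confirm the current logging configuration. The following query is a sketch that uses the standard V$DATABASE dictionary view; it returns a value other than NO once minimal supplemental logging is enabled at the database level.

SELECT SUPPLEMENTAL_LOG_DATA_MIN FROM V$DATABASE;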

Creating users for the connector

The Debezium Oracle connector requires that user accounts be set up with specific permissions so that the connector can capture change events. The following briefly describes these user configurations using a multi-tenant database model.

While database changes performed by the connector user account will be captured by the connector, changes made by the SYS and SYSTEM user accounts will not.

Creating the connector’s LogMiner user
sqlplus sys/top_secret@//localhost:1521/ORCLCDB as sysdba
  CREATE TABLESPACE logminer_tbs DATAFILE '/opt/oracle/oradata/ORCLCDB/logminer_tbs.dbf'
    SIZE 25M REUSE AUTOEXTEND ON MAXSIZE UNLIMITED;
  exit;

sqlplus sys/top_secret@//localhost:1521/ORCLPDB1 as sysdba
  CREATE TABLESPACE logminer_tbs DATAFILE '/opt/oracle/oradata/ORCLCDB/ORCLPDB1/logminer_tbs.dbf'
    SIZE 25M REUSE AUTOEXTEND ON MAXSIZE UNLIMITED;
  exit;

sqlplus sys/top_secret@//localhost:1521/ORCLCDB as sysdba

  CREATE USER c##dbzuser IDENTIFIED BY dbz
    DEFAULT TABLESPACE logminer_tbs
    QUOTA UNLIMITED ON logminer_tbs
    CONTAINER=ALL;

  GRANT CREATE SESSION TO c##dbzuser CONTAINER=ALL;
  GRANT SET CONTAINER TO c##dbzuser CONTAINER=ALL;
  GRANT SELECT ON V_$DATABASE to c##dbzuser CONTAINER=ALL;
  GRANT FLASHBACK ANY TABLE TO c##dbzuser CONTAINER=ALL;
  GRANT SELECT ANY TABLE TO c##dbzuser CONTAINER=ALL;
  GRANT SELECT_CATALOG_ROLE TO c##dbzuser CONTAINER=ALL;
  GRANT EXECUTE_CATALOG_ROLE TO c##dbzuser CONTAINER=ALL;
  GRANT SELECT ANY TRANSACTION TO c##dbzuser CONTAINER=ALL;
  GRANT LOGMINING TO c##dbzuser CONTAINER=ALL;

  GRANT CREATE TABLE TO c##dbzuser CONTAINER=ALL;
  GRANT LOCK ANY TABLE TO c##dbzuser CONTAINER=ALL;
  GRANT ALTER ANY TABLE TO c##dbzuser CONTAINER=ALL;
  GRANT CREATE SEQUENCE TO c##dbzuser CONTAINER=ALL;

  GRANT EXECUTE ON DBMS_LOGMNR TO c##dbzuser CONTAINER=ALL;
  GRANT EXECUTE ON DBMS_LOGMNR_D TO c##dbzuser CONTAINER=ALL;

  GRANT SELECT ON V_$LOG TO c##dbzuser CONTAINER=ALL;
  GRANT SELECT ON V_$LOG_HISTORY TO c##dbzuser CONTAINER=ALL;
  GRANT SELECT ON V_$LOGMNR_LOGS TO c##dbzuser CONTAINER=ALL;
  GRANT SELECT ON V_$LOGMNR_CONTENTS TO c##dbzuser CONTAINER=ALL;
  GRANT SELECT ON V_$LOGMNR_PARAMETERS TO c##dbzuser CONTAINER=ALL;
  GRANT SELECT ON V_$LOGFILE TO c##dbzuser CONTAINER=ALL;
  GRANT SELECT ON V_$ARCHIVED_LOG TO c##dbzuser CONTAINER=ALL;
  GRANT SELECT ON V_$ARCHIVE_DEST_STATUS TO c##dbzuser CONTAINER=ALL;

  exit;

Deployment

To deploy a Debezium Oracle connector, you install the Debezium Oracle connector archive, configure the connector, and start the connector by adding its configuration to Kafka Connect.

Prerequisites
Procedure
  1. Download the Debezium Oracle connector plug-in archive.

  2. Extract the files into your Kafka Connect environment.

  3. Add the directory with the JAR files to Kafka Connect’s plugin.path, as shown in the example that follows this procedure.

  4. Configure the connector and add the configuration to your Kafka Connect cluster.

  5. Restart your Kafka Connect process to pick up the new JAR files.
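
As an example of step 3, if the archive was extracted into /kafka/connect/debezium-connector-oracle (an illustrative path), the Kafka Connect worker configuration would contain a line similar to the following, pointing at the parent directory that holds the plug-in directories:

plugin.path=/kafka/connect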

Obtaining the Oracle JDBC driver and XStream API files

The Debezium Oracle connector requires the Oracle JDBC driver (ojdbc8.jar) to connect to Oracle databases. If the connector uses XStreams to access the database, you must also have the XStream API (xstreams.jar). Licensing requirements prohibit Debezium from including these files in the Oracle connector archive. However, the required files are available for free download as part of the Oracle Instant Client. The following steps describe how to download the Oracle Instant Client and extract the required files.

Procedure
  1. From a browser, download the Oracle Instant Client package for your operating system.

  2. Extract the archive and then open the instantclient_<VERSION> directory.

    For example:

    instantclient_21_1/
    ├── adrci
    ├── BASIC_LITE_LICENSE
    ├── BASIC_LITE_README
    ├── genezi
    ├── libclntshcore.so -> libclntshcore.so.21.1
    ├── libclntshcore.so.12.1 -> libclntshcore.so.21.1
    
    ...
    
    ├── ojdbc8.jar
    ├── ucp.jar
    ├── uidrvci
    └── xstreams.jar
  3. Copy the ojdbc8.jar and xstreams.jar files, and add them to the <KAFKA_HOME>/libs directory, for example, kafka/libs.

    In environments that use the Oracle LogMiner implementation, copy only the ojdbc8.jar file. The xstreams.jar file is only required in environments that use the Oracle XStreams implementation.

  4. If you are using the XStreams implementation, create an environment variable, LD_LIBRARY_PATH, and set its value to the path to the Instant Client directory, for example:

    LD_LIBRARY_PATH=/path/to/instant_client/

    The LD_LIBRARY_PATH environment variable is not required if you run the Oracle LogMiner implementation.

Debezium Oracle connector configuration

Typically, you register a Debezium Oracle connector by submitting a JSON request that specifies the configuration properties for the connector. The following example shows a JSON request for registering an instance of the Debezium Oracle connector with logical name server1 at port 1521:

You can choose to produce events for a subset of the schemas and tables in a database. Optionally, you can ignore, mask, or truncate columns that contain sensitive data, that are larger than a specified size, or that you do not need.

Example: Debezium Oracle connector configuration
{
    "name": "inventory-connector",  // <`>`
    "config": {
        "connector.class" : "io.debezium.connector.oracle.OracleConnector",  (2)
        "database.hostname" : "<ORACLE_IP_ADDRESS>",  (3)
        "database.port" : "1521",  (4)
        "database.user" : "c##dbzuser",  (5)
        "database.password" : "dbz",   (6)
        "database.dbname" : "ORCLCDB",  (7)
        "database.server.name" : "server1",  (8)
        "tasks.max" : "1",  (9)
        "database.pdb.name" : "ORCLPDB1",  (10)
        "database.history.kafka.bootstrap.servers" : "kafka:9092", (11)
        "database.history.kafka.topic": "schema-changes.inventory"  (12)
    }
}
1 The name of our connector when we register it with a Kafka Connect service.
2 The name of this Oracle connector class.
3 The address of the Oracle instance.
4 The port number of the Oracle instance.
5 The name of the Oracle user.
6 The password for the Oracle user.
7 The name of the database to capture changes from.
8 Logical name that identifies and provides a namespace for the Oracle database server from which the connector captures changes.
9 The maximum number of tasks to create for this connector.
10 The name of the Oracle pluggable database that the connector captures changes from. Used in container database (CDB) installations only.
11 The list of Kafka brokers that this connector will use to write and recover DDL statements to the database history topic.
12 The name of the database history topic where the connector will write and recover DDL statements. This topic is for internal use only and should not be used by consumers.

In the previous example, the database.hostname and database.port properties are used to define the connection to the database host. However, in more complex Oracle deployments, or in deployments that use TNS names, you can use an alternative method in which you specify a JDBC URL.

The following JSON example shows the same configuration as in the preceding example, except that it uses a JDBC URL to connect to the database.

Example: Debezium Oracle connector configuration that uses a JDBC URL to connect to the database
{
    "name": "inventory-connector",
    "config": {
        "connector.class" : "io.debezium.connector.oracle.OracleConnector",
        "tasks.max" : "1",
        "database.server.name" : "server1",
        "database.user" : "c##dbzuser",
        "database.password" : "dbz",
        "database.url": "jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS_LIST=(LOAD_BALANCE=OFF)(FAILOVER=ON)(ADDRESS=(PROTOCOL=TCP)(HOST=<oracle ip 1>)(PORT=1521))(ADDRESS=(PROTOCOL=TCP)(HOST=<oracle ip 2>)(PORT=1521)))(CONNECT_DATA=SERVICE_NAME=)(SERVER=DEDICATED)))",
        "database.dbname" : "ORCLCDB",
        "database.pdb.name" : "ORCLPDB1",
        "database.history.kafka.bootstrap.servers" : "kafka:9092",
        "database.history.kafka.topic": "schema-changes.inventory"
    }
}

Pluggable vs Non-Pluggable databases

Oracle Database supports the following deployment types:

Container database (CDB)

A database that can contain multiple pluggable databases (PDBs). Database clients connect to each PDB as if it were a standard, non-CDB database.

Non-container database (non-CDB)

A standard Oracle database, which does not support the creation of pluggable databases.

Example: Debezium connector configuration for CDB deployments
{
  "config": {
    "connector.class" : "io.debezium.connector.oracle.OracleConnector",
    "tasks.max" : "1",
    "database.server.name" : "server1",
    "database.hostname" : "<oracle ip>",
    "database.port" : "1521",
    "database.user" : "c##dbzuser",
    "database.password" : "dbz",
    "database.dbname" : "ORCLCDB",
    "database.pdb.name" : "ORCLPDB1",
    "database.history.kafka.bootstrap.servers" : "kafka:9092",
    "database.history.kafka.topic": "schema-changes.inventory"
  }
}

When you configure a Debezium Oracle connector for use with an Oracle CDB, you must specify a value for the property database.pdb.name, which names the PDB that you want the connector to capture changes from. For non-CDB installations, do not specify the database.pdb.name property.

Example: Debezium Oracle connector configuration for non-CDB deployments
{
    "config": {
        "connector.class" : "io.debezium.connector.oracle.OracleConnector",
        "tasks.max" : "1",
        "database.server.name" : "server1",
        "database.hostname" : "<oracle ip>",
        "database.port" : "1521",
        "database.user" : "c##dbzuser",
        "database.password" : "dbz",
        "database.dbname" : "ORCLCDB",
        "database.history.kafka.bootstrap.servers" : "kafka:9092",
        "database.history.kafka.topic": "schema-changes.inventory"
    }
}

For the complete list of the configuration properties that you can set for the Debezium Oracle connector, see Oracle connector properties.

You can send this configuration with a POST command to a running Kafka Connect service. The service records the configuration and starts a connector task that performs the following operations:

  • Connects to the Oracle database.

  • Reads the redo log.

  • Records change events to Kafka topics.
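
For example, assuming the registration request from the earlier example is saved in a file named register-oracle.json (an illustrative name) and Kafka Connect listens on its default REST port, the configuration could be submitted with a command similar to:

curl -i -X POST -H "Accept:application/json" -H "Content-Type:application/json" \
  http://localhost:8083/connectors/ -d @register-oracle.json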

Adding connector configuration

To start running a Debezium Oracle connector, create a connector configuration, and add the configuration to your Kafka Connect cluster.

Prerequisites
Procedure
  1. Create a configuration for the Oracle connector.

  2. Use the Kafka Connect REST API to add that connector configuration to your Kafka Connect cluster.

Results

When the connector starts, it performs a consistent snapshot of the Oracle databases that the connector is configured for. The connector then starts generating data change events for row-level operations and streaming the change event records to Kafka topics.

Connector Properties

The Debezium Oracle connector has numerous configuration properties that you can use to achieve the right connector behavior for your application. Many properties have default values. Information about the properties is organized as follows:

Required Debezium Oracle connector configuration properties

The following configuration properties are required unless a default value is available.

Property

Default

Description

Unique name for the connector. Attempting to register again with the same name will fail. (This property is required by all Kafka Connect connectors.)

The name of the Java class for the connector. Always use a value of io.debezium.connector.oracle.OracleConnector for the Oracle connector.

1

The maximum number of tasks that should be created for this connector. The Oracle connector always uses a single task and therefore does not use this value, so the default is always acceptable.

IP address or hostname of the Oracle database server.

Integer port number of the Oracle database server.

Name of the user to use when connecting to the Oracle database server.

Password to use when connecting to the Oracle database server.

Name of the database to connect to. Must be the CDB name when working with the CDB + PDB model.

Raw database JDBC URL. This property can be used when more flexibility is needed, and supports raw TNS names and RAC connection strings.

Name of the Oracle pluggable database to connect to. Use this property with container database (CDB) installations only.

Logical name that identifies and provides a namespace for the particular Oracle database server being monitored. The logical name should be unique across all other connectors, since it is used as a prefix for all Kafka topic names emanating from this connector. Only alphanumeric characters, hyphens, and underscores may be used.

logminer

The adapter implementation to be used to stream database changes. logminer (the default) to use the native Oracle LogMiner API; xstream to use the Oracle XStreams API.

initial

A mode for taking an initial snapshot of the structure and optionally data of captured tables. Supported values are initial (will take a snapshot of structure and data of captured tables; useful if topics should be populated with a complete representation of the data from the captured tables) and schema_only (will take a snapshot of the structure of captured tables only; useful if only changes happening from now onwards should be propagated to topics). Once the snapshot is complete, the connector will continue reading change events from the database’s redo logs.

shared

Controls whether and for how long the connector holds a table lock, which prevents certain kinds of table operations, while the connector is performing a snapshot. Possible settings are:

shared - allows concurrent access to the table but prevents any session from acquiring a table-exclusive lock (specifically, the connector acquires a ROW SHARE level lock while capturing the schemas of the tables).

none - prevents the connector from acquiring any table locks during the snapshot. This setting is safe to use only if no schema changes are happening while the snapshot is running.

All tables specified in table.include.list

An optional, comma-separated list of regular expressions that match fully-qualified table names (<db-name>.<schema-name>.<name>) included in table.include.list for which you want to take the snapshot.

Controls which rows from tables are included in the snapshot.
This property contains a comma-separated list of fully-qualified tables (SCHEMA_NAME.TABLE_NAME). SELECT statements for the individual tables are specified in further configuration properties, one for each table, identified by the id snapshot.select.statement.overrides.[SCHEMA_NAME].[TABLE_NAME]. The value of those properties is the SELECT statement to use when retrieving data from the specific table during snapshotting. A possible use case for large append-only tables is to set a specific point at which to start (resume) snapshotting, in case a previous snapshot was interrupted.
Note: This setting affects snapshots only. Events captured during log reading are not affected by it.
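
Example (the table name and predicate are illustrative):

snapshot.select.statement.overrides=DEBEZIUM.ORDERS
snapshot.select.statement.overrides.DEBEZIUM.ORDERS=SELECT * FROM DEBEZIUM.ORDERS WHERE ID > 10000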

An optional, comma-separated list of regular expressions that match names of schemas for which you want to capture changes. Any schema name not included in schema.include.list is excluded from having its changes captured. By default, all non-system schemas have their changes captured. Do not also set the schema.exclude.list property. When using LogMiner, only POSIX regular expressions are supported.

An optional, comma-separated list of regular expressions that match names of schemas for which you do not want to capture changes. Any schema whose name is not included in schema.exclude.list has its changes captured, with the exception of system schemas. Do not also set the schema.include.list property. When using LogMiner, only POSIX regular expressions are supported.

An optional comma-separated list of regular expressions that match fully-qualified table identifiers for tables to be monitored; any table not included in the include list will be excluded from monitoring. Each identifier is of the form schemaName.tableName. By default the connector will monitor every non-system table in each monitored database. May not be used with table.exclude.list. When using LogMiner, only POSIX regular expressions are supported.

An optional comma-separated list of regular expressions that match fully-qualified table identifiers for tables to be excluded from monitoring; any table not included in the exclude list will be monitored. Each identifier is of the form schemaName.tableName. May not be used with table.include.list. When using LogMiner, only POSIX regular expressions are supported.

An optional comma-separated list of regular expressions that match the fully-qualified names of columns that should be included in the change event message values. Fully-qualified names for columns are of the form schemaName.tableName.columnName. Note that primary key columns are always included in the event’s key, even if not included in the value. Do not also set the column.exclude.list property.

An optional comma-separated list of regular expressions that match the fully-qualified names of columns that should be excluded from change event message values. Fully-qualified names for columns are of the form schemaName.tableName.columnName. Note that primary key columns are always included in the event’s key, also if excluded from the value. Do not also set the column.include.list property.
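
Example (a sketch that excludes a single sensitive column; the schema, table, and column names are illustrative):

column.exclude.list=INVENTORY.CUSTOMERS.EMAIL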

An optional comma-separated list of regular expressions that match the fully-qualified names of character-based columns whose values should be pseudonymized in the change event message values with a field value consisting of the hashed value, using the algorithm hashAlgorithm and salt salt. Based on the hash function that is used, referential integrity is maintained while data is pseudonymized. Supported hash functions are described in the MessageDigest section of the Java Cryptography Architecture Standard Algorithm Name Documentation. The hash is automatically shortened to the length of the column.

Multiple properties with different lengths can be used in a single configuration, although in each the length must be a positive integer or zero. Fully-qualified names for columns are of the form pdbName.schemaName.tableName.columnName.

Example:

column.mask.hash.SHA-256.with.salt.CzQMA0cB5K = inventory.orders.customerName, inventory.shipment.customerName

where CzQMA0cB5K is a randomly selected salt.

Note: Depending on the hashAlgorithm used, the salt selected and the actual data set, the resulting masked data set may not be completely anonymized.

precise

Specifies how the connector should handle floating point values for NUMBER, DECIMAL and NUMERIC columns: precise (the default) represents them precisely using java.math.BigDecimal values represented in change events in a binary form; double represents them using double values, which may result in a loss of precision but is far easier to use. The string option encodes values as formatted strings, which are easy to consume but lose the semantic information about the real type. See Decimal types.

fail

Specifies how the connector should react to exceptions during processing of events. fail will propagate the exception (indicating the offset of the problematic event), causing the connector to stop.
warn will cause the problematic event to be skipped and the offset of the problematic event to be logged.
skip will cause the problematic event to be skipped.

8192

Positive integer value that specifies the maximum size of the blocking queue into which change events read from the database log are placed before they are written to Kafka. This queue can provide backpressure to the log reader when, for example, writes to Kafka are slower or if Kafka is not available. Events that appear in the queue are not included in the offsets periodically recorded by this connector. Defaults to 8192, and should always be larger than the maximum batch size specified in the max.batch.size property.

2048

Positive integer value that specifies the maximum size of each batch of events that should be processed during each iteration of this connector. Defaults to 2048.

0

Long value for the maximum size in bytes of the blocking queue. The feature is disabled by default; it becomes active when the property is set to a positive long value.
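
As a sketch of how the three queue and batch settings above interact, the following values keep the queue comfortably larger than the batch size, as recommended, and additionally cap the queue at roughly 50 MB (max.batch.size and max.queue.size.in.bytes are named above; max.queue.size is assumed here to be the name of the blocking-queue size property):

max.queue.size = 16384
max.batch.size = 4096
max.queue.size.in.bytes = 52428800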

1000

Positive integer value that specifies the number of milliseconds the connector should wait during each iteration for new change events to appear. Defaults to 1000 milliseconds, or 1 second.

true

Controls whether a delete event is followed by a tombstone event.

true - a delete operation is represented by a delete event and a subsequent tombstone event.

false - only a delete event is emitted.

After a source record is deleted, emitting a tombstone event (the default behavior) allows Kafka to completely delete all events that pertain to the key of the deleted row in case log compaction is enabled for the topic.

A semicolon-separated list of regular expressions that match fully-qualified tables and columns used to map a custom primary key.
Each item (regular expression) must match the format <fully-qualified table>:<a comma-separated list of columns> that represents the custom key.
Fully-qualified tables can be defined as pdbName.schemaName.tableName.
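
For illustration, assuming this property is named message.key.columns (name not shown above), a custom key could be defined for two hypothetical tables like this:

message.key.columns = INVENTORY.CUSTOMERS:CUSTOMER_ID;INVENTORY.ORDERS:ORDER_ID,ORDER_DATE

Each semicolon-separated item pairs a table pattern with the comma-separated columns that form its custom key.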

An optional comma-separated list of regular expressions that match the fully-qualified names of character-based columns whose values should be truncated in the change event message values if the field values are longer than the specified number of characters. Multiple properties with different lengths can be used in a single configuration, although in each the length must be a positive integer. Fully-qualified names for columns are of the form pdbName.schemaName.tableName.columnName.

An optional comma-separated list of regular expressions that match the fully-qualified names of character-based columns whose values should be replaced in the change event message values with a field value consisting of the specified number of asterisk (*) characters. Multiple properties with different lengths can be used in a single configuration, although in each the length must be a positive integer or zero. Fully-qualified names for columns are of the form pdbName.schemaName.tableName.columnName.

An optional comma-separated list of regular expressions that match the fully-qualified names of columns whose original type and length should be added as a parameter to the corresponding field schemas in the emitted change messages. The schema parameters __debezium.source.column.type, __debezium.source.column.length and __debezium.source.column.scale will be used to propagate the original type name, length, and scale (for variable-width types), respectively. Useful to properly size corresponding columns in sink databases. Fully-qualified names for columns are of the form tableName.columnName, or schemaName.tableName.columnName.

An optional comma-separated list of regular expressions that match the database-specific data type name of columns whose original type and length should be added as a parameter to the corresponding field schemas in the emitted change messages. The schema parameters __debezium.source.column.type, __debezium.source.column.length and __debezium.source.column.scale will be used to propagate the original type name, length, and scale (for variable-width types), respectively. Useful to properly size corresponding columns in sink databases. Fully-qualified data type names are of the form tableName.typeName, or schemaName.tableName.typeName. See the list of Oracle-specific data type names.
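
As an illustrative sketch, assuming these two properties follow the conventional Debezium names column.propagate.source.type and datatype.propagate.source.type (names not shown above), type propagation could be enabled as follows:

column.propagate.source.type = INVENTORY.ORDERS.ORDER_NOTES
datatype.propagate.source.type = .*\.NUMBER

The first line propagates type metadata for one hypothetical column; the second does so for every column whose Oracle data type is NUMBER.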

0

Controls how frequently heartbeat messages are sent.
This property contains an interval in milliseconds that defines how frequently the connector sends messages to a heartbeat topic. It can be used to monitor whether the connector is still receiving change events from the database. You should also use heartbeat messages in cases where only records in non-captured tables change for a longer period of time. In such a situation the connector would keep reading the log from the database but never emit any change messages into Kafka, which in turn means that no offset updates are committed to Kafka. This causes the redo log files to be retained by the database longer than necessary (the connector has actually processed them already but never got a chance to flush the latest retrieved SCN to the database) and may also result in more change events being re-sent after a connector restart. Set this parameter to 0 to not send heartbeat messages at all.
Disabled by default.

__debezium-heartbeat

Controls the naming of the topic to which heartbeat messages are sent.
The topic is named according to the pattern <heartbeat.topics.prefix>.<server.name>.
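
For example, assuming the interval property above is named heartbeat.interval.ms (name not shown; heartbeat.topics.prefix is shown above), heartbeats could be emitted every ten seconds:

heartbeat.interval.ms = 10000
heartbeat.topics.prefix = __debezium-heartbeat

With database.server.name set to server1, heartbeat messages would then be written to the topic __debezium-heartbeat.server1.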

An interval in milliseconds that the connector should wait before taking a snapshot after starting up.
Can be used to avoid snapshot interruptions when starting multiple connectors in a cluster, which may cause re-balancing of connectors.

2000

Specifies the maximum number of rows that should be read in one go from each table while taking a snapshot. The connector will read the table contents in multiple batches of this size. Defaults to 2000.

true when connector configuration explicitly specifies the key.converter or value.converter parameters to use Avro, otherwise defaults to false.

Whether field names will be sanitized to adhere to Avro naming requirements. See Avro naming for more details.

false

When set to true, Debezium generates events with transaction boundaries and enriches the data event envelope with transaction metadata.

See Transaction Metadata for additional details.
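
A minimal sketch, assuming the property is named provide.transaction.metadata (name not shown above):

provide.transaction.metadata = true

When enabled, the connector emits separate transaction boundary events and adds transaction metadata to each data event envelope.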

redo_log_catalog

The mining strategy controls how Oracle LogMiner builds and uses a given data dictionary for resolving table and column ids to names.

redo_log_catalog - Writes the data dictionary to the online redo logs causing more archive logs to be generated over time. This also enables tracking DDL changes against captured tables, so if the schema changes frequently this is the ideal choice.

online_catalog - Uses the database’s current data dictionary to resolve object ids and does not write any extra information to the online redo logs. This allows LogMiner to mine substantially faster but at the expense that DDL changes cannot be tracked. If the captured table(s) schema changes infrequently or never, this is the ideal choice.
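
For example, assuming the property is named log.mining.strategy (name not shown above), a deployment whose captured tables never change schema could switch to the faster strategy:

log.mining.strategy = online_catalog

The redo_log_catalog default remains the safer choice whenever DDL changes against captured tables must be tracked.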

1000

The minimum SCN interval size that this connector will try to read from redo/archive logs. The active batch size is also increased or decreased by this amount when tuning connector throughput as needed.

100000

The maximum SCN interval size that this connector will use when reading from redo/archive logs.

20000

The starting SCN interval size that the connector will use for reading data from redo/archive logs.

0

The minimum amount of time that the connector will sleep after reading data from redo/archive logs and before starting reading data again. Value is in milliseconds.

3000

The maximum amount of time that the connector will sleep after reading data from redo/archive logs and before starting reading data again. Value is in milliseconds.

1000

The starting amount of time that the connector will sleep after reading data from redo/archive logs and before starting reading data again. Value is in milliseconds.

200

The maximum amount of time, up or down, that the connector uses to tune the optimal sleep time when reading data from LogMiner. Value is in milliseconds.
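
The batch-size and sleep-time settings above act as a simple feedback loop: the connector grows or shrinks the active SCN batch between the configured minimum and maximum, and lengthens or shortens the sleep between mining queries, to match the database's change volume. A hedged sketch that simply restates the defaults, assuming the conventional log.mining.batch.size.* and log.mining.sleep.time.* property names (names not shown above):

log.mining.batch.size.min = 1000
log.mining.batch.size.default = 20000
log.mining.batch.size.max = 100000
log.mining.sleep.time.min.ms = 0
log.mining.sleep.time.default.ms = 1000
log.mining.sleep.time.max.ms = 3000
log.mining.sleep.time.increment.ms = 200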

10000

The number of content records that will be fetched from the LogMiner content view.

0

The number of hours in the past from SYSDATE to mine archive logs. Using the default 0 will mine all archive logs.

false

Controls whether or not the connector mines changes from just archive logs or a combination of the online redo logs and archive logs (the default).

In some environments, online redo logs are archived frequently enough to cause LogMiner session failures, because online redo logs are circular in nature and can be archived at any point. This opt-in feature mines only archive logs, which are guaranteed to be reliable. Note that with this feature enabled, events are emitted with a certain amount of latency that depends entirely on how frequently online redo logs are archived by Oracle.
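
For instance, assuming the property is named log.mining.archive.log.only.mode (name not shown above), such an environment could opt in as follows, accepting the added latency described above:

log.mining.archive.log.only.mode = true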

0

Positive integer value that specifies the number of hours to retain long running transactions between redo log switches. When set to 0, transactions are retained until a commit or rollback is detected.

The LogMiner adapter maintains an in-memory buffer of all running transactions. As all DML operations that are part of a transaction will be buffered until a commit or rollback is detected, long-running transactions should be avoided in order to not overflow that buffer. Any transaction that exceeds this configured value will be discarded entirely and no messages emitted for the operations that were part of the transaction.

While this option allows the behavior to be configured on a case-by-case basis, we plan to enhance this behavior in a future release by adding a scalable transaction buffer (see DBZ-3123).
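
As a sketch, the retention window could be capped at eight hours, assuming the property is named log.mining.transaction.retention.hours (the name referenced in the streaming metrics below):

log.mining.transaction.retention.hours = 8

Transactions that remain open longer than eight hours are then discarded from the buffer, and no events are emitted for their operations.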

Specifies the configured Oracle archive destination to use when mining archive logs with LogMiner.

The default behavior automatically selects the first valid local destination that is configured; however, a specific destination can be used by specifying the destination name, for example LOG_ARCHIVE_DEST_5.
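
For example, assuming the property is named log.mining.archive.destination.name (name not shown above), a specific destination could be pinned like this:

log.mining.archive.destination.name = LOG_ARCHIVE_DEST_5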

A list of database users to exclude from the LogMiner query; useful if there are changes from specific users that you always want to exclude from the capturing process.

false

Controls whether or not large object (CLOB or BLOB) column values are emitted in change events.

By default, change events contain large object columns, but the columns carry no values. There is a certain amount of overhead in processing and managing large object column types and payloads. To capture large object values and have them serialized in change events, set this option to true.

A comma-separated list of RAC node host names or addresses. This field is required to enable Oracle RAC support.

A comma-separated list of operation types that will be skipped during streaming. The operations include: c for inserts/create, u for updates, and d for deletes. By default, no operations are skipped.
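
An illustrative sketch of the three properties above, assuming the names lob.enabled, rac.nodes, and skipped.operations (names not shown above) and hypothetical RAC hosts:

lob.enabled = true
rac.nodes = rac-node-1.example.com,rac-node-2.example.com
skipped.operations = d

This configuration captures CLOB/BLOB values, connects against a two-node RAC cluster, and skips delete events during streaming.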

Debezium connector database history configuration properties

Debezium provides a set of database.history.* properties that control how the connector interacts with the schema history topic.

The following table describes the database.history properties for configuring the Debezium connector.

Table 6. Connector database history configuration properties
Property Default Description

The full name of the Kafka topic where the connector stores the database schema history.

A list of host/port pairs that the connector uses for establishing an initial connection to the Kafka cluster. This connection is used for retrieving the database schema history previously stored by the connector, and for writing each DDL statement read from the source database. Each pair should point to the same Kafka cluster used by the Kafka Connect process.

100

An integer value that specifies the maximum number of milliseconds the connector should wait during startup/recovery while polling for persisted data. The default is 100ms.

4

The maximum number of times that the connector should try to read persisted history data before the connector recovery fails with an error. The maximum amount of time to wait after receiving no data is recovery.attempts x recovery.poll.interval.ms.

false

A Boolean value that specifies whether the connector should ignore malformed or unknown database statements, or stop processing so a human can fix the issue. The safe default is false. Skipping should be used only with care because it can lead to data loss or mangling when the redo log is being processed.

Deprecated and scheduled for removal in a future release; use database.history.store.only.captured.tables.ddl instead.

false

A Boolean value that specifies whether the connector should record all DDL statements or only those that are relevant to tables whose changes are being captured by Debezium.

true records only DDL statements that are relevant to tables whose changes are being captured by Debezium. Set to true with care, because missing data might become necessary if you later change which tables have their changes captured.

The safe default is false.

false

A Boolean value that specifies whether the connector should record all DDL statements or only those that are relevant to tables whose changes are being captured by Debezium.

true records only DDL statements that are relevant to tables whose changes are being captured by Debezium. Set to true with care, because missing data might become necessary if you later change which tables have their changes captured.

The safe default is false.
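
For example, to record only DDL that affects captured tables, set the database.history.store.only.captured.tables.ddl property named above:

database.history.store.only.captured.tables.ddl = true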

Pass-through database history properties for configuring producer and consumer clients


Debezium relies on a Kafka producer to write schema changes to database history topics. Similarly, it relies on a Kafka consumer to read from database history topics when a connector starts. You define the configuration for the Kafka producer and consumer clients by assigning values to a set of pass-through configuration properties that begin with the database.history.producer.* and database.history.consumer.* prefixes. The pass-through producer and consumer database history properties control a range of behaviors, such as how these clients secure connections with the Kafka broker, as shown in the following example:

database.history.producer.security.protocol=SSL
database.history.producer.ssl.keystore.location=/var/private/ssl/kafka.server.keystore.jks
database.history.producer.ssl.keystore.password=test1234
database.history.producer.ssl.truststore.location=/var/private/ssl/kafka.server.truststore.jks
database.history.producer.ssl.truststore.password=test1234
database.history.producer.ssl.key.password=test1234

database.history.consumer.security.protocol=SSL
database.history.consumer.ssl.keystore.location=/var/private/ssl/kafka.server.keystore.jks
database.history.consumer.ssl.keystore.password=test1234
database.history.consumer.ssl.truststore.location=/var/private/ssl/kafka.server.truststore.jks
database.history.consumer.ssl.truststore.password=test1234
database.history.consumer.ssl.key.password=test1234

Debezium strips the prefix from the property name before it passes the property to the Kafka client.

See the Kafka documentation for more details about Kafka producer configuration properties and Kafka consumer configuration properties.

Debezium connector pass-through database driver configuration properties

The Debezium connector provides for pass-through configuration of the database driver. Pass-through database properties begin with the prefix database.*. For example, the connector passes properties such as database.foobar=false to the JDBC URL.

As is the case with the pass-through properties for database history clients, Debezium strips the prefixes from the properties before it passes them to the database driver.
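
For example, reusing the documentation's own placeholder property (foobar is a placeholder, not a real Oracle driver property), the connector configuration entry and the property the driver ultimately receives look like this:

# in the connector configuration
database.foobar=false
# after the database. prefix is stripped, the driver receives
foobar=false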

Monitoring

The Debezium Oracle connector has three metric types in addition to the built-in support for JMX metrics that Zookeeper, Kafka, and Kafka Connect have.

Please refer to the monitoring documentation for details of how to expose these metrics via JMX.

Snapshot Metrics

The MBean is debezium.oracle:type=connector-metrics,context=snapshot,server=<database.server.name>.

Attributes Type Description

string

The last snapshot event that the connector has read.

long

The number of milliseconds since the connector has read and processed the most recent event.

long

The total number of events that this connector has seen since last started or reset.

long

The number of events that have been filtered by include/exclude list filtering rules configured on the connector.

string[]

The list of tables that are captured by the connector.

int

The length of the queue used to pass events between the snapshotter and the main Kafka Connect loop.

int

The free capacity of the queue used to pass events between the snapshotter and the main Kafka Connect loop.

int

The total number of tables that are being included in the snapshot.

int

The number of tables that the snapshot has yet to copy.

boolean

Whether the snapshot was started.

boolean

Whether the snapshot was aborted.

boolean

Whether the snapshot completed.

long

The total number of seconds that the snapshot has taken so far, even if not complete.

Map<String, Long>

Map containing the number of rows scanned for each table in the snapshot. Tables are incrementally added to the Map during processing. Updates every 10,000 rows scanned and upon completing a table.

long

The maximum buffer size of the queue in bytes. This metric is available if max.queue.size.in.bytes is set to a positive long value.

long

The current volume, in bytes, of records in the queue.

Streaming Metrics

The MBean is debezium.oracle:type=connector-metrics,context=streaming,server=<database.server.name>.

Attributes Type Description

string

The last streaming event that the connector has read.

long

The number of milliseconds since the connector has read and processed the most recent event.

long

The total number of events that this connector has seen since last started or reset.

long

The number of events that have been filtered by include/exclude list filtering rules configured on the connector.

string[]

The list of tables that are captured by the connector.

int

The length of the queue used to pass events between the streamer and the main Kafka Connect loop.

int

The free capacity of the queue used to pass events between the streamer and the main Kafka Connect loop.

boolean

Flag that denotes whether the connector is currently connected to the database server.

long

The number of milliseconds between the last change event’s timestamp and the connector processing it. The values will incorporate any differences between the clocks on the machines where the database server and the connector are running.

long

The number of processed transactions that were committed.

Map<String, String>

The coordinates of the last received event.

string

Transaction identifier of the last processed transaction.

long

The maximum buffer size of the queue in bytes.

long

The current volume, in bytes, of records in the queue.

The Debezium Oracle connector also provides the following additional streaming metrics:

Table 7. Descriptions of additional streaming metrics
Attributes Type Description

string

The most recent system change number that has been processed.

string

The oldest system change number in the transaction buffer.

string

The last committed system change number from the transaction buffer.

string

The system change number currently written to the connector’s offsets.

string[]

Array of the log files that are currently mined.

long

The minimum number of logs specified for any LogMiner session.

long

The maximum number of logs specified for any LogMiner session.

string[]

Array of the current state for each mined logfile with the format filename|status.

int

The number of times the database has performed a log switch for the last day.

long

The number of DML operations observed in the last LogMiner session query.

long

The maximum number of DML operations observed while processing a single LogMiner session query.

long

The total number of DML operations observed.

long

The total number of LogMiner session queries (also known as batches) performed.

long

The duration of the last LogMiner session query’s fetch in milliseconds.

long

The maximum duration of any LogMiner session query’s fetch in milliseconds.

long

The duration for processing the last LogMiner query batch results in milliseconds.

long

The time in milliseconds spent parsing DML event SQL statements.

long

The duration in milliseconds to start the last LogMiner session.

long

The longest duration in milliseconds to start a LogMiner session.

long

The total duration in milliseconds spent by the connector starting LogMiner sessions.

long

The minimum duration in milliseconds spent processing results from a single LogMiner session.

long

The maximum duration in milliseconds spent processing results from a single LogMiner session.

long

The total duration in milliseconds spent processing results from LogMiner sessions.

long

The total duration in milliseconds spent by the JDBC driver fetching the next row to be processed from the log mining view.

long

The total number of rows processed from the log mining view across all sessions.

int

The number of entries fetched by the log mining query per database round-trip.

long

The number of milliseconds the connector sleeps before fetching another batch of results from the log mining view.

long

The maximum number of rows/second processed from the log mining view.

long

The average number of rows/second processed from the log mining view.

long

The average number of rows/second processed from the log mining view for the last batch.

long

The number of connection problems detected.

int

The number of hours that transactions will be retained by the connector’s in-memory buffer without being committed or rolled back before being discarded. See log.mining.transaction.retention for more details.

long

The number of current active transactions in the transaction buffer.

long

The number of committed transactions in the transaction buffer.

long

The number of rolled back transactions in the transaction buffer.

long

The average number of committed transactions per second in the transaction buffer.

long

The number of registered DML operations in the transaction buffer.

long

The time difference in milliseconds between when a change occurred in the transaction logs and when it is added to the transaction buffer.

long

The maximum time difference in milliseconds between when a change occurred in the transaction logs and when it is added to the transaction buffer.

long

The minimum time difference in milliseconds between when a change occurred in the transaction logs and when it is added to the transaction buffer.

string[]

An array of abandoned transaction identifiers removed from the transaction buffer due to their age. See log.mining.transaction.retention.hours for details.

string[]

An array of transaction identifiers that have been mined and rolled back in the transaction buffer.

long

The duration of the last transaction buffer commit operation in milliseconds.

long

The duration of the longest transaction buffer commit operation in milliseconds.

int

The number of errors detected.

int

The number of warnings detected.

int

The number of times the system change number has been checked for advancement and remains unchanged. This is an indicator that long-running transaction(s) are ongoing and preventing the connector from flushing the latest processed system change number to the connector’s offsets. Under optimal operations, this should always be or remain close to 0.

int

The number of DDL records that have been detected but could not be parsed by the DDL parser. This should always be 0; however when allowing unparsable DDL to be skipped, this metric can be used to determine if any warnings have been written to the connector logs.

long

The current mining session’s user global area (UGA) memory consumption in bytes.

long

The maximum mining session’s user global area (UGA) memory consumption in bytes across all mining sessions.

long

The current mining session’s process global area (PGA) memory consumption in bytes.

long

The maximum mining session’s process global area (PGA) memory consumption in bytes across all mining sessions.

Schema History Metrics

The MBean is debezium.oracle:type=connector-metrics,context=schema-history,server=<database.server.name>.

Attributes Type Description

string

One of STOPPED, RECOVERING (recovering history from the storage), or RUNNING, describing the state of the database history.

long

The time in epoch seconds at which recovery started.

long

The number of changes that were read during the recovery phase.

long

The total number of schema changes applied during recovery and runtime.

long

The number of milliseconds that elapsed since the last change was recovered from the history store.

long

The number of milliseconds that elapsed since the last change was applied.

string

The string representation of the last change recovered from the history store.

string

The string representation of the last applied change.

Surrogate schema evolution

The Oracle connector automatically tracks and applies table schema changes by parsing DDL from the redo logs. If the DDL parser encounters an unsupported statement, the connector offers an alternative way to apply the schema change should the need arise.

By default, the connector stops when an unparseable DDL statement is encountered. The DDL change can then be applied manually by using Debezium signalling to trigger the update of the database schema.

The type of the schema update action is schema-changes. It will update the schema of all tables enumerated in the signal parameters. The message does not contain the update to the schema, but the complete new schema structure.

Table 8. Action parameters
Name Description

database

The name of the Oracle database.

schema

The name of the schema where changes are applied.

changes

An array containing the requested schema updates.

changes.type

Type of the schema change, usually ALTER

changes.id

The fully-qualified name of the table

changes.table

The fully-qualified name of the table

changes.table.defaultCharsetName

The character set name used for the table, if different from the database default

changes.table.primaryKeyColumnNames

Array with the names of the columns that compose the primary key

changes.table.columns

Array with the column metadata

…​columns.name

The name of the column

…​columns.jdbcType

The JDBC type of the column as defined at JDBC API

…​columns.typeName

The name of the column type

…​columns.typeExpression

The full column type definition

…​columns.charsetName

The column character set if different from the default

…​columns.length

The length/size constraint of the column

…​columns.scale

The scale of a numeric column

…​columns.position

The position of the column in the table starting with 1

…​columns.optional

Boolean true if column value is not mandatory

…​columns.autoIncremented

Boolean true if column value is automatically calculated from a sequence

…​columns.generated

Boolean true if column value is automatically calculated

After the schema-changes signal has been inserted, the connector must be restarted with an altered configuration that sets the database.history.skip.unparseable.ddl option to true. Once the connector’s commit SCN has advanced beyond the DDL change, it is recommended to return the connector’s configuration to its previous state so that unparseable DDL statements are not skipped unexpectedly.
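
A sketch of the temporary configuration change described above, using the database.history.skip.unparseable.ddl option:

database.history.skip.unparseable.ddl = true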

Table 9. Example of a logging record
Column Value

id

924e3ff8-2245-43ca-ba77-2af9af02fa07

type

schema-changes

data

{
   "database":"ORCLPDB1",
   "schema":"DEBEZIUM",
   "changes":[
      {
         "type":"ALTER",
         "id":"\"ORCLPDB1\".\"DEBEZIUM\".\"CUSTOMER\"",
         "table":{
            "defaultCharsetName":null,
            "primaryKeyColumnNames":[
               "ID",
               "NAME"
            ],
            "columns":[
               {
                  "name":"ID",
                  "jdbcType":2,
                  "typeName":"NUMBER",
                  "typeExpression":"NUMBER",
                  "charsetName":null,
                  "length":9,
                  "scale":0,
                  "position":1,
                  "optional":false,
                  "autoIncremented":false,
                  "generated":false
               },
               {
                  "name":"NAME",
                  "jdbcType":12,
                  "typeName":"VARCHAR2",
                  "typeExpression":"VARCHAR2",
                  "charsetName":null,
                  "length":1000,
                  "position":2,
                  "optional":true,
                  "autoIncremented":false,
                  "generated":false
               },
               {
                  "name":"SCORE",
                  "jdbcType":2,
                  "typeName":"NUMBER",
                  "typeExpression":"NUMBER",
                  "charsetName":null,
                  "length":6,
                  "scale":2,
                  "position":3,
                  "optional":true,
                  "autoIncremented":false,
                  "generated":false
               },
               {
                  "name":"REGISTERED",
                  "jdbcType":93,
                  "typeName":"TIMESTAMP(6)",
                  "typeExpression":"TIMESTAMP(6)",
                  "charsetName":null,
                  "length":6,
                  "position":4,
                  "optional":true,
                  "autoIncremented":false,
                  "generated":false
               }
            ]
         }
      }
   ]
}

XStreams support

The Debezium Oracle connector ingests changes using native Oracle LogMiner by default. The connector can be toggled to use Oracle XStream instead; doing so requires specific database and connector configuration that differs from the LogMiner setup. To use the XStream API, you need a license for the GoldenGate product (although it is not required that GoldenGate itself be installed).

Preparing the Database

Configuration needed for Oracle XStream
ORACLE_SID=ORCLCDB dbz_oracle sqlplus /nolog

CONNECT sys/top_secret AS SYSDBA
alter system set db_recovery_file_dest_size = 5G;
alter system set db_recovery_file_dest = '/opt/oracle/oradata/recovery_area' scope=spfile;
alter system set enable_goldengate_replication=true;
shutdown immediate
startup mount
alter database archivelog;
alter database open;
-- Should show "Database log mode: Archive Mode"
archive log list

exit;

In addition, supplemental logging must be enabled for the captured tables or for the database so that change events capture the before state of changed database rows. The following illustrates how to configure this on a specific table, which is the ideal choice to minimize the amount of information captured in the Oracle redo logs.

ALTER TABLE inventory.customers ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;

Creating users for the connector

The Debezium Oracle connector requires that user accounts be set up with specific permissions so that the connector can capture change events. The following briefly describes these user configurations using a multi-tenant database model.

Creating an XStream Administrator user
sqlplus sys/top_secret@//localhost:1521/ORCLCDB as sysdba
  CREATE TABLESPACE xstream_adm_tbs DATAFILE '/opt/oracle/oradata/ORCLCDB/xstream_adm_tbs.dbf'
    SIZE 25M REUSE AUTOEXTEND ON MAXSIZE UNLIMITED;
  exit;

sqlplus sys/top_secret@//localhost:1521/ORCLPDB1 as sysdba
  CREATE TABLESPACE xstream_adm_tbs DATAFILE '/opt/oracle/oradata/ORCLCDB/ORCLPDB1/xstream_adm_tbs.dbf'
    SIZE 25M REUSE AUTOEXTEND ON MAXSIZE UNLIMITED;
  exit;

sqlplus sys/top_secret@//localhost:1521/ORCLCDB as sysdba
  CREATE USER c##dbzadmin IDENTIFIED BY dbz
    DEFAULT TABLESPACE xstream_adm_tbs
    QUOTA UNLIMITED ON xstream_adm_tbs
    CONTAINER=ALL;

  GRANT CREATE SESSION, SET CONTAINER TO c##dbzadmin CONTAINER=ALL;

  BEGIN
     DBMS_XSTREAM_AUTH.GRANT_ADMIN_PRIVILEGE(
        grantee                 => 'c##dbzadmin',
        privilege_type          => 'CAPTURE',
        grant_select_privileges => TRUE,
        container               => 'ALL'
     );
  END;
  /

  exit;
Creating the connector’s XStream user
sqlplus sys/top_secret@//localhost:1521/ORCLCDB as sysdba
  CREATE TABLESPACE xstream_tbs DATAFILE '/opt/oracle/oradata/ORCLCDB/xstream_tbs.dbf'
    SIZE 25M REUSE AUTOEXTEND ON MAXSIZE UNLIMITED;
  exit;

sqlplus sys/top_secret@//localhost:1521/ORCLPDB1 as sysdba
  CREATE TABLESPACE xstream_tbs DATAFILE '/opt/oracle/oradata/ORCLCDB/ORCLPDB1/xstream_tbs.dbf'
    SIZE 25M REUSE AUTOEXTEND ON MAXSIZE UNLIMITED;
  exit;

sqlplus sys/top_secret@//localhost:1521/ORCLCDB as sysdba
  CREATE USER c##dbzuser IDENTIFIED BY dbz
    DEFAULT TABLESPACE xstream_tbs
    QUOTA UNLIMITED ON xstream_tbs
    CONTAINER=ALL;

  GRANT CREATE SESSION TO c##dbzuser CONTAINER=ALL;
  GRANT SET CONTAINER TO c##dbzuser CONTAINER=ALL;
  GRANT SELECT ON V_$DATABASE to c##dbzuser CONTAINER=ALL;
  GRANT FLASHBACK ANY TABLE TO c##dbzuser CONTAINER=ALL;
  GRANT SELECT_CATALOG_ROLE TO c##dbzuser CONTAINER=ALL;
  GRANT EXECUTE_CATALOG_ROLE TO c##dbzuser CONTAINER=ALL;
  exit;

Create an XStream Outbound Server

Create an XStream Outbound server (given the right privileges, this may be done automatically by the connector going forward, see DBZ-721):

Create an XStream Outbound Server
sqlplus c##dbzadmin/dbz@//localhost:1521/ORCLCDB
DECLARE
  tables  DBMS_UTILITY.UNCL_ARRAY;
  schemas DBMS_UTILITY.UNCL_ARRAY;
BEGIN
    tables(1)  := NULL;
    schemas(1) := 'debezium';
  DBMS_XSTREAM_ADM.CREATE_OUTBOUND(
    server_name     =>  'dbzxout',
    table_names     =>  tables,
    schema_names    =>  schemas);
END;
/
exit;
Configure the XStream user account to connect to the XStream Outbound Server
sqlplus sys/top_secret@//localhost:1521/ORCLCDB as sysdba
BEGIN
  DBMS_XSTREAM_ADM.ALTER_OUTBOUND(
    server_name  => 'dbzxout',
    connect_user => 'c##dbzuser');
END;
/
exit;

A single XStream Outbound server cannot be shared by multiple Debezium Oracle connectors. Each connector requires its own XStream Outbound server to be configured.

Configuring the XStream adapter

By default, Debezium uses Oracle LogMiner to ingest change events from Oracle. To use Oracle XStream instead, the connector configuration must be adjusted to enable this adapter.

The following example configuration illustrates that by adding the database.connection.adapter and database.out.server.name properties, the connector can be toggled to use the XStream API implementation.

{
    "name": "inventory-connector",
    "config": {
        "connector.class" : "io.debezium.connector.oracle.OracleConnector",
        "tasks.max" : "1",
        "database.server.name" : "server1",
        "database.hostname" : "<oracle ip>",
        "database.port" : "1521",
        "database.user" : "c##dbzuser",
        "database.password" : "dbz",
        "database.dbname" : "ORCLCDB",
        "database.pdb.name" : "ORCLPDB1",
        "database.history.kafka.bootstrap.servers" : "kafka:9092",
        "database.history.kafka.topic": "schema-changes.inventory",
        "database.connection.adapter": "xstream",
        "database.out.server.name" : "dbzxout"
    }
}

Connector properties

The following configuration properties are required when using XStream unless a default value is available.

Property

Default

Description

Name of the XStream outbound server configured in the database.

Behavior when things go wrong

Debezium is a distributed system that captures all changes in multiple upstream databases; it never misses or loses an event. When the system is operating normally or being managed carefully then Debezium provides exactly once delivery of every change event record.

If a fault does happen then the system does not lose any events. However, while it is recovering from the fault, it might repeat some change events. In these abnormal situations, Debezium, like Kafka, provides at least once delivery of change events.

The rest of this section describes how Debezium handles various kinds of faults and problems.

ORA-25191 - Cannot reference overflow table of an index-organized table

Oracle may issue this error during the snapshot phase when encountering an index-organized table (IOT). This error means that the connector has attempted to execute an operation that must be executed against the parent index-organized table that contains the specified overflow table.

To resolve this, the IOT name used in the SQL operation should be replaced with the parent index-organized table name. To determine the parent index-organized table name, use the following SQL:

SELECT IOT_NAME
  FROM DBA_TABLES
 WHERE OWNER='<tablespace-owner>'
   AND TABLE_NAME='<iot-table-name-that-failed>'

The connector’s table.include.list or table.exclude.list configuration options should then be adjusted to explicitly include or exclude the appropriate tables, to prevent the connector from attempting to capture changes from the child index-organized table.

LogMiner adapter does not capture changes made by SYS or SYSTEM

Oracle uses the SYS and SYSTEM accounts for many internal changes, and the connector therefore automatically filters changes made by these users when fetching changes from LogMiner. Make sure to use a non-SYS, non-SYSTEM user account for any changes that the Debezium Oracle connector should emit.