Debezium Connector for SQL Server

This connector is in an early alpha version and is incomplete. It is under intensive development and we hope to move to the first beta version soon. Want to help us work on it? Learn how.

This connector is in an incubating state and is not feature-complete (most notably, there is no support for DDL changes while the connector is running), and the structure of emitted CDC messages may change in future revisions.

Debezium’s SQL Server Connector can monitor and record the row-level changes in the schemas of a SQL Server 2017 database. This connector was added in Debezium 0.9.0.

The first time it connects to a SQL Server database/cluster, it reads a consistent snapshot of all of the schemas. When that snapshot is complete, the connector continuously streams the changes that were committed to SQL Server and generates corresponding insert, update and delete events. All of the events for each table are recorded in a separate Kafka topic, where they can be easily consumed by applications and services.

Overview

The functionality of the connector is based upon the change data capture feature provided by SQL Server Standard (since SQL Server 2016 SP1) or Enterprise edition. Using this mechanism, a SQL Server capture process monitors all databases and tables the user is interested in and stores the changes into specifically created CDC tables that have a stored-procedure facade.
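
As an illustration only, the change data recorded by CDC can be inspected with the table-valued functions that SQL Server generates for each capture instance. The sketch below assumes a table dbo.MyTable with the default capture instance name dbo_MyTable; nothing here is created by the Debezium connector itself.

-- Minimal sketch: read all changes recorded by CDC for the capture
-- instance 'dbo_MyTable' between the lowest and highest available LSNs.
-- (Assumes the capture instance already has at least one captured change.)
USE MyDB
GO
DECLARE @from_lsn BINARY(10), @to_lsn BINARY(10);
SET @from_lsn = sys.fn_cdc_get_min_lsn('dbo_MyTable');
SET @to_lsn   = sys.fn_cdc_get_max_lsn();

SELECT *
FROM cdc.fn_cdc_get_all_changes_dbo_MyTable(@from_lsn, @to_lsn, N'all');
GO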

The database operator must enable CDC for the tables that should be monitored by the Debezium plug-in. The connector then produces a change event for every row-level insert, update, and delete operation that was published via the CDC API, recording all the change events for each table in a separate Kafka topic. The client applications read the Kafka topics that correspond to the database tables they are interested in following, and react to every row-level event they see in those topics.

The database operator normally enables CDC in the mid-life of a database and/or table. This means that the connector won’t have the complete history of all changes that have been made to the database. Therefore, when the SQL Server connector first connects to a particular SQL Server database, it starts by performing a consistent snapshot of each of the database schemas. After the connector completes the snapshot, it continues streaming changes from the exact point at which the snapshot was made. This way, we start with a consistent view of all of the data, yet continue reading without having lost any of the changes made while the snapshot was taking place.

The connector is also tolerant of failures. As the connector reads changes and produces events, it records, with each event, the position in the database log (the LSN, or Log Sequence Number) that is associated with the CDC record. If the connector stops for any reason (including communication failures, network problems, or crashes), upon restart it simply continues reading the CDC tables where it last left off. This includes snapshots: if the snapshot was not completed when the connector is stopped, upon restart it will begin a new snapshot.

Setting up SQL Server

Before using the Debezium SQL Server connector to monitor the changes committed on SQL Server, first enable CDC on the monitored database. Please bear in mind that CDC cannot be enabled for the master database.

-- ====
-- Enable Database for CDC template
-- ====
USE MyDB
GO
EXEC sys.sp_cdc_enable_db
GO

Then enable CDC for each table that you plan to monitor:

-- =========
-- Enable a Table Specifying Filegroup Option Template
-- =========
USE MyDB
GO

EXEC sys.sp_cdc_enable_table
    @source_schema        = N'dbo',
    @source_name          = N'MyTable',
    @role_name            = N'MyRole',
    @filegroup_name       = N'MyDB_CT',
    @supports_net_changes = 1
GO
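
To double-check that CDC is active, you can query the SQL Server catalog views. The following queries are an illustrative sketch only, using the MyDB and MyTable names from the templates above:

-- Is CDC enabled for the database?
SELECT name, is_cdc_enabled
FROM sys.databases
WHERE name = 'MyDB';
GO

-- Is the table tracked by CDC?
USE MyDB
GO
SELECT name, is_tracked_by_cdc
FROM sys.tables
WHERE name = 'MyTable';
GO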

SQL Server on Azure

The SQL Server plug-in has not been tested with SQL Server on Azure. We welcome feedback from any user who tries the plug-in with a database in a managed environment.

How the SQL Server connector works

TBD

Data types

As described above, the SQL Server connector represents the changes to rows with events that are structured like the table in which the row exists. The event contains a field for each column value, and how that value is represented in the event depends on the SQL Server data type of the column. This section describes this mapping.

The following table describes how the connector maps each of the SQL Server data types to a literal type and semantic type within the events' fields. Here, the literal type describes how the value is literally represented using Kafka Connect schema types, namely INT8, INT16, INT32, INT64, FLOAT32, FLOAT64, BOOLEAN, STRING, BYTES, ARRAY, MAP, and STRUCT. The semantic type describes how the Kafka Connect schema captures the meaning of the field using the name of the Kafka Connect schema for the field.

SQL Server Data Type | Literal type (schema type) | Semantic type (schema name) | Notes
DATETIMEOFFSET[(P)] | STRING | io.debezium.time.ZonedTimestamp | A string representation of a timestamp with timezone information, where the timezone is GMT
BIT | BOOLEAN | n/a
TINYINT | INT16 | n/a
SMALLINT | INT16 | n/a
INT | INT32 | n/a
BIGINT | INT64 | n/a
REAL | FLOAT32 | n/a
FLOAT[(N)] | FLOAT64 | n/a
CHAR[(N)] | STRING | n/a
VARCHAR[(N)] | STRING | n/a
TEXT | STRING | n/a
NCHAR[(N)] | STRING | n/a
NVARCHAR[(N)] | STRING | n/a
NTEXT | STRING | n/a
XML | STRING | io.debezium.data.Xml | Contains the string representation of an XML document

Other data type mappings are described in the following sections.

If present, a column’s default value will be propagated to the corresponding field’s Kafka Connect schema. Change messages will contain the field’s default value (unless an explicit column value had been given), so there should rarely be the need to obtain the default value from the schema. Passing the default value helps, though, with satisfying the compatibility rules when using Avro as the serialization format together with the Confluent schema registry.
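
As an illustration only (the emitted schema structure is not final at this stage), a hypothetical column quantity INT NOT NULL DEFAULT 0 would carry its default into the corresponding Avro field schema roughly like this:

{
  "name": "quantity",
  "type": "int",
  "default": 0
}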

Temporal values

SQL Server Data Type | Literal type (schema type) | Semantic type (schema name) | Notes
DATETIME2(7) | INT64 | io.debezium.time.NanoTimestamp | Represents the number of nanoseconds past epoch, and does not include timezone information.
DATE | INT32 | io.debezium.time.Date | Represents the number of days since epoch.
TIME(0), TIME(1), TIME(2), TIME(3) | INT32 | io.debezium.time.Time | Represents the number of milliseconds past midnight, and does not include timezone information.
TIME(4), TIME(5), TIME(6) | INT64 | io.debezium.time.MicroTime | Represents the number of microseconds past midnight, and does not include timezone information.
TIME(7) | INT64 | io.debezium.time.NanoTime | Represents the number of nanoseconds past midnight, and does not include timezone information.
DATETIME | INT64 | io.debezium.time.Timestamp | Represents the number of milliseconds past epoch, and does not include timezone information.
SMALLDATETIME | INT64 | io.debezium.time.Timestamp | Represents the number of milliseconds past epoch, and does not include timezone information.
DATETIME2(0), DATETIME2(1), DATETIME2(2), DATETIME2(3) | INT64 | io.debezium.time.Timestamp | Represents the number of milliseconds past epoch, and does not include timezone information.
DATETIME2(4), DATETIME2(5), DATETIME2(6) | INT64 | io.debezium.time.MicroTimestamp | Represents the number of microseconds past epoch, and does not include timezone information.

Timestamp values

The DATETIME, SMALLDATETIME and DATETIME2 types represent a timestamp without time zone information. Such columns are converted into an equivalent Kafka Connect value based on UTC. So for instance the DATETIME2 value "2018-06-20 15:13:16.945104" will be represented by an io.debezium.time.MicroTimestamp with the value "1529507596945104".

Note that the timezone of the JVM running Kafka Connect and Debezium does not affect this conversion.
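
As an illustrative cross-check of this conversion (not something the connector itself runs), the same number can be computed in T-SQL with DATEDIFF_BIG, which is available since SQL Server 2016:

-- Microseconds between the UNIX epoch and the example value, with no
-- timezone adjustment applied; this returns 1529507596945104.
SELECT DATEDIFF_BIG(
    MICROSECOND,
    CAST('1970-01-01 00:00:00' AS DATETIME2(6)),
    CAST('2018-06-20 15:13:16.945104' AS DATETIME2(6))
);
GO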

Decimal values

SQL Server Data Type | Literal type (schema type) | Semantic type (schema name)
MONEY | BYTES | org.apache.kafka.connect.data.Decimal
NUMERIC[(P[,S])] | BYTES | org.apache.kafka.connect.data.Decimal
DECIMAL[(P[,S])] | BYTES | org.apache.kafka.connect.data.Decimal
SMALLMONEY | BYTES | org.apache.kafka.connect.data.Decimal

For each of these types, the scale schema parameter contains an integer representing how many digits the decimal point was shifted, and the connect.decimal.precision schema parameter contains an integer representing the precision of the given decimal value.

Deploying a connector

If you’ve already installed Zookeeper, Kafka, and Kafka Connect, then using Debezium’s SQL Server connector is easy. Simply download the connector’s plugin archive, extract the JARs into your Kafka Connect environment, and add the directory with the JARs to Kafka Connect’s classpath. Restart your Kafka Connect process to pick up the new JARs.

If immutable containers are your thing, then check out Debezium’s Docker images for Zookeeper, Kafka and Kafka Connect with the SQL Server connector already pre-installed and ready to go. You can even run Debezium on OpenShift.

To use the connector to produce change events for a particular SQL Server database or cluster:

  1. Enable CDC on SQL Server to publish the CDC events in the database.

  2. Create a configuration file for the SQL Server connector and use the Kafka Connect REST API to add that connector to your Kafka Connect cluster.

When the connector starts, it will grab a consistent snapshot of the schemas in your SQL Server database and start streaming changes, producing events for every inserted, updated, and deleted row. You can also choose to produce events for a subset of the schemas and tables, and optionally ignore, mask, or truncate columns that are sensitive, too large, or not needed.

Example configuration

Using the SQL Server connector is straightforward. Here is an example of the configuration for a SQL Server connector that monitors a SQL Server instance at port 1433 on 192.168.99.100, which we logically name fullfillment:

{
  "name": "inventory-connector",  (1)
  "config": {
    "connector.class": "io.debezium.connector.sqlserver.SqlServerConnector", (2)
    "database.hostname": "192.168.99.100", (3)
    "database.port": "1433", (4)
    "database.user": "sa", (5)
    "database.password": "Password!", (6)
    "database.dbname": "testDB", (7)
    "database.server.name": "fullfillment", (8)
    "table.whitelist": "customers", (9)
    "database.history.kafka.bootstrap.servers": "kafka:9092", (10)
    "database.history.kafka.topic": "dbhistory.fullfillment" (11)
  }
}
1 The name of our connector when we register it with a Kafka Connect service.
2 The name of this SQL Server connector class.
3 The address of the SQL Server instance.
4 The port number of the SQL Server instance.
5 The name of the SQL Server user.
6 The password for the SQL Server user.
7 The name of the database to capture changes from.
8 The logical name of the SQL Server instance/cluster, which forms a namespace and is used in all the names of the Kafka topics to which the connector writes, the Kafka Connect schema names, and the namespaces of the corresponding Avro schema when the Avro converter is used.
9 A list of all tables whose changes Debezium should capture.
10 The list of Kafka brokers that this connector will use to write and recover DDL statements to the database history topic.
11 The name of the database history topic where the connector will write and recover DDL statements. This topic is for internal use only and should not be used by consumers.

See the complete list of connector properties that can be specified in these configurations.

This configuration can be sent via POST to a running Kafka Connect service, which will then record the configuration and start up the one connector task that will connect to the SQL Server database, read the CDC tables, and record events to Kafka topics.
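
For instance, assuming the configuration above is saved in a file named register-sqlserver.json and that the Kafka Connect REST interface listens on its default port 8083 on localhost (both assumptions are for illustration only), the connector could be registered like this:

curl -i -X POST \
  -H "Accept: application/json" \
  -H "Content-Type: application/json" \
  http://localhost:8083/connectors/ \
  -d @register-sqlserver.json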

Connector properties

The following configuration properties are required unless a default value is available.

Property Default Description

table.blacklist

An optional comma-separated list of regular expressions that match fully-qualified table identifiers for tables to be excluded from monitoring; any table not included in the blacklist will be monitored. Each identifier is of the form schemaName.tableName. May not be used with table.whitelist.

name

Unique name for the connector. Attempting to register again with the same name will fail. (This property is required by all Kafka Connect connectors.)

connector.class

The name of the Java class for the connector. Always use a value of io.debezium.connector.sqlserver.SqlServerConnector for the SQL Server connector.

tasks.max

Default: 1

The maximum number of tasks that should be created for this connector. The SQL Server connector always uses a single task and therefore does not use this value, so the default is always acceptable.

database.hostname

IP address or hostname of the SQL Server database server.

database.port

Default: 1433

Integer port number of the SQL Server database server.

database.user

Username to use when connecting to the SQL Server database server.

database.password

Password to use when connecting to the SQL Server database server.

database.dbname

The name of the SQL Server database from which to stream the changes.

database.server.name

Logical name that identifies and provides a namespace for the particular SQL Server database server being monitored. The logical name should be unique across all other connectors, since it is used as a prefix for all Kafka topic names emanating from this connector.

database.history.kafka.topic

The full name of the Kafka topic where the connector will store the database schema history.

database.history​.kafka.bootstrap.servers

A list of host/port pairs that the connector will use for establishing an initial connection to the Kafka cluster. This connection will be used for retrieving database schema history previously stored by the connector, and for writing each DDL statement read from the source database. This should point to the same Kafka cluster used by the Kafka Connect process.

table.whitelist

An optional comma-separated list of regular expressions that match fully-qualified table identifiers for tables to be monitored; any table not included in the whitelist will be excluded from monitoring. Each identifier is of the form schemaName.tableName. By default the connector will monitor every non-system table in each monitored schema. May not be used with table.blacklist.

The following advanced configuration properties have good defaults that will work in most situations and therefore rarely need to be specified in the connector’s configuration.

Property Default Description

heartbeat.topics.prefix

Default: __debezium-heartbeat

Controls the naming of the topic to which heartbeat messages are sent.
The topic is named according to the pattern <heartbeat.topics.prefix>.<server.name>.

snapshot.mode

Default: initial

A mode for taking an initial snapshot of the structure and optionally data of captured tables. Supported values are initial (will take a snapshot of structure and data of captured tables; useful if topics should be populated with a complete representation of the data from the captured tables) and initial_schema_only (will take a snapshot of the structure of captured tables only; useful if only changes happening from now onwards should be propagated to topics). Once the snapshot is complete, the connector will continue reading change events from the database’s CDC tables.

snapshot.locking.mode

Default: none

Controls how long the connector locks the monitored tables for snapshot execution. The default is none, which means that the connector does not hold any locks on the monitored tables. Using a value of exclusive ensures that the connector holds the exclusive lock (and thus prevents any reads and updates) for all monitored tables.

poll.interval.ms

Default: 1000

Positive integer value that specifies the number of milliseconds the connector should wait during each iteration for new change events to appear. Defaults to 1000 milliseconds, or 1 second.

max.queue.size

Default: 8192

Positive integer value that specifies the maximum size of the blocking queue into which change events read from the database log are placed before they are written to Kafka. This queue can provide backpressure to the change event reader when, for example, writes to Kafka are slow or Kafka is not available. Events that appear in the queue are not included in the offsets periodically recorded by this connector. Defaults to 8192, and should always be larger than the maximum batch size specified in the max.batch.size property.

max.batch.size

Default: 2048

Positive integer value that specifies the maximum size of each batch of events that should be processed during each iteration of this connector. Defaults to 2048.

heartbeat.interval.ms

Default: 0

Controls how frequently heartbeat messages are sent.
This property contains an interval in milliseconds that defines how frequently the connector sends messages into a heartbeat topic. This can be used to monitor whether the connector is still receiving change events from the database. You should also leverage heartbeat messages in cases where only records in non-captured tables are changed for a longer period of time. In such a situation the connector would proceed to read the log from the database but never emit any change messages into Kafka, which in turn means that no offset updates will be committed to Kafka. This may result in more change events being re-sent after a connector restart. Set this parameter to 0 to not send heartbeat messages at all.
Disabled by default.

The connector also supports pass-through configuration properties that are used when creating the Kafka producer and consumer. Specifically, all connector configuration properties that begin with the database.history.producer. prefix are used (without the prefix) when creating the Kafka producer that writes to the database history, and all those that begin with the prefix database.history.consumer. are used (without the prefix) when creating the Kafka consumer that reads the database history upon connector startup.

In addition to the pass-through to the Kafka producer and consumer, the properties starting with database., e.g. database.applicationName=debezium, are passed to the JDBC URL.

For example, the following connector configuration properties can be used to secure connections to the Kafka broker:

database.history.producer.security.protocol=SSL
database.history.producer.ssl.keystore.location=/var/private/ssl/kafka.server.keystore.jks
database.history.producer.ssl.keystore.password=test1234
database.history.producer.ssl.truststore.location=/var/private/ssl/kafka.server.truststore.jks
database.history.producer.ssl.truststore.password=test1234
database.history.producer.ssl.key.password=test1234
database.history.consumer.security.protocol=SSL
database.history.consumer.ssl.keystore.location=/var/private/ssl/kafka.server.keystore.jks
database.history.consumer.ssl.keystore.password=test1234
database.history.consumer.ssl.truststore.location=/var/private/ssl/kafka.server.truststore.jks
database.history.consumer.ssl.truststore.password=test1234
database.history.consumer.ssl.key.password=test1234

Be sure to consult the Kafka documentation for all of the configuration properties for Kafka producers and consumers. (The SQL Server connector does use the new consumer.)
