Debezium Change Log

All notable changes for Debezium releases are documented in this file. Release numbers follow Semantic Versioning.

Release 0.5.1 (June 9, 2017)

Kafka compatibility

This release has been built against Kafka Connect 0.10.2.0 and has been tested with version 0.10.2.0 of the Kafka brokers. See the Kafka documentation for compatibility with other versions of Kafka brokers.

Upgrading

Before upgrading the MySQL, MongoDB, or PostgreSQL connectors, be sure to check the backward-incompatible changes that have been made since the release you were using.

When you decide to upgrade one of these connectors to 0.5.1 from any of the earlier 0.4.1, 0.4.0, 0.3.x, 0.2.x, or 0.1.x versions, first check the upgrading notes for the version you’re using. Gracefully stop the running connector, remove the old plugin files, install the 0.5.1 plugin files, and restart the connector using the same configuration. Upon restart, the 0.5.1 connectors will continue where the previous connector left off. As one might expect, all change events previously written to Kafka by the old connector will not be modified.

Breaking changes

This release includes the following change that affects existing installations that capture system tables:

  • MySQL connector should apply database and table filters to system dbs/tables DBZ-242

New Features

This release includes the following new features:

  • MySQL Connector should support 'Point' data type DBZ-222

  • Support tstzrange column type on Postgres DBZ-280

Fixes

This release includes the following fixes:

  • Control how Debezium connectors map tables to topics for sharding and other use cases DBZ-121

  • MySqlConnector Table and Database recommenders cause timeouts on large instances DBZ-232

  • Option to disable SSL certificate validation for PostgreSQL DBZ-244

  • Let enum types implement EnumeratedValue DBZ-262

  • The MySQL connector is failing with the DDL statements. DBZ-198

  • Correct MongoDB build DBZ-213

  • MongoDB connector should handle new primary better DBZ-214

  • Validate that database.server.name and database.history.kafka.topic have different values DBZ-215

  • When restarting Kafka Connect, we get io.debezium.text.ParsingException DBZ-216

  • Postgres connector crash on a database managed by Django DBZ-223

  • MySQL Connector doesn’t handle any value above '2147483647' for 'INT UNSIGNED' types DBZ-228

  • MySqlJdbcContext#userHasPrivileges() is broken for multiple privileges DBZ-229

  • Postgres Connector does not work when "sslmode" is "require" DBZ-238

  • Test PostgresConnectorIT.shouldSupportSSLParameters is incorrect DBZ-245

  • Recommender and default value broken for EnumeratedValue type DBZ-246

  • PG connector is CPU consuming DBZ-250

  • MySQL tests are interdependent DBZ-251

  • MySQL DDL parser fails on "ANALYZE TABLE" statement DBZ-253

  • Binary fields with trailing "00" are truncated DBZ-254

  • Enable Maven repository caching on Travis DBZ-274

  • Memory leak and excessive CPU usage when using materialized views DBZ-277

  • Postgres task should fail when connection to server is lost DBZ-281

  • Fix some wrong textual descriptions of default values DBZ-282

  • Apply consistent default value for Postgres port DBZ-237

  • Make Docker images run on OpenShift DBZ-240

  • Don’t mention default value for "database.server.name" DBZ-243

Release 0.5.0 (March 27, 2017)

Kafka compatibility

This release has been built against Kafka Connect 0.10.2.0 and has been tested with version 0.10.2.0 of the Kafka brokers. See the Kafka documentation for compatibility with other versions of Kafka brokers.

Upgrading

Before upgrading the MySQL, MongoDB, or PostgreSQL connectors, be sure to check the backward-incompatible changes that have been made since the release you were using.

When you decide to upgrade one of these connectors to 0.5.0 from any of the earlier 0.4.1, 0.4.0, 0.3.x, 0.2.x, or 0.1.x versions, first check the upgrading notes for the version you’re using. Gracefully stop the running connector, remove the old plugin files, install the 0.5.0 plugin files, and restart the connector using the same configuration. Upon restart, the 0.5.0 MySQL connectors will continue where the previous connector left off. As one might expect, all change events previously written to Kafka by the old connector will not be modified.

Breaking changes

This release includes the following changes that are likely to affect existing installations:

  • Upgraded from Kafka 0.10.1.1 to 0.10.2.0. DBZ-203

New Features

This release has no new features since the previous release.

Fixes

This release includes the following fixes relative to the 0.4.1 release:

  • MySQL connector now better handles DDL statements with BEGIN…END blocks, especially those that use IF() functions and CASE…WHEN statements. DBZ-198

  • MySQL connector handles 2-digit years in DATETIME, DATE, TIMESTAMP, and YEAR columns in the same way as MySQL. DBZ-205

Release 0.4.1 (March 17, 2017)

Kafka compatibility

This release has been tested with Kafka Connect 0.10.1.1 (or a subsequent API-compatible release), and is known to be incompatible with Kafka Connect 0.9.0.x due to binary incompatible changes in the Kafka 0.10.0 API. See DBZ-80 for details, and Kafka documentation for compatibility with other versions of Kafka brokers.

Upgrading

Before upgrading the MySQL, MongoDB, or PostgreSQL connectors, be sure to check the backward-incompatible changes that have been made since the release you were using.

When you decide to upgrade one of these connectors to 0.4.1 from any of the earlier 0.4.0, 0.3.x, 0.2.x, or 0.1.x versions, first check the upgrading notes for the version you’re using. Gracefully stop the running connector, remove the old plugin files, install the 0.4.1 plugin files, and restart the connector using the same configuration. Upon restart, the 0.4.1 MySQL connectors will continue where the previous connector left off. As one might expect, all change events previously written to Kafka by the old connector will not be modified.

Breaking changes

This release has no breaking changes since the previous release.

New Features

This release improves the MySQL connector’s preliminary support for Amazon RDS and Amazon Aurora (MySQL compatibility) (see DBZ-140).

Fixes

This release includes the following fixes relative to the 0.4.0 release:

  • MySQL connector now allows filtering production of DML events by GTIDs. DBZ-188

  • Support InnoDB savepoints. DBZ-196

  • Corrected MySQL DDL parser. DBZ-193, DBZ-198

  • Improved handling of MySQL connector’s built-in tables. DBZ-194

  • MySQL connector properly handles invalid/blank enum literal values. DBZ-197

  • MySQL connector properly handles reserved names as column names. DBZ-200

  • MongoDB connector properly generates event keys based upon ObjectID for updates. DBZ-201

Release 0.4.0 (February 7, 2017)

Kafka compatibility

This release has been tested with Kafka Connect 0.10.1.1 (or a subsequent API-compatible release), and is known to be incompatible with Kafka Connect 0.9.0.x due to binary incompatible changes in the Kafka 0.10.0 API. See DBZ-80 for details, and Kafka documentation for compatibility with other versions of Kafka brokers.

Upgrading

Before upgrading the MySQL connector, be sure to check the backward-incompatible changes that have been made since the release you were using.

When you decide to upgrade the MySQL connector to 0.4.0 from any of the earlier 0.3.x, 0.2.x, or 0.1.x versions, first check the upgrading notes for the version you’re using. Gracefully stop the running connector, remove the old plugin files, install the 0.4.0 plugin files, and restart the connector using the same configuration. Upon restart, the 0.4.0 MySQL connectors will continue where the previous connector left off. As one might expect, all change events previously written to Kafka by the old connector will not be modified.

Breaking changes

This release has no breaking changes since the previous release.

New Features

This release includes a new PostgreSQL connector (see DBZ-3) and adds preliminary support for Amazon RDS and Amazon Aurora (MySQL compatibility) to the MySQL connector (see DBZ-140).

Fixes

This release includes the following fixes relative to the 0.3.6 release:

  • Update Kafka dependencies to 0.10.1.1. DBZ-173

  • Update MySQL binary log client library to 0.9.0. DBZ-186

  • MySQL should apply GTID filters to database history. DBZ-185

  • Add names of database and table to the MySQL event metadata. DBZ-184

  • Add the MySQL thread ID to the MySQL event metadata. DBZ-113

  • Corrects MySQL connector to properly handle timezone information for TIMESTAMP. DBZ-183

  • Correct MySQL DDL parser to handle CREATE TRIGGER command with DEFINER clauses. DBZ-176

  • Update MongoDB Java driver and MongoDB server versions. DBZ-187

  • MongoDB connector should restart incomplete initial sync. DBZ-182

  • MySQL and PostgreSQL connectors should load JDBC driver independently of DriverManager. DBZ-177

  • Upgrade MySQL binlog client library to support new binlog events added with MySQL 5.7. DBZ-174

  • EmbeddedEngine should log all errors. DBZ-178

  • PostgreSQL containers' generated Protobuf source moved to separate directory. DBZ-179

Release 0.3.6 (December 21, 2016)

Kafka compatibility

This release requires Kafka Connect 0.10.0.1 (or a subsequent API-compatible release), and is known to be incompatible with Kafka Connect 0.9.0.x due to binary incompatible changes in the Kafka 0.10.0 API. See DBZ-80 for details, and Kafka documentation for compatibility with other versions of Kafka brokers.

Upgrading

Before upgrading the MySQL connector, be sure to check the backward-incompatible changes that have been made since the release you were using.

When you decide to upgrade the MySQL connector to 0.3.6 from any of the earlier 0.3.x, 0.2.x, or 0.1.x versions, first check the upgrading notes for the version you’re using. Gracefully stop the running connector, remove the old plugin files, install the 0.3.6 plugin files, and restart the connector using the same configuration. Upon restart, the 0.3.6 MySQL connectors will continue where the previous connector left off. As one might expect, all change events previously written to Kafka by the old connector will not be modified.

Breaking changes

This release has no breaking changes since the previous release.

New Features

There are no new features in this release.

Fixes

This release includes the following fixes to the 0.3.5 release:

  • Deleting a Debezium connector in Kafka Connect no longer causes NPEs. DBZ-138

  • MongoDB connector properly connects to a sharded cluster and the primaries for each replica set. DBZ-170, DBZ-167

  • Stopping the MySQL connector while in the middle of a snapshot now closes all MySQL resources. DBZ-166

  • MySQL connector properly parses columns with ON UPDATE timestamp values. DBZ-169

  • MySQL connector ignores CREATE FUNCTION DDL statements. DBZ-162

  • MySQL connector properly parses CREATE TABLE script with ENUM type and default value 'b'. DBZ-160

  • MySQL connector now properly supports NVARCHAR columns. DBZ-142

  • MySQL connector’s snapshot process now uses SHOW TABLE STATUS … rather than SELECT COUNT(*) to obtain an estimate of the number of rows for each table, and can even forgo this step if all tables are to be streamed. DBZ-152

  • MySQL connector’s snapshot process ignores "artificial" database names exposed by MySQL. DBZ-164

  • MySQL connector ignores XA statements appearing in the binlog. DBZ-168

  • MySQL connector no longer expects GTID set information on older MySQL versions. DBZ-161

  • Improved the EmbeddedEngine and fixed several issues. DBZ-156

  • Upgrade to the latest Docker Maven plugin. DBZ-157

Release 0.3.5 (November 9, 2016)

Kafka compatibility

This release requires Kafka Connect 0.10.0.1 (or a subsequent API-compatible release), and is known to be incompatible with Kafka Connect 0.9.0.x due to binary incompatible changes in the Kafka 0.10.0 API. See DBZ-80 for details, and Kafka documentation for compatibility with other versions of Kafka brokers.

Upgrading

We strongly urge all users to upgrade to this release from earlier versions. In prior versions, the MySQL connector may stop without completing all updates in a transaction, and when the connector restarts it starts with the next transaction and therefore might fail to capture some of the change events in the earlier transaction. This release fixes this issue so that when restarting it will always pick up where it left off, even if that point is in the middle of a transaction. Note that this fix only takes effect once a connector is upgraded and restarted. Also, this fix does not affect or alter the content of change events produced by the connector. See the issue for more details.

Before upgrading the MySQL connector, be sure to check the backward-incompatible changes that have been made since the release you were using.

When you decide to upgrade the MySQL connector to 0.3.5 from 0.3.4, 0.3.3, 0.3.2, 0.3.1, 0.3.0, 0.2.4, 0.2.3, 0.2.2, or 0.2.1, gracefully stop the running connector, remove the old plugin files, install the 0.3.5 plugin files, and restart the connector using the same configuration. Upon restart, the 0.3.5 MySQL connectors will continue where the previous connector left off. As one might expect, all change events previously written to Kafka by the old connector will not be modified.

Breaking changes

This release has no backward-incompatible changes since the 0.3.4 release.

New Features

  • MySQL connector now supports failover to MySQL masters that are slaves of multiple other MySQL servers/clusters, as long as the new MySQL master has all of the transactions (as specified by GTID sets) the connector had previously seen. The connector can be configured to include or exclude particular GTID sources. DBZ-143
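
The GTID source filtering mentioned above is selected through connector configuration. The sketch below is illustrative only: the gtid.source.includes property name is taken from the Debezium MySQL connector documentation and is assumed to apply to this release, and the server UUID is a placeholder.

    import java.util.Properties;

    public class GtidSourceFilterExample {
        // Sketch only: follow GTIDs from selected source servers.
        public static Properties config() {
            Properties props = new Properties();
            props.put("connector.class", "io.debezium.connector.mysql.MySqlConnector");
            // Only include transactions whose GTIDs originate from this server UUID (placeholder value).
            props.put("gtid.source.includes", "36eb6a60-0000-0000-0000-000000000001");
            return props;
        }
    }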

Fixes

This release includes the following fixes to the 0.3.4 release:

  • Restarting the MySQL connector will no longer lose or miss events from the previous transaction that was incompletely processed prior to the earlier shutdown. The content of change events is unaffected. DBZ-144

  • Shutting down a MySQL connector task and quickly terminating the Kafka Connect process may still cause the connector to be restarted in a strange state when Kafka Connect is restarted, but this no longer results in a null pointer exception in the Kafka database history. DBZ-146

  • MySQL connector now has an option to treat DECIMAL and NUMERIC columns as double values rather than java.math.BigDecimal values, which are encoded in the messages by Kafka Connect in binary form. This option may result in lost precision but makes the values far easier for consumers to work with (see the configuration sketch after this list). DBZ-147

  • MySQL connector tests now take into account daylight savings time in the expected results. DBZ-148

  • MySQL connector now properly treats BINARY columns as binary values rather than string values. DBZ-149

  • MySQL connector now handles updates to a row’s primary/unique key by issuing DELETE and tombstone events for the row with the old key, and then an INSERT event for the row with the new key. Previously, the INSERT was emitted before the DELETE. DBZ-150

  • MySQL connector now handles ENUM and SET literals with parentheses. DBZ-153
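
The DECIMAL/NUMERIC handling option described in this list is selected through connector configuration. A minimal sketch follows; the decimal.handling.mode property name is taken from the Debezium documentation and is assumed to match this release, and the other entries are placeholders.

    import java.util.Properties;

    public class DecimalAsDoubleExample {
        // Sketch only: emit DECIMAL/NUMERIC columns as double values instead of
        // binary-encoded java.math.BigDecimal values. Precision may be lost.
        public static Properties config() {
            Properties props = new Properties();
            props.put("connector.class", "io.debezium.connector.mysql.MySqlConnector");
            props.put("decimal.handling.mode", "double");
            return props;
        }
    }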

Release 0.3.4 (October 25, 2016)

Kafka compatibility

This release requires Kafka Connect 0.10.0.1 (or a subsequent API-compatible release), and is known to be incompatible with Kafka Connect 0.9.0.x due to binary incompatible changes in the Kafka 0.10.0 API. See DBZ-80 for details, and Kafka documentation for compatibility with other versions of Kafka brokers.

Upgrading

Before upgrading the MySQL connector, be sure to check the backward-incompatible changes that have been made since the release you were using.

When you decide to upgrade the MySQL connector to 0.3.4 from 0.3.3, 0.3.2, 0.3.1, 0.3.0, 0.2.4, 0.2.3, 0.2.2, or 0.2.1, gracefully stop the running connector, remove the old plugin files, install the 0.3.4 plugin files, and restart the connector using the same configuration. Upon restart, the 0.3.4 MySQL connectors will continue where the previous connector left off. As one might expect, all change events previously written to Kafka by the old connector will not be modified.

Breaking changes

This release has one breaking change since the 0.3.3 release:

  • MySQL connector now produces change events with a ts_sec field that shows the correct timestamp in seconds past the epoch, as found in the MySQL server events. In previous releases the last 3 digits in this field were truncated. DBZ-139

New Features

  • MySQL connector has a new SCHEMA_ONLY snapshot mode. When the connector starts up for the first time and uses this snapshot mode, the connector captures the current table schemas without reading any data, and then proceeds to read the binlog. The resulting change event streams do not have all the data in the databases, but do include those change events that occurred after the snapshot started. This may be useful for consumers that only need to know the changes since the connector was started (see the configuration sketch after this list). DBZ-133

  • MySQL connector supports the MySQL JSON datatype. These JSON values are represented as STRING values in the change events, although the name of the field’s Kafka Connect schema is io.debezium.data.Json to signal to consumers that the string value is actually a JSON document, array, or scalar. DBZ-126

  • MySQL connector metrics are exposed via JMX. All of the Debezium Docker images can expose the JMX data via a custom port. See the Monitoring Debezium document for more details. DBZ-134
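
The SCHEMA_ONLY snapshot mode described in this list is selected through connector configuration. A minimal sketch, assuming the snapshot.mode property name used by the MySQL connector; host and server names are placeholders.

    import java.util.Properties;

    public class SchemaOnlySnapshotExample {
        // Sketch: capture current table schemas without reading existing rows,
        // then stream subsequent changes from the binlog.
        public static Properties config() {
            Properties props = new Properties();
            props.put("connector.class", "io.debezium.connector.mysql.MySqlConnector");
            props.put("database.hostname", "mysql.example.com"); // placeholder
            props.put("database.server.name", "example");        // placeholder
            props.put("snapshot.mode", "schema_only");
            return props;
        }
    }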

Fixes

This release includes no other fixes.

Release 0.3.3 (October 18, 2016)

Kafka compatibility

This release requires Kafka Connect 0.10.0.1 (or a subsequent API-compatible release), and is known to be incompatible with Kafka Connect 0.9.0.x due to binary incompatible changes in the Kafka 0.10.0 API. See DBZ-80 for details, and Kafka documentation for compatibility with other versions of Kafka brokers.

Upgrading

Before upgrading the MySQL connector, be sure to check the backward-incompatible changes that have been made since the release you were using.

When you decide to upgrade the MySQL connector to 0.3.3 from 0.3.2, 0.3.1, 0.3.0, 0.2.4, 0.2.3, 0.2.2, or 0.2.1, gracefully stop the running connector, remove the old plugin files, install the 0.3.3 plugin files, and restart the connector using the same configuration. Upon restart, the 0.3.3 MySQL connectors will continue where the previous connector left off. As one might expect, all change events previously written to Kafka by the old connector will not be modified.

Breaking changes

This release includes no breaking changes since the 0.3.2 release.

New Features

This release includes no new features since the 0.3.2 release.

Fixes

This release includes the following fixes to the 0.3.2 release:

  • MySQL connector now works with MySQL 5.5. DBZ-115

  • MySQL connector now handles BIT(n) column values. DBZ-123

  • MySQL connector supports failing over based on subset of GTIDs. DBZ-129

  • MySQL connector processes GTIDs with line feeds and carriage returns. DBZ-135

  • MySQL connector has improved output of GTIDs and status when reading the binary log. DBZ-130, DBZ-131

  • MySQL connector properly handles multi-character ENUM and SET values. DBZ-132

Release 0.3.2 (September 26, 2016)

Kafka compatibility

This release requires Kafka Connect 0.10.0.1 (or a subsequent API-compatible release), and is known to be incompatible with Kafka Connect 0.9.0.x due to binary incompatible changes in the Kafka 0.10.0 API. See DBZ-80 for details, and Kafka documentation for compatibility with other versions of Kafka brokers.

Upgrading

Before upgrading the MySQL connector, be sure to check the backward-incompatible changes that have been made since the release you were using.

When you decide to upgrade the MySQL connector to 0.3.2 from 0.3.1, 0.3.0, 0.2.4, 0.2.3, 0.2.2, or 0.2.1, gracefully stop the running connector, remove the old plugin files, install the 0.3.2 plugin files, and restart the connector using the same configuration. Upon restart, the 0.3.2 MySQL connectors will continue where the previous connector left off. As one might expect, all change events previously written to Kafka by the old connector will not be modified.

Breaking changes

This release includes no breaking changes since the 0.3.1 release.

New Features

This release includes no new features since the 0.3.1 release.

Fixes

This release includes the following fixes to the 0.3.1 release:

  • MySQL connector now handles zero-value dates. DBZ-114

  • MySQL connector no longer prints out password-related configuration properties, though see KAFKA-4171 for a similar issue with Kafka Connect. DBZ-122

  • MySQL connector no longer causes "Error registering AppInfo mbean" warning in Kafka Connect. DBZ-124

  • MySQL connector periodically outputs status when reading the binlog. DBZ-116

  • MongoDB connector periodically outputs status when reading the oplog. DBZ-117

  • MySQL connector correctly uses long for the server.id configuration property. DBZ-118

  • MySQL connector fails or warns when MySQL is not using row-level logging. DBZ-128

Release 0.3.1 (August 30, 2016)

Kafka compatibility

This release requires Kafka Connect 0.10.0.1 (or a subsequent API-compatible release), and is known to be incompatible with Kafka Connect 0.9.0.x due to binary incompatible changes in the Kafka 0.10.0 API. See DBZ-80 for details, and Kafka documentation for compatibility with other versions of Kafka brokers.

Upgrading

Before upgrading the MySQL connector, be sure to check the backward-incompatible changes that have been made since the release you were using.

When you decide to upgrade the MySQL connector to 0.3.1 from 0.3.0, 0.2.4, 0.2.3, 0.2.2, or 0.2.1, gracefully stop the running connector, remove the old plugin files, install the 0.3.1 plugin files, and restart the connector using the same configuration. Upon restart, the 0.3.1 MySQL connectors will continue where the previous connector left off. As one might expect, all change events previously written to Kafka by the old connector will not be modified.

Breaking changes

This release includes no breaking changes compared to the 0.3.0 release.

New Features

  • Added support for secure (encrypted) connections to MySQL. DBZ-99

Fixes

This release includes the following fixes to the 0.3.0 release:

  • MySQL connector now properly decodes string values from the binlog based upon the column’s character set encoding as read from the DDL statement. Upon upgrade and restart, the connector will re-read the recorded database history and associate the columns with their character sets, and any newly processed events will use properly encoded string values. As expected, previously generated events are never altered. Force a snapshot to regenerate events for the servers. DBZ-102

  • Corrected how the MySQL connector parses some DDL statements. DBZ-106

  • Corrected the MySQL connector to handle MySQL server GTID sets with newline characters. DBZ-107, DBZ-111

  • Corrected the MySQL connector’s startup logic to properly compare the MySQL SSL-related system properties and prevent overwriting them. The connector no longer fails when the system properties are the same, which can happen upon restart or when starting a second MySQL connector with the same keystore. DBZ-112

  • Removed unused code and test case. DBZ-108

  • Ensure that the MySQL error code and SQLSTATE are included in exceptions reported by the connector. DBZ-109

Release 0.3.0 (August 16, 2016)

Kafka compatibility

This release requires Kafka Connect 0.10.0.1 (or a subsequent API-compatible release), and is known to be incompatible with Kafka Connect 0.9.0.x due to binary incompatible changes in the Kafka 0.10.0 API. See DBZ-80 for details, and Kafka documentation for compatibility with other versions of Kafka brokers.

Upgrading

Before upgrading the MySQL connector, be sure to check the backward-incompatible changes that have been made since the release you were using.

When you decide to upgrade the MySQL connector to 0.3.0 from 0.2.4, 0.2.3, 0.2.2, or 0.2.1, gracefully stop the running connector, remove the old plugin files, install the 0.3.0 plugin files, and restart the connector using the same configuration. Upon restart, the 0.3.0 MySQL connectors will continue where the previous connector left off. As one might expect, all change events previously written to Kafka by the old connector will not be modified.

Breaking changes

This release includes one potentially breaking change from the 0.2.4 release:

  • By default the MySQL connector now represents temporal values with millisecond, microsecond, or nanosecond precision based upon the precision of the source database columns. This changes the schema name of these fields to Debezium-specific constants, and the meaning/interpretation of the literal values now depends on this schema name. To enable previous behavior that always used millisecond precision using only Kafka Connect logical types, set time.precision.mode connector property to connect. DBZ-91
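
A minimal configuration sketch for opting back into the previous behavior; the time.precision.mode property and its connect value come from the change description above, while the remaining entries are illustrative placeholders.

    import java.util.Properties;

    public class TimePrecisionConfigExample {
        // Sketch: always use Kafka Connect logical types with millisecond
        // precision for temporal columns, as in releases before 0.3.0.
        public static Properties config() {
            Properties props = new Properties();
            props.put("connector.class", "io.debezium.connector.mysql.MySqlConnector");
            props.put("database.hostname", "mysql.example.com"); // placeholder
            props.put("database.server.name", "example");        // placeholder
            props.put("time.precision.mode", "connect");
            return props;
        }
    }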

New Features

  • Added the MongoDB connector, which can capture and record the changes within a MongoDB replica set or MongoDB sharded cluster. In the latter case, the connector even automatically handles the addition or removal of shards. DBZ-2

Fixes

This release includes all of the fixes from the 0.2.4 release, and also includes the following fixes:

  • Corrected how the MySQL connector handles TINYINT columns. DBZ-84

  • MySQL snapshots now record DDL statements as separate events on the schema change topic. DBZ-97

  • MySQL connector tolerates binlog filename missing from ROTATE events in certain situations. DBZ-95

  • The Kafka Connect schema names used in the MySQL connector’s change events are now always Avro-compatible schema names. Now, using the Avro converter with a database.server.name value, database names, or table names that contain Avro-incompatible characters produce log warnings but no longer result in errors during serialization and Avro schema generation. Whenever possible, use a database.server.name value that contains alphanumeric and underscore characters. DBZ-86
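
The last item above recommends database.server.name values built only from alphanumeric and underscore characters. The helper below is purely illustrative (it is not part of Debezium) and shows one way to map an arbitrary name onto that character set.

    public class AvroSafeNames {
        // Illustrative helper: replace every character that is not alphanumeric
        // or an underscore so the resulting name is Avro-compatible.
        public static String toAvroSafe(String name) {
            return name.replaceAll("[^A-Za-z0-9_]", "_");
        }

        public static void main(String[] args) {
            System.out.println(toAvroSafe("my-server.prod")); // prints my_server_prod
        }
    }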

Release 0.2.4 (August 16, 2016)

Upgrading

There are no backward-incompatible changes when upgrading to 0.2.4 from 0.2.3 or 0.2.2. Gracefully stop the running 0.2.3 connector, remove the 0.2.3 plugin files, install the 0.2.4 plugin files, and restart the connector using the same configuration. Upon restart, the 0.2.4 connector will continue where the previous connector left off. As one might expect, all change events previously written to Kafka by the old connector will not be modified.

Kafka compatibility

This release requires Kafka Connect 0.9.0.1 (or a subsequent API-compatible release), and is known to be incompatible with Kafka Connect 0.10.0 due to binary incompatible changes in the Kafka 0.10.0 API. See DBZ-80 for details.

Fixes

This release includes all of the fixes from the 0.2.3 release plus the following fixes:

  • Stream result set rows when taking snapshot of MySQL databases to prevent out of memory problems with very large databases. DBZ-94

  • Add more verbose logging statements to the MySQL connector to show progress and activity during snapshots. DBZ-92

  • Corrected potential error during graceful MySQL connector shutdown. DBZ-103

Release 0.2.3 (July 26, 2016)

Kafka compatibility

This release requires Kafka Connect 0.9.0.1 (or a subsequent API-compatible release), and is known to be incompatible with Kafka Connect 0.10.0 due to binary incompatible changes in the Kafka 0.10.0 API. See DBZ-80 for details.

Upgrading

There are no backward-incompatible changes when upgrading to 0.2.3 from 0.2.2. Gracefully stop the running 0.2.2 connector, remove the 0.2.2 plugin files, install the 0.2.3 plugin files, and restart the connector using the same configuration. Upon restart, the 0.2.3 connector will continue where the previous connector left off. As one might expect, all change events previously written to Kafka by the old connector will not be modified.

Fixes

This release includes all of the fixes from the 0.2.2 release plus the following fixes:

  • Corrected parsing errors when MySQL DDL statements are generated by Liquibase. DBZ-83

  • Corrected support of MySQL TINYINT and SMALLINT types. DBZ-84, DBZ-87

  • Corrected support of MySQL temporal types, including DATE, TIME, and TIMESTAMP. DBZ-85

  • Corrected call to MySQL SHOW MASTER STATUS so that it works on pre-5.7 versions of MySQL. DBZ-82

Release 0.2.2 (June 22, 2016)

Kafka compatibility

This release can be used with Kafka Connect 0.9.0.1 (or a subsequent API-compatible release), and is known to be incompatible with Kafka Connect 0.10.0 due to binary incompatible changes in the Kafka 0.10.0 API. See DBZ-80 for details.

Upgrading

Check the backward-incompatible changes when upgrading to 0.2.2 from 0.2.1 or 0.2.0.

When you decide to upgrade the MySQL connector to 0.2.2 from 0.2.1 or 0.2.0, gracefully stop the running 0.2.1 connector, remove the 0.2.1 plugin files, install the 0.2.2 plugin files, and restart the connector using the same configuration. Upon restart, the 0.2.2 connector will continue where the previous connector left off. As one might expect, all change events previously written to Kafka by the old connector will not be modified.

Backwards-incompatible changes

  • Removed several methods in the GtidSet class inside the MySQL connector. The class was introduced in 0.2. This change will only affect applications explicitly using the class (by reusing the MySQL connector JAR), and will not affect how the MySQL connector works. DBZ-79

  • The source field within each MySQL change event now contains the binlog position of that event (rather than the next event). The structure of the change events (and the semantics of the other values) remains the same as in 0.2.1. Note that this change may adversely affect clients that are explicitly comparing the position values across multiple events. DBZ-71

Fixes

This release includes all of the fixes from the 0.2.1 release plus the following fixes:

  • Correct how the MySQL connector records offsets with multi-row MySQL events so that, even if the connector experiences a non-graceful shutdown (i.e., crash) after committing the offset of some of the rows from such an event, upon restart the connector will resume with the remaining rows in that multi-row event. Previously, the connector might incorrectly restart at the next event. DBZ-73

  • Shutting down the MySQL connector immediately after a snapshot completes (before another change event is recorded) now properly marks the snapshot as complete. DBZ-77

Release 0.2.1 (June 10, 2016)

Kafka compatibility

This release can be used with Kafka Connect 0.9.0.1 (or a subsequent API-compatible release), and is known to be incompatible with Kafka Connect 0.10.0 due to binary incompatible changes in the Kafka 0.10.0 API. See DBZ-80 for details.

Upgrading

Check the backward-incompatible changes when upgrading to 0.2.1 from 0.2.0.

When you decide to upgrade the MySQL connector to 0.2.1 from 0.2.0, gracefully stop the running 0.2.0 connector, remove the 0.2.0 plugin files, install the 0.2.1 plugin files, and restart the connector using the same configuration. Upon restart, the 0.2.1 connector will continue where the previous connector left off. As one might expect, all change events previously written to Kafka by the old connector will not be modified.

Backwards-incompatible changes

  • Corrected the names of the Avro-compliant Kafka Connect schemas generated by the MySQL connector for the before and after fields in its data change events. Consumers that require knowledge (by name) of the particular schemas used in 0.2 events may have trouble consuming events produced by the 0.2.1 (or later) connector. DBZ-72

Fixes

This release includes all of the fixes from the 0.2.0 release plus the following fixes:

  • The MySQL connector’s plugin archive now contains the MySQL JDBC driver JAR file required by the connector. DBZ-71

Release 0.2.0 (June 8, 2016)

See the complete list of issues addressed in this release.

The 0.2.0 release contained a significant issue, and 0.2.1 was quickly released to fix the problem. We recommend using a newer release than 0.2.

Backwards-incompatible changes

  • Completely redesigned the structure of event messages produced by the MySQL connector and stored in Kafka topics. Events now contain an envelope structure with information about the source event, the kind of operation (create/insert, update, delete, read), the time that Debezium processed the event, and the state of the row before and/or after the event. The messages written to each topic have a distinct Avro-compliant Kafka Connect schema that reflects the structure of the source table, which may vary over time independently from the schemas of all other topics. See the documentation for details. This envelope structure will likely be used by future connectors (a small consumer-side sketch follows this list). DBZ-50, DBZ-52, DBZ-45, DBZ-60

  • MySQL connector handles deletion of a row by recording a delete event message whose value contains the state of the removed row (and other metadata), followed by a tombstone event message with a null value to signal Kafka’s log compaction that all messages with the same key can be garbage collected. See the documentation for details. DBZ-44

  • Changed the format of events that the MySQL connector writes to its schema change topic, through which consumers can access events with the DDL statements applied to the database(s). The format change makes it possible for consumers to correlate these events with the data change events. DBZ-43, DBZ-55
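
A small consumer-side sketch of the envelope described in the first item of this list, using the standard Kafka Connect Struct API. The op, before, and after field names correspond to the operation kind and row state described above; the handling code itself is illustrative.

    import org.apache.kafka.connect.data.Struct;
    import org.apache.kafka.connect.source.SourceRecord;

    public class EnvelopeExample {
        // Illustrative: inspect the envelope of a Debezium change event.
        public static void handle(SourceRecord record) {
            if (record.value() == null) {
                return; // tombstone event, as described for deletes above
            }
            Struct envelope = (Struct) record.value();
            String op = envelope.getString("op");         // create/insert, update, delete, or read
            Struct before = envelope.getStruct("before"); // row state before the change (may be null)
            Struct after = envelope.getStruct("after");   // row state after the change (may be null)
            System.out.printf("op=%s before=%s after=%s%n", op, before, after);
        }
    }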

New features

  • MySQL connector supports high availability MySQL cluster topologies. See the documentation for details. DBZ-37

  • MySQL connector now by default starts by performing a consistent snapshot of the schema and contents of the upstream MySQL databases in its current state. See the documentation for details about how this works and how it impacts other database clients. DBZ-31

  • MySQL connector can be configured to exclude, truncate, or mask specific columns in events (see the configuration sketch after this list). DBZ-29

  • MySQL connector events can be serialized using the Confluent Avro converter or the JSON converter. Previously, only the JSON converter could be used. DBZ-29, DBZ-63, DBZ-64
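
The column exclusion, truncation, and masking mentioned above are configured per column. The sketch below is illustrative only: the property names (column.blacklist, column.truncate.to.N.chars, column.mask.with.N.chars) are taken from later Debezium MySQL connector documentation and are assumed to apply to this release; the fully-qualified column names are placeholders.

    import java.util.Properties;

    public class ColumnHandlingExample {
        // Sketch only: hide, truncate, or mask specific columns in emitted events.
        public static Properties config() {
            Properties props = new Properties();
            props.put("connector.class", "io.debezium.connector.mysql.MySqlConnector");
            // Exclude a column entirely from change events.
            props.put("column.blacklist", "inventory.customers.ssn");
            // Truncate string values to 20 characters in emitted events.
            props.put("column.truncate.to.20.chars", "inventory.customers.biography");
            // Replace values with a mask of 10 asterisk characters.
            props.put("column.mask.with.10.chars", "inventory.customers.email");
            return props;
        }
    }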

Changes

  • DDL parsing framework identifies table affected by statements via a new listener callback. DBZ-38

  • The database.binlog configuration property was required in version 0.1 of the MySQL connector, but now it is no longer used because of the new snapshot feature. If provided, it will be quietly ignored. DBZ-31

Bug fixes

  • MySQL connector now properly parses COMMIT statements, the REFERENCES clauses of CREATE TABLE statements, and statements that use the CHARSET shorthand for CHARACTER SET. DBZ-48, DBZ-49, DBZ-57

  • MySQL connector properly handles binary values that are hexadecimal strings. DBZ-61

Release 0.1.0 (March 17, 2016)

See the complete list of issues addressed in this release.

Kafka compatibility

This release can be used with Kafka Connect 0.9.0.1 (or a subsequent API-compatible release).

Added

  • MySQL connector for ingesting change events from MySQL databases. DBZ-1

  • Kafka Connect plugin archive for MySQL connector. DBZ-17

  • Simple DDL parsing framework that can be extended and used by various connectors. DBZ-1

  • Framework for embedding a single Kafka Connect connector inside an application. DBZ-8
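
A minimal sketch of the embedding framework mentioned above. The builder-style EmbeddedEngine API shown here (create(), using(), notifying(), build()) follows the embedded-engine documentation of later Debezium releases and is assumed to match this one; all connection details and file paths are placeholders.

    import java.util.concurrent.Executors;

    import io.debezium.config.Configuration;
    import io.debezium.embedded.EmbeddedEngine;

    public class EmbeddedExample {
        public static void main(String[] args) {
            // Placeholder configuration for an embedded MySQL connector.
            Configuration config = Configuration.create()
                    .with("name", "example-engine")
                    .with("connector.class", "io.debezium.connector.mysql.MySqlConnector")
                    .with("offset.storage", "org.apache.kafka.connect.storage.FileOffsetBackingStore")
                    .with("offset.storage.file.filename", "/tmp/offsets.dat")
                    .with("database.hostname", "mysql.example.com")
                    .with("database.server.name", "example")
                    .build();

            // Each change event arrives as a Kafka Connect SourceRecord.
            EmbeddedEngine engine = EmbeddedEngine.create()
                    .using(config)
                    .notifying(record -> System.out.println(record))
                    .build();

            // The engine is a Runnable; run it on its own thread and stop it when done.
            Executors.newSingleThreadExecutor().execute(engine);
        }
    }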
