Debezium Change Log

All notable changes for Debezium releases are documented in this file. Release numbers follow Semantic Versioning.

Release 0.7.3 (February 14th, 2018)

Kafka compatibility

This release has been built against Kafka Connect 1.0.0 and has been tested with version 1.0.0 of the Kafka brokers. See the Kafka documentation for compatibility with other versions of Kafka brokers.

Upgrading

Before upgrading the MySQL, MongoDB, or PostgreSQL connectors, be sure to check the backward-incompatible changes that have been made since the release you were using.

When you decide to upgrade one of these connectors to 0.7.3 from any of the earlier 0.7.x, 0.6.x, 0.5.x, 0.4.x, 0.3.x, 0.2.x, or 0.1.x versions, first check the upgrading notes for the version you’re using. Gracefully stop the running connector, remove the old plugin files, install the 0.7.3 plugin files, and restart the connector using the same configuration. Upon restart, the 0.7.3 connectors will continue where the previous connector left off. As one might expect, all change events previously written to Kafka by the old connector will not be modified.

If you are using our Docker images, don't forget to pull fresh versions from the Docker registry.

Breaking changes

A new namespace for parameters was [created](https://issues.jboss.org/browse/DBZ-576) - internal - which is used for parameters that are not documented and should not be used, as they are subject to change without warning. As a result of this change, the undocumented parameter database.history.ddl.filter was renamed to internal.database.history.ddl.filter.

OpenShift deployment now uses templates and images from [Strimzi project](https://issues.jboss.org/browse/DBZ-545).

New Features

This release includes the following new features:

  • MySQL connector should automatically create database history topic DBZ-278

  • Change OpenShift instructions to use Strimzi DBZ-545

  • Create an internal namespace for configuration options not intended for general usage DBZ-576

  • Make ChainedReader immutable DBZ-583

  • Snapshots are not interruptible with the Postgres connector DBZ-586

  • Add optional field with Debezium version to "source" element of messages DBZ-593

  • Add the ability to control the strategy for committing offsets by the offset store DBZ-537

  • Create support for arrays of PostGIS types DBZ-571

  • Add option for controlling whether to produce tombstone records on DELETE operations DBZ-582

  • Add example for using the MongoDB event flattening SMT DBZ-567

  • Prefix the names of all threads spawned by Debezium with "debezium-" DBZ-587

Fixes

This release includes the following fixes:

  • Force DBZ to commit regularly DBZ-220

  • Carry over SourceInfo.restartEventsToSkip to the handling of the next binlog file, as binlog events were otherwise not written to Kafka DBZ-572

  • Numeric arrays not handled correctly DBZ-577

  • Debezium skipping binlog events silently DBZ-588

  • Stop the connector if WALs to continue from aren’t available DBZ-590

  • Producer thread of DB history topic leaks after connector shut-down DBZ-595

  • Integration tests should have completely isolated environment and configuration/setup files DBZ-300

  • MongoDB integration tests should have completely isolated environment and configuration/setup files DBZ-579

  • Extract a separate change event class to be re-used across connectors DBZ-580

  • Propagate producer errors to Kafka Connect in MongoDB connector DBZ-581

  • Shutdown thread pool used for MongoDB snapshots once it’s not needed anymore DBZ-594

  • Refactor type and array handling for Postgres DBZ-609

  • Avoid unnecessary schema refreshes DBZ-616

  • Incorrect type retrieved by stream producer for column TIMESTAMP (0) WITH TIME ZONE DBZ-618

Release 0.7.2 (January 25th, 2018)

Kafka compatibility

This release has been built against Kafka Connect 1.0.0 and has been tested with version 1.0.0 of the Kafka brokers. See the Kafka documentation for compatibility with other versions of Kafka brokers.

Upgrading

Before upgrading the MySQL, MongoDB, or PostgreSQL connectors, be sure to check the backward-incompatible changes that have been made since the release you were using.

When you decide to upgrade one of these connectors to 0.7.2 from any of the earlier 0.7.x, 0.6.x, 0.5.x, 0.4.x, 0.3.x, 0.2.x, or 0.1.x versions, first check the upgrading notes for the version you’re using. Gracefully stop the running connector, remove the old plugin files, install the 0.7.2 plugin files, and restart the connector using the same configuration. Upon restart, the 0.7.2 connectors will continue where the previous connector left off. As one might expect, all change events previously written to Kafka by the old connector will not be modified.

If you are using our Docker images, don't forget to pull fresh versions from the Docker registry.

Breaking changes

There are no breaking changes in this release.

New Features

This release includes the following new features:

  • As a Debezium user, I would like MySQL Connector to support 'Spatial' data types DBZ-208

  • Allow easy consumption of MongoDB CDC events by other connectors DBZ-409

  • New snapshotting mode for recovery of DB history topic DBZ-443

  • Add support for Postgres VARCHAR array columns DBZ-506

  • Unified Geometry type support DBZ-507

  • Add support for "snapshot.select.statement.overrides" option for Postgres DBZ-510

  • Make PostGIS optional in Postgres Docker images DBZ-526

  • Provide an option to only store DDL statements referring to captured tables in DB history topic DBZ-541

  • Add ToC to tutorial and make section captions linkable DBZ-369

  • Remove Zulu JDK images DBZ-449

  • Add example for sending CDC events to Elasticsearch DBZ-502

  • Adapt examples to MongoDB 3.6 DBZ-509

  • Backport add-ons definition from add-ons repo DBZ-520

  • Set up pull request build job for testing the PG connector with wal2json DBZ-568

Fixes

This release includes the following fixes:

  • Debezium MySQL connector only works for lower-case table names on case-insensitive file systems DBZ-392

  • Numbers after decimal point are different between source and destination DBZ-423

  • Fix support for date arrays DBZ-494

  • Changes in type constraints will not trigger new schema DBZ-504

  • Task is still running after connector is paused DBZ-516

  • NPE happened for PAUSED task DBZ-519

  • Possibility of commit LSN before record is consumed/notified DBZ-521

  • Snapshot fails when encountering null MySQL TIME fields DBZ-522

  • Debezium unable to parse DDLs in MySQL with RESTRICT constraint DBZ-524

  • DateTimeFormatter Exception in wal2json DBZ-525

  • Multiple partitions do not work in ALTER TABLE DBZ-530

  • Incorrect lookup in List in MySqlDdlParser.parseCreateView DBZ-534

  • Improve invalid DDL statement logging DBZ-538

  • Fix required protobuf version in protobuf decoder documentation DBZ-542

  • EmbeddedEngine strips settings required to use KafkaOffsetBackingStore DBZ-555

  • Handling of date arrays collides with handling of type changes via wal2json DBZ-558

  • ROLLBACK to savepoint cannot be parsed DBZ-411

  • Avoid usage of deprecated numeric types constructors DBZ-455

  • Don’t add source and JavaDoc JARs to Kafka image DBZ-489

Release 0.7.1 (December 20th, 2017)

Kafka compatibility

This release has been built against Kafka Connect 1.0.0 and has been tested with version 1.0.0 of the Kafka brokers. See the Kafka documentation for compatibility with other versions of Kafka brokers.

Upgrading

Before upgrading the MySQL, MongoDB, or PostgreSQL connectors, be sure to check the backward-incompatible changes that have been made since the release you were using.

When you decide to upgrade one of these connectors to 0.7.1 from any of the earlier 0.7.x, 0.6.x, 0.5.x, 0.4.x, 0.3.x, 0.2.x, or 0.1.x versions, first check the upgrading notes for the version you’re using. Gracefully stop the running connector, remove the old plugin files, install the 0.7.1 plugin files, and restart the connector using the same configuration. Upon restart, the 0.7.1 connectors will continue where the previous connector left off. As one might expect, all change events previously written to Kafka by the old connector will not be modified.

If you are using our Docker images, don't forget to pull fresh versions from the Docker registry.

Breaking changes

There are no breaking changes in this release.

New Features

This release includes the following new features:

  • Provide a wal2json plug-in mode enforcing RDS environment DBZ-517

Fixes

This release includes the following fixes:

  • For old connector OID should be used to detect schema change DBZ-512

  • AWS RDS Postgresql 9.6.5 not supporting "include-not-null" = "true" in connector setup DBZ-513

  • RecordsStreamProducerIT.shouldNotStartAfterStop can make subsequent test dependent DBZ-518

Known issues

  • PostgreSQL Connector does not detect schema changes in type constraints - e.g. the length of array datatype DBZ-504

Release 0.7.0 (December 15th, 2017)

Kafka compatibility

This release has been built against Kafka Connect 1.0.0 and has been tested with version 1.0.0 of the Kafka brokers. See the Kafka documentation for compatibility with other versions of Kafka brokers.

Upgrading

Before upgrading the MySQL, MongoDB, or PostgreSQL connectors, be sure to check the backward-incompatible changes that have been made since the release you were using.

When you decide to upgrade one of these connectors to 0.7.0 from any of the earlier 0.6.x, 0.5.x, 0.4.x, 0.3.x, 0.2.x, or 0.1.x versions, first check the upgrading notes for the version you’re using. Gracefully stop the running connector, remove the old plugin files, install the 0.7.0 plugin files, and restart the connector using the same configuration. Upon restart, the 0.7.0 connectors will continue where the previous connector left off. As one might expect, all change events previously written to Kafka by the old connector will not be modified.

Breaking changes

This release includes the following changes that can affect existing installations:

  • Change default setting for BIGINT UNSIGNED handling DBZ-461; BIGINT UNSIGNED columns are now treated by default as int64 rather than as Decimal, as they were before. Set the bigint.unsigned.handling.mode option if you need to keep the original behaviour (see the configuration sketch after this list).

  • Invalid value for HourOfDay ConnectException when the value of a MySQL TIME field is above 23:59:59 DBZ-342; The default mapping for MySQL TIME(0-3) columns has changed. Such columns can store values from -838:59:59.000000 to 838:59:59.000000, which cannot be stored as milliseconds in an int32 field (the previous default mapping). Hence the default mapping has changed to int64 and the semantic type io.debezium.time.MicroTime, i.e. values represent microseconds.
    If you prefer to keep the previous mapping (which should only be done if it’s guaranteed that no values are stored in that column whose milliseconds value exceeds int32), you can do so by specifying the connector option time.precision.mode=adaptive (see the connector documentation for further details, and the configuration sketch after this list).
    This change does not affect other connectors.

  • Postgres connector stops working after concurrent schema changes and updates DBZ-379; The PostgreSQL connector used JDBC metadata to obtain additional type information while processing logical events. This could lead to a race condition when the database schema was updated while Debezium was still processing events with the old schema structure.
    To mitigate the problem, a new version of the Protocol Buffers decoder plugin was introduced that passes additional type metadata with each event. The connector is backward compatible with the old decoder plugin (using the original approach), but we strongly recommend upgrading to the latest one as soon as possible.
    The race condition can still happen when the primary key structure of a table is changed, as this information is still obtained from JDBC metadata. To handle a primary key change properly, follow these steps:

    • Place the application in a read-only mode so that it does not actively write new data

    • Let the PostgreSQL connector consume all remaining events from the database

    • Execute the primary key change

    • Switch the application back to regular mode

  • Hardcoded schema version overrides schema registry version DBZ-466; The schema version returned for CDC message values (schema type dbserver1.inventory.customers.Envelope) has changed. While always 1 was returned in earlier versions, the schema version as managed by the schema registry will be returned in case the Avro serializer/deserializer is used. Null will be returned when using the JSON serializer/deserializer. Note that the schema version is only set during Avro message serialization, i.e. an SMT applied on the source side will also get null when querying for the schema version, as SMTs will be applied before the serialization.
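
The two opt-outs above can be expressed with Debezium's Configuration builder, as in embedded usage; with Kafka Connect, the same keys simply go into the connector configuration. A minimal, hypothetical sketch (the "precise" value for bigint.unsigned.handling.mode is taken from the connector documentation):

    import io.debezium.config.Configuration;

    class LegacyNumericAndTemporalModes {
        // Restore the pre-0.7.0 mappings described above. With Kafka Connect,
        // set the same keys in the connector's JSON/properties configuration.
        static final Configuration CONFIG = Configuration.create()
                .with("bigint.unsigned.handling.mode", "precise") // BIGINT UNSIGNED as Decimal again
                .with("time.precision.mode", "adaptive")          // keep the previous TIME(0-3) mapping
                .build();
    }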

New Features

This release includes the following new features:

  • PostgreSQL connector should work on Amazon RDS and be able to use the available plugin DBZ-256

  • Build Debezium against Kafka 1.0.0 DBZ-432

  • Build Debezium images with Kafka 1.0.0 DBZ-433

  • Protobuf message should contain type modifiers DBZ-485

  • Protobuf message should contain optional flags DBZ-486

  • Better support for large append-only tables by making the snapshotting process restartable DBZ-349

  • Support new wal2json type specifiers DBZ-453

  • Optionally return raw value for unsupported column types DBZ-498

  • Provide Postgres example image for 0.7 DBZ-382

  • Create an automated build for Postgres example image in Docker Hub DBZ-383

  • Move configuration of ProtoBuf code generation to Postgres module DBZ-416

  • Provide MongoDB example image for Debezium 0.7 DBZ-451

  • Upgrade to Confluent Platform 4.0 DBZ-492

  • Set up CI job for testing Postgres with new wal2json type identifiers DBZ-495

  • Change PostgreSQL connector to support multiple plugins DBZ-257

  • PostgreSQL connector should support the wal2json logical decoding plugin DBZ-258

  • Provide instructions for using Debezium on Minishift DBZ-364

  • Modify BinlogReader to process transactions via buffer DBZ-405

  • Modify BinlogReader to support transactions of unlimited size DBZ-406

Fixes

This release includes the following fixes:

  • Data are read from the binlog and not written into Kafka DBZ-390

  • MySQL connector may not read database history to end DBZ-464

  • connect-base image advertises wrong port by default DBZ-467

  • INSERT statements being written to db history topic DBZ-469

  • MySQL Connector does not handle properly startup/shutdown DBZ-473

  • Cannot parse NOT NULL COLLATE in DDL DBZ-474

  • Failed to parse the sql statement of RENAME user DBZ-475

  • Exception when parsing enum field with escaped characters values DBZ-476

  • Unable to insert null value into numeric array columns DBZ-478

  • produceStrings method slows down when sending messages DBZ-479

  • Failing unit tests when run in EST timezone DBZ-491

  • PostgresConnector fails with RejectedExecutionException DBZ-501

  • Docker images cannot be re-built when a new version of ZooKeeper/Kafka is released DBZ-503

  • Insert ids as long instead of float for MongoDB example image DBZ-470

  • Port changes in 0.6 Docker files into 0.7 files DBZ-463

  • Add check to release process to make sure all issues are assigned to a component DBZ-468

  • Document requirement for database history topic to be not partitioned DBZ-482

  • Remove dead code from MySqlSchema DBZ-483

  • Remove redundant calls to pfree DBZ-496

Known issues

  • PostgreSQL Connector does not detect schema changes in type constraints - e.g. the length of array datatype DBZ-504

Release 0.6.2 (November 15th, 2017)

Kafka compatibility

This release has been built against Kafka Connect 0.11.0.1 and has been tested with version 0.11.0.1 of the Kafka brokers. See the Kafka documentation for compatibility with other versions of Kafka brokers.

Upgrading

Before upgrading the MySQL, MongoDB, or PostgreSQL connectors, be sure to check the backward-incompatible changes that have been made since the release you were using.

When you decide to upgrade one of these connectors to 0.6.2 from any of the earlier 0.5.x, 0.4.x, 0.3.x, 0.2.x, or 0.1.x versions, first check the upgrading notes for the version you’re using. Gracefully stop the running connector, remove the old plugin files, install the 0.6.2 plugin files, and restart the connector using the same configuration. Upon restart, the 0.6.2 connectors will continue where the previous connector left off. As one might expect, all change events previously written to Kafka by the old connector will not be modified.

Breaking changes

  • Timestamp field does not handle time zone correctly DBZ-260

    • This finally fixes a long-standing bug in timestamp time zone handling. Any client that depended on this bug to provide values without the correct offset has to be fixed.

New Features

This release includes the following new features:

  • Log current position in MySQL binlog to simplify debugging DBZ-401

  • Support PostgreSQL 10 DBZ-424

  • Create a Docker image for PostgreSQL 10 DBZ-426

  • Add example for using Avro messages DBZ-430

  • Make PostGIS dependency optional DBZ-445

  • Avro console-consumer example in docs DBZ-458

  • Docker micro version tags (e.g., 0.6.1) DBZ-418

  • Create a CI job for testing with PostgreSQL 10 DBZ-427

  • Upgrade dependencies in Docker images to match Kafka 0.11.0.1 DBZ-450

Fixes

This release includes the following fixes:

  • Connector fails and stops when coming across corrupt event DBZ-217

  • [Postgres] Interval column causes exception during handling of DELETE DBZ-259

  • The scope of the Kafka Connect dependency should be "provided" DBZ-285

  • KafkaCluster#withKafkaConfiguration() does not work DBZ-323

  • MySQL connector "initial_only" snapshot mode results in CPU spike from ConnectorTask polling DBZ-396

  • Allow omitting the COLUMN keyword in ALTER TABLE MODIFY/ALTER/CHANGE DBZ-412

  • MySQL connector should handle stored procedure definitions DBZ-415

  • Support constraints without name in DDL statement DBZ-419

  • Short field that is not null throws an exception DBZ-422

  • ALTER TABLE cannot change default value of column DBZ-425

  • DDL containing text column with length specification cannot be parsed DBZ-428

  • Integer column with negative default value causes MySQL connector to crash DBZ-429

  • MySQL procedure parser handles strings and keywords as same tokens DBZ-437

  • Mongo initial sync misses records with initial.sync.max.threads > 1 DBZ-438

  • Can’t parse DDL containing PRECISION clause without parameters DBZ-439

  • Task restart triggers MBean to register twice DBZ-447

  • Remove slowness in KafkaDatabaseHistoryTest DBZ-456

Release 0.6.1 (October 26th, 2017)

Kafka compatibility

This release has been built against Kafka Connect 0.11.0.1 and has been tested with version 0.11.0.1 of the Kafka brokers. See the Kafka documentation for compatibility with other versions of Kafka brokers.

Upgrading

Before upgrading the MySQL, MongoDB, or PostgreSQL connectors, be sure to check the backward-incompatible changes that have been made since the release you were using.

When you decide to upgrade one of these connectors to 0.6.1 from any of the earlier 0.5.x, 0.4.x, 0.3.x, 0.2.x, or 0.1.x versions, first check the upgrading notes for the version you’re using. Gracefully stop the running connector, remove the old plugin files, install the 0.6.1 plugin files, and restart the connector using the same configuration. Upon restart, the 0.6.1 connectors will continue where the previous connector left off. As one might expect, all change events previously written to Kafka by the old connector will not be modified.

Breaking changes

There should be no breaking changes in this release.

New Features

This release includes the following new features:

  • Support UNSIGNED BIGINT so it is not treated as byte[] DBZ-363

  • Make Debezium build on Java 9 DBZ-227

  • Add a test for "PAGE_CHECKSUM" DDL option DBZ-336

  • Provide tutorial Docker Compose files for MongoDB and Postgres DBZ-361

  • Upgrade to latest Kafka 0.11.x DBZ-367

  • Prevent warning when building the plug-ins DBZ-370

  • Replace hard-coded version references with variables DBZ-371

  • Upgrade to latest version of mysql-binlog-connector-java DBZ-398

  • Create wal2json CI job DBZ-403

  • Travis job tests are failing due to Postgres DBZ-404

Fixes

This release includes the following fixes:

  • Avoid NullPointerException when closing MySQL connector after another error DBZ-378

  • RecordsStreamProducer#streamChanges() can die on an exception without failing the connector DBZ-380

  • Encoding to JSON does not support all MongoDB types DBZ-385

  • MySQL connector does not filter out DROP TEMP TABLE statements from DB history topic DBZ-395

  • Binlog Reader is registering MXBean when using "initial_only" snapshot mode DBZ-402

  • A column named column, even when properly escaped, causes exception DBZ-408

Release 0.6.0 (September 21st, 2017)

Kafka compatibility

This release has been built against Kafka Connect 0.11.0.0 and has been tested with version 0.11.0.0 of the Kafka brokers. See the Kafka documentation for compatibility with other versions of Kafka brokers.

Upgrading

Before upgrading the MySQL, MongoDB, or PostgreSQL connectors, be sure to check the backward-incompatible changes that have been made since the release you were using.

When you decide to upgrade one of these connectors to 0.6.0 from any of the earlier 0.5.x, 0.4.x, 0.3.x, 0.2.x, or 0.1.x versions, first check the upgrading notes for the version you’re using. Gracefully stop the running connector, remove the old plugin files, install the 0.6.0 plugin files, and restart the connector using the same configuration. Upon restart, the 0.6.0 connectors will continue where the previous connector left off. As one might expect, all change events previously written to Kafka by the old connector will not be modified.

Breaking changes

This release includes the following change that affects existing installations that capture MongoDB:

  • Add support for different MongoDB id types in key struct DBZ-306; the key payload continues to be a string in all cases, but it is now created using MongoDB’s extended JSON serialization (strict mode). So e.g. an int key will result in a key payload such as { "id" : "1234" }, a String key will yield { "id" : "\"1234\"" } and an ObjectId key will yield { "id" : "{\"$oid\" : \"596e275826f08b2730779e1f\"}" }. This allows restoring the key to the correct type from its serialized representation. Note that the id field has been renamed from "_id" to "id". This is consistent with the format used by the other Debezium connectors, and it also lets you tell apart messages written by earlier Debezium versions from messages written by 0.6 and beyond.

New Features

This release includes the following new features:

  • Use new Kafka 0.10 properties for listeners and advertised listeners DBZ-39

  • Add docker-compose handling for Debezium tutorial DBZ-127

  • Topic configuration requirements are not clearly documented DBZ-241

  • Upgrade Docker images to Kafka 0.11.0.0 DBZ-305

  • Add support for different MongoDB _id types in key struct DBZ-306

  • Add SMT implementation to convert CDC event structure to more traditional row state structure DBZ-226 (see the configuration sketch after this list)

  • Support SSL connection to MongoDB DBZ-343

  • Support DEC and FIXED types in the MySQL DDL parser DBZ-359
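
The SMT added by DBZ-226 is configured like any Kafka Connect transform. A minimal sketch, assuming the io.debezium.transforms.UnwrapFromEnvelope class name under which the SMT shipped in the 0.6.x line:

    import io.debezium.config.Configuration;

    class FlatteningSmtConfig {
        // Attach the event-flattening SMT (DBZ-226) to a connector so that
        // consumers receive plain row state instead of the full CDC envelope.
        static final Configuration CONFIG = Configuration.create()
                .with("transforms", "unwrap")
                .with("transforms.unwrap.type", "io.debezium.transforms.UnwrapFromEnvelope")
                .build();
    }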

Fixes

This release includes the following fixes:

  • MySQL snapshotter is not guaranteed to give a consistent snapshot DBZ-210

  • MySQL connector stops consuming data from binlog after server restart DBZ-219

  • Warnings and notifications from PostgreSQL are ignored by the connector DBZ-279

  • "BigDecimal has mismatching scale value for given Decimal schema" error DBZ-318

  • Views in database stop PostgreSQL connector DBZ-319

  • Don’t pass database history properties to the JDBC connection DBZ-333

  • Sanitize readings from database history topic DBZ-341

  • Support UNION for ALTER TABLE DBZ-346

  • Debezium fails to start when schema history topic contains unparseable SQL DBZ-347

  • JDBC Connection is not closed after schema refresh DBZ-356

  • MySQL integration tests should have completely isolated environment and configuration/setup files DBZ-304

Release 0.5.2 (August 17, 2017)

Kafka compatibility

This release has been built against Kafka Connect 0.10.2.0 and has been tested with version 0.10.2.0 of the Kafka brokers. See the Kafka documentation for compatibility with other versions of Kafka brokers.

Upgrading

Before upgrading the MySQL, MongoDB, or PostgreSQL connectors, be sure to check the backward-incompatible changes that have been made since the release you were using.

When you decide to upgrade one of these connectors to 0.5.2 from any of the earlier 0.4.x, 0.3.x, 0.2.x, or 0.1.x versions, first check the upgrading notes for the version you’re using. Gracefully stop the running connector, remove the old plugin files, install the 0.5.2 plugin files, and restart the connector using the same configuration. Upon restart, the 0.5.2 connectors will continue where the previous connector left off. As one might expect, all change events previously written to Kafka by the old connector will not be modified.

Breaking changes

There should be no breaking changes in this release.

New Features

There are no new features in this release.

Fixes

This release includes the following fixes:

  • Images cannot run on OpenShift online DBZ-267

  • NPE when processing null value in POINT column DBZ-284

  • Postgres Connector: error of mismatching scale value for Decimal and Numeric data types DBZ-287

  • Postgres connector fails with array columns DBZ-297

  • Postgres connector fails with quoted type names DBZ-298

  • LogicalTableRouter SMT uses wrong comparison for validation DBZ-326

  • LogicalTableRouter SMT has a broken key replacement validation DBZ-327

  • Pre-compile and simplify some regular expressions DBZ-311

  • JMX metrics for MySQL connector should be documented DBZ-293

  • PostgreSQL integration tests should have completely isolated environment and configuration/setup files DBZ-301

  • Move snapshot Dockerfile into separated directory DBZ-321

  • Cover ByLogicalTableRouter SMT in reference documentation DBZ-325

  • Add documentation for JDBC url pass-through properties DBZ-330

Release 0.5.1 (June 9, 2017)

Kafka compatibility

This release has been built against Kafka Connect 0.10.2.0 and has been tested with version 0.10.2.0 of the Kafka brokers. See the Kafka documentation for compatibility with other versions of Kafka brokers.

Upgrading

Before upgrading the MySQL, MongoDB, or PostgreSQL connectors, be sure to check the backward-incompatible changes that have been made since the release you were using.

When you decide to upgrade one of these connectors to 0.5.1 from any of the earlier 0.4.1, 0.4.0, 0.3.x, 0.2.x, or 0.1.x versions, first check the upgrading notes for the version you’re using. Gracefully stop the running connector, remove the old plugin files, install the 0.5.1 plugin files, and restart the connector using the same configuration. Upon restart, the 0.5.1 connectors will continue where the previous connector left off. As one might expect, all change events previously written to Kafka by the old connector will not be modified.

Breaking changes

This release includes the following change that affects existing installations that capture system tables:

  • MySQL connector should apply database and table filters to system dbs/tables DBZ-242

New Features

This release includes the following new features:

  • MySQL Connector should support 'Point' data type DBZ-222

  • Support tstzrange column type on Postgres DBZ-280

Fixes

This release includes the following fixes:

  • Control how Debezium connectors map tables to topics for sharding and other use cases DBZ-121

  • MySqlConnector Table and Database recommenders cause timeouts on large instances DBZ-232

  • Option to disable SSL certificate validation for PostgreSQL DBZ-244

  • Let enum types implement EnumeratedValue DBZ-262

  • The MySQL connector is failing with DDL statements DBZ-198

  • Correct MongoDB build DBZ-213

  • MongoDB connector should handle new primary better DBZ-214

  • Validate that database.server.name and database.history.kafka.topic have different values DBZ-215

  • When restarting Kafka Connect, we get io.debezium.text.ParsingException DBZ-216

  • Postgres connector crash on a database managed by Django DBZ-223

  • MySQL Connector doesn’t handle any value above '2147483647' for 'INT UNSIGNED' types DBZ-228

  • MySqlJdbcContext#userHasPrivileges() is broken for multiple privileges DBZ-229

  • Postgres Connector does not work when "sslmode" is "require" DBZ-238

  • Test PostgresConnectorIT.shouldSupportSSLParameters is incorrect DBZ-245

  • Recommender and default value broken for EnumeratedValue type DBZ-246

  • PG connector is CPU consuming DBZ-250

  • MySQL tests are interdependent DBZ-251

  • MySQL DDL parser fails on "ANALYZE TABLE" statement DBZ-253

  • Binary fields with trailing "00" are truncated DBZ-254

  • Enable Maven repository caching on Travis DBZ-274

  • Memory leak and excessive CPU usage when using materialized views DBZ-277

  • Postgres task should fail when connection to server is lost DBZ-281

  • Fix some wrong textual descriptions of default values DBZ-282

  • Apply consistent default value for Postgres port DBZ-237

  • Make Docker images run on OpenShift DBZ-240

  • Don’t mention default value for "database.server.name" DBZ-243

Release 0.5.0 (March 27, 2017)

Kafka compatibility

This release has been built against Kafka Connect 0.10.2.0 and has been tested with version 0.10.2.0 of the Kafka brokers. See the Kafka documentation for compatibility with other versions of Kafka brokers.

Upgrading

Before upgrading the MySQL, MongoDB, or PostgreSQL connectors, be sure to check the backward-incompatible changes that have been made since the release you were using.

When you decide to upgrade one of these connectors to 0.5.0 from any of the earlier 0.4.1, 0.4.0, 0.3.x, 0.2.x, or 0.1.x versions, first check the upgrading notes for the version you’re using. Gracefully stop the running connector, remove the old plugin files, install the 0.5.0 plugin files, and restart the connector using the same configuration. Upon restart, the 0.5.0 MySQL connectors will continue where the previous connector left off. As one might expect, all change events previously written to Kafka by the old connector will not be modified.

Breaking changes

This release includes the following change that is likely to affect existing installations:

  • Upgraded from Kafka 0.10.1.1 to 0.10.2.0. DBZ-203

Apart from this dependency upgrade, this release has no breaking changes since the previous release.

New Features

This release has no new features since the previous release.

Fixes

This release includes the following fixes relative to the 0.4.1 release:

  • MySQL connector now better handles DDL statements with BEGIN…END blocks, especially those that use IF() functions and CASE…WHEN statements. DBZ-198

  • MySQL connector handles 2-digit years in DATETIME, DATE, TIMESTAMP, and YEAR columns in the same way as MySQL. DBZ-205

Release 0.4.1 (March 17, 2017)

Kafka compatibility

This release has been tested with Kafka Connect 0.10.1.1 (or a subsequent API-compatible release), and is known to be incompatible with Kafka Connect 0.9.0.x due to binary incompatible changes in the Kafka 0.10.0 API. See DBZ-80 for details, and Kafka documentation for compatibility with other versions of Kafka brokers.

Upgrading

Before upgrading the MySQL, MongoDB, or PostgreSQL connectors, be sure to check the backward-incompatible changes that have been made since the release you were using.

When you decide to upgrade one of these connectors to 0.4.1 from any of the earlier 0.4.0, 0.3.x, 0.2.x, or 0.1.x versions, first check the upgrading notes for the version you’re using. Gracefully stop the running connector, remove the old plugin files, install the 0.4.1 plugin files, and restart the connector using the same configuration. Upon restart, the 0.4.1 MySQL connectors will continue where the previous connector left off. As one might expect, all change events previously written to Kafka by the old connector will not be modified.

Breaking changes

This release has no breaking changes since the previous release.

New Features

This release improves the MySQL connector’s preliminary support for Amazon RDS and Amazon Aurora (MySQL compatibility) (see DBZ-140).

Fixes

This release includes the following fixes relative to the 0.4.0 release:

  • MySQL connector now allows filtering production of DML events by GTIDs. DBZ-188

  • Support InnoDB savepoints. DBZ-196

  • Corrected MySQL DDL parser. DBZ-193, DBZ-198

  • Improved handling of MySQL connector’s built-in tables. DBZ-194

  • MySQL connector properly handles invalid/blank enum literal values. DBZ-197

  • MySQL connector properly handles reserved names as column names. DBZ-200

  • MongoDB connector properly generates event keys based upon ObjectID for updates. DBZ-201

Release 0.4.0 (February 7, 2017)

Kafka compatibility

This release has been tested with Kafka Connect 0.10.1.1 (or a subsequent API-compatible release), and is known to be incompatible with Kafka Connect 0.9.0.x due to binary incompatible changes in the Kafka 0.10.0 API. See DBZ-80 for details, and Kafka documentation for compatibility with other versions of Kafka brokers.

Upgrading

Before upgrading the MySQL connector, be sure to check the backward-incompatible changes that have been made since the release you were using.

When you decide to upgrade the MySQL connector to 0.4.0 from any of the earlier 0.3.x, 0.2.x, or 0.1.x versions, first check the upgrading notes for the version you’re using. Gracefully stop the running connector, remove the old plugin files, install the 0.4.0 plugin files, and restart the connector using the same configuration. Upon restart, the 0.4.0 MySQL connectors will continue where the previous connector left off. As one might expect, all change events previously written to Kafka by the old connector will not be modified.

Breaking changes

This release has no breaking changes since the previous release.

New Features

This release includes a new PostgreSQL connector (see DBZ-3) and adds preliminary support to the MySQL connector for Amazon RDS and Amazon Aurora (MySQL compatibility) (see DBZ-140).

Fixes

This release includes the following fixes relative to the 0.3.6 release:

  • Update Kafka dependencies to 0.10.1.1. DBZ-173

  • Update MySQL binary log client library to 0.9.0. DBZ-186

  • MySQL should apply GTID filters to database history. DBZ-185

  • Add names of database and table to the MySQL event metadata. DBZ-184

  • Add the MySQL thread ID to the MySQL event metadata. DBZ-113

  • Corrects MySQL connector to properly handle timezone information for TIMESTAMP. DBZ-183

  • Correct MySQL DDL parser to handle CREATE TRIGGER command with DEFINER clauses. DBZ-176

  • Update MongoDB Java driver and MongoDB server versions. DBZ-187

  • MongoDB connector should restart incomplete initial sync. DBZ-182

  • MySQL and PostgreSQL connectors should load JDBC driver independently of DriverManager. DBZ-177

  • Upgrade MySQL binlog client library to support new binlog events added with MySQL 5.7. DBZ-174

  • EmbeddedEngine should log all errors. DBZ-178

  • PostgreSQL containers' generated Protobuf source moved to separate directory. DBZ-179

Release 0.3.6 (December 21, 2016)

Kafka compatibility

This release requires Kafka Connect 0.10.0.1 (or a subsequent API-compatible release), and is known to be incompatible with Kafka Connect 0.9.0.x due to binary incompatible changes in the Kafka 0.10.0 API. See DBZ-80 for details, and Kafka documentation for compatibility with other versions of Kafka brokers.

Upgrading

Before upgrading the MySQL connector, be sure to check the backward-incompatible changes that have been made since the release you were using.

When you decide to upgrade the MySQL connector to 0.3.6 from any of the earlier 0.3.x, 0.2.x, or 0.1.x versions, first check the upgrading notes for the version you’re using. Gracefully stop the running connector, remove the old plugin files, install the 0.3.6 plugin files, and restart the connector using the same configuration. Upon restart, the 0.3.6 MySQL connectors will continue where the previous connector left off. As one might expect, all change events previously written to Kafka by the old connector will not be modified.

Breaking changes

This release has no breaking changes since the previous release.

New Features

There are no new features in this release.

Fixes

This release includes the following fixes to the 0.3.5 release:

  • Deleting a Debezium connector in Kafka Connect no longer causes NPEs. DBZ-138

  • MongoDB connector properly connects to a sharded cluster and the primaries for each replica set. DBZ-170, DBZ-167

  • Stopping the MySQL connector while in the middle of a snapshot now closes all MySQL resources. DBZ-166

  • MySQL connector properly parses ON UPDATE timestamp values. DBZ-169

  • MySQL connector ignores CREATE FUNCTION DDL statements. DBZ-162

  • MySQL connector properly parses CREATE TABLE script with ENUM type and default value 'b'. DBZ-160

  • MySQL connector now properly supports NVARCHAR columns. DBZ-142

  • MySQL connector’s snapshot process now uses SHOW TABLE STATUS … rather than SELECT COUNT(*) to obtain an estimate of the number of rows for each table, and can even forgo this step if all tables are to be streamed. DBZ-152

  • MySQL connector’s snapshot process ignores "artificial" database names exposed by MySQL. DBZ-164

  • MySQL connector ignores XA statements appearing in the binlog. DBZ-168

  • MySQL connector no longer expects GTID set information on older MySQL versions. DBZ-161

  • Improved the EmbeddedEngine and fixed several issues. DBZ-156

  • Upgrade to the latest Docker Maven plugin. DBZ-157

Release 0.3.5 (November 9, 2016)

Kafka compatibility

This release requires Kafka Connect 0.10.0.1 (or a subsequent API-compatible release), and is known to be incompatible with Kafka Connect 0.9.0.x due to binary incompatible changes in the Kafka 0.10.0 API. See DBZ-80 for details, and Kafka documentation for compatibility with other versions of Kafka brokers.

Upgrading

We strongly urge all users to upgrade to this release from earlier versions. In prior versions, the MySQL connector may stop without completing all updates in a transaction, and when the connector restarts it starts with the next transaction and therefore might fail to capture some of the change events in the earlier transaction. This release fixes this issue so that when restarting it will always pick up where it left off, even if that point is in the middle of a transaction. Note that this fix only takes effect once a connector is upgraded and restarted. Also, this fix does not affect or alter the content of change events produced by the connector. See the issue for more details.

Before upgrading the MySQL connector, be sure to check the backward-incompatible changes that have been made since the release you were using.

When you decide to upgrade the MySQL connector to 0.3.5 from 0.3.4, 0.3.3, 0.3.2, 0.3.1, 0.3.0, 0.2.4, 0.2.3, 0.2.2, or 0.2.1, gracefully stop the running connector, remove the old plugin files, install the 0.3.5 plugin files, and restart the connector using the same configuration. Upon restart, the 0.3.5 MySQL connectors will continue where the previous connector left off. As one might expect, all change events previously written to Kafka by the old connector will not be modified.

Breaking changes

This release has no backward-incompatible changes since the 0.3.4 release.

New Features

  • MySQL connector now supports failover to MySQL masters that are slaves of multiple other MySQL servers/clusters, as long as the new MySQL master has all of the transactions (as specified by GTID sets) the connector had previously seen. The connector can be configured to include or exclude particular GTID sources. DBZ-143
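
A minimal sketch of the GTID source filtering just mentioned, assuming the gtid.source.includes and gtid.source.excludes option names from the MySQL connector documentation (the server UUIDs are placeholders):

    import io.debezium.config.Configuration;

    class GtidSourceFilterConfig {
        // Only consider transactions originating from these GTID sources when
        // matching positions during failover; gtid.source.excludes is the inverse.
        static final Configuration CONFIG = Configuration.create()
                .with("gtid.source.includes", "36eb31c8-0000-0000-0000-000000000001,36eb31c8-0000-0000-0000-000000000002")
                .build();
    }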

Fixes

This release includes the following fixes to the 0.3.4 release:

  • Restarting the MySQL connector will no longer lose or miss events from a previous transaction that was incompletely processed prior to the earlier shutdown. The content of change events is unaffected. DBZ-144

  • Shutting down the MySQL connector’s database task and quickly terminating the Kafka Connect process may cause the connector to be restarted in a strange state when Kafka Connect is restarted, but this no longer results in a null pointer exception in the Kafka database history. DBZ-146

  • MySQL connector now has an option to treat DECIMAL and NUMERIC columns as double values rather than java.math.BigDecimal values, which are encoded in the messages by Kafka Connect in binary form. This option may result in lost precision, but makes the values far easier for consumers to work with (see the configuration sketch after this list). DBZ-147

  • MySQL connector tests now take into account daylight savings time in the expected results. DBZ-148

  • MySQL connector now properly treats BINARY columns as binary values rather than string values. DBZ-149

  • MySQL connector now handles updates to a row’s primary/unique key by issuing DELETE and tombstone events for the row with the old key, and then an INSERT event for the row with the new key. Previously, the INSERT was emitted before the DELETE. DBZ-150

  • MySQL connector now handles ENUM and SET literals with parentheses. DBZ-153
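
A minimal sketch of the DECIMAL/NUMERIC option from DBZ-147, assuming the decimal.handling.mode option name from the connector documentation:

    import io.debezium.config.Configuration;

    class DecimalHandlingConfig {
        // Emit DECIMAL/NUMERIC columns as double values instead of binary-encoded
        // BigDecimal. Lossy for high-precision values, but simpler for consumers.
        static final Configuration CONFIG = Configuration.create()
                .with("decimal.handling.mode", "double")
                .build();
    }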

Release 0.3.4 (October 25, 2016)

Kafka compatibility

This release requires Kafka Connect 0.10.0.1 (or a subsequent API-compatible release), and is known to be incompatible with Kafka Connect 0.9.0.x due to binary incompatible changes in the Kafka 0.10.0 API. See DBZ-80 for details, and Kafka documentation for compatibility with other versions of Kafka brokers.

Upgrading

Before upgrading the MySQL connector, be sure to check the backward-incompatible changes that have been made since the release you were using.

When you decide to upgrade the MySQL connector to 0.3.4 from 0.3.3, 0.3.2, 0.3.1, 0.3.0, 0.2.4, 0.2.3, 0.2.2, or 0.2.1, gracefully stop the running connector, remove the old plugin files, install the 0.3.4 plugin files, and restart the connector using the same configuration. Upon restart, the 0.3.4 MySQL connectors will continue where the previous connector left off. As one might expect, all change events previously written to Kafka by the old connector will not be modified.

Breaking changes

This release has one breaking change since the 0.3.3 release:

  • MySQL connector now produces change events with a ts_sec field that shows the correct timestamp in seconds past the epoch, as found in the MySQL server events. In previous releases the last 3 digits in this field were truncated. DBZ-139

New Features

  • MySQL connector has a new SCHEMA_ONLY snapshot mode. When the connector starts up for the first time and uses this snapshot mode, the connector captures the current table schemas without reading any data, and then proceeds to read the binlog. The resulting change event streams do not have all the data in the databases, but do include those change events that occurred after the snapshot started. This may be useful for consumers that only need to know the changes since the connector was started (see the configuration sketch after this list). DBZ-133

  • MySQL connector supports the MySQL JSON datatype. These JSON values are represented as STRING values in the change events, although the name of the field’s Kafka Connect schema is io.debezium.data.Json to signal to consumers that the string value is actually a JSON document, array, or scalar. DBZ-126

  • MySQL connector metrics are exposed via JMX. All of the Debezium Docker images can expose the JMX data via a custom port. See the Monitoring Debezium document for more details. DBZ-134
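
A minimal sketch enabling the new snapshot mode from DBZ-133, assuming the snapshot.mode option with the schema_only value as named in the connector documentation:

    import io.debezium.config.Configuration;

    class SchemaOnlySnapshotConfig {
        // Capture only table schemas at startup, then stream changes from the binlog.
        static final Configuration CONFIG = Configuration.create()
                .with("snapshot.mode", "schema_only")
                .build();
    }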

Fixes

This release includes no other fixes.

Release 0.3.3 (October 18, 2016)

Kafka compatibility

This release requires Kafka Connect 0.10.0.1 (or a subsequent API-compatible release), and is known to be incompatible with Kafka Connect 0.9.0.x due to binary incompatible changes in the Kafka 0.10.0 API. See DBZ-80 for details, and Kafka documentation for compatibility with other versions of Kafka brokers.

Upgrading

Before upgrading the MySQL connector, be sure to check the backward-incompatible changes that have been made since the release you were using.

When you decide to upgrade the MySQL connector to 0.3.3 from 0.3.2, 0.3.1, 0.3.0, 0.2.4, 0.2.3, 0.2.2, or 0.2.1, gracefully stop the running connector, remove the old plugin files, install the 0.3.3 plugin files, and restart the connector using the same configuration. Upon restart, the 0.3.3 MySQL connectors will continue where the previous connector left off. As one might expect, all change events previously written to Kafka by the old connector will not be modified.

Breaking changes

This release includes no breaking changes since the 0.3.2 release.

New Features

This release includes no new features since the 0.3.2 release.

Fixes

This release includes the following fixes to the 0.3.2 release:

  • MySQL connector now works with MySQL 5.5. DBZ-115

  • MySQL connector now handles BIT(n) column values. DBZ-123

  • MySQL connector supports failing over based on subset of GTIDs. DBZ-129

  • MySQL connector processes GTIDs with line feeds and carriage returns. DBZ-135

  • MySQL connector has improved output of GTIDs and status when reading the binary log. DBZ-130, DBZ-131

  • MySQL connector properly handles multi-character ENUM and SET values. DBZ-132

Release 0.3.2 (September 26, 2016)

Kafka compatibility

This release requires Kafka Connect 0.10.0.1 (or a subsequent API-compatible release), and is known to be incompatible with Kafka Connect 0.9.0.x due to binary incompatible changes in the Kafka 0.10.0 API. See DBZ-80 for details, and Kafka documentation for compatibility with other versions of Kafka brokers.

Upgrading

Before upgrading the MySQL connector, be sure to check the backward-incompatible changes that have been made since the release you were using.

When you decide to upgrade the MySQL connector to 0.3.2 from 0.3.1, 0.3.0, 0.2.4, 0.2.3, 0.2.2, or 0.2.1, gracefully stop the running connector, remove the old plugin files, install the 0.3.2 plugin files, and restart the connector using the same configuration. Upon restart, the 0.3.2 MySQL connectors will continue where the previous connector left off. As one might expect, all change events previously written to Kafka by the old connector will not be modified.

Breaking changes

This release includes no breaking changes since the 0.3.1 release.

New Features

This release includes no new features since the 0.3.1 release.

Fixes

This release includes the following fixes to the 0.3.1 release:

  • MySQL connector now handles zero-value dates. DBZ-114

  • MySQL connector no longer prints out password-related configuration properties; see KAFKA-4171 for a similar issue with Kafka Connect. DBZ-122

  • MySQL connector no longer causes "Error registering AppInfo mbean" warning in Kafka Connect. DBZ-124

  • MySQL connector periodically outputs status when reading binlog. DBZ-116

  • MongoDB connector periodically outputs status when reading the oplog. DBZ-117

  • MySQL connector correctly uses long for the server.id configuration property. DBZ-118

  • MySQL connector fails or warns when MySQL is not using row-level logging. DBZ-128

Release 0.3.1 (August 30, 2016)

Kafka compatibility

This release requires Kafka Connect 0.10.0.1 (or a subsequent API-compatible release), and is known to be incompatible with Kafka Connect 0.9.0.x due to binary incompatible changes in the Kafka 0.10.0 API. See DBZ-80 for details, and Kafka documentation for compatibility with other versions of Kafka brokers.

Upgrading

Before upgrading the MySQL connector, be sure to check the backward-incompatible changes that have been made since the release you were using.

When you decide to upgrade the MySQL connector to 0.3.1 from 0.3.0, 0.2.4, 0.2.3, 0.2.2, or 0.2.1, gracefully stop the running connector, remove the old plugin files, install the 0.3.1 plugin files, and restart the connector using the same configuration. Upon restart, the 0.3.1 MySQL connectors will continue where the previous connector left off. As one might expect, all change events previously written to Kafka by the old connector will not be modified.

Breaking changes

This release includes no breaking changes compared to the 0.3.0 release.

New Features

  • Added support for secure (encrypted) connections to MySQL. DBZ-99

Fixes

This release includes the following fixes to the 0.3.0 release:

  • MySQL connector now properly decodes string values from the binlog based upon the column’s character set encoding as read from the DDL statement. Upon upgrade and restart, the connector will re-read the recorded database history and associate the columns with their character sets, and any newly processed events will use properly encoded string values. As expected, previously generated events are never altered. Force a snapshot to regenerate events for the servers. DBZ-102

  • Corrected how the MySQL connector parses some DDL statements. DBZ-106

  • Corrected the MySQL connector to handle MySQL server GTID sets with newline characters. DBZ-107, DBZ-111

  • Corrected the MySQL connector’s startup logic to properly compare the MySQL SSL-related system properties, preventing them from being overwritten. The connector no longer fails when the system properties are the same, which can happen upon restart or when starting a second MySQL connector with the same keystore. DBZ-112

  • Removed unused code and test case. DBZ-108

  • Ensure that the MySQL error code and SQLSTATE are included in exceptions reported by the connector. DBZ-109

Release 0.3.0 (August 16, 2016)

Kafka compatibility

This release requires Kafka Connect 0.10.0.1 (or a subsequent API-compatible release), and is known to be incompatible with Kafka Connect 0.9.0.x due to binary incompatible changes in the Kafka 0.10.0 API. See DBZ-80 for details, and Kafka documentation for compatibility with other versions of Kafka brokers.

Upgrading

Before upgrading the MySQL connector, be sure to check the backward-incompatible changes that have been made since the release you were using.

When you decide to upgrade the MySQL connector to 0.3.0 from 0.2.4, 0.2.3, 0.2.2, or 0.2.1, gracefully stop the running connector, remove the old plugin files, install the 0.3.0 plugin files, and restart the connector using the same configuration. Upon restart, the 0.3.0 MySQL connectors will continue where the previous connector left off. As one might expect, all change events previously written to Kafka by the old connector will not be modified.

Breaking changes

This release includes one potentially breaking change from the 0.2.4 release:

  • By default the MySQL connector now represents temporal values with millisecond, microsecond, or nanosecond precision based upon the precision of the source database columns. This changes the schema name of these fields to Debezium-specific constants, and the meaning/interpretation of the literal values now depends on this schema name. To enable the previous behavior, which always used millisecond precision with only Kafka Connect logical types, set the time.precision.mode connector property to connect. DBZ-91

New Features

  • Added the MongoDB connector, which can capture and record the changes within a MongoDB replica set or MongoDB sharded cluster. In the latter case, the connector even automatically handles the addition or removal of shards. DBZ-2

Fixes

This release includes all of the fixes from the 0.2.4 release, and also includes the following fixes:

  • Corrected how the MySQL connector handles TINYINT columns. DBZ-84

  • MySQL snapshots record DDL statements as separate events on the schema change topic. DBZ-97

  • MySQL connector tolerates binlog filename missing from ROTATE events in certain situations. DBZ-95

  • The Kafka Connect schema names used in the MySQL connector’s change events are now always Avro-compatible schema names. Now, using the Avro converter with a database.server.name value, database names, or table names that contain Avro-incompatible characters produce log warnings but no longer result in errors during serialization and Avro schema generation. Whenever possible, use a database.server.name value that contains alphanumeric and underscore characters. DBZ-86

Release 0.2.4 (August 16, 2016)

Kafka compatibility

This release requires Kafka Connect 0.9.0.1 (or a subsequent API-compatible release), and is known to be incompatible with Kafka Connect 0.10.0 due to binary incompatible changes in the Kafka 0.10.0 API. See DBZ-80 for details.

Upgrading

There are no backward-incompatible changes when upgrading to 0.2.4 from 0.2.3 or 0.2.2. Gracefully stop the running 0.2.3 connector, remove the 0.2.3 plugin files, install the 0.2.4 plugin files, and restart the connector using the same configuration. Upon restart, the 0.2.4 connector will continue where the previous connector left off. As one might expect, all change events previously written to Kafka by the old connector will not be modified.

Fixes

This release includes all of the fixes from the 0.2.3 release plus the following fixes:

  • Stream result set rows when taking snapshot of MySQL databases to prevent out of memory problems with very large databases. DBZ-94

  • Add more verbose logging statements to the MySQL connector to show progress and activity during snapshots. DBZ-92

  • Corrected potential error during graceful MySQL connector shutdown. DBZ-103

Release 0.2.3 (July 26, 2016)

Kafka compatibility

This release requires Kafka Connect 0.9.0.1 (or a subsequent API-compatible release), and is known to be incompatible with Kafka Connect 0.10.0 due to binary incompatible changes in the Kafka 0.10.0 API. See DBZ-80 for details.

Upgrading

There are no backward-incompatible changes when upgrading to 0.2.3 from 0.2.2. Gracefully stop the running 0.2.2 connector, remove the 0.2.2 plugin files, install the 0.2.3 plugin files, and restart the connector using the same configuration. Upon restart, the 0.2.3 connector will continue where the previous connector left off. As one might expect, all change events previously written to Kafka by the old connector will not be modified.

Fixes

This release includes all of the fixes from the 0.2.2 release plus the following fixes:

  • Corrected parsing errors when MySQL DDL statements are generated by Liquibase. DBZ-83

  • Corrected support of MySQL TINYINT and SMALLINT types. DBZ-84, DBZ-87

  • Corrected support of MySQL temporal types, including DATE, TIME, and TIMESTAMP. DBZ-85

  • Corrected call to MySQL SHOW MASTER STATUS so that it works on pre-5.7 versions of MySQL. DBZ-82

Release 0.2.2 (June 22, 2016)

Kafka compatibility

This release can be used with Kafka Connect 0.9.0.1 (or a subsequent API-compatible release), and is known to be incompatible with Kafka Connect 0.10.0 due to binary incompatible changes in the Kafka 0.10.0 API. See DBZ-80 for details.

Upgrading

Check the backward-incompatible changes when upgrading to 0.2.2 from 0.2.1 or 0.2.0.

When you decide to upgrade the MySQL connector to 0.2.2 from 0.2.1 or 0.2.0, gracefully stop the running 0.2.1 connector, remove the 0.2.1 plugin files, install the 0.2.2 plugin files, and restart the connector using the same configuration. Upon restart, the 0.2.2 connector will continue where the previous connector left off. As one might expect, all change events previously written to Kafka by the old connector will not be modified.

Backwards-incompatible changes

  • Removed several methods in the GtidSet class inside the MySQL connector. The class was introduced in 0.2. This change will only affect applications explicitly using the class (by reusing the MySQL connector JAR), and will not affect how the MySQL connector works. DBZ-79

  • The source field within each MySQL change event now contains the binlog position of that event (rather than the next event). The structure of the change events (and the semantics of other values) remains the same as with 0.2.1. Note that this change may adversely affect clients that explicitly compare the position values across multiple events. DBZ-71

Fixes

This release includes all of the fixes from the 0.2.1 release plus the following fixes:

  • Corrected how the MySQL connector records offsets with multi-row MySQL events so that, even if the connector experiences a non-graceful shutdown (i.e., crash) after committing the offset of some of the rows from such an event, upon restart the connector will resume with the remaining rows in that multi-row event. Previously, the connector might incorrectly restart at the next event. DBZ-73

  • Shutdown of the MySQL connector immediately after a snapshot completes (before another change event is recorded) will now be properly marked as complete. DBZ-77

Release 0.2.1 (June 10, 2016)

Kafka compatibility

This release can be used with Kafka Connect 0.9.0.1 (or a subsequent API-compatible release), and is known to be incompatible with Kafka Connect 0.10.0 due to binary incompatible changes in the Kafka 0.10.0 API. See DBZ-80 for details.

Upgrading

Check the backward-incompatible changes when upgrading to 0.2.1 from 0.2.0.

When you decide to upgrade the MySQL connector to 0.2.1 from 0.2.0, gracefully stop the running 0.2.0 connector, remove the 0.2.0 plugin files, install the 0.2.1 plugin files, and restart the connector using the same configuration. Upon restart, the 0.2.1 connector will continue where the previous connector left off. As one might expect, all change events previously written to Kafka by the old connector will not be modified.

Backwards-incompatible changes

  • Corrected the names of the Avro-compliant Kafka Connect schemas generated by the MySQL connector for the before and after fields in its data change events. Consumers that require knowledge (by name) of the particular schemas used in 0.2 events may have trouble consuming events produced by the 0.2.1 (or later) connector. DBZ-72

Fixes

This release includes all of the fixes from the 0.2.0 release plus the following fixes:

  • The MySQL connector’s plugin archive now contains the MySQL JDBC driver JAR file required by the connector. DBZ-71

Release 0.2.0 (June 8, 2016)

See the complete list of issues addressed in this release.

The 0.2.0 release contained a significant issue, and 0.2.1 was quickly released to fix the problem. We recommend using a newer release than 0.2.

Backwards-incompatible changes

  • Completely redesigned the structure of event messages produced by MySQL connector and stored in Kafka topics. Events now contain an envelope structure with information about the source event, the kind of operation (create/insert, update, delete, read), the time that Debezium processed the event, and the state of the row before and/or after the event. The messages written to each topic have a distinct Avro-compliant Kafka Connect schema that reflects the structure of the source table, which may vary over time independently from the schemas of all other topics. See the documentation for details. This envelope structure will likely be used by future connectors. DBZ-50, DBZ-52, DBZ-45, DBZ-60

  • MySQL connector handles deletion of a row by recording a delete event message whose value contains the state of the removed row (and other metadata), followed by a tombstone event message with a null value to signal Kafka’s log compaction that all messages with the same key can be garbage collected. See the documentation for details. DBZ-44

  • Changed the format of events that the MySQL connector writes to its schema change topic, through which consumers can access events with the DDL statements applied to the database(s). The format change makes it possible for consumers to correlate these events with the data change events. DBZ-43, DBZ-55

New features

  • MySQL connector supports high availability MySQL cluster topologies. See the documentation for details. DBZ-37

  • MySQL connector now by default starts by performing a consistent snapshot of the schema and contents of the upstream MySQL databases in its current state. See the documentation for details about how this works and how it impacts other database clients. DBZ-31

  • MySQL connector can be configured to exclude, truncate, or mask specific columns in events (see the configuration sketch after this list). DBZ-29

  • MySQL connector events can be serialized using the Confluent Avro converter or the JSON converter. Previously, only the JSON converter could be used. DBZ-29, DBZ-63, DBZ-64
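
A minimal sketch of the column filtering from DBZ-29, assuming the column.blacklist, column.truncate.to.N.chars, and column.mask.with.N.chars option names from the connector documentation; the fully-qualified column names are placeholders:

    import io.debezium.config.Configuration;

    class ColumnFilteringConfig {
        static final Configuration CONFIG = Configuration.create()
                .with("column.blacklist", "inventory.customers.ssn")            // drop the column entirely
                .with("column.truncate.to.20.chars", "inventory.customers.bio") // truncate values to 20 characters
                .with("column.mask.with.10.chars", "inventory.customers.email") // replace values with 10 '*' characters
                .build();
    }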

Changes

  • DDL parsing framework identifies the tables affected by statements via a new listener callback. DBZ-38

  • The database.binlog configuration property was required in version 0.1 of the MySQL connector, but now it is no longer used because of the new snapshot feature. If provided, it will be quietly ignored. DBZ-31

Bug fixes

  • MySQL connector now properly parses COMMIT statements, the REFERENCES clauses of CREATE TABLE statements, and statements that use the CHARSET shorthand for CHARACTER SET. DBZ-48, DBZ-49, DBZ-57

  • MySQL connector properly handles binary values that are hexadecimal strings. DBZ-61

Release 0.1.0 (March 17, 2016)

See the complete list of issues addressed in this release.

Kafka compatibility

This release can be used with Kafka Connect 0.9.0.1 (or a subsequent API-compatible release).

Added

  • MySQL connector for ingesting change events from MySQL databases. DBZ-1

  • Kafka Connect plugin archive for MySQL connector. DBZ-17

  • Simple DDL parsing framework that can be extended and used by various connectors. DBZ-1

  • Framework for embedding a single Kafka Connect connector inside an application. DBZ-8
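
A minimal, hypothetical sketch of embedding a connector with this framework, using the EmbeddedEngine builder API as it looks in later 0.x releases (connection details and file paths are placeholders):

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    import io.debezium.config.Configuration;
    import io.debezium.embedded.EmbeddedEngine;

    public class ChangeConsumerApp {
        public static void main(String[] args) {
            // Configure a MySQL connector to run inside this JVM; offsets and
            // database history are kept in local files instead of Kafka topics.
            Configuration config = Configuration.create()
                    .with("name", "my-embedded-connector")
                    .with("connector.class", "io.debezium.connector.mysql.MySqlConnector")
                    .with("offset.storage", "org.apache.kafka.connect.storage.FileOffsetBackingStore")
                    .with("offset.storage.file.filename", "/tmp/offsets.dat")
                    .with("offset.flush.interval.ms", "10000")
                    .with("database.hostname", "localhost")
                    .with("database.port", "3306")
                    .with("database.user", "debezium")
                    .with("database.password", "dbz")
                    .with("database.server.id", "85744")
                    .with("database.server.name", "my-app-db")
                    .with("database.history", "io.debezium.relational.history.FileDatabaseHistory")
                    .with("database.history.file.filename", "/tmp/dbhistory.dat")
                    .build();

            // Every change event is handed to this callback as a Kafka Connect SourceRecord.
            EmbeddedEngine engine = EmbeddedEngine.create()
                    .using(config)
                    .notifying(record -> System.out.println(record))
                    .build();

            // The engine is a Runnable; run it on its own thread and stop it on shutdown.
            ExecutorService executor = Executors.newSingleThreadExecutor();
            executor.execute(engine);
            Runtime.getRuntime().addShutdownHook(new Thread(engine::stop));
        }
    }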
