Release Notes for Debezium 1.0

All notable changes for Debezium releases are documented in this file. Release numbers follow Semantic Versioning.

Release 1.0.3.Final (March 12th, 2020)

Kafka compatibility

This release has been built against Kafka Connect 2.4.0 and has been tested with version 2.4.0 of the Kafka brokers. See the Kafka documentation for compatibility with other versions of Kafka brokers.

Upgrading

Before upgrading the MySQL, MongoDB, PostgreSQL or SQL Server connectors, be sure to check the backward-incompatible changes that have been made since the release you were using.

When you decide to upgrade one of these connectors to 1.0.3.Final from any of the earlier 1.0.x, 0.10.x, 0.9.x, 0.8.x, 0.7.x, 0.6.x, 0.5.x, 0.4.x, 0.3.x, 0.2.x, or 0.1.x versions, first check the upgrading notes for the version you’re using. Gracefully stop the running connector, remove the old plugin files, install the 1.0.3.Final plugin files, and restart the connector using the same configuration. Upon restart, the 1.0.3.Final connectors will continue where the previous connector left off. As one might expect, all change events previously written to Kafka by the old connector will not be modified.

If you are using our Docker images, do not forget to pull them fresh from the Docker registry.

Breaking changes

There are no breaking changes in this release.

New Features

  • ExtractNewRecordState - add.source.fields should strip spaces from comma-separated list of fields DBZ-1772

  • Add ability to insert fields from op field in ExtractNewRecordState SMT DBZ-1452
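
As a rough illustration of the add.source.fields change (DBZ-1772), a connector configuration applying this SMT might look like the sketch below. The transform alias unwrap and the listed source fields are placeholders, and the type value assumes the SMT class io.debezium.transforms.ExtractNewRecordState; check the SMT documentation for the exact option names in your version, including the option that exposes the op field (DBZ-1452).

    transforms=unwrap
    transforms.unwrap.type=io.debezium.transforms.ExtractNewRecordState
    # spaces after the commas are now stripped before the field names are resolved
    transforms.unwrap.add.source.fields=table, lsn, txId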

Fixes

This release includes the following fixes:

  • Debezium skips messages after restart DBZ-1824

  • Unable to listen to binlogs for tables with a period in the table names DBZ-1834

  • Redundant calls to refresh schema when using user defined types in PostgreSQL DBZ-1849

  • postgres oid is too large to cast to integer DBZ-1850

Other changes

This release also includes the following other changes:

  • Test on top of AMQ Streams DBZ-924

  • Verify correctness of JMX metrics DBZ-1664

  • Test with AMQ Streams 1.4 connector operator DBZ-1714

  • hstore.handling.mode docs seem inaccurate (and map shows null values) DBZ-1758

  • Misleading warning message about uncommitted offsets DBZ-1840

  • Modularize tutorial DBZ-1845

  • Modularize the monitoring doc DBZ-1851

  • Document PostgreSQL connector metrics DBZ-1858

Release 1.0.2.Final (February 27th, 2020)

Kafka compatibility

This release has been built against Kafka Connect 2.4.0 and has been tested with version 2.4.0 of the Kafka brokers. See the Kafka documentation for compatibility with other versions of Kafka brokers.

Upgrading

Before upgrading the MySQL, MongoDB, PostgreSQL or SQL Server connectors, be sure to check the backward-incompatible changes that have been made since the release you were using.

When you decide to upgrade one of these connectors to 1.0.2.Final from any of the earlier 1.0.x, 0.10.x, 0.9.x, 0.8.x, 0.7.x, 0.6.x, 0.5.x, 0.4.x, 0.3.x, 0.2.x, or 0.1.x versions, first check the upgrading notes for the version you’re using. Gracefully stop the running connector, remove the old plugin files, install the 1.0.2.Final plugin files, and restart the connector using the same configuration. Upon restart, the 1.0.2.Final connectors will continue where the previous connector left off. As one might expect, all change events previously written to Kafka by the old connector will not be modified.

If you are using our Docker images, do not forget to pull them fresh from the Docker registry.

Breaking changes

The default value of the MySQL config option gtid.new.channel.position was originally latest, which should never be used in production. The default has therefore been changed to earliest, and the config option is scheduled for removal (DBZ-1705). The MySQL config option event.deserialization.failure.handling.mode has been renamed to event.processing.failure.handling.mode to make the naming consistent with other connectors (DBZ-1826).
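
For example, a MySQL connector configuration that used the old option name would be updated roughly as sketched below; the value warn is purely illustrative, and any explicit gtid.new.channel.position setting can simply be removed to pick up the new earliest default:

    # before
    event.deserialization.failure.handling.mode=warn
    # from 1.0.2.Final onwards
    event.processing.failure.handling.mode=warn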

New Features

  • Add option to skip unprocessable event DBZ-1760

Fixes

This release includes the following fixes:

  • Postgres Connector losing data on restart due to commit() being called before events produced to Kafka DBZ-1766

  • TINYINT(1) value range restricted on snapshot. DBZ-1773

  • MySQL source connector fails while parsing new AWS RDS internal event DBZ-1775

  • Inconsistency in MySQL TINYINT mapping definition DBZ-1800

  • Supply of message.key.columns disables primary keys. DBZ-1825

Other changes

This release also includes the following other changes:

  • Backport debezium-testing module to 1.0.x DBZ-1819

Release 1.0.1.Final (February 7th, 2020)

Kafka compatibility

This release has been built against Kafka Connect 2.4.0 and has been tested with version 2.4.0 of the Kafka brokers. See the Kafka documentation for compatibility with other versions of Kafka brokers.

Upgrading

Before upgrading the MySQL, MongoDB, PostgreSQL or SQL Server connectors, be sure to check the backward-incompatible changes that have been made since the release you were using.

When you decide to upgrade one of these connectors to 1.0.1.Final from any of the earlier 1.0.x, 0.10.x, 0.9.x, 0.8.x, 0.7.x, 0.6.x, 0.5.x, 0.4.x, 0.3.x, 0.2.x, or 0.1.x versions, first check the upgrading notes for the version you’re using. Gracefully stop the running connector, remove the old plugin files, install the 1.0.1.Final plugin files, and restart the connector using the same configuration. Upon restart, the 1.0.1.Final connectors will continue where the previous connector left off. As one might expect, all change events previously written to Kafka by the old connector will not be modified.

If you are using our Docker images, do not forget to pull them fresh from the Docker registry.

Breaking changes

Before updating the DecoderBufs logical decoding plug-in in your Postgres database to this new version (or when pulling the debezium/postgres container image for that new version), it is necessary to upgrade the Debezium Postgres connector to 1.0.1.Final or 1.1.0.Alpha2 or later (DBZ-1052).

New Features

There are no new features in this release.

Fixes

This release includes the following fixes:

  • Make slot creation in PostgreSQL more resilient DBZ-1684

  • Support boolean as default for INT(1) column in MySQL DBZ-1689

  • SIGNAL statement is not recognized by DDL parser DBZ-1691

  • When using in embedded mode MYSQL connector fails DBZ-1693

  • Connector error after adding a new not null column to table in Postgres DBZ-1698

  • MySQL connector fails to parse trigger DDL DBZ-1699

  • MySQL connector doesn’t use default value of connector.port DBZ-1712

  • ANTLR parser cannot parse MariaDB Table DDL with TRANSACTIONAL attribute DBZ-1733

  • Postgres connector does not support proxied connections DBZ-1738

  • GET DIAGNOSTICS statement not parseable DBZ-1740

  • MySql password logged out in debug log level DBZ-1748

Other changes

This release also includes the following other changes:

  • Add tests for using fallback values with default REPLICA IDENTITY DBZ-1158

  • Migrate all attribute name/value pairs to Antora component descriptors DBZ-1687

  • Remove overlap of different documentation config files DBZ-1729

  • Don’t fail upon receiving unknown operation events DBZ-1747

  • Upgrade to Mongo Java Driver version 3.12.1 DBZ-1761

Release 1.0.0.Final (December 18th, 2019)

Kafka compatibility

This release has been built against Kafka Connect 2.4.0 and has been tested with version 2.4.0 of the Kafka brokers. See the Kafka documentation for compatibility with other versions of Kafka brokers.

Upgrading

Before upgrading the MySQL, MongoDB, PostgreSQL or SQL Server connectors, be sure to check the backward-incompatible changes that have been made since the release you were using.

When you decide to upgrade one of these connectors to 1.0.0.Final from any of the earlier 1.0.x, 0.10.x, 0.9.x, 0.8.x, 0.7.x, 0.6.x, 0.5.x, 0.4.x, 0.3.x, 0.2.x, or 0.1.x versions, first check the upgrading notes for the version you’re using. Gracefully stop the running connector, remove the old plugin files, install the 1.0.0.Final plugin files, and restart the connector using the same configuration. Upon restart, the 1.0.0.Final connectors will continue where the previous connector left off. As one might expect, all change events previously written to Kafka by the old connector will not be modified.

If you are using our Docker images, do not forget to pull them fresh from the Docker registry.

Breaking changes

The incubating SerDes type io.debezium.serde.Serdes introduced in Debezium 1.0.0.CR1 has been renamed to io.debezium.serde.DebeziumSerdes to avoid conflicting with the Apache Kafka type of the same simple name (DBZ-1670).

Like the other relational connectors, the MySQL connector now supports the option snapshot.lock.timeout.ms, which defaults to a timeout of 10 seconds. When upgrading a connector and taking new snapshots, this timeout may now apply where the connector previously would have waited indefinitely to obtain the required locks. In that case, adjust the timeout to your specific requirements (DBZ-1671).
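
If the 10 second default is too short for your environment, the timeout can be raised in the MySQL connector configuration, for example (the value is in milliseconds and chosen purely for illustration):

    snapshot.lock.timeout.ms=30000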

New Features

  • Support streaming changes from SQL Server "AlwaysOn" replica DBZ-1642

Fixes

This release includes the following fixes:

  • Interpret Sql Server timestamp timezone correctly DBZ-1643

  • Sorting a HashSet only to put it back into a HashSet DBZ-1650

  • Function with RETURN only statement cannot be parsed DBZ-1659

  • Enum value resolution not working while streaming with wal2json or pgoutput DBZ-1680

Other changes

This release also includes the following other changes:

  • Globally ensure in tests that records can be serialized DBZ-824

  • Allow upstream testsuite to run with productised dependencies DBZ-1658

  • Upgrade to latest PostgreSQL driver 42.2.9 DBZ-1660

  • Generate warning for connectors with automatically dropped slots DBZ-1666

  • Regression test for MySQL dates in snapshot being off by one DBZ-1667

  • Rename Serdes to DebeziumSerdes DBZ-1670

  • Build against Apache Kafka 2.4 DBZ-1676

  • When PostgreSQL schema refresh fails, allow error to include root cause DBZ-1677

  • Prepare testsuite for RHEL 8 protobuf plugin RPM DBZ-1536

Release 1.0.0.CR1 (December 10th, 2019)

Kafka compatibility

This release has been built against Kafka Connect 2.3.1 and has been tested with version 2.3.1 of the Kafka brokers. See the Kafka documentation for compatibility with other versions of Kafka brokers.

Upgrading

Before upgrading the MySQL, MongoDB, PostgreSQL or SQL Server connectors, be sure to check the backward-incompatible changes that have been made since the release you were using.

When you decide to upgrade one of these connectors to 1.0.0.CR1 from any of the earlier 1.0.x, 0.10.x, 0.9.x, 0.8.x, 0.7.x, 0.6.x, 0.5.x, 0.4.x, 0.3.x, 0.2.x, or 0.1.x versions, first check the upgrading notes for the version you’re using. Gracefully stop the running connector, remove the old plugin files, install the 1.0.0.CR1 plugin files, and restart the connector using the same configuration. Upon restart, the 1.0.0.CR1 connectors will continue where the previous connector left off. As one might expect, all change events previously written to Kafka by the old connector will not be modified.

If you are using our Docker images, do not forget to pull them fresh from the Docker registry.

Breaking changes

For the SQL Server and Oracle connectors, the snapshot mode initial_schema_only has been deprecated and will be removed in a future version. Please use schema_only instead (DBZ-585).
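
Affected SQL Server and Oracle connector configurations should be updated accordingly; for instance, a configuration using the connectors' snapshot.mode option would change as follows:

    # deprecated
    snapshot.mode=initial_schema_only
    # use instead
    snapshot.mode=schema_only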

New Features

  • Transaction level TRANSACTION_READ_COMMITTED not implemented DBZ-1480

  • Provide change event JSON Serde for Kafka Streams DBZ-1533

  • Provide MongoDB 4.2 image DBZ-1626

  • Support PostgreSQL enum types DBZ-920

  • Upgrade container images to Java 11 DBZ-969

  • Support MongoDB 4.0 transaction DBZ-1215

  • Make connection timeout configurable in MySQL connection URL DBZ-1632

  • Support for arrays of uuid (_uuid) DBZ-1637

  • Add test matrix for SQL Server DBZ-1644

Fixes

This release includes the following fixes:

  • Empty history topic treated as not existing DBZ-1201

  • Incorrect handling of type alias DBZ-1413

  • Blacklisted columns are not being filtered out when generating a Kafka message from a CDC event DBZ-1617

  • IoUtil Bugfix DBZ-1621

  • VariableLatch Bugfix DBZ-1622

  • The oracle connector scans too many objects while attempting to determine the most recent ddl time DBZ-1631

  • Connector does not update its state correctly when processing compound ALTER statement DBZ-1645

  • Outbox event router shouldn’t lower-case topic names DBZ-1648

Other changes

This release also includes the following other changes:

  • Consolidate configuration parameters DBZ-585

  • Merge the code for upscaling decimal values with scale lower than defined DBZ-825

  • Make Debezium project Java 11 compatible DBZ-1402

  • Run SourceClear DBZ-1602

  • Extend MySQL to test Enum with column.propagate.source.type DBZ-1636

  • Sticky ToC hides tables in PG connector docs DBZ-1652

  • Antora generates build warning DBZ-1654

Release 1.0.0.Beta3 (November 14th, 2019)

Kafka compatibility

This release has been built against Kafka Connect 2.3.1 and has been tested with version 2.3.1 of the Kafka brokers. See the Kafka documentation for compatibility with other versions of Kafka brokers.

Upgrading

Before upgrading the MySQL, MongoDB, PostgreSQL or SQL Server connectors, be sure to check the backward-incompatible changes that have been made since the release you were using.

When you decide to upgrade one of these connectors to 1.0.0.Beta3 from any of the earlier 1.0.x, 0.10.x, 0.9.x, 0.8.x, 0.7.x, 0.6.x, 0.5.x, 0.4.x, 0.3.x, 0.2.x, or 0.1.x versions, first check the upgrading notes for the version you’re using. Gracefully stop the running connector, remove the old plugin files, install the 1.0.0.Beta3 plugin files, and restart the connector using the same configuration. Upon restart, the 1.0.0.Beta3 connectors will continue where the previous connector left off. As one might expect, all change events previously written to Kafka by the old connector will not be modified.

If you are using our Docker images, do not forget to pull them fresh from the Docker registry.

Breaking changes

The configuration parameter drop_on_stop of the PostgreSQL connector has been renamed to drop.on.stop (DBZ-1595) to make it consistent with other parameter names.

New Features

  • Standardize source info for Cassandra connector DBZ-1408

  • Clarify presence of old values when not using REPLICA IDENTITY FULL DBZ-1518

  • Propagate replicator exception so failure reason is available from Connect DBZ-1583

  • Envelope methods should accept Instant instead of long for "ts" parameter DBZ-1607

Fixes

This release includes the following fixes:

  • Debezium Erroneously Reporting No Tables to Capture DBZ-1519

  • Debezium Oracle connector attempting to analyze tables DBZ-1569

  • Null values in "before" are populated with "__debezium_unavailable_value" DBZ-1570

  • Postgresql 11+ pgoutput plugin error with truncate DBZ-1576

  • Regression of postgres Connector times out in schema discovery for DBs with many tables DBZ-1579

  • The ts_ms value is not correct during the snapshot processing DBZ-1588

  • LogInterceptor is not thread-safe DBZ-1590

  • Heartbeats are not generated for non-whitelisted tables DBZ-1592

  • Config tombstones.on.delete is missing from SQL Server Connector configDef DBZ-1593

  • AWS RDS Performance Insights screwed a little by non-closed statement in "SELECT COUNT(1) FROM pg_publication" DBZ-1596

  • Update Postgres documentation to use ts_ms instead of ts_usec DBZ-1610

  • Exception while trying snapshot schema of non-whitelisted table DBZ-1613

Other changes

This release also includes the following other changes:

  • Auto-format source code upon build DBZ-1392

  • Update documentation based on Technology Preview DBZ-1543

  • Reduce size of Postgres container images DBZ-1549

  • Debezium should not use SHARE UPDATE EXCLUSIVE MODE locks DBZ-1559

  • Allows tags to be passed to CI jobs DBZ-1578

  • Upgrade MongoDB driver to 3.11 DBZ-1597

  • Run formatter validation in Travis CI DBZ-1603

  • Place formatting rules into Maven module DBZ-1605

  • Upgrade to Kafka 2.3.1 DBZ-1612

  • Allow per-connector setting for schema/catalog precedence in TableId use DBZ-1555

Release 1.0.0.Beta2 (October 24th, 2019)

Kafka compatibility

This release has been built against Kafka Connect 2.3.0 and has been tested with version 2.3.0 of the Kafka brokers. See the Kafka documentation for compatibility with other versions of Kafka brokers.

Upgrading

Before upgrading the MySQL, MongoDB, PostgreSQL or SQL Server connectors, be sure to check the backward-incompatible changes that have been made since the release you were using.

When you decide to upgrade one of these connectors to 1.0.0.Beta2 from any of the earlier 1.0.x, 0.10.x, 0.9.x, 0.8.x, 0.7.x, 0.6.x, 0.5.x, 0.4.x, 0.3.x, 0.2.x, or 0.1.x versions, first check the upgrading notes for the version you’re using. Gracefully stop the running connector, remove the old plugin files, install the 1.0.0.Beta2 plugin files, and restart the connector using the same configuration. Upon restart, the 1.0.0.Beta2 connectors will continue where the previous connector left off. As one might expect, all change events previously written to Kafka by the old connector will not be modified.

If you are using our Docker images, do not forget to pull them fresh from the Docker registry.

Breaking changes

There are no breaking changes in this release.

New Features

  • Update tooling image to use latest kafkacat DBZ-1522

  • Validate configured replication slot names DBZ-1525

  • Make password field to be hidden for MS SQL connector DBZ-1554

  • Raise a warning about growing backlog DBZ-1565

  • Support Postgres LTREE columns DBZ-1336

Fixes

This release includes the following fixes:

  • Aborting snapshot due to error when last running 'UNLOCK TABLES': Only REPEATABLE READ isolation level is supported for START TRANSACTION WITH CONSISTENT SNAPSHOT in RocksDB Storage Engine. DBZ-1428

  • MySQL Connector fails to parse DDL containing the keyword VISIBLE for index definitions DBZ-1534

  • MySQL connector fails to parse DDL - GRANT SESSION_VARIABLES_ADMIN… DBZ-1535

  • MySQL connector: The primary key cannot reference a non-existent column 'id' in table '*' DBZ-1560

  • Incorrect source struct’s collection field when dot is present in collection name DBZ-1563

  • Transaction left open after db snapshot DBZ-1564

Other changes

This release also includes the following other changes:

  • Add Postgres 12 to testing matrix DBZ-1542

  • Update Katacoda learning experience DBZ-1548

Release 1.0.0.Beta1 (October 17th, 2019)

Kafka compatibility

This release has been built against Kafka Connect 2.3.0 and has been tested with version 2.3.0 of the Kafka brokers. See the Kafka documentation for compatibility with other versions of Kafka brokers.

Upgrading

Before upgrading the MySQL, MongoDB, PostgreSQL or SQL Server connectors, be sure to check the backward-incompatible changes that have been made since the release you were using.

When you decide to upgrade one of these connectors to 1.0.0.Beta1 from any of the earlier 0.10.x, 0.9.x, 0.8.x, 0.7.x, 0.6.x, 0.5.x, 0.4.x, 0.3.x, 0.2.x, or 0.1.x versions, first check the upgrading notes for the version you’re using. Gracefully stop the running connector, remove the old plugin files, install the 1.0.0.Beta1 plugin files, and restart the connector using the same configuration. Upon restart, the 1.0.0.Beta1 connectors will continue where the previous connector left off. As one might expect, all change events previously written to Kafka by the old connector will not be modified.

If you are using our Docker images, do not forget to pull them fresh from the Docker registry.

Breaking changes

The ExtractNewDocumentState and EventRouter SMTs now propagate any heartbeat or schema change messages unchanged instead of dropping them as before. This is to ensure consistency with the ExtractNewRecordState SMT (DBZ-1513).

The new Postgres connector option interval.handling.mode controls whether INTERVAL columns are exported as microseconds (the previous behavior, which remains the default) or as ISO 8601 formatted strings (DBZ-1498). The following upgrade order must be maintained when existing connectors capture INTERVAL columns:

  1. Upgrade the Debezium Kafka Connect Postgres connector

  2. Upgrade the logical decoding plug-in installed in the database

  3. (Optionally) switch interval.handling.mode to string

In particular, avoid upgrading the logical decoding plug-in before the connector, as doing so would cause no values to be exported for INTERVAL columns.
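
Once both the connector and the logical decoding plug-in have been upgraded, switching to the ISO 8601 string representation is a single configuration change, sketched below; if the option is omitted, the microsecond representation remains in effect:

    interval.handling.mode=string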

New Features

  • Provide alternative mapping for INTERVAL DBZ-1498

  • Ensure message keys have correct field order DBZ-1507

  • Image incorrect on Deploying Debezium on OpenShift DBZ-1545

  • Indicate table locking issues in log DBZ-1280

Fixes

This release includes the following fixes:

  • Debezium fails to snapshot large databases DBZ-685

  • Connector Postgres runs out of disk space DBZ-892

  • Debezium-MySQL Connector Fails while parsing AWS RDS internal events DBZ-1492

  • MongoDB ExtractNewDocumentState SMT blocks heartbeat messages DBZ-1513

  • pgoutput string decoding depends on JVM default charset DBZ-1532

  • Whitespaces not stripped from table.whitelist DBZ-1546

Other changes

This release also includes the following other changes:

  • Upgrade to latest JBoss Parent POM DBZ-675

  • CheckStyle: Flag missing whitespace DBZ-1341

  • Upgrade to the latest Checkstyle plugin DBZ-1355

  • CheckStyle: no code after closing braces DBZ-1391

  • Add "adopters" file DBZ-1460

  • Add Google Analytics to Antora-published pages DBZ-1526

  • Create 0.10 RPM for postgres-decoderbufs DBZ-1540

  • Postgres documentation fixes DBZ-1544