Release Notes for Debezium 0.10
All notable changes for Debezium releases are documented in this file. Release numbers follow Semantic Versioning.
- Release 0.10.0.Final (October 2nd, 2019)
- Release 0.10.0.CR2 (September 26th, 2019)
- Release 0.10.0.CR1 (September 10th, 2019)
- Release 0.10.0.Beta4 (August 16th, 2019)
- Release 0.10.0.Beta3 (July 23rd, 2019)
- Release 0.10.0.Beta2 (June 27th, 2019)
- Release 0.10.0.Beta1 (June 11th, 2019)
- Release 0.10.0.Alpha2 (June 3rd, 2019)
- Release 0.10.0.Alpha1 (May 28th, 2019)
Release 0.10.0.Final (October 2nd, 2019)
See the complete list of issues: https://issues.redhat.com/secure/ReleaseNote.jspa?projectId=12317320&version=12339267
Kafka compatibility
This release has been built against Kafka Connect 2.3.0 and has been tested with version 2.3.0 of the Kafka brokers. See the Kafka documentation for compatibility with other versions of Kafka brokers.
Upgrading
Before upgrading the MySQL, MongoDB, PostgreSQL or SQL Server connectors, be sure to check the backward-incompatible changes that have been made since the release you were using.
When you decide to upgrade one of these connectors to 0.10.0.Final from any of the earlier 0.10.x, 0.9.x, 0.8.x, 0.7.x, 0.6.x, 0.5.x, 0.4.x, 0.3.x, 0.2.x, or 0.1.x versions, first check the upgrading notes for the version you’re using. Gracefully stop the running connector, remove the old plugin files, install the 0.10.0.Final plugin files, and restart the connector using the same configuration. Upon restart, the 0.10.0.Final connectors will continue where the previous connector left off. As one might expect, all change events previously written to Kafka by the old connector will not be modified.
If you are using our Docker images, do not forget to pull them fresh from the registry.
Release 0.10.0.CR2 (September 26th, 2019)
See the complete list of issues: https://issues.redhat.com/secure/ReleaseNote.jspa?projectId=12317320&version=12342807
Kafka compatibility
This release has been built against Kafka Connect 2.3.0 and has been tested with version 2.3.0 of the Kafka brokers. See the Kafka documentation for compatibility with other versions of Kafka brokers.
Upgrading
Before upgrading the MySQL, MongoDB, PostgreSQL or SQL Server connectors, be sure to check the backward-incompatible changes that have been made since the release you were using.
When you decide to upgrade one of these connectors to 0.10.0.CR2 from any of the earlier 0.10.x, 0.9.x, 0.8.x, 0.7.x, 0.6.x, 0.5.x, 0.4.x, 0.3.x, 0.2.x, or 0.1.x versions, first check the upgrading notes for the version you’re using. Gracefully stop the running connector, remove the old plugin files, install the 0.10.0.CR2 plugin files, and restart the connector using the same configuration. Upon restart, the 0.10.0.CR2 connectors will continue where the previous connector left off. As one might expect, all change events previously written to Kafka by the old connector will not be modified.
If you are using our Docker images, do not forget to pull them fresh from the registry.
Breaking changes
The data type MicroDuration, used to represent INTERVAL columns (as supported by the Postgres and Oracle connectors), now uses int64 rather than float64, as no fractional microsecond values are expected. For cases where the microseconds of an interval would overflow int64, an alternative String-based mapping will be provided in a future Debezium release, which will allow interval values to be represented exactly in terms of their year, month, day, etc. parts (see DBZ-1498).
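As a hedged illustration (the table column `lease_period` and its value are invented for this example), an INTERVAL holding two hours would now be emitted as an int64 count of microseconds under the `io.debezium.time.MicroDuration` semantic type, i.e. 2 × 3,600 × 1,000,000 = 7,200,000,000:

```json
{
  "schema": {
    "type": "int64",
    "name": "io.debezium.time.MicroDuration",
    "optional": true,
    "field": "lease_period"
  },
  "payload": {
    "lease_period": 7200000000
  }
}
```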
The behavior of unchanged TOASTed columns has changed in this release (see DBZ-1367). Please upgrade the PostgreSQL connector in conjunction with the Decoderbufs plugin to guarantee that these columns are handled correctly. Please refer to the PostgreSQL connector documentation for more information on unchanged TOASTed columns.
New Features
- Allow user to customize key for DB tables through configuration DBZ-1015
- Replace Custom Schema with Pluggable Serializers via KC Schema in Cassandra Connector DBZ-1405
- Porting insert fields from source struct feature to ExtractNewDocumentState SMT DBZ-1442
- Add column_id column to metadata section in messages in Kafka topic DBZ-1483
Fixes
This release includes the following fixes:
- Cannot use Avro for fields with dash in name DBZ-1044
- Detection of unsupported include-unchanged-toast parameter is failing DBZ-1399
- Possible issue with Debezium not properly shutting down PG connections during Connect rebalance DBZ-1426
- Common error when PG connector cannot connect is confusing DBZ-1427
- Postgres connector does not honor `publication.name` configuration DBZ-1436
- Wrong interrupt handling DBZ-1438
- CREATE DATABASE and TABLE statements do not support DEFAULT charset DBZ-1470
- Avoid NPE at runtime in EventRouter when incorrect configuration is given DBZ-1495
- java.time.format.DateTimeParseException: java.time.format.DateTimeParseException DBZ-1501
Other changes
This release also includes other changes:
- Publish container images to quay.io DBZ-1178
- Document installation of DecoderBufs plug-in via RPM on Fedora DBZ-1286
- Fix intermittently failing Postgres tests DBZ-1383
- Add MongoDB 4.2 to testing matrix DBZ-1389
- Upgrade to latest Postgres driver DBZ-1462
- Use old SMT name in 0.9 docs DBZ-1471
- Speak of "primary" and "secondary" nodes in the Postgres docs DBZ-1472
- PostgreSQL `snapshot.mode` connector option description should include 'exported' DBZ-1473
- Update example tutorial to show using Avro configuration at connector level DBZ-1474
- Upgrade protobuf to version 3.8.0 DBZ-1475
- Logging can be confusing when using fallback replication stream methods DBZ-1479
- Remove info on when an option was introduced from the docs DBZ-1493
- Unstable Mysql connector Integration test (shouldProcessCreateUniqueIndex) DBZ-1500
- Update PostgreSQL documentation DBZ-1503
- DocumentTest#shouldCreateArrayFromValues() fails on Windows DBZ-1508
Release 0.10.0.CR1 (September 10th, 2019)
See the complete list of issues.
Kafka compatibility
This release has been built against Kafka Connect 2.3.0 and has been tested with version 2.3.0 of the Kafka brokers. See the Kafka documentation for compatibility with other versions of Kafka brokers.
Upgrading
Before upgrading the MySQL, MongoDB, PostgreSQL or SQL Server connectors, be sure to check the backward-incompatible changes that have been made since the release you were using.
When you decide to upgrade one of these connectors to 0.10.0.CR1 from any of the earlier 0.10.x, 0.9.x, 0.8.x, 0.7.x, 0.6.x, 0.5.x, 0.4.x, 0.3.x, 0.2.x, or 0.1.x versions, first check the upgrading notes for the version you’re using. Gracefully stop the running connector, remove the old plugin files, install the 0.10.0.CR1 plugin files, and restart the connector using the same configuration. Upon restart, the 0.10.0.CR1 connectors will continue where the previous connector left off. As one might expect, all change events previously written to Kafka by the old connector will not be modified.
If you are using our Docker images, do not forget to pull them fresh from the registry.
Breaking changes
The ProtoBuf library used by the PostgreSQL plugin has been upgraded.
The SQL Server connector now supports Kafka Connect’s temporal datatypes. At the same time, the default temporal mode is no longer `adaptive_time_microseconds` but `adaptive`. The `adaptive_time_microseconds` mode is no longer supported.
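As a minimal sketch (connector name, host, and database are placeholders), a SQL Server connector configuration would now use the `adaptive` temporal mode; since it is the new default, the `time.precision.mode` line may also simply be omitted:

```json
{
  "name": "sqlserver-connector",
  "config": {
    "connector.class": "io.debezium.connector.sqlserver.SqlServerConnector",
    "database.hostname": "sqlserver",
    "database.dbname": "testDB",
    "time.precision.mode": "adaptive"
  }
}
```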
Fixes
This release includes the following fixes:
- Date conversion broken if date more than 3000 year DBZ-949
- Overflowed Timestamp in Postgres Connection DBZ-1205
- Debezium does not expect a year larger than 9999 DBZ-1255
- ExportedSnapshotter and InitialOnlySnapshotter should not always execute a snapshot DBZ-1437
- Source Fields Not Present on Delete Rewrite DBZ-1448
- NPE raises when a new connector has nothing to commit DBZ-1457
- MongoDB connector throws NPE on "op=n" DBZ-1464
Release 0.10.0.Beta4 (August 16th, 2019)
See the complete list of issues.
Kafka compatibility
This release has been built against Kafka Connect 2.3.0 and has been tested with version 2.3.0 of the Kafka brokers. See the Kafka documentation for compatibility with other versions of Kafka brokers.
Upgrading
Before upgrading the MySQL, MongoDB, PostgreSQL or SQL Server connectors, be sure to check the backward-incompatible changes that have been made since the release you were using.
When you decide to upgrade one of these connectors to 0.10.0.Beta4 from any of the earlier 0.10.x, 0.9.x, 0.8.x, 0.7.x, 0.6.x, 0.5.x, 0.4.x, 0.3.x, 0.2.x, or 0.1.x versions, first check the upgrading notes for the version you’re using. Gracefully stop the running connector, remove the old plugin files, install the 0.10.0.Beta4 plugin files, and restart the connector using the same configuration. Upon restart, the 0.10.0.Beta4 connectors will continue where the previous connector left off. As one might expect, all change events previously written to Kafka by the old connector will not be modified.
If you are using our Docker images, do not forget to pull them fresh from the registry.
Breaking changes
The default format of the message values produced by the outbox event router has been changed: by default it solely contains the value of the `payload` column. In order to add the `eventType` value that previously was part of the message value, use the "additional field" configuration option with a placement of `envelope`. In this case, the message value will be a complex structure containing the `payload` key and one additional key for each further field.
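As a sketch of such an SMT configuration — the property names below assume the renamed outbox router options (see DBZ-1289); verify the exact names against the outbox event router documentation for your version — re-adding the event type to the envelope might look like:

```json
{
  "transforms": "outbox",
  "transforms.outbox.type": "io.debezium.transforms.outbox.EventRouter",
  "transforms.outbox.table.fields.additional.placement": "type:envelope:eventType"
}
```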
Fixes
This release includes the following fixes:
- Debezium for MySQL fails on GRANT DELETE ON <table> DBZ-1411
- Debezium for MySQL tries to flush a table for a database not in the database whitelist DBZ-1414
- Table scan is performed anyway even if snapshot.mode is set to initial_schema_only DBZ-1417
- SMT ExtractNewDocumentState does not support Heartbeat events DBZ-1430
- Postgres connector does not honor `publication.name` configuration DBZ-1436
Other changes
This release also includes other changes:
- Issue with debezium embedded documentation DBZ-393
- Refactor Postgres connector to be based on new framework classes DBZ-777
- Don’t obtain new connection each time when getting xmin position DBZ-1381
- Unify handling of attributes in EventRouter SMT DBZ-1385
- DockerHub: show container specific README files DBZ-1387
- Remove unused dependencies from Cassandra connector DBZ-1424
- Simplify custom engine name parsing grammar DBZ-1432
Release 0.10.0.Beta3 (July 23rd, 2019)
See the complete list of issues.
Kafka compatibility
This release has been built against Kafka Connect 2.3.0 and has been tested with version 2.3.0 of the Kafka brokers. See the Kafka documentation for compatibility with other versions of Kafka brokers.
Upgrading
Before upgrading the MySQL, MongoDB, PostgreSQL or SQL Server connectors, be sure to check the backward-incompatible changes that have been made since the release you were using.
When you decide to upgrade one of these connectors to 0.10.0.Beta3 from any of the earlier 0.10.x, 0.9.x, 0.8.x, 0.7.x, 0.6.x, 0.5.x, 0.4.x, 0.3.x, 0.2.x, or 0.1.x versions, first check the upgrading notes for the version you’re using. Gracefully stop the running connector, remove the old plugin files, install the 0.10.0.Beta3 plugin files, and restart the connector using the same configuration. Upon restart, the 0.10.0.Beta3 connectors will continue where the previous connector left off. As one might expect, all change events previously written to Kafka by the old connector will not be modified.
If you are using our Docker images, do not forget to pull them fresh from the registry.
Breaking changes
The value of heartbeat messages has changed: it now contains a field with the timestamp of the heartbeat. Note that the message format of heartbeat messages is considered an implementation detail of Debezium, i.e. it may be altered incompatibly, and consumers should not rely on any specific format.
New Features
- Handle tables without primary keys DBZ-916
- Define exposed connector metrics in MySQL DBZ-1120
- Set heartbeat interval for the binlog reader DBZ-1338
- Outbox router should skip heartbeat messages by default DBZ-1388
- Introduce numberOfEventsInError metric DBZ-1222
- Add option to skip table locks when snapshotting DBZ-1238
- Explore built-in logical decoding added in Postgres 10 DBZ-766
- Support deletion events in the outbox routing SMT DBZ-1320
- Expose metric for progress of DB history recovery DBZ-1356
Fixes
This release includes the following fixes:
- Incorrect offset may be committed despite unparseable DDL statements DBZ-599
- SavePoints are getting stored in history topic DBZ-794
- delete message "op:d" on tables with unique combination of 2 primary keys (composite keys), the d records are not sent DBZ-1180
- When a MongoDB collection hasn’t had activity for a period of time an initial sync is triggered DBZ-1198
- Restore compatibility with Kafka 1.x DBZ-1361
- no viable alternative at input 'LOCK DEFAULT' DBZ-1376
- NullPointer Exception on getReplicationSlotInfo for Postgres DBZ-1380
- CHARSET is not supported for CAST function DBZ-1397
- Aria engine is not known by Debezium parser DBZ-1398
- Debezium does not get the first change after creating the replication slot in PostgreSQL DBZ-1400
- Built-in database filter throws NPE DBZ-1409
- Error processing RDS heartbeats DBZ-1410
- PostgreSQL Connector generates false alarm for empty password DBZ-1379
Other changes
This release also includes other changes:
- Developer Preview Documentation DBZ-1284
- Upgrade to Apache Kafka 2.3 DBZ-1358
- Stabilize test executions on CI DBZ-1362
- Handling tombstone emission option consistently DBZ-1365
- Avoid creating unnecessary type metadata instances; only init once per column DBZ-1366
- Fix tests to run more reliably on Amazon RDS DBZ-1371
Release 0.10.0.Beta2 (June 27th, 2019)
See the complete list of issues.
Kafka compatibility
This release has been built against Kafka Connect 2.3.0 and has been tested with version 2.3.0 of the Kafka brokers. See the Kafka documentation for compatibility with other versions of Kafka brokers.
Upgrading
Before upgrading the MySQL, MongoDB, PostgreSQL or SQL Server connectors, be sure to check the backward-incompatible changes that have been made since the release you were using.
When you decide to upgrade one of these connectors to 0.10.0.Beta2 from any of the earlier 0.10.x, 0.9.x, 0.8.x, 0.7.x, 0.6.x, 0.5.x, 0.4.x, 0.3.x, 0.2.x, or 0.1.x versions, first check the upgrading notes for the version you’re using. Gracefully stop the running connector, remove the old plugin files, install the 0.10.0.Beta2 plugin files, and restart the connector using the same configuration. Upon restart, the 0.10.0.Beta2 connectors will continue where the previous connector left off. As one might expect, all change events previously written to Kafka by the old connector will not be modified.
If you are using our Docker images, do not forget to pull them fresh from the registry.
Fixes
This release includes the following fixes:
- Events for TRUNCATE TABLE not being emitted DBZ-708
- Connector consumes huge amount of memory DBZ-1065
- Exception when starting the connector on Kafka Broker 0.10.1.0 DBZ-1270
- Raise warning when renaming table causes it to be captured or not captured any longer DBZ-1278
- no viable alternative at input 'ALTER TABLE `documents` RENAME INDEX' DBZ-1329
- MySQL DDL parser - issue with triggers and NEW DBZ-1331
- MySQL DDL parser - issue with COLLATE in functions DBZ-1332
- Setting "include.unknown.datatypes" to true works for streaming but not during snapshot DBZ-1335
- PostgreSQL db with materialized view failing during snapshot DBZ-1345
- Switch RecordsStreamProducer to use non-blocking stream call DBZ-1347
- Can’t parse create definition on the mysql connector DBZ-1348
- String literal should support utf8mb3 charset DBZ-1349
- NO_AUTO_CREATE_USER sql mode is not supported in MySQL 8 DBZ-1350
- Incorrect assert for invalid timestamp check in MySQL 8 DBZ-1353
Release 0.10.0.Beta1 (June 11th, 2019)
See the complete list of issues.
Kafka compatibility
This release has been built against Kafka Connect 2.2.1 and has been tested with version 2.2.1 of the Kafka brokers. See the Kafka documentation for compatibility with other versions of Kafka brokers.
Upgrading
Before upgrading the MySQL, MongoDB, PostgreSQL or SQL Server connectors, be sure to check the backward-incompatible changes that have been made since the release you were using.
When you decide to upgrade one of these connectors to 0.10.0.Beta1 from any of the earlier 0.10.x, 0.9.x, 0.8.x, 0.7.x, 0.6.x, 0.5.x, 0.4.x, 0.3.x, 0.2.x, or 0.1.x versions, first check the upgrading notes for the version you’re using. Gracefully stop the running connector, remove the old plugin files, install the 0.10.0.Beta1 plugin files, and restart the connector using the same configuration. Upon restart, the 0.10.0.Beta1 connectors will continue where the previous connector left off. As one might expect, all change events previously written to Kafka by the old connector will not be modified.
If you are using our Docker images, do not forget to pull them fresh from the registry.
New Features
- Issue a warning for filters not matching any table/database DBZ-1242
Fixes
This release includes the following fixes:
- Multiple cdc entries with exactly the same commitLsn and changeLsn DBZ-1152
- PostGIS does not work in Alpine images DBZ-1307
- Processing MongoDB document contains UNDEFINED type causes exception with MongoDB Unwrap SMT DBZ-1315
- Partial zero date datetime/timestamp will fail snapshot DBZ-1318
- Default value set null when modify a column from nullable to not null DBZ-1321
- Out-of-order chunks don’t initiate commitTime DBZ-1323
- NullPointerException when receiving noop event DBZ-1317
Release 0.10.0.Alpha2 (June 3rd, 2019)
See the complete list of issues.
Kafka compatibility
This release has been built against Kafka Connect 2.2.0 and has been tested with version 2.2.0 of the Kafka brokers. See the Kafka documentation for compatibility with other versions of Kafka brokers.
Upgrading
Before upgrading the MySQL, MongoDB, PostgreSQL or SQL Server connectors, be sure to check the backward-incompatible changes that have been made since the release you were using.
When you decide to upgrade one of these connectors to 0.10.0.Alpha2 from any of the earlier 0.10.x, 0.9.x, 0.8.x, 0.7.x, 0.6.x, 0.5.x, 0.4.x, 0.3.x, 0.2.x, or 0.1.x versions, first check the upgrading notes for the version you’re using. Gracefully stop the running connector, remove the old plugin files, install the 0.10.0.Alpha2 plugin files, and restart the connector using the same configuration. Upon restart, the 0.10.0.Alpha2 connectors will continue where the previous connector left off. As one might expect, all change events previously written to Kafka by the old connector will not be modified.
If you are using our Docker images, do not forget to pull them fresh from the registry.
Breaking changes
The snapshot marker has been overhauled DBZ-1295. Originally, the snapshot marker was a field with a boolean value indicating whether the record was obtained via snapshot or not. It has now been turned into a three-state string enumeration indicating whether the record came from a snapshot (`true`), is the last record in the snapshot (`last`), or comes from streaming (`false`).
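For illustration, the `source` block of the final change event emitted during a snapshot (all other source fields omitted here) would now carry the string-valued marker:

```json
{
  "source": {
    "connector": "mysql",
    "snapshot": "last"
  }
}
```

Earlier snapshot records carry `"snapshot": "true"`, and records captured from streaming carry `"snapshot": "false"`.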
Other changes
This release also includes other changes:
- Replace Predicate<Column> with ColumnNameFilter DBZ-1092
- Upgrade ZooKeeper to 3.4.14 DBZ-1298
- Upgrade Docker tooling image DBZ-1301
- Upgrade Debezium Postgres Example image to 11 DBZ-1302
- Create profile to build assemblies without drivers DBZ-1303
- Modify release pipeline to use new Dockerfiles DBZ-1304
- Add 3rd party licences DBZ-1306
- Remove unused methods from ReplicationStream DBZ-1310
Release 0.10.0.Alpha1 (May 28th, 2019)
See the complete list of issues.
Kafka compatibility
This release has been built against Kafka Connect 2.2.0 and has been tested with version 2.2.0 of the Kafka brokers. See the Kafka documentation for compatibility with other versions of Kafka brokers.
Upgrading
Before upgrading the MySQL, MongoDB, PostgreSQL or SQL Server connectors, be sure to check the backward-incompatible changes that have been made since the release you were using.
When you decide to upgrade one of these connectors to 0.10.0.Alpha1 from any of the earlier 0.10.x, 0.9.x, 0.8.x, 0.7.x, 0.6.x, 0.5.x, 0.4.x, 0.3.x, 0.2.x, or 0.1.x versions, first check the upgrading notes for the version you’re using. Gracefully stop the running connector, remove the old plugin files, install the 0.10.0.Alpha1 plugin files, and restart the connector using the same configuration. Upon restart, the 0.10.0.Alpha1 connectors will continue where the previous connector left off. As one might expect, all change events previously written to Kafka by the old connector will not be modified.
If you are using our Docker images, do not forget to pull them fresh from the registry.
Breaking changes
All connectors now share the common source info block fields DBZ-596. This led to the renaming and/or change of content of some of the source fields. An option `source.struct.version=v1` is provided to use the legacy source info block.
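A minimal configuration fragment opting back into the legacy source info block (connector class shown for a MySQL connector; adapt to the connector you run):

```json
{
  "config": {
    "connector.class": "io.debezium.connector.mysql.MySqlConnector",
    "source.struct.version": "v1"
  }
}
```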
Unwrap SMTs have been renamed DBZ-677 to better express their use.
The MySQL connector now consistently handles `database.history.store.only.monitored.tables.ddl` for both snapshot and streaming mode DBZ-683. This leads to changes in the contents of the database history topic.
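For example, to keep only DDL statements for monitored tables in the history topic, the option can be enabled explicitly — a sketch; check the MySQL connector documentation for the default value in your version:

```json
{
  "config": {
    "database.history.store.only.monitored.tables.ddl": "true"
  }
}
```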
The legacy MySQL DDL parser has been removed DBZ-736 and fully replaced with the ANTLR-based parser.
Oracle and SQL Server connectors now contain database, schema, and table names in the source info block DBZ-875.
MongoDB now contains both database and collection name in the source info block DBZ-1175. The original `ns` field has been dropped.
The `NumberOfEventsSkipped` metric is now available only for the MySQL connector DBZ-1209.
All deprecated features and configuration options DBZ-1234 have been removed from the codebase and are no longer available.
Outbox routing SMT option names have been renamed to follow a consistent naming schema DBZ-1289.
Fixes
This release includes the following fixes:
- MySQL connection with client authentication does not work DBZ-1228
- Unhandled exception prevents snapshot.mode : when_needed functioning DBZ-1244
- MySQL connector stops working with a NullPointerException error DBZ-1246
- CREATE INDEX can fail for non-monitored tables after connector restart DBZ-1264
- Create a spec file for RPM for postgres protobuf plugin DBZ-1272
- Last transaction events get duplicated on EmbeddedEngine MySQL connector restart DBZ-1276
Other changes
This release also includes other changes:
- Misleading description for column.mask.with.length.chars parameter DBZ-1290
- Clean up integration tests under integration-tests DBZ-263
- Consolidate DDL parser tests DBZ-733
- Document "database.ssl.mode" option DBZ-985
- Synchronize MySQL grammar with upstream grammar DBZ-1127
- Add FAQ entry about -XX:+UseStringDeduplication JVM flag DBZ-1139
- Test and handle time 24:00:00 supported by PostgreSQL DBZ-1164
- Define final record format for MySQL, Postgres, SQL Server and MongoDB DBZ-1235
- Improve error reporting in case of misaligned schema and data DBZ-1257
- Adding missing contributors to COPYRIGHT.txt DBZ-1259
- Automate contributor check during release pipeline DBZ-1282