Release Notes for Debezium 0.9

Release 0.9.5.Final (May 2nd, 2019)

Kafka compatibility

This release has been built against Kafka Connect 2.2.0 and has been tested with version 2.2.0 of the Kafka brokers. See the Kafka documentation for compatibility with other versions of Kafka brokers.

Upgrading

Before upgrading the MySQL, MongoDB, PostgreSQL or SQL Server connectors, be sure to check the backward-incompatible changes that have been made since the release you were using.

When you decide to upgrade one of these connectors to 0.9.5.Final from any of the earlier 0.9.x, 0.8.x, 0.7.x, 0.6.x, 0.5.x, 0.4.x, 0.3.x, 0.2.x, or 0.1.x versions, first check the upgrading notes for the version you’re using. Gracefully stop the running connector, remove the old plugin files, install the 0.9.5.Final plugin files, and restart the connector using the same configuration. Upon restart, the 0.9.5.Final connectors will continue where the previous connector left off. As one might expect, all change events previously written to Kafka by the old connector will not be modified.

If you are using our Docker images, don't forget to pull them fresh from the Docker registry.

Breaking changes

There are no breaking changes in this release.

New Features

  • Upgrade to Kafka 2.2.0 DBZ-1227

  • Ability to specify batch size during snapshot DBZ-1247

  • Postgresql ARRAY support DBZ-1076

  • Add support for macaddr and macaddr8 PostgreSQL column types DBZ-1193

Fixes

This release includes the following fixes:

  • Failing to specify value for database.server.name results in invalid Kafka topic name DBZ-212

  • Escape sequence handling needs to be unified DBZ-481

  • Postgres Connector times out in schema discovery for DBs with many tables DBZ-1214

  • Oracle connector: JDBC transaction can only capture single DML record DBZ-1223

  • Enable enumeration options to contain escaped characters or commas. DBZ-1226

  • Antlr parser fails on column named with MODE keyword DBZ-1233

  • Lost precision for timestamp with timezone DBZ-1236

  • NullPointerException due to optional value for commitTime DBZ-1241

  • Default value for datetime(0) is incorrectly handled DBZ-1243

  • Postgres connector failing because empty state data is being stored in offsets topic DBZ-1245

  • Default value for Bit does not work for larger values DBZ-1249

  • Microsecond precision is lost when reading timetz data from Postgres. DBZ-1260

Other changes

This release also includes the following changes:

  • Zookeeper image documentation does not describe txns mountpoint DBZ-1231

  • Parse enum and set options with Antlr DBZ-739

Release 0.9.4.Final (April 11th, 2019)

Kafka compatibility

This release has been built against Kafka Connect 2.1.1 and has been tested with version 2.1.1 of the Kafka brokers. See the Kafka documentation for compatibility with other versions of Kafka brokers.

Upgrading

Before upgrading the MySQL, MongoDB, PostgreSQL or SQL Server connectors, be sure to check the backward-incompatible changes that have been made since the release you were using.

When you decide to upgrade one of these connectors to 0.9.4.Final from any of the earlier 0.9.x, 0.8.x, 0.7.x, 0.6.x, 0.5.x, 0.4.x, 0.3.x, 0.2.x, or 0.1.x versions, first check the upgrading notes for the version you’re using. Gracefully stop the running connector, remove the old plugin files, install the 0.9.4.Final plugin files, and restart the connector using the same configuration. Upon restart, the 0.9.4.Final connectors will continue where the previous connector left off. As one might expect, all change events previously written to Kafka by the old connector will not be modified.

If you are using our Docker images, don't forget to pull them fresh from the Docker registry.

Breaking changes

There are no breaking changes in this release.

New Features

  • Add MySQL Connector metric to expose "number of filtered events" DBZ-1206

  • Support TLS 1.2 for MySQL DBZ-1208

  • Create new MysqlConnector metric exposing whether the connector is tracking offsets using GTIDs DBZ-1221

  • Add support for columns of type INET DBZ-1189

Fixes

This release includes the following fixes:

  • Incorrect value for datetime field for '0001-01-01 00:00:00' DBZ-1143

  • PostgreSQL DecoderBufs crash when working with geometries in "public" schema DBZ-1144

  • [postgres] differing logic between snapshot and streams for create record DBZ-1163

  • Error while deserializing binlog event DBZ-1191

  • MySQL connector throws an exception when capturing an invalid datetime DBZ-1194

  • Error when altering an ENUM column with CHARACTER SET DBZ-1203

  • Mysql: Getting ERROR Failed due to error: connect.errors.ConnectException: For input string: "false" DBZ-1204

  • MySQL connection timeout after bootstrapping a new table DBZ-1207

  • SLF4J usage issues DBZ-1212

  • JDBC Connection Not Closed in MySQL Connector Snapshot Reader DBZ-1218

  • Support FLOAT(p) column definition style DBZ-1220

Other changes

This release also includes the following changes:

  • Add WhitespaceAfter check to Checkstyle DBZ-362

  • Document RDS Postgres wal_level behavior DBZ-1219

Release 0.9.3.Final (March 25th, 2019)

Kafka compatibility

This release has been built against Kafka Connect 2.1.1 and has been tested with version 2.1.1 of the Kafka brokers. See the Kafka documentation for compatibility with other versions of Kafka brokers.

Upgrading

Before upgrading the MySQL, MongoDB, PostgreSQL or SQL Server connectors, be sure to check the backward-incompatible changes that have been made since the release you were using.

When you decide to upgrade one of these connectors to 0.9.3.Final from any of the earlier 0.9.x, 0.8.x, 0.7.x, 0.6.x, 0.5.x, 0.4.x, 0.3.x, 0.2.x, or 0.1.x versions, first check the upgrading notes for the version you’re using. Gracefully stop the running connector, remove the old plugin files, install the 0.9.3.Final plugin files, and restart the connector using the same configuration. Upon restart, the 0.9.3.Final connectors will continue where the previous connector left off. As one might expect, all change events previously written to Kafka by the old connector will not be modified.

If you are using our Docker images, don't forget to pull them fresh from the Docker registry.

Breaking changes

There are no breaking changes in this release.

New Features

  • Support Outbox SMT as part of Debezium core DBZ-1169

  • Add support for partial recovery from lost slot in postgres DBZ-1082
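As a minimal sketch, the outbox routing SMT (DBZ-1169) is enabled in the connector configuration by registering the `io.debezium.transforms.outbox.EventRouter` transformation; further routing options are omitted here:

```properties
# Register the outbox event router SMT shipped with Debezium core (DBZ-1169);
# additional routing options (table fields, topic naming) are not shown
transforms=outbox
transforms.outbox.type=io.debezium.transforms.outbox.EventRouter
```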

Fixes

This release includes the following fixes:

  • Postgresql Snapshot with a table that has > 8192 records hangs DBZ-1161

  • HStores fail to Snapshot properly DBZ-1162

  • NullPointerException When there are multiple tables in different schemas in the whitelist DBZ-1166

  • Cannot set offset.flush.interval.ms via docker entrypoint DBZ-1167

  • Missing Oracle OCI library is not reported as error DBZ-1170

  • RecordsStreamProducer forgets to convert commitTime from nanoseconds to microseconds DBZ-1174

  • MongoDB Connector doesn’t fail on invalid hosts configuration DBZ-1177

  • Handle NPE errors when trying to create history topic against confluent cloud DBZ-1179

  • The Postgres wal2json streaming and non-streaming decoders do not process empty events DBZ-1181

  • Can’t continue after snapshot is done DBZ-1184

  • ParsingException for SERIAL keyword DBZ-1185

  • STATS_SAMPLE_PAGES config cannot be parsed DBZ-1186

  • MySQL Connector generates false alarm for empty password DBZ-1188

Other changes

This release also includes the following changes:

  • Ensure no brace-less if() blocks are used in the code base DBZ-1039

  • Align Oracle DDL parser code to use the same structure as MySQL DBZ-1192

Release 0.9.2.Final (February 22nd, 2019)

Kafka compatibility

This release has been built against Kafka Connect 2.1.1 and has been tested with version 2.1.1 of the Kafka brokers. See the Kafka documentation for compatibility with other versions of Kafka brokers.

Upgrading

Before upgrading the MySQL, MongoDB, PostgreSQL or SQL Server connectors, be sure to check the backward-incompatible changes that have been made since the release you were using.

When you decide to upgrade one of these connectors to 0.9.2.Final from any of the earlier 0.9.x, 0.8.x, 0.7.x, 0.6.x, 0.5.x, 0.4.x, 0.3.x, 0.2.x, or 0.1.x versions, first check the upgrading notes for the version you’re using. Gracefully stop the running connector, remove the old plugin files, install the 0.9.2.Final plugin files, and restart the connector using the same configuration. Upon restart, the 0.9.2.Final connectors will continue where the previous connector left off. As one might expect, all change events previously written to Kafka by the old connector will not be modified.

If you are using our Docker images, don't forget to pull them fresh from the Docker registry.

Breaking changes

There are no breaking changes in this release.

New Features

  • Add snapshotting mode NEVER for MongoDB connector DBZ-867

  • Allow passing of arbitrary parameters when replication slot is started DBZ-1130
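For illustration, the two new options above might be configured like this; the values are illustrative, and the parameter names shown for the replication slot (`add-tables`, `include-lsn`) are wal2json plug-in options:

```properties
# MongoDB connector: skip the initial snapshot and start streaming
# from the oplog directly (DBZ-867)
snapshot.mode=never

# PostgreSQL connector: pass arbitrary parameters to the logical decoding
# plug-in when the replication slot is started (DBZ-1130); parameters are
# given as semicolon-separated name=value pairs
slot.stream.params=add-tables=public.table_a;include-lsn=true
```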

Fixes

This release includes the following fixes:

  • Integer default value for DECIMAL column fails with Avro Converter DBZ-1077

  • connect binds only to hostname interface DBZ-1108

  • Connector fails to connect to binlog on connectors rebalance, throws ServerException DBZ-1132

  • Fail to parse MySQL TIME with values bigger than 23:59:59.999999 DBZ-1137

  • Test dependencies shouldn’t be part of the SQL Server connector archive DBZ-1138

  • Emit correctly-typed fallback values for replica identity DEFAULT DBZ-1141

  • Unexpected exception while streaming changes from row with unchanged toast DBZ-1146

  • SQL syntax error near '"gtid_purged"' DBZ-1147

  • Postgres delete operations throwing DataException DBZ-1149

  • Antlr parser fails on column names that are keywords DBZ-1150

  • SqlServerConnector doesn’t work with table names with "special characters" DBZ-1153

Other changes

This release also includes the following changes:

  • Describe topic-level settings to ensure event consumption when log compaction is enabled DBZ-1136

  • Upgrade binlog client to 0.19.0 DBZ-1140

  • Upgrade kafkacat to 1.4.0-RC1 DBZ-1148

  • Upgrade Avro connector version to 5.1.2 DBZ-1156

  • Upgrade to Kafka 2.1.1 DBZ-1157

Release 0.9.1.Final (February 13th, 2019)

Kafka compatibility

This release has been built against Kafka Connect 2.1.0 and has been tested with version 2.1.0 of the Kafka brokers. See the Kafka documentation for compatibility with other versions of Kafka brokers.

Upgrading

Before upgrading the MySQL, MongoDB, or PostgreSQL connectors, be sure to check the backward-incompatible changes that have been made since the release you were using.

When you decide to upgrade one of these connectors to 0.9.1.Final from any of the earlier 0.9.x, 0.8.x, 0.7.x, 0.6.x, 0.5.x, 0.4.x, 0.3.x, 0.2.x, or 0.1.x versions, first check the upgrading notes for the version you’re using. Gracefully stop the running connector, remove the old plugin files, install the 0.9.1.Final plugin files, and restart the connector using the same configuration. Upon restart, the 0.9.1.Final connectors will continue where the previous connector left off. As one might expect, all change events previously written to Kafka by the old connector will not be modified.

If you are using our Docker images, don't forget to pull them fresh from the Docker registry.

Breaking changes

There are no breaking changes in this release.

New Features

  • Provide new container image with tooling for examples and demos DBZ-1125

Fixes

This release includes the following fixes:

  • "BigDecimal has mismatching scale value for given Decimal schema" error due to permissive MySQL DDL DBZ-983

  • Primary key changes cause UnsupportedOperationException DBZ-997

  • java.lang.IllegalArgumentException: timeout value is negative DBZ-1019

  • Connector consumes huge amount of memory DBZ-1065

  • Strings.join() doesn’t apply conversation for first element DBZ-1112

  • NPE if database history filename has no parent folder DBZ-1122

  • Generated columns not supported by DDL parser DBZ-1123

  • Advancing LSN in the first iteration - possible data loss DBZ-1128

  • Incorrect LSN comparison can cause out of order processing DBZ-1131

Other changes

This release also includes the following changes:

  • io.debezium.connector.postgresql.PostgisGeometry shouldn’t use DatatypeConverter DBZ-962

  • Schema change events should be of type ALTER when table is modified DBZ-1121

  • Wal2json ISODateTimeFormatTest fails with a locale other than Locale.ENGLISH DBZ-1126

Known issues

A potential race condition was identified in the upstream library used for MySQL's binary log processing. The problem manifests as issue DBZ-1132. If you are affected by it, we propose as a workaround to increase the Kafka Connect configuration options task.shutdown.graceful.timeout.ms and connect.rebalance.timeout.ms. If the problem persists, disable the keepalive thread via the Debezium configuration option connect.keep.alive.
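A minimal sketch of the proposed workaround; all values are illustrative, not tuned recommendations:

```properties
# Kafka Connect worker configuration: give tasks more time to shut down
# cleanly and to rejoin the group during rebalances
task.shutdown.graceful.timeout.ms=30000
connect.rebalance.timeout.ms=60000

# Debezium MySQL connector configuration (separate file/registration):
# disable the keepalive thread only if the problem persists
connect.keep.alive=false
```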

Release 0.9.0.Final (February 5th, 2019)

Kafka compatibility

This release has been built against Kafka Connect 2.1.0 and has been tested with version 2.1.0 of the Kafka brokers. See the Kafka documentation for compatibility with other versions of Kafka brokers.

Upgrading

Before upgrading the MySQL, MongoDB, or PostgreSQL connectors, be sure to check the backward-incompatible changes that have been made since the release you were using.

When you decide to upgrade one of these connectors to 0.9.0.Final from any of the earlier 0.9.x, 0.8.x, 0.7.x, 0.6.x, 0.5.x, 0.4.x, 0.3.x, 0.2.x, or 0.1.x versions, first check the upgrading notes for the version you’re using. Gracefully stop the running connector, remove the old plugin files, install the 0.9.0.Final plugin files, and restart the connector using the same configuration. Upon restart, the 0.9.0.Final connectors will continue where the previous connector left off. As one might expect, all change events previously written to Kafka by the old connector will not be modified.

If you are using our Docker images, don't forget to pull them fresh from the Docker registry.

Breaking changes

There are no breaking changes in this release.

New Features

  • Expose more useful metrics and improve Grafana dashboard DBZ-1040

Fixes

This release includes the following fixes:

  • Allow to use drop-slot-on-close option with wal2json DBZ-1111

  • MySqlDdlParser does not support adding multiple partitions in a single ALTER TABLE …​ ADD PARTITION statement DBZ-1113

  • Debezium fails to take a lock during snapshot DBZ-1115

  • Data from Postgres partitioned table written to wrong topic during snapshot DBZ-1118

Other changes

This release also includes the following changes:

  • Clarify whether DDL parser is actually needed for SQL Server connector DBZ-1096

  • Add design description to SqlServerStreamingChangeEventSource DBZ-1097

  • Put out message about missing LSN at WARN level DBZ-1116

Release 0.9.0.CR1 (January 19th, 2019)

Kafka compatibility

This release has been built against Kafka Connect 2.1.0 and has been tested with version 2.1.0 of the Kafka brokers. See the Kafka documentation for compatibility with other versions of Kafka brokers.

Upgrading

Before upgrading the MySQL, MongoDB, or PostgreSQL connectors, be sure to check the backward-incompatible changes that have been made since the release you were using.

When you decide to upgrade one of these connectors to 0.9.0.CR1 from any of the earlier 0.9.x, 0.8.x, 0.7.x, 0.6.x, 0.5.x, 0.4.x, 0.3.x, 0.2.x, or 0.1.x versions, first check the upgrading notes for the version you’re using. Gracefully stop the running connector, remove the old plugin files, install the 0.9.0.CR1 plugin files, and restart the connector using the same configuration. Upon restart, the 0.9.0.CR1 connectors will continue where the previous connector left off. As one might expect, all change events previously written to Kafka by the old connector will not be modified.

If you are using our Docker images, don't forget to pull them fresh from the Docker registry.

Breaking changes

The SQL Server connector has reworked semantics of snapshot modes (DBZ-947).
The SQL Server connector also adds a new field to offsets in streaming mode (DBZ-1090), which could prevent seamless upgrading between versions. We recommend re-registering and restarting the connector.
The SQL Server connector has changed the name of message schemas (DBZ-1089); the superfluous database name has been dropped.

New Features

  • Snapshot isolation level overhaul DBZ-947

  • Kafka docker image - support for topic cleanup policy DBZ-1038

  • Optimize sys.fn_cdc_map_lsn_to_time() calls DBZ-1078

  • Fallback to restart_lsn if confirmed_flush_lsn is not found DBZ-1081

  • table.whitelist option update for an existing connector doesn’t work DBZ-175

  • EmbeddedEngine should allow for more flexible record consumption DBZ-1080

  • Client-side column blacklisting in SQL Server connector DBZ-1067

  • column.propagate.source.type missing scale DBZ-1073

Fixes

This release includes the following fixes:

  • ArrayIndexOutOfBoundsException when a column is deleted (Postgres) DBZ-996

  • Messages from tables without PK and with REPLICA IDENTITY FULL DBZ-1029

  • Inconsistent schema name in streaming and snapshotting phase DBZ-1051

  • "watch-topic" and "create-topic" commands fail DBZ-1057

  • Antlr Exception: mismatched input '.' expecting {<EOF>, '--'} DBZ-1059

  • MySQL JDBC Context sets the wrong truststore password DBZ-1062

  • Unsigned smallint column in mysql failing due to out of range error DBZ-1063

  • NULL Values are replaced by default values even in NULLABLE fields DBZ-1064

  • Uninformative "Found previous offset" log DBZ-1066

  • SQL Server connector does not persist LSNs in Kafka DBZ-1069

  • [debezium] ERROR: option "include-unchanged-toast" = "0" is unknown DBZ-1083

  • Debezium fails when consuming table without primary key with turned on topic routing DBZ-1086

  • Wrong message key and event used when primary key is updated DBZ-1088

  • Connect schema name is wrong for SQL Server DBZ-1089

  • Incorrect LSN tracking - possible data loss DBZ-1090

  • Race condition in EmbeddedEngine shutdown DBZ-1103

Other changes

This release also includes the following changes:

  • Intermittent failures in RecordsStreamProducerIT#shouldPropagateSourceColumnTypeToSchemaParameter() DBZ-781

  • Assert MongoDB supported versions DBZ-988

  • Describe how to do DDL changes for SQL Server DBZ-993

  • Verify version of wal2json on RDS DBZ-1056

  • Move SQL Server connector to main repo DBZ-1084

  • Don’t enqueue further records when connector is stopping DBZ-1099

  • Race condition in SQLServer tests during snapshot phase DBZ-1101

  • Remove columnNames field from TableImpl DBZ-1105

  • column.propagate.source.type missing scale DBZ-387

  • write catch-up binlog reader DBZ-388

  • changes to Snapshot and Binlog readers to allow for concurrent/partial running DBZ-389

Release 0.9.0.Beta2 (December 19th, 2018)

Kafka compatibility

This release has been built against Kafka Connect 2.1.0 and has been tested with version 2.1.0 of the Kafka brokers. See the Kafka documentation for compatibility with other versions of Kafka brokers.

Upgrading

Before upgrading the MySQL, MongoDB, or PostgreSQL connectors, be sure to check the backward-incompatible changes that have been made since the release you were using.

When you decide to upgrade one of these connectors to 0.9.0.Beta2 from any of the earlier 0.9.x, 0.8.x, 0.7.x, 0.6.x, 0.5.x, 0.4.x, 0.3.x, 0.2.x, or 0.1.x versions, first check the upgrading notes for the version you’re using. Gracefully stop the running connector, remove the old plugin files, install the 0.9.0.Beta2 plugin files, and restart the connector using the same configuration. Upon restart, the 0.9.0.Beta2 connectors will continue where the previous connector left off. As one might expect, all change events previously written to Kafka by the old connector will not be modified.

If you are using our Docker images, don't forget to pull them fresh from the Docker registry.

Breaking changes

The MongoDB CDC event flattening transformation now removes deletion messages by default (DBZ-563). The previous default was to keep them.
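To restore the previous behavior of keeping deletion messages, the transformation can be configured explicitly. A sketch, assuming the drop.deletes option name from the DBZ-563 feature description (check the SMT documentation for the exact name):

```properties
transforms=unwrap
transforms.unwrap.type=io.debezium.connector.mongodb.transforms.UnwrapFromMongoDbEnvelope
# keep delete messages instead of dropping them (option name assumed from DBZ-563)
transforms.unwrap.drop.deletes=false
```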

New Features

  • Add support for Oracle 11g DBZ-954

  • UnwrapFromMongoDbEnvelope refactor DBZ-1020

  • Add option for dropping deletes and tombstone events to MongoDB struct recreation SMT DBZ-563

  • Expose "snapshot.delay.ms" option for all connectors DBZ-966

  • Convey original operation type when using flattening SMTs DBZ-971

  • Provide last event and captured tables in metrics DBZ-978

  • Skip MySQL BinLog Event in case of Invalid Cell Values DBZ-1010
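The snapshot.delay.ms option (DBZ-966) is set in the connector configuration; a minimal sketch with an illustrative value:

```properties
# Wait 10 seconds after startup before beginning the initial snapshot;
# useful when registering multiple connectors, so a Connect rebalance
# does not interrupt a snapshot in progress
snapshot.delay.ms=10000
```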

Fixes

This release includes the following fixes:

  • BinaryLogClient can’t disconnect when adding records after shutdown has been initiated DBZ-604

  • UnwrapFromMongoDbEnvelope fails when encountering $unset operator DBZ-612

  • "no known snapshots" error when DB rows are large DBZ-842

  • MongoDB connector stops processing oplog events after encountering "new primary" event DBZ-848

  • MySQL active-passive: brief data loss on failover when Debezium encounters new GTID channel DBZ-923

  • ConnectException: Only REPEATABLE READ isolation level is supported for START TRANSACTION WITH CONSISTENT SNAPSHOT in RocksDB Storage Engine DBZ-960

  • ConnectException during ALTER TABLE for non-whitelisted table DBZ-977

  • UnwrapFromMongoDbEnvelope fails when encountering full updates DBZ-987

  • UnwrapFromMongoDbEnvelope fails when encountering Tombstone messages DBZ-989

  • Postgres schema changes detection (not-null constraint) DBZ-1000

  • NPE in SqlServerConnectorTask#cleanupResources() if connector failed to start DBZ-1002

  • Explicitly initialize history topic in HistorizedRelationalDatabaseSchema DBZ-1003

  • BinlogReader ignores GTIDs for empty database DBZ-1005

  • NPE in MySqlConnectorTask.stop() DBZ-1006

  • The name of captured but not whitelisted table is not logged DBZ-1007

  • GTID set is not properly initialized after DB failover DBZ-1008

  • Postgres connector fails on non-nullable MACADDR field during initial snapshot DBZ-1009

  • Connector crashes with java.lang.NullPointerException when using multiple sinks to consume the messages DBZ-1017

  • Postgres connector fails upon event of recently deleted table DBZ-1021

  • ORA-46385: DML and DDL operations are not allowed on table "AUDSYS"."AUD$UNIFIED" DBZ-1023

  • Postgres plugin does not signal the end of snapshot properly DBZ-1024

  • MySQL Antlr runtime.NoViableAltException DBZ-1028

  • Debezium 0.8.2 and 0.8.3.Final Not Available on Confluent Hub DBZ-1030

  • Snapshot of tables with reserved names fails DBZ-1031

  • UnwrapFromMongoDbEnvelope doesn’t support operation header on tombstone messages DBZ-1032

  • MySQL binlog reader loses data if the task is restarted while the last binlog event is a QUERY event DBZ-1033

  • The same capture instance name is logged twice DBZ-1047

Other changes

This release also includes the following changes:

  • MySQL 8 compatibility DBZ-688

  • Don’t hard code list of supported MySQL storage engines in Antlr grammar DBZ-992

  • Provide updated KSQL example DBZ-999

  • Update to Kafka 2.1 DBZ-1001

  • Skip Antlr tests when tests are skipped DBZ-1004

  • Fix expected records counts in MySQL tests DBZ-1016

  • Cannot run tests against Kafka 1.x DBZ-1037

  • Configure MySQL Matrix testing job to test with and without GTID DBZ-1050

Release 0.9.0.Beta1 (November 20th, 2018)

Kafka compatibility

This release has been built against Kafka Connect 2.0.1 and has been tested with version 2.0.1 of the Kafka brokers. See the Kafka documentation for compatibility with other versions of Kafka brokers.

Upgrading

Before upgrading the MySQL, MongoDB, or PostgreSQL connectors, be sure to check the backward-incompatible changes that have been made since the release you were using.

When you decide to upgrade one of these connectors to 0.9.0.Beta1 from any of the earlier 0.9.x, 0.8.x, 0.7.x, 0.6.x, 0.5.x, 0.4.x, 0.3.x, 0.2.x, or 0.1.x versions, first check the upgrading notes for the version you’re using. Gracefully stop the running connector, remove the old plugin files, install the 0.9.0.Beta1 plugin files, and restart the connector using the same configuration. Upon restart, the 0.9.0.Beta1 connectors will continue where the previous connector left off. As one might expect, all change events previously written to Kafka by the old connector will not be modified.

If you are using our Docker images, don't forget to pull them fresh from the Docker registry.

Breaking changes

The MySQL connector now uses the Antlr-based DDL parser by default.

New Features

  • Add STATUS_STORAGE_TOPIC environment variable to container images DBZ-893

  • Support Postgres 11 in Decoderbufs DBZ-955

  • Define the data directory where tests are storing their data DBZ-963

  • Upgrade Kafka to 2.0.1 DBZ-979

  • Implement unified metrics across connectors DBZ-776

  • Initial snapshot using snapshot isolation level DBZ-941

  • Add decimal.handling.mode for SQLServer Configuration DBZ-953

  • Support pass-through of "database." properties to JDBC driver DBZ-964

  • Handle changes of table definitions and tables created while streaming DBZ-812
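The pass-through support (DBZ-964) means that any property with the database. prefix that the connector does not recognize is forwarded to the JDBC driver with the prefix stripped; a sketch with illustrative values:

```properties
# Consumed by the connector itself
database.hostname=mysql.example.com
database.port=3306

# Not a recognized connector option, so it is passed to the
# MySQL JDBC driver as "connectTimeout" (a Connector/J property)
database.connectTimeout=30000
```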

Fixes

This release includes the following fixes:

  • Error while parsing JSON column type for MySQL DBZ-935

  • wal2json CITEXT columns set to empty strings DBZ-937

  • Base docker image is deprecated DBZ-939

  • Mysql connector failed to parse add partition statement DBZ-959

  • PostgreSQL replication slots not updated in transactions DBZ-965

  • wal2json_streaming decoder does not provide the right plugin name DBZ-970

  • Create topics command doesn’t work in Kafka docker image DBZ-976

  • Antlr parser: support quoted engine names in DDL DBZ-990

Other changes

This release also includes the following changes:

  • Switch to Antlr-based parser implementation by default DBZ-757

  • Support RENAME column syntax from MySQL 8.0 DBZ-780

  • Fix documentation of 'array.encoding' option DBZ-925

  • Support MongoDB 4.0 DBZ-974

Release 0.9.0.Alpha2 (October 4th, 2018)

Kafka compatibility

This release has been built against Kafka Connect 2.0.0 and has been tested with version 2.0.0 of the Kafka brokers. See the Kafka documentation for compatibility with other versions of Kafka brokers.

Upgrading

Before upgrading the MySQL, MongoDB, or PostgreSQL connectors, be sure to check the backward-incompatible changes that have been made since the release you were using.

When you decide to upgrade one of these connectors to 0.9.0.Alpha2 from any of the earlier 0.9.x, 0.8.x, 0.7.x, 0.6.x, 0.5.x, 0.4.x, 0.3.x, 0.2.x, or 0.1.x versions, first check the upgrading notes for the version you’re using. Gracefully stop the running connector, remove the old plugin files, install the 0.9.0.Alpha2 plugin files, and restart the connector using the same configuration. Upon restart, the 0.9.0.Alpha2 connectors will continue where the previous connector left off. As one might expect, all change events previously written to Kafka by the old connector will not be modified.

If you are using our Docker images, don't forget to pull them fresh from the Docker registry.

Breaking changes

The MySQL JDBC driver was upgraded to version 8.x, and Kafka has been upgraded to version 2.0.0.

New Features

  • Build Alpine Linux versions of the PostgreSQL containers DBZ-705

  • Refactor methods to read MySQL system variables DBZ-849

  • Correct param name for excludeColumns(String fullyQualifiedTableNames) DBZ-854

  • Make BinlogReader#informAboutUnknownTableIfRequired() log with tableId DBZ-855

  • MySQL identifier with dot or space could not be parsed DBZ-878

  • Use postgres:10 instead of postgres:10.0 as base docker image DBZ-929

  • Support temporary replication slots with Postgres >= 10 DBZ-934

  • Support white/black-listing Mongo fields DBZ-633

  • Postgres connector - add database, schema and table names to "source" section of records DBZ-866

  • Support renaming Mongo fields DBZ-881

  • use tcpKeepAlive by default DBZ-895

  • Hstore support in Postgresql-connector DBZ-898

  • Add connector type to source info DBZ-918

Fixes

This release includes the following fixes:

  • Global read lock not released when an exception is raised during snapshot DBZ-769

  • Abort loops in MongoPrimary#execute() if the connector is stopped DBZ-784

  • Initial synchronization is not interrupted DBZ-838

  • Kafka database history miscounting attempts even if there are more database history records to consume DBZ-853

  • Schema_only snapshot on idle server - offsets not stored after snapshot DBZ-859

  • DDL parsing in MySQL - default value of primary key is set to null DBZ-860

  • Antlr DDL parser exception for "create database …​ CHARSET=…​" DBZ-864

  • Error when MongoDB collection contains characters not compatible with kafka topic naming DBZ-865

  • AlterTableParserListener does not remove column definition listeners DBZ-869

  • MySQL parser does not recognize 0 as default value for date/time DBZ-870

  • Antlr parser ignores table whitelist filter DBZ-872

  • A new column might not be added with ALTER TABLE antlr parser DBZ-877

  • MySQLConnectorTask always reports it has the required Binlog file from MySQL DBZ-880

  • Execution of RecordsStreamProducer.closeConnections() is susceptible to race condition DBZ-887

  • Watch-topic command in docker image uses unsupported parameter DBZ-890

  • SQLServer should use only schema and table name in table naming DBZ-894

  • Prevent resending of duplicate change events after restart DBZ-897

  • PostgresConnection.initTypeRegistry() takes ~24 mins DBZ-899

  • java.time.format.DateTimeParseException: Text '1970-01-01 00:00:00' in mysql ALTER DBZ-901

  • org.antlr.v4.runtime.NoViableAltException on CREATE DEFINER=web@% PROCEDURE `…​ DBZ-903

  • MySQL default port is wrong in tutorial link DBZ-904

  • RecordsStreamProducer should report refresh of the schema due to different column count DBZ-907

  • MongoDbConnector returns obsolete config values during validation DBZ-908

  • Can’t parse create definition on the mysql connector DBZ-910

  • RecordsStreamProducer#columnValues() does not take into account unchanged TOASTed columns, refreshing table schemas unnecessarily DBZ-911

  • Wrong type in timeout call for Central wait release DBZ-914

  • Exception while parsing table schema with invalid default value for timestamp field DBZ-927

  • Discard null fields in MongoDB event flattening SMT DBZ-928

Other changes

This release also includes the following changes:

  • Create Travis CI build for debezium-incubator repository DBZ-817

  • Cache prepared statements in JdbcConnection DBZ-819

  • Upgrade to Kafka 2.0.0 DBZ-858

  • Upgrade SQL Server image to CU9 GDR2 release DBZ-873

  • Speed-up Travis builds using parallel build DBZ-874

  • Add version format check into the release pipeline DBZ-884

  • Handle non-complete list of plugins DBZ-885

  • Parametrize wait time for Maven central sync DBZ-889

  • Assert non-empty release in release script DBZ-891

  • Upgrade Postgres driver to 42.2.5 DBZ-912

  • Upgrade MySQL JDBC driver to version 8.0.x DBZ-763

  • Upgrade MySQL binlog connector DBZ-764

Release 0.9.0.Alpha1 (July 26th, 2018)

Kafka compatibility

This release has been built against Kafka Connect 1.1.1 and has been tested with version 1.1.1 of the Kafka brokers. See the Kafka documentation for compatibility with other versions of Kafka brokers.

Upgrading

Before upgrading the MySQL, MongoDB, or PostgreSQL connectors, be sure to check the backward-incompatible changes that have been made since the release you were using.

When you decide to upgrade one of these connectors to 0.9.0.Alpha1 from any of the earlier 0.8.x, 0.7.x, 0.6.x, 0.5.x, 0.4.x, 0.3.x, 0.2.x, or 0.1.x versions, first check the upgrading notes for the version you’re using. Gracefully stop the running connector, remove the old plugin files, install the 0.9.0.Alpha1 plugin files, and restart the connector using the same configuration. Upon restart, the 0.9.0.Alpha1 connectors will continue where the previous connector left off. As one might expect, all change events previously written to Kafka by the old connector will not be modified.

If you are using our Docker images, don't forget to pull them fresh from the Docker registry.

Breaking changes

The Oracle connector stored the event timestamp in the source block in the field ts_sec. The timestamp is in fact measured in milliseconds, so the field was renamed to ts_ms.

New Features

  • Ingest change data from SQL Server databases DBZ-40

  • Oracle connector implementation cont’d (initial snapshotting etc.) DBZ-716

  • Implement initial snapshotting for Oracle DBZ-720

  • Implement capturing of streamed changes DBZ-787

  • Implement initial snapshotting for SQL Server DBZ-788

  • Emit NUMBER columns as Int32/Int64 if precision and scale allow DBZ-804

  • Support heartbeat messages for Oracle DBZ-815

  • Upgrade to Kafka 1.1.1 DBZ-829

Fixes

This release includes the following fixes:

  • Offset remains with "snapshot" set to true after completing schema only snapshot DBZ-803

  • Misleading timestamp field name DBZ-795

  • Adjust scale of decimal values to column’s scale if present DBZ-818

  • Avoid NPE if commit is called before any offset is prepared DBZ-826

Other changes

This release also includes the following changes:

  • Make DatabaseHistory set-up code re-usable DBZ-816

  • Use TableFilter contract instead of Predicate<TableId> DBZ-793

  • Expand SourceInfo DBZ-719

  • Provide Maven module and Docker set-up DBZ-786

  • Avoid a few raw type warnings DBZ-801
