Release Notes for Debezium 1.2

All notable changes for Debezium releases are documented in this file. Release numbers follow Semantic Versioning.

Release 1.2.5.Final (September 24th, 2020)

Kafka compatibility

This release has been built against Kafka Connect 2.5.0 and has been tested with version 2.5.0 of the Kafka brokers. See the Kafka documentation for compatibility with other versions of Kafka brokers.

Upgrading

Before upgrading the MySQL, MongoDB, PostgreSQL or SQL Server connectors, be sure to check the backward-incompatible changes that have been made since the release you were using.

When you decide to upgrade one of these connectors to 1.2.5.Final from any earlier version, first check the migration notes for the version you’re using. Gracefully stop the running connector, remove the old plugin files, install the 1.2.5.Final plugin files, and restart the connector using the same configuration. Upon restart, the 1.2.5.Final connectors will continue where the previous connector left off. As one might expect, change events previously written to Kafka by the old connector will not be modified.

If you are using our Docker images, do not forget to pull fresh versions from the Docker registry.

Breaking changes

There are no breaking changes in this release.

New Features

There are no new features in this release.

Fixes

This release includes the following fixes:

  • Fix Quarkus datasource configuration for Quarkus 1.9 DBZ-2558

Other changes

This release also includes the following changes:

  • Prepare revised SMT docs (filter and content-based routing) for downstream DBZ-2567

  • Swap closing square bracket for curly brace in downstream title annotations DBZ-2577

Release 1.2.4.Final (September 17th, 2020)

Kafka compatibility

This release has been built against Kafka Connect 2.5.0 and has been tested with version 2.5.0 of the Kafka brokers. See the Kafka documentation for compatibility with other versions of Kafka brokers.

Upgrading

Before upgrading the MySQL, MongoDB, PostgreSQL or SQL Server connectors, be sure to check the backward-incompatible changes that have been made since the release you were using.

When you decide to upgrade one of these connectors to 1.2.4.Final from any earlier version, first check the migration notes for the version you’re using. Gracefully stop the running connector, remove the old plugin files, install the 1.2.4.Final plugin files, and restart the connector using the same configuration. Upon restart, the 1.2.4.Final connectors will continue where the previous connector left off. As one might expect, change events previously written to Kafka by the old connector will not be modified.

If you are using our Docker images, do not forget to pull fresh versions from the Docker registry.

Breaking changes

The SMTs for content-based routing and filtering, both of which use JSR 223 scripting engines for script evaluation, have been moved from the Debezium core module into a separate artifact (DBZ-2549). This artifact must be added to the plug-in directories of the connector(s) with which you wish to use these SMTs. When using the Debezium container image for Kafka Connect, set the environment variable ENABLE_DEBEZIUM_SCRIPTING to true to achieve the same effect. This change was made so that scripting functionality is exposed only in environments with an appropriately secured Kafka Connect configuration interface.
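
One way to enable this when starting the container from Java-based integration tests is to set the variable through Testcontainers. The following is a minimal, hedged sketch; the class name, image tag and the omitted Kafka wiring are illustrative assumptions, and only ENABLE_DEBEZIUM_SCRIPTING=true comes from the note above:

    import io.debezium.testing.testcontainers.DebeziumContainer;

    public class EnableScriptingExample {
        public static void main(String[] args) {
            // Start the Debezium Kafka Connect image with the scripting SMTs enabled.
            // Kafka/network wiring is omitted here for brevity.
            try (DebeziumContainer connect =
                    new DebeziumContainer("debezium/connect:1.2.4.Final")
                            .withEnv("ENABLE_DEBEZIUM_SCRIPTING", "true")) {
                connect.start();
                // ... register connectors that use the filter / content-based routing SMTs ...
            }
        }
    }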

New Features

There are no new features in this release.

Fixes

There are no new fixes in this release.

Other changes

This release also includes the following changes:

  • Document outbox event router SMT DBZ-2480

  • Unify representation of events - part two - update other connector doc DBZ-2501

  • Add annotations to support splitting files for downstream docs DBZ-2539

  • Prepare message filtering SMT doc for product release DBZ-2460

  • Prepare content-based router SMT doc for product release DBZ-2519

Release 1.2.3.Final (September 8th, 2020)

Kafka compatibility

This release has been built against Kafka Connect 2.5.0 and has been tested with version 2.5.0 of the Kafka brokers. See the Kafka documentation for compatibility with other versions of Kafka brokers.

Upgrading

Before upgrading the MySQL, MongoDB, PostgreSQL or SQL Server connectors, be sure to check the backward-incompatible changes that have been made since the release you were using.

When you decide to upgrade one of these connectors to 1.2.3.Final from any earlier version, first check the migration notes for the version you’re using. Gracefully stop the running connector, remove the old plugin files, install the 1.2.3.Final plugin files, and restart the connector using the same configuration. Upon restart, the 1.2.3.Final connectors will continue where the previous connector left off. As one might expect, change events previously written to Kafka by the old connector will not be modified.

If you are using our Docker images, do not forget to pull fresh versions from the Docker registry.

Breaking changes

There are no breaking changes in this release.

New Features

There are no new features in this release.

Fixes

This release includes the following fixes:

  • JSON functions in MySQL grammar unsupported DBZ-2453

Other changes

This release also includes the following changes:

  • CloudEvents remains TP but has Avro support downstream DBZ-2245

  • Prepare DB2 connector doc for TP DBZ-2403

  • Adjust outbox extension to updated Quarkus semantics DBZ-2465

  • Doc tweaks required to automatically build Db2 content in downstream user guide DBZ-2500

Release 1.2.2.Final (August 25th, 2020)

Kafka compatibility

This release has been built against Kafka Connect 2.5.0 and has been tested with version 2.5.0 of the Kafka brokers. See the Kafka documentation for compatibility with other versions of Kafka brokers.

Upgrading

Before upgrading the MySQL, MongoDB, PostgreSQL or SQL Server connectors, be sure to check the backward-incompatible changes that have been made since the release you were using.

When you decide to upgrade one of these connectors to 1.2.2.Final from any earlier version, first check the migration notes for the version you’re using. Gracefully stop the running connector, remove the old plugin files, install the 1.2.2.Final plugin files, and restart the connector using the same configuration. Upon restart, the 1.2.2.Final connectors will continue where the previous connector left off. As one might expect, change events previously written to Kafka by the old connector will not be modified.

If you are using our Docker images, do not forget to pull fresh versions from the Docker registry.

Breaking changes

There are no breaking changes in this release.

New Features

There are no new features in this release.

Fixes

This release includes the following fixes:

  • Adding a new table to CDC causes the SQL Server connector to fail DBZ-2303

  • LSNs in replication slots are not monotonically increasing DBZ-2338

  • Transaction data loss when process restarted DBZ-2397

  • java.lang.NullPointerException in ByLogicalTableRouter.java DBZ-2412

Other changes

This release also includes the following changes:

  • Refactor: Add domain type for LSN DBZ-2200

  • Miscellaneous small doc updates for the 1.2 release DBZ-2399

  • Update some doc file names DBZ-2402

Release 1.2.1.Final (July 16th, 2020)

Kafka compatibility

This release has been built against Kafka Connect 2.5.0 and has been tested with version 2.5.0 of the Kafka brokers. See the Kafka documentation for compatibility with other versions of Kafka brokers.

Upgrading

Before upgrading the MySQL, MongoDB, PostgreSQL or SQL Server connectors, be sure to check the backward-incompatible changes that have been made since the release you were using.

When you decide to upgrade one of these connectors to 1.2.1.Final from any earlier version, first check the migration notes for the version you’re using. Gracefully stop the running connector, remove the old plugin files, install the 1.2.1.Final plugin files, and restart the connector using the same configuration. Upon restart, the 1.2.1.Final connectors will continue where the previous connector left off. As one might expect, change events previously written to Kafka by the old connector will not be modified.

If you are using our Docker images, do not forget to pull fresh versions from the Docker registry.

Breaking changes

There are no breaking changes in this release.

New Features

  • Document content based routing and filtering for MongoDB DBZ-2255

  • Handle MariaDB syntax add column IF EXISTS as part of alter table DDL DBZ-2219

  • Add Apicurio converters to Connect container image DBZ-2083

Fixes

This release includes the following fixes:

  • MongoDB connector is not resilient to Mongo connection errors DBZ-2141

  • MySQL connector should filter additional DML binlog entries for RDS by default DBZ-2275

  • Concurrent access to a thread map DBZ-2278

  • Postgres connector may skip events during snapshot-streaming transition DBZ-2288

  • MySQL connector emits false error while missing required data DBZ-2301

  • io.debezium.engine.spi.OffsetCommitPolicy.PeriodicCommitOffsetPolicy can’t be initiated due to NoSuchMethod error DBZ-2302

  • Allow single dimension DECIMAL in CAST DBZ-2305

  • MySQL JSON functions are missing from the grammar DBZ-2318

  • Description in documentation metrics tables is bold and shouldn’t be DBZ-2326

  • ALTER TABLE with timestamp default CURRENT_TIMESTAMP not null fails the task DBZ-2330

Other changes

This release also includes the following changes:

  • Unstable tests in SQL Server connector DBZ-2217

  • Intermittent test failure on CI - SqlServerConnectorIT#verifyOffsets() DBZ-2220

  • Intermittent test failure on CI - MySQL DBZ-2229

  • Intermittent test failure on CI - SqlServerChangeTableSetIT#readHistoryAfterRestart() DBZ-2231

  • Failing test MySqlSourceTypeInSchemaIT.shouldPropagateSourceTypeAsSchemaParameter DBZ-2238

  • Intermittent test failure on CI - MySqlConnectorRegressionIT#shouldConsumeAllEventsFromDatabaseUsingBinlogAndNoSnapshot() DBZ-2243

  • Use upstream image in ApicurioRegistryTest DBZ-2256

  • Intermittent failure of MongoDbConnectorIT.shouldConsumeTransaction DBZ-2264

  • Intermittent test failure on CI - MySqlSourceTypeInSchemaIT#shouldPropagateSourceTypeByDatatype() DBZ-2269

  • Intermittent test failure on CI - MySqlConnectorIT#shouldNotParseQueryIfServerOptionDisabled DBZ-2270

  • Intermittent test failure on CI - RecordsStreamProducerIT#testEmptyChangesProducesHeartbeat DBZ-2271

  • Incorrect dependency from outbox to core module DBZ-2276

  • Slowness in FieldRenamesTest DBZ-2286

  • Create GitHub Action for verifying correct formatting DBZ-2287

  • Clarify expectations for replica identity and key-less tables DBZ-2307

  • Jenkins worker nodes must be logged in to Docker Hub DBZ-2312

  • Upgrade PostgreSQL driver to 4.2.14 DBZ-2317

  • Intermittent test failure on CI - PostgresConnectorIT#shouldOutputRecordsInCloudEventsFormat DBZ-2319

  • Intermittent test failure on CI - TablesWithoutPrimaryKeyIT#shouldProcessFromStreaming DBZ-2324

  • Intermittent test failure on CI - SqlServerConnectorIT#readOnlyApplicationIntent DBZ-2325

  • Intermittent test failure on CI - SnapshotIT#takeSnapshotWithOldStructAndStartStreaming DBZ-2331

Release 1.2.0.Final (June 24th, 2020)

Kafka compatibility

This release has been built against Kafka Connect 2.5.0 and has been tested with version 2.5.0 of the Kafka brokers. See the Kafka documentation for compatibility with other versions of Kafka brokers.

Upgrading

Before upgrading the MySQL, MongoDB, PostgreSQL or SQL Server connectors, be sure to check the backward-incompatible changes that have been made since the release you were using.

When you decide to upgrade one of these connectors to 1.2.0.Final from any earlier version, first check the migration notes for the version you’re using. Gracefully stop the running connector, remove the old plugin files, install the 1.2.0.Final plugin files, and restart the connector using the same configuration. Upon restart, the 1.2.0.Final connectors will continue where the previous connector left off. As one might expect, change events previously written to Kafka by the old connector will not be modified.

If you are using our Docker images, do not forget to pull fresh versions from the Docker registry.

Breaking changes

There are no breaking changes in this release.

New Features

There are no new features in this release.

Fixes

This release includes the following fixes:

  • Test failure due to superfluous schema change event emitted on connector start DBZ-2211

  • Intermittent test failures on CI DBZ-2232

  • Test SimpleSourceConnectorOutputTest.shouldGenerateExpected blocked DBZ-2241

  • CloudEventsConverter should use Apicurio converter for Avro DBZ-2250

  • Default value is not properly set for non-optional columns DBZ-2267

Other changes

This release also includes the following changes:

  • Diff MySQL connector 0.10 and latest docs DBZ-1997

  • Remove redundant property in antora.yml DBZ-2223

  • Binary log client is not cleanly stopped in testsuite DBZ-2221

  • Intermittent test failure on CI - Postgres DBZ-2230

  • Build failure with Kafka 1.x DBZ-2240

  • Intermittent test failure on CI - SqlServerConnectorIT#readOnlyApplicationIntent() DBZ-2261

  • Test failure BinlogReaderIT#shouldFilterAllRecordsBasedOnDatabaseWhitelistFilter() DBZ-2262

Release 1.2.0.CR2 (June 18th, 2020)

Kafka compatibility

This release has been built against Kafka Connect 2.5.0 and has been tested with version 2.5.0 of the Kafka brokers. See the Kafka documentation for compatibility with other versions of Kafka brokers.

Upgrading

Before upgrading the MySQL, MongoDB, PostgreSQL or SQL Server connectors, be sure to check the backward-incompatible changes that have been made since the release you were using.

When you decide to upgrade one of these connectors to 1.2.0.CR2 from any earlier version, first check the migration notes for the version you’re using. Gracefully stop the running connector, remove the old plugin files, install the 1.2.0.CR2 plugin files, and restart the connector using the same configuration. Upon restart, the 1.2.0.CR2 connectors will continue where the previous connector left off. As one might expect, change events previously written to Kafka by the old connector will not be modified.

If you are using our Docker images, do not forget to pull fresh versions from the Docker registry.

Breaking changes

The Debezium Server distribution package has been moved to a different URL and has been renamed to conform to standard industry practices (DBZ-2212).

New Features

  • DB2 connector documentation ambiguous regarding licensing DBZ-1835

  • Optimize SQLServer connector query DBZ-2120

  • Documentation for implementing StreamNameMapper DBZ-2163

  • Update architecture page DBZ-2096

Fixes

This release includes the following fixes:

  • Encountered error when snapshotting collection type column DBZ-2117

  • Missing dependencies for Debezium Server Pulsar sink DBZ-2201

Other changes

This release also includes the following changes:

  • Tests Asserting No Open Transactions Failing DBZ-2176

  • General test harness for End-2-End Benchmarking DBZ-1812

  • Add tests for datatype.propagate.source.type for all connectors DBZ-1916

  • Productize CloudEvents support DBZ-2019

  • [Doc] Add Debezium Architecture to downstream documentation DBZ-2029

  • Transaction metadata documentation DBZ-2069

  • Inconsistent test failures DBZ-2177

  • Add Jandex plugin to Debezium Server connectors DBZ-2192

  • Ability to scale wait times in OCP test-suite DBZ-2194

  • CI doesn’t delete mongo and sql server projects on successful runs DBZ-2195

  • Document database history and web server port for Debezium Server DBZ-2198

  • Do not throw IndexOutOfBoundsException when no task configuration is available DBZ-2199

  • Upgrade Apicurio to 1.2.2.Final DBZ-2206

  • Intermittent test failures DBZ-2207

  • Increase Pulsar Server timeouts DBZ-2210

  • Drop distribution from Debezium Server artifact name DBZ-2214

Release 1.2.0.CR1 (June 10th, 2020)

Kafka compatibility

This release has been built against Kafka Connect 2.5.0 and has been tested with version 2.5.0 of the Kafka brokers. See the Kafka documentation for compatibility with other versions of Kafka brokers.

Upgrading

Before upgrading the MySQL, MongoDB, PostgreSQL or SQL Server connectors, be sure to check the backward-incompatible changes that have been made since the release you were using.

When you decide to upgrade one of these connectors to 1.2.0.CR1 from any earlier version, first check the migration notes for the version you’re using. Gracefully stop the running connector, remove the old plugin files, install the 1.2.0.CR1 plugin files, and restart the connector using the same configuration. Upon restart, the 1.2.0.CR1 connectors will continue where the previous connector left off. As one might expect, change events previously written to Kafka by the old connector will not be modified.

If you are using our Docker images, do not forget to pull fresh versions from the Docker registry.

Breaking changes

The format of whitelist/blacklist filter expressions for the Oracle connector has changed: the database name must no longer be given as part of these expressions, because each connector is only ever configured in the scope of exactly one database. Filters like ORCLPDB1.SOMESCHEMA.SOMETABLE must be adjusted to SOMESCHEMA.SOMETABLE. The same applies to configuration properties referencing specific table columns, such as column.propagate.source.type.

The format of whitelist/blacklist filter expressions for the SQL Server connector has changed: the database name must no longer be given as part of these expressions, because each connector is only ever configured in the scope of exactly one database. Filters like testDB.dbo.orders must be adjusted to dbo.orders. The old format is still supported, but it is deprecated and will be removed in a future version. The same applies to configuration properties referencing specific table columns, such as column.propagate.source.type.
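
For illustration, a connector configuration fragment in the new format, assembled programmatically as java.util.Properties, might look like the following sketch; the table and column names are hypothetical, and the same adjustment applies to the Oracle connector's filters:

    import java.util.Properties;

    public class SqlServerFilterFormatExample {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.setProperty("connector.class", "io.debezium.connector.sqlserver.SqlServerConnector");
            props.setProperty("database.dbname", "testDB");
            // New format: no database-name prefix (previously "testDB.dbo.orders").
            props.setProperty("table.whitelist", "dbo.orders");
            // Column-scoped properties follow the same rule
            // (previously "testDB.dbo.orders.order_number").
            props.setProperty("column.propagate.source.type", "dbo.orders.order_number");
        }
    }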

New Features

  • Restrict the set of tables with a publication when using pgoutput DBZ-1813

  • Support configuring different encodings for binary source data DBZ-1814

  • Add API for not registering metrics MBean into the platform MBean server DBZ-2089

  • Unable to handle UDT data DBZ-2091

  • Improve SQL Server reconnect during shutdown and connection resets DBZ-2106

  • OpenShift tests for SQL Server connector before GA DBZ-2113

  • OpenShift tests for MongoDB Connector before GA DBZ-2114

  • Log begin/end of schema recovery on INFO level DBZ-2149

  • Allow outbox EventRouter to pass non-String based Keys DBZ-2152

  • Introduce API checks DBZ-2159

  • Bump mysql binlog version DBZ-2160

  • Postgresql - Allow for include.unknown.datatypes to return string instead of hash DBZ-1266

  • Consider Apicurio registry DBZ-1639

  • Debezium Server should support Google Cloud PubSub DBZ-2092

  • Sink adapter for Apache Pulsar DBZ-2112

Fixes

This release includes the following fixes:

  • Transaction opened by Debezium is left idle and never committed DBZ-2118

  • Don’t call markBatchFinished() in finally block DBZ-2124

  • Kafka SSL passwords need to be added to the Sensitive Properties list DBZ-2125

  • Intermittent test failure on CI - SQL Server DBZ-2126

  • CREATE TABLE query is giving parsing exception DBZ-2130

  • Misc. Javadoc and docs fixes DBZ-2136

  • Avro schema doesn’t change if a column default value is dropped DBZ-2140

  • Multiple SETs not supported in trigger DBZ-2142

  • Don’t validate internal database.history.connector.* config parameters DBZ-2144

  • ANTLR parser doesn’t handle MariaDB syntax drop index IF EXISTS as part of alter table DDL DBZ-2151

  • Casting as INT causes a ParsingError DBZ-2153

  • Calling function UTC_TIMESTAMP without parenthesis causes a parsing error DBZ-2154

  • Could not find or load main class io.debezium.server.Main DBZ-2170

  • MongoDB connector snapshot NPE in case of document field named "op" DBZ-2116

  • Adapt to changed TX representation in oplog in Mongo 4.2 DBZ-2216

  • Intermittent test failure — Multiple admin clients with same id DBZ-2228

Other changes

This release also includes the following changes:

  • Adding tests and doc updates around column masking and truncating DBZ-775

  • Refactor/use common configuration parameters DBZ-1657

  • Develop sizing recommendations, load tests etc. DBZ-1662

  • Add performance test for SMTs like filters DBZ-1929

  • Add banner to older doc versions about them being outdated DBZ-1951

  • SMT Documentation DBZ-2021

  • Unstable integration test with Testcontainers DBZ-2033

  • Add test for schema history topic for Oracle connector DBZ-2056

  • Random test failures DBZ-2060

  • Set up CI jobs for JDK 14/15 DBZ-2065

  • Introduce Any type for server to seamlessly integrate with Debezium API DBZ-2104

  • Update AsciiDoc markup in doc files for downstream reuse DBZ-2105

  • Upgrade to Quarkus 1.5.0.Final DBZ-2119

  • Additional AsciiDoc markup updates needed in doc files for downstream reuse DBZ-2129

  • Refactor & Extend OpenShift test-suite tooling to prepare for MongoDB and SQL Server DBZ-2132

  • OpenShift tests are failing when waiting for Connect metrics to be exposed DBZ-2135

  • Support incubator build in product release jobs DBZ-2137

  • Rebase MySQL grammar on the latest upstream version DBZ-2143

  • Await coordinator shutdown in embedded engine DBZ-2150

  • More meaningful exception in case of replication slot conflict DBZ-2156

  • Intermittent test failure on CI - Postgres DBZ-2157

  • OpenShift pipeline uses incorrect projects for Mongo and Sql Server deployment DBZ-2164

  • Incorrect polling timeout in AbstractReader DBZ-2169

Release 1.2.0.Beta2 (May 19th, 2020)

Kafka compatibility

This release has been built against Kafka Connect 2.5.0 and has been tested with version 2.5.0 of the Kafka brokers. See the Kafka documentation for compatibility with other versions of Kafka brokers.

Upgrading

Before upgrading the MySQL, MongoDB, PostgreSQL or SQL Server connectors, be sure to check the backward-incompatible changes that have been made since the release you were using.

When you decide to upgrade one of these connectors to 1.2.0.Beta2 from any earlier version, first check the migration notes for the version you’re using. Gracefully stop the running connector, remove the old plugin files, install the 1.2.0.Beta2 plugin files, and restart the connector using the same configuration. Upon restart, the 1.2.0.Beta2 connectors will continue where the previous connector left off. As one might expect, change events previously written to Kafka by the old connector will not be modified.

If you are using our Docker images, do not forget to pull fresh versions from the Docker registry.

Breaking changes

The snapshot mode initial_schema_only has been renamed to schema_only for the Db2 connector (DBZ-2051).

The previously deprecated options operation.header and add.source.fields of the ExtractNewRecordState SMT have been removed; please use the add.headers and add.fields options instead (DBZ-1828).
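
A configuration using the replacement options could look like this sketch in Java Properties form; the transform alias and the chosen fields are illustrative:

    import java.util.Properties;

    public class ExtractNewRecordStateOptionsExample {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.setProperty("transforms", "unwrap");
            props.setProperty("transforms.unwrap.type", "io.debezium.transforms.ExtractNewRecordState");
            // Replaces the removed "operation.header" option:
            props.setProperty("transforms.unwrap.add.headers", "op");
            // Replaces the removed "add.source.fields" option:
            props.setProperty("transforms.unwrap.add.fields", "source.ts_ms,source.db");
        }
    }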

When instantiating the Debezium container in integration tests with Testcontainers, the full image name must now be given, e.g. debezium/connect:1.2.0.Beta2. This allows custom container images to be used in tests, e.g. images containing additional SMTs, converters or sink connectors (DBZ-2070).
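
A hedged Java sketch of the new form is shown below; the Kafka wiring follows the usual Testcontainers pattern and is illustrative, the point being only that the full image name is now required:

    import org.testcontainers.containers.KafkaContainer;
    import org.testcontainers.containers.Network;

    import io.debezium.testing.testcontainers.DebeziumContainer;

    public class DebeziumContainerImageExample {
        public static void main(String[] args) {
            Network network = Network.newNetwork();
            KafkaContainer kafka = new KafkaContainer().withNetwork(network);
            // The full image name (repository plus tag) must now be passed to the container.
            DebeziumContainer connect = new DebeziumContainer("debezium/connect:1.2.0.Beta2")
                    .withNetwork(network)
                    .withKafka(kafka)
                    .dependsOn(kafka);
            kafka.start();
            connect.start();
        }
    }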

New Features

  • Add JDBC driver versions to docs DBZ-2031

  • Add a few more loggings for Cassandra Connector DBZ-2066

  • Provide ready-to-use standalone application based on the embedded engine DBZ-651

  • Add option to skip LSN timestamp queries DBZ-1988

  • Add option to logical topic router for controlling placement of table information DBZ-2034

  • Add headers and topic name into scripting transforms DBZ-2074

  • Filter and content-based router SMTs should be restrictable to certain topics DBZ-2024

Fixes

This release includes the following fixes:

  • Avro schema doesn’t change if a column default value changes from 'foo' to 'bar' DBZ-2061

  • DDL statement throws error if compression keyword contains backticks (``) DBZ-2062

  • Error and connector stops when DDL contains algorithm=instant DBZ-2067

  • Debezium Engine advanced record consuming example broken DBZ-2073

  • Unable to parse MySQL ALTER statement with named primary key DBZ-2080

  • Missing schema-serializer dependency for Avro DBZ-2082

  • TinyIntOneToBooleanConverter doesn’t seem to work with columns having a default value DBZ-2085

Other changes

This release also includes the following changes:

  • Add ability to insert fields from op field in ExtractNewDocumentState DBZ-1791

  • Test with MySQL 8.0.20 DBZ-2041

  • Update debezium-examples/tutorial README: docker-compose file is missing DBZ-2059

  • Skip tests that are no longer compatible with Kafka 1.x DBZ-2068

  • Remove additional Jackson dependencies as of AK 2.5 DBZ-2076

  • Make EventProcessingFailureHandlingIT resilient against timing issues DBZ-2078

  • Tar packages must use posix format DBZ-2088

  • Remove unused sourceInfo variable DBZ-2090

Release 1.2.0.Beta1 (May 7th, 2020)

Kafka compatibility

This release has been built against Kafka Connect 2.5.0 and has been tested with version 2.5.0 of the Kafka brokers. See the Kafka documentation for compatibility with other versions of Kafka brokers.

Upgrading

Before upgrading the MySQL, MongoDB, PostgreSQL or SQL Server connectors, be sure to check the backward-incompatible changes that have been made since the release you were using.

When you decide to upgrade one of these connectors to 1.2.0.Beta1 from any earlier version, first check the migration notes for the version you’re using. Gracefully stop the running connector, remove the old plugin files, install the 1.2.0.Beta1 plugin files, and restart the connector using the same configuration. Upon restart, the 1.2.0.Beta1 connectors will continue where the previous connector left off. As one might expect, change events previously written to Kafka by the old connector will not be modified.

If you are using our Docker images, do not forget to pull fresh versions from the Docker registry.

Breaking changes

The eventType field has been removed from the outbox event router SMT (DBZ-2014).

The PostgreSQL JDBC driver has been upgraded to version 42.2.12 (DBZ-2027). Due to changes in the driver behaviour, it is necessary to keep the Debezium and driver versions aligned.

The Debezium engine API now allows message keys and values to be converted to JSON and Avro independently of each other (DBZ-1970). Enabling this feature required a change to the incubating Debezium API.
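
A minimal sketch of how this looks with the embedded engine, assuming the KeyValueChangeEventFormat factory introduced by DBZ-1970; the connector properties and the Avro converter configuration are omitted:

    import java.util.Properties;

    import io.debezium.engine.ChangeEvent;
    import io.debezium.engine.DebeziumEngine;
    import io.debezium.engine.format.Avro;
    import io.debezium.engine.format.Json;
    import io.debezium.engine.format.KeyValueChangeEventFormat;

    public class KeyValueFormatExample {
        public static void main(String[] args) {
            Properties props = new Properties();
            // ... connector and converter configuration would go here ...

            // Keys are serialized as JSON, values as Avro; the two formats are chosen independently.
            DebeziumEngine<ChangeEvent<String, byte[]>> engine = DebeziumEngine
                    .create(KeyValueChangeEventFormat.of(Json.class, Avro.class))
                    .using(props)
                    .notifying(record -> {
                        // record.key() is a JSON String, record.value() is an Avro byte[]
                    })
                    .build();
            // In a real application the engine would be submitted to an ExecutorService and closed on shutdown.
        }
    }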

New Features

  • Don’t try to create the database history topic if it exists already DBZ-1886

  • Deleted database history should be detected for all connectors DBZ-1923

  • Provide anchors to connector parameters DBZ-1933

  • Move static methods TRUNCATE_COLUMN and MASK_COLUMN as attributes to RelationalDatabaseConnectorConfig DBZ-1972

  • Implement SKIPPED_OPERATIONS for mysql DBZ-1895

  • User facing schema history topic for SQL Server DBZ-1904

  • Multiline stack traces can be collapsed into a single log event DBZ-1913

  • Introduce column.whitelist for Postgres Connector DBZ-1962

  • Add support for Postgres time, timestamp array columns DBZ-1969

  • Add support for Postgres Json and Jsonb array columns DBZ-1990

  • Content-based topic routing based on scripting languages DBZ-2000

  • Support different converters for key/value in embedded engine DBZ-1970

Fixes

This release includes the following fixes:

  • bit varying column has value that is too large to be cast to a long DBZ-1949

  • PostgreSQL Sink connector with outbox event router and Avro uses wrong default io.confluent schema namespace DBZ-1963

  • Stop processing new commitlogs in cdc folder DBZ-1985

  • [Doc] Debezium User Guide should provide example of DB connector yaml and deployment instructions DBZ-2011

  • ExtractNewRecordState SMT spamming logs for heartbeat messages DBZ-2036

  • MySQL alias FLUSH TABLE not handled DBZ-2047

  • Embedded engine not compatible with Kafka 1.x DBZ-2054

Other changes

This release also includes the following changes:

  • Blog post and demo about Debezium + Camel DBZ-1656

  • Refactor connector config code to share the configuration definition DBZ-1750

  • DB2 connector follow-up refactorings DBZ-1753

  • Oracle JDBC driver available in Maven Central DBZ-1878

  • Align snapshot/streaming semantics in MongoDB documentation DBZ-1901

  • Add MySQL 5.5 and 5.6 to test matrix DBZ-1953

  • Upgrade to the Quarkus 1.4.1 release DBZ-1975

  • Version selector on releases page should show all versions DBZ-1979

  • Upgrade to Apache Kafka 2.5.0 and Confluent Platform 5.5.0 DBZ-1981

  • Fix broken link DBZ-1983

  • Update Outbox Quarkus extension yaml DBZ-1991

  • Allow for simplified property references in filter SMT with graal.js DBZ-1993

  • Avoid broken cross-book references in downstream docs DBZ-1999

  • Fix wrong attribute name in MongoDB connector DBZ-2006

  • Upgrade formatter and Impsort plugins DBZ-2007

  • Clarify support for non-primary key tables in PostgreSQL documentation DBZ-2010

  • Intermittent test failure on CI DBZ-2030

  • Cleanup Postgres TypeRegistry DBZ-2038

  • Upgrade to latest parent pom and checkstyle DBZ-2039

  • Reduce build output to avoid maximum log length problems on CI DBZ-2043

  • Postgres TypeRegistry makes one query per enum type at startup DBZ-2044

  • Remove obsolete metrics from downstream docs DBZ-1947

Release 1.2.0.Alpha1 (April 16th, 2020)

Kafka compatibility

This release has been built against Kafka Connect 2.4.1 and has been tested with version 2.4.1 of the Kafka brokers. See the Kafka documentation for compatibility with other versions of Kafka brokers.

Upgrading

Before upgrading the MySQL, MongoDB, PostgreSQL or SQL Server connectors, be sure to check the backward-incompatible changes that have been made since the release you were using.

When you decide to upgrade one of these connectors to 1.2.0.Alpha1 from any earlier version, first check the migration notes for the version you’re using. Gracefully stop the running connector, remove the old plugin files, install the 1.2.0.Alpha1 plugin files, and restart the connector using the same configuration. Upon restart, the 1.2.0.Alpha1 connectors will continue where the previous connector left off. As one might expect, change events previously written to Kafka by the old connector will not be modified.

If you are using our Docker images, do not forget to pull fresh versions from the Docker registry.

Breaking changes

For the SQL Server connector, the previously deprecated snapshot mode initial_schema_only has been removed. The mode schema_only should be used instead, providing the same behavior and semantics (DBZ-1945).

The previously deprecated message transformations UnwrapFromEnvelope and UnwrapMongoDbEnvelope have been removed. Instead, please use ExtractNewRecordState and ExtractNewDocumentState, respectively (DBZ-1968).
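
For example, a connector configuration that used the removed snapshot mode or SMT classes would be updated roughly as in the following sketch (Java Properties form; the transform alias is illustrative):

    import java.util.Properties;

    public class RemovedOptionsMigrationExample {
        public static void main(String[] args) {
            Properties props = new Properties();
            // SQL Server: the removed initial_schema_only snapshot mode becomes schema_only.
            props.setProperty("snapshot.mode", "schema_only");
            props.setProperty("transforms", "unwrap");
            // The removed io.debezium.transforms.UnwrapFromEnvelope SMT is replaced by ExtractNewRecordState.
            props.setProperty("transforms.unwrap.type", "io.debezium.transforms.ExtractNewRecordState");
            // For the MongoDB connector, use the ExtractNewDocumentState SMT from the MongoDB
            // connector's transforms package instead of the removed MongoDB unwrap SMT.
        }
    }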

New Features

  • Expose original value for PK updates DBZ-1531

  • New column masking mode: consistent hashing DBZ-1692

  • Provide a filtering SMT DBZ-1782

  • Support converters for embedded engine DBZ-1807

  • Enhance MongoDB connector metrics DBZ-1859

  • SQL Server connector: support reconnect after the database connection is broken DBZ-1882

  • Support SMTs in embedded engine DBZ-1930

  • Snapshot metrics shows TotalNumberOfEventsSeen as zero DBZ-1932

Fixes

This release includes the following fixes:

  • java.lang.IllegalArgumentException: Timestamp format must be yyyy-mm-dd hh:mm:ss[.fffffffff] DBZ-1744

  • Snapshot lock timeout setting is not documented DBZ-1914

  • AvroRuntimeException when publishing transaction metadata DBZ-1915

  • Connector restart logic throttles for the first 2 seconds DBZ-1918

  • Wal2json empty change event could cause NPE above version 1.0.3.final DBZ-1922

  • Misleading error message on lost database connection DBZ-1926

  • Cassandra CDC should not move and delete processed commitLog file under testing mode DBZ-1927

  • Broken internal links and anchors in documentation DBZ-1935

  • Documentation files in modules create separate pages, should be partials instead DBZ-1944

  • Validation of binlog_row_image is not compatible with MySQL 5.5 DBZ-1950

  • High CPU usage when idle DBZ-1960

  • Outbox Quarkus Extension throws NPE in quarkus:dev mode DBZ-1966

  • Cassandra Connector: unable to deserialize column mutation with reversed type DBZ-1967

Other changes

This release also includes the following changes:

  • Replace Custom CassandraTopicSelector with DBZ’s TopicSelector class in Cassandra Connector DBZ-1407

  • Improve documentation on WAL disk space usage for Postgres connector DBZ-1732

  • Outbox Quarkus Extension: Update version of extension used by demo DBZ-1786

  • Community newsletter 1/2020 DBZ-1806

  • Remove obsolete SnapshotChangeRecordEmitter DBZ-1898

  • Fix typo in Quarkus Outbox extension documentation DBZ-1902

  • Update schema change topic section of SQL Server connector doc DBZ-1903

  • Documentation should link to Apache Kafka upstream docs DBZ-1906

  • Log warning about insufficient retention time for DB history topic DBZ-1905

  • The error messaging around binlog configuration is misleading DBZ-1911

  • Restore documentation of MySQL event structures DBZ-1919

  • Link from monitoring page to connector-specific metrics DBZ-1920

  • Update snapshot.mode options in SQL Server documentation DBZ-1924

  • Update build and container images to Apache Kafka 2.4.1 DBZ-1925

  • Avoid Thread#sleep() calls in Oracle connector tests DBZ-1942

  • Different versions of Jackson components pulled in as dependencies DBZ-1943

  • Remove deprecated connector option value "initial_schema_only" DBZ-1945

  • Add docs for mask column and truncate column features DBZ-1954

  • Upgrade MongoDB driver to 3.12.3 DBZ-1958

  • Remove deprecated unwrap SMTs DBZ-1968