
We are excited to announce the release of Debezium 3.3.0.CR1, bringing significant improvements in reliability and compatibility. This release builds on Kafka 4.1.0 and introduces several key enhancements that will streamline your change data capture workflows.
Breaking changes
With any new major release of software, there are often several breaking changes. The Debezium 3.3.0.CR1 release is no exception, so let’s discuss the major changes you should be aware of.
Db2 offset position validation is unreliable
Due to reliability issues with offset position validation in Db2, we’ve temporarily disabled validation to prevent false failures (DBZ-9470). As a result, the `when_needed` snapshot mode is currently unavailable for Db2 connectors.
Impact: If you’re using the `when_needed` snapshot mode with Db2, you will need to use an alternative mode until this limitation is resolved in a future release.
JDBC sink data type precision changes
The upgrade to Hibernate 7.1.0.Final brings more precise data type handling, particularly for temporal and floating-point data (DBZ-9481):
- Temporal types: `time` and `timestamp` columns now default to higher precision. For example, Oracle time and timestamp columns will be created using 9-digit precision instead of the previous default of 6 digits.
- Floating-point types: Debezium explicitly prefers `float`, `real`, and `double precision` for representing floating-point values. If you need Oracle’s binary float and double data types instead, set `hibernate.dialect.oracle.use_binary_floats` to `true` in your connector configuration.
Only new temporal type columns will be added using the new 9-digit precision; existing columns are unaffected. If you’d prefer your existing columns to be defined with the higher precision for consistency, this must be done manually.
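To illustrate, a JDBC sink connector targeting Oracle might opt into the binary float behavior like this. This is a sketch; the connector name, topic, URL, and credentials are hypothetical, and `insert.mode`/`primary.key.mode` are shown only as typical companions:

```json
{
  "name": "oracle-jdbc-sink",
  "config": {
    "connector.class": "io.debezium.connector.jdbc.JdbcSinkConnector",
    "topics": "orders",
    "connection.url": "jdbc:oracle:thin:@//oracle-host:1521/ORCLPDB1",
    "connection.username": "appuser",
    "connection.password": "********",
    "insert.mode": "upsert",
    "primary.key.mode": "record_key",
    "hibernate.dialect.oracle.use_binary_floats": "true"
  }
}
```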
New features and improvements
Kafka 4.1.0 foundation for better performance
Debezium 3.3.0.CR1 is built against Kafka Connect 4.1.0 and has been thoroughly tested with Kafka brokers version 4.1.0 (DBZ-9460). This ensures Debezium can run on the latest, most stable Kafka infrastructure.
Before upgrading, review the Kafka documentation to ensure compatibility with your existing Kafka broker versions.
JDBC sink self-heals against database errors
The JDBC sink connector now automatically retries SQL exceptions that occur during change processing, providing a crucial buffer for self-healing scenarios and improving the connector’s resilience (DBZ-7772).
This is particularly valuable in multi-task environments where concurrent writes to the same table might cause lock conflicts. Instead of failing completely or delegating the restart to the runtime, the connector now recovers from these transient issues itself, significantly improving its overall reliability.
Smarter Oracle LogMiner archive destination management
A new precedence-based archive destination strategy can be used for certain Oracle connector environments. Previously, users had to specify a single destination (e.g., `LOG_ARCHIVE_DEST_2`), which required manual configuration changes during failover scenarios when the new primary uses a different destination name.
Users can now configure multiple destinations in priority order using a comma-separated list (DBZ-9041), for example `LOG_ARCHIVE_DEST_1,LOG_ARCHIVE_DEST_2`. The connector will intelligently select the first destination that is both local and valid, adapting to failover scenarios without requiring any configuration change.
As an example, suppose the Oracle primary instance uses `LOG_ARCHIVE_DEST_1` and the standby uses `LOG_ARCHIVE_DEST_2`. Using the new priority order feature, the connector seamlessly switches from `LOG_ARCHIVE_DEST_1` to `LOG_ARCHIVE_DEST_2` when failover occurs.
Please note that this is only useful when a standby environment becomes the new primary environment during a disaster-recovery failover scenario. This is not the same as when an Oracle Real Application Cluster (RAC) node becomes unavailable and the connector connects to the next available node on the cluster. In the latter case, all nodes in the cluster share the same archive destination configuration, so priority order does not apply.
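A sketch of an Oracle connector configuration using the prioritized list is shown below. The hostnames and credentials are placeholders, and the property name `log.mining.archive.destination.name` is the existing archive destination setting, assumed here to accept the new comma-separated form:

```json
{
  "name": "oracle-connector",
  "config": {
    "connector.class": "io.debezium.connector.oracle.OracleConnector",
    "database.hostname": "oracle-primary",
    "database.port": "1521",
    "database.user": "c##dbzuser",
    "database.password": "********",
    "database.dbname": "ORCLCDB",
    "topic.prefix": "oracleserver",
    "log.mining.archive.destination.name": "LOG_ARCHIVE_DEST_1,LOG_ARCHIVE_DEST_2"
  }
}
```

With this configuration, no change is needed when `LOG_ARCHIVE_DEST_1` becomes invalid after a failover; the connector simply moves to the next valid local destination in the list.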
Enhanced Informix default value localization support
The Informix connector now handles locale-specific database configurations more intelligently (DBZ-9181). Instead of assuming a US locale, the connector properly parses locale-dependent values like `DBMONEY`, `DBCENTURY`, and `DBDATE` based on your actual database configuration.
This improvement ensures more accurate data capture across diverse international deployments and eliminates potential data parsing errors in non-US environments.
Other changes
- A transaction mined across two queries can randomly cause unsupported operations DBZ-8747
- Add REST API to retrieve the list of tables DBZ-9317
- Source and Destination entities must be linked to the Connection entity DBZ-9333
- Implement Kafka connection validator DBZ-9334
- Unpin netty image DBZ-9390
- Update JDBC sink connector doc to identify the data types that Debezium does not support DBZ-9403
- Expose endpoint to get JSON schemas about connection DBZ-9420
- Add missing destinations to Debezium Platform DBZ-9442
- Oracle connector reselect exception handling (ORA-01555 + ORA-22924) DBZ-9446
- In case of read-only usage the DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG should not be used DBZ-9452
- Improve Maven compiler config for Debezium Platform DBZ-9453
- Zookeeper-less Kafka for DockerRhel executions DBZ-9462
- Debezium Server fails with CNFE DBZ-9468
- OutOfMemory exception when recreating list of tables for snapshot callables DBZ-9472
- Debezium Server raises "AttributeNotFoundException QueueTotalCapacity" with SqlServer source DBZ-9477
- Getting "Unknown column in 'field list'" when column name contains backtick DBZ-9479
- Update Mockito to 5.19.0 DBZ-9480
- Update to AssertJ 3.27.5 DBZ-9482
- MySQL Event get header throws NullPointerException DBZ-9483
- Declare source/transforms in ServiceLoader manifests so it can be compatible with new plugin discovery mode DBZ-9493
In total, 29 issues were resolved in Debezium 3.3.0.CR1. The list of changes can also be found in our release notes.
A big thank you to all the contributors from the community who worked diligently on this release:
Alvar Viana Gomez, Alvar Viana, Chris Cranford, Gabriel Cerioni, Giovanni Panice, Guangnan Shi, Indra Shukla, Jiri Pechanec, Lars M. Johansson, Lucas Gazire, Mario Fiore Vitale, Pranav Tiwari, Rajendra Dangwal, Robert Roldan, Sergei Nikolaev, Thomas Thornton, Vojtech Juranek, Wouter Coekaerts, leoloel
Chris Cranford
Chris is a software engineer at IBM, and formerly Red Hat, where he works on Debezium and deepens his expertise in all things Oracle and Change Data Capture on a daily basis. He previously worked on Hibernate, the leading open-source JPA persistence framework, and continues to contribute to Quarkus. Chris is based in North Carolina, United States.

About Debezium
Debezium is an open source distributed platform that turns your existing databases into event streams, so applications can see and respond almost instantly to each committed row-level change in the databases. Debezium is built on top of Kafka and provides Kafka Connect compatible connectors that monitor specific database management systems. Debezium records the history of data changes in Kafka logs, so your application can be stopped and restarted at any time and can easily consume all of the events it missed while it was not running, ensuring that all events are processed correctly and completely. Debezium is open source under the Apache License, Version 2.0.
Get involved
We hope you find Debezium interesting and useful, and want to give it a try. Follow us on Twitter @debezium, chat with us on Zulip, or join our mailing list to talk with the community. All of the code is open source on GitHub, so build the code locally and help us improve our existing connectors and add even more connectors. If you find problems or have ideas how we can improve Debezium, please let us know or log an issue.