Debezium Blog

In today’s dynamic data environments, detecting and understanding data mutation patterns is critical for system reliability. In this blog post, we’ll explore how to use Debezium for comprehensive database activity logging and analysis in microservice architectures. We’ll look at how Debezium captures row-level changes and streams them in real time, enabling immediate visibility into database operations, and how integrating with analytics tools lets us build detailed activity dashboards that reveal the volume and nature of operations per table. These insights are invaluable for spotting unexpected patterns, such as a sudden drop in inserts caused by a buggy new microservice deployment. You will learn how to set up Debezium, configure it for this use case, and use the generated data to create actionable dashboards.
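As a taste of the analysis side of the post, here is a minimal sketch of a Kafka Streams application that tallies Debezium change events by operation type. It assumes Debezium’s default event envelope (where the `op` field is `c`, `u`, or `d`); the topic names and server prefix are hypothetical placeholders.

```java
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.Produced;

import java.util.Properties;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class ActivityCounter {

    // Crude extraction of the "op" field so the sketch stays dependency-free;
    // a real pipeline would deserialize the envelope with a JSON serde.
    private static final Pattern OP = Pattern.compile("\"op\"\\s*:\\s*\"(\\w)\"");

    static String extractOp(String changeEvent) {
        Matcher m = OP.matcher(changeEvent == null ? "" : changeEvent);
        return m.find() ? m.group(1) : "unknown";
    }

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "db-activity-counter");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();
        // Change events for one table, emitted by a hypothetical connector
        // configured with topic prefix "dbserver1".
        KStream<String, String> changes = builder.stream("dbserver1.inventory.orders");

        // Count events per operation type (c = insert, u = update, d = delete).
        KTable<String, Long> countsByOp = changes
                .mapValues(ActivityCounter::extractOp)
                .groupBy((key, op) -> op)
                .count();

        // A dashboard can consume this compacted stream of running totals.
        countsByOp.toStream()
                .to("activity.orders.op-counts", Produced.with(Serdes.String(), Serdes.Long()));

        new KafkaStreams(builder.build(), props).start();
    }
}
```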

The Debezium community is in the homestretch for the next major milestone, Debezium 3. We wanted to take this opportunity to remind the community of our plans regarding Debezium’s container images…

Hello everyone, Jakub here. You may have noticed that there hasn’t been much happening around Debezium UI lately. That, however, is only partially true. We owe you an explanation in this regard, so please bear with me. Let’s start with the status of the current UI project. It became increasingly clear that while a UI for Debezium is an important part of our vision, developing a UI strictly tied to Kafka Connect is not the right...

In this article, we are going to present and demonstrate a new feature delivered in Debezium 2.4: the integration with the TimescaleDB database.

TimescaleDB is an open-source database designed to make SQL scalable for time-series data. It is implemented as an extension of the PostgreSQL database, which allowed us to re-use the standard Debezium PostgreSQL connector and implement TimescaleDB support as a single message transform (SMT).
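To give a feel for what that looks like in practice, here is a sketch of a connector configuration wiring the transform into the standard PostgreSQL connector. Hostnames, credentials, and the topic prefix are placeholders, and the SMT’s fully qualified class name and connection properties reflect our reading of the 2.4 release; check the documentation for your version.

```properties
connector.class=io.debezium.connector.postgresql.PostgresConnector
plugin.name=pgoutput
database.hostname=timescaledb
database.port=5432
database.user=postgres
database.password=postgres
database.dbname=postgres
topic.prefix=dbserver1
# Capture the internal schema where TimescaleDB stores chunk data
schema.include.list=_timescaledb_internal
# Route chunk-level events back to logical hypertable topics
transforms=timescaledb
transforms.timescaledb.type=io.debezium.connector.postgresql.transforms.timescaledb.TimescaleDb
transforms.timescaledb.database.hostname=timescaledb
transforms.timescaledb.database.port=5432
transforms.timescaledb.database.user=postgres
transforms.timescaledb.database.password=postgres
transforms.timescaledb.database.dbname=postgres
```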

In the realm of data streaming optimization, even subtle improvements can make a significant impact. This article focuses on one such refinement: the introduction of batch support in Debezium’s JDBC connector. We’ll guide you through the process of enabling batches and share the practical outcomes of our performance testing.
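As a preview, enabling batching is mostly a matter of sink configuration. The sketch below shows the relevant knobs as we understand them: `batch.size` on the connector, plus a consumer override so each poll can actually return enough records to fill a batch. Connection details and topic names are placeholders.

```properties
connector.class=io.debezium.connector.jdbc.JdbcSinkConnector
topics=orders
connection.url=jdbc:postgresql://postgres:5432/inventory
connection.username=postgres
connection.password=postgres
insert.mode=upsert
primary.key.mode=record_key
schema.evolution=basic
# Maximum number of records grouped into a single JDBC batch
batch.size=1000
# Let each poll hand the connector enough records to fill a batch
consumer.override.max.poll.records=1000
```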

With Debezium 2.3, we introduced a preview of a brand new Debezium Operator, with the aim of providing seamless deployment of Debezium Server to Kubernetes (k8s) clusters. The Debezium 2.4.0.Final release brings the next step towards full support of this component: we are happy to announce that the Debezium Operator is now available in the OperatorHub catalog for Kubernetes, as well as in the community operator catalog embedded in the OpenShift and OKD distributions. The operator remains in the incubation phase; however, full support is approaching fast.
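For a sense of the deployment model, here is a sketch of the custom resource the operator reconciles. The `DebeziumServer` kind and `debezium.io/v1alpha1` API version reflect the incubating releases as we understand them; the source connector and sink settings are hypothetical placeholders.

```yaml
apiVersion: debezium.io/v1alpha1
kind: DebeziumServer
metadata:
  name: my-debezium
spec:
  source:
    # Stream changes from a hypothetical PostgreSQL instance
    class: io.debezium.connector.postgresql.PostgresConnector
    config:
      database.hostname: postgres
      database.port: 5432
      database.user: postgres
      database.password: postgres
      database.dbname: inventory
      topic.prefix: dbserver1
  sink:
    # Publish change events to a Kafka cluster in the same namespace
    type: kafka
    config:
      producer.bootstrap.servers: kafka:9092
      producer.key.serializer: org.apache.kafka.common.serialization.StringSerializer
      producer.value.serializer: org.apache.kafka.common.serialization.StringSerializer
```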

Welcome to the third installment of our series on Debezium Signaling and Notifications. In this article, we continue our exploration, delving into how to enable and manage these features using the JMX channel.

We will also explore how to send signals and get notifications through the REST API leveraging Jolokia.
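As a quick preview, the sketch below sends an ad hoc incremental snapshot signal with a plain JMX client. The MBean name pattern and the three-argument `signal` operation follow our reading of the Debezium docs; the host, port, connector type, server name, and table are hypothetical.

```java
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class JmxSignalClient {
    public static void main(String[] args) throws Exception {
        // Connect to the JVM running the connector (JMX remoting enabled).
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://localhost:1099/jmxrmi");
        try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection mbs = connector.getMBeanServerConnection();
            // Assumed MBean naming scheme for the signal channel.
            ObjectName signalBean = new ObjectName(
                    "debezium.postgres:type=management,context=signals,server=dbserver1");
            // Trigger an incremental snapshot of a single table.
            mbs.invoke(signalBean, "signal",
                    new Object[] {
                        "ad-hoc-1",                                   // signal id
                        "execute-snapshot",                           // signal type
                        "{\"data-collections\": [\"public.orders\"]}" // signal payload
                    },
                    new String[] {
                        String.class.getName(),
                        String.class.getName(),
                        String.class.getName()
                    });
        }
    }
}
```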

Welcome to this series of articles dedicated to signaling and notifications in Debezium! This post serves as the second installment in the series, where we will discuss how to customize the signal and notification channels in Debezium.

Debezium 2.3 introduced new improvements to its signaling and notification capabilities: in addition to the pre-defined channels, you can now set up custom signal and notification channels. This feature enables users to tailor the system to their unique needs and combine it with their existing infrastructure or third-party solutions, supporting effective monitoring and a proactive response to data changes by precisely capturing and communicating signal events and triggering notifications through preferred channels.
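To make the idea concrete, here is a minimal sketch of a custom notification channel under the SPI introduced in 2.3, as we understand it: implement `NotificationChannel`, register the class via `META-INF/services/io.debezium.pipeline.notification.channels.NotificationChannel`, and enable it with `notification.enabled.channels`. The log-based target here is just a stand-in for a real third-party integration.

```java
import io.debezium.config.CommonConnectorConfig;
import io.debezium.pipeline.notification.Notification;
import io.debezium.pipeline.notification.channels.NotificationChannel;

public class LogNotificationChannel implements NotificationChannel {

    @Override
    public void init(CommonConnectorConfig config) {
        // Read any custom properties from the connector configuration here.
    }

    @Override
    public String name() {
        // Referenced from the connector config:
        // notification.enabled.channels=log-channel
        return "log-channel";
    }

    @Override
    public void send(Notification notification) {
        // Forward to any external system; here we simply print it.
        System.out.printf("Debezium notification [%s]: %s -> %s%n",
                notification.getId(),
                notification.getType(),
                notification.getAdditionalData());
    }

    @Override
    public void close() {
        // Release any resources acquired in init().
    }
}
```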

The first article in this series, Signaling and Notifications in Debezium, provides an overview of the signaling and notification features in Debezium. It also discusses the available channels and their use cases for various scenarios.

This post is the final part of a 3-part series to explore using Debezium to ingest changes from an Oracle database using Oracle LogMiner. In case you missed it, the first installment of this series is found here and the second installment is found here.

In this third and final installment, we are going to build on what we have done in the previous two posts, focusing on the following areas:

Welcome to this series of articles dedicated to signaling and notifications in Debezium! This post serves as the first installment in the series, where we will introduce the signaling and notification features offered by Debezium and discuss the available channels for interacting with the platform.

In the subsequent parts of this series, we will delve deeper into customizing signaling channels and explore additional topics such as JMX signaling and notifications.

In this post, we are going to talk about a CDC-CQRS pipeline between a normalized relational database, MySQL, as the command database and a denormalized NoSQL database, MongoDB, as the query database, resulting in the creation of DDD aggregates via Debezium and Kafka Streams.
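As a preview of the stream-processing side, the sketch below joins two change-event tables into one denormalized document per order. The topic names are hypothetical, the order-lines topic is assumed to be re-keyed and collected per order id by an upstream step, and the string concatenation stands in for the proper aggregate construction shown in the post.

```java
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KTable;

import java.util.Properties;

public class OrderAggregator {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "order-aggregator");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();

        // Latest state of each order row from the Debezium change topic, keyed by order id.
        KTable<String, String> orders = builder.table("dbserver1.inventory.orders");
        // Order lines assumed to be pre-aggregated per order id upstream.
        KTable<String, String> lines = builder.table("order-lines-by-order");

        // Merge both sides into one denormalized JSON document per order;
        // a MongoDB sink can then upsert each document into the query database.
        orders.join(lines, (order, orderLines) ->
                        "{\"order\":" + order + ",\"lines\":" + orderLines + "}")
              .toStream()
              .to("order-aggregates");

        new KafkaStreams(builder.build(), props).start();
    }
}
```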

This tutorial was originally published by QuestDB, where guest contributor, Yitaek Hwang, shows us how to stream data into QuestDB with change data capture via Debezium and Kafka Connect.

This post is part of a 3-part series to explore using Debezium to ingest changes from an Oracle database using Oracle LogMiner. In case you missed it, the first part of this series is here.

In this second installment, we will build on what we did in part one by deploying the Oracle connector using ZooKeeper, Kafka, and Kafka Connect. We are going to discuss a variety of configuration options for the connector and why they’re essential. And finally, we’re going to see the connector in action!
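For orientation, a registration request for the connector typically looks like the sketch below, POSTed to Kafka Connect’s REST API. Hostnames, credentials, and table names are placeholders, and the property names follow recent Debezium releases (older versions spell some of them differently, e.g. `database.server.name` instead of `topic.prefix`).

```json
{
  "name": "oracle-connector",
  "config": {
    "connector.class": "io.debezium.connector.oracle.OracleConnector",
    "database.hostname": "oracle",
    "database.port": "1521",
    "database.user": "c##dbzuser",
    "database.password": "dbz",
    "database.dbname": "ORCLCDB",
    "database.pdb.name": "ORCLPDB1",
    "topic.prefix": "server1",
    "table.include.list": "DEBEZIUM.CUSTOMERS",
    "schema.history.internal.kafka.bootstrap.servers": "kafka:9092",
    "schema.history.internal.kafka.topic": "schema-changes.oracle"
  }
}
```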

This post is part of a 3-part series to explore using Debezium to ingest changes from an Oracle database using Oracle LogMiner. Throughout the series, we’ll examine all the steps for setting up a proof-of-concept (POC) deployment of Debezium for Oracle. We will discuss setup and configuration as well as the nuances of multi-tenancy. We will also dive into known pitfalls and concerns you should be aware of, and how to debug specific problems. And finally, we’ll talk about performance and monitoring to maintain a healthy connector deployment.

Throughout this exercise, we hope to show you just how simple it is to deploy Debezium for Oracle. The installation and setup portion of the series may seem quite complicated, but many of these steps will likely already be in place in a pre-existing environment. We will dive into each step and explain why it is essential, should you use a container image deployment.

Today, it is a common practice to build data lakes for analytics, reporting, or machine learning needs.

In this blog post, we will describe a simple way to build a data lake. The solution uses a real-time data pipeline based on Debezium, supports ACID transactions and SQL updates, and is highly scalable. And it doesn’t require Apache Kafka or Apache Spark applications to build the data feed, reducing the complexity of the overall solution.