Debezium Blog

In this post, we are going to talk about a CDC-CQRS pipeline that uses a normalized relational database, MySQL, as the command database and a denormalized NoSQL database, MongoDB, as the query database, creating DDD aggregates along the way via Debezium and Kafka Streams.
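As a rough sketch of how the Kafka Streams side of such a pipeline can look, the topology below joins the change event topics of two command-side tables into a single aggregate. The topic names, the string-based JSON handling, and the extractOrderId() helper are assumptions made for illustration, not the code from the post:

```java
import java.util.Properties;

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KTable;

public class OrderAggregator {

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "order-aggregator");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();

        // Change events for the "orders" table, keyed by order id
        KTable<String, String> orders = builder.table("dbserver1.inventory.orders");

        // Re-key each order line by its parent order id and collect the lines per order
        KTable<String, String> linesByOrder = builder
                .<String, String>stream("dbserver1.inventory.order_lines")
                .selectKey((key, value) -> extractOrderId(value))
                .groupByKey()
                .reduce((aggregate, line) -> aggregate + "," + line);

        // Join order and lines into one denormalized DDD aggregate and emit it
        orders.join(linesByOrder,
                (order, lines) -> "{\"order\":" + order + ",\"lines\":[" + lines + "]}")
              .toStream()
              .to("orders.aggregates");

        new KafkaStreams(builder.build(), props).start();
    }

    // Hypothetical helper: pull the parent order id out of the change event payload
    private static String extractOrderId(String orderLineJson) {
        // A real implementation would parse the JSON; this regex is a placeholder
        return orderLineJson.replaceAll(".*\"order_id\":\\s*(\\d+).*", "$1");
    }
}
```

In practice you would use proper serdes and a JSON library rather than string concatenation; the sketch only shows the shape of the topology.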

This tutorial was originally published by QuestDB, where guest contributor Yitaek Hwang shows us how to stream data into QuestDB with change data capture via Debezium and Kafka Connect.

This post is part of a three-part series exploring the use of Debezium to ingest changes from an Oracle database with Oracle LogMiner. In case you missed it, the first part of this series is here.
In this second installment, we will build on what we did in part one by deploying the Oracle connector using Zookeeper, Kafka, and Kafka Connect. We are going to discuss a variety of the connector's configuration options and why they matter. And finally, we're going to see the connector in action!
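To give a flavor of the deployment step, a connector is registered by POSTing its JSON configuration to Kafka Connect's REST API. The sketch below does that from Java; the host names, credentials, and topic names are placeholder assumptions, and the property names follow the Debezium 1.x Oracle connector scheme (they differ in later versions):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class RegisterOracleConnector {

    public static void main(String[] args) throws Exception {
        // Hypothetical connector configuration (Debezium 1.x property names)
        String config = """
                {
                  "name": "oracle-connector",
                  "config": {
                    "connector.class": "io.debezium.connector.oracle.OracleConnector",
                    "database.hostname": "oracle",
                    "database.port": "1521",
                    "database.user": "c##dbzuser",
                    "database.password": "dbz",
                    "database.dbname": "ORCLCDB",
                    "database.pdb.name": "ORCLPDB1",
                    "database.server.name": "server1",
                    "database.history.kafka.bootstrap.servers": "kafka:9092",
                    "database.history.kafka.topic": "schema-changes.oracle"
                  }
                }
                """;

        // Register the connector with the Kafka Connect REST API
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8083/connectors"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(config))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
    }
}
```

The same payload is usually POSTed with a plain curl call; the interesting part is the shape of the configuration, in particular the container database (database.dbname) and pluggable database (database.pdb.name) settings that come with Oracle multi-tenancy.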

This post is part of a three-part series exploring the use of Debezium to ingest changes from an Oracle database with Oracle LogMiner. Throughout the series, we'll examine all the steps to setting up a proof-of-concept (POC) deployment of Debezium for Oracle. We will discuss setup and configuration as well as the nuances of multi-tenancy. We will also dive into known pitfalls and concerns you need to be aware of, and how to debug specific problems. And finally, we'll talk about performance and monitoring to maintain a healthy connector deployment.
Throughout this exercise, we hope to show you just how simple it is to deploy Debezium for Oracle. The installation and setup portion of the series may seem quite complicated, but many of these steps likely already exist in your environment. We will dive into each step, explaining why it is essential should you use a container image deployment.

Today, it is a common practice to build data lakes for analytics, reporting, or machine learning needs.
In this blog post we will describe a simple way to build a data lake. The solution uses a real-time data pipeline based on Debezium that supports ACID transactions and SQL updates and is highly scalable. And neither Apache Kafka nor Apache Spark applications are required to build the data feed, reducing the complexity of the overall solution.
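To illustrate the Kafka-less aspect: Debezium's embedded engine API delivers change events directly to application code, which could then write them into the data lake. The sketch below is a minimal, hypothetical Java example using the MySQL connector; the actual post builds its pipeline with Debezium Server and a dedicated consumer, and all connection settings here are placeholders:

```java
import java.util.Properties;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import io.debezium.engine.ChangeEvent;
import io.debezium.engine.DebeziumEngine;
import io.debezium.engine.format.Json;

public class KafkalessPipeline {

    public static void main(String[] args) {
        // Hypothetical source settings; any Debezium connector can feed the engine
        Properties props = new Properties();
        props.setProperty("name", "datalake-feed");
        props.setProperty("connector.class", "io.debezium.connector.mysql.MySqlConnector");
        props.setProperty("database.hostname", "mysql");
        props.setProperty("database.port", "3306");
        props.setProperty("database.user", "debezium");
        props.setProperty("database.password", "dbz");
        props.setProperty("database.server.id", "184054");
        props.setProperty("database.server.name", "dbserver1");
        props.setProperty("database.history", "io.debezium.relational.history.FileDatabaseHistory");
        props.setProperty("database.history.file.filename", "/tmp/dbhistory.dat");
        props.setProperty("offset.storage", "org.apache.kafka.connect.storage.FileOffsetBackingStore");
        props.setProperty("offset.storage.file.filename", "/tmp/offsets.dat");

        // Change events arrive as JSON strings; no Kafka broker is involved
        DebeziumEngine<ChangeEvent<String, String>> engine = DebeziumEngine.create(Json.class)
                .using(props)
                .notifying(event -> {
                    // A real pipeline would append this record to the data lake here
                    System.out.println(event.value());
                })
                .build();

        ExecutorService executor = Executors.newSingleThreadExecutor();
        executor.execute(engine);
    }
}
```

The engine sketch only shows that the feed does not need a broker; the post's actual solution runs Debezium Server with a consumer that writes into the lake's table format to get the ACID and SQL-update guarantees mentioned above.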