I think that using Kafka to get CDC events is fine. The problem, in my
case, is really about how to proceed:
 1) Do I need to create Flink tables before reading CDC events, or is
there a way to automatically create Flink tables when they get created
via a DDL event (assuming a filter on the table names)?
 2) How should I handle changes in the table structure (adding or
removing columns)? Is Flink able to react to this?
 3) CDC is a common use case (IMHO) and it's perfect for migrating to,
or testing, an event-driven architecture. So I'd expect Flink to make
it easy to query dynamic tables coming from a db (via Debezium)
without having to implement the insert/update/delete handling myself
(see the sketch below).

What do you think?
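
To make point 3 concrete, this is roughly the boilerplate I'd like to
avoid writing myself: a minimal, untested sketch that consumes
Debezium's default JSON envelope from a Kafka topic and maps the "op"
field to change events by hand. The topic name, connection properties
and class names are only illustrative:

import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;
import org.apache.flink.api.common.functions.RichMapFunction;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;

import java.util.Properties;

public class DebeziumEnvelopeJob {

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();

        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092");
        props.setProperty("group.id", "cdc-demo");

        // Debezium writes one topic per captured table: <server>.<db>.<table>.
        // Tombstone records (null values) after deletes would need a custom
        // DeserializationSchema; they are ignored here for brevity.
        env.addSource(new FlinkKafkaConsumer<>(
                    "dbserver1.inventory.customers", new SimpleStringSchema(), props))
           .map(new EnvelopeParser())
           .print();

        env.execute("Debezium CDC envelope demo");
    }

    /** Decodes the Debezium envelope and renders each change as text. */
    static class EnvelopeParser extends RichMapFunction<String, String> {
        private transient ObjectMapper mapper;

        @Override
        public void open(Configuration parameters) {
            mapper = new ObjectMapper();
        }

        @Override
        public String map(String value) throws Exception {
            JsonNode root = mapper.readTree(value);
            // With the default JSON converter the envelope sits under
            // "payload"; with schemas disabled it is the top-level object.
            JsonNode payload = root.has("payload") ? root.get("payload") : root;
            String op = payload.get("op").asText();
            switch (op) {
                case "c": return "INSERT " + payload.get("after");
                case "u": return "UPDATE " + payload.get("before")
                                 + " -> " + payload.get("after");
                case "d": return "DELETE " + payload.get("before");
                default:  return "SNAPSHOT/OTHER " + payload; // e.g. op == "r"
            }
        }
    }
}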

On Thu, 18 Jul 2019 at 13:17, miki haiat <miko5...@gmail.com> wrote:

> I'm actually thinking about this option as well.
> I'm assuming that the correct way to implement it is to integrate
> Debezium Embedded into a source function?
>
>
>
> [1] https://github.com/debezium/debezium/tree/master/debezium-embedded
>
>
> On Wed, Jul 17, 2019 at 7:08 PM Flavio Pompermaier <pomperma...@okkam.it>
> wrote:
>
>> Hi to all,
>> I'd like to know whether there is an example of how to leverage
>> Debezium as a CDC source to feed a Flink Table (from MySQL, for
>> example).
>>
>> Best,
>> Flavio
>>
>
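
As for the embedded engine linked above [1], my guess is that it could
be wrapped in a SourceFunction along these lines. Again an untested
sketch: the connector properties are illustrative, and persisting
Debezium's offsets in Flink checkpointed state (instead of a local
file) is left out entirely:

import io.debezium.config.Configuration;
import io.debezium.embedded.EmbeddedEngine;
import org.apache.flink.streaming.api.functions.source.RichSourceFunction;
import org.apache.flink.streaming.api.functions.source.SourceFunction.SourceContext;

/** Wraps Debezium's EmbeddedEngine in a Flink source (at-least-once at best). */
public class DebeziumSourceFunction extends RichSourceFunction<String> {

    private transient volatile EmbeddedEngine engine;

    @Override
    public void run(SourceContext<String> ctx) {
        Configuration config = Configuration.create()
                .with("name", "mysql-cdc-engine")
                .with("connector.class", "io.debezium.connector.mysql.MySqlConnector")
                // File-based offsets only for illustration; ideally these
                // would live in Flink state instead.
                .with("offset.storage", "org.apache.kafka.connect.storage.FileOffsetBackingStore")
                .with("offset.storage.file.filename", "/tmp/debezium-offsets.dat")
                .with("database.hostname", "localhost")
                .with("database.port", "3306")
                .with("database.user", "debezium")
                .with("database.password", "dbz")
                .with("database.server.id", "5400")
                .with("database.server.name", "dbserver1")
                .with("database.history", "io.debezium.relational.history.FileDatabaseHistory")
                .with("database.history.file.filename", "/tmp/debezium-history.dat")
                .build();

        engine = EmbeddedEngine.create()
                .using(config)
                .notifying(record -> {
                    // Emit under the checkpoint lock so records and
                    // checkpoints don't interleave.
                    synchronized (ctx.getCheckpointLock()) {
                        ctx.collect(String.valueOf(record.value()));
                    }
                })
                .build();

        engine.run(); // blocks until stop() is called from cancel()
    }

    @Override
    public void cancel() {
        EmbeddedEngine e = engine;
        if (e != null) {
            e.stop();
        }
    }
}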
