Hi!
Thanks for the clarification. Flink currently does not have the
functionality to "revert all operations up to some point". What I would
suggest is still to discard the resulting tables and rerun the pipeline
from the point where the filtering logic changed. If the pipeline has
processed some
Hi!
Thanks for your reply! I think I didn't make myself clear on problem 1, so
I drew a picture.
1. "tables in DB": things start at the database, and we sync each table's
changelog into a dynamic table with a CDC tool. Each changelog record
includes the RowData and a RowKind such as INSERT / UPDATE / DELETE.
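To illustrate what such a changelog record carries, here is a minimal Python sketch (not the Flink API; the class names and table contents are illustrative, though the short codes mirror Flink's `org.apache.flink.types.RowKind`, where an UPDATE is split into an UPDATE_BEFORE / UPDATE_AFTER pair):

```python
from dataclasses import dataclass
from enum import Enum

class RowKind(Enum):
    # Short codes match Flink's RowKind string representation.
    INSERT = "+I"
    UPDATE_BEFORE = "-U"
    UPDATE_AFTER = "+U"
    DELETE = "-D"

@dataclass
class ChangelogRecord:
    kind: RowKind  # the operation that produced this record
    row: dict      # the row data (column name -> value)

# A CDC tool emits one record per database change; an UPDATE becomes
# an UPDATE_BEFORE (old image) plus an UPDATE_AFTER (new image).
changelog = [
    ChangelogRecord(RowKind.INSERT, {"id": 1, "amount": 10}),
    ChangelogRecord(RowKind.UPDATE_BEFORE, {"id": 1, "amount": 10}),
    ChangelogRecord(RowKind.UPDATE_AFTER, {"id": 1, "amount": 25}),
    ChangelogRecord(RowKind.DELETE, {"id": 1, "amount": 25}),
]

def materialize(records):
    """Replay a changelog into a table keyed by 'id'."""
    table = {}
    for rec in records:
        if rec.kind in (RowKind.INSERT, RowKind.UPDATE_AFTER):
            table[rec.row["id"]] = rec.row
        elif rec.kind == RowKind.DELETE:
            table.pop(rec.row["id"], None)
        # UPDATE_BEFORE carries the old image; nothing to apply here.
    return table
```

Replaying the full changelog above yields an empty table (the row was inserted, updated, then deleted); replaying only the first three records leaves the updated row in place.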
Hi!
For problem 1, Flink does not support deleting specific records. As you're
running a batch job, I suggest creating a new table based on the new filter
condition. Even if you could delete the old records, you would still have
to generate the new ones, so why not generate them directly into a new
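The suggestion above, regenerating under the new condition instead of deleting, can be sketched in plain Python (the table contents and filter are hypothetical, standing in for a Flink batch job writing to a fresh sink table):

```python
# Rows materialized under the old filter condition.
old_table = [
    {"id": 1, "status": "active"},
    {"id": 2, "status": "inactive"},
    {"id": 3, "status": "active"},
]

# Hypothetical new filter condition.
def new_filter(row):
    return row["status"] == "active"

# Instead of deleting non-matching rows in place, recompute the
# result into a fresh table and swap it in for the old one.
new_table = [row for row in old_table if new_filter(row)]
```

The swap is atomic from the downstream reader's point of view, which in-place deletes would not be.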
Hi, community!
I am working on building a data processing pipeline based on changelogs
(CDC) and I have run into two problems.
--(sql_0)--> Table A --(sql_1)--> Table B ---> other tables downstream
--(sql_2)--> Table C ---> other tables downstream
Table A is generated based on