[ https://issues.apache.org/jira/browse/FLINK-35313?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17844843#comment-17844843 ]

JJJJude commented on FLINK-35313:
---------------------------------

Migrated from [https://github.com/apache/flink-cdc/issues/1898|https://github.com/apache/flink-cdc/issues/1898]

> Add upsert changelog mode to avoid UPDATE_BEFORE records push down
> ------------------------------------------------------------------
>
>                 Key: FLINK-35313
>                 URL: https://issues.apache.org/jira/browse/FLINK-35313
>             Project: Flink
>          Issue Type: New Feature
>          Components: Flink CDC
>            Reporter: JJJJude
>            Priority: Major
>
> I am trying to use Flink SQL to write MySQL CDC data into Redis as a 
> dimension table for other business use. When an {{UPDATE}} DML statement is 
> executed, the CDC data is converted into two records, {{-D (UPDATE_BEFORE)}} 
> followed by {{+I (UPDATE_AFTER)}}, before being sunk to Redis. However, 
> deleting first causes other data streams to lose data (NULL) when joining, 
> which is unacceptable.
> I think we can add support for [upsert changelog 
> mode|https://nightlies.apache.org/flink/flink-docs-master/docs/dev/table/concepts/dynamic_tables/#table-to-stream-conversion]
>  by adding a {{changelogMode}} option with a mandatory primary key 
> configuration. Basically, with {{changelogMode=upsert}} we will avoid 
> {{UPDATE_BEFORE}} rows and we will require a primary key for the table.
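The requested option could surface in DDL roughly as sketched below. Note this is an illustration of the proposal, not an existing Flink feature: the {{changelogMode}} connector option is the thing being requested, and the connector, table, and column names are made up for the example.

```sql
-- Hypothetical sketch of the proposed option; 'changelogMode' does not
-- exist today, and the 'redis' connector, table, and columns are
-- illustrative only.
CREATE TABLE redis_dim (
  id   BIGINT,
  name STRING,
  PRIMARY KEY (id) NOT ENFORCED   -- mandatory when changelogMode = upsert
) WITH (
  'connector'     = 'redis',
  'changelogMode' = 'upsert'      -- proposed: suppress UPDATE_BEFORE rows
);
```

With upsert mode, an UPDATE on the source would reach the sink as a single UPDATE_AFTER record keyed by the primary key, rather than a delete followed by an insert, so concurrent lookup joins would never observe a transient NULL.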



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
