[jira] [Updated] (FLINK-22826) flink sql1.13.1 causes data loss based on change log stream data join
[ https://issues.apache.org/jira/browse/FLINK-22826?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Weijie Guo updated FLINK-22826:
-------------------------------
    Fix Version/s: 2.0.0
                       (was: 1.20.0)

> flink sql1.13.1 causes data loss based on change log stream data join
> ---------------------------------------------------------------------
>
>                 Key: FLINK-22826
>                 URL: https://issues.apache.org/jira/browse/FLINK-22826
>             Project: Flink
>          Issue Type: Bug
>          Components: Table SQL / API
>    Affects Versions: 1.12.0, 1.13.1
>            Reporter: 徐州州
>            Priority: Minor
>              Labels: auto-deprioritized-major, stale-blocker
>             Fix For: 2.0.0
>
> {code:java}
> insert into dwd_order_detail
> select
>   ord.Id,
>   ord.Code,
>   Status,
>   concat(cast(ord.Id as STRING),
>          if(oed.Id is null, 'oed_null', cast(oed.Id as STRING)),
>          DATE_FORMAT(LOCALTIMESTAMP, 'yyyy-MM-dd')) as uuids,
>   TO_DATE(DATE_FORMAT(LOCALTIMESTAMP, 'yyyy-MM-dd')) as As_Of_Date
> from orders ord
> left join order_extend oed
>   on ord.Id = oed.OrderId
>   and oed.IsDeleted = 0
>   and oed.CreateTime > CAST(DATE_FORMAT(LOCALTIMESTAMP, 'yyyy-MM-dd') AS TIMESTAMP)
> where (ord.OrderTime > CAST(DATE_FORMAT(LOCALTIMESTAMP, 'yyyy-MM-dd') AS TIMESTAMP)
>     or ord.ReviewTime > CAST(DATE_FORMAT(LOCALTIMESTAMP, 'yyyy-MM-dd') AS TIMESTAMP)
>     or ord.RejectTime > CAST(DATE_FORMAT(LOCALTIMESTAMP, 'yyyy-MM-dd') AS TIMESTAMP))
>   and ord.IsDeleted = 0;
> {code}
> My upsert-kafka sink table uses uuids as its PRIMARY KEY.
> This is the logic of my Kafka-based canal-json changelog join, which writes to an
> upsert-kafka table. I can confirm that version 1.12 also has this problem; I had
> just upgraded from 1.12 to 1.13.
> I looked up one user's order data: order number XJ0120210531004794 appears in the
> original canal-json table as +U, which is normal.
> {code:java}
> | +U | XJ0120210531004794 | 50 |
> | +U | XJ0120210531004672 | 50 |
> {code}
> But after being written to upsert-kafka via the join, the data consumed from
> upsert-kafka is:
> {code:java}
> | +I | XJ0120210531004794 | 50 |
> | -U | XJ0120210531004794 | 50 |
> {code}
> These are two records for one order. The order has not changed in the orders and
> order_extend tables since it was created, yet the trailing -U record retracts the
> row, so it is never counted and the final result is wrong.

-- This message was sent by Atlassian Jira (v8.20.10#820010)
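The loss mechanism described above can be illustrated with a minimal simulation. This is not Flink code; `materialize()` and the record tuples are hypothetical, and it assumes the usual upsert semantics: a consumer of an upsert-kafka topic keeps the latest row per PRIMARY KEY for `+I`/`+U` records and drops the key for `-U`/`-D` retractions. A changelog whose last record for a key is a retraction therefore leaves no row for that key.

```python
# Sketch (not Flink code) of materializing a per-key changelog the way an
# upsert-kafka consumer would: upserts keep the latest row, retractions
# delete the key. Record tuples are (op, key, row); keys are simplified to
# the order number rather than the full uuids expression from the query.

def materialize(changelog):
    """Apply changelog records in order; return the surviving rows per key."""
    state = {}
    for op, key, row in changelog:
        if op in ("+I", "+U"):
            state[key] = row          # upsert: latest value for the key wins
        elif op in ("-U", "-D"):
            state.pop(key, None)      # retraction: the key is removed
    return state

# Records as seen in the source canal-json table: both orders end in +U.
source = [
    ("+U", "XJ0120210531004794", {"status": 50}),
    ("+U", "XJ0120210531004672", {"status": 50}),
]

# Records as consumed from upsert-kafka after the join: the order ends in -U.
joined = [
    ("+I", "XJ0120210531004794", {"status": 50}),
    ("-U", "XJ0120210531004794", {"status": 50}),
]

print(materialize(source))  # both orders survive
print(materialize(joined))  # empty: the order is lost downstream
```

Under these assumptions, any downstream aggregation over the materialized view never sees order XJ0120210531004794, which matches the wrong final result reported above.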
lincoln lee updated FLINK-22826: Fix Version/s: 1.20.0 (was: 1.19.0)
Jing Ge updated FLINK-22826: Fix Version/s: 1.19.0 (was: 1.18.0)
Xintong Song updated FLINK-22826: Fix Version/s: 1.18.0 (was: 1.17.0)
Huang Xingbo updated FLINK-22826: Fix Version/s: 1.17.0 (was: 1.16.0)
Yun Gao updated FLINK-22826: Fix Version/s: 1.16.0 (Fix For is now 1.15.0, 1.16.0)
Kurt Young updated FLINK-22826: Fix Version/s: 1.15.0
Flink Jira Bot updated FLINK-22826:
    Labels: auto-deprioritized-major stale-blocker (was: stale-blocker stale-major)
    Priority: Minor (was: Major)

This issue was labeled "stale-major" 7 days ago and has not received any updates, so it is being deprioritized. If this ticket is actually Major, please raise the priority and ask a committer to assign you the issue, or revive the public discussion.
Flink Jira Bot updated FLINK-22826:
    Labels: stale-blocker stale-major (was: stale-blocker)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help the community manage its development. This issue has been marked as Major, but it is unassigned and neither it nor its Sub-Tasks have been updated for 60 days, so I have added the "stale-major" label. If this ticket is Major, please either assign yourself or give an update. Afterwards, please remove the label, or in 7 days the issue will be deprioritized.
Xintong Song updated FLINK-22826: Priority: Major (was: Blocker)
Flink Jira Bot updated FLINK-22826: repeated its "stale-blocker" notice (the issue remained unassigned).
Flink Jira Bot updated FLINK-22826:
    Labels: stale-blocker (was: )

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help the community manage its development. This issue has been marked as a Blocker, but it is unassigned and neither it nor its Sub-Tasks have been updated for 1 day, so I have marked it "stale-blocker". If this ticket is a Blocker, please either assign yourself or give an update. Afterwards, please remove the label, or in 7 days the issue will be deprioritized.
徐州州 updated FLINK-22826: Summary: flink sql1.13.1 causes data loss based on change log stream data join (was: flink sql1.13.1基于change log流数据join导致数据丢失)