[jira] [Updated] (FLINK-16070) Blink planner can not extract correct unique key for UpsertStreamTableSink

2020-03-11 Thread 吴彦祖 (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16070?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

吴彦祖 updated FLINK-16070:

Attachment: image-2020-03-12-14-07-37-429.png

> Blink planner can not extract correct unique key for UpsertStreamTableSink 
> ---
>
> Key: FLINK-16070
> URL: https://issues.apache.org/jira/browse/FLINK-16070
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.10.0
>Reporter: Leonard Xu
>Assignee: godfrey he
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 1.10.1, 1.11.0
>
> Attachments: image-2020-03-12-14-07-37-429.png
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> I reproduced an Elasticsearch6UpsertTableSink issue that a user reported on 
> the mailing list [1]: the Blink planner can not extract the correct unique 
> key for the following query, while the legacy planner works well. 
> {code:java}
> // user code
> INSERT INTO ES6_ZHANGLE_OUTPUT
> SELECT aggId, pageId, ts_min as ts,
>        count(case when eventId = 'exposure' then 1 else null end) as expoCnt,
>        count(case when eventId = 'click' then 1 else null end) as clkCnt
> FROM (
>     SELECT
>         'ZL_001' as aggId,
>         pageId,
>         eventId,
>         recvTime,
>         ts2Date(recvTime) as ts_min
>     FROM kafka_zl_etrack_event_stream
>     WHERE eventId in ('exposure', 'click')
> ) as t1
> GROUP BY aggId, pageId, ts_min
> {code}
> I found that the Blink planner can not extract the correct unique key in 
> `*FlinkRelMetadataQuery.getUniqueKeys(relNode)*`, while the legacy planner 
> works well in 
> `*org.apache.flink.table.plan.util.UpdatingPlanChecker.getUniqueKeyFields(...)*`. 
> A simple ETL job that reproduces this issue can be found at [2].
>  
> [1][http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Flink-1-10-es-sink-exception-td32773.html]
> [2][https://github.com/leonardBang/flink-sql-etl/blob/master/etl-job/src/main/java/kafka2es/Kafka2UpsertEs.java]
>  
>  
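For readers unfamiliar with how a planner derives a sink's unique key from a query like the one above, the expected behavior can be sketched with a small conceptual model (an illustration only, not Flink's actual metadata API; the function names below are hypothetical): the GROUP BY columns form a unique key of the aggregate's output, and a projection preserves that key only if it keeps every key column, possibly under a new name.

```python
# Conceptual sketch of unique-key propagation (not Flink's API).

def aggregate_unique_key(group_by_cols):
    # The GROUP BY columns uniquely identify each output row of an aggregate.
    return set(group_by_cols)

def project_unique_key(upstream_key, projection):
    # projection: list of (output_name, input_name) pairs.
    out_by_in = {src: out for out, src in projection}
    if upstream_key <= set(out_by_in):
        # Every key column survives the projection; map it to its output name.
        return {out_by_in[c] for c in upstream_key}
    return None  # a key column was dropped, so no unique key can be derived

# The example query groups by (aggId, pageId, ts_min); the outer SELECT keeps
# aggId and pageId and renames ts_min to ts.
agg_key = aggregate_unique_key(["aggId", "pageId", "ts_min"])
sink_key = project_unique_key(
    agg_key,
    [("aggId", "aggId"), ("pageId", "pageId"), ("ts", "ts_min"),
     ("expoCnt", "expoCnt"), ("clkCnt", "clkCnt")],
)
print(sink_key)  # the key the upsert sink should receive: {aggId, pageId, ts}
```

Under this model the sink should see the unique key {aggId, pageId, ts}; the bug report is that the Blink planner's metadata query failed to derive it for this plan, while the legacy planner's `UpdatingPlanChecker` did.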



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-16070) Blink planner can not extract correct unique key for UpsertStreamTableSink

2020-03-11 Thread Jingsong Lee (Jira)



Jingsong Lee updated FLINK-16070:
-
Affects Version/s: (was: 1.11.0)
   1.10.0



[jira] [Updated] (FLINK-16070) Blink planner can not extract correct unique key for UpsertStreamTableSink

2020-03-11 Thread godfrey he (Jira)



godfrey he updated FLINK-16070:
---
Fix Version/s: 1.11.0



[jira] [Updated] (FLINK-16070) Blink planner can not extract correct unique key for UpsertStreamTableSink

2020-02-20 Thread ASF GitHub Bot (Jira)



ASF GitHub Bot updated FLINK-16070:
---
Labels: pull-request-available  (was: )



[jira] [Updated] (FLINK-16070) Blink planner can not extract correct unique key for UpsertStreamTableSink

2020-02-15 Thread Leonard Xu (Jira)



Leonard Xu updated FLINK-16070:
---
Description: (edited: "blink planner can extract unique key" corrected to "blink planner can not extract correct unique key")





[jira] [Updated] (FLINK-16070) Blink planner can not extract correct unique key for UpsertStreamTableSink

2020-02-15 Thread Jark Wu (Jira)



Jark Wu updated FLINK-16070:

Fix Version/s: (was: 1.11.0)
   1.10.1



[jira] [Updated] (FLINK-16070) Blink planner can not extract correct unique key for UpsertStreamTableSink

2020-02-15 Thread Jark Wu (Jira)



Jark Wu updated FLINK-16070:

Priority: Critical  (was: Major)



[jira] [Updated] (FLINK-16070) Blink planner can not extract correct unique key for UpsertStreamTableSink

2020-02-15 Thread Leonard Xu (Jira)



Leonard Xu updated FLINK-16070:
---
Description: (edited: "legacy planner work well" corrected to "legacy planner works well")





[jira] [Updated] (FLINK-16070) Blink planner can not extract correct unique key for UpsertStreamTableSink

2020-02-15 Thread Leonard Xu (Jira)



Leonard Xu updated FLINK-16070:
---
Description: (edited: "legacy planner workd well" corrected to "legacy planner works well")





[jira] [Updated] (FLINK-16070) Blink planner can not extract correct unique key for UpsertStreamTableSink

2020-02-15 Thread Leonard Xu (Jira)



Leonard Xu updated FLINK-16070:
---
Description: (edited: reformatted the flattened SQL query onto multiple lines and fixed the "NSERT INTO" typo)


