[jira] [Commented] (FLINK-16068) table with keyword-escaped columns and computed_column_expression columns

2020-02-15 Thread Benchao Li (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-16068?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17037741#comment-17037741
 ] 

Benchao Li commented on FLINK-16068:


[~jark] I'm interested in this issue, thanks for assigning this to me.

> table with keyword-escaped columns and computed_column_expression columns
> -
>
> Key: FLINK-16068
> URL: https://issues.apache.org/jira/browse/FLINK-16068
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Client
>Affects Versions: 1.10.0
>Reporter: pangliang
>Priority: Critical
> Fix For: 1.10.1
>
>
> I used sql-client to create a table with a keyword-escaped column and a
> computed_column_expression column, like this:
> {code:java}
> CREATE TABLE source_kafka (
> log STRING,
> `time` BIGINT,
> pt as proctime()
> ) WITH (
>   'connector.type' = 'kafka',   
>   'connector.version' = 'universal',
>   'connector.topic' = 'k8s-logs',
>   'connector.startup-mode' = 'latest-offset',
>   'connector.properties.zookeeper.connect' = 
> 'zk-1.zk:2181,zk-2.zk:2181,zk-3.zk:2181/kafka',
>   'connector.properties.bootstrap.servers' = 'kafka.default:9092',
>   'connector.properties.group.id' = 'testGroup',
>   'format.type'='json',
>   'format.fail-on-missing-field' = 'true',
>   'update-mode' = 'append'
> );
> {code}
> Then I simply used it:
> {code:java}
> SELECT * from source_kafka limit 10;{code}
> and got an exception:
> {code:java}
> java.io.IOException: Fail to run stream sql job
>   at 
> org.apache.zeppelin.flink.sql.AbstractStreamSqlJob.run(AbstractStreamSqlJob.java:164)
>   at 
> org.apache.zeppelin.flink.FlinkStreamSqlInterpreter.callSelect(FlinkStreamSqlInterpreter.java:108)
>   at 
> org.apache.zeppelin.flink.FlinkSqlInterrpeter.callCommand(FlinkSqlInterrpeter.java:203)
>   at 
> org.apache.zeppelin.flink.FlinkSqlInterrpeter.runSqlList(FlinkSqlInterrpeter.java:151)
>   at 
> org.apache.zeppelin.flink.FlinkSqlInterrpeter.interpret(FlinkSqlInterrpeter.java:104)
>   at 
> org.apache.zeppelin.interpreter.LazyOpenInterpreter.interpret(LazyOpenInterpreter.java:103)
>   at 
> org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer$InterpretJob.jobRun(RemoteInterpreterServer.java:676)
>   at 
> org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer$InterpretJob.jobRun(RemoteInterpreterServer.java:569)
>   at org.apache.zeppelin.scheduler.Job.run(Job.java:172)
>   at 
> org.apache.zeppelin.scheduler.AbstractScheduler.runJob(AbstractScheduler.java:121)
>   at 
> org.apache.zeppelin.scheduler.ParallelScheduler.lambda$runJobInScheduler$0(ParallelScheduler.java:39)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> Caused by: org.apache.flink.table.api.SqlParserException: SQL parse failed. 
> Encountered "time" at line 1, column 12.
> Was expecting one of:
> "ABS" ...
> "ARRAY" ...
> "AVG" ...
> "CARDINALITY" ...
> "CASE" ...
> "CAST" ...
> "CEIL" ...
> "CEILING" ...
> ..
> 
>   at 
> org.apache.flink.table.planner.calcite.CalciteParser.parse(CalciteParser.java:50)
>   at 
> org.apache.flink.table.planner.calcite.SqlExprToRexConverterImpl.convertToRexNodes(SqlExprToRexConverterImpl.java:79)
>   at 
> org.apache.flink.table.planner.plan.schema.CatalogSourceTable.toRel(CatalogSourceTable.scala:111)
>   at 
> org.apache.calcite.sql2rel.SqlToRelConverter.toRel(SqlToRelConverter.java:3328)
>   at 
> org.apache.calcite.sql2rel.SqlToRelConverter.convertIdentifier(SqlToRelConverter.java:2357)
>   at 
> org.apache.calcite.sql2rel.SqlToRelConverter.convertFrom(SqlToRelConverter.java:2051)
>   at 
> org.apache.calcite.sql2rel.SqlToRelConverter.convertFrom(SqlToRelConverter.java:2005)
>   at 
> org.apache.calcite.sql2rel.SqlToRelConverter.convertSelectImpl(SqlToRelConverter.java:646)
>   at 
> org.apache.calcite.sql2rel.SqlToRelConverter.convertSelect(SqlToRelConverter.java:627)
>   at 
> org.apache.calcite.sql2rel.SqlToRelConverter.convertQueryRecursive(SqlToRelConverter.java:3181)
>   at 
> org.apache.calcite.sql2rel.SqlToRelConverter.convertQuery(SqlToRelConverter.java:563)
>   at 
> org.apache.flink.table.planner.calcite.FlinkPlannerImpl.org$apache$flink$table$planner$calcite$FlinkPlannerImpl$$rel(FlinkPlannerImpl.scala:148)
>   at 
> org.apache.flink.table.planner.calcite.FlinkPlannerImpl.rel(FlinkPlannerImpl.scala:135)
>   at 
> org.apache.flink.table.planner.operations.SqlToOperationConverter.toQueryOperation(SqlToOperationConverter.java:522)
>   at 
> 

[GitHub] [flink] flinkbot edited a comment on issue #10486: [FLINK-15131][connector/source] Add the APIs for Source (FLIP-27).

2020-02-15 Thread GitBox
flinkbot edited a comment on issue #10486: [FLINK-15131][connector/source] Add 
the APIs for Source (FLIP-27).
URL: https://github.com/apache/flink/pull/10486#issuecomment-562960855
 
 
   
   ## CI report:
   
   * 8c84da76b5e6919173d15a369a204c12d8a53da2 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/140127857) 
   * 240770a653a44e2d40f242c53d2c6c132704434d Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/140195896) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3347)
 
   * d394d8528ae2c9f4cbee3426e12a348d5773d593 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/141114012) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3603)
 
   * 6c1ab220113e6a66f34598f01f9c8441d1312046 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[jira] [Updated] (FLINK-16078) Translate "Tuning Checkpoints and Large State" page into Chinese

2020-02-15 Thread Yu Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16078?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yu Li updated FLINK-16078:
--
Parent: FLINK-16073
Issue Type: Sub-task  (was: Bug)

> Translate "Tuning Checkpoints and Large State" page into Chinese
> 
>
> Key: FLINK-16078
> URL: https://issues.apache.org/jira/browse/FLINK-16078
> Project: Flink
>  Issue Type: Sub-task
>  Components: chinese-translation, Documentation
>Affects Versions: 1.11.0
>Reporter: Yu Li
>Priority: Major
> Fix For: 1.11.0
>
>
> Complete the translation in `docs/ops/state/large_state_tuning.zh.md`





[jira] [Commented] (FLINK-16070) Blink planner can not extract correct unique key for UpsertStreamTableSink

2020-02-15 Thread Jark Wu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-16070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17037724#comment-17037724
 ] 

Jark Wu commented on FLINK-16070:
-

I think this is a bug in {{FlinkRelMdUniqueKeys}}. I haven't looked into the
code yet, so I don't know why it loses the unique key when there is a constant
in the group keys.

> Blink planner can not extract correct unique key for UpsertStreamTableSink 
> ---
>
> Key: FLINK-16070
> URL: https://issues.apache.org/jira/browse/FLINK-16070
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.11.0
>Reporter: Leonard Xu
>Priority: Critical
> Fix For: 1.10.1
>
>
> I reproduced an Elasticsearch6UpsertTableSink issue which a user reported in
> the mailing list [1]: the Blink planner cannot extract the correct unique key
> for the following query, but the legacy planner works well.
> {code:java}
> -- user query
> INSERT INTO ES6_ZHANGLE_OUTPUT
> SELECT aggId, pageId, ts_min as ts,
>   count(case when eventId = 'exposure' then 1 else null end) as expoCnt,
>   count(case when eventId = 'click' then 1 else null end) as clkCnt
> FROM (
>   SELECT
>     'ZL_001' as aggId,
>     pageId,
>     eventId,
>     recvTime,
>     ts2Date(recvTime) as ts_min
>   from kafka_zl_etrack_event_stream
>   where eventId in ('exposure', 'click')
> ) as t1
> group by aggId, pageId, ts_min
> {code}
> I found that the Blink planner cannot extract the correct unique key in
> `*FlinkRelMetadataQuery.getUniqueKeys(relNode)*`, while the legacy planner
> works well in
> `*org.apache.flink.table.plan.util.UpdatingPlanChecker.getUniqueKeyFields(...)*`.
> A simple ETL job to reproduce this issue can be found in [2].
>  
> [1][http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Flink-1-10-es-sink-exception-td32773.html]
> [2][https://github.com/leonardBang/flink-sql-etl/blob/master/etl-job/src/main/java/kafka2es/Kafka2UpsertEs.java]
>  
>  





[jira] [Created] (FLINK-16078) Translate "Tuning Checkpoints and Large State" page into Chinese

2020-02-15 Thread Yu Li (Jira)
Yu Li created FLINK-16078:
-

 Summary: Translate "Tuning Checkpoints and Large State" page into 
Chinese
 Key: FLINK-16078
 URL: https://issues.apache.org/jira/browse/FLINK-16078
 Project: Flink
  Issue Type: Bug
  Components: chinese-translation, Documentation
Affects Versions: 1.11.0
Reporter: Yu Li
 Fix For: 1.11.0


Complete the translation in `docs/ops/state/large_state_tuning.zh.md`





[jira] [Commented] (FLINK-16068) table with keyword-escaped columns and computed_column_expression columns

2020-02-15 Thread Jark Wu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-16068?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17037722#comment-17037722
 ] 

Jark Wu commented on FLINK-16068:
-

Thanks for looking into this [~libenchao]. I think the bug is in 
{{org.apache.flink.table.planner.calcite.SqlExprToRexConverterImpl#convertToRexNodes}}
 where we should escape all field names and functions for the constructed 
query. 
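A minimal sketch of the idea, with assumed helper and table names (illustrative only, not the actual Flink code):

{code:java}
public class IdentifierQuoting {
    // Wrap a field name from the table schema in backticks so that reserved
    // words such as "time" survive re-parsing by the SQL parser.
    public static String quoteIdentifier(String name) {
        return "`" + name.replace("`", "``") + "`";
    }

    public static void main(String[] args) {
        // "`time`" parses where the bare keyword "time" fails with:
        //   Encountered "time" at line 1, column 12.
        System.out.println("SELECT " + quoteIdentifier("time") + ", proctime() FROM t");
    }
}
{code}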

If you are interested in it, I can assign it to you [~libenchao]. 

> table with keyword-escaped columns and computed_column_expression columns
> -
>
> Key: FLINK-16068
> URL: https://issues.apache.org/jira/browse/FLINK-16068
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Client
>Affects Versions: 1.10.0
>Reporter: pangliang
>Priority: Major
> Fix For: 1.10.1
>
>
> I used sql-client to create a table with a keyword-escaped column and a
> computed_column_expression column, like this:
> {code:java}
> CREATE TABLE source_kafka (
> log STRING,
> `time` BIGINT,
> pt as proctime()
> ) WITH (
>   'connector.type' = 'kafka',   
>   'connector.version' = 'universal',
>   'connector.topic' = 'k8s-logs',
>   'connector.startup-mode' = 'latest-offset',
>   'connector.properties.zookeeper.connect' = 
> 'zk-1.zk:2181,zk-2.zk:2181,zk-3.zk:2181/kafka',
>   'connector.properties.bootstrap.servers' = 'kafka.default:9092',
>   'connector.properties.group.id' = 'testGroup',
>   'format.type'='json',
>   'format.fail-on-missing-field' = 'true',
>   'update-mode' = 'append'
> );
> {code}
> Then I simply used it:
> {code:java}
> SELECT * from source_kafka limit 10;{code}
> and got an exception:
> {code:java}
> java.io.IOException: Fail to run stream sql job
>   at 
> org.apache.zeppelin.flink.sql.AbstractStreamSqlJob.run(AbstractStreamSqlJob.java:164)
>   at 
> org.apache.zeppelin.flink.FlinkStreamSqlInterpreter.callSelect(FlinkStreamSqlInterpreter.java:108)
>   at 
> org.apache.zeppelin.flink.FlinkSqlInterrpeter.callCommand(FlinkSqlInterrpeter.java:203)
>   at 
> org.apache.zeppelin.flink.FlinkSqlInterrpeter.runSqlList(FlinkSqlInterrpeter.java:151)
>   at 
> org.apache.zeppelin.flink.FlinkSqlInterrpeter.interpret(FlinkSqlInterrpeter.java:104)
>   at 
> org.apache.zeppelin.interpreter.LazyOpenInterpreter.interpret(LazyOpenInterpreter.java:103)
>   at 
> org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer$InterpretJob.jobRun(RemoteInterpreterServer.java:676)
>   at 
> org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer$InterpretJob.jobRun(RemoteInterpreterServer.java:569)
>   at org.apache.zeppelin.scheduler.Job.run(Job.java:172)
>   at 
> org.apache.zeppelin.scheduler.AbstractScheduler.runJob(AbstractScheduler.java:121)
>   at 
> org.apache.zeppelin.scheduler.ParallelScheduler.lambda$runJobInScheduler$0(ParallelScheduler.java:39)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> Caused by: org.apache.flink.table.api.SqlParserException: SQL parse failed. 
> Encountered "time" at line 1, column 12.
> Was expecting one of:
> "ABS" ...
> "ARRAY" ...
> "AVG" ...
> "CARDINALITY" ...
> "CASE" ...
> "CAST" ...
> "CEIL" ...
> "CEILING" ...
> ..
> 
>   at 
> org.apache.flink.table.planner.calcite.CalciteParser.parse(CalciteParser.java:50)
>   at 
> org.apache.flink.table.planner.calcite.SqlExprToRexConverterImpl.convertToRexNodes(SqlExprToRexConverterImpl.java:79)
>   at 
> org.apache.flink.table.planner.plan.schema.CatalogSourceTable.toRel(CatalogSourceTable.scala:111)
>   at 
> org.apache.calcite.sql2rel.SqlToRelConverter.toRel(SqlToRelConverter.java:3328)
>   at 
> org.apache.calcite.sql2rel.SqlToRelConverter.convertIdentifier(SqlToRelConverter.java:2357)
>   at 
> org.apache.calcite.sql2rel.SqlToRelConverter.convertFrom(SqlToRelConverter.java:2051)
>   at 
> org.apache.calcite.sql2rel.SqlToRelConverter.convertFrom(SqlToRelConverter.java:2005)
>   at 
> org.apache.calcite.sql2rel.SqlToRelConverter.convertSelectImpl(SqlToRelConverter.java:646)
>   at 
> org.apache.calcite.sql2rel.SqlToRelConverter.convertSelect(SqlToRelConverter.java:627)
>   at 
> org.apache.calcite.sql2rel.SqlToRelConverter.convertQueryRecursive(SqlToRelConverter.java:3181)
>   at 
> org.apache.calcite.sql2rel.SqlToRelConverter.convertQuery(SqlToRelConverter.java:563)
>   at 
> org.apache.flink.table.planner.calcite.FlinkPlannerImpl.org$apache$flink$table$planner$calcite$FlinkPlannerImpl$$rel(FlinkPlannerImpl.scala:148)
>   at 
> 

[jira] [Updated] (FLINK-16068) table with keyword-escaped columns and computed_column_expression columns

2020-02-15 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16068?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu updated FLINK-16068:

Priority: Critical  (was: Major)

> table with keyword-escaped columns and computed_column_expression columns
> -
>
> Key: FLINK-16068
> URL: https://issues.apache.org/jira/browse/FLINK-16068
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Client
>Affects Versions: 1.10.0
>Reporter: pangliang
>Priority: Critical
> Fix For: 1.10.1
>
>
> I used sql-client to create a table with a keyword-escaped column and a
> computed_column_expression column, like this:
> {code:java}
> CREATE TABLE source_kafka (
> log STRING,
> `time` BIGINT,
> pt as proctime()
> ) WITH (
>   'connector.type' = 'kafka',   
>   'connector.version' = 'universal',
>   'connector.topic' = 'k8s-logs',
>   'connector.startup-mode' = 'latest-offset',
>   'connector.properties.zookeeper.connect' = 
> 'zk-1.zk:2181,zk-2.zk:2181,zk-3.zk:2181/kafka',
>   'connector.properties.bootstrap.servers' = 'kafka.default:9092',
>   'connector.properties.group.id' = 'testGroup',
>   'format.type'='json',
>   'format.fail-on-missing-field' = 'true',
>   'update-mode' = 'append'
> );
> {code}
> Then I simply used it:
> {code:java}
> SELECT * from source_kafka limit 10;{code}
> and got an exception:
> {code:java}
> java.io.IOException: Fail to run stream sql job
>   at 
> org.apache.zeppelin.flink.sql.AbstractStreamSqlJob.run(AbstractStreamSqlJob.java:164)
>   at 
> org.apache.zeppelin.flink.FlinkStreamSqlInterpreter.callSelect(FlinkStreamSqlInterpreter.java:108)
>   at 
> org.apache.zeppelin.flink.FlinkSqlInterrpeter.callCommand(FlinkSqlInterrpeter.java:203)
>   at 
> org.apache.zeppelin.flink.FlinkSqlInterrpeter.runSqlList(FlinkSqlInterrpeter.java:151)
>   at 
> org.apache.zeppelin.flink.FlinkSqlInterrpeter.interpret(FlinkSqlInterrpeter.java:104)
>   at 
> org.apache.zeppelin.interpreter.LazyOpenInterpreter.interpret(LazyOpenInterpreter.java:103)
>   at 
> org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer$InterpretJob.jobRun(RemoteInterpreterServer.java:676)
>   at 
> org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer$InterpretJob.jobRun(RemoteInterpreterServer.java:569)
>   at org.apache.zeppelin.scheduler.Job.run(Job.java:172)
>   at 
> org.apache.zeppelin.scheduler.AbstractScheduler.runJob(AbstractScheduler.java:121)
>   at 
> org.apache.zeppelin.scheduler.ParallelScheduler.lambda$runJobInScheduler$0(ParallelScheduler.java:39)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> Caused by: org.apache.flink.table.api.SqlParserException: SQL parse failed. 
> Encountered "time" at line 1, column 12.
> Was expecting one of:
> "ABS" ...
> "ARRAY" ...
> "AVG" ...
> "CARDINALITY" ...
> "CASE" ...
> "CAST" ...
> "CEIL" ...
> "CEILING" ...
> ..
> 
>   at 
> org.apache.flink.table.planner.calcite.CalciteParser.parse(CalciteParser.java:50)
>   at 
> org.apache.flink.table.planner.calcite.SqlExprToRexConverterImpl.convertToRexNodes(SqlExprToRexConverterImpl.java:79)
>   at 
> org.apache.flink.table.planner.plan.schema.CatalogSourceTable.toRel(CatalogSourceTable.scala:111)
>   at 
> org.apache.calcite.sql2rel.SqlToRelConverter.toRel(SqlToRelConverter.java:3328)
>   at 
> org.apache.calcite.sql2rel.SqlToRelConverter.convertIdentifier(SqlToRelConverter.java:2357)
>   at 
> org.apache.calcite.sql2rel.SqlToRelConverter.convertFrom(SqlToRelConverter.java:2051)
>   at 
> org.apache.calcite.sql2rel.SqlToRelConverter.convertFrom(SqlToRelConverter.java:2005)
>   at 
> org.apache.calcite.sql2rel.SqlToRelConverter.convertSelectImpl(SqlToRelConverter.java:646)
>   at 
> org.apache.calcite.sql2rel.SqlToRelConverter.convertSelect(SqlToRelConverter.java:627)
>   at 
> org.apache.calcite.sql2rel.SqlToRelConverter.convertQueryRecursive(SqlToRelConverter.java:3181)
>   at 
> org.apache.calcite.sql2rel.SqlToRelConverter.convertQuery(SqlToRelConverter.java:563)
>   at 
> org.apache.flink.table.planner.calcite.FlinkPlannerImpl.org$apache$flink$table$planner$calcite$FlinkPlannerImpl$$rel(FlinkPlannerImpl.scala:148)
>   at 
> org.apache.flink.table.planner.calcite.FlinkPlannerImpl.rel(FlinkPlannerImpl.scala:135)
>   at 
> org.apache.flink.table.planner.operations.SqlToOperationConverter.toQueryOperation(SqlToOperationConverter.java:522)
>   at 
> org.apache.flink.table.planner.operations.SqlToOperationConverter.convertSqlQuery(SqlToOperationConverter.java:436)

[jira] [Created] (FLINK-16077) Translate "Custom State Serialization" page into Chinese

2020-02-15 Thread Yu Li (Jira)
Yu Li created FLINK-16077:
-

 Summary: Translate "Custom State Serialization" page into Chinese
 Key: FLINK-16077
 URL: https://issues.apache.org/jira/browse/FLINK-16077
 Project: Flink
  Issue Type: Sub-task
  Components: chinese-translation, Documentation
Affects Versions: 1.11.0
Reporter: Yu Li
 Fix For: 1.11.0


Complete the translation in `docs/dev/stream/state/custom_serialization.zh.md`





[jira] [Created] (FLINK-16076) Translate "Queryable State" page into Chinese

2020-02-15 Thread Yu Li (Jira)
Yu Li created FLINK-16076:
-

 Summary: Translate "Queryable State" page into Chinese
 Key: FLINK-16076
 URL: https://issues.apache.org/jira/browse/FLINK-16076
 Project: Flink
  Issue Type: Sub-task
  Components: chinese-translation, Documentation
Affects Versions: 1.11.0
Reporter: Yu Li
 Fix For: 1.11.0


Complete the translation in `docs/dev/stream/state/queryable_state.zh.md`





[jira] [Created] (FLINK-16075) Translate "The Broadcast State Pattern" page into Chinese

2020-02-15 Thread Yu Li (Jira)
Yu Li created FLINK-16075:
-

 Summary: Translate "The Broadcast State Pattern" page into Chinese
 Key: FLINK-16075
 URL: https://issues.apache.org/jira/browse/FLINK-16075
 Project: Flink
  Issue Type: Sub-task
  Components: chinese-translation, Documentation
Affects Versions: 1.11.0
Reporter: Yu Li
 Fix For: 1.11.0


Complete the translation in `docs/dev/stream/state/broadcast_state.zh.md`





[jira] [Created] (FLINK-16074) Translate the "Overview" page for "State & Fault Tolerance" into Chinese

2020-02-15 Thread Yu Li (Jira)
Yu Li created FLINK-16074:
-

 Summary: Translate the "Overview" page for "State & Fault 
Tolerance" into Chinese
 Key: FLINK-16074
 URL: https://issues.apache.org/jira/browse/FLINK-16074
 Project: Flink
  Issue Type: Sub-task
  Components: chinese-translation, Documentation
Affects Versions: 1.11.0
Reporter: Yu Li
 Fix For: 1.11.0


Complete the translation in `docs/dev/stream/state/index.zh.md`





[jira] [Comment Edited] (FLINK-15553) Create table ddl support comment after computed column

2020-02-15 Thread hailong wang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15553?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17037720#comment-17037720
 ] 

hailong wang edited comment on FLINK-15553 at 2/16/20 7:13 AM:
---

Hi [~danny0405], could you help me have a look? I already have a patch for it
based on the above. Thank you ~


was (Author: hailong wang):
Hi [~danny0405], could you help me have a look? I already have a patch for it
based on the above.

> Create table ddl support  comment after computed column
> ---
>
> Key: FLINK-15553
> URL: https://issues.apache.org/jira/browse/FLINK-15553
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / API
>Affects Versions: 1.10.0
>Reporter: hailong wang
>Priority: Major
> Fix For: 1.11.0
>
>
> For now, we can define a computed column in the CREATE TABLE DDL, but we
> cannot add a comment after it like for a regular table column, so we should
> support it. Its grammar is as follows:
> {code:java}
> col_name AS expr [COMMENT 'string']
> {code}
> My idea is that we can introduce a class
> {code:java}
> SqlTableComputedColumn{code}
> to wrap the name, expression and comment, and then just read the elements
> from it.
> As for parserImpls.ftl, it can be as follows:
> {code:java}
> identifier = SimpleIdentifier()
> <AS>
> expr = Expression(ExprContext.ACCEPT_NON_QUERY)
> [ <COMMENT> <QUOTED_STRING> {
>     String p = SqlParserUtil.parseString(token.image);
>     comment = SqlLiteral.createCharString(p, getPos());
> }]
> {
>     SqlTableComputedColumn tableComputedColumn =
>         new SqlTableComputedColumn(identifier, expr, comment, getPos());
>     context.columnList.add(tableComputedColumn);
> }{code}
>  
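A hedged sketch of the proposed wrapper class (the field types follow Calcite's SqlNode conventions; the exact signatures are assumptions, not the final patch):

{code:java}
import org.apache.calcite.sql.SqlIdentifier;
import org.apache.calcite.sql.SqlNode;
import org.apache.calcite.sql.parser.SqlParserPos;

// Holds one computed column of a CREATE TABLE statement: name, expression
// and an optional comment, so the DDL converter can read them back out.
public class SqlTableComputedColumn {
    private final SqlIdentifier name;
    private final SqlNode expr;
    private final SqlNode comment; // may be null when no COMMENT was given
    private final SqlParserPos pos;

    public SqlTableComputedColumn(
            SqlIdentifier name, SqlNode expr, SqlNode comment, SqlParserPos pos) {
        this.name = name;
        this.expr = expr;
        this.comment = comment;
        this.pos = pos;
    }

    public SqlIdentifier getName() { return name; }
    public SqlNode getExpr() { return expr; }
    public SqlNode getComment() { return comment; }
    public SqlParserPos getPos() { return pos; }
}
{code}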





[jira] [Comment Edited] (FLINK-15959) Add min/max number of slots configuration to limit total number of slots

2020-02-15 Thread Yangze Guo (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17037719#comment-17037719
 ] 

Yangze Guo edited comment on FLINK-15959 at 2/16/20 7:10 AM:
-

Hi, [~liuyufei]. Thanks for proposing this change.
Regarding your suggestion, I think the ResourceManager does not need to throw
an exception when exceeding the maximum limit. AFAIK, for batch jobs, the job
graph can be executed without all the slot requests being fulfilled. We may
just log at INFO level in this scenario. For streaming jobs, the slot requests
that could not be fulfilled would fail in the timeout check.

BTW, since it touches the Public interface, I think we need to open a FLIP for
this change. I'd also like to help introduce a maximum resource limitation for
task executors; there could be an interrelationship between the maximum and
minimum limits. Would you like to work on it together?


was (Author: karmagyz):
Hi, [~liuyufei]. Thanks for proposing this change.
Regarding your suggestion, I think the ResourceManager does not need to throw
an exception when exceeding the maximum limit. AFAIK, for batch jobs, the job
graph can be executed without all the slot requests being fulfilled. We may
just log at INFO level in this scenario. For streaming jobs, the slot requests
that could not be fulfilled would be failed by the timeout check.

BTW, since it touches the Public interface, I think we need to open a FLIP for
this change. I'd also like to introduce a maximum resource limitation for task
executors; there could be an interrelationship between the maximum and minimum
limits. Would you like to work on it together?

> Add min/max number of slots configuration to limit total number of slots
> 
>
> Key: FLINK-15959
> URL: https://issues.apache.org/jira/browse/FLINK-15959
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Coordination
>Affects Versions: 1.11.0
>Reporter: YufeiLiu
>Priority: Major
>
> Flink removed the `-n` option after FLIP-6, changing to having the
> ResourceManager start a new worker when required. But I think maintaining a
> certain number of slots is necessary. These workers would start immediately
> when the ResourceManager starts and would not be released even if all slots
> are free.
> Here are some reasons:
> # Users actually know how many resources are needed when running a single
> job; initializing all workers when the cluster starts can speed up the
> startup process.
> # Jobs are scheduled in topology order; the next operator won't be scheduled
> until the prior execution's slot is allocated. The TaskExecutors will start
> in several batches in some cases, which might slow down the startup speed.
> # Flink supports
> [FLINK-12122|https://issues.apache.org/jira/browse/FLINK-12122] [Spread out
> tasks evenly across all available registered TaskManagers], but it only takes
> effect if all TMs are registered. Starting all TMs at the beginning can solve
> this problem.
> *Suggestion:*
> * Add the configs "taskmanager.minimum.numberOfTotalSlots" and
> "taskmanager.maximum.numberOfTotalSlots"; the default behavior stays the same
> as before.
> * Start enough workers to satisfy the minimum number of slots when the
> ResourceManager accepts leadership (subtracting recovered workers).
> * Don't complete slot requests until the minimum number of slots are
> registered, and throw an exception when exceeding the maximum.
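A hedged sketch of the proposed configuration (the option names come from the suggestion above; they are a proposal, not existing Flink options):

{code:java}
# flink-conf.yaml (proposed, hypothetical options)
taskmanager.minimum.numberOfTotalSlots: 32   # pre-start workers to provide 32 slots
taskmanager.maximum.numberOfTotalSlots: 128  # do not allocate beyond 128 slots
{code}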





[jira] [Commented] (FLINK-15959) Add min/max number of slots configuration to limit total number of slots

2020-02-15 Thread Yangze Guo (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17037719#comment-17037719
 ] 

Yangze Guo commented on FLINK-15959:


Hi, [~liuyufei]. Thanks for proposing this change.
Regarding your suggestion, I think the ResourceManager does not need to throw
an exception when exceeding the maximum limit. AFAIK, for batch jobs, the job
graph can be executed without all the slot requests being fulfilled. We may
just log at INFO level in this scenario. For streaming jobs, the slot requests
that could not be fulfilled would be failed by the timeout check.

BTW, since it touches the Public interface, I think we need to open a FLIP for
this change. I'd also like to introduce a maximum resource limitation for task
executors; there could be an interrelationship between the maximum and minimum
limits. Would you like to work on it together?

> Add min/max number of slots configuration to limit total number of slots
> 
>
> Key: FLINK-15959
> URL: https://issues.apache.org/jira/browse/FLINK-15959
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Coordination
>Affects Versions: 1.11.0
>Reporter: YufeiLiu
>Priority: Major
>
> Flink removed the `-n` option after FLIP-6, changing to having the
> ResourceManager start a new worker when required. But I think maintaining a
> certain number of slots is necessary. These workers would start immediately
> when the ResourceManager starts and would not be released even if all slots
> are free.
> Here are some reasons:
> # Users actually know how many resources are needed when running a single
> job; initializing all workers when the cluster starts can speed up the
> startup process.
> # Jobs are scheduled in topology order; the next operator won't be scheduled
> until the prior execution's slot is allocated. The TaskExecutors will start
> in several batches in some cases, which might slow down the startup speed.
> # Flink supports
> [FLINK-12122|https://issues.apache.org/jira/browse/FLINK-12122] [Spread out
> tasks evenly across all available registered TaskManagers], but it only takes
> effect if all TMs are registered. Starting all TMs at the beginning can solve
> this problem.
> *Suggestion:*
> * Add the configs "taskmanager.minimum.numberOfTotalSlots" and
> "taskmanager.maximum.numberOfTotalSlots"; the default behavior stays the same
> as before.
> * Start enough workers to satisfy the minimum number of slots when the
> ResourceManager accepts leadership (subtracting recovered workers).
> * Don't complete slot requests until the minimum number of slots are
> registered, and throw an exception when exceeding the maximum.





[jira] [Commented] (FLINK-15553) Create table ddl support comment after computed column

2020-02-15 Thread hailong wang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15553?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17037720#comment-17037720
 ] 

hailong wang commented on FLINK-15553:
--

Hi [~danny0405], could you help me have a look? I already have a patch for it
based on the above.

> Create table ddl support  comment after computed column
> ---
>
> Key: FLINK-15553
> URL: https://issues.apache.org/jira/browse/FLINK-15553
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / API
>Affects Versions: 1.10.0
>Reporter: hailong wang
>Priority: Major
> Fix For: 1.11.0
>
>
> For now, we can define a computed column in the CREATE TABLE DDL, but we
> cannot add a comment after it like for a regular table column, so we should
> support it. Its grammar is as follows:
> {code:java}
> col_name AS expr [COMMENT 'string']
> {code}
> My idea is that we can introduce a class
> {code:java}
> SqlTableComputedColumn{code}
> to wrap the name, expression and comment, and then just read the elements
> from it.
> As for parserImpls.ftl, it can be as follows:
> {code:java}
> identifier = SimpleIdentifier()
> <AS>
> expr = Expression(ExprContext.ACCEPT_NON_QUERY)
> [ <COMMENT> <QUOTED_STRING> {
>     String p = SqlParserUtil.parseString(token.image);
>     comment = SqlLiteral.createCharString(p, getPos());
> }]
> {
>     SqlTableComputedColumn tableComputedColumn =
>         new SqlTableComputedColumn(identifier, expr, comment, getPos());
>     context.columnList.add(tableComputedColumn);
> }{code}
>  





[jira] [Updated] (FLINK-16073) Translate "State & Fault Tolerance" pages into Chinese

2020-02-15 Thread Yu Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16073?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yu Li updated FLINK-16073:
--
Parent: (was: FLINK-11526)
Issue Type: Task  (was: Sub-task)

> Translate "State & Fault Tolerance" pages into Chinese
> --
>
> Key: FLINK-16073
> URL: https://issues.apache.org/jira/browse/FLINK-16073
> Project: Flink
>  Issue Type: Task
>  Components: chinese-translation, Documentation
>Affects Versions: 1.11.0
>Reporter: Yu Li
>Assignee: Yu Li
>Priority: Major
> Fix For: 1.11.0
>
>
> Translate all "State & Fault Tolerance" related pages into Chinese, including 
> pages under `docs/dev/stream/state/` and `docs/ops/state`





[jira] [Created] (FLINK-16073) Translate "State & Fault Tolerance" pages into Chinese

2020-02-15 Thread Yu Li (Jira)
Yu Li created FLINK-16073:
-

 Summary: Translate "State & Fault Tolerance" pages into Chinese
 Key: FLINK-16073
 URL: https://issues.apache.org/jira/browse/FLINK-16073
 Project: Flink
  Issue Type: Sub-task
  Components: chinese-translation, Documentation
Affects Versions: 1.11.0
Reporter: Yu Li
 Fix For: 1.11.0


Translate all "State & Fault Tolerance" related pages into Chinese, including 
pages under `docs/dev/stream/state/` and `docs/ops/state`





[jira] [Assigned] (FLINK-16073) Translate "State & Fault Tolerance" pages into Chinese

2020-02-15 Thread Yu Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16073?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yu Li reassigned FLINK-16073:
-

Assignee: Yu Li

Assigning to myself to drive this.

> Translate "State & Fault Tolerance" pages into Chinese
> --
>
> Key: FLINK-16073
> URL: https://issues.apache.org/jira/browse/FLINK-16073
> Project: Flink
>  Issue Type: Sub-task
>  Components: chinese-translation, Documentation
>Affects Versions: 1.11.0
>Reporter: Yu Li
>Assignee: Yu Li
>Priority: Major
> Fix For: 1.11.0
>
>
> Translate all "State & Fault Tolerance" related pages into Chinese, including 
> pages under `docs/dev/stream/state/` and `docs/ops/state`





[jira] [Commented] (FLINK-16070) Blink planner can not extract correct unique key for UpsertStreamTableSink

2020-02-15 Thread Leonard Xu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-16070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17037708#comment-17037708
 ] 

Leonard Xu commented on FLINK-16070:


Hi, [~danny0405], thank you for your explanation.
The query comes from the user mailing list, and I think it's a corner case too.

> Blink planner can not extract correct unique key for UpsertStreamTableSink 
> ---
>
> Key: FLINK-16070
> URL: https://issues.apache.org/jira/browse/FLINK-16070
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.11.0
>Reporter: Leonard Xu
>Priority: Critical
> Fix For: 1.10.1
>
>
> I reproduced an Elasticsearch6UpsertTableSink issue which a user reported in
> the mailing list [1]: the Blink planner cannot extract the correct unique key
> for the following query, but the legacy planner works well.
> {code:java}
> -- user query
> INSERT INTO ES6_ZHANGLE_OUTPUT
> SELECT aggId, pageId, ts_min as ts,
>   count(case when eventId = 'exposure' then 1 else null end) as expoCnt,
>   count(case when eventId = 'click' then 1 else null end) as clkCnt
> FROM (
>   SELECT
>     'ZL_001' as aggId,
>     pageId,
>     eventId,
>     recvTime,
>     ts2Date(recvTime) as ts_min
>   from kafka_zl_etrack_event_stream
>   where eventId in ('exposure', 'click')
> ) as t1
> group by aggId, pageId, ts_min
> {code}
> I found that the Blink planner cannot extract the correct unique key in
> `*FlinkRelMetadataQuery.getUniqueKeys(relNode)*`, while the legacy planner
> works well in
> `*org.apache.flink.table.plan.util.UpdatingPlanChecker.getUniqueKeyFields(...)*`.
> A simple ETL job to reproduce this issue can be found in [2].
>  
> [1][http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Flink-1-10-es-sink-exception-td32773.html]
> [2][https://github.com/leonardBang/flink-sql-etl/blob/master/etl-job/src/main/java/kafka2es/Kafka2UpsertEs.java]
>  
>  





[GitHub] [flink] flinkbot edited a comment on issue #11100: [FLINK-15562][docs] Add Example settings.xml for maven archetype command wh…

2020-02-15 Thread GitBox
flinkbot edited a comment on issue #11100: [FLINK-15562][docs] Add Example 
settings.xml for maven archetype command wh…
URL: https://github.com/apache/flink/pull/11100#issuecomment-586663648
 
 
   
   ## CI report:
   
   * 02620095a58249cf77e0c801541060e5b86df404 Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/149126405) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=5213)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[GitHub] [flink] flinkbot edited a comment on issue #11100: [FLINK-15562][docs] Add Example settings.xml for maven archetype command wh…

2020-02-15 Thread GitBox
flinkbot edited a comment on issue #11100: [FLINK-15562][docs] Add Example 
settings.xml for maven archetype command wh…
URL: https://github.com/apache/flink/pull/11100#issuecomment-586663648
 
 
   
   ## CI report:
   
   * 02620095a58249cf77e0c801541060e5b86df404 Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/149126405) Azure: 
[PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=5213)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[jira] [Commented] (FLINK-16070) Blink planner can not extract correct unique key for UpsertStreamTableSink

2020-02-15 Thread Danny Chen (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-16070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17037698#comment-17037698
 ] 

Danny Chen commented on FLINK-16070:


Thanks for reporting this [~Leonard Xu] ~

The AggregateProjectPullUpConstantsRule reduces the constant grouping keys of
an aggregate, and for the blink planner we deduce the unique keys from the
aggregate grouping keys:

{code:java}
def getUniqueKeysOnAggregate(
    grouping: Array[Int],
    mq: RelMetadataQuery,
    ignoreNulls: Boolean): util.Set[ImmutableBitSet] = {
  // group by keys form a unique key
  ImmutableSet.of(ImmutableBitSet.of(grouping.indices: _*))
}
{code}

To fix this problem, you can append the constant grouping key to the set.

Actually I'm curious why you need the constant key in the metadata?
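A hedged sketch of that fix in Java (the real method is the Scala code above; the {{constantGroupKeys}} parameter and the column indices are assumptions for illustration):

{code:java}
import com.google.common.collect.ImmutableSet;
import java.util.Set;
import org.apache.calcite.util.ImmutableBitSet;

public class UniqueKeysFixSketch {
    // Illustrative only: "constantGroupKeys" would have to be derived from
    // the grouping columns that AggregateProjectPullUpConstantsRule reduced
    // to constants.
    static Set<ImmutableBitSet> uniqueKeysOnAggregate(
            ImmutableBitSet groupKeys, ImmutableBitSet constantGroupKeys) {
        // A constant column cannot break uniqueness, so report the key that
        // still contains the pulled-up constant grouping columns.
        return ImmutableSet.of(groupKeys.union(constantGroupKeys));
    }
}
{code}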

> Blink planner can not extract correct unique key for UpsertStreamTableSink 
> ---
>
> Key: FLINK-16070
> URL: https://issues.apache.org/jira/browse/FLINK-16070
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.11.0
>Reporter: Leonard Xu
>Priority: Critical
> Fix For: 1.10.1
>
>
> I reproduced an Elasticsearch6UpsertTableSink issue which a user reported in
> the mailing list [1]: the Blink planner cannot extract the correct unique key
> for the following query, but the legacy planner works well.
> {code:java}
> -- user query
> INSERT INTO ES6_ZHANGLE_OUTPUT
> SELECT aggId, pageId, ts_min as ts,
>   count(case when eventId = 'exposure' then 1 else null end) as expoCnt,
>   count(case when eventId = 'click' then 1 else null end) as clkCnt
> FROM (
>   SELECT
>     'ZL_001' as aggId,
>     pageId,
>     eventId,
>     recvTime,
>     ts2Date(recvTime) as ts_min
>   from kafka_zl_etrack_event_stream
>   where eventId in ('exposure', 'click')
> ) as t1
> group by aggId, pageId, ts_min
> {code}
> I found that the Blink planner cannot extract the correct unique key in
> `*FlinkRelMetadataQuery.getUniqueKeys(relNode)*`, while the legacy planner
> works well in
> `*org.apache.flink.table.plan.util.UpdatingPlanChecker.getUniqueKeyFields(...)*`.
> A simple ETL job to reproduce this issue can be found in [2].
>  
> [1][http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Flink-1-10-es-sink-exception-td32773.html]
> [2][https://github.com/leonardBang/flink-sql-etl/blob/master/etl-job/src/main/java/kafka2es/Kafka2UpsertEs.java]
>  
>  





[GitHub] [flink] lirui-apache commented on issue #11093: [FLINK-16055][hive] Avoid catalog functions when listing Hive built-i…

2020-02-15 Thread GitBox
lirui-apache commented on issue #11093: [FLINK-16055][hive] Avoid catalog 
functions when listing Hive built-i…
URL: https://github.com/apache/flink/pull/11093#issuecomment-58731
 
 
   cc @bowenli86 @JingsongLi 




[GitHub] [flink] flinkbot edited a comment on issue #11100: [FLINK-15562][docs] Add Example settings.xml for maven archetype command wh…

2020-02-15 Thread GitBox
flinkbot edited a comment on issue #11100: [FLINK-15562][docs] Add Example 
settings.xml for maven archetype command wh…
URL: https://github.com/apache/flink/pull/11100#issuecomment-586663648
 
 
   
   ## CI report:
   
   * 02620095a58249cf77e0c801541060e5b86df404 Travis: 
[PENDING](https://travis-ci.com/flink-ci/flink/builds/149126405) Azure: 
[PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=5213)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[GitHub] [flink] flinkbot commented on issue #11100: [FLINK-15562][docs] Add Example settings.xml for maven archetype command wh…

2020-02-15 Thread GitBox
flinkbot commented on issue #11100: [FLINK-15562][docs] Add Example 
settings.xml for maven archetype command wh…
URL: https://github.com/apache/flink/pull/11100#issuecomment-586663648
 
 
   
   ## CI report:
   
   * 02620095a58249cf77e0c801541060e5b86df404 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[GitHub] [flink] flinkbot commented on issue #11100: [FLINK-15562][docs] Add Example settings.xml for maven archetype command wh…

2020-02-15 Thread GitBox
flinkbot commented on issue #11100: [FLINK-15562][docs] Add Example 
settings.xml for maven archetype command wh…
URL: https://github.com/apache/flink/pull/11100#issuecomment-586662121
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 02620095a58249cf77e0c801541060e5b86df404 (Sun Feb 16 
02:36:23 UTC 2020)
   
✅no warnings
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into to Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied
according to the order of the review items. For consensus, approval by a Flink
committer or PMC member is required.

   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   




[jira] [Updated] (FLINK-15562) Improve the maven archetype command for SNAPSHOT versions document

2020-02-15 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15562?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-15562:
---
Labels: pull-request-available  (was: )

> Improve the maven archetype command for SNAPSHOT versions document
> --
>
> Key: FLINK-15562
> URL: https://issues.apache.org/jira/browse/FLINK-15562
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation
>Affects Versions: 1.10.0
>Reporter: Jeff Zhang
>Assignee: Yangze Guo
>Priority: Major
>  Labels: pull-request-available
>
> For SNAPSHOT versions, if using Maven 3.0 or higher, the "archetypeCatalog"
> option is not valid. In FLINK-7839, we added a note to point out that issue.
> In this ticket, we'd like to document an example settings.xml and pass it to
> Maven.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] KarmaGYZ opened a new pull request #11100: [FLINK-15562][docs] Add Example settings.xml for maven archetype command wh…

2020-02-15 Thread GitBox
KarmaGYZ opened a new pull request #11100: [FLINK-15562][docs] Add Example 
settings.xml for maven archetype command wh…
URL: https://github.com/apache/flink/pull/11100
 
 
   …en using Maven 3.0 or higher
   
   
   
   ## What is the purpose of the change
   
   Add Example settings.xml for maven archetype command when using Maven 3.0 or 
higher. 
   
   ## Brief change log
   
   - Add Example settings.xml for maven archetype command when using Maven 3.0 
or higher.
   - Remove the redundant indentation and blank lines in the corresponding docs.
   
   After this change:
   
![image](https://user-images.githubusercontent.com/8684799/74598173-8a672f00-50a7-11ea-9f67-3d5edb8e7afc.png)
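   
   For reference, a hedged sketch of what such a settings.xml typically looks like (the actual file added to the docs may differ):
   
   ```xml
   <!-- Illustrative only: activates a profile that resolves Apache snapshot
        artifacts for the archetype command. -->
   <settings>
     <activeProfiles>
       <activeProfile>apache</activeProfile>
     </activeProfiles>
     <profiles>
       <profile>
         <id>apache</id>
         <repositories>
           <repository>
             <id>apache-snapshots</id>
             <url>https://repository.apache.org/content/repositories/snapshots/</url>
           </repository>
         </repositories>
       </profile>
     </profiles>
   </settings>
   ```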
   
   ## Verifying this change
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): no
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: no
 - The serializers: no
 - The runtime per-record code paths (performance sensitive): no
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: no
 - The S3 file system connector: no
   
   ## Documentation
   
 - Does this pull request introduce a new feature? no
   




[jira] [Created] (FLINK-16072) Optimize the performance of the write/read null mask in RowCoder

2020-02-15 Thread Huang Xingbo (Jira)
Huang Xingbo created FLINK-16072:


 Summary: Optimize the performance of the write/read null mask in 
RowCoder
 Key: FLINK-16072
 URL: https://issues.apache.org/jira/browse/FLINK-16072
 Project: Flink
  Issue Type: Improvement
  Components: API / Python
Reporter: Huang Xingbo
 Fix For: 1.11.0


Optimizing the writing/reading of the null mask in RowCoder will yield some
performance improvements for Python UDFs.
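A hedged sketch of the general technique (the real coder lives in PyFlink; this Java class only illustrates packing the per-field null flags into a bit mask, one bit per field instead of one byte per field):

{code:java}
// Illustrative only, not the PyFlink code.
public class NullMaskSketch {
    public static void main(String[] args) {
        Object[] row = {1, null, "a", null};
        byte[] mask = new byte[(row.length + 7) / 8];
        for (int i = 0; i < row.length; i++) {
            if (row[i] == null) {
                mask[i >> 3] |= 0x80 >>> (i & 7); // set bit i
            }
        }
        // One byte covers eight fields; fields 1 and 3 are null here.
        System.out.printf("mask[0] = 0x%02x%n", mask[0]); // prints 0x50
    }
}
{code}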





[jira] [Created] (FLINK-16071) Optimize the cost of the get item of the Row in operation and coder

2020-02-15 Thread Huang Xingbo (Jira)
Huang Xingbo created FLINK-16071:


 Summary: Optimize the cost of the get item of the Row in operation 
and coder
 Key: FLINK-16071
 URL: https://issues.apache.org/jira/browse/FLINK-16071
 Project: Flink
  Issue Type: Improvement
  Components: API / Python
Reporter: Huang Xingbo
 Fix For: 1.11.0


Optimizing the cost of getting an item of the Row will yield tremendous
improvements in Python UDF performance.





[jira] [Updated] (FLINK-16069) Create TaskDeploymentDescriptor in future.

2020-02-15 Thread huweihua (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16069?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

huweihua updated FLINK-16069:
-
Description: 
The deploy of tasks will take long time when we submit a high parallelism job. 
And Execution#deploy run in mainThread, so it will block JobMaster process 
other akka messages, such as Heartbeat. The creation of 
TaskDeploymentDescriptor take most of time. We can put the creation in future.

For example, A job [source(8000)->sink(8000)], the total 16000 tasks from 
SCHEDULED to DEPLOYING took more than 1mins. This caused the heartbeat of 
TaskManager timeout and job never success.

  was:
The deployment of tasks took a long time when we submitted a high-parallelism
job. Execution#deploy runs in the main thread, so it blocks the JobMaster from
processing other Akka messages, such as heartbeats. The creation of the
TaskDeploymentDescriptor takes most of the time. We can put the creation in a
future.

For example, for a job [source(8000)->sink(8000)], moving all 16000 tasks from
SCHEDULED to DEPLOYING took more than 1 minute. This caused the TaskManager
heartbeat to time out, and the job never succeeded.


> Create TaskDeploymentDescriptor in future.
> --
>
> Key: FLINK-16069
> URL: https://issues.apache.org/jira/browse/FLINK-16069
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Task
>Reporter: huweihua
>Priority: Major
>
> The deployment of tasks takes a long time when we submit a high-parallelism
> job. Execution#deploy runs in the main thread, so it blocks the JobMaster
> from processing other Akka messages, such as heartbeats. The creation of the
> TaskDeploymentDescriptor takes most of the time. We can put the creation in a
> future.
> For example, for a job [source(8000)->sink(8000)], moving all 16000 tasks
> from SCHEDULED to DEPLOYING took more than 1 minute. This caused the
> TaskManager heartbeat to time out, and the job never succeeded.
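A minimal sketch of the proposed direction, assuming simplified placeholder types (this is not the actual Execution#deploy code):

{code:java}
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.Executor;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class DeployInFutureSketch {
    static class TaskDeploymentDescriptor { } // placeholder; expensive to build

    public static void main(String[] args) {
        ExecutorService ioExecutor = Executors.newFixedThreadPool(4);
        Executor mainThreadExecutor = Runnable::run; // stand-in for the JobMaster main thread

        // Build the expensive descriptor off the main thread so heartbeats and
        // other RPC messages are not blocked, then submit on the main thread.
        CompletableFuture
            .supplyAsync(TaskDeploymentDescriptor::new, ioExecutor)
            .thenAcceptAsync(tdd -> System.out.println("submit " + tdd), mainThreadExecutor)
            .join();
        ioExecutor.shutdown();
    }
}
{code}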





[jira] [Updated] (FLINK-15975) Test may fail if HashMap iterates in a different order

2020-02-15 Thread testfixer0 (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15975?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

testfixer0 updated FLINK-15975:
---
Description: The test `testMap` in `HiveGenericUDFTest` may fail if HashMap
iterates in a different order. `testMap` depends on `getConversion` and
`toFlinkObject` in class `MapObjectInspector`. When the `inspector` is an
instance of `MapObjectInspector`, it can return a `HashMap`. However, `HashMap`
does not guarantee any specific order of entries. Thus, the test can fail due
to a different iteration order.  (was: The test `testMap` in
`HiveGenericUDFTest` fails. As suggested, we'd better change the assertions in
the test code to account for the indeterminacy, so I propose a new approach.)

> Test may fail if HashMap iterates in a different order
> --
>
> Key: FLINK-15975
> URL: https://issues.apache.org/jira/browse/FLINK-15975
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive
>Reporter: testfixer0
>Priority: Minor
>  Labels: pull-request-available
> Attachments: FLINK-15975-000.patch, FLINK-15975-001.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The test `testMap` in `HiveGenericUDFTest` may fail if HashMap iterates
> in a different order. `testMap` depends on `getConversion` and
> `toFlinkObject` in class `MapObjectInspector`. When the `inspector` is an
> instance of `MapObjectInspector`, it can return a `HashMap`. However,
> `HashMap` does not guarantee any specific order of entries. Thus, the test
> can fail due to a different iteration order.
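A hedged sketch of the kind of order-insensitive assertion that avoids this flakiness (the class, method, and data below are hypothetical, not the actual test code):

{code:java}
import static org.junit.Assert.assertEquals;

import java.util.HashMap;
import java.util.Map;
import org.junit.Test;

public class MapResultTest {
    // Hypothetical stand-in for the UDF call under test.
    private Map<String, Integer> udfReturnedMap() {
        Map<String, Integer> m = new HashMap<>();
        m.put("b", 2);
        m.put("a", 1);
        return m;
    }

    @Test
    public void testMapOrderInsensitive() {
        Map<String, Integer> expected = new HashMap<>();
        expected.put("a", 1);
        expected.put("b", 2);
        // Map.equals compares contents regardless of iteration order,
        // unlike asserting on the maps' toString() output.
        assertEquals(expected, udfReturnedMap());
    }
}
{code}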





[jira] [Updated] (FLINK-15975) Test may fail if HashMap iterates in a different order

2020-02-15 Thread testfixer0 (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15975?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

testfixer0 updated FLINK-15975:
---
Summary: Test may fail if HashMap iterates in a different order  (was: 
Use LinkedHashMap for deterministic iterations)

> Test may fail if HashMap iterates in a different order
> --
>
> Key: FLINK-15975
> URL: https://issues.apache.org/jira/browse/FLINK-15975
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive
>Reporter: testfixer0
>Priority: Minor
>  Labels: pull-request-available
> Attachments: FLINK-15975-000.patch, FLINK-15975-001.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The test `testMap` in `HiveGenericUDFTest` fails. As suggested, we'd better
> change the assertions in the test code to account for the indeterminacy, so
> I propose a new approach.





[jira] [Updated] (FLINK-15975) Use LinkedHashMap for deterministic iterations

2020-02-15 Thread testfixer0 (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15975?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

testfixer0 updated FLINK-15975:
---
Issue Type: Bug  (was: Improvement)

> Use LinkedHashMap for deterministic iterations
> --
>
> Key: FLINK-15975
> URL: https://issues.apache.org/jira/browse/FLINK-15975
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive
>Reporter: testfixer0
>Priority: Minor
>  Labels: pull-request-available
> Attachments: FLINK-15975-000.patch, FLINK-15975-001.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The test `testMap` in `HiveGenericUDFTest` fails. As suggested, we'd better
> change the assertions in the test code to account for the indeterminacy, so
> I propose a new approach.





[jira] [Reopened] (FLINK-15982) 'Quickstarts Java nightly end-to-end test' is failed on travis

2020-02-15 Thread Robert Metzger (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15982?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Metzger reopened FLINK-15982:


I need to reopen this ticket. The test is still failing in the latest master: 
https://travis-ci.org/apache/flink/jobs/650817656

I forgot to export the FLINK_VERSION.
This will be fixed in this PR:  https://github.com/apache/flink/pull/10976/

> 'Quickstarts Java nightly end-to-end test' is failed on travis
> --
>
> Key: FLINK-15982
> URL: https://issues.apache.org/jira/browse/FLINK-15982
> Project: Flink
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 1.11.0
>Reporter: Jark Wu
>Assignee: Robert Metzger
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> {code:java}
> ==
> Running 'Quickstarts Java nightly end-to-end test'
> ==
> TEST_DATA_DIR: 
> /home/travis/build/apache/flink/flink-end-to-end-tests/test-scripts/temp-test-directory-42718423491
> Flink dist directory: 
> /home/travis/build/apache/flink/flink-dist/target/flink-1.11-SNAPSHOT-bin/flink-1.11-SNAPSHOT
> 22:16:44.021 [INFO] Scanning for projects...
> 22:16:44.095 [INFO] 
> 
> 22:16:44.095 [INFO] BUILD FAILURE
> 22:16:44.095 [INFO] 
> 
> 22:16:44.098 [INFO] Total time: 0.095 s
> 22:16:44.099 [INFO] Finished at: 2020-02-10T22:16:44+00:00
> 22:16:44.143 [INFO] Final Memory: 5M/153M
> 22:16:44.143 [INFO] 
> 
> 22:16:44.144 [ERROR] The goal you specified requires a project to execute but 
> there is no POM in this directory 
> (/home/travis/build/apache/flink/flink-end-to-end-tests/test-scripts/temp-test-directory-42718423491).
>  Please verify you invoked Maven from the correct directory. -> [Help 1]
> 22:16:44.144 [ERROR] 
> 22:16:44.145 [ERROR] To see the full stack trace of the errors, re-run Maven 
> with the -e switch.
> 22:16:44.145 [ERROR] Re-run Maven using the -X switch to enable full debug 
> logging.
> 22:16:44.145 [ERROR] 
> 22:16:44.145 [ERROR] For more information about the errors and possible 
> solutions, please read the following articles:
> 22:16:44.145 [ERROR] [Help 1] 
> http://cwiki.apache.org/confluence/display/MAVEN/MissingProjectException
> /home/travis/build/apache/flink/flink-end-to-end-tests/test-scripts/test_quickstarts.sh:
>  line 57: cd: flink-quickstart-java: No such file or directory
> cp: cannot create regular file 
> '/home/travis/build/apache/flink/flink-end-to-end-tests/test-scripts/temp-test-directory-42718423491/flink-quickstart-java/src/main/java/org/apache/flink/quickstart/Elasticsearch5SinkExample.java':
>  No such file or directory
> sed: can't read 
> /home/travis/build/apache/flink/flink-end-to-end-tests/test-scripts/temp-test-directory-42718423491/flink-quickstart-java/src/main/java/org/apache/flink/quickstart/Elasticsearch5SinkExample.java:
>  No such file or directory
> awk: fatal: cannot open file `pom.xml' for reading (No such file or directory)
> sed: can't read pom.xml: No such file or directory
> sed: can't read pom.xml: No such file or directory
> 22:16:45.312 [INFO] Scanning for projects...
> 22:16:45.386 [INFO] 
> 
> 22:16:45.386 [INFO] BUILD FAILURE
> 22:16:45.386 [INFO] 
> 
> 22:16:45.391 [INFO] Total time: 0.097 s
> 22:16:45.391 [INFO] Finished at: 2020-02-10T22:16:45+00:00
> 22:16:45.438 [INFO] Final Memory: 5M/153M
> 22:16:45.438 [INFO] 
> 
> 22:16:45.440 [ERROR] The goal you specified requires a project to execute but 
> there is no POM in this directory 
> (/home/travis/build/apache/flink/flink-end-to-end-tests/test-scripts/temp-test-directory-42718423491).
>  Please verify you invoked Maven from the correct directory. -> [Help 1]
> 22:16:45.440 [ERROR] 
> 22:16:45.440 [ERROR] To see the full stack trace of the errors, re-run Maven 
> with the -e switch.
> 22:16:45.440 [ERROR] Re-run Maven using the -X switch to enable full debug 
> logging.
> 22:16:45.440 [ERROR] 
> 22:16:45.440 [ERROR] For more information about the errors and possible 
> solutions, please read the following articles:
> 22:16:45.440 [ERROR] [Help 1] 
> http://cwiki.apache.org/confluence/display/MAVEN/MissingProjectException
> 

[jira] [Commented] (FLINK-16068) table with keyword-escaped columns and computed_column_expression columns

2020-02-15 Thread Benchao Li (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-16068?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17037541#comment-17037541
 ] 

Benchao Li commented on FLINK-16068:


[~pangliang] Thanks for reporting this, and for the nice example SQL, which 
makes the issue easy to reproduce.

[~jark] After looking into the stack trace and the source code, I found the 
cause: in {{CatalogSourceTable}} (line 103), we use the field names and 
computed column expressions to construct a new expression list, and we simply 
use the bare field name as the expression for each non-computed column.

IMO, a simple fix is to add escape characters (backticks) around each field 
name; a sketch follows. (Actually, we ran into this problem in our internal 
computed-column implementation as well, and resolved it the same way.)
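
A minimal sketch of that idea, assuming a small helper invoked while building the expression list (names are illustrative, not the actual {{CatalogSourceTable}} code):
{code:java}
import java.util.ArrayList;
import java.util.List;

public class FieldNameEscaper {

    /** Wraps a name in backticks so reserved words like "time" parse as identifiers. */
    static String escapeIdentifier(String fieldName) {
        // Double any embedded backtick, following standard SQL identifier quoting.
        return "`" + fieldName.replace("`", "``") + "`";
    }

    /**
     * Builds the expression list: a computed column keeps its expression,
     * a plain column uses its (escaped) field name as the expression.
     */
    static List<String> buildExpressions(List<String> fieldNames, List<String> computedExprs) {
        List<String> exprs = new ArrayList<>();
        for (int i = 0; i < fieldNames.size(); i++) {
            String computed = computedExprs.get(i);
            exprs.add(computed != null ? computed : escapeIdentifier(fieldNames.get(i)));
        }
        return exprs;
    }

    public static void main(String[] args) {
        System.out.println(escapeIdentifier("time")); // prints: `time`
    }
}
{code}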

> table with keyword-escaped columns and computed_column_expression columns
> -
>
> Key: FLINK-16068
> URL: https://issues.apache.org/jira/browse/FLINK-16068
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Client
>Affects Versions: 1.10.0
>Reporter: pangliang
>Priority: Major
> Fix For: 1.10.1
>
>
> I use sql-client to create a table with keyword-escaped column and 
> computed_column_expression column, like this:
> {code:java}
> CREATE TABLE source_kafka (
> log STRING,
> `time` BIGINT,
> pt as proctime()
> ) WITH (
>   'connector.type' = 'kafka',   
>   'connector.version' = 'universal',
>   'connector.topic' = 'k8s-logs',
>   'connector.startup-mode' = 'latest-offset',
>   'connector.properties.zookeeper.connect' = 
> 'zk-1.zk:2181,zk-2.zk:2181,zk-3.zk:2181/kafka',
>   'connector.properties.bootstrap.servers' = 'kafka.default:9092',
>   'connector.properties.group.id' = 'testGroup',
>   'format.type'='json',
>   'format.fail-on-missing-field' = 'true',
>   'update-mode' = 'append'
> );
> {code}
> Then I simply used it :
> {code:java}
> SELECT * from source_kafka limit 10;{code}
> got an exception:
> {code:java}
> java.io.IOException: Fail to run stream sql job
>   at 
> org.apache.zeppelin.flink.sql.AbstractStreamSqlJob.run(AbstractStreamSqlJob.java:164)
>   at 
> org.apache.zeppelin.flink.FlinkStreamSqlInterpreter.callSelect(FlinkStreamSqlInterpreter.java:108)
>   at 
> org.apache.zeppelin.flink.FlinkSqlInterrpeter.callCommand(FlinkSqlInterrpeter.java:203)
>   at 
> org.apache.zeppelin.flink.FlinkSqlInterrpeter.runSqlList(FlinkSqlInterrpeter.java:151)
>   at 
> org.apache.zeppelin.flink.FlinkSqlInterrpeter.interpret(FlinkSqlInterrpeter.java:104)
>   at 
> org.apache.zeppelin.interpreter.LazyOpenInterpreter.interpret(LazyOpenInterpreter.java:103)
>   at 
> org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer$InterpretJob.jobRun(RemoteInterpreterServer.java:676)
>   at 
> org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer$InterpretJob.jobRun(RemoteInterpreterServer.java:569)
>   at org.apache.zeppelin.scheduler.Job.run(Job.java:172)
>   at 
> org.apache.zeppelin.scheduler.AbstractScheduler.runJob(AbstractScheduler.java:121)
>   at 
> org.apache.zeppelin.scheduler.ParallelScheduler.lambda$runJobInScheduler$0(ParallelScheduler.java:39)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> Caused by: org.apache.flink.table.api.SqlParserException: SQL parse failed. 
> Encountered "time" at line 1, column 12.
> Was expecting one of:
> "ABS" ...
> "ARRAY" ...
> "AVG" ...
> "CARDINALITY" ...
> "CASE" ...
> "CAST" ...
> "CEIL" ...
> "CEILING" ...
> ..
> 
>   at 
> org.apache.flink.table.planner.calcite.CalciteParser.parse(CalciteParser.java:50)
>   at 
> org.apache.flink.table.planner.calcite.SqlExprToRexConverterImpl.convertToRexNodes(SqlExprToRexConverterImpl.java:79)
>   at 
> org.apache.flink.table.planner.plan.schema.CatalogSourceTable.toRel(CatalogSourceTable.scala:111)
>   at 
> org.apache.calcite.sql2rel.SqlToRelConverter.toRel(SqlToRelConverter.java:3328)
>   at 
> org.apache.calcite.sql2rel.SqlToRelConverter.convertIdentifier(SqlToRelConverter.java:2357)
>   at 
> org.apache.calcite.sql2rel.SqlToRelConverter.convertFrom(SqlToRelConverter.java:2051)
>   at 
> org.apache.calcite.sql2rel.SqlToRelConverter.convertFrom(SqlToRelConverter.java:2005)
>   at 
> org.apache.calcite.sql2rel.SqlToRelConverter.convertSelectImpl(SqlToRelConverter.java:646)
>   at 
> org.apache.calcite.sql2rel.SqlToRelConverter.convertSelect(SqlToRelConverter.java:627)
>   at 
> org.apache.calcite.sql2rel.SqlToRelConverter.convertQueryRecursive(SqlToRelConverter.java:3181)
>   at 
> 

[GitHub] [flink] flinkbot edited a comment on issue #11047: [FLINK-15912][table] Add Context to TableSourceFactory and TableSinkFactory

2020-02-15 Thread GitBox
flinkbot edited a comment on issue #11047: [FLINK-15912][table] Add Context to 
TableSourceFactory and TableSinkFactory
URL: https://github.com/apache/flink/pull/11047#issuecomment-584007380
 
 
   
   ## CI report:
   
   * 86c4939042e8af2da1ac1e5900225f4f0310fa04 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/148149333) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4991)
 
   * 18beb9a3c1ed78a6cbc1bc5ba9f96b51bbf8eeea Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/148181501) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=5000)
 
   * a9125de0859c43262d41c4cecee53ceeb807cf27 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/148340324) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=5041)
 
   * 52bc7dcf579ef6e5e466ec93451cbf56451b1d41 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/148547890) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=5093)
 
   * a307994428657e1025786975b7cf11544afa3ab8 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/148567327) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=5104)
 
   * 4749ac089576b36668e412acdbe56f55b4ffbdc6 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/148576458) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=5106)
 
   * 8c7c673b8bf46297d499db5725c127986533c8ff Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/148711047) Azure: 
[PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=5119)
 
   * b1f4018140c5b3b881de6da6d5f69422a4b09831 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/149097149) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=5200)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #11047: [FLINK-15912][table] Add Context to TableSourceFactory and TableSinkFactory

2020-02-15 Thread GitBox
flinkbot edited a comment on issue #11047: [FLINK-15912][table] Add Context to 
TableSourceFactory and TableSinkFactory
URL: https://github.com/apache/flink/pull/11047#issuecomment-584007380
 
 
   
   ## CI report:
   
   * 86c4939042e8af2da1ac1e5900225f4f0310fa04 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/148149333) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4991)
 
   * 18beb9a3c1ed78a6cbc1bc5ba9f96b51bbf8eeea Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/148181501) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=5000)
 
   * a9125de0859c43262d41c4cecee53ceeb807cf27 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/148340324) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=5041)
 
   * 52bc7dcf579ef6e5e466ec93451cbf56451b1d41 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/148547890) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=5093)
 
   * a307994428657e1025786975b7cf11544afa3ab8 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/148567327) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=5104)
 
   * 4749ac089576b36668e412acdbe56f55b4ffbdc6 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/148576458) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=5106)
 
   * 8c7c673b8bf46297d499db5725c127986533c8ff Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/148711047) Azure: 
[PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=5119)
 
   * b1f4018140c5b3b881de6da6d5f69422a4b09831 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/149097149) Azure: 
[PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=5200)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #11047: [FLINK-15912][table] Add Context to TableSourceFactory and TableSinkFactory

2020-02-15 Thread GitBox
flinkbot edited a comment on issue #11047: [FLINK-15912][table] Add Context to 
TableSourceFactory and TableSinkFactory
URL: https://github.com/apache/flink/pull/11047#issuecomment-584007380
 
 
   
   ## CI report:
   
   * 86c4939042e8af2da1ac1e5900225f4f0310fa04 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/148149333) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4991)
 
   * 18beb9a3c1ed78a6cbc1bc5ba9f96b51bbf8eeea Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/148181501) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=5000)
 
   * a9125de0859c43262d41c4cecee53ceeb807cf27 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/148340324) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=5041)
 
   * 52bc7dcf579ef6e5e466ec93451cbf56451b1d41 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/148547890) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=5093)
 
   * a307994428657e1025786975b7cf11544afa3ab8 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/148567327) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=5104)
 
   * 4749ac089576b36668e412acdbe56f55b4ffbdc6 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/148576458) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=5106)
 
   * 8c7c673b8bf46297d499db5725c127986533c8ff Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/148711047) Azure: 
[PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=5119)
 
   * b1f4018140c5b3b881de6da6d5f69422a4b09831 Travis: 
[PENDING](https://travis-ci.com/flink-ci/flink/builds/149097149) Azure: 
[PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=5200)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #11047: [FLINK-15912][table] Add Context to TableSourceFactory and TableSinkFactory

2020-02-15 Thread GitBox
flinkbot edited a comment on issue #11047: [FLINK-15912][table] Add Context to 
TableSourceFactory and TableSinkFactory
URL: https://github.com/apache/flink/pull/11047#issuecomment-584007380
 
 
   
   ## CI report:
   
   * 86c4939042e8af2da1ac1e5900225f4f0310fa04 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/148149333) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4991)
 
   * 18beb9a3c1ed78a6cbc1bc5ba9f96b51bbf8eeea Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/148181501) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=5000)
 
   * a9125de0859c43262d41c4cecee53ceeb807cf27 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/148340324) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=5041)
 
   * 52bc7dcf579ef6e5e466ec93451cbf56451b1d41 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/148547890) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=5093)
 
   * a307994428657e1025786975b7cf11544afa3ab8 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/148567327) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=5104)
 
   * 4749ac089576b36668e412acdbe56f55b4ffbdc6 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/148576458) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=5106)
 
   * 8c7c673b8bf46297d499db5725c127986533c8ff Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/148711047) Azure: 
[PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=5119)
 
   * b1f4018140c5b3b881de6da6d5f69422a4b09831 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (FLINK-16070) Blink planner can not extract correct unique key for UpsertStreamTableSink

2020-02-15 Thread Leonard Xu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16070?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leonard Xu updated FLINK-16070:
---
Description: 
I reproduced an Elasticsearch6UpsertTableSink issue that a user reported on the 
mailing list [1]: the Blink planner cannot extract a correct unique key for the 
following query, while the legacy planner works well.
{code:java}
// user code
INSERT INTO ES6_ZHANGLE_OUTPUT  
 SELECT aggId, pageId, ts_min as ts,  
   count(case when eventId = 'exposure' then 1 else null end) as expoCnt,  
   count(case when eventId = 'click' then 1 else null end) as clkCnt  
 FROM  (    
 SELECT        
   'ZL_001' as aggId,
        pageId,        
eventId,        
recvTime,        
ts2Date(recvTime) as ts_min    
 from kafka_zl_etrack_event_stream    
 where eventId in ('exposure', 'click')  
 ) as t1  
 group by aggId, pageId, ts_min
{code}
I found that the Blink planner cannot extract a correct unique key in 
`*FlinkRelMetadataQuery.getUniqueKeys(relNode)*`, while the legacy planner 
works well in 
`*org.apache.flink.table.plan.util.UpdatingPlanChecker.getUniqueKeyFields(...)*`. 
A simple ETL job to reproduce this issue can be found at [2].

 

[1][http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Flink-1-10-es-sink-exception-td32773.html]

[2][https://github.com/leonardBang/flink-sql-etl/blob/master/etl-job/src/main/java/kafka2es/Kafka2UpsertEs.java]

 

 

  was:
I reproduce an Elasticsearch6UpsertTableSink issue which user reported in mail 
list[1] that Blink planner can not extract correct unique key for following 
query, but legacy planner works well. 
{code:java}
// user code
INSERT INTO ES6_ZHANGLE_OUTPUT  
 SELECT aggId, pageId, ts_min as ts,  
   count(case when eventId = 'exposure' then 1 else null end) as expoCnt,  
   count(case when eventId = 'click' then 1 else null end) as clkCnt  
 FROM  (    
 SELECT        
   'ZL_001' as aggId,
        pageId,        
eventId,        
recvTime,        
ts2Date(recvTime) as ts_min    
 from kafka_zl_etrack_event_stream    
 where eventId in ('exposure', 'click')  
 ) as t1  
 group by aggId, pageId, ts_min
{code}
I  found that blink planner can extract unique key in 
`*FlinkRelMetadataQuery.getUniqueKeys(relNode)*`, legacy planner works well in  
`*org.apache.flink.table.plan.util.UpdatingPlanChecker.getUniqueKeyFields(...)* 
`. A simple ETL job to reproduce this issue can refers[2]

 

[1][http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Flink-1-10-es-sink-exception-td32773.html]

[2][https://github.com/leonardBang/flink-sql-etl/blob/master/etl-job/src/main/java/kafka2es/Kafka2UpsertEs.java]

 

 


> Blink planner can not extract correct unique key for UpsertStreamTableSink 
> ---
>
> Key: FLINK-16070
> URL: https://issues.apache.org/jira/browse/FLINK-16070
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.11.0
>Reporter: Leonard Xu
>Priority: Critical
> Fix For: 1.10.1
>
>
> I reproduced an Elasticsearch6UpsertTableSink issue that a user reported on 
> the mailing list [1]: the Blink planner cannot extract a correct unique key 
> for the following query, while the legacy planner works well.
> {code:java}
> // user code
> INSERT INTO ES6_ZHANGLE_OUTPUT  
>  SELECT aggId, pageId, ts_min as ts,  
>count(case when eventId = 'exposure' then 1 else null end) as expoCnt, 
>  
>count(case when eventId = 'click' then 1 else null end) as clkCnt  
>  FROM  (    
>  SELECT        
>'ZL_001' as aggId,
>         pageId,        
> eventId,        
> recvTime,        
> ts2Date(recvTime) as ts_min    
>  from kafka_zl_etrack_event_stream    
>  where eventId in ('exposure', 'click')  
>  ) as t1  
>  group by aggId, pageId, ts_min
> {code}
> I found that the Blink planner cannot extract a correct unique key in 
> `*FlinkRelMetadataQuery.getUniqueKeys(relNode)*`, while the legacy planner 
> works well in 
> `*org.apache.flink.table.plan.util.UpdatingPlanChecker.getUniqueKeyFields(...)*`. 
> A simple ETL job to reproduce this issue can be found at [2].
>  
> [1][http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Flink-1-10-es-sink-exception-td32773.html]
> [2][https://github.com/leonardBang/flink-sql-etl/blob/master/etl-job/src/main/java/kafka2es/Kafka2UpsertEs.java]
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-16070) Blink planner can not extract correct unique key for UpsertStreamTableSink

2020-02-15 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16070?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu updated FLINK-16070:

Fix Version/s: (was: 1.11.0)
   1.10.1

> Blink planner can not extract correct unique key for UpsertStreamTableSink 
> ---
>
> Key: FLINK-16070
> URL: https://issues.apache.org/jira/browse/FLINK-16070
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.11.0
>Reporter: Leonard Xu
>Priority: Critical
> Fix For: 1.10.1
>
>
> I reproduced an Elasticsearch6UpsertTableSink issue that a user reported on 
> the mailing list [1]: the Blink planner cannot extract a correct unique key 
> for the following query, while the legacy planner works well.
> {code:java}
> // user code
> INSERT INTO ES6_ZHANGLE_OUTPUT  
>  SELECT aggId, pageId, ts_min as ts,  
>count(case when eventId = 'exposure' then 1 else null end) as expoCnt, 
>  
>count(case when eventId = 'click' then 1 else null end) as clkCnt  
>  FROM  (    
>  SELECT        
>'ZL_001' as aggId,
>         pageId,        
> eventId,        
> recvTime,        
> ts2Date(recvTime) as ts_min    
>  from kafka_zl_etrack_event_stream    
>  where eventId in ('exposure', 'click')  
>  ) as t1  
>  group by aggId, pageId, ts_min
> {code}
> I found that the Blink planner cannot extract a correct unique key in 
> `*FlinkRelMetadataQuery.getUniqueKeys(relNode)*`, while the legacy planner 
> works well in 
> `*org.apache.flink.table.plan.util.UpdatingPlanChecker.getUniqueKeyFields(...)*`. 
> A simple ETL job to reproduce this issue can be found at [2].
>  
> [1][http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Flink-1-10-es-sink-exception-td32773.html]
> [2][https://github.com/leonardBang/flink-sql-etl/blob/master/etl-job/src/main/java/kafka2es/Kafka2UpsertEs.java]
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-16070) Blink planner can not extract correct unique key for UpsertStreamTableSink

2020-02-15 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16070?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu updated FLINK-16070:

Priority: Critical  (was: Major)

> Blink planner can not extract correct unique key for UpsertStreamTableSink 
> ---
>
> Key: FLINK-16070
> URL: https://issues.apache.org/jira/browse/FLINK-16070
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.11.0
>Reporter: Leonard Xu
>Priority: Critical
> Fix For: 1.11.0
>
>
> I reproduced an Elasticsearch6UpsertTableSink issue that a user reported on 
> the mailing list [1]: the Blink planner cannot extract a correct unique key 
> for the following query, while the legacy planner works well.
> {code:java}
> // user code
> INSERT INTO ES6_ZHANGLE_OUTPUT  
>  SELECT aggId, pageId, ts_min as ts,  
>count(case when eventId = 'exposure' then 1 else null end) as expoCnt, 
>  
>count(case when eventId = 'click' then 1 else null end) as clkCnt  
>  FROM  (    
>  SELECT        
>'ZL_001' as aggId,
>         pageId,        
> eventId,        
> recvTime,        
> ts2Date(recvTime) as ts_min    
>  from kafka_zl_etrack_event_stream    
>  where eventId in ('exposure', 'click')  
>  ) as t1  
>  group by aggId, pageId, ts_min
> {code}
> I found that the Blink planner cannot extract a correct unique key in 
> `*FlinkRelMetadataQuery.getUniqueKeys(relNode)*`, while the legacy planner 
> works well in 
> `*org.apache.flink.table.plan.util.UpdatingPlanChecker.getUniqueKeyFields(...)*`. 
> A simple ETL job to reproduce this issue can be found at [2].
>  
> [1][http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Flink-1-10-es-sink-exception-td32773.html]
> [2][https://github.com/leonardBang/flink-sql-etl/blob/master/etl-job/src/main/java/kafka2es/Kafka2UpsertEs.java]
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-16070) Blink planner can not extract correct unique key for UpsertStreamTableSink

2020-02-15 Thread Jingsong Lee (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-16070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17037516#comment-17037516
 ] 

Jingsong Lee commented on FLINK-16070:
--

Hi [~Leonard Xu], thanks for reporting.

{{FlinkRelMdUniqueKeys}} lives in the Flink code base and handles unique-key 
derivation, so a bug there may be what leads to this behavior. CC the author: 
[~godfreyhe]
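
For reference, a minimal sketch of how the unique-key metadata can be inspected for a given {{RelNode}} (how the {{RelNode}} is obtained is up to the caller; only the metadata call is the one named in the report):
{code:java}
import java.util.Set;

import org.apache.calcite.rel.RelNode;
import org.apache.calcite.util.ImmutableBitSet;
import org.apache.flink.table.planner.plan.metadata.FlinkRelMetadataQuery;

public class UniqueKeyInspector {

    /** Prints the unique keys (sets of field indexes) derived for a RelNode. */
    static void printUniqueKeys(RelNode relNode) {
        FlinkRelMetadataQuery fmq =
            FlinkRelMetadataQuery.reuseOrCreate(relNode.getCluster().getMetadataQuery());
        Set<ImmutableBitSet> uniqueKeys = fmq.getUniqueKeys(relNode);
        // For the aggregate in the report one would expect {aggId, pageId, ts_min};
        // the bug is that an incorrect key set is derived here.
        System.out.println("unique keys: " + uniqueKeys);
    }
}
{code}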

> Blink planner can not extract correct unique key for UpsertStreamTableSink 
> ---
>
> Key: FLINK-16070
> URL: https://issues.apache.org/jira/browse/FLINK-16070
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.11.0
>Reporter: Leonard Xu
>Priority: Major
> Fix For: 1.11.0
>
>
> I reproduced an Elasticsearch6UpsertTableSink issue that a user reported on 
> the mailing list [1]: the Blink planner cannot extract a correct unique key 
> for the following query, while the legacy planner works well.
> {code:java}
> // user code
> INSERT INTO ES6_ZHANGLE_OUTPUT  
>  SELECT aggId, pageId, ts_min as ts,  
>count(case when eventId = 'exposure' then 1 else null end) as expoCnt, 
>  
>count(case when eventId = 'click' then 1 else null end) as clkCnt  
>  FROM  (    
>  SELECT        
>'ZL_001' as aggId,
>         pageId,        
> eventId,        
> recvTime,        
> ts2Date(recvTime) as ts_min    
>  from kafka_zl_etrack_event_stream    
>  where eventId in ('exposure', 'click')  
>  ) as t1  
>  group by aggId, pageId, ts_min
> {code}
> I found that the Blink planner cannot extract a correct unique key in 
> `*FlinkRelMetadataQuery.getUniqueKeys(relNode)*`, while the legacy planner 
> works well in 
> `*org.apache.flink.table.plan.util.UpdatingPlanChecker.getUniqueKeyFields(...)*`. 
> A simple ETL job to reproduce this issue can be found at [2].
>  
> [1][http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Flink-1-10-es-sink-exception-td32773.html]
> [2][https://github.com/leonardBang/flink-sql-etl/blob/master/etl-job/src/main/java/kafka2es/Kafka2UpsertEs.java]
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-16070) Blink planner can not extract correct unique key for UpsertStreamTableSink

2020-02-15 Thread Leonard Xu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-16070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17037512#comment-17037512
 ] 

Leonard Xu commented on FLINK-16070:


I'm not sure yet whether the root cause is in the Blink planner or in the 
underlying Calcite.

cc: [~jark] [~danny0405]

 

> Blink planner can not extract correct unique key for UpsertStreamTableSink 
> ---
>
> Key: FLINK-16070
> URL: https://issues.apache.org/jira/browse/FLINK-16070
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.11.0
>Reporter: Leonard Xu
>Priority: Major
> Fix For: 1.11.0
>
>
> I reproduced an Elasticsearch6UpsertTableSink issue that a user reported on 
> the mailing list [1]: the Blink planner cannot extract a correct unique key 
> for the following query, while the legacy planner works well.
> {code:java}
> // user code
> INSERT INTO ES6_ZHANGLE_OUTPUT  
>  SELECT aggId, pageId, ts_min as ts,  
>count(case when eventId = 'exposure' then 1 else null end) as expoCnt, 
>  
>count(case when eventId = 'click' then 1 else null end) as clkCnt  
>  FROM  (    
>  SELECT        
>'ZL_001' as aggId,
>         pageId,        
> eventId,        
> recvTime,        
> ts2Date(recvTime) as ts_min    
>  from kafka_zl_etrack_event_stream    
>  where eventId in ('exposure', 'click')  
>  ) as t1  
>  group by aggId, pageId, ts_min
> {code}
> I found that the Blink planner cannot extract a correct unique key in 
> `*FlinkRelMetadataQuery.getUniqueKeys(relNode)*`, while the legacy planner 
> works well in 
> `*org.apache.flink.table.plan.util.UpdatingPlanChecker.getUniqueKeyFields(...)*`. 
> A simple ETL job to reproduce this issue can be found at [2].
>  
> [1][http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Flink-1-10-es-sink-exception-td32773.html]
> [2][https://github.com/leonardBang/flink-sql-etl/blob/master/etl-job/src/main/java/kafka2es/Kafka2UpsertEs.java]
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-16070) Blink planner can not extract correct unique key for UpsertStreamTableSink

2020-02-15 Thread Leonard Xu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16070?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leonard Xu updated FLINK-16070:
---
Description: 
I reproduced an Elasticsearch6UpsertTableSink issue that a user reported on the 
mailing list [1]: the Blink planner cannot extract a correct unique key for the 
following query, while the legacy planner works well.
{code:java}
// user code
INSERT INTO ES6_ZHANGLE_OUTPUT  
 SELECT aggId, pageId, ts_min as ts,  
   count(case when eventId = 'exposure' then 1 else null end) as expoCnt,  
   count(case when eventId = 'click' then 1 else null end) as clkCnt  
 FROM  (    
 SELECT        
   'ZL_001' as aggId,
        pageId,        
eventId,        
recvTime,        
ts2Date(recvTime) as ts_min    
 from kafka_zl_etrack_event_stream    
 where eventId in ('exposure', 'click')  
 ) as t1  
 group by aggId, pageId, ts_min
{code}
I found that the Blink planner cannot extract a correct unique key in 
`*FlinkRelMetadataQuery.getUniqueKeys(relNode)*`, while the legacy planner 
works well in 
`*org.apache.flink.table.plan.util.UpdatingPlanChecker.getUniqueKeyFields(...)*`. 
A simple ETL job to reproduce this issue can be found at [2].

 

[1][http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Flink-1-10-es-sink-exception-td32773.html]

[2][https://github.com/leonardBang/flink-sql-etl/blob/master/etl-job/src/main/java/kafka2es/Kafka2UpsertEs.java]

 

 

  was:
I reproduce an Elasticsearch6UpsertTableSink issue which user reported in mail 
list[1] that Blink planner can not extract correct unique key for following 
query, but legacy planner work well. 
{code:java}
// user code
INSERT INTO ES6_ZHANGLE_OUTPUT  
 SELECT aggId, pageId, ts_min as ts,  
   count(case when eventId = 'exposure' then 1 else null end) as expoCnt,  
   count(case when eventId = 'click' then 1 else null end) as clkCnt  
 FROM  (    
 SELECT        
   'ZL_001' as aggId,
        pageId,        
eventId,        
recvTime,        
ts2Date(recvTime) as ts_min    
 from kafka_zl_etrack_event_stream    
 where eventId in ('exposure', 'click')  
 ) as t1  
 group by aggId, pageId, ts_min
{code}
I  found that blink planner can extract unique key in 
`*FlinkRelMetadataQuery.getUniqueKeys(relNode)*`, legacy planner works well in  
`*org.apache.flink.table.plan.util.UpdatingPlanChecker.getUniqueKeyFields(...)* 
`. A simple ETL job to reproduce this issue can refers[2]

 

[1][http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Flink-1-10-es-sink-exception-td32773.html]

[2][https://github.com/leonardBang/flink-sql-etl/blob/master/etl-job/src/main/java/kafka2es/Kafka2UpsertEs.java]

 

 


> Blink planner can not extract correct unique key for UpsertStreamTableSink 
> ---
>
> Key: FLINK-16070
> URL: https://issues.apache.org/jira/browse/FLINK-16070
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.11.0
>Reporter: Leonard Xu
>Priority: Major
> Fix For: 1.11.0
>
>
> I reproduced an Elasticsearch6UpsertTableSink issue that a user reported on 
> the mailing list [1]: the Blink planner cannot extract a correct unique key 
> for the following query, while the legacy planner works well.
> {code:java}
> // user code
> INSERT INTO ES6_ZHANGLE_OUTPUT  
>  SELECT aggId, pageId, ts_min as ts,  
>count(case when eventId = 'exposure' then 1 else null end) as expoCnt, 
>  
>count(case when eventId = 'click' then 1 else null end) as clkCnt  
>  FROM  (    
>  SELECT        
>'ZL_001' as aggId,
>         pageId,        
> eventId,        
> recvTime,        
> ts2Date(recvTime) as ts_min    
>  from kafka_zl_etrack_event_stream    
>  where eventId in ('exposure', 'click')  
>  ) as t1  
>  group by aggId, pageId, ts_min
> {code}
> I found that the Blink planner cannot extract a correct unique key in 
> `*FlinkRelMetadataQuery.getUniqueKeys(relNode)*`, while the legacy planner 
> works well in 
> `*org.apache.flink.table.plan.util.UpdatingPlanChecker.getUniqueKeyFields(...)*`. 
> A simple ETL job to reproduce this issue can be found at [2].
>  
> [1][http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Flink-1-10-es-sink-exception-td32773.html]
> [2][https://github.com/leonardBang/flink-sql-etl/blob/master/etl-job/src/main/java/kafka2es/Kafka2UpsertEs.java]
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-16070) Blink planner can not extract correct unique key for UpsertStreamTableSink

2020-02-15 Thread Leonard Xu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16070?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leonard Xu updated FLINK-16070:
---
Description: 
I reproduced an Elasticsearch6UpsertTableSink issue that a user reported on the 
mailing list [1]: the Blink planner cannot extract a correct unique key for the 
following query, while the legacy planner works well.
{code:java}
// user code
INSERT INTO ES6_ZHANGLE_OUTPUT  
 SELECT aggId, pageId, ts_min as ts,  
   count(case when eventId = 'exposure' then 1 else null end) as expoCnt,  
   count(case when eventId = 'click' then 1 else null end) as clkCnt  
 FROM  (    
 SELECT        
   'ZL_001' as aggId,
        pageId,        
eventId,        
recvTime,        
ts2Date(recvTime) as ts_min    
 from kafka_zl_etrack_event_stream    
 where eventId in ('exposure', 'click')  
 ) as t1  
 group by aggId, pageId, ts_min
{code}
I found that the Blink planner cannot extract a correct unique key in 
`*FlinkRelMetadataQuery.getUniqueKeys(relNode)*`, while the legacy planner 
works well in 
`*org.apache.flink.table.plan.util.UpdatingPlanChecker.getUniqueKeyFields(...)*`. 
A simple ETL job to reproduce this issue can be found at [2].

 

[1][http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Flink-1-10-es-sink-exception-td32773.html]

[2][https://github.com/leonardBang/flink-sql-etl/blob/master/etl-job/src/main/java/kafka2es/Kafka2UpsertEs.java]

 

 

  was:
I reproduce an Elasticsearch6UpsertTableSink issue which user reported in mail 
list[1] that Blink planner can not extract correct unique key for following 
query, but legacy planner work well. 
{code:java}
// user code
INSERT INTO ES6_ZHANGLE_OUTPUT  
 SELECT aggId, pageId, ts_min as ts,  
   count(case when eventId = 'exposure' then 1 else null end) as expoCnt,  
   count(case when eventId = 'click' then 1 else null end) as clkCnt  
 FROM  (    
 SELECT        
   'ZL_001' as aggId,
        pageId,        
eventId,        
recvTime,        
ts2Date(recvTime) as ts_min    
 from kafka_zl_etrack_event_stream    
 where eventId in ('exposure', 'click')  
 ) as t1  
 group by aggId, pageId, ts_min
{code}
I  found that blink planner can extract unique key in 
`*FlinkRelMetadataQuery.getUniqueKeys(relNode)*`, legacy planner workd well in  
`*org.apache.flink.table.plan.util.UpdatingPlanChecker.getUniqueKeyFields(...)* 
`. A simple ETL job to reproduce this issue can refers[2]

 

[1][http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Flink-1-10-es-sink-exception-td32773.html]

[2][https://github.com/leonardBang/flink-sql-etl/blob/master/etl-job/src/main/java/kafka2es/Kafka2UpsertEs.java]

 

 


> Blink planner can not extract correct unique key for UpsertStreamTableSink 
> ---
>
> Key: FLINK-16070
> URL: https://issues.apache.org/jira/browse/FLINK-16070
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.11.0
>Reporter: Leonard Xu
>Priority: Major
> Fix For: 1.11.0
>
>
> I reproduced an Elasticsearch6UpsertTableSink issue that a user reported on 
> the mailing list [1]: the Blink planner cannot extract a correct unique key 
> for the following query, while the legacy planner works well.
> {code:java}
> // user code
> INSERT INTO ES6_ZHANGLE_OUTPUT  
>  SELECT aggId, pageId, ts_min as ts,  
>count(case when eventId = 'exposure' then 1 else null end) as expoCnt, 
>  
>count(case when eventId = 'click' then 1 else null end) as clkCnt  
>  FROM  (    
>  SELECT        
>'ZL_001' as aggId,
>         pageId,        
> eventId,        
> recvTime,        
> ts2Date(recvTime) as ts_min    
>  from kafka_zl_etrack_event_stream    
>  where eventId in ('exposure', 'click')  
>  ) as t1  
>  group by aggId, pageId, ts_min
> {code}
> I found that the Blink planner cannot extract a correct unique key in 
> `*FlinkRelMetadataQuery.getUniqueKeys(relNode)*`, while the legacy planner 
> works well in 
> `*org.apache.flink.table.plan.util.UpdatingPlanChecker.getUniqueKeyFields(...)*`. 
> A simple ETL job to reproduce this issue can be found at [2].
>  
> [1][http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Flink-1-10-es-sink-exception-td32773.html]
> [2][https://github.com/leonardBang/flink-sql-etl/blob/master/etl-job/src/main/java/kafka2es/Kafka2UpsertEs.java]
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-16070) Blink planner can not extract correct unique key for UpsertStreamTableSink

2020-02-15 Thread Leonard Xu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16070?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leonard Xu updated FLINK-16070:
---
Description: 
I reproduced an Elasticsearch6UpsertTableSink issue that a user reported on the 
mailing list [1]: the Blink planner cannot extract a correct unique key for the 
following query, while the legacy planner works well.
{code:java}
// user code
INSERT INTO ES6_ZHANGLE_OUTPUT  
 SELECT aggId, pageId, ts_min as ts,  
   count(case when eventId = 'exposure' then 1 else null end) as expoCnt,  
   count(case when eventId = 'click' then 1 else null end) as clkCnt  
 FROM  (    
 SELECT        
   'ZL_001' as aggId,
        pageId,        
eventId,        
recvTime,        
ts2Date(recvTime) as ts_min    
 from kafka_zl_etrack_event_stream    
 where eventId in ('exposure', 'click')  
 ) as t1  
 group by aggId, pageId, ts_min
{code}
I found that the Blink planner cannot extract a correct unique key in 
`*FlinkRelMetadataQuery.getUniqueKeys(relNode)*`, while the legacy planner 
works well in 
`*org.apache.flink.table.plan.util.UpdatingPlanChecker.getUniqueKeyFields(...)*`. 
A simple ETL job to reproduce this issue can be found at [2].

 

[1][http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Flink-1-10-es-sink-exception-td32773.html]

[2][https://github.com/leonardBang/flink-sql-etl/blob/master/etl-job/src/main/java/kafka2es/Kafka2UpsertEs.java]

 

 

  was:
I reproduce an Elasticsearch6UpsertTableSink issue which user reported in mail 
list[1] that Blink planner can not extract correct unique key for following 
query, but legacy planner work well. 
{code:java}
// user code
NSERT INTO ES6_ZHANGLE_OUTPUT  SELECT aggId, pageId, ts_min as ts,  count(case 
when eventId = 'exposure' then 1 else null end) as expoCnt,  count(case when 
eventId = 'click' then 1 else null end) as clkCnt  FROM  (    SELECT        
'ZL_001' as aggId,        pageId,        eventId,        recvTime,        
ts2Date(recvTime) as ts_min    from kafka_zl_etrack_event_stream    where 
eventId in ('exposure', 'click')  ) as t1  group by aggId, pageId, ts_min
{code}
I  found that blink planner can extract unique key in 
`*FlinkRelMetadataQuery.getUniqueKeys(relNode)*`, legacy planner workd well in  
`*org.apache.flink.table.plan.util.UpdatingPlanChecker.getUniqueKeyFields(...)* 
`. A simple ETL job to reproduce this issue can refers[2]

 

[1][http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Flink-1-10-es-sink-exception-td32773.html]

[2][https://github.com/leonardBang/flink-sql-etl/blob/master/etl-job/src/main/java/kafka2es/Kafka2UpsertEs.java]

 

 


> Blink planner can not extract correct unique key for UpsertStreamTableSink 
> ---
>
> Key: FLINK-16070
> URL: https://issues.apache.org/jira/browse/FLINK-16070
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.11.0
>Reporter: Leonard Xu
>Priority: Major
> Fix For: 1.11.0
>
>
> I reproduced an Elasticsearch6UpsertTableSink issue that a user reported on 
> the mailing list [1]: the Blink planner cannot extract a correct unique key 
> for the following query, while the legacy planner works well.
> {code:java}
> // user code
> INSERT INTO ES6_ZHANGLE_OUTPUT  
>  SELECT aggId, pageId, ts_min as ts,  
>count(case when eventId = 'exposure' then 1 else null end) as expoCnt, 
>  
>count(case when eventId = 'click' then 1 else null end) as clkCnt  
>  FROM  (    
>  SELECT        
>'ZL_001' as aggId,
>         pageId,        
> eventId,        
> recvTime,        
> ts2Date(recvTime) as ts_min    
>  from kafka_zl_etrack_event_stream    
>  where eventId in ('exposure', 'click')  
>  ) as t1  
>  group by aggId, pageId, ts_min
> {code}
> I found that the Blink planner cannot extract a correct unique key in 
> `*FlinkRelMetadataQuery.getUniqueKeys(relNode)*`, while the legacy planner 
> works well in 
> `*org.apache.flink.table.plan.util.UpdatingPlanChecker.getUniqueKeyFields(...)*`. 
> A simple ETL job to reproduce this issue can be found at [2].
>  
> [1][http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Flink-1-10-es-sink-exception-td32773.html]
> [2][https://github.com/leonardBang/flink-sql-etl/blob/master/etl-job/src/main/java/kafka2es/Kafka2UpsertEs.java]
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-16070) Blink planner can not extract correct unique key for UpsertStreamTableSink

2020-02-15 Thread Leonard Xu (Jira)
Leonard Xu created FLINK-16070:
--

 Summary: Blink planner can not extract correct unique key for 
UpsertStreamTableSink 
 Key: FLINK-16070
 URL: https://issues.apache.org/jira/browse/FLINK-16070
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / Planner
Affects Versions: 1.11.0
Reporter: Leonard Xu
 Fix For: 1.11.0


I reproduced an Elasticsearch6UpsertTableSink issue that a user reported on the 
mailing list [1]: the Blink planner cannot extract a correct unique key for the 
following query, while the legacy planner works well.
{code:java}
// user code
INSERT INTO ES6_ZHANGLE_OUTPUT
 SELECT aggId, pageId, ts_min as ts,
   count(case when eventId = 'exposure' then 1 else null end) as expoCnt,
   count(case when eventId = 'click' then 1 else null end) as clkCnt
 FROM (
   SELECT
     'ZL_001' as aggId,
     pageId,
     eventId,
     recvTime,
     ts2Date(recvTime) as ts_min
   from kafka_zl_etrack_event_stream
   where eventId in ('exposure', 'click')
 ) as t1
 group by aggId, pageId, ts_min
{code}
I found that the Blink planner cannot extract a correct unique key in 
`*FlinkRelMetadataQuery.getUniqueKeys(relNode)*`, while the legacy planner 
works well in 
`*org.apache.flink.table.plan.util.UpdatingPlanChecker.getUniqueKeyFields(...)*`. 
A simple ETL job to reproduce this issue can be found at [2].

 

[1][http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Flink-1-10-es-sink-exception-td32773.html]

[2][https://github.com/leonardBang/flink-sql-etl/blob/master/etl-job/src/main/java/kafka2es/Kafka2UpsertEs.java]

 

 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] knaufk commented on a change in pull request #11092: [FLINK-15999] Extract “Concepts” material from API/Library sections and start proper concepts section

2020-02-15 Thread GitBox
knaufk commented on a change in pull request #11092: [FLINK-15999] Extract 
“Concepts” material from API/Library sections and start proper concepts section
URL: https://github.com/apache/flink/pull/11092#discussion_r379550966
 
 

 ##
 File path: docs/concepts/stream-processing.md
 ##
 @@ -0,0 +1,96 @@
+---
+title: Stream Processing
+nav-id: stream-processing
+nav-pos: 1
+nav-title: Stream Processing
+nav-parent_id: concepts
+---
+
+
+`TODO: Add introduction`
+* This will be replaced by the TOC
+{:toc}
+
+## A Unified System for Batch & Stream Processing
+
+`TODO`
+
+{% top %}
+
+## Programs and Dataflows
+
+The basic building blocks of Flink programs are **streams** and
+**transformations**. Conceptually a *stream* is a (potentially never-ending)
+flow of data records, and a *transformation* is an operation that takes one or
+more streams as input, and produces one or more output streams as a result.
+
+When executed, Flink programs are mapped to **streaming dataflows**, consisting
+of **streams** and transformation **operators**. Each dataflow starts with one
+or more **sources** and ends in one or more **sinks**. The dataflows resemble
+arbitrary **directed acyclic graphs** *(DAGs)*. Although special forms of
+cycles are permitted via *iteration* constructs, for the most part we will
+gloss over this for simplicity.
+
+
+
+Often there is a one-to-one correspondence between the transformations in the
+programs and the operators in the dataflow. Sometimes, however, one
+transformation may consist of multiple transformation operators.
+
+{% top %}
+
+## Parallel Dataflows
+
+Programs in Flink are inherently parallel and distributed. During execution, a
+*stream* has one or more **stream partitions**, and each *operator* has one or
+more **operator subtasks**. The operator subtasks are independent of one
+another, and execute in different threads and possibly on different machines or
+containers.
+
+The number of operator subtasks is the **parallelism** of that particular
+operator. The parallelism of a stream is always that of its producing operator.
+Different operators of the same program may have different levels of
+parallelism.
+
+
+
+Streams can transport data between two operators in a *one-to-one* (or
 
 Review comment:
   To me, the way Fabian presents these data exchange patterns in his book is 
easier to understand. I think it was: 
   
   * Forward
   * Broadcast
   * Random
   * Keyed
   
   IMHO the additional classification into "Redistributing" and "One-to-one" 
does not help. 
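
   For what it's worth, those four patterns map directly onto existing DataStream operations; a tiny illustrative sketch (element values are arbitrary):

```java
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class ExchangePatterns {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.setParallelism(1); // uniform parallelism keeps forward() legal end-to-end

        DataStream<String> words = env.fromElements("flink", "stream", "batch");

        words.forward().print();     // Forward: one-to-one, stays in the same subtask
        words.broadcast().print();   // Broadcast: every downstream subtask sees all records
        words.shuffle().print();     // Random: uniformly random redistribution
        words.keyBy(w -> w).print(); // Keyed: hash-partitioned by key

        env.execute("exchange patterns");
    }
}
```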


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] carp84 commented on a change in pull request #11095: [FLINK-10918][rocks-db-backend] Fix incremental checkpoints on Windows.

2020-02-15 Thread GitBox
carp84 commented on a change in pull request #11095: 
[FLINK-10918][rocks-db-backend] Fix incremental checkpoints on Windows.
URL: https://github.com/apache/flink/pull/11095#discussion_r379828410
 
 

 ##
 File path: 
flink-state-backends/flink-statebackend-rocksdb/src/main/java/org/apache/flink/contrib/streaming/state/RocksDBStateDownloader.java
 ##
 @@ -120,14 +121,14 @@ private void downloadDataForStateHandle(
CloseableRegistry closeableRegistry) throws IOException {
 
FSDataInputStream inputStream = null;
-   FSDataOutputStream outputStream = null;
+   OutputStream outputStream = null;
 
try {
-   FileSystem restoreFileSystem = 
restoreFilePath.getFileSystem();
inputStream = remoteFileHandle.openInputStream();
closeableRegistry.registerCloseable(inputStream);
 
-   outputStream = 
restoreFileSystem.create(restoreFilePath, FileSystem.WriteMode.OVERWRITE);
+   Files.createDirectories(restoreFilePath.getParent());
+   outputStream = Files.newOutputStream(restoreFilePath, 
StandardOpenOption.CREATE, StandardOpenOption.WRITE);
 
 Review comment:
   It seems to me we could simply call `Files.newOutputStream(restoreFilePath)` 
here, which exactly matches the desired "overwrite" semantics.
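
   For reference, a small self-contained sketch of that JDK default (the file name is illustrative): with no options, `Files.newOutputStream` behaves as if `CREATE`, `TRUNCATE_EXISTING`, and `WRITE` were passed, i.e. it overwrites.

```java
import java.io.IOException;
import java.io.OutputStream;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class OverwriteDemo {
    public static void main(String[] args) throws IOException {
        Path path = Paths.get("restore-demo.sst"); // illustrative file name
        Files.write(path, "old-longer-content".getBytes(StandardCharsets.UTF_8));

        // No explicit options: defaults to CREATE + TRUNCATE_EXISTING + WRITE,
        // so existing content is discarded before the new bytes are written.
        try (OutputStream out = Files.newOutputStream(path)) {
            out.write("new".getBytes(StandardCharsets.UTF_8));
        }

        // Prints "new": the longer old content was truncated away.
        System.out.println(new String(Files.readAllBytes(path), StandardCharsets.UTF_8));
    }
}
```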


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-16010) Support notFollowedBy with interval as the last part of a Pattern

2020-02-15 Thread Dian Fu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-16010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17037508#comment-17037508
 ] 

Dian Fu commented on FLINK-16010:
-

As [~dwysakowicz] mentioned in FLINK-9431, I also think this is a valid 
requirement. I like the design overall: it does not introduce additional APIs, 
and it seems clean and feasible to me.

[~aitozi] It would be great if you could share some thoughts on this design, as 
I noticed that you proposed another design before.
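
A minimal sketch of the usage the proposal would enable, based on the example in the issue description ({{OrderEvent}} and the conditions are illustrative; today such a pattern is rejected when the NFA is compiled):
{code:java}
import org.apache.flink.cep.pattern.Pattern;
import org.apache.flink.cep.pattern.conditions.SimpleCondition;
import org.apache.flink.streaming.api.windowing.time.Time;

public class NotFollowedByExample {

    /** Illustrative event: an order id plus its lifecycle state. */
    public static class OrderEvent {
        public String orderId;
        public String state; // "created" or "paid"
    }

    /** Orders that were created but not paid within 10 minutes. */
    static Pattern<OrderEvent, ?> unpaidOrders() {
        return Pattern.<OrderEvent>begin("create")
            .where(new SimpleCondition<OrderEvent>() {
                @Override
                public boolean filter(OrderEvent e) {
                    return "created".equals(e.state);
                }
            })
            // Proposed: a trailing notFollowedBy(...) bounded by within(...).
            .notFollowedBy("pay")
            .where(new SimpleCondition<OrderEvent>() {
                @Override
                public boolean filter(OrderEvent e) {
                    return "paid".equals(e.state);
                }
            })
            .within(Time.minutes(10));
    }
}
{code}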

> Support notFollowedBy with interval as the last part of a Pattern
> -
>
> Key: FLINK-16010
> URL: https://issues.apache.org/jira/browse/FLINK-16010
> Project: Flink
>  Issue Type: New Feature
>  Components: Library / CEP
>Affects Versions: 1.9.0
>Reporter: shuai.xu
>Priority: Major
>
> Now, Pattern.begin("a").notFollowedBy("b") is not allowed in CEP. But this is 
> useful in many applications; for example, operators may want to find the 
> users who created an order but didn't pay within 10 minutes.
> So I propose to support notFollowedBy() with an interval as the last part of 
> a Pattern. 
> For example, Pattern.begin("a").notFollowedBy("b").within(Time.minutes(10)) 
> will be valid in the future.
> Discuss in dev mail list is 
> [https://lists.apache.org/thread.html/rc505728048663d672ad379578ac67d3219f6076986c80a2362802ebb%40%3Cdev.flink.apache.org%3E]
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] dianfu commented on a change in pull request #11068: [FLINK-15964] [cep] fix getting event of previous stage in notFollowedBy may throw exception bug

2020-02-15 Thread GitBox
dianfu commented on a change in pull request #11068: [FLINK-15964] [cep] fix 
getting event of previous stage in notFollowedBy may throw exception bug
URL: https://github.com/apache/flink/pull/11068#discussion_r379826374
 
 

 ##
 File path: 
flink-libraries/flink-cep/src/test/java/org/apache/flink/cep/nfa/NFAITCase.java
 ##
 @@ -2844,4 +2845,35 @@ public void testSharedBufferClearing() throws Exception 
{
Mockito.verify(accessor, 
Mockito.times(1)).advanceTime(2);
}
}
+
+   /**
+* Test that can access the value of the previous stage directly in 
notFollowedBy.
+*
 
 Review comment:
   The Javadoc could be removed, as I think the method name already clearly 
describes what it does. What do you think?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] dianfu commented on a change in pull request #11068: [FLINK-15964] [cep] fix getting event of previous stage in notFollowedBy may throw exception bug

2020-02-15 Thread GitBox
dianfu commented on a change in pull request #11068: [FLINK-15964] [cep] fix 
getting event of previous stage in notFollowedBy may throw exception bug
URL: https://github.com/apache/flink/pull/11068#discussion_r379826440
 
 

 ##
 File path: 
flink-libraries/flink-cep/src/test/java/org/apache/flink/cep/nfa/NFAITCase.java
 ##
 @@ -2844,4 +2845,35 @@ public void testSharedBufferClearing() throws Exception 
{
Mockito.verify(accessor, 
Mockito.times(1)).advanceTime(2);
}
}
+
+   /**
+* Test that can access the value of the previous stage directly in 
notFollowedBy.
+*
+* @see https://issues.apache.org/jira/browse/FLINK-15964;>FLINK-15964
+* @throws Exception
+*/
+   @Test
+   public void testAccessPreviousStageInNotFollowedBy() throws Exception {
 
 Review comment:
   What about moving this test case to NotPatternITCase, which is dedicated to 
not-patterns?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #11098: [FLINK-16060][task] Implement working StreamMultipleInputProcessor

2020-02-15 Thread GitBox
flinkbot edited a comment on issue #11098: [FLINK-16060][task] Implement 
working StreamMultipleInputProcessor
URL: https://github.com/apache/flink/pull/11098#issuecomment-586328210
 
 
   
   ## CI report:
   
   * df7f0151d94bc7705c87baf855ae3d8d57f7e463 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/148994879) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=5187)
 
   * 311d3e9843bd601a8de8bee78c2ecd34222d19d6 UNKNOWN
   * e72b2a38b7ec718b44a14f98897dbaf22f9d0d0d Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/149078552) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=5196)
 
   * 8355093ce1ed0dab9985d9f522f3bcd97c66d016 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/149077809) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=5195)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] klion26 commented on a change in pull request #11095: [FLINK-10918][rocks-db-backend] Fix incremental checkpoints on Windows.

2020-02-15 Thread GitBox
klion26 commented on a change in pull request #11095: 
[FLINK-10918][rocks-db-backend] Fix incremental checkpoints on Windows.
URL: https://github.com/apache/flink/pull/11095#discussion_r379820869
 
 

 ##
 File path: 
flink-state-backends/flink-statebackend-rocksdb/src/main/java/org/apache/flink/contrib/streaming/state/RocksDBStateDownloader.java
 ##
 @@ -120,14 +121,14 @@ private void downloadDataForStateHandle(
 			CloseableRegistry closeableRegistry) throws IOException {
 
 		FSDataInputStream inputStream = null;
 -		FSDataOutputStream outputStream = null;
 +		OutputStream outputStream = null;
 
 		try {
 -			FileSystem restoreFileSystem = restoreFilePath.getFileSystem();
 			inputStream = remoteFileHandle.openInputStream();
 			closeableRegistry.registerCloseable(inputStream);
 
 -			outputStream = restoreFileSystem.create(restoreFilePath, FileSystem.WriteMode.OVERWRITE);
 +			Files.createDirectories(restoreFilePath.getParent());
 +			outputStream = Files.newOutputStream(restoreFilePath, StandardOpenOption.CREATE, StandardOpenOption.WRITE);
 
 Review comment:
   Do we need to add the `TRUNCATE_EXISTING` option here?
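For reference, a sketch of the suggestion in terms of plain java.nio.file semantics (restorePath is a hypothetical stand-in for restoreFilePath): an explicit CREATE + WRITE option set does not truncate an existing file, so a pre-existing, longer restore file would keep stale bytes at its tail; adding TRUNCATE_EXISTING restores the truncating behavior that newOutputStream applies by default when called with no options at all:

	import java.io.OutputStream;
	import java.nio.file.Files;
	import java.nio.file.Path;
	import java.nio.file.Paths;
	import java.nio.file.StandardOpenOption;

	Path restorePath = Paths.get("/tmp/restore/000042.sst"); // hypothetical path
	OutputStream out = Files.newOutputStream(
		restorePath,
		StandardOpenOption.CREATE,              // create the file if it does not exist
		StandardOpenOption.WRITE,               // open for writing
		StandardOpenOption.TRUNCATE_EXISTING);  // drop any previous content first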


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #11098: [FLINK-16060][task] Implement working StreamMultipleInputProcessor

2020-02-15 Thread GitBox
flinkbot edited a comment on issue #11098: [FLINK-16060][task] Implement 
working StreamMultipleInputProcessor
URL: https://github.com/apache/flink/pull/11098#issuecomment-586328210
 
 
   
   ## CI report:
   
   * df7f0151d94bc7705c87baf855ae3d8d57f7e463 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/148994879) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=5187)
 
   * 311d3e9843bd601a8de8bee78c2ecd34222d19d6 UNKNOWN
   * e72b2a38b7ec718b44a14f98897dbaf22f9d0d0d Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/149078552) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=5196)
 
   * 8355093ce1ed0dab9985d9f522f3bcd97c66d016 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] zentol commented on a change in pull request #11048: [FLINK-15966][runtime] Capture callstacks for RPC ask() calls to improve exceptions.

2020-02-15 Thread GitBox
zentol commented on a change in pull request #11048: [FLINK-15966][runtime] 
Capture callstacks for RPC ask() calls to improve exceptions.
URL: https://github.com/apache/flink/pull/11048#discussion_r379802690
 
 

 ##
 File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/AkkaInvocationHandler.java
 ##
 @@ -370,4 +375,33 @@ public String getHostname() {
 	public CompletableFuture<Void> getTerminationFuture() {
 		return terminationFuture;
 	}
 +
 +	static Object deserializeValueIfNeeded(Object o, Method method) {
 +		if (o instanceof SerializedValue) {
 +			try {
 +				return ((SerializedValue<?>) o).deserializeValue(AkkaInvocationHandler.class.getClassLoader());
 +			} catch (IOException | ClassNotFoundException e) {
 +				throw new CompletionException(
 +					new RpcException("Could not deserialize the serialized payload of RPC method : " + method.getName(), e));
 +			}
 +		} else {
 +			return o;
 +		}
 +	}
 +
 +	static Throwable resolveTimeoutException(Throwable exception, @Nullable Throwable callStackCapture, Method method) {
 +		if (callStackCapture == null || !(exception instanceof akka.pattern.AskTimeoutException)) {
 +			return exception;
 
 Review comment:
   The null branch implies that the exception type depends on the configuration 
option, which sounds like an accident waiting to happen: anyone who wants to 
handle timeouts explicitly would have to know that there are two types of 
timeouts that can occur.
   
   How problematic would it be to always create a TimeoutException, and only 
vary how the stack trace is set? A sketch of that alternative follows below.
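A minimal sketch of the suggested alternative, reusing the names from the diff above; the stack-trace handling is a hedged guess at the intent, not the PR's actual code:

	import java.lang.reflect.Method;
	import java.util.concurrent.TimeoutException;
	import javax.annotation.Nullable;

	static Throwable resolveTimeoutException(Throwable exception, @Nullable Throwable callStackCapture, Method method) {
		if (!(exception instanceof akka.pattern.AskTimeoutException)) {
			return exception;
		}
		// Always surface the same type, so callers only ever have to handle
		// java.util.concurrent.TimeoutException regardless of configuration.
		final TimeoutException timeout =
			new TimeoutException("Invocation of method '" + method.getName() + "' timed out.");
		timeout.initCause(exception);
		if (callStackCapture != null) {
			// Only the stack trace varies with the configuration: point it at
			// the original ask() call site when a capture is available.
			timeout.setStackTrace(callStackCapture.getStackTrace());
		}
		return timeout;
	}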


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Assigned] (FLINK-16038) Expose hitRate metric for JDBCLookupFunction

2020-02-15 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16038?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu reassigned FLINK-16038:
---

Assignee: Benchao Li

> Expose hitRate metric for JDBCLookupFunction
> 
>
> Key: FLINK-16038
> URL: https://issues.apache.org/jira/browse/FLINK-16038
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / JDBC
>Affects Versions: 1.10.0
>Reporter: Benchao Li
>Assignee: Benchao Li
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> {{hitRate}} is a key metric for a cache. Exposing {{hitRate}} for 
> {{JDBCLookupFunction}} can give users an intuitive understanding of the 
> cache's effectiveness.
> cc [~lzljs3620320] [~jark]
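A minimal sketch of one way such a metric could be registered, assuming Flink's {{FunctionContext}}/{{MetricGroup}} API inside a lookup function subclass; the {{hitCount}}/{{requestCount}} counters are hypothetical stand-ins for whatever the cache implementation actually tracks, not the eventual PR:
{code:java}
import java.util.concurrent.atomic.AtomicLong;
import org.apache.flink.metrics.Gauge;
import org.apache.flink.table.functions.FunctionContext;

// Hypothetical counters updated by the cache lookup path.
private final AtomicLong hitCount = new AtomicLong();
private final AtomicLong requestCount = new AtomicLong();

@Override
public void open(FunctionContext context) throws Exception {
	super.open(context);
	context.getMetricGroup().gauge("hitRate", (Gauge<Double>) () -> {
		long requests = requestCount.get();
		return requests == 0 ? 1.0 : (double) hitCount.get() / requests;
	});
}
{code}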



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] hequn8128 commented on issue #10995: [FLINK-15847][ml] Include flink-ml-api and flink-ml-lib in opt

2020-02-15 Thread GitBox
hequn8128 commented on issue #10995: [FLINK-15847][ml] Include flink-ml-api and 
flink-ml-lib in opt
URL: https://github.com/apache/flink/pull/10995#issuecomment-586566224
 
 
   @dianfu @zentol Thanks a lot for your review. The PR has been updated.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services