[GitHub] [flink] flinkbot edited a comment on issue #10315: [FLINK-14552][table] Enable partition statistics in blink planner

2019-11-25 Thread GitBox
flinkbot edited a comment on issue #10315: [FLINK-14552][table] Enable 
partition statistics in blink planner
URL: https://github.com/apache/flink/pull/10315#issuecomment-558221318
 
 
   
   ## CI report:
   
   * 05b30617c280796e80e8569dae48b726cddb23d8 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/138084321)
   * 707fc20d39d5a2c82bd4a3024cd02a932413af70 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/138184336)
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #10306: [FLINK-13943][table-api] Provide utility method to convert Flink table to Java List

2019-11-25 Thread GitBox
flinkbot edited a comment on issue #10306: [FLINK-13943][table-api] Provide 
utility method to convert Flink table to Java List
URL: https://github.com/apache/flink/pull/10306#issuecomment-558027199
 
 
   
   ## CI report:
   
   * 0b265d192e2a6024e5817317be0317136208ccaf : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/138004313)
   * 25e878014145b686c203e03decbe71041d7d2b3e : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/138010198)
   * dfc5b0f06ea6fb32bec861bd2cdd215af0c48413 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/138179696)
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] aljoscha commented on a change in pull request #10313: [FLINK-14840] Use Executor interface in SQL cli

2019-11-25 Thread GitBox
aljoscha commented on a change in pull request #10313: [FLINK-14840] Use 
Executor interface in SQL cli
URL: https://github.com/apache/flink/pull/10313#discussion_r350585143
 
 

 ##
 File path: 
flink-table/flink-sql-client/src/main/java/org/apache/flink/table/client/gateway/ProgramTargetDescriptor.java
 ##
 @@ -25,56 +25,28 @@
  */
 public class ProgramTargetDescriptor {
 
-   private final String clusterId;
-
private final String jobId;
 
-   private final String webInterfaceUrl;
-
-   public ProgramTargetDescriptor(String clusterId, String jobId, String 
webInterfaceUrl) {
-   this.clusterId = clusterId;
+   public ProgramTargetDescriptor(String jobId) {
this.jobId = jobId;
-   this.webInterfaceUrl = webInterfaceUrl;
-   }
-
-   public String getClusterId() {
-   return clusterId;
}
 
public String getJobId() {
return jobId;
}
 
-   public String getWebInterfaceUrl() {
-   return webInterfaceUrl;
-   }
-
@Override
public String toString() {
-   return String.format(
-   "Cluster ID: %s\n" +
-   "Job ID: %s\n" +
-   "Web interface: %s",
-   clusterId, jobId, webInterfaceUrl);
+   return "Job ID: %s\n" + jobId;
 
 Review comment:
   Before I had multiple fields in the `toString()` and then I reverted... 😅 
Good catch!
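
   For reference, a minimal sketch of the corrected `toString()` (an editor's illustration based on the diff above, not code taken from the PR):

```java
@Override
public String toString() {
    // format the job id into the template instead of concatenating the raw "%s" literal
    return String.format("Job ID: %s", jobId);
}
```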


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-14947) Implement LocalExecutor as new Executor interface

2019-11-25 Thread Zili Chen (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-14947?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16982227#comment-16982227
 ] 

Zili Chen commented on FLINK-14947:
---

OK, I will un-assign the issue. Besides, it would be good if you could take a look 
at FLINK-14762 & FLINK-14948, where I encountered some lifecycle-management issues 
while working locally. CC [~kkl0u]

> Implement LocalExecutor as new Executor interface
> -
>
> Key: FLINK-14947
> URL: https://issues.apache.org/jira/browse/FLINK-14947
> Project: Flink
>  Issue Type: Sub-task
>  Components: Client / Job Submission
>Reporter: Zili Chen
>Priority: Major
> Fix For: 1.10.0
>
>
> We can replace the {{PlanExecutor}} machinery with the new Executor interface. One 
> step in this series is implementing a {{LocalExecutor}} that executes a pipeline 
> within a {{MiniCluster}}. For proper lifecycle management I would wait for 
> FLINK-14762 & FLINK-14948 to be merged.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-14946) Retraction infer would result in bad plan under corner case in blink planner

2019-11-25 Thread Jing Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14946?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhang updated FLINK-14946:
---
Attachment: screenshot-5.png

> Retraction infer would result in bad plan under corner case in blink planner
> 
>
> Key: FLINK-14946
> URL: https://issues.apache.org/jira/browse/FLINK-14946
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.9.0, 1.9.1
>Reporter: Jing Zhang
>Priority: Major
> Attachments: RetractionRules1Test.scala, 
> image-2019-11-26-14-54-34-797.png, screenshot-1.png, screenshot-2.png, 
> screenshot-3.png, screenshot-4.png, screenshot-5.png
>
>
> The retraction rules can produce a bad plan in some cases. I have simplified the 
> case to the following SQL; the complete test case can be found in the attachments.
> {code:scala}
>   val join_sql =
>   """
> |SELECT
> |  ll.a AS a,
> |  ll.b AS b,
> |  cnt
> |FROM (
> | SELECT a, b, COUNT(c) AS cnt FROM l GROUP BY a, b
> |) ll
> |JOIN (
> | SELECT a, b FROM r GROUP BY a, b
> |) rr ON
> |(ll.a = rr.a AND ll.b = rr.b)
>   """.stripMargin !image-2019-11-26-14-52-52-824.png! 
> val sqlQuery =
>   s"""
>  |SELECT a, b_1, SUM(cnt) AS cnt
>  |FROM (
>  | SELECT *, b AS b_1 FROM (${join_sql})
>  |   UNION ALL
>  | SELECT *, 'SEA' AS b_1 FROM (${join_sql})
>  |) AS total_result
>  |GROUP BY a, b_1
>   """.stripMargin
> {code}
> The plan is :
>  !image-2019-11-26-14-54-34-797.png! 
> After retraction inference, we expect both join nodes in the above plan to have 
> `AccRetract` as their AccMode. However, while the AccMode of Join1 is correct, the 
> AccMode of Join2 is unexpected.
> I find that in HepPlanner, before actually applying `SetAccModeRule` to Join2, 
> HepPlanner checks whether the vertex belongs to the DAG, and the result is 
> false. So `SetAccModeRule` is never actually applied to Join2.
>  !screenshot-1.png! 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (FLINK-14947) Implement LocalExecutor as new Executor interface

2019-11-25 Thread Zili Chen (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-14947?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16982227#comment-16982227
 ] 

Zili Chen edited comment on FLINK-14947 at 11/26/19 7:56 AM:
-

OK, I will un-assign the issue. Besides, it would be good if you could take a look 
at FLINK-14762 & FLINK-14948, where I encountered some lifecycle-management issues 
while working locally. CC [~kkl0u]


was (Author: tison):
OK I will un-assign the issue. Besides, I think it would be better that you can 
take a look at FLINK-14762 & FLINK-14948 which I encounter some issues of 
lifecycle management when working locally. CC [~kkl0u]

> Implement LocalExecutor as new Executor interface
> -
>
> Key: FLINK-14947
> URL: https://issues.apache.org/jira/browse/FLINK-14947
> Project: Flink
>  Issue Type: Sub-task
>  Components: Client / Job Submission
>Reporter: Zili Chen
>Priority: Major
> Fix For: 1.10.0
>
>
> We can replace the {{PlanExecutor}} machinery with the new Executor interface. One 
> step in this series is implementing a {{LocalExecutor}} that executes a pipeline 
> within a {{MiniCluster}}. For proper lifecycle management I would wait for 
> FLINK-14762 & FLINK-14948 to be merged.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-14946) Retraction infer would result in bad plan under corner case in blink planner

2019-11-25 Thread Jing Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14946?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhang updated FLINK-14946:
---
Attachment: screenshot-4.png

> Retraction infer would result in bad plan under corner case in blink planner
> 
>
> Key: FLINK-14946
> URL: https://issues.apache.org/jira/browse/FLINK-14946
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.9.0, 1.9.1
>Reporter: Jing Zhang
>Priority: Major
> Attachments: RetractionRules1Test.scala, 
> image-2019-11-26-14-54-34-797.png, screenshot-1.png, screenshot-2.png, 
> screenshot-3.png, screenshot-4.png
>
>
> The retraction rules can produce a bad plan in some cases. I have simplified the 
> case to the following SQL; the complete test case can be found in the attachments.
> {code:scala}
>   val join_sql =
>   """
> |SELECT
> |  ll.a AS a,
> |  ll.b AS b,
> |  cnt
> |FROM (
> | SELECT a, b, COUNT(c) AS cnt FROM l GROUP BY a, b
> |) ll
> |JOIN (
> | SELECT a, b FROM r GROUP BY a, b
> |) rr ON
> |(ll.a = rr.a AND ll.b = rr.b)
>   """.stripMargin !image-2019-11-26-14-52-52-824.png! 
> val sqlQuery =
>   s"""
>  |SELECT a, b_1, SUM(cnt) AS cnt
>  |FROM (
>  | SELECT *, b AS b_1 FROM (${join_sql})
>  |   UNION ALL
>  | SELECT *, 'SEA' AS b_1 FROM (${join_sql})
>  |) AS total_result
>  |GROUP BY a, b_1
>   """.stripMargin
> {code}
> The plan is :
>  !image-2019-11-26-14-54-34-797.png! 
> After retraction inference, we expect both join nodes in the above plan to have 
> `AccRetract` as their AccMode. However, while the AccMode of Join1 is correct, the 
> AccMode of Join2 is unexpected.
> I find that in HepPlanner, before actually applying `SetAccModeRule` to Join2, 
> HepPlanner checks whether the vertex belongs to the DAG, and the result is 
> false. So `SetAccModeRule` is never actually applied to Join2.
>  !screenshot-1.png! 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (FLINK-14947) Implement LocalExecutor as new Executor interface

2019-11-25 Thread Zili Chen (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14947?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zili Chen reassigned FLINK-14947:
-

Assignee: (was: Zili Chen)

> Implement LocalExecutor as new Executor interface
> -
>
> Key: FLINK-14947
> URL: https://issues.apache.org/jira/browse/FLINK-14947
> Project: Flink
>  Issue Type: Sub-task
>  Components: Client / Job Submission
>Reporter: Zili Chen
>Priority: Major
> Fix For: 1.10.0
>
>
> We can replace the {{PlanExecutor}} machinery with the new Executor interface. One 
> step in this series is implementing a {{LocalExecutor}} that executes a pipeline 
> within a {{MiniCluster}}. For proper lifecycle management I would wait for 
> FLINK-14762 & FLINK-14948 to be merged.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-14947) Implement LocalExecutor as new Executor interface

2019-11-25 Thread Aljoscha Krettek (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-14947?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=1698#comment-1698
 ] 

Aljoscha Krettek commented on FLINK-14947:
--

I think [~kkl0u] has already started working on this and the remote executor, 
but he didn't create an issue for it.

> Implement LocalExecutor as new Executor interface
> -
>
> Key: FLINK-14947
> URL: https://issues.apache.org/jira/browse/FLINK-14947
> Project: Flink
>  Issue Type: Sub-task
>  Components: Client / Job Submission
>Reporter: Zili Chen
>Assignee: Zili Chen
>Priority: Major
> Fix For: 1.10.0
>
>
> We can replace the {{PlanExecutor}} machinery with the new Executor interface. One 
> step in this series is implementing a {{LocalExecutor}} that executes a pipeline 
> within a {{MiniCluster}}. For proper lifecycle management I would wait for 
> FLINK-14762 & FLINK-14948 to be merged.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] lirui-apache commented on issue #8859: [FLINK-12905][table-planner] Enable querying CatalogViews in legacy planner

2019-11-25 Thread GitBox
lirui-apache commented on issue #8859: [FLINK-12905][table-planner] Enable 
querying CatalogViews in legacy planner
URL: https://github.com/apache/flink/pull/8859#issuecomment-558504806
 
 
   @dawidwys Thanks for the update. +1 to having a self-contained dialect flag in 
the view definition. The PR looks good to me overall; I just left some minor comments.
   I agree with @danny0405 that we need to support views via DDL, which can be done 
as a follow-up. My understanding is that this PR only enables the planner to handle 
CatalogViews. But we shouldn't expect end users to directly create CatalogView 
instances, because users wouldn't know how to generate expanded query strings, 
right?
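
   To illustrate what "expanded query strings" means here, a small hypothetical example (plain strings only, no real Flink API; the catalog, database, and table names are made up):

```java
public class ExpandedQueryExample {
    public static void main(String[] args) {
        // what a user would naturally write when defining the view
        String originalQuery = "SELECT * FROM orders";

        // what the catalog/planner ultimately needs: fully qualified identifiers and an
        // explicit column list -- hard for end users to produce by hand
        String expandedQuery =
            "SELECT `orders`.`id`, `orders`.`amount` "
                + "FROM `default_catalog`.`default_database`.`orders`";

        System.out.println(originalQuery);
        System.out.println(expandedQuery);
    }
}
```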


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (FLINK-14946) Retraction infer would result in bad plan under corner case in blink planner

2019-11-25 Thread Jing Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14946?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhang updated FLINK-14946:
---
Attachment: screenshot-3.png

> Retraction infer would result in bad plan under corner case in blink planner
> 
>
> Key: FLINK-14946
> URL: https://issues.apache.org/jira/browse/FLINK-14946
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.9.0, 1.9.1
>Reporter: Jing Zhang
>Priority: Major
> Attachments: RetractionRules1Test.scala, 
> image-2019-11-26-14-54-34-797.png, screenshot-1.png, screenshot-2.png, 
> screenshot-3.png
>
>
> The retraction rules can produce a bad plan in some cases. I have simplified the 
> case to the following SQL; the complete test case can be found in the attachments.
> {code:scala}
>   val join_sql =
>   """
> |SELECT
> |  ll.a AS a,
> |  ll.b AS b,
> |  cnt
> |FROM (
> | SELECT a, b, COUNT(c) AS cnt FROM l GROUP BY a, b
> |) ll
> |JOIN (
> | SELECT a, b FROM r GROUP BY a, b
> |) rr ON
> |(ll.a = rr.a AND ll.b = rr.b)
>   """.stripMargin !image-2019-11-26-14-52-52-824.png! 
> val sqlQuery =
>   s"""
>  |SELECT a, b_1, SUM(cnt) AS cnt
>  |FROM (
>  | SELECT *, b AS b_1 FROM (${join_sql})
>  |   UNION ALL
>  | SELECT *, 'SEA' AS b_1 FROM (${join_sql})
>  |) AS total_result
>  |GROUP BY a, b_1
>   """.stripMargin
> {code}
> The plan is :
>  !image-2019-11-26-14-54-34-797.png! 
> After retraction inference, we expect both join nodes in the above plan to have 
> `AccRetract` as their AccMode. However, while the AccMode of Join1 is correct, the 
> AccMode of Join2 is unexpected.
> I find that in HepPlanner, before actually applying `SetAccModeRule` to Join2, 
> HepPlanner checks whether the vertex belongs to the DAG, and the result is 
> false. So `SetAccModeRule` is never actually applied to Join2.
>  !screenshot-1.png! 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-14946) Retraction infer would result in bad plan under corner case in blink planner

2019-11-25 Thread Jing Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14946?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhang updated FLINK-14946:
---
Attachment: screenshot-2.png

> Retraction infer would result in bad plan under corner case in blink planner
> 
>
> Key: FLINK-14946
> URL: https://issues.apache.org/jira/browse/FLINK-14946
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.9.0, 1.9.1
>Reporter: Jing Zhang
>Priority: Major
> Attachments: RetractionRules1Test.scala, 
> image-2019-11-26-14-54-34-797.png, screenshot-1.png, screenshot-2.png
>
>
> The retraction rules can produce a bad plan in some cases. I have simplified the 
> case to the following SQL; the complete test case can be found in the attachments.
> {code:scala}
>   val join_sql =
>   """
> |SELECT
> |  ll.a AS a,
> |  ll.b AS b,
> |  cnt
> |FROM (
> | SELECT a, b, COUNT(c) AS cnt FROM l GROUP BY a, b
> |) ll
> |JOIN (
> | SELECT a, b FROM r GROUP BY a, b
> |) rr ON
> |(ll.a = rr.a AND ll.b = rr.b)
>   """.stripMargin !image-2019-11-26-14-52-52-824.png! 
> val sqlQuery =
>   s"""
>  |SELECT a, b_1, SUM(cnt) AS cnt
>  |FROM (
>  | SELECT *, b AS b_1 FROM (${join_sql})
>  |   UNION ALL
>  | SELECT *, 'SEA' AS b_1 FROM (${join_sql})
>  |) AS total_result
>  |GROUP BY a, b_1
>   """.stripMargin
> {code}
> The plan is :
>  !image-2019-11-26-14-54-34-797.png! 
> After retraction inference, we expect both join nodes in the above plan to have 
> `AccRetract` as their AccMode. However, while the AccMode of Join1 is correct, the 
> AccMode of Join2 is unexpected.
> I find that in HepPlanner, before actually applying `SetAccModeRule` to Join2, 
> HepPlanner checks whether the vertex belongs to the DAG, and the result is 
> false. So `SetAccModeRule` is never actually applied to Join2.
>  !screenshot-1.png! 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on issue #10193: [FLINK-13938][yarn] Use pre-uploaded flink binary to accelerate flink submission

2019-11-25 Thread GitBox
flinkbot edited a comment on issue #10193: [FLINK-13938][yarn] Use pre-uploaded 
flink binary to accelerate flink submission
URL: https://github.com/apache/flink/pull/10193#issuecomment-553881527
 
 
   
   ## CI report:
   
   * e3ac83fe02a7583159184772ff4b4341fa65f827 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/136517817)
   * eefbec6756be60a27698d275a1b94bef7cd0c1e2 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/136636043)
   * 19a83ead105c951505dbafb0280fa2d25132c9a0 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/136645898)
   * dd2b911c850a56e3d6aa4a3c7e16b30431977bf5 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/136651764)
   * 06b368d9fbd88eabf71391fc1662b4d8a626d43c : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/137165694)
   * d4b77c8aab32cdeb11806fdd45ea88141051a157 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/137317107)
   * 53b86608c1d008c53112b34c634ccf96419cd921 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/137343147)
   * 2969fb4fb3afc8c331415c1ca478b05f3cb47b47 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/137394777)
   * edccba4a6db80772f9494f5631e1bc6a340d6586 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/137697818)
   * a109168bc5582fad8bbd3dade6f30990931583b5 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/138010243)
   * 6600e07db82467eb3ce41e6d1c8032c9bcdd9751 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/138072264)
   * b3155b18b290b31df0fc5e6bdf29ef421bf68373 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/138160080)
   * 5350fff0d5479bd2015de2e61895a4da06aece47 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/138181974)
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-10538) standalone-job.sh causes Classpath issues

2019-11-25 Thread limbo (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-10538?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16982216#comment-16982216
 ] 

limbo commented on FLINK-10538:
---

I think the dependencies have been shaded, so this issue could be closed.

> standalone-job.sh causes Classpath issues
> -
>
> Key: FLINK-10538
> URL: https://issues.apache.org/jira/browse/FLINK-10538
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / Docker, Deployment / Kubernetes, Runtime / 
> Coordination
>Affects Versions: 1.6.0, 1.6.1, 1.6.2, 1.7.0
>Reporter: Razvan
>Priority: Major
>
> Launching a job cluster through this script creates 
> dependency issues.
>  
> We have a job which uses AsyncHttpClient, which in turn uses the netty library. 
> Building/running a Docker image for a Flink job cluster on Kubernetes 
> (build.sh 
> [https://github.com/apache/flink/blob/release-1.6/flink-container/docker/build.sh])
>  copies our given artifact to a file called "job.jar" in the lib/ folder 
> of the distribution inside the container.
> Upon runtime (standalone-job.sh) we get:
>  
> {code:java}
> 2018-10-11 13:44:10.057 [flink-akka.actor.default-dispatcher-15] INFO  
> org.apache.flink.runtime.executiongraph.ExecutionGraph  - 
> StateProcessFunction -> ToCustomerRatingFlatMap -> async wait operator -> 
> Sink: CollectResultsSink (1/1) (f7fac66a85d41d4eac44ff609c515710) switched 
> from RUNNING to FAILED.
> java.lang.NoSuchMethodError: 
> io.netty.handler.ssl.SslContext.newClientContextInternal(Lio/netty/handler/ssl/SslProvider;Ljava/security/Provider;[Ljava/security/cert/X509Certificate;Ljavax/net/ssl/TrustManagerFactory;[Ljava/security/cert/X509Certificate;Ljava/security/PrivateKey;Ljava/lang/String;Ljavax/net/ssl/KeyManagerFactory;Ljava/lang/Iterable;Lio/netty/handler/ssl/CipherSuiteFilter;Lio/netty/handler/ssl/ApplicationProtocolConfig;[Ljava/lang/String;JJZ)Lio/netty/handler/ssl/SslContext;
> at 
> io.netty.handler.ssl.SslContextBuilder.build(SslContextBuilder.java:452)
> at 
> org.asynchttpclient.netty.ssl.DefaultSslEngineFactory.buildSslContext(DefaultSslEngineFactory.java:58)
> at 
> org.asynchttpclient.netty.ssl.DefaultSslEngineFactory.init(DefaultSslEngineFactory.java:73)
> at 
> org.asynchttpclient.netty.channel.ChannelManager.(ChannelManager.java:100)
> at 
> org.asynchttpclient.DefaultAsyncHttpClient.(DefaultAsyncHttpClient.java:89)
> at org.asynchttpclient.Dsl.asyncHttpClient(Dsl.java:32)
> at 
> com.test.events.common.asynchttp.AsyncHttpClientProvider.configureAsyncHttpClient(AsyncHttpClientProvider.java:128)
> at 
> com.test.events.common.asynchttp.AsyncHttpClientProvider.(AsyncHttpClientProvider.java:51)
> {code}
>   
> It's because it loads Apache Flink's Netty dependency first
>  
> {code:java}
> [Loaded io.netty.handler.codec.http.HttpObject from 
> file:/opt/flink-1.6.1/lib/flink-shaded-hadoop2-uber-1.6.1.jar]
> [Loaded io.netty.handler.codec.http.HttpMessage from 
> file:/opt/flink-1.6.1/lib/flink-shaded-hadoop2-uber-1.6.1.jar]
> {code}
>  
> {code:java}
> 2018-10-12 11:48:20.434 [main] INFO 
> org.apache.flink.runtime.taskexecutor.TaskManagerRunner - Classpath: 
> /opt/flink-1.6.1/lib/flink-python_2.11-1.6.1.jar:/opt/flink-1.6.1/lib/flink-shaded-hadoop2-uber-1.6.1.jar:/opt/flink-1.6.1/lib/job.jar:/opt/flink-1.6.1/lib/log4j-1.2.17.jar:/opt/flink-1.6.1/lib/logback-access.jar:/opt/flink-1.6.1/lib/logback-classic.jar:/opt/flink-1.6.1/lib/logback-core.jar:/opt/flink-1.6.1/lib/netty-buffer-4.1.30.Final.jar:/opt/flink-1.6.1/lib/netty-codec-4.1.30.Final.jar:/opt/flink-1.6.1/lib/netty-codec-socks-4.1.30.Final.jar:/opt/flink-1.6.1/lib/netty-common-4.1.30.Final.jar:/opt/flink-1.6.1/lib/netty-handler-4.1.30.Final.jar:/opt/flink-1.6.1/lib/netty-handler-proxy-4.1.30.Final.jar:/opt/flink-1.6.1/lib/netty-resolver-dns-4.1.30.Final.jar:/opt/flink-1.6.1/lib/netty-transport-4.1.30.Final.jar:/opt/flink-1.6.1/lib/netty-transport-native-epoll-4.1.30.Final.jar:/opt/flink-1.6.1/lib/netty-transport-native-unix-common-4.1.30.Final.jar:/opt/flink-1.6.1/lib/slf4j-log4j12-1.7.7.jar:/opt/flink-1.6.1/lib/flink-dist_2.11-1.6.1.jar:::
> {code}
>  
> The workaround is to rename job.jar (to 1JOB.jar, for example) so that it is loaded first:
>  
> {code:java}
> 2018-10-12 13:51:09.165 [main] INFO 
> org.apache.flink.runtime.taskexecutor.TaskManagerRunner - Classpath: 
> /Users/users/projects/flink/flink-1.6.1/lib/1JOB.jar:/Users/users/projects/flink/flink-1.6.1/lib/flink-python_2.11-1.6.1.jar:/Users/users/projects/flink/flink-1.6.1/lib/flink-shaded-hadoop2-uber-1.6.1.jar:/Users/users/projects/flink/flink-1.6.1/lib/log4j-1.2.17.jar:/Users/users/projects/flink/flink-1.6.1/lib/logback-access.jar:/Users/users/projects/flink/flink-1.6.1/lib/logback-classic.jar:/Users/users/projects/flink/flink-1.6.1/lib/logback-core.jar:

[GitHub] [flink] TisonKun commented on a change in pull request #10320: [FLINK-14948][client] Implement shutDownCluster for MiniClusterClient

2019-11-25 Thread GitBox
TisonKun commented on a change in pull request #10320: [FLINK-14948][client] 
Implement shutDownCluster for MiniClusterClient
URL: https://github.com/apache/flink/pull/10320#discussion_r350578624
 
 

 ##
 File path: 
flink-clients/src/main/java/org/apache/flink/client/program/ClusterClient.java
 ##
 @@ -65,9 +65,7 @@ default void close() throws Exception {
/**
 * Shut down the cluster that this client communicate with.
 */
-   default void shutDownCluster() {
-   throw new UnsupportedOperationException();
-   }
+   void shutDownCluster();
 
 Review comment:
   Within the Flink scope we implement this method for all cluster clients, and I 
think the consensus is that `ClusterClient` is an internal concept, at least for 
now. Besides, every implementation of `ClusterClient` should consider whether, 
and how, to shut down its associated cluster.
   
   However, although it makes little sense to add a default implementation that just 
does `throw new UnsupportedOperationException()`, if you insist I can add such a 
default implementation back so that outside implementations pass compiler checking.
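
   For completeness, a sketch of what such an added-back default could look like (mirroring the implementation removed in the diff above; the error message is illustrative):

```java
/**
 * Shut down the cluster that this client communicates with.
 * Optional fallback discussed above: implementations that cannot shut down
 * their cluster would simply keep this default.
 */
default void shutDownCluster() {
    throw new UnsupportedOperationException(
        getClass().getSimpleName() + " does not support shutting down its cluster.");
}
```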


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #10161: [FLINK-13986][runtime] Clean up legacy code for FLIP-49.

2019-11-25 Thread GitBox
flinkbot edited a comment on issue #10161: [FLINK-13986][runtime] Clean up 
legacy code for FLIP-49.
URL: https://github.com/apache/flink/pull/10161#issuecomment-552882313
 
 
   
   ## CI report:
   
   * 2c0501f41bea1da031777069dd46eb17c5ae8038 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/136122509)
   * a93fe47a7f1a91c8a33e7cac2bfc095e3f17012b : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/136270502)
   * 649d050fe4173a390df026156f6e9bae4f346360 : CANCELED 
[Build](https://travis-ci.com/flink-ci/flink/builds/136299400)
   * 08dafb5d2c6f38599bf86c06516465aeaa324941 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/136328000)
   * 8700267a462544e3d51aa40baa60ea07482305c5 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/136460282)
   * b90d7ea0e63b16b064f7e54e886202fff63a7516 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/136474353)
   * 0366a60deac3f1da4902e76a7879bd75996dc15b : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/137744753)
   * 2d1421dc8595923aa04200a90dcc3a1eb2f9229e : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/137877709)
   * f778ef42223274946279a328552684b7c03e1d1b : UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #10315: [FLINK-14552][table] Enable partition statistics in blink planner

2019-11-25 Thread GitBox
flinkbot edited a comment on issue #10315: [FLINK-14552][table] Enable 
partition statistics in blink planner
URL: https://github.com/apache/flink/pull/10315#issuecomment-558221318
 
 
   
   ## CI report:
   
   * 05b30617c280796e80e8569dae48b726cddb23d8 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/138084321)
   * 707fc20d39d5a2c82bd4a3024cd02a932413af70 : UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot commented on issue #10320: [FLINK-14948][client] Implement shutDownCluster for MiniClusterClient

2019-11-25 Thread GitBox
flinkbot commented on issue #10320: [FLINK-14948][client] Implement 
shutDownCluster for MiniClusterClient
URL: https://github.com/apache/flink/pull/10320#issuecomment-558499094
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 3a92460f94784714f9ad505432bd5684a03b56f3 (Tue Nov 26 
07:29:24 UTC 2019)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into to Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] zjffdu commented on a change in pull request #10320: [FLINK-14948][client] Implement shutDownCluster for MiniClusterClient

2019-11-25 Thread GitBox
zjffdu commented on a change in pull request #10320: [FLINK-14948][client] 
Implement shutDownCluster for MiniClusterClient
URL: https://github.com/apache/flink/pull/10320#discussion_r350576393
 
 

 ##
 File path: 
flink-clients/src/main/java/org/apache/flink/client/program/ClusterClient.java
 ##
 @@ -65,9 +65,7 @@ default void close() throws Exception {
/**
 * Shut down the cluster that this client communicate with.
 */
-   default void shutDownCluster() {
-   throw new UnsupportedOperationException();
-   }
+   void shutDownCluster();
 
 Review comment:
   What about other classes that extend ClusterClient? 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] zjffdu commented on a change in pull request #10320: [FLINK-14948][client] Implement shutDownCluster for MiniClusterClient

2019-11-25 Thread GitBox
zjffdu commented on a change in pull request #10320: [FLINK-14948][client] 
Implement shutDownCluster for MiniClusterClient
URL: https://github.com/apache/flink/pull/10320#discussion_r350576393
 
 

 ##
 File path: 
flink-clients/src/main/java/org/apache/flink/client/program/ClusterClient.java
 ##
 @@ -65,9 +65,7 @@ default void close() throws Exception {
/**
 * Shut down the cluster that this client communicate with.
 */
-   default void shutDownCluster() {
-   throw new UnsupportedOperationException();
-   }
+   void shutDownCluster();
 
 Review comment:
   What about classes that extend ClusterClient? 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (FLINK-14760) Documentation links check failed on travis

2019-11-25 Thread Gary Yao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14760?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary Yao updated FLINK-14760:
-
Fix Version/s: (was: 1.10.0)

> Documentation links check failed on travis
> --
>
> Key: FLINK-14760
> URL: https://issues.apache.org/jira/browse/FLINK-14760
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation
>Affects Versions: 1.10.0
>Reporter: Dian Fu
>Assignee: Bowen Li
>Priority: Blocker
>
> {code:java}
> [2019-11-13 16:21:19] ERROR `/dev/table/udfs.html' not found
> [2019-11-13 16:21:19] ERROR `/dev/table/functions.html' not found
> [2019-11-13 16:21:25] ERROR 
> `/zh/getting-started/tutorials/datastream_api.html' not found
> [2019-11-13 16:21:25] ERROR `/zh/dev/table/udfs.html' not found
> [2019-11-13 16:21:25] ERROR `/zh/dev/table/functions.html' not found.
> http://localhost:4000/dev/table/udfs.html:
> Remote file does not exist -- broken link!!!
> http://localhost:4000/dev/table/functions.html:
> Remote file does not exist -- broken link!!!
> --
> http://localhost:4000/zh/getting-started/tutorials/datastream_api.html:
> Remote file does not exist -- broken link!!!
> http://localhost:4000/zh/dev/table/udfs.html:
> Remote file does not exist -- broken link!!!
> http://localhost:4000/zh/dev/table/functions.html:
> Remote file does not exist -- broken link!!!
> ---
> Found 5 broken links.
> {code}
> full log: [https://travis-ci.org/apache/flink/jobs/611350857]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on issue #10294: [FLINK-14913][table] refactor CatalogFunction to remove properties

2019-11-25 Thread GitBox
flinkbot edited a comment on issue #10294: [FLINK-14913][table] refactor 
CatalogFunction to remove properties
URL: https://github.com/apache/flink/pull/10294#issuecomment-557607647
 
 
   
   ## CI report:
   
   * 579e382633f5e4f262179e8ca3018a39f80b7af9 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/137799083)
   * f94aa9cdad575542d0b355bec540348823afdc07 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/137838770)
   * aed0cd2d6c439a49f11589358e04e4ac4483781a : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/137855586)
   * 67a000f82502fa476f3c7ec0e130606f1ae40d8e : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/137926674)
   * ad91a36385bda9c21f332cbf20673e2cacad04ed : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/137950341)
   * 4fee7824604851a6e07e5c06ad1f94567a182596 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/137967057)
   * 57da5a6085413c0536683a5f515fd0708553170f : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/138157935)
   * 8a309ae63d2d6043cfe2b65612af0ca6350076d4 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/138179675)
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (FLINK-14948) Implement shutDownCluster for MiniClusterClient

2019-11-25 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14948?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-14948:
---
Labels: pull-request-available  (was: )

> Implement shutDownCluster for MiniClusterClient
> ---
>
> Key: FLINK-14948
> URL: https://issues.apache.org/jira/browse/FLINK-14948
> Project: Flink
>  Issue Type: Improvement
>  Components: Client / Job Submission
>Reporter: Zili Chen
>Assignee: Zili Chen
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.10.0
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] TisonKun opened a new pull request #10320: [FLINK-14948][client] Implement shutDownCluster for MiniClusterClient

2019-11-25 Thread GitBox
TisonKun opened a new pull request #10320: [FLINK-14948][client] Implement 
shutDownCluster for MiniClusterClient
URL: https://github.com/apache/flink/pull/10320
 
 
   ## What is the purpose of the change
   
   Indeed, we *can* implement a proper `shutDownCluster` method for 
`MiniClusterClient`.
   
   ## Verifying this change
   
   Straightforward.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
 - The serializers: (no)
 - The runtime per-record code paths (performance sensitive): (no)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (no)
 - The S3 file system connector: (no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes)
 - If yes, how is the feature documented? (not applicable)
   
   cc @kl0u @aljoscha @zjffdu 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (FLINK-14947) Implement LocalExecutor as new Executor interface

2019-11-25 Thread Zili Chen (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14947?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zili Chen updated FLINK-14947:
--
Description: We can replace {{PlanExecutor}} things with new Executor 
interface. One of this series is implement a {{LocalExecutor}} that execute 
pipeline within a {{MiniCluster}}. For proper lifecycle management I would wait 
for FLINK-14762 & FLINK-14948 being merged.  (was: We can replace 
{{PlanExecutor}} things with new Executor interface. One of this series is 
implement a {{LocalExecutor}} that execute pipeline within a {{MiniCluster}}. 
For correctly lifecycle management I would wait for FLINK-14762 & FLINK-14948 
being merged.)

> Implement LocalExecutor as new Executor interface
> -
>
> Key: FLINK-14947
> URL: https://issues.apache.org/jira/browse/FLINK-14947
> Project: Flink
>  Issue Type: Sub-task
>  Components: Client / Job Submission
>Reporter: Zili Chen
>Assignee: Zili Chen
>Priority: Major
> Fix For: 1.10.0
>
>
> We can replace the {{PlanExecutor}} machinery with the new Executor interface. One 
> step in this series is implementing a {{LocalExecutor}} that executes a pipeline 
> within a {{MiniCluster}}. For proper lifecycle management I would wait for 
> FLINK-14762 & FLINK-14948 to be merged.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-14947) Implement LocalExecutor as new Executor interface

2019-11-25 Thread Zili Chen (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14947?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zili Chen updated FLINK-14947:
--
Description: We can replace {{PlanExecutor}} things with new Executor 
interface. One of this series is implement a {{LocalExecutor}} that execute 
pipeline within a {{MiniCluster}}. For correctly lifecycle management I would 
wait for FLINK-14762 & FLINK-14948 being merged.  (was: We can replace 
{{PlanExecutor}} things with new Executor interface. One of this series is 
implement a {{LocalExecutor}} that execute pipeline within a {{MiniCluster}}. 
For correctly lifecycle management I would wait for FLINK-14762 being merged.)

> Implement LocalExecutor as new Executor interface
> -
>
> Key: FLINK-14947
> URL: https://issues.apache.org/jira/browse/FLINK-14947
> Project: Flink
>  Issue Type: Sub-task
>  Components: Client / Job Submission
>Reporter: Zili Chen
>Assignee: Zili Chen
>Priority: Major
> Fix For: 1.10.0
>
>
> We can replace {{PlanExecutor}} things with new Executor interface. One of 
> this series is implement a {{LocalExecutor}} that execute pipeline within a 
> {{MiniCluster}}. For correctly lifecycle management I would wait for 
> FLINK-14762 & FLINK-14948 being merged.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-14948) Implement shutDownCluster for MiniClusterClient

2019-11-25 Thread Zili Chen (Jira)
Zili Chen created FLINK-14948:
-

 Summary: Implement shutDownCluster for MiniClusterClient
 Key: FLINK-14948
 URL: https://issues.apache.org/jira/browse/FLINK-14948
 Project: Flink
  Issue Type: Improvement
  Components: Client / Job Submission
Reporter: Zili Chen
Assignee: Zili Chen
 Fix For: 1.10.0






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-14949) Task cancellation can be stuck against out-of-thread error

2019-11-25 Thread Hwanju Kim (Jira)
Hwanju Kim created FLINK-14949:
--

 Summary: Task cancellation can be stuck against out-of-thread error
 Key: FLINK-14949
 URL: https://issues.apache.org/jira/browse/FLINK-14949
 Project: Flink
  Issue Type: Bug
  Components: Runtime / Coordination
Affects Versions: 1.8.2
Reporter: Hwanju Kim


Task cancellation 
([_cancelOrFailAndCancelInvokable_|https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/taskmanager/Task.java#L991])
 relies on multiple separate threads, which are _TaskCanceler_, 
_TaskInterrupter_, and _TaskCancelerWatchdog_. While TaskCanceler performs 
cancellation itself, TaskInterrupter periodically interrupts a non-reacting 
task and TaskCancelerWatchdog kills JVM if cancellation has never been finished 
within a certain amount of time (by default 3 min). Together they ensure that 
cancellation either completes or is aborted, transitioning to a terminal state in 
finite time (FLINK-4715).

However, if any asynchronous thread creation fails, for example due to running out of 
threads (_java.lang.OutOfMemoryError: unable to create new native thread_), the code 
transitions to CANCELING, but no cancellation work can be performed or watched by the 
watchdog. Currently, the jobmanager does [retry 
cancellation|https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/executiongraph/Execution.java#L1121]
 against any error returned, but a next retry [returns success once it sees 
CANCELING|https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/taskmanager/Task.java#L997],
 assuming that it is in progress. This leaves the task completely stuck in CANCELING, 
which is non-terminal, so the state machine is stuck after that.

One solution would be: if a task has transitioned to CANCELING but hits a fatal error 
or OOM (i.e., _isJvmFatalOrOutOfMemoryError_ is true), indicating that it could not even 
get as far as spawning TaskCancelerWatchdog, it could immediately treat that as a fatal 
error (not safely cancellable) and call _notifyFatalError_, just as TaskCancelerWatchdog 
does but eagerly and synchronously. That way, it can at least transition out of the 
non-terminal state and, furthermore, clear potentially leaked threads/memory by 
restarting the JVM. The same method is also invoked by _failExternally_, but the 
transition to FAILED seems less critical as FAILED is already a terminal state.

Reproducing this is straightforward: run an application that keeps creating threads in 
a loop, each of which never finishes, and that has multiple tasks, so that one task 
triggers a failure and the others are then cancelled during full fail-over. In the web 
UI dashboard, the tasks on any task manager where one of the cancellation-related 
threads failed to be spawned remain stuck in CANCELING for good.
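
A minimal, self-contained sketch of the proposed control flow (an editor's illustration, not Flink's actual Task code; {{isJvmFatalOrOutOfMemoryError}} and {{notifyFatalError}} below are simplified stand-ins for the Flink facilities named above):

{code:java}
import java.util.concurrent.atomic.AtomicReference;

/**
 * Sketch only: if spawning the cancellation helper threads fails with a fatal error
 * (e.g. "OutOfMemoryError: unable to create new native thread"), escalate to a fatal
 * error right away instead of staying in CANCELING forever.
 */
public class CancellationSketch {

    enum State { RUNNING, CANCELING, CANCELED, FAILED }

    private final AtomicReference<State> state = new AtomicReference<>(State.RUNNING);

    void cancel() {
        if (state.compareAndSet(State.RUNNING, State.CANCELING)) {
            try {
                // in Flink this is where TaskCanceler, TaskInterrupter and
                // TaskCancelerWatchdog would be started
                startCancellationThreads();
            } catch (Throwable t) {
                if (isJvmFatalOrOutOfMemoryError(t)) {
                    // no watchdog could be installed -> fail fast, as suggested above
                    notifyFatalError("Could not start task cancellation threads", t);
                } else {
                    throw t;
                }
            }
        }
    }

    private void startCancellationThreads() {
        new Thread(() -> { /* cancel the invokable, interrupt it, watch the timeout */ }).start();
    }

    // simplified stand-in for ExceptionUtils#isJvmFatalOrOutOfMemoryError
    private static boolean isJvmFatalOrOutOfMemoryError(Throwable t) {
        return t instanceof OutOfMemoryError || t instanceof InternalError;
    }

    // simplified stand-in for Task#notifyFatalError: ask the TaskManager to terminate the JVM
    private void notifyFatalError(String message, Throwable cause) {
        System.err.println(message + ": " + cause);
    }
}
{code}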



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] lirui-apache commented on a change in pull request #8859: [FLINK-12905][table-planner] Enable querying CatalogViews in legacy planner

2019-11-25 Thread GitBox
lirui-apache commented on a change in pull request #8859: 
[FLINK-12905][table-planner] Enable querying CatalogViews in legacy planner
URL: https://github.com/apache/flink/pull/8859#discussion_r350568825
 
 

 ##
 File path: 
flink-table/flink-table-planner-blink/src/test/scala/org/apache/flink/table/planner/plan/common/ViewsExpanding.scala
 ##
 @@ -0,0 +1,117 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.planner.plan.common
+
+import org.apache.flink.api.scala._
+import org.apache.flink.table.api.scala._
+import org.apache.flink.table.api.{DataTypes, TableSchema}
+import org.apache.flink.table.catalog.{CatalogView, CatalogViewImpl, 
ObjectPath}
+import org.apache.flink.table.planner.utils.{TableTestBase, TableTestUtil, 
TableTestUtilBase}
+
+import org.junit.Test
+import org.junit.runner.RunWith
+import org.junit.runners.Parameterized
+import org.junit.runners.Parameterized.Parameters
+
+import java.util
+
+@RunWith(classOf[Parameterized])
+class ViewsExpanding(tableTestUtil: TableTestBase => TableTestUtil) extends 
TableTestBase {
 
 Review comment:
   I thought UT class names need the "Test" suffix to be picked up by the 
surefire plugin, no?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-14813) Expose the new mechanism implemented in FLINK-14472 as a "is back-pressured" metric

2019-11-25 Thread lining (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-14813?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16982205#comment-16982205
 ] 

lining commented on FLINK-14813:


[~pnowojski] Could you help to review it?

> Expose the new mechanism implemented in FLINK-14472 as a "is back-pressured" 
> metric
> ---
>
> Key: FLINK-14813
> URL: https://issues.apache.org/jira/browse/FLINK-14813
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Metrics, Runtime / Network, Runtime / REST
>Reporter: lining
>Assignee: lining
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> {
>   "id": "0.Shuffle.Netty.BackPressure.isBackPressured",
>  "value": "true"
> }



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-14947) Implement LocalExecutor as new Executor interface

2019-11-25 Thread Zili Chen (Jira)
Zili Chen created FLINK-14947:
-

 Summary: Implement LocalExecutor as new Executor interface
 Key: FLINK-14947
 URL: https://issues.apache.org/jira/browse/FLINK-14947
 Project: Flink
  Issue Type: Sub-task
  Components: Client / Job Submission
Reporter: Zili Chen
Assignee: Zili Chen
 Fix For: 1.10.0


We can replace {{PlanExecutor}} things with new Executor interface. One of this 
series is implement a {{LocalExecutor}} that execute pipeline within a 
{{MiniCluster}}. For correctly lifecycle management I would wait for 
FLINK-14762 being merged.
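
A rough sketch of the idea (an editor's illustration only: the real Executor interface signature, the Pipeline-to-JobGraph translation, and the lifecycle handling mentioned above are simplified away here):

{code:java}
import org.apache.flink.configuration.Configuration;
import org.apache.flink.runtime.jobgraph.JobGraph;
import org.apache.flink.runtime.minicluster.MiniCluster;
import org.apache.flink.runtime.minicluster.MiniClusterConfiguration;

/** Sketch of an executor that runs a job on an embedded MiniCluster. */
public class LocalExecutorSketch {

    public void execute(JobGraph jobGraph, Configuration configuration) throws Exception {
        MiniClusterConfiguration miniClusterConfig = new MiniClusterConfiguration.Builder()
            .setConfiguration(configuration)
            .build();

        try (MiniCluster miniCluster = new MiniCluster(miniClusterConfig)) {
            miniCluster.start();
            // submit and wait for acknowledgement; a real implementation would return
            // a job client instead of blocking here
            miniCluster.submitJob(jobGraph).get();
        }
    }
}
{code}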



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (FLINK-14936) Introduce MemoryManager#computeMemorySize to calculate managed memory from a fraction

2019-11-25 Thread Zhu Zhu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-14936?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16982195#comment-16982195
 ] 

Zhu Zhu edited comment on FLINK-14936 at 11/26/19 7:15 AM:
---

Thanks [~azagrebin] for the explanation.
I think you are right that we can have {{computeMemorySizeForOperator}} in the 
{{AbstractStreamOperator}}(or a util) so that an operator can use it to reserve 
managed memory. In this way operator developers even do not need to be aware of 
the operator managed memory fraction in {{StreamConfig}}. This should work at 
the moment.

The limitation is that it cannot be used for other purposes, like computing 
managed memory size for the StateBackend.

The proposed {{MemoryManager#computeMemorySize}} is actually not related for a 
particular usage of managed memory. It can be used by an operator's processing 
logic, or other components like state backend. Having it paired with 
{{#reserveMemory}} is just like the other pair {{(#computeNumberOfPages, 
#allocatePages)}}.
By supporting it directly, {{MemoryManager}} methods {{#getMemorySizeByType}} 
and {{#getMemorySize}} can be private since they are exposed for this purpose 
only ATM. (Correct me if I'm wrong here)

In general, I think we should have a {{#computeMemorySize()}} in 
{{AbstractStreamOperator}} so that different operators do not need to do the same 
things to fetch the fractions from {{StreamConfig}} and compute the memory size.
For {{MemoryManager#computeMemorySize(fraction, memoryType)}}, I think we can 
also have it to further avoid code duplication and to not expose 
{{#getMemorySizeByType}} and {{#getMemorySize}}.

WDYT?


was (Author: zhuzh):
Thanks [~azagrebin] for the explanation.
I think you are right that we can have {{computeMemorySizeForOperator}} in the 
{{AbstractStreamOperator}}(or a util) so that an operator can use it to reserve 
managed memory. In this way operator developers even do not need to be aware of 
the operator managed memory fraction in {{StreamConfig}}. This should work at 
the moment.

The limitation is that it cannot be used for other purposes, like computing 
managed memory size for the StateBackend.

The proposed {{MemoryManager#computeMemorySize}} is actually not related for a 
particular usage of managed memory. It can be used by an operator's processing 
logic, or other components like state backend. Having it paired with 
{{#reserveMemory}} is just the the other pair {{(#computeNumberOfPages, 
#allocatePages)}}.
By supporting it directly, {{MemoryManager}} methods {{#getMemorySizeByType}} 
and {{#getMemorySize}} can be private since they are exposed for this purpose 
only ATM. (Correct me if I'm wrong here)

In general, I think we should have a {{#computeMemorySize()}} in 
{{AbstractStreamOperator}} so that different operators do not need to the same 
things to fetch the fractions from {{StreamConfig}} and compute the memory size.
For {{MemoryManager#computeMemorySize(fraction, memoryType)}}, I think we can 
also have it to further avoid code duplication and to not expose 
{{#getMemorySizeByType}} and {{#getMemorySize}}.

> Introduce MemoryManager#computeMemorySize to calculate managed memory from a 
> fraction
> -
>
> Key: FLINK-14936
> URL: https://issues.apache.org/jira/browse/FLINK-14936
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Coordination
>Affects Versions: 1.10.0
>Reporter: Zhu Zhu
>Priority: Major
>
> A MemoryManager#computeMemorySize(double fraction) is needed to calculate 
> managed memory bytes from a fraction.
> It can be helpful for operators to get the memory size it can reserve and for 
> further #reserveMemory. (Similar to #computeNumberOfPages).
> Here are two cases that may need this method in near future:
> 1. [Python operator memory 
> management|https://lists.apache.org/thread.html/dd4dedeb9354c2ee559cd2f15629c719853915b5efb31a0eafee9361@%3Cdev.flink.apache.org%3E]
> 2. [Statebackend memory 
> management|https://issues.apache.org/jira/browse/FLINK-14883]
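
For illustration, a minimal, hypothetical sketch of such a fraction-to-bytes computation (class and field names are placeholders, not Flink's actual MemoryManager):

{code:java}
/** Sketch: derive a managed-memory budget in bytes from a fraction, mirroring computeNumberOfPages. */
public final class ManagedMemorySketch {

    private final long totalManagedMemoryBytes;

    public ManagedMemorySketch(long totalManagedMemoryBytes) {
        this.totalManagedMemoryBytes = totalManagedMemoryBytes;
    }

    /** Computes the memory size corresponding to the given fraction of managed memory. */
    public long computeMemorySize(double fraction) {
        if (fraction <= 0 || fraction > 1) {
            throw new IllegalArgumentException(
                "The fraction of memory to allocate must be within (0, 1], was: " + fraction);
        }
        return (long) Math.floor(totalManagedMemoryBytes * fraction);
    }
}
{code}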



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (FLINK-14936) Introduce MemoryManager#computeMemorySize to calculate managed memory from a fraction

2019-11-25 Thread Zhu Zhu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-14936?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16982195#comment-16982195
 ] 

Zhu Zhu edited comment on FLINK-14936 at 11/26/19 7:08 AM:
---

Thanks [~azagrebin] for the explanation.
I think you are right that we can have {{computeMemorySizeForOperator}} in the 
{{AbstractStreamOperator}}(or a util) so that an operator can use it to reserve 
managed memory. In this way operator developers even do not need to be aware of 
the operator managed memory fraction in {{StreamConfig}}. This should work at 
the moment.

The limitation is that it cannot be used for other purposes, like computing 
managed memory size for the StateBackend.

The proposed {{MemoryManager#computeMemorySize}} is actually not related for a 
particular usage of managed memory. It can be used by an operator's processing 
logic, or other components like state backend. Having it paired with 
{{#reserveMemory}} is just like the other pair {{(#computeNumberOfPages, 
#allocatePages)}}.
By supporting it directly, {{MemoryManager}} methods {{#getMemorySizeByType}} 
and {{#getMemorySize}} can be private since they are exposed for this purpose 
only ATM. (Correct me if I'm wrong here)

In general, I think we should have a {{#computeMemorySize()}} in 
{{AbstractStreamOperator}} so that different operators do not need to do the same 
things to fetch the fractions from {{StreamConfig}} and compute the memory size.
For {{MemoryManager#computeMemorySize(fraction, memoryType)}}, I think we can 
also have it to further avoid code duplication and to not expose 
{{#getMemorySizeByType}} and {{#getMemorySize}}.


was (Author: zhuzh):
Thanks [~azagrebin] for the explanation.
I think you are right that we can have {{computeMemorySizeForOperator}} in the 
{{AbstractStreamOperator}} (or a util) so that an operator can use it to reserve 
managed memory. In this way operator developers do not even need to be aware of 
the operator managed memory fraction in {{StreamConfig}}. This should work at 
the moment.

The limitation is that it cannot be used for other purposes, like computing the 
managed memory size for the StateBackend.

The proposed {{MemoryManager#computeMemorySize}} is actually not tied to a 
particular usage of managed memory. It can be used by an operator's processing 
logic, or by other components like the state backend. Having it paired with 
{{#reserveMemory}} mirrors the existing pair {{(#computeNumberOfPages, 
#allocatePages)}}.
By supporting it directly, the {{MemoryManager}} methods {{#getMemorySizeByType}} 
and {{#getMemorySize}} can be made private, since they are exposed for this 
purpose only ATM. (Correct me if I'm wrong here.)

In general, I think we should have a {{#computeMemorySize()}} in 
{{AbstractStreamOperator}} so that different operators do not need to do the 
same things to fetch the fractions from {{StreamConfig}} and compute the memory size.
For {{MemoryManager#computeMemorySize(fraction, memoryType)}}, I think we can 
have it to further avoid code duplication and to avoid exposing 
{{#getMemorySizeByType}} and {{#getMemorySize}}.
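
(For illustration only, here is a minimal sketch of what the proposed method 
could look like; the class internals are assumptions made for this example and 
not Flink's actual {{MemoryManager}} implementation.)

{code:java}
// Hypothetical, simplified sketch of the proposed API; not Flink's actual MemoryManager.
public class MemoryManagerSketch {

    private final long totalManagedMemoryBytes;

    public MemoryManagerSketch(long totalManagedMemoryBytes) {
        this.totalManagedMemoryBytes = totalManagedMemoryBytes;
    }

    /**
     * Computes the managed memory (in bytes) that corresponds to the given fraction
     * of the total managed memory, analogous to #computeNumberOfPages.
     */
    public long computeMemorySize(double fraction) {
        if (fraction <= 0 || fraction > 1) {
            throw new IllegalArgumentException(
                "The fraction of memory to allocate must be within (0, 1], was: " + fraction);
        }
        return (long) Math.floor(totalManagedMemoryBytes * fraction);
    }
}
{code}

An operator (or the state backend) would then pass the returned size to 
{{#reserveMemory}}, just as {{#computeNumberOfPages}} pairs with {{#allocatePages}}.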

> Introduce MemoryManager#computeMemorySize to calculate managed memory from a 
> fraction
> -
>
> Key: FLINK-14936
> URL: https://issues.apache.org/jira/browse/FLINK-14936
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Coordination
>Affects Versions: 1.10.0
>Reporter: Zhu Zhu
>Priority: Major
>
> A MemoryManager#computeMemorySize(double fraction) is needed to calculate 
> managed memory bytes from a fraction.
> It can be helpful for an operator to get the memory size it can reserve and 
> then pass to #reserveMemory (similar to #computeNumberOfPages).
> Here are two cases that may need this method in the near future:
> 1. [Python operator memory 
> management|https://lists.apache.org/thread.html/dd4dedeb9354c2ee559cd2f15629c719853915b5efb31a0eafee9361@%3Cdev.flink.apache.org%3E]
> 2. [Statebackend memory 
> management|https://issues.apache.org/jira/browse/FLINK-14883]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-14936) Introduce MemoryManager#computeMemorySize to calculate managed memory from a fraction

2019-11-25 Thread Zhu Zhu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-14936?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16982195#comment-16982195
 ] 

Zhu Zhu commented on FLINK-14936:
-

Thanks [~azagrebin] for the explanation.
I think you are right that we can have {{computeMemorySizeForOperator}} in the 
{{AbstractStreamOperator}} (or a util) so that an operator can use it to reserve 
managed memory. In this way operator developers do not even need to be aware of 
the operator managed memory fraction in {{StreamConfig}}. This should work at 
the moment.

The limitation is that it cannot be used for other purposes, like computing the 
managed memory size for the StateBackend.

The proposed {{MemoryManager#computeMemorySize}} is actually not tied to a 
particular usage of managed memory. It can be used by an operator's processing 
logic, or by other components like the state backend. Having it paired with 
{{#reserveMemory}} mirrors the existing pair {{(#computeNumberOfPages, 
#allocatePages)}}.
By supporting it directly, the {{MemoryManager}} methods {{#getMemorySizeByType}} 
and {{#getMemorySize}} can be made private, since they are exposed for this 
purpose only ATM. (Correct me if I'm wrong here.)

In general, I think we should have a {{#computeMemorySize()}} in 
{{AbstractStreamOperator}} so that different operators do not need to do the 
same things to fetch the fractions from {{StreamConfig}} and compute the memory size.
For {{MemoryManager#computeMemorySize(fraction, memoryType)}}, I think we can 
have it to further avoid code duplication and to avoid exposing 
{{#getMemorySizeByType}} and {{#getMemorySize}}.

> Introduce MemoryManager#computeMemorySize to calculate managed memory from a 
> fraction
> -
>
> Key: FLINK-14936
> URL: https://issues.apache.org/jira/browse/FLINK-14936
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Coordination
>Affects Versions: 1.10.0
>Reporter: Zhu Zhu
>Priority: Major
>
> A MemoryManager#computeMemorySize(double fraction) is needed to calculate 
> managed memory bytes from a fraction.
> It can be helpful for an operator to get the memory size it can reserve and 
> then pass to #reserveMemory (similar to #computeNumberOfPages).
> Here are two cases that may need this method in the near future:
> 1. [Python operator memory 
> management|https://lists.apache.org/thread.html/dd4dedeb9354c2ee559cd2f15629c719853915b5efb31a0eafee9361@%3Cdev.flink.apache.org%3E]
> 2. [Statebackend memory 
> management|https://issues.apache.org/jira/browse/FLINK-14883]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on issue #10193: [FLINK-13938][yarn] Use pre-uploaded flink binary to accelerate flink submission

2019-11-25 Thread GitBox
flinkbot edited a comment on issue #10193: [FLINK-13938][yarn] Use pre-uploaded 
flink binary to accelerate flink submission
URL: https://github.com/apache/flink/pull/10193#issuecomment-553881527
 
 
   
   ## CI report:
   
   * e3ac83fe02a7583159184772ff4b4341fa65f827 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/136517817)
   * eefbec6756be60a27698d275a1b94bef7cd0c1e2 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/136636043)
   * 19a83ead105c951505dbafb0280fa2d25132c9a0 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/136645898)
   * dd2b911c850a56e3d6aa4a3c7e16b30431977bf5 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/136651764)
   * 06b368d9fbd88eabf71391fc1662b4d8a626d43c : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/137165694)
   * d4b77c8aab32cdeb11806fdd45ea88141051a157 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/137317107)
   * 53b86608c1d008c53112b34c634ccf96419cd921 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/137343147)
   * 2969fb4fb3afc8c331415c1ca478b05f3cb47b47 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/137394777)
   * edccba4a6db80772f9494f5631e1bc6a340d6586 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/137697818)
   * a109168bc5582fad8bbd3dade6f30990931583b5 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/138010243)
   * 6600e07db82467eb3ce41e6d1c8032c9bcdd9751 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/138072264)
   * b3155b18b290b31df0fc5e6bdf29ef421bf68373 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/138160080)
   * 5350fff0d5479bd2015de2e61895a4da06aece47 : UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (FLINK-14946) Retraction infer would result in bad plan under corner case in blink planner

2019-11-25 Thread Jing Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14946?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhang updated FLINK-14946:
---
Description: 
Retraction rules can result in a bad plan in some cases. I have simplified the 
case to the following SQL; the complete test case can be found in the attachments.

{code:scala}
  val join_sql =
  """
|SELECT
|  ll.a AS a,
|  ll.b AS b,
|  cnt
|FROM (
| SELECT a, b, COUNT(c) AS cnt FROM l GROUP BY a, b
|) ll
|JOIN (
| SELECT a, b FROM r GROUP BY a, b
|) rr ON
|(ll.a = rr.a AND ll.b = rr.b)
  """.stripMargin !image-2019-11-26-14-52-52-824.png! 

val sqlQuery =
  s"""
 |SELECT a, b_1, SUM(cnt) AS cnt
 |FROM (
 | SELECT *, b AS b_1 FROM (${join_sql})
 |   UNION ALL
 | SELECT *, 'SEA' AS b_1 FROM (${join_sql})
 |) AS total_result
 |GROUP BY a, b_1
  """.stripMargin
{code}

The plan is :
 !image-2019-11-26-14-54-34-797.png! 
After retraction inference, we expect both join nodes in the above plan to have 
`AccRetract` as AccMode. However, while the AccMode of Join1 is correct, the 
AccMode of Join2 is unexpected.

I found that in HepPlanner, before actually applying `SetAccModeRule` to Join2, 
HepPlanner checks whether the vertex belongs to the DAG, and the result is 
false. So `SetAccModeRule` is never actually applied to Join2.
 !screenshot-1.png! 

  was:
Retraction rules can result in a bad plan in some cases. I have simplified the 
case to the following SQL; the complete test case can be found in the attachments.

{code:scala}
  val join_sql =
  """
|SELECT
|  ll.a AS a,
|  ll.b AS b,
|  cnt
|FROM (
| SELECT a, b, COUNT(c) AS cnt FROM l GROUP BY a, b
|) ll
|JOIN (
| SELECT a, b FROM r GROUP BY a, b
|) rr ON
|(ll.a = rr.a AND ll.b = rr.b)
  """.stripMargin !image-2019-11-26-14-52-52-824.png! 

val sqlQuery =
  s"""
 |SELECT a, b_1, SUM(cnt) AS cnt
 |FROM (
 | SELECT *, b AS b_1 FROM (${join_sql})
 |   UNION ALL
 | SELECT *, 'SEA' AS b_1 FROM (${join_sql})
 |) AS total_result
 |GROUP BY a, b_1
  """.stripMargin
{code}

The plan is :
 !image-2019-11-26-14-54-34-797.png! 
After retraction inference, we expect both join nodes in the above plan to have 
`AccRetract` as AccMode. However, while the AccMode of Join1 is correct, the 
AccMode of Join2 is unexpected.


> Retraction infer would result in bad plan under corner case in blink planner
> 
>
> Key: FLINK-14946
> URL: https://issues.apache.org/jira/browse/FLINK-14946
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.9.0, 1.9.1
>Reporter: Jing Zhang
>Priority: Major
> Attachments: RetractionRules1Test.scala, 
> image-2019-11-26-14-54-34-797.png, screenshot-1.png
>
>
> Retraction rules can result in a bad plan in some cases. I have simplified the 
> case to the following SQL; the complete test case can be found in the attachments.
> {code:scala}
>   val join_sql =
>   """
> |SELECT
> |  ll.a AS a,
> |  ll.b AS b,
> |  cnt
> |FROM (
> | SELECT a, b, COUNT(c) AS cnt FROM l GROUP BY a, b
> |) ll
> |JOIN (
> | SELECT a, b FROM r GROUP BY a, b
> |) rr ON
> |(ll.a = rr.a AND ll.b = rr.b)
>   """.stripMargin !image-2019-11-26-14-52-52-824.png! 
> val sqlQuery =
>   s"""
>  |SELECT a, b_1, SUM(cnt) AS cnt
>  |FROM (
>  | SELECT *, b AS b_1 FROM (${join_sql})
>  |   UNION ALL
>  | SELECT *, 'SEA' AS b_1 FROM (${join_sql})
>  |) AS total_result
>  |GROUP BY a, b_1
>   """.stripMargin
> {code}
> The plan is :
>  !image-2019-11-26-14-54-34-797.png! 
> After retraction inference, we expect both join nodes in the above plan to have 
> `AccRetract` as AccMode. However, while the AccMode of Join1 is correct, the 
> AccMode of Join2 is unexpected.
> I found that in HepPlanner, before actually applying `SetAccModeRule` to Join2, 
> HepPlanner checks whether the vertex belongs to the DAG, and the result is 
> false. So `SetAccModeRule` is never actually applied to Join2.
>  !screenshot-1.png! 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-14946) Retraction infer would result in bad plan under corner case in blink planner

2019-11-25 Thread Jing Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14946?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhang updated FLINK-14946:
---
Attachment: screenshot-1.png

> Retraction infer would result in bad plan under corner case in blink planner
> 
>
> Key: FLINK-14946
> URL: https://issues.apache.org/jira/browse/FLINK-14946
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.9.0, 1.9.1
>Reporter: Jing Zhang
>Priority: Major
> Attachments: RetractionRules1Test.scala, 
> image-2019-11-26-14-54-34-797.png, screenshot-1.png
>
>
> Retraction rules can result in a bad plan in some cases. I have simplified the 
> case to the following SQL; the complete test case can be found in the attachments.
> {code:scala}
>   val join_sql =
>   """
> |SELECT
> |  ll.a AS a,
> |  ll.b AS b,
> |  cnt
> |FROM (
> | SELECT a, b, COUNT(c) AS cnt FROM l GROUP BY a, b
> |) ll
> |JOIN (
> | SELECT a, b FROM r GROUP BY a, b
> |) rr ON
> |(ll.a = rr.a AND ll.b = rr.b)
>   """.stripMargin !image-2019-11-26-14-52-52-824.png! 
> val sqlQuery =
>   s"""
>  |SELECT a, b_1, SUM(cnt) AS cnt
>  |FROM (
>  | SELECT *, b AS b_1 FROM (${join_sql})
>  |   UNION ALL
>  | SELECT *, 'SEA' AS b_1 FROM (${join_sql})
>  |) AS total_result
>  |GROUP BY a, b_1
>   """.stripMargin
> {code}
> The plan is :
>  !image-2019-11-26-14-54-34-797.png! 
> After retraction inference, we expect both join nodes in the above plan to have 
> `AccRetract` as AccMode. However, while the AccMode of Join1 is correct, the 
> AccMode of Join2 is unexpected.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-14946) Retraction infer would result in bad plan under corner case in blink planner

2019-11-25 Thread Jing Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14946?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhang updated FLINK-14946:
---
Attachment: RetractionRules1Test.scala

> Retraction infer would result in bad plan under corner case in blink planner
> 
>
> Key: FLINK-14946
> URL: https://issues.apache.org/jira/browse/FLINK-14946
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.9.0, 1.9.1
>Reporter: Jing Zhang
>Priority: Major
> Attachments: RetractionRules1Test.scala, 
> image-2019-11-26-14-54-34-797.png
>
>
> Retraction rules can result in a bad plan in some cases. I have simplified the 
> case to the following SQL; the complete test case can be found in the attachments.
> {code:scala}
>   val join_sql =
>   """
> |SELECT
> |  ll.a AS a,
> |  ll.b AS b,
> |  cnt
> |FROM (
> | SELECT a, b, COUNT(c) AS cnt FROM l GROUP BY a, b
> |) ll
> |JOIN (
> | SELECT a, b FROM r GROUP BY a, b
> |) rr ON
> |(ll.a = rr.a AND ll.b = rr.b)
>   """.stripMargin !image-2019-11-26-14-52-52-824.png! 
> val sqlQuery =
>   s"""
>  |SELECT a, b_1, SUM(cnt) AS cnt
>  |FROM (
>  | SELECT *, b AS b_1 FROM (${join_sql})
>  |   UNION ALL
>  | SELECT *, 'SEA' AS b_1 FROM (${join_sql})
>  |) AS total_result
>  |GROUP BY a, b_1
>   """.stripMargin
> {code}
> The plan is :
>  !image-2019-11-26-14-54-34-797.png! 
> After retraction inference, we expect both join nodes in the above plan to have 
> `AccRetract` as AccMode. However, while the AccMode of Join1 is correct, the 
> AccMode of Join2 is unexpected.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on issue #10319: [FLINK-14945] [runtime] simplify some code

2019-11-25 Thread GitBox
flinkbot edited a comment on issue #10319: [FLINK-14945] [runtime]  simplify 
some code
URL: https://github.com/apache/flink/pull/10319#issuecomment-558473837
 
 
   
   ## CI report:
   
   * b695eaeb18c267335f7f608602f747f3aec35ee3 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/138177444)
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #10316: [FLINK-14624][table-blink] Support computed column as rowtime attribute

2019-11-25 Thread GitBox
flinkbot edited a comment on issue #10316: [FLINK-14624][table-blink] Support 
computed column as rowtime attribute
URL: https://github.com/apache/flink/pull/10316#issuecomment-558248395
 
 
   
   ## CI report:
   
   * 3624bce085de9f708f4d5fb4a8a7617884c0adf8 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/138095604)
   * 0ed9e27f6cd27d3843eef2d8c20116ea86d6ec1a : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/138179707)
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Created] (FLINK-14946) Retraction infer would result in bad plan under corner case in blink planner

2019-11-25 Thread Jing Zhang (Jira)
Jing Zhang created FLINK-14946:
--

 Summary: Retraction infer would result in bad plan under corner 
case in blink planner
 Key: FLINK-14946
 URL: https://issues.apache.org/jira/browse/FLINK-14946
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / Planner
Affects Versions: 1.9.1, 1.9.0
Reporter: Jing Zhang
 Attachments: image-2019-11-26-14-54-34-797.png

Retraction rules can result in a bad plan in some cases. I have simplified the 
case to the following SQL; the complete test case can be found in the attachments.

{code:scala}
  val join_sql =
  """
|SELECT
|  ll.a AS a,
|  ll.b AS b,
|  cnt
|FROM (
| SELECT a, b, COUNT(c) AS cnt FROM l GROUP BY a, b
|) ll
|JOIN (
| SELECT a, b FROM r GROUP BY a, b
|) rr ON
|(ll.a = rr.a AND ll.b = rr.b)
  """.stripMargin !image-2019-11-26-14-52-52-824.png! 

val sqlQuery =
  s"""
 |SELECT a, b_1, SUM(cnt) AS cnt
 |FROM (
 | SELECT *, b AS b_1 FROM (${join_sql})
 |   UNION ALL
 | SELECT *, 'SEA' AS b_1 FROM (${join_sql})
 |) AS total_result
 |GROUP BY a, b_1
  """.stripMargin
{code}

The plan is :
 !image-2019-11-26-14-54-34-797.png! 
After retraction inference, we expect both join nodes in the above plan to have 
`AccRetract` as AccMode. However, while the AccMode of Join1 is correct, the 
AccMode of Join2 is unexpected.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on issue #10306: [FLINK-13943][table-api] Provide utility method to convert Flink table to Java List

2019-11-25 Thread GitBox
flinkbot edited a comment on issue #10306: [FLINK-13943][table-api] Provide 
utility method to convert Flink table to Java List
URL: https://github.com/apache/flink/pull/10306#issuecomment-558027199
 
 
   
   ## CI report:
   
   * 0b265d192e2a6024e5817317be0317136208ccaf : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/138004313)
   * 25e878014145b686c203e03decbe71041d7d2b3e : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/138010198)
   * dfc5b0f06ea6fb32bec861bd2cdd215af0c48413 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/138179696)
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #10294: [FLINK-14913][table] refactor CatalogFunction to remove properties

2019-11-25 Thread GitBox
flinkbot edited a comment on issue #10294: [FLINK-14913][table] refactor 
CatalogFunction to remove properties
URL: https://github.com/apache/flink/pull/10294#issuecomment-557607647
 
 
   
   ## CI report:
   
   * 579e382633f5e4f262179e8ca3018a39f80b7af9 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/137799083)
   * f94aa9cdad575542d0b355bec540348823afdc07 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/137838770)
   * aed0cd2d6c439a49f11589358e04e4ac4483781a : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/137855586)
   * 67a000f82502fa476f3c7ec0e130606f1ae40d8e : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/137926674)
   * ad91a36385bda9c21f332cbf20673e2cacad04ed : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/137950341)
   * 4fee7824604851a6e07e5c06ad1f94567a182596 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/137967057)
   * 57da5a6085413c0536683a5f515fd0708553170f : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/138157935)
   * 8a309ae63d2d6043cfe2b65612af0ca6350076d4 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/138179675)
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (FLINK-14941) The AbstractTableInputFormat#nextRecord in hbase connector will handle the same rowkey twice once encountered any exception

2019-11-25 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14941?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu updated FLINK-14941:

Component/s: Connectors / HBase

> The AbstractTableInputFormat#nextRecord in hbase connector will handle the 
> same rowkey twice once encountered any exception
> ---
>
> Key: FLINK-14941
> URL: https://issues.apache.org/jira/browse/FLINK-14941
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / HBase
>Reporter: Zheng Hu
>Assignee: Zheng Hu
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> In the mailing list thread [1], a user complains that they see the same row 
> twice whenever an HBase exception is encountered.
> The problem is here:
> {code}
> public T nextRecord(T reuse) throws IOException {
>   if (resultScanner == null) {
>   throw new IOException("No table result scanner 
> provided!");
>   }
>   try {
>   Result res = resultScanner.next();
>   if (res != null) {
>   scannedRows++;
>   currentRow = res.getRow();
>   return mapResultToOutType(res);
>   }
>   } catch (Exception e) {
>   resultScanner.close();
>   //workaround for timeout on scan
>   LOG.warn("Error after scan of " + scannedRows + " rows. 
> Retry with a new scanner...", e);
>   scan.setStartRow(currentRow);
>   resultScanner = table.getScanner(scan);
>   Result res = resultScanner.next();
>   if (res != null) {
>   scannedRows++;
>   currentRow = res.getRow();
>   return mapResultToOutType(res);
>   }
>   }
>   endReached = true;
>   return null;
>   }
> {code}
> We set the startRow of the new scan to the currentRow that has already been 
> seen, which means the currentRow will be emitted twice. Instead, we should 
> replace scan.setStartRow(currentRow) with scan.withStartRow(currentRow, false), 
> where false means the currentRow is excluded.
> [1]. 
> http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/DataSet-API-HBase-ScannerTimeoutException-and-double-Result-processing-td31174.html
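
For illustration, a minimal sketch of the suggested fix, assuming an HBase client 
that provides Scan#withStartRow(byte[], boolean); the surrounding class and fields 
are simplified placeholders, not the actual connector code:

{code:java}
// Hypothetical, simplified sketch of the suggested retry path; not the actual connector code.
import java.io.IOException;

import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

abstract class ScanRetrySketch<T> {

    private static final Logger LOG = LoggerFactory.getLogger(ScanRetrySketch.class);

    private Table table;
    private Scan scan;
    private ResultScanner resultScanner;
    private byte[] currentRow;
    private long scannedRows;
    private boolean endReached;

    protected abstract T mapResultToOutType(Result res);

    public T nextRecord(T reuse) throws IOException {
        if (resultScanner == null) {
            throw new IOException("No table result scanner provided!");
        }
        try {
            Result res = resultScanner.next();
            if (res != null) {
                scannedRows++;
                currentRow = res.getRow();
                return mapResultToOutType(res);
            }
        } catch (Exception e) {
            // As discussed in the comments, catching only HBase-specific exceptions
            // here would be preferable to catching Exception.
            resultScanner.close();
            LOG.warn("Error after scan of " + scannedRows + " rows. Retry with a new scanner...", e);
            // Exclude the row that was already emitted instead of re-reading it.
            scan.withStartRow(currentRow, false);
            resultScanner = table.getScanner(scan);
            Result res = resultScanner.next();
            if (res != null) {
                scannedRows++;
                currentRow = res.getRow();
                return mapResultToOutType(res);
            }
        }
        endReached = true;
        return null;
    }
}
{code}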



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (FLINK-14941) The AbstractTableInputFormat#nextRecord in hbase connector will handle the same rowkey twice once encountered any exception

2019-11-25 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14941?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu reassigned FLINK-14941:
---

Assignee: Zheng Hu

> The AbstractTableInputFormat#nextRecord in hbase connector will handle the 
> same rowkey twice once encountered any exception
> ---
>
> Key: FLINK-14941
> URL: https://issues.apache.org/jira/browse/FLINK-14941
> Project: Flink
>  Issue Type: Bug
>Reporter: Zheng Hu
>Assignee: Zheng Hu
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> In the mailing list thread [1], a user complains that they see the same row 
> twice whenever an HBase exception is encountered.
> The problem is here:
> {code}
> public T nextRecord(T reuse) throws IOException {
>   if (resultScanner == null) {
>   throw new IOException("No table result scanner 
> provided!");
>   }
>   try {
>   Result res = resultScanner.next();
>   if (res != null) {
>   scannedRows++;
>   currentRow = res.getRow();
>   return mapResultToOutType(res);
>   }
>   } catch (Exception e) {
>   resultScanner.close();
>   //workaround for timeout on scan
>   LOG.warn("Error after scan of " + scannedRows + " rows. 
> Retry with a new scanner...", e);
>   scan.setStartRow(currentRow);
>   resultScanner = table.getScanner(scan);
>   Result res = resultScanner.next();
>   if (res != null) {
>   scannedRows++;
>   currentRow = res.getRow();
>   return mapResultToOutType(res);
>   }
>   }
>   endReached = true;
>   return null;
>   }
> {code}
> We set the startRow of the new scan to the currentRow that has already been 
> seen, which means the currentRow will be emitted twice. Instead, we should 
> replace scan.setStartRow(currentRow) with scan.withStartRow(currentRow, false), 
> where false means the currentRow is excluded.
> [1]. 
> http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/DataSet-API-HBase-ScannerTimeoutException-and-double-Result-processing-td31174.html



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-14941) The AbstractTableInputFormat#nextRecord in hbase connector will handle the same rowkey twice once encountered any exception

2019-11-25 Thread Jark Wu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-14941?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16982179#comment-16982179
 ] 

Jark Wu commented on FLINK-14941:
-

Hi [~modavis], I checked the API and {{withStartRow}} is there in v1.4.3, but I 
agree with you that we should only re-scan if it is an HBase exception.

What do you think about [~modavis]'s point, [~openinx]?

> The AbstractTableInputFormat#nextRecord in hbase connector will handle the 
> same rowkey twice once encountered any exception
> ---
>
> Key: FLINK-14941
> URL: https://issues.apache.org/jira/browse/FLINK-14941
> Project: Flink
>  Issue Type: Bug
>Reporter: Zheng Hu
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> In the mailing list thread [1], a user complains that they see the same row 
> twice whenever an HBase exception is encountered.
> The problem is here:
> {code}
> public T nextRecord(T reuse) throws IOException {
>   if (resultScanner == null) {
>   throw new IOException("No table result scanner 
> provided!");
>   }
>   try {
>   Result res = resultScanner.next();
>   if (res != null) {
>   scannedRows++;
>   currentRow = res.getRow();
>   return mapResultToOutType(res);
>   }
>   } catch (Exception e) {
>   resultScanner.close();
>   //workaround for timeout on scan
>   LOG.warn("Error after scan of " + scannedRows + " rows. 
> Retry with a new scanner...", e);
>   scan.setStartRow(currentRow);
>   resultScanner = table.getScanner(scan);
>   Result res = resultScanner.next();
>   if (res != null) {
>   scannedRows++;
>   currentRow = res.getRow();
>   return mapResultToOutType(res);
>   }
>   }
>   endReached = true;
>   return null;
>   }
> {code}
> We set the startRow of the new scan to the currentRow that has already been 
> seen, which means the currentRow will be emitted twice. Instead, we should 
> replace scan.setStartRow(currentRow) with scan.withStartRow(currentRow, false), 
> where false means the currentRow is excluded.
> [1]. 
> http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/DataSet-API-HBase-ScannerTimeoutException-and-double-Result-processing-td31174.html



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] TisonKun commented on a change in pull request #10313: [FLINK-14840] Use Executor interface in SQL cli

2019-11-25 Thread GitBox
TisonKun commented on a change in pull request #10313: [FLINK-14840] Use 
Executor interface in SQL cli
URL: https://github.com/apache/flink/pull/10313#discussion_r350563213
 
 

 ##
 File path: 
flink-table/flink-sql-client/src/main/java/org/apache/flink/table/client/gateway/local/result/CollectStreamResult.java
 ##
 @@ -138,7 +146,10 @@ protected boolean isRetrieving() {
@Override
public void run() {
try {
-   deployer.run();
+   // fetch the job execution result, so that an 
attached cluster will shut down
+   deployer
+   .run()
+   .thenCompose((jobClient -> 
jobClient.getJobExecutionResult(classLoader)));
 
 Review comment:
   Maybe we should close the job client here?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] TisonKun commented on a change in pull request #10313: [FLINK-14840] Use Executor interface in SQL cli

2019-11-25 Thread GitBox
TisonKun commented on a change in pull request #10313: [FLINK-14840] Use 
Executor interface in SQL cli
URL: https://github.com/apache/flink/pull/10313#discussion_r350562960
 
 

 ##
 File path: 
flink-table/flink-sql-client/src/main/java/org/apache/flink/table/client/gateway/local/ProgramDeployer.java
 ##
 @@ -18,163 +18,83 @@
 
 package org.apache.flink.table.client.gateway.local;
 
-import org.apache.flink.api.common.JobExecutionResult;
-import org.apache.flink.client.ClientUtils;
-import org.apache.flink.client.deployment.ClusterDescriptor;
-import org.apache.flink.client.program.ClusterClient;
-import org.apache.flink.runtime.jobgraph.JobGraph;
-import org.apache.flink.table.client.gateway.SqlExecutionException;
-import org.apache.flink.table.client.gateway.local.result.Result;
+import org.apache.flink.api.dag.Pipeline;
+import org.apache.flink.configuration.Configuration;
+import org.apache.flink.configuration.DeploymentOptions;
+import org.apache.flink.core.execution.DefaultExecutorServiceLoader;
+import org.apache.flink.core.execution.Executor;
+import org.apache.flink.core.execution.ExecutorFactory;
+import org.apache.flink.core.execution.ExecutorServiceLoader;
+import org.apache.flink.core.execution.JobClient;
 
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
-import java.util.concurrent.BlockingQueue;
-import java.util.concurrent.LinkedBlockingDeque;
+import java.util.concurrent.CompletableFuture;
 
 /**
  * The helper class to deploy a table program on the cluster.
  */
-public class ProgramDeployer implements Runnable {
+public class ProgramDeployer {
private static final Logger LOG = 
LoggerFactory.getLogger(ProgramDeployer.class);
 
private final ExecutionContext context;
-   private final JobGraph jobGraph;
+   private final Pipeline pipeline;
private final String jobName;
-   private final Result result;
private final boolean awaitJobResult;
-   private final BlockingQueue executionResultBucket;
 
/**
 * Deploys a table program on the cluster.
 *
 * @param contextcontext with deployment information
 * @param jobNamejob name of the Flink job to be submitted
-* @param jobGraph   Flink job graph
-* @param result result that receives information about the 
target cluster
+* @param pipeline   Flink {@link Pipeline} to execute
 * @param awaitJobResult block for a job execution result from the 
cluster
 */
public ProgramDeployer(
ExecutionContext context,
String jobName,
-   JobGraph jobGraph,
-   Result result,
+   Pipeline pipeline,
boolean awaitJobResult) {
this.context = context;
-   this.jobGraph = jobGraph;
+   this.pipeline = pipeline;
this.jobName = jobName;
-   this.result = result;
this.awaitJobResult = awaitJobResult;
-   executionResultBucket = new LinkedBlockingDeque<>(1);
}
 
-   @Override
-   public void run() {
-   LOG.info("Submitting job {} for query {}`", 
jobGraph.getJobID(), jobName);
+   public CompletableFuture run() {
+   LOG.info("Submitting job {} for query {}`", pipeline, jobName);
if (LOG.isDebugEnabled()) {
LOG.debug("Submitting job {} with the following 
environment: \n{}",
-   jobGraph.getJobID(), 
context.getMergedEnvironment());
+   pipeline, 
context.getMergedEnvironment());
}
-   deployJob(context, jobGraph, result);
-   }
 
-   public JobExecutionResult fetchExecutionResult() {
-   return executionResultBucket.poll();
-   }
+   // create a copy so that we can change settings without 
affecting the original config
+   Configuration configuration = new 
Configuration(context.getFlinkConfig());
+   if (configuration.get(DeploymentOptions.TARGET) == null) {
+   throw new RuntimeException("No execution.target 
specified in your configuration file.");
+   }
 
-   /**
-* Deploys a job. Depending on the deployment creates a new job 
cluster. It saves the cluster id in
-* the result and blocks until job completion.
-*/
-   private  void deployJob(ExecutionContext context, JobGraph 
jobGraph, Result result) {
-   // create or retrieve cluster and deploy job
-   try (final ClusterDescriptor clusterDescriptor = 
context.createClusterDescriptor()) {
-   try {
-   // new cluster
-   if (context.getClusterId() == null) {
- 

[GitHub] [flink] TisonKun commented on a change in pull request #10313: [FLINK-14840] Use Executor interface in SQL cli

2019-11-25 Thread GitBox
TisonKun commented on a change in pull request #10313: [FLINK-14840] Use 
Executor interface in SQL cli
URL: https://github.com/apache/flink/pull/10313#discussion_r350562286
 
 

 ##
 File path: 
flink-table/flink-sql-client/src/main/java/org/apache/flink/table/client/gateway/local/LocalExecutor.java
 ##
 @@ -523,12 +525,12 @@ public void stop(SessionContext session) {
}
 
// store the result with a unique id (the job id for now)
-   final String resultId = jobGraph.getJobID().toString();
 
 Review comment:
   And then, once translated, we use that JobID as the JobID set on the JobGraph.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] TisonKun commented on a change in pull request #10313: [FLINK-14840] Use Executor interface in SQL cli

2019-11-25 Thread GitBox
TisonKun commented on a change in pull request #10313: [FLINK-14840] Use 
Executor interface in SQL cli
URL: https://github.com/apache/flink/pull/10313#discussion_r350558859
 
 

 ##
 File path: 
flink-table/flink-sql-client/src/main/java/org/apache/flink/table/client/gateway/local/ExecutionContext.java
 ##
 @@ -110,6 +112,8 @@
  */
 public class ExecutionContext {
 
+   protected static final Logger LOG = 
LoggerFactory.getLogger(ExecutionContext.class);
 
 Review comment:
   `private`?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] TisonKun commented on a change in pull request #10313: [FLINK-14840] Use Executor interface in SQL cli

2019-11-25 Thread GitBox
TisonKun commented on a change in pull request #10313: [FLINK-14840] Use 
Executor interface in SQL cli
URL: https://github.com/apache/flink/pull/10313#discussion_r350558703
 
 

 ##
 File path: 
flink-table/flink-sql-client/src/main/java/org/apache/flink/table/client/gateway/ProgramTargetDescriptor.java
 ##
 @@ -25,56 +25,28 @@
  */
 public class ProgramTargetDescriptor {
 
-   private final String clusterId;
-
private final String jobId;
 
-   private final String webInterfaceUrl;
-
-   public ProgramTargetDescriptor(String clusterId, String jobId, String 
webInterfaceUrl) {
-   this.clusterId = clusterId;
+   public ProgramTargetDescriptor(String jobId) {
this.jobId = jobId;
-   this.webInterfaceUrl = webInterfaceUrl;
-   }
-
-   public String getClusterId() {
-   return clusterId;
}
 
public String getJobId() {
 
 Review comment:
   Maybe out of scope for this patch, but is there any reason we stick to the 
`String` representation of `JobID`? I've checked the usages and it seems we can 
just use the `JobID` type and print its hex string representation in the 
`toString` method. The idea is that a concrete type is better than a generic 
`String` representation.
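
As a rough sketch of that suggestion (hypothetical, not the code in this PR), the 
descriptor could hold a `JobID` directly and rely on its hex `toString`:

```java
// Hypothetical sketch of the reviewer's suggestion; not the actual PR code.
import org.apache.flink.api.common.JobID;

public class ProgramTargetDescriptor {

    private final JobID jobId;

    public ProgramTargetDescriptor(JobID jobId) {
        this.jobId = jobId;
    }

    public JobID getJobId() {
        return jobId;
    }

    @Override
    public String toString() {
        // JobID#toString already prints the hex representation.
        return String.format("Job ID: %s", jobId);
    }
}
```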


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] TisonKun commented on a change in pull request #10313: [FLINK-14840] Use Executor interface in SQL cli

2019-11-25 Thread GitBox
TisonKun commented on a change in pull request #10313: [FLINK-14840] Use 
Executor interface in SQL cli
URL: https://github.com/apache/flink/pull/10313#discussion_r350562168
 
 

 ##
 File path: 
flink-table/flink-sql-client/src/main/java/org/apache/flink/table/client/gateway/local/LocalExecutor.java
 ##
 @@ -523,12 +525,12 @@ public void stop(SessionContext session) {
}
 
// store the result with a unique id (the job id for now)
-   final String resultId = jobGraph.getJobID().toString();
 
 Review comment:
   Is the `JobID` here possibly different from the actual `JobID` at runtime? I'm 
even thinking of introducing a `getJobID` method in `Pipeline`. A Pipeline 
corresponds to a job and is able to hold a job id.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] TisonKun commented on a change in pull request #10313: [FLINK-14840] Use Executor interface in SQL cli

2019-11-25 Thread GitBox
TisonKun commented on a change in pull request #10313: [FLINK-14840] Use 
Executor interface in SQL cli
URL: https://github.com/apache/flink/pull/10313#discussion_r350560496
 
 

 ##
 File path: 
flink-table/flink-sql-client/src/main/java/org/apache/flink/table/client/gateway/local/ExecutionContext.java
 ##
 @@ -242,6 +254,36 @@ public EnvironmentInstance createEnvironmentInstance() {
 
// 

 
+   private static Configuration createExecutionConfig(
+   CommandLine commandLine,
+   Options commandLineOptions,
+   List availableCommandLines) throws 
FlinkException {
+
+   LOG.debug("Available commandline options: {}", 
commandLineOptions);
+   List options = Stream
+   .of(commandLine.getOptions())
+   .map((o) -> o.getOpt() + "=" + o.getValue())
 
 Review comment:
   nit: redundant paren `(o)`


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] TisonKun commented on a change in pull request #10313: [FLINK-14840] Use Executor interface in SQL cli

2019-11-25 Thread GitBox
TisonKun commented on a change in pull request #10313: [FLINK-14840] Use 
Executor interface in SQL cli
URL: https://github.com/apache/flink/pull/10313#discussion_r350563329
 
 

 ##
 File path: 
flink-table/flink-sql-client/src/main/java/org/apache/flink/table/client/gateway/local/result/MaterializedCollectBatchResult.java
 ##
 @@ -78,8 +82,9 @@ public boolean isMaterialized() {
 
@Override
public void startRetrieval(ProgramDeployer deployer) {
-   this.deployer = deployer;
-   retrievalThread.start();
+   deployer.run()
+   .thenCompose(jobClient -> 
jobClient.getJobExecutionResult(classLoader))
 
 Review comment:
   I think we should close the job client on return.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] TisonKun commented on a change in pull request #10313: [FLINK-14840] Use Executor interface in SQL cli

2019-11-25 Thread GitBox
TisonKun commented on a change in pull request #10313: [FLINK-14840] Use 
Executor interface in SQL cli
URL: https://github.com/apache/flink/pull/10313#discussion_r350559045
 
 

 ##
 File path: 
flink-table/flink-sql-client/src/main/java/org/apache/flink/table/client/gateway/local/ExecutionContext.java
 ##
 @@ -119,7 +123,6 @@
private final Map> tableSinks;
private final Map functions;
private final Configuration flinkConfig;
-   private final Configuration executorConfig;
private final ClusterClientFactory clusterClientFactory;
private final ExecutionConfigAccessor executionParameters;
 
 Review comment:
   It seems so. If so, we can remove `createExecutionParameterProvider` also.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Resolved] (FLINK-14892) Add documentation for checkpoint directory layout

2019-11-25 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14892?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu resolved FLINK-14892.
-
Resolution: Fixed

Fixed in 1.10.0: 5984d52f6a3f39447879a026fd34485362e03ab9

> Add documentation for checkpoint directory layout
> -
>
> Key: FLINK-14892
> URL: https://issues.apache.org/jira/browse/FLINK-14892
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation, Runtime / Checkpointing
>Reporter: Congxian Qiu(klion26)
>Assignee: Congxian Qiu(klion26)
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.10.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> In FLINK-8531, we changed the checkpoint directory layout to
> {code:java}
> /user-defined-checkpoint-dir
> |
> + --shared/
> + --taskowned/
> + --chk-1/
> + --chk-2/
> + --chk-3/
> ...
> {code}
> But the directory layout is not currently described in the documentation, and I 
> found that some users are confused about this, such as [1][2], so I propose 
> adding a description of the checkpoint directory layout to the documentation, 
> maybe on the {{checkpoints#DirectoryStructure}} page [3].
>  [1] 
> [http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Flink-checkpointing-behavior-td30749.html#a30751]
>  [2] 
> [http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Apache-Flink-Operator-name-and-uuid-best-practices-td31031.html]
>  [3] 
> [https://ci.apache.org/projects/flink/flink-docs-release-1.9/ops/state/checkpoints.html#directory-structure]
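
For illustration, a minimal sketch of a job that writes checkpoints into such a 
user-defined directory; the path and interval are placeholders, and the directory 
can equally be configured via {{state.checkpoints.dir}} in {{flink-conf.yaml}}:

{code:java}
// Illustrative only: enable checkpointing to a user-defined checkpoint directory.
import org.apache.flink.runtime.state.filesystem.FsStateBackend;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class CheckpointDirExample {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        // Checkpoints are written under this directory, producing the
        // shared/, taskowned/ and chk-<n>/ subdirectories shown above.
        env.setStateBackend(new FsStateBackend("hdfs:///user-defined-checkpoint-dir"));
        env.enableCheckpointing(60_000); // checkpoint every 60 seconds

        env.fromElements(1, 2, 3).print();
        env.execute("checkpoint-dir-example");
    }
}
{code}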



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] wuchong merged pull request #10305: [FLINK-14892][docs] Add documentation for checkpoint directory layout

2019-11-25 Thread GitBox
wuchong merged pull request #10305: [FLINK-14892][docs] Add documentation for 
checkpoint directory layout
URL: https://github.com/apache/flink/pull/10305
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #10319: [FLINK-14945] [runtime] simplify some code

2019-11-25 Thread GitBox
flinkbot edited a comment on issue #10319: [FLINK-14945] [runtime]  simplify 
some code
URL: https://github.com/apache/flink/pull/10319#issuecomment-558473837
 
 
   
   ## CI report:
   
   * b695eaeb18c267335f7f608602f747f3aec35ee3 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/138177444)
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #10316: [FLINK-14624][table-blink] Support computed column as rowtime attribute

2019-11-25 Thread GitBox
flinkbot edited a comment on issue #10316: [FLINK-14624][table-blink] Support 
computed column as rowtime attribute
URL: https://github.com/apache/flink/pull/10316#issuecomment-558248395
 
 
   
   ## CI report:
   
   * 3624bce085de9f708f4d5fb4a8a7617884c0adf8 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/138095604)
   * 0ed9e27f6cd27d3843eef2d8c20116ea86d6ec1a : UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #10311: [FLINK-14762][client] Enrich JobClient API

2019-11-25 Thread GitBox
flinkbot edited a comment on issue #10311: [FLINK-14762][client] Enrich 
JobClient API
URL: https://github.com/apache/flink/pull/10311#issuecomment-558081118
 
 
   
   ## CI report:
   
   * aadc1cfef92eec54d86efbf39f50d91afda6 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/138022323)
   * daf85a75b9c24918058b8bfe09416b2828bd02a5 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/138160048)
   * 7fe9b0e6c496482990905b9cec4389c8cbb8930a : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/138161977)
   * fe4dfd5ebd0508fc24dcfcb55aab4b1c99cd6bd3 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/138175553)
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #10306: [FLINK-13943][table-api] Provide utility method to convert Flink table to Java List

2019-11-25 Thread GitBox
flinkbot edited a comment on issue #10306: [FLINK-13943][table-api] Provide 
utility method to convert Flink table to Java List
URL: https://github.com/apache/flink/pull/10306#issuecomment-558027199
 
 
   
   ## CI report:
   
   * 0b265d192e2a6024e5817317be0317136208ccaf : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/138004313)
   * 25e878014145b686c203e03decbe71041d7d2b3e : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/138010198)
   * dfc5b0f06ea6fb32bec861bd2cdd215af0c48413 : UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #10294: [FLINK-14913][table] refactor CatalogFunction to remove properties

2019-11-25 Thread GitBox
flinkbot edited a comment on issue #10294: [FLINK-14913][table] refactor 
CatalogFunction to remove properties
URL: https://github.com/apache/flink/pull/10294#issuecomment-557607647
 
 
   
   ## CI report:
   
   * 579e382633f5e4f262179e8ca3018a39f80b7af9 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/137799083)
   * f94aa9cdad575542d0b355bec540348823afdc07 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/137838770)
   * aed0cd2d6c439a49f11589358e04e4ac4483781a : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/137855586)
   * 67a000f82502fa476f3c7ec0e130606f1ae40d8e : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/137926674)
   * ad91a36385bda9c21f332cbf20673e2cacad04ed : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/137950341)
   * 4fee7824604851a6e07e5c06ad1f94567a182596 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/137967057)
   * 57da5a6085413c0536683a5f515fd0708553170f : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/138157935)
   * 8a309ae63d2d6043cfe2b65612af0ca6350076d4 : UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (FLINK-14936) Introduce MemoryManager#computeMemorySize to calculate managed memory from a fraction

2019-11-25 Thread Zhu Zhu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14936?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhu Zhu updated FLINK-14936:

Summary: Introduce MemoryManager#computeMemorySize to calculate managed 
memory from a fraction  (was: Introduce MemoryManager#computeMemorySize to 
calculate managed memory of an operator from a fraction)

> Introduce MemoryManager#computeMemorySize to calculate managed memory from a 
> fraction
> -
>
> Key: FLINK-14936
> URL: https://issues.apache.org/jira/browse/FLINK-14936
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Coordination
>Affects Versions: 1.10.0
>Reporter: Zhu Zhu
>Priority: Major
>
> A MemoryManager#computeMemorySize(double fraction) is needed to calculate 
> managed memory bytes from a fraction.
> It can be helpful for an operator to get the memory size it can reserve and 
> then pass to #reserveMemory (similar to #computeNumberOfPages).
> Here are two cases that may need this method in the near future:
> 1. [Python operator memory 
> management|https://lists.apache.org/thread.html/dd4dedeb9354c2ee559cd2f15629c719853915b5efb31a0eafee9361@%3Cdev.flink.apache.org%3E]
> 2. [Statebackend memory 
> management|https://issues.apache.org/jira/browse/FLINK-14883]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] godfreyhe commented on issue #10174: [FLINK-14625][table-planner-blink] Add a rule to eliminate cross join as much as possible without statistics

2019-11-25 Thread GitBox
godfreyhe commented on issue #10174: [FLINK-14625][table-planner-blink] Add a 
rule to eliminate cross join as much as possible without statistics
URL: https://github.com/apache/flink/pull/10174#issuecomment-558479874
 
 
   LGTM,  cc @KurtYoung 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Closed] (FLINK-14865) Unstable tests PyFlinkBlinkStreamUserDefinedFunctionTests#test_udf_in_join_condition_2

2019-11-25 Thread Hequn Cheng (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14865?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hequn Cheng closed FLINK-14865.
---
Resolution: Fixed

> Unstable tests 
> PyFlinkBlinkStreamUserDefinedFunctionTests#test_udf_in_join_condition_2
> --
>
> Key: FLINK-14865
> URL: https://issues.apache.org/jira/browse/FLINK-14865
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python
>Affects Versions: 1.10.0
>Reporter: Dian Fu
>Assignee: Wei Zhong
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.10.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> {code:java}
> Caused by: 
> org.apache.beam.vendor.guava.v26_0_jre.com.google.common.util.concurrent.UncheckedExecutionException:
>  java.io.IOException: Cannot run program 
> "/tmp/32b29e73-3348-4326-bc06-69f6adda04ea_pyflink-udf-runner.sh": error=26, 
> Text file busy547E at 
> org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache$LocalLoadingCache.getUnchecked(LocalCache.java:4966)548E
>  at 
> org.apache.beam.runners.fnexecution.control.DefaultJobBundleFactory$SimpleStageBundleFactory.(DefaultJobBundleFactory.java:211)549E
>  at 
> org.apache.beam.runners.fnexecution.control.DefaultJobBundleFactory$SimpleStageBundleFactory.(DefaultJobBundleFactory.java:202)550E
>  at 
> org.apache.beam.runners.fnexecution.control.DefaultJobBundleFactory.forStage(DefaultJobBundleFactory.java:185)551E
>  at 
> org.apache.flink.python.AbstractPythonFunctionRunner.open(AbstractPythonFunctionRunner.java:201)552E
>  at 
> org.apache.flink.table.runtime.operators.python.AbstractPythonScalarFunctionOperator$ProjectUdfInputPythonScalarFunctionRunner.open(AbstractPythonScalarFunctionOperator.java:177)553E
>  at 
> org.apache.flink.streaming.api.operators.python.AbstractPythonFunctionOperator.open(AbstractPythonFunctionOperator.java:114)554E
>  at 
> org.apache.flink.table.runtime.operators.python.AbstractPythonScalarFunctionOperator.open(AbstractPythonScalarFunctionOperator.java:137)555E
>  at 
> org.apache.flink.table.runtime.operators.python.BaseRowPythonScalarFunctionOperator.open(BaseRowPythonScalarFunctionOperator.java:83)556E
>  at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.openAllOperators(StreamTask.java:585)557E
>  at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:436)558E
>  at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:702)559E at 
> org.apache.flink.runtime.taskmanager.Task.run(Task.java:527)560E ... 1 
> more561E Caused by: java.io.IOException: Cannot run program 
> "/tmp/32b29e73-3348-4326-bc06-69f6adda04ea_pyflink-udf-runner.sh": error=26, 
> Text file busy562E at 
> java.base/java.lang.ProcessBuilder.start(ProcessBuilder.java:1128)563E at 
> java.base/java.lang.ProcessBuilder.start(ProcessBuilder.java:1071)564E at 
> org.apache.beam.runners.fnexecution.environment.ProcessManager.startProcess(ProcessManager.java:133)565E
>  at 
> org.apache.beam.runners.fnexecution.environment.ProcessEnvironmentFactory.createEnvironment(ProcessEnvironmentFactory.java:120)566E
>  at 
> org.apache.beam.runners.fnexecution.control.DefaultJobBundleFactory$1.load(DefaultJobBundleFactory.java:178)567E
>  at 
> org.apache.beam.runners.fnexecution.control.DefaultJobBundleFactory$1.load(DefaultJobBundleFactory.java:162)568E
>  at 
> org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3528)569E
>  at 
> org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2277)570E
>  at 
> org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2154)571E
>  at 
> org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2044)572E
>  at 
> org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache.get(LocalCache.java:3952)573E
>  at 
> org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache.getOrLoad(LocalCache.java:3974)574E
>  at 
> org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache$LocalLoadingCache.get(LocalCache.java:4958)575E
>  at 
> org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache$LocalLoadingCache.getUnchecked(LocalCache.java:4964)576E
>  ... 13 more577E Suppressed: java.lang.NullPointerException: Process for id 
> does not exist: 1578E at 
> org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkNotNull(Preconditions.java:895)579E
>  at 
> org.apache.beam.runners.fnexecution.environment.ProcessManager.stopProcess(ProcessManager.java:147)580E
>  at 
> org.apache.beam.runners.fnexecution.environment.Proc

[jira] [Commented] (FLINK-14865) Unstable tests PyFlinkBlinkStreamUserDefinedFunctionTests#test_udf_in_join_condition_2

2019-11-25 Thread Hequn Cheng (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-14865?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16982150#comment-16982150
 ] 

Hequn Cheng commented on FLINK-14865:
-

Resolved in 1.10.0 via 8f665be6092d138a60c7318eaca5d47da1538283

> Unstable tests 
> PyFlinkBlinkStreamUserDefinedFunctionTests#test_udf_in_join_condition_2
> --
>
> Key: FLINK-14865
> URL: https://issues.apache.org/jira/browse/FLINK-14865
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python
>Affects Versions: 1.10.0
>Reporter: Dian Fu
>Assignee: Wei Zhong
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.10.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> {code:java}
> Caused by: 
> org.apache.beam.vendor.guava.v26_0_jre.com.google.common.util.concurrent.UncheckedExecutionException:
>  java.io.IOException: Cannot run program 
> "/tmp/32b29e73-3348-4326-bc06-69f6adda04ea_pyflink-udf-runner.sh": error=26, 
> Text file busy547E at 
> org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache$LocalLoadingCache.getUnchecked(LocalCache.java:4966)548E
>  at 
> org.apache.beam.runners.fnexecution.control.DefaultJobBundleFactory$SimpleStageBundleFactory.(DefaultJobBundleFactory.java:211)549E
>  at 
> org.apache.beam.runners.fnexecution.control.DefaultJobBundleFactory$SimpleStageBundleFactory.(DefaultJobBundleFactory.java:202)550E
>  at 
> org.apache.beam.runners.fnexecution.control.DefaultJobBundleFactory.forStage(DefaultJobBundleFactory.java:185)551E
>  at 
> org.apache.flink.python.AbstractPythonFunctionRunner.open(AbstractPythonFunctionRunner.java:201)552E
>  at 
> org.apache.flink.table.runtime.operators.python.AbstractPythonScalarFunctionOperator$ProjectUdfInputPythonScalarFunctionRunner.open(AbstractPythonScalarFunctionOperator.java:177)553E
>  at 
> org.apache.flink.streaming.api.operators.python.AbstractPythonFunctionOperator.open(AbstractPythonFunctionOperator.java:114)554E
>  at 
> org.apache.flink.table.runtime.operators.python.AbstractPythonScalarFunctionOperator.open(AbstractPythonScalarFunctionOperator.java:137)555E
>  at 
> org.apache.flink.table.runtime.operators.python.BaseRowPythonScalarFunctionOperator.open(BaseRowPythonScalarFunctionOperator.java:83)556E
>  at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.openAllOperators(StreamTask.java:585)557E
>  at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:436)558E
>  at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:702)559E at 
> org.apache.flink.runtime.taskmanager.Task.run(Task.java:527)560E ... 1 
> more561E Caused by: java.io.IOException: Cannot run program 
> "/tmp/32b29e73-3348-4326-bc06-69f6adda04ea_pyflink-udf-runner.sh": error=26, 
> Text file busy562E at 
> java.base/java.lang.ProcessBuilder.start(ProcessBuilder.java:1128)563E at 
> java.base/java.lang.ProcessBuilder.start(ProcessBuilder.java:1071)564E at 
> org.apache.beam.runners.fnexecution.environment.ProcessManager.startProcess(ProcessManager.java:133)565E
>  at 
> org.apache.beam.runners.fnexecution.environment.ProcessEnvironmentFactory.createEnvironment(ProcessEnvironmentFactory.java:120)566E
>  at 
> org.apache.beam.runners.fnexecution.control.DefaultJobBundleFactory$1.load(DefaultJobBundleFactory.java:178)567E
>  at 
> org.apache.beam.runners.fnexecution.control.DefaultJobBundleFactory$1.load(DefaultJobBundleFactory.java:162)568E
>  at 
> org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3528)569E
>  at 
> org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2277)570E
>  at 
> org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2154)571E
>  at 
> org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2044)572E
>  at 
> org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache.get(LocalCache.java:3952)573E
>  at 
> org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache.getOrLoad(LocalCache.java:3974)574E
>  at 
> org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache$LocalLoadingCache.get(LocalCache.java:4958)575E
>  at 
> org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache$LocalLoadingCache.getUnchecked(LocalCache.java:4964)576E
>  ... 13 more577E Suppressed: java.lang.NullPointerException: Process for id 
> does not exist: 1578E at 
> org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkNotNull(Preconditions.java:895)579E
>  at 
> org.apache.beam.runners.fnexecution.environment.ProcessManager.

[jira] [Assigned] (FLINK-14865) Unstable tests PyFlinkBlinkStreamUserDefinedFunctionTests#test_udf_in_join_condition_2

2019-11-25 Thread Hequn Cheng (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14865?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hequn Cheng reassigned FLINK-14865:
---

Assignee: Wei Zhong

> Unstable tests 
> PyFlinkBlinkStreamUserDefinedFunctionTests#test_udf_in_join_condition_2
> --
>
> Key: FLINK-14865
> URL: https://issues.apache.org/jira/browse/FLINK-14865
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python
>Affects Versions: 1.10.0
>Reporter: Dian Fu
>Assignee: Wei Zhong
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.10.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> {code:java}
> Caused by: 
> org.apache.beam.vendor.guava.v26_0_jre.com.google.common.util.concurrent.UncheckedExecutionException:
>  java.io.IOException: Cannot run program 
> "/tmp/32b29e73-3348-4326-bc06-69f6adda04ea_pyflink-udf-runner.sh": error=26, 
> Text file busy547E at 
> org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache$LocalLoadingCache.getUnchecked(LocalCache.java:4966)548E
>  at 
> org.apache.beam.runners.fnexecution.control.DefaultJobBundleFactory$SimpleStageBundleFactory.(DefaultJobBundleFactory.java:211)549E
>  at 
> org.apache.beam.runners.fnexecution.control.DefaultJobBundleFactory$SimpleStageBundleFactory.(DefaultJobBundleFactory.java:202)550E
>  at 
> org.apache.beam.runners.fnexecution.control.DefaultJobBundleFactory.forStage(DefaultJobBundleFactory.java:185)551E
>  at 
> org.apache.flink.python.AbstractPythonFunctionRunner.open(AbstractPythonFunctionRunner.java:201)552E
>  at 
> org.apache.flink.table.runtime.operators.python.AbstractPythonScalarFunctionOperator$ProjectUdfInputPythonScalarFunctionRunner.open(AbstractPythonScalarFunctionOperator.java:177)553E
>  at 
> org.apache.flink.streaming.api.operators.python.AbstractPythonFunctionOperator.open(AbstractPythonFunctionOperator.java:114)554E
>  at 
> org.apache.flink.table.runtime.operators.python.AbstractPythonScalarFunctionOperator.open(AbstractPythonScalarFunctionOperator.java:137)555E
>  at 
> org.apache.flink.table.runtime.operators.python.BaseRowPythonScalarFunctionOperator.open(BaseRowPythonScalarFunctionOperator.java:83)556E
>  at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.openAllOperators(StreamTask.java:585)557E
>  at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:436)558E
>  at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:702)559E at 
> org.apache.flink.runtime.taskmanager.Task.run(Task.java:527)560E ... 1 
> more561E Caused by: java.io.IOException: Cannot run program 
> "/tmp/32b29e73-3348-4326-bc06-69f6adda04ea_pyflink-udf-runner.sh": error=26, 
> Text file busy562E at 
> java.base/java.lang.ProcessBuilder.start(ProcessBuilder.java:1128)563E at 
> java.base/java.lang.ProcessBuilder.start(ProcessBuilder.java:1071)564E at 
> org.apache.beam.runners.fnexecution.environment.ProcessManager.startProcess(ProcessManager.java:133)565E
>  at 
> org.apache.beam.runners.fnexecution.environment.ProcessEnvironmentFactory.createEnvironment(ProcessEnvironmentFactory.java:120)566E
>  at 
> org.apache.beam.runners.fnexecution.control.DefaultJobBundleFactory$1.load(DefaultJobBundleFactory.java:178)567E
>  at 
> org.apache.beam.runners.fnexecution.control.DefaultJobBundleFactory$1.load(DefaultJobBundleFactory.java:162)568E
>  at 
> org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3528)569E
>  at 
> org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2277)570E
>  at 
> org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2154)571E
>  at 
> org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2044)572E
>  at 
> org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache.get(LocalCache.java:3952)573E
>  at 
> org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache.getOrLoad(LocalCache.java:3974)574E
>  at 
> org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache$LocalLoadingCache.get(LocalCache.java:4958)575E
>  at 
> org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache$LocalLoadingCache.getUnchecked(LocalCache.java:4964)576E
>  ... 13 more577E Suppressed: java.lang.NullPointerException: Process for id 
> does not exist: 1578E at 
> org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkNotNull(Preconditions.java:895)579E
>  at 
> org.apache.beam.runners.fnexecution.environment.ProcessManager.stopProcess(ProcessManager.java:147)580E
>  at 
> org.apache.beam.runners.fnexecution.envir

[GitHub] [flink] TisonKun commented on a change in pull request #10193: [FLINK-13938][yarn] Use pre-uploaded flink binary to accelerate flink submission

2019-11-25 Thread GitBox
TisonKun commented on a change in pull request #10193: [FLINK-13938][yarn] Use 
pre-uploaded flink binary to accelerate flink submission
URL: https://github.com/apache/flink/pull/10193#discussion_r350555630
 
 

 ##
 File path: flink-yarn/src/main/java/org/apache/flink/yarn/Utils.java
 ##
 @@ -129,6 +134,33 @@ public static void setupYarnClassPath(Configuration conf, 
Map ap
}
}
 
+   /**
+* Setup and register a pre-uploaded resource according to the file 
status.
+*
+* @param localSrcPath
+*  path to the local file
+*  relative target path of the file (will be prefixed be 
the full home directory we set up)
+* @param preUploadedFileStatus
+*the corresponding pre-uploaded file status of localSrcPath
+*
+* @return Path to remote file (usually hdfs)
+*/
+   static Tuple2 setupPreUploadedResource(
 
 Review comment:
   Not yet. I don't have enough cycles right now to think through a well-considered 
solution for this requirement. I will revisit it later.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] hequn8128 closed pull request #10310: [FLINK-14865][python] fix unstable tests PyFlinkBlinkStreamUserDefinedFunctionTests#test_udf_in_join_condition_2

2019-11-25 Thread GitBox
hequn8128 closed pull request #10310: [FLINK-14865][python] fix unstable tests 
PyFlinkBlinkStreamUserDefinedFunctionTests#test_udf_in_join_condition_2
URL: https://github.com/apache/flink/pull/10310
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot commented on issue #10319: [FLINK-14945] [runtime] simplify some code

2019-11-25 Thread GitBox
flinkbot commented on issue #10319: [FLINK-14945] [runtime]  simplify some code
URL: https://github.com/apache/flink/pull/10319#issuecomment-558473837
 
 
   
   ## CI report:
   
   * b695eaeb18c267335f7f608602f747f3aec35ee3 : UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #10311: [FLINK-14762][client] Enrich JobClient API

2019-11-25 Thread GitBox
flinkbot edited a comment on issue #10311: [FLINK-14762][client] Enrich 
JobClient API
URL: https://github.com/apache/flink/pull/10311#issuecomment-558081118
 
 
   
   ## CI report:
   
   * aadc1cfef92eec54d86efbf39f50d91afda6 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/138022323)
   * daf85a75b9c24918058b8bfe09416b2828bd02a5 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/138160048)
   * 7fe9b0e6c496482990905b9cec4389c8cbb8930a : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/138161977)
   * fe4dfd5ebd0508fc24dcfcb55aab4b1c99cd6bd3 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/138175553)
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] TsReaper commented on issue #10306: [FLINK-13943][table-api] Provide utility method to convert Flink table to Java List

2019-11-25 Thread GitBox
TsReaper commented on issue #10306: [FLINK-13943][table-api] Provide utility 
method to convert Flink table to Java List
URL: https://github.com/apache/flink/pull/10306#issuecomment-558473091
 
 
   > @TsReaper I think it would be better to add method `Table.collect` to 
expose this functionality, otherwise I am afraid very few people would know 
that they can collect table result in such way.
   > Another method we can add is `Table.head(n)` which collect the first n 
rows for preview.
   
   We would indeed like to add a `Table#collect` method, but that would require a 
FLIP and cannot make it into 1.10 before the feature freeze. I think it is better 
to let users try out this utility method first.
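
   As an aside for readers of the archive, here is a minimal usage sketch of such a 
utility. The class/method name `TableUtils.collectToList` and its package are 
assumptions based on the PR title, not something confirmed in this thread:

   ```java
   import org.apache.flink.table.api.EnvironmentSettings;
   import org.apache.flink.table.api.Table;
   import org.apache.flink.table.api.TableEnvironment;
   import org.apache.flink.table.utils.TableUtils; // assumed location of the new helper
   import org.apache.flink.types.Row;

   import java.util.List;

   public class CollectTableExample {
       public static void main(String[] args) throws Exception {
           TableEnvironment tEnv = TableEnvironment.create(
                   EnvironmentSettings.newInstance().useBlinkPlanner().inBatchMode().build());
           Table table = tEnv.sqlQuery("SELECT 1 AS id, 'hello' AS msg");

           // Hypothetical helper from this PR: materialize the bounded table result
           // on the client side as a Java List of Rows, e.g. for tests or previews.
           List<Row> rows = TableUtils.collectToList(table);
           rows.forEach(System.out::println);
       }
   }
   ```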


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #10268: [Flink-14599][table-planner-blink] Support precision of TimestampType in blink planner

2019-11-25 Thread GitBox
flinkbot edited a comment on issue #10268: [Flink-14599][table-planner-blink] 
Support precision of TimestampType in blink planner
URL: https://github.com/apache/flink/pull/10268#issuecomment-555964030
 
 
   
   ## CI report:
   
   * 72414aad5f654b834205103f23fbbfc3d5466748 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/137370738)
   * 30d167184e68f6a44fc4f5d58228577c916d63d2 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/137496415)
   * f41e0aaf005a489fc1df5c85511ff632ed9402a7 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/137522333)
   * 29cc047cd4d65ecd9d47606843ee4893f765e8bc : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/137706167)
   * 9e6bb01366f7aaa7aacb8f74104213dc9d97ff25 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/137729165)
   * b0271e6b12a9e951074adbb50d7a7110736d61bd : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/137737105)
   * 724494d11181f6185df8de83e825e5b1d636a415 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/137741018)
   * f1c9a89535ada7e5d3b5a155fe34cac5ce1dc928 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/137902784)
   * b7b0adea530f8a8273085990cc583128c2e90fc8 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/138001736)
   * 684c5a28c65a68ca5205d45bd52ae89becb766c4 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/138172046)
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] TisonKun commented on a change in pull request #10193: [FLINK-13938][yarn] Use pre-uploaded flink binary to accelerate flink submission

2019-11-25 Thread GitBox
TisonKun commented on a change in pull request #10193: [FLINK-13938][yarn] Use 
pre-uploaded flink binary to accelerate flink submission
URL: https://github.com/apache/flink/pull/10193#discussion_r350551175
 
 

 ##
 File path: flink-yarn/src/main/java/org/apache/flink/yarn/Utils.java
 ##
 @@ -263,6 +298,36 @@ private static LocalResource 
registerLocalResource(FileSystem fs, Path remoteRsr
return localResource;
}
 
+   /**
+* Register a local resource with resource info. The resource info may 
contains multiple parts.
+* For example, 
RemotePath;[resourceSize;resourceModificationTime;LocalResourceVisibility]
+* @param resourceInfoStr resource info string
+* @param yarnConfig yarn configuration
+* @return local resource tuple, f0 is filename, f1 is local resource.
+*/
+   private static Tuple2 registerLocalResource(
 
 Review comment:
   This method is quite tricky because it introduces an in-place pattern 
"RemotePath;[resourceSize;resourceModificationTime;LocalResourceVisibility]" for 
parsing the configuration... I think it works as is, but I am +0 on this patch. 
@walterddr you can merge it if you think it is good to go.
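
   For illustration only, a minimal, self-contained sketch of how a descriptor 
following the quoted pattern could be split into its parts. The class, field names 
and default values below are made up for the example and are not the code under 
review:

   ```java
   public class ResourceInfoParser {

       /** Illustrative holder for the parsed parts of the descriptor string. */
       static final class ResourceInfo {
           final String remotePath;
           final long size;
           final long modificationTime;
           final String visibility;

           ResourceInfo(String remotePath, long size, long modificationTime, String visibility) {
               this.remotePath = remotePath;
               this.size = size;
               this.modificationTime = modificationTime;
               this.visibility = visibility;
           }
       }

       /**
        * Parses "RemotePath;[resourceSize;resourceModificationTime;LocalResourceVisibility]".
        * Only the remote path is mandatory; the bracketed parts are optional.
        */
       static ResourceInfo parse(String resourceInfoStr) {
           String[] parts = resourceInfoStr.split(";");
           String remotePath = parts[0];
           long size = parts.length > 1 ? Long.parseLong(parts[1]) : -1L;
           long modTime = parts.length > 2 ? Long.parseLong(parts[2]) : -1L;
           String visibility = parts.length > 3 ? parts[3] : "APPLICATION";
           return new ResourceInfo(remotePath, size, modTime, visibility);
       }

       public static void main(String[] args) {
           ResourceInfo info = parse("hdfs:///flink/lib/flink-dist.jar;1048576;1574650000000;PUBLIC");
           System.out.println(info.remotePath + " " + info.size + " "
                   + info.modificationTime + " " + info.visibility);
       }
   }
   ```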


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #10310: [FLINK-14865][python] fix unstable tests PyFlinkBlinkStreamUserDefinedFunctionTests#test_udf_in_join_condition_2

2019-11-25 Thread GitBox
flinkbot edited a comment on issue #10310: [FLINK-14865][python] fix unstable 
tests PyFlinkBlinkStreamUserDefinedFunctionTests#test_udf_in_join_condition_2
URL: https://github.com/apache/flink/pull/10310#issuecomment-558081072
 
 
   
   ## CI report:
   
   * 801926d95f316419f0464dd8456d55b074d9db34 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/138022289)
   * 558fa2d712d7a11a4d6d8e842ccf1e7e360adde7 : CANCELED 
[Build](https://travis-ci.com/flink-ci/flink/builds/138167883)
   * 4fd28f635730c9b6df7662b270793f4c5f1c9cfa : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/138170196)
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #10311: [FLINK-14762][client] Enrich JobClient API

2019-11-25 Thread GitBox
flinkbot edited a comment on issue #10311: [FLINK-14762][client] Enrich 
JobClient API
URL: https://github.com/apache/flink/pull/10311#issuecomment-558081118
 
 
   
   ## CI report:
   
   * aadc1cfef92eec54d86efbf39f50d91afda6 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/138022323)
   * daf85a75b9c24918058b8bfe09416b2828bd02a5 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/138160048)
   * 7fe9b0e6c496482990905b9cec4389c8cbb8930a : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/138161977)
   * fe4dfd5ebd0508fc24dcfcb55aab4b1c99cd6bd3 : UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] TisonKun commented on issue #10235: [FLINK-14839][config] Let JobGraph#classpaths become non-null

2019-11-25 Thread GitBox
TisonKun commented on issue #10235: [FLINK-14839][config] Let 
JobGraph#classpaths become non-null
URL: https://github.com/apache/flink/pull/10235#issuecomment-558465932
 
 
   @GJL could you give this patch another pass? I've addressed your comments 
above.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Assigned] (FLINK-14945) Simplify some code in runtime

2019-11-25 Thread Zili Chen (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14945?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zili Chen reassigned FLINK-14945:
-

Assignee: clay

> Simplify some code in runtime
> -
>
> Key: FLINK-14945
> URL: https://issues.apache.org/jira/browse/FLINK-14945
> Project: Flink
>  Issue Type: Task
>  Components: Runtime / Checkpointing, Runtime / Metrics, Runtime / 
> Task
>Reporter: clay
>Assignee: clay
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.10.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> I found some code that can be simplified while inspecting the code with IDEA, 
> so I submitted a PR here. This change only involves code simplification.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot commented on issue #10319: [FLINK-14945] [runtime] simplify some code

2019-11-25 Thread GitBox
flinkbot commented on issue #10319: [FLINK-14945] [runtime]  simplify some code
URL: https://github.com/apache/flink/pull/10319#issuecomment-558465317
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit b695eaeb18c267335f7f608602f747f3aec35ee3 (Tue Nov 26 
05:22:29 UTC 2019)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into to Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.
Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] TisonKun commented on issue #10319: [FLINK-14945] [runtime] simplify some code

2019-11-25 Thread GitBox
TisonKun commented on issue #10319: [FLINK-14945] [runtime]  simplify some code
URL: https://github.com/apache/flink/pull/10319#issuecomment-558465131
 
 
   @zentol I'm ok with this diff. However, I'm not sure if we still receive 
code refactor like this without functional changes.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] clay4444 opened a new pull request #10319: [FLINK-14945] [runtime] simplify some code

2019-11-25 Thread GitBox
clay opened a new pull request #10319: [FLINK-14945] [runtime]  simplify 
some code
URL: https://github.com/apache/flink/pull/10319
 
 
   ## What is the purpose of the change
   
   I found some code that can be simplified while inspecting the code with IDEA, 
so I submitted a PR here.
   
   
   ## Brief change log
   
   - Replace `Optional.isPresent()` checks with functional-style expressions (see 
the sketch after this list)
   - Simplify some `if` control flow
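
   A minimal illustration of the first kind of simplification (generic Java, not 
the actual Flink code touched by this PR):

   ```java
   import java.util.Optional;

   public class OptionalSimplification {

       public static void main(String[] args) {
           Optional<String> maybeName = Optional.of("flink");

           // Before: explicit isPresent()/get() branching.
           String before;
           if (maybeName.isPresent()) {
               before = maybeName.get().toUpperCase();
           } else {
               before = "unknown";
           }

           // After: the same logic expressed functionally.
           String after = maybeName.map(String::toUpperCase).orElse("unknown");

           System.out.println(before + " " + after);
       }
   }
   ```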
   

   ## Verifying this change
   
   This change is a trivial rework / code simplification without any test coverage.
   
   
   ## Does this pull request potentially affect one of the following parts:
   
   - Dependencies (does it add or upgrade a dependency): (no)
   - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
   - The serializers: (no)
   - The runtime per-record code paths (performance sensitive): (no)
   - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (no)
   - The S3 file system connector: (no)
   
   
   ## Documentation
   
   - Does this pull request introduce a new feature? (no)
   - If yes, how is the feature documented? (not applicable)


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Created] (FLINK-14945) Simplify some code in runtime

2019-11-25 Thread clay (Jira)
clay created FLINK-14945:


 Summary: Simplify some code in runtime
 Key: FLINK-14945
 URL: https://issues.apache.org/jira/browse/FLINK-14945
 Project: Flink
  Issue Type: Task
  Components: Runtime / Checkpointing, Runtime / Metrics, Runtime / Task
Reporter: clay
 Fix For: 1.10.0


I found some code that can be simplified while inspecting the code with IDEA, so I 
submitted a PR here. This change only involves code simplification.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] TisonKun commented on a change in pull request #10193: [FLINK-13938][yarn] Use pre-uploaded flink binary to accelerate flink submission

2019-11-25 Thread GitBox
TisonKun commented on a change in pull request #10193: [FLINK-13938][yarn] Use 
pre-uploaded flink binary to accelerate flink submission
URL: https://github.com/apache/flink/pull/10193#discussion_r350545700
 
 

 ##
 File path: flink-yarn-tests/src/test/java/org/apache/flink/yarn/YARNITCase.java
 ##
 @@ -129,11 +150,20 @@ private void 
deployPerjob(YarnConfigOptions.UserJarInclusion userJarInclusion) t
assertThat(jobResult, is(notNullValue()));

assertThat(jobResult.getSerializedThrowable().isPresent(), is(false));
 
+   checkStagingDirectory(preUploadedFlinkBinary == 
null, applicationId);
+

waitApplicationFinishedElseKillIt(applicationId, yarnAppTerminateTimeout, 
yarnClusterDescriptor);
}
}
}
 
+   private void checkStagingDirectory(boolean shouldExist, ApplicationId 
appId) throws IOException {
+   final FileSystem fs = FileSystem.get(YARN_CONFIGURATION);
+   final Path stagingDirectory = new Path(fs.getHomeDirectory(), 
".flink/" + appId.toString());
+   // If pre-uploaded flink is set correctly, the lib directory 
will not be uploaded to staging directory.
+   assertEquals(shouldExist, fs.exists(new Path(stagingDirectory, 
flinkLibFolder.getName(;
 
 Review comment:
   We can move the comment into the `message` parameter of `assertEquals`.
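
   A small self-contained sketch of the suggestion, in JUnit 4 style, with the 
quoted comment moved into the assertion's `message` parameter (the local variables 
only stand in for the ones in the quoted test):

   ```java
   import org.junit.Test;

   import static org.junit.Assert.assertEquals;

   public class AssertMessageExample {

       @Test
       public void messageParameterDocumentsIntent() {
           boolean shouldExist = true;
           boolean stagingDirContainsLib = true; // stands in for fs.exists(...) in the quoted test

           // Instead of keeping the explanation in a comment above the assertion,
           // pass it as the message so it shows up directly in the failure report.
           assertEquals(
               "If pre-uploaded flink is set correctly, the lib directory should not be "
                   + "uploaded to the staging directory.",
               shouldExist,
               stagingDirContainsLib);
       }
   }
   ```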


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] TisonKun commented on a change in pull request #10193: [FLINK-13938][yarn] Use pre-uploaded flink binary to accelerate flink submission

2019-11-25 Thread GitBox
TisonKun commented on a change in pull request #10193: [FLINK-13938][yarn] Use 
pre-uploaded flink binary to accelerate flink submission
URL: https://github.com/apache/flink/pull/10193#discussion_r350545500
 
 

 ##
 File path: flink-yarn-tests/src/test/java/org/apache/flink/yarn/YARNITCase.java
 ##
 @@ -93,8 +114,8 @@ private void 
deployPerjob(YarnConfigOptions.UserJarInclusion userJarInclusion) t
true)) {
 
yarnClusterDescriptor.setLocalJarPath(new 
Path(flinkUberjar.getAbsolutePath()));
-   
yarnClusterDescriptor.addShipFiles(Arrays.asList(flinkLibFolder.listFiles()));
-   
yarnClusterDescriptor.addShipFiles(Arrays.asList(flinkShadedHadoopDir.listFiles()));
+   
yarnClusterDescriptor.addShipFiles(Collections.singletonList(flinkLibFolder));
 
 Review comment:
   I don't insist on reverting this, but I want to confirm: we don't change anything 
with this diff, right? IIRC Flink will iterate over a directory.
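
   A tiny illustration of the question, under the assumption that shipped 
directories are walked recursively; this is plain Java to show the two variants 
side by side, not Flink's actual shipping logic:

   ```java
   import java.io.File;
   import java.util.ArrayList;
   import java.util.Arrays;
   import java.util.Collections;
   import java.util.List;

   public class ShipFilesExample {

       /** Recursively expands directories to the plain files they contain. */
       static List<File> expand(List<File> shipFiles) {
           List<File> result = new ArrayList<>();
           for (File f : shipFiles) {
               if (f.isDirectory()) {
                   File[] children = f.listFiles();
                   if (children != null) {
                       result.addAll(expand(Arrays.asList(children)));
                   }
               } else {
                   result.add(f);
               }
           }
           return result;
       }

       public static void main(String[] args) {
           File flinkLibFolder = new File("lib");
           File[] libFiles = flinkLibFolder.listFiles();

           // Variant 1 (old test code): ship the immediate children of lib/.
           List<File> children =
                   libFiles == null ? Collections.emptyList() : Arrays.asList(libFiles);

           // Variant 2 (new test code): ship the lib/ directory itself.
           List<File> directory = Collections.singletonList(flinkLibFolder);

           // If directories are walked recursively, both variants end up covering
           // the same set of files.
           System.out.println(expand(children));
           System.out.println(expand(directory));
       }
   }
   ```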


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #10268: [Flink-14599][table-planner-blink] Support precision of TimestampType in blink planner

2019-11-25 Thread GitBox
flinkbot edited a comment on issue #10268: [Flink-14599][table-planner-blink] 
Support precision of TimestampType in blink planner
URL: https://github.com/apache/flink/pull/10268#issuecomment-555964030
 
 
   
   ## CI report:
   
   * 72414aad5f654b834205103f23fbbfc3d5466748 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/137370738)
   * 30d167184e68f6a44fc4f5d58228577c916d63d2 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/137496415)
   * f41e0aaf005a489fc1df5c85511ff632ed9402a7 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/137522333)
   * 29cc047cd4d65ecd9d47606843ee4893f765e8bc : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/137706167)
   * 9e6bb01366f7aaa7aacb8f74104213dc9d97ff25 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/137729165)
   * b0271e6b12a9e951074adbb50d7a7110736d61bd : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/137737105)
   * 724494d11181f6185df8de83e825e5b1d636a415 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/137741018)
   * f1c9a89535ada7e5d3b5a155fe34cac5ce1dc928 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/137902784)
   * b7b0adea530f8a8273085990cc583128c2e90fc8 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/138001736)
   * 684c5a28c65a68ca5205d45bd52ae89becb766c4 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/138172046)
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] TisonKun closed pull request #10185: [FLINK-14762][client] Implement ClusterClientJobClientAdapter

2019-11-25 Thread GitBox
TisonKun closed pull request #10185: [FLINK-14762][client] Implement 
ClusterClientJobClientAdapter
URL: https://github.com/apache/flink/pull/10185
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] TisonKun commented on issue #10185: [FLINK-14762][client] Implement ClusterClientJobClientAdapter

2019-11-25 Thread GitBox
TisonKun commented on issue #10185: [FLINK-14762][client] Implement 
ClusterClientJobClientAdapter
URL: https://github.com/apache/flink/pull/10185#issuecomment-558456892
 
 
   closed, as it has been rebased into #10311 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #10310: [FLINK-14865][python] fix unstable tests PyFlinkBlinkStreamUserDefinedFunctionTests#test_udf_in_join_condition_2

2019-11-25 Thread GitBox
flinkbot edited a comment on issue #10310: [FLINK-14865][python] fix unstable 
tests PyFlinkBlinkStreamUserDefinedFunctionTests#test_udf_in_join_condition_2
URL: https://github.com/apache/flink/pull/10310#issuecomment-558081072
 
 
   
   ## CI report:
   
   * 801926d95f316419f0464dd8456d55b074d9db34 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/138022289)
   * 558fa2d712d7a11a4d6d8e842ccf1e7e360adde7 : CANCELED 
[Build](https://travis-ci.com/flink-ci/flink/builds/138167883)
   * 4fd28f635730c9b6df7662b270793f4c5f1c9cfa : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/138170196)
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #10017: [FLINK-14019][python] add support for managing environment and dependencies of Python UDF in Flink Python API

2019-11-25 Thread GitBox
flinkbot edited a comment on issue #10017: [FLINK-14019][python] add support 
for managing environment and dependencies of Python UDF in Flink Python API
URL: https://github.com/apache/flink/pull/10017#issuecomment-546947394
 
 
   
   ## CI report:
   
   * c851f9713cef338dd135c3e982f138d68bfbe33d : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/133821281)
   * 85c34fa3fe7ec4b016438faf40358b66f043e37e : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/134189114)
   * 880095643c38982cd28d8558fcd0426c38e3cf67 : UNKNOWN
   * 362c69a059433aeed3c05426ba5e8aae96522263 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/134530748)
   * d59863bf8e1d95df1db00bfdd73aa1cb19caa8fa : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/135220068)
   * b79460aa27c8e1a10ccfa7de914d9362ac6ad7c8 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/136068705)
   * 0b1c1af9b44a42e3eb2c30969fc801970051f88e : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/137379346)
   * 1ae7d82c2ef097d71a304e5a57be546c7129813b : CANCELED 
[Build](https://travis-ci.com/flink-ci/flink/builds/137557615)
   * 6074499cf1c05b89af3b1e2525f21361414a330a : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/137561657)
   * e027f28287fd4d3e5b79f4dfe0782adf3f93f12c : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/138165917)
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #10268: [Flink-14599][table-planner-blink] Support precision of TimestampType in blink planner

2019-11-25 Thread GitBox
flinkbot edited a comment on issue #10268: [Flink-14599][table-planner-blink] 
Support precision of TimestampType in blink planner
URL: https://github.com/apache/flink/pull/10268#issuecomment-555964030
 
 
   
   ## CI report:
   
   * 72414aad5f654b834205103f23fbbfc3d5466748 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/137370738)
   * 30d167184e68f6a44fc4f5d58228577c916d63d2 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/137496415)
   * f41e0aaf005a489fc1df5c85511ff632ed9402a7 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/137522333)
   * 29cc047cd4d65ecd9d47606843ee4893f765e8bc : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/137706167)
   * 9e6bb01366f7aaa7aacb8f74104213dc9d97ff25 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/137729165)
   * b0271e6b12a9e951074adbb50d7a7110736d61bd : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/137737105)
   * 724494d11181f6185df8de83e825e5b1d636a415 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/137741018)
   * f1c9a89535ada7e5d3b5a155fe34cac5ce1dc928 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/137902784)
   * b7b0adea530f8a8273085990cc583128c2e90fc8 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/138001736)
   * 684c5a28c65a68ca5205d45bd52ae89becb766c4 : UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] docete commented on issue #10268: [Flink-14599][table-planner-blink] Support precision of TimestampType in blink planner

2019-11-25 Thread GitBox
docete commented on issue #10268: [Flink-14599][table-planner-blink] Support 
precision of TimestampType in blink planner
URL: https://github.com/apache/flink/pull/10268#issuecomment-558450225
 
 
   @KurtYoung @JingsongLi @wuchong please take another look at this. 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] docete commented on issue #10268: [Flink-14599][table-planner-blink] Support precision of TimestampType in blink planner

2019-11-25 Thread GitBox
docete commented on issue #10268: [Flink-14599][table-planner-blink] Support 
precision of TimestampType in blink planner
URL: https://github.com/apache/flink/pull/10268#issuecomment-558449319
 
 
   > I agree with @JingsongLi . We can add some tests for high precision 
timestamp as group key, as distinct key, as time attribute.
   
   `TableSourceValidation::validateTimestampExtractorArguments` uses 
`LegacyTypeInfoDataTypeConverter` when validating the time attribute config, so 
`Timestamp(p)` with p > 3 cannot be a time attribute for now, since 
`LegacyTypeInfoDataTypeConverter` does not support it. 
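
   To make the current limitation concrete, a small sketch of the kind of precision 
guard involved; the method name and error message below are illustrative only and 
are not the actual `TableSourceValidation` code:

   ```java
   public class RowtimePrecisionCheck {

       /**
        * Illustrative guard: with the legacy type conversion in place, a TIMESTAMP(p)
        * column can only serve as a time attribute when p <= 3 (millisecond precision).
        */
       static void validateRowtimePrecision(String column, int precision) {
           if (precision > 3) {
               throw new IllegalArgumentException(
                   "Column '" + column + "' has TIMESTAMP(" + precision + "), but only "
                       + "precision <= 3 is currently supported for time attributes.");
           }
       }

       public static void main(String[] args) {
           validateRowtimePrecision("ts_millis", 3);    // accepted
           try {
               validateRowtimePrecision("ts_nanos", 9); // rejected under the current limitation
           } catch (IllegalArgumentException e) {
               System.out.println(e.getMessage());
           }
       }
   }
   ```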


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #10318: [FLINK-14721][hive]HiveTableSource implement LimitableTableSource interface

2019-11-25 Thread GitBox
flinkbot edited a comment on issue #10318: [FLINK-14721][hive]HiveTableSource 
implement LimitableTableSource interface
URL: https://github.com/apache/flink/pull/10318#issuecomment-558436713
 
 
   
   ## CI report:
   
   * 1a201bc0608815246d1550c0aa55691d59a25f1b : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/138165902)
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #10310: [FLINK-14865][python] fix unstable tests PyFlinkBlinkStreamUserDefinedFunctionTests#test_udf_in_join_condition_2

2019-11-25 Thread GitBox
flinkbot edited a comment on issue #10310: [FLINK-14865][python] fix unstable 
tests PyFlinkBlinkStreamUserDefinedFunctionTests#test_udf_in_join_condition_2
URL: https://github.com/apache/flink/pull/10310#issuecomment-558081072
 
 
   
   ## CI report:
   
   * 801926d95f316419f0464dd8456d55b074d9db34 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/138022289)
   * 558fa2d712d7a11a4d6d8e842ccf1e7e360adde7 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/138167883)
   * 4fd28f635730c9b6df7662b270793f4c5f1c9cfa : UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #10017: [FLINK-14019][python] add support for managing environment and dependencies of Python UDF in Flink Python API

2019-11-25 Thread GitBox
flinkbot edited a comment on issue #10017: [FLINK-14019][python] add support 
for managing environment and dependencies of Python UDF in Flink Python API
URL: https://github.com/apache/flink/pull/10017#issuecomment-546947394
 
 
   
   ## CI report:
   
   * c851f9713cef338dd135c3e982f138d68bfbe33d : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/133821281)
   * 85c34fa3fe7ec4b016438faf40358b66f043e37e : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/134189114)
   * 880095643c38982cd28d8558fcd0426c38e3cf67 : UNKNOWN
   * 362c69a059433aeed3c05426ba5e8aae96522263 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/134530748)
   * d59863bf8e1d95df1db00bfdd73aa1cb19caa8fa : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/135220068)
   * b79460aa27c8e1a10ccfa7de914d9362ac6ad7c8 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/136068705)
   * 0b1c1af9b44a42e3eb2c30969fc801970051f88e : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/137379346)
   * 1ae7d82c2ef097d71a304e5a57be546c7129813b : CANCELED 
[Build](https://travis-ci.com/flink-ci/flink/builds/137557615)
   * 6074499cf1c05b89af3b1e2525f21361414a330a : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/137561657)
   * e027f28287fd4d3e5b79f4dfe0782adf3f93f12c : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/138165917)
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #8444: [FLINK-10190] [Kinesis Connector] Allow AWS_REGION to be supplied along with custom Kinesis endpoint

2019-11-25 Thread GitBox
flinkbot edited a comment on issue #8444: [FLINK-10190] [Kinesis Connector] 
Allow AWS_REGION to be supplied along with custom Kinesis endpoint
URL: https://github.com/apache/flink/pull/8444#issuecomment-546119518
 
 
   
   ## CI report:
   
   * 176c41a7b12ecaa06a40496e6da76017e7aed122 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/133459462)
   * 2b045b2fcc31d63c987d33113701cee0aec547d0 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/138161996)
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #10318: [FLINK-14721][hive]HiveTableSource implement LimitableTableSource interface

2019-11-25 Thread GitBox
flinkbot edited a comment on issue #10318: [FLINK-14721][hive]HiveTableSource 
implement LimitableTableSource interface
URL: https://github.com/apache/flink/pull/10318#issuecomment-558436713
 
 
   
   ## CI report:
   
   * 1a201bc0608815246d1550c0aa55691d59a25f1b : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/138165902)
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #10310: [FLINK-14865][python] fix unstable tests PyFlinkBlinkStreamUserDefinedFunctionTests#test_udf_in_join_condition_2

2019-11-25 Thread GitBox
flinkbot edited a comment on issue #10310: [FLINK-14865][python] fix unstable 
tests PyFlinkBlinkStreamUserDefinedFunctionTests#test_udf_in_join_condition_2
URL: https://github.com/apache/flink/pull/10310#issuecomment-558081072
 
 
   
   ## CI report:
   
   * 801926d95f316419f0464dd8456d55b074d9db34 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/138022289)
   * 558fa2d712d7a11a4d6d8e842ccf1e7e360adde7 : UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #10311: [FLINK-14762][client] Enrich JobClient API

2019-11-25 Thread GitBox
flinkbot edited a comment on issue #10311: [FLINK-14762][client] Enrich 
JobClient API
URL: https://github.com/apache/flink/pull/10311#issuecomment-558081118
 
 
   
   ## CI report:
   
   * aadc1cfef92eec54d86efbf39f50d91afda6 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/138022323)
   * daf85a75b9c24918058b8bfe09416b2828bd02a5 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/138160048)
   * 7fe9b0e6c496482990905b9cec4389c8cbb8930a : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/138161977)
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-14944) Unstable test FlinkFnExecutionSyncTests.test_flink_fn_execution_pb2_synced

2019-11-25 Thread sunjincheng (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-14944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16982108#comment-16982108
 ] 

sunjincheng commented on FLINK-14944:
-

Hi [~dian.fu] Thanks for reporting and tracing this issue. In the log we can find 
that 

"ConnectionResetError: [Errno 104] Connection reset by peer", so it seems that 
this has something to do with the network (sometimes, not every time). Is it 
possible to add a check for network exceptions to the test and print the log, so 
that the test still passes and does not affect the tests of existing PRs? And I 
agree that we should find out the real root cause of this issue and fix it with a 
proper final solution. 

What do you think?

> Unstable test FlinkFnExecutionSyncTests.test_flink_fn_execution_pb2_synced
> --
>
> Key: FLINK-14944
> URL: https://issues.apache.org/jira/browse/FLINK-14944
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python
>Affects Versions: 1.10.0
>Reporter: Dian Fu
>Priority: Major
> Fix For: 1.10.0
>
>
> This tests failed occasionally:
> {code:java}
> force = 'True', output_dir = '/tmp/tmpkh6rmeig'
> def generate_proto_files(force=True, 
> output_dir=DEFAULT_PYTHON_OUTPUT_PATH):
> try:
> import grpc_tools  # noqa  # pylint: disable=unused-import
> except ImportError:
> warnings.warn('Installing grpcio-tools is recommended for 
> development.')
> 
> proto_dirs = [os.path.join(PYFLINK_ROOT_PATH, path) for path in 
> PROTO_PATHS]
> proto_files = sum(
> [glob.glob(os.path.join(d, '*.proto')) for d in proto_dirs], [])
> out_dir = os.path.join(PYFLINK_ROOT_PATH, output_dir)
> out_files = [path for path in glob.glob(os.path.join(out_dir, 
> '*_pb2.py'))]
> 
> if out_files and not proto_files and not force:
> # We have out_files but no protos; assume they're up to date.
> # This is actually the common case (e.g. installation from an 
> sdist).
> logging.info('No proto files; using existing generated files.')
> return
> 
> elif not out_files and not proto_files:
> raise RuntimeError(
> 'No proto files found in %s.' % proto_dirs)
> 
> # Regenerate iff the proto files or this file are newer.
> elif force or not out_files or len(out_files) < len(proto_files) or (
> min(os.path.getmtime(path) for path in out_files)
> <= max(os.path.getmtime(path)
>for path in proto_files + 
> [os.path.realpath(__file__)])):
> try:
> >   from grpc_tools import protoc
> E   ModuleNotFoundError: No module named 'grpc_tools'
> pyflink/gen_protos.py:70: ModuleNotFoundError
> During handling of the above exception, another exception occurred:
> self = 
>   testMethod=test_flink_fn_execution_pb2_synced>
> def test_flink_fn_execution_pb2_synced(self):
> >   generate_proto_files('True', self.tempdir)
> pyflink/fn_execution/tests/test_flink_fn_execution_pb2_synced.py:35: 
> _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
> _ 
> pyflink/gen_protos.py:83: in generate_proto_files
> target=_install_grpcio_tools_and_generate_proto_files(force, output_dir))
> pyflink/gen_protos.py:131: in _install_grpcio_tools_and_generate_proto_files
> '--upgrade', GRPC_TOOLS, "-I"])
> _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
> _ 
> popenargs = 
> (['/home/travis/build/apache/flink/flink-python/.tox/py36/bin/python', '-m', 
> 'pip', 'install', '--prefix', 
> '/home/travis/build/apache/flink/flink-python/pyflink/../.eggs/grpcio-wheels',
>  ...],)
> kwargs = {}, retcode = 2
> cmd = ['/home/travis/build/apache/flink/flink-python/.tox/py36/bin/python', 
> '-m', 'pip', 'install', '--prefix', 
> '/home/travis/build/apache/flink/flink-python/pyflink/../.eggs/grpcio-wheels',
>  ...]
> def check_call(*popenargs, **kwargs):
> """Run command with arguments.  Wait for command to complete.  If
> the exit code was zero then return, otherwise raise
> CalledProcessError.  The CalledProcessError object will have the
> return code in the returncode attribute.
> 
> The arguments are the same as for the call function.  Example:
> 
> check_call(["ls", "-l"])
> """
> retcode = call(*popenargs, **kwargs)
> if retcode:
> cmd = kwargs.get("args")
> if cmd is None:
> cmd = popenargs[0]
> >   raise CalledProcessError(retcode, cmd)
> E   subprocess.CalledProcessError: Command 
> '['/home/travis/build/apache/flink/flink-python/.tox/py36/
