[GitHub] [flink] flinkbot edited a comment on pull request #14280: [FLINK-20110][table-planner] Support 'merge' method for first_value a…

2020-12-01 Thread GitBox


flinkbot edited a comment on pull request #14280:
URL: https://github.com/apache/flink/pull/14280#issuecomment-736629561


   
   ## CI report:
   
   * 57ed58c709400cabc75c5a6a4fe8587bca16dbb8 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10428)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-19943) Support sink parallelism configuration to ElasticSearch connector

2020-12-01 Thread zhisheng (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19943?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17242126#comment-17242126
 ] 

zhisheng commented on FLINK-19943:
--

[~lzljs3620320] hi, will this subtask be merged into 1.12?

> Support sink parallelism configuration to ElasticSearch connector
> -
>
> Key: FLINK-19943
> URL: https://issues.apache.org/jira/browse/FLINK-19943
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / ElasticSearch
>Reporter: CloseRiver
>Assignee: wgcn
>Priority: Major
>  Labels: pull-request-available
>
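
For context, this subtask tracks adding a per-table `sink.parallelism` option to the Elasticsearch connector. A hedged sketch of the intended usage once the option is available (the table name, schema, and hosts below are made up for illustration):

```sql
CREATE TABLE es_sink (
  user_id STRING,
  cnt BIGINT
) WITH (
  'connector' = 'elasticsearch-7',
  'hosts' = 'http://localhost:9200',
  'index' = 'users',
  -- the option this subtask introduces: overrides the sink operator's parallelism
  'sink.parallelism' = '4'
);
```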




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-12884) FLIP-144: Native Kubernetes HA Service

2020-12-01 Thread Xintong Song (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-12884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17242124#comment-17242124
 ] 

Xintong Song commented on FLINK-12884:
--

[~shravan.adharapurapu],

K8s HA service will be released with Flink 1.12.0.

Flink 1.12.0 is not released yet, but should be out very soon. You can try it out 
now with the RC version.

https://dist.apache.org/repos/dist/dev/flink/flink-1.12.0-rc2/
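
For reference, enabling the new HA service is a configuration-only change. A minimal flink-conf.yaml sketch based on the FLIP-144 design (the cluster id and storage path below are placeholders):

```yaml
# Identifies the Flink cluster inside the Kubernetes namespace (placeholder value).
kubernetes.cluster-id: my-flink-cluster
# Selects the native Kubernetes HA services introduced by FLIP-144.
high-availability: org.apache.flink.kubernetes.highavailability.KubernetesHaServicesFactory
# Durable storage for HA metadata; any Flink-supported filesystem (placeholder path).
high-availability.storageDir: s3://flink-bucket/ha
```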

> FLIP-144: Native Kubernetes HA Service
> --
>
> Key: FLINK-12884
> URL: https://issues.apache.org/jira/browse/FLINK-12884
> Project: Flink
>  Issue Type: New Feature
>  Components: Deployment / Kubernetes, Runtime / Coordination
>Reporter: MalcolmSanders
>Assignee: Yang Wang
>Priority: Major
> Fix For: 1.12.0
>
>
> Currently Flink only supports a HighAvailabilityService based on ZooKeeper. As 
> a result, a ZooKeeper cluster has to be deployed on the k8s cluster if 
> customers need high availability for Flink. If we support a 
> HighAvailabilityService based on native k8s APIs, it will save the effort of 
> deploying ZooKeeper as well as the resources used by the ZooKeeper cluster. It 
> might be especially helpful for customers who run small-scale k8s clusters, so 
> that the Flink HighAvailabilityService does not cause too much overhead on the 
> k8s cluster.
> [FLINK-11105|https://issues.apache.org/jira/browse/FLINK-11105] previously 
> proposed a HighAvailabilityService using etcd. As [~NathanHowell] 
> suggested in FLINK-11105, since k8s doesn't expose its own etcd cluster by 
> design (see [Securing etcd 
> clusters|https://kubernetes.io/docs/tasks/administer-cluster/configure-upgrade-etcd/#securing-etcd-clusters]),
>  Flink would also require the deployment of an etcd cluster if it used etcd to 
> achieve HA.





[jira] [Assigned] (FLINK-20425) Change exit code in FatalExitExceptionHandler to be different from JvmShutdownSafeguard

2020-12-01 Thread Robert Metzger (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20425?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Metzger reassigned FLINK-20425:
--

Assignee: Nicholas Jiang

> Change exit code in FatalExitExceptionHandler to be different from 
> JvmShutdownSafeguard
> ---
>
> Key: FLINK-20425
> URL: https://issues.apache.org/jira/browse/FLINK-20425
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Coordination
>Affects Versions: 1.13.0
>Reporter: Robert Metzger
>Assignee: Nicholas Jiang
>Priority: Major
> Fix For: 1.13.0
>
>
> Both classes mentioned in the title use the same exit code (-17), making it 
> impossible to distinguish the cause of an unexpected Flink JVM exit based on 
> the exit code alone.
> Maybe we could introduce an Enum that defines all possible exit codes in 
> Flink?
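
The enum idea from the last paragraph could look roughly like this. This is a hypothetical sketch: only the -17 value appears in the issue, and the class name and the alternative exit code are invented placeholders.

```java
/**
 * Hypothetical sketch of the proposed enum: give each fatal-exit path its own
 * code so the cause of a JVM exit can be told apart from the exit status alone.
 * Only -17 comes from the issue text; -18 is an invented placeholder.
 */
public enum JvmExitCode {
    FATAL_EXIT_EXCEPTION_HANDLER(-17),
    JVM_SHUTDOWN_SAFEGUARD(-18);

    private final int code;

    JvmExitCode(int code) {
        this.code = code;
    }

    public int getCode() {
        return code;
    }
}
```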





[GitHub] [flink] flinkbot edited a comment on pull request #14280: [FLINK-20110][table-planner] Support 'merge' method for first_value a…

2020-12-01 Thread GitBox


flinkbot edited a comment on pull request #14280:
URL: https://github.com/apache/flink/pull/14280#issuecomment-736629561


   
   ## CI report:
   
   * 6a0849e61034e34186780ce45dcf9157f65c0dfd Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10424)
 
   * 57ed58c709400cabc75c5a6a4fe8587bca16dbb8 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10428)
 
   
   
   







[GitHub] [flink] flinkbot edited a comment on pull request #14177: [FLINK-16443][checkpointing] Make sure that CheckpointException are also serialized in DeclineCheckpoint.

2020-12-01 Thread GitBox


flinkbot edited a comment on pull request #14177:
URL: https://github.com/apache/flink/pull/14177#issuecomment-732207895


   
   ## CI report:
   
   * ce283f64f17cadd3f4ab80534343dbfe917867fa Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10214)
 
   * e7fcff5c36ef33398a10e786fdae848ed72664b4 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10427)
 
   
   
   







[jira] [Commented] (FLINK-20442) Fix license documentation mistakes in flink-python.jar

2020-12-01 Thread Dian Fu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20442?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17242112#comment-17242112
 ] 

Dian Fu commented on FLINK-20442:
-

I found that some of the problems described in this JIRA also exist in 1.11.x. 
I'll keep this JIRA open for now and prepare a PR for release-1.11. 

> Fix license documentation mistakes in flink-python.jar
> --
>
> Key: FLINK-20442
> URL: https://issues.apache.org/jira/browse/FLINK-20442
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python
>Affects Versions: 1.12.0
>Reporter: Robert Metzger
>Assignee: Dian Fu
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.12.0, 1.11.3
>
>
> Issues reported by Chesnay:
> - The flink-python jar contains 2 license files in the root directory and 
> another 2 in the META-INF directory. This should be reduced down to 1 under 
> META-INF. I'm inclined to block the release on this because the root license 
> is BSD.
> - The flink-python jar appears to bundle lz4 (native libraries under win32/, 
> linux/ and darwin/), but this is neither listed in the NOTICE nor do we have 
> an explicit license file for it.
> Other minor things that we should address in the future:
> - opt/python contains some LICENSE files that should instead be placed under 
> licenses/
> - licenses/ contains a stray "ASM" file containing the ASM license. It's not 
> a problem (because it is identical with our intended copy), but it indicates 
> that something is amiss. This seems to originate from the flink-python jar, 
> which bundles some beam stuff, which bundles bytebuddy, which bundles this 
> license file. From what I can tell bytebuddy is not actually bundling ASM 
> though; they just bundle the license for whatever reason. It is not listed as 
> bundled in the flink-python NOTICE though, so I wouldn't block the release on 
> it.





[jira] [Commented] (FLINK-18685) JobClient.getAccumulators() blocks until streaming job has finished in local environment

2020-12-01 Thread Robert Metzger (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18685?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17242108#comment-17242108
 ] 

Robert Metzger commented on FLINK-18685:


Thanks a lot! I assigned you.

> JobClient.getAccumulators() blocks until streaming job has finished in local 
> environment
> 
>
> Key: FLINK-18685
> URL: https://issues.apache.org/jira/browse/FLINK-18685
> Project: Flink
>  Issue Type: Bug
>  Components: API / DataStream
>Affects Versions: 1.11.0
>Reporter: Robert Metzger
>Assignee: Etienne Chauchot
>Priority: Major
>  Labels: pull-request-available, starter
>
> *Steps to reproduce:*
> {code:java}
> JobClient client = env.executeAsync("Test");
> CompletableFuture<JobStatus> status = client.getJobStatus();
> LOG.info("status = " + status.get());
> CompletableFuture<Map<String, Object>> accumulators = 
> client.getAccumulators(StreamingJob.class.getClassLoader());
> LOG.info("accus = " + accumulators.get(5, TimeUnit.SECONDS));
> {code}
> *Actual behavior*
> The accumulators future will never complete for a streaming job when calling 
> this just in your main() method from the IDE.
> *Expected behavior*
> Receive the accumulators of the running streaming job.
> The JavaDocs of the method state the following: "Accumulators can be 
> requested while it is running or after it has finished.". 
> While it is technically true that I can request accumulators, I was expecting 
> as a user that I can access the accumulators of a running job.
> Also, I can request accumulators if I submit the job to a cluster.





[jira] [Assigned] (FLINK-18685) JobClient.getAccumulators() blocks until streaming job has finished in local environment

2020-12-01 Thread Robert Metzger (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18685?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Metzger reassigned FLINK-18685:
--

Assignee: Etienne Chauchot

> JobClient.getAccumulators() blocks until streaming job has finished in local 
> environment
> 
>
> Key: FLINK-18685
> URL: https://issues.apache.org/jira/browse/FLINK-18685
> Project: Flink
>  Issue Type: Bug
>  Components: API / DataStream
>Affects Versions: 1.11.0
>Reporter: Robert Metzger
>Assignee: Etienne Chauchot
>Priority: Major
>  Labels: pull-request-available, starter
>
> *Steps to reproduce:*
> {code:java}
> JobClient client = env.executeAsync("Test");
> CompletableFuture<JobStatus> status = client.getJobStatus();
> LOG.info("status = " + status.get());
> CompletableFuture<Map<String, Object>> accumulators = 
> client.getAccumulators(StreamingJob.class.getClassLoader());
> LOG.info("accus = " + accumulators.get(5, TimeUnit.SECONDS));
> {code}
> *Actual behavior*
> The accumulators future will never complete for a streaming job when calling 
> this just in your main() method from the IDE.
> *Expected behavior*
> Receive the accumulators of the running streaming job.
> The JavaDocs of the method state the following: "Accumulators can be 
> requested while it is running or after it has finished.". 
> While it is technically true that I can request accumulators, I was expecting 
> as a user that I can access the accumulators of a running job.
> Also, I can request accumulators if I submit the job to a cluster.





[jira] [Commented] (FLINK-20442) Fix license documentation mistakes in flink-python.jar

2020-12-01 Thread Dian Fu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20442?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17242107#comment-17242107
 ] 

Dian Fu commented on FLINK-20442:
-

Merged to
- master via 7818b7a8290c84b99e809773b8c91fb028670865
- release-1.12 via 58ccb941803107afd08f72762072c7467b7bbd01

> Fix license documentation mistakes in flink-python.jar
> --
>
> Key: FLINK-20442
> URL: https://issues.apache.org/jira/browse/FLINK-20442
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python
>Affects Versions: 1.12.0
>Reporter: Robert Metzger
>Assignee: Dian Fu
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.12.0
>
>
> Issues reported by Chesnay:
> - The flink-python jar contains 2 license files in the root directory and 
> another 2 in the META-INF directory. This should be reduced down to 1 under 
> META-INF. I'm inclined to block the release on this because the root license 
> is BSD.
> - The flink-python jar appears to bundle lz4 (native libraries under win32/, 
> linux/ and darwin/), but this is neither listed in the NOTICE nor do we have 
> an explicit license file for it.
> Other minor things that we should address in the future:
> - opt/python contains some LICENSE files that should instead be placed under 
> licenses/
> - licenses/ contains a stray "ASM" file containing the ASM license. It's not 
> a problem (because it is identical with our intended copy), but it indicates 
> that something is amiss. This seems to originate from the flink-python jar, 
> which bundles some beam stuff, which bundles bytebuddy, which bundles this 
> license file. From what I can tell bytebuddy is not actually bundling ASM 
> though; they just bundle the license for whatever reason. It is not listed as 
> bundled in the flink-python NOTICE though, so I wouldn't block the release on 
> it.





[jira] [Updated] (FLINK-20442) Fix license documentation mistakes in flink-python.jar

2020-12-01 Thread Dian Fu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20442?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dian Fu updated FLINK-20442:

Fix Version/s: 1.11.3

> Fix license documentation mistakes in flink-python.jar
> --
>
> Key: FLINK-20442
> URL: https://issues.apache.org/jira/browse/FLINK-20442
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python
>Affects Versions: 1.12.0
>Reporter: Robert Metzger
>Assignee: Dian Fu
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.12.0, 1.11.3
>
>
> Issues reported by Chesnay:
> - The flink-python jar contains 2 license files in the root directory and 
> another 2 in the META-INF directory. This should be reduced down to 1 under 
> META-INF. I'm inclined to block the release on this because the root license 
> is BSD.
> - The flink-python jar appears to bundle lz4 (native libraries under win32/, 
> linux/ and darwin/), but this is neither listed in the NOTICE nor do we have 
> an explicit license file for it.
> Other minor things that we should address in the future:
> - opt/python contains some LICENSE files that should instead be placed under 
> licenses/
> - licenses/ contains a stray "ASM" file containing the ASM license. It's not 
> a problem (because it is identical with our intended copy), but it indicates 
> that something is amiss. This seems to originate from the flink-python jar, 
> which bundles some beam stuff, which bundles bytebuddy, which bundles this 
> license file. From what I can tell bytebuddy is not actually bundling ASM 
> though; they just bundle the license for whatever reason. It is not listed as 
> bundled in the flink-python NOTICE though, so I wouldn't block the release on 
> it.





[GitHub] [flink] dianfu commented on pull request #14282: [FLINK-20442][python][legal] Updated flink-python NOTICE

2020-12-01 Thread GitBox


dianfu commented on pull request #14282:
URL: https://github.com/apache/flink/pull/14282#issuecomment-737047682











[GitHub] [flink] dianfu closed pull request #14282: [FLINK-20442][python][legal] Updated flink-python NOTICE

2020-12-01 Thread GitBox


dianfu closed pull request #14282:
URL: https://github.com/apache/flink/pull/14282


   







[GitHub] [flink] flinkbot edited a comment on pull request #14280: [FLINK-20110][table-planner] Support 'merge' method for first_value a…

2020-12-01 Thread GitBox


flinkbot edited a comment on pull request #14280:
URL: https://github.com/apache/flink/pull/14280#issuecomment-736629561


   
   ## CI report:
   
   * 6a0849e61034e34186780ce45dcf9157f65c0dfd Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10424)
 
   * 57ed58c709400cabc75c5a6a4fe8587bca16dbb8 UNKNOWN
   
   
   







[GitHub] [flink] flinkbot edited a comment on pull request #14177: [FLINK-16443][checkpointing] Make sure that CheckpointException are also serialized in DeclineCheckpoint.

2020-12-01 Thread GitBox


flinkbot edited a comment on pull request #14177:
URL: https://github.com/apache/flink/pull/14177#issuecomment-732207895


   
   ## CI report:
   
   * ce283f64f17cadd3f4ab80534343dbfe917867fa Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10214)
 
   * e7fcff5c36ef33398a10e786fdae848ed72664b4 UNKNOWN
   
   
   







[jira] [Closed] (FLINK-20441) Deprecate CheckpointConfig.setPreferCheckpointForRecovery

2020-12-01 Thread Robert Metzger (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20441?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Metzger closed FLINK-20441.
--
Fix Version/s: (was: 1.13.0)
   1.12.0
   Resolution: Fixed

> Deprecate CheckpointConfig.setPreferCheckpointForRecovery
> -
>
> Key: FLINK-20441
> URL: https://issues.apache.org/jira/browse/FLINK-20441
> Project: Flink
>  Issue Type: Task
>  Components: API / DataStream, Runtime / Checkpointing
>Affects Versions: 1.12.0
>Reporter: Till Rohrmann
>Assignee: Till Rohrmann
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.12.0
>
>
> According to FLINK-20427, we should deprecate 
> {{CheckpointConfig.setPreferCheckpointForRecovery}} so that we can remove it 
> in the next release.
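
The deprecation itself is mechanical. A minimal sketch of the pattern, using a simplified stand-in class rather than the real CheckpointConfig:

```java
/**
 * Simplified stand-in for CheckpointConfig, showing the deprecation pattern
 * this task applies. Not the actual Flink class.
 */
public class CheckpointConfigSketch {

    private boolean preferCheckpointForRecovery = false;

    /** @deprecated Scheduled for removal in the next release; see FLINK-20427. */
    @Deprecated
    public void setPreferCheckpointForRecovery(boolean prefer) {
        this.preferCheckpointForRecovery = prefer;
    }

    /** @deprecated Scheduled for removal in the next release; see FLINK-20427. */
    @Deprecated
    public boolean isPreferCheckpointForRecovery() {
        return preferCheckpointForRecovery;
    }
}
```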





[jira] [Commented] (FLINK-20441) Deprecate CheckpointConfig.setPreferCheckpointForRecovery

2020-12-01 Thread Robert Metzger (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17242102#comment-17242102
 ] 

Robert Metzger commented on FLINK-20441:


Merged to master (1.13) 
https://github.com/apache/flink/commit/232d0f1c2547a7af3f13cd1f3a1ba772172367de
Merged to release-1.12 
https://github.com/apache/flink/commit/bf8db48e0368f1e3d4e5c5c54b18b507fa721e08

> Deprecate CheckpointConfig.setPreferCheckpointForRecovery
> -
>
> Key: FLINK-20441
> URL: https://issues.apache.org/jira/browse/FLINK-20441
> Project: Flink
>  Issue Type: Task
>  Components: API / DataStream, Runtime / Checkpointing
>Affects Versions: 1.12.0
>Reporter: Till Rohrmann
>Assignee: Till Rohrmann
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.13.0
>
>
> According to FLINK-20427, we should deprecate 
> {{CheckpointConfig.setPreferCheckpointForRecovery}} so that we can remove it 
> in the next release.





[GitHub] [flink] rmetzger merged pull request #14273: [FLINK-20441] Deprecate CheckpointConfig.setPreferCheckpointForRecovery and .isPreferCheckpointForRecovery

2020-12-01 Thread GitBox


rmetzger merged pull request #14273:
URL: https://github.com/apache/flink/pull/14273


   







[GitHub] [flink] rmetzger commented on pull request #14273: [FLINK-20441] Deprecate CheckpointConfig.setPreferCheckpointForRecovery and .isPreferCheckpointForRecovery

2020-12-01 Thread GitBox


rmetzger commented on pull request #14273:
URL: https://github.com/apache/flink/pull/14273#issuecomment-737038230


   I'm merging this now to proceed with the next RC







[jira] [Issue Comment Deleted] (FLINK-19863) SQLClientHBaseITCase.testHBase failed with "java.io.IOException: Process failed due to timeout"

2020-12-01 Thread Huang Xingbo (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19863?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Huang Xingbo updated FLINK-19863:
-
Comment: was deleted

(was: another instance 
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=10420&view=logs&j=c88eea3b-64a0-564d-0031-9fdcd7b8abee&t=ff888d9b-cd34-53cc-d90f-3e446d355529)

> SQLClientHBaseITCase.testHBase failed with "java.io.IOException: Process 
> failed due to timeout"
> ---
>
> Key: FLINK-19863
> URL: https://issues.apache.org/jira/browse/FLINK-19863
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / HBase
>Affects Versions: 1.12.0
>Reporter: Dian Fu
>Assignee: Leonard Xu
>Priority: Critical
>  Labels: pull-request-available, test-stability
> Fix For: 1.12.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=8541&view=logs&j=91bf6583-3fb2-592f-e4d4-d79d79c3230a&t=3425d8ba-5f03-540a-c64b-51b8481bf7d6
> {code}
> 00:50:02,589 [main] INFO  
> org.apache.flink.tests.util.flink.FlinkDistribution  [] - Stopping 
> Flink cluster.
> 00:50:04,106 [main] INFO  
> org.apache.flink.tests.util.flink.FlinkDistribution  [] - Stopping 
> Flink cluster.
> 00:50:04,741 [main] INFO  
> org.apache.flink.tests.util.flink.LocalStandaloneFlinkResource [] - Backed up 
> logs to 
> /home/vsts/work/1/s/flink-end-to-end-tests/artifacts/flink-b3924665-1ac9-4309-8255-20f0dc94e7b9.
> 00:50:04,788 [main] INFO  
> org.apache.flink.tests.util.hbase.LocalStandaloneHBaseResource [] - Stopping 
> HBase Cluster
> 00:50:16,243 [main] ERROR 
> org.apache.flink.tests.util.hbase.SQLClientHBaseITCase   [] - 
> 
> Test testHBase[0: 
> hbase-version:1.4.3](org.apache.flink.tests.util.hbase.SQLClientHBaseITCase) 
> failed with:
> java.io.IOException: Process failed due to timeout.
>   at 
> org.apache.flink.tests.util.AutoClosableProcess$AutoClosableProcessBuilder.runBlocking(AutoClosableProcess.java:130)
>   at 
> org.apache.flink.tests.util.AutoClosableProcess$AutoClosableProcessBuilder.runBlocking(AutoClosableProcess.java:108)
>   at 
> org.apache.flink.tests.util.flink.FlinkDistribution.submitSQLJob(FlinkDistribution.java:221)
>   at 
> org.apache.flink.tests.util.flink.LocalStandaloneFlinkResource$StandaloneClusterController.submitSQLJob(LocalStandaloneFlinkResource.java:196)
>   at 
> org.apache.flink.tests.util.hbase.SQLClientHBaseITCase.executeSqlStatements(SQLClientHBaseITCase.java:215)
>   at 
> org.apache.flink.tests.util.hbase.SQLClientHBaseITCase.testHBase(SQLClientHBaseITCase.java:152)
> {code}





[jira] [Commented] (FLINK-19863) SQLClientHBaseITCase.testHBase failed with "java.io.IOException: Process failed due to timeout"

2020-12-01 Thread Huang Xingbo (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19863?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17242088#comment-17242088
 ] 

Huang Xingbo commented on FLINK-19863:
--

another instance 
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=10420&view=logs&j=c88eea3b-64a0-564d-0031-9fdcd7b8abee&t=ff888d9b-cd34-53cc-d90f-3e446d355529

> SQLClientHBaseITCase.testHBase failed with "java.io.IOException: Process 
> failed due to timeout"
> ---
>
> Key: FLINK-19863
> URL: https://issues.apache.org/jira/browse/FLINK-19863
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / HBase
>Affects Versions: 1.12.0
>Reporter: Dian Fu
>Assignee: Leonard Xu
>Priority: Critical
>  Labels: pull-request-available, test-stability
> Fix For: 1.12.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=8541&view=logs&j=91bf6583-3fb2-592f-e4d4-d79d79c3230a&t=3425d8ba-5f03-540a-c64b-51b8481bf7d6
> {code}
> 00:50:02,589 [main] INFO  
> org.apache.flink.tests.util.flink.FlinkDistribution  [] - Stopping 
> Flink cluster.
> 00:50:04,106 [main] INFO  
> org.apache.flink.tests.util.flink.FlinkDistribution  [] - Stopping 
> Flink cluster.
> 00:50:04,741 [main] INFO  
> org.apache.flink.tests.util.flink.LocalStandaloneFlinkResource [] - Backed up 
> logs to 
> /home/vsts/work/1/s/flink-end-to-end-tests/artifacts/flink-b3924665-1ac9-4309-8255-20f0dc94e7b9.
> 00:50:04,788 [main] INFO  
> org.apache.flink.tests.util.hbase.LocalStandaloneHBaseResource [] - Stopping 
> HBase Cluster
> 00:50:16,243 [main] ERROR 
> org.apache.flink.tests.util.hbase.SQLClientHBaseITCase   [] - 
> 
> Test testHBase[0: 
> hbase-version:1.4.3](org.apache.flink.tests.util.hbase.SQLClientHBaseITCase) 
> failed with:
> java.io.IOException: Process failed due to timeout.
>   at 
> org.apache.flink.tests.util.AutoClosableProcess$AutoClosableProcessBuilder.runBlocking(AutoClosableProcess.java:130)
>   at 
> org.apache.flink.tests.util.AutoClosableProcess$AutoClosableProcessBuilder.runBlocking(AutoClosableProcess.java:108)
>   at 
> org.apache.flink.tests.util.flink.FlinkDistribution.submitSQLJob(FlinkDistribution.java:221)
>   at 
> org.apache.flink.tests.util.flink.LocalStandaloneFlinkResource$StandaloneClusterController.submitSQLJob(LocalStandaloneFlinkResource.java:196)
>   at 
> org.apache.flink.tests.util.hbase.SQLClientHBaseITCase.executeSqlStatements(SQLClientHBaseITCase.java:215)
>   at 
> org.apache.flink.tests.util.hbase.SQLClientHBaseITCase.testHBase(SQLClientHBaseITCase.java:152)
> {code}





[jira] [Commented] (FLINK-12884) FLIP-144: Native Kubernetes HA Service

2020-12-01 Thread shravan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-12884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17242086#comment-17242086
 ] 

shravan commented on FLINK-12884:
-

[~trohrmann] [~fly_in_gis] Is the K8s HA service released with 1.12? We tested 
our lower environment with ZooKeeper, but we are hoping to use the K8s HA service 
now instead of migrating at a later point. Could you please confirm?

> FLIP-144: Native Kubernetes HA Service
> --
>
> Key: FLINK-12884
> URL: https://issues.apache.org/jira/browse/FLINK-12884
> Project: Flink
>  Issue Type: New Feature
>  Components: Deployment / Kubernetes, Runtime / Coordination
>Reporter: MalcolmSanders
>Assignee: Yang Wang
>Priority: Major
> Fix For: 1.12.0
>
>
> Currently Flink only supports a HighAvailabilityService based on ZooKeeper. As 
> a result, a ZooKeeper cluster has to be deployed on the k8s cluster if 
> customers need high availability for Flink. If we support a 
> HighAvailabilityService based on native k8s APIs, it will save the effort of 
> deploying ZooKeeper as well as the resources used by the ZooKeeper cluster. It 
> might be especially helpful for customers who run small-scale k8s clusters, so 
> that the Flink HighAvailabilityService does not cause too much overhead on the 
> k8s cluster.
> [FLINK-11105|https://issues.apache.org/jira/browse/FLINK-11105] previously 
> proposed a HighAvailabilityService using etcd. As [~NathanHowell] 
> suggested in FLINK-11105, since k8s doesn't expose its own etcd cluster by 
> design (see [Securing etcd 
> clusters|https://kubernetes.io/docs/tasks/administer-cluster/configure-upgrade-etcd/#securing-etcd-clusters]),
>  Flink would also require the deployment of an etcd cluster if it used etcd to 
> achieve HA.





[jira] [Commented] (FLINK-19863) SQLClientHBaseITCase.testHBase failed with "java.io.IOException: Process failed due to timeout"

2020-12-01 Thread Dian Fu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19863?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17242085#comment-17242085
 ] 

Dian Fu commented on FLINK-19863:
-

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=10420&view=logs&j=c88eea3b-64a0-564d-0031-9fdcd7b8abee&t=ff888d9b-cd34-53cc-d90f-3e446d355529

> SQLClientHBaseITCase.testHBase failed with "java.io.IOException: Process 
> failed due to timeout"
> ---
>
> Key: FLINK-19863
> URL: https://issues.apache.org/jira/browse/FLINK-19863
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / HBase
>Affects Versions: 1.12.0
>Reporter: Dian Fu
>Assignee: Leonard Xu
>Priority: Critical
>  Labels: pull-request-available, test-stability
> Fix For: 1.12.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=8541&view=logs&j=91bf6583-3fb2-592f-e4d4-d79d79c3230a&t=3425d8ba-5f03-540a-c64b-51b8481bf7d6
> {code}
> 00:50:02,589 [main] INFO  
> org.apache.flink.tests.util.flink.FlinkDistribution  [] - Stopping 
> Flink cluster.
> 00:50:04,106 [main] INFO  
> org.apache.flink.tests.util.flink.FlinkDistribution  [] - Stopping 
> Flink cluster.
> 00:50:04,741 [main] INFO  
> org.apache.flink.tests.util.flink.LocalStandaloneFlinkResource [] - Backed up 
> logs to 
> /home/vsts/work/1/s/flink-end-to-end-tests/artifacts/flink-b3924665-1ac9-4309-8255-20f0dc94e7b9.
> 00:50:04,788 [main] INFO  
> org.apache.flink.tests.util.hbase.LocalStandaloneHBaseResource [] - Stopping 
> HBase Cluster
> 00:50:16,243 [main] ERROR 
> org.apache.flink.tests.util.hbase.SQLClientHBaseITCase   [] - 
> 
> Test testHBase[0: 
> hbase-version:1.4.3](org.apache.flink.tests.util.hbase.SQLClientHBaseITCase) 
> failed with:
> java.io.IOException: Process failed due to timeout.
>   at 
> org.apache.flink.tests.util.AutoClosableProcess$AutoClosableProcessBuilder.runBlocking(AutoClosableProcess.java:130)
>   at 
> org.apache.flink.tests.util.AutoClosableProcess$AutoClosableProcessBuilder.runBlocking(AutoClosableProcess.java:108)
>   at 
> org.apache.flink.tests.util.flink.FlinkDistribution.submitSQLJob(FlinkDistribution.java:221)
>   at 
> org.apache.flink.tests.util.flink.LocalStandaloneFlinkResource$StandaloneClusterController.submitSQLJob(LocalStandaloneFlinkResource.java:196)
>   at 
> org.apache.flink.tests.util.hbase.SQLClientHBaseITCase.executeSqlStatements(SQLClientHBaseITCase.java:215)
>   at 
> org.apache.flink.tests.util.hbase.SQLClientHBaseITCase.testHBase(SQLClientHBaseITCase.java:152)
> {code}





[jira] [Updated] (FLINK-20433) UnalignedCheckpointTestBase.execute failed with "TestTimedOutException: test timed out after 300 seconds"

2020-12-01 Thread Dian Fu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20433?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dian Fu updated FLINK-20433:

Fix Version/s: 1.13.0

> UnalignedCheckpointTestBase.execute failed with "TestTimedOutException: test 
> timed out after 300 seconds"
> -
>
> Key: FLINK-20433
> URL: https://issues.apache.org/jira/browse/FLINK-20433
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Checkpointing
>Affects Versions: 1.12.0, 1.13.0
>Reporter: Dian Fu
>Priority: Critical
>  Labels: test-stability
> Fix For: 1.12.0, 1.13.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=10353=logs=39d5b1d5-3b41-54dc-6458-1e2ddd1cdcf3=a99e99c7-21cd-5a1f-7274-585e62b72f56
> {code}
> 2020-12-01T01:33:37.8672135Z [ERROR] execute[Parallel cogroup, p = 
> 10](org.apache.flink.test.checkpointing.UnalignedCheckpointITCase)  Time 
> elapsed: 300.308 s  <<< ERROR!
> 2020-12-01T01:33:37.8672736Z org.junit.runners.model.TestTimedOutException: 
> test timed out after 300 seconds
> 2020-12-01T01:33:37.8673110Z  at sun.misc.Unsafe.park(Native Method)
> 2020-12-01T01:33:37.8673463Z  at 
> java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
> 2020-12-01T01:33:37.8673951Z  at 
> java.util.concurrent.CompletableFuture$Signaller.block(CompletableFuture.java:1707)
> 2020-12-01T01:33:37.8674429Z  at 
> java.util.concurrent.ForkJoinPool.managedBlock(ForkJoinPool.java:3323)
> 2020-12-01T01:33:37.8686627Z  at 
> java.util.concurrent.CompletableFuture.waitingGet(CompletableFuture.java:1742)
> 2020-12-01T01:33:37.8687167Z  at 
> java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1908)
> 2020-12-01T01:33:37.8687859Z  at 
> org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.java:1842)
> 2020-12-01T01:33:37.8696554Z  at 
> org.apache.flink.streaming.api.environment.LocalStreamEnvironment.execute(LocalStreamEnvironment.java:70)
> 2020-12-01T01:33:37.8697226Z  at 
> org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.java:1822)
> 2020-12-01T01:33:37.8697885Z  at 
> org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.java:1804)
> 2020-12-01T01:33:37.8698514Z  at 
> org.apache.flink.test.checkpointing.UnalignedCheckpointTestBase.execute(UnalignedCheckpointTestBase.java:122)
> 2020-12-01T01:33:37.8699131Z  at 
> org.apache.flink.test.checkpointing.UnalignedCheckpointITCase.execute(UnalignedCheckpointITCase.java:159)
> {code}





[jira] [Updated] (FLINK-20433) UnalignedCheckpointTestBase.execute failed with "TestTimedOutException: test timed out after 300 seconds"

2020-12-01 Thread Dian Fu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20433?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dian Fu updated FLINK-20433:

Affects Version/s: 1.13.0

> UnalignedCheckpointTestBase.execute failed with "TestTimedOutException: test 
> timed out after 300 seconds"
> -
>
> Key: FLINK-20433
> URL: https://issues.apache.org/jira/browse/FLINK-20433
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Checkpointing
>Affects Versions: 1.12.0, 1.13.0
>Reporter: Dian Fu
>Priority: Critical
>  Labels: test-stability
> Fix For: 1.12.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=10353=logs=39d5b1d5-3b41-54dc-6458-1e2ddd1cdcf3=a99e99c7-21cd-5a1f-7274-585e62b72f56
> {code}
> 2020-12-01T01:33:37.8672135Z [ERROR] execute[Parallel cogroup, p = 
> 10](org.apache.flink.test.checkpointing.UnalignedCheckpointITCase)  Time 
> elapsed: 300.308 s  <<< ERROR!
> 2020-12-01T01:33:37.8672736Z org.junit.runners.model.TestTimedOutException: 
> test timed out after 300 seconds
> 2020-12-01T01:33:37.8673110Z  at sun.misc.Unsafe.park(Native Method)
> 2020-12-01T01:33:37.8673463Z  at 
> java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
> 2020-12-01T01:33:37.8673951Z  at 
> java.util.concurrent.CompletableFuture$Signaller.block(CompletableFuture.java:1707)
> 2020-12-01T01:33:37.8674429Z  at 
> java.util.concurrent.ForkJoinPool.managedBlock(ForkJoinPool.java:3323)
> 2020-12-01T01:33:37.8686627Z  at 
> java.util.concurrent.CompletableFuture.waitingGet(CompletableFuture.java:1742)
> 2020-12-01T01:33:37.8687167Z  at 
> java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1908)
> 2020-12-01T01:33:37.8687859Z  at 
> org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.java:1842)
> 2020-12-01T01:33:37.8696554Z  at 
> org.apache.flink.streaming.api.environment.LocalStreamEnvironment.execute(LocalStreamEnvironment.java:70)
> 2020-12-01T01:33:37.8697226Z  at 
> org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.java:1822)
> 2020-12-01T01:33:37.8697885Z  at 
> org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.java:1804)
> 2020-12-01T01:33:37.8698514Z  at 
> org.apache.flink.test.checkpointing.UnalignedCheckpointTestBase.execute(UnalignedCheckpointTestBase.java:122)
> 2020-12-01T01:33:37.8699131Z  at 
> org.apache.flink.test.checkpointing.UnalignedCheckpointITCase.execute(UnalignedCheckpointITCase.java:159)
> {code}





[jira] [Commented] (FLINK-20433) UnalignedCheckpointTestBase.execute failed with "TestTimedOutException: test timed out after 300 seconds"

2020-12-01 Thread Dian Fu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20433?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17242083#comment-17242083
 ] 

Dian Fu commented on FLINK-20433:
-

Instance on master (1.13.0): 
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=10416=logs=39d5b1d5-3b41-54dc-6458-1e2ddd1cdcf3=a99e99c7-21cd-5a1f-7274-585e62b72f56

> UnalignedCheckpointTestBase.execute failed with "TestTimedOutException: test 
> timed out after 300 seconds"
> -
>
> Key: FLINK-20433
> URL: https://issues.apache.org/jira/browse/FLINK-20433
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Checkpointing
>Affects Versions: 1.12.0
>Reporter: Dian Fu
>Priority: Critical
>  Labels: test-stability
> Fix For: 1.12.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=10353=logs=39d5b1d5-3b41-54dc-6458-1e2ddd1cdcf3=a99e99c7-21cd-5a1f-7274-585e62b72f56
> {code}
> 2020-12-01T01:33:37.8672135Z [ERROR] execute[Parallel cogroup, p = 
> 10](org.apache.flink.test.checkpointing.UnalignedCheckpointITCase)  Time 
> elapsed: 300.308 s  <<< ERROR!
> 2020-12-01T01:33:37.8672736Z org.junit.runners.model.TestTimedOutException: 
> test timed out after 300 seconds
> 2020-12-01T01:33:37.8673110Z  at sun.misc.Unsafe.park(Native Method)
> 2020-12-01T01:33:37.8673463Z  at 
> java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
> 2020-12-01T01:33:37.8673951Z  at 
> java.util.concurrent.CompletableFuture$Signaller.block(CompletableFuture.java:1707)
> 2020-12-01T01:33:37.8674429Z  at 
> java.util.concurrent.ForkJoinPool.managedBlock(ForkJoinPool.java:3323)
> 2020-12-01T01:33:37.8686627Z  at 
> java.util.concurrent.CompletableFuture.waitingGet(CompletableFuture.java:1742)
> 2020-12-01T01:33:37.8687167Z  at 
> java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1908)
> 2020-12-01T01:33:37.8687859Z  at 
> org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.java:1842)
> 2020-12-01T01:33:37.8696554Z  at 
> org.apache.flink.streaming.api.environment.LocalStreamEnvironment.execute(LocalStreamEnvironment.java:70)
> 2020-12-01T01:33:37.8697226Z  at 
> org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.java:1822)
> 2020-12-01T01:33:37.8697885Z  at 
> org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.java:1804)
> 2020-12-01T01:33:37.8698514Z  at 
> org.apache.flink.test.checkpointing.UnalignedCheckpointTestBase.execute(UnalignedCheckpointTestBase.java:122)
> 2020-12-01T01:33:37.8699131Z  at 
> org.apache.flink.test.checkpointing.UnalignedCheckpointITCase.execute(UnalignedCheckpointITCase.java:159)
> {code}





[jira] [Closed] (FLINK-20449) UnalignedCheckpointITCase times out

2020-12-01 Thread Dian Fu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20449?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dian Fu closed FLINK-20449.
---
Fix Version/s: (was: 1.13.0)
   Resolution: Duplicate

This issue appears to be a duplicate of FLINK-20433. I'm closing this one.

> UnalignedCheckpointITCase times out
> ---
>
> Key: FLINK-20449
> URL: https://issues.apache.org/jira/browse/FLINK-20449
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Checkpointing
>Affects Versions: 1.13.0
>Reporter: Robert Metzger
>Priority: Critical
>  Labels: test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=10416=logs=39d5b1d5-3b41-54dc-6458-1e2ddd1cdcf3=a99e99c7-21cd-5a1f-7274-585e62b72f56
> {code}
> 2020-12-02T01:24:33.7219846Z [ERROR] Tests run: 10, Failures: 0, Errors: 2, 
> Skipped: 0, Time elapsed: 746.887 s <<< FAILURE! - in 
> org.apache.flink.test.checkpointing.UnalignedCheckpointITCase
> 2020-12-02T01:24:33.7220860Z [ERROR] execute[Parallel cogroup, p = 
> 10](org.apache.flink.test.checkpointing.UnalignedCheckpointITCase)  Time 
> elapsed: 300.24 s  <<< ERROR!
> 2020-12-02T01:24:33.7221663Z org.junit.runners.model.TestTimedOutException: 
> test timed out after 300 seconds
> 2020-12-02T01:24:33.7222017Z  at sun.misc.Unsafe.park(Native Method)
> 2020-12-02T01:24:33.7222390Z  at 
> java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
> 2020-12-02T01:24:33.7222882Z  at 
> java.util.concurrent.CompletableFuture$Signaller.block(CompletableFuture.java:1707)
> 2020-12-02T01:24:33.7223356Z  at 
> java.util.concurrent.ForkJoinPool.managedBlock(ForkJoinPool.java:3323)
> 2020-12-02T01:24:33.7223840Z  at 
> java.util.concurrent.CompletableFuture.waitingGet(CompletableFuture.java:1742)
> 2020-12-02T01:24:33.7224320Z  at 
> java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1908)
> 2020-12-02T01:24:33.7224864Z  at 
> org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.java:1842)
> 2020-12-02T01:24:33.7225500Z  at 
> org.apache.flink.streaming.api.environment.LocalStreamEnvironment.execute(LocalStreamEnvironment.java:70)
> 2020-12-02T01:24:33.7226297Z  at 
> org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.java:1822)
> 2020-12-02T01:24:33.7226929Z  at 
> org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.java:1804)
> 2020-12-02T01:24:33.7227572Z  at 
> org.apache.flink.test.checkpointing.UnalignedCheckpointTestBase.execute(UnalignedCheckpointTestBase.java:122)
> 2020-12-02T01:24:33.7228187Z  at 
> org.apache.flink.test.checkpointing.UnalignedCheckpointITCase.execute(UnalignedCheckpointITCase.java:159)
> 2020-12-02T01:24:33.7228680Z  at 
> sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 2020-12-02T01:24:33.7229099Z  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 2020-12-02T01:24:33.7229617Z  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 2020-12-02T01:24:33.7230068Z  at 
> java.lang.reflect.Method.invoke(Method.java:498)
> 2020-12-02T01:24:33.7230733Z  at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> 2020-12-02T01:24:33.7231262Z  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> 2020-12-02T01:24:33.7231775Z  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> 2020-12-02T01:24:33.7232276Z  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> 2020-12-02T01:24:33.7232732Z  at 
> org.junit.rules.Verifier$1.evaluate(Verifier.java:35)
> 2020-12-02T01:24:33.7233144Z  at 
> org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
> 2020-12-02T01:24:33.7233663Z  at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
> 2020-12-02T01:24:33.7234239Z  at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
> 2020-12-02T01:24:33.7234735Z  at 
> java.util.concurrent.FutureTask.run(FutureTask.java:266)
> 2020-12-02T01:24:33.7235093Z  at java.lang.Thread.run(Thread.java:748)
> 2020-12-02T01:24:33.7235305Z 
> 2020-12-02T01:24:33.7235728Z [ERROR] execute[Parallel union, p = 
> 10](org.apache.flink.test.checkpointing.UnalignedCheckpointITCase)  Time 
> elapsed: 300.539 s  <<< ERROR!
> 2020-12-02T01:24:33.7236436Z org.junit.runners.model.TestTimedOutException: 
> test timed out after 300 seconds
> 2020-12-02T01:24:33.7236790Z  at sun.misc.Unsafe.park(Native Method)
> 2020-12-02T01:24:33.7237158Z  at 
> java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
> 

[jira] [Created] (FLINK-20449) UnalignedCheckpointITCase times out

2020-12-01 Thread Robert Metzger (Jira)
Robert Metzger created FLINK-20449:
--

 Summary: UnalignedCheckpointITCase times out
 Key: FLINK-20449
 URL: https://issues.apache.org/jira/browse/FLINK-20449
 Project: Flink
  Issue Type: Bug
  Components: Runtime / Checkpointing
Affects Versions: 1.13.0
Reporter: Robert Metzger
 Fix For: 1.13.0


https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=10416=logs=39d5b1d5-3b41-54dc-6458-1e2ddd1cdcf3=a99e99c7-21cd-5a1f-7274-585e62b72f56

{code}
2020-12-02T01:24:33.7219846Z [ERROR] Tests run: 10, Failures: 0, Errors: 2, 
Skipped: 0, Time elapsed: 746.887 s <<< FAILURE! - in 
org.apache.flink.test.checkpointing.UnalignedCheckpointITCase
2020-12-02T01:24:33.7220860Z [ERROR] execute[Parallel cogroup, p = 
10](org.apache.flink.test.checkpointing.UnalignedCheckpointITCase)  Time 
elapsed: 300.24 s  <<< ERROR!
2020-12-02T01:24:33.7221663Z org.junit.runners.model.TestTimedOutException: 
test timed out after 300 seconds
2020-12-02T01:24:33.7222017Zat sun.misc.Unsafe.park(Native Method)
2020-12-02T01:24:33.7222390Zat 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
2020-12-02T01:24:33.7222882Zat 
java.util.concurrent.CompletableFuture$Signaller.block(CompletableFuture.java:1707)
2020-12-02T01:24:33.7223356Zat 
java.util.concurrent.ForkJoinPool.managedBlock(ForkJoinPool.java:3323)
2020-12-02T01:24:33.7223840Zat 
java.util.concurrent.CompletableFuture.waitingGet(CompletableFuture.java:1742)
2020-12-02T01:24:33.7224320Zat 
java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1908)
2020-12-02T01:24:33.7224864Zat 
org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.java:1842)
2020-12-02T01:24:33.7225500Zat 
org.apache.flink.streaming.api.environment.LocalStreamEnvironment.execute(LocalStreamEnvironment.java:70)
2020-12-02T01:24:33.7226297Zat 
org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.java:1822)
2020-12-02T01:24:33.7226929Zat 
org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.java:1804)
2020-12-02T01:24:33.7227572Zat 
org.apache.flink.test.checkpointing.UnalignedCheckpointTestBase.execute(UnalignedCheckpointTestBase.java:122)
2020-12-02T01:24:33.7228187Zat 
org.apache.flink.test.checkpointing.UnalignedCheckpointITCase.execute(UnalignedCheckpointITCase.java:159)
2020-12-02T01:24:33.7228680Zat 
sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
2020-12-02T01:24:33.7229099Zat 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
2020-12-02T01:24:33.7229617Zat 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
2020-12-02T01:24:33.7230068Zat 
java.lang.reflect.Method.invoke(Method.java:498)
2020-12-02T01:24:33.7230733Zat 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
2020-12-02T01:24:33.7231262Zat 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
2020-12-02T01:24:33.7231775Zat 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
2020-12-02T01:24:33.7232276Zat 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
2020-12-02T01:24:33.7232732Zat 
org.junit.rules.Verifier$1.evaluate(Verifier.java:35)
2020-12-02T01:24:33.7233144Zat 
org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
2020-12-02T01:24:33.7233663Zat 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
2020-12-02T01:24:33.7234239Zat 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
2020-12-02T01:24:33.7234735Zat 
java.util.concurrent.FutureTask.run(FutureTask.java:266)
2020-12-02T01:24:33.7235093Zat java.lang.Thread.run(Thread.java:748)
2020-12-02T01:24:33.7235305Z 
2020-12-02T01:24:33.7235728Z [ERROR] execute[Parallel union, p = 
10](org.apache.flink.test.checkpointing.UnalignedCheckpointITCase)  Time 
elapsed: 300.539 s  <<< ERROR!
2020-12-02T01:24:33.7236436Z org.junit.runners.model.TestTimedOutException: 
test timed out after 300 seconds
2020-12-02T01:24:33.7236790Zat sun.misc.Unsafe.park(Native Method)
2020-12-02T01:24:33.7237158Zat 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
2020-12-02T01:24:33.7237641Zat 
java.util.concurrent.CompletableFuture$Signaller.block(CompletableFuture.java:1707)
2020-12-02T01:24:33.7238118Zat 
java.util.concurrent.ForkJoinPool.managedBlock(ForkJoinPool.java:3323)
2020-12-02T01:24:33.7238599Zat 
java.util.concurrent.CompletableFuture.waitingGet(CompletableFuture.java:1742)
2020-12-02T01:24:33.7239885Zat 

[jira] [Commented] (FLINK-19863) SQLClientHBaseITCase.testHBase failed with "java.io.IOException: Process failed due to timeout"

2020-12-01 Thread Robert Metzger (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19863?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17242079#comment-17242079
 ] 

Robert Metzger commented on FLINK-19863:


https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=10416=logs=91bf6583-3fb2-592f-e4d4-d79d79c3230a=3425d8ba-5f03-540a-c64b-51b8481bf7d6

> SQLClientHBaseITCase.testHBase failed with "java.io.IOException: Process 
> failed due to timeout"
> ---
>
> Key: FLINK-19863
> URL: https://issues.apache.org/jira/browse/FLINK-19863
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / HBase
>Affects Versions: 1.12.0
>Reporter: Dian Fu
>Assignee: Leonard Xu
>Priority: Critical
>  Labels: pull-request-available, test-stability
> Fix For: 1.12.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=8541=logs=91bf6583-3fb2-592f-e4d4-d79d79c3230a=3425d8ba-5f03-540a-c64b-51b8481bf7d6
> {code}
> 00:50:02,589 [main] INFO  
> org.apache.flink.tests.util.flink.FlinkDistribution  [] - Stopping 
> Flink cluster.
> 00:50:04,106 [main] INFO  
> org.apache.flink.tests.util.flink.FlinkDistribution  [] - Stopping 
> Flink cluster.
> 00:50:04,741 [main] INFO  
> org.apache.flink.tests.util.flink.LocalStandaloneFlinkResource [] - Backed up 
> logs to 
> /home/vsts/work/1/s/flink-end-to-end-tests/artifacts/flink-b3924665-1ac9-4309-8255-20f0dc94e7b9.
> 00:50:04,788 [main] INFO  
> org.apache.flink.tests.util.hbase.LocalStandaloneHBaseResource [] - Stopping 
> HBase Cluster
> 00:50:16,243 [main] ERROR 
> org.apache.flink.tests.util.hbase.SQLClientHBaseITCase   [] - 
> 
> Test testHBase[0: 
> hbase-version:1.4.3](org.apache.flink.tests.util.hbase.SQLClientHBaseITCase) 
> failed with:
> java.io.IOException: Process failed due to timeout.
>   at 
> org.apache.flink.tests.util.AutoClosableProcess$AutoClosableProcessBuilder.runBlocking(AutoClosableProcess.java:130)
>   at 
> org.apache.flink.tests.util.AutoClosableProcess$AutoClosableProcessBuilder.runBlocking(AutoClosableProcess.java:108)
>   at 
> org.apache.flink.tests.util.flink.FlinkDistribution.submitSQLJob(FlinkDistribution.java:221)
>   at 
> org.apache.flink.tests.util.flink.LocalStandaloneFlinkResource$StandaloneClusterController.submitSQLJob(LocalStandaloneFlinkResource.java:196)
>   at 
> org.apache.flink.tests.util.hbase.SQLClientHBaseITCase.executeSqlStatements(SQLClientHBaseITCase.java:215)
>   at 
> org.apache.flink.tests.util.hbase.SQLClientHBaseITCase.testHBase(SQLClientHBaseITCase.java:152)
> {code}





[GitHub] [flink] flinkbot edited a comment on pull request #14270: [FLINK-20436][table-planner-blink] Simplify type parameter of ExecNode

2020-12-01 Thread GitBox


flinkbot edited a comment on pull request #14270:
URL: https://github.com/apache/flink/pull/14270#issuecomment-736353538


   
   ## CI report:
   
   * 88d6d6aecb9a67c4bd7065f2d58cf9793d71cf95 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10422)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] rmetzger commented on a change in pull request #14282: [FLINK-20442][python][legal] Updated flink-python NOTICE

2020-12-01 Thread GitBox


rmetzger commented on a change in pull request #14282:
URL: https://github.com/apache/flink/pull/14282#discussion_r533926297



##
File path: flink-python/lib/LICENSE.py4j
##
@@ -1,26 +0,0 @@
-Copyright (c) 2009-2018, Barthelemy Dagenais and individual contributors. All

Review comment:
   Okay, thanks for the patient explanations. I wasn't aware that the files 
are located in the `/licenses` folder.









[jira] [Commented] (FLINK-20420) ES6 ElasticsearchSinkITCase failed due to no output for 900 seconds

2020-12-01 Thread Robert Metzger (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20420?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17242077#comment-17242077
 ] 

Robert Metzger commented on FLINK-20420:


https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=10407=logs=d44f43ce-542c-597d-bf94-b0718c71e5e8=03dca39c-73e8-5aaf-601d-328ae5c35f20

> ES6 ElasticsearchSinkITCase failed due to no output for 900 seconds
> ---
>
> Key: FLINK-20420
> URL: https://issues.apache.org/jira/browse/FLINK-20420
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / ElasticSearch
>Affects Versions: 1.12.0
>Reporter: Yun Tang
>Priority: Major
>
> Instance:
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=10249=logs=d44f43ce-542c-597d-bf94-b0718c71e5e8=03dca39c-73e8-5aaf-601d-328ae5c35f20=18821
> {code:java}
> Process produced no output for 900 seconds.
> ==
> ==
> The following Java processes are running (JPS)
> ==
> Picked up JAVA_TOOL_OPTIONS: -XX:+HeapDumpOnOutOfMemoryError
> 2274 Launcher
> 18260 Jps
> 15916 surefirebooter3434370240444055571.jar
> ==
> "main" #1 prio=5 os_prio=0 tid=0x7feec000b800 nid=0x3e2d runnable 
> [0x7feec8541000]
>java.lang.Thread.State: RUNNABLE
>   at org.testcontainers.shaded.okio.Buffer.indexOf(Buffer.java:1463)
>   at 
> org.testcontainers.shaded.okio.RealBufferedSource.indexOf(RealBufferedSource.java:352)
>   at 
> org.testcontainers.shaded.okio.RealBufferedSource.readUtf8LineStrict(RealBufferedSource.java:230)
>   at 
> org.testcontainers.shaded.okio.RealBufferedSource.readUtf8LineStrict(RealBufferedSource.java:224)
>   at 
> org.testcontainers.shaded.okhttp3.internal.http1.Http1ExchangeCodec$ChunkedSource.readChunkSize(Http1ExchangeCodec.java:489)
>   at 
> org.testcontainers.shaded.okhttp3.internal.http1.Http1ExchangeCodec$ChunkedSource.read(Http1ExchangeCodec.java:471)
>   at 
> org.testcontainers.shaded.okhttp3.internal.Util.skipAll(Util.java:204)
>   at 
> org.testcontainers.shaded.okhttp3.internal.Util.discard(Util.java:186)
>   at 
> org.testcontainers.shaded.okhttp3.internal.http1.Http1ExchangeCodec$ChunkedSource.close(Http1ExchangeCodec.java:511)
>   at 
> org.testcontainers.shaded.okio.ForwardingSource.close(ForwardingSource.java:43)
>   at 
> org.testcontainers.shaded.okhttp3.internal.connection.Exchange$ResponseBodySource.close(Exchange.java:313)
>   at 
> org.testcontainers.shaded.okio.RealBufferedSource.close(RealBufferedSource.java:476)
>   at 
> org.testcontainers.shaded.okhttp3.internal.Util.closeQuietly(Util.java:139)
>   at 
> org.testcontainers.shaded.okhttp3.ResponseBody.close(ResponseBody.java:192)
>   at org.testcontainers.shaded.okhttp3.Response.close(Response.java:290)
>   at 
> org.testcontainers.shaded.com.github.dockerjava.okhttp.OkDockerHttpClient$OkResponse.close(OkDockerHttpClient.java:280)
>   at 
> org.testcontainers.shaded.com.github.dockerjava.core.DefaultInvocationBuilder.lambda$null$0(DefaultInvocationBuilder.java:272)
>   at 
> org.testcontainers.shaded.com.github.dockerjava.core.DefaultInvocationBuilder$$Lambda$87/1112018050.close(Unknown
>  Source)
>   at 
> com.github.dockerjava.api.async.ResultCallbackTemplate.close(ResultCallbackTemplate.java:77)
>   at 
> org.testcontainers.utility.ResourceReaper.start(ResourceReaper.java:177)
>   at 
> org.testcontainers.DockerClientFactory.client(DockerClientFactory.java:203)
>   - locked <0x88fcbbf0> (a [Ljava.lang.Object;)
>   at 
> org.testcontainers.LazyDockerClient.getDockerClient(LazyDockerClient.java:14)
>   at 
> org.testcontainers.LazyDockerClient.listImagesCmd(LazyDockerClient.java:12)
>   at 
> org.testcontainers.images.LocalImagesCache.maybeInitCache(LocalImagesCache.java:68)
>   - locked <0x88fcb940> (a 
> org.testcontainers.images.LocalImagesCache)
>   at 
> org.testcontainers.images.LocalImagesCache.get(LocalImagesCache.java:32)
>   at 
> org.testcontainers.images.AbstractImagePullPolicy.shouldPull(AbstractImagePullPolicy.java:18)
>   at 
> org.testcontainers.images.RemoteDockerImage.resolve(RemoteDockerImage.java:66)
>   at 
> org.testcontainers.images.RemoteDockerImage.resolve(RemoteDockerImage.java:27)
>   at 
> org.testcontainers.utility.LazyFuture.getResolvedValue(LazyFuture.java:17)
>   - locked <0x890763d0> (a 
> java.util.concurrent.atomic.AtomicReference)
>   at 

[GitHub] [flink] rmetzger commented on pull request #14282: [FLINK-20442][python][legal] Updated flink-python NOTICE

2020-12-01 Thread GitBox


rmetzger commented on pull request #14282:
URL: https://github.com/apache/flink/pull/14282#issuecomment-737020504


   I reported the CI failure here: 
https://issues.apache.org/jira/browse/FLINK-20420







[GitHub] [flink] flinkbot edited a comment on pull request #14280: [FLINK-20110][table-planner] Support 'merge' method for first_value a…

2020-12-01 Thread GitBox


flinkbot edited a comment on pull request #14280:
URL: https://github.com/apache/flink/pull/14280#issuecomment-736629561


   
   ## CI report:
   
   * 6a0849e61034e34186780ce45dcf9157f65c0dfd Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10424)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #14280: [FLINK-20110][table-planner] Support 'merge' method for first_value a…

2020-12-01 Thread GitBox


flinkbot edited a comment on pull request #14280:
URL: https://github.com/apache/flink/pull/14280#issuecomment-736629561


   
   ## CI report:
   
   * b2173ba7a73dc7d79ba4f0679b7076400815d116 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10400)
 
   * 6a0849e61034e34186780ce45dcf9157f65c0dfd UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[jira] [Commented] (FLINK-20376) Error in restoring checkpoint/savepoint when Flink is upgraded from 1.9 to 1.11.2

2020-12-01 Thread Partha Pradeep Mishra (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20376?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17242055#comment-17242055
 ] 

Partha Pradeep Mishra commented on FLINK-20376:
---

[~klion26] Trying whether a savepoint triggered on 1.9 can be restored in 1.10 
could be an option.
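The "Cannot map checkpoint/savepoint state for operator ..." error quoted below occurs because Flink matches restored state to operators by operator ID; when IDs are auto-generated from the job graph, even an apparently unrelated topology change can alter them between versions. The commonly documented remedy is to assign stable explicit IDs with `.uid(...)` on each stateful operator. As a minimal, Flink-free sketch of the matching idea only (the class, method, and IDs below are hypothetical, not Flink internals):

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of savepoint-to-job state matching: state is keyed by operator ID.
public class StateMapping {

    // Assign savepoint state to the new program's operators by ID.
    // Throws if the savepoint holds state for an operator ID that the new job
    // lacks, mirroring the "Cannot map checkpoint/savepoint state" failure.
    static Map<String, String> restore(Map<String, String> savepointState,
                                       Map<String, String> newJobOperators) {
        Map<String, String> assigned = new HashMap<>();
        for (Map.Entry<String, String> e : savepointState.entrySet()) {
            if (!newJobOperators.containsKey(e.getKey())) {
                throw new IllegalStateException(
                        "Cannot map state for operator " + e.getKey()
                                + ": not available in the new program");
            }
            assigned.put(e.getKey(), e.getValue());
        }
        return assigned;
    }

    public static void main(String[] args) {
        // Stable, explicit IDs survive topology changes between versions.
        Map<String, String> savepoint = new HashMap<>();
        savepoint.put("source-uid", "kafka-offsets");
        savepoint.put("window-uid", "window-counts");

        Map<String, String> newJob = new HashMap<>();
        newJob.put("source-uid", "");
        newJob.put("window-uid", "");

        System.out.println(restore(savepoint, newJob).size()); // prints 2
    }
}
```

If dropping the unmatched state is acceptable, Flink's CLI also offers `--allowNonRestoredState` when resuming from a savepoint, which skips state that cannot be mapped instead of failing.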

> Error in restoring checkpoint/savepoint when Flink is upgraded from 1.9 to 
> 1.11.2
> -
>
> Key: FLINK-20376
> URL: https://issues.apache.org/jira/browse/FLINK-20376
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Checkpointing
>Reporter: Partha Pradeep Mishra
>Priority: Major
>
> We tried to save checkpoints for one of the flink job (1.9 version) and then 
> import/restore the checkpoints in the newer flink version (1.11.2). The 
> import/resume operation failed with the below error. Please note that both 
> the jobs(i.e. one running in 1.9 and other in 1.11.2) are same binary with no 
> code difference or introduction of new operators. Still we got the below 
> issue.
> _Cannot map checkpoint/savepoint state for operator 
> fbb4ef531e002f8fb3a2052db255adf5 to the new program, because the operator is 
> not available in the new program._
> *Complete Stack Trace :*
> {"errors":["org.apache.flink.runtime.rest.handler.RestHandlerException: Could 
> not execute application.\n\tat 
> org.apache.flink.runtime.webmonitor.handlers.JarRunHandler.lambda$handleRequest$1(JarRunHandler.java:103)\n\tat
>  
> java.util.concurrent.CompletableFuture.uniHandle(CompletableFuture.java:836)\n\tat
>  
> java.util.concurrent.CompletableFuture$UniHandle.tryFire(CompletableFuture.java:811)\n\tat
>  
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)\n\tat
>  
> java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1609)\n\tat
>  
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)\n\tat 
> java.util.concurrent.FutureTask.run(FutureTask.java:266)\n\tat 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)\n\tat
>  
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)\n\tat
>  
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)\n\tat
>  
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)\n\tat
>  java.lang.Thread.run(Thread.java:748)\nCaused by: 
> java.util.concurrent.CompletionException: 
> org.apache.flink.util.FlinkRuntimeException: Could not execute 
> application.\n\tat 
> java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:273)\n\tat
>  
> java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:280)\n\tat
>  
> java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1606)\n\t...
>  7 more\nCaused by: org.apache.flink.util.FlinkRuntimeException: Could not 
> execute application.\n\tat 
> org.apache.flink.client.deployment.application.DetachedApplicationRunner.tryExecuteJobs(DetachedApplicationRunner.java:81)\n\tat
>  
> org.apache.flink.client.deployment.application.DetachedApplicationRunner.run(DetachedApplicationRunner.java:67)\n\tat
>  
> org.apache.flink.runtime.webmonitor.handlers.JarRunHandler.lambda$handleRequest$0(JarRunHandler.java:100)\n\tat
>  
> java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1604)\n\t...
>  7 more\nCaused by: 
> org.apache.flink.client.program.ProgramInvocationException: The main method 
> caused an error: Failed to execute job 
> 'ST1_100Services-preprod-Tumbling-ProcessedBased'.\n\tat 
> org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:302)\n\tat
>  
> org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:198)\n\tat
>  
> org.apache.flink.client.ClientUtils.executeProgram(ClientUtils.java:149)\n\tat
>  
> org.apache.flink.client.deployment.application.DetachedApplicationRunner.tryExecuteJobs(DetachedApplicationRunner.java:78)\n\t...
>  10 more\nCaused by: org.apache.flink.util.FlinkException: Failed to execute 
> job 'ST1_100Services-preprod-Tumbling-ProcessedBased'.\n\tat 
> org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.executeAsync(StreamExecutionEnvironment.java:1821)\n\tat
>  
> org.apache.flink.client.program.StreamContextEnvironment.executeAsync(StreamContextEnvironment.java:128)\n\tat
>  
> org.apache.flink.client.program.StreamContextEnvironment.execute(StreamContextEnvironment.java:76)\n\tat
>  
> org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.java:1697)\n\tat
>  
> 

[jira] [Commented] (FLINK-20376) Error in restoring checkpoint/savepoint when Flink is upgraded from 1.9 to 1.11.2

2020-12-01 Thread Partha Pradeep Mishra (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20376?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17242053#comment-17242053
 ] 

Partha Pradeep Mishra commented on FLINK-20376:
---

[~yunta] Thanks for the response. I don't want to skip the unmatched state 
using --allowNonRestoredState. No operator or parallelism has changed; as 
mentioned, the binary is the same and the parallelism is also the same.

> Error in restoring checkpoint/savepoint when Flink is upgraded from 1.9 to 
> 1.11.2
> -
>
> Key: FLINK-20376
> URL: https://issues.apache.org/jira/browse/FLINK-20376
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Checkpointing
>Reporter: Partha Pradeep Mishra
>Priority: Major
>
> We tried to save checkpoints for one of the flink job (1.9 version) and then 
> import/restore the checkpoints in the newer flink version (1.11.2). The 
> import/resume operation failed with the below error. Please note that both 
> the jobs(i.e. one running in 1.9 and other in 1.11.2) are same binary with no 
> code difference or introduction of new operators. Still we got the below 
> issue.
> _Cannot map checkpoint/savepoint state for operator 
> fbb4ef531e002f8fb3a2052db255adf5 to the new program, because the operator is 
> not available in the new program._
> *Complete Stack Trace :*
> {"errors":["org.apache.flink.runtime.rest.handler.RestHandlerException: Could 
> not execute application.\n\tat 
> org.apache.flink.runtime.webmonitor.handlers.JarRunHandler.lambda$handleRequest$1(JarRunHandler.java:103)\n\tat
>  
> java.util.concurrent.CompletableFuture.uniHandle(CompletableFuture.java:836)\n\tat
>  
> java.util.concurrent.CompletableFuture$UniHandle.tryFire(CompletableFuture.java:811)\n\tat
>  
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)\n\tat
>  
> java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1609)\n\tat
>  
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)\n\tat 
> java.util.concurrent.FutureTask.run(FutureTask.java:266)\n\tat 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)\n\tat
>  
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)\n\tat
>  
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)\n\tat
>  
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)\n\tat
>  java.lang.Thread.run(Thread.java:748)\nCaused by: 
> java.util.concurrent.CompletionException: 
> org.apache.flink.util.FlinkRuntimeException: Could not execute 
> application.\n\tat 
> java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:273)\n\tat
>  
> java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:280)\n\tat
>  
> java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1606)\n\t...
>  7 more\nCaused by: org.apache.flink.util.FlinkRuntimeException: Could not 
> execute application.\n\tat 
> org.apache.flink.client.deployment.application.DetachedApplicationRunner.tryExecuteJobs(DetachedApplicationRunner.java:81)\n\tat
>  
> org.apache.flink.client.deployment.application.DetachedApplicationRunner.run(DetachedApplicationRunner.java:67)\n\tat
>  
> org.apache.flink.runtime.webmonitor.handlers.JarRunHandler.lambda$handleRequest$0(JarRunHandler.java:100)\n\tat
>  
> java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1604)\n\t...
>  7 more\nCaused by: 
> org.apache.flink.client.program.ProgramInvocationException: The main method 
> caused an error: Failed to execute job 
> 'ST1_100Services-preprod-Tumbling-ProcessedBased'.\n\tat 
> org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:302)\n\tat
>  
> org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:198)\n\tat
>  
> org.apache.flink.client.ClientUtils.executeProgram(ClientUtils.java:149)\n\tat
>  
> org.apache.flink.client.deployment.application.DetachedApplicationRunner.tryExecuteJobs(DetachedApplicationRunner.java:78)\n\t...
>  10 more\nCaused by: org.apache.flink.util.FlinkException: Failed to execute 
> job 'ST1_100Services-preprod-Tumbling-ProcessedBased'.\n\tat 
> org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.executeAsync(StreamExecutionEnvironment.java:1821)\n\tat
>  
> org.apache.flink.client.program.StreamContextEnvironment.executeAsync(StreamContextEnvironment.java:128)\n\tat
>  
> org.apache.flink.client.program.StreamContextEnvironment.execute(StreamContextEnvironment.java:76)\n\tat
>  
> 

[jira] [Commented] (FLINK-20414) 【HBase】FileNotFoundException: File /tmp/hbase-deploy/hbase/lib does not exist

2020-12-01 Thread zhisheng (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20414?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17242045#comment-17242045
 ] 

zhisheng commented on FLINK-20414:
--

[~jark] Yes, I solved it, but the exception is not friendly: with log 
level 'INFO', I can't see the DEBUG log
{code:java}
2020-11-30 10:49:45,772 DEBUG org.apache.flink.hbase.shaded.org.apache.hadoop.hbase.util.DynamicClassLoader [] - Finding class again: org.apache.hadoop.hbase.regionserver.NoSuchColumnFamilyException
2020-11-30 10:49:45,774 DEBUG org.apache.flink.hbase.shaded.org.apache.hadoop.hbase.util.DynamicClassLoader [] - Class org.apache.hadoop.hbase.regionserver.NoSuchColumnFamilyException not found - using dynamical class loader
2020-11-30 10:49:45,774 {code}
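A minimal sketch of why those DEBUG lines never show up at INFO level — messages below the configured threshold are dropped entirely. This uses Python's stdlib `logging` as a stand-in for log4j purely to illustrate the filtering behavior; the logger name and messages echo the log above.

```python
# Demonstrate log-level filtering: DEBUG records are suppressed at INFO.
import io
import logging

buf = io.StringIO()
handler = logging.StreamHandler(buf)
log = logging.getLogger("DynamicClassLoader")
log.addHandler(handler)
log.setLevel(logging.INFO)  # typical production default; DEBUG is filtered out

log.debug("Finding class again: NoSuchColumnFamilyException")                 # suppressed
log.warning("Failed to check remote dir status /tmp/hbase-deploy/hbase/lib")  # visible

print(buf.getvalue())
```

This is why surfacing the root cause at WARN/ERROR (rather than only at DEBUG) would make the failure much easier to diagnose.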

> 【HBase】FileNotFoundException: File /tmp/hbase-deploy/hbase/lib does not exist
> -
>
> Key: FLINK-20414
> URL: https://issues.apache.org/jira/browse/FLINK-20414
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / HBase, Table SQL / Ecosystem
>Affects Versions: 1.12.0
>Reporter: zhisheng
>Priority: Major
> Fix For: 1.12.0
>
>
> {code:java}
> CREATE TABLE yarn_log_datagen_test_hbase_sink (
>  appid INT,
>  message STRING
> ) WITH (
>  'connector' = 'datagen',
>  'rows-per-second'='10',
>  'fields.appid.kind'='random',
>  'fields.appid.min'='1',
>  'fields.appid.max'='1000',
>  'fields.message.length'='100'
> );
> CREATE TABLE hbase_test1 (
>  rowkey INT,
>  family1 ROW
> ) WITH (
>  'connector' = 'hbase-1.4',
>  'table-name' = 'test_flink',
>  'zookeeper.quorum' = 'xxx:2181',
>  'sink.parallelism' = '2',
>  'sink.buffer-flush.interval' = '1',
>  'sink.buffer-flush.max-rows' = '1',
>  'sink.buffer-flush.max-size' = '1'
> );
> INSERT INTO hbase_test1 SELECT appid, ROW(message) FROM 
> yarn_log_datagen_test_hbase_sink;
> {code}
> I ran the SQL and got an exception; the data was not written into HBase. I 
> added flink-sql-connector-hbase-1.4_2.11-1.12-SNAPSHOT.jar to the lib folder
>  
> {code:java}
> 2020-11-30 10:49:45,772 DEBUG 
> org.apache.flink.hbase.shaded.org.apache.hadoop.hbase.util.DynamicClassLoader 
> [] - Finding class again: 
> org.apache.hadoop.hbase.regionserver.NoSuchColumnFamilyException2020-11-30 
> 10:49:45,774 DEBUG 
> org.apache.flink.hbase.shaded.org.apache.hadoop.hbase.util.DynamicClassLoader 
> [] - Class org.apache.hadoop.hbase.regionserver.NoSuchColumnFamilyException 
> not found - using dynamical class loader2020-11-30 10:49:45,774 DEBUG 
> org.apache.flink.hbase.shaded.org.apache.hadoop.hbase.util.DynamicClassLoader 
> [] - Finding class: 
> org.apache.hadoop.hbase.regionserver.NoSuchColumnFamilyException2020-11-30 
> 10:49:45,774 DEBUG 
> org.apache.flink.hbase.shaded.org.apache.hadoop.hbase.util.DynamicClassLoader 
> [] - Loading new jar files, if any2020-11-30 10:49:45,776 WARN  
> org.apache.flink.hbase.shaded.org.apache.hadoop.hbase.util.DynamicClassLoader 
> [] - Failed to check remote dir status 
> /tmp/hbase-deploy/hbase/libjava.io.FileNotFoundException: File 
> /tmp/hbase-deploy/hbase/lib does not exist.at 
> org.apache.hadoop.hdfs.DistributedFileSystem.listStatusInternal(DistributedFileSystem.java:795)
>  ~[hadoop-hdfs-2.7.3.4.jar:?]at 
> org.apache.hadoop.hdfs.DistributedFileSystem.access$700(DistributedFileSystem.java:106)
>  ~[hadoop-hdfs-2.7.3.4.jar:?]at 
> org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall(DistributedFileSystem.java:853)
>  ~[hadoop-hdfs-2.7.3.4.jar:?]at 
> org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall(DistributedFileSystem.java:849)
>  ~[hadoop-hdfs-2.7.3.4.jar:?]at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>  ~[hadoop-common-2.7.3.jar:?]at 
> org.apache.hadoop.hdfs.DistributedFileSystem.listStatus(DistributedFileSystem.java:860)
>  ~[hadoop-hdfs-2.7.3.4.jar:?]at 
> org.apache.flink.hbase.shaded.org.apache.hadoop.hbase.util.DynamicClassLoader.loadNewJars(DynamicClassLoader.java:206)
>  [flink-sql-connector-hbase-1.4_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]at 
> org.apache.flink.hbase.shaded.org.apache.hadoop.hbase.util.DynamicClassLoader.tryRefreshClass(DynamicClassLoader.java:168)
>  [flink-sql-connector-hbase-1.4_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]at 
> org.apache.flink.hbase.shaded.org.apache.hadoop.hbase.util.DynamicClassLoader.loadClass(DynamicClassLoader.java:140)
>  [flink-sql-connector-hbase-1.4_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]at 
> java.lang.Class.forName0(Native Method) ~[?:1.8.0_92]at 
> java.lang.Class.forName(Class.java:348) [?:1.8.0_92]at 
> org.apache.flink.hbase.shaded.org.apache.hadoop.hbase.protobuf.ProtobufUtil.toException(ProtobufUtil.java:1753)
>  [flink-sql-connector-hbase-1.4_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]at 
> 

[GitHub] [flink] caozhen1937 commented on a change in pull request #14088: [FLINK-20168][docs-zh] Translate page 'Flink Architecture' into Chinese.

2020-12-01 Thread GitBox


caozhen1937 commented on a change in pull request #14088:
URL: https://github.com/apache/flink/pull/14088#discussion_r533884649



##
File path: docs/concepts/flink-architecture.zh.md
##
@@ -24,229 +24,109 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-Flink is a distributed system and requires effective allocation and management
-of compute resources in order to execute streaming applications. It integrates
-with all common cluster resource managers such as [Hadoop
-YARN](https://hadoop.apache.org/docs/stable/hadoop-yarn/hadoop-yarn-site/YARN.html),
-[Apache Mesos](https://mesos.apache.org/) and
-[Kubernetes](https://kubernetes.io/), but can also be set up to run as a
-standalone cluster or even as a library.
+Flink 是一个分布式系统,需要有效分配和管理计算资源才能执行流应用程序。它集成了所有常见的集群资源管理器,例如[Hadoop 
YARN](https://hadoop.apache.org/docs/stable/hadoop-yarn/hadoop-yarn-site/YARN.html)、[Apache
 
Mesos](https://mesos.apache.org/)和[Kubernetes](https://kubernetes.io/),但也可以设置作为独立集群甚至库运行。
 
-This section contains an overview of Flink’s architecture and describes how its
-main components interact to execute applications and recover from failures.
+本节概述了 Flink 架构,并且描述了其主要组件如何交互以执行应用程序和从故障中恢复。
 
 * This will be replaced by the TOC
 {:toc}
 
-## Anatomy of a Flink Cluster
+## Flink 集群剖析
 
-The Flink runtime consists of two types of processes: a _JobManager_ and one 
or more _TaskManagers_.
+Flink 运行时由两种类型的进程组成:_JobManager_和一个或者多个_TaskManager_。
 
 
 
-The *Client* is not part of the runtime and program execution, but is used to
-prepare and send a dataflow to the JobManager.  After that, the client can
-disconnect (_detached mode_), or stay connected to receive progress reports
-(_attached mode_). The client runs either as part of the Java/Scala program
-that triggers the execution, or in the command line process `./bin/flink run
-...`.
-
-The JobManager and TaskManagers can be started in various ways: directly on
-the machines as a [standalone cluster]({% link
-deployment/resource-providers/standalone/index.zh.md %}), in containers, or 
managed by resource
-frameworks like [YARN]({% link deployment/resource-providers/yarn.zh.md
-%}) or [Mesos]({% link deployment/resource-providers/mesos.zh.md %}).
-TaskManagers connect to JobManagers, announcing themselves as available, and
-are assigned work.
+*Client* 不是运行时和程序执行的一部分,而是用于准备数据流并将其发送给 
JobManager。之后,客户端可以断开连接(_分离模式_),或保持连接来接收进程报告(_附加模式_)。客户端可以作为触发执行 Java/Scala 
程序的一部分运行,也可以在命令行进程`./bin/flink run ...`中运行。
+
+可以通过各种方式启动 JobManager 和 TaskManager:直接在机器上作为[standalone 集群]({% link 
deployment/resource-providers/standalone/index.zh.md %})启动、在容器中启动、或者通过[YARN]({% 
link deployment/resource-providers/yarn.zh.md %})或[Mesos]({% link 
deployment/resource-providers/mesos.zh.md %})等资源框架管理启动。TaskManager 连接到 
JobManagers,宣布自己可用,并被分配工作。

Review comment:
   Does "managed" here mean being "managed and started" by the resource framework?









[GitHub] [flink] HuangXingBo commented on a change in pull request #14088: [FLINK-20168][docs-zh] Translate page 'Flink Architecture' into Chinese.

2020-12-01 Thread GitBox


HuangXingBo commented on a change in pull request #14088:
URL: https://github.com/apache/flink/pull/14088#discussion_r533881048



##
File path: docs/concepts/flink-architecture.zh.md
##
@@ -24,229 +24,109 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-Flink is a distributed system and requires effective allocation and management
-of compute resources in order to execute streaming applications. It integrates
-with all common cluster resource managers such as [Hadoop
-YARN](https://hadoop.apache.org/docs/stable/hadoop-yarn/hadoop-yarn-site/YARN.html),
-[Apache Mesos](https://mesos.apache.org/) and
-[Kubernetes](https://kubernetes.io/), but can also be set up to run as a
-standalone cluster or even as a library.
+Flink 是一个分布式系统,需要有效分配和管理计算资源才能执行流应用程序。它集成了所有常见的集群资源管理器,例如[Hadoop 
YARN](https://hadoop.apache.org/docs/stable/hadoop-yarn/hadoop-yarn-site/YARN.html)、[Apache
 
Mesos](https://mesos.apache.org/)和[Kubernetes](https://kubernetes.io/),但也可以设置作为独立集群甚至库运行。
 
-This section contains an overview of Flink’s architecture and describes how its
-main components interact to execute applications and recover from failures.
+本节概述了 Flink 架构,并且描述了其主要组件如何交互以执行应用程序和从故障中恢复。
 
 * This will be replaced by the TOC
 {:toc}
 
-## Anatomy of a Flink Cluster
+## Flink 集群剖析
 
-The Flink runtime consists of two types of processes: a _JobManager_ and one 
or more _TaskManagers_.
+Flink 运行时由两种类型的进程组成:_JobManager_和一个或者多个_TaskManager_。

Review comment:
   
   I think this description is probably written with the figure below in mind. 
The text discusses the number of TaskManagers but says nothing about the 
JobManager's, which readers would find odd; it might also be better to stay 
consistent with the English "a JobManager". What do you think?









[GitHub] [flink] flinkbot edited a comment on pull request #14088: [FLINK-20168][docs-zh] Translate page 'Flink Architecture' into Chinese.

2020-12-01 Thread GitBox


flinkbot edited a comment on pull request #14088:
URL: https://github.com/apache/flink/pull/14088#issuecomment-728662796


   
   ## CI report:
   
   * f14b52c30b09d46b9fa03f840d0642f764122e8f Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10423)
 
   
   







[GitHub] [flink] flinkbot edited a comment on pull request #14003: [FLINK-19527][Doc]Flink SQL Getting Started

2020-12-01 Thread GitBox


flinkbot edited a comment on pull request #14003:
URL: https://github.com/apache/flink/pull/14003#issuecomment-724292850


   
   ## CI report:
   
   * 34877c626eb981f4dca0873d7ba2d5547b16ff2b Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10419)
 
   
   







[GitHub] [flink] HuangXingBo commented on a change in pull request #14088: [FLINK-20168][docs-zh] Translate page 'Flink Architecture' into Chinese.

2020-12-01 Thread GitBox


HuangXingBo commented on a change in pull request #14088:
URL: https://github.com/apache/flink/pull/14088#discussion_r533879310



##
File path: docs/concepts/flink-architecture.zh.md
##
@@ -24,229 +24,109 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-Flink is a distributed system and requires effective allocation and management
-of compute resources in order to execute streaming applications. It integrates
-with all common cluster resource managers such as [Hadoop
-YARN](https://hadoop.apache.org/docs/stable/hadoop-yarn/hadoop-yarn-site/YARN.html),
-[Apache Mesos](https://mesos.apache.org/) and
-[Kubernetes](https://kubernetes.io/), but can also be set up to run as a
-standalone cluster or even as a library.
+Flink 是一个分布式系统,需要有效分配和管理计算资源才能执行流应用程序。它集成了所有常见的集群资源管理器,例如[Hadoop 
YARN](https://hadoop.apache.org/docs/stable/hadoop-yarn/hadoop-yarn-site/YARN.html)、[Apache
 
Mesos](https://mesos.apache.org/)和[Kubernetes](https://kubernetes.io/),但也可以设置作为独立集群甚至库运行。
 
-This section contains an overview of Flink’s architecture and describes how its
-main components interact to execute applications and recover from failures.
+本节概述了 Flink 架构,并且描述了其主要组件如何交互以执行应用程序和从故障中恢复。
 
 * This will be replaced by the TOC
 {:toc}
 
-## Anatomy of a Flink Cluster
+## Flink 集群剖析
 
-The Flink runtime consists of two types of processes: a _JobManager_ and one 
or more _TaskManagers_.
+Flink 运行时由两种类型的进程组成:_JobManager_和一个或者多个_TaskManager_。
 
 
 
-The *Client* is not part of the runtime and program execution, but is used to
-prepare and send a dataflow to the JobManager.  After that, the client can
-disconnect (_detached mode_), or stay connected to receive progress reports
-(_attached mode_). The client runs either as part of the Java/Scala program
-that triggers the execution, or in the command line process `./bin/flink run
-...`.
-
-The JobManager and TaskManagers can be started in various ways: directly on
-the machines as a [standalone cluster]({% link
-deployment/resource-providers/standalone/index.zh.md %}), in containers, or 
managed by resource
-frameworks like [YARN]({% link deployment/resource-providers/yarn.zh.md
-%}) or [Mesos]({% link deployment/resource-providers/mesos.zh.md %}).
-TaskManagers connect to JobManagers, announcing themselves as available, and
-are assigned work.
+*Client* 不是运行时和程序执行的一部分,而是用于准备数据流并将其发送给 
JobManager。之后,客户端可以断开连接(_分离模式_),或保持连接来接收进程报告(_附加模式_)。客户端可以作为触发执行 Java/Scala 
程序的一部分运行,也可以在命令行进程`./bin/flink run ...`中运行。
+
+可以通过各种方式启动 JobManager 和 TaskManager:直接在机器上作为[standalone 集群]({% link 
deployment/resource-providers/standalone/index.zh.md %})启动、在容器中启动、或者通过[YARN]({% 
link deployment/resource-providers/yarn.zh.md %})或[Mesos]({% link 
deployment/resource-providers/mesos.zh.md %})等资源框架管理启动。TaskManager 连接到 
JobManagers,宣布自己可用,并被分配工作。
 
 ### JobManager
 
-The _JobManager_ has a number of responsibilities related to coordinating the 
distributed execution of Flink Applications:
-it decides when to schedule the next task (or set of tasks), reacts to finished
-tasks or execution failures, coordinates checkpoints, and coordinates recovery 
on
-failures, among others. This process consists of three different components:
+_JobManager_具有许多与协调 Flink 应用程序的分布式执行有关的职责:它决定何时调度下一个 task(或一组 task)、对完成的 task 
或执行失败做出反应、协调 checkpoint、并且协调从失败中恢复等等。这个进程由三个不同的组件组成:
 
   * **ResourceManager** 
 
-The _ResourceManager_ is responsible for resource de-/allocation and
-provisioning in a Flink cluster — it manages **task slots**, which are the
-unit of resource scheduling in a Flink cluster (see 
[TaskManagers](#taskmanagers)).
-Flink implements multiple ResourceManagers for different environments and
-resource providers such as YARN, Mesos, Kubernetes and standalone
-deployments. In a standalone setup, the ResourceManager can only distribute
-the slots of available TaskManagers and cannot start new TaskManagers on
-its own.  
+_ResourceManager_负责 Flink 集群中的资源删除/分配和供应 - 它管理 **task slots**,这是 Flink 
集群中资源调度的单位(请参考[TaskManagers](#taskmanagers))。Flink 为不同的环境和资源提供者(例如 
YARN、Mesos、Kubernetes 和 standalone 部署)实现了多个 ResourceManager。在 standalone 
设置中,ResourceManager 只能分配可用 TaskManager 的 slots,而不能自行启动新的 TaskManager。
 
   * **Dispatcher** 
 
-The _Dispatcher_ provides a REST interface to submit Flink applications for
-execution and starts a new JobMaster for each submitted job. It
-also runs the Flink WebUI to provide information about job executions.
+_Dispatcher_ 提供了一个 REST 接口,用来提交 Flink 应用程序执行,并为每个提交的作业启动一个新的 
JobMaster。它还运行 Flink WebUI 用来提供作业执行信息。
 
   * **JobMaster** 
 
-A _JobMaster_ is responsible for managing the execution of a single
-[JobGraph]({% link concepts/glossary.zh.md %}#logical-graph).
-Multiple jobs can run simultaneously in a Flink cluster, each having its
-own JobMaster.
+_JobMaster_ 负责管理单个[JobGraph]({% link concepts/glossary.zh.md 
%}#logical-graph)的执行。Flink 集群中可以同时运行多个作业,每个作业都有自己的 JobMaster。
 

[jira] [Updated] (FLINK-20448) Obsolete generated avro classes

2020-12-01 Thread Rui Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20448?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rui Li updated FLINK-20448:
---
Description: We generate avro classes for testing in some modules like 
{{flink-avro}} and {{flink-parquet}}. The generated classes are put into 
"/src/test/java/" (ignored by git), which means {{mvn clean}} doesn't force a 
re-generation. This can cause problems if, for example, we change the avro 
version, because the new build would still use the generated classes from 
previous build.
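A hedged workaround sketch for the problem described above: since {{mvn clean}} does not touch the git-ignored generated sources under src/test/java/, deleting them explicitly before rebuilding forces regeneration against the current Avro version. The module path and package layout below are illustrative, not the exact Flink layout; the sketch simulates the directory structure in a temp dir so it runs anywhere.

```shell
set -eu

# Simulate a module layout in a temp dir (in a real checkout, point MODULE
# at e.g. flink-formats/flink-avro instead):
MODULE="$(mktemp -d)/flink-avro"
mkdir -p "$MODULE/src/test/java/org/apache/flink/generated"
echo 'class Stale {}' > "$MODULE/src/test/java/org/apache/flink/generated/Stale.java"

# The actual workaround step -- drop the previously generated test classes
# that "mvn clean" leaves behind:
find "$MODULE/src/test/java" -name '*.java' -delete

# In a real checkout one would then rebuild so the classes are regenerated,
# e.g.:
#   mvn -pl flink-formats/flink-avro clean test-compile
echo "stale generated sources removed"
```

A more durable fix would be to generate into target/generated-test-sources/ and attach it via build-helper, so {{mvn clean}} removes the classes automatically.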

> Obsolete generated avro classes
> ---
>
> Key: FLINK-20448
> URL: https://issues.apache.org/jira/browse/FLINK-20448
> Project: Flink
>  Issue Type: Test
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
>Reporter: Rui Li
>Priority: Major
>
> We generate avro classes for testing in some modules like {{flink-avro}} and 
> {{flink-parquet}}. The generated classes are put into "/src/test/java/" 
> (ignored by git), which means {{mvn clean}} doesn't force a re-generation. 
> This can cause problems if, for example, we change the avro version, because 
> the new build would still use the generated classes from previous build.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] caozhen1937 commented on a change in pull request #14088: [FLINK-20168][docs-zh] Translate page 'Flink Architecture' into Chinese.

2020-12-01 Thread GitBox


caozhen1937 commented on a change in pull request #14088:
URL: https://github.com/apache/flink/pull/14088#discussion_r533877640



##
File path: docs/concepts/flink-architecture.zh.md
##
@@ -24,229 +24,109 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-Flink is a distributed system and requires effective allocation and management
-of compute resources in order to execute streaming applications. It integrates
-with all common cluster resource managers such as [Hadoop
-YARN](https://hadoop.apache.org/docs/stable/hadoop-yarn/hadoop-yarn-site/YARN.html),
-[Apache Mesos](https://mesos.apache.org/) and
-[Kubernetes](https://kubernetes.io/), but can also be set up to run as a
-standalone cluster or even as a library.
+Flink 是一个分布式系统,需要有效分配和管理计算资源才能执行流应用程序。它集成了所有常见的集群资源管理器,例如[Hadoop 
YARN](https://hadoop.apache.org/docs/stable/hadoop-yarn/hadoop-yarn-site/YARN.html)、[Apache
 
Mesos](https://mesos.apache.org/)和[Kubernetes](https://kubernetes.io/),但也可以设置作为独立集群甚至库运行。
 
-This section contains an overview of Flink’s architecture and describes how its
-main components interact to execute applications and recover from failures.
+本节概述了 Flink 架构,并且描述了其主要组件如何交互以执行应用程序和从故障中恢复。
 
 * This will be replaced by the TOC
 {:toc}
 
-## Anatomy of a Flink Cluster
+## Flink 集群剖析
 
-The Flink runtime consists of two types of processes: a _JobManager_ and one 
or more _TaskManagers_.
+Flink 运行时由两种类型的进程组成:_JobManager_和一个或者多个_TaskManager_。
 
 
 
-The *Client* is not part of the runtime and program execution, but is used to
-prepare and send a dataflow to the JobManager.  After that, the client can
-disconnect (_detached mode_), or stay connected to receive progress reports
-(_attached mode_). The client runs either as part of the Java/Scala program
-that triggers the execution, or in the command line process `./bin/flink run
-...`.
-
-The JobManager and TaskManagers can be started in various ways: directly on
-the machines as a [standalone cluster]({% link
-deployment/resource-providers/standalone/index.zh.md %}), in containers, or 
managed by resource
-frameworks like [YARN]({% link deployment/resource-providers/yarn.zh.md
-%}) or [Mesos]({% link deployment/resource-providers/mesos.zh.md %}).
-TaskManagers connect to JobManagers, announcing themselves as available, and
-are assigned work.
+*Client* 不是运行时和程序执行的一部分,而是用于准备数据流并将其发送给 
JobManager。之后,客户端可以断开连接(_分离模式_),或保持连接来接收进程报告(_附加模式_)。客户端可以作为触发执行 Java/Scala 
程序的一部分运行,也可以在命令行进程`./bin/flink run ...`中运行。
+
+可以通过各种方式启动 JobManager 和 TaskManager:直接在机器上作为[standalone 集群]({% link 
deployment/resource-providers/standalone/index.zh.md %})启动、在容器中启动、或者通过[YARN]({% 
link deployment/resource-providers/yarn.zh.md %})或[Mesos]({% link 
deployment/resource-providers/mesos.zh.md %})等资源框架管理启动。TaskManager 连接到 
JobManagers,宣布自己可用,并被分配工作。
 
 ### JobManager
 
-The _JobManager_ has a number of responsibilities related to coordinating the 
distributed execution of Flink Applications:
-it decides when to schedule the next task (or set of tasks), reacts to finished
-tasks or execution failures, coordinates checkpoints, and coordinates recovery 
on
-failures, among others. This process consists of three different components:
+_JobManager_具有许多与协调 Flink 应用程序的分布式执行有关的职责:它决定何时调度下一个 task(或一组 task)、对完成的 task 
或执行失败做出反应、协调 checkpoint、并且协调从失败中恢复等等。这个进程由三个不同的组件组成:
 
   * **ResourceManager** 
 
-The _ResourceManager_ is responsible for resource de-/allocation and
-provisioning in a Flink cluster — it manages **task slots**, which are the
-unit of resource scheduling in a Flink cluster (see 
[TaskManagers](#taskmanagers)).
-Flink implements multiple ResourceManagers for different environments and
-resource providers such as YARN, Mesos, Kubernetes and standalone
-deployments. In a standalone setup, the ResourceManager can only distribute
-the slots of available TaskManagers and cannot start new TaskManagers on
-its own.  
+_ResourceManager_负责 Flink 集群中的资源删除/分配和供应 - 它管理 **task slots**,这是 Flink 
集群中资源调度的单位(请参考[TaskManagers](#taskmanagers))。Flink 为不同的环境和资源提供者(例如 
YARN、Mesos、Kubernetes 和 standalone 部署)实现了多个 ResourceManager。在 standalone 
设置中,ResourceManager 只能分配可用 TaskManager 的 slots,而不能自行启动新的 TaskManager。
 
   * **Dispatcher** 
 
-The _Dispatcher_ provides a REST interface to submit Flink applications for
-execution and starts a new JobMaster for each submitted job. It
-also runs the Flink WebUI to provide information about job executions.
+_Dispatcher_ 提供了一个 REST 接口,用来提交 Flink 应用程序执行,并为每个提交的作业启动一个新的 
JobMaster。它还运行 Flink WebUI 用来提供作业执行信息。
 
   * **JobMaster** 
 
-A _JobMaster_ is responsible for managing the execution of a single
-[JobGraph]({% link concepts/glossary.zh.md %}#logical-graph).
-Multiple jobs can run simultaneously in a Flink cluster, each having its
-own JobMaster.
+_JobMaster_ 负责管理单个[JobGraph]({% link concepts/glossary.zh.md 
%}#logical-graph)的执行。Flink 集群中可以同时运行多个作业,每个作业都有自己的 JobMaster。
 

[GitHub] [flink] caozhen1937 commented on a change in pull request #14088: [FLINK-20168][docs-zh] Translate page 'Flink Architecture' into Chinese.

2020-12-01 Thread GitBox


caozhen1937 commented on a change in pull request #14088:
URL: https://github.com/apache/flink/pull/14088#discussion_r533877242



##
File path: docs/concepts/flink-architecture.zh.md
##
@@ -24,229 +24,109 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-Flink is a distributed system and requires effective allocation and management
-of compute resources in order to execute streaming applications. It integrates
-with all common cluster resource managers such as [Hadoop
-YARN](https://hadoop.apache.org/docs/stable/hadoop-yarn/hadoop-yarn-site/YARN.html),
-[Apache Mesos](https://mesos.apache.org/) and
-[Kubernetes](https://kubernetes.io/), but can also be set up to run as a
-standalone cluster or even as a library.
+Flink 是一个分布式系统,需要有效分配和管理计算资源才能执行流应用程序。它集成了所有常见的集群资源管理器,例如[Hadoop 
YARN](https://hadoop.apache.org/docs/stable/hadoop-yarn/hadoop-yarn-site/YARN.html)、[Apache
 
Mesos](https://mesos.apache.org/)和[Kubernetes](https://kubernetes.io/),但也可以设置作为独立集群甚至库运行。
 
-This section contains an overview of Flink’s architecture and describes how its
-main components interact to execute applications and recover from failures.
+本节概述了 Flink 架构,并且描述了其主要组件如何交互以执行应用程序和从故障中恢复。
 
 * This will be replaced by the TOC
 {:toc}
 
-## Anatomy of a Flink Cluster
+## Flink 集群剖析
 
-The Flink runtime consists of two types of processes: a _JobManager_ and one 
or more _TaskManagers_.
+Flink 运行时由两种类型的进程组成:_JobManager_和一个或者多个_TaskManager_。
 
 
 
-The *Client* is not part of the runtime and program execution, but is used to
-prepare and send a dataflow to the JobManager.  After that, the client can
-disconnect (_detached mode_), or stay connected to receive progress reports
-(_attached mode_). The client runs either as part of the Java/Scala program
-that triggers the execution, or in the command line process `./bin/flink run
-...`.
-
-The JobManager and TaskManagers can be started in various ways: directly on
-the machines as a [standalone cluster]({% link
-deployment/resource-providers/standalone/index.zh.md %}), in containers, or 
managed by resource
-frameworks like [YARN]({% link deployment/resource-providers/yarn.zh.md
-%}) or [Mesos]({% link deployment/resource-providers/mesos.zh.md %}).
-TaskManagers connect to JobManagers, announcing themselves as available, and
-are assigned work.
+*Client* 不是运行时和程序执行的一部分,而是用于准备数据流并将其发送给 
JobManager。之后,客户端可以断开连接(_分离模式_),或保持连接来接收进程报告(_附加模式_)。客户端可以作为触发执行 Java/Scala 
程序的一部分运行,也可以在命令行进程`./bin/flink run ...`中运行。
+
+可以通过各种方式启动 JobManager 和 TaskManager:直接在机器上作为[standalone 集群]({% link 
deployment/resource-providers/standalone/index.zh.md %})启动、在容器中启动、或者通过[YARN]({% 
link deployment/resource-providers/yarn.zh.md %})或[Mesos]({% link 
deployment/resource-providers/mesos.zh.md %})等资源框架管理启动。TaskManager 连接到 
JobManagers,宣布自己可用,并被分配工作。
 
 ### JobManager
 
-The _JobManager_ has a number of responsibilities related to coordinating the 
distributed execution of Flink Applications:
-it decides when to schedule the next task (or set of tasks), reacts to finished
-tasks or execution failures, coordinates checkpoints, and coordinates recovery 
on
-failures, among others. This process consists of three different components:
+_JobManager_具有许多与协调 Flink 应用程序的分布式执行有关的职责:它决定何时调度下一个 task(或一组 task)、对完成的 task 
或执行失败做出反应、协调 checkpoint、并且协调从失败中恢复等等。这个进程由三个不同的组件组成:
 
   * **ResourceManager** 
 
-The _ResourceManager_ is responsible for resource de-/allocation and
-provisioning in a Flink cluster — it manages **task slots**, which are the
-unit of resource scheduling in a Flink cluster (see 
[TaskManagers](#taskmanagers)).
-Flink implements multiple ResourceManagers for different environments and
-resource providers such as YARN, Mesos, Kubernetes and standalone
-deployments. In a standalone setup, the ResourceManager can only distribute
-the slots of available TaskManagers and cannot start new TaskManagers on
-its own.  
+_ResourceManager_负责 Flink 集群中的资源删除/分配和供应 - 它管理 **task slots**,这是 Flink 
集群中资源调度的单位(请参考[TaskManagers](#taskmanagers))。Flink 为不同的环境和资源提供者(例如 
YARN、Mesos、Kubernetes 和 standalone 部署)实现了多个 ResourceManager。在 standalone 
设置中,ResourceManager 只能分配可用 TaskManager 的 slots,而不能自行启动新的 TaskManager。
 
   * **Dispatcher** 
 
-The _Dispatcher_ provides a REST interface to submit Flink applications for
-execution and starts a new JobMaster for each submitted job. It
-also runs the Flink WebUI to provide information about job executions.
+_Dispatcher_ 提供了一个 REST 接口,用来提交 Flink 应用程序执行,并为每个提交的作业启动一个新的 
JobMaster。它还运行 Flink WebUI 用来提供作业执行信息。
 
   * **JobMaster** 
 
-A _JobMaster_ is responsible for managing the execution of a single
-[JobGraph]({% link concepts/glossary.zh.md %}#logical-graph).
-Multiple jobs can run simultaneously in a Flink cluster, each having its
-own JobMaster.
+_JobMaster_ 负责管理单个[JobGraph]({% link concepts/glossary.zh.md 
%}#logical-graph)的执行。Flink 集群中可以同时运行多个作业,每个作业都有自己的 JobMaster。
 


[jira] [Created] (FLINK-20448) Obsolete generated avro classes

2020-12-01 Thread Rui Li (Jira)
Rui Li created FLINK-20448:
--

 Summary: Obsolete generated avro classes
 Key: FLINK-20448
 URL: https://issues.apache.org/jira/browse/FLINK-20448
 Project: Flink
  Issue Type: Test
  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
Reporter: Rui Li






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] caozhen1937 commented on a change in pull request #14088: [FLINK-20168][docs-zh] Translate page 'Flink Architecture' into Chinese.

2020-12-01 Thread GitBox


caozhen1937 commented on a change in pull request #14088:
URL: https://github.com/apache/flink/pull/14088#discussion_r533875476



##
File path: docs/concepts/flink-architecture.zh.md
##
@@ -24,229 +24,109 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-Flink is a distributed system and requires effective allocation and management
-of compute resources in order to execute streaming applications. It integrates
-with all common cluster resource managers such as [Hadoop
-YARN](https://hadoop.apache.org/docs/stable/hadoop-yarn/hadoop-yarn-site/YARN.html),
-[Apache Mesos](https://mesos.apache.org/) and
-[Kubernetes](https://kubernetes.io/), but can also be set up to run as a
-standalone cluster or even as a library.
+Flink 是一个分布式系统,需要有效分配和管理计算资源才能执行流应用程序。它集成了所有常见的集群资源管理器,例如[Hadoop 
YARN](https://hadoop.apache.org/docs/stable/hadoop-yarn/hadoop-yarn-site/YARN.html)、[Apache
 
Mesos](https://mesos.apache.org/)和[Kubernetes](https://kubernetes.io/),但也可以设置作为独立集群甚至库运行。
 
-This section contains an overview of Flink’s architecture and describes how its
-main components interact to execute applications and recover from failures.
+本节概述了 Flink 架构,并且描述了其主要组件如何交互以执行应用程序和从故障中恢复。
 
 * This will be replaced by the TOC
 {:toc}
 
-## Anatomy of a Flink Cluster
+## Flink 集群剖析
 
-The Flink runtime consists of two types of processes: a _JobManager_ and one 
or more _TaskManagers_.
+Flink 运行时由两种类型的进程组成:_JobManager_和一个或者多个_TaskManager_。

Review comment:
   This sentence is about the two types of processes. I think it is enough to name the process types without specifying how many there are, and there may be more than one JobManager.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] V1ncentzzZ commented on pull request #14165: [FLINK-19687][table] Support to get execution plan from StatementSet

2020-12-01 Thread GitBox


V1ncentzzZ commented on pull request #14165:
URL: https://github.com/apache/flink/pull/14165#issuecomment-736961835


   cc @wuchong 







[jira] [Commented] (FLINK-19398) Hive connector fails with IllegalAccessError if submitted as usercode

2020-12-01 Thread Yun Gao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19398?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17242014#comment-17242014
 ] 

Yun Gao commented on FLINK-19398:
-

OK, I'll open the PR; very sorry for not noticing the message.

> Hive connector fails with IllegalAccessError if submitted as usercode
> -
>
> Key: FLINK-19398
> URL: https://issues.apache.org/jira/browse/FLINK-19398
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive
>Affects Versions: 1.12.0, 1.11.2
>Reporter: Fabian Hueske
>Assignee: Yun Gao
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.12.0, 1.11.4
>
>
> Using Flink's Hive connector fails if the dependency is loaded with the user 
> code classloader with the following exception.
> {code:java}
> java.lang.IllegalAccessError: tried to access method 
> org.apache.flink.streaming.api.functions.sink.filesystem.Buckets.(Lorg/apache/flink/core/fs/Path;Lorg/apache/flink/streaming/api/functions/sink/filesystem/BucketAssigner;Lorg/apache/flink/streaming/api/functions/sink/filesystem/BucketFactory;Lorg/apache/flink/streaming/api/functions/sink/filesystem/BucketWriter;Lorg/apache/flink/streaming/api/functions/sink/filesystem/RollingPolicy;ILorg/apache/flink/streaming/api/functions/sink/filesystem/OutputFileConfig;)V
>  from class 
> org.apache.flink.streaming.api.functions.sink.filesystem.HadoopPathBasedBulkFormatBuilder
>   at 
> org.apache.flink.streaming.api.functions.sink.filesystem.HadoopPathBasedBulkFormatBuilder.createBuckets(HadoopPathBasedBulkFormatBuilder.java:127)
>  
> ~[flink-connector-hive_2.12-1.11.2-stream2-SNAPSHOT.jar:1.11.2-stream2-SNAPSHOT]
>   at 
> org.apache.flink.table.filesystem.stream.StreamingFileWriter.initializeState(StreamingFileWriter.java:81)
>  ~[flink-table-blink_2.12-1.11.2-stream2-SNAPSHOT.jar:1.11.2-stream2-SNAPSHOT]
>   at 
> org.apache.flink.streaming.api.operators.StreamOperatorStateHandler.initializeOperatorState(StreamOperatorStateHandler.java:106)
>  ~[flink-dist_2.12-1.11.2-stream2-SNAPSHOT.jar:1.11.2-stream2-SNAPSHOT]
>   at 
> org.apache.flink.streaming.api.operators.AbstractStreamOperator.initializeState(AbstractStreamOperator.java:258)
>  ~[flink-dist_2.12-1.11.2-stream2-SNAPSHOT.jar:1.11.2-stream2-SNAPSHOT]
>   at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain.initializeStateAndOpenOperators(OperatorChain.java:290)
>  ~[flink-dist_2.12-1.11.2-stream2-SNAPSHOT.jar:1.11.2-stream2-SNAPSHOT]
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.lambda$beforeInvoke$0(StreamTask.java:479)
>  ~[flink-dist_2.12-1.11.2-stream2-SNAPSHOT.jar:1.11.2-stream2-SNAPSHOT]
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTaskActionExecutor$SynchronizedStreamTaskActionExecutor.runThrowing(StreamTaskActionExecutor.java:92)
>  ~[flink-dist_2.12-1.11.2-stream2-SNAPSHOT.jar:1.11.2-stream2-SNAPSHOT]
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.beforeInvoke(StreamTask.java:475)
>  ~[flink-dist_2.12-1.11.2-stream2-SNAPSHOT.jar:1.11.2-stream2-SNAPSHOT]
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:528)
>  ~[flink-dist_2.12-1.11.2-stream2-SNAPSHOT.jar:1.11.2-stream2-SNAPSHOT]
>   at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:721) 
> [flink-dist_2.12-1.11.2-stream2-SNAPSHOT.jar:1.11.2-stream2-SNAPSHOT]
>   at org.apache.flink.runtime.taskmanager.Task.run(Task.java:546) 
> [flink-dist_2.12-1.11.2-stream2-SNAPSHOT.jar:1.11.2-stream2-SNAPSHOT]
> {code}
> The problem is the constructor of {{Buckets}} with default visibility which 
> is called from {{HadoopPathBasedBulkFormatBuilder}} . This works as long as 
> both classes are loaded with the same classloader but when they are loaded in 
> different classloaders, the access fails.
> {{Buckets}} is loaded with the system CL because it is part of 
> flink-streaming-java. 
>  
> To solve this issue, we should change the visibility of the {{Buckets}} 
> constructor to {{public}}.
>  
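
A minimal, self-contained sketch of the failure mode described above. The class and file names here (`Target`, `Caller`, `RuntimePackageDemo`) are hypothetical stand-ins, not the actual Flink classes: `Target` plays the role of {{Buckets}} with its package-private constructor, `Caller` the role of {{HadoopPathBasedBulkFormatBuilder}}. When `Caller` is re-defined by a child classloader while `Target` stays in the parent, the two classes end up in different *runtime* packages even though they share the same package name, so the package-private constructor call fails with `IllegalAccessError`; making the constructor `public`, as proposed, removes the access check entirely.

```java
import java.io.InputStream;
import java.lang.reflect.InvocationTargetException;

// Stand-in for Buckets: a package-private constructor.
class Target {
    Target() {}
}

// Stand-in for HadoopPathBasedBulkFormatBuilder: calls the package-private ctor.
class Caller {
    public static void build() {
        new Target();  // compiles fine: same source package
    }
}

public class RuntimePackageDemo {

    /** Defines Caller itself; delegates every other class to the parent. */
    static class IsolatingLoader extends ClassLoader {
        IsolatingLoader(ClassLoader parent) { super(parent); }

        @Override
        protected Class<?> loadClass(String name, boolean resolve) throws ClassNotFoundException {
            if (!name.equals("Caller")) {
                return super.loadClass(name, resolve);
            }
            try (InputStream in = getResourceAsStream("Caller.class")) {
                if (in == null) {
                    throw new ClassNotFoundException("Caller.class bytes not found");
                }
                byte[] bytes = in.readAllBytes();
                return defineClass(name, bytes, 0, bytes.length);
            } catch (java.io.IOException e) {
                throw new ClassNotFoundException(name, e);
            }
        }
    }

    /** Returns the error seen when Caller and Target come from different loaders. */
    static String crossLoaderResult() throws Exception {
        Class<?> caller = new IsolatingLoader(RuntimePackageDemo.class.getClassLoader())
                .loadClass("Caller");
        try {
            caller.getMethod("build").invoke(null);
            return "ok";
        } catch (InvocationTargetException e) {
            // The JVM rejects the package-private access across runtime packages.
            return e.getCause().getClass().getSimpleName();
        }
    }

    public static void main(String[] args) throws Exception {
        Caller.build();  // same loader: the package-private call succeeds
        System.out.println("same loader: ok");
        System.out.println("cross loader: " + crossLoaderResult());
    }
}
```

Compile and run with `javac RuntimePackageDemo.java && java RuntimePackageDemo`; with a normal on-disk classpath it should print `same loader: ok` followed by `cross loader: IllegalAccessError`.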





[GitHub] [flink] HuangXingBo commented on a change in pull request #14088: [FLINK-20168][docs-zh] Translate page 'Flink Architecture' into Chinese.

2020-12-01 Thread GitBox


HuangXingBo commented on a change in pull request #14088:
URL: https://github.com/apache/flink/pull/14088#discussion_r533856595



##
File path: docs/concepts/flink-architecture.zh.md
##
@@ -24,229 +24,109 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-Flink is a distributed system and requires effective allocation and management
-of compute resources in order to execute streaming applications. It integrates
-with all common cluster resource managers such as [Hadoop
-YARN](https://hadoop.apache.org/docs/stable/hadoop-yarn/hadoop-yarn-site/YARN.html),
-[Apache Mesos](https://mesos.apache.org/) and
-[Kubernetes](https://kubernetes.io/), but can also be set up to run as a
-standalone cluster or even as a library.
+Flink 是一个分布式系统,需要有效分配和管理计算资源才能执行流应用程序。它集成了所有常见的集群资源管理器,例如[Hadoop 
YARN](https://hadoop.apache.org/docs/stable/hadoop-yarn/hadoop-yarn-site/YARN.html)、[Apache
 
Mesos](https://mesos.apache.org/)和[Kubernetes](https://kubernetes.io/),但也可以设置作为独立集群甚至库运行。
 
-This section contains an overview of Flink’s architecture and describes how its
-main components interact to execute applications and recover from failures.
+本节概述了 Flink 架构,并且描述了其主要组件如何交互以执行应用程序和从故障中恢复。
 
 * This will be replaced by the TOC
 {:toc}
 
-## Anatomy of a Flink Cluster
+## Flink 集群剖析
 
-The Flink runtime consists of two types of processes: a _JobManager_ and one 
or more _TaskManagers_.
+Flink 运行时由两种类型的进程组成:_JobManager_和一个或者多个_TaskManager_。
 
 
 
-The *Client* is not part of the runtime and program execution, but is used to
-prepare and send a dataflow to the JobManager.  After that, the client can
-disconnect (_detached mode_), or stay connected to receive progress reports
-(_attached mode_). The client runs either as part of the Java/Scala program
-that triggers the execution, or in the command line process `./bin/flink run
-...`.
-
-The JobManager and TaskManagers can be started in various ways: directly on
-the machines as a [standalone cluster]({% link
-deployment/resource-providers/standalone/index.zh.md %}), in containers, or 
managed by resource
-frameworks like [YARN]({% link deployment/resource-providers/yarn.zh.md
-%}) or [Mesos]({% link deployment/resource-providers/mesos.zh.md %}).
-TaskManagers connect to JobManagers, announcing themselves as available, and
-are assigned work.
+*Client* 不是运行时和程序执行的一部分,而是用于准备数据流并将其发送给 
JobManager。之后,客户端可以断开连接(_分离模式_),或保持连接来接收进程报告(_附加模式_)。客户端可以作为触发执行 Java/Scala 
程序的一部分运行,也可以在命令行进程`./bin/flink run ...`中运行。

Review comment:
   ```suggestion
   *Client* 不是运行时和程序执行的一部分,而是用于准备dataflow并将其发送给 
JobManager。之后,客户端可以断开连接(_detached mode_),或保持连接来接收进度报告(_attached 
mode_)。客户端既可以运行Java/Scala的程序来触发执行,也可以通过命令行`./bin/flink run ...`的方式执行。
   ```

##
File path: docs/concepts/flink-architecture.zh.md
##
@@ -24,229 +24,109 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-Flink is a distributed system and requires effective allocation and management
-of compute resources in order to execute streaming applications. It integrates
-with all common cluster resource managers such as [Hadoop
-YARN](https://hadoop.apache.org/docs/stable/hadoop-yarn/hadoop-yarn-site/YARN.html),
-[Apache Mesos](https://mesos.apache.org/) and
-[Kubernetes](https://kubernetes.io/), but can also be set up to run as a
-standalone cluster or even as a library.
+Flink 是一个分布式系统,需要有效分配和管理计算资源才能执行流应用程序。它集成了所有常见的集群资源管理器,例如[Hadoop 
YARN](https://hadoop.apache.org/docs/stable/hadoop-yarn/hadoop-yarn-site/YARN.html)、[Apache
 
Mesos](https://mesos.apache.org/)和[Kubernetes](https://kubernetes.io/),但也可以设置作为独立集群甚至库运行。
 
-This section contains an overview of Flink’s architecture and describes how its
-main components interact to execute applications and recover from failures.
+本节概述了 Flink 架构,并且描述了其主要组件如何交互以执行应用程序和从故障中恢复。
 
 * This will be replaced by the TOC
 {:toc}
 
-## Anatomy of a Flink Cluster
+## Flink 集群剖析
 
-The Flink runtime consists of two types of processes: a _JobManager_ and one 
or more _TaskManagers_.
+Flink 运行时由两种类型的进程组成:_JobManager_和一个或者多个_TaskManager_。
 
 
 
-The *Client* is not part of the runtime and program execution, but is used to
-prepare and send a dataflow to the JobManager.  After that, the client can
-disconnect (_detached mode_), or stay connected to receive progress reports
-(_attached mode_). The client runs either as part of the Java/Scala program
-that triggers the execution, or in the command line process `./bin/flink run
-...`.
-
-The JobManager and TaskManagers can be started in various ways: directly on
-the machines as a [standalone cluster]({% link
-deployment/resource-providers/standalone/index.zh.md %}), in containers, or 
managed by resource
-frameworks like [YARN]({% link deployment/resource-providers/yarn.zh.md
-%}) or [Mesos]({% link deployment/resource-providers/mesos.zh.md %}).
-TaskManagers connect to JobManagers, announcing themselves as available, and
-are assigned work.
+*Client* 

[jira] [Updated] (FLINK-20447) Querying group by PK does not work

2020-12-01 Thread Zhenwei Feng (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20447?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhenwei Feng updated FLINK-20447:
-
Priority: Major  (was: Minor)

> Querying group by PK does not work
> --
>
> Key: FLINK-20447
> URL: https://issues.apache.org/jira/browse/FLINK-20447
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / API
>Affects Versions: 1.11.2
>Reporter: Zhenwei Feng
>Priority: Major
>
> Since the PRIMARY KEY is unique, it should be feasible to select the other columns when grouping by the PK.
> The problem could be reproduced by creating a simple table:
> {code:java}
> CREATE TABLE test_table(
>   Code STRING,
>   Name  STRING,
>   ...,
>   PRIMARY KEY (Code) NOT ENFORCED
> )WITH (...)
> {code}
> then parsing a SQL statement `SELECT * FROM test_table GROUP BY Code`. An 
> exception as below will be thrown:
>  
> {code:java}
>  org.apache.calcite.sql.validate.SqlValidatorException: Expression 
> 'test_table.Name' is not being grouped
> {code}
>  
>  
>  
>  





[GitHub] [flink] flinkbot edited a comment on pull request #14088: [FLINK-20168][docs-zh] Translate page 'Flink Architecture' into Chinese.

2020-12-01 Thread GitBox


flinkbot edited a comment on pull request #14088:
URL: https://github.com/apache/flink/pull/14088#issuecomment-728662796


   
   ## CI report:
   
   * 0b671a6faa179d8465c67150b8459779b8674978 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=9669)
 
   * f14b52c30b09d46b9fa03f840d0642f764122e8f Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10423)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink-statefun] RocMarshal closed pull request #183: [hotfix][sdk] Change variable names to comply with camel case naming rules and correct spelling of wrong words.

2020-12-01 Thread GitBox


RocMarshal closed pull request #183:
URL: https://github.com/apache/flink-statefun/pull/183


   







[jira] [Created] (FLINK-20447) Querying group by PK does not work

2020-12-01 Thread Zhenwei Feng (Jira)
Zhenwei Feng created FLINK-20447:


 Summary: Querying group by PK does not work
 Key: FLINK-20447
 URL: https://issues.apache.org/jira/browse/FLINK-20447
 Project: Flink
  Issue Type: Improvement
  Components: Table SQL / API
Affects Versions: 1.11.2
Reporter: Zhenwei Feng


Since the PRIMARY KEY is unique, it should be feasible to select the other columns when grouping by the PK.

The problem could be reproduced by creating a simple table:
{code:java}
CREATE TABLE test_table(
  Code STRING,
  Name  STRING,
  ...,
  PRIMARY KEY (Code) NOT ENFORCED
)WITH (...)
{code}
then parsing a SQL statement `SELECT * FROM test_table GROUP BY Code`. An 
exception as below will be thrown:

 
{code:java}
 org.apache.calcite.sql.validate.SqlValidatorException: Expression 
'test_table.Name' is not being grouped
{code}
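
For context, the validator behavior quoted above is standard SQL: every selected column must either appear in the GROUP BY clause or be wrapped in an aggregate, and the planner does not currently derive functional dependence from a NOT ENFORCED primary key (which is what this ticket asks for). A hedged sketch of the usual workarounds, reusing the columns from the report:

```sql
-- Rejected: Name is neither grouped nor aggregated.
SELECT * FROM test_table GROUP BY Code;

-- Workaround 1: group by every selected column; since Code is unique,
-- this produces the same rows.
SELECT Code, Name FROM test_table GROUP BY Code, Name;

-- Workaround 2: keep GROUP BY Code and aggregate the remaining columns.
SELECT Code, MAX(Name) AS Name FROM test_table GROUP BY Code;
```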
 

 

 

 





[GitHub] [flink] flinkbot edited a comment on pull request #14270: [FLINK-20436][table-planner-blink] Simplify type parameter of ExecNode

2020-12-01 Thread GitBox


flinkbot edited a comment on pull request #14270:
URL: https://github.com/apache/flink/pull/14270#issuecomment-736353538


   
   ## CI report:
   
   * 40d11837b72f31a3c99a6eb8db755acb5309f86e Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10369)
 
   * 88d6d6aecb9a67c4bd7065f2d58cf9793d71cf95 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10422)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #14088: [FLINK-20168][docs-zh] Translate page 'Flink Architecture' into Chinese.

2020-12-01 Thread GitBox


flinkbot edited a comment on pull request #14088:
URL: https://github.com/apache/flink/pull/14088#issuecomment-728662796


   
   ## CI report:
   
   * 0b671a6faa179d8465c67150b8459779b8674978 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=9669)
 
   * f14b52c30b09d46b9fa03f840d0642f764122e8f UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink-statefun] RocMarshal opened a new pull request #183: [hotfix][sdk] Change variable names to comply with camel case naming rules and correct spelling of wrong words.

2020-12-01 Thread GitBox


RocMarshal opened a new pull request #183:
URL: https://github.com/apache/flink-statefun/pull/183


   Change variable names to comply with camel-case naming rules in 
`org.apache.flink.statefun.sdk.match.MatchBinder`
   and correct misspelled words in 
`org.apache.flink.statefun.sdk.spi.StatefulFunctionModule`.







[GitHub] [flink] SteNicholas commented on a change in pull request #14028: [FLINK-20020][client] Make UnsuccessfulExecutionException part of the JobClient.getJobExecutionResult() contract

2020-12-01 Thread GitBox


SteNicholas commented on a change in pull request #14028:
URL: https://github.com/apache/flink/pull/14028#discussion_r533853218



##
File path: 
flink-clients/src/main/java/org/apache/flink/client/deployment/ClusterClientJobClientAdapter.java
##
@@ -116,8 +115,7 @@ public JobID getJobID() {
try {
return 
jobResult.toJobExecutionResult(classLoader);
} catch (Throwable t) {
-   throw new 
CompletionException(
-   new 
ProgramInvocationException("Job failed", jobID, t));

Review comment:
   @tillrohrmann, I would wait for @aljoscha to give his opinion about the 
changes to the `CompletionException` contract for `JobClient`. 









[GitHub] [flink] SteNicholas commented on pull request #14028: [FLINK-20020][client] Make UnsuccessfulExecutionException part of the JobClient.getJobExecutionResult() contract

2020-12-01 Thread GitBox


SteNicholas commented on pull request #14028:
URL: https://github.com/apache/flink/pull/14028#issuecomment-736948175


   Thanks @tillrohrmann for the detailed review reply. I agree with the point that 
`JobExecutionException` transporting the final `ApplicationStatus` shouldn't be 
used within the runtime components. I would change the commit in 
`ExecutionGraphBuilder` to remove the `JobExecutionException` from the `runtime` 
components.
   BTW, @aljoscha, could you please give your opinion about the changes to 
the `CompletionException` contract for `JobClient`?







[GitHub] [flink] SteNicholas commented on a change in pull request #14028: [FLINK-20020][client] Make UnsuccessfulExecutionException part of the JobClient.getJobExecutionResult() contract

2020-12-01 Thread GitBox


SteNicholas commented on a change in pull request #14028:
URL: https://github.com/apache/flink/pull/14028#discussion_r533853218



##
File path: 
flink-clients/src/main/java/org/apache/flink/client/deployment/ClusterClientJobClientAdapter.java
##
@@ -116,8 +115,7 @@ public JobID getJobID() {
try {
return 
jobResult.toJobExecutionResult(classLoader);
} catch (Throwable t) {
-   throw new 
CompletionException(
-   new 
ProgramInvocationException("Job failed", jobID, t));

Review comment:
   @tillrohrmann, I will wait for @aljoscha's opinion on the changes to the 
`CompletionException` contract for `JobClient`. 









[GitHub] [flink] SteNicholas commented on a change in pull request #14028: [FLINK-20020][client] Make UnsuccessfulExecutionException part of the JobClient.getJobExecutionResult() contract

2020-12-01 Thread GitBox


SteNicholas commented on a change in pull request #14028:
URL: https://github.com/apache/flink/pull/14028#discussion_r533851974



##
File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/client/JobExecutionException.java
##
@@ -19,41 +19,67 @@
 package org.apache.flink.runtime.client;
 
 import org.apache.flink.api.common.JobID;
+import org.apache.flink.runtime.clusterframework.ApplicationStatus;
 import org.apache.flink.util.FlinkException;
 
 /**
  * This exception is the base exception for all exceptions that denote any 
failure during
- * the execution of a job.
+ * the execution of a job, and signal the failure of an application with a 
given
+ * {@link ApplicationStatus}.
  */
 public class JobExecutionException extends FlinkException {
 
private static final long serialVersionUID = 2818087325120827525L;
 
private final JobID jobID;
 
+   private final ApplicationStatus status;
+
/**
 * Constructs a new job execution exception.
 *
-* @param jobID The job's ID.
-* @param msg The cause for the execution exception.
-* @param cause The cause of the exception
+* @param jobID  The job's ID.
+* @param status The application status.
+* @param msgThe cause for the execution exception.
+* @param cause  The cause of the exception
 */
-   public JobExecutionException(JobID jobID, String msg, Throwable cause) {
+   public JobExecutionException(JobID jobID, ApplicationStatus status, 
String msg, Throwable cause) {
super(msg, cause);
this.jobID = jobID;
+   this.status = status;

Review comment:
   @tillrohrmann, the case is that the execution of the job is successful, 
but `AccumulatorHelper.deserializeAccumulators` throws an exception when 
`jobResult.toJobExecutionResult` is called. In this case the application status 
is `SUCCEEDED`.
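
The case described above can be sketched as follows. This is a hypothetical, simplified sketch, not the actual Flink sources: `ApplicationStatus`, `JobExecutionException`, and `toJobExecutionResult` are stand-ins for the real classes under discussion.

```java
// Hypothetical sketch: the job itself finished successfully, but converting
// the result still throws, e.g. because accumulator deserialization fails.
public class AccumulatorFailureSketch {

    enum ApplicationStatus { SUCCEEDED, FAILED }

    // Simplified stand-in for a JobExecutionException carrying a status.
    static class JobExecutionException extends RuntimeException {
        final ApplicationStatus status;

        JobExecutionException(ApplicationStatus status, String msg, Throwable cause) {
            super(msg, cause);
            this.status = status;
        }
    }

    // Stand-in for JobResult.toJobExecutionResult(classLoader), which may throw
    // even for a successful job when accumulators cannot be deserialized.
    static void toJobExecutionResult(boolean accumulatorsCorrupt) {
        if (accumulatorsCorrupt) {
            throw new RuntimeException("Failed to deserialize accumulators");
        }
    }

    public static void main(String[] args) {
        try {
            toJobExecutionResult(true);
        } catch (Throwable t) {
            // The job succeeded, so the status attached to the wrapping
            // exception is SUCCEEDED despite the post-processing failure.
            JobExecutionException e = new JobExecutionException(
                    ApplicationStatus.SUCCEEDED, "Accumulator deserialization failed", t);
            System.out.println(e.status);
        }
    }
}
```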










[GitHub] [flink] flinkbot edited a comment on pull request #14270: [FLINK-20436][table-planner-blink] Simplify type parameter of ExecNode

2020-12-01 Thread GitBox


flinkbot edited a comment on pull request #14270:
URL: https://github.com/apache/flink/pull/14270#issuecomment-736353538


   
   ## CI report:
   
   * 40d11837b72f31a3c99a6eb8db755acb5309f86e Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10369)
 
   * 88d6d6aecb9a67c4bd7065f2d58cf9793d71cf95 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] SteNicholas commented on a change in pull request #14028: [FLINK-20020][client] Make UnsuccessfulExecutionException part of the JobClient.getJobExecutionResult() contract

2020-12-01 Thread GitBox


SteNicholas commented on a change in pull request #14028:
URL: https://github.com/apache/flink/pull/14028#discussion_r533850366



##
File path: 
flink-connectors/flink-connector-kafka/src/test/java/org/apache/flink/streaming/connectors/kafka/KafkaConsumerTestBase.java
##
@@ -257,7 +258,7 @@ public void run() {
runner.join();
 
final Throwable t = errorRef.get();
-   if (t != null) {
+   if (t != null && ((JobExecutionException) t).getStatus() == 
ApplicationStatus.FAILED) {

Review comment:
   @tillrohrmann, I get your point about this change, and I will update the 
commit accordingly.
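
As a sketch of what the quoted check could look like without the unchecked cast, a defensive variant is shown below. This is illustrative only (simplified, hypothetical types, not the actual `KafkaConsumerTestBase` code): it only treats the error as fatal when it is a `JobExecutionException` whose status is `FAILED`.

```java
// Hypothetical sketch: guard the cast with instanceof so that a non-null
// Throwable of another type does not cause a ClassCastException.
public class StatusCheckSketch {

    enum ApplicationStatus { SUCCEEDED, FAILED }

    static class JobExecutionException extends RuntimeException {
        final ApplicationStatus status;

        JobExecutionException(ApplicationStatus status, String msg) {
            super(msg);
            this.status = status;
        }

        ApplicationStatus getStatus() {
            return status;
        }
    }

    // Rethrow only for a JobExecutionException with status FAILED.
    static boolean shouldRethrow(Throwable t) {
        return t instanceof JobExecutionException
                && ((JobExecutionException) t).getStatus() == ApplicationStatus.FAILED;
    }

    public static void main(String[] args) {
        System.out.println(shouldRethrow(
                new JobExecutionException(ApplicationStatus.FAILED, "boom")));   // true
        System.out.println(shouldRethrow(
                new JobExecutionException(ApplicationStatus.SUCCEEDED, "ok")));  // false
        System.out.println(shouldRethrow(null));                                 // false
    }
}
```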









[GitHub] [flink] caozhen1937 commented on pull request #14088: [FLINK-20168][docs-zh] Translate page 'Flink Architecture' into Chinese.

2020-12-01 Thread GitBox


caozhen1937 commented on pull request #14088:
URL: https://github.com/apache/flink/pull/14088#issuecomment-736942413


   @HuangXingBo thank you, I have rebased onto `master`.







[jira] [Closed] (FLINK-20073) Add kerberos setup documentation for native k8s integration

2020-12-01 Thread Xintong Song (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20073?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xintong Song closed FLINK-20073.

Resolution: Done

Merged via
 * master: 7a7c87096ab76f416cd7c393240faa8454db36f0
 * 1.12: 5232b205fa1a7d282153a998113769add9c7b62d

> Add kerberos setup documentation for native k8s integration
> ---
>
> Key: FLINK-20073
> URL: https://issues.apache.org/jira/browse/FLINK-20073
> Project: Flink
>  Issue Type: Task
>  Components: Deployment / Kubernetes, Documentation
>Reporter: Yangze Guo
>Assignee: Yangze Guo
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.12.0
>
>
> In FLINK-18971, we support to mount kerberos conf as ConfigMap and Keytab as 
> Secrete. We need to add a user doc for it. Maybe on the Kerberos page.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] xintongsong closed pull request #14241: [FLINK-20073][doc] Add native k8s integration to kerberos setup documentation

2020-12-01 Thread GitBox


xintongsong closed pull request #14241:
URL: https://github.com/apache/flink/pull/14241


   







[GitHub] [flink] sjwiesman commented on a change in pull request #14257: [FLINK-19530] Reorganise contents of Table API concepts

2020-12-01 Thread GitBox


sjwiesman commented on a change in pull request #14257:
URL: https://github.com/apache/flink/pull/14257#discussion_r533833261



##
File path: docs/dev/table/streaming/temporal_tables.md
##
@@ -36,6 +36,14 @@ For the changing dimension table, Flink allows for accessing 
the content of the
 Motivation
 --

Review comment:
Temporal tables got a lot of new features in 1.12. Please rebase; 
unfortunately, a lot of what you did is no longer relevant.  

##
File path: docs/dev/table/streaming/unbounded-data-processing/joins.md
##
@@ -22,17 +22,18 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-Joins are a common and well-understood operation in batch data processing to 
connect the rows of two relations. However, the semantics of joins on [dynamic 
tables](dynamic_tables.html) are much less obvious or even confusing.
+Joins are a common and well-understood operation in batch data processing to 
connect the rows of two relations. However, the semantics of joins on [dynamic 
tables](dynamic_tables.html) representing unbounded data are fairly 
non-trivial. 
 
-Because of that, there are a couple of ways to actually perform a join using 
either Table API or SQL.
+However there are a couple of ways to actually perform a join using either 
Table API or SQL.
 
 For more information regarding the syntax, please check the join sections in 
[Table API](../tableApi.html#joins) and [SQL]({{ site.baseurl 
}}/dev/table/sql/queries.html#joins).
 
+//TODO: Explain the complications of unbounded joins

Review comment:
   remove todo

##
File path: docs/dev/table/streaming/unbounded-data-processing/joins.md
##
@@ -22,17 +22,18 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-Joins are a common and well-understood operation in batch data processing to 
connect the rows of two relations. However, the semantics of joins on [dynamic 
tables](dynamic_tables.html) are much less obvious or even confusing.
+Joins are a common and well-understood operation in batch data processing to 
connect the rows of two relations. However, the semantics of joins on [dynamic 
tables](dynamic_tables.html) representing unbounded data are fairly 
non-trivial. 
 
-Because of that, there are a couple of ways to actually perform a join using 
either Table API or SQL.
+However there are a couple of ways to actually perform a join using either 
Table API or SQL.
 
 For more information regarding the syntax, please check the join sections in 
[Table API](../tableApi.html#joins) and [SQL]({{ site.baseurl 
}}/dev/table/sql/queries.html#joins).
 
+//TODO: Explain the complications of unbounded joins
 * This will be replaced by the TOC
 {:toc}
 
-Regular Joins
--
+Regular Joins and its challenges
+---
 

Review comment:
   Let's get rid of "and its challenges". We want to make Flink more 
approachable :) 

##
File path: docs/dev/table/streaming/unbounded-data-processing/joins.md
##
@@ -351,4 +339,20 @@ FROM
 
 Attention Flink does not support event 
time temporal table joins currently.
 
+### Temporal Table joins vs Other joins
+
+In contrast to [regular joins](#regular-joins), the previous results of the 
temporal table join will not be affected despite the changes on the build side. 
Also, the temporal table join operator is very lightweight and does not keep 
any state.
+
+Compared to [interval joins](#interval-joins), temporal table joins do not 
define a time window within which the records will be joined.

Review comment:
   again, 1.12 added support for temporal event time table joins so this 
needs to be updated. 

##
File path: docs/dev/table/streaming/unbounded-data-processing/joins.md
##
@@ -351,4 +339,20 @@ FROM
 
 Attention Flink does not support event 
time temporal table joins currently.
 
+### Temporal Table joins vs Other joins
+
+In contrast to [regular joins](#regular-joins), the previous results of the 
temporal table join will not be affected despite the changes on the build side. 
Also, the temporal table join operator is very lightweight and does not keep 
any state.

Review comment:
   Also doesn't sound right here. 
   ```suggestion
   In contrast to [regular joins](#regular-joins), the previous results of the 
temporal table join will not be affected despite the changes on the build side. 
The temporal table join operator is very lightweight and does not keep any 
state.
   ```









[GitHub] [flink] dianfu commented on a change in pull request #14282: [FLINK-20442][python][legal] Updated flink-python NOTICE

2020-12-01 Thread GitBox


dianfu commented on a change in pull request #14282:
URL: https://github.com/apache/flink/pull/14282#discussion_r533801315



##
File path: flink-python/src/main/resources/META-INF/NOTICE
##
@@ -67,11 +68,20 @@ The bundled Apache Beam dependencies bundle the following 
dependencies under the
 - io.netty:netty-tcnative-boringssl-static:2.0.26.Final
 - io.opencensus:opencensus-api:0.24.0
 - io.opencensus:opencensus-contrib-grpc-metrics:0.24.0
-- net.bytebuddy:1.10.8
+- io.perfmark:perfmark-api:0.19.0
+- net.jpountz.lz4:lz4:1.3.0
 
 The bundled Apache Beam dependencies bundle the following dependencies under 
the BSD license.
 See bundled license files for details
 
 - com.google.auth:google-auth-library-credentials:0.18.0
+- com.google.protobuf.nano:protobuf-javanano:3.0.0-alpha-5
 - com.google.protobuf:protobuf-java:3.11.0
 - com.google.protobuf:protobuf-java-util:3.11.0
+- com.jcraft:jzlib:1.1.3
+
+The bundled Apache Beam dependencies bundle the following dependencies under 
the Bouncy Castle license.
+See bundled license files for details
+
+- org.bouncycastle:bcpkix-jdk15on:1.54

Review comment:
   Yes, you are right. Most dependencies are pulled in because of the Beam gRPC 
vendoring, which is defined in 
https://github.com/apache/beam/blob/release-2.23.0/buildSrc/src/main/groovy/org/apache/beam/gradle/GrpcVendoring_1_26_0.groovy.
 
   
   As jetty and jboss are not needed, I excluded them from the fat jar.
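
As a rough sketch of how such dependencies can be kept out of a fat jar with the maven-shade-plugin (illustrative configuration only; the actual `flink-python` pom may use different coordinates or a different mechanism):

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <configuration>
    <artifactSet>
      <excludes>
        <!-- not needed at runtime, so keep them out of the shaded jar -->
        <exclude>org.eclipse.jetty.alpn:alpn-api</exclude>
        <exclude>org.eclipse.jetty.npn:npn-api</exclude>
        <exclude>org.jboss.marshalling:jboss-marshalling</exclude>
        <exclude>org.jboss.modules:jboss-modules</exclude>
      </excludes>
    </artifactSet>
  </configuration>
</plugin>
```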










[GitHub] [flink] blublinsky closed pull request #14005: Support mounting custom PVCs, secrets and config maps to job/Task manager pods

2020-12-01 Thread GitBox


blublinsky closed pull request #14005:
URL: https://github.com/apache/flink/pull/14005


   







[GitHub] [flink] flinkbot edited a comment on pull request #14003: [FLINK-19527][Doc]Flink SQL Getting Started

2020-12-01 Thread GitBox


flinkbot edited a comment on pull request #14003:
URL: https://github.com/apache/flink/pull/14003#issuecomment-724292850


   
   ## CI report:
   
   * cd96cfd7f843b802919f0a57a4383fc0bfcb090a Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=9985)
 
   * 34877c626eb981f4dca0873d7ba2d5547b16ff2b Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10419)
 
   
   







[GitHub] [flink] zentol commented on a change in pull request #14282: [FLINK-20442][python][legal] Updated flink-python NOTICE

2020-12-01 Thread GitBox


zentol commented on a change in pull request #14282:
URL: https://github.com/apache/flink/pull/14282#discussion_r533764956



##
File path: flink-python/src/main/resources/META-INF/NOTICE
##
@@ -67,11 +68,20 @@ The bundled Apache Beam dependencies bundle the following 
dependencies under the
 - io.netty:netty-tcnative-boringssl-static:2.0.26.Final
 - io.opencensus:opencensus-api:0.24.0
 - io.opencensus:opencensus-contrib-grpc-metrics:0.24.0
-- net.bytebuddy:1.10.8
+- io.perfmark:perfmark-api:0.19.0
+- net.jpountz.lz4:lz4:1.3.0
 
 The bundled Apache Beam dependencies bundle the following dependencies under 
the BSD license.
 See bundled license files for details
 
 - com.google.auth:google-auth-library-credentials:0.18.0
+- com.google.protobuf.nano:protobuf-javanano:3.0.0-alpha-5
 - com.google.protobuf:protobuf-java:3.11.0
 - com.google.protobuf:protobuf-java-util:3.11.0
+- com.jcraft:jzlib:1.1.3
+
+The bundled Apache Beam dependencies bundle the following dependencies under 
the Bouncy Castle license.
+See bundled license files for details
+
+- org.bouncycastle:bcpkix-jdk15on:1.54

Review comment:
   It would be good to document somewhere (maybe the licensing wiki page) 
how these were determined.
   
   I assume you pulled them from 
https://github.com/apache/beam/blob/release-2.23.0/buildSrc/src/main/groovy/org/apache/beam/gradle/GrpcVendoring_1_26_0.groovy
 ?
   If so, what about these:
   ```
 "org.eclipse.jetty.alpn:alpn-api:$alpn_api_version",
 "org.eclipse.jetty.npn:npn-api:$npn_api_version",
 "org.jboss.marshalling:jboss-marshalling:$jboss_marshalling_version",
 "org.jboss.modules:jboss-modules:$jboss_modules_version"
   ```









[GitHub] [flink] flinkbot edited a comment on pull request #14003: [FLINK-19527][Doc]Flink SQL Getting Started

2020-12-01 Thread GitBox


flinkbot edited a comment on pull request #14003:
URL: https://github.com/apache/flink/pull/14003#issuecomment-724292850


   
   ## CI report:
   
   * cd96cfd7f843b802919f0a57a4383fc0bfcb090a Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=9985)
 
   * 34877c626eb981f4dca0873d7ba2d5547b16ff2b UNKNOWN
   
   







[GitHub] [flink] zentol commented on a change in pull request #14282: [FLINK-20442][python][legal] Updated flink-python NOTICE

2020-12-01 Thread GitBox


zentol commented on a change in pull request #14282:
URL: https://github.com/apache/flink/pull/14282#discussion_r533758138



##
File path: flink-python/lib/LICENSE.py4j
##
@@ -1,26 +0,0 @@
-Copyright (c) 2009-2018, Barthelemy Dagenais and individual contributors. All

Review comment:
   I think it should be fine to remove them. The copies at 
`flink-python/src/main/resources/META-INF/licenses` ensure they are contained 
in the flink-python jar and the distribution (and thus transitively the python 
distribution).









[GitHub] [flink] haseeb1431 commented on pull request #14003: [FLINK-19527][Doc]Flink SQL Getting Started

2020-12-01 Thread GitBox


haseeb1431 commented on pull request #14003:
URL: https://github.com/apache/flink/pull/14003#issuecomment-736854962


   Thanks for all the suggestions and input. I have incorporated the changes. 
Please review and let me know if any final touches are needed. Thanks!







[GitHub] [flink] flinkbot edited a comment on pull request #14282: [FLINK-20442][python][legal] Updated flink-python NOTICE

2020-12-01 Thread GitBox


flinkbot edited a comment on pull request #14282:
URL: https://github.com/apache/flink/pull/14282#issuecomment-736657230


   
   ## CI report:
   
   * b264a5b5ccd7365df1c01fbf404d728ef00bf58f Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10407)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] haseeb1431 commented on a change in pull request #14003: [FLINK-19527][Doc]Flink SQL Getting Started

2020-12-01 Thread GitBox


haseeb1431 commented on a change in pull request #14003:
URL: https://github.com/apache/flink/pull/14003#discussion_r533735947



##
File path: docs/dev/table/sql/gettingStarted.md
##
@@ -0,0 +1,226 @@
+---
+title: "Getting Started - Flink SQL"
+nav-parent_id: sql
+nav-pos: 0
+---
+
+
+
+* This will be replaced by the TOC
+{:toc}
+
+Flink SQL enables SQL developers to design and develop the batch or streaming 
application without writing the Java, Scala, or any other programming language 
code. It provides a unified API for both stream and batch processing. As a 
user, you can perform powerful transformations. Flink’s SQL support is based on 
[Apache Calcite](https://calcite.apache.org/) which implements the SQL standard.
+
+In addition to the SQL API, Flink also has a Table API with similar semantics 
as SQL. The Table API is a language-integrated API, where users develop in a 
specific programming language to write the queries or call the API. For 
example, jobs create a table environment, read a table, and apply different 
transformations and aggregations, and write the results back to another table. 
It supports different languages e.g. Java, Scala, Python. 
+ 
+Flink SQL and Table API are just two different ways to write queries that use 
the same Flink runtime. All the queries are optimized for efficient execution. 
SQL API is a more descriptive way of writing queries using well-known SQL 
standards e.g. `select * from Table`. On the other hand, Table API queries 
start with from clause, followed by joins and where clause, and then finally 
projection or select at the last e.g. `Table.filter(...).select(...)`. Standard 
SQL is easy and quick to learn even for users with no programming background. 
This article will focus on Flink SQL API but Table API details can be found 
[here]({{ site.baseurl }}/dev/table/).

Review comment:
   Took out the initial introduction and added an intro about the tutorial. 









[GitHub] [flink] haseeb1431 commented on a change in pull request #14003: [FLINK-19527][Doc]Flink SQL Getting Started

2020-12-01 Thread GitBox


haseeb1431 commented on a change in pull request #14003:
URL: https://github.com/apache/flink/pull/14003#discussion_r533729375



##
File path: docs/dev/table/sql/gettingStarted.md
##
@@ -0,0 +1,226 @@
+---
+title: "Getting Started - Flink SQL"
+nav-parent_id: sql
+nav-pos: 0
+---
+
+
+
+* This will be replaced by the TOC
+{:toc}
+
+Flink SQL enables SQL developers to design and develop the batch or streaming 
application without writing the Java, Scala, or any other programming language 
code. It provides a unified API for both stream and batch processing. As a 
user, you can perform powerful transformations. Flink’s SQL support is based on 
[Apache Calcite](https://calcite.apache.org/) which implements the SQL standard.
+
+In addition to the SQL API, Flink also has a Table API with similar semantics 
as SQL. The Table API is a language-integrated API, where users develop in a 
specific programming language to write the queries or call the API. For 
example, jobs create a table environment, read a table, and apply different 
transformations and aggregations, and write the results back to another table. 
It supports different languages e.g. Java, Scala, Python. 
+ 
+Flink SQL and Table API are just two different ways to write queries that use 
the same Flink runtime. All the queries are optimized for efficient execution. 
SQL API is a more descriptive way of writing queries using well-known SQL 
standards e.g. `select * from Table`. On the other hand, Table API queries 
start with from clause, followed by joins and where clause, and then finally 
projection or select at the last e.g. `Table.filter(...).select(...)`. Standard 
SQL is easy and quick to learn even for users with no programming background. 
This article will focus on Flink SQL API but Table API details can be found 
[here]({{ site.baseurl }}/dev/table/).
+
+### Pre-requisites
+You only need to have basic knowledge of SQL to follow along. You will not 
need to write Java or Scala code or use an IDE.
+
+### Installation
+There are various ways to [install]({{ site.baseurl }}/ops/deployment/) Flink. 
Probably the easiest one is to download the binaries and run them locally for 
experimentation. We assume [local installation]({{ site.baseurl 
}}/try-flink/local_installation.html) for the rest of the tutorial. You can 
start a local cluster using the following command from the installation folder
+ 
+{% highlight bash %}
+./bin/start-cluster.sh
+{% endhighlight %}
+ 
+Once the cluster is started, it will also start a web server on 
[localhost:8081](localhost:8081) to manage settings and monitor the different 
jobs.
+
+### SQL Client
+The SQL Client is an interactive client to submit SQL queries to Flink and 
visualize the results. It’s like a query editor for any other database 
management system where you can write queries using standard SQL. You can start 
the SQL client from the installation folder as follows
+
+ {% highlight bash %}
+./bin/sql-client.sh embedded
+ {% endhighlight %} 
+
+### Hello World query
+ 
+Once the SQL client, our query editor, is up and running it's time to start 
writing SQL queries. These queries will be submitted to Flink cluster for 
computation and results will be returned to the SQL client UI. Let's start with 
printing 'Hello World'. You can print hello world using the following simple 
query
+ 
+{% highlight sql %}
+SELECT 'Hello World';
+{% endhighlight %}
+
+`Help;` command is used to see different supported DDL (Data definition 
language) commands. Furthermore, Flink SQL does support different built-in 
functions as well. The following query will show all the built-in and 
user-defined functions. 
+{% highlight sql %}
+SHOW FUNCTIONS;
+{% endhighlight %}
+
+Flink SQL provides users with a set of [built-in functions]({{ site.baseurl 
}}/dev/table/functions/systemFunctions.html) for data transformations. The 
following example will print the current timestamp using the 
`CURRENT_TIMESTAMP` function.
+
+{% highlight sql %}
+SELECT CURRENT_TIMESTAMP;
+{% endhighlight %}
+
+---
+
+{% top %}
+
+## Setting up tables
+Real-world database queries are run against the SQL tables. Although Flink is 
a stream processing engine, users can define a table on top of the streaming 
data. Generally, Flink data processing pipelines have three components - 
source, compute, sink. 
+
+The source is input or from where data is read e.g. a text file, Kafka topic. 
Then we define some computations that need to be performed on input data. 
Finally, the sink defines what to do with the output or where to store the 
results. A sink can be a console log, another output file, or a Kafka topic. 
It's similar to a database query that reads data from a table, performs a query 
on it, and then displays the results. 
+
+In Flink SQL semantics, source and sink will be tables, but Flink isn’t a 
storage engine hence it cannot store the data. So Flink tables need to backed 
up with a [storage connector]({{ 

[GitHub] [flink] haseeb1431 commented on a change in pull request #14003: [FLINK-19527][Doc]Flink SQL Getting Started

2020-12-01 Thread GitBox


haseeb1431 commented on a change in pull request #14003:
URL: https://github.com/apache/flink/pull/14003#discussion_r533726745



##
File path: docs/dev/table/sql/gettingStarted.md
##
@@ -0,0 +1,226 @@
+---
+title: "Getting Started - Flink SQL"
+nav-parent_id: sql
+nav-pos: 0
+---
+
+
+
+* This will be replaced by the TOC
+{:toc}
+
+Flink SQL enables SQL developers to design and develop batch or streaming applications without writing Java, Scala, or any other programming language code. It provides a unified API for both stream and batch processing. As a user, you can perform powerful data transformations. Flink’s SQL support is based on [Apache Calcite](https://calcite.apache.org/), which implements the SQL standard.
+
+In addition to the SQL API, Flink also has a Table API with semantics similar to SQL. The Table API is a language-integrated API where users write queries by calling API methods in a programming language. For example, a job creates a table environment, reads a table, applies different transformations and aggregations, and writes the results back to another table. It supports different languages, e.g. Java, Scala, and Python. 
+ 
+Flink SQL and the Table API are just two different ways to write queries that use the same Flink runtime. All queries are optimized for efficient execution. The SQL API is a more declarative way of writing queries using well-known SQL standards, e.g. `SELECT * FROM Table`. Table API queries, on the other hand, start with a from clause, followed by joins and a where clause, and finally a projection or select at the end, e.g. `Table.filter(...).select(...)`. Standard SQL is easy and quick to learn, even for users with no programming background. This article focuses on the Flink SQL API; Table API details can be found [here]({{ site.baseurl }}/dev/table/).
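+To make the contrast concrete, the two query styles can be mimicked with plain Python stand-ins (the `Table` class below is a hypothetical illustration, not Flink's actual Table API):

```python
# SQL style: one declarative statement expressed as a string.
sql_query = "SELECT name FROM users WHERE age > 30"

# Table API style: chained method calls, sketched with a tiny stand-in class.
class Table:
    def __init__(self, rows):
        self.rows = rows

    def filter(self, predicate):
        # Keep only the rows matching the predicate (the "where" step).
        return Table([r for r in self.rows if predicate(r)])

    def select(self, *cols):
        # Project each row down to the requested columns.
        return Table([{c: r[c] for c in cols} for r in self.rows])

users = Table([{"name": "Ada", "age": 36}, {"name": "Bob", "age": 25}])
result = users.filter(lambda r: r["age"] > 30).select("name")
print(result.rows)  # [{'name': 'Ada'}]
```

+Both forms describe the same filter-then-project query; in Flink, both are optimized by the same runtime into an efficient execution plan.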
+
+### Prerequisites
+You only need basic knowledge of SQL to follow along. You will not need to write Java or Scala code or use an IDE.
+
+### Installation
+There are various ways to [install]({{ site.baseurl }}/ops/deployment/) Flink. 
Probably the easiest one is to download the binaries and run them locally for 
experimentation. We assume [local installation]({{ site.baseurl 
}}/try-flink/local_installation.html) for the rest of the tutorial. You can 
start a local cluster using the following command from the installation folder
+ 
+{% highlight bash %}
+./bin/start-cluster.sh
+{% endhighlight %}
+ 
+Once the cluster is started, it will also start a web server on 
[localhost:8081](localhost:8081) to manage settings and monitor the different 
jobs.
+
+### SQL Client
+The SQL Client is an interactive client used to submit SQL queries to Flink and visualize the results. It’s like the query editor of any other database management system, where you can write queries using standard SQL. You can start the SQL client from the installation folder as follows:
+
+{% highlight bash %}
+./bin/sql-client.sh embedded
+{% endhighlight %}
+
+### Hello World query
+ 
+Once the SQL client, our query editor, is up and running, it's time to start writing SQL queries. These queries will be submitted to the Flink cluster for computation, and the results will be returned to the SQL client UI. Let's start by printing 'Hello World', using the following simple query:
+ 
+{% highlight sql %}
+SELECT 'Hello World';
+{% endhighlight %}
+
+The `HELP;` command shows the different supported DDL (Data Definition Language) commands. Furthermore, Flink SQL also supports many built-in functions. The following query shows all built-in and user-defined functions:
+{% highlight sql %}
+SHOW FUNCTIONS;
+{% endhighlight %}
+
+Flink SQL provides users with a set of [built-in functions]({{ site.baseurl 
}}/dev/table/functions/systemFunctions.html) for data transformations. The 
following example will print the current timestamp using the 
`CURRENT_TIMESTAMP` function.
+
+{% highlight sql %}
+SELECT CURRENT_TIMESTAMP;
+{% endhighlight %}
+
+---
+
+{% top %}
+
+## Setting up tables
+Real-world database queries are run against SQL tables. Although Flink is a stream processing engine, users can define a table on top of streaming data. Generally, Flink data processing pipelines have three components: source, compute, and sink. 
+
+The source is the input, i.e. where data is read from, e.g. a text file or a Kafka topic. Then we define the computations that need to be performed on the input data. Finally, the sink defines what to do with the output or where to store the results. A sink can be a console log, another output file, or a Kafka topic. It's similar to a database query that reads data from a table, performs a query on it, and then displays the results. 
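+The source, compute, and sink stages described above can be sketched with plain Python (hypothetical stand-ins, not Flink connectors):

```python
def read_source():
    # Source: where data is read from, e.g. a text file or a Kafka topic.
    return [1, 2, 3, 4]

def compute(records):
    # Compute: the transformation applied to the input data.
    return [r * 2 for r in records]

def write_sink(records):
    # Sink: where the results go, e.g. a console log, a file, or a Kafka topic.
    return list(records)

result = write_sink(compute(read_source()))
print(result)  # [2, 4, 6, 8]
```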
+
+In Flink SQL semantics, the source and sink will be tables, but Flink isn’t a storage engine and hence cannot store the data itself. So Flink tables need to be backed by a [storage connector]({{

[GitHub] [flink] flinkbot edited a comment on pull request #14281: FLINK-20359 Added Owner Reference to Job Manager in native kubernetes setup

2020-12-01 Thread GitBox


flinkbot edited a comment on pull request #14281:
URL: https://github.com/apache/flink/pull/14281#issuecomment-736654087


   
   ## CI report:
   
   * 6a43ad75129ec9332f82134fe4c98329bd6db7b5 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10402)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #14282: [FLINK-20442][python][legal] Updated flink-python NOTICE

2020-12-01 Thread GitBox


flinkbot edited a comment on pull request #14282:
URL: https://github.com/apache/flink/pull/14282#issuecomment-736657230


   
   ## CI report:
   
   * 63f04e148ad295e4a4c03df7e7b56825f2333655 Azure: 
[CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10403)
 
   * b264a5b5ccd7365df1c01fbf404d728ef00bf58f Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10407)
 
   
   







[GitHub] [flink] paulangton commented on pull request #11338: [FLINK-10052][ha] Tolerate temporarily suspended ZooKeeper connections

2020-12-01 Thread GitBox


paulangton commented on pull request #11338:
URL: https://github.com/apache/flink/pull/11338#issuecomment-736791918


   @zentol @tillrohrmann I've been running into this issue and have a lot of 
interest in getting it fixed. Is there anything we can do to get this moving forward?







[jira] [Commented] (FLINK-20427) Remove CheckpointConfig.setPreferCheckpointForRecovery because it can lead to data loss

2020-12-01 Thread Nico Kruber (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20427?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17241831#comment-17241831
 ] 

Nico Kruber commented on FLINK-20427:
-

If Flink 1.12 adds additional risks here due to the new sources, that would of 
course be something to consider, but I'm a bit unclear about what the problems are 
there.

Are you saying that any restore operation for these sources that is not based 
on the latest snapshot will/may lead to data loss? So including restoring from 
an arbitrary savepoint (or actually also a retained checkpoint)? If that were 
the case, that's imho a bug in the new sources...

> Remove CheckpointConfig.setPreferCheckpointForRecovery because it can lead to 
> data loss
> ---
>
> Key: FLINK-20427
> URL: https://issues.apache.org/jira/browse/FLINK-20427
> Project: Flink
>  Issue Type: Bug
>  Components: API / DataStream, Runtime / Checkpointing
>Affects Versions: 1.12.0
>Reporter: Till Rohrmann
>Priority: Critical
> Fix For: 1.13.0
>
>
> The {{CheckpointConfig.setPreferCheckpointForRecovery}} allows to configure 
> whether Flink prefers checkpoints for recovery if the 
> {{CompletedCheckpointStore}} contains savepoints and checkpoints. This is 
> problematic because due to this feature, Flink might prefer older checkpoints 
> over newer savepoints for recovery. Since some components expect that the 
> always the latest checkpoint/savepoint is used (e.g. the 
> {{SourceCoordinator}}), it breaks assumptions and can lead to 
> {{SourceSplits}} which are not read. This effectively means that the system 
> loses data. Similarly, this behaviour can cause that exactly once sinks might 
> output results multiple times which violates the processing guarantees. 
> Hence, I believe that we should remove this setting because it changes 
> Flink's behaviour in some very significant way potentially w/o the user 
> noticing.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on pull request #13722: [FLINK-19636][coordination] Add DeclarativeSlotPool

2020-12-01 Thread GitBox


flinkbot edited a comment on pull request #13722:
URL: https://github.com/apache/flink/pull/13722#issuecomment-713509789


   
   ## CI report:
   
   * 7cdd555313da89f3b6be3da396e6782460a482d8 UNKNOWN
   * 2f59bf5d7da389a1f208497fec6d4453bee129de Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10397)
 
   
   







[GitHub] [flink] flinkbot edited a comment on pull request #14283: FLINK-15649 Support mounting volumes

2020-12-01 Thread GitBox


flinkbot edited a comment on pull request #14283:
URL: https://github.com/apache/flink/pull/14283#issuecomment-736675395


   
   ## CI report:
   
   * bf7d5e41e38d9dbeef734ccb80f97fe948974837 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10413)
 
   
   







[GitHub] [flink] flinkbot edited a comment on pull request #14280: [FLINK-20110][table-planner] Support 'merge' method for first_value a…

2020-12-01 Thread GitBox


flinkbot edited a comment on pull request #14280:
URL: https://github.com/apache/flink/pull/14280#issuecomment-736629561


   
   ## CI report:
   
   * b2173ba7a73dc7d79ba4f0679b7076400815d116 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10400)
 
   
   







[jira] [Resolved] (FLINK-20418) NPE in IteratorSourceReader

2020-12-01 Thread Arvid Heise (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20418?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arvid Heise resolved FLINK-20418.
-
Resolution: Fixed

> NPE in IteratorSourceReader
> ---
>
> Key: FLINK-20418
> URL: https://issues.apache.org/jira/browse/FLINK-20418
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Task
>Affects Versions: 1.12.0
>Reporter: Roman Khachatryan
>Assignee: Arvid Heise
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.12.0
>
>
> With the following job
> {code}
>   @Test
>   public void testNpe() throws Exception {
>   StreamExecutionEnvironment env = 
> StreamExecutionEnvironment.getExecutionEnvironment();
>   env.setParallelism(1);
>   env.setRestartStrategy(new NoRestartStrategyConfiguration());
>   env.enableCheckpointing(50, CheckpointingMode.EXACTLY_ONCE);
>   env
>   .fromSequence(0, 100)
>   .map(x -> {
>   Thread.sleep(10);
>   return x;
>   })
>   .addSink(new DiscardingSink<>());
>   env.execute();
>   }
> {code}
> I (always) get an exception like this:
> {code}
> ...
> Caused by: java.lang.Exception: Could not perform checkpoint 1 for operator 
> Source: Sequence Source -> Map -> Sink: Unnamed (1/1)#0.
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.triggerCheckpoint(StreamTask.java:866)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.lambda$triggerCheckpointAsync$8(StreamTask.java:831)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTaskActionExecutor$1.runThrowing(StreamTaskActionExecutor.java:47)
>   at 
> org.apache.flink.streaming.runtime.tasks.mailbox.Mail.run(Mail.java:78)
>   at 
> org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.processMail(MailboxProcessor.java:283)
>   at 
> org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.runMailboxLoop(MailboxProcessor.java:184)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.runMailboxLoop(StreamTask.java:575)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:539)
>   at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:722)
>   at org.apache.flink.runtime.taskmanager.Task.run(Task.java:547)
>   at java.lang.Thread.run(Thread.java:748)
> Caused by: org.apache.flink.runtime.checkpoint.CheckpointException: Could not 
> complete snapshot 1 for operator Source: Sequence Source -> Map -> Sink: 
> Unnamed (1/1)#0. Failure reason: Checkpoint was declined.
>   at 
> org.apache.flink.streaming.api.operators.StreamOperatorStateHandler.snapshotState(StreamOperatorStateHandler.java:226)
>   at 
> org.apache.flink.streaming.api.operators.StreamOperatorStateHandler.snapshotState(StreamOperatorStateHandler.java:158)
>   at 
> org.apache.flink.streaming.api.operators.AbstractStreamOperator.snapshotState(AbstractStreamOperator.java:343)
>   at 
> org.apache.flink.streaming.runtime.tasks.SubtaskCheckpointCoordinatorImpl.checkpointStreamOperator(SubtaskCheckpointCoordinatorImpl.java:603)
>   at 
> org.apache.flink.streaming.runtime.tasks.SubtaskCheckpointCoordinatorImpl.buildOperatorSnapshotFutures(SubtaskCheckpointCoordinatorImpl.java:529)
>   at 
> org.apache.flink.streaming.runtime.tasks.SubtaskCheckpointCoordinatorImpl.takeSnapshotSync(SubtaskCheckpointCoordinatorImpl.java:496)
>   at 
> org.apache.flink.streaming.runtime.tasks.SubtaskCheckpointCoordinatorImpl.checkpointState(SubtaskCheckpointCoordinatorImpl.java:266)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.lambda$performCheckpoint$9(StreamTask.java:924)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTaskActionExecutor$1.runThrowing(StreamTaskActionExecutor.java:47)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.performCheckpoint(StreamTask.java:914)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.triggerCheckpoint(StreamTask.java:857)
>   ... 10 more
> Caused by: java.lang.NullPointerException
>   at 
> org.apache.flink.api.connector.source.lib.util.IteratorSourceReader.snapshotState(IteratorSourceReader.java:132)
>   at 
> org.apache.flink.streaming.api.operators.SourceOperator.snapshotState(SourceOperator.java:264)
>   at 
> org.apache.flink.streaming.api.operators.StreamOperatorStateHandler.snapshotState(StreamOperatorStateHandler.java:197)
>   ... 20 more
> {code}
> Adding a null check solves the issue. But if I then change sleep time from 10 
> to 50 I get
> {code}
> Caused by: java.lang.IllegalArgumentException: 'from' must be <= 

[jira] [Commented] (FLINK-20418) NPE in IteratorSourceReader

2020-12-01 Thread Arvid Heise (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20418?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17241820#comment-17241820
 ] 

Arvid Heise commented on FLINK-20418:
-

Merged into master as d019ce782164ddf317dd497770e48facf5ccbe2c
Merged into 1.12 as 6f4bc8c957.

> NPE in IteratorSourceReader
> ---
>
> Key: FLINK-20418
> URL: https://issues.apache.org/jira/browse/FLINK-20418
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Task
>Affects Versions: 1.12.0
>Reporter: Roman Khachatryan
>Assignee: Arvid Heise
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.12.0
>
>
> With the following job
> {code}
>   @Test
>   public void testNpe() throws Exception {
>   StreamExecutionEnvironment env = 
> StreamExecutionEnvironment.getExecutionEnvironment();
>   env.setParallelism(1);
>   env.setRestartStrategy(new NoRestartStrategyConfiguration());
>   env.enableCheckpointing(50, CheckpointingMode.EXACTLY_ONCE);
>   env
>   .fromSequence(0, 100)
>   .map(x -> {
>   Thread.sleep(10);
>   return x;
>   })
>   .addSink(new DiscardingSink<>());
>   env.execute();
>   }
> {code}
> I (always) get an exception like this:
> {code}
> ...
> Caused by: java.lang.Exception: Could not perform checkpoint 1 for operator 
> Source: Sequence Source -> Map -> Sink: Unnamed (1/1)#0.
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.triggerCheckpoint(StreamTask.java:866)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.lambda$triggerCheckpointAsync$8(StreamTask.java:831)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTaskActionExecutor$1.runThrowing(StreamTaskActionExecutor.java:47)
>   at 
> org.apache.flink.streaming.runtime.tasks.mailbox.Mail.run(Mail.java:78)
>   at 
> org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.processMail(MailboxProcessor.java:283)
>   at 
> org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.runMailboxLoop(MailboxProcessor.java:184)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.runMailboxLoop(StreamTask.java:575)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:539)
>   at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:722)
>   at org.apache.flink.runtime.taskmanager.Task.run(Task.java:547)
>   at java.lang.Thread.run(Thread.java:748)
> Caused by: org.apache.flink.runtime.checkpoint.CheckpointException: Could not 
> complete snapshot 1 for operator Source: Sequence Source -> Map -> Sink: 
> Unnamed (1/1)#0. Failure reason: Checkpoint was declined.
>   at 
> org.apache.flink.streaming.api.operators.StreamOperatorStateHandler.snapshotState(StreamOperatorStateHandler.java:226)
>   at 
> org.apache.flink.streaming.api.operators.StreamOperatorStateHandler.snapshotState(StreamOperatorStateHandler.java:158)
>   at 
> org.apache.flink.streaming.api.operators.AbstractStreamOperator.snapshotState(AbstractStreamOperator.java:343)
>   at 
> org.apache.flink.streaming.runtime.tasks.SubtaskCheckpointCoordinatorImpl.checkpointStreamOperator(SubtaskCheckpointCoordinatorImpl.java:603)
>   at 
> org.apache.flink.streaming.runtime.tasks.SubtaskCheckpointCoordinatorImpl.buildOperatorSnapshotFutures(SubtaskCheckpointCoordinatorImpl.java:529)
>   at 
> org.apache.flink.streaming.runtime.tasks.SubtaskCheckpointCoordinatorImpl.takeSnapshotSync(SubtaskCheckpointCoordinatorImpl.java:496)
>   at 
> org.apache.flink.streaming.runtime.tasks.SubtaskCheckpointCoordinatorImpl.checkpointState(SubtaskCheckpointCoordinatorImpl.java:266)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.lambda$performCheckpoint$9(StreamTask.java:924)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTaskActionExecutor$1.runThrowing(StreamTaskActionExecutor.java:47)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.performCheckpoint(StreamTask.java:914)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.triggerCheckpoint(StreamTask.java:857)
>   ... 10 more
> Caused by: java.lang.NullPointerException
>   at 
> org.apache.flink.api.connector.source.lib.util.IteratorSourceReader.snapshotState(IteratorSourceReader.java:132)
>   at 
> org.apache.flink.streaming.api.operators.SourceOperator.snapshotState(SourceOperator.java:264)
>   at 
> org.apache.flink.streaming.api.operators.StreamOperatorStateHandler.snapshotState(StreamOperatorStateHandler.java:197)
>   ... 20 more
> {code}
> Adding a null check solves the issue. But if I then change 

[GitHub] [flink] AHeise merged pull request #14279: [FLINK-20418][core] Fixing checkpointing of IteratorSourceReader.

2020-12-01 Thread GitBox


AHeise merged pull request #14279:
URL: https://github.com/apache/flink/pull/14279


   







[GitHub] [flink] flinkbot edited a comment on pull request #14283: FLINK-15649 Support mounting volumes

2020-12-01 Thread GitBox


flinkbot edited a comment on pull request #14283:
URL: https://github.com/apache/flink/pull/14283#issuecomment-736675395


   
   ## CI report:
   
   * ca7d9b637e2ac020049e3a54b80ba6a14b9f7865 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10404)
 
   * bf7d5e41e38d9dbeef734ccb80f97fe948974837 UNKNOWN
   
   







[GitHub] [flink] flinkbot edited a comment on pull request #14284: FLINK-20324 Added Configuration mount support

2020-12-01 Thread GitBox


flinkbot edited a comment on pull request #14284:
URL: https://github.com/apache/flink/pull/14284#issuecomment-736676337


   
   ## CI report:
   
   * 79e46eb71823e08b4462a22c024fad6a71f4e3dd Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10410)
 
   
   







[GitHub] [flink] flinkbot edited a comment on pull request #14279: [FLINK-20418][core] Fixing checkpointing of IteratorSourceReader.

2020-12-01 Thread GitBox


flinkbot edited a comment on pull request #14279:
URL: https://github.com/apache/flink/pull/14279#issuecomment-736588058


   
   ## CI report:
   
   * 05490783d40bea9c1a38f1af4c731f23bb5160c1 Azure: 
[CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10393)
 
   * 2087860510508ad7e65a97204cb7806b03d830a8 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10399)
 
   
   







[jira] [Commented] (FLINK-20444) Chain AsyncWaitOperator to new sources

2020-12-01 Thread Arvid Heise (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20444?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17241772#comment-17241772
 ] 

Arvid Heise commented on FLINK-20444:
-

Merged into master as f79a09b8488190b549f4b7159f3e95818d57946a.
Merged into 1.12 as 302c6739fc.

> Chain AsyncWaitOperator to new sources
> --
>
> Key: FLINK-20444
> URL: https://issues.apache.org/jira/browse/FLINK-20444
> Project: Flink
>  Issue Type: Improvement
>Reporter: Arvid Heise
>Priority: Major
>  Labels: pull-request-available
>
> For legacy sources, we had to disable chaining because of incompatible 
> threading models.
> New sources are working fine however and it would give some users massive 
> performance improvements.





[jira] [Resolved] (FLINK-20444) Chain AsyncWaitOperator to new sources

2020-12-01 Thread Arvid Heise (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20444?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arvid Heise resolved FLINK-20444.
-
Fix Version/s: 1.13.0
   1.12.0
   Resolution: Fixed

> Chain AsyncWaitOperator to new sources
> --
>
> Key: FLINK-20444
> URL: https://issues.apache.org/jira/browse/FLINK-20444
> Project: Flink
>  Issue Type: Improvement
>Reporter: Arvid Heise
>Assignee: Arvid Heise
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.12.0, 1.13.0
>
>
> For legacy sources, we had to disable chaining because of incompatible 
> threading models.
> New sources are working fine however and it would give some users massive 
> performance improvements.





[jira] [Assigned] (FLINK-20444) Chain AsyncWaitOperator to new sources

2020-12-01 Thread Arvid Heise (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20444?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arvid Heise reassigned FLINK-20444:
---

Assignee: Arvid Heise

> Chain AsyncWaitOperator to new sources
> --
>
> Key: FLINK-20444
> URL: https://issues.apache.org/jira/browse/FLINK-20444
> Project: Flink
>  Issue Type: Improvement
>Reporter: Arvid Heise
>Assignee: Arvid Heise
>Priority: Major
>  Labels: pull-request-available
>
> For legacy sources, we had to disable chaining because of incompatible 
> threading models.
> New sources are working fine however and it would give some users massive 
> performance improvements.





[GitHub] [flink] AHeise merged pull request #14278: [FLINK-20444][runtime] Chain YieldingOperatorFactory to new sources.

2020-12-01 Thread GitBox


AHeise merged pull request #14278:
URL: https://github.com/apache/flink/pull/14278


   







[GitHub] [flink] flinkbot edited a comment on pull request #14284: FLINK-20324 Added Configuration mount support

2020-12-01 Thread GitBox


flinkbot edited a comment on pull request #14284:
URL: https://github.com/apache/flink/pull/14284#issuecomment-736676337


   
   ## CI report:
   
   * d58f9ba006f80ae16074f0b1cf2e0025a7f5da43 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10405)
 
   * 79e46eb71823e08b4462a22c024fad6a71f4e3dd UNKNOWN
   
   






