[GitHub] [flink] flinkbot edited a comment on pull request #16757: [FLINK-23615][docs] Fix the mistake in "systemfunctions"

2021-08-10 Thread GitBox


flinkbot edited a comment on pull request #16757:
URL: https://github.com/apache/flink/pull/16757#issuecomment-895146040


   
   ## CI report:
   
   * 9fcc3202a4bb7900256f05785bbfa618e8a78de6 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=21806)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #16759: [FLINK-23434][table-planner] Fix the inconsistent type in IncrementalAggregateRule when the query has one distinct agg function and c

2021-08-10 Thread GitBox


flinkbot edited a comment on pull request #16759:
URL: https://github.com/apache/flink/pull/16759#issuecomment-895204515


   
   ## CI report:
   
   * dadf56154b8855a4dee387f04838f8bd36d9add9 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=21778)
 
   * add96bdd5d9d653826def99a7e44810d16957183 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=21807)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   






[jira] [Created] (FLINK-23698) Pass watermarks in finished on restore operators

2021-08-10 Thread Piotr Nowojski (Jira)
Piotr Nowojski created FLINK-23698:
--

 Summary: Pass watermarks in finished on restore operators
 Key: FLINK-23698
 URL: https://issues.apache.org/jira/browse/FLINK-23698
 Project: Flink
  Issue Type: Sub-task
  Components: Runtime / Task
Affects Versions: 1.14.0
Reporter: Piotr Nowojski
Assignee: Piotr Nowojski
 Fix For: 1.14.0


After merging FLINK-23541 there is a bug when restoring finished tasks: we 
lose the information that the max watermark has already been emitted. Since 
the task is finished, no new watermark will ever be emitted, so downstream 
tasks (for example two/multiple input tasks) would be deadlocked waiting for a 
watermark from an already finished input.
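The deadlock mechanism can be sketched in isolation. The following is a simplified model, not Flink's actual operator code (the class and method names here are made up for illustration): a multiple-input task forwards only the minimum watermark across its inputs, so a finished input that never re-emits the max watermark after restore pins the combined watermark at Long.MIN_VALUE.

```java
import java.util.Arrays;

// Simplified model of how a multi-input task combines input watermarks.
class WatermarkCombiner {
    private final long[] inputWatermarks;

    WatermarkCombiner(int numInputs) {
        inputWatermarks = new long[numInputs];
        Arrays.fill(inputWatermarks, Long.MIN_VALUE);
    }

    /** Records a watermark for one input and returns the combined (minimum) watermark. */
    long onWatermark(int input, long watermark) {
        inputWatermarks[input] = watermark;
        return Arrays.stream(inputWatermarks).min().getAsLong();
    }
}

public class FinishedInputDeadlockDemo {
    public static void main(String[] args) {
        WatermarkCombiner combiner = new WatermarkCombiner(2);
        // Input 0 is live and advances normally.
        long combined = combiner.onWatermark(0, 1_000L);
        // Input 1 finished before the restore; if the max watermark is not
        // re-emitted for it, the combined watermark stays stuck at Long.MIN_VALUE.
        System.out.println(combined == Long.MIN_VALUE); // true (stuck)

        // Re-emitting the max watermark for the finished input unblocks progress.
        combined = combiner.onWatermark(1, Long.MAX_VALUE);
        System.out.println(combined); // 1000
    }
}
```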



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-23698) Pass watermarks in finished on restore operators

2021-08-10 Thread Piotr Nowojski (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-23698?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Piotr Nowojski updated FLINK-23698:
---
Priority: Critical  (was: Major)

> Pass watermarks in finished on restore operators
> 
>
> Key: FLINK-23698
> URL: https://issues.apache.org/jira/browse/FLINK-23698
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Task
>Affects Versions: 1.14.0
>Reporter: Piotr Nowojski
>Assignee: Piotr Nowojski
>Priority: Critical
> Fix For: 1.14.0
>
>
> After merging FLINK-23541 there is a bug when restoring finished tasks: we 
> lose the information that the max watermark has already been emitted. Since 
> the task is finished, no new watermark will ever be emitted, so downstream 
> tasks (for example two/multiple input tasks) would be deadlocked waiting for 
> a watermark from an already finished input.





[jira] [Commented] (FLINK-23695) DispatcherFailoverITCase.testRecoverFromCheckpointAfterJobGraphRemovalOfTerminatedJobFailed fails due to timeout

2021-08-10 Thread Till Rohrmann (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-23695?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17396488#comment-17396488
 ] 

Till Rohrmann commented on FLINK-23695:
---

[~dmvk] could you take a look at this problem?

> DispatcherFailoverITCase.testRecoverFromCheckpointAfterJobGraphRemovalOfTerminatedJobFailed
>  fails due to timeout
> 
>
> Key: FLINK-23695
> URL: https://issues.apache.org/jira/browse/FLINK-23695
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.14.0
>Reporter: Xintong Song
>Priority: Major
>  Labels: test-stability
> Fix For: 1.14.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=21790&view=logs&j=0e7be18f-84f2-53f0-a32d-4a5e4a174679&t=7c1d86e3-35bd-5fd5-3b7c-30c126a78702&l=9573
> {code}
> Aug 09 23:09:32 [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, 
> Time elapsed: 60.331 s <<< FAILURE! - in 
> org.apache.flink.runtime.dispatcher.DispatcherFailoverITCase
> Aug 09 23:09:32 [ERROR] 
> testRecoverFromCheckpointAfterJobGraphRemovalOfTerminatedJobFailed(org.apache.flink.runtime.dispatcher.DispatcherFailoverITCase)
>   Time elapsed: 60.265 s  <<< ERROR!
> Aug 09 23:09:32 java.util.concurrent.TimeoutException: Condition was not met 
> in given timeout.
> Aug 09 23:09:32   at 
> org.apache.flink.runtime.testutils.CommonTestUtils.waitUntilCondition(CommonTestUtils.java:154)
> Aug 09 23:09:32   at 
> org.apache.flink.runtime.testutils.CommonTestUtils.waitUntilCondition(CommonTestUtils.java:132)
> Aug 09 23:09:32   at 
> org.apache.flink.runtime.testutils.CommonTestUtils.waitUntilCondition(CommonTestUtils.java:124)
> Aug 09 23:09:32   at 
> org.apache.flink.runtime.dispatcher.AbstractDispatcherTest.awaitStatus(AbstractDispatcherTest.java:81)
> Aug 09 23:09:32   at 
> org.apache.flink.runtime.dispatcher.DispatcherFailoverITCase.testRecoverFromCheckpointAfterJobGraphRemovalOfTerminatedJobFailed(DispatcherFailoverITCase.java:137)
> Aug 09 23:09:32   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)
> Aug 09 23:09:32   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> Aug 09 23:09:32   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> Aug 09 23:09:32   at java.lang.reflect.Method.invoke(Method.java:498)
> Aug 09 23:09:32   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
> Aug 09 23:09:32   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> Aug 09 23:09:32   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
> Aug 09 23:09:32   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> Aug 09 23:09:32   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> Aug 09 23:09:32   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> Aug 09 23:09:32   at 
> org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
> Aug 09 23:09:32   at 
> org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
> Aug 09 23:09:32   at 
> org.apache.flink.runtime.util.TestingFatalErrorHandlerResource$CloseableStatement.evaluate(TestingFatalErrorHandlerResource.java:91)
> Aug 09 23:09:32   at 
> org.apache.flink.runtime.util.TestingFatalErrorHandlerResource$CloseableStatement.access$200(TestingFatalErrorHandlerResource.java:83)
> Aug 09 23:09:32   at 
> org.apache.flink.runtime.util.TestingFatalErrorHandlerResource$1.evaluate(TestingFatalErrorHandlerResource.java:55)
> Aug 09 23:09:32   at 
> org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45)
> Aug 09 23:09:32   at 
> org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
> Aug 09 23:09:32   at 
> org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
> Aug 09 23:09:32   at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
> Aug 09 23:09:32   at 
> org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
> Aug 09 23:09:32   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
> Aug 09 23:09:32   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
> Aug 09 23:09:32   at 
> org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
> Aug 09 23:09:32   at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
> Aug 09 23:09:32   at 
> org.junit.runners.ParentRunner.runChildren(ParentRunne

[jira] [Commented] (FLINK-23695) DispatcherFailoverITCase.testRecoverFromCheckpointAfterJobGraphRemovalOfTerminatedJobFailed fails due to timeout

2021-08-10 Thread Jira


[ 
https://issues.apache.org/jira/browse/FLINK-23695?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17396492#comment-17396492
 ] 

David Morávek commented on FLINK-23695:
---

Yes, I'll try to fix this today.

> DispatcherFailoverITCase.testRecoverFromCheckpointAfterJobGraphRemovalOfTerminatedJobFailed
>  fails due to timeout
> 
>
> Key: FLINK-23695
> URL: https://issues.apache.org/jira/browse/FLINK-23695
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.14.0
>Reporter: Xintong Song
>Priority: Major
>  Labels: test-stability
> Fix For: 1.14.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=21790&view=logs&j=0e7be18f-84f2-53f0-a32d-4a5e4a174679&t=7c1d86e3-35bd-5fd5-3b7c-30c126a78702&l=9573
> {code}
> Aug 09 23:09:32 [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, 
> Time elapsed: 60.331 s <<< FAILURE! - in 
> org.apache.flink.runtime.dispatcher.DispatcherFailoverITCase
> Aug 09 23:09:32 [ERROR] 
> testRecoverFromCheckpointAfterJobGraphRemovalOfTerminatedJobFailed(org.apache.flink.runtime.dispatcher.DispatcherFailoverITCase)
>   Time elapsed: 60.265 s  <<< ERROR!
> Aug 09 23:09:32 java.util.concurrent.TimeoutException: Condition was not met 
> in given timeout.
> Aug 09 23:09:32   at 
> org.apache.flink.runtime.testutils.CommonTestUtils.waitUntilCondition(CommonTestUtils.java:154)
> Aug 09 23:09:32   at 
> org.apache.flink.runtime.testutils.CommonTestUtils.waitUntilCondition(CommonTestUtils.java:132)
> Aug 09 23:09:32   at 
> org.apache.flink.runtime.testutils.CommonTestUtils.waitUntilCondition(CommonTestUtils.java:124)
> Aug 09 23:09:32   at 
> org.apache.flink.runtime.dispatcher.AbstractDispatcherTest.awaitStatus(AbstractDispatcherTest.java:81)
> Aug 09 23:09:32   at 
> org.apache.flink.runtime.dispatcher.DispatcherFailoverITCase.testRecoverFromCheckpointAfterJobGraphRemovalOfTerminatedJobFailed(DispatcherFailoverITCase.java:137)
> Aug 09 23:09:32   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)
> Aug 09 23:09:32   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> Aug 09 23:09:32   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> Aug 09 23:09:32   at java.lang.reflect.Method.invoke(Method.java:498)
> Aug 09 23:09:32   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
> Aug 09 23:09:32   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> Aug 09 23:09:32   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
> Aug 09 23:09:32   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> Aug 09 23:09:32   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> Aug 09 23:09:32   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> Aug 09 23:09:32   at 
> org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
> Aug 09 23:09:32   at 
> org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
> Aug 09 23:09:32   at 
> org.apache.flink.runtime.util.TestingFatalErrorHandlerResource$CloseableStatement.evaluate(TestingFatalErrorHandlerResource.java:91)
> Aug 09 23:09:32   at 
> org.apache.flink.runtime.util.TestingFatalErrorHandlerResource$CloseableStatement.access$200(TestingFatalErrorHandlerResource.java:83)
> Aug 09 23:09:32   at 
> org.apache.flink.runtime.util.TestingFatalErrorHandlerResource$1.evaluate(TestingFatalErrorHandlerResource.java:55)
> Aug 09 23:09:32   at 
> org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45)
> Aug 09 23:09:32   at 
> org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
> Aug 09 23:09:32   at 
> org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
> Aug 09 23:09:32   at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
> Aug 09 23:09:32   at 
> org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
> Aug 09 23:09:32   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
> Aug 09 23:09:32   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
> Aug 09 23:09:32   at 
> org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
> Aug 09 23:09:32   at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
> Aug 09 23:09:32   at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
> 

[GitHub] [flink] cuibo01 commented on pull request #16745: [FLINK-22246]when use HiveCatalog create table , can't set Table owne…

2021-08-10 Thread GitBox


cuibo01 commented on pull request #16745:
URL: https://github.com/apache/flink/pull/16745#issuecomment-895790739


   > Hey @cuibo01 , hive has more than just `UGI Authenticator` and `Session 
Config Authenticator`. It can even be a customized implementation.
   
   Right, almost all secure clusters use UGI.
   
   > So it's not possible to support all of them unless we directly leverage 
the `HiveAuthenticationProvider` interface. But like I said, we can just use 
the UGI for now. Or is there any reason you must rely on the `user.name` 
configuration which has to be different from the UGI?
   
   I think Flink should be able to set the owner attribute in a non-secure 
cluster, because authentication and authorization are different: in a 
non-secure cluster authentication is disabled while authorization may still 
be enabled. Also, the UGI#currentUserName of each TM is the hostname of the 
current machine, so we can set the owner through the HiveConf.
   






[GitHub] [flink] cuibo01 edited a comment on pull request #16745: [FLINK-22246]when use HiveCatalog create table , can't set Table owne…

2021-08-10 Thread GitBox


cuibo01 edited a comment on pull request #16745:
URL: https://github.com/apache/flink/pull/16745#issuecomment-895790739


   > Hey @cuibo01 , hive has more than just `UGI Authenticator` and `Session 
Config Authenticator`. It can even be a customized implementation.
   
   Right, almost all secure clusters use UGI.
   
   > So it's not possible to support all of them unless we directly leverage 
the `HiveAuthenticationProvider` interface. But like I said, we can just use 
the UGI for now. Or is there any reason you must rely on the `user.name` 
configuration which has to be different from the UGI?
   
   I think Flink should be able to set the owner attribute in a non-secure 
cluster, because authentication and authorization are different: in a 
non-secure cluster authentication is disabled while authorization may still 
be enabled. Also, the UGI#currentUserName of each TM is the hostname of the 
current machine, so we can set the owner through the HiveConf.
   And even with a customized implementation, Flink can also set the owner 
through the HiveConf.
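The fallback being argued for can be sketched as follows. This is an illustrative model, not the HiveCatalog implementation: the config key `hive.table.owner` and the `resolveOwner` helper are hypothetical names, and the actual PR may use a different key and wiring. The idea is simply that an explicit configuration value takes precedence over the UGI-derived user, which in a non-secure cluster may just be the TM's local user.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch: resolve the table owner from an explicit config key
// first, falling back to the authenticated (UGI) user name.
public class OwnerResolutionDemo {
    // Hypothetical config key; the actual key used by the change may differ.
    static final String OWNER_KEY = "hive.table.owner";

    static String resolveOwner(Map<String, String> hiveConf, String ugiCurrentUser) {
        // In a non-secure cluster ugiCurrentUser may be the TM's local OS user
        // or hostname, so an explicitly configured owner takes precedence.
        String configured = hiveConf.get(OWNER_KEY);
        return configured != null ? configured : ugiCurrentUser;
    }

    public static void main(String[] args) {
        Map<String, String> conf = new HashMap<>();
        System.out.println(resolveOwner(conf, "tm-host-1")); // tm-host-1
        conf.put(OWNER_KEY, "alice");
        System.out.println(resolveOwner(conf, "tm-host-1")); // alice
    }
}
```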
   






[GitHub] [flink] flinkbot edited a comment on pull request #16669: [FLINK-23544][table-planner]Window TVF Supports session window

2021-08-10 Thread GitBox


flinkbot edited a comment on pull request #16669:
URL: https://github.com/apache/flink/pull/16669#issuecomment-890847741


   
   ## CI report:
   
   * 889dfbb1b28097728eea054580fa4e0e6bb61c22 UNKNOWN
   * 666aa0b83c06a3ce506a89a359287d10e4cd49b8 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=21802)
 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=21749)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   






[GitHub] [flink] flinkbot edited a comment on pull request #16688: [FLINK-22773][coordination] Optimize the construction of pipelined regions

2021-08-10 Thread GitBox


flinkbot edited a comment on pull request #16688:
URL: https://github.com/apache/flink/pull/16688#issuecomment-891705944


   
   ## CI report:
   
   * f91aea0c209d076f062891cb814c3b0cc0eea4ef UNKNOWN
   * 05d73b31d7c9b902164d9e20e035716bcc2b225d Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=21800)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   






[GitHub] [flink] flinkbot edited a comment on pull request #16744: [FLINK-23208] let late processing timers fired immediately

2021-08-10 Thread GitBox


flinkbot edited a comment on pull request #16744:
URL: https://github.com/apache/flink/pull/16744#issuecomment-894596520


   
   ## CI report:
   
   * d834558cae758cfbe0ca1a84e75dce128567e5b4 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=21801)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   






[GitHub] [flink] flinkbot edited a comment on pull request #16762: [FLINK-23408] Waiting for final checkpoint completed before finishing a task

2021-08-10 Thread GitBox


flinkbot edited a comment on pull request #16762:
URL: https://github.com/apache/flink/pull/16762#issuecomment-895436730


   
   ## CI report:
   
   * 8f94b06837560f488b704315887d7d9c4c463807 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=21787)
 
   * 2daa6ac1bc5ee54ab9fddc85e9dbe454e0ae009a UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   






[GitHub] [flink] curcur commented on a change in pull request #16714: [FLINK-23475][runtime][checkpoint] Supports repartition partly finished broacast states

2021-08-10 Thread GitBox


curcur commented on a change in pull request #16714:
URL: https://github.com/apache/flink/pull/16714#discussion_r685748071



##
File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/checkpoint/RoundRobinOperatorStateRepartitioner.java
##
@@ -123,9 +128,28 @@
 return mergeMapList;
 }
 
-/** Collect union states from given parallelSubtaskStates. */
 private Map<String, List<Tuple2<StreamStateHandle, OperatorStateHandle.StateMetaInfo>>>
 collectUnionStates(List<List<OperatorStateHandle>> parallelSubtaskStates) {
+return collectStates(parallelSubtaskStates, OperatorStateHandle.Mode.UNION);
+}
+
+private Map<String, List<Tuple2<StreamStateHandle, OperatorStateHandle.StateMetaInfo>>>
+collectPartlyFinishedBroadcastStates(
+List<List<OperatorStateHandle>> parallelSubtaskStates) {
+return collectStates(parallelSubtaskStates, OperatorStateHandle.Mode.BROADCAST).entrySet()
+.stream()
+.filter(
+e ->
+e.getValue().size() > 0
+&& e.getValue().size() < parallelSubtaskStates.size())
+.collect(Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue));
+}
+
+/** Collect union states from given parallelSubtaskStates. */

Review comment:
   change the comment to reflect what the function does right now?

##
File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/checkpoint/RoundRobinOperatorStateRepartitioner.java
##
@@ -123,9 +128,28 @@
 return mergeMapList;
 }
 
-/** Collect union states from given parallelSubtaskStates. */
 private Map<String, List<Tuple2<StreamStateHandle, OperatorStateHandle.StateMetaInfo>>>
 collectUnionStates(List<List<OperatorStateHandle>> parallelSubtaskStates) {
+return collectStates(parallelSubtaskStates, OperatorStateHandle.Mode.UNION);
+}
+
+private Map<String, List<Tuple2<StreamStateHandle, OperatorStateHandle.StateMetaInfo>>>
+collectPartlyFinishedBroadcastStates(
+List<List<OperatorStateHandle>> parallelSubtaskStates) {
+return collectStates(parallelSubtaskStates, OperatorStateHandle.Mode.BROADCAST).entrySet()
+.stream()
+.filter(
+e ->
+e.getValue().size() > 0
+&& e.getValue().size() < parallelSubtaskStates.size())

Review comment:
   `e.getValue().size() > 0` means `not empty`?
   what does `e.getValue().size() < parallelSubtaskStates.size()` mean?

##
File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/checkpoint/RoundRobinOperatorStateRepartitioner.java
##
@@ -123,9 +128,28 @@
 return mergeMapList;
 }
 
-/** Collect union states from given parallelSubtaskStates. */
 private Map<String, List<Tuple2<StreamStateHandle, OperatorStateHandle.StateMetaInfo>>>
 collectUnionStates(List<List<OperatorStateHandle>> parallelSubtaskStates) {
+return collectStates(parallelSubtaskStates, OperatorStateHandle.Mode.UNION);
+}
+
+private Map<String, List<Tuple2<StreamStateHandle, OperatorStateHandle.StateMetaInfo>>>
+collectPartlyFinishedBroadcastStates(
+List<List<OperatorStateHandle>> parallelSubtaskStates) {
+return collectStates(parallelSubtaskStates, OperatorStateHandle.Mode.BROADCAST).entrySet()
+.stream()
+.filter(
+e ->
+e.getValue().size() > 0
+&& e.getValue().size() < parallelSubtaskStates.size())

Review comment:
   My guess is that the empty state should be `e.getValue().size() == 0` ?
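The predicate asked about in the review comments above reads as "partly finished": keep a broadcast state only when at least one, but not all, parallel subtasks reported it (size 0 means no subtask has the state; size equal to the subtask count means no subtask finished). A standalone sketch of the same filter, using plain Java collections instead of the Flink state-handle types:

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class PartlyFinishedFilterDemo {
    /**
     * Keeps only entries reported by some, but not all, parallel subtasks:
     * size == 0 means no subtask has the state, size == totalSubtasks means
     * every subtask has it; anything in between is "partly finished".
     */
    static Map<String, List<String>> partlyFinished(
            Map<String, List<String>> statesByName, int totalSubtasks) {
        return statesByName.entrySet().stream()
                .filter(e -> e.getValue().size() > 0
                        && e.getValue().size() < totalSubtasks)
                .collect(Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue));
    }

    public static void main(String[] args) {
        Map<String, List<String>> states = Map.of(
                "allReported", List.of("s0", "s1", "s2"), // all 3 subtasks -> excluded
                "partly", List.of("s0", "s1"),            // 2 of 3 -> kept
                "none", List.of());                       // 0 of 3 -> excluded
        System.out.println(partlyFinished(states, 3).keySet()); // [partly]
    }
}
```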








[GitHub] [flink] pnowojski commented on pull request #16744: [FLINK-23208] let late processing timers fired immediately

2021-08-10 Thread GitBox


pnowojski commented on pull request #16744:
URL: https://github.com/apache/flink/pull/16744#issuecomment-895800136


   Thanks for the investigation, and fix @Jiayi-Liao ! Merging :)






[GitHub] [flink] pnowojski merged pull request #16744: [FLINK-23208] let late processing timers fired immediately

2021-08-10 Thread GitBox


pnowojski merged pull request #16744:
URL: https://github.com/apache/flink/pull/16744


   






[jira] [Closed] (FLINK-23208) Late processing timers need to wait 1ms at least to be fired

2021-08-10 Thread Piotr Nowojski (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-23208?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Piotr Nowojski closed FLINK-23208.
--
Fix Version/s: 1.14.0
   Resolution: Fixed

merged commit 2d38ef4 into apache:master

A follow up ticket with further optimisations: FLINK-23690

> Late processing timers need to wait 1ms at least to be fired
> 
>
> Key: FLINK-23208
> URL: https://issues.apache.org/jira/browse/FLINK-23208
> Project: Flink
>  Issue Type: Improvement
>  Components: API / DataStream, Runtime / Task
>Affects Versions: 1.11.0, 1.11.3, 1.13.0, 1.14.0, 1.12.4
>Reporter: Jiayi Liao
>Assignee: Jiayi Liao
>Priority: Critical
>  Labels: critical, pull-request-available
> Fix For: 1.14.0
>
> Attachments: screenshot-1.png
>
>
> The problem is from the codes below:
> {code:java}
> public static long getProcessingTimeDelay(long processingTimestamp, long 
> currentTimestamp) {
>   // delay the firing of the timer by 1 ms to align the semantics with 
> watermark. A watermark
>   // T says we won't see elements in the future with a timestamp smaller 
> or equal to T.
>   // With processing time, we therefore need to delay firing the timer by 
> one ms.
>   return Math.max(processingTimestamp - currentTimestamp, 0) + 1;
> }
> {code}
> Assuming a Flink job creates 1 timer per millisecond, and is able to 
> consume 1 timer/ms. Here is what will happen: 
> * Timestamp1(1st ms): timer1 is registered and will be triggered on 
> Timestamp2. 
> * Timestamp2(2nd ms): timer2 is registered and timer1 is triggered
> * Timestamp3(3rd ms): timer3 is registered and timer1 is consumed, after 
> this, {{InternalTimerServiceImpl}} registers next timer, which is timer2, and 
> timer2 will be triggered on Timestamp4(wait 1ms at least)
> * Timestamp4(4th ms): timer4 is registered and timer2 is triggered
> * Timestamp5(5th ms): timer5 is registered and timer2 is consumed, after 
> this, {{InternalTimerServiceImpl}} registers next timer, which is timer3, and 
> timer3 will be triggered on Timestamp6(wait 1ms at least)
> As we can see here, the Flink job is capable of consuming 1 timer/ms, but 
> it actually consumes only 0.5 timers/ms. Another problem is that we 
> cannot observe the delay from the lag metrics of the source (Kafka). Instead, 
> what we can tell is that the moment of output is much later than expected. 
> I've added a metric in our internal version, and we can see the lag of the 
> timer triggering keeps increasing: 
>  !screenshot-1.png! 
> *In other words, we should never make a late processing timer wait 1 ms. I 
> think a simple change would be as below:*
> {code:java}
> return Math.max(processingTimestamp - currentTimestamp, -1) + 1;
> {code}
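The effect of the one-line fix can be checked in isolation. The following is a standalone sketch of the two formulas quoted above, not Flink's ProcessingTimeService itself (the method names are illustrative): a future timer still fires 1 ms past its target time under both formulas, but an already-late timer waits an extra millisecond before the fix and fires immediately after it.

```java
public class ProcessingTimeDelayDemo {
    /** Original formula: even an already-late timer waits at least 1 ms. */
    static long delayBefore(long processingTimestamp, long currentTimestamp) {
        return Math.max(processingTimestamp - currentTimestamp, 0) + 1;
    }

    /** Proposed formula: late timers fire immediately (delay 0). */
    static long delayAfter(long processingTimestamp, long currentTimestamp) {
        return Math.max(processingTimestamp - currentTimestamp, -1) + 1;
    }

    public static void main(String[] args) {
        // Future timer: both formulas delay 1 ms past the target time.
        System.out.println(delayBefore(105, 100)); // 6
        System.out.println(delayAfter(105, 100));  // 6
        // Late timer (target already passed): before = 1 ms extra wait,
        // after = fire now.
        System.out.println(delayBefore(100, 105)); // 1
        System.out.println(delayAfter(100, 105));  // 0
    }
}
```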





[jira] [Assigned] (FLINK-23693) Add principal creation possibilities to SecureTestEnvironment

2021-08-10 Thread Gyula Fora (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-23693?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gyula Fora reassigned FLINK-23693:
--

Assignee: Gyula Fora

> Add principal creation possibilities to SecureTestEnvironment
> -
>
> Key: FLINK-23693
> URL: https://issues.apache.org/jira/browse/FLINK-23693
> Project: Flink
>  Issue Type: Sub-task
>  Components: Tests
>Affects Versions: 1.14.0
>Reporter: Gabor Somogyi
>Assignee: Gyula Fora
>Priority: Major
>
> Kerberos authentication handler (external project) needs it.





[jira] [Assigned] (FLINK-23688) Make JaasModule.getAppConfigurationEntries public

2021-08-10 Thread Gyula Fora (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-23688?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gyula Fora reassigned FLINK-23688:
--

Assignee: Gyula Fora

> Make JaasModule.getAppConfigurationEntries public
> -
>
> Key: FLINK-23688
> URL: https://issues.apache.org/jira/browse/FLINK-23688
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / Hadoop Compatibility
>Affects Versions: 1.14.0
>Reporter: Gabor Somogyi
>Assignee: Gyula Fora
>Priority: Major
>  Labels: pull-request-available
>
> Kerberos authentication handler (external project) needs it.





[jira] [Commented] (FLINK-23688) Make JaasModule.getAppConfigurationEntries public

2021-08-10 Thread Gyula Fora (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-23688?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17396508#comment-17396508
 ] 

Gyula Fora commented on FLINK-23688:


A straightforward change to expose a valuable utility function for other uses.

> Make JaasModule.getAppConfigurationEntries public
> -
>
> Key: FLINK-23688
> URL: https://issues.apache.org/jira/browse/FLINK-23688
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / Hadoop Compatibility
>Affects Versions: 1.14.0
>Reporter: Gabor Somogyi
>Assignee: Gyula Fora
>Priority: Major
>  Labels: pull-request-available
>
> Kerberos authentication handler (external project) needs it.





[GitHub] [flink] zentol merged pull request #16722: [FLINK-23644][build] Resolve maven warnings

2021-08-10 Thread GitBox


zentol merged pull request #16722:
URL: https://github.com/apache/flink/pull/16722


   






[GitHub] [flink] gyfora commented on pull request #16758: [FLINK-23688][hadoop] Make JaasModule.getAppConfigurationEntries public

2021-08-10 Thread GitBox


gyfora commented on pull request #16758:
URL: https://github.com/apache/flink/pull/16758#issuecomment-895804027


   @flinkbot run azure






[jira] [Closed] (FLINK-23644) Resolve maven warnings for duplicate dependencies/plugins

2021-08-10 Thread Chesnay Schepler (Jira)


 [ https://issues.apache.org/jira/browse/FLINK-23644?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Chesnay Schepler closed FLINK-23644.

Resolution: Fixed

master: d759a5ad87f4b4a231340cbadc97e116d91b7cf3

> Resolve maven warnings for duplicate dependencies/plugins
> -
>
> Key: FLINK-23644
> URL: https://issues.apache.org/jira/browse/FLINK-23644
> Project: Flink
>  Issue Type: Improvement
>  Components: Build System
>Affects Versions: 1.14.0
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.14.0
>
>






[GitHub] [flink] zhuzhurk commented on a change in pull request #16688: [FLINK-22773][coordination] Optimize the construction of pipelined regions

2021-08-10 Thread GitBox


zhuzhurk commented on a change in pull request #16688:
URL: https://github.com/apache/flink/pull/16688#discussion_r685733608



##
File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/executiongraph/failover/flip1/SchedulingPipelinedRegionComputeUtil.java
##
@@ -0,0 +1,186 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License
+ */
+
+package org.apache.flink.runtime.executiongraph.failover.flip1;
+
+import org.apache.flink.runtime.jobgraph.IntermediateResultPartitionID;
+import org.apache.flink.runtime.scheduler.strategy.ConsumedPartitionGroup;
+import org.apache.flink.runtime.scheduler.strategy.ConsumerVertexGroup;
+import org.apache.flink.runtime.scheduler.strategy.ExecutionVertexID;
+import org.apache.flink.runtime.scheduler.strategy.SchedulingExecutionVertex;
+import org.apache.flink.runtime.scheduler.strategy.SchedulingPipelinedRegion;
+import org.apache.flink.runtime.scheduler.strategy.SchedulingResultPartition;
+
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashSet;
+import java.util.IdentityHashMap;
+import java.util.Iterator;
+import java.util.List;
+import java.util.Map;
+import java.util.NoSuchElementException;
+import java.util.Set;
+import java.util.function.Function;
+
+import static 
org.apache.flink.runtime.executiongraph.failover.flip1.PipelinedRegionComputeUtil.buildRawRegions;
+import static 
org.apache.flink.runtime.executiongraph.failover.flip1.PipelinedRegionComputeUtil.mergeRegions;
+import static 
org.apache.flink.runtime.executiongraph.failover.flip1.PipelinedRegionComputeUtil.uniqueRegions;
+import static org.apache.flink.util.Preconditions.checkState;
+
+/** Utils for computing {@link SchedulingPipelinedRegion}s. */
+public final class SchedulingPipelinedRegionComputeUtil {
+
+public static Set<Set<SchedulingExecutionVertex>> computePipelinedRegions(
+final Iterable<? extends SchedulingExecutionVertex> topologicallySortedVertices,
+final Function<ExecutionVertexID, ? extends SchedulingExecutionVertex> executionVertexRetriever,
+final Function<IntermediateResultPartitionID, ? extends SchedulingResultPartition> resultPartitionRetriever) {
+
+final Map<SchedulingExecutionVertex, Set<SchedulingExecutionVertex>> vertexToRegion =
+buildRawRegions(
+topologicallySortedVertices,
+(vertex) -> getReconnectableResults(vertex, resultPartitionRetriever));

Review comment:
   `(vertex)` -> `vertex`

##
File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/executiongraph/failover/flip1/SchedulingPipelinedRegionComputeUtil.java
##
@@ -0,0 +1,186 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License
+ */
+
+package org.apache.flink.runtime.executiongraph.failover.flip1;
+
+import org.apache.flink.runtime.jobgraph.IntermediateResultPartitionID;
+import org.apache.flink.runtime.scheduler.strategy.ConsumedPartitionGroup;
+import org.apache.flink.runtime.scheduler.strategy.ConsumerVertexGroup;
+import org.apache.flink.runtime.scheduler.strategy.ExecutionVertexID;
+import org.apache.flink.runtime.scheduler.strategy.SchedulingExecutionVertex;
+import org.apache.flink.runtime.scheduler.strategy.SchedulingPipelinedRegion;
+import org.apache.flink.runtime.scheduler.strategy.SchedulingResultPartition;
+
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashSet;
+import java.util.IdentityHashMap;
+import java.util.Iterator;
+import java.util.List;
+import java.util.Map;
+import java.util.NoSuchElementException;
+import java.util.Set;
+import java.util.function.Function;
+
+imp

[jira] [Created] (FLINK-23699) The code comment's reference is wrong for the function isUsingFixedMemoryPerSlot

2021-08-10 Thread Liu (Jira)
Liu created FLINK-23699:
---

 Summary: The code comment's reference is wrong for the function 
isUsingFixedMemoryPerSlot
 Key: FLINK-23699
 URL: https://issues.apache.org/jira/browse/FLINK-23699
 Project: Flink
  Issue Type: Improvement
Reporter: Liu


The code is as follows. USE_MANAGED_MEMORY should be FIX_PER_SLOT_MEMORY_SIZE.

/**
 * Gets whether the state backend is configured to use a fixed amount of memory shared between
 * all RocksDB instances (in all tasks and operators) of a slot. See {@link
 * RocksDBOptions#USE_MANAGED_MEMORY} for details.
 */
public boolean isUsingFixedMemoryPerSlot() {
    return fixedMemoryPerSlot != null;
}
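The fix described in this issue amounts to a one-word Javadoc change. A minimal self-contained sketch of the corrected comment is below; the class and field are reduced stand-ins for illustration (the real method lives in Flink's RocksDB state backend), with only the `{@link}` target changed as the issue requests.

```java
// Sketch of the FLINK-23699 fix: the Javadoc should reference
// RocksDBOptions#FIX_PER_SLOT_MEMORY_SIZE instead of USE_MANAGED_MEMORY.
// The surrounding class here is a reduced illustration, not Flink's code.
public class FixedMemoryDocSketch {

    private final Long fixedMemoryPerSlot; // null means "no fixed per-slot memory"

    public FixedMemoryDocSketch(Long fixedMemoryPerSlot) {
        this.fixedMemoryPerSlot = fixedMemoryPerSlot;
    }

    /**
     * Gets whether the state backend is configured to use a fixed amount of
     * memory shared between all RocksDB instances (in all tasks and operators)
     * of a slot. See {@link RocksDBOptions#FIX_PER_SLOT_MEMORY_SIZE} for details.
     */
    public boolean isUsingFixedMemoryPerSlot() {
        return fixedMemoryPerSlot != null;
    }

    public static void main(String[] args) {
        System.out.println(new FixedMemoryDocSketch(256L).isUsingFixedMemoryPerSlot());
    }
}
```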





[GitHub] [flink] zhuzhurk commented on a change in pull request #16688: [FLINK-22773][coordination] Optimize the construction of pipelined regions

2021-08-10 Thread GitBox


zhuzhurk commented on a change in pull request #16688:
URL: https://github.com/apache/flink/pull/16688#discussion_r685774213



##
File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/executiongraph/failover/flip1/SchedulingPipelinedRegionComputeUtil.java
##
@@ -0,0 +1,142 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License
+ */
+
+package org.apache.flink.runtime.executiongraph.failover.flip1;
+
+import org.apache.flink.runtime.jobgraph.IntermediateResultPartitionID;
+import org.apache.flink.runtime.scheduler.strategy.ConsumedPartitionGroup;
+import org.apache.flink.runtime.scheduler.strategy.ConsumerVertexGroup;
+import org.apache.flink.runtime.scheduler.strategy.ExecutionVertexID;
+import org.apache.flink.runtime.scheduler.strategy.SchedulingExecutionVertex;
+import org.apache.flink.runtime.scheduler.strategy.SchedulingPipelinedRegion;
+import org.apache.flink.runtime.scheduler.strategy.SchedulingResultPartition;
+
+import java.util.ArrayList;
+import java.util.HashSet;
+import java.util.IdentityHashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.function.Function;
+
+import static 
org.apache.flink.runtime.executiongraph.failover.flip1.PipelinedRegionComputeUtil.mergeRegions;
+import static 
org.apache.flink.runtime.executiongraph.failover.flip1.PipelinedRegionComputeUtil.mergeRegionsOnCycles;
+
+/** Utils for computing {@link SchedulingPipelinedRegion}s. */
+public final class SchedulingPipelinedRegionComputeUtil {
+
+public static Set<Set<SchedulingExecutionVertex>> computePipelinedRegions(
+final Iterable<? extends SchedulingExecutionVertex> topologicallySortedVertexes,
+final Function<ExecutionVertexID, ? extends SchedulingExecutionVertex> executionVertexRetriever,
+final Function<IntermediateResultPartitionID, ? extends SchedulingResultPartition> resultPartitionRetriever) {
+
+final Map<SchedulingExecutionVertex, Set<SchedulingExecutionVertex>> vertexToRegion =
+buildRawRegions(topologicallySortedVertexes, resultPartitionRetriever);
+
+return mergeRegionsOnCycles(
+vertexToRegion,
+(currentRegion, regionIndices) ->
+buildOutEdgesForRegion(
+currentRegion,
+regionIndices,
+vertexToRegion,
+executionVertexRetriever));
+}
+
+private static Map<SchedulingExecutionVertex, Set<SchedulingExecutionVertex>> buildRawRegions(
+final Iterable<? extends SchedulingExecutionVertex> topologicallySortedVertexes,
+final Function<IntermediateResultPartitionID, ? extends SchedulingResultPartition> resultPartitionRetriever) {
+
+final Map<SchedulingExecutionVertex, Set<SchedulingExecutionVertex>> vertexToRegion =
+new IdentityHashMap<>();
+
+// iterate all the vertices which are topologically sorted
+for (SchedulingExecutionVertex vertex : topologicallySortedVertexes) {
+Set<SchedulingExecutionVertex> currentRegion = new HashSet<>();
+currentRegion.add(vertex);
+vertexToRegion.put(vertex, currentRegion);
+
+for (ConsumedPartitionGroup consumedPartitionGroup :
+vertex.getConsumedPartitionGroups()) {
+for (IntermediateResultPartitionID consumedPartitionId : 
consumedPartitionGroup) {
+SchedulingResultPartition consumedPartition =
+
resultPartitionRetriever.apply(consumedPartitionId);
+// Similar to the BLOCKING ResultPartitionType, each 
vertex connected through
+// PIPELINED_APPROXIMATE is also considered as a single 
region. This attribute
+// is called  "reconnectable". Reconnectable will be 
removed after FLINK-19895,
+// see also {@link  ResultPartitionType#isReconnectable}
+if (!consumedPartition.getResultType().isReconnectable()) {
+final SchedulingExecutionVertex producerVertex =
+consumedPartition.getProducer();
+final Set<SchedulingExecutionVertex> producerRegion =
+vertexToRegion.get(producerVertex);
+
+// check if it is the same as the producer region, if 
so skip the merge
+// this check can significantly reduce compute 
complexity in All-to-All
+// PIPELINED edge case
+if (p
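The raw-region construction quoted above (and truncated here) can be illustrated with a standalone sketch: each vertex starts in its own region, and every non-reconnectable producer edge merges the producer's region with the consumer's. Plain integers stand in for `SchedulingExecutionVertex`, and all names are illustrative, not Flink's API; vertices are assumed to arrive in topological order, as in the review.

```java
import java.util.*;

// Hedged sketch of buildRawRegions: union regions along non-reconnectable
// edges, skipping the merge when producer and consumer already share a region
// (the complexity optimization discussed in the review).
public class RawRegionSketch {

    public static Map<Integer, Set<Integer>> buildRawRegions(
            int vertexCount, Map<Integer, List<Integer>> producersByConsumer) {
        Map<Integer, Set<Integer>> vertexToRegion = new HashMap<>();
        for (int v = 0; v < vertexCount; v++) { // topological order assumed
            Set<Integer> region = new HashSet<>();
            region.add(v);
            vertexToRegion.put(v, region);
            for (int producer : producersByConsumer.getOrDefault(v, Collections.emptyList())) {
                Set<Integer> producerRegion = vertexToRegion.get(producer);
                Set<Integer> currentRegion = vertexToRegion.get(v);
                if (producerRegion != currentRegion) { // skip same-region merges
                    // merge the smaller set into the larger one to bound cost
                    Set<Integer> merged = producerRegion.size() >= currentRegion.size()
                            ? producerRegion : currentRegion;
                    Set<Integer> other = merged == producerRegion ? currentRegion : producerRegion;
                    merged.addAll(other);
                    for (int moved : other) {
                        vertexToRegion.put(moved, merged);
                    }
                }
            }
        }
        return vertexToRegion;
    }

    public static void main(String[] args) {
        // vertex 1 consumes a pipelined partition from vertex 0; vertex 2 is independent
        Map<Integer, List<Integer>> producers = new HashMap<>();
        producers.put(1, Arrays.asList(0));
        Map<Integer, Set<Integer>> regions = buildRawRegions(3, producers);
        System.out.println(regions.get(0) == regions.get(1)); // true: same region
        System.out.println(regions.get(2).size());            // 1
    }
}
```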

[GitHub] [flink] gaborgsomogyi opened a new pull request #16764: [FLINK-23693][tests] Add principal creation possibilities to SecureTestEnvironment

2021-08-10 Thread GitBox


gaborgsomogyi opened a new pull request #16764:
URL: https://github.com/apache/flink/pull/16764


   ## What is the purpose of the change
   
There are use-cases where this is needed. A good example is the Kerberos authentication handler, which is intended to be added as an external project in FLINK-23274.
   
   ## Brief change log
   
   * Fixed some forgotten reference bugs
   * Minor re-ordering to see easily what's de-referenced and what's not
   * Added `createPrincipal`
   
   ## Verifying this change
   
   * Existing unit tests
   * New unit tests with local build in the mentioned Kerberos authentication 
handler (intended to be published soon)
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): no
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: no
 - The serializers: no
 - The runtime per-record code paths (performance sensitive): no
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: no
 - The S3 file system connector: no
   
   ## Documentation
   
 - Does this pull request introduce a new feature? no
 - If yes, how is the feature documented? not applicable
   






[GitHub] [flink] gaborgsomogyi commented on pull request #16764: [FLINK-23693][tests] Add principal creation possibilities to SecureTestEnvironment

2021-08-10 Thread GitBox


gaborgsomogyi commented on pull request #16764:
URL: https://github.com/apache/flink/pull/16764#issuecomment-895811477


   cc @gyfora 






[jira] [Updated] (FLINK-23693) Add principal creation possibilities to SecureTestEnvironment

2021-08-10 Thread ASF GitHub Bot (Jira)


 [ https://issues.apache.org/jira/browse/FLINK-23693?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

ASF GitHub Bot updated FLINK-23693:
---
Labels: pull-request-available  (was: )

> Add principal creation possibilities to SecureTestEnvironment
> -
>
> Key: FLINK-23693
> URL: https://issues.apache.org/jira/browse/FLINK-23693
> Project: Flink
>  Issue Type: Sub-task
>  Components: Tests
>Affects Versions: 1.14.0
>Reporter: Gabor Somogyi
>Assignee: Gyula Fora
>Priority: Major
>  Labels: pull-request-available
>
> Kerberos authentication handler (external project) needs it.





[GitHub] [flink] zhuzhurk commented on a change in pull request #16688: [FLINK-22773][coordination] Optimize the construction of pipelined regions

2021-08-10 Thread GitBox


zhuzhurk commented on a change in pull request #16688:
URL: https://github.com/apache/flink/pull/16688#discussion_r685758089



##
File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/executiongraph/failover/flip1/SchedulingPipelinedRegionComputeUtil.java
##
@@ -0,0 +1,186 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License
+ */
+
+package org.apache.flink.runtime.executiongraph.failover.flip1;
+
+import org.apache.flink.runtime.jobgraph.IntermediateResultPartitionID;
+import org.apache.flink.runtime.scheduler.strategy.ConsumedPartitionGroup;
+import org.apache.flink.runtime.scheduler.strategy.ConsumerVertexGroup;
+import org.apache.flink.runtime.scheduler.strategy.ExecutionVertexID;
+import org.apache.flink.runtime.scheduler.strategy.SchedulingExecutionVertex;
+import org.apache.flink.runtime.scheduler.strategy.SchedulingPipelinedRegion;
+import org.apache.flink.runtime.scheduler.strategy.SchedulingResultPartition;
+
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashSet;
+import java.util.IdentityHashMap;
+import java.util.Iterator;
+import java.util.List;
+import java.util.Map;
+import java.util.NoSuchElementException;
+import java.util.Set;
+import java.util.function.Function;
+
+import static 
org.apache.flink.runtime.executiongraph.failover.flip1.PipelinedRegionComputeUtil.buildRawRegions;
+import static 
org.apache.flink.runtime.executiongraph.failover.flip1.PipelinedRegionComputeUtil.mergeRegions;
+import static 
org.apache.flink.runtime.executiongraph.failover.flip1.PipelinedRegionComputeUtil.uniqueRegions;
+import static org.apache.flink.util.Preconditions.checkState;
+
+/** Utils for computing {@link SchedulingPipelinedRegion}s. */
+public final class SchedulingPipelinedRegionComputeUtil {
+
+public static Set<Set<SchedulingExecutionVertex>> computePipelinedRegions(
+final Iterable<? extends SchedulingExecutionVertex> topologicallySortedVertices,
+final Function<ExecutionVertexID, ? extends SchedulingExecutionVertex> executionVertexRetriever,
+final Function<IntermediateResultPartitionID, ? extends SchedulingResultPartition> resultPartitionRetriever) {
+
+final Map<SchedulingExecutionVertex, Set<SchedulingExecutionVertex>> vertexToRegion =
+buildRawRegions(
+topologicallySortedVertices,
+(vertex) -> getReconnectableResults(vertex, resultPartitionRetriever));
+
+return mergeRegionsOnCycles(vertexToRegion, executionVertexRetriever);
+}
+
+/**
+ * Merge the regions based on <a
+ * href="https://en.wikipedia.org/wiki/Tarjan%27s_strongly_connected_components_algorithm">
+ * Tarjan's strongly connected components algorithm</a>. For more details please see <a
+ * href="https://issues.apache.org/jira/browse/FLINK-17330">FLINK-17330</a>.
+ */
+private static Set<Set<SchedulingExecutionVertex>> mergeRegionsOnCycles(
+final Map<SchedulingExecutionVertex, Set<SchedulingExecutionVertex>> vertexToRegion,
+final Function<ExecutionVertexID, ? extends SchedulingExecutionVertex> executionVertexRetriever) {
+
+final List<Set<SchedulingExecutionVertex>> regionList =
+new ArrayList<>(uniqueRegions(vertexToRegion));
+final List<List<Integer>> outEdges =
+buildOutEdgesDesc(vertexToRegion, regionList, executionVertexRetriever);
+final Set<Set<Integer>> sccs =
+StronglyConnectedComponentsComputeUtils.computeStronglyConnectedComponents(
+outEdges.size(), outEdges);
+
+final Set<Set<SchedulingExecutionVertex>> mergedRegions =
+Collections.newSetFromMap(new IdentityHashMap<>());
+for (Set<Integer> scc : sccs) {
+checkState(scc.size() > 0);
+
+Set<SchedulingExecutionVertex> mergedRegion = new HashSet<>();
+for (int regionIndex : scc) {
+mergedRegion =
+mergeRegions(mergedRegion, 
regionList.get(regionIndex), vertexToRegion);
+}
+mergedRegions.add(mergedRegion);
+}
+
+return mergedRegions;
+}
+
+private static List<List<Integer>> buildOutEdgesDesc(
+final Map<SchedulingExecutionVertex, Set<SchedulingExecutionVertex>> vertexToRegion,
+final List<Set<SchedulingExecutionVertex>> regionList,
+final Function<ExecutionVertexID, ? extends SchedulingExecutionVertex> executionVertexRetriever) {
+
+final Map<Set<SchedulingExecutionVertex>, Integer> regionIndices = new IdentityHashMap<>();
+for (int i = 0; i < regionList.size(); i++) {
+regionIndices.put(regionList.get(i), i);
+}
+
+final List<List<Integer>> outEdges = new ArrayList<>(regionList.size());
+for (Set currentRegio
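The Tarjan-based cycle merge quoted in this review can be sketched in isolation. The class below is a minimal, self-contained implementation of Tarjan's strongly connected components algorithm over regions identified by integer indices, mirroring the `outEdges`/SCC flow above; the class and method names are illustrative, not Flink's `StronglyConnectedComponentsComputeUtils` API.

```java
import java.util.*;

// Minimal Tarjan SCC sketch: each SCC of the region graph corresponds to a
// set of regions that mergeRegionsOnCycles would collapse into one.
public class TarjanSccSketch {

    private final List<List<Integer>> outEdges;
    private final int[] index, lowLink;
    private final boolean[] onStack;
    private final Deque<Integer> stack = new ArrayDeque<>();
    private final List<Set<Integer>> sccs = new ArrayList<>();
    private int counter = 0;

    public TarjanSccSketch(List<List<Integer>> outEdges) {
        int n = outEdges.size();
        this.outEdges = outEdges;
        this.index = new int[n];
        this.lowLink = new int[n];
        this.onStack = new boolean[n];
        Arrays.fill(index, -1); // -1 marks "not yet visited"
    }

    public List<Set<Integer>> compute() {
        for (int v = 0; v < outEdges.size(); v++) {
            if (index[v] == -1) {
                strongConnect(v);
            }
        }
        return sccs;
    }

    private void strongConnect(int v) {
        index[v] = lowLink[v] = counter++;
        stack.push(v);
        onStack[v] = true;
        for (int w : outEdges.get(v)) {
            if (index[w] == -1) {
                strongConnect(w);
                lowLink[v] = Math.min(lowLink[v], lowLink[w]);
            } else if (onStack[w]) {
                lowLink[v] = Math.min(lowLink[v], index[w]);
            }
        }
        if (lowLink[v] == index[v]) { // v is the root of an SCC
            Set<Integer> scc = new HashSet<>();
            int w;
            do {
                w = stack.pop();
                onStack[w] = false;
                scc.add(w);
            } while (w != v);
            sccs.add(scc);
        }
    }

    public static void main(String[] args) {
        // regions 0 -> 1 -> 2 -> 0 form a cycle; region 3 is isolated
        List<List<Integer>> edges = Arrays.asList(
                Arrays.asList(1), Arrays.asList(2), Arrays.asList(0), Collections.emptyList());
        System.out.println(new TarjanSccSketch(edges).compute().size()); // prints 2
    }
}
```

The cycle {0, 1, 2} collapses into a single SCC, so a scheduler merging regions per SCC would end up with two pipelined regions here.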

[GitHub] [flink] zhuzhurk commented on a change in pull request #16688: [FLINK-22773][coordination] Optimize the construction of pipelined regions

2021-08-10 Thread GitBox


zhuzhurk commented on a change in pull request #16688:
URL: https://github.com/apache/flink/pull/16688#discussion_r685778130



##
File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/executiongraph/failover/flip1/SchedulingPipelinedRegionComputeUtil.java
##
@@ -0,0 +1,186 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License
+ */
+
+package org.apache.flink.runtime.executiongraph.failover.flip1;
+
+import org.apache.flink.runtime.jobgraph.IntermediateResultPartitionID;
+import org.apache.flink.runtime.scheduler.strategy.ConsumedPartitionGroup;
+import org.apache.flink.runtime.scheduler.strategy.ConsumerVertexGroup;
+import org.apache.flink.runtime.scheduler.strategy.ExecutionVertexID;
+import org.apache.flink.runtime.scheduler.strategy.SchedulingExecutionVertex;
+import org.apache.flink.runtime.scheduler.strategy.SchedulingPipelinedRegion;
+import org.apache.flink.runtime.scheduler.strategy.SchedulingResultPartition;
+
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashSet;
+import java.util.IdentityHashMap;
+import java.util.Iterator;
+import java.util.List;
+import java.util.Map;
+import java.util.NoSuchElementException;
+import java.util.Set;
+import java.util.function.Function;
+
+import static 
org.apache.flink.runtime.executiongraph.failover.flip1.PipelinedRegionComputeUtil.buildRawRegions;
+import static 
org.apache.flink.runtime.executiongraph.failover.flip1.PipelinedRegionComputeUtil.mergeRegions;
+import static 
org.apache.flink.runtime.executiongraph.failover.flip1.PipelinedRegionComputeUtil.uniqueRegions;
+import static org.apache.flink.util.Preconditions.checkState;
+
+/** Utils for computing {@link SchedulingPipelinedRegion}s. */
+public final class SchedulingPipelinedRegionComputeUtil {
+
+public static Set<Set<SchedulingExecutionVertex>> computePipelinedRegions(
+final Iterable<? extends SchedulingExecutionVertex> topologicallySortedVertices,
+final Function<ExecutionVertexID, ? extends SchedulingExecutionVertex> executionVertexRetriever,
+final Function<IntermediateResultPartitionID, ? extends SchedulingResultPartition> resultPartitionRetriever) {
+
+final Map<SchedulingExecutionVertex, Set<SchedulingExecutionVertex>> vertexToRegion =
+buildRawRegions(
+topologicallySortedVertices,
+(vertex) -> getReconnectableResults(vertex, resultPartitionRetriever));
+
+return mergeRegionsOnCycles(vertexToRegion, executionVertexRetriever);
+}
+
+/**
+ * Merge the regions based on <a
+ * href="https://en.wikipedia.org/wiki/Tarjan%27s_strongly_connected_components_algorithm">
+ * Tarjan's strongly connected components algorithm</a>. For more details please see <a
+ * href="https://issues.apache.org/jira/browse/FLINK-17330">FLINK-17330</a>.
+ */
+private static Set<Set<SchedulingExecutionVertex>> mergeRegionsOnCycles(
+final Map<SchedulingExecutionVertex, Set<SchedulingExecutionVertex>> vertexToRegion,
+final Function<ExecutionVertexID, ? extends SchedulingExecutionVertex> executionVertexRetriever) {
+
+final List<Set<SchedulingExecutionVertex>> regionList =
+new ArrayList<>(uniqueRegions(vertexToRegion));
+final List<List<Integer>> outEdges =
+buildOutEdgesDesc(vertexToRegion, regionList, executionVertexRetriever);
+final Set<Set<Integer>> sccs =
+StronglyConnectedComponentsComputeUtils.computeStronglyConnectedComponents(
+outEdges.size(), outEdges);
+
+final Set<Set<SchedulingExecutionVertex>> mergedRegions =
+Collections.newSetFromMap(new IdentityHashMap<>());
+for (Set<Integer> scc : sccs) {
+checkState(scc.size() > 0);
+
+Set<SchedulingExecutionVertex> mergedRegion = new HashSet<>();
+for (int regionIndex : scc) {
+mergedRegion =
+mergeRegions(mergedRegion, 
regionList.get(regionIndex), vertexToRegion);
+}
+mergedRegions.add(mergedRegion);
+}
+
+return mergedRegions;
+}
+
+private static List<List<Integer>> buildOutEdgesDesc(
+final Map<SchedulingExecutionVertex, Set<SchedulingExecutionVertex>> vertexToRegion,
+final List<Set<SchedulingExecutionVertex>> regionList,
+final Function<ExecutionVertexID, ? extends SchedulingExecutionVertex> executionVertexRetriever) {
+
+final Map<Set<SchedulingExecutionVertex>, Integer> regionIndices = new IdentityHashMap<>();
+for (int i = 0; i < regionList.size(); i++) {
+regionIndices.put(regionList.get(i), i);
+}
+
+final List<List<Integer>> outEdges = new ArrayList<>(regionList.size());
+for (Set currentRegio

[GitHub] [flink] flinkbot commented on pull request #16764: [FLINK-23693][tests] Add principal creation possibilities to SecureTestEnvironment

2021-08-10 Thread GitBox


flinkbot commented on pull request #16764:
URL: https://github.com/apache/flink/pull/16764#issuecomment-895814279


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 2bce8e962ba637e992e25b058b47dbac5efeb52b (Tue Aug 10 
07:56:42 UTC 2021)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into to Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied
according to the order of the review items. For consensus, approval by a Flink
committer or PMC member is required.

Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   






[GitHub] [flink] zentol commented on a change in pull request #16741: [FLINK-23662][rpc][akka] Port all Scala code to Java

2021-08-10 Thread GitBox


zentol commented on a change in pull request #16741:
URL: https://github.com/apache/flink/pull/16741#discussion_r685782751



##
File path: 
flink-rpc/flink-rpc-akka/src/main/java/org/apache/flink/runtime/rpc/akka/AkkaUtils.java
##
@@ -0,0 +1,599 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.rpc.akka;
+
+import org.apache.flink.annotation.VisibleForTesting;
+import org.apache.flink.configuration.AkkaOptions;
+import org.apache.flink.configuration.Configuration;
+import org.apache.flink.configuration.SecurityOptions;
+import org.apache.flink.runtime.concurrent.akka.AkkaFutureUtils;
+import org.apache.flink.runtime.rpc.RpcSystem;
+import org.apache.flink.util.NetUtils;
+import org.apache.flink.util.TimeUtils;
+import org.apache.flink.util.function.FunctionUtils;
+
+import akka.actor.ActorRef;
+import akka.actor.ActorSystem;
+import akka.actor.Address;
+import akka.actor.AddressFromURIString;
+import com.typesafe.config.Config;
+import com.typesafe.config.ConfigFactory;
+import org.jboss.netty.logging.InternalLoggerFactory;
+import org.jboss.netty.logging.Slf4JLoggerFactory;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import javax.annotation.Nullable;
+
+import java.io.PrintWriter;
+import java.io.StringWriter;
+import java.net.InetSocketAddress;
+import java.net.MalformedURLException;
+import java.time.Duration;
+import java.util.Arrays;
+import java.util.concurrent.CompletableFuture;
+import java.util.stream.Collectors;
+
+/**
+ * This class contains utility functions for akka. It contains methods to 
start an actor system with
+ * a given akka configuration. Furthermore, the akka configuration used for 
starting the different
+ * actor systems resides in this class.
+ */
+class AkkaUtils {
+private static final Logger LOG = LoggerFactory.getLogger(AkkaUtils.class);
+
+private static final String FLINK_ACTOR_SYSTEM_NAME = "flink";
+
+public static String getFlinkActorSystemName() {
+return FLINK_ACTOR_SYSTEM_NAME;
+}
+
+/**
+ * Gets the basic Akka config which is shared by remote and local actor 
systems.
+ *
+ * @param configuration instance which contains the user specified values 
for the configuration
+ * @return Flink's basic Akka config
+ */
+private static Config getBasicAkkaConfig(Configuration configuration) {
+final int akkaThroughput = 
configuration.getInteger(AkkaOptions.DISPATCHER_THROUGHPUT);
+final String jvmExitOnFatalError =
+
booleanToString(configuration.getBoolean(AkkaOptions.JVM_EXIT_ON_FATAL_ERROR));
+final String logLifecycleEvents =
+
booleanToString(configuration.getBoolean(AkkaOptions.LOG_LIFECYCLE_EVENTS));
+final String supervisorStrategy = 
EscalatingSupervisorStrategy.class.getCanonicalName();
+
+return new AkkaConfigBuilder()
+.add("akka {")
+.add("  daemonic = off")
+.add("  loggers = [\"akka.event.slf4j.Slf4jLogger\"]")
+.add("  logging-filter = 
\"akka.event.slf4j.Slf4jLoggingFilter\"")
+.add("  log-config-on-start = off")
+.add("  logger-startup-timeout = 30s")
+.add("  loglevel = " + getLogLevel())
+.add("  stdout-loglevel = OFF")
+.add("  log-dead-letters = " + logLifecycleEvents)
+.add("  log-dead-letters-during-shutdown = " + 
logLifecycleEvents)
+.add("  jvm-exit-on-fatal-error = " + jvmExitOnFatalError)
+.add("  serialize-messages = off")
+.add("  actor {")
+.add("guardian-supervisor-strategy = " + 
supervisorStrategy)
+.add("warn-about-java-serializer-usage = off")
+.add("allow-java-serialization = on")
+.add("default-dispatcher {")
+.add("  throughput = " + akkaThroughput)
+.add("}")
+.add("supervisor-dispatcher {")
+.add("  type = Dispatcher")
+.add("  executor = \"thread-pool-executor\"")
+.add("  thread-pool-e
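The `getBasicAkkaConfig` excerpt above assembles the Akka configuration as a chain of `.add("...")` calls. The `AkkaConfigBuilder` itself is not shown in this excerpt, but it presumably just joins the added lines with newlines before handing the result to Typesafe Config. A JDK-only stand-in of that fluent line-joining pattern (names illustrative, not Flink's):

```java
import java.util.StringJoiner;

// Hedged sketch of a line-oriented config builder: collects lines via a
// fluent add(...) chain and joins them with the platform line separator.
public class ConfigLinesSketch {

    private final StringJoiner lines = new StringJoiner(System.lineSeparator());

    public ConfigLinesSketch add(String line) {
        lines.add(line);
        return this; // fluent, mirroring the .add(...).add(...) chain above
    }

    public String build() {
        return lines.toString();
    }

    public static void main(String[] args) {
        String config = new ConfigLinesSketch()
                .add("akka {")
                .add("  daemonic = off")
                .add("}")
                .build();
        System.out.println(config.split(System.lineSeparator()).length); // 3 lines
    }
}
```

In the real code the joined string would then be parsed, e.g. by Typesafe Config's `ConfigFactory.parseString`, into the `Config` object the method returns.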

[GitHub] [flink] zhuzhurk commented on a change in pull request #16688: [FLINK-22773][coordination] Optimize the construction of pipelined regions

2021-08-10 Thread GitBox


zhuzhurk commented on a change in pull request #16688:
URL: https://github.com/apache/flink/pull/16688#discussion_r685785327



##
File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/executiongraph/failover/flip1/SchedulingPipelinedRegionComputeUtil.java
##
@@ -0,0 +1,142 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License
+ */
+
+package org.apache.flink.runtime.executiongraph.failover.flip1;
+
+import org.apache.flink.runtime.jobgraph.IntermediateResultPartitionID;
+import org.apache.flink.runtime.scheduler.strategy.ConsumedPartitionGroup;
+import org.apache.flink.runtime.scheduler.strategy.ConsumerVertexGroup;
+import org.apache.flink.runtime.scheduler.strategy.ExecutionVertexID;
+import org.apache.flink.runtime.scheduler.strategy.SchedulingExecutionVertex;
+import org.apache.flink.runtime.scheduler.strategy.SchedulingPipelinedRegion;
+import org.apache.flink.runtime.scheduler.strategy.SchedulingResultPartition;
+
+import java.util.ArrayList;
+import java.util.HashSet;
+import java.util.IdentityHashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.function.Function;
+
+import static org.apache.flink.runtime.executiongraph.failover.flip1.PipelinedRegionComputeUtil.mergeRegions;
+import static org.apache.flink.runtime.executiongraph.failover.flip1.PipelinedRegionComputeUtil.mergeRegionsOnCycles;
+
+/** Utils for computing {@link SchedulingPipelinedRegion}s. */
+public final class SchedulingPipelinedRegionComputeUtil {
+
+    public static Set<Set<SchedulingExecutionVertex>> computePipelinedRegions(
+            final Iterable<? extends SchedulingExecutionVertex> topologicallySortedVertexes,
+            final Function<ExecutionVertexID, ? extends SchedulingExecutionVertex>
+                    executionVertexRetriever,
+            final Function<IntermediateResultPartitionID, ? extends SchedulingResultPartition>
+                    resultPartitionRetriever) {
+
+        final Map<SchedulingExecutionVertex, Set<SchedulingExecutionVertex>> vertexToRegion =
+                buildRawRegions(topologicallySortedVertexes, resultPartitionRetriever);
+
+        return mergeRegionsOnCycles(
+                vertexToRegion,
+                (currentRegion, regionIndices) ->
+                        buildOutEdgesForRegion(
+                                currentRegion,
+                                regionIndices,
+                                vertexToRegion,
+                                executionVertexRetriever));
+    }
+
+    private static Map<SchedulingExecutionVertex, Set<SchedulingExecutionVertex>> buildRawRegions(
+            final Iterable<? extends SchedulingExecutionVertex> topologicallySortedVertexes,
+            final Function<IntermediateResultPartitionID, ? extends SchedulingResultPartition>
+                    resultPartitionRetriever) {
+
+        final Map<SchedulingExecutionVertex, Set<SchedulingExecutionVertex>> vertexToRegion =
+                new IdentityHashMap<>();
+
+        // iterate all the vertices which are topologically sorted
+        for (SchedulingExecutionVertex vertex : topologicallySortedVertexes) {
+            Set<SchedulingExecutionVertex> currentRegion = new HashSet<>();
+            currentRegion.add(vertex);
+            vertexToRegion.put(vertex, currentRegion);
+
+            for (ConsumedPartitionGroup consumedPartitionGroup :
+                    vertex.getConsumedPartitionGroups()) {
+                for (IntermediateResultPartitionID consumedPartitionId : consumedPartitionGroup) {
+                    SchedulingResultPartition consumedPartition =
+                            resultPartitionRetriever.apply(consumedPartitionId);
+                    // Similar to the BLOCKING ResultPartitionType, each vertex connected through
+                    // PIPELINED_APPROXIMATE is also considered as a single region. This attribute
+                    // is called "reconnectable". Reconnectable will be removed after FLINK-19895,
+                    // see also {@link ResultPartitionType#isReconnectable}
+                    if (!consumedPartition.getResultType().isReconnectable()) {
+                        final SchedulingExecutionVertex producerVertex =
+                                consumedPartition.getProducer();
+                        final Set<SchedulingExecutionVertex> producerRegion =
+                                vertexToRegion.get(producerVertex);
+
+                        // check if it is the same as the producer region, if so skip the merge
+                        // this check can significantly reduce compute complexity in All-to-All
+                        // PIPELINED edge case
+                        if (p
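The region-building loop in the diff (cut off at the merge step) can be sketched with plain collections. This is a simplified, hypothetical stand-in — string vertex IDs and an `Edge` record instead of Flink's `SchedulingExecutionVertex`/`SchedulingResultPartition` — showing the idea: each vertex starts in its own region, is merged into its producer's region when the connecting edge is pipelined-like (non-reconnectable), and the merge is skipped when both already share a region.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Simplified, self-contained sketch of the region building described above.
// Names and types are stand-ins for illustration only.
public class RegionSketch {

    /** An input edge: the producing vertex and whether the edge is "reconnectable". */
    record Edge(String producer, boolean reconnectable) {}

    static Map<String, Set<String>> buildRegions(
            List<String> topologicallySortedVertices, Map<String, List<Edge>> inputs) {
        Map<String, Set<String>> vertexToRegion = new HashMap<>();
        for (String vertex : topologicallySortedVertices) {
            // every vertex starts out in its own single-vertex region
            Set<String> currentRegion = new HashSet<>();
            currentRegion.add(vertex);
            vertexToRegion.put(vertex, currentRegion);

            for (Edge edge : inputs.getOrDefault(vertex, List.of())) {
                // only pipelined-like (non-reconnectable) edges force a merge
                if (!edge.reconnectable()) {
                    Set<String> producerRegion = vertexToRegion.get(edge.producer());
                    // skip the merge when consumer and producer already share a
                    // region -- the cheap check discussed in the review comment
                    if (currentRegion != producerRegion) {
                        producerRegion.addAll(currentRegion);
                        for (String moved : currentRegion) {
                            vertexToRegion.put(moved, producerRegion);
                        }
                        currentRegion = producerRegion;
                    }
                }
            }
        }
        return vertexToRegion;
    }

    public static void main(String[] args) {
        Map<String, List<Edge>> inputs =
                Map.of(
                        "B", List.of(new Edge("A", false)), // pipelined: merge with A
                        "C", List.of(new Edge("B", true))); // blocking-like: own region
        Map<String, Set<String>> regions = buildRegions(List.of("A", "B", "C"), inputs);
        System.out.println(regions.get("A").equals(regions.get("B"))); // true
        System.out.println(regions.get("C")); // [C]
    }
}
```

Because producers appear before consumers in the topological order, a single pass suffices; the identity comparison of region sets is what keeps All-to-All pipelined graphs cheap.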

[GitHub] [flink] flinkbot edited a comment on pull request #16700: [FLINK-22606][hive] Simplify the usage of SessionState

2021-08-10 Thread GitBox


flinkbot edited a comment on pull request #16700:
URL: https://github.com/apache/flink/pull/16700#issuecomment-892453171


   
   ## CI report:
   
   * bca1eec4495bb331dc32cb6ff219d62524ee3cb9 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=21803)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #16758: [FLINK-23688][hadoop] Make JaasModule.getAppConfigurationEntries public

2021-08-10 Thread GitBox


flinkbot edited a comment on pull request #16758:
URL: https://github.com/apache/flink/pull/16758#issuecomment-895165395


   
   ## CI report:
   
   * c09fd62d474672f10aa5cf5ea9d1b279896d984a Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=21788)
 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=21774)
 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=21812)
 
   
   






[GitHub] [flink] flinkbot edited a comment on pull request #16762: [FLINK-23408] Waiting for final checkpoint completed before finishing a task

2021-08-10 Thread GitBox


flinkbot edited a comment on pull request #16762:
URL: https://github.com/apache/flink/pull/16762#issuecomment-895436730


   
   ## CI report:
   
   * 8f94b06837560f488b704315887d7d9c4c463807 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=21787)
 
   * 2daa6ac1bc5ee54ab9fddc85e9dbe454e0ae009a Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=21809)
 
   
   






[jira] [Assigned] (FLINK-23671) Failed to inference type in correlate

2021-08-10 Thread Timo Walther (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-23671?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timo Walther reassigned FLINK-23671:


Assignee: Timo Walther

> Failed to inference type in correlate 
> --
>
> Key: FLINK-23671
> URL: https://issues.apache.org/jira/browse/FLINK-23671
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.13.2
>Reporter: Shengkai Fang
>Assignee: Timo Walther
>Priority: Major
> Attachments: screenshot-1.png
>
>
> Please also turn off assertions by running the query *without* the JVM
> parameter -ea.
>  !screenshot-1.png! 
> {code:java}
> CREATE FUNCTION func111 AS 
> 'org.apache.flink.table.client.gateway.utils.CPDetailOriginMatchV2UDF';
> CREATE TABLE side(
>   `id2` VARCHAR,
>   PRIMARY KEY (`id2`) NOT ENFORCED
> ) WITH (
>   'connector' = 'values'
> );
> CREATE TABLE main(
>   `id` VARCHAR,
>   `proctime` as proctime()
> ) WITH (
>   'connector' = 'datagen',
>   'number-of-rows' = '10'
> );
> CREATE TABLE blackhole(
>   `id` VARCHAR
> ) WITH (
>   'connector' = 'blackhole'
> );
> INSERT INTO blackhole
> SELECT `id`
> FROM main
> LEFT JOIN side FOR SYSTEM_TIME AS OF main.`proctime` ON main.`id` = side.`id2`
> INNER join lateral table(func111(side.`id2`)) as T(`is_match`, 
> `match_bizline`, `match_page_id`, `source_type`) ON 1 = 1;
> {code}
> The implementation of the udf is as follow 
> {code:java}
> package org.apache.flink.table.client.gateway.utils;
> import org.apache.flink.table.api.DataTypes;
> import org.apache.flink.table.catalog.DataTypeFactory;
> import org.apache.flink.table.functions.TableFunction;
> import org.apache.flink.table.types.DataType;
> import org.apache.flink.table.types.inference.TypeInference;
> import org.apache.flink.types.Row;
> import java.util.Optional;
> public class CPDetailOriginMatchV2UDF extends TableFunction<Row> {
> public void eval(String original) {
> collect(null);
> }
> // is_matched, match_bizline, match_page_id, scene
> @Override
> public TypeInference getTypeInference(DataTypeFactory typeFactory) {
> return TypeInference.newBuilder()
> .outputTypeStrategy(
> callContext -> {
> DataType[] array = new DataType[4];
> array[0] = DataTypes.BOOLEAN();
> array[1] = DataTypes.STRING();
> // page_id is a Long; can BIGINT support it?
> array[2] = DataTypes.BIGINT();
> array[3] = DataTypes.STRING();
> return Optional.of(DataTypes.ROW(array));
> })
> .build();
> }
> }
> {code}
> The exception stack as follows.
> {code:java}
> org.apache.flink.table.planner.codegen.CodeGenException: Mismatch of 
> function's argument data type 'STRING NOT NULL' and actual argument type 
> 'STRING'.
>   at 
> org.apache.flink.table.planner.codegen.calls.BridgingFunctionGenUtil$$anonfun$verifyArgumentTypes$1.apply(BridgingFunctionGenUtil.scala:323)
>   at 
> org.apache.flink.table.planner.codegen.calls.BridgingFunctionGenUtil$$anonfun$verifyArgumentTypes$1.apply(BridgingFunctionGenUtil.scala:320)
>   at 
> scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
>   at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
>   at 
> org.apache.flink.table.planner.codegen.calls.BridgingFunctionGenUtil$.verifyArgumentTypes(BridgingFunctionGenUtil.scala:320)
>   at 
> org.apache.flink.table.planner.codegen.calls.BridgingFunctionGenUtil$.generateFunctionAwareCallWithDataType(BridgingFunctionGenUtil.scala:95)
>   at 
> org.apache.flink.table.planner.codegen.calls.BridgingFunctionGenUtil$.generateFunctionAwareCall(BridgingFunctionGenUtil.scala:65)
>   at 
> org.apache.flink.table.planner.codegen.calls.BridgingSqlFunctionCallGen.generate(BridgingSqlFunctionCallGen.scala:73)
>   at 
> org.apache.flink.table.planner.codegen.ExprCodeGenerator.generateCallExpression(ExprCodeGenerator.scala:861)
>   at 
> org.apache.flink.table.planner.codegen.ExprCodeGenerator.visitCall(ExprCodeGenerator.scala:537)
>   at 
> org.apache.flink.table.planner.codegen.ExprCodeGenerator.visitCall(ExprCodeGenerator.scala:57)
>   at org.apache.calcite.rex.RexCall.accept(RexCall.java:174)
>   at 
> org.apache.flink.table.planner.codegen.ExprCodeGenerator.generateExpression(ExprCodeGenerator.scala:157)
>   at 
> org.apache.flink.table.planner.codegen.CorrelateCodeGenerator$.generateOperator(CorrelateCodeGenerator.scala:127)
>   at 
> org.apache.flink.table.planner.codegen.CorrelateCodeGenerator$.generateCorrelateTransformation(CorrelateCodeGenerator.scala:75)
>  
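The failure reported above is a plain type-equality check that includes nullability. A toy model of that comparison — `DataType` here is a stand-in record, not Flink's `org.apache.flink.table.types.DataType` — shows why a declared `STRING NOT NULL` argument fails against the nullable `STRING` produced at the call site:

```java
// Toy model of the argument-type check that fails in the stack trace above.
public class NullabilityCheckDemo {

    record DataType(String name, boolean nullable) {
        @Override
        public String toString() {
            return nullable ? name : name + " NOT NULL";
        }
    }

    // Mirrors the spirit of BridgingFunctionGenUtil#verifyArgumentTypes:
    // declared and actual call-site types must match exactly, nullability included.
    static void verify(DataType declared, DataType actual) {
        if (!declared.equals(actual)) {
            throw new IllegalStateException(
                    "Mismatch of function's argument data type '"
                            + declared
                            + "' and actual argument type '"
                            + actual
                            + "'.");
        }
    }

    public static void main(String[] args) {
        DataType declared = new DataType("STRING", false); // STRING NOT NULL
        DataType actual = new DataType("STRING", true); // nullable STRING
        try {
            verify(declared, actual);
        } catch (IllegalStateException e) {
            System.out.println(e.getMessage());
        }
    }
}
```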

[GitHub] [flink] paul8263 commented on pull request #16740: [FLINK-23614][table-planner] The resulting scale of TRUNCATE(DECIMAL,…

2021-08-10 Thread GitBox


paul8263 commented on pull request #16740:
URL: https://github.com/apache/flink/pull/16740#issuecomment-895820166


   @flinkbot run azure






[GitHub] [flink] paul8263 commented on a change in pull request #16740: [FLINK-23614][table-planner] The resulting scale of TRUNCATE(DECIMAL,…

2021-08-10 Thread GitBox


paul8263 commented on a change in pull request #16740:
URL: https://github.com/apache/flink/pull/16740#discussion_r685791988



##
File path: 
flink-table/flink-table-planner/src/test/java/org/apache/flink/table/planner/functions/MathFunctionsITCase.java
##
@@ -115,6 +115,14 @@
 $("f0").round(2),
 "ROUND(f0, 2)",
 new BigDecimal("12345.12"),
-DataTypes.DECIMAL(8, 2).notNull()));
+DataTypes.DECIMAL(8, 2).notNull()),
+TestSpec.forFunction(BuiltInFunctionDefinitions.TRUNCATE)
+.onFieldsWithData(new BigDecimal("123.456"))
+// TRUNCATE(DECIMAL(6, 3) NOT NULL, 2) => DECIMAL(6, 
2) NOT NULL
+.testResult(
+$("f0").truncate(2),
+"TRUNCATE(f0, 2)",
+new BigDecimal("123.45"),
+DataTypes.DECIMAL(6, 2).notNull()));

Review comment:
   Hi @tsreaper ,
   I added several "truncated to zero" test cases.




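The expected result in the test spec above — `TRUNCATE(DECIMAL(6, 3), 2)` yielding `123.45`, i.e. excess digits discarded toward zero — can be checked against plain `BigDecimal`. This is only an illustration of the intended semantics, not the planner's code path:

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

// Illustration of the truncation semantics the test spec expects:
// keep `scale` fractional digits and round toward zero.
public class TruncateDemo {

    static BigDecimal truncate(BigDecimal value, int scale) {
        // RoundingMode.DOWN discards excess digits, i.e. rounds toward zero
        return value.setScale(scale, RoundingMode.DOWN);
    }

    public static void main(String[] args) {
        System.out.println(truncate(new BigDecimal("123.456"), 2)); // 123.45
        System.out.println(truncate(new BigDecimal("-123.456"), 2)); // -123.45
        System.out.println(truncate(new BigDecimal("123.456"), 0)); // 123
    }
}
```

Note the negative case: truncation rounds toward zero, unlike `FLOOR`, which is why "truncated to zero" test cases are worth adding.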




[GitHub] [flink] zch93 opened a new pull request #16765: [FLINK-23692] Clarify text on Cancel Job confirmation buttons

2021-08-10 Thread GitBox


zch93 opened a new pull request #16765:
URL: https://github.com/apache/flink/pull/16765


   ## What is the purpose of the change
   
   *When prompting a user to confirm a job cancellation, we currently present a 
popup with the text "Cancel Job?" and the decision options "Cancel" and "OK". 
These decision options were the defaults in the HTML layer; this PR changes 
them to "Yes" and "No".*
   
   
   ## Brief change log
 - *Added nzOkText="Yes" nzCancelText="No" to the html layer*
   
   
   ## Verifying this change
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): NO
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: NO
 - The serializers: don't know
 - The runtime per-record code paths (performance sensitive): NO
 - Anything that affects deployment or recovery: NO
 - The S3 file system connector: NO
   
   
   ## Documentation
   
 - Does this pull request introduce a new feature? NO
   






[jira] [Updated] (FLINK-23692) Clarify text on "Cancel Job" confirmation buttons

2021-08-10 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-23692?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-23692:
---
Labels: pull-request-available  (was: )

> Clarify text on "Cancel Job" confirmation buttons
> -
>
> Key: FLINK-23692
> URL: https://issues.apache.org/jira/browse/FLINK-23692
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Web Frontend
>Affects Versions: 1.13.1, 1.12.5
>Reporter: Márton Balassi
>Assignee: Zsombor Chikán
>Priority: Minor
>  Labels: pull-request-available
> Attachments: image-2021-07-16-14-47-12-383.png, 
> image-2021-07-21-11-03-05-977.png
>
>
> When prompting a user to confirm a job cancellation currently we present the 
> following popup (see screenshots). These text descriptions are the default in 
> the html layer, however it happens to be misleading when the action to be 
> confirmed is cancel.
> Multiple users reported that this behavior is confusing and as it is only a 
> display issue its scope is very limited.
>  
> !image-2021-07-21-11-03-05-977.png!!image-2021-07-16-14-47-12-383.png!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-23692) Clarify text on "Cancel Job" confirmation buttons

2021-08-10 Thread Jira


[ 
https://issues.apache.org/jira/browse/FLINK-23692?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17396524#comment-17396524
 ] 

Zsombor Chikán commented on FLINK-23692:


There is an open PR, ready for review 
[https://github.com/apache/flink/pull/16765] 

> Clarify text on "Cancel Job" confirmation buttons
> -
>
> Key: FLINK-23692
> URL: https://issues.apache.org/jira/browse/FLINK-23692
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Web Frontend
>Affects Versions: 1.13.1, 1.12.5
>Reporter: Márton Balassi
>Assignee: Zsombor Chikán
>Priority: Minor
>  Labels: pull-request-available
> Attachments: image-2021-07-16-14-47-12-383.png, 
> image-2021-07-21-11-03-05-977.png
>
>
> When prompting a user to confirm a job cancellation currently we present the 
> following popup (see screenshots). These text descriptions are the default in 
> the html layer, however it happens to be misleading when the action to be 
> confirmed is cancel.
> Multiple users reported that this behavior is confusing and as it is only a 
> display issue its scope is very limited.
>  
> !image-2021-07-21-11-03-05-977.png!!image-2021-07-16-14-47-12-383.png!





[GitHub] [flink] paul8263 commented on a change in pull request #16740: [FLINK-23614][table-planner] The resulting scale of TRUNCATE(DECIMAL,…

2021-08-10 Thread GitBox


paul8263 commented on a change in pull request #16740:
URL: https://github.com/apache/flink/pull/16740#discussion_r685794542



##
File path: 
flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/functions/sql/FlinkSqlOperatorTable.java
##
@@ -224,6 +224,15 @@ public void lookupOperatorOverloads(
 OperandTypes.or(OperandTypes.NUMERIC_INTEGER, 
OperandTypes.NUMERIC),
 SqlFunctionCategory.NUMERIC);
 
+public static final SqlFunction TRUNCATE =
+new SqlFunction(
+"TRUNCATE",
+SqlKind.OTHER_FUNCTION,
+FlinkReturnTypes.ROUND_FUNCTION_NULLABLE,

Review comment:
   OK. I'll keep it the same as the round method.








[GitHub] [flink] flinkbot commented on pull request #16765: [FLINK-23692] Clarify text on Cancel Job confirmation buttons

2021-08-10 Thread GitBox


flinkbot commented on pull request #16765:
URL: https://github.com/apache/flink/pull/16765#issuecomment-895826855


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit b95100a0fb1ea2c8a1adc1530d782018c59ef7af (Tue Aug 10 
08:15:12 UTC 2021)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into to Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   






[GitHub] [flink] AHeise commented on a change in pull request #16590: [FLINK-19554][connector/testing-framework] Connector Testing Framework

2021-08-10 Thread GitBox


AHeise commented on a change in pull request #16590:
URL: https://github.com/apache/flink/pull/16590#discussion_r685803188



##
File path: 
flink-test-utils-parent/flink-connector-testing-framework/src/main/java/org/apache/flink/connectors/test/common/testsuites/TestSuiteBase.java
##
@@ -0,0 +1,430 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.connectors.test.common.testsuites;
+
+import org.apache.flink.api.common.JobStatus;
+import org.apache.flink.api.common.eventtime.WatermarkStrategy;
+import org.apache.flink.api.common.typeutils.TypeSerializer;
+import org.apache.flink.api.connector.source.Boundedness;
+import org.apache.flink.connectors.test.common.environment.ClusterControllable;
+import org.apache.flink.connectors.test.common.environment.TestEnvironment;
+import org.apache.flink.connectors.test.common.external.ExternalContext;
+import org.apache.flink.connectors.test.common.external.SourceSplitDataWriter;
+import org.apache.flink.connectors.test.common.junit.annotations.Case;
+import 
org.apache.flink.connectors.test.common.junit.extensions.TestLoggerExtension;
+import 
org.apache.flink.connectors.test.common.junit.extensions.TestingFrameworkExtension;
+import org.apache.flink.connectors.test.common.utils.JobStatusUtils;
+import org.apache.flink.core.execution.JobClient;
+import org.apache.flink.streaming.api.datastream.DataStreamSource;
+import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
+import org.apache.flink.streaming.api.operators.collect.CollectResultIterator;
+import org.apache.flink.streaming.api.operators.collect.CollectSinkOperator;
+import 
org.apache.flink.streaming.api.operators.collect.CollectSinkOperatorFactory;
+import org.apache.flink.streaming.api.operators.collect.CollectStreamSink;
+import org.apache.flink.util.CloseableIterator;
+
+import org.junit.jupiter.api.DisplayName;
+import org.junit.jupiter.api.Tag;
+import org.junit.jupiter.api.TestInstance;
+import org.junit.jupiter.api.extension.ExtendWith;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.time.Duration;
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.Collections;
+import java.util.Iterator;
+import java.util.List;
+import java.util.NoSuchElementException;
+import java.util.UUID;
+import java.util.function.Function;
+import java.util.stream.Collectors;
+
+import static org.junit.jupiter.api.Assertions.assertEquals;
+import static org.junit.jupiter.api.Assertions.assertFalse;
+import static org.junit.jupiter.api.Assumptions.assumeTrue;
+
+/**
+ * Base class for all test suites.
+ *
+ * <p>All cases should have well-descriptive JavaDoc, including:
+ *
+ * <ul>
+ *   <li>What's the purpose of this case
+ *   <li>Simple description of how this case works
+ *   <li>Condition to fulfill in order to pass this case
+ *   <li>Requirement of running this case
+ * </ul>
+ */
+@ExtendWith({TestingFrameworkExtension.class, TestLoggerExtension.class})
+@TestInstance(TestInstance.Lifecycle.PER_CLASS)
+public abstract class TestSuiteBase {
+
+private static final Logger LOG = 
LoggerFactory.getLogger(TestSuiteBase.class);
+
+    // ---------------------------------- Basic test cases ----------------------------------
+
+/**
+ * Test connector source with only one split in the external system.
+ *
+ * <p>This test will create one split in the external system, write test data into it, and
+ * consume back via a Flink job with 1 parallelism.
+ *
+ * <p>The number and order of records consumed by Flink need to be identical to the test data
+ * written to the external system in order to pass this test.
+ *
+ * <p>A bounded source is required for this test.
+ */
+@Case

Review comment:
   Okay, I'm not deep enough in JUnit5 yet to propose an alternative. I had 
assumed that we could put the `@ExtendWith` at the class level and then just use 
`@TestTemplate`.





[jira] [Commented] (FLINK-23650) A sql contains 'Case when' could not run successfully when choose Hive Dialect

2021-08-10 Thread luoyuxia (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-23650?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17396532#comment-17396532
 ] 

luoyuxia commented on FLINK-23650:
--

For such SQL, the logical plans generated by the Hive dialect and the default 
dialect are different.

I would like to fix it.

> A sql contains 'Case when' could not run successfully when choose Hive Dialect
> --
>
> Key: FLINK-23650
> URL: https://issues.apache.org/jira/browse/FLINK-23650
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive
>Reporter: JING ZHANG
>Priority: Major
>
> {code:java}
> tableEnv.sqlQuery(
> "select x,CASE WHEN x > 1 THEN 'aaa' WHEN x >1 AND x < 3 THEN 'bbb' 
> ELSE 'ccc' END as y from src"){code}
> If use Default dialect, the above code could run successfully. However if use 
> Hive dialect, following exception would be thrown out.
> {code:java}
> org.apache.flink.table.planner.codegen.CodeGenException: Unsupported call: 
> when(BOOLEAN, STRING NOT NULL, BOOLEAN, STRING NOT NULL, STRING NOT NULL) 
> org.apache.flink.table.planner.codegen.CodeGenException: Unsupported call: 
> when(BOOLEAN, STRING NOT NULL, BOOLEAN, STRING NOT NULL, STRING NOT NULL) If 
> you think this function should be supported, you can create an issue and 
> start a discussion for it.
>  at 
> org.apache.flink.table.planner.codegen.ExprCodeGenerator$$anonfun$generateCallExpression$7$$anonfun$apply$2.apply(ExprCodeGenerator.scala:837)
>  at 
> org.apache.flink.table.planner.codegen.ExprCodeGenerator$$anonfun$generateCallExpression$7$$anonfun$apply$2.apply(ExprCodeGenerator.scala:837)
>  at scala.Option.getOrElse(Option.scala:121) at 
> org.apache.flink.table.planner.codegen.ExprCodeGenerator$$anonfun$generateCallExpression$7.apply(ExprCodeGenerator.scala:836)
>  at 
> org.apache.flink.table.planner.codegen.ExprCodeGenerator$$anonfun$generateCallExpression$7.apply(ExprCodeGenerator.scala:841)
>  at scala.Option.getOrElse(Option.scala:121) at 
> org.apache.flink.table.planner.codegen.ExprCodeGenerator.generateCallExpression(ExprCodeGenerator.scala:829)
>  at 
> org.apache.flink.table.planner.codegen.ExprCodeGenerator.visitCall(ExprCodeGenerator.scala:503)
>  at 
> org.apache.flink.table.planner.codegen.ExprCodeGenerator.visitCall(ExprCodeGenerator.scala:58)
>  at org.apache.calcite.rex.RexCall.accept(RexCall.java:174) at 
> org.apache.flink.table.planner.codegen.ExprCodeGenerator.generateExpression(ExprCodeGenerator.scala:157)
>  at 
> org.apache.flink.table.planner.codegen.CalcCodeGenerator$$anonfun$2.apply(CalcCodeGenerator.scala:137)
>  at 
> org.apache.flink.table.planner.codegen.CalcCodeGenerator$$anonfun$2.apply(CalcCodeGenerator.scala:137)
>  at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
>  at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
>  at 
> scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
>  at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48) at 
> scala.collection.TraversableLike$class.map(TraversableLike.scala:234) at 
> scala.collection.AbstractTraversable.map(Traversable.scala:104) at 
> org.apache.flink.table.planner.codegen.CalcCodeGenerator$.produceProjectionCode$1(CalcCodeGenerator.scala:137)
>  at 
> org.apache.flink.table.planner.codegen.CalcCodeGenerator$.generateProcessCode(CalcCodeGenerator.scala:162)
>  at 
> org.apache.flink.table.planner.codegen.CalcCodeGenerator$.generateCalcOperator(CalcCodeGenerator.scala:48)
>  at 
> org.apache.flink.table.planner.codegen.CalcCodeGenerator.generateCalcOperator(CalcCodeGenerator.scala)
>  at 
> org.apache.flink.table.planner.plan.nodes.exec.common.CommonExecCalc.translateToPlanInternal(CommonExecCalc.java:94)
>  at 
> org.apache.flink.table.planner.plan.nodes.exec.ExecNodeBase.translateToPlan(ExecNodeBase.java:134)
>  at 
> org.apache.flink.table.planner.plan.nodes.exec.ExecEdge.translateToPlan(ExecEdge.java:250)
>  at 
> org.apache.flink.table.planner.plan.nodes.exec.batch.BatchExecSink.translateToPlanInternal(BatchExecSink.java:58)
>  at 
> org.apache.flink.table.planner.plan.nodes.exec.ExecNodeBase.translateToPlan(ExecNodeBase.java:134)
>  at 
> org.apache.flink.table.planner.delegation.BatchPlanner$$anonfun$1.apply(BatchPlanner.scala:81)
>  at 
> org.apache.flink.table.planner.delegation.BatchPlanner$$anonfun$1.apply(BatchPlanner.scala:80)
>  at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
>  at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
>  at scala.collection.Iterator$class.foreach(Iterator.scala:891) at 
> scala.collection.AbstractIterator.foreach(Iterator.scala:1334) at 
> scala.collection.IterableLike$class.foreach(It

[GitHub] [flink] flinkbot edited a comment on pull request #16603: [FLINK-23111][runtime-web] Bump angular's and ng-zorro's version to 12

2021-08-10 Thread GitBox


flinkbot edited a comment on pull request #16603:
URL: https://github.com/apache/flink/pull/16603#issuecomment-887338670


   
   ## CI report:
   
   * 2d804307c02b930091d03f45b6285546214e3f89 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=21804)
 
   
   






[GitHub] [flink] curcur commented on a change in pull request #16714: [FLINK-23475][runtime][checkpoint] Supports repartition partly finished broacast states

2021-08-10 Thread GitBox


curcur commented on a change in pull request #16714:
URL: https://github.com/apache/flink/pull/16714#discussion_r685753095



##
File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/checkpoint/RoundRobinOperatorStateRepartitioner.java
##
@@ -123,9 +128,28 @@
 return mergeMapList;
 }
 
-/** Collect union states from given parallelSubtaskStates. */
 private Map>>
 collectUnionStates(List> 
parallelSubtaskStates) {
+return collectStates(parallelSubtaskStates, 
OperatorStateHandle.Mode.UNION);
+}
+
+private Map>>
+collectPartlyFinishedBroadcastStates(
+List> parallelSubtaskStates) {
+return collectStates(parallelSubtaskStates, 
OperatorStateHandle.Mode.BROADCAST).entrySet()
+.stream()
+.filter(
+e ->
+e.getValue().size() > 0
+&& e.getValue().size() < 
parallelSubtaskStates.size())

Review comment:
   My guess is that the empty state should be `e.getValue().size() == 0` ?




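The filter under review — keep a broadcast state entry only when some, but not all, parallel subtasks reported it — can be illustrated with plain collections. Names here are hypothetical; the real code filters `OperatorStateHandle` metadata:

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Self-contained illustration of the "partly finished" predicate:
// between 1 and parallelism-1 subtasks reported state for the entry.
public class PartlyFinishedDemo {

    static Map<String, List<Integer>> partlyFinished(
            Map<String, List<Integer>> statesByName, int parallelism) {
        return statesByName.entrySet().stream()
                .filter(
                        e ->
                                e.getValue().size() > 0 // at least one subtask reported it
                                        && e.getValue().size() < parallelism) // but not all did
                .collect(Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue));
    }

    public static void main(String[] args) {
        Map<String, List<Integer>> states =
                Map.of(
                        "a", List.of(), // reported by no subtask: dropped
                        "b", List.of(0, 1), // 2 of 3 subtasks: partly finished
                        "c", List.of(0, 1, 2)); // all 3 subtasks: dropped
        System.out.println(partlyFinished(states, 3).keySet()); // [b]
    }
}
```

Under this reading, `size() == 0` is the "no state at all" case the reviewer asks about, which the `> 0` clause filters out together with the fully reported case.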




[GitHub] [flink] flinkbot edited a comment on pull request #16740: [FLINK-23614][table-planner] The resulting scale of TRUNCATE(DECIMAL,…

2021-08-10 Thread GitBox


flinkbot edited a comment on pull request #16740:
URL: https://github.com/apache/flink/pull/16740#issuecomment-894176423


   
   ## CI report:
   
   *  Unknown: [CANCELED](TBD) 
   * ae3ae37c52e99c8ab1f294545445038b4e469e0d UNKNOWN
   
   






[GitHub] [flink] flinkbot edited a comment on pull request #16758: [FLINK-23688][hadoop] Make JaasModule.getAppConfigurationEntries public

2021-08-10 Thread GitBox


flinkbot edited a comment on pull request #16758:
URL: https://github.com/apache/flink/pull/16758#issuecomment-895165395


   
   ## CI report:
   
   * c09fd62d474672f10aa5cf5ea9d1b279896d984a Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=21774)
 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=21812)
 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=21788)
 
   
   






[GitHub] [flink] flinkbot commented on pull request #16765: [FLINK-23692] Clarify text on Cancel Job confirmation buttons

2021-08-10 Thread GitBox


flinkbot commented on pull request #16765:
URL: https://github.com/apache/flink/pull/16765#issuecomment-895836722


   
   ## CI report:
   
   * b95100a0fb1ea2c8a1adc1530d782018c59ef7af UNKNOWN
   
   






[GitHub] [flink] flinkbot commented on pull request #16764: [FLINK-23693][tests] Add principal creation possibilities to SecureTestEnvironment

2021-08-10 Thread GitBox


flinkbot commented on pull request #16764:
URL: https://github.com/apache/flink/pull/16764#issuecomment-895836639


   
   ## CI report:
   
   * 2bce8e962ba637e992e25b058b47dbac5efeb52b UNKNOWN
   
   






[GitHub] [flink] zlzhang0122 commented on a change in pull request #16637: [FLINK-23189][checkpoint]Count and fail the task when the disk is error on JobManager

2021-08-10 Thread GitBox


zlzhang0122 commented on a change in pull request #16637:
URL: https://github.com/apache/flink/pull/16637#discussion_r685807958



##
File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/checkpoint/OperatorCoordinatorCheckpoints.java
##
@@ -121,14 +123,17 @@ private static void acknowledgeAllCoordinators(
 final Throwable error =
 checkpoint.isDisposed() ? checkpoint.getFailureCause() 
: null;
 
+CheckpointFailureReason reason = 
CheckpointFailureReason.TRIGGER_CHECKPOINT_FAILURE;

Review comment:
   I agree with @pnowojski that `acknowledgeAllCoordinators` is called very 
early, and as far as I can see, `acknowledgeAllCoordinators` is used to take 
checkpoints for the `OperatorCoordinator`, which is running on the JobMaster. So 
maybe we can simply consider it as another kind of masterStates.








[jira] [Assigned] (FLINK-23473) Do not create transaction in TwoPhaseCommitSinkFunction after finish()

2021-08-10 Thread Yuan Mei (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-23473?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuan Mei reassigned FLINK-23473:


Assignee: Yuan Mei

> Do not create transaction in TwoPhaseCommitSinkFunction after finish()
> --
>
> Key: FLINK-23473
> URL: https://issues.apache.org/jira/browse/FLINK-23473
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / Kafka, Runtime / Checkpointing
>Reporter: Dawid Wysakowicz
>Assignee: Yuan Mei
>Priority: Major
>
> In a scenario where:
> 1. task/operator received `finish()`
> 2. checkpoint 42 triggered (not yet completed)
> 3. checkpoint 43 triggered (not yet completed)
> 4. checkpoint 44 triggered (not yet completed)
> 5. notifyCheckpointComplete(43)
> And what should we do now? We can of course commit all transactions up to
> checkpoint 43. But should we keep waiting for
> `notifyCheckpointComplete(44)`? What if in the meantime another checkpoint
> is triggered? We could end up waiting indefinitely.
> Our proposal is to shut down the task immediately after seeing the first
> `notifyCheckpointComplete(X)`, where X is any checkpoint triggered AFTER
> `finish()`. This should be fine, as:
> a) ideally there should be no new pending transactions opened after
> checkpoint 42
> b) even if operator/function is opening some transactions for checkpoint 43
> and checkpoint 44 (`FlinkKafkaProducer`), those transactions after
> checkpoint 42 should be empty
> After seeing 5. (`notifyCheckpointComplete(43)`), it should be good enough to:
> - commit transactions from checkpoint 42, (and 43 if they were created,
> depends on the user code)
> - close operator, aborting any pending transactions (for checkpoint 44 if
> they were opened, depends on the user code)
> If checkpoint 44 completes afterwards, it will still be valid. Ideally we
> would recommend that after seeing `finish()` operators/functions should not
> be opening any new transactions.
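The proposed shutdown rule can be sketched as a toy model (hypothetical names, not the actual `TwoPhaseCommitSinkFunction` API): the first `notifyCheckpointComplete(X)` seen after `finish()` commits every pending transaction up to X, aborts the rest, and closes the operator instead of waiting for further notifications.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.TreeMap;

/** Toy model of the proposal above. Names are illustrative, not the Flink API. */
public class TwoPhaseCommitModel {
    private final TreeMap<Long, String> pendingTransactions = new TreeMap<>();
    private boolean finished;
    final List<String> committed = new ArrayList<>();
    final List<String> aborted = new ArrayList<>();

    void beginTransaction(long checkpointId) {
        pendingTransactions.put(checkpointId, "txn-" + checkpointId);
    }

    void finish() {
        finished = true;
    }

    /** Returns true when the operator should shut down immediately. */
    boolean notifyCheckpointComplete(long checkpointId) {
        // Commit all transactions belonging to checkpoints <= checkpointId.
        committed.addAll(pendingTransactions.headMap(checkpointId, true).values());
        pendingTransactions.headMap(checkpointId, true).clear();
        if (finished) {
            // Abort whatever is still open (checkpoint 44 in the example above)
            // and close, instead of waiting indefinitely for more notifications.
            aborted.addAll(pendingTransactions.values());
            pendingTransactions.clear();
            return true;
        }
        return false;
    }

    public static void main(String[] args) {
        TwoPhaseCommitModel op = new TwoPhaseCommitModel();
        op.finish(); // step 1 of the scenario
        op.beginTransaction(42);
        op.beginTransaction(43);
        op.beginTransaction(44);
        boolean shutdown = op.notifyCheckpointComplete(43); // step 5
        System.out.println(shutdown + " " + op.committed + " " + op.aborted);
        // prints: true [txn-42, txn-43] [txn-44]
    }
}
```

Under this model checkpoint 44 can still complete afterwards because its (empty) transaction was only aborted locally, which matches the note that completed later checkpoints remain valid.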



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] AHeise commented on a change in pull request #16590: [FLINK-19554][connector/testing-framework] Connector Testing Framework

2021-08-10 Thread GitBox


AHeise commented on a change in pull request #16590:
URL: https://github.com/apache/flink/pull/16590#discussion_r685809592



##
File path: 
flink-test-utils-parent/flink-connector-testing-framework/src/main/java/org/apache/flink/connectors/test/common/testsuites/TestSuiteBase.java
##
@@ -0,0 +1,430 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.connectors.test.common.testsuites;
+
+import org.apache.flink.api.common.JobStatus;
+import org.apache.flink.api.common.eventtime.WatermarkStrategy;
+import org.apache.flink.api.common.typeutils.TypeSerializer;
+import org.apache.flink.api.connector.source.Boundedness;
+import org.apache.flink.connectors.test.common.environment.ClusterControllable;
+import org.apache.flink.connectors.test.common.environment.TestEnvironment;
+import org.apache.flink.connectors.test.common.external.ExternalContext;
+import org.apache.flink.connectors.test.common.external.SourceSplitDataWriter;
+import org.apache.flink.connectors.test.common.junit.annotations.Case;
+import 
org.apache.flink.connectors.test.common.junit.extensions.TestLoggerExtension;
+import 
org.apache.flink.connectors.test.common.junit.extensions.TestingFrameworkExtension;
+import org.apache.flink.connectors.test.common.utils.JobStatusUtils;
+import org.apache.flink.core.execution.JobClient;
+import org.apache.flink.streaming.api.datastream.DataStreamSource;
+import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
+import org.apache.flink.streaming.api.operators.collect.CollectResultIterator;
+import org.apache.flink.streaming.api.operators.collect.CollectSinkOperator;
+import 
org.apache.flink.streaming.api.operators.collect.CollectSinkOperatorFactory;
+import org.apache.flink.streaming.api.operators.collect.CollectStreamSink;
+import org.apache.flink.util.CloseableIterator;
+
+import org.junit.jupiter.api.DisplayName;
+import org.junit.jupiter.api.Tag;
+import org.junit.jupiter.api.TestInstance;
+import org.junit.jupiter.api.extension.ExtendWith;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.time.Duration;
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.Collections;
+import java.util.Iterator;
+import java.util.List;
+import java.util.NoSuchElementException;
+import java.util.UUID;
+import java.util.function.Function;
+import java.util.stream.Collectors;
+
+import static org.junit.jupiter.api.Assertions.assertEquals;
+import static org.junit.jupiter.api.Assertions.assertFalse;
+import static org.junit.jupiter.api.Assumptions.assumeTrue;
+
+/**
+ * Base class for all test suites.
+ *
+ * All cases should have well-descriptive JavaDoc, including:
+ *
+ * 
+ *   What's the purpose of this case
+ *   Simple description of how this case works
+ *   Condition to fulfill in order to pass this case
+ *   Requirement of running this case
+ * 
+ */
+@ExtendWith({TestingFrameworkExtension.class, TestLoggerExtension.class})
+@TestInstance(TestInstance.Lifecycle.PER_CLASS)
+public abstract class TestSuiteBase {
+
+private static final Logger LOG = 
LoggerFactory.getLogger(TestSuiteBase.class);
+
+// - Basic test cases 
-
+
+/**
+ * Test connector source with only one split in the external system.
+ *
+ * This test will create one split in the external system, write test 
data into it, and
+ * consume back via a Flink job with 1 parallelism.
+ *
+ * The number and order of records consumed by Flink need to be 
identical to the test data
+ * written to the external system in order to pass this test.
+ *
+ * A bounded source is required for this test.
+ */
+@Case
+@DisplayName("Test source with single split")
+public void testSourceSingleSplit(TestEnvironment testEnv, 
ExternalContext externalContext)
+throws Exception {
+
+// Write test data to external system
+final Collection testRecords = 
generateAndWriteTestData(externalContext);
+
+// Build and execute Flink job
+StreamExecutionEnvironment execEnv = 
testEnv.createExecutionEnvironment();
+final CloseableIterator r

[GitHub] [flink] AHeise commented on a change in pull request #16590: [FLINK-19554][connector/testing-framework] Connector Testing Framework

2021-08-10 Thread GitBox


AHeise commented on a change in pull request #16590:
URL: https://github.com/apache/flink/pull/16590#discussion_r685810346




[GitHub] [flink] AHeise commented on a change in pull request #16590: [FLINK-19554][connector/testing-framework] Connector Testing Framework

2021-08-10 Thread GitBox


AHeise commented on a change in pull request #16590:
URL: https://github.com/apache/flink/pull/16590#discussion_r685812389



##
File path: 
flink-connectors/flink-connector-kafka/src/test/java/org/apache/flink/connector/kafka/source/e2e/KafkaSourceE2ECase.java
##
@@ -0,0 +1,141 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.connector.kafka.source.e2e;
+
+import 
org.apache.flink.connector.kafka.source.e2e.resources.KafkaContainerizedExternalSystem;
+import 
org.apache.flink.connector.kafka.source.e2e.resources.KafkaMultipleTopicExternalContext;
+import 
org.apache.flink.connector.kafka.source.e2e.resources.KafkaSingleTopicExternalContext;
+import 
org.apache.flink.connectors.test.common.environment.FlinkContainersTestEnvironment;
+import 
org.apache.flink.connectors.test.common.environment.MiniClusterTestEnvironment;
+import 
org.apache.flink.connectors.test.common.junit.annotations.WithExternalContextFactory;
+import 
org.apache.flink.connectors.test.common.junit.annotations.WithExternalSystem;
+import 
org.apache.flink.connectors.test.common.junit.annotations.WithTestEnvironment;
+import org.apache.flink.connectors.test.common.testsuites.TestSuiteBase;
+import org.apache.flink.connectors.test.common.utils.ConnectorJarUtils;
+
+import org.junit.jupiter.api.Disabled;
+import org.junit.jupiter.api.DisplayName;
+import org.junit.jupiter.api.Nested;
+import org.junit.jupiter.api.Tag;
+
+/**
+ * End-to-end test for {@link 
org.apache.flink.connector.kafka.source.KafkaSource} using testing
+ * framework, based on JUnit 5.
+ *
+ * This class uses {@link Nested} classes to test functionality of 
KafkaSource under different
+ * Flink environments (MiniCluster and Containers). Each nested class extends 
{@link TestSuiteBase}
+ * for reusing test cases already defined by testing framework.
+ */
+@Disabled
+@Tag("TestingFramework")
+@DisplayName("Kafka Source E2E Test")
+public class KafkaSourceE2ECase {
+
+@Nested
+@DisplayName("On MiniCluster")
+class OnMiniCluster extends TestSuiteBase {

Review comment:
   I think this is a good solution for now but I'd like to revisit it 
later; it would lead to a module explosion if we add a new e2e module for 
all connectors under test. But for Kafka, this works well.
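   The reuse pattern under discussion can be sketched without JUnit (illustrative names only, not the testing framework's actual classes): the test cases live once in an abstract suite, and each environment binds them by subclassing — which is also why every new environment/connector combination tends to need its own class or module.

```java
import java.util.List;

/** Plain-Java sketch of suite reuse via nested subclasses. Illustrative only. */
public class SuiteReuseSketch {

    abstract static class SuiteBase {
        abstract String environmentName();

        /** Each "case" runs against whatever environment the subclass provides. */
        List<String> runAllCases() {
            return List.of(
                    "singleSplit@" + environmentName(),
                    "multipleSplits@" + environmentName());
        }
    }

    static class OnMiniCluster extends SuiteBase {
        String environmentName() { return "MiniCluster"; }
    }

    static class OnContainers extends SuiteBase {
        String environmentName() { return "Containers"; }
    }

    public static void main(String[] args) {
        // The same case definitions execute once per environment binding.
        System.out.println(new OnMiniCluster().runAllCases());
        System.out.println(new OnContainers().runAllCases());
    }
}
```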








[jira] [Assigned] (FLINK-23669) Avoid using Scala >= 2.12.8 in Flink Training exercises

2021-08-10 Thread Nico Kruber (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-23669?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nico Kruber reassigned FLINK-23669:
---

Assignee: Nico Kruber  (was: Nico Kruber)

> Avoid using Scala >= 2.12.8 in Flink Training exercises
> ---
>
> Key: FLINK-23669
> URL: https://issues.apache.org/jira/browse/FLINK-23669
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation / Training / Exercises
>Affects Versions: 1.13.2
>Reporter: Nico Kruber
>Assignee: Nico Kruber
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.13.3
>
>
> The current IDE setup instructions of the Flink training exercises do not 
> mention a specific Scala SDK to set up. For compatibility reasons described 
> in 
> https://ci.apache.org/projects/flink/flink-docs-stable/docs/dev/datastream/project-configuration/#scala-versions,
>  we should also not use 2.12.8 and up.





[jira] [Assigned] (FLINK-23667) Fix training exercises IDE setup description for Scala

2021-08-10 Thread Nico Kruber (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-23667?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nico Kruber reassigned FLINK-23667:
---

Assignee: Nico Kruber  (was: Nico Kruber)

> Fix training exercises IDE setup description for Scala
> --
>
> Key: FLINK-23667
> URL: https://issues.apache.org/jira/browse/FLINK-23667
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation / Training / Exercises
>Affects Versions: 1.13.2
>Reporter: Nico Kruber
>Assignee: Nico Kruber
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 1.13.3
>
>
> If you follow the training exercises instructions to set up your IDE with 
> code formatting and the Save Actions plugin while having Scala enabled, it 
> will completely reformat your Scala code files instead of keeping them as is.
> The instructions should be updated to match the ones used for the Flink main 
> project.





[jira] [Closed] (FLINK-22014) Flink JobManager failed to restart after failure in kubernetes HA setup

2021-08-10 Thread Mikalai Lushchytski (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22014?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikalai Lushchytski closed FLINK-22014.
---
Resolution: Not A Problem

Closing this issue as we finally managed to figure out the root cause: it was 
related to a bucket lifecycle policy that expired files after 7 days, which 
resulted in the missing `submittedJobGraphXXX` data.

> Flink JobManager failed to restart after failure in kubernetes HA setup
> ---
>
> Key: FLINK-22014
> URL: https://issues.apache.org/jira/browse/FLINK-22014
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / Kubernetes
>Affects Versions: 1.11.3, 1.12.2, 1.13.0
>Reporter: Mikalai Lushchytski
>Priority: Major
>  Labels: k8s-ha, pull-request-available
> Attachments: flink-logs.txt.zip, image-2021-04-19-11-17-58-215.png, 
> scalyr-logs (1).txt
>
>
> After the JobManager pod failed and the new one started, it was not able to 
> recover jobs due to the absence of recovery data in storage - config map 
> pointed at not existing file.
>   
>  Due to this the JobManager pod entered the `CrashLoopBackOff` state and 
> was not able to recover - each attempt failed with the same error, so the 
> whole cluster became unrecoverable and stopped operating.
>   
>  I had to manually delete the config map and start the jobs again without the 
> save point.
>   
>  If I tried to emulate the failure further by deleting job manager pod 
> manually, the new pod every time recovered well and issue was not 
> reproducible anymore artificially.
>   
>  Below is the failure log:
> {code:java}
> 2021-03-26 08:22:57,925 INFO 
> org.apache.flink.runtime.resourcemanager.slotmanager.SlotManagerImpl [] - 
> Starting the SlotManager.
>  2021-03-26 08:22:57,928 INFO 
> org.apache.flink.runtime.leaderretrieval.DefaultLeaderRetrievalService [] - 
> Starting DefaultLeaderRetrievalService with KubernetesLeaderRetrievalDriver
> {configMapName='stellar-flink-cluster-dispatcher-leader'}.
>  2021-03-26 08:22:57,931 INFO 
> org.apache.flink.runtime.jobmanager.DefaultJobGraphStore [] - Retrieved job 
> ids [198c46bac791e73ebcc565a550fa4ff6, 344f5ebc1b5c3a566b4b2837813e4940, 
> 96c4603a0822d10884f7fe536703d811, d9ded24224aab7c7041420b3efc1b6ba] from 
> KubernetesStateHandleStore{configMapName='stellar-flink-cluster-dispatcher-leader'}
> 2021-03-26 08:22:57,933 INFO 
> org.apache.flink.runtime.dispatcher.runner.SessionDispatcherLeaderProcess [] 
> - Trying to recover job with job id 198c46bac791e73ebcc565a550fa4ff6.
>  2021-03-26 08:22:58,029 INFO 
> org.apache.flink.runtime.dispatcher.runner.SessionDispatcherLeaderProcess [] 
> - Stopping SessionDispatcherLeaderProcess.
>  2021-03-26 08:28:22,677 INFO 
> org.apache.flink.runtime.jobmanager.DefaultJobGraphStore [] - Stopping 
> DefaultJobGraphStore. 2021-03-26 08:28:22,681 ERROR 
> org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - Fatal error 
> occurred in the cluster entrypoint. java.util.concurrent.CompletionException: 
> org.apache.flink.util.FlinkRuntimeException: Could not recover job with job 
> id 198c46bac791e73ebcc565a550fa4ff6.
>at java.util.concurrent.CompletableFuture.encodeThrowable(Unknown Source) 
> ~[?:?]
>at java.util.concurrent.CompletableFuture.completeThrowable(Unknown 
> Source) [?:?]
>at java.util.concurrent.CompletableFuture$AsyncSupply.run(Unknown Source) 
> [?:?]
>at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) [?:?]
>at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) [?:?]
>at java.lang.Thread.run(Unknown Source) [?:?] Caused by: 
> org.apache.flink.util.FlinkRuntimeException: Could not recover job with job 
> id 198c46bac791e73ebcc565a550fa4ff6.
>at 
> org.apache.flink.runtime.dispatcher.runner.SessionDispatcherLeaderProcess.recoverJob(SessionDispatcherLeaderProcess.java:144
>  undefined) ~[flink-dist_2.12-1.12.2.jar:1.12.2]
>at 
> org.apache.flink.runtime.dispatcher.runner.SessionDispatcherLeaderProcess.recoverJobs(SessionDispatcherLeaderProcess.java:122
>  undefined) ~[flink-dist_2.12-1.12.2.jar:1.12.2]
>at 
> org.apache.flink.runtime.dispatcher.runner.AbstractDispatcherLeaderProcess.supplyUnsynchronizedIfRunning(AbstractDispatcherLeaderProcess.java:198
>  undefined) ~[flink-dist_2.12-1.12.2.jar:1.12.2]
>at 
> org.apache.flink.runtime.dispatcher.runner.SessionDispatcherLeaderProcess.recoverJobsIfRunning(SessionDispatcherLeaderProcess.java:113
>  undefined) ~[flink-dist_2.12-1.12.2.jar:1.12.2] ... 4 more 
> Caused by: org.apache.flink.util.FlinkException: Could not retrieve submitted 
> JobGraph from state handle under jobGraph-198c46bac791e73ebcc565a550fa4ff6. 
> This indicates that the retrieved state handle is broken. Try cleaning the 
> state handle store.
>at 

[GitHub] [flink] mbalassi commented on pull request #16765: [FLINK-23692] Clarify text on Cancel Job confirmation buttons

2021-08-10 Thread GitBox


mbalassi commented on pull request #16765:
URL: https://github.com/apache/flink/pull/16765#issuecomment-895850635


   Thanks @zch93. After the CI passes and barring objections within 48 hours, I am 
happy to merge.






[jira] [Created] (FLINK-23700) Add `since` field to configuration options

2021-08-10 Thread Paul Lin (Jira)
Paul Lin created FLINK-23700:


 Summary: Add `since` field to configuration options
 Key: FLINK-23700
 URL: https://issues.apache.org/jira/browse/FLINK-23700
 Project: Flink
  Issue Type: Improvement
  Components: Documentation, Runtime / Configuration
Reporter: Paul Lin


Flink configuration options are getting more and more complicated. When upgrading 
Flink, it's hard for users to find the configuration option diffs between 
versions. To address the problem, we may need to add a `since` field to the 
options, both in the documents and in the Javadocs.
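A minimal sketch of the proposal (hypothetical API and option keys, not Flink's actual `ConfigOption`): each option carries a `since` version, so the set of options added after a given release can be listed mechanically when preparing upgrade notes.

```java
import java.util.LinkedHashMap;
import java.util.Map;

/** Sketch: attach a "since" version to each option for doc diffing. Hypothetical API. */
public class OptionWithSince {
    final String key;
    final String defaultValue;
    final String since;

    OptionWithSince(String key, String defaultValue, String since) {
        this.key = key;
        this.defaultValue = defaultValue;
        this.since = since;
    }

    /** Options introduced after the given version (what changed since an upgrade). */
    static Map<String, String> addedAfter(Iterable<OptionWithSince> options, String version) {
        Map<String, String> result = new LinkedHashMap<>();
        for (OptionWithSince o : options) {
            // Naive string comparison works here only because all minors are two
            // digits ("1.10" < "1.14"); a real implementation needs a version parser.
            if (o.since.compareTo(version) > 0) {
                result.put(o.key, o.since);
            }
        }
        return result;
    }

    public static void main(String[] args) {
        java.util.List<OptionWithSince> opts = java.util.List.of(
                new OptionWithSince("option.a", "-", "1.10"),
                new OptionWithSince("option.b", "-", "1.11"),
                new OptionWithSince("option.c", "-", "1.14"));
        System.out.println(addedAfter(opts, "1.11")); // prints {option.c=1.14}
    }
}
```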





[GitHub] [flink] kisimple closed pull request #7195: [hotfix][test][streaming] Add an empty side output test to WindowOperatorTest.

2021-08-10 Thread GitBox


kisimple closed pull request #7195:
URL: https://github.com/apache/flink/pull/7195


   






[GitHub] [flink] lirui-apache commented on pull request #16745: [FLINK-22246]when use HiveCatalog create table , can't set Table owne…

2021-08-10 Thread GitBox


lirui-apache commented on pull request #16745:
URL: https://github.com/apache/flink/pull/16745#issuecomment-895854266


   > because authentication and authorization are different, and in a non-secure 
cluster, authentication is disabled while authorization may be enabled
   
   But the problem is that `HiveCatalog` (or the underlying metastore client) still 
uses UGI to connect to HMS. If authorization is a concern, wouldn't that trigger 
the issue first, before you can create any tables? Even if the connection can be 
made, the authorization check still relies on UGI. For example, in storage-based 
authorization, HMS checks FS permissions with the client UGI.
   
   BTW, a `HiveCatalog` instance won't be used on TMs. If you use the sql-client, 
it's only used on the client node. If you submit a program in application mode, 
it's used on the JM. And the current UGI in a non-secure env is usually the user 
running the process, not the hostname of the machine.






[GitHub] [flink] akalash commented on a change in pull request #16637: [FLINK-23189][checkpoint]Count and fail the task when the disk is error on JobManager

2021-08-10 Thread GitBox


akalash commented on a change in pull request #16637:
URL: https://github.com/apache/flink/pull/16637#discussion_r685828704



##
File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/checkpoint/CheckpointFailureManager.java
##
@@ -136,6 +135,8 @@ public void checkFailureCounter(CheckpointException 
exception, long checkpointId
 // ignore
 break;
 
+case EXCEPTION:

Review comment:
   @AHeise In this PR a new `CheckpointFailureReason` is introduced 
(`IO_EXCEPTION`). Now we want to replace the existing `EXCEPTION` with 
`IO_EXCEPTION` (because according to the code it is exactly an IO exception). 
Also, `IO_EXCEPTION` would not be ignored by `FailureManager` as `EXCEPTION` 
was. Do you have any objections to this?
   
   @zlzhang0122 If nobody minds, I suggest getting rid of `EXCEPTION` and 
replacing it with `IO_EXCEPTION`, but it is better to do so in two separate 
commits (under one ticket). As a result of this ticket, you will have two 
commits: the first adds `IO_EXCEPTION` without touching `EXCEPTION`, and the 
second replaces `EXCEPTION` with `IO_EXCEPTION`.








[GitHub] [flink] wuchong merged pull request #16716: [FLINK-23631][docs] Fix the link for page zh/docs/dev/dataset/examples

2021-08-10 Thread GitBox


wuchong merged pull request #16716:
URL: https://github.com/apache/flink/pull/16716


   






[jira] [Closed] (FLINK-23631) [docs-zh] The name attribute of GH_link in shortcodes was lost. As a result, the Link to Flink-examexam-batch failed

2021-08-10 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-23631?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu closed FLINK-23631.
---
Fix Version/s: 1.14.0
   Resolution: Fixed

Fixed in master: bfb0d79526c487f972a28041404bac79e0499371

> [docs-zh] The name attribute of GH_link in shortcodes was lost. As a result, 
> the Link to Flink-examexam-batch failed
> 
>
> Key: FLINK-23631
> URL: https://issues.apache.org/jira/browse/FLINK-23631
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation
>Reporter: wuguihu
>Assignee: wuguihu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.14.0
>
> Attachments: image-20210805011518012.png, 
> image-20210805014000555.png, image-20210808002209467.png
>
>
> 1.  
> [https://ci.apache.org/projects/flink/flink-docs-master/zh/docs/dev/dataset/examples/]
> 2. docs/content.zh/docs/dev/dataset/examples.md
> 3. The shortcode below contains two 'file' attributes and no 'name'
> {code:java}
>  {{< gh_link file="flink-examples/flink-examples-batch" 
> file="flink-examples-batch" >}}
> {code}
> 4. It should be modified as follows
> {code:java}
>  {{< gh_link file="flink-examples/flink-examples-batch" 
> name="flink-examples-batch" >}}
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] wuchong commented on pull request #16743: [FLINK-23645][Table SQL / Client] Fix SqlClient doesn't response correctly to Signal.INT

2021-08-10 Thread GitBox


wuchong commented on pull request #16743:
URL: https://github.com/apache/flink/pull/16743#issuecomment-895858803


   cc @fsk119 






[jira] [Updated] (FLINK-23631) Fix broken links in documentations

2021-08-10 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-23631?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu updated FLINK-23631:

Summary: Fix broken links in documentations  (was: [docs-zh] The name 
attribute of GH_link in shortcodes was lost. As a result, the Link to 
Flink-examexam-batch failed)

> Fix broken links in documentations
> --
>
> Key: FLINK-23631
> URL: https://issues.apache.org/jira/browse/FLINK-23631
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation
>Reporter: wuguihu
>Assignee: wuguihu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.14.0
>
> Attachments: image-20210805011518012.png, 
> image-20210805014000555.png, image-20210808002209467.png
>
>
> 1.  
> [https://ci.apache.org/projects/flink/flink-docs-master/zh/docs/dev/dataset/examples/]
> 2. docs/content.zh/docs/dev/dataset/examples.md
> 3. The shortcode below contains two 'file' attributes and no 'name'
> {code:java}
>  {{< gh_link file="flink-examples/flink-examples-batch" 
> file="flink-examples-batch" >}}
> {code}
> 4. It should be modified as follows
> {code:java}
>  {{< gh_link file="flink-examples/flink-examples-batch" 
> name="flink-examples-batch" >}}
> {code}





[GitHub] [flink] flinkbot edited a comment on pull request #16745: [FLINK-22246]when use HiveCatalog create table , can't set Table owne…

2021-08-10 Thread GitBox


flinkbot edited a comment on pull request #16745:
URL: https://github.com/apache/flink/pull/16745#issuecomment-894632163


   
   ## CI report:
   
   * a804087bac58fe874f2978e47e309ca648e16edc Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=21805)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   






[GitHub] [flink] flinkbot edited a comment on pull request #16740: [FLINK-23614][table-planner] The resulting scale of TRUNCATE(DECIMAL,…

2021-08-10 Thread GitBox


flinkbot edited a comment on pull request #16740:
URL: https://github.com/apache/flink/pull/16740#issuecomment-894176423


   
   ## CI report:
   
   *  Unknown: [CANCELED](TBD) 
   * ae3ae37c52e99c8ab1f294545445038b4e469e0d Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=21813)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   






[GitHub] [flink] flinkbot edited a comment on pull request #16764: [FLINK-23693][tests] Add principal creation possibilities to SecureTestEnvironment

2021-08-10 Thread GitBox


flinkbot edited a comment on pull request #16764:
URL: https://github.com/apache/flink/pull/16764#issuecomment-895836639


   
   ## CI report:
   
   * 2bce8e962ba637e992e25b058b47dbac5efeb52b Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=21814)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   






[GitHub] [flink] flinkbot edited a comment on pull request #16765: [FLINK-23692] Clarify text on Cancel Job confirmation buttons

2021-08-10 Thread GitBox


flinkbot edited a comment on pull request #16765:
URL: https://github.com/apache/flink/pull/16765#issuecomment-895836722


   
   ## CI report:
   
   * b95100a0fb1ea2c8a1adc1530d782018c59ef7af Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=21815)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   






[GitHub] [flink] zentol commented on a change in pull request #16765: [FLINK-23692] Clarify text on Cancel Job confirmation buttons

2021-08-10 Thread GitBox


zentol commented on a change in pull request #16765:
URL: https://github.com/apache/flink/pull/16765#discussion_r685836834



##
File path: 
flink-runtime-web/web-dashboard/src/app/pages/job/status/job-status.component.html
##
@@ -51,7 +51,7 @@ {{ jobDetail.name }}
   
 {{ statusTips }}
 
-  Cancel Job
+Cancel Job

Review comment:
   - please revert the indentation change
   - why are you removing the webCancelEnabled check?








[jira] [Created] (FLINK-23701) Fix Error translation in "Builtin Watermark Generators" page

2021-08-10 Thread Aiden Gong (Jira)
Aiden Gong created FLINK-23701:
--

 Summary: Fix Error translation in "Builtin Watermark Generators" 
page
 Key: FLINK-23701
 URL: https://issues.apache.org/jira/browse/FLINK-23701
 Project: Flink
  Issue Type: Improvement
  Components: Documentation, Documentation / Training
Affects Versions: 1.14.0
Reporter: Aiden Gong
 Fix For: 1.14.0


Error translation in "Builtin Watermark Generators" page 
[https://ci.apache.org/projects/flink/flink-docs-master/zh/docs/dev/datastream/event-time/built_in/|[https://ci.apache.org/projects/flink/flink-docs-master/zh/docs/dev/datastream/event-time/built_in/].]





[jira] [Updated] (FLINK-23701) Fix Error translation in "Builtin Watermark Generators" page

2021-08-10 Thread Aiden Gong (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-23701?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aiden Gong updated FLINK-23701:
---
Attachment: 企业微信截图_16285874495585.png
企业微信截图_16285874228277.png

> Fix Error translation in "Builtin Watermark Generators" page
> 
>
> Key: FLINK-23701
> URL: https://issues.apache.org/jira/browse/FLINK-23701
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation, Documentation / Training
>Affects Versions: 1.14.0
>Reporter: Aiden Gong
>Priority: Minor
> Fix For: 1.14.0
>
> Attachments: 企业微信截图_16285874228277.png, 企业微信截图_16285874495585.png
>
>
> Error translation in "Builtin Watermark Generators" page 
> [https://ci.apache.org/projects/flink/flink-docs-master/zh/docs/dev/datastream/event-time/built_in/|[https://ci.apache.org/projects/flink/flink-docs-master/zh/docs/dev/datastream/event-time/built_in/].]





[jira] [Updated] (FLINK-23701) Fix Error translation in "Builtin Watermark Generators" page

2021-08-10 Thread Aiden Gong (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-23701?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aiden Gong updated FLINK-23701:
---
Description: 
Error translation in "Builtin Watermark Generators" page 
[https://ci.apache.org/projects/flink/flink-docs-master/zh/docs/dev/datastream/event-time/built_in/|[https://ci.apache.org/projects/flink/flink-docs-master/zh/docs/dev/datastream/event-time/built_in/].]

The error translation is located in the following screenshots.
 

  was:Error translation in "Builtin Watermark Generators" page 
[https://ci.apache.org/projects/flink/flink-docs-master/zh/docs/dev/datastream/event-time/built_in/|[https://ci.apache.org/projects/flink/flink-docs-master/zh/docs/dev/datastream/event-time/built_in/].]


> Fix Error translation in "Builtin Watermark Generators" page
> 
>
> Key: FLINK-23701
> URL: https://issues.apache.org/jira/browse/FLINK-23701
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation, Documentation / Training
>Affects Versions: 1.14.0
>Reporter: Aiden Gong
>Priority: Minor
> Fix For: 1.14.0
>
> Attachments: 企业微信截图_16285874228277.png, 企业微信截图_16285874495585.png
>
>
> Error translation in "Builtin Watermark Generators" page 
> [https://ci.apache.org/projects/flink/flink-docs-master/zh/docs/dev/datastream/event-time/built_in/|[https://ci.apache.org/projects/flink/flink-docs-master/zh/docs/dev/datastream/event-time/built_in/].]
> The error translation is located in the following screenshots.
>  





[jira] [Created] (FLINK-23702) SQL casting to decimal which cannot hold its original value throws 'Column 'EXPR$0' is NOT NULL, however, a null value is being written into it'

2021-08-10 Thread Yao Zhang (Jira)
Yao Zhang created FLINK-23702:
-

 Summary: SQL casting to decimal which cannot hold its original 
value throws 'Column 'EXPR$0' is NOT NULL, however, a null value is being 
written into it'
 Key: FLINK-23702
 URL: https://issues.apache.org/jira/browse/FLINK-23702
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / Runtime
Affects Versions: 1.13.2
Reporter: Yao Zhang


If we run the following SQL:
{code:sql}
select CAST( 1234567.123456789 as DECIMAL(10, 3))
{code}
Flink gets the correct result, because 1234567.123 fits the target definition 
(precision=10 and scale=3); it only loses accuracy in the fractional part.

However, if we execute:
{code:sql}
select CAST( 1234567.123456789 as DECIMAL(9, 3))
{code}
it results in an exception, as the original value cannot be stored according 
to this definition.
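The overflow can be reproduced with plain `BigDecimal` (a minimal sketch; `castToDecimal` is a hypothetical helper, not Flink's API — it mirrors how a value that exceeds the target precision becomes null, which is what appears to trip the NOT NULL check downstream):

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

public class DecimalCastDemo {

    // Hypothetical helper mirroring a cast to DECIMAL(precision, scale):
    // round to the target scale, then return null if the total number of
    // digits no longer fits the target precision.
    static BigDecimal castToDecimal(BigDecimal v, int precision, int scale) {
        BigDecimal rounded = v.setScale(scale, RoundingMode.HALF_UP);
        return rounded.precision() <= precision ? rounded : null;
    }

    public static void main(String[] args) {
        BigDecimal v = new BigDecimal("1234567.123456789");
        // 1234567.123 has 10 digits, so it fits DECIMAL(10, 3).
        System.out.println(castToDecimal(v, 10, 3)); // 1234567.123
        // 10 digits cannot fit DECIMAL(9, 3), so the cast yields null.
        System.out.println(castToDecimal(v, 9, 3));  // null
    }
}
```

A clearer exception would report the overflow at the cast itself instead of surfacing later as a NOT NULL violation.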

But the exception message seems unrelated to its root cause:

Exception in thread "main" java.lang.RuntimeException: Failed to fetch next 
result
 at 
org.apache.flink.streaming.api.operators.collect.CollectResultIterator.nextResultFromFetcher(CollectResultIterator.java:109)
 at 
org.apache.flink.streaming.api.operators.collect.CollectResultIterator.hasNext(CollectResultIterator.java:80)
 at 
org.apache.flink.table.api.internal.TableResultImpl$CloseableRowIteratorWrapper.hasNext(TableResultImpl.java:370)
 at 
org.apache.flink.table.utils.PrintUtils.printAsTableauForm(PrintUtils.java:152)
 at 
org.apache.flink.table.api.internal.TableResultImpl.print(TableResultImpl.java:160)
 at com.paultech.sql.SqlStreamEnvDemo$.main(SqlStreamEnvDemo.scala:70)
 at com.paultech.sql.SqlStreamEnvDemo.main(SqlStreamEnvDemo.scala)
 Caused by: java.io.IOException: Failed to fetch job execution result
 at 
org.apache.flink.streaming.api.operators.collect.CollectResultFetcher.getAccumulatorResults(CollectResultFetcher.java:177)
 at 
org.apache.flink.streaming.api.operators.collect.CollectResultFetcher.next(CollectResultFetcher.java:120)
 at 
org.apache.flink.streaming.api.operators.collect.CollectResultIterator.nextResultFromFetcher(CollectResultIterator.java:106)
 ... 6 more
 Caused by: java.util.concurrent.ExecutionException: 
org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
 at java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357)
 at java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1928)
 at 
org.apache.flink.streaming.api.operators.collect.CollectResultFetcher.getAccumulatorResults(CollectResultFetcher.java:175)
 ... 8 more
 Caused by: org.apache.flink.runtime.client.JobExecutionException: Job 
execution failed.
 at 
org.apache.flink.runtime.jobmaster.JobResult.toJobExecutionResult(JobResult.java:144)
 at 
org.apache.flink.runtime.minicluster.MiniClusterJobClient.lambda$getJobExecutionResult$3(MiniClusterJobClient.java:137)
 at java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:616)
 at 
java.util.concurrent.CompletableFuture.uniApplyStage(CompletableFuture.java:628)
 at 
java.util.concurrent.CompletableFuture.thenApply(CompletableFuture.java:1996)
 at 
org.apache.flink.runtime.minicluster.MiniClusterJobClient.getJobExecutionResult(MiniClusterJobClient.java:134)
 at 
org.apache.flink.streaming.api.operators.collect.CollectResultFetcher.getAccumulatorResults(CollectResultFetcher.java:174)
 ... 8 more
 Caused by: org.apache.flink.runtime.JobException: Recovery is suppressed by 
NoRestartBackoffTimeStrategy
 at 
org.apache.flink.runtime.executiongraph.failover.flip1.ExecutionFailureHandler.handleFailure(ExecutionFailureHandler.java:138)
 at 
org.apache.flink.runtime.executiongraph.failover.flip1.ExecutionFailureHandler.getFailureHandlingResult(ExecutionFailureHandler.java:82)
 at 
org.apache.flink.runtime.scheduler.DefaultScheduler.handleTaskFailure(DefaultScheduler.java:207)
 at 
org.apache.flink.runtime.scheduler.DefaultScheduler.maybeHandleTaskFailure(DefaultScheduler.java:197)
 at 
org.apache.flink.runtime.scheduler.DefaultScheduler.updateTaskExecutionStateInternal(DefaultScheduler.java:188)
 at 
org.apache.flink.runtime.scheduler.SchedulerBase.updateTaskExecutionState(SchedulerBase.java:677)
 at 
org.apache.flink.runtime.scheduler.SchedulerNG.updateTaskExecutionState(SchedulerNG.java:79)
 at 
org.apache.flink.runtime.jobmaster.JobMaster.updateTaskExecutionState(JobMaster.java:435)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
 at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:498)
 at 
org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcInvocation(AkkaRpcActor.java:305)
 at 
org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcMessage(AkkaRpcActor.java:212)
 at 
org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActo

[jira] [Created] (FLINK-23703) Support unrecoverable exception class list in configuration

2021-08-10 Thread Paul Lin (Jira)
Paul Lin created FLINK-23703:


 Summary: Support unrecoverable exception class list in 
configuration
 Key: FLINK-23703
 URL: https://issues.apache.org/jira/browse/FLINK-23703
 Project: Flink
  Issue Type: New Feature
  Components: Runtime / Configuration
Reporter: Paul Lin


Currently, users can use `@ThrowableAnnotation` to denote that a custom 
exception is unrecoverable and avoid unnecessary retries, but it is not 
possible to annotate an exception from the standard libs or third-party libs. 

Thus, I propose adding a configuration option, such as 
`unrecoverable.exceptions`, whose value is a comma-separated list of 
fully-qualified exception class names, with a default value covering the most 
common unrecoverable exceptions like NPE, ClassNotFoundException, etc. 
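A minimal sketch of how such an option could be parsed and applied (the class, method, and option names here are assumptions drawn from the proposal, not an existing Flink API):

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class UnrecoverableExceptions {

    // Parse the proposed comma-separated list of fully-qualified class names,
    // e.g. the value of a hypothetical `unrecoverable.exceptions` option.
    static List<Class<?>> parse(String configValue) {
        return Arrays.stream(configValue.split(","))
                .map(String::trim)
                .filter(name -> !name.isEmpty())
                .map(name -> {
                    try {
                        return (Class<?>) Class.forName(name);
                    } catch (ClassNotFoundException e) {
                        throw new IllegalArgumentException("Unknown class: " + name, e);
                    }
                })
                .collect(Collectors.toList());
    }

    // A throwable is unrecoverable if it is an instance of any listed class;
    // subclasses match too, via isInstance.
    static boolean isUnrecoverable(Throwable t, List<Class<?>> unrecoverable) {
        return unrecoverable.stream().anyMatch(c -> c.isInstance(t));
    }

    public static void main(String[] args) {
        List<Class<?>> classes =
                parse("java.lang.NullPointerException, java.lang.ClassNotFoundException");
        System.out.println(isUnrecoverable(new NullPointerException(), classes)); // true
        System.out.println(isUnrecoverable(new RuntimeException(), classes));     // false
    }
}
```

Matching by `isInstance` rather than exact class name lets the list also cover subclasses of the configured exceptions.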





[jira] [Commented] (FLINK-23701) Fix Error translation in "Builtin Watermark Generators" page

2021-08-10 Thread Aiden Gong (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-23701?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17396565#comment-17396565
 ] 

Aiden Gong commented on FLINK-23701:


Hi [~jark]. Please assign this to me, I have fixed it. Thanks!
[|https://issues.apache.org/jira/secure/AddComment!default.jspa?id=13390685]

> Fix Error translation in "Builtin Watermark Generators" page
> 
>
> Key: FLINK-23701
> URL: https://issues.apache.org/jira/browse/FLINK-23701
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation, Documentation / Training
>Affects Versions: 1.14.0
>Reporter: Aiden Gong
>Priority: Minor
> Fix For: 1.14.0
>
> Attachments: 企业微信截图_16285874228277.png, 企业微信截图_16285874495585.png
>
>
> Error translation in "Builtin Watermark Generators" page 
> [https://ci.apache.org/projects/flink/flink-docs-master/zh/docs/dev/datastream/event-time/built_in/|[https://ci.apache.org/projects/flink/flink-docs-master/zh/docs/dev/datastream/event-time/built_in/].]
> The error translation is located in the following screenshots.
>  





[jira] [Comment Edited] (FLINK-23701) Fix Error translation in "Builtin Watermark Generators" page

2021-08-10 Thread Aiden Gong (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-23701?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17396565#comment-17396565
 ] 

Aiden Gong edited comment on FLINK-23701 at 8/10/21, 9:29 AM:
--

Hi [~jark]. Please assign this to me, I have fixed it. Thanks!


 


was (Author: aiden gong):
Hi [~jark]. Please assign this to me, I have fixed it. Thanks!
[|https://issues.apache.org/jira/secure/AddComment!default.jspa?id=13390685]

> Fix Error translation in "Builtin Watermark Generators" page
> 
>
> Key: FLINK-23701
> URL: https://issues.apache.org/jira/browse/FLINK-23701
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation, Documentation / Training
>Affects Versions: 1.14.0
>Reporter: Aiden Gong
>Priority: Minor
> Fix For: 1.14.0
>
> Attachments: 企业微信截图_16285874228277.png, 企业微信截图_16285874495585.png
>
>
> Error translation in "Builtin Watermark Generators" page 
> [https://ci.apache.org/projects/flink/flink-docs-master/zh/docs/dev/datastream/event-time/built_in/|[https://ci.apache.org/projects/flink/flink-docs-master/zh/docs/dev/datastream/event-time/built_in/].]
> The error translation is located in the following screenshots.
>  





[GitHub] [flink] flinkbot edited a comment on pull request #16637: [FLINK-23189][checkpoint]Count and fail the task when the disk is error on JobManager

2021-08-10 Thread GitBox


flinkbot edited a comment on pull request #16637:
URL: https://github.com/apache/flink/pull/16637#issuecomment-889099584


   
   ## CI report:
   
   * eb8c1b59b1fe328cfefc51c1204ced5202ad4448 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=21702)
 
   * afe05192b1bd26789bd0b7da3a773e63bc2215a1 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   






[GitHub] [flink] zentol commented on a change in pull request #16750: [FLINK-23627] [Runtime / Metrics] Migrate all OperatorMG instantiations to factory method

2021-08-10 Thread GitBox


zentol commented on a change in pull request #16750:
URL: https://github.com/apache/flink/pull/16750#discussion_r685854332



##
File path: 
flink-streaming-java/src/test/java/org/apache/flink/streaming/runtime/tasks/TwoInputStreamTaskTest.java
##
@@ -473,7 +473,7 @@ public void testOperatorMetricReuse() throws Exception {
 @Override
 public OperatorMetricGroup getOrAddOperator(
 OperatorID operatorID, String name) {
-return new OperatorMetricGroup(
+return OperatorMetricGroup.createOperatorMetricGroup(

Review comment:
   hmm could we do something like this?
   
   ```
   TaskMetricGroup taskMetricGroup =
   new UnregisteredMetricGroups.UnregisteredTaskMetricGroup() {
   @Override
   public OperatorMetricGroup getOrAddOperator(
   OperatorID operatorID, String name) {
   OperatorMetricGroup operatorMetricGroup = 
super.getOrAddOperator(operatorID, name);
   operatorMetrics.put(name, operatorMetricGroup);
   return operatorMetricGroup;
   }
   };
   ```








[GitHub] [flink] flinkbot edited a comment on pull request #16741: [FLINK-23662][rpc][akka] Port all Scala code to Java

2021-08-10 Thread GitBox


flinkbot edited a comment on pull request #16741:
URL: https://github.com/apache/flink/pull/16741#issuecomment-894188972


   
   ## CI report:
   
   * 12d7444aeeb1621eb2472b26727024fb68b085e1 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=21789)
 
   * 3cfadc7d4b63776ba11962d656c9867033a7f808 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   






[jira] [Created] (FLINK-23704) FLIP-27 sources are not generating LatencyMarkers

2021-08-10 Thread Piotr Nowojski (Jira)
Piotr Nowojski created FLINK-23704:
--

 Summary: FLIP-27 sources are not generating LatencyMarkers
 Key: FLINK-23704
 URL: https://issues.apache.org/jira/browse/FLINK-23704
 Project: Flink
  Issue Type: Bug
  Components: API / DataStream, Runtime / Task
Affects Versions: 1.13.2, 1.12.5, 1.14.0
Reporter: Piotr Nowojski


Currently {{LatencyMarker}} is created only in 
{{StreamSource.LatencyMarksEmitter#LatencyMarksEmitter}}. FLIP-27 sources are 
never emitting it.





[GitHub] [flink] zentol merged pull request #16742: [FLINK-23626] [Runtime / Metrics] Migrate all TaskMG instantiations to factory method

2021-08-10 Thread GitBox


zentol merged pull request #16742:
URL: https://github.com/apache/flink/pull/16742


   






[jira] [Updated] (FLINK-23701) Fix Error translation in "Builtin Watermark Generators" page

2021-08-10 Thread Aiden Gong (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-23701?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aiden Gong updated FLINK-23701:
---
Description: 
Error translation in "Builtin Watermark Generators" page 
[https://ci.apache.org/projects/flink/flink-docs-master/zh/docs/dev/datastream/event-time/built_in/].

The error translation is located in the following screenshots.
  

  was:
Error translation in "Builtin Watermark Generators" page 
[https://ci.apache.org/projects/flink/flink-docs-master/zh/docs/dev/datastream/event-time/built_in/|[https://ci.apache.org/projects/flink/flink-docs-master/zh/docs/dev/datastream/event-time/built_in/].]

The error translation is located in the following screenshots.
 


> Fix Error translation in "Builtin Watermark Generators" page
> 
>
> Key: FLINK-23701
> URL: https://issues.apache.org/jira/browse/FLINK-23701
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation, Documentation / Training
>Affects Versions: 1.14.0
>Reporter: Aiden Gong
>Priority: Minor
> Fix For: 1.14.0
>
> Attachments: 企业微信截图_16285874228277.png, 企业微信截图_16285874495585.png
>
>
> Error translation in "Builtin Watermark Generators" page 
> [https://ci.apache.org/projects/flink/flink-docs-master/zh/docs/dev/datastream/event-time/built_in/].
> The error translation is located in the following screenshots.
>   





[jira] [Closed] (FLINK-23626) Migrate all TaskMG instantiations to factory method

2021-08-10 Thread Chesnay Schepler (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-23626?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler closed FLINK-23626.

Resolution: Fixed

master: bd95c729e901cac4fd0b75229bf5b42caae9b928

> Migrate all TaskMG instantiations to factory method
> ---
>
> Key: FLINK-23626
> URL: https://issues.apache.org/jira/browse/FLINK-23626
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Metrics
>Reporter: Chesnay Schepler
>Assignee: liwei li
>Priority: Major
>  Labels: pull-request-available, starter
> Fix For: 1.14.0
>
>
> Modify all existing usages of the TaskMG constructor to use runtime APIs, for 
> consistency and to make constructor changes easier.





[jira] [Updated] (FLINK-23701) Fix Error translation in "Builtin Watermark Generators" page

2021-08-10 Thread Aiden Gong (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-23701?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aiden Gong updated FLINK-23701:
---
Issue Type: Bug  (was: Improvement)

> Fix Error translation in "Builtin Watermark Generators" page
> 
>
> Key: FLINK-23701
> URL: https://issues.apache.org/jira/browse/FLINK-23701
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation, Documentation / Training
>Affects Versions: 1.14.0
>Reporter: Aiden Gong
>Priority: Minor
> Fix For: 1.14.0
>
> Attachments: 企业微信截图_16285874228277.png, 企业微信截图_16285874495585.png
>
>
> Error translation in "Builtin Watermark Generators" page 
> [https://ci.apache.org/projects/flink/flink-docs-master/zh/docs/dev/datastream/event-time/built_in/].
> The error translation is located in the following screenshots.
>   





[GitHub] [flink] cuibo01 commented on pull request #16745: [FLINK-22246]when use HiveCatalog create table , can't set Table owne…

2021-08-10 Thread GitBox


cuibo01 commented on pull request #16745:
URL: https://github.com/apache/flink/pull/16745#issuecomment-895883541


   > BTW, the `HiveCatalog` instance won't be used on TMs. If you use the 
sql-client, it's only used on the client node. If you submit a program in 
application mode, it's used on the JM. And the current UGI in a non-secure env 
is usually the user running the process, not the hostname of the machine.
   
   yes, it is not the TM; it is the client user or process user.
   






[jira] [Created] (FLINK-23705) Unregister MetricGroup after shard closes in Kinesis connector

2021-08-10 Thread Elphas Toringepi (Jira)
Elphas Toringepi created FLINK-23705:


 Summary: Unregister MetricGroup after shard closes in Kinesis 
connector
 Key: FLINK-23705
 URL: https://issues.apache.org/jira/browse/FLINK-23705
 Project: Flink
  Issue Type: Bug
  Components: Connectors / Kinesis
Affects Versions: 1.13.2, 1.13.1, 1.13.0
Reporter: Elphas Toringepi
 Fix For: 1.13.2, 1.13.1, 1.13.0


The Kinesis connector continues to report metrics for closed shards after 
resharding, leading to an incorrect millisBehindLatest metric being reported 
on the Flink dashboard.

This results in incorrect aggregated metrics for the entire stream, even 
after restarting.





[jira] [Updated] (FLINK-23703) Support unrecoverable exception class list in configuration

2021-08-10 Thread Paul Lin (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-23703?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paul Lin updated FLINK-23703:
-
Description: 
Currently, users can use `@ThrowableAnnotation` to denote that a custom 
exception is unrecoverable, to avoid unnecessary retries, but it is not 
possible to annotate an exception from the standard libs or third-party libs. 

Thus, I propose to add a configuration option, such as 
`unrecoverable.exceptions`, whose value is a comma-separated list of 
fully-qualified exception names, with a default value covering the most 
common unrecoverable exceptions like NPE, ClassNotFoundException, etc. 

  was:
Currently users can use `@ThrowableAnnotation` to dento a custom exception is 
unrecoverable to avoid unnecessary retries, but it is not possible to annotate 
an exception from the standard libs or third-party libs. 

Thus I propose to add a configuration option,  such as 
`unrecoverable.exceptions`, of which values are a comma-separated list of 
fully-qualified names of exceptions, with a default value covering the most 
common unrecoverable exceptions like NPE, ClassNotFoundException, etc. 


> Support unrecoverable exception class list in configuration
> ---
>
> Key: FLINK-23703
> URL: https://issues.apache.org/jira/browse/FLINK-23703
> Project: Flink
>  Issue Type: New Feature
>  Components: Runtime / Configuration
>Reporter: Paul Lin
>Priority: Minor
>
> Currently, users can use `@ThrowableAnnotation` to denote that a custom 
> exception is unrecoverable, to avoid unnecessary retries, but it is not 
> possible to annotate an exception from the standard libs or third-party libs. 
> Thus, I propose to add a configuration option, such as 
> `unrecoverable.exceptions`, whose value is a comma-separated list of 
> fully-qualified exception names, with a default value covering the most 
> common unrecoverable exceptions like NPE, ClassNotFoundException, etc. 
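The matching logic the proposal implies could be sketched as follows. This is a minimal, self-contained illustration; the option name `unrecoverable.exceptions`, the class `UnrecoverableExceptions`, and the cause-chain traversal are assumptions drawn from the proposal text, not an existing Flink API.

```java
import java.util.Arrays;
import java.util.List;

public class UnrecoverableExceptions {
    // Hypothetical default value mirroring the examples in the proposal.
    static final String DEFAULT =
            "java.lang.NullPointerException,java.lang.ClassNotFoundException";

    private final List<String> classNames;

    UnrecoverableExceptions(String configValue) {
        // Parse the comma-separated list of fully-qualified class names.
        this.classNames = Arrays.asList(configValue.split("\\s*,\\s*"));
    }

    /** Returns true if the failure, or any of its causes, is configured as unrecoverable. */
    boolean isUnrecoverable(Throwable t) {
        for (Throwable cur = t; cur != null; cur = cur.getCause()) {
            if (classNames.contains(cur.getClass().getName())) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        UnrecoverableExceptions filter = new UnrecoverableExceptions(DEFAULT);
        System.out.println(filter.isUnrecoverable(new NullPointerException()));           // true
        System.out.println(filter.isUnrecoverable(
                new RuntimeException(new ClassNotFoundException("com.example.Missing")))); // true, via the cause
        System.out.println(filter.isUnrecoverable(new java.io.IOException("transient")));  // false
    }
}
```

Walking the cause chain matters because user exceptions frequently wrap the root cause, so a top-level-only check would miss most real failures.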



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (FLINK-23701) Fix Error translation in "Builtin Watermark Generators" page

2021-08-10 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-23701?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu reassigned FLINK-23701:
---

Assignee: Aiden Gong

> Fix Error translation in "Builtin Watermark Generators" page
> 
>
> Key: FLINK-23701
> URL: https://issues.apache.org/jira/browse/FLINK-23701
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation, Documentation / Training
>Affects Versions: 1.14.0
>Reporter: Aiden Gong
>Assignee: Aiden Gong
>Priority: Minor
> Fix For: 1.14.0
>
> Attachments: 企业微信截图_16285874228277.png, 企业微信截图_16285874495585.png
>
>
> Error translation in "Builtin Watermark Generators" page 
> [https://ci.apache.org/projects/flink/flink-docs-master/zh/docs/dev/datastream/event-time/built_in/].
> The erroneous translation is shown in the following screenshot.
>   



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] cuibo01 commented on pull request #16745: [FLINK-22246]when use HiveCatalog create table , can't set Table owne…

2021-08-10 Thread GitBox


cuibo01 commented on pull request #16745:
URL: https://github.com/apache/flink/pull/16745#issuecomment-895893487


   > wouldn't that trigger the issue first before you can create any tables?
   
   No, HMS allows requests to get all information and does not check whether 
the request has permission on the table; Hive performs that permission check. 
So I think Flink should support setting the owner.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink-benchmarks] pnowojski opened a new pull request #29: [FLINK-23689] Increase ProcessingTimerBenchmark length

2021-08-10 Thread GitBox


pnowojski opened a new pull request #29:
URL: https://github.com/apache/flink-benchmarks/pull/29


   The previous count of 1000 records was forced upon this benchmark by
   very slow production code. After resolving FLINK-23689 we can
   safely increase the number of records; otherwise we would be testing
   the setup time of a Flink job.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-23689) TestProcessingTimeService.setCurrentTime(long) not delay the firing timer by 1ms delay

2021-08-10 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-23689?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-23689:
---
Labels: pull-request-available test-stability  (was: test-stability)

> TestProcessingTimeService.setCurrentTime(long) not delay the firing timer by 
> 1ms delay
> --
>
> Key: FLINK-23689
> URL: https://issues.apache.org/jira/browse/FLINK-23689
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Task, Tests
>Affects Versions: 1.11.4, 1.14.0, 1.12.4, 1.13.2
>Reporter: Jiayi Liao
>Priority: Major
>  Labels: pull-request-available, test-stability
>
> FLINK-9857 enabled {{SystemProcessingTimeService}} to fire processing timers 
> with a 1 ms delay, but it ignored {{TestProcessingTimeService}}, and the method 
> {{TestProcessingTimeService.setCurrentTime}} can still trigger the processing 
> timers whose timestamp equals {{currentTime}}. 
> {code:java}
> while (!priorityQueue.isEmpty() && currentTime >= priorityQueue.peek().f0) {
> ..
> }
> {code}
> We can simply fix the problem with {{currentTime > priorityQueue.peek().f0}}, 
> but it will break too many existing tests: 
> * Tests using {{TestProcessingTimeService.setCurrentTime(long)}} (64 usages)
> * Tests using {{AbstractStreamOperatorTestHarness.setProcessingTime(long)}} 
> (368 usages)
> Tip: Some tests can be fixed by adding 1 ms to the parameter of 
> {{TestProcessingTimeService.setCurrentTime(long)}}, but this does not work in 
> tests that invoke {{setCurrentTime(long)}} several times.
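The difference between the two loop conditions can be shown with a self-contained sketch. It models the timer queue as a plain `PriorityQueue<Long>` of timestamps rather than Flink's `Tuple2`-based queue, so it is an illustration of the condition change, not the actual `TestProcessingTimeService` code.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.PriorityQueue;

public class TimerFiring {
    /**
     * Fires due timers and returns their timestamps.
     * strict=false models the current `>=` condition; strict=true models the
     * proposed `>` condition that matches the 1 ms delay of the real service.
     */
    static List<Long> fire(PriorityQueue<Long> timers, long currentTime, boolean strict) {
        List<Long> fired = new ArrayList<>();
        while (!timers.isEmpty()
                && (strict ? currentTime > timers.peek() : currentTime >= timers.peek())) {
            fired.add(timers.poll());
        }
        return fired;
    }

    public static void main(String[] args) {
        // With `>=`, setting the time to 10 fires the timer registered at 10.
        System.out.println(fire(new PriorityQueue<>(List.of(10L, 20L)), 10L, false)); // [10]
        // With `>`, the same call fires nothing; tests must advance to 11 instead.
        System.out.println(fire(new PriorityQueue<>(List.of(10L, 20L)), 10L, true));  // []
    }
}
```

This is exactly why switching the condition would break tests that call `setCurrentTime(long)` with the timer's own timestamp: every such call would have to advance the clock by one extra millisecond.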



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink-benchmarks] pnowojski commented on pull request #29: [FLINK-23689] Increase ProcessingTimerBenchmark length

2021-08-10 Thread GitBox


pnowojski commented on pull request #29:
URL: https://github.com/apache/flink-benchmarks/pull/29#issuecomment-895894925


   @Jiayi-Liao could you take a look? 
   
   Increasing beyond 100_000 records stops showing any performance improvement, 
so the setup costs are already negligible. With 1_000 records, benchmark results 
were around ~7 ops/ms. With 150_000 records, results are ~250 ops/ms. 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Created] (FLINK-23706) temporary join versioned table , state TTL Setting

2021-08-10 Thread tao.yang03 (Jira)
tao.yang03 created FLINK-23706:
--

 Summary: temporary join versioned table , state TTL Setting
 Key: FLINK-23706
 URL: https://issues.apache.org/jira/browse/FLINK-23706
 Project: Flink
  Issue Type: Improvement
  Components: Table SQL / API
Affects Versions: 1.13.0
Reporter: tao.yang03


When I temporal join a [versioned 
table|https://ci.apache.org/projects/flink/flink-docs-release-1.13/zh/docs/dev/table/concepts/versioned_tables/],
 I want to set a state TTL, but I don't find this setting. Should I use 
{{table.exec.state.ttl}}? I want behavior like 
StateTtlConfig.UpdateType.OnReadAndWrite in the DataStream API. Is there 
anything I can do?
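For reference, the option mentioned above is set on the table environment's configuration. This is a config-only sketch assuming Flink 1.13's Table API; whether {{table.exec.state.ttl}} is honored by the temporal join operator, and whether an OnReadAndWrite-style update type is available there, is exactly the open question of this ticket.

```java
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

// Config-only sketch: set the idle state retention for SQL operators to 1 hour.
TableEnvironment tEnv = TableEnvironment.create(
        EnvironmentSettings.newInstance().inStreamingMode().build());
tEnv.getConfig().getConfiguration().setString("table.exec.state.ttl", "1 h");
```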



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-23707) Use consistent managed memory weights for StreamNode

2021-08-10 Thread Timo Walther (Jira)
Timo Walther created FLINK-23707:


 Summary: Use consistent managed memory weights for StreamNode
 Key: FLINK-23707
 URL: https://issues.apache.org/jira/browse/FLINK-23707
 Project: Flink
  Issue Type: Sub-task
  Components: Table SQL / Planner
Reporter: Timo Walther
Assignee: Timo Walther


Managed memory that is declared on transformations via 
{{Transformation#declareManagedMemoryUseCaseAtOperatorScope(ManagedMemoryUseCase
 managedMemoryUseCase, int weight)}} should be declared using a weight.

Usually, a weight should be some kind of factor; however, in the table planner 
it is used as a kibibyte value. This causes issues on the DataStream API side, 
which sets it to {{1}} in 
{{org.apache.flink.streaming.runtime.translators.BatchExecutionUtils#applyBatchExecutionSettings}}.
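The reason the mismatch matters: the slot's managed-memory budget is split between operators in proportion to their weights. A self-contained sketch of that proportional split (an illustration of the idea, not Flink's actual allocation code; operator names and the budget are made up):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class ManagedMemoryWeights {
    /** Splits a managed-memory budget proportionally to per-operator weights. */
    static Map<String, Long> split(long budgetBytes, Map<String, Integer> weights) {
        long total = weights.values().stream().mapToLong(Integer::longValue).sum();
        Map<String, Long> shares = new LinkedHashMap<>();
        weights.forEach((op, w) -> shares.put(op, budgetBytes * w / total));
        return shares;
    }

    public static void main(String[] args) {
        // If the planner declares a kibibyte "weight" (e.g. 128 * 1024) while the
        // DataStream side declares 1, the planner operator takes nearly everything.
        Map<String, Integer> weights = new LinkedHashMap<>();
        weights.put("plannerAgg", 128 * 1024);
        weights.put("dataStreamOp", 1);
        System.out.println(split(100_000_000L, weights));
    }
}
```

With consistent factor-like weights (say, 1 and 1), the same budget would be split evenly, which is why the ticket asks for consistent weight semantics on {{StreamNode}}.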



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on pull request #16637: [FLINK-23189][checkpoint]Count and fail the task when the disk is error on JobManager

2021-08-10 Thread GitBox


flinkbot edited a comment on pull request #16637:
URL: https://github.com/apache/flink/pull/16637#issuecomment-889099584


   
   ## CI report:
   
   * eb8c1b59b1fe328cfefc51c1204ced5202ad4448 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=21702)
 
   * afe05192b1bd26789bd0b7da3a773e63bc2215a1 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=21817)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



