[GitHub] [flink] flinkbot edited a comment on pull request #18696: [hotfix][docs] project config pages

2022-02-09 Thread GitBox


flinkbot edited a comment on pull request #18696:
URL: https://github.com/apache/flink/pull/18696#issuecomment-1034481691


   
   ## CI report:
   
   * 2921ed917d22b64cb99f05dfdd0f7e119dd86880 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31078)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-26036) LocalRecoveryITCase.testRecoverLocallyFromProcessCrashWithWorkingDirectory timeout on azure

2022-02-09 Thread Yun Gao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-26036?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17490015#comment-17490015
 ] 

Yun Gao commented on FLINK-26036:
-

Hi [~trohrmann], there seems to be another occurrence of this issue after the fix was merged: 
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=31071=logs=baf26b34-3c6a-54e8-f93f-cf269b32f802=8c9d126d-57d2-5a9e-a8c8-ff53f7b35cd9=21947

> LocalRecoveryITCase.testRecoverLocallyFromProcessCrashWithWorkingDirectory 
> timeout on azure
> ---
>
> Key: FLINK-26036
> URL: https://issues.apache.org/jira/browse/FLINK-26036
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Checkpointing
>Affects Versions: 1.15.0
>Reporter: Yun Gao
>Assignee: Till Rohrmann
>Priority: Critical
>  Labels: pull-request-available, test-stability
> Fix For: 1.15.0
>
>
> {code:java}
> 2022-02-09T02:18:17.1827314Z Feb 09 02:18:14 [ERROR] 
> org.apache.flink.test.recovery.LocalRecoveryITCase.testRecoverLocallyFromProcessCrashWithWorkingDirectory
>   Time elapsed: 62.252 s  <<< ERROR!
> 2022-02-09T02:18:17.1827940Z Feb 09 02:18:14 
> java.util.concurrent.TimeoutException
> 2022-02-09T02:18:17.1828450Z Feb 09 02:18:14  at 
> java.util.concurrent.CompletableFuture.timedGet(CompletableFuture.java:1784)
> 2022-02-09T02:18:17.1829040Z Feb 09 02:18:14  at 
> java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1928)
> 2022-02-09T02:18:17.1829752Z Feb 09 02:18:14  at 
> org.apache.flink.test.recovery.LocalRecoveryITCase.testRecoverLocallyFromProcessCrashWithWorkingDirectory(LocalRecoveryITCase.java:115)
> 2022-02-09T02:18:17.1830407Z Feb 09 02:18:14  at 
> sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 2022-02-09T02:18:17.1830954Z Feb 09 02:18:14  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 2022-02-09T02:18:17.1831582Z Feb 09 02:18:14  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 2022-02-09T02:18:17.1832135Z Feb 09 02:18:14  at 
> java.lang.reflect.Method.invoke(Method.java:498)
> 2022-02-09T02:18:17.1832697Z Feb 09 02:18:14  at 
> org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:725)
> 2022-02-09T02:18:17.1833566Z Feb 09 02:18:14  at 
> org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60)
> 2022-02-09T02:18:17.1834394Z Feb 09 02:18:14  at 
> org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131)
> 2022-02-09T02:18:17.1835125Z Feb 09 02:18:14  at 
> org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:149)
> 2022-02-09T02:18:17.1835875Z Feb 09 02:18:14  at 
> org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestableMethod(TimeoutExtension.java:140)
> 2022-02-09T02:18:17.1836565Z Feb 09 02:18:14  at 
> org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestMethod(TimeoutExtension.java:84)
> 2022-02-09T02:18:17.1837294Z Feb 09 02:18:14  at 
> org.junit.jupiter.engine.execution.ExecutableInvoker$ReflectiveInterceptorCall.lambda$ofVoidMethod$0(ExecutableInvoker.java:115)
> 2022-02-09T02:18:17.1838007Z Feb 09 02:18:14  at 
> org.junit.jupiter.engine.execution.ExecutableInvoker.lambda$invoke$0(ExecutableInvoker.java:105)
> 2022-02-09T02:18:17.1838743Z Feb 09 02:18:14  at 
> org.junit.jupiter.engine.execution.InvocationInterceptorChain$InterceptedInvocation.proceed(InvocationInterceptorChain.java:106)
> 2022-02-09T02:18:17.1839499Z Feb 09 02:18:14  at 
> org.junit.jupiter.engine.execution.InvocationInterceptorChain.proceed(InvocationInterceptorChain.java:64)
> 2022-02-09T02:18:17.1840224Z Feb 09 02:18:14  at 
> org.junit.jupiter.engine.execution.InvocationInterceptorChain.chainAndInvoke(InvocationInterceptorChain.java:45)
> 2022-02-09T02:18:17.1840952Z Feb 09 02:18:14  at 
> org.junit.jupiter.engine.execution.InvocationInterceptorChain.invoke(InvocationInterceptorChain.java:37)
> 2022-02-09T02:18:17.1841616Z Feb 09 02:18:14  at 
> org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:104)
> 2022-02-09T02:18:17.1842257Z Feb 09 02:18:14  at 
> org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:98)
> 2022-02-09T02:18:17.1842951Z Feb 09 02:18:14  at 
> org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeTestMethod$7(TestMethodTestDescriptor.java:214)
> 2022-02-09T02:18:17.1843681Z Feb 09 02:18:14  at 
> org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
> 2022-02-09T02:18:17.1844782Z Feb 09 02:18:14  at 
> 
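For context, the failure above is an ordinary `CompletableFuture` timeout surfacing through JUnit's reflective invocation chain. A minimal stand-alone sketch of that failure mode (unrelated to Flink's recovery logic; the short 100 ms timeout is a hypothetical value chosen for illustration) looks like:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class TimeoutRepro {
    public static void main(String[] args) throws Exception {
        // A future that is never completed, standing in for the
        // "wait for local recovery" future that stalled on CI.
        CompletableFuture<Void> neverDone = new CompletableFuture<>();
        try {
            // Mirrors CompletableFuture.timedGet(...) in the stack trace:
            // get(timeout) throws TimeoutException once the deadline passes.
            neverDone.get(100, TimeUnit.MILLISECONDS);
            throw new AssertionError("expected a TimeoutException");
        } catch (TimeoutException expected) {
            System.out.println("timed out as expected");
        }
    }
}
```

In the test above the future presumably never completed within the test's deadline, which produces exactly this exception type with no further cause.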

[jira] [Reopened] (FLINK-26036) LocalRecoveryITCase.testRecoverLocallyFromProcessCrashWithWorkingDirectory timeout on azure

2022-02-09 Thread Yun Gao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26036?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yun Gao reopened FLINK-26036:
-


[jira] [Updated] (FLINK-26004) Introduce ForwardForConsecutiveHashPartitioner

2022-02-09 Thread Zhu Zhu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26004?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhu Zhu updated FLINK-26004:

Description: 
If there are multiple consecutive identical hash shuffles, the SQL planner 
changes all but the first one to use a forward partitioner, so that these 
operators can be chained to avoid unnecessary shuffles.

However, sometimes the consecutive hash operators are not chained (e.g. with 
multiple inputs), and these forward partitioners turn into forward job edges. 
Such forward edges still carry the consecutive-hash assumption, so they cannot 
be changed into rescale/rebalance edges; otherwise the results can be 
incorrect. This prevents the adaptive batch scheduler from determining the 
parallelism of the downstream job vertices of other forward edges (see 
FLINK-25046).

To solve this, I propose to introduce a new 
{{ForwardForConsecutiveHashPartitioner}}. When the SQL planner optimizes the 
case of multiple consecutive identical groupBy operations, it should use the 
proposed partitioner, so that the runtime framework can decide whether the 
partitioner can be changed to hash or not.

  was:
If there are multiple consecutive the same hash shuffle(i.e. keyBy), SQL 
planner will change them except the first one to use forward partitioner, so 
that these operators can be chained to reduce unnecessary shuffles.

However, sometimes the consecutive hash operators are not chained (e.g. 
multiple inputs), and this kind of forward partitioners will turn into forward 
job edges. These forward edges still have the consecutive hash assumption, so 
that they cannot be changed into rescale/rebalance edges, otherwise it can lead 
to incorrect results. This prevents the adaptive batch scheduler from 
determining parallelism for other forward edge downstream job vertices (see 
FLINK-25046).

To solve it, I propose to introduce a new 
{{{}ForwardForConsecutiveHashPartitioner}}. When SQL planner optimizes the case 
of multiple consecutive the same groupBy, it should use the proposed 
partitioner, so that the runtime framework can further decide whether the 
partitioner can be changed to hash or not.
h4.  


> Introduce ForwardForConsecutiveHashPartitioner
> --
>
> Key: FLINK-26004
> URL: https://issues.apache.org/jira/browse/FLINK-26004
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Lijie Wang
>Priority: Major
>  Labels: pull-request-available
>
> If there are multiple consecutive identical hash shuffles, the SQL planner 
> changes all but the first one to use a forward partitioner, so that these 
> operators can be chained to avoid unnecessary shuffles.
> However, sometimes the consecutive hash operators are not chained (e.g. with 
> multiple inputs), and these forward partitioners turn into forward job 
> edges. Such forward edges still carry the consecutive-hash assumption, so 
> they cannot be changed into rescale/rebalance edges; otherwise the results 
> can be incorrect. This prevents the adaptive batch scheduler from 
> determining the parallelism of the downstream job vertices of other forward 
> edges (see FLINK-25046).
> To solve this, I propose to introduce a new 
> {{ForwardForConsecutiveHashPartitioner}}. When the SQL planner optimizes 
> the case of multiple consecutive identical groupBy operations, it should 
> use the proposed partitioner, so that the runtime framework can decide 
> whether the partitioner can be changed to hash or not.
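To make the co-location argument in the description concrete, here is a small self-contained sketch. It is not Flink's actual key-group routing (which hashes keys into key groups via murmur hash and `maxParallelism`); plain modulo channel selection is a deliberate simplification. It shows why a second hash shuffle on the same key degenerates to a forward edge, and why changing that edge's parallelism would violate the consecutive-hash assumption:

```java
public class ConsecutiveHashSketch {
    // Simplified channel selection: which downstream subtask receives a key.
    static int channel(String key, int parallelism) {
        return Math.floorMod(key.hashCode(), parallelism);
    }

    public static void main(String[] args) {
        int p = 4;
        String[] keys = {"a", "b", "c", "d", "e", "f"};

        // With equal parallelism, the second hash shuffle on the same key
        // targets exactly the subtask the record already sits on after the
        // first shuffle, so it can be replaced by a forward (local) edge.
        for (String k : keys) {
            int locationAfterFirstShuffle = channel(k, p);
            int targetOfSecondShuffle = channel(k, p);
            if (locationAfterFirstShuffle != targetOfSecondShuffle) {
                throw new AssertionError("co-location violated for " + k);
            }
        }
        System.out.println("second hash shuffle is the identity at p=" + p);

        // If the scheduler silently changed that forward edge to a different
        // parallelism (rescale/rebalance), keys would no longer live on the
        // subtask the downstream hash logic expects.
        int p2 = 3;
        boolean mismatch = false;
        for (String k : keys) {
            if (channel(k, p) != channel(k, p2)) {
                mismatch = true;
            }
        }
        System.out.println("changing parallelism breaks co-location: " + mismatch);
    }
}
```

This is why such forward edges must be marked specially (the proposed partitioner) rather than treated as freely rescalable.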



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink] flinkbot edited a comment on pull request #18675: [FLINK-25491][table-planner] Fix bug: generated code for a large IN filter can't be compiled

2022-02-09 Thread GitBox


flinkbot edited a comment on pull request #18675:
URL: https://github.com/apache/flink/pull/18675#issuecomment-1033442210


   
   ## CI report:
   
   * 2448efd3a0c18a24860160ae8bb4faec0d0636be Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31075)
 
   
   
   






[GitHub] [flink] flinkbot edited a comment on pull request #18641: [FLINK-25761] [docs] Translate Avro format page into Chinese.

2022-02-09 Thread GitBox


flinkbot edited a comment on pull request #18641:
URL: https://github.com/apache/flink/pull/18641#issuecomment-1031234534


   
   ## CI report:
   
   * 60ff1ee473b47a6fad8440dd6b9cc635633c56b8 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31076)
 
   
   
   






[jira] [Updated] (FLINK-26004) Introduce ForwardForConsecutiveHashPartitioner

2022-02-09 Thread Zhu Zhu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26004?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhu Zhu updated FLINK-26004:

Description: 
If there are multiple consecutive identical hash shuffles (i.e. keyBy), the 
SQL planner changes all but the first one to use a forward partitioner, so 
that these operators can be chained to avoid unnecessary shuffles.

However, sometimes the consecutive hash operators are not chained (e.g. with 
multiple inputs), and these forward partitioners turn into forward job edges. 
Such forward edges still carry the consecutive-hash assumption, so they cannot 
be changed into rescale/rebalance edges; otherwise the results can be 
incorrect. This prevents the adaptive batch scheduler from determining the 
parallelism of the downstream job vertices of other forward edges (see 
FLINK-25046).

To solve this, I propose to introduce a new 
{{ForwardForConsecutiveHashPartitioner}}. When the SQL planner optimizes the 
case of multiple consecutive identical groupBy operations, it should use the 
proposed partitioner, so that the runtime framework can decide whether the 
partitioner can be changed to hash or not.

  was:
If there are multiple consecutive the same groupBy(i.e. keyBy), SQL planner 
will change them except the first one to use forward partitioner, so that these 
operators can be chained to reduce unnecessary shuffles.

However, sometimes the consecutive hash operators are not chained (e.g. 
multiple inputs), and this kind of forward partitioners will turn into forward 
job edges. These forward edges still have the consecutive hash assumption, so 
that they cannot be changed into rescale/rebalance edges, otherwise it can lead 
to incorrect results. This prevents the adaptive batch scheduler from 
determining parallelism for other forward edge downstream job vertices (see 
FLINK-25046).

To solve it, I propose to introduce a new 
{{{}ForwardForConsecutiveHashPartitioner{}}}. When SQL planner optimizes the 
case of multiple consecutive the same groupBy, it should use the proposed 
partitioner, so that the runtime framework can further decide whether the 
partitioner can be changed to hash or not.
h4.  


> Introduce ForwardForConsecutiveHashPartitioner
> --
>
> Key: FLINK-26004
> URL: https://issues.apache.org/jira/browse/FLINK-26004
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Lijie Wang
>Priority: Major
>  Labels: pull-request-available
>
> If there are multiple consecutive identical hash shuffles (i.e. keyBy), the 
> SQL planner changes all but the first one to use a forward partitioner, so 
> that these operators can be chained to avoid unnecessary shuffles.
> However, sometimes the consecutive hash operators are not chained (e.g. with 
> multiple inputs), and these forward partitioners turn into forward job 
> edges. Such forward edges still carry the consecutive-hash assumption, so 
> they cannot be changed into rescale/rebalance edges; otherwise the results 
> can be incorrect. This prevents the adaptive batch scheduler from 
> determining the parallelism of the downstream job vertices of other forward 
> edges (see FLINK-25046).
> To solve this, I propose to introduce a new 
> {{ForwardForConsecutiveHashPartitioner}}. When the SQL planner optimizes 
> the case of multiple consecutive identical groupBy operations, it should 
> use the proposed partitioner, so that the runtime framework can decide 
> whether the partitioner can be changed to hash or not.





[jira] [Commented] (FLINK-26064) KinesisFirehoseSinkITCase IllegalStateException: Trying to access closed classloader

2022-02-09 Thread Danny Cranmer (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-26064?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17490010#comment-17490010
 ] 

Danny Cranmer commented on FLINK-26064:
---

[~CrynetLogistics] is working on this test failure. 

> KinesisFirehoseSinkITCase IllegalStateException: Trying to access closed 
> classloader
> 
>
> Key: FLINK-26064
> URL: https://issues.apache.org/jira/browse/FLINK-26064
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kinesis
>Affects Versions: 1.15.0
>Reporter: Piotr Nowojski
>Assignee: Zichen Liu
>Priority: Critical
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=31044=logs=d44f43ce-542c-597d-bf94-b0718c71e5e8=ed165f3f-d0f6-524b-5279-86f8ee7d0e2d
> (shortened stack trace, as full is too large)
> {noformat}
> Feb 09 20:05:04 java.util.concurrent.ExecutionException: 
> software.amazon.awssdk.core.exception.SdkClientException: Unable to execute 
> HTTP request: Trying to access closed classloader. Please check if you store 
> classloaders directly or indirectly in static fields. If the stacktrace 
> suggests that the leak occurs in a third party library and cannot be fixed 
> immediately, you can disable this check with the configuration 
> 'classloader.check-leaked-classloader'.
> Feb 09 20:05:04   at 
> java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357)
> Feb 09 20:05:04   at 
> java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1908)
> (...)
> Feb 09 20:05:04 Caused by: 
> software.amazon.awssdk.core.exception.SdkClientException: Unable to execute 
> HTTP request: Trying to access closed classloader. Please check if you store 
> classloaders directly or indirectly in static fields. If the stacktrace 
> suggests that the leak occurs in a third party library and cannot be fixed 
> immediately, you can disable this check with the configuration 
> 'classloader.check-leaked-classloader'.
> Feb 09 20:05:04   at 
> software.amazon.awssdk.core.exception.SdkClientException$BuilderImpl.build(SdkClientException.java:98)
> Feb 09 20:05:04   at 
> software.amazon.awssdk.core.exception.SdkClientException.create(SdkClientException.java:43)
> Feb 09 20:05:04   at 
> software.amazon.awssdk.core.internal.http.pipeline.stages.utils.RetryableStageHelper.setLastException(RetryableStageHelper.java:204)
> Feb 09 20:05:04   at 
> software.amazon.awssdk.core.internal.http.pipeline.stages.utils.RetryableStageHelper.setLastException(RetryableStageHelper.java:200)
> Feb 09 20:05:04   at 
> software.amazon.awssdk.core.internal.http.pipeline.stages.AsyncRetryableStage$RetryingExecutor.maybeRetryExecute(AsyncRetryableStage.java:179)
> Feb 09 20:05:04   at 
> software.amazon.awssdk.core.internal.http.pipeline.stages.AsyncRetryableStage$RetryingExecutor.lambda$attemptExecute$1(AsyncRetryableStage.java:159)
> (...)
> Feb 09 20:05:04 Caused by: java.lang.IllegalStateException: Trying to access 
> closed classloader. Please check if you store classloaders directly or 
> indirectly in static fields. If the stacktrace suggests that the leak occurs 
> in a third party library and cannot be fixed immediately, you can disable 
> this check with the configuration 'classloader.check-leaked-classloader'.
> Feb 09 20:05:04   at 
> org.apache.flink.runtime.execution.librarycache.FlinkUserCodeClassLoaders$SafetyNetWrapperClassLoader.ensureInner(FlinkUserCodeClassLoaders.java:164)
> Feb 09 20:05:04   at 
> org.apache.flink.runtime.execution.librarycache.FlinkUserCodeClassLoaders$SafetyNetWrapperClassLoader.getResources(FlinkUserCodeClassLoaders.java:188)
> Feb 09 20:05:04   at 
> java.util.ServiceLoader$LazyIterator.hasNextService(ServiceLoader.java:348)
> Feb 09 20:05:04   at 
> java.util.ServiceLoader$LazyIterator.hasNext(ServiceLoader.java:393)
> Feb 09 20:05:04   at 
> java.util.ServiceLoader$1.hasNext(ServiceLoader.java:474)
> Feb 09 20:05:04   at 
> javax.xml.stream.FactoryFinder$1.run(FactoryFinder.java:352)
> Feb 09 20:05:04   at java.security.AccessController.doPrivileged(Native 
> Method)
> Feb 09 20:05:04   at 
> javax.xml.stream.FactoryFinder.findServiceProvider(FactoryFinder.java:341)
> Feb 09 20:05:04   at 
> javax.xml.stream.FactoryFinder.find(FactoryFinder.java:313)
> Feb 09 20:05:04   at 
> javax.xml.stream.FactoryFinder.find(FactoryFinder.java:227)
> Feb 09 20:05:04   at 
> javax.xml.stream.XMLInputFactory.newInstance(XMLInputFactory.java:154)
> Feb 09 20:05:04   at 
> software.amazon.awssdk.protocols.query.unmarshall.XmlDomParser.createXmlInputFactory(XmlDomParser.java:124)
> Feb 09 20:05:04   at 
> java.lang.ThreadLocal$SuppliedThreadLocal.initialValue(ThreadLocal.java:284)
> Feb 09 20:05:04 
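The error message's own suggestion can be applied through the cluster configuration. A sketch of the corresponding `flink-conf.yaml` entry follows; note this only disables the diagnostic check and is a workaround, not a fix — the underlying leak (a classloader retained directly or indirectly in a static field) still needs to be addressed:

```yaml
# Disables the safety-net guard that turns access to a closed user-code
# classloader into the IllegalStateException shown above.
# Diagnostic workaround only; the leak itself should still be fixed.
classloader.check-leaked-classloader: false
```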

[GitHub] [flink] flinkbot edited a comment on pull request #18498: [FLINK-25801][metrics] add cpu processor metric of taskmanager

2022-02-09 Thread GitBox


flinkbot edited a comment on pull request #18498:
URL: https://github.com/apache/flink/pull/18498#issuecomment-1020905745


   
   ## CI report:
   
   * 966903ce9ef31c804b67f0731ea1be44a120f0e0 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31074)
 
   
   
   






[jira] [Assigned] (FLINK-26064) KinesisFirehoseSinkITCase IllegalStateException: Trying to access closed classloader

2022-02-09 Thread Danny Cranmer (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26064?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Danny Cranmer reassigned FLINK-26064:
-

Assignee: Zichen Liu


[GitHub] [flink] XComp commented on pull request #18637: [FLINK-25433][runtime] Adds retry mechanism to DefaultResourceCleaner

2022-02-09 Thread GitBox


XComp commented on pull request #18637:
URL: https://github.com/apache/flink/pull/18637#issuecomment-1034597170


   Looks like rebasing wasn't a good idea: there are failures on `master` 
covered by FLINK-26065. Additionally, the e2e tests stopped due to what looks 
like an infrastructure issue.






[GitHub] [flink] rkhachatryan commented on pull request #18694: [FLINK-26062][state/changelog] Replace poll() with remove() for PQ states

2022-02-09 Thread GitBox


rkhachatryan commented on pull request #18694:
URL: https://github.com/apache/flink/pull/18694#issuecomment-1034595589


   Thanks for the review!
   I'll merge the PR after resolving the (unrelated) build failures 
(FLINK-26065).






[GitHub] [flink] flinkbot edited a comment on pull request #18656: [FLINK-25249][connector/kafka] Reimplement KafkaTestEnvironment with KafkaContainer

2022-02-09 Thread GitBox


flinkbot edited a comment on pull request #18656:
URL: https://github.com/apache/flink/pull/18656#issuecomment-1032337760


   
   ## CI report:
   
   * 6ebf88715d326f8b6814478fcc30cb088999663f Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30914)
 
   * 9720c3e92ef12305e72d1b0de4f70d8be5a3d9af Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31085)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   






[GitHub] [flink] MrWhiteSike commented on pull request #18698: [FLINK-25782] [docs] Translate datastream filesystem.md page into Chinese.

2022-02-09 Thread GitBox


MrWhiteSike commented on pull request #18698:
URL: https://github.com/apache/flink/pull/18698#issuecomment-1034594297


   Hi, [@Thesharing](https://github.com/Thesharing) 
[@RocMarshal](https://github.com/RocMarshal) , May I get your help to review 
it? Thanks.






[jira] [Commented] (FLINK-25233) UpsertKafkaTableITCase.testAggregate fails on AZP

2022-02-09 Thread Yun Gao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-25233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17490007#comment-17490007
 ] 

Yun Gao commented on FLINK-25233:
-

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=31071=logs=c5612577-f1f7-5977-6ff6-7432788526f7=ffa8837a-b445-534e-cdf4-db364cf8235d=36523

> UpsertKafkaTableITCase.testAggregate fails on AZP
> -
>
> Key: FLINK-25233
> URL: https://issues.apache.org/jira/browse/FLINK-25233
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Ecosystem
>Affects Versions: 1.15.0
>Reporter: Till Rohrmann
>Priority: Critical
>  Labels: test-stability
> Fix For: 1.15.0
>
>
> {{UpsertKafkaTableITCase.testAggregate}} fails on AZP with
> {code}
> 2021-12-09T01:41:49.8038402Z Dec 09 01:41:49 [ERROR] 
> UpsertKafkaTableITCase.testAggregate  Time elapsed: 90.624 s  <<< ERROR!
> 2021-12-09T01:41:49.8039372Z Dec 09 01:41:49 
> java.util.concurrent.ExecutionException: 
> org.apache.flink.table.api.TableException: Failed to wait job finish
> 2021-12-09T01:41:49.8040303Z Dec 09 01:41:49  at 
> java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357)
> 2021-12-09T01:41:49.8040956Z Dec 09 01:41:49  at 
> java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1908)
> 2021-12-09T01:41:49.8041862Z Dec 09 01:41:49  at 
> org.apache.flink.table.api.internal.TableResultImpl.awaitInternal(TableResultImpl.java:118)
> 2021-12-09T01:41:49.8042939Z Dec 09 01:41:49  at 
> org.apache.flink.table.api.internal.TableResultImpl.await(TableResultImpl.java:81)
> 2021-12-09T01:41:49.8044130Z Dec 09 01:41:49  at 
> org.apache.flink.streaming.connectors.kafka.table.UpsertKafkaTableITCase.wordCountToUpsertKafka(UpsertKafkaTableITCase.java:436)
> 2021-12-09T01:41:49.8045308Z Dec 09 01:41:49  at 
> org.apache.flink.streaming.connectors.kafka.table.UpsertKafkaTableITCase.testAggregate(UpsertKafkaTableITCase.java:79)
> 2021-12-09T01:41:49.8045940Z Dec 09 01:41:49  at 
> sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 2021-12-09T01:41:49.8052892Z Dec 09 01:41:49  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 2021-12-09T01:41:49.8053812Z Dec 09 01:41:49  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 2021-12-09T01:41:49.8054458Z Dec 09 01:41:49  at 
> java.lang.reflect.Method.invoke(Method.java:498)
> 2021-12-09T01:41:49.8055027Z Dec 09 01:41:49  at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
> 2021-12-09T01:41:49.8055649Z Dec 09 01:41:49  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> 2021-12-09T01:41:49.8056644Z Dec 09 01:41:49  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
> 2021-12-09T01:41:49.8057911Z Dec 09 01:41:49  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> 2021-12-09T01:41:49.8058858Z Dec 09 01:41:49  at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> 2021-12-09T01:41:49.8059907Z Dec 09 01:41:49  at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> 2021-12-09T01:41:49.8060871Z Dec 09 01:41:49  at 
> org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
> 2021-12-09T01:41:49.8061847Z Dec 09 01:41:49  at 
> org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45)
> 2021-12-09T01:41:49.8062898Z Dec 09 01:41:49  at 
> org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
> 2021-12-09T01:41:49.8063804Z Dec 09 01:41:49  at 
> org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
> 2021-12-09T01:41:49.8064963Z Dec 09 01:41:49  at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
> 2021-12-09T01:41:49.8065992Z Dec 09 01:41:49  at 
> org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
> 2021-12-09T01:41:49.8066940Z Dec 09 01:41:49  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
> 2021-12-09T01:41:49.8067939Z Dec 09 01:41:49  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
> 2021-12-09T01:41:49.8068904Z Dec 09 01:41:49  at 
> org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
> 2021-12-09T01:41:49.8069837Z Dec 09 01:41:49  at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
> 2021-12-09T01:41:49.8070715Z Dec 09 01:41:49  at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
> 2021-12-09T01:41:49.8071587Z Dec 09 01:41:49  at 
> org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
> 2021-12-09T01:41:49.8072582Z Dec 09 01:41:49  at 
> 

[jira] [Commented] (FLINK-26065) org.apache.flink.table.api.PlanReference$ContentPlanReference $FilePlanReference $ResourcePlanReference violation the api rules

2022-02-09 Thread Roman Khachatryan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-26065?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17490006#comment-17490006
 ] 

Roman Khachatryan commented on FLINK-26065:
---

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=31065=logs=fc5181b0-e452-5c8f-68de-1097947f6483=995c650b-6573-581c-9ce6-7ad4cc038461=26350

> org.apache.flink.table.api.PlanReference$ContentPlanReference 
> $FilePlanReference $ResourcePlanReference violation the api rules
> ---
>
> Key: FLINK-26065
> URL: https://issues.apache.org/jira/browse/FLINK-26065
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.15.0
>Reporter: Yun Gao
>Priority: Blocker
>  Labels: test-stability
>
> {code:java}
> Feb 09 21:10:32 [ERROR] Failures: 
> Feb 09 21:10:32 [ERROR]   Architecture Violation [Priority: MEDIUM] - Rule 
> 'Classes in API packages should have at least one API visibility annotation.' 
> was violated (3 times):
> Feb 09 21:10:32 org.apache.flink.table.api.PlanReference$ContentPlanReference 
> does not satisfy: annotated with @Internal or annotated with @Experimental or 
> annotated with @PublicEvolving or annotated with @Public or annotated with 
> @Deprecated
> Feb 09 21:10:32 org.apache.flink.table.api.PlanReference$FilePlanReference 
> does not satisfy: annotated with @Internal or annotated with @Experimental or 
> annotated with @PublicEvolving or annotated with @Public or annotated with 
> @Deprecated
> Feb 09 21:10:32 
> org.apache.flink.table.api.PlanReference$ResourcePlanReference does not 
> satisfy: annotated with @Internal or annotated with @Experimental or 
> annotated with @PublicEvolving or annotated with @Public or annotated with 
> @Deprecated
> Feb 09 21:10:32 [INFO] 
> Feb 09 21:10:32 [ERROR] Tests run: 7, Failures: 1, Errors: 0, Skipped: 0
> {code}
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=31051=logs=fc5181b0-e452-5c8f-68de-1097947f6483=995c650b-6573-581c-9ce6-7ad4cc038461=26427



--
This message was sent by Atlassian Jira
(v8.20.1#820001)
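The rule violated above requires every class in an API package to carry one of Flink's visibility annotations (@Internal, @Experimental, @PublicEvolving, @Public, or @Deprecated). A minimal standalone sketch of that kind of check, using a stand-in annotation rather than Flink's real `org.apache.flink.annotation` types and hypothetical class names mirroring the report:

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.util.Arrays;

public class ApiAnnotationCheck {
    // Stand-in for Flink's API marker annotations (@Public, @PublicEvolving, ...);
    // the real ones live in org.apache.flink.annotation.
    @Retention(RetentionPolicy.RUNTIME)
    @interface PublicEvolving {}

    // Annotated: the shape of class the ArchUnit rule accepts.
    @PublicEvolving
    static class AnnotatedPlanReference {}

    // No visibility annotation: the shape of class the rule flags.
    static class UnannotatedPlanReference {}

    // Returns true if the class carries the marker annotation.
    static boolean hasApiAnnotation(Class<?> clazz) {
        return Arrays.stream(clazz.getAnnotations())
                .anyMatch(a -> a.annotationType() == PublicEvolving.class);
    }

    public static void main(String[] args) {
        System.out.println(hasApiAnnotation(AnnotatedPlanReference.class));   // true
        System.out.println(hasApiAnnotation(UnannotatedPlanReference.class)); // false
    }
}
```

The fix the rule asks for is simply adding one of the listed annotations to each reported nested class.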


[GitHub] [flink] flinkbot edited a comment on pull request #18656: [FLINK-25249][connector/kafka] Reimplement KafkaTestEnvironment with KafkaContainer

2022-02-09 Thread GitBox


flinkbot edited a comment on pull request #18656:
URL: https://github.com/apache/flink/pull/18656#issuecomment-1032337760


   
   ## CI report:
   
   * 6ebf88715d326f8b6814478fcc30cb088999663f Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30914)
 
   * 9720c3e92ef12305e72d1b0de4f70d8be5a3d9af UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   






[jira] [Commented] (FLINK-25825) MySqlCatalogITCase fails on azure

2022-02-09 Thread Yun Gao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-25825?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17490004#comment-17490004
 ] 

Yun Gao commented on FLINK-25825:
-

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=31071=logs=e9af9cde-9a65-5281-a58e-2c8511d36983=c520d2c3-4d17-51f1-813b-4b0b74a0c307=14836

> MySqlCatalogITCase fails on azure
> -
>
> Key: FLINK-25825
> URL: https://issues.apache.org/jira/browse/FLINK-25825
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / JDBC, Table SQL / API
>Affects Versions: 1.15.0
>Reporter: Roman Khachatryan
>Assignee: RocMarshal
>Priority: Blocker
>  Labels: pull-request-available, test-stability
> Fix For: 1.15.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=30189=logs=e9af9cde-9a65-5281-a58e-2c8511d36983=c520d2c3-4d17-51f1-813b-4b0b74a0c307=13677
>  
> {code}
> 2022-01-26T06:04:42.8019913Z Jan 26 06:04:42 [ERROR] 
> org.apache.flink.connector.jdbc.catalog.MySqlCatalogITCase.testFullPath  Time 
> elapsed: 2.166 s  <<< FAILURE!
> 2022-01-26T06:04:42.8025522Z Jan 26 06:04:42 java.lang.AssertionError: 
> expected: java.util.ArrayList<[+I[1, -1, 1, null, true, null, hello, 
> 2021-08-04, 2021-08-04T01:54:16, -1, 1, -1.0, 1.0, enum2, -9.1, 9.1, -1, 1, 
> -1, 1, \{"k1": "v1"}, null, col_longtext, null, -1, 1, col_mediumtext, -99, 
> 99, -1.0, 1.0, set_ele1, -1, 1, col_text, 10:32:34, 2021-08-04T01:54:16, 
> col_tinytext, -1, 1, null, col_varchar, 2021-08-04T01:54:16.463, 09:33:43, 
> 2021-08-04T01:54:16.463, null], +I[2, -1, 1, null, true, null, hello, 
> 2021-08-04, 2021-08-04T01:53:19, -1, 1, -1.0, 1.0, enum2, -9.1, 9.1, -1, 1, 
> -1, 1, \{"k1": "v1"}, null, col_longtext, null, -1, 1, col_mediumtext, -99, 
> 99, -1.0, 1.0, set_ele1,set_ele12, -1, 1, col_text, 10:32:34, 
> 2021-08-04T01:53:19, col_tinytext, -1, 1, null, col_varchar, 
> 2021-08-04T01:53:19.098, 09:33:43, 2021-08-04T01:53:19.098, null]]> but was: 
> java.util.ArrayList<[+I[1, -1, 1, null, true, null, hello, 2021-08-04, 
> 2021-08-04T01:54:16, -1, 1, -1.0, 1.0, enum2, -9.1, 9.1, -1, 1, -1, 1, 
> \{"k1": "v1"}, null, col_longtext, null, -1, 1, col_mediumtext, -99, 99, 
> -1.0, 1.0, set_ele1, -1, 1, col_text, 10:32:34, 2021-08-04T01:54:16, 
> col_tinytext, -1, 1, null, col_varchar, 2021-08-04T01:54:16.463, 09:33:43, 
> 2021-08-04T01:54:16.463, null], +I[2, -1, 1, null, true, null, hello, 
> 2021-08-04, 2021-08-04T01:53:19, -1, 1, -1.0, 1.0, enum2, -9.1, 9.1, -1, 1, 
> -1, 1, \{"k1": "v1"}, null, col_longtext, null, -1, 1, col_mediumtext, -99, 
> 99, -1.0, 1.0, set_ele1,set_ele12, -1, 1, col_text, 10:32:34, 
> 2021-08-04T01:53:19, col_tinytext, -1, 1, null, col_varchar, 
> 2021-08-04T01:53:19.098, 09:33:43, 2021-08-04T01:53:19.098, null]]>
> 2022-01-26T06:04:42.8029336Z Jan 26 06:04:42    at 
> org.junit.Assert.fail(Assert.java:89)
> 2022-01-26T06:04:42.8029824Z Jan 26 06:04:42    at 
> org.junit.Assert.failNotEquals(Assert.java:835)
> 2022-01-26T06:04:42.8030319Z Jan 26 06:04:42    at 
> org.junit.Assert.assertEquals(Assert.java:120)
> 2022-01-26T06:04:42.8030815Z Jan 26 06:04:42    at 
> org.junit.Assert.assertEquals(Assert.java:146)
> 2022-01-26T06:04:42.8031419Z Jan 26 06:04:42    at 
> org.apache.flink.connector.jdbc.catalog.MySqlCatalogITCase.testFullPath(MySqlCatalogITCase.java:306)
> {code}
>  
> {code}
> 2022-01-26T06:04:43.2899378Z Jan 26 06:04:43 [ERROR] Failures:
> 2022-01-26T06:04:43.2907942Z Jan 26 06:04:43 [ERROR]   
> MySqlCatalogITCase.testFullPath:306 expected: java.util.ArrayList<[+I[1, -1, 
> 1, null, true,
> 2022-01-26T06:04:43.2914065Z Jan 26 06:04:43 [ERROR]   
> MySqlCatalogITCase.testGetTable:253 expected:<(
> 2022-01-26T06:04:43.2983567Z Jan 26 06:04:43 [ERROR]   
> MySqlCatalogITCase.testSelectToInsert:323 expected: 
> java.util.ArrayList<[+I[1, -1, 1, null,
> 2022-01-26T06:04:43.2997373Z Jan 26 06:04:43 [ERROR]   
> MySqlCatalogITCase.testWithoutCatalog:291 expected: 
> java.util.ArrayList<[+I[1, -1, 1, null,
> 2022-01-26T06:04:43.3010450Z Jan 26 06:04:43 [ERROR]   
> MySqlCatalogITCase.testWithoutCatalogDB:278 expected: 
> java.util.ArrayList<[+I[1, -1, 1, nul
> {code}
>  





[GitHub] [flink] flinkbot edited a comment on pull request #18605: [FLINK-25785][Connectors][JDBC] Upgrade com.h2database:h2 to 2.1.210

2022-02-09 Thread GitBox


flinkbot edited a comment on pull request #18605:
URL: https://github.com/apache/flink/pull/18605#issuecomment-1027887418


   
   ## CI report:
   
   * 56d378844fb46ed0957fb6a96f800a949eb31f11 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30986)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   






[GitHub] [flink] flinkbot edited a comment on pull request #18391: [FLINK-25478][chaneglog] Correct the state register logic of ChangelogStateBackendHandle

2022-02-09 Thread GitBox


flinkbot edited a comment on pull request #18391:
URL: https://github.com/apache/flink/pull/18391#issuecomment-1015282046


   
   ## CI report:
   
   * 9aa06f3fde2b809a98943eeb7ae7851ec9a4009d Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31073)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   






[jira] [Assigned] (FLINK-11813) Standby per job mode Dispatchers don't know job's JobSchedulingStatus

2022-02-09 Thread Matthias Pohl (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-11813?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias Pohl reassigned FLINK-11813:
-

Assignee: Matthias Pohl

> Standby per job mode Dispatchers don't know job's JobSchedulingStatus
> -
>
> Key: FLINK-11813
> URL: https://issues.apache.org/jira/browse/FLINK-11813
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.6.4, 1.7.2, 1.8.0, 1.9.3, 1.10.3, 1.11.3, 1.13.1, 
> 1.12.4
>Reporter: Till Rohrmann
>Assignee: Matthias Pohl
>Priority: Major
> Fix For: 1.15.0
>
>
> At the moment, it can happen that standby {{Dispatchers}} in per job mode 
> will restart a terminated job after they gained leadership. The problem is 
> that we currently clear the {{RunningJobsRegistry}} once a job has reached a 
> globally terminal state. After the leading {{Dispatcher}} terminates, a 
> standby {{Dispatcher}} will gain leadership. Without having the information 
> from the {{RunningJobsRegistry}} it cannot tell whether the job has been 
> executed or whether the {{Dispatcher}} needs to re-execute the job. At the 
> moment, the {{Dispatcher}} will assume that there was a fault and hence 
> re-execute the job. This can lead to duplicate results.
> I think we need some way to tell standby {{Dispatchers}} that a certain job 
> has been successfully executed. One trivial solution could be to not clean up 
> the {{RunningJobsRegistry}} but then we will clutter ZooKeeper.
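The race described above can be illustrated with a toy model (not Flink code; all names here are hypothetical): once the registry entry for a finished job is cleaned up, a standby dispatcher gaining leadership cannot distinguish "finished and cleaned" from "never ran", so it re-executes the job.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

// Toy stand-in for the RunningJobsRegistry discussed in the ticket.
class RunningJobsRegistry {
    private final Map<String, String> status = new HashMap<>(); // jobId -> "DONE"

    void setDone(String jobId) { status.put(jobId, "DONE"); }
    void clear(String jobId)   { status.remove(jobId); } // cleanup on terminal state
    Optional<String> get(String jobId) { return Optional.ofNullable(status.get(jobId)); }
}

public class StandbyDispatcherDemo {
    // Without a registry entry, the standby assumes a fault and re-runs the job.
    static String onLeadershipGained(RunningJobsRegistry reg, String jobId) {
        return reg.get(jobId).map(s -> "skip").orElse("re-execute");
    }

    public static void main(String[] args) {
        RunningJobsRegistry reg = new RunningJobsRegistry();
        reg.setDone("job-1");
        reg.clear("job-1"); // terminal-state cleanup, as described in the ticket
        System.out.println(onLeadershipGained(reg, "job-1")); // prints "re-execute"
    }
}
```

Keeping the entry (the "don't clean up" option mentioned above) would make the standby skip the job, at the cost of cluttering ZooKeeper.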





[jira] [Updated] (FLINK-25973) Rename ArchivedExecutionGraph.createFromInitializingJob into more generic createSparseArchivedExecutionGraph

2022-02-09 Thread Matthias Pohl (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25973?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias Pohl updated FLINK-25973:
--
Priority: Minor  (was: Major)

> Rename ArchivedExecutionGraph.createFromInitializingJob into more generic 
> createSparseArchivedExecutionGraph
> 
>
> Key: FLINK-25973
> URL: https://issues.apache.org/jira/browse/FLINK-25973
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Coordination
>Affects Versions: 1.15.0
>Reporter: Matthias Pohl
>Priority: Minor
>
> The use cases for this method changed. We should rename it to something 
> that fits both use cases.





[jira] [Commented] (FLINK-24573) ZooKeeperJobGraphsStoreITCase crashes JVM

2022-02-09 Thread Yun Gao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-24573?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17490003#comment-17490003
 ] 

Yun Gao commented on FLINK-24573:
-

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=31069=logs=d8d26c26-7ec2-5ed2-772e-7a1a1eb8317c=ec8797b0-5eee-5a0e-f936-8db65cff44cc=8248

> ZooKeeperJobGraphsStoreITCase crashes JVM
> -
>
> Key: FLINK-24573
> URL: https://issues.apache.org/jira/browse/FLINK-24573
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.14.0
>Reporter: Dawid Wysakowicz
>Priority: Major
>  Labels: stale-major, test-stability
> Attachments: logs-ci_build-test_ci_build_core-1637952435.zip
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=25123=logs=a549b384-c55a-52c0-c451-00e0477ab6db=eef5922c-08d9-5ba3-7299-8393476594e7=8375
> {code}
> Oct 17 00:15:16 [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-surefire-plugin:2.22.2:test 
> (integration-tests) on project flink-runtime: There are test failures.
> Oct 17 00:15:16 [ERROR] 
> Oct 17 00:15:16 [ERROR] Please refer to 
> /__w/1/s/flink-runtime/target/surefire-reports for the individual test 
> results.
> Oct 17 00:15:16 [ERROR] Please refer to dump files (if any exist) 
> [date].dump, [date]-jvmRun[N].dump and [date].dumpstream.
> Oct 17 00:15:16 [ERROR] ExecutionException The forked VM terminated without 
> properly saying goodbye. VM crash or System.exit called?
> Oct 17 00:15:16 [ERROR] Command was /bin/sh -c cd 
> /__w/1/s/flink-runtime/target && 
> /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java -Xms256m -Xmx2048m 
> -Dmvn.forkNumber=2 -XX:+UseG1GC -jar 
> /__w/1/s/flink-runtime/target/surefire/surefirebooter6284072213813812385.jar 
> /__w/1/s/flink-runtime/target/surefire 2021-10-16T23-44-38_893-jvmRun2 
> surefire134157100872108937tmp surefire_819867287453033687541tmp
> Oct 17 00:15:16 [ERROR] Error occurred in starting fork, check output in log
> Oct 17 00:15:16 [ERROR] Process Exit Code: 239
> Oct 17 00:15:16 [ERROR] Crashed tests:
> Oct 17 00:15:16 [ERROR] 
> org.apache.flink.runtime.jobmanager.ZooKeeperJobGraphsStoreITCase
> Oct 17 00:15:16 [ERROR] 
> org.apache.maven.surefire.booter.SurefireBooterForkException: 
> ExecutionException The forked VM terminated without properly saying goodbye. 
> VM crash or System.exit called?
> Oct 17 00:15:16 [ERROR] Command was /bin/sh -c cd 
> /__w/1/s/flink-runtime/target && 
> /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java -Xms256m -Xmx2048m 
> -Dmvn.forkNumber=2 -XX:+UseG1GC -jar 
> /__w/1/s/flink-runtime/target/surefire/surefirebooter6284072213813812385.jar 
> /__w/1/s/flink-runtime/target/surefire 2021-10-16T23-44-38_893-jvmRun2 
> surefire134157100872108937tmp surefire_819867287453033687541tmp
> Oct 17 00:15:16 [ERROR] Error occurred in starting fork, check output in log
> Oct 17 00:15:16 [ERROR] Process Exit Code: 239
> Oct 17 00:15:16 [ERROR] Crashed tests:
> Oct 17 00:15:16 [ERROR] 
> org.apache.flink.runtime.jobmanager.ZooKeeperJobGraphsStoreITCase
> Oct 17 00:15:16 [ERROR] at 
> org.apache.maven.plugin.surefire.booterclient.ForkStarter.awaitResultsDone(ForkStarter.java:510)
> Oct 17 00:15:16 [ERROR] at 
> org.apache.maven.plugin.surefire.booterclient.ForkStarter.runSuitesForkPerTestSet(ForkStarter.java:457)
> Oct 17 00:15:16 [ERROR] at 
> org.apache.maven.plugin.surefire.booterclient.ForkStarter.run(ForkStarter.java:298)
> Oct 17 00:15:16 [ERROR] at 
> org.apache.maven.plugin.surefire.booterclient.ForkStarter.run(ForkStarter.java:246)
> Oct 17 00:15:16 [ERROR] at 
> org.apache.maven.plugin.surefire.AbstractSurefireMojo.executeProvider(AbstractSurefireMojo.java:1183)
> Oct 17 00:15:16 [ERROR] at 
> org.apache.maven.plugin.surefire.AbstractSurefireMojo.executeAfterPreconditionsChecked(AbstractSurefireMojo.java:1011)
> Oct 17 00:15:16 [ERROR] at 
> org.apache.maven.plugin.surefire.AbstractSurefireMojo.execute(AbstractSurefireMojo.java:857)
> Oct 17 00:15:16 [ERROR] at 
> org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:132)
> Oct 17 00:15:16 [ERROR] at 
> org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:208)
> Oct 17 00:15:16 [ERROR] at 
> org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:153)
> Oct 17 00:15:16 [ERROR] at 
> org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:145)
> Oct 17 00:15:16 [ERROR] at 
> org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:116)
> Oct 17 00:15:16 [ERROR] at 
> org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:80)
> Oct 17 00:15:16 [ERROR] at 
> 

[GitHub] [flink] MartijnVisser commented on pull request #18605: [FLINK-25785][Connectors][JDBC] Upgrade com.h2database:h2 to 2.1.210

2022-02-09 Thread GitBox


MartijnVisser commented on pull request #18605:
URL: https://github.com/apache/flink/pull/18605#issuecomment-1034588640


   @flinkbot run azure






[jira] [Updated] (FLINK-26041) AbstractTaskManagerProcessFailureRecoveryTest.testTaskManagerProcessFailure hang on azure

2022-02-09 Thread Yun Gao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26041?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yun Gao updated FLINK-26041:

Priority: Critical  (was: Major)

> AbstractTaskManagerProcessFailureRecoveryTest.testTaskManagerProcessFailure 
> hang on azure
> -
>
> Key: FLINK-26041
> URL: https://issues.apache.org/jira/browse/FLINK-26041
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.15.0
>Reporter: Yun Gao
>Priority: Critical
>  Labels: test-stability
>
> {code:java}
> Feb 08 13:04:58 "main" #1 prio=5 os_prio=0 tid=0x7fdcf000b800 nid=0x47bd 
> waiting on condition [0x7fdcf697b000]
> Feb 08 13:04:58java.lang.Thread.State: WAITING (parking)
> Feb 08 13:04:58   at sun.misc.Unsafe.park(Native Method)
> Feb 08 13:04:58   - parking to wait for  <0x8f644330> (a 
> java.util.concurrent.CompletableFuture$Signaller)
> Feb 08 13:04:58   at 
> java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
> Feb 08 13:04:58   at 
> java.util.concurrent.CompletableFuture$Signaller.block(CompletableFuture.java:1707)
> Feb 08 13:04:58   at 
> java.util.concurrent.ForkJoinPool.managedBlock(ForkJoinPool.java:3323)
> Feb 08 13:04:58   at 
> java.util.concurrent.CompletableFuture.waitingGet(CompletableFuture.java:1742)
> Feb 08 13:04:58   at 
> java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1908)
> Feb 08 13:04:58   at 
> org.apache.flink.util.AutoCloseableAsync.close(AutoCloseableAsync.java:36)
> Feb 08 13:04:58   at 
> org.apache.flink.test.recovery.AbstractTaskManagerProcessFailureRecoveryTest.testTaskManagerProcessFailure(AbstractTaskManagerProcessFailureRecoveryTest.java:209)
> Feb 08 13:04:58   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)
> Feb 08 13:04:58   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> Feb 08 13:04:58   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> Feb 08 13:04:58   at java.lang.reflect.Method.invoke(Method.java:498)
> Feb 08 13:04:58   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
> Feb 08 13:04:58   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> Feb 08 13:04:58   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
> Feb 08 13:04:58   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> Feb 08 13:04:58   at 
> org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
> Feb 08 13:04:58   at 
> org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
> Feb 08 13:04:58   at 
> org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
> Feb 08 13:04:58   at 
> org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45)
> Feb 08 13:04:58   at 
> org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
> Feb 08 13:04:58   at 
> org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
> Feb 08 13:04:58   at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
> Feb 08 13:04:58   at 
> org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
> Feb 08 13:04:58   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
> Feb 08 13:04:58   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
> Feb 08 13:04:58   at 
> org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
> Feb 08 13:04:58   at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
> Feb 08 13:04:58   at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
> Feb 08 13:04:58   at 
> org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
> Feb 08 13:04:58   at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
> Feb 08 13:04:58   at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:413)
> Feb 08 13:04:58   at org.junit.runners.Suite.runChild(Suite.java:128)
> Feb 08 13:04:58   at org.junit.runners.Suite.runChild(Suite.java:27)
> {code}
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=30901=logs=a57e0635-3fad-5b08-57c7-a4142d7d6fa9=2ef0effc-1da1-50e5-c2bd-aab434b1c5b7=14617
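The thread dump above shows the test parked in `CompletableFuture.get()` inside `AutoCloseableAsync.close()`, which waits without a timeout. A minimal standalone sketch (not Flink code) of why an unbounded `get()` hangs forever on a never-completed termination future, and how a bounded wait surfaces the hang as an exception instead:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class BoundedCloseDemo {
    public static void main(String[] args) throws Exception {
        // A termination future that is never completed, standing in for the
        // shutdown future the test blocks on during close().
        CompletableFuture<Void> terminationFuture = new CompletableFuture<>();
        try {
            // get() with no arguments would park the main thread forever, as in
            // the dump above; a bounded get() fails fast with TimeoutException.
            terminationFuture.get(100, TimeUnit.MILLISECONDS);
            System.out.println("closed");
        } catch (TimeoutException e) {
            System.out.println("close timed out");
        }
    }
}
```

Running this prints "close timed out" after roughly 100 ms instead of hanging until the CI watchdog kills the build.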





[jira] [Commented] (FLINK-26041) AbstractTaskManagerProcessFailureRecoveryTest.testTaskManagerProcessFailure hang on azure

2022-02-09 Thread Yun Gao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-26041?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17490002#comment-17490002
 ] 

Yun Gao commented on FLINK-26041:
-

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=31066=logs=a57e0635-3fad-5b08-57c7-a4142d7d6fa9=2ef0effc-1da1-50e5-c2bd-aab434b1c5b7=34698

> AbstractTaskManagerProcessFailureRecoveryTest.testTaskManagerProcessFailure 
> hang on azure
> -
>
> Key: FLINK-26041
> URL: https://issues.apache.org/jira/browse/FLINK-26041
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.15.0
>Reporter: Yun Gao
>Priority: Major
>  Labels: test-stability
>
> {code:java}
> Feb 08 13:04:58 "main" #1 prio=5 os_prio=0 tid=0x7fdcf000b800 nid=0x47bd 
> waiting on condition [0x7fdcf697b000]
> Feb 08 13:04:58java.lang.Thread.State: WAITING (parking)
> Feb 08 13:04:58   at sun.misc.Unsafe.park(Native Method)
> Feb 08 13:04:58   - parking to wait for  <0x8f644330> (a 
> java.util.concurrent.CompletableFuture$Signaller)
> Feb 08 13:04:58   at 
> java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
> Feb 08 13:04:58   at 
> java.util.concurrent.CompletableFuture$Signaller.block(CompletableFuture.java:1707)
> Feb 08 13:04:58   at 
> java.util.concurrent.ForkJoinPool.managedBlock(ForkJoinPool.java:3323)
> Feb 08 13:04:58   at 
> java.util.concurrent.CompletableFuture.waitingGet(CompletableFuture.java:1742)
> Feb 08 13:04:58   at 
> java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1908)
> Feb 08 13:04:58   at 
> org.apache.flink.util.AutoCloseableAsync.close(AutoCloseableAsync.java:36)
> Feb 08 13:04:58   at 
> org.apache.flink.test.recovery.AbstractTaskManagerProcessFailureRecoveryTest.testTaskManagerProcessFailure(AbstractTaskManagerProcessFailureRecoveryTest.java:209)
> Feb 08 13:04:58   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)
> Feb 08 13:04:58   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> Feb 08 13:04:58   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> Feb 08 13:04:58   at java.lang.reflect.Method.invoke(Method.java:498)
> Feb 08 13:04:58   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
> Feb 08 13:04:58   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> Feb 08 13:04:58   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
> Feb 08 13:04:58   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> Feb 08 13:04:58   at 
> org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
> Feb 08 13:04:58   at 
> org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
> Feb 08 13:04:58   at 
> org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
> Feb 08 13:04:58   at 
> org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45)
> Feb 08 13:04:58   at 
> org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
> Feb 08 13:04:58   at 
> org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
> Feb 08 13:04:58   at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
> Feb 08 13:04:58   at 
> org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
> Feb 08 13:04:58   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
> Feb 08 13:04:58   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
> Feb 08 13:04:58   at 
> org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
> Feb 08 13:04:58   at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
> Feb 08 13:04:58   at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
> Feb 08 13:04:58   at 
> org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
> Feb 08 13:04:58   at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
> Feb 08 13:04:58   at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:413)
> Feb 08 13:04:58   at org.junit.runners.Suite.runChild(Suite.java:128)
> Feb 08 13:04:58   at org.junit.runners.Suite.runChild(Suite.java:27)
> {code}
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=30901=logs=a57e0635-3fad-5b08-57c7-a4142d7d6fa9=2ef0effc-1da1-50e5-c2bd-aab434b1c5b7=14617
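The blocked frames in the trace above sit in `AutoCloseableAsync.close`, which waits on the future returned by `closeAsync()`; if that future never completes, the test hangs exactly as reported. A minimal sketch of that async-close pattern (a simplified stand-in, not Flink's actual interface; the `DemoResource` class is hypothetical):

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutionException;

/** Sketch of an async-close contract similar to Flink's AutoCloseableAsync. */
interface AsyncCloseable extends AutoCloseable {
    CompletableFuture<Void> closeAsync();

    // close() blocks until the async shutdown finishes -- this is the
    // CompletableFuture.get() frame visible in the stack trace above.
    @Override
    default void close() throws Exception {
        try {
            closeAsync().get(); // hangs indefinitely if the future never completes
        } catch (ExecutionException e) {
            throw new Exception("async close failed", e.getCause());
        }
    }
}

/** Toy resource whose async close completes immediately. */
class DemoResource implements AsyncCloseable {
    private final CompletableFuture<Void> terminationFuture = new CompletableFuture<>();

    @Override
    public CompletableFuture<Void> closeAsync() {
        terminationFuture.complete(null); // real code would shut down threads here
        return terminationFuture;
    }

    boolean isTerminated() {
        return terminationFuture.isDone();
    }
}

public class AsyncCloseDemo {
    public static void main(String[] args) throws Exception {
        DemoResource r = new DemoResource();
        r.close(); // returns once closeAsync()'s future completes
        System.out.println(r.isTerminated()); // prints "true"
    }
}
```

A close that never terminates, as in this failure, usually means some component never completes its termination future.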



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-26069) KinesisFirehoseSinkITCase failed due to org.testcontainers.containers.ContainerLaunchException: Container startup failed

2022-02-09 Thread Yun Gao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26069?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yun Gao updated FLINK-26069:

Labels: test-stability  (was: )

> KinesisFirehoseSinkITCase failed due to 
> org.testcontainers.containers.ContainerLaunchException: Container startup 
> failed
> 
>
> Key: FLINK-26069
> URL: https://issues.apache.org/jira/browse/FLINK-26069
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kinesis
>Affects Versions: 1.15.0
>Reporter: Yun Gao
>Priority: Major
>  Labels: test-stability
>
> {code:java}
> 2022-02-09T20:52:36.6208358Z Feb 09 20:52:36 [ERROR] Picked up 
> JAVA_TOOL_OPTIONS: -XX:+HeapDumpOnOutOfMemoryError
> 2022-02-09T20:52:37.8270432Z Feb 09 20:52:37 [INFO] Running 
> org.apache.flink.connector.firehose.sink.KinesisFirehoseSinkITCase
> 2022-02-09T20:54:08.9842331Z Feb 09 20:54:08 [ERROR] Tests run: 1, Failures: 
> 0, Errors: 1, Skipped: 0, Time elapsed: 91.02 s <<< FAILURE! - in 
> org.apache.flink.connector.firehose.sink.KinesisFirehoseSinkITCase
> 2022-02-09T20:54:08.9845140Z Feb 09 20:54:08 [ERROR] 
> org.apache.flink.connector.firehose.sink.KinesisFirehoseSinkITCase  Time 
> elapsed: 91.02 s  <<< ERROR!
> 2022-02-09T20:54:08.9847119Z Feb 09 20:54:08 
> org.testcontainers.containers.ContainerLaunchException: Container startup 
> failed
> 2022-02-09T20:54:08.9848834Z Feb 09 20:54:08  at 
> org.testcontainers.containers.GenericContainer.doStart(GenericContainer.java:336)
> 2022-02-09T20:54:08.9850502Z Feb 09 20:54:08  at 
> org.testcontainers.containers.GenericContainer.start(GenericContainer.java:317)
> 2022-02-09T20:54:08.9852012Z Feb 09 20:54:08  at 
> org.testcontainers.containers.GenericContainer.starting(GenericContainer.java:1066)
> 2022-02-09T20:54:08.9853695Z Feb 09 20:54:08  at 
> org.testcontainers.containers.FailureDetectingExternalResource$1.evaluate(FailureDetectingExternalResource.java:29)
> 2022-02-09T20:54:08.9855316Z Feb 09 20:54:08  at 
> org.junit.rules.RunRules.evaluate(RunRules.java:20)
> 2022-02-09T20:54:08.9856955Z Feb 09 20:54:08  at 
> org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
> 2022-02-09T20:54:08.9858330Z Feb 09 20:54:08  at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:413)
> 2022-02-09T20:54:08.9859838Z Feb 09 20:54:08  at 
> org.junit.runner.JUnitCore.run(JUnitCore.java:137)
> 2022-02-09T20:54:08.9861123Z Feb 09 20:54:08  at 
> org.junit.runner.JUnitCore.run(JUnitCore.java:115)
> 2022-02-09T20:54:08.9862747Z Feb 09 20:54:08  at 
> org.junit.vintage.engine.execution.RunnerExecutor.execute(RunnerExecutor.java:42)
> 2022-02-09T20:54:08.9864691Z Feb 09 20:54:08  at 
> org.junit.vintage.engine.VintageTestEngine.executeAllChildren(VintageTestEngine.java:80)
> 2022-02-09T20:54:08.9866384Z Feb 09 20:54:08  at 
> org.junit.vintage.engine.VintageTestEngine.execute(VintageTestEngine.java:72)
> 2022-02-09T20:54:08.9868138Z Feb 09 20:54:08  at 
> org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:107)
> 2022-02-09T20:54:08.9869980Z Feb 09 20:54:08  at 
> org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:88)
> 2022-02-09T20:54:08.9871255Z Feb 09 20:54:08  at 
> org.junit.platform.launcher.core.EngineExecutionOrchestrator.lambda$execute$0(EngineExecutionOrchestrator.java:54)
> 2022-02-09T20:54:08.9872602Z Feb 09 20:54:08  at 
> org.junit.platform.launcher.core.EngineExecutionOrchestrator.withInterceptedStreams(EngineExecutionOrchestrator.java:67)
> 2022-02-09T20:54:08.9874126Z Feb 09 20:54:08  at 
> org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:52)
> 2022-02-09T20:54:08.9875899Z Feb 09 20:54:08  at 
> org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:114)
> 2022-02-09T20:54:08.9877109Z Feb 09 20:54:08  at 
> org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:86)
> 2022-02-09T20:54:08.9878367Z Feb 09 20:54:08  at 
> org.junit.platform.launcher.core.DefaultLauncherSession$DelegatingLauncher.execute(DefaultLauncherSession.java:86)
> 2022-02-09T20:54:08.9879761Z Feb 09 20:54:08  at 
> org.junit.platform.launcher.core.SessionPerRequestLauncher.execute(SessionPerRequestLauncher.java:53)
> 2022-02-09T20:54:08.9881148Z Feb 09 20:54:08  at 
> org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.execute(JUnitPlatformProvider.java:188)
> 2022-02-09T20:54:08.9882768Z Feb 09 20:54:08  at 
> org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invokeAllTests(JUnitPlatformProvider.java:154)
> 2022-02-09T20:54:08.9884214Z Feb 09 20:54:08  at 
> 
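A `ContainerLaunchException` thrown from `GenericContainer.doStart`, as above, is frequently a transient environment problem in CI. One common mitigation is to wrap container startup in a bounded retry; a generic sketch (the `StartupRetry` helper is hypothetical and not part of Flink or Testcontainers):

```java
import java.util.function.Supplier;

/** Retries a startup action a bounded number of times before giving up. */
public class StartupRetry {
    public static <T> T startWithRetries(Supplier<T> start, int maxAttempts) {
        RuntimeException last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                // e.g. () -> { container.start(); return container; }
                return start.get();
            } catch (RuntimeException e) { // ContainerLaunchException is a RuntimeException
                last = e;
            }
        }
        throw new RuntimeException("startup failed after " + maxAttempts + " attempts", last);
    }

    public static void main(String[] args) {
        int[] calls = {0};
        String result = startWithRetries(() -> {
            if (++calls[0] < 3) {
                throw new RuntimeException("simulated launch failure");
            }
            return "started";
        }, 5);
        System.out.println(result + " after " + calls[0] + " attempts"); // "started after 3 attempts"
    }
}
```

Bounded retries only paper over flakiness; persistent launch failures still need the container logs from the failed run.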

[jira] [Created] (FLINK-26069) KinesisFirehoseSinkITCase failed due to org.testcontainers.containers.ContainerLaunchException: Container startup failed

2022-02-09 Thread Yun Gao (Jira)
Yun Gao created FLINK-26069:
---

 Summary: KinesisFirehoseSinkITCase failed due to 
org.testcontainers.containers.ContainerLaunchException: Container startup failed
 Key: FLINK-26069
 URL: https://issues.apache.org/jira/browse/FLINK-26069
 Project: Flink
  Issue Type: Bug
  Components: Connectors / Kinesis
Affects Versions: 1.15.0
Reporter: Yun Gao



{code:java}
2022-02-09T20:52:36.6208358Z Feb 09 20:52:36 [ERROR] Picked up 
JAVA_TOOL_OPTIONS: -XX:+HeapDumpOnOutOfMemoryError
2022-02-09T20:52:37.8270432Z Feb 09 20:52:37 [INFO] Running 
org.apache.flink.connector.firehose.sink.KinesisFirehoseSinkITCase
2022-02-09T20:54:08.9842331Z Feb 09 20:54:08 [ERROR] Tests run: 1, Failures: 0, 
Errors: 1, Skipped: 0, Time elapsed: 91.02 s <<< FAILURE! - in 
org.apache.flink.connector.firehose.sink.KinesisFirehoseSinkITCase
2022-02-09T20:54:08.9845140Z Feb 09 20:54:08 [ERROR] 
org.apache.flink.connector.firehose.sink.KinesisFirehoseSinkITCase  Time 
elapsed: 91.02 s  <<< ERROR!
2022-02-09T20:54:08.9847119Z Feb 09 20:54:08 
org.testcontainers.containers.ContainerLaunchException: Container startup failed
2022-02-09T20:54:08.9848834Z Feb 09 20:54:08at 
org.testcontainers.containers.GenericContainer.doStart(GenericContainer.java:336)
2022-02-09T20:54:08.9850502Z Feb 09 20:54:08at 
org.testcontainers.containers.GenericContainer.start(GenericContainer.java:317)
2022-02-09T20:54:08.9852012Z Feb 09 20:54:08at 
org.testcontainers.containers.GenericContainer.starting(GenericContainer.java:1066)
2022-02-09T20:54:08.9853695Z Feb 09 20:54:08at 
org.testcontainers.containers.FailureDetectingExternalResource$1.evaluate(FailureDetectingExternalResource.java:29)
2022-02-09T20:54:08.9855316Z Feb 09 20:54:08at 
org.junit.rules.RunRules.evaluate(RunRules.java:20)
2022-02-09T20:54:08.9856955Z Feb 09 20:54:08at 
org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
2022-02-09T20:54:08.9858330Z Feb 09 20:54:08at 
org.junit.runners.ParentRunner.run(ParentRunner.java:413)
2022-02-09T20:54:08.9859838Z Feb 09 20:54:08at 
org.junit.runner.JUnitCore.run(JUnitCore.java:137)
2022-02-09T20:54:08.9861123Z Feb 09 20:54:08at 
org.junit.runner.JUnitCore.run(JUnitCore.java:115)
2022-02-09T20:54:08.9862747Z Feb 09 20:54:08at 
org.junit.vintage.engine.execution.RunnerExecutor.execute(RunnerExecutor.java:42)
2022-02-09T20:54:08.9864691Z Feb 09 20:54:08at 
org.junit.vintage.engine.VintageTestEngine.executeAllChildren(VintageTestEngine.java:80)
2022-02-09T20:54:08.9866384Z Feb 09 20:54:08at 
org.junit.vintage.engine.VintageTestEngine.execute(VintageTestEngine.java:72)
2022-02-09T20:54:08.9868138Z Feb 09 20:54:08at 
org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:107)
2022-02-09T20:54:08.9869980Z Feb 09 20:54:08at 
org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:88)
2022-02-09T20:54:08.9871255Z Feb 09 20:54:08at 
org.junit.platform.launcher.core.EngineExecutionOrchestrator.lambda$execute$0(EngineExecutionOrchestrator.java:54)
2022-02-09T20:54:08.9872602Z Feb 09 20:54:08at 
org.junit.platform.launcher.core.EngineExecutionOrchestrator.withInterceptedStreams(EngineExecutionOrchestrator.java:67)
2022-02-09T20:54:08.9874126Z Feb 09 20:54:08at 
org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:52)
2022-02-09T20:54:08.9875899Z Feb 09 20:54:08at 
org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:114)
2022-02-09T20:54:08.9877109Z Feb 09 20:54:08at 
org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:86)
2022-02-09T20:54:08.9878367Z Feb 09 20:54:08at 
org.junit.platform.launcher.core.DefaultLauncherSession$DelegatingLauncher.execute(DefaultLauncherSession.java:86)
2022-02-09T20:54:08.9879761Z Feb 09 20:54:08at 
org.junit.platform.launcher.core.SessionPerRequestLauncher.execute(SessionPerRequestLauncher.java:53)
2022-02-09T20:54:08.9881148Z Feb 09 20:54:08at 
org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.execute(JUnitPlatformProvider.java:188)
2022-02-09T20:54:08.9882768Z Feb 09 20:54:08at 
org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invokeAllTests(JUnitPlatformProvider.java:154)
2022-02-09T20:54:08.9884214Z Feb 09 20:54:08at 
org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invoke(JUnitPlatformProvider.java:124)
2022-02-09T20:54:08.9885475Z Feb 09 20:54:08at 
org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:428)
2022-02-09T20:54:08.9886856Z Feb 09 20:54:08at 
org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:162)
2022-02-09T20:54:08.9888037Z Feb 09 20:54:08at 

[jira] [Commented] (FLINK-26064) KinesisFirehoseSinkITCase IllegalStateException: Trying to access closed classloader

2022-02-09 Thread Yun Gao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-26064?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1749#comment-1749
 ] 

Yun Gao commented on FLINK-26064:
-

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=31035=logs=d44f43ce-542c-597d-bf94-b0718c71e5e8=ed165f3f-d0f6-524b-5279-86f8ee7d0e2d=44189

> KinesisFirehoseSinkITCase IllegalStateException: Trying to access closed 
> classloader
> 
>
> Key: FLINK-26064
> URL: https://issues.apache.org/jira/browse/FLINK-26064
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kinesis
>Affects Versions: 1.15.0
>Reporter: Piotr Nowojski
>Priority: Critical
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=31044=logs=d44f43ce-542c-597d-bf94-b0718c71e5e8=ed165f3f-d0f6-524b-5279-86f8ee7d0e2d
> (shortened stack trace, as full is too large)
> {noformat}
> Feb 09 20:05:04 java.util.concurrent.ExecutionException: 
> software.amazon.awssdk.core.exception.SdkClientException: Unable to execute 
> HTTP request: Trying to access closed classloader. Please check if you store 
> classloaders directly or indirectly in static fields. If the stacktrace 
> suggests that the leak occurs in a third party library and cannot be fixed 
> immediately, you can disable this check with the configuration 
> 'classloader.check-leaked-classloader'.
> Feb 09 20:05:04   at 
> java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357)
> Feb 09 20:05:04   at 
> java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1908)
> (...)
> Feb 09 20:05:04 Caused by: 
> software.amazon.awssdk.core.exception.SdkClientException: Unable to execute 
> HTTP request: Trying to access closed classloader. Please check if you store 
> classloaders directly or indirectly in static fields. If the stacktrace 
> suggests that the leak occurs in a third party library and cannot be fixed 
> immediately, you can disable this check with the configuration 
> 'classloader.check-leaked-classloader'.
> Feb 09 20:05:04   at 
> software.amazon.awssdk.core.exception.SdkClientException$BuilderImpl.build(SdkClientException.java:98)
> Feb 09 20:05:04   at 
> software.amazon.awssdk.core.exception.SdkClientException.create(SdkClientException.java:43)
> Feb 09 20:05:04   at 
> software.amazon.awssdk.core.internal.http.pipeline.stages.utils.RetryableStageHelper.setLastException(RetryableStageHelper.java:204)
> Feb 09 20:05:04   at 
> software.amazon.awssdk.core.internal.http.pipeline.stages.utils.RetryableStageHelper.setLastException(RetryableStageHelper.java:200)
> Feb 09 20:05:04   at 
> software.amazon.awssdk.core.internal.http.pipeline.stages.AsyncRetryableStage$RetryingExecutor.maybeRetryExecute(AsyncRetryableStage.java:179)
> Feb 09 20:05:04   at 
> software.amazon.awssdk.core.internal.http.pipeline.stages.AsyncRetryableStage$RetryingExecutor.lambda$attemptExecute$1(AsyncRetryableStage.java:159)
> (...)
> Feb 09 20:05:04 Caused by: java.lang.IllegalStateException: Trying to access 
> closed classloader. Please check if you store classloaders directly or 
> indirectly in static fields. If the stacktrace suggests that the leak occurs 
> in a third party library and cannot be fixed immediately, you can disable 
> this check with the configuration 'classloader.check-leaked-classloader'.
> Feb 09 20:05:04   at 
> org.apache.flink.runtime.execution.librarycache.FlinkUserCodeClassLoaders$SafetyNetWrapperClassLoader.ensureInner(FlinkUserCodeClassLoaders.java:164)
> Feb 09 20:05:04   at 
> org.apache.flink.runtime.execution.librarycache.FlinkUserCodeClassLoaders$SafetyNetWrapperClassLoader.getResources(FlinkUserCodeClassLoaders.java:188)
> Feb 09 20:05:04   at 
> java.util.ServiceLoader$LazyIterator.hasNextService(ServiceLoader.java:348)
> Feb 09 20:05:04   at 
> java.util.ServiceLoader$LazyIterator.hasNext(ServiceLoader.java:393)
> Feb 09 20:05:04   at 
> java.util.ServiceLoader$1.hasNext(ServiceLoader.java:474)
> Feb 09 20:05:04   at 
> javax.xml.stream.FactoryFinder$1.run(FactoryFinder.java:352)
> Feb 09 20:05:04   at java.security.AccessController.doPrivileged(Native 
> Method)
> Feb 09 20:05:04   at 
> javax.xml.stream.FactoryFinder.findServiceProvider(FactoryFinder.java:341)
> Feb 09 20:05:04   at 
> javax.xml.stream.FactoryFinder.find(FactoryFinder.java:313)
> Feb 09 20:05:04   at 
> javax.xml.stream.FactoryFinder.find(FactoryFinder.java:227)
> Feb 09 20:05:04   at 
> javax.xml.stream.XMLInputFactory.newInstance(XMLInputFactory.java:154)
> Feb 09 20:05:04   at 
> software.amazon.awssdk.protocols.query.unmarshall.XmlDomParser.createXmlInputFactory(XmlDomParser.java:124)
> Feb 09 20:05:04   at 
> 
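The exception text above points at the `classloader.check-leaked-classloader` option. Assuming a standard Flink configuration file, disabling the check as a temporary workaround would look like this (sketch only; the underlying leak in the third-party library still needs a real fix):

```yaml
# flink-conf.yaml
# Disables the closed-classloader safety net named in the exception message.
# Only a temporary workaround while the leak (here apparently the AWS SDK
# holding on to the user classloader) is being addressed.
classloader.check-leaked-classloader: false
```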

[GitHub] [flink] flinkbot edited a comment on pull request #18699: [FLINK-25937] Restore the environment parallelism before transforming in SinkExpander#expand.

2022-02-09 Thread GitBox


flinkbot edited a comment on pull request #18699:
URL: https://github.com/apache/flink/pull/18699#issuecomment-1034581144


   
   ## CI report:
   
   * 4a9ad6db0e1a01e9c33529b47053643721622eb7 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31084)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-25523) KafkaSourceITCase$KafkaSpecificTests.testTimestamp fails on AZP

2022-02-09 Thread Yun Gao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-25523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1748#comment-1748
 ] 

Yun Gao commented on FLINK-25523:
-

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=31022=logs=c5f0071e-1851-543e-9a45-9ac140befc32=1fb1a56f-e8b5-5a82-00a0-a2db7757b4f5=6879

> KafkaSourceITCase$KafkaSpecificTests.testTimestamp fails on AZP
> ---
>
> Key: FLINK-25523
> URL: https://issues.apache.org/jira/browse/FLINK-25523
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.15.0
>Reporter: Till Rohrmann
>Priority: Critical
>  Labels: test-stability
> Fix For: 1.15.0
>
>
> The test {{KafkaSourceITCase$KafkaSpecificTests.testTimestamp}} fails on AZP 
> with
> {code}
> 2022-01-05T03:08:57.1647316Z java.util.concurrent.TimeoutException: The topic 
> metadata failed to propagate to Kafka broker.
> 2022-01-05T03:08:57.1660635Z  at 
> org.apache.flink.core.testutils.CommonTestUtils.waitUtil(CommonTestUtils.java:214)
> 2022-01-05T03:08:57.1667856Z  at 
> org.apache.flink.core.testutils.CommonTestUtils.waitUtil(CommonTestUtils.java:230)
> 2022-01-05T03:08:57.1668778Z  at 
> org.apache.flink.streaming.connectors.kafka.KafkaTestEnvironmentImpl.createTestTopic(KafkaTestEnvironmentImpl.java:216)
> 2022-01-05T03:08:57.1670072Z  at 
> org.apache.flink.streaming.connectors.kafka.KafkaTestEnvironment.createTestTopic(KafkaTestEnvironment.java:98)
> 2022-01-05T03:08:57.1671078Z  at 
> org.apache.flink.streaming.connectors.kafka.KafkaTestBase.createTestTopic(KafkaTestBase.java:216)
> 2022-01-05T03:08:57.1671942Z  at 
> org.apache.flink.connector.kafka.source.KafkaSourceITCase$KafkaSpecificTests.testTimestamp(KafkaSourceITCase.java:104)
> 2022-01-05T03:08:57.1672619Z  at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 2022-01-05T03:08:57.1673715Z  at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 2022-01-05T03:08:57.1675000Z  at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 2022-01-05T03:08:57.1675907Z  at 
> java.base/java.lang.reflect.Method.invoke(Method.java:566)
> 2022-01-05T03:08:57.1676587Z  at 
> org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:688)
> 2022-01-05T03:08:57.1677316Z  at 
> org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60)
> 2022-01-05T03:08:57.1678380Z  at 
> org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131)
> 2022-01-05T03:08:57.1679264Z  at 
> org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:149)
> 2022-01-05T03:08:57.1680002Z  at 
> org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestableMethod(TimeoutExtension.java:140)
> 2022-01-05T03:08:57.1680776Z  at 
> org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestTemplateMethod(TimeoutExtension.java:92)
> 2022-01-05T03:08:57.1681682Z  at 
> org.junit.jupiter.engine.execution.ExecutableInvoker$ReflectiveInterceptorCall.lambda$ofVoidMethod$0(ExecutableInvoker.java:115)
> 2022-01-05T03:08:57.1682442Z  at 
> org.junit.jupiter.engine.execution.ExecutableInvoker.lambda$invoke$0(ExecutableInvoker.java:105)
> 2022-01-05T03:08:57.1683450Z  at 
> org.junit.jupiter.engine.execution.InvocationInterceptorChain$InterceptedInvocation.proceed(InvocationInterceptorChain.java:106)
> 2022-01-05T03:08:57.1685362Z  at 
> org.junit.jupiter.engine.execution.InvocationInterceptorChain.proceed(InvocationInterceptorChain.java:64)
> 2022-01-05T03:08:57.1686284Z  at 
> org.junit.jupiter.engine.execution.InvocationInterceptorChain.chainAndInvoke(InvocationInterceptorChain.java:45)
> 2022-01-05T03:08:57.1687152Z  at 
> org.junit.jupiter.engine.execution.InvocationInterceptorChain.invoke(InvocationInterceptorChain.java:37)
> 2022-01-05T03:08:57.1687818Z  at 
> org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:104)
> 2022-01-05T03:08:57.1688479Z  at 
> org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:98)
> 2022-01-05T03:08:57.1689376Z  at 
> org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeTestMethod$6(TestMethodTestDescriptor.java:210)
> 2022-01-05T03:08:57.1690108Z  at 
> org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
> 2022-01-05T03:08:57.1690825Z  at 
> org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.invokeTestMethod(TestMethodTestDescriptor.java:206)
> 2022-01-05T03:08:57.1691470Z  at 
> 
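The timeout above originates in `CommonTestUtils.waitUtil`, which polls a condition until a deadline and fails the test if the condition never becomes true. A simplified sketch of such a poll-until-true helper (hypothetical `WaitUtil` class; Flink's real implementation may differ in polling interval and error handling):

```java
import java.time.Duration;
import java.util.concurrent.TimeoutException;
import java.util.function.Supplier;

/** Sketch of a poll-until-true test helper like the waitUtil seen in the trace. */
public class WaitUtil {
    public static void waitUntil(Supplier<Boolean> condition, Duration timeout, String errorMsg)
            throws TimeoutException, InterruptedException {
        long deadline = System.nanoTime() + timeout.toNanos();
        while (!condition.get()) {
            if (System.nanoTime() >= deadline) {
                // This is the TimeoutException surfaced by the failing test.
                throw new TimeoutException(errorMsg);
            }
            Thread.sleep(10); // poll interval; the real helper's may differ
        }
    }

    public static void main(String[] args) throws Exception {
        long start = System.currentTimeMillis();
        waitUntil(() -> System.currentTimeMillis() - start > 50,
                Duration.ofSeconds(5), "condition never became true");
        System.out.println("condition met");
    }
}
```

In this failure the polled condition ("topic metadata propagated to the Kafka broker") never became true before the deadline.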

[jira] [Updated] (FLINK-25523) KafkaSourceITCase$KafkaSpecificTests.testTimestamp fails on AZP

2022-02-09 Thread Yun Gao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25523?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yun Gao updated FLINK-25523:

Affects Version/s: 1.13.5

> KafkaSourceITCase$KafkaSpecificTests.testTimestamp fails on AZP
> ---
>
> Key: FLINK-25523
> URL: https://issues.apache.org/jira/browse/FLINK-25523
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.15.0, 1.13.5
>Reporter: Till Rohrmann
>Priority: Critical
>  Labels: test-stability
> Fix For: 1.15.0
>
>
> The test {{KafkaSourceITCase$KafkaSpecificTests.testTimestamp}} fails on AZP 
> with
> {code}
> 2022-01-05T03:08:57.1647316Z java.util.concurrent.TimeoutException: The topic 
> metadata failed to propagate to Kafka broker.
> 2022-01-05T03:08:57.1660635Z  at 
> org.apache.flink.core.testutils.CommonTestUtils.waitUtil(CommonTestUtils.java:214)
> 2022-01-05T03:08:57.1667856Z  at 
> org.apache.flink.core.testutils.CommonTestUtils.waitUtil(CommonTestUtils.java:230)
> 2022-01-05T03:08:57.1668778Z  at 
> org.apache.flink.streaming.connectors.kafka.KafkaTestEnvironmentImpl.createTestTopic(KafkaTestEnvironmentImpl.java:216)
> 2022-01-05T03:08:57.1670072Z  at 
> org.apache.flink.streaming.connectors.kafka.KafkaTestEnvironment.createTestTopic(KafkaTestEnvironment.java:98)
> 2022-01-05T03:08:57.1671078Z  at 
> org.apache.flink.streaming.connectors.kafka.KafkaTestBase.createTestTopic(KafkaTestBase.java:216)
> 2022-01-05T03:08:57.1671942Z  at 
> org.apache.flink.connector.kafka.source.KafkaSourceITCase$KafkaSpecificTests.testTimestamp(KafkaSourceITCase.java:104)
> 2022-01-05T03:08:57.1672619Z  at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 2022-01-05T03:08:57.1673715Z  at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 2022-01-05T03:08:57.1675000Z  at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 2022-01-05T03:08:57.1675907Z  at 
> java.base/java.lang.reflect.Method.invoke(Method.java:566)
> 2022-01-05T03:08:57.1676587Z  at 
> org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:688)
> 2022-01-05T03:08:57.1677316Z  at 
> org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60)
> 2022-01-05T03:08:57.1678380Z  at 
> org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131)
> 2022-01-05T03:08:57.1679264Z  at 
> org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:149)
> 2022-01-05T03:08:57.1680002Z  at 
> org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestableMethod(TimeoutExtension.java:140)
> 2022-01-05T03:08:57.1680776Z  at 
> org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestTemplateMethod(TimeoutExtension.java:92)
> 2022-01-05T03:08:57.1681682Z  at 
> org.junit.jupiter.engine.execution.ExecutableInvoker$ReflectiveInterceptorCall.lambda$ofVoidMethod$0(ExecutableInvoker.java:115)
> 2022-01-05T03:08:57.1682442Z  at 
> org.junit.jupiter.engine.execution.ExecutableInvoker.lambda$invoke$0(ExecutableInvoker.java:105)
> 2022-01-05T03:08:57.1683450Z  at 
> org.junit.jupiter.engine.execution.InvocationInterceptorChain$InterceptedInvocation.proceed(InvocationInterceptorChain.java:106)
> 2022-01-05T03:08:57.1685362Z  at 
> org.junit.jupiter.engine.execution.InvocationInterceptorChain.proceed(InvocationInterceptorChain.java:64)
> 2022-01-05T03:08:57.1686284Z  at 
> org.junit.jupiter.engine.execution.InvocationInterceptorChain.chainAndInvoke(InvocationInterceptorChain.java:45)
> 2022-01-05T03:08:57.1687152Z  at 
> org.junit.jupiter.engine.execution.InvocationInterceptorChain.invoke(InvocationInterceptorChain.java:37)
> 2022-01-05T03:08:57.1687818Z  at 
> org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:104)
> 2022-01-05T03:08:57.1688479Z  at 
> org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:98)
> 2022-01-05T03:08:57.1689376Z  at 
> org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeTestMethod$6(TestMethodTestDescriptor.java:210)
> 2022-01-05T03:08:57.1690108Z  at 
> org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
> 2022-01-05T03:08:57.1690825Z  at 
> org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.invokeTestMethod(TestMethodTestDescriptor.java:206)
> 2022-01-05T03:08:57.1691470Z  at 
> org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:131)
> 2022-01-05T03:08:57.1692151Z  at 
> 

[GitHub] [flink] flinkbot edited a comment on pull request #18516: [FLINK-25288][tests] add savepoint and metric test cases in source suite of connector testframe

2022-02-09 Thread GitBox


flinkbot edited a comment on pull request #18516:
URL: https://github.com/apache/flink/pull/18516#issuecomment-1022021934


   
   ## CI report:
   
   * 42bf48fdbba63580ec0d5418579e2e65c413cf38 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30475)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   






[GitHub] [flink] flinkbot commented on pull request #18699: [FLINK-25937] Restore the environment parallelism before transforming in SinkExpander#expand.

2022-02-09 Thread GitBox


flinkbot commented on pull request #18699:
URL: https://github.com/apache/flink/pull/18699#issuecomment-1034582784


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 4a9ad6db0e1a01e9c33529b47053643721622eb7 (Thu Feb 10 
07:33:52 UTC 2022)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.
Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   






[jira] [Commented] (FLINK-26064) KinesisFirehoseSinkITCase IllegalStateException: Trying to access closed classloader

2022-02-09 Thread Yun Gao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-26064?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17489998#comment-17489998
 ] 

Yun Gao commented on FLINK-26064:
-

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=31012=logs=d44f43ce-542c-597d-bf94-b0718c71e5e8=ed165f3f-d0f6-524b-5279-86f8ee7d0e2d=43799

> KinesisFirehoseSinkITCase IllegalStateException: Trying to access closed 
> classloader
> 
>
> Key: FLINK-26064
> URL: https://issues.apache.org/jira/browse/FLINK-26064
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kinesis
>Affects Versions: 1.15.0
>Reporter: Piotr Nowojski
>Priority: Critical
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=31044=logs=d44f43ce-542c-597d-bf94-b0718c71e5e8=ed165f3f-d0f6-524b-5279-86f8ee7d0e2d
> (shortened stack trace, as full is too large)
> {noformat}
> Feb 09 20:05:04 java.util.concurrent.ExecutionException: 
> software.amazon.awssdk.core.exception.SdkClientException: Unable to execute 
> HTTP request: Trying to access closed classloader. Please check if you store 
> classloaders directly or indirectly in static fields. If the stacktrace 
> suggests that the leak occurs in a third party library and cannot be fixed 
> immediately, you can disable this check with the configuration 
> 'classloader.check-leaked-classloader'.
> Feb 09 20:05:04   at 
> java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357)
> Feb 09 20:05:04   at 
> java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1908)
> (...)
> Feb 09 20:05:04 Caused by: 
> software.amazon.awssdk.core.exception.SdkClientException: Unable to execute 
> HTTP request: Trying to access closed classloader. Please check if you store 
> classloaders directly or indirectly in static fields. If the stacktrace 
> suggests that the leak occurs in a third party library and cannot be fixed 
> immediately, you can disable this check with the configuration 
> 'classloader.check-leaked-classloader'.
> Feb 09 20:05:04   at 
> software.amazon.awssdk.core.exception.SdkClientException$BuilderImpl.build(SdkClientException.java:98)
> Feb 09 20:05:04   at 
> software.amazon.awssdk.core.exception.SdkClientException.create(SdkClientException.java:43)
> Feb 09 20:05:04   at 
> software.amazon.awssdk.core.internal.http.pipeline.stages.utils.RetryableStageHelper.setLastException(RetryableStageHelper.java:204)
> Feb 09 20:05:04   at 
> software.amazon.awssdk.core.internal.http.pipeline.stages.utils.RetryableStageHelper.setLastException(RetryableStageHelper.java:200)
> Feb 09 20:05:04   at 
> software.amazon.awssdk.core.internal.http.pipeline.stages.AsyncRetryableStage$RetryingExecutor.maybeRetryExecute(AsyncRetryableStage.java:179)
> Feb 09 20:05:04   at 
> software.amazon.awssdk.core.internal.http.pipeline.stages.AsyncRetryableStage$RetryingExecutor.lambda$attemptExecute$1(AsyncRetryableStage.java:159)
> (...)
> Feb 09 20:05:04 Caused by: java.lang.IllegalStateException: Trying to access 
> closed classloader. Please check if you store classloaders directly or 
> indirectly in static fields. If the stacktrace suggests that the leak occurs 
> in a third party library and cannot be fixed immediately, you can disable 
> this check with the configuration 'classloader.check-leaked-classloader'.
> Feb 09 20:05:04   at 
> org.apache.flink.runtime.execution.librarycache.FlinkUserCodeClassLoaders$SafetyNetWrapperClassLoader.ensureInner(FlinkUserCodeClassLoaders.java:164)
> Feb 09 20:05:04   at 
> org.apache.flink.runtime.execution.librarycache.FlinkUserCodeClassLoaders$SafetyNetWrapperClassLoader.getResources(FlinkUserCodeClassLoaders.java:188)
> Feb 09 20:05:04   at 
> java.util.ServiceLoader$LazyIterator.hasNextService(ServiceLoader.java:348)
> Feb 09 20:05:04   at 
> java.util.ServiceLoader$LazyIterator.hasNext(ServiceLoader.java:393)
> Feb 09 20:05:04   at 
> java.util.ServiceLoader$1.hasNext(ServiceLoader.java:474)
> Feb 09 20:05:04   at 
> javax.xml.stream.FactoryFinder$1.run(FactoryFinder.java:352)
> Feb 09 20:05:04   at java.security.AccessController.doPrivileged(Native 
> Method)
> Feb 09 20:05:04   at 
> javax.xml.stream.FactoryFinder.findServiceProvider(FactoryFinder.java:341)
> Feb 09 20:05:04   at 
> javax.xml.stream.FactoryFinder.find(FactoryFinder.java:313)
> Feb 09 20:05:04   at 
> javax.xml.stream.FactoryFinder.find(FactoryFinder.java:227)
> Feb 09 20:05:04   at 
> javax.xml.stream.XMLInputFactory.newInstance(XMLInputFactory.java:154)
> Feb 09 20:05:04   at 
> software.amazon.awssdk.protocols.query.unmarshall.XmlDomParser.createXmlInputFactory(XmlDomParser.java:124)
> Feb 09 20:05:04   at 
> 

[GitHub] [flink] flinkbot commented on pull request #18699: [FLINK-25937] Restore the environment parallelism before transforming in SinkExpander#expand.

2022-02-09 Thread GitBox


flinkbot commented on pull request #18699:
URL: https://github.com/apache/flink/pull/18699#issuecomment-1034581144


   
   ## CI report:
   
   * 4a9ad6db0e1a01e9c33529b47053643721622eb7 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   






[jira] [Created] (FLINK-26068) ZooKeeperJobGraphsStoreITCase.testPutAndRemoveJobGraph failed on azure

2022-02-09 Thread Yun Gao (Jira)
Yun Gao created FLINK-26068:
---

 Summary: ZooKeeperJobGraphsStoreITCase.testPutAndRemoveJobGraph 
failed on azure
 Key: FLINK-26068
 URL: https://issues.apache.org/jira/browse/FLINK-26068
 Project: Flink
  Issue Type: Bug
  Components: Runtime / Coordination
Affects Versions: 1.15.0
Reporter: Yun Gao



{code:java}

Feb 09 13:41:24 
org.apache.flink.shaded.zookeeper3.org.apache.zookeeper.KeeperException$BadVersionException:
 KeeperErrorCode = BadVersion for 
/flink/default/testPutAndRemoveJobGraph/372cd3c2dc2c8b3071d3f8fec2285fb9
Feb 09 13:41:24 at 
org.apache.flink.shaded.zookeeper3.org.apache.zookeeper.KeeperException.create(KeeperException.java:122)
Feb 09 13:41:24 at 
org.apache.flink.shaded.zookeeper3.org.apache.zookeeper.KeeperException.create(KeeperException.java:54)
Feb 09 13:41:24 at 
org.apache.flink.shaded.zookeeper3.org.apache.zookeeper.ZooKeeper.setData(ZooKeeper.java:2384)
Feb 09 13:41:24 at 
org.apache.flink.shaded.curator5.org.apache.curator.framework.imps.SetDataBuilderImpl$7.call(SetDataBuilderImpl.java:398)
Feb 09 13:41:24 at 
org.apache.flink.shaded.curator5.org.apache.curator.framework.imps.SetDataBuilderImpl$7.call(SetDataBuilderImpl.java:385)
Feb 09 13:41:24 at 
org.apache.flink.shaded.curator5.org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:93)
Feb 09 13:41:24 at 
org.apache.flink.shaded.curator5.org.apache.curator.framework.imps.SetDataBuilderImpl.pathInForeground(SetDataBuilderImpl.java:382)
Feb 09 13:41:24 at 
org.apache.flink.shaded.curator5.org.apache.curator.framework.imps.SetDataBuilderImpl.forPath(SetDataBuilderImpl.java:358)
Feb 09 13:41:24 at 
org.apache.flink.shaded.curator5.org.apache.curator.framework.imps.SetDataBuilderImpl.forPath(SetDataBuilderImpl.java:36)
Feb 09 13:41:24 at 
org.apache.flink.runtime.zookeeper.ZooKeeperStateHandleStore.setStateHandle(ZooKeeperStateHandleStore.java:268)
Feb 09 13:41:24 at 
org.apache.flink.runtime.zookeeper.ZooKeeperStateHandleStore.replace(ZooKeeperStateHandleStore.java:232)
Feb 09 13:41:24 at 
org.apache.flink.runtime.zookeeper.ZooKeeperStateHandleStore.replace(ZooKeeperStateHandleStore.java:86)
Feb 09 13:41:24 at 
org.apache.flink.runtime.jobmanager.DefaultJobGraphStore.putJobGraph(DefaultJobGraphStore.java:226)
Feb 09 13:41:24 at 
org.apache.flink.runtime.jobmanager.ZooKeeperJobGraphsStoreITCase.testPutAndRemoveJobGraph(ZooKeeperJobGraphsStoreITCase.java:123)
Feb 09 13:41:24 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
Method)
Feb 09 13:41:24 at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
Feb 09 13:41:24 at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
Feb 09 13:41:24 at java.lang.reflect.Method.invoke(Method.java:498)
Feb 09 13:41:24 at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
Feb 09 13:41:24 at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
Feb 09 13:41:24 at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
Feb 09 13:41:24 at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
Feb 09 13:41:24 at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
Feb 09 13:41:24 at 
org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45)
Feb 09 13:41:24 at 
org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
Feb 09 13:41:24 at 
org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
Feb 09 13:41:24 at 
org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
Feb 09 13:41:24 at 
org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
Feb 09 13:41:24 at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
Feb 09 13:41:24 at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
Feb 09 13:41:24 at 
org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
Feb 09 13:41:24 at 
org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
Feb 09 13:41:24 at 
org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
Feb 09 13:41:24 at 
org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
Feb 09 13:41:24 at 
org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
Feb 09 13:41:24 at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
Feb 09 13:41:24 at 
org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
Feb 09 13:41:24 at 

[jira] [Updated] (FLINK-26068) ZooKeeperJobGraphsStoreITCase.testPutAndRemoveJobGraph failed on azure

2022-02-09 Thread Yun Gao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26068?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yun Gao updated FLINK-26068:

Labels: test-stability  (was: )

> ZooKeeperJobGraphsStoreITCase.testPutAndRemoveJobGraph failed on azure
> --
>
> Key: FLINK-26068
> URL: https://issues.apache.org/jira/browse/FLINK-26068
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.15.0
>Reporter: Yun Gao
>Priority: Major
>  Labels: test-stability
>
> {code:java}
> Feb 09 13:41:24 
> org.apache.flink.shaded.zookeeper3.org.apache.zookeeper.KeeperException$BadVersionException:
>  KeeperErrorCode = BadVersion for 
> /flink/default/testPutAndRemoveJobGraph/372cd3c2dc2c8b3071d3f8fec2285fb9
> Feb 09 13:41:24   at 
> org.apache.flink.shaded.zookeeper3.org.apache.zookeeper.KeeperException.create(KeeperException.java:122)
> Feb 09 13:41:24   at 
> org.apache.flink.shaded.zookeeper3.org.apache.zookeeper.KeeperException.create(KeeperException.java:54)
> Feb 09 13:41:24   at 
> org.apache.flink.shaded.zookeeper3.org.apache.zookeeper.ZooKeeper.setData(ZooKeeper.java:2384)
> Feb 09 13:41:24   at 
> org.apache.flink.shaded.curator5.org.apache.curator.framework.imps.SetDataBuilderImpl$7.call(SetDataBuilderImpl.java:398)
> Feb 09 13:41:24   at 
> org.apache.flink.shaded.curator5.org.apache.curator.framework.imps.SetDataBuilderImpl$7.call(SetDataBuilderImpl.java:385)
> Feb 09 13:41:24   at 
> org.apache.flink.shaded.curator5.org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:93)
> Feb 09 13:41:24   at 
> org.apache.flink.shaded.curator5.org.apache.curator.framework.imps.SetDataBuilderImpl.pathInForeground(SetDataBuilderImpl.java:382)
> Feb 09 13:41:24   at 
> org.apache.flink.shaded.curator5.org.apache.curator.framework.imps.SetDataBuilderImpl.forPath(SetDataBuilderImpl.java:358)
> Feb 09 13:41:24   at 
> org.apache.flink.shaded.curator5.org.apache.curator.framework.imps.SetDataBuilderImpl.forPath(SetDataBuilderImpl.java:36)
> Feb 09 13:41:24   at 
> org.apache.flink.runtime.zookeeper.ZooKeeperStateHandleStore.setStateHandle(ZooKeeperStateHandleStore.java:268)
> Feb 09 13:41:24   at 
> org.apache.flink.runtime.zookeeper.ZooKeeperStateHandleStore.replace(ZooKeeperStateHandleStore.java:232)
> Feb 09 13:41:24   at 
> org.apache.flink.runtime.zookeeper.ZooKeeperStateHandleStore.replace(ZooKeeperStateHandleStore.java:86)
> Feb 09 13:41:24   at 
> org.apache.flink.runtime.jobmanager.DefaultJobGraphStore.putJobGraph(DefaultJobGraphStore.java:226)
> Feb 09 13:41:24   at 
> org.apache.flink.runtime.jobmanager.ZooKeeperJobGraphsStoreITCase.testPutAndRemoveJobGraph(ZooKeeperJobGraphsStoreITCase.java:123)
> Feb 09 13:41:24   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)
> Feb 09 13:41:24   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> Feb 09 13:41:24   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> Feb 09 13:41:24   at java.lang.reflect.Method.invoke(Method.java:498)
> Feb 09 13:41:24   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
> Feb 09 13:41:24   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> Feb 09 13:41:24   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
> Feb 09 13:41:24   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> Feb 09 13:41:24   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> Feb 09 13:41:24   at 
> org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45)
> Feb 09 13:41:24   at 
> org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
> Feb 09 13:41:24   at 
> org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
> Feb 09 13:41:24   at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
> Feb 09 13:41:24   at 
> org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
> Feb 09 13:41:24   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
> Feb 09 13:41:24   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
> Feb 09 13:41:24   at 
> org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
> Feb 09 13:41:24   at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
> Feb 09 13:41:24   at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
> Feb 09 13:41:24   at 
> org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
> 

[jira] [Commented] (FLINK-25937) SQL Client end-to-end test e2e fails on AZP

2022-02-09 Thread Gen Luo (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-25937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17489997#comment-17489997
 ] 

Gen Luo commented on FLINK-25937:
-

The reason has been identified.

The parallelism of a transformation that uses the default parallelism (-1) is 
resolved during transforming, using the default parallelism configured in the 
environment. However, in SinkExpander#expand the environment parallelism is set 
to -1 at the entrance of the method, in order to verify whether the parallelism 
of an expanded transformation has been set explicitly. The environment 
parallelism is restored when exiting the method, but at present the 
transforming happens inside this scope. Consequently, if the parallelism of a 
sink is not set, the parallelism of the sink transformation and of all 
transformations expanded from it is never resolved, so the generated JobGraph 
contains vertices with parallelism -1, which triggers the assertion failure in 
the AdaptiveScheduler.

The bug can be fixed by restoring the environment parallelism before 
transforming the sink transformations. The pull request has been created, and 
has been verified with UpsertKafkaTableITCase.
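The ordering problem described above can be illustrated with a minimal, self-contained sketch. This is not Flink's actual code (the field and method names here are invented for illustration); it only shows why resolving a -1 parallelism against a sentinel environment value of -1, instead of against the restored default, leaves -1 in the resulting graph.

```java
import java.util.ArrayList;
import java.util.List;

public class SinkExpandOrderingSketch {
    static int envParallelism = 4;            // the environment's configured default
    static List<Integer> resolved = new ArrayList<>();

    // Stand-in for transform(): a declared parallelism of -1 is resolved
    // against the *current* environment default.
    static void transform(int declared) {
        resolved.add(declared == -1 ? envParallelism : declared);
    }

    // Stand-in for SinkExpander#expand with the two possible orderings.
    static void expandSink(boolean restoreBeforeTransform) {
        resolved.clear();
        int saved = envParallelism;
        envParallelism = -1;                  // sentinel used while expanding
        int expandedDeclared = -1;            // the sink did not set a parallelism
        if (restoreBeforeTransform) {
            envParallelism = saved;           // fix: restore first, then transform
            transform(expandedDeclared);
        } else {
            transform(expandedDeclared);      // bug: resolves against the sentinel -1
            envParallelism = saved;
        }
    }

    public static void main(String[] args) {
        expandSink(false);
        System.out.println("buggy order -> " + resolved);   // [-1]
        expandSink(true);
        System.out.println("fixed order -> " + resolved);   // [4]
    }
}
```

With the buggy ordering the vertex keeps parallelism -1, which is exactly the precondition the AdaptiveScheduler rejects.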

> SQL Client end-to-end test e2e fails on AZP
> ---
>
> Key: FLINK-25937
> URL: https://issues.apache.org/jira/browse/FLINK-25937
> Project: Flink
>  Issue Type: Bug
>  Components: API / Core, API / DataStream, Runtime / Coordination, 
> Table SQL / API
>Affects Versions: 1.15.0
>Reporter: Till Rohrmann
>Assignee: Gen Luo
>Priority: Blocker
>  Labels: pull-request-available, test-stability
>
> The {{SQL Client end-to-end test}} e2e tests fails on AZP when using the 
> {{AdaptiveScheduler}} because the scheduler expects that the parallelism is 
> set for all vertices:
> {code}
> Feb 03 03:45:13 org.apache.flink.runtime.client.JobInitializationException: 
> Could not start the JobMaster.
> Feb 03 03:45:13   at 
> org.apache.flink.runtime.jobmaster.DefaultJobMasterServiceProcess.lambda$new$0(DefaultJobMasterServiceProcess.java:97)
> Feb 03 03:45:13   at 
> java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:774)
> Feb 03 03:45:13   at 
> java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:750)
> Feb 03 03:45:13   at 
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
> Feb 03 03:45:13   at 
> java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1609)
> Feb 03 03:45:13   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> Feb 03 03:45:13   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> Feb 03 03:45:13   at java.lang.Thread.run(Thread.java:748)
> Feb 03 03:45:13 Caused by: java.util.concurrent.CompletionException: 
> java.lang.IllegalStateException: The adaptive scheduler expects the 
> parallelism being set for each JobVertex (violated JobVertex: 
> f74b775b58627a33e46b8c155b320255).
> Feb 03 03:45:13   at 
> java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:273)
> Feb 03 03:45:13   at 
> java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:280)
> Feb 03 03:45:13   at 
> java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1606)
> Feb 03 03:45:13   ... 3 more
> Feb 03 03:45:13 Caused by: java.lang.IllegalStateException: The adaptive 
> scheduler expects the parallelism being set for each JobVertex (violated 
> JobVertex: f74b775b58627a33e46b8c155b320255).
> Feb 03 03:45:13   at 
> org.apache.flink.util.Preconditions.checkState(Preconditions.java:215)
> Feb 03 03:45:13   at 
> org.apache.flink.runtime.scheduler.adaptive.AdaptiveScheduler.assertPreconditions(AdaptiveScheduler.java:296)
> Feb 03 03:45:13   at 
> org.apache.flink.runtime.scheduler.adaptive.AdaptiveScheduler.(AdaptiveScheduler.java:230)
> Feb 03 03:45:13   at 
> org.apache.flink.runtime.scheduler.adaptive.AdaptiveSchedulerFactory.createInstance(AdaptiveSchedulerFactory.java:122)
> Feb 03 03:45:13   at 
> org.apache.flink.runtime.jobmaster.DefaultSlotPoolServiceSchedulerFactory.createScheduler(DefaultSlotPoolServiceSchedulerFactory.java:115)
> Feb 03 03:45:13   at 
> org.apache.flink.runtime.jobmaster.JobMaster.createScheduler(JobMaster.java:345)
> Feb 03 03:45:13   at 
> org.apache.flink.runtime.jobmaster.JobMaster.(JobMaster.java:322)
> Feb 03 03:45:13   at 
> org.apache.flink.runtime.jobmaster.factories.DefaultJobMasterServiceFactory.internalCreateJobMasterService(DefaultJobMasterServiceFactory.java:106)
> Feb 03 03:45:13   at 
> org.apache.flink.runtime.jobmaster.factories.DefaultJobMasterServiceFactory.lambda$createJobMasterService$0(DefaultJobMasterServiceFactory.java:94)
> Feb 03 03:45:13   

[GitHub] [flink] pltbkd opened a new pull request #18699: [FLINK-25937] Restore the environment parallelism before transforming in SinkExpander#expand.

2022-02-09 Thread GitBox


pltbkd opened a new pull request #18699:
URL: https://github.com/apache/flink/pull/18699


   The parallelism of a transformation that uses the default parallelism (-1) 
is resolved during transforming, using the default parallelism configured in 
the environment. However, in SinkExpander#expand the environment parallelism is 
set to -1 at the entrance of the method, in order to verify whether the 
parallelism of an expanded transformation has been set explicitly. The 
environment parallelism is restored when exiting the method, but at present the 
transforming happens inside this scope. Consequently, if the parallelism of a 
sink is not set, the parallelism of the sink transformation and of all 
transformations expanded from it is never resolved, so the generated JobGraph 
contains vertices with parallelism -1, causing the assertion failure in the 
AdaptiveScheduler.
   
   This PR fixes the bug by restoring the environment parallelism before 
transforming the sink transformations. 






[jira] [Commented] (FLINK-25940) pyflink/datastream/tests/test_data_stream.py::StreamingModeDataStreamTests::test_keyed_process_function_with_state failed on AZP

2022-02-09 Thread Yun Gao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-25940?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17489996#comment-17489996
 ] 

Yun Gao commented on FLINK-25940:
-

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=31000=logs=9cada3cb-c1d3-5621-16da-0f718fb86602=c67e71ed-6451-5d26-8920-5a8cf9651901=24147

> pyflink/datastream/tests/test_data_stream.py::StreamingModeDataStreamTests::test_keyed_process_function_with_state
>  failed on AZP
> 
>
> Key: FLINK-25940
> URL: https://issues.apache.org/jira/browse/FLINK-25940
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python
>Affects Versions: 1.15.0
>Reporter: Till Rohrmann
>Assignee: Huang Xingbo
>Priority: Critical
>  Labels: test-stability
>
> The test 
> {{pyflink/datastream/tests/test_data_stream.py::StreamingModeDataStreamTests::test_keyed_process_function_with_state}}
>  fails on AZP:
> {code}
> 2022-02-02T17:44:12.1898582Z Feb 02 17:44:12 
> === FAILURES 
> ===
> 2022-02-02T17:44:12.1899860Z Feb 02 17:44:12 _ 
> StreamingModeDataStreamTests.test_keyed_process_function_with_state __
> 2022-02-02T17:44:12.1900493Z Feb 02 17:44:12 
> 2022-02-02T17:44:12.1901218Z Feb 02 17:44:12 self = 
>  testMethod=test_keyed_process_function_with_state>
> 2022-02-02T17:44:12.1901948Z Feb 02 17:44:12 
> 2022-02-02T17:44:12.1902745Z Feb 02 17:44:12 def 
> test_keyed_process_function_with_state(self):
> 2022-02-02T17:44:12.1903722Z Feb 02 17:44:12 
> self.env.get_config().set_auto_watermark_interval(2000)
> 2022-02-02T17:44:12.1904473Z Feb 02 17:44:12 
> self.env.set_stream_time_characteristic(TimeCharacteristic.EventTime)
> 2022-02-02T17:44:12.1906780Z Feb 02 17:44:12 data_stream = 
> self.env.from_collection([(1, 'hi', '1603708211000'),
> 2022-02-02T17:44:12.1908034Z Feb 02 17:44:12  
>(2, 'hello', '1603708224000'),
> 2022-02-02T17:44:12.1909166Z Feb 02 17:44:12  
>(3, 'hi', '1603708226000'),
> 2022-02-02T17:44:12.1910122Z Feb 02 17:44:12  
>(4, 'hello', '1603708289000'),
> 2022-02-02T17:44:12.1911099Z Feb 02 17:44:12  
>(5, 'hi', '1603708291000'),
> 2022-02-02T17:44:12.1912451Z Feb 02 17:44:12  
>(6, 'hello', '1603708293000')],
> 2022-02-02T17:44:12.1913456Z Feb 02 17:44:12  
>   type_info=Types.ROW([Types.INT(), Types.STRING(),
> 2022-02-02T17:44:12.1914338Z Feb 02 17:44:12  
>Types.STRING()]))
> 2022-02-02T17:44:12.1914811Z Feb 02 17:44:12 
> 2022-02-02T17:44:12.1915317Z Feb 02 17:44:12 class 
> MyTimestampAssigner(TimestampAssigner):
> 2022-02-02T17:44:12.1915724Z Feb 02 17:44:12 
> 2022-02-02T17:44:12.1916782Z Feb 02 17:44:12 def 
> extract_timestamp(self, value, record_timestamp) -> int:
> 2022-02-02T17:44:12.1917621Z Feb 02 17:44:12 return 
> int(value[2])
> 2022-02-02T17:44:12.1918262Z Feb 02 17:44:12 
> 2022-02-02T17:44:12.1918855Z Feb 02 17:44:12 class 
> MyProcessFunction(KeyedProcessFunction):
> 2022-02-02T17:44:12.1919363Z Feb 02 17:44:12 
> 2022-02-02T17:44:12.1919744Z Feb 02 17:44:12 def __init__(self):
> 2022-02-02T17:44:12.1920143Z Feb 02 17:44:12 self.value_state 
> = None
> 2022-02-02T17:44:12.1920648Z Feb 02 17:44:12 self.list_state 
> = None
> 2022-02-02T17:44:12.1921298Z Feb 02 17:44:12 self.map_state = 
> None
> 2022-02-02T17:44:12.1921864Z Feb 02 17:44:12 
> 2022-02-02T17:44:12.1922479Z Feb 02 17:44:12 def open(self, 
> runtime_context: RuntimeContext):
> 2022-02-02T17:44:12.1923907Z Feb 02 17:44:12 
> value_state_descriptor = ValueStateDescriptor('value_state', Types.INT())
> 2022-02-02T17:44:12.1924922Z Feb 02 17:44:12 self.value_state 
> = runtime_context.get_state(value_state_descriptor)
> 2022-02-02T17:44:12.1925741Z Feb 02 17:44:12 
> list_state_descriptor = ListStateDescriptor('list_state', Types.INT())
> 2022-02-02T17:44:12.1926482Z Feb 02 17:44:12 self.list_state 
> = runtime_context.get_list_state(list_state_descriptor)
> 2022-02-02T17:44:12.1927465Z Feb 02 17:44:12 
> map_state_descriptor = MapStateDescriptor('map_state', Types.INT(), 
> Types.STRING())
> 2022-02-02T17:44:12.1927998Z Feb 02 17:44:12 state_ttl_config 
> = StateTtlConfig \

[GitHub] [flink] flinkbot edited a comment on pull request #18531: [FLINK-24897] Enable application mode on YARN to use usrlib

2022-02-09 Thread GitBox


flinkbot edited a comment on pull request #18531:
URL: https://github.com/apache/flink/pull/18531#issuecomment-1022985542


   
   ## CI report:
   
   * 3cd49ec96a15f95b0b9deb7b71b59038d7fe11ef Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30991)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   






[jira] [Closed] (FLINK-24440) Announce and combine latest watermarks across SourceReaders

2022-02-09 Thread Piotr Nowojski (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-24440?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Piotr Nowojski closed FLINK-24440.
--
Resolution: Fixed

merged commit 10d6d4f into apache:master

> Announce and combine latest watermarks across SourceReaders
> ---
>
> Key: FLINK-24440
> URL: https://issues.apache.org/jira/browse/FLINK-24440
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / Common
>Reporter: Piotr Nowojski
>Assignee: Piotr Nowojski
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.15.0
>
>
> # Each SourceReader should inform its SourceCoordinator about the latest 
> watermark that it has emitted so far
> # SourceCoordinators should combine those watermarks and broadcast the 
> aggregated result back to all SourceReaders
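The two steps in the description can be sketched in a few lines. This is not Flink's actual implementation (the class and method names below are illustrative); it only shows the combining rule: the coordinator tracks the latest watermark announced by each reader and broadcasts the minimum, since the aggregated watermark can only advance as far as the slowest reader.

```java
import java.util.HashMap;
import java.util.Map;

public class WatermarkAggregatorSketch {
    // Latest watermark announced by each SourceReader, keyed by reader id.
    private final Map<Integer, Long> latestPerReader = new HashMap<>();

    // Called when a reader announces its latest emitted watermark;
    // returns the combined (minimum) watermark to broadcast back.
    long onWatermarkAnnounced(int readerId, long watermark) {
        latestPerReader.put(readerId, watermark);
        return latestPerReader.values().stream()
                .mapToLong(Long::longValue)
                .min()
                .orElse(Long.MIN_VALUE);
    }

    public static void main(String[] args) {
        WatermarkAggregatorSketch agg = new WatermarkAggregatorSketch();
        System.out.println(agg.onWatermarkAnnounced(0, 100L)); // 100
        System.out.println(agg.onWatermarkAnnounced(1, 50L));  // 50
        System.out.println(agg.onWatermarkAnnounced(0, 120L)); // 50 (reader 1 lags)
        System.out.println(agg.onWatermarkAnnounced(1, 130L)); // 120
    }
}
```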



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-26067) ZooKeeperLeaderElectionConnectionHandlingTest. testLoseLeadershipOnLostConnectionIfTolerateSuspendedConnectionsIsEnabled failed due to timeout

2022-02-09 Thread Yun Gao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26067?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yun Gao updated FLINK-26067:

Labels: test-stability  (was: )

> ZooKeeperLeaderElectionConnectionHandlingTest. 
> testLoseLeadershipOnLostConnectionIfTolerateSuspendedConnectionsIsEnabled 
> failed due to timeout
> --
>
> Key: FLINK-26067
> URL: https://issues.apache.org/jira/browse/FLINK-26067
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.15.0
>Reporter: Yun Gao
>Priority: Major
>  Labels: test-stability
>
> {code:java}
> Feb 09 08:58:56 [ERROR] Tests run: 3, Failures: 0, Errors: 1, Skipped: 0, 
> Time elapsed: 18.67 s <<< FAILURE! - in 
> org.apache.flink.runtime.leaderelection.ZooKeeperLeaderElectionConnectionHandlingTest
> Feb 09 08:58:56 [ERROR] 
> org.apache.flink.runtime.leaderelection.ZooKeeperLeaderElectionConnectionHandlingTest.testLoseLeadershipOnLostConnectionIfTolerateSuspendedConnectionsIsEnabled
>   Time elapsed: 8.096 s  <<< ERROR!
> Feb 09 08:58:56 java.util.concurrent.TimeoutException
> Feb 09 08:58:56   at 
> org.apache.flink.core.testutils.OneShotLatch.await(OneShotLatch.java:106)
> Feb 09 08:58:56   at 
> org.apache.flink.runtime.leaderelection.ZooKeeperLeaderElectionConnectionHandlingTest$TestingContender.awaitRevokeLeadership(ZooKeeperLeaderElectionConnectionHandlingTest.java:211)
> Feb 09 08:58:56   at 
> org.apache.flink.runtime.leaderelection.ZooKeeperLeaderElectionConnectionHandlingTest.lambda$testLoseLeadershipOnLostConnectionIfTolerateSuspendedConnectionsIsEnabled$2(ZooKeeperLeaderElectionConnectionHandlingTest.java:100)
> Feb 09 08:58:56   at 
> org.apache.flink.runtime.leaderelection.ZooKeeperLeaderElectionConnectionHandlingTest.runTestWithZooKeeperConnectionProblem(ZooKeeperLeaderElectionConnectionHandlingTest.java:164)
> Feb 09 08:58:56   at 
> org.apache.flink.runtime.leaderelection.ZooKeeperLeaderElectionConnectionHandlingTest.runTestWithLostZooKeeperConnection(ZooKeeperLeaderElectionConnectionHandlingTest.java:109)
> Feb 09 08:58:56   at 
> org.apache.flink.runtime.leaderelection.ZooKeeperLeaderElectionConnectionHandlingTest.testLoseLeadershipOnLostConnectionIfTolerateSuspendedConnectionsIsEnabled(ZooKeeperLeaderElectionConnectionHandlingTest.java:96)
> Feb 09 08:58:56   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)
> Feb 09 08:58:56   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> Feb 09 08:58:56   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> Feb 09 08:58:56   at java.lang.reflect.Method.invoke(Method.java:498)
> Feb 09 08:58:56   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
> Feb 09 08:58:56   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> Feb 09 08:58:56   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
> Feb 09 08:58:56   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> Feb 09 08:58:56   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> Feb 09 08:58:56   at 
> org.apache.flink.runtime.util.TestingFatalErrorHandlerResource$CloseableStatement.evaluate(TestingFatalErrorHandlerResource.java:94)
> Feb 09 08:58:56   at 
> org.apache.flink.runtime.util.TestingFatalErrorHandlerResource$CloseableStatement.access$200(TestingFatalErrorHandlerResource.java:86)
> Feb 09 08:58:56   at 
> org.apache.flink.runtime.util.TestingFatalErrorHandlerResource$1.evaluate(TestingFatalErrorHandlerResource.java:58)
> Feb 09 08:58:56   at 
> org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
> Feb 09 08:58:56   at 
> org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45)
> Feb 09 08:58:56   at 
> org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
> Feb 09 08:58:56   at 
> org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
> Feb 09 08:58:56   at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
> Feb 09 08:58:56   at 
> org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
> Feb 09 08:58:56   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
> Feb 09 08:58:56   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
> Feb 09 08:58:56   at 
> org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
> Feb 09 08:58:56   at 
> 

[jira] [Created] (FLINK-26067) ZooKeeperLeaderElectionConnectionHandlingTest. testLoseLeadershipOnLostConnectionIfTolerateSuspendedConnectionsIsEnabled failed due to timeout

2022-02-09 Thread Yun Gao (Jira)
Yun Gao created FLINK-26067:
---

 Summary: ZooKeeperLeaderElectionConnectionHandlingTest. 
testLoseLeadershipOnLostConnectionIfTolerateSuspendedConnectionsIsEnabled 
failed due to timeout
 Key: FLINK-26067
 URL: https://issues.apache.org/jira/browse/FLINK-26067
 Project: Flink
  Issue Type: Bug
  Components: Runtime / Coordination
Affects Versions: 1.15.0
Reporter: Yun Gao



{code:java}
Feb 09 08:58:56 [ERROR] Tests run: 3, Failures: 0, Errors: 1, Skipped: 0, Time 
elapsed: 18.67 s <<< FAILURE! - in 
org.apache.flink.runtime.leaderelection.ZooKeeperLeaderElectionConnectionHandlingTest
Feb 09 08:58:56 [ERROR] 
org.apache.flink.runtime.leaderelection.ZooKeeperLeaderElectionConnectionHandlingTest.testLoseLeadershipOnLostConnectionIfTolerateSuspendedConnectionsIsEnabled
  Time elapsed: 8.096 s  <<< ERROR!
Feb 09 08:58:56 java.util.concurrent.TimeoutException
Feb 09 08:58:56 at 
org.apache.flink.core.testutils.OneShotLatch.await(OneShotLatch.java:106)
Feb 09 08:58:56 at 
org.apache.flink.runtime.leaderelection.ZooKeeperLeaderElectionConnectionHandlingTest$TestingContender.awaitRevokeLeadership(ZooKeeperLeaderElectionConnectionHandlingTest.java:211)
Feb 09 08:58:56 at 
org.apache.flink.runtime.leaderelection.ZooKeeperLeaderElectionConnectionHandlingTest.lambda$testLoseLeadershipOnLostConnectionIfTolerateSuspendedConnectionsIsEnabled$2(ZooKeeperLeaderElectionConnectionHandlingTest.java:100)
Feb 09 08:58:56 at 
org.apache.flink.runtime.leaderelection.ZooKeeperLeaderElectionConnectionHandlingTest.runTestWithZooKeeperConnectionProblem(ZooKeeperLeaderElectionConnectionHandlingTest.java:164)
Feb 09 08:58:56 at 
org.apache.flink.runtime.leaderelection.ZooKeeperLeaderElectionConnectionHandlingTest.runTestWithLostZooKeeperConnection(ZooKeeperLeaderElectionConnectionHandlingTest.java:109)
Feb 09 08:58:56 at 
org.apache.flink.runtime.leaderelection.ZooKeeperLeaderElectionConnectionHandlingTest.testLoseLeadershipOnLostConnectionIfTolerateSuspendedConnectionsIsEnabled(ZooKeeperLeaderElectionConnectionHandlingTest.java:96)
Feb 09 08:58:56 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
Method)
Feb 09 08:58:56 at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
Feb 09 08:58:56 at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
Feb 09 08:58:56 at java.lang.reflect.Method.invoke(Method.java:498)
Feb 09 08:58:56 at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
Feb 09 08:58:56 at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
Feb 09 08:58:56 at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
Feb 09 08:58:56 at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
Feb 09 08:58:56 at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
Feb 09 08:58:56 at 
org.apache.flink.runtime.util.TestingFatalErrorHandlerResource$CloseableStatement.evaluate(TestingFatalErrorHandlerResource.java:94)
Feb 09 08:58:56 at 
org.apache.flink.runtime.util.TestingFatalErrorHandlerResource$CloseableStatement.access$200(TestingFatalErrorHandlerResource.java:86)
Feb 09 08:58:56 at 
org.apache.flink.runtime.util.TestingFatalErrorHandlerResource$1.evaluate(TestingFatalErrorHandlerResource.java:58)
Feb 09 08:58:56 at 
org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
Feb 09 08:58:56 at 
org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45)
Feb 09 08:58:56 at 
org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
Feb 09 08:58:56 at 
org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
Feb 09 08:58:56 at 
org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
Feb 09 08:58:56 at 
org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
Feb 09 08:58:56 at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
Feb 09 08:58:56 at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
Feb 09 08:58:56 at 
org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
Feb 09 08:58:56 at 
org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
Feb 09 08:58:56 at 
org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
Feb 09 08:58:56 at 
org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
Feb 09 08:58:56 at 
org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)

{code}
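The test blocks in OneShotLatch.await until leadership is revoked and fails with a TimeoutException when the latch is never triggered. Below is a minimal, self-contained sketch of that await-with-timeout pattern; it is a simplified stand-in built on CountDownLatch, not Flink's actual org.apache.flink.core.testutils.OneShotLatch:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

/** Simplified stand-in for a one-shot latch: triggered once, awaited with a timeout. */
class MiniOneShotLatch {
    private final CountDownLatch latch = new CountDownLatch(1);

    void trigger() {
        latch.countDown();
    }

    /** Throws TimeoutException if not triggered in time, mirroring the failure above. */
    void await(long timeout, TimeUnit unit) throws InterruptedException, TimeoutException {
        if (!latch.await(timeout, unit)) {
            throw new TimeoutException("latch not triggered within " + timeout + " " + unit);
        }
    }
}

public class LatchDemo {
    public static void main(String[] args) throws Exception {
        MiniOneShotLatch triggered = new MiniOneShotLatch();
        triggered.trigger();
        triggered.await(1, TimeUnit.SECONDS); // returns immediately once triggered

        MiniOneShotLatch never = new MiniOneShotLatch();
        try {
            never.await(50, TimeUnit.MILLISECONDS);
            System.out.println("unexpected");
        } catch (TimeoutException e) {
            System.out.println("timed out as expected");
        }
    }
}
```

In the failing test, awaitRevokeLeadership wraps exactly this kind of await; the TimeoutException in the stack trace means leadership revocation never happened within the test's deadline.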


[GitHub] [flink] pnowojski commented on pull request #18665: [FLINK-24440][source] Announce and combine latest watermarks across SourceOperators

2022-02-09 Thread GitBox


pnowojski commented on pull request #18665:
URL: https://github.com/apache/flink/pull/18665#issuecomment-1034575742


   private azure was green:
   https://dev.azure.com/pnowojski/Flink/_build/results?buildId=614=results


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] pnowojski merged pull request #18665: [FLINK-24440][source] Announce and combine latest watermarks across SourceOperators

2022-02-09 Thread GitBox


pnowojski merged pull request #18665:
URL: https://github.com/apache/flink/pull/18665


   






[jira] [Commented] (FLINK-26064) KinesisFirehoseSinkITCase IllegalStateException: Trying to access closed classloader

2022-02-09 Thread Yun Gao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-26064?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17489992#comment-17489992
 ] 

Yun Gao commented on FLINK-26064:
-

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=30979=logs=d44f43ce-542c-597d-bf94-b0718c71e5e8=ed165f3f-d0f6-524b-5279-86f8ee7d0e2d=44192

> KinesisFirehoseSinkITCase IllegalStateException: Trying to access closed 
> classloader
> 
>
> Key: FLINK-26064
> URL: https://issues.apache.org/jira/browse/FLINK-26064
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kinesis
>Affects Versions: 1.15.0
>Reporter: Piotr Nowojski
>Priority: Critical
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=31044=logs=d44f43ce-542c-597d-bf94-b0718c71e5e8=ed165f3f-d0f6-524b-5279-86f8ee7d0e2d
> (shortened stack trace, as full is too large)
> {noformat}
> Feb 09 20:05:04 java.util.concurrent.ExecutionException: 
> software.amazon.awssdk.core.exception.SdkClientException: Unable to execute 
> HTTP request: Trying to access closed classloader. Please check if you store 
> classloaders directly or indirectly in static fields. If the stacktrace 
> suggests that the leak occurs in a third party library and cannot be fixed 
> immediately, you can disable this check with the configuration 
> 'classloader.check-leaked-classloader'.
> Feb 09 20:05:04   at 
> java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357)
> Feb 09 20:05:04   at 
> java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1908)
> (...)
> Feb 09 20:05:04 Caused by: 
> software.amazon.awssdk.core.exception.SdkClientException: Unable to execute 
> HTTP request: Trying to access closed classloader. Please check if you store 
> classloaders directly or indirectly in static fields. If the stacktrace 
> suggests that the leak occurs in a third party library and cannot be fixed 
> immediately, you can disable this check with the configuration 
> 'classloader.check-leaked-classloader'.
> Feb 09 20:05:04   at 
> software.amazon.awssdk.core.exception.SdkClientException$BuilderImpl.build(SdkClientException.java:98)
> Feb 09 20:05:04   at 
> software.amazon.awssdk.core.exception.SdkClientException.create(SdkClientException.java:43)
> Feb 09 20:05:04   at 
> software.amazon.awssdk.core.internal.http.pipeline.stages.utils.RetryableStageHelper.setLastException(RetryableStageHelper.java:204)
> Feb 09 20:05:04   at 
> software.amazon.awssdk.core.internal.http.pipeline.stages.utils.RetryableStageHelper.setLastException(RetryableStageHelper.java:200)
> Feb 09 20:05:04   at 
> software.amazon.awssdk.core.internal.http.pipeline.stages.AsyncRetryableStage$RetryingExecutor.maybeRetryExecute(AsyncRetryableStage.java:179)
> Feb 09 20:05:04   at 
> software.amazon.awssdk.core.internal.http.pipeline.stages.AsyncRetryableStage$RetryingExecutor.lambda$attemptExecute$1(AsyncRetryableStage.java:159)
> (...)
> Feb 09 20:05:04 Caused by: java.lang.IllegalStateException: Trying to access 
> closed classloader. Please check if you store classloaders directly or 
> indirectly in static fields. If the stacktrace suggests that the leak occurs 
> in a third party library and cannot be fixed immediately, you can disable 
> this check with the configuration 'classloader.check-leaked-classloader'.
> Feb 09 20:05:04   at 
> org.apache.flink.runtime.execution.librarycache.FlinkUserCodeClassLoaders$SafetyNetWrapperClassLoader.ensureInner(FlinkUserCodeClassLoaders.java:164)
> Feb 09 20:05:04   at 
> org.apache.flink.runtime.execution.librarycache.FlinkUserCodeClassLoaders$SafetyNetWrapperClassLoader.getResources(FlinkUserCodeClassLoaders.java:188)
> Feb 09 20:05:04   at 
> java.util.ServiceLoader$LazyIterator.hasNextService(ServiceLoader.java:348)
> Feb 09 20:05:04   at 
> java.util.ServiceLoader$LazyIterator.hasNext(ServiceLoader.java:393)
> Feb 09 20:05:04   at 
> java.util.ServiceLoader$1.hasNext(ServiceLoader.java:474)
> Feb 09 20:05:04   at 
> javax.xml.stream.FactoryFinder$1.run(FactoryFinder.java:352)
> Feb 09 20:05:04   at java.security.AccessController.doPrivileged(Native 
> Method)
> Feb 09 20:05:04   at 
> javax.xml.stream.FactoryFinder.findServiceProvider(FactoryFinder.java:341)
> Feb 09 20:05:04   at 
> javax.xml.stream.FactoryFinder.find(FactoryFinder.java:313)
> Feb 09 20:05:04   at 
> javax.xml.stream.FactoryFinder.find(FactoryFinder.java:227)
> Feb 09 20:05:04   at 
> javax.xml.stream.XMLInputFactory.newInstance(XMLInputFactory.java:154)
> Feb 09 20:05:04   at 
> software.amazon.awssdk.protocols.query.unmarshall.XmlDomParser.createXmlInputFactory(XmlDomParser.java:124)
> Feb 09 20:05:04   at 
> 
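As the exception message itself notes, the safety-net check (not the underlying leak) can be disabled via configuration, e.g. in flink-conf.yaml:

```yaml
# Disables the leaked-classloader safety net named in the error message.
# Use only to unblock tests; the static/ThreadLocal reference holding the
# user classloader should still be fixed.
classloader.check-leaked-classloader: false
```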

[jira] [Updated] (FLINK-26064) KinesisFirehoseSinkITCase IllegalStateException: Trying to access closed classloader

2022-02-09 Thread Yun Gao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26064?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yun Gao updated FLINK-26064:

Priority: Critical  (was: Major)

> KinesisFirehoseSinkITCase IllegalStateException: Trying to access closed 
> classloader
> 
>
> Key: FLINK-26064
> URL: https://issues.apache.org/jira/browse/FLINK-26064
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kinesis
>Affects Versions: 1.15.0
>Reporter: Piotr Nowojski
>Priority: Critical
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=31044=logs=d44f43ce-542c-597d-bf94-b0718c71e5e8=ed165f3f-d0f6-524b-5279-86f8ee7d0e2d
> (shortened stack trace, as full is too large)
> {noformat}
> Feb 09 20:05:04 java.util.concurrent.ExecutionException: 
> software.amazon.awssdk.core.exception.SdkClientException: Unable to execute 
> HTTP request: Trying to access closed classloader. Please check if you store 
> classloaders directly or indirectly in static fields. If the stacktrace 
> suggests that the leak occurs in a third party library and cannot be fixed 
> immediately, you can disable this check with the configuration 
> 'classloader.check-leaked-classloader'.
> Feb 09 20:05:04   at 
> java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357)
> Feb 09 20:05:04   at 
> java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1908)
> (...)
> Feb 09 20:05:04 Caused by: 
> software.amazon.awssdk.core.exception.SdkClientException: Unable to execute 
> HTTP request: Trying to access closed classloader. Please check if you store 
> classloaders directly or indirectly in static fields. If the stacktrace 
> suggests that the leak occurs in a third party library and cannot be fixed 
> immediately, you can disable this check with the configuration 
> 'classloader.check-leaked-classloader'.
> Feb 09 20:05:04   at 
> software.amazon.awssdk.core.exception.SdkClientException$BuilderImpl.build(SdkClientException.java:98)
> Feb 09 20:05:04   at 
> software.amazon.awssdk.core.exception.SdkClientException.create(SdkClientException.java:43)
> Feb 09 20:05:04   at 
> software.amazon.awssdk.core.internal.http.pipeline.stages.utils.RetryableStageHelper.setLastException(RetryableStageHelper.java:204)
> Feb 09 20:05:04   at 
> software.amazon.awssdk.core.internal.http.pipeline.stages.utils.RetryableStageHelper.setLastException(RetryableStageHelper.java:200)
> Feb 09 20:05:04   at 
> software.amazon.awssdk.core.internal.http.pipeline.stages.AsyncRetryableStage$RetryingExecutor.maybeRetryExecute(AsyncRetryableStage.java:179)
> Feb 09 20:05:04   at 
> software.amazon.awssdk.core.internal.http.pipeline.stages.AsyncRetryableStage$RetryingExecutor.lambda$attemptExecute$1(AsyncRetryableStage.java:159)
> (...)
> Feb 09 20:05:04 Caused by: java.lang.IllegalStateException: Trying to access 
> closed classloader. Please check if you store classloaders directly or 
> indirectly in static fields. If the stacktrace suggests that the leak occurs 
> in a third party library and cannot be fixed immediately, you can disable 
> this check with the configuration 'classloader.check-leaked-classloader'.
> Feb 09 20:05:04   at 
> org.apache.flink.runtime.execution.librarycache.FlinkUserCodeClassLoaders$SafetyNetWrapperClassLoader.ensureInner(FlinkUserCodeClassLoaders.java:164)
> Feb 09 20:05:04   at 
> org.apache.flink.runtime.execution.librarycache.FlinkUserCodeClassLoaders$SafetyNetWrapperClassLoader.getResources(FlinkUserCodeClassLoaders.java:188)
> Feb 09 20:05:04   at 
> java.util.ServiceLoader$LazyIterator.hasNextService(ServiceLoader.java:348)
> Feb 09 20:05:04   at 
> java.util.ServiceLoader$LazyIterator.hasNext(ServiceLoader.java:393)
> Feb 09 20:05:04   at 
> java.util.ServiceLoader$1.hasNext(ServiceLoader.java:474)
> Feb 09 20:05:04   at 
> javax.xml.stream.FactoryFinder$1.run(FactoryFinder.java:352)
> Feb 09 20:05:04   at java.security.AccessController.doPrivileged(Native 
> Method)
> Feb 09 20:05:04   at 
> javax.xml.stream.FactoryFinder.findServiceProvider(FactoryFinder.java:341)
> Feb 09 20:05:04   at 
> javax.xml.stream.FactoryFinder.find(FactoryFinder.java:313)
> Feb 09 20:05:04   at 
> javax.xml.stream.FactoryFinder.find(FactoryFinder.java:227)
> Feb 09 20:05:04   at 
> javax.xml.stream.XMLInputFactory.newInstance(XMLInputFactory.java:154)
> Feb 09 20:05:04   at 
> software.amazon.awssdk.protocols.query.unmarshall.XmlDomParser.createXmlInputFactory(XmlDomParser.java:124)
> Feb 09 20:05:04   at 
> java.lang.ThreadLocal$SuppliedThreadLocal.initialValue(ThreadLocal.java:284)
> Feb 09 20:05:04   at 
> java.lang.ThreadLocal.setInitialValue(ThreadLocal.java:180)
> Feb 09 20:05:04   at 

[GitHub] [flink] flinkbot edited a comment on pull request #18653: [FLINK-25825][connector-jdbc] MySqlCatalogITCase fails on azure

2022-02-09 Thread GitBox


flinkbot edited a comment on pull request #18653:
URL: https://github.com/apache/flink/pull/18653#issuecomment-1032212433


   
   ## CI report:
   
   * 9261ad675d1c9e6f0ea94df829f94f25f18bafc4 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31072)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   






[GitHub] [flink] flinkbot edited a comment on pull request #18673: [FLINK-26004][runtime] Introduce ForwardForLocalKeyByPartitioner

2022-02-09 Thread GitBox


flinkbot edited a comment on pull request #18673:
URL: https://github.com/apache/flink/pull/18673#issuecomment-1033295561


   
   ## CI report:
   
   * ce93ea4f52aad9d4d7adfaecdb470f25fdf86689 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30969)
 
   * 33d97563bf625b702679c32c67a739f880228a42 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31082)
 
   
   






[GitHub] [flink] flinkbot edited a comment on pull request #18653: [FLINK-25825][connector-jdbc] MySqlCatalogITCase fails on azure

2022-02-09 Thread GitBox


flinkbot edited a comment on pull request #18653:
URL: https://github.com/apache/flink/pull/18653#issuecomment-1032212433


   
   ## CI report:
   
   * c1c5037be96660918b0d342905087e9fa229a3b7 Azure: 
[CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31037)
 
   * 9261ad675d1c9e6f0ea94df829f94f25f18bafc4 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31072)
 
   
   






[GitHub] [flink] flinkbot edited a comment on pull request #18673: [FLINK-26004][runtime] Introduce ForwardForLocalKeyByPartitioner

2022-02-09 Thread GitBox


flinkbot edited a comment on pull request #18673:
URL: https://github.com/apache/flink/pull/18673#issuecomment-1033295561


   
   ## CI report:
   
   * ce93ea4f52aad9d4d7adfaecdb470f25fdf86689 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30969)
 
   * 33d97563bf625b702679c32c67a739f880228a42 UNKNOWN
   
   






[jira] [Updated] (FLINK-26004) Introduce ForwardForConsecutiveHashPartitioner

2022-02-09 Thread Lijie Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26004?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lijie Wang updated FLINK-26004:
---
Description: 
If there are multiple consecutive identical groupBy (i.e. keyBy) operations, 
the SQL planner will change all but the first one to use a forward partitioner, 
so that these operators can be chained to reduce unnecessary shuffles.

However, sometimes the consecutive hash operators are not chained (e.g. with 
multiple inputs), and this kind of forward partitioner turns into forward job 
edges. These forward edges still carry the consecutive-hash assumption, so they 
cannot be changed into rescale/rebalance edges; otherwise the results could be 
incorrect. This prevents the adaptive batch scheduler from determining the 
parallelism of other downstream job vertices connected by forward edges (see 
FLINK-25046).

To solve this, I propose to introduce a new 
{{{}ForwardForConsecutiveHashPartitioner{}}}. When the SQL planner optimizes 
the case of multiple consecutive identical groupBy operations, it should use 
the proposed partitioner, so that the runtime framework can further decide 
whether the partitioner can be changed to hash or not.

  was:
If there are multiple consecutive identical groupBy (i.e. keyBy) operations, 
the SQL planner will change all but the first one to use a forward partitioner, 
so that these operators can be chained to reduce unnecessary shuffles.

However, sometimes the consecutive hash operators are not chained (e.g. with 
multiple inputs), and this kind of forward partitioner turns into forward job 
edges. These forward edges still carry the consecutive-hash assumption, so they 
cannot be changed into rescale/rebalance edges; otherwise the results could be 
incorrect. This prevents the adaptive batch scheduler from determining the 
parallelism of other downstream job vertices connected by forward edges (see 
FLINK-25046).

To solve this, I propose to introduce a new 
{{{}ForwardForLocalKeyByPartitioner{}}}. When the SQL planner optimizes the 
case of multiple consecutive identical groupBy operations, it should use the 
proposed partitioner, so that the runtime framework can further decide whether 
the partitioner can be changed to hash or not.


> Introduce ForwardForConsecutiveHashPartitioner
> --
>
> Key: FLINK-26004
> URL: https://issues.apache.org/jira/browse/FLINK-26004
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Lijie Wang
>Priority: Major
>  Labels: pull-request-available
>
> If there are multiple consecutive identical groupBy (i.e. keyBy) operations, 
> the SQL planner will change all but the first one to use a forward 
> partitioner, so that these operators can be chained to reduce unnecessary 
> shuffles.
> However, sometimes the consecutive hash operators are not chained (e.g. with 
> multiple inputs), and this kind of forward partitioner turns into forward 
> job edges. These forward edges still carry the consecutive-hash assumption, 
> so they cannot be changed into rescale/rebalance edges; otherwise the 
> results could be incorrect. This prevents the adaptive batch scheduler from 
> determining the parallelism of other downstream job vertices connected by 
> forward edges (see FLINK-25046).
> To solve this, I propose to introduce a new 
> {{{}ForwardForConsecutiveHashPartitioner{}}}. When the SQL planner optimizes 
> the case of multiple consecutive identical groupBy operations, it should use 
> the proposed partitioner, so that the runtime framework can further decide 
> whether the partitioner can be changed to hash or not.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)
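The idea in the description can be modeled as a partitioner that remembers the hash intent behind a forward edge, so the runtime can later resolve it either way. The following is a toy illustration only, in plain Java rather than Flink's StreamPartitioner API; the key-length hash function is an arbitrary stand-in for a real key hasher:

```java
import java.util.function.ToIntFunction;

/**
 * Toy model (not Flink's API) of the idea above: a partitioner that records the
 * "forward because of consecutive hash" intent, so the runtime can later resolve
 * it to FORWARD (operators stay chained/aligned) or HASH (edge must reshuffle),
 * instead of an incorrect rescale/rebalance.
 */
public class ConsecutiveHashDemo {
    enum Resolved { FORWARD, HASH }

    static final class ForwardForConsecutiveHash {
        private final ToIntFunction<String> hashPartitioner;

        ForwardForConsecutiveHash(ToIntFunction<String> hashPartitioner) {
            this.hashPartitioner = hashPartitioner;
        }

        /** Keep FORWARD only when up- and downstream remain aligned; otherwise
         *  fall back to the recorded hash partitioner, preserving key grouping. */
        Resolved resolve(boolean chained) {
            return chained ? Resolved.FORWARD : Resolved.HASH;
        }

        int partition(String key, int numChannels, int currentChannel, boolean chained) {
            return resolve(chained) == Resolved.FORWARD
                    ? currentChannel
                    : Math.floorMod(hashPartitioner.applyAsInt(key), numChannels);
        }
    }

    public static void main(String[] args) {
        // Arbitrary stand-in hash: key length.
        ForwardForConsecutiveHash p = new ForwardForConsecutiveHash(String::length);
        System.out.println(p.partition("user-42", 4, 1, true));  // chained: stays on channel 1
        System.out.println(p.partition("user-42", 4, 1, false)); // unchained: floorMod(7, 4) = 3
    }
}
```

The point of the design is that the decision is deferred: the planner only records intent, and the runtime framework makes the final forward-vs-hash choice once chaining and parallelism are known.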


[jira] [Updated] (FLINK-26004) Introduce ForwardForConsecutiveHashPartitioner

2022-02-09 Thread Lijie Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26004?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lijie Wang updated FLINK-26004:
---
Description: 
If there are multiple consecutive identical groupBy (i.e. keyBy) operations, 
the SQL planner will change all but the first one to use a forward partitioner, 
so that these operators can be chained to reduce unnecessary shuffles.

However, sometimes the consecutive hash operators are not chained (e.g. with 
multiple inputs), and this kind of forward partitioner turns into forward job 
edges. These forward edges still carry the consecutive-hash assumption, so they 
cannot be changed into rescale/rebalance edges; otherwise the results could be 
incorrect. This prevents the adaptive batch scheduler from determining the 
parallelism of other downstream job vertices connected by forward edges (see 
FLINK-25046).

To solve this, I propose to introduce a new 
{{{}ForwardForLocalKeyByPartitioner{}}}. When the SQL planner optimizes the 
case of multiple consecutive identical groupBy operations, it should use the 
proposed partitioner, so that the runtime framework can further decide whether 
the partitioner can be changed to hash or not.

  was:
If there are multiple consecutive identical groupBy (i.e. keyBy) operations, 
the SQL planner will change all but the first one to use a forward partitioner, 
so that these operators can be chained to reduce unnecessary shuffles.

However, sometimes the local keyBy operators are not chained (e.g. with 
multiple inputs), and this kind of forward partitioner turns into forward job 
edges. These forward edges still carry the local-keyBy assumption, so they 
cannot be changed into rescale/rebalance edges; otherwise the results could be 
incorrect. This prevents the adaptive batch scheduler from determining the 
parallelism of other downstream job vertices connected by forward edges (see 
FLINK-25046).

To solve this, I propose to introduce a new 
{{{}ForwardForLocalKeyByPartitioner{}}}. When the SQL planner optimizes the 
case of multiple consecutive identical groupBy operations, it should use the 
proposed partitioner, so that the runtime framework can further decide whether 
the partitioner can be changed to hash or not.


> Introduce ForwardForConsecutiveHashPartitioner
> --
>
> Key: FLINK-26004
> URL: https://issues.apache.org/jira/browse/FLINK-26004
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Lijie Wang
>Priority: Major
>  Labels: pull-request-available
>
> If there are multiple consecutive identical groupBy (i.e. keyBy) operations, 
> the SQL planner will change all but the first one to use a forward 
> partitioner, so that these operators can be chained to reduce unnecessary 
> shuffles.
> However, sometimes the consecutive hash operators are not chained (e.g. with 
> multiple inputs), and this kind of forward partitioner turns into forward 
> job edges. These forward edges still carry the consecutive-hash assumption, 
> so they cannot be changed into rescale/rebalance edges; otherwise the 
> results could be incorrect. This prevents the adaptive batch scheduler from 
> determining the parallelism of other downstream job vertices connected by 
> forward edges (see FLINK-25046).
> To solve this, I propose to introduce a new 
> {{{}ForwardForLocalKeyByPartitioner{}}}. When the SQL planner optimizes the 
> case of multiple consecutive identical groupBy operations, it should use the 
> proposed partitioner, so that the runtime framework can further decide 
> whether the partitioner can be changed to hash or not.





[jira] [Updated] (FLINK-26004) Introduce ForwardForConsecutiveHashPartitioner

2022-02-09 Thread Lijie Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26004?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lijie Wang updated FLINK-26004:
---
Summary: Introduce ForwardForConsecutiveHashPartitioner  (was: Introduce 
ForwardForLocalKeyByPartitioner)

> Introduce ForwardForConsecutiveHashPartitioner
> --
>
> Key: FLINK-26004
> URL: https://issues.apache.org/jira/browse/FLINK-26004
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Lijie Wang
>Priority: Major
>  Labels: pull-request-available
>
> If there are multiple consecutive identical groupBy (i.e. keyBy) operations, 
> the SQL planner will change all but the first one to use a forward 
> partitioner, so that these operators can be chained to reduce unnecessary 
> shuffles.
> However, sometimes the local keyBy operators are not chained (e.g. with 
> multiple inputs), and this kind of forward partitioner turns into forward 
> job edges. These forward edges still carry the local-keyBy assumption, so 
> they cannot be changed into rescale/rebalance edges; otherwise the results 
> could be incorrect. This prevents the adaptive batch scheduler from 
> determining the parallelism of other downstream job vertices connected by 
> forward edges (see FLINK-25046).
> To solve this, I propose to introduce a new 
> {{{}ForwardForLocalKeyByPartitioner{}}}. When the SQL planner optimizes the 
> case of multiple consecutive identical groupBy operations, it should use the 
> proposed partitioner, so that the runtime framework can further decide 
> whether the partitioner can be changed to hash or not.





[jira] [Assigned] (FLINK-26047) Support usrlib in HDFS for YARN application mode

2022-02-09 Thread Yang Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26047?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Wang reassigned FLINK-26047:
-

Assignee: Biao Geng

> Support usrlib in HDFS for YARN application mode
> 
>
> Key: FLINK-26047
> URL: https://issues.apache.org/jira/browse/FLINK-26047
> Project: Flink
>  Issue Type: Improvement
>  Components: Deployment / YARN
>Reporter: Biao Geng
>Assignee: Biao Geng
>Priority: Major
>
> In YARN application mode, we currently support using the user jar and lib 
> jars from HDFS. For example, we can run commands like:
> {quote}./bin/flink run-application -t yarn-application \
>   -Dyarn.provided.lib.dirs="hdfs://myhdfs/my-remote-flink-dist-dir" \
>   hdfs://myhdfs/jars/my-application.jar{quote}
> For {{usrlib}}, we currently only support a local directory. I propose 
> adding HDFS support for {{usrlib}} so that it works better with 
> CLASSPATH_INCLUDE_USER_JAR. It can also benefit cases such as submitting a 
> Flink job from a notebook.





[jira] [Assigned] (FLINK-26030) Set FLINK_LIB_DIR to 'lib' under working dir in YARN containers

2022-02-09 Thread Yang Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26030?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Wang reassigned FLINK-26030:
-

Assignee: Biao Geng

> Set FLINK_LIB_DIR to 'lib' under working dir in YARN containers
> ---
>
> Key: FLINK-26030
> URL: https://issues.apache.org/jira/browse/FLINK-26030
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / YARN
>Reporter: Biao Geng
>Assignee: Biao Geng
>Priority: Minor
>
> Currently, we use 
> {{org.apache.flink.runtime.entrypoint.ClusterEntrypointUtils#tryFindUserLibDirectory}}
>  to locate {{usrlib}} on both the Flink client and the cluster side. This 
> method relies on the environment variable {{FLINK_LIB_DIR}} to find 
> {{usrlib}}.
> That makes sense on the client side, since {{bin/config.sh}} sets 
> {{FLINK_LIB_DIR}} by default (i.e. to {{FLINK_HOME/lib}}) if it is not 
> already set. But in a YARN cluster's containers, where we want to reuse 
> this method to find {{usrlib}}, YARN usually starts the process with a 
> command like
> {quote}/bin/bash -c /usr/lib/jvm/java-1.8.0/bin/java -Xmx1073741824 
> -Xms1073741824 -XX:MaxMetaspaceSize=268435456 
> org.apache.flink.yarn.entrypoint.YarnJobClusterEntrypoint
>  -D jobmanager.memory.off-heap.size=134217728b -D 
> jobmanager.memory.jvm-overhead.min=201326592b -D 
> jobmanager.memory.jvm-metaspace.size=268435456b -D 
> jobmanager.memory.heap.size=1073741824b -D 
> jobmanager.memory.jvm-overhead.max=201326592b ...
> {quote}
> so {{FLINK_LIB_DIR}} is not guaranteed to be set in this case. The current 
> code falls back to the current working directory to locate {{usrlib}}, 
> which is correct in most cases. But things can go wrong if the machine 
> hosting the YARN container has already set {{FLINK_LIB_DIR}} to a different 
> folder; in that case the code will look for {{usrlib}} in an undesired 
> place.
> One possible solution would be to override {{FLINK_LIB_DIR}} in the YARN 
> container environment to point at the {{lib}} dir under YARN's working 
> directory.



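The lookup order discussed above can be sketched as follows. This is a hypothetical helper, not the actual ClusterEntrypointUtils code, and the paths are illustrative:

```java
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.Optional;

/**
 * Toy sketch (not Flink's actual code) of the lookup the issue describes:
 * prefer the directory next to FLINK_LIB_DIR, fall back to the working dir.
 */
public class UsrLibResolver {
    static Path resolveUsrLib(Optional<String> flinkLibDir, Path workingDir) {
        // If FLINK_LIB_DIR is set, usrlib is expected next to it (.../lib -> .../usrlib);
        // otherwise fall back to <workingDir>/usrlib. The bug: in a YARN container
        // a stale FLINK_LIB_DIR from the host makes the first branch point elsewhere.
        return flinkLibDir
                .map(lib -> Paths.get(lib).getParent().resolve("usrlib"))
                .orElse(workingDir.resolve("usrlib"));
    }

    public static void main(String[] args) {
        System.out.println(resolveUsrLib(Optional.of("/opt/flink/lib"), Paths.get("/yarn/wd")));
        System.out.println(resolveUsrLib(Optional.empty(), Paths.get("/yarn/wd")));
    }
}
```

The proposed fix makes the first branch reliable by always exporting FLINK_LIB_DIR to the lib dir under YARN's working directory in the container environment.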


[jira] [Commented] (FLINK-26030) Set FLINK_LIB_DIR to 'lib' under working dir in YARN containers

2022-02-09 Thread Yang Wang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-26030?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17489978#comment-17489978
 ] 

Yang Wang commented on FLINK-26030:
---

Setting the {{FLINK_LIB_DIR}} to workDir/usrlib makes sense to me.

> Set FLINK_LIB_DIR to 'lib' under working dir in YARN containers
> ---
>
> Key: FLINK-26030
> URL: https://issues.apache.org/jira/browse/FLINK-26030
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / YARN
>Reporter: Biao Geng
>Priority: Minor
>
> Currently, we utilize 
> {{org.apache.flink.runtime.entrypoint.ClusterEntrypointUtils#tryFindUserLibDirectory}}
>  to locate usrlib in both flink client and cluster side. 
> This method relies on the value of environment variable {{FLINK_LIB_DIR}} to 
> find the {{{}usrlib{}}}.
> It makes sense in client side since in {{{}bin/config.sh{}}}, 
> {{FLINK_LIB_DIR}} will be set by default(i.e. {{FLINK_HOME/lib}} if not 
> exists. But in YARN cluster's containers, when we want to reuse this method 
> to find {{{}usrlib{}}}, as the YARN usually starts the process using commands 
> like
> {quote}/bin/bash -c /usr/lib/jvm/java-1.8.0/bin/java -Xmx1073741824 
> -Xms1073741824 
> -XX:MaxMetaspaceSize=268435456org.apache.flink.yarn.entrypoint.YarnJobClusterEntrypoint
>  -D jobmanager.memory.off-heap.size=134217728b -D 
> jobmanager.memory.jvm-overhead.min=201326592b -D 
> jobmanager.memory.jvm-metaspace.size=268435456b -D 
> jobmanager.memory.heap.size=1073741824b -D 
> jobmanager.memory.jvm-overhead.max=201326592b ...
> {quote}
> {{FLINK_LIB_DIR}} is not guaranteed to be set in such a case. The current code 
> uses the current working dir to locate {{usrlib}}, which is correct in 
> most cases. But bad things can happen if the machine on which the YARN container 
> resides has already set {{FLINK_LIB_DIR}} to a different folder. In that 
> case, the code will try to find {{usrlib}} in an undesired place.
> One possible solution would be overriding {{FLINK_LIB_DIR}} in the YARN 
> container env to point at the {{lib}} dir under YARN's working dir.
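The lookup described above can be sketched roughly as follows (a simplified Python rendering; the real implementation is the Java method `ClusterEntrypointUtils#tryFindUserLibDirectory`, so the function name, signature, and exact search order here are illustrative assumptions, not Flink code):

```python
from pathlib import Path
from typing import Optional

def try_find_user_lib_directory(env: dict, cwd: Path) -> Optional[Path]:
    """Sketch of the usrlib lookup: derive a base dir from FLINK_LIB_DIR if it
    is set, otherwise fall back to the current working directory, then look
    for a 'usrlib' directory next to it. If FLINK_LIB_DIR happens to point at
    an unrelated folder on the host, usrlib is searched in the wrong place."""
    lib_dir = env.get("FLINK_LIB_DIR")
    base = Path(lib_dir).parent if lib_dir else cwd
    usrlib = base / "usrlib"
    return usrlib if usrlib.is_dir() else None
```

This makes the failure mode concrete: with no `FLINK_LIB_DIR` the working-dir fallback finds the shipped `usrlib`, but a stray `FLINK_LIB_DIR` inherited from the host silently redirects the search.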



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink] flinkbot edited a comment on pull request #18672: [FLINK-25996][runtime] Introduce job property isDynamicGraph to ExecutionConfig

2022-02-09 Thread GitBox


flinkbot edited a comment on pull request #18672:
URL: https://github.com/apache/flink/pull/18672#issuecomment-1033272608


   
   ## CI report:
   
   * ff58bdbffdc38ea1af4fa5b270bef58fcfc0f74a Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30998)
 
   * e88e25f069efe98417e027028d3fdf202379dd92 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31081)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-26050) Too many small sst files in rocksdb state backend when using processing time window

2022-02-09 Thread shen (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-26050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17489977#comment-17489977
 ] 

shen commented on FLINK-26050:
--

Created a new [issue|https://github.com/facebook/rocksdb/issues/9540] in the 
RocksDB GitHub repository to try to get some help.

> Too many small sst files in rocksdb state backend when using processing time 
> window
> ---
>
> Key: FLINK-26050
> URL: https://issues.apache.org/jira/browse/FLINK-26050
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / State Backends
>Affects Versions: 1.10.2, 1.14.3
>Reporter: shen
>Priority: Major
> Attachments: image-2022-02-09-21-22-13-920.png
>
>
> When using processing time windows, some workloads produce a lot of 
> small sst files (several KB each) in the rocksdb local directory, which may 
> cause a "Too many files" error.
> Using the rocksdb tool ldb to inspect the content of these sst files shows:
>  * column family of these small sst files is "processing_window-timers".
>  * most sst files are in level-1.
>  * records in sst files are almost kTypeDeletion.
>  * creation times of the sst files correspond to the checkpoint interval.
> These small sst files seem to be generated when a flink checkpoint is 
> triggered. Although all their content consists of delete tags, they are not 
> compacted away, because their key ranges do not intersect with 
> each other (rocksdb [compaction trivial 
> move|https://github.com/facebook/rocksdb/wiki/Compaction-Trivial-Move]), and 
> there seems to be no chance to delete them given their small size and lack of 
> intersection with other sst files.
>  
> I will attach a simple program to reproduce the problem.
>  
> Timers in processing time windows are generated in strictly ascending 
> order (both puts and deletes). So if the workload of a job happens to generate 
> level-0 sst files that do not intersect with each other (for example: the 
> processing window size is much smaller than the checkpoint interval, and no 
> window content crosses a checkpoint interval, or no new data arrives in a 
> window crossing a checkpoint interval), many small sst files will be 
> generated until the job is restored from a savepoint or incremental 
> checkpointing is disabled.
>  
> A similar problem may exist when users use timers in operators with the same 
> workload.
>  
> Code to reproduce the problem:
> {code:java}
> package org.apache.flink.jira;
> import lombok.extern.slf4j.Slf4j;
> import org.apache.flink.configuration.Configuration;
> import org.apache.flink.configuration.RestOptions;
> import org.apache.flink.configuration.TaskManagerOptions;
> import org.apache.flink.contrib.streaming.state.RocksDBStateBackend;
> import org.apache.flink.streaming.api.TimeCharacteristic;
> import org.apache.flink.streaming.api.checkpoint.ListCheckpointed;
> import org.apache.flink.streaming.api.datastream.DataStreamSource;
> import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
> import org.apache.flink.streaming.api.functions.source.SourceFunction;
> import 
> org.apache.flink.streaming.api.functions.windowing.ProcessWindowFunction;
> import 
> org.apache.flink.streaming.api.windowing.assigners.TumblingProcessingTimeWindows;
> import org.apache.flink.streaming.api.windowing.time.Time;
> import org.apache.flink.streaming.api.windowing.windows.TimeWindow;
> import org.apache.flink.util.Collector;
> import java.util.Collections;
> import java.util.List;
> import java.util.Random;
> @Slf4j
> public class StreamApp  {
>   public static void main(String[] args) throws Exception {
> Configuration config = new Configuration();
> config.set(RestOptions.ADDRESS, "127.0.0.1");
> config.set(RestOptions.PORT, 10086);
> config.set(TaskManagerOptions.NUM_TASK_SLOTS, 6);
> new 
> StreamApp().configureApp(StreamExecutionEnvironment.createLocalEnvironment(1, 
> config));
>   }
>   public void configureApp(StreamExecutionEnvironment env) throws Exception {
> env.enableCheckpointing(2); // 20sec
> RocksDBStateBackend rocksDBStateBackend =
> new 
> RocksDBStateBackend("file:///Users/shenjiaqi/Workspace/jira/flink-51/checkpoints/",
>  true); // need to be reconfigured
> 
> rocksDBStateBackend.setDbStoragePath("/Users/shenjiaqi/Workspace/jira/flink-51/flink/rocksdb_local_db");
>  // need to be reconfigured
> env.setStateBackend(rocksDBStateBackend);
> env.getCheckpointConfig().setCheckpointTimeout(10);
> env.getCheckpointConfig().setTolerableCheckpointFailureNumber(5);
> env.setParallelism(1);
> env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime);
> env.getConfig().setTaskCancellationInterval(1);
> for (int i = 0; i < 1; ++i) {
>   createOnePipeline(env);
> }
> env.execute("StreamApp");
>   }
>   private void createOnePipeline(StreamExecutionEnvironment 
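The mechanism behind FLINK-26050 — strictly ascending timer keys yielding non-overlapping sst files that compaction can only trivially move — can be illustrated with a small simulation (plain Python, no RocksDB required; the key layout is a deliberate simplification of the real timer-state encoding):

```python
def flush_batches(keys, batch_size):
    """Group keys into per-flush 'sst files', recording each file's key range,
    mimicking one memtable flush per checkpoint interval."""
    files = []
    for i in range(0, len(keys), batch_size):
        batch = keys[i:i + batch_size]
        files.append((min(batch), max(batch)))
    return files

def overlapping_pairs(files):
    """Count pairs of files whose key ranges intersect; zero means every
    compaction degenerates into a trivial move and the small files are
    never rewritten (so the delete tags inside them are never dropped)."""
    count = 0
    for i in range(len(files)):
        for j in range(i + 1, len(files)):
            if max(files[i][0], files[j][0]) <= min(files[i][1], files[j][1]):
                count += 1
    return count
```

With strictly ascending keys every flushed file covers a disjoint range, `overlapping_pairs` is zero, and nothing ever forces the small files to be merged; interleaved keys, by contrast, produce overlapping ranges that real compaction would merge.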

[GitHub] [flink] flinkbot edited a comment on pull request #18672: [FLINK-25996][runtime] Introduce job property isDynamicGraph to ExecutionConfig

2022-02-09 Thread GitBox


flinkbot edited a comment on pull request #18672:
URL: https://github.com/apache/flink/pull/18672#issuecomment-1033272608


   
   ## CI report:
   
   * ff58bdbffdc38ea1af4fa5b270bef58fcfc0f74a Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30998)
 
   * e88e25f069efe98417e027028d3fdf202379dd92 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-26050) Too many small sst files in rocksdb state backend when using processing time window

2022-02-09 Thread shen (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-26050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17489976#comment-17489976
 ] 

shen commented on FLINK-26050:
--

[~mayuehappy], thanks for the suggestion. We are actually testing a new flink 
job, and we have prepared several workaround plans, including the one you proposed.

In this Jira I want to find a way to fix the problem itself, since the rocksdb 
state backend is more robust and scalable and this problem looks like a performance 
degradation in some use cases.

> Too many small sst files in rocksdb state backend when using processing time 
> window
> ---
>
> Key: FLINK-26050
> URL: https://issues.apache.org/jira/browse/FLINK-26050
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / State Backends
>Affects Versions: 1.10.2, 1.14.3
>Reporter: shen
>Priority: Major
> Attachments: image-2022-02-09-21-22-13-920.png
>
>
> When using processing time windows, some workloads produce a lot of 
> small sst files (several KB each) in the rocksdb local directory, which may 
> cause a "Too many files" error.
> Using the rocksdb tool ldb to inspect the content of these sst files shows:
>  * column family of these small sst files is "processing_window-timers".
>  * most sst files are in level-1.
>  * records in sst files are almost kTypeDeletion.
>  * creation times of the sst files correspond to the checkpoint interval.
> These small sst files seem to be generated when a flink checkpoint is 
> triggered. Although all their content consists of delete tags, they are not 
> compacted away, because their key ranges do not intersect with 
> each other (rocksdb [compaction trivial 
> move|https://github.com/facebook/rocksdb/wiki/Compaction-Trivial-Move]), and 
> there seems to be no chance to delete them given their small size and lack of 
> intersection with other sst files.
>  
> I will attach a simple program to reproduce the problem.
>  
> Timers in processing time windows are generated in strictly ascending 
> order (both puts and deletes). So if the workload of a job happens to generate 
> level-0 sst files that do not intersect with each other (for example: the 
> processing window size is much smaller than the checkpoint interval, and no 
> window content crosses a checkpoint interval, or no new data arrives in a 
> window crossing a checkpoint interval), many small sst files will be 
> generated until the job is restored from a savepoint or incremental 
> checkpointing is disabled.
>  
> A similar problem may exist when users use timers in operators with the same 
> workload.
>  
> Code to reproduce the problem:
> {code:java}
> package org.apache.flink.jira;
> import lombok.extern.slf4j.Slf4j;
> import org.apache.flink.configuration.Configuration;
> import org.apache.flink.configuration.RestOptions;
> import org.apache.flink.configuration.TaskManagerOptions;
> import org.apache.flink.contrib.streaming.state.RocksDBStateBackend;
> import org.apache.flink.streaming.api.TimeCharacteristic;
> import org.apache.flink.streaming.api.checkpoint.ListCheckpointed;
> import org.apache.flink.streaming.api.datastream.DataStreamSource;
> import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
> import org.apache.flink.streaming.api.functions.source.SourceFunction;
> import 
> org.apache.flink.streaming.api.functions.windowing.ProcessWindowFunction;
> import 
> org.apache.flink.streaming.api.windowing.assigners.TumblingProcessingTimeWindows;
> import org.apache.flink.streaming.api.windowing.time.Time;
> import org.apache.flink.streaming.api.windowing.windows.TimeWindow;
> import org.apache.flink.util.Collector;
> import java.util.Collections;
> import java.util.List;
> import java.util.Random;
> @Slf4j
> public class StreamApp  {
>   public static void main(String[] args) throws Exception {
> Configuration config = new Configuration();
> config.set(RestOptions.ADDRESS, "127.0.0.1");
> config.set(RestOptions.PORT, 10086);
> config.set(TaskManagerOptions.NUM_TASK_SLOTS, 6);
> new 
> StreamApp().configureApp(StreamExecutionEnvironment.createLocalEnvironment(1, 
> config));
>   }
>   public void configureApp(StreamExecutionEnvironment env) throws Exception {
> env.enableCheckpointing(2); // 20sec
> RocksDBStateBackend rocksDBStateBackend =
> new 
> RocksDBStateBackend("file:///Users/shenjiaqi/Workspace/jira/flink-51/checkpoints/",
>  true); // need to be reconfigured
> 
> rocksDBStateBackend.setDbStoragePath("/Users/shenjiaqi/Workspace/jira/flink-51/flink/rocksdb_local_db");
>  // need to be reconfigured
> env.setStateBackend(rocksDBStateBackend);
> env.getCheckpointConfig().setCheckpointTimeout(10);
> env.getCheckpointConfig().setTolerableCheckpointFailureNumber(5);
> env.setParallelism(1);
> env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime);
>

[GitHub] [flink] flinkbot edited a comment on pull request #18698: [FLINK-25782] [docs] Translate datastream filesystem.md page into Chinese.

2022-02-09 Thread GitBox


flinkbot edited a comment on pull request #18698:
URL: https://github.com/apache/flink/pull/18698#issuecomment-1034521527


   
   ## CI report:
   
   * 3e380541d698a93d71dbfc0f88723b22c50bccf5 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31080)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot commented on pull request #18698: [FLINK-25782] [docs] Translate datastream filesystem.md page into Chinese.

2022-02-09 Thread GitBox


flinkbot commented on pull request #18698:
URL: https://github.com/apache/flink/pull/18698#issuecomment-1034521527






-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-25782) Translate datastream filesystem.md page into Chinese.

2022-02-09 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25782?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-25782:
---
Labels: chinese-translation pull-request-available  (was: 
chinese-translation)

> Translate datastream filesystem.md page into Chinese.
> -
>
> Key: FLINK-25782
> URL: https://issues.apache.org/jira/browse/FLINK-25782
> Project: Flink
>  Issue Type: Sub-task
>  Components: Documentation
>Reporter: RocMarshal
>Assignee: baisike
>Priority: Minor
>  Labels: chinese-translation, pull-request-available
>




--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink] MrWhiteSike opened a new pull request #18698: [FLINK-25782] [docs] Translate datastream filesystem.md page into Chinese.

2022-02-09 Thread GitBox


MrWhiteSike opened a new pull request #18698:
URL: https://github.com/apache/flink/pull/18698


   
   
   ## What is the purpose of the change
   
   *(For example: This pull request makes task deployment go through the blob 
server, rather than through RPC. That way we avoid re-transferring them on each 
deployment (during recovery).)*
   
   
   ## Brief change log
   
   *(for example:)*
 - *The TaskInfo is stored in the blob store on job creation time as a 
persistent artifact*
 - *Deployments RPC transmits only the blob storage reference*
 - *TaskManagers retrieve the TaskInfo from the blob cache*
   
   
   ## Verifying this change
   
   Please make sure both new and modified tests in this PR follows the 
conventions defined in our code quality guide: 
https://flink.apache.org/contributing/code-style-and-quality-common.html#testing
   
   *(Please pick either of the following options)*
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   *(or)*
   
   This change is already covered by existing tests, such as *(please describe 
tests)*.
   
   *(or)*
   
   This change added tests and can be verified as follows:
   
   *(example:)*
 - *Added integration tests for end-to-end deployment with large payloads 
(100MB)*
 - *Extended integration test for recovery after master (JobManager) 
failure*
 - *Added test that validates that TaskInfo is transferred only once across 
recoveries*
 - *Manually verified the change by running a 4 node cluser with 2 
JobManagers and 4 TaskManagers, a stateful streaming program, and killing one 
JobManager and two TaskManagers during the execution, verifying that recovery 
happens correctly.*
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / no)
 - The serializers: (yes / no / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / no / 
don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (yes / no / don't know)
 - The S3 file system connector: (yes / no / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / no)
 - If yes, how is the feature documented? (not applicable / docs / JavaDocs 
/ not documented)
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-26066) Introduce FileStoreRead

2022-02-09 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26066?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-26066:
---
Labels: pull-request-available  (was: )

> Introduce FileStoreRead
> ---
>
> Key: FLINK-26066
> URL: https://issues.apache.org/jira/browse/FLINK-26066
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table Store
>Reporter: Caizhi Weng
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.15.0
>
>
> Apart from {{FileStoreWrite}}, we also need a {{FileStoreRead}} operation to 
> read actual key-values for a specific partition and bucket.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink-table-store] tsreaper opened a new pull request #19: [FLINK-26066] Introduce FileStoreRead

2022-02-09 Thread GitBox


tsreaper opened a new pull request #19:
URL: https://github.com/apache/flink-table-store/pull/19


   Apart from FileStoreWrite, we also need a FileStoreRead operation to read 
actual key-values for a specific partition and bucket.
   
   This PR is waiting for #18 . Please review the last commit only.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Created] (FLINK-26066) Introduce FileStoreRead

2022-02-09 Thread Caizhi Weng (Jira)
Caizhi Weng created FLINK-26066:
---

 Summary: Introduce FileStoreRead
 Key: FLINK-26066
 URL: https://issues.apache.org/jira/browse/FLINK-26066
 Project: Flink
  Issue Type: Sub-task
  Components: Table Store
Reporter: Caizhi Weng
 Fix For: 1.15.0


Apart from {{FileStoreWrite}}, we also need a {{FileStoreRead}} operation to 
read actual key-values for a specific partition and bucket.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink] flinkbot edited a comment on pull request #18697: [FLINK-26034][Build System]Add maven wrapper for flink

2022-02-09 Thread GitBox


flinkbot edited a comment on pull request #18697:
URL: https://github.com/apache/flink/pull/18697#issuecomment-1034510296


   
   ## CI report:
   
   * 34a2b1811fd7bc75e2c1e9c8462ab9573d506559 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31079)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot commented on pull request #18697: [FLINK-26034][Build System]Add maven wrapper for flink

2022-02-09 Thread GitBox


flinkbot commented on pull request #18697:
URL: https://github.com/apache/flink/pull/18697#issuecomment-1034511681


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 34a2b1811fd7bc75e2c1e9c8462ab9573d506559 (Thu Feb 10 
05:34:55 UTC 2022)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] GOODBOY008 commented on pull request #18697: [FLINK-26034][Build System]Add maven wrapper for flink

2022-02-09 Thread GitBox


GOODBOY008 commented on pull request #18697:
URL: https://github.com/apache/flink/pull/18697#issuecomment-1034510767


   @tillrohrmann Please cc. Thank you.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot commented on pull request #18697: [FLINK-26034][Build System]Add maven wrapper for flink

2022-02-09 Thread GitBox


flinkbot commented on pull request #18697:
URL: https://github.com/apache/flink/pull/18697#issuecomment-1034510296


   
   ## CI report:
   
   * 34a2b1811fd7bc75e2c1e9c8462ab9573d506559 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-26034) Add maven wrapper for flink

2022-02-09 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26034?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-26034:
---
Labels: pull-request-available  (was: )

> Add maven wrapper for flink
> ---
>
> Key: FLINK-26034
> URL: https://issues.apache.org/jira/browse/FLINK-26034
> Project: Flink
>  Issue Type: Improvement
>  Components: Build System
>Affects Versions: 1.15.0
>Reporter: Aiden Gong
>Assignee: Aiden Gong
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 1.15.0
>
>
> IntelliJ IDEA supports this feature now. It is very helpful for contributors.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)
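For context, a Maven wrapper is typically generated once and committed to the repository, after which contributors can build without a locally installed Maven (a generic sketch; the wrapper plugin invocation and the Maven version pinned by the actual PR are assumptions, not taken from it):

```shell
# Generate mvnw, mvnw.cmd and .mvn/wrapper/, pinned to a given Maven version
# (maven-wrapper-plugin; run once and commit the result).
mvn wrapper:wrapper -Dmaven=3.8.4

# Contributors then build through the wrapper; it downloads the pinned
# Maven distribution on first use.
./mvnw clean install -DskipTests
```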


[GitHub] [flink] GOODBOY008 opened a new pull request #18697: [FLINK-26034][Build System]Add maven wrapper for flink

2022-02-09 Thread GitBox


GOODBOY008 opened a new pull request #18697:
URL: https://github.com/apache/flink/pull/18697


   
   
   ## What is the purpose of the change
   
   *Add maven wrapper for flink.*
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
 - The serializers: (no)
 - The runtime per-record code paths (performance sensitive): (no)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (no)
 - The S3 file system connector: (no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (no)
 - If yes, how is the feature documented? (not documented)
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Resolved] (FLINK-24682) Unify the -C option behavior in both yarn application and per-job mode

2022-02-09 Thread Biao Geng (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-24682?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Biao Geng resolved FLINK-24682.
---
Resolution: Won't Do

> Unify the -C option behavior in both yarn application and per-job mode
> --
>
> Key: FLINK-24682
> URL: https://issues.apache.org/jira/browse/FLINK-24682
> Project: Flink
>  Issue Type: Improvement
>  Components: Deployment / YARN
>Affects Versions: 1.12.3
> Environment: flink 1.12.3
> yarn 2.8.5
>Reporter: Biao Geng
>Priority: Major
>
> Recently, when switching the job submission mode from per-job mode to 
> application mode on yarn, we found the behavior of `-C` (`--classpath`) is 
> somewhat misleading:
> In per-job mode, the `main()` method of the program is executed on the local 
> machine, and the `-C` option works well when we use it to specify some local user 
> jars like -C file://xx.jar.
> But in application mode, this option works differently: as the `main()` 
> method will be executed on the job manager in the cluster, it is unclear 
> where a url like `file://xx.jar` points. It seems that 
> `file://xx.jar` is located on the job manager machine in the cluster, judging 
> from the code. If that is true, it may mislead users, as in per-job mode it 
> refers to the jars on the client machine.
> In summary, if we could unify the -C option behavior in both yarn application 
> and per-job mode, it would help users switch to application mode more 
> smoothly and, more importantly, it would make it much easier to specify local 
> jars, which should be loaded by the UserClassLoader, on the client machine.
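The ambiguity comes down to the fact that a `file://` URL names a path on whichever host happens to execute `main()`. A toy illustration (plain Python, not Flink code; the jar path and host labels are hypothetical):

```python
from urllib.parse import urlparse

def resolve_classpath_url(url: str, executing_host: str) -> str:
    """Sketch: a file:// classpath entry resolves against the filesystem of
    the host that runs main() -- the client in per-job mode, the JobManager
    in application mode. The URL itself carries no host information."""
    parsed = urlparse(url)
    if parsed.scheme == "file":
        return f"{executing_host}:{parsed.path}"
    return url

# The same -C value denotes different physical files in the two modes:
per_job = resolve_classpath_url("file:///opt/jobs/udf.jar", "client")
application = resolve_classpath_url("file:///opt/jobs/udf.jar", "jobmanager")
```

Because the scheme has no authority component pointing at a machine, only out-of-band convention decides which host's `/opt/jobs/udf.jar` is meant — which is exactly the inconsistency the issue describes.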



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (FLINK-24682) Unify the -C option behavior in both yarn application and per-job mode

2022-02-09 Thread Biao Geng (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-24682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17489959#comment-17489959
 ] 

Biao Geng commented on FLINK-24682:
---

Per-job mode will be deprecated due to 
https://issues.apache.org/jira/browse/FLINK-25999

Some relevant use cases of loading user-specified jars with UserClassLoader can 
be achieved with https://issues.apache.org/jira/browse/FLINK-24897

Thus this Jira will be closed.

> Unify the -C option behavior in both yarn application and per-job mode
> --
>
> Key: FLINK-24682
> URL: https://issues.apache.org/jira/browse/FLINK-24682
> Project: Flink
>  Issue Type: Improvement
>  Components: Deployment / YARN
>Affects Versions: 1.12.3
> Environment: flink 1.12.3
> yarn 2.8.5
>Reporter: Biao Geng
>Priority: Major
>
> Recently, when switching the job submission mode from per-job mode to 
> application mode on yarn, we found the behavior of `-C` (`--classpath`) is 
> somewhat misleading:
> In per-job mode, the `main()` method of the program is executed on the local 
> machine, and the `-C` option works well when we use it to specify some local user 
> jars like -C file://xx.jar.
> But in application mode, this option works differently: as the `main()` 
> method will be executed on the job manager in the cluster, it is unclear 
> where a url like `file://xx.jar` points. It seems that 
> `file://xx.jar` is located on the job manager machine in the cluster, judging 
> from the code. If that is true, it may mislead users, as in per-job mode it 
> refers to the jars on the client machine.
> In summary, if we could unify the -C option behavior in both yarn application 
> and per-job mode, it would help users switch to application mode more 
> smoothly and, more importantly, it would make it much easier to specify local 
> jars, which should be loaded by the UserClassLoader, on the client machine.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (FLINK-26030) Set FLINK_LIB_DIR to 'lib' under working dir in YARN containers

2022-02-09 Thread Biao Geng (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-26030?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17489957#comment-17489957
 ] 

Biao Geng commented on FLINK-26030:
---

Yes, it should be a bug.

I believe that, to be consistent with the behavior on the flink client side, on 
the YARN cluster side we should set {{FLINK_LIB_DIR}} in the YARN container env 
to the {{lib}} dir under YARN's working dir. If that is the case, I can start 
investigating how to implement it properly. Let me know if the above assumption 
makes sense. Thanks!

> Set FLINK_LIB_DIR to 'lib' under working dir in YARN containers
> ---
>
> Key: FLINK-26030
> URL: https://issues.apache.org/jira/browse/FLINK-26030
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / YARN
>Reporter: Biao Geng
>Priority: Minor
>
> Currently, we utilize 
> {{org.apache.flink.runtime.entrypoint.ClusterEntrypointUtils#tryFindUserLibDirectory}}
>  to locate usrlib in both flink client and cluster side. 
> This method relies on the value of environment variable {{FLINK_LIB_DIR}} to 
> find the {{usrlib}}.
> It makes sense on the client side since in {{bin/config.sh}}, {{FLINK_LIB_DIR}} 
> will be set by default (i.e. to {{FLINK_HOME/lib}}) if it does not exist. But in 
> a YARN cluster's containers, when we want to reuse this method to find {{usrlib}}, 
> since YARN usually starts the process using commands like 
> bq. /bin/bash -c /usr/lib/jvm/java-1.8.0/bin/java -Xmx1073741824 
> -Xms1073741824 
> -XX:MaxMetaspaceSize=268435456org.apache.flink.yarn.entrypoint.YarnJobClusterEntrypoint
>  -D jobmanager.memory.off-heap.size=134217728b -D 
> jobmanager.memory.jvm-overhead.min=201326592b -D 
> jobmanager.memory.jvm-metaspace.size=268435456b -D 
> jobmanager.memory.heap.size=1073741824b -D 
> jobmanager.memory.jvm-overhead.max=201326592b ...
> {{FLINK_LIB_DIR}} is not guaranteed to be set in such a case. The current code 
> will use the current working dir to locate {{usrlib}}, which is correct in 
> most cases. But bad things can happen if the machine on which the YARN container 
> resides has already set {{FLINK_LIB_DIR}} to a different folder. In that 
> case, the code will try to find {{usrlib}} in an undesired place. 
> One possible solution would be overriding {{FLINK_LIB_DIR}} in the YARN 
> container env to the {{lib}} dir under YARN's working dir.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-26030) Set FLINK_LIB_DIR to 'lib' under working dir in YARN containers

2022-02-09 Thread Biao Geng (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26030?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Biao Geng updated FLINK-26030:
--
Description: 
Currently, we utilize 
{{org.apache.flink.runtime.entrypoint.ClusterEntrypointUtils#tryFindUserLibDirectory}}
 to locate usrlib in both flink client and cluster side. 
This method relies on the value of environment variable {{FLINK_LIB_DIR}} to 
find the {{{}usrlib{}}}.
It makes sense on the client side since in {{{}bin/config.sh{}}}, {{FLINK_LIB_DIR}} 
will be set by default (i.e., to {{FLINK_HOME/lib}}) if it is not already set. But in 
YARN clusters' containers, when we want to reuse this method to find {{{}usrlib{}}}, 
since YARN usually starts the process using commands like
{quote}/bin/bash -c /usr/lib/jvm/java-1.8.0/bin/java -Xmx1073741824 
-Xms1073741824 
-XX:MaxMetaspaceSize=268435456org.apache.flink.yarn.entrypoint.YarnJobClusterEntrypoint
 -D jobmanager.memory.off-heap.size=134217728b -D 
jobmanager.memory.jvm-overhead.min=201326592b -D 
jobmanager.memory.jvm-metaspace.size=268435456b -D 
jobmanager.memory.heap.size=1073741824b -D 
jobmanager.memory.jvm-overhead.max=201326592b ...
{quote}
{{FLINK_LIB_DIR}} is not guaranteed to be set in such a case. The current code will 
use the current working dir to locate {{usrlib}}, which is correct in most 
cases. But bad things can happen if the machine on which the YARN container 
resides has already set {{FLINK_LIB_DIR}} to a different folder. In that 
case, the code will try to find {{usrlib}} in an undesired place.

One possible solution would be overriding {{FLINK_LIB_DIR}} in the YARN 
container env to the {{lib}} dir under YARN's working dir.

  was:
Currently, we utilize 
{{org.apache.flink.runtime.entrypoint.ClusterEntrypointUtils#tryFindUserLibDirectory}}
 to locate usrlib in both flink client and cluster side. 
This method relies on the value of environment variable {{FLINK_LIB_DIR}} to 
find the {{usrlib}}.
It makes sense in client side since in {{bin/config.sh}}, {{FLINK_LIB_DIR}} 
will be set by default(i.e. {{FLINK_HOME/lib}} if not exists. But in YARN 
cluster's containers, when we want to reuse this method to find {{usrlib}}, as 
the YARN usually starts the process using commands like 

bq. /bin/bash -c /usr/lib/jvm/java-1.8.0/bin/java -Xmx1073741824 -Xms1073741824 
-XX:MaxMetaspaceSize=268435456org.apache.flink.yarn.entrypoint.YarnJobClusterEntrypoint
 -D jobmanager.memory.off-heap.size=134217728b -D 
jobmanager.memory.jvm-overhead.min=201326592b -D 
jobmanager.memory.jvm-metaspace.size=268435456b -D 
jobmanager.memory.heap.size=1073741824b -D 
jobmanager.memory.jvm-overhead.max=201326592b ...

{{FLINK_LIB_DIR}} is not guaranteed to be set in such case. Current codes will 
use current working dir to locate the {{usrlib}} which is correct in most 
cases. But bad things can happen if the machine which the YARN container 
resides in has already set {{FLINK_LIB_DIR}} to a different folder. In that 
case, codes will try to find {{usrlib}} in a undesired place. 

One possible solution would be overriding the {{FLINK_LIB_DIR}} in YARN 
container env to the {{lib}} dir under YARN's workding dir.


> Set FLINK_LIB_DIR to 'lib' under working dir in YARN containers
> ---
>
> Key: FLINK-26030
> URL: https://issues.apache.org/jira/browse/FLINK-26030
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / YARN
>Reporter: Biao Geng
>Priority: Minor
>
> Currently, we utilize 
> {{org.apache.flink.runtime.entrypoint.ClusterEntrypointUtils#tryFindUserLibDirectory}}
>  to locate usrlib in both flink client and cluster side. 
> This method relies on the value of environment variable {{FLINK_LIB_DIR}} to 
> find the {{{}usrlib{}}}.
> It makes sense on the client side since in {{{}bin/config.sh{}}}, 
> {{FLINK_LIB_DIR}} will be set by default (i.e., to {{FLINK_HOME/lib}}) if it is 
> not already set. But in YARN clusters' containers, when we want to reuse this 
> method to find {{{}usrlib{}}}, since YARN usually starts the process using commands 
> like
> {quote}/bin/bash -c /usr/lib/jvm/java-1.8.0/bin/java -Xmx1073741824 
> -Xms1073741824 
> -XX:MaxMetaspaceSize=268435456org.apache.flink.yarn.entrypoint.YarnJobClusterEntrypoint
>  -D jobmanager.memory.off-heap.size=134217728b -D 
> jobmanager.memory.jvm-overhead.min=201326592b -D 
> jobmanager.memory.jvm-metaspace.size=268435456b -D 
> jobmanager.memory.heap.size=1073741824b -D 
> jobmanager.memory.jvm-overhead.max=201326592b ...
> {quote}
> {{FLINK_LIB_DIR}} is not guaranteed to be set in such a case. The current code 
> will use the current working dir to locate {{usrlib}}, which is correct in 
> most cases. But bad things can happen if the machine on which the YARN container 
> resides has already set {{FLINK_LIB_DIR}} to a different folder. In that 
> case, the code will try to find {{usrlib}} in an undesired place.

[GitHub] [flink] flinkbot edited a comment on pull request #18434: [FLINK-25742][akka] Remove the serialization of rpc invocation at Fli…

2022-02-09 Thread GitBox


flinkbot edited a comment on pull request #18434:
URL: https://github.com/apache/flink/pull/18434#issuecomment-1018078894


   
   ## CI report:
   
   * 6408af3118bf294fdb704e36af7568756f3bd1c5 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31009)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] KarmaGYZ commented on pull request #18434: [FLINK-25742][akka] Remove the serialization of rpc invocation at Fli…

2022-02-09 Thread GitBox


KarmaGYZ commented on pull request #18434:
URL: https://github.com/apache/flink/pull/18434#issuecomment-1034493244


   @flinkbot run azure


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-26047) Support usrlib in HDFS for YARN application mode

2022-02-09 Thread Biao Geng (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-26047?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17489955#comment-17489955
 ] 

Biao Geng commented on FLINK-26047:
---

Hi [~wangyang0918], sure. I will take the ticket and start working on it ASAP.

> Support usrlib in HDFS for YARN application mode
> 
>
> Key: FLINK-26047
> URL: https://issues.apache.org/jira/browse/FLINK-26047
> Project: Flink
>  Issue Type: Improvement
>  Components: Deployment / YARN
>Reporter: Biao Geng
>Priority: Major
>
> In YARN application mode, we currently support using user jars and lib jars 
> from HDFS. For example, we can run commands like:
> {quote}./bin/flink run-application -t yarn-application \
>   -Dyarn.provided.lib.dirs="hdfs://myhdfs/my-remote-flink-dist-dir" \
>   hdfs://myhdfs/jars/my-application.jar{quote}
> For {{usrlib}}, we currently only support local directories. I propose adding 
> HDFS support for {{usrlib}} so that it works better with CLASSPATH_INCLUDE_USER_JAR. 
> It can also benefit cases like using a notebook to submit Flink jobs.
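As an illustration of what the proposal could enable, a submission might look like the following. Note that the `-Dyarn.provided.usrlib.dir` option is a hypothetical name for the proposed feature, not an existing Flink configuration key; only the `yarn.provided.lib.dirs` line comes from the quoted issue.

```shell
# Hypothetical: point usrlib at HDFS in addition to the provided lib dirs.
# 'yarn.provided.usrlib.dir' is an assumed option name for this proposal.
./bin/flink run-application -t yarn-application \
  -Dyarn.provided.lib.dirs="hdfs://myhdfs/my-remote-flink-dist-dir" \
  -Dyarn.provided.usrlib.dir="hdfs://myhdfs/usrlib" \
  hdfs://myhdfs/jars/my-application.jar
```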



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink] flinkbot edited a comment on pull request #18696: [hotfix][docs] project config pages

2022-02-09 Thread GitBox


flinkbot edited a comment on pull request #18696:
URL: https://github.com/apache/flink/pull/18696#issuecomment-1034481691


   
   ## CI report:
   
   * 2921ed917d22b64cb99f05dfdd0f7e119dd86880 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31078)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot commented on pull request #18696: [hotfix][docs] project config pages

2022-02-09 Thread GitBox


flinkbot commented on pull request #18696:
URL: https://github.com/apache/flink/pull/18696#issuecomment-1034481691






-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] infoverload opened a new pull request #18696: [hotfix][docs] project config pages

2022-02-09 Thread GitBox


infoverload opened a new pull request #18696:
URL: https://github.com/apache/flink/pull/18696


   
   
   ## What is the purpose of the change
   
   fixes for https://github.com/apache/flink/pull/18353
   
   
   ## Brief change log
   
   - fixed typos
   - improved clarity
   - added missing Gradle example
   
   ## Verifying this change
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): no
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / no)
 - The serializers: no
 - The runtime per-record code paths (performance sensitive): no
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: no
 - The S3 file system connector: no
   
   ## Documentation
   
 - Does this pull request introduce a new feature? no
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Comment Edited] (FLINK-25705) Translate "Metric Reporters" page of "Deployment" in to Chinese

2022-02-09 Thread Chengkai Yang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-25705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17489948#comment-17489948
 ] 

Chengkai Yang edited comment on FLINK-25705 at 2/10/22, 3:59 AM:
-

Hi [~jark], another question: this issue is not a subtask of FLINK-11526 yet, so 
how can I link this issue as a subtask of FLINK-11526? Or should I close this 
issue and add a new subtask under FLINK-11526?


was (Author: JIRAUSER282569):
Hi [~jark] ,Another question,this issue is not a subtask of FLINK-11526

yet,so how to link this issue as a subtask of FLINK-11526?

> Translate "Metric Reporters" page of "Deployment" in to Chinese
> ---
>
> Key: FLINK-25705
> URL: https://issues.apache.org/jira/browse/FLINK-25705
> Project: Flink
>  Issue Type: Improvement
>  Components: chinese-translation, Documentation
>Reporter: Chengkai Yang
>Assignee: Chengkai Yang
>Priority: Minor
>  Labels: auto-unassigned
>
> The page url is 
> [https://nightlies.apache.org/flink/flink-docs-release-1.14/docs/deployment/metric_reporters]
> The markdown file is located in 
> flink/docs/content.zh/docs/deployment/metric_reporters.md
> This issue should be merged after 
> [Flink-25830|https://issues.apache.org/jira/browse/FLINK-25830] is merged or 
> solved.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (FLINK-25705) Translate "Metric Reporters" page of "Deployment" in to Chinese

2022-02-09 Thread Chengkai Yang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-25705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17489948#comment-17489948
 ] 

Chengkai Yang commented on FLINK-25705:
---

Hi [~jark], another question: this issue is not a subtask of FLINK-11526 yet, so 
how can I link this issue as a subtask of FLINK-11526?

> Translate "Metric Reporters" page of "Deployment" in to Chinese
> ---
>
> Key: FLINK-25705
> URL: https://issues.apache.org/jira/browse/FLINK-25705
> Project: Flink
>  Issue Type: Improvement
>  Components: chinese-translation, Documentation
>Reporter: Chengkai Yang
>Assignee: Chengkai Yang
>Priority: Minor
>  Labels: auto-unassigned
>
> The page url is 
> [https://nightlies.apache.org/flink/flink-docs-release-1.14/docs/deployment/metric_reporters]
> The markdown file is located in 
> flink/docs/content.zh/docs/deployment/metric_reporters.md
> This issue should be merged after 
> [Flink-25830|https://issues.apache.org/jira/browse/FLINK-25830] is merged or 
> solved.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink] flinkbot edited a comment on pull request #18434: [FLINK-25742][akka] Remove the serialization of rpc invocation at Fli…

2022-02-09 Thread GitBox


flinkbot edited a comment on pull request #18434:
URL: https://github.com/apache/flink/pull/18434#issuecomment-1018078894


   
   ## CI report:
   
   * 6408af3118bf294fdb704e36af7568756f3bd1c5 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31009)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Comment Edited] (FLINK-23545) Use new schema in SqlCreateTableConverter

2022-02-09 Thread zck (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-23545?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17489934#comment-17489934
 ] 

zck edited comment on FLINK-23545 at 2/10/22, 3:50 AM:
---

I like this feature.

Will it be published in 1.15?


was (Author: kcz):
I like this feature.

1.15 Can you publish it?

> Use new schema in SqlCreateTableConverter
> -
>
> Key: FLINK-23545
> URL: https://issues.apache.org/jira/browse/FLINK-23545
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / API
>Reporter: jinfeng
>Assignee: jinfeng
>Priority: Major
>  Labels: pull-request-available, stale-assigned
> Fix For: 1.15.0
>
>
> In order to support column comment in sql create table dll. We should  use 
> new schema in SqlCreateeTableConverter. 



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (FLINK-26047) Support usrlib in HDFS for YARN application mode

2022-02-09 Thread Yang Wang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-26047?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17489944#comment-17489944
 ] 

Yang Wang commented on FLINK-26047:
---

Nice feature. Would you like to work on this ticket?

> Support usrlib in HDFS for YARN application mode
> 
>
> Key: FLINK-26047
> URL: https://issues.apache.org/jira/browse/FLINK-26047
> Project: Flink
>  Issue Type: Improvement
>  Components: Deployment / YARN
>Reporter: Biao Geng
>Priority: Major
>
> In YARN application mode, we currently support using user jars and lib jars 
> from HDFS. For example, we can run commands like:
> {quote}./bin/flink run-application -t yarn-application \
>   -Dyarn.provided.lib.dirs="hdfs://myhdfs/my-remote-flink-dist-dir" \
>   hdfs://myhdfs/jars/my-application.jar{quote}
> For {{usrlib}}, we currently only support local directories. I propose adding 
> HDFS support for {{usrlib}} so that it works better with CLASSPATH_INCLUDE_USER_JAR. 
> It can also benefit cases like using a notebook to submit Flink jobs.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-25705) Translate "Metric Reporters" page of "Deployment" in to Chinese

2022-02-09 Thread Chengkai Yang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25705?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chengkai Yang updated FLINK-25705:
--
Description: 
The page url is 
[https://nightlies.apache.org/flink/flink-docs-release-1.14/docs/deployment/metric_reporters]

The markdown file is located in 
flink/docs/content.zh/docs/deployment/metric_reporters.md

This issue should be merged after 
[Flink-25830|https://issues.apache.org/jira/browse/FLINK-25830] is merged or 
solved.

  was:
The page url is 
[https://nightlies.apache.org/flink/flink-docs-release-1.14/docs/deployment/metric_reporters]

The markdown file is located in 
flink/docs/content.zh/docs/deployment/metric_reporters.md

This


> Translate "Metric Reporters" page of "Deployment" in to Chinese
> ---
>
> Key: FLINK-25705
> URL: https://issues.apache.org/jira/browse/FLINK-25705
> Project: Flink
>  Issue Type: Improvement
>  Components: chinese-translation, Documentation
>Reporter: Chengkai Yang
>Assignee: Chengkai Yang
>Priority: Minor
>  Labels: auto-unassigned
>
> The page url is 
> [https://nightlies.apache.org/flink/flink-docs-release-1.14/docs/deployment/metric_reporters]
> The markdown file is located in 
> flink/docs/content.zh/docs/deployment/metric_reporters.md
> This issue should be merged after 
> [Flink-25830|https://issues.apache.org/jira/browse/FLINK-25830] is merged or 
> solved.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink] zck573693104 edited a comment on pull request #16721: [FLINK-23545] Use new schema in SqlCreateTableConverter

2022-02-09 Thread GitBox


zck573693104 edited a comment on pull request #16721:
URL: https://github.com/apache/flink/pull/16721#issuecomment-1034460519


Will it be published in 1.15?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] zck573693104 commented on pull request #16721: [FLINK-23545] Use new schema in SqlCreateTableConverter

2022-02-09 Thread GitBox


zck573693104 commented on pull request #16721:
URL: https://github.com/apache/flink/pull/16721#issuecomment-1034460519


   1.15 ok ?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #18505: [FLINK-25796][network] Avoid record copy for result partition of sort-shuffle if there are enough buffers for better performance

2022-02-09 Thread GitBox


flinkbot edited a comment on pull request #18505:
URL: https://github.com/apache/flink/pull/18505#issuecomment-1021169033


   
   ## CI report:
   
   * d28c15f6f42d4f5982199527f4a227fec0976406 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30218)
 
   * 6216fd083e98baa0408e7884b9129afd8c135973 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31077)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-25705) Translate "Metric Reporters" page of "Deployment" in to Chinese

2022-02-09 Thread Chengkai Yang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25705?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chengkai Yang updated FLINK-25705:
--
Description: 
The page url is 
[https://nightlies.apache.org/flink/flink-docs-release-1.14/docs/deployment/metric_reporters]

The markdown file is located in 
flink/docs/content.zh/docs/deployment/metric_reporters.md

This

  was:
The page url is 
https://nightlies.apache.org/flink/flink-docs-release-1.14/docs/deployment/metric_reporters

The markdown file is located in 
flink/docs/content.zh/docs/deployment/metric_reporters.md


> Translate "Metric Reporters" page of "Deployment" in to Chinese
> ---
>
> Key: FLINK-25705
> URL: https://issues.apache.org/jira/browse/FLINK-25705
> Project: Flink
>  Issue Type: Improvement
>  Components: chinese-translation, Documentation
>Reporter: Chengkai Yang
>Assignee: Chengkai Yang
>Priority: Minor
>  Labels: auto-unassigned
>
> The page url is 
> [https://nightlies.apache.org/flink/flink-docs-release-1.14/docs/deployment/metric_reporters]
> The markdown file is located in 
> flink/docs/content.zh/docs/deployment/metric_reporters.md
> This



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Comment Edited] (FLINK-25705) Translate "Metric Reporters" page of "Deployment" in to Chinese

2022-02-09 Thread Chengkai Yang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-25705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17489940#comment-17489940
 ] 

Chengkai Yang edited comment on FLINK-25705 at 2/10/22, 3:39 AM:
-

[~xccui] [~klion26] [~jark] Hi guys, we need to discuss a glossary issue here.

The word "Reporter" is not in the 
[Glossary|https://cwiki.apache.org/confluence/display/FLINK/Flink+Translation+Specifications].

The corresponding Chinese word for "Reporter" could be "报送端/发送端/上报端", and 
"Metric Reporter" could be translated as "指标报送端/指标发送端/指标上报端", so which 
Chinese term would be better?


was (Author: JIRAUSER282569):
[~xccui]  [~klion26]  [~jark] Hi,guys,we need to discuss a glossary issue here.

The word "Reporter" is not in the 
[Glossary|[https://cwiki.apache.org/confluence/display/FLINK/Flink+Translation+Specifications]]

 

The corresponding Chinese word for "Reporter" can be "报送端/发送端/上报端", and the 
"Metric Reporter" can be translated to "指标报送端,指标发送端,指标上报段",so which Chinese 
word will be better?

> Translate "Metric Reporters" page of "Deployment" in to Chinese
> ---
>
> Key: FLINK-25705
> URL: https://issues.apache.org/jira/browse/FLINK-25705
> Project: Flink
>  Issue Type: Improvement
>  Components: chinese-translation, Documentation
>Reporter: Chengkai Yang
>Assignee: Chengkai Yang
>Priority: Minor
>  Labels: auto-unassigned
>
> The page url is 
> https://nightlies.apache.org/flink/flink-docs-release-1.14/docs/deployment/metric_reporters
> The markdown file is located in 
> flink/docs/content.zh/docs/deployment/metric_reporters.md



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Comment Edited] (FLINK-25705) Translate "Metric Reporters" page of "Deployment" in to Chinese

2022-02-09 Thread Chengkai Yang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-25705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17489940#comment-17489940
 ] 

Chengkai Yang edited comment on FLINK-25705 at 2/10/22, 3:38 AM:
-

[~xccui]  [~klion26]  [~jark] Hi,guys,we need to discuss a glossary issue here.

The word "Reporter" is not in the 
[Glossary|[https://cwiki.apache.org/confluence/display/FLINK/Flink+Translation+Specifications]]

 

The corresponding Chinese word for "Reporter" can be "报送端/发送端/上报端", and the 
"Metric Reporter" can be translated to "指标报送端,指标发送端,指标上报段",so which Chinese 
word will be better?


was (Author: JIRAUSER282569):
[~xccui]  [~klion26]  [~jark] Hi,guys,we need to discuss a glossary issue here.

The word "Reporter" is not in the Glossary[link 
title|[https://cwiki.apache.org/confluence/display/FLINK/Flink+Translation+Specifications]|https://cwiki.apache.org/confluence/display/FLINK/Flink+Translation+Specifications],hhow]

 

The corresponding Chinese word for "Reporter" can be "报送端/发送端/上报端", and the 
"Metric Reporter" can be translated to "指标报送端,指标发送端,指标上报段",so which Chinese 
word will be better?

> Translate "Metric Reporters" page of "Deployment" in to Chinese
> ---
>
> Key: FLINK-25705
> URL: https://issues.apache.org/jira/browse/FLINK-25705
> Project: Flink
>  Issue Type: Improvement
>  Components: chinese-translation, Documentation
>Reporter: Chengkai Yang
>Assignee: Chengkai Yang
>Priority: Minor
>  Labels: auto-unassigned
>
> The page url is 
> https://nightlies.apache.org/flink/flink-docs-release-1.14/docs/deployment/metric_reporters
> The markdown file is located in 
> flink/docs/content.zh/docs/deployment/metric_reporters.md



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (FLINK-25705) Translate "Metric Reporters" page of "Deployment" in to Chinese

2022-02-09 Thread ChengKai Yang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-25705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17489940#comment-17489940
 ] 

ChengKai Yang commented on FLINK-25705:
---

[~xccui]  [~klion26]  [~jark] Hi,guys,we need to discuss a glossary issue here.

The word "Reporter" is not in the Glossary[link 
title|[https://cwiki.apache.org/confluence/display/FLINK/Flink+Translation+Specifications]|https://cwiki.apache.org/confluence/display/FLINK/Flink+Translation+Specifications],hhow]

 

The corresponding Chinese word for "Reporter" can be "报送端/发送端/上报端", and the 
"Metric Reporter" can be translated to "指标报送端,指标发送端,指标上报段",so which Chinese 
word will be better?

> Translate "Metric Reporters" page of "Deployment" in to Chinese
> ---
>
> Key: FLINK-25705
> URL: https://issues.apache.org/jira/browse/FLINK-25705
> Project: Flink
>  Issue Type: Improvement
>  Components: chinese-translation, Documentation
>Reporter: ChengKai Yang
>Assignee: ChengKai Yang
>Priority: Minor
>  Labels: auto-unassigned
>
> The page url is 
> https://nightlies.apache.org/flink/flink-docs-release-1.14/docs/deployment/metric_reporters
> The markdown file is located in 
> flink/docs/content.zh/docs/deployment/metric_reporters.md



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


  1   2   3   4   5   6   7   8   9   10   >