[jira] [Updated] (FLINK-23815) Separate the concerns of PendingCheckpoint and CheckpointPlan
[ https://issues.apache.org/jira/browse/FLINK-23815?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yun Gao updated FLINK-23815: Parent: (was: FLINK-23883) Issue Type: Improvement (was: Sub-task) > Separate the concerns of PendingCheckpoint and CheckpointPlan > - > > Key: FLINK-23815 > URL: https://issues.apache.org/jira/browse/FLINK-23815 > Project: Flink > Issue Type: Improvement > Components: Runtime / Checkpointing >Reporter: Yun Gao >Assignee: Yun Gao >Priority: Major > Labels: pull-request-available, stale-assigned > Fix For: 1.15.0 > > > As discussed in > https://github.com/apache/flink/pull/16655#issuecomment-899603149: > {quote} > When dealing with a PendingCheckpoint, should one be forced to also deal with > a CheckpointPlan at all? I think not, the PendingCheckpoint is purely a > tracker of gathered states and how many tasks have acknowledged and how many > we are still waiting for. That makes testing, reusability, etc. simple. > That means we should strive to have the PendingCheckpoint independent of > CheckpointPlan. I think we can achieve this with the following steps: > Much of the reason to have the CheckpointPlan in PendingCheckpoint is > because the CheckpointCoordinator often requires access to the checkpoint > plan for a pending checkpoint. And it just uses the PendingCheckpoint as a > convenient way to store the CheckpointPlan. > A cleaner solution would be changing in CheckpointCoordinator the > Map<Long, PendingCheckpoint> pendingCheckpoints to a Map<Long, > Tuple2<PendingCheckpoint, CheckpointPlan>> pendingCheckpoints. The > CheckpointCoordinator would keep the two things it frequently needs together > in a tuple, rather than storing one in the other and thus expanding the > scope/concerns of PendingCheckpoint. > The above change should allow us to reduce the interface of > PendingCheckpoint.checkpointPlan to > PendingCheckpointFinishedTaskStateProvider. 
> The interface PendingCheckpointFinishedTaskStateProvider has some methods > that are about tracking state, not just about giving access to finished > state: void reportTaskFinishedOnRestore(ExecutionVertex task) and void > reportTaskHasFinishedOperators(ExecutionVertex task);. > I am wondering if those two would not be more cleanly handled outside the > PendingCheckpoint as well. Meaning in the method > CheckpointCoordinator.receiveAcknowledgeMessage() we handle the reporting of > finished operators to the CheckpointPlan. The method > PendingCheckpoint.acknowledgeTask() only deals with the states. > {quote} -- This message was sent by Atlassian Jira (v8.20.1#820001)
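The map change proposed in the quoted discussion can be sketched as follows. All class and method names here are hypothetical stand-ins for the Flink-internal types (Flink itself would use its own Tuple2 type); the point is only that the coordinator pairs the two objects itself instead of widening the scope of PendingCheckpoint:

```java
import java.util.AbstractMap.SimpleImmutableEntry;
import java.util.HashMap;
import java.util.Map;

// Hypothetical stand-ins for the Flink-internal classes discussed above;
// this is a sketch of the proposed data layout, not Flink's actual code.
final class PendingCheckpoint {}
final class CheckpointPlan {}

final class CheckpointCoordinatorSketch {
    // Instead of storing the CheckpointPlan inside the PendingCheckpoint,
    // keep the two side by side, keyed by checkpoint id.
    private final Map<Long, Map.Entry<PendingCheckpoint, CheckpointPlan>> pendingCheckpoints =
            new HashMap<>();

    void registerCheckpoint(long checkpointId, PendingCheckpoint checkpoint, CheckpointPlan plan) {
        pendingCheckpoints.put(checkpointId, new SimpleImmutableEntry<>(checkpoint, plan));
    }

    PendingCheckpoint pendingCheckpoint(long checkpointId) {
        return pendingCheckpoints.get(checkpointId).getKey();
    }

    CheckpointPlan planFor(long checkpointId) {
        return pendingCheckpoints.get(checkpointId).getValue();
    }
}
```

With this layout the coordinator can still look up the plan for a pending checkpoint by id, while PendingCheckpoint itself stays a plain tracker of gathered states and acknowledgements.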
[jira] [Updated] (FLINK-23815) Separate the concerns of PendingCheckpoint and CheckpointPlan
[ https://issues.apache.org/jira/browse/FLINK-23815?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yun Gao updated FLINK-23815: Parent: FLINK-26113 Issue Type: Sub-task (was: Improvement) > Separate the concerns of PendingCheckpoint and CheckpointPlan > - > > Key: FLINK-23815 > URL: https://issues.apache.org/jira/browse/FLINK-23815 > Project: Flink > Issue Type: Sub-task > Components: Runtime / Checkpointing >Reporter: Yun Gao >Assignee: Yun Gao >Priority: Major > Labels: pull-request-available, stale-assigned > Fix For: 1.15.0 -- This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Updated] (FLINK-23883) (1.15) Follow up tasks and code improvements for FLIP-147
[ https://issues.apache.org/jira/browse/FLINK-23883?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yun Gao updated FLINK-23883: Summary: (1.15) Follow up tasks and code improvements for FLIP-147 (was: Follow up tasks and code improvements for FLIP-147) > (1.15) Follow up tasks and code improvements for FLIP-147 > - > > Key: FLINK-23883 > URL: https://issues.apache.org/jira/browse/FLINK-23883 > Project: Flink > Issue Type: Improvement > Components: Runtime / Checkpointing >Reporter: Yun Gao >Priority: Major > Fix For: 1.15.0 > > > This is an umbrella issue to track the remaining bug fixes and code > improvements for FLIP-147, implemented in > https://issues.apache.org/jira/browse/FLINK-2491 > This issue tracks issues that won't be implemented in 1.14.0, but should be > implemented in 1.15. -- This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Created] (FLINK-26113) (1.16) Follow up tasks and code refactoring for FLIP-147
Yun Gao created FLINK-26113: --- Summary: (1.16) Follow up tasks and code refactoring for FLIP-147 Key: FLINK-26113 URL: https://issues.apache.org/jira/browse/FLINK-26113 Project: Flink Issue Type: Bug Components: Runtime / Checkpointing Reporter: Yun Gao Fix For: 1.16.0 This issue tracks the remaining issues for FLIP-147 that are postponed to 1.16. -- This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Updated] (FLINK-11813) Standby per job mode Dispatchers don't know job's JobSchedulingStatus
[ https://issues.apache.org/jira/browse/FLINK-11813?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matthias Pohl updated FLINK-11813: -- Release Note: The issue of re-submitting a job in Application Mode when the job finished but failed during cleanup is fixed through the introduction of the new component JobResultStore which enables Flink to persist the cleanup state of a job to the file system. (see FLINK-25431) > Standby per job mode Dispatchers don't know job's JobSchedulingStatus > - > > Key: FLINK-11813 > URL: https://issues.apache.org/jira/browse/FLINK-11813 > Project: Flink > Issue Type: Bug > Components: Runtime / Coordination >Affects Versions: 1.6.4, 1.7.2, 1.8.0, 1.9.3, 1.10.3, 1.11.3, 1.13.1, > 1.12.4 >Reporter: Till Rohrmann >Assignee: Matthias Pohl >Priority: Major > Fix For: 1.15.0 > > > At the moment, it can happen that standby {{Dispatchers}} in per job mode > will restart a terminated job after they gained leadership. The problem is > that we currently clear the {{RunningJobsRegistry}} once a job has reached a > globally terminal state. After the leading {{Dispatcher}} terminates, a > standby {{Dispatcher}} will gain leadership. Without having the information > from the {{RunningJobsRegistry}} it cannot tell whether the job has been > executed or whether the {{Dispatcher}} needs to re-execute the job. At the > moment, the {{Dispatcher}} will assume that there was a fault and hence > re-execute the job. This can lead to duplicate results. > I think we need some way to tell standby {{Dispatchers}} that a certain job > has been successfully executed. One trivial solution could be to not clean up > the {{RunningJobsRegistry}} but then we will clutter ZooKeeper. -- This message was sent by Atlassian Jira (v8.20.1#820001)
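The core idea behind the fix described in the release note can be illustrated with a toy sketch. Everything below is hypothetical and in-memory; the real JobResultStore (FLINK-25431) persists this information to the file system so it survives Dispatcher failover:

```java
import java.util.HashSet;
import java.util.Set;

// Toy, in-memory illustration of the JobResultStore idea: record that a job
// reached a globally terminal state, so that a standby Dispatcher gaining
// leadership can tell "already executed" apart from "crashed mid-run".
final class JobResultStoreSketch {
    // In Flink this would be file-backed, not an in-memory set.
    private final Set<String> terminalJobs = new HashSet<>();

    void markGloballyTerminated(String jobId) {
        terminalJobs.add(jobId);
    }

    // A newly leading Dispatcher consults this before (re-)submitting:
    // re-execute only if no terminal result was recorded for the job.
    boolean shouldReExecute(String jobId) {
        return !terminalJobs.contains(jobId);
    }
}
```

Because the terminal-state record is never cleared on leader change, the standby Dispatcher no longer has to assume a fault and re-execute, which is what previously led to duplicate results.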
[GitHub] [flink] flinkbot edited a comment on pull request #18681: [FLINK-26032][streaming] check explicit env allowed on StreamExecutionEnvironment own.
flinkbot edited a comment on pull request #18681: URL: https://github.com/apache/flink/pull/18681#issuecomment-1033535247 ## CI report: * 77e43dbd211dc9274041996d5a9d24607dc9f953 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31299) * 2b10fb621e515705a40c62f4ccbf131992318759 UNKNOWN Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run azure` re-run the last Azure build -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Updated] (FLINK-26112) Port getEndpoint method to the specific service type subclass
[ https://issues.apache.org/jira/browse/FLINK-26112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aitozi updated FLINK-26112: --- Description: In [FLINK-20830|https://issues.apache.org/jira/projects/FLINK/issues/FLINK-20830], we introduced several subclasses to handle service creation and querying. This ticket is meant to move the related code into the proper classes. (was: In the [FLINK-20830|https://issues.apache.org/jira/projects/FLINK/issues/FLINK-20830], we introduce serval subclass to deal with the service build and query, This ticket is meant to move the related code to the proper class ) > Port getEndpoint method to the specific service type subclass > - > > Key: FLINK-26112 > URL: https://issues.apache.org/jira/browse/FLINK-26112 > Project: Flink > Issue Type: Improvement >Reporter: Aitozi >Priority: Major > > In [FLINK-20830|https://issues.apache.org/jira/projects/FLINK/issues/FLINK-20830], > we introduced several subclasses to handle service creation and querying. > This ticket is meant to move the related code into the proper classes. -- This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Created] (FLINK-26112) Port getEndpoint method to the specific service type subclass
Aitozi created FLINK-26112: -- Summary: Port getEndpoint method to the specific service type subclass Key: FLINK-26112 URL: https://issues.apache.org/jira/browse/FLINK-26112 Project: Flink Issue Type: Improvement Reporter: Aitozi In [FLINK-20830|https://issues.apache.org/jira/projects/FLINK/issues/FLINK-20830], we introduced several subclasses to handle service creation and querying. This ticket is meant to move the related code into the proper classes. -- This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Commented] (FLINK-23391) KafkaSourceReaderTest.testKafkaSourceMetrics fails on azure
[ https://issues.apache.org/jira/browse/FLINK-23391?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17491843#comment-17491843 ] Yun Gao commented on FLINK-23391: - https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=31298=logs=c5f0071e-1851-543e-9a45-9ac140befc32=15a22db7-8faa-5b34-3920-d33c9f0ca23c=7286 > KafkaSourceReaderTest.testKafkaSourceMetrics fails on azure > --- > > Key: FLINK-23391 > URL: https://issues.apache.org/jira/browse/FLINK-23391 > Project: Flink > Issue Type: Bug > Components: Connectors / Kafka >Affects Versions: 1.14.0, 1.13.1, 1.15.0 >Reporter: Xintong Song >Assignee: Qingsheng Ren >Priority: Critical > Labels: pull-request-available, test-stability > Fix For: 1.15.0 > > > https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=20456=logs=c5612577-f1f7-5977-6ff6-7432788526f7=53f6305f-55e6-561c-8f1e-3a1dde2c77df=6783 > {code} > Jul 14 23:00:26 [ERROR] Tests run: 10, Failures: 0, Errors: 1, Skipped: 0, > Time elapsed: 99.93 s <<< FAILURE! - in > org.apache.flink.connector.kafka.source.reader.KafkaSourceReaderTest > Jul 14 23:00:26 [ERROR] > testKafkaSourceMetrics(org.apache.flink.connector.kafka.source.reader.KafkaSourceReaderTest) > Time elapsed: 60.225 s <<< ERROR! > Jul 14 23:00:26 java.util.concurrent.TimeoutException: Offsets are not > committed successfully. 
Dangling offsets: > {15213={KafkaSourceReaderTest-0=OffsetAndMetadata{offset=10, > leaderEpoch=null, metadata=''}}} > Jul 14 23:00:26 at > org.apache.flink.core.testutils.CommonTestUtils.waitUtil(CommonTestUtils.java:210) > Jul 14 23:00:26 at > org.apache.flink.connector.kafka.source.reader.KafkaSourceReaderTest.testKafkaSourceMetrics(KafkaSourceReaderTest.java:275) > Jul 14 23:00:26 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native > Method) > Jul 14 23:00:26 at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > Jul 14 23:00:26 at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > Jul 14 23:00:26 at java.lang.reflect.Method.invoke(Method.java:498) > Jul 14 23:00:26 at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50) > Jul 14 23:00:26 at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > Jul 14 23:00:26 at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47) > Jul 14 23:00:26 at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > Jul 14 23:00:26 at > org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) > Jul 14 23:00:26 at > org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:239) > Jul 14 23:00:26 at > org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45) > Jul 14 23:00:26 at > org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55) > Jul 14 23:00:26 at org.junit.rules.RunRules.evaluate(RunRules.java:20) > Jul 14 23:00:26 at > org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325) > Jul 14 23:00:26 at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78) > Jul 14 23:00:26 at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57) > Jul 14 23:00:26 at > 
org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) > Jul 14 23:00:26 at > org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) > Jul 14 23:00:26 at > org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) > Jul 14 23:00:26 at > org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) > Jul 14 23:00:26 at > org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268) > Jul 14 23:00:26 at > org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) > Jul 14 23:00:26 at > org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) > Jul 14 23:00:26 at > org.junit.runners.ParentRunner.run(ParentRunner.java:363) > Jul 14 23:00:26 at org.junit.runners.Suite.runChild(Suite.java:128) > Jul 14 23:00:26 at org.junit.runners.Suite.runChild(Suite.java:27) > Jul 14 23:00:26 at > org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) > Jul 14 23:00:26 at > org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) > Jul 14 23:00:26 at > org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) > Jul 14 23:00:26 at > org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) > Jul 14 23:00:26 at >
[jira] [Updated] (FLINK-25431) Implement file-based JobResultStore
[ https://issues.apache.org/jira/browse/FLINK-25431?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matthias Pohl updated FLINK-25431: -- Release Note: Introduces a file-based version of the JobResultStore with new configuration parameters "job-result-store.storage-path" and "job-result-store.delete-on-commit". This feature persists the final state of a job including its state of cleanup. (was: Introduces a file-based version of the JobResultStore with new configuration parameters {{job-result-store.storage-path}} and {{job-result-store.delete-on-commit}}. This feature persists the final state of a job including its state of cleanup.) > Implement file-based JobResultStore > --- > > Key: FLINK-25431 > URL: https://issues.apache.org/jira/browse/FLINK-25431 > Project: Flink > Issue Type: Sub-task >Reporter: Matthias Pohl >Assignee: Mika Naylor >Priority: Major > Labels: pull-request-available > Fix For: 1.15.0 > > > The implementation should comply to what's described in > [FLIP-194|https://cwiki.apache.org/confluence/display/FLINK/FLIP-194%3A+Introduce+the+JobResultStore] -- This message was sent by Atlassian Jira (v8.20.1#820001)
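The two configuration parameters named in the release note would appear in the cluster configuration roughly as follows. The keys come from the note itself; the values and the storage path are purely illustrative:

```yaml
# Hypothetical flink-conf.yaml fragment for the file-based JobResultStore.
# Directory where job results (including cleanup state) are persisted:
job-result-store.storage-path: hdfs:///flink/job-result-store
# Whether the result entry is deleted once the job's cleanup is committed:
job-result-store.delete-on-commit: false
```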
[jira] [Created] (FLINK-26111) ElasticsearchSinkITCase.testElasticsearchSink failed on azure
Yun Gao created FLINK-26111: --- Summary: ElasticsearchSinkITCase.testElasticsearchSink failed on azure Key: FLINK-26111 URL: https://issues.apache.org/jira/browse/FLINK-26111 Project: Flink Issue Type: Bug Components: Connectors / ElasticSearch Affects Versions: 1.14.3 Reporter: Yun Gao {code:java} 2022-02-12T02:31:35.0591708Z Feb 12 02:31:35 [ERROR] testElasticsearchSink Time elapsed: 32.062 s <<< ERROR! 2022-02-12T02:31:35.0592911Z Feb 12 02:31:35 org.apache.flink.runtime.client.JobExecutionException: Job execution failed. 2022-02-12T02:31:35.0594116Z Feb 12 02:31:35at org.apache.flink.runtime.jobmaster.JobResult.toJobExecutionResult(JobResult.java:144) 2022-02-12T02:31:35.0595330Z Feb 12 02:31:35at org.apache.flink.runtime.minicluster.MiniClusterJobClient.lambda$getJobExecutionResult$3(MiniClusterJobClient.java:137) 2022-02-12T02:31:35.0597985Z Feb 12 02:31:35at java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:616) 2022-02-12T02:31:35.0598951Z Feb 12 02:31:35at java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:591) 2022-02-12T02:31:35.0599766Z Feb 12 02:31:35at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488) 2022-02-12T02:31:35.0600573Z Feb 12 02:31:35at java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975) 2022-02-12T02:31:35.0601580Z Feb 12 02:31:35at org.apache.flink.runtime.rpc.akka.AkkaInvocationHandler.lambda$invokeRpc$1(AkkaInvocationHandler.java:258) 2022-02-12T02:31:35.0670936Z Feb 12 02:31:35at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:774) 2022-02-12T02:31:35.0672538Z Feb 12 02:31:35at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:750) 2022-02-12T02:31:35.0673610Z Feb 12 02:31:35at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488) 2022-02-12T02:31:35.0674648Z Feb 12 02:31:35at 
java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975) 2022-02-12T02:31:35.0675637Z Feb 12 02:31:35at org.apache.flink.util.concurrent.FutureUtils.doForward(FutureUtils.java:1389) 2022-02-12T02:31:35.0676705Z Feb 12 02:31:35at org.apache.flink.runtime.concurrent.akka.ClassLoadingUtils.lambda$null$1(ClassLoadingUtils.java:93) 2022-02-12T02:31:35.0677921Z Feb 12 02:31:35at org.apache.flink.runtime.concurrent.akka.ClassLoadingUtils.runWithContextClassLoader(ClassLoadingUtils.java:68) 2022-02-12T02:31:35.0679204Z Feb 12 02:31:35at org.apache.flink.runtime.concurrent.akka.ClassLoadingUtils.lambda$guardCompletionWithContextClassLoader$2(ClassLoadingUtils.java:92) 2022-02-12T02:31:35.0680428Z Feb 12 02:31:35at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:774) 2022-02-12T02:31:35.0681685Z Feb 12 02:31:35at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:750) 2022-02-12T02:31:35.0682868Z Feb 12 02:31:35at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488) 2022-02-12T02:31:35.0683909Z Feb 12 02:31:35at java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975) 2022-02-12T02:31:35.0684997Z Feb 12 02:31:35at org.apache.flink.runtime.concurrent.akka.AkkaFutureUtils$1.onComplete(AkkaFutureUtils.java:47) 2022-02-12T02:31:35.0686539Z Feb 12 02:31:35at akka.dispatch.OnComplete.internal(Future.scala:300) 2022-02-12T02:31:35.0687383Z Feb 12 02:31:35at akka.dispatch.OnComplete.internal(Future.scala:297) 2022-02-12T02:31:35.0688214Z Feb 12 02:31:35at akka.dispatch.japi$CallbackBridge.apply(Future.scala:224) 2022-02-12T02:31:35.0689231Z Feb 12 02:31:35at akka.dispatch.japi$CallbackBridge.apply(Future.scala:221) 2022-02-12T02:31:35.0690125Z Feb 12 02:31:35at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:60) 2022-02-12T02:31:35.0691199Z Feb 12 02:31:35at 
org.apache.flink.runtime.concurrent.akka.AkkaFutureUtils$DirectExecutionContext.execute(AkkaFutureUtils.java:65) 2022-02-12T02:31:35.0692578Z Feb 12 02:31:35at scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:68) 2022-02-12T02:31:35.0693634Z Feb 12 02:31:35at scala.concurrent.impl.Promise$DefaultPromise.$anonfun$tryComplete$1(Promise.scala:284) 2022-02-12T02:31:35.0694775Z Feb 12 02:31:35at scala.concurrent.impl.Promise$DefaultPromise.$anonfun$tryComplete$1$adapted(Promise.scala:284) 2022-02-12T02:31:35.0695856Z Feb 12 02:31:35at scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:284) 2022-02-12T02:31:35.0696776Z Feb 12 02:31:35at akka.pattern.PromiseActorRef.$bang(AskSupport.scala:621) 2022-02-12T02:31:35.069Z Feb 12 02:31:35at
[jira] [Updated] (FLINK-26111) ElasticsearchSinkITCase.testElasticsearchSink failed on azure
[ https://issues.apache.org/jira/browse/FLINK-26111?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yun Gao updated FLINK-26111: Labels: test-stability (was: ) > ElasticsearchSinkITCase.testElasticsearchSink failed on azure > - > > Key: FLINK-26111 > URL: https://issues.apache.org/jira/browse/FLINK-26111 > Project: Flink > Issue Type: Bug > Components: Connectors / ElasticSearch >Affects Versions: 1.14.3 >Reporter: Yun Gao >Priority: Major > Labels: test-stability
[jira] [Updated] (FLINK-25431) Implement file-based JobResultStore
[ https://issues.apache.org/jira/browse/FLINK-25431?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matthias Pohl updated FLINK-25431: -- Release Note: Introduces a file-based version of the JobResultStore with new configuration parameters {{job-result-store.storage-path}} and {{job-result-store.delete-on-commit}}. This feature persists the final state of a job including its state of cleanup. > Implement file-based JobResultStore > --- > > Key: FLINK-25431 > URL: https://issues.apache.org/jira/browse/FLINK-25431 > Project: Flink > Issue Type: Sub-task >Reporter: Matthias Pohl >Assignee: Mika Naylor >Priority: Major > Labels: pull-request-available > Fix For: 1.15.0 -- This message was sent by Atlassian Jira (v8.20.1#820001)
[GitHub] [flink] flinkbot edited a comment on pull request #18656: [FLINK-25249][connector/kafka] Reimplement KafkaTestEnvironment with KafkaContainer
flinkbot edited a comment on pull request #18656: URL: https://github.com/apache/flink/pull/18656#issuecomment-1032337760 ## CI report: * 026018918ca60db860e0a0dd6454ef04a1f1861e Azure: [CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31358) * fefd1a1e95647640cafbc29b333da5ed196c3e1a Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31376) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run azure` re-run the last Azure build -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot edited a comment on pull request #18400: [FLINK-25198][docs] Add doc about name and description of operator
flinkbot edited a comment on pull request #18400: URL: https://github.com/apache/flink/pull/18400#issuecomment-1016057839 ## CI report: * 0ee72a0732026c046d991d959033995c7baf44e1 Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29680) * b49a58bb711123c9cc370345b864cd529a56f8fe Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31372) * 377f14caf340fcfbba8fa409a23529769bfcff2a Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31377) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run azure` re-run the last Azure build -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Updated] (FLINK-26110) AvroStreamingFileSinkITCase failed on azure
[ https://issues.apache.org/jira/browse/FLINK-26110?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yun Gao updated FLINK-26110: Labels: test-stability (was: ) > AvroStreamingFileSinkITCase failed on azure > --- > > Key: FLINK-26110 > URL: https://issues.apache.org/jira/browse/FLINK-26110 > Project: Flink > Issue Type: Bug > Components: Connectors / FileSystem >Affects Versions: 1.13.5 >Reporter: Yun Gao >Priority: Major > Labels: test-stability > > {code:java} > Feb 12 01:00:00 [ERROR] Tests run: 3, Failures: 1, Errors: 0, Skipped: 0, > Time elapsed: 2.433 s <<< FAILURE! - in > org.apache.flink.formats.avro.AvroStreamingFileSinkITCase > Feb 12 01:00:00 [ERROR] > testWriteAvroGeneric(org.apache.flink.formats.avro.AvroStreamingFileSinkITCase) > Time elapsed: 0.433 s <<< FAILURE! > Feb 12 01:00:00 java.lang.AssertionError: expected:<1> but was:<2> > Feb 12 01:00:00 at org.junit.Assert.fail(Assert.java:88) > Feb 12 01:00:00 at org.junit.Assert.failNotEquals(Assert.java:834) > Feb 12 01:00:00 at org.junit.Assert.assertEquals(Assert.java:645) > Feb 12 01:00:00 at org.junit.Assert.assertEquals(Assert.java:631) > Feb 12 01:00:00 at > org.apache.flink.formats.avro.AvroStreamingFileSinkITCase.validateResults(AvroStreamingFileSinkITCase.java:139) > Feb 12 01:00:00 at > org.apache.flink.formats.avro.AvroStreamingFileSinkITCase.testWriteAvroGeneric(AvroStreamingFileSinkITCase.java:109) > Feb 12 01:00:00 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native > Method) > Feb 12 01:00:00 at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > Feb 12 01:00:00 at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > Feb 12 01:00:00 at java.lang.reflect.Method.invoke(Method.java:498) > Feb 12 01:00:00 at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50) > Feb 12 01:00:00 at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > 
Feb 12 01:00:00 at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47) > Feb 12 01:00:00 at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > Feb 12 01:00:00 at > org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) > Feb 12 01:00:00 at > org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298) > Feb 12 01:00:00 at > org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292) > Feb 12 01:00:00 at > java.util.concurrent.FutureTask.run(FutureTask.java:266) > Feb 12 01:00:00 at java.lang.Thread.run(Thread.java:748) > Feb 12 01:00:00 > {code} > https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=31304=logs=c91190b6-40ae-57b2-5999-31b869b0a7c1=43529380-51b4-5e90-5af4-2dccec0ef402=13026 -- This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Created] (FLINK-26110) AvroStreamingFileSinkITCase failed on azure
Yun Gao created FLINK-26110: --- Summary: AvroStreamingFileSinkITCase failed on azure Key: FLINK-26110 URL: https://issues.apache.org/jira/browse/FLINK-26110 Project: Flink Issue Type: Bug Components: Connectors / FileSystem Affects Versions: 1.13.5 Reporter: Yun Gao {code:java} Feb 12 01:00:00 [ERROR] Tests run: 3, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 2.433 s <<< FAILURE! - in org.apache.flink.formats.avro.AvroStreamingFileSinkITCase Feb 12 01:00:00 [ERROR] testWriteAvroGeneric(org.apache.flink.formats.avro.AvroStreamingFileSinkITCase) Time elapsed: 0.433 s <<< FAILURE! Feb 12 01:00:00 java.lang.AssertionError: expected:<1> but was:<2> Feb 12 01:00:00 at org.junit.Assert.fail(Assert.java:88) Feb 12 01:00:00 at org.junit.Assert.failNotEquals(Assert.java:834) Feb 12 01:00:00 at org.junit.Assert.assertEquals(Assert.java:645) Feb 12 01:00:00 at org.junit.Assert.assertEquals(Assert.java:631) Feb 12 01:00:00 at org.apache.flink.formats.avro.AvroStreamingFileSinkITCase.validateResults(AvroStreamingFileSinkITCase.java:139) Feb 12 01:00:00 at org.apache.flink.formats.avro.AvroStreamingFileSinkITCase.testWriteAvroGeneric(AvroStreamingFileSinkITCase.java:109) Feb 12 01:00:00 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) Feb 12 01:00:00 at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) Feb 12 01:00:00 at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) Feb 12 01:00:00 at java.lang.reflect.Method.invoke(Method.java:498) Feb 12 01:00:00 at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50) Feb 12 01:00:00 at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) Feb 12 01:00:00 at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47) Feb 12 01:00:00 at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) Feb 12 01:00:00 at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) Feb 12 01:00:00 at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298) Feb 12 01:00:00 at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292) Feb 12 01:00:00 at java.util.concurrent.FutureTask.run(FutureTask.java:266) Feb 12 01:00:00 at java.lang.Thread.run(Thread.java:748) Feb 12 01:00:00 {code} https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=31304=logs=c91190b6-40ae-57b2-5999-31b869b0a7c1=43529380-51b4-5e90-5af4-2dccec0ef402=13026 -- This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Commented] (FLINK-26107) CoordinatorEventsExactlyOnceITCase.test failed on azure
[ https://issues.apache.org/jira/browse/FLINK-26107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17491842#comment-17491842 ] Yun Gao commented on FLINK-26107: - Hi [~dmvk], could you also have a look at this issue and https://issues.apache.org/jira/browse/FLINK-26108? The errors seem to be related to the adaptive scheduler. > CoordinatorEventsExactlyOnceITCase.test failed on azure > --- > > Key: FLINK-26107 > URL: https://issues.apache.org/jira/browse/FLINK-26107 > Project: Flink > Issue Type: Bug > Components: Runtime / Coordination >Affects Versions: 1.15.0 >Reporter: Yun Gao >Priority: Critical > Labels: test-stability > > {code:java} > Feb 14 02:23:11 [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, > Time elapsed: 4.135 s <<< FAILURE! - in > org.apache.flink.runtime.operators.coordination.CoordinatorEventsExactlyOnceITCase > Feb 14 02:23:11 [ERROR] > org.apache.flink.runtime.operators.coordination.CoordinatorEventsExactlyOnceITCase.test > Time elapsed: 0.72 s <<< ERROR! > Feb 14 02:23:11 org.apache.flink.runtime.client.JobExecutionException: Job > execution failed. 
> Feb 14 02:23:11 at > org.apache.flink.runtime.jobmaster.JobResult.toJobExecutionResult(JobResult.java:144) > Feb 14 02:23:11 at > org.apache.flink.runtime.minicluster.MiniCluster.executeJobBlocking(MiniCluster.java:933) > Feb 14 02:23:11 at > org.apache.flink.runtime.operators.coordination.CoordinatorEventsExactlyOnceITCase.test(CoordinatorEventsExactlyOnceITCase.java:192) > Feb 14 02:23:11 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native > Method) > Feb 14 02:23:11 at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > Feb 14 02:23:11 at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > Feb 14 02:23:11 at java.lang.reflect.Method.invoke(Method.java:498) > Feb 14 02:23:11 at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) > Feb 14 02:23:11 at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > Feb 14 02:23:11 at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) > Feb 14 02:23:11 at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > Feb 14 02:23:11 at > org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45) > Feb 14 02:23:11 at > org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) > Feb 14 02:23:11 at > org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) > Feb 14 02:23:11 at > org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) > Feb 14 02:23:11 at > org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) > Feb 14 02:23:11 at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) > Feb 14 02:23:11 at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) > Feb 14 02:23:11 at > org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) > Feb 14 02:23:11 at > 
org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) > Feb 14 02:23:11 at > org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) > Feb 14 02:23:11 at > org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) > Feb 14 02:23:11 at > org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) > Feb 14 02:23:11 at > org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) > Feb 14 02:23:11 at > org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) > Feb 14 02:23:11 at > org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) > Feb 14 02:23:11 at > org.junit.runners.ParentRunner.run(ParentRunner.java:413) > Feb 14 02:23:11 at org.junit.runner.JUnitCore.run(JUnitCore.java:137) > Feb 14 02:23:11 at org.junit.runner.JUnitCore.run(JUnitCore.java:115) > Feb 14 02:23:11 at > org.junit.vintage.engine.execution.RunnerExecutor.execute(RunnerExecutor.java:42) > Feb 14 02:23:11 at > org.junit.vintage.engine.VintageTestEngine.executeAllChildren(VintageTestEngine.java:80) > Feb 14 02:23:11 at > org.junit.vintage.engine.VintageTestEngine.execute(VintageTestEngine.java:72) > Feb 14 02:23:11 at > org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:107) > Feb 14 02:23:11 at > org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:88) > Feb 14 02:23:11 at >
[jira] [Commented] (FLINK-26108) CoordinatedSourceITCase.testEnumeratorCreationFails failed on azure
[ https://issues.apache.org/jira/browse/FLINK-26108?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17491839#comment-17491839 ] Yun Gao commented on FLINK-26108: - https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=31305=logs=a5ef94ef-68c2-57fd-3794-dc108ed1c495=2c68b137-b01d-55c9-e603-3ff3f320364b=24676 > CoordinatedSourceITCase.testEnumeratorCreationFails failed on azure > --- > > Key: FLINK-26108 > URL: https://issues.apache.org/jira/browse/FLINK-26108 > Project: Flink > Issue Type: Bug > Components: Runtime / Coordination >Affects Versions: 1.15.0 >Reporter: Yun Gao >Priority: Critical > Labels: test-stability > > {code:java} > Feb 14 02:10:03 [ERROR] Tests run: 4, Failures: 0, Errors: 2, Skipped: 0, > Time elapsed: 6.65 s <<< FAILURE! - in > org.apache.flink.connector.base.source.reader.CoordinatedSourceITCase > Feb 14 02:10:03 [ERROR] > org.apache.flink.connector.base.source.reader.CoordinatedSourceITCase.testEnumeratorCreationFails > Time elapsed: 0.181 s <<< ERROR! > Feb 14 02:10:03 org.apache.flink.runtime.client.JobExecutionException: Job > execution failed. 
> Feb 14 02:10:03 at > org.apache.flink.runtime.jobmaster.JobResult.toJobExecutionResult(JobResult.java:144) > Feb 14 02:10:03 at > org.apache.flink.runtime.minicluster.MiniClusterJobClient.lambda$getJobExecutionResult$3(MiniClusterJobClient.java:141) > Feb 14 02:10:03 at > java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:616) > Feb 14 02:10:03 at > java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:591) > Feb 14 02:10:03 at > java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488) > Feb 14 02:10:03 at > java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975) > Feb 14 02:10:03 at > org.apache.flink.runtime.rpc.akka.AkkaInvocationHandler.lambda$invokeRpc$1(AkkaInvocationHandler.java:259) > Feb 14 02:10:03 at > java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:774) > Feb 14 02:10:03 at > java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:750) > Feb 14 02:10:03 at > java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488) > Feb 14 02:10:03 at > java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975) > Feb 14 02:10:03 at > org.apache.flink.util.concurrent.FutureUtils.doForward(FutureUtils.java:1389) > Feb 14 02:10:03 at > org.apache.flink.runtime.concurrent.akka.ClassLoadingUtils.lambda$null$1(ClassLoadingUtils.java:93) > Feb 14 02:10:03 at > org.apache.flink.runtime.concurrent.akka.ClassLoadingUtils.runWithContextClassLoader(ClassLoadingUtils.java:68) > Feb 14 02:10:03 at > org.apache.flink.runtime.concurrent.akka.ClassLoadingUtils.lambda$guardCompletionWithContextClassLoader$2(ClassLoadingUtils.java:92) > Feb 14 02:10:03 at > java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:774) > Feb 14 02:10:03 at > java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:750) > Feb 14 02:10:03 at > 
java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488) > Feb 14 02:10:03 at > java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975) > Feb 14 02:10:03 at > org.apache.flink.runtime.concurrent.akka.AkkaFutureUtils$1.onComplete(AkkaFutureUtils.java:47) > Feb 14 02:10:03 at akka.dispatch.OnComplete.internal(Future.scala:300) > Feb 14 02:10:03 at akka.dispatch.OnComplete.internal(Future.scala:297) > Feb 14 02:10:03 at > akka.dispatch.japi$CallbackBridge.apply(Future.scala:224) > Feb 14 02:10:03 at > akka.dispatch.japi$CallbackBridge.apply(Future.scala:221) > Feb 14 02:10:03 at > scala.concurrent.impl.CallbackRunnable.run(Promise.scala:60) > Feb 14 02:10:03 at > org.apache.flink.runtime.concurrent.akka.AkkaFutureUtils$DirectExecutionContext.execute(AkkaFutureUtils.java:65) > Feb 14 02:10:03 at > scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:68) > Feb 14 02:10:03 at > scala.concurrent.impl.Promise$DefaultPromise.$anonfun$tryComplete$1(Promise.scala:284) > Feb 14 02:10:03 at > scala.concurrent.impl.Promise$DefaultPromise.$anonfun$tryComplete$1$adapted(Promise.scala:284) > Feb 14 02:10:03 at > scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:284) > Feb 14 02:10:03 at > akka.pattern.PromiseActorRef.$bang(AskSupport.scala:621) > Feb 14 02:10:03 at > akka.pattern.PipeToSupport$PipeableFuture$$anonfun$pipeTo$1.applyOrElse(PipeToSupport.scala:24) > Feb 14
[jira] [Resolved] (FLINK-25974) Make cancellation of jobs depend on the JobResultStore
[ https://issues.apache.org/jira/browse/FLINK-25974?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matthias Pohl resolved FLINK-25974. --- Fix Version/s: 1.15.0 Resolution: Fixed master: 17c89564ae3 e8a91fd8428 > Make cancellation of jobs depend on the JobResultStore > -- > > Key: FLINK-25974 > URL: https://issues.apache.org/jira/browse/FLINK-25974 > Project: Flink > Issue Type: Sub-task > Components: Runtime / Coordination >Affects Versions: 1.15.0 >Reporter: Matthias Pohl >Assignee: Matthias Pohl >Priority: Major > Labels: pull-request-available > Fix For: 1.15.0 > > > {{JobManagerRunner}} instances were cancellable as long as the instance was > still registered in the {{Dispatcher.jobManagerRunnerRegistry}}. With the > cleanup being done concurrently (i.e. not relying on the > {{JobManagerRunnerRegistry}} to be cleaned up anymore), the cancellation of a > job should only be possible as long as the job is not globally finished and > before cleanup is triggered. > We should verify whether a job is listed in the {{JobResultStore}} and only > enable the user to cancel the job if that's not the case. -- This message was sent by Atlassian Jira (v8.20.1#820001)
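The cancellation guard described in FLINK-25974 can be illustrated with a minimal sketch. All names below (`CancellationGuardSketch`, `InMemoryJobResultStore`, `cancelJob`) are hypothetical stand-ins, not the actual Flink `Dispatcher`/`JobResultStore` API; the point is only the check order: cancellation is permitted only while no result entry exists for the job, i.e. before it is globally finished and cleanup is triggered.

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

/**
 * Sketch (hypothetical names) of gating job cancellation on a job result
 * store: a job may only be cancelled if no result has been recorded for it,
 * i.e. it is not globally finished and cleanup has not been triggered.
 */
class CancellationGuardSketch {

    /** Stand-in for the JobResultStore: records ids of globally finished jobs. */
    static final class InMemoryJobResultStore {
        private final Set<String> finishedJobs = ConcurrentHashMap.newKeySet();

        void markGloballyFinished(String jobId) {
            finishedJobs.add(jobId);
        }

        boolean hasJobResultEntry(String jobId) {
            return finishedJobs.contains(jobId);
        }
    }

    final InMemoryJobResultStore jobResultStore = new InMemoryJobResultStore();

    /** Returns true if cancellation was allowed, false if the job already finished. */
    boolean cancelJob(String jobId) {
        if (jobResultStore.hasJobResultEntry(jobId)) {
            // Globally finished: cleanup may already be running concurrently,
            // so cancellation must be rejected rather than racing with it.
            return false;
        }
        // ...forward the cancellation to the running job here...
        return true;
    }

    public static void main(String[] args) {
        CancellationGuardSketch dispatcher = new CancellationGuardSketch();
        System.out.println(dispatcher.cancelJob("job-1")); // still running -> true
        dispatcher.jobResultStore.markGloballyFinished("job-1");
        System.out.println(dispatcher.cancelJob("job-1")); // finished -> false
    }
}
```

With this ordering, the cleanup path no longer needs the `JobManagerRunnerRegistry` as its source of truth: whether a job is cancellable depends only on the result-store entry.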
[GitHub] [flink] flinkbot edited a comment on pull request #18717: [hotfix][avro] Add comments to explain why projection pushdown works for avro bulk format
flinkbot edited a comment on pull request #18717: URL: https://github.com/apache/flink/pull/18717#issuecomment-1035885610 ## CI report: * 19ff8d204907e1447a6b3f60c9b657140acdab58 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31359) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run azure` re-run the last Azure build -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot edited a comment on pull request #18715: [FLINK-25046][runtime] Convert unspecified edge to rescale when using adaptive batch scheduler
flinkbot edited a comment on pull request #18715: URL: https://github.com/apache/flink/pull/18715#issuecomment-1035853003 ## CI report: * 58d1ec01c0ad3bda0e0f2d2260caac113c3fa942 Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31355) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run azure` re-run the last Azure build -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Commented] (FLINK-25941) KafkaSinkITCase.testAbortTransactionsAfterScaleInBeforeFirstCheckpoint fails on AZP
[ https://issues.apache.org/jira/browse/FLINK-25941?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17491838#comment-17491838 ] Yun Gao commented on FLINK-25941: - https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=31305=logs=c5612577-f1f7-5977-6ff6-7432788526f7=ffa8837a-b445-534e-cdf4-db364cf8235d=36150 > KafkaSinkITCase.testAbortTransactionsAfterScaleInBeforeFirstCheckpoint fails > on AZP > --- > > Key: FLINK-25941 > URL: https://issues.apache.org/jira/browse/FLINK-25941 > Project: Flink > Issue Type: Bug > Components: Connectors / Kafka >Affects Versions: 1.15.0 >Reporter: Till Rohrmann >Priority: Critical > Labels: test-stability > > The test > {{KafkaSinkITCase.testAbortTransactionsAfterScaleInBeforeFirstCheckpoint}} > fails on AZP with > {code} > 2022-02-02T17:22:29.5131631Z Feb 02 17:22:29 [ERROR] > org.apache.flink.connector.kafka.sink.KafkaSinkITCase.testAbortTransactionsAfterScaleInBeforeFirstCheckpoint > Time elapsed: 2.186 s <<< FAILURE! 
> 2022-02-02T17:22:29.5146972Z Feb 02 17:22:29 java.lang.AssertionError > 2022-02-02T17:22:29.5148918Z Feb 02 17:22:29 at > org.junit.Assert.fail(Assert.java:87) > 2022-02-02T17:22:29.5149843Z Feb 02 17:22:29 at > org.junit.Assert.assertTrue(Assert.java:42) > 2022-02-02T17:22:29.5150644Z Feb 02 17:22:29 at > org.junit.Assert.assertTrue(Assert.java:53) > 2022-02-02T17:22:29.5151730Z Feb 02 17:22:29 at > org.apache.flink.connector.kafka.sink.KafkaSinkITCase.testAbortTransactionsAfterScaleInBeforeFirstCheckpoint(KafkaSinkITCase.java:267) > 2022-02-02T17:22:29.5152858Z Feb 02 17:22:29 at > sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > 2022-02-02T17:22:29.5153757Z Feb 02 17:22:29 at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > 2022-02-02T17:22:29.5155002Z Feb 02 17:22:29 at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > 2022-02-02T17:22:29.5156464Z Feb 02 17:22:29 at > java.lang.reflect.Method.invoke(Method.java:498) > 2022-02-02T17:22:29.5157384Z Feb 02 17:22:29 at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) > 2022-02-02T17:22:29.5158445Z Feb 02 17:22:29 at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > 2022-02-02T17:22:29.5159478Z Feb 02 17:22:29 at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) > 2022-02-02T17:22:29.5160524Z Feb 02 17:22:29 at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > 2022-02-02T17:22:29.5161758Z Feb 02 17:22:29 at > org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) > 2022-02-02T17:22:29.5162775Z Feb 02 17:22:29 at > org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) > 2022-02-02T17:22:29.5163744Z Feb 02 17:22:29 at > org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54) > 2022-02-02T17:22:29.5164913Z Feb 02 17:22:29 at > 
org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54) > 2022-02-02T17:22:29.5166101Z Feb 02 17:22:29 at > org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45) > 2022-02-02T17:22:29.5167030Z Feb 02 17:22:29 at > org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) > 2022-02-02T17:22:29.5167953Z Feb 02 17:22:29 at > org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) > 2022-02-02T17:22:29.5168956Z Feb 02 17:22:29 at > org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) > 2022-02-02T17:22:29.5169936Z Feb 02 17:22:29 at > org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) > 2022-02-02T17:22:29.5170903Z Feb 02 17:22:29 at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) > 2022-02-02T17:22:29.5171953Z Feb 02 17:22:29 at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) > 2022-02-02T17:22:29.5172919Z Feb 02 17:22:29 at > org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) > 2022-02-02T17:22:29.5173811Z Feb 02 17:22:29 at > org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) > 2022-02-02T17:22:29.5174874Z Feb 02 17:22:29 at > org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) > 2022-02-02T17:22:29.5175917Z Feb 02 17:22:29 at > org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) > 2022-02-02T17:22:29.5176851Z Feb 02 17:22:29 at > org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) > 2022-02-02T17:22:29.5177816Z Feb 02 17:22:29 at > org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) > 2022-02-02T17:22:29.5178816Z Feb 02 17:22:29
[GitHub] [flink] flinkbot edited a comment on pull request #18400: [FLINK-25198][docs] Add doc about name and description of operator
flinkbot edited a comment on pull request #18400: URL: https://github.com/apache/flink/pull/18400#issuecomment-1016057839 ## CI report: * 0ee72a0732026c046d991d959033995c7baf44e1 Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29680) * b49a58bb711123c9cc370345b864cd529a56f8fe Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31372) * 377f14caf340fcfbba8fa409a23529769bfcff2a UNKNOWN Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run azure` re-run the last Azure build -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] wanglijie95 commented on pull request #18715: [FLINK-25046][runtime] Convert unspecified edge to rescale when using adaptive batch scheduler
wanglijie95 commented on pull request #18715: URL: https://github.com/apache/flink/pull/18715#issuecomment-1038758135 @zhuzhurk Thanks for the review. I've addressed your comments; please take another look. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] MrWhiteSike edited a comment on pull request #18607: [FLINK-24345] [docs] Translate "File Systems" page of "Internals" int…
MrWhiteSike edited a comment on pull request #18607: URL: https://github.com/apache/flink/pull/18607#issuecomment-1029722910 [@wuchong](https://github.com/wuchong) [@xccui](https://github.com/xccui) [@klion26](https://github.com/klion26) Could you help review this when you have time? Thanks. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Commented] (FLINK-26107) CoordinatorEventsExactlyOnceITCase.test failed on azure
[ https://issues.apache.org/jira/browse/FLINK-26107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17491837#comment-17491837 ] Yun Gao commented on FLINK-26107: - https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=31305=logs=0e7be18f-84f2-53f0-a32d-4a5e4a174679=7c1d86e3-35bd-5fd5-3b7c-30c126a78702=9585 > CoordinatorEventsExactlyOnceITCase.test failed on azure > --- > > Key: FLINK-26107 > URL: https://issues.apache.org/jira/browse/FLINK-26107 > Project: Flink > Issue Type: Bug > Components: Runtime / Coordination >Affects Versions: 1.15.0 >Reporter: Yun Gao >Priority: Critical > Labels: test-stability > > {code:java} > Feb 14 02:23:11 [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, > Time elapsed: 4.135 s <<< FAILURE! - in > org.apache.flink.runtime.operators.coordination.CoordinatorEventsExactlyOnceITCase > Feb 14 02:23:11 [ERROR] > org.apache.flink.runtime.operators.coordination.CoordinatorEventsExactlyOnceITCase.test > Time elapsed: 0.72 s <<< ERROR! > Feb 14 02:23:11 org.apache.flink.runtime.client.JobExecutionException: Job > execution failed. 
> Feb 14 02:23:11 at > org.apache.flink.runtime.jobmaster.JobResult.toJobExecutionResult(JobResult.java:144) > Feb 14 02:23:11 at > org.apache.flink.runtime.minicluster.MiniCluster.executeJobBlocking(MiniCluster.java:933) > Feb 14 02:23:11 at > org.apache.flink.runtime.operators.coordination.CoordinatorEventsExactlyOnceITCase.test(CoordinatorEventsExactlyOnceITCase.java:192) > Feb 14 02:23:11 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native > Method) > Feb 14 02:23:11 at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > Feb 14 02:23:11 at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > Feb 14 02:23:11 at java.lang.reflect.Method.invoke(Method.java:498) > Feb 14 02:23:11 at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) > Feb 14 02:23:11 at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > Feb 14 02:23:11 at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) > Feb 14 02:23:11 at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > Feb 14 02:23:11 at > org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45) > Feb 14 02:23:11 at > org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) > Feb 14 02:23:11 at > org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) > Feb 14 02:23:11 at > org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) > Feb 14 02:23:11 at > org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) > Feb 14 02:23:11 at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) > Feb 14 02:23:11 at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) > Feb 14 02:23:11 at > org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) > Feb 14 02:23:11 at > 
org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) > Feb 14 02:23:11 at > org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) > Feb 14 02:23:11 at > org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) > Feb 14 02:23:11 at > org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) > Feb 14 02:23:11 at > org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) > Feb 14 02:23:11 at > org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) > Feb 14 02:23:11 at > org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) > Feb 14 02:23:11 at > org.junit.runners.ParentRunner.run(ParentRunner.java:413) > Feb 14 02:23:11 at org.junit.runner.JUnitCore.run(JUnitCore.java:137) > Feb 14 02:23:11 at org.junit.runner.JUnitCore.run(JUnitCore.java:115) > Feb 14 02:23:11 at > org.junit.vintage.engine.execution.RunnerExecutor.execute(RunnerExecutor.java:42) > Feb 14 02:23:11 at > org.junit.vintage.engine.VintageTestEngine.executeAllChildren(VintageTestEngine.java:80) > Feb 14 02:23:11 at > org.junit.vintage.engine.VintageTestEngine.execute(VintageTestEngine.java:72) > Feb 14 02:23:11 at > org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:107) > Feb 14 02:23:11 at > org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:88) > Feb 14 02:23:11 at >
[GitHub] [flink] XComp merged pull request #18644: [FLINK-25974] Makes job cancellation rely on JobResultStore entry
XComp merged pull request #18644: URL: https://github.com/apache/flink/pull/18644 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] MrWhiteSike edited a comment on pull request #18655: [FLINK-25799] [docs] Translate table/filesystem.md page into Chinese.
MrWhiteSike edited a comment on pull request #18655: URL: https://github.com/apache/flink/pull/18655#issuecomment-1032274801 Hi [@RocMarshal](https://github.com/RocMarshal) [@wuchong](https://github.com/wuchong) [@xccui](https://github.com/xccui) [@klion26](https://github.com/klion26), could you help review this when you are available? It would be much appreciated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Updated] (FLINK-26109) Avro Confluent Schema Registry nightly end-to-end test failed on azure
[ https://issues.apache.org/jira/browse/FLINK-26109?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yun Gao updated FLINK-26109: Labels: test-stability (was: ) > Avro Confluent Schema Registry nightly end-to-end test failed on azure > -- > > Key: FLINK-26109 > URL: https://issues.apache.org/jira/browse/FLINK-26109 > Project: Flink > Issue Type: Bug > Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile) >Affects Versions: 1.15.0 >Reporter: Yun Gao >Priority: Major > Labels: test-stability > > {code:java} > Feb 12 07:55:02 Stopping job timeout watchdog (with pid=130662) > Feb 12 07:55:03 Checking for errors... > Feb 12 07:55:03 Found error in log files; printing first 500 lines; see full > logs for details: > ... > az209-567.vil1xujjdrkuxjp2ihtao45w0e.ax.internal.cloudapp.net > (dataPort=41161). > org.apache.flink.util.FlinkException: The TaskExecutor is shutting down. > at > org.apache.flink.runtime.taskexecutor.TaskExecutor.onStop(TaskExecutor.java:456) > ~[flink-dist-1.15-SNAPSHOT.jar:1.15-SNAPSHOT] > at > org.apache.flink.runtime.rpc.RpcEndpoint.internalCallOnStop(RpcEndpoint.java:214) > ~[flink-dist-1.15-SNAPSHOT.jar:1.15-SNAPSHOT] > at > org.apache.flink.runtime.rpc.akka.AkkaRpcActor$StartedState.lambda$terminate$0(AkkaRpcActor.java:568) > ~[flink-rpc-akka_7dcae025-2017-4b0f-828d-f89a7ceb9bf7.jar:1.15-SNAPSHOT] > at > org.apache.flink.runtime.concurrent.akka.ClassLoadingUtils.runWithContextClassLoader(ClassLoadingUtils.java:83) > ~[flink-rpc-akka_7dcae025-2017-4b0f-828d-f89a7ceb9bf7.jar:1.15-SNAPSHOT] > at > org.apache.flink.runtime.rpc.akka.AkkaRpcActor$StartedState.terminate(AkkaRpcActor.java:567) > ~[flink-rpc-akka_7dcae025-2017-4b0f-828d-f89a7ceb9bf7.jar:1.15-SNAPSHOT] > at > org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleControlMessage(AkkaRpcActor.java:191) > ~[flink-rpc-akka_7dcae025-2017-4b0f-828d-f89a7ceb9bf7.jar:1.15-SNAPSHOT] > at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:24) > 
~[flink-rpc-akka_7dcae025-2017-4b0f-828d-f89a7ceb9bf7.jar:1.15-SNAPSHOT] > at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:20) > ~[flink-rpc-akka_7dcae025-2017-4b0f-828d-f89a7ceb9bf7.jar:1.15-SNAPSHOT] > at scala.PartialFunction.applyOrElse(PartialFunction.scala:123) > ~[flink-rpc-akka_7dcae025-2017-4b0f-828d-f89a7ceb9bf7.jar:1.15-SNAPSHOT] > at scala.PartialFunction.applyOrElse$(PartialFunction.scala:122) > ~[flink-rpc-akka_7dcae025-2017-4b0f-828d-f89a7ceb9bf7.jar:1.15-SNAPSHOT] > at akka.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:20) > ~[flink-rpc-akka_7dcae025-2017-4b0f-828d-f89a7ceb9bf7.jar:1.15-SNAPSHOT] > at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171) > [flink-rpc-akka_7dcae025-2017-4b0f-828d-f89a7ceb9bf7.jar:1.15-SNAPSHOT] > at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:172) > [flink-rpc-akka_7dcae025-2017-4b0f-828d-f89a7ceb9bf7.jar:1.15-SNAPSHOT] > at akka.actor.Actor.aroundReceive(Actor.scala:537) > [flink-rpc-akka_7dcae025-2017-4b0f-828d-f89a7ceb9bf7.jar:1.15-SNAPSHOT] > at akka.actor.Actor.aroundReceive$(Actor.scala:535) > [flink-rpc-akka_7dcae025-2017-4b0f-828d-f89a7ceb9bf7.jar:1.15-SNAPSHOT] > at akka.actor.AbstractActor.aroundReceive(AbstractActor.scala:220) > [flink-rpc-akka_7dcae025-2017-4b0f-828d-f89a7ceb9bf7.jar:1.15-SNAPSHOT] > at akka.actor.ActorCell.receiveMessage(ActorCell.scala:580) > [flink-rpc-akka_7dcae025-2017-4b0f-828d-f89a7ceb9bf7.jar:1.15-SNAPSHOT] > at akka.actor.ActorCell.invoke(ActorCell.scala:548) > [flink-rpc-akka_7dcae025-2017-4b0f-828d-f89a7ceb9bf7.jar:1.15-SNAPSHOT] > at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:270) > [flink-rpc-akka_7dcae025-2017-4b0f-828d-f89a7ceb9bf7.jar:1.15-SNAPSHOT] > at akka.dispatch.Mailbox.run(Mailbox.scala:231) > [flink-rpc-akka_7dcae025-2017-4b0f-828d-f89a7ceb9bf7.jar:1.15-SNAPSHOT] > at akka.dispatch.Mailbox.exec(Mailbox.scala:243) > [flink-rpc-akka_7dcae025-2017-4b0f-828d-f89a7ceb9bf7.jar:1.15-SNAPSHOT] > at 
java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289) > [?:1.8.0_312] > at > java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1056) > [?:1.8.0_312] > at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1692) > [?:1.8.0_312] > {code} > Also in the TM's log > > {code:java} > 2022-02-12 07:55:04,764 INFO > org.apache.flink.runtime.taskexecutor.TaskManagerRunner [] - RECEIVED > SIGNAL 15: SIGTERM. Shutting down as requested. > 2022-02-12 07:55:04,765 INFO > org.apache.flink.runtime.blob.PermanentBlobCache [] - Shutting > down BLOB cache > 2022-02-12 07:55:04,767 INFO >
[jira] [Created] (FLINK-26109) Avro Confluent Schema Registry nightly end-to-end test failed on azure
Yun Gao created FLINK-26109: --- Summary: Avro Confluent Schema Registry nightly end-to-end test failed on azure Key: FLINK-26109 URL: https://issues.apache.org/jira/browse/FLINK-26109 Project: Flink Issue Type: Bug Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile) Affects Versions: 1.15.0 Reporter: Yun Gao {code:java} Feb 12 07:55:02 Stopping job timeout watchdog (with pid=130662) Feb 12 07:55:03 Checking for errors... Feb 12 07:55:03 Found error in log files; printing first 500 lines; see full logs for details: ... az209-567.vil1xujjdrkuxjp2ihtao45w0e.ax.internal.cloudapp.net (dataPort=41161). org.apache.flink.util.FlinkException: The TaskExecutor is shutting down. at org.apache.flink.runtime.taskexecutor.TaskExecutor.onStop(TaskExecutor.java:456) ~[flink-dist-1.15-SNAPSHOT.jar:1.15-SNAPSHOT] at org.apache.flink.runtime.rpc.RpcEndpoint.internalCallOnStop(RpcEndpoint.java:214) ~[flink-dist-1.15-SNAPSHOT.jar:1.15-SNAPSHOT] at org.apache.flink.runtime.rpc.akka.AkkaRpcActor$StartedState.lambda$terminate$0(AkkaRpcActor.java:568) ~[flink-rpc-akka_7dcae025-2017-4b0f-828d-f89a7ceb9bf7.jar:1.15-SNAPSHOT] at org.apache.flink.runtime.concurrent.akka.ClassLoadingUtils.runWithContextClassLoader(ClassLoadingUtils.java:83) ~[flink-rpc-akka_7dcae025-2017-4b0f-828d-f89a7ceb9bf7.jar:1.15-SNAPSHOT] at org.apache.flink.runtime.rpc.akka.AkkaRpcActor$StartedState.terminate(AkkaRpcActor.java:567) ~[flink-rpc-akka_7dcae025-2017-4b0f-828d-f89a7ceb9bf7.jar:1.15-SNAPSHOT] at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleControlMessage(AkkaRpcActor.java:191) ~[flink-rpc-akka_7dcae025-2017-4b0f-828d-f89a7ceb9bf7.jar:1.15-SNAPSHOT] at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:24) ~[flink-rpc-akka_7dcae025-2017-4b0f-828d-f89a7ceb9bf7.jar:1.15-SNAPSHOT] at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:20) ~[flink-rpc-akka_7dcae025-2017-4b0f-828d-f89a7ceb9bf7.jar:1.15-SNAPSHOT] at scala.PartialFunction.applyOrElse(PartialFunction.scala:123) 
~[flink-rpc-akka_7dcae025-2017-4b0f-828d-f89a7ceb9bf7.jar:1.15-SNAPSHOT] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:122) ~[flink-rpc-akka_7dcae025-2017-4b0f-828d-f89a7ceb9bf7.jar:1.15-SNAPSHOT] at akka.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:20) ~[flink-rpc-akka_7dcae025-2017-4b0f-828d-f89a7ceb9bf7.jar:1.15-SNAPSHOT] at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171) [flink-rpc-akka_7dcae025-2017-4b0f-828d-f89a7ceb9bf7.jar:1.15-SNAPSHOT] at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:172) [flink-rpc-akka_7dcae025-2017-4b0f-828d-f89a7ceb9bf7.jar:1.15-SNAPSHOT] at akka.actor.Actor.aroundReceive(Actor.scala:537) [flink-rpc-akka_7dcae025-2017-4b0f-828d-f89a7ceb9bf7.jar:1.15-SNAPSHOT] at akka.actor.Actor.aroundReceive$(Actor.scala:535) [flink-rpc-akka_7dcae025-2017-4b0f-828d-f89a7ceb9bf7.jar:1.15-SNAPSHOT] at akka.actor.AbstractActor.aroundReceive(AbstractActor.scala:220) [flink-rpc-akka_7dcae025-2017-4b0f-828d-f89a7ceb9bf7.jar:1.15-SNAPSHOT] at akka.actor.ActorCell.receiveMessage(ActorCell.scala:580) [flink-rpc-akka_7dcae025-2017-4b0f-828d-f89a7ceb9bf7.jar:1.15-SNAPSHOT] at akka.actor.ActorCell.invoke(ActorCell.scala:548) [flink-rpc-akka_7dcae025-2017-4b0f-828d-f89a7ceb9bf7.jar:1.15-SNAPSHOT] at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:270) [flink-rpc-akka_7dcae025-2017-4b0f-828d-f89a7ceb9bf7.jar:1.15-SNAPSHOT] at akka.dispatch.Mailbox.run(Mailbox.scala:231) [flink-rpc-akka_7dcae025-2017-4b0f-828d-f89a7ceb9bf7.jar:1.15-SNAPSHOT] at akka.dispatch.Mailbox.exec(Mailbox.scala:243) [flink-rpc-akka_7dcae025-2017-4b0f-828d-f89a7ceb9bf7.jar:1.15-SNAPSHOT] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289) [?:1.8.0_312] at java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1056) [?:1.8.0_312] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1692) [?:1.8.0_312] {code} Also in the TM's log {code:java} 2022-02-12 07:55:04,764 INFO 
org.apache.flink.runtime.taskexecutor.TaskManagerRunner [] - RECEIVED SIGNAL 15: SIGTERM. Shutting down as requested. 2022-02-12 07:55:04,765 INFO org.apache.flink.runtime.blob.PermanentBlobCache [] - Shutting down BLOB cache 2022-02-12 07:55:04,767 INFO org.apache.flink.runtime.state.TaskExecutorLocalStateStoresManager [] - Shutting down TaskExecutorLocalStateStoresManager. 2022-02-12 07:55:04,768 INFO org.apache.flink.runtime.io.disk.FileChannelManagerImpl [] - FileChannelManager removed spill file directory /tmp/flink-io-e1efe10a-812c-476b-b48a-e16f6908ada4 2022-02-12 07:55:04,769 INFO org.apache.flink.runtime.blob.TransientBlobCache [] - Shutting down BLOB cache 2022-02-12 07:55:04,771 INFO
[GitHub] [flink] MrWhiteSike edited a comment on pull request #18607: [FLINK-24345] [docs] Translate "File Systems" page of "Internals" int…
MrWhiteSike edited a comment on pull request #18607: URL: https://github.com/apache/flink/pull/18607#issuecomment-1029722910 [@wuchong](https://github.com/wuchong) [@xccui](https://github.com/xccui) [@klion26](https://github.com/klion26) Could you review this when you have time? Thanks. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] MrWhiteSike edited a comment on pull request #18607: [FLINK-24345] [docs] Translate "File Systems" page of "Internals" int…
MrWhiteSike edited a comment on pull request #18607: URL: https://github.com/apache/flink/pull/18607#issuecomment-1029722910 [@wuchong](https://github.com/wuchong) [@xccui](https://github.com/xccui) [@klion26](https://github.com/klion26) Could you review this when you have time? Thanks.
[GitHub] [flink] flinkbot edited a comment on pull request #18723: [FLINK-25640][docs] Enhance the document for blocking shuffle
flinkbot edited a comment on pull request #18723: URL: https://github.com/apache/flink/pull/18723#issuecomment-1036045794 ## CI report: * 0d4dac5919be0c6771ab22a261429e0387f781c7 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31235) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run azure` re-run the last Azure build
[GitHub] [flink] MrWhiteSike edited a comment on pull request #18607: [FLINK-24345] [docs] Translate "File Systems" page of "Internals" int…
MrWhiteSike edited a comment on pull request #18607: URL: https://github.com/apache/flink/pull/18607#issuecomment-1029722910 [@wuchong](https://github.com/wuchong) Could you review this when you have time? Thanks.
[GitHub] [flink-table-store] tsreaper commented on a change in pull request #15: [FLINK-25820] Introduce Table Store Flink Source
tsreaper commented on a change in pull request #15: URL: https://github.com/apache/flink-table-store/pull/15#discussion_r805569270 ## File path: flink-table-store-connector/src/test/java/org/apache/flink/table/store/connector/source/FileStoreSourceSplitGeneratorTest.java ## @@ -0,0 +1,99 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.flink.table.store.connector.source; + +import org.apache.flink.table.store.file.ValueKind; +import org.apache.flink.table.store.file.manifest.ManifestEntry; +import org.apache.flink.table.store.file.mergetree.sst.SstFileMeta; +import org.apache.flink.table.store.file.operation.FileStoreScan; +import org.apache.flink.table.store.file.stats.FieldStats; + +import org.junit.jupiter.api.Test; + +import javax.annotation.Nullable; + +import java.util.Arrays; +import java.util.Collections; +import java.util.List; +import java.util.stream.Collectors; + +import static org.apache.flink.table.store.file.mergetree.compact.CompactManagerTest.row; +import static org.assertj.core.api.Assertions.assertThat; + +/** Test for {@link FileStoreSourceSplitGenerator}. */ +public class FileStoreSourceSplitGeneratorTest { + +@Test +public void test() { +FileStoreScan.Plan plan = +new FileStoreScan.Plan() { +@Nullable +@Override +public Long snapshotId() { +return null; +} + +@Override +public List<ManifestEntry> files() { +return Arrays.asList( +makeEntry(1, 0, "f0"), +makeEntry(1, 0, "f1"), +makeEntry(1, 1, "f2"), +makeEntry(2, 0, "f3"), +makeEntry(2, 0, "f4"), +makeEntry(2, 0, "f5"), +makeEntry(2, 1, "f6")); +} +}; +List<FileStoreSourceSplit> splits = new FileStoreSourceSplitGenerator().createSplits(plan); +assertThat(splits.size()).isEqualTo(4); +assertSplit(splits.get(0), "01", 2, 0, Arrays.asList("f3", "f4", "f5")); +assertSplit(splits.get(1), "02", 2, 1, Collections.singletonList("f6")); +assertSplit(splits.get(2), "03", 1, 0, Arrays.asList("f0", "f1")); +assertSplit(splits.get(3), "04", 1, 1, Collections.singletonList("f2")); Review comment: I mean: add cases that cross a carry boundary, for example 9 -> 10 or 99 -> 100. See https://en.wikipedia.org/wiki/Carry_(arithmetic)
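The carry boundaries the review asks for can be sketched outside the table store (a minimal sketch with hypothetical names, not flink-table-store API): if split ids are zero-padded strings whose width is derived from the total split count, the interesting tests sit exactly where a decimal carry adds a digit.

```java
import java.util.ArrayList;
import java.util.List;

public class SplitIdCarry {

    // Hypothetical id scheme: pad every id to the width of the largest id,
    // so that lexicographic order of the ids matches their numeric order.
    static List<String> splitIds(int count) {
        int width = String.valueOf(count).length();
        List<String> ids = new ArrayList<>();
        for (int i = 1; i <= count; i++) {
            ids.add(String.format("%0" + width + "d", i));
        }
        return ids;
    }

    public static void main(String[] args) {
        // 9 splits: one-digit ids; 10 splits: the carry 9 -> 10 forces two
        // digits, so "9" must become "09" for the ordering to survive.
        System.out.println(splitIds(9));
        System.out.println(splitIds(10));
    }
}
```

A test that only ever generates fewer than 10 splits would never exercise the re-padding, which is why the 9 -> 10 and 99 -> 100 cases are worth adding.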
[GitHub] [flink] flinkbot edited a comment on pull request #18400: [FLINK-25198][docs] Add doc about name and description of operator
flinkbot edited a comment on pull request #18400: URL: https://github.com/apache/flink/pull/18400#issuecomment-1016057839 ## CI report: * 0ee72a0732026c046d991d959033995c7baf44e1 Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29680) * b49a58bb711123c9cc370345b864cd529a56f8fe Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31372) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run azure` re-run the last Azure build
[GitHub] [flink] dawidwys commented on a change in pull request #18729: [FLINK-26093][tests] Adjust SavepointFormatITCase for ChangelogStateBackend
dawidwys commented on a change in pull request #18729: URL: https://github.com/apache/flink/pull/18729#discussion_r805565771 ## File path: flink-tests/src/test/java/org/apache/flink/test/checkpointing/SavepointFormatITCase.java ## @@ -78,138 +82,151 @@ LoggerAuditingExtension loggerAuditingExtension = new LoggerAuditingExtension(SavepointFormatITCase.class, Level.INFO); -private static Stream<Arguments> parameters() { -return Stream.of( -Arguments.of( -SavepointFormatType.CANONICAL, -HEAP, -(Consumer<KeyedStateHandle>) -keyedState -> -assertThat( -keyedState, -instanceOf(SavepointKeyedStateHandle.class))), -Arguments.of( -SavepointFormatType.NATIVE, -HEAP, -(Consumer<KeyedStateHandle>) -keyedState -> -assertThat( -keyedState, -instanceOf(KeyGroupsStateHandle.class))), -Arguments.of( -SavepointFormatType.CANONICAL, -ROCKSDB_FULL_SNAPSHOTS, -(Consumer<KeyedStateHandle>) -keyedState -> -assertThat( -keyedState, -instanceOf(SavepointKeyedStateHandle.class))), -Arguments.of( -SavepointFormatType.NATIVE, -ROCKSDB_FULL_SNAPSHOTS, -(Consumer<KeyedStateHandle>) -keyedState -> -assertThat( -keyedState, -instanceOf(KeyGroupsStateHandle.class))), -Arguments.of( -SavepointFormatType.CANONICAL, -ROCKSDB_INCREMENTAL_SNAPSHOTS, -(Consumer<KeyedStateHandle>) -keyedState -> -assertThat( -keyedState, -instanceOf(SavepointKeyedStateHandle.class))), -Arguments.of( -SavepointFormatType.NATIVE, -ROCKSDB_INCREMENTAL_SNAPSHOTS, -(Consumer<KeyedStateHandle>) -keyedState -> -assertThat( -keyedState, -instanceOf( -IncrementalRemoteKeyedStateHandle.class)))); +private static List<Arguments> parameters() { Review comment: Only expressing my opinion, I'll leave it up to your judgment. I see the benefit of generating all combinations through "loops", but on the downside I find it harder to 1) see what is tested, and 2) add new test cases or tell which case/branch applies to which backend combination. 
## File path: flink-tests/src/test/java/org/apache/flink/test/checkpointing/SavepointFormatITCase.java ## @@ -78,138 +82,151 @@ LoggerAuditingExtension loggerAuditingExtension = new LoggerAuditingExtension(SavepointFormatITCase.class, Level.INFO); -private static Stream<Arguments> parameters() { -return Stream.of( -Arguments.of( -SavepointFormatType.CANONICAL, -HEAP, -(Consumer<KeyedStateHandle>) -keyedState -> -assertThat( -keyedState, -instanceOf(SavepointKeyedStateHandle.class))), -Arguments.of( -SavepointFormatType.NATIVE, -HEAP, -(Consumer<KeyedStateHandle>) -keyedState -> -assertThat( -keyedState, -instanceOf(KeyGroupsStateHandle.class))), -Arguments.of( -SavepointFormatType.CANONICAL, -ROCKSDB_FULL_SNAPSHOTS, -(Consumer<KeyedStateHandle>) -keyedState -> -assertThat( -keyedState, -
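The trade-off described in the review can be reduced to a plain-Java sketch (hypothetical names, no JUnit dependency): explicit per-case entries spell out the expected state-handle class next to each backend, while loop-generated combinations are compact but compute the expectation.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class ParameterStyles {

    static final List<String> FORMATS = Arrays.asList("CANONICAL", "NATIVE");
    static final List<String> BACKENDS =
            Arrays.asList("HEAP", "ROCKSDB_FULL_SNAPSHOTS", "ROCKSDB_INCREMENTAL_SNAPSHOTS");

    // Loop style: exhaustive by construction, but the reader must derive
    // which expectation belongs to which format/backend pair.
    static List<String> generatedCases() {
        List<String> cases = new ArrayList<>();
        for (String format : FORMATS) {
            for (String backend : BACKENDS) {
                // In a real test the expected handle class would be computed
                // here (e.g. from the format) instead of written out per case.
                cases.add(format + "/" + backend);
            }
        }
        return cases;
    }

    public static void main(String[] args) {
        // 2 formats x 3 backends = 6 combinations.
        System.out.println(generatedCases().size());
    }
}
```

The explicit `Arguments.of` style in the diff above trades that compactness for being able to point at the one line covering, say, NATIVE with ROCKSDB_INCREMENTAL_SNAPSHOTS.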
[jira] [Commented] (FLINK-26108) CoordinatedSourceITCase.testEnumeratorCreationFails failed on azure
[ https://issues.apache.org/jira/browse/FLINK-26108?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17491831#comment-17491831 ] Yun Gao commented on FLINK-26108: - https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=31323=logs=a5ef94ef-68c2-57fd-3794-dc108ed1c495=2c68b137-b01d-55c9-e603-3ff3f320364b=25527 > CoordinatedSourceITCase.testEnumeratorCreationFails failed on azure > --- > > Key: FLINK-26108 > URL: https://issues.apache.org/jira/browse/FLINK-26108 > Project: Flink > Issue Type: Bug > Components: Runtime / Coordination >Affects Versions: 1.15.0 >Reporter: Yun Gao >Priority: Critical > Labels: test-stability > > {code:java} > Feb 14 02:10:03 [ERROR] Tests run: 4, Failures: 0, Errors: 2, Skipped: 0, > Time elapsed: 6.65 s <<< FAILURE! - in > org.apache.flink.connector.base.source.reader.CoordinatedSourceITCase > Feb 14 02:10:03 [ERROR] > org.apache.flink.connector.base.source.reader.CoordinatedSourceITCase.testEnumeratorCreationFails > Time elapsed: 0.181 s <<< ERROR! > Feb 14 02:10:03 org.apache.flink.runtime.client.JobExecutionException: Job > execution failed. 
> Feb 14 02:10:03 at > org.apache.flink.runtime.jobmaster.JobResult.toJobExecutionResult(JobResult.java:144) > Feb 14 02:10:03 at > org.apache.flink.runtime.minicluster.MiniClusterJobClient.lambda$getJobExecutionResult$3(MiniClusterJobClient.java:141) > Feb 14 02:10:03 at > java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:616) > Feb 14 02:10:03 at > java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:591) > Feb 14 02:10:03 at > java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488) > Feb 14 02:10:03 at > java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975) > Feb 14 02:10:03 at > org.apache.flink.runtime.rpc.akka.AkkaInvocationHandler.lambda$invokeRpc$1(AkkaInvocationHandler.java:259) > Feb 14 02:10:03 at > java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:774) > Feb 14 02:10:03 at > java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:750) > Feb 14 02:10:03 at > java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488) > Feb 14 02:10:03 at > java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975) > Feb 14 02:10:03 at > org.apache.flink.util.concurrent.FutureUtils.doForward(FutureUtils.java:1389) > Feb 14 02:10:03 at > org.apache.flink.runtime.concurrent.akka.ClassLoadingUtils.lambda$null$1(ClassLoadingUtils.java:93) > Feb 14 02:10:03 at > org.apache.flink.runtime.concurrent.akka.ClassLoadingUtils.runWithContextClassLoader(ClassLoadingUtils.java:68) > Feb 14 02:10:03 at > org.apache.flink.runtime.concurrent.akka.ClassLoadingUtils.lambda$guardCompletionWithContextClassLoader$2(ClassLoadingUtils.java:92) > Feb 14 02:10:03 at > java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:774) > Feb 14 02:10:03 at > java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:750) > Feb 14 02:10:03 at > 
java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488) > Feb 14 02:10:03 at > java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975) > Feb 14 02:10:03 at > org.apache.flink.runtime.concurrent.akka.AkkaFutureUtils$1.onComplete(AkkaFutureUtils.java:47) > Feb 14 02:10:03 at akka.dispatch.OnComplete.internal(Future.scala:300) > Feb 14 02:10:03 at akka.dispatch.OnComplete.internal(Future.scala:297) > Feb 14 02:10:03 at > akka.dispatch.japi$CallbackBridge.apply(Future.scala:224) > Feb 14 02:10:03 at > akka.dispatch.japi$CallbackBridge.apply(Future.scala:221) > Feb 14 02:10:03 at > scala.concurrent.impl.CallbackRunnable.run(Promise.scala:60) > Feb 14 02:10:03 at > org.apache.flink.runtime.concurrent.akka.AkkaFutureUtils$DirectExecutionContext.execute(AkkaFutureUtils.java:65) > Feb 14 02:10:03 at > scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:68) > Feb 14 02:10:03 at > scala.concurrent.impl.Promise$DefaultPromise.$anonfun$tryComplete$1(Promise.scala:284) > Feb 14 02:10:03 at > scala.concurrent.impl.Promise$DefaultPromise.$anonfun$tryComplete$1$adapted(Promise.scala:284) > Feb 14 02:10:03 at > scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:284) > Feb 14 02:10:03 at > akka.pattern.PromiseActorRef.$bang(AskSupport.scala:621) > Feb 14 02:10:03 at > akka.pattern.PipeToSupport$PipeableFuture$$anonfun$pipeTo$1.applyOrElse(PipeToSupport.scala:24) > Feb 14
[GitHub] [flink] JingGe edited a comment on pull request #18736: [hotfix][datastream] move the change and restore of env parallelism into the adjusTransformations method.
JingGe edited a comment on pull request #18736: URL: https://github.com/apache/flink/pull/18736#issuecomment-1038747241 @gaoyunhaii would you like to take a look at this PR? Thanks.
[jira] [Updated] (FLINK-26107) CoordinatorEventsExactlyOnceITCase.test failed on azure
[ https://issues.apache.org/jira/browse/FLINK-26107?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yun Gao updated FLINK-26107: Priority: Critical (was: Major) > CoordinatorEventsExactlyOnceITCase.test failed on azure > --- > > Key: FLINK-26107 > URL: https://issues.apache.org/jira/browse/FLINK-26107 > Project: Flink > Issue Type: Bug > Components: Runtime / Coordination >Affects Versions: 1.15.0 >Reporter: Yun Gao >Priority: Critical > Labels: test-stability > > {code:java} > Feb 14 02:23:11 [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, > Time elapsed: 4.135 s <<< FAILURE! - in > org.apache.flink.runtime.operators.coordination.CoordinatorEventsExactlyOnceITCase > Feb 14 02:23:11 [ERROR] > org.apache.flink.runtime.operators.coordination.CoordinatorEventsExactlyOnceITCase.test > Time elapsed: 0.72 s <<< ERROR! > Feb 14 02:23:11 org.apache.flink.runtime.client.JobExecutionException: Job > execution failed. > Feb 14 02:23:11 at > org.apache.flink.runtime.jobmaster.JobResult.toJobExecutionResult(JobResult.java:144) > Feb 14 02:23:11 at > org.apache.flink.runtime.minicluster.MiniCluster.executeJobBlocking(MiniCluster.java:933) > Feb 14 02:23:11 at > org.apache.flink.runtime.operators.coordination.CoordinatorEventsExactlyOnceITCase.test(CoordinatorEventsExactlyOnceITCase.java:192) > Feb 14 02:23:11 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native > Method) > Feb 14 02:23:11 at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > Feb 14 02:23:11 at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > Feb 14 02:23:11 at java.lang.reflect.Method.invoke(Method.java:498) > Feb 14 02:23:11 at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) > Feb 14 02:23:11 at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > Feb 14 02:23:11 at > 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) > Feb 14 02:23:11 at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > Feb 14 02:23:11 at > org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45) > Feb 14 02:23:11 at > org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) > Feb 14 02:23:11 at > org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) > Feb 14 02:23:11 at > org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) > Feb 14 02:23:11 at > org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) > Feb 14 02:23:11 at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) > Feb 14 02:23:11 at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) > Feb 14 02:23:11 at > org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) > Feb 14 02:23:11 at > org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) > Feb 14 02:23:11 at > org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) > Feb 14 02:23:11 at > org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) > Feb 14 02:23:11 at > org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) > Feb 14 02:23:11 at > org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) > Feb 14 02:23:11 at > org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) > Feb 14 02:23:11 at > org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) > Feb 14 02:23:11 at > org.junit.runners.ParentRunner.run(ParentRunner.java:413) > Feb 14 02:23:11 at org.junit.runner.JUnitCore.run(JUnitCore.java:137) > Feb 14 02:23:11 at org.junit.runner.JUnitCore.run(JUnitCore.java:115) > Feb 14 02:23:11 at > org.junit.vintage.engine.execution.RunnerExecutor.execute(RunnerExecutor.java:42) > Feb 14 02:23:11 at > 
org.junit.vintage.engine.VintageTestEngine.executeAllChildren(VintageTestEngine.java:80) > Feb 14 02:23:11 at > org.junit.vintage.engine.VintageTestEngine.execute(VintageTestEngine.java:72) > Feb 14 02:23:11 at > org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:107) > Feb 14 02:23:11 at > org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:88) > Feb 14 02:23:11 at > org.junit.platform.launcher.core.EngineExecutionOrchestrator.lambda$execute$0(EngineExecutionOrchestrator.java:54) > {code} >
[GitHub] [flink] JingGe commented on pull request #18736: [hotfix][datastream] move the change and restore of env parallelism into the adjusTransformations method.
JingGe commented on pull request #18736: URL: https://github.com/apache/flink/pull/18736#issuecomment-1038747241 @gaoyunhaii would you like to take a look at this PR?
[jira] [Commented] (FLINK-26107) CoordinatorEventsExactlyOnceITCase.test failed on azure
[ https://issues.apache.org/jira/browse/FLINK-26107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17491830#comment-17491830 ] Yun Gao commented on FLINK-26107: - https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=31323=logs=0e7be18f-84f2-53f0-a32d-4a5e4a174679=7c1d86e3-35bd-5fd5-3b7c-30c126a78702=9424 > CoordinatorEventsExactlyOnceITCase.test failed on azure > --- > > Key: FLINK-26107 > URL: https://issues.apache.org/jira/browse/FLINK-26107 > Project: Flink > Issue Type: Bug > Components: Runtime / Coordination >Affects Versions: 1.15.0 >Reporter: Yun Gao >Priority: Major > Labels: test-stability > > {code:java} > Feb 14 02:23:11 [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, > Time elapsed: 4.135 s <<< FAILURE! - in > org.apache.flink.runtime.operators.coordination.CoordinatorEventsExactlyOnceITCase > Feb 14 02:23:11 [ERROR] > org.apache.flink.runtime.operators.coordination.CoordinatorEventsExactlyOnceITCase.test > Time elapsed: 0.72 s <<< ERROR! > Feb 14 02:23:11 org.apache.flink.runtime.client.JobExecutionException: Job > execution failed. 
> Feb 14 02:23:11 at > org.apache.flink.runtime.jobmaster.JobResult.toJobExecutionResult(JobResult.java:144) > Feb 14 02:23:11 at > org.apache.flink.runtime.minicluster.MiniCluster.executeJobBlocking(MiniCluster.java:933) > Feb 14 02:23:11 at > org.apache.flink.runtime.operators.coordination.CoordinatorEventsExactlyOnceITCase.test(CoordinatorEventsExactlyOnceITCase.java:192) > Feb 14 02:23:11 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native > Method) > Feb 14 02:23:11 at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > Feb 14 02:23:11 at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > Feb 14 02:23:11 at java.lang.reflect.Method.invoke(Method.java:498) > Feb 14 02:23:11 at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) > Feb 14 02:23:11 at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > Feb 14 02:23:11 at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) > Feb 14 02:23:11 at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > Feb 14 02:23:11 at > org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45) > Feb 14 02:23:11 at > org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) > Feb 14 02:23:11 at > org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) > Feb 14 02:23:11 at > org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) > Feb 14 02:23:11 at > org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) > Feb 14 02:23:11 at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) > Feb 14 02:23:11 at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) > Feb 14 02:23:11 at > org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) > Feb 14 02:23:11 at > 
org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) > Feb 14 02:23:11 at > org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) > Feb 14 02:23:11 at > org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) > Feb 14 02:23:11 at > org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) > Feb 14 02:23:11 at > org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) > Feb 14 02:23:11 at > org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) > Feb 14 02:23:11 at > org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) > Feb 14 02:23:11 at > org.junit.runners.ParentRunner.run(ParentRunner.java:413) > Feb 14 02:23:11 at org.junit.runner.JUnitCore.run(JUnitCore.java:137) > Feb 14 02:23:11 at org.junit.runner.JUnitCore.run(JUnitCore.java:115) > Feb 14 02:23:11 at > org.junit.vintage.engine.execution.RunnerExecutor.execute(RunnerExecutor.java:42) > Feb 14 02:23:11 at > org.junit.vintage.engine.VintageTestEngine.executeAllChildren(VintageTestEngine.java:80) > Feb 14 02:23:11 at > org.junit.vintage.engine.VintageTestEngine.execute(VintageTestEngine.java:72) > Feb 14 02:23:11 at > org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:107) > Feb 14 02:23:11 at > org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:88) > Feb 14 02:23:11 at >
[jira] [Commented] (FLINK-25825) MySqlCatalogITCase fails on azure
[ https://issues.apache.org/jira/browse/FLINK-25825?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17491829#comment-17491829 ] Yun Gao commented on FLINK-25825: - https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=31323=logs=e9af9cde-9a65-5281-a58e-2c8511d36983=c520d2c3-4d17-51f1-813b-4b0b74a0c307=14906 > MySqlCatalogITCase fails on azure > - > > Key: FLINK-25825 > URL: https://issues.apache.org/jira/browse/FLINK-25825 > Project: Flink > Issue Type: Bug > Components: Connectors / JDBC, Table SQL / API >Affects Versions: 1.15.0 >Reporter: Roman Khachatryan >Assignee: RocMarshal >Priority: Blocker > Labels: pull-request-available, test-stability > Fix For: 1.15.0 > > > https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=30189=logs=e9af9cde-9a65-5281-a58e-2c8511d36983=c520d2c3-4d17-51f1-813b-4b0b74a0c307=13677 > > {code} > 2022-01-26T06:04:42.8019913Z Jan 26 06:04:42 [ERROR] > org.apache.flink.connector.jdbc.catalog.MySqlCatalogITCase.testFullPath Time > elapsed: 2.166 *s <<< FAILURE! 
> 2022-01-26T06:04:42.8025522Z Jan 26 06:04:42 java.lang.AssertionError: > expected: java.util.ArrayList<[+I[1, -1, 1, null, true, null, hello, 2021-0 > 8-04, 2021-08-04T01:54:16, -1, 1, -1.0, 1.0, enum2, -9.1, 9.1, -1, 1, -1, 1, > \{"k1": "v1"}, null, col_longtext, null, -1, 1, col_mediumtext, -99, 9 9, > -1.0, 1.0, set_ele1, -1, 1, col_text, 10:32:34, 2021-08-04T01:54:16, > col_tinytext, -1, 1, null, col_varchar, 2021-08-04T01:54:16.463, 09:33:43, > 2021-08-04T01:54:16.463, null], +I[2, -1, 1, null, true, null, hello, > 2021-08-04, 2021-08-04T01:53:19, -1, 1, -1.0, 1.0, enum2, -9.1, 9.1, -1, 1, > -1, 1, \{"k1": "v1"}, null, col_longtext, null, -1, 1, col_mediumtext, -99, > 99, -1.0, 1.0, set_ele1,set_ele12, -1, 1, col_text, 10:32:34, 2021-08- > 04T01:53:19, col_tinytext, -1, 1, null, col_varchar, 2021-08-04T01:53:19.098, > 09:33:43, 2021-08-04T01:53:19.098, null]]> but was: java.util.ArrayL > ist<[+I[1, -1, 1, null, true, null, hello, 2021-08-04, 2021-08-04T01:54:16, > -1, 1, -1.0, 1.0, enum2, -9.1, 9.1, -1, 1, -1, 1, \{"k1": "v1"}, null, > col_longtext, null, -1, 1, col_mediumtext, -99, 99, -1.0, 1.0, set_ele1, -1, > 1, col_text, 10:32:34, 2021-08-04T01:54:16, col_tinytext, -1, 1, null , > col_varchar, 2021-08-04T01:54:16.463, 09:33:43, 2021-08-04T01:54:16.463, > null], +I[2, -1, 1, null, true, null, hello, 2021-08-04, 2021-08-04T01: > 53:19, -1, 1, -1.0, 1.0, enum2, -9.1, 9.1, -1, 1, -1, 1, \{"k1": "v1"}, null, > col_longtext, null, -1, 1, col_mediumtext, -99, 99, -1.0, 1.0, set_el > e1,set_ele12, -1, 1, col_text, 10:32:34, 2021-08-04T01:53:19, col_tinytext, > -1, 1, null, col_varchar, 2021-08-04T01:53:19.098, 09:33:43, 2021-08-0 > 4T01:53:19.098, null]]> > 2022-01-26T06:04:42.8029336Z Jan 26 06:04:42 at > org.junit.Assert.fail(Assert.java:89) > 2022-01-26T06:04:42.8029824Z Jan 26 06:04:42 at > org.junit.Assert.failNotEquals(Assert.java:835) > 2022-01-26T06:04:42.8030319Z Jan 26 06:04:42 at > org.junit.Assert.assertEquals(Assert.java:120) > 
2022-01-26T06:04:42.8030815Z Jan 26 06:04:42 at > org.junit.Assert.assertEquals(Assert.java:146) > 2022-01-26T06:04:42.8031419Z Jan 26 06:04:42 at > org.apache.flink.connector.jdbc.catalog.MySqlCatalogITCase.testFullPath(MySqlCatalogITCase.java:306) > {code} > > {code} > 2022-01-26T06:04:43.2899378Z Jan 26 06:04:43 [ERROR] Failures: > 2022-01-26T06:04:43.2907942Z Jan 26 06:04:43 [ERROR] > MySqlCatalogITCase.testFullPath:306 expected: java.util.ArrayList<[+I[1, -1, > 1, null, true, > 2022-01-26T06:04:43.2914065Z Jan 26 06:04:43 [ERROR] > MySqlCatalogITCase.testGetTable:253 expected:<( > 2022-01-26T06:04:43.2983567Z Jan 26 06:04:43 [ERROR] > MySqlCatalogITCase.testSelectToInsert:323 expected: > java.util.ArrayList<[+I[1, -1, 1, null, > 2022-01-26T06:04:43.2997373Z Jan 26 06:04:43 [ERROR] > MySqlCatalogITCase.testWithoutCatalog:291 expected: > java.util.ArrayList<[+I[1, -1, 1, null, > 2022-01-26T06:04:43.3010450Z Jan 26 06:04:43 [ERROR] > MySqlCatalogITCase.testWithoutCatalogDB:278 expected: > java.util.ArrayList<[+I[1, -1, 1, nul > {code} > -- This message was sent by Atlassian Jira (v8.20.1#820001)
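Editor's aside (not part of the Jira thread): the failure above is a JUnit `assertEquals` on two `java.util.ArrayList`s, which delegates to `List.equals` and compares element by element in order. A single differing field, or the same rows in a different order, fails the whole comparison even when the printed "expected" and "but was" diffs look nearly identical. A minimal, hypothetical sketch of that behavior (the row strings here are invented, not taken from the test):

```java
import java.util.Arrays;
import java.util.List;

public class ListEqualityDemo {
    public static void main(String[] args) {
        // List.equals compares sizes, then each element in order.
        List<String> expected = Arrays.asList("+I[1, col_text]", "+I[2, col_text]");
        List<String> actualSame = Arrays.asList("+I[1, col_text]", "+I[2, col_text]");
        List<String> actualReordered = Arrays.asList("+I[2, col_text]", "+I[1, col_text]");

        // Equal content in the same order passes.
        if (!expected.equals(actualSame)) {
            throw new AssertionError("identical lists should compare equal");
        }
        // The same rows in a different order fail, one common cause of
        // "expected ... but was ..." diffs that look visually identical.
        if (expected.equals(actualReordered)) {
            throw new AssertionError("ordering differences must not compare equal");
        }
        System.out.println("ok");
    }
}
```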
[jira] [Updated] (FLINK-26108) CoordinatedSourceITCase.testEnumeratorCreationFails failed on azure
[ https://issues.apache.org/jira/browse/FLINK-26108?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yun Gao updated FLINK-26108: Priority: Critical (was: Major) > CoordinatedSourceITCase.testEnumeratorCreationFails failed on azure > --- > > Key: FLINK-26108 > URL: https://issues.apache.org/jira/browse/FLINK-26108 > Project: Flink > Issue Type: Bug > Components: Runtime / Coordination >Affects Versions: 1.15.0 >Reporter: Yun Gao >Priority: Critical > Labels: test-stability > > {code:java} > Feb 14 02:10:03 [ERROR] Tests run: 4, Failures: 0, Errors: 2, Skipped: 0, > Time elapsed: 6.65 s <<< FAILURE! - in > org.apache.flink.connector.base.source.reader.CoordinatedSourceITCase > Feb 14 02:10:03 [ERROR] > org.apache.flink.connector.base.source.reader.CoordinatedSourceITCase.testEnumeratorCreationFails > Time elapsed: 0.181 s <<< ERROR! > Feb 14 02:10:03 org.apache.flink.runtime.client.JobExecutionException: Job > execution failed. > Feb 14 02:10:03 at > org.apache.flink.runtime.jobmaster.JobResult.toJobExecutionResult(JobResult.java:144) > Feb 14 02:10:03 at > org.apache.flink.runtime.minicluster.MiniClusterJobClient.lambda$getJobExecutionResult$3(MiniClusterJobClient.java:141) > Feb 14 02:10:03 at > java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:616) > Feb 14 02:10:03 at > java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:591) > Feb 14 02:10:03 at > java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488) > Feb 14 02:10:03 at > java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975) > Feb 14 02:10:03 at > org.apache.flink.runtime.rpc.akka.AkkaInvocationHandler.lambda$invokeRpc$1(AkkaInvocationHandler.java:259) > Feb 14 02:10:03 at > java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:774) > Feb 14 02:10:03 at > java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:750) > Feb 14 02:10:03 at > 
java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488) > Feb 14 02:10:03 at > java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975) > Feb 14 02:10:03 at > org.apache.flink.util.concurrent.FutureUtils.doForward(FutureUtils.java:1389) > Feb 14 02:10:03 at > org.apache.flink.runtime.concurrent.akka.ClassLoadingUtils.lambda$null$1(ClassLoadingUtils.java:93) > Feb 14 02:10:03 at > org.apache.flink.runtime.concurrent.akka.ClassLoadingUtils.runWithContextClassLoader(ClassLoadingUtils.java:68) > Feb 14 02:10:03 at > org.apache.flink.runtime.concurrent.akka.ClassLoadingUtils.lambda$guardCompletionWithContextClassLoader$2(ClassLoadingUtils.java:92) > Feb 14 02:10:03 at > java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:774) > Feb 14 02:10:03 at > java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:750) > Feb 14 02:10:03 at > java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488) > Feb 14 02:10:03 at > java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975) > Feb 14 02:10:03 at > org.apache.flink.runtime.concurrent.akka.AkkaFutureUtils$1.onComplete(AkkaFutureUtils.java:47) > Feb 14 02:10:03 at akka.dispatch.OnComplete.internal(Future.scala:300) > Feb 14 02:10:03 at akka.dispatch.OnComplete.internal(Future.scala:297) > Feb 14 02:10:03 at > akka.dispatch.japi$CallbackBridge.apply(Future.scala:224) > Feb 14 02:10:03 at > akka.dispatch.japi$CallbackBridge.apply(Future.scala:221) > Feb 14 02:10:03 at > scala.concurrent.impl.CallbackRunnable.run(Promise.scala:60) > Feb 14 02:10:03 at > org.apache.flink.runtime.concurrent.akka.AkkaFutureUtils$DirectExecutionContext.execute(AkkaFutureUtils.java:65) > Feb 14 02:10:03 at > scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:68) > Feb 14 02:10:03 at > scala.concurrent.impl.Promise$DefaultPromise.$anonfun$tryComplete$1(Promise.scala:284) > Feb 14 02:10:03 at > 
scala.concurrent.impl.Promise$DefaultPromise.$anonfun$tryComplete$1$adapted(Promise.scala:284) > Feb 14 02:10:03 at > scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:284) > Feb 14 02:10:03 at > akka.pattern.PromiseActorRef.$bang(AskSupport.scala:621) > Feb 14 02:10:03 at > akka.pattern.PipeToSupport$PipeableFuture$$anonfun$pipeTo$1.applyOrElse(PipeToSupport.scala:24) > Feb 14 02:10:03 at > akka.pattern.PipeToSupport$PipeableFuture$$anonfun$pipeTo$1.applyOrElse(PipeToSupport.scala:23) > Feb 14 02:10:03 at >
[jira] [Created] (FLINK-26108) CoordinatedSourceITCase.testEnumeratorCreationFails failed on azure
Yun Gao created FLINK-26108: --- Summary: CoordinatedSourceITCase.testEnumeratorCreationFails failed on azure Key: FLINK-26108 URL: https://issues.apache.org/jira/browse/FLINK-26108 Project: Flink Issue Type: Bug Components: Runtime / Coordination Affects Versions: 1.15.0 Reporter: Yun Gao {code:java} Feb 14 02:10:03 [ERROR] Tests run: 4, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 6.65 s <<< FAILURE! - in org.apache.flink.connector.base.source.reader.CoordinatedSourceITCase Feb 14 02:10:03 [ERROR] org.apache.flink.connector.base.source.reader.CoordinatedSourceITCase.testEnumeratorCreationFails Time elapsed: 0.181 s <<< ERROR! Feb 14 02:10:03 org.apache.flink.runtime.client.JobExecutionException: Job execution failed. Feb 14 02:10:03 at org.apache.flink.runtime.jobmaster.JobResult.toJobExecutionResult(JobResult.java:144) Feb 14 02:10:03 at org.apache.flink.runtime.minicluster.MiniClusterJobClient.lambda$getJobExecutionResult$3(MiniClusterJobClient.java:141) Feb 14 02:10:03 at java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:616) Feb 14 02:10:03 at java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:591) Feb 14 02:10:03 at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488) Feb 14 02:10:03 at java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975) Feb 14 02:10:03 at org.apache.flink.runtime.rpc.akka.AkkaInvocationHandler.lambda$invokeRpc$1(AkkaInvocationHandler.java:259) Feb 14 02:10:03 at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:774) Feb 14 02:10:03 at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:750) Feb 14 02:10:03 at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488) Feb 14 02:10:03 at java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975) Feb 14 02:10:03 at 
org.apache.flink.util.concurrent.FutureUtils.doForward(FutureUtils.java:1389) Feb 14 02:10:03 at org.apache.flink.runtime.concurrent.akka.ClassLoadingUtils.lambda$null$1(ClassLoadingUtils.java:93) Feb 14 02:10:03 at org.apache.flink.runtime.concurrent.akka.ClassLoadingUtils.runWithContextClassLoader(ClassLoadingUtils.java:68) Feb 14 02:10:03 at org.apache.flink.runtime.concurrent.akka.ClassLoadingUtils.lambda$guardCompletionWithContextClassLoader$2(ClassLoadingUtils.java:92) Feb 14 02:10:03 at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:774) Feb 14 02:10:03 at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:750) Feb 14 02:10:03 at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488) Feb 14 02:10:03 at java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975) Feb 14 02:10:03 at org.apache.flink.runtime.concurrent.akka.AkkaFutureUtils$1.onComplete(AkkaFutureUtils.java:47) Feb 14 02:10:03 at akka.dispatch.OnComplete.internal(Future.scala:300) Feb 14 02:10:03 at akka.dispatch.OnComplete.internal(Future.scala:297) Feb 14 02:10:03 at akka.dispatch.japi$CallbackBridge.apply(Future.scala:224) Feb 14 02:10:03 at akka.dispatch.japi$CallbackBridge.apply(Future.scala:221) Feb 14 02:10:03 at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:60) Feb 14 02:10:03 at org.apache.flink.runtime.concurrent.akka.AkkaFutureUtils$DirectExecutionContext.execute(AkkaFutureUtils.java:65) Feb 14 02:10:03 at scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:68) Feb 14 02:10:03 at scala.concurrent.impl.Promise$DefaultPromise.$anonfun$tryComplete$1(Promise.scala:284) Feb 14 02:10:03 at scala.concurrent.impl.Promise$DefaultPromise.$anonfun$tryComplete$1$adapted(Promise.scala:284) Feb 14 02:10:03 at scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:284) Feb 14 02:10:03 at akka.pattern.PromiseActorRef.$bang(AskSupport.scala:621) Feb 14 
02:10:03 at akka.pattern.PipeToSupport$PipeableFuture$$anonfun$pipeTo$1.applyOrElse(PipeToSupport.scala:24) Feb 14 02:10:03 at akka.pattern.PipeToSupport$PipeableFuture$$anonfun$pipeTo$1.applyOrElse(PipeToSupport.scala:23) Feb 14 02:10:03 at scala.concurrent.Future.$anonfun$andThen$1(Future.scala:532) Feb 14 02:10:03 at scala.concurrent.impl.Promise.liftedTree1$1(Promise.scala:29) Feb 14 02:10:03 at scala.concurrent.impl.Promise.$anonfun$transform$1(Promise.scala:29) {code}
[jira] [Created] (FLINK-26107) CoordinatorEventsExactlyOnceITCase.test failed on azure
Yun Gao created FLINK-26107: --- Summary: CoordinatorEventsExactlyOnceITCase.test failed on azure Key: FLINK-26107 URL: https://issues.apache.org/jira/browse/FLINK-26107 Project: Flink Issue Type: Bug Components: Runtime / Coordination Affects Versions: 1.15.0 Reporter: Yun Gao {code:java} Feb 14 02:23:11 [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 4.135 s <<< FAILURE! - in org.apache.flink.runtime.operators.coordination.CoordinatorEventsExactlyOnceITCase Feb 14 02:23:11 [ERROR] org.apache.flink.runtime.operators.coordination.CoordinatorEventsExactlyOnceITCase.test Time elapsed: 0.72 s <<< ERROR! Feb 14 02:23:11 org.apache.flink.runtime.client.JobExecutionException: Job execution failed. Feb 14 02:23:11 at org.apache.flink.runtime.jobmaster.JobResult.toJobExecutionResult(JobResult.java:144) Feb 14 02:23:11 at org.apache.flink.runtime.minicluster.MiniCluster.executeJobBlocking(MiniCluster.java:933) Feb 14 02:23:11 at org.apache.flink.runtime.operators.coordination.CoordinatorEventsExactlyOnceITCase.test(CoordinatorEventsExactlyOnceITCase.java:192) Feb 14 02:23:11 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) Feb 14 02:23:11 at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) Feb 14 02:23:11 at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) Feb 14 02:23:11 at java.lang.reflect.Method.invoke(Method.java:498) Feb 14 02:23:11 at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) Feb 14 02:23:11 at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) Feb 14 02:23:11 at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) Feb 14 02:23:11 at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) Feb 14 02:23:11 at org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45) Feb 14 02:23:11 at 
org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) Feb 14 02:23:11 at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) Feb 14 02:23:11 at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) Feb 14 02:23:11 at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) Feb 14 02:23:11 at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) Feb 14 02:23:11 at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) Feb 14 02:23:11 at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) Feb 14 02:23:11 at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) Feb 14 02:23:11 at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) Feb 14 02:23:11 at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) Feb 14 02:23:11 at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) Feb 14 02:23:11 at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) Feb 14 02:23:11 at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) Feb 14 02:23:11 at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) Feb 14 02:23:11 at org.junit.runners.ParentRunner.run(ParentRunner.java:413) Feb 14 02:23:11 at org.junit.runner.JUnitCore.run(JUnitCore.java:137) Feb 14 02:23:11 at org.junit.runner.JUnitCore.run(JUnitCore.java:115) Feb 14 02:23:11 at org.junit.vintage.engine.execution.RunnerExecutor.execute(RunnerExecutor.java:42) Feb 14 02:23:11 at org.junit.vintage.engine.VintageTestEngine.executeAllChildren(VintageTestEngine.java:80) Feb 14 02:23:11 at org.junit.vintage.engine.VintageTestEngine.execute(VintageTestEngine.java:72) Feb 14 02:23:11 at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:107) Feb 14 02:23:11 at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:88) Feb 
14 02:23:11 at org.junit.platform.launcher.core.EngineExecutionOrchestrator.lambda$execute$0(EngineExecutionOrchestrator.java:54) {code} https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=31347=logs=0e7be18f-84f2-53f0-a32d-4a5e4a174679=7c1d86e3-35bd-5fd5-3b7c-30c126a78702=9512 -- This message was sent by Atlassian Jira (v8.20.1#820001)
[GitHub] [flink] flinkbot edited a comment on pull request #18400: [FLINK-25198][docs] Add doc about name and description of operator
flinkbot edited a comment on pull request #18400: URL: https://github.com/apache/flink/pull/18400#issuecomment-1016057839 ## CI report: * 0ee72a0732026c046d991d959033995c7baf44e1 Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29680) * b49a58bb711123c9cc370345b864cd529a56f8fe Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31372) * 377f14caf340fcfbba8fa409a23529769bfcff2a UNKNOWN Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run azure` re-run the last Azure build -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Commented] (FLINK-25825) MySqlCatalogITCase fails on azure
[ https://issues.apache.org/jira/browse/FLINK-25825?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17491824#comment-17491824 ] Yun Gao commented on FLINK-25825: - https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=31347=logs=e9af9cde-9a65-5281-a58e-2c8511d36983=c520d2c3-4d17-51f1-813b-4b0b74a0c307=14504 > MySqlCatalogITCase fails on azure > - > > Key: FLINK-25825 > URL: https://issues.apache.org/jira/browse/FLINK-25825 > Project: Flink > Issue Type: Bug > Components: Connectors / JDBC, Table SQL / API >Affects Versions: 1.15.0 >Reporter: Roman Khachatryan >Assignee: RocMarshal >Priority: Blocker > Labels: pull-request-available, test-stability > Fix For: 1.15.0 > > > https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=30189=logs=e9af9cde-9a65-5281-a58e-2c8511d36983=c520d2c3-4d17-51f1-813b-4b0b74a0c307=13677 > > {code} > 2022-01-26T06:04:42.8019913Z Jan 26 06:04:42 [ERROR] > org.apache.flink.connector.jdbc.catalog.MySqlCatalogITCase.testFullPath Time > elapsed: 2.166 s <<< FAILURE! 
> 2022-01-26T06:04:42.8025522Z Jan 26 06:04:42 java.lang.AssertionError: > expected: java.util.ArrayList<[+I[1, -1, 1, null, true, null, hello, 2021-0 > 8-04, 2021-08-04T01:54:16, -1, 1, -1.0, 1.0, enum2, -9.1, 9.1, -1, 1, -1, 1, > \{"k1": "v1"}, null, col_longtext, null, -1, 1, col_mediumtext, -99, 9 9, > -1.0, 1.0, set_ele1, -1, 1, col_text, 10:32:34, 2021-08-04T01:54:16, > col_tinytext, -1, 1, null, col_varchar, 2021-08-04T01:54:16.463, 09:33:43, > 2021-08-04T01:54:16.463, null], +I[2, -1, 1, null, true, null, hello, > 2021-08-04, 2021-08-04T01:53:19, -1, 1, -1.0, 1.0, enum2, -9.1, 9.1, -1, 1, > -1, 1, \{"k1": "v1"}, null, col_longtext, null, -1, 1, col_mediumtext, -99, > 99, -1.0, 1.0, set_ele1,set_ele12, -1, 1, col_text, 10:32:34, 2021-08- > 04T01:53:19, col_tinytext, -1, 1, null, col_varchar, 2021-08-04T01:53:19.098, > 09:33:43, 2021-08-04T01:53:19.098, null]]> but was: java.util.ArrayL > ist<[+I[1, -1, 1, null, true, null, hello, 2021-08-04, 2021-08-04T01:54:16, > -1, 1, -1.0, 1.0, enum2, -9.1, 9.1, -1, 1, -1, 1, \{"k1": "v1"}, null, > col_longtext, null, -1, 1, col_mediumtext, -99, 99, -1.0, 1.0, set_ele1, -1, > 1, col_text, 10:32:34, 2021-08-04T01:54:16, col_tinytext, -1, 1, null , > col_varchar, 2021-08-04T01:54:16.463, 09:33:43, 2021-08-04T01:54:16.463, > null], +I[2, -1, 1, null, true, null, hello, 2021-08-04, 2021-08-04T01: > 53:19, -1, 1, -1.0, 1.0, enum2, -9.1, 9.1, -1, 1, -1, 1, \{"k1": "v1"}, null, > col_longtext, null, -1, 1, col_mediumtext, -99, 99, -1.0, 1.0, set_el > e1,set_ele12, -1, 1, col_text, 10:32:34, 2021-08-04T01:53:19, col_tinytext, > -1, 1, null, col_varchar, 2021-08-04T01:53:19.098, 09:33:43, 2021-08-0 > 4T01:53:19.098, null]]> > 2022-01-26T06:04:42.8029336Z Jan 26 06:04:42 at > org.junit.Assert.fail(Assert.java:89) > 2022-01-26T06:04:42.8029824Z Jan 26 06:04:42 at > org.junit.Assert.failNotEquals(Assert.java:835) > 2022-01-26T06:04:42.8030319Z Jan 26 06:04:42 at > org.junit.Assert.assertEquals(Assert.java:120) > 
2022-01-26T06:04:42.8030815Z Jan 26 06:04:42 at > org.junit.Assert.assertEquals(Assert.java:146) > 2022-01-26T06:04:42.8031419Z Jan 26 06:04:42 at > org.apache.flink.connector.jdbc.catalog.MySqlCatalogITCase.testFullPath(MySqlCatalogITCase.java:306) > {code} > > {code} > 2022-01-26T06:04:43.2899378Z Jan 26 06:04:43 [ERROR] Failures: > 2022-01-26T06:04:43.2907942Z Jan 26 06:04:43 [ERROR] > MySqlCatalogITCase.testFullPath:306 expected: java.util.ArrayList<[+I[1, -1, > 1, null, true, > 2022-01-26T06:04:43.2914065Z Jan 26 06:04:43 [ERROR] > MySqlCatalogITCase.testGetTable:253 expected:<( > 2022-01-26T06:04:43.2983567Z Jan 26 06:04:43 [ERROR] > MySqlCatalogITCase.testSelectToInsert:323 expected: > java.util.ArrayList<[+I[1, -1, 1, null, > 2022-01-26T06:04:43.2997373Z Jan 26 06:04:43 [ERROR] > MySqlCatalogITCase.testWithoutCatalog:291 expected: > java.util.ArrayList<[+I[1, -1, 1, null, > 2022-01-26T06:04:43.3010450Z Jan 26 06:04:43 [ERROR] > MySqlCatalogITCase.testWithoutCatalogDB:278 expected: > java.util.ArrayList<[+I[1, -1, 1, nul > {code} > -- This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Commented] (FLINK-26105) Running HA (hashmap, async) end-to-end test failed on azure
[ https://issues.apache.org/jira/browse/FLINK-26105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17491823#comment-17491823 ] Yun Gao commented on FLINK-26105: - https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=31347=logs=e9d3d34f-3d15-59f4-0e3e-35067d100dfe=f8a6d3eb-38cf-5cca-9a99-d0badeb5fe62=8020 > Running HA (hashmap, async) end-to-end test failed on azure > --- > > Key: FLINK-26105 > URL: https://issues.apache.org/jira/browse/FLINK-26105 > Project: Flink > Issue Type: Bug > Components: Runtime / Coordination >Affects Versions: 1.15.0 >Reporter: Yun Gao >Priority: Critical > Labels: test-stability > > {code:java} > Feb 14 01:31:29 Killed TM @ 255483 > Feb 14 01:31:29 Starting new TM. > Feb 14 01:31:42 Killed TM @ 258722 > Feb 14 01:31:42 Starting new TM. > Feb 14 01:32:00 Checking for non-empty .out files... > Feb 14 01:32:00 No non-empty .out files. > Feb 14 01:32:00 FAILURE: A JM did not take over. > Feb 14 01:32:00 One or more tests FAILED. > Feb 14 01:32:00 Stopping job timeout watchdog (with pid=250820) > Feb 14 01:32:00 Killing JM watchdog @ 252644 > Feb 14 01:32:00 Killing TM watchdog @ 253262 > Feb 14 01:32:00 [FAIL] Test script contains errors. > Feb 14 01:32:00 Checking of logs skipped. > Feb 14 01:32:00 > Feb 14 01:32:00 [FAIL] 'Running HA (hashmap, async) end-to-end test' failed > after 2 minutes and 51 seconds! Test exited with exit code 1 > Feb 14 01:32:00 > 01:32:00 ##[group]Environment Information > Feb 14 01:32:01 Searching for .dump, .dumpstream and related files in > '/home/vsts/work/1/s' > dmesg: read kernel buffer failed: Operation not permitted > Feb 14 01:32:06 Stopping taskexecutor daemon (pid: 259377) on host > fv-az313-602. > Feb 14 01:32:07 Stopping standalonesession daemon (pid: 256528) on host > fv-az313-602. > Feb 14 01:32:08 Stopping zookeeper... > Feb 14 01:32:08 Stopping zookeeper daemon (pid: 251023) on host fv-az313-602. 
> Feb 14 01:32:09 Skipping taskexecutor daemon (pid: 251636), because it is not > running anymore on fv-az313-602. > Feb 14 01:32:09 Skipping taskexecutor daemon (pid: 255483), because it is not > running anymore on fv-az313-602. > Feb 14 01:32:09 Skipping taskexecutor daemon (pid: 258722), because it is not > running anymore on fv-az313-602. > The STDIO streams did not close within 10 seconds of the exit event from > process '/usr/bin/bash'. This may indicate a child process inherited the > STDIO streams and has not yet exited. > ##[error]Bash exited with code '1'. > {code} > https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=31347=logs=e9d3d34f-3d15-59f4-0e3e-35067d100dfe=f8a6d3eb-38cf-5cca-9a99-d0badeb5fe62=8020 -- This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Updated] (FLINK-26106) BoundedSourceITCase failed due to JVM exits with code 239
[ https://issues.apache.org/jira/browse/FLINK-26106?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yun Gao updated FLINK-26106: Labels: test-stability (was: ) > BoundedSourceITCase failed due to JVM exits with code 239 > - > > Key: FLINK-26106 > URL: https://issues.apache.org/jira/browse/FLINK-26106 > Project: Flink > Issue Type: Bug > Components: API / Core >Affects Versions: 1.15.0 >Reporter: Yun Gao >Priority: Critical > Labels: test-stability > > {code:java} > Feb 14 03:52:55 [ERROR] Failed to execute goal > org.apache.maven.plugins:maven-surefire-plugin:3.0.0-M5:test > (integration-tests) on project flink-tests: There are test failures. > Feb 14 03:52:55 [ERROR] > Feb 14 03:52:55 [ERROR] Please refer to > /__w/1/s/flink-tests/target/surefire-reports for the individual test results. > Feb 14 03:52:55 [ERROR] Please refer to dump files (if any exist) > [date].dump, [date]-jvmRun[N].dump and [date].dumpstream. > Feb 14 03:52:55 [ERROR] ExecutionException The forked VM terminated without > properly saying goodbye. VM crash or System.exit called? > Feb 14 03:52:55 [ERROR] Command was /bin/sh -c cd /__w/1/s/flink-tests/target > && /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java -Xms256m -Xmx2048m > -Dmvn.forkNumber=2 -XX:+UseG1GC -Duser.country=US -Duser.language=en -jar > /__w/1/s/flink-tests/target/surefire/surefirebooter4517679582273332440.jar > /__w/1/s/flink-tests/target/surefire 2022-02-14T02-22-54_233-jvmRun2 > surefire8233172397230542561tmp surefire_1343105378974503190tmp > Feb 14 03:52:55 [ERROR] Error occurred in starting fork, check output in log > Feb 14 03:52:55 [ERROR] Process Exit Code: 239 > Feb 14 03:52:55 [ERROR] Crashed tests: > Feb 14 03:52:55 [ERROR] BoundedSourceITCase > Feb 14 03:52:55 [ERROR] > org.apache.maven.surefire.booter.SurefireBooterForkException: > ExecutionException The forked VM terminated without properly saying goodbye. > VM crash or System.exit called? 
> Feb 14 03:52:55 [ERROR] Command was /bin/sh -c cd /__w/1/s/flink-tests/target > && /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java -Xms256m -Xmx2048m > -Dmvn.forkNumber=2 -XX:+UseG1GC -Duser.country=US -Duser.language=en -jar > /__w/1/s/flink-tests/target/surefire/surefirebooter4517679582273332440.jar > /__w/1/s/flink-tests/target/surefire 2022-02-14T02-22-54_233-jvmRun2 > surefire8233172397230542561tmp surefire_1343105378974503190tmp > Feb 14 03:52:55 [ERROR] Error occurred in starting fork, check output in log > Feb 14 03:52:55 [ERROR] Process Exit Code: 239 > Feb 14 03:52:55 [ERROR] Crashed tests: > Feb 14 03:52:55 [ERROR] BoundedSourceITCase > Feb 14 03:52:55 [ERROR] at > org.apache.maven.plugin.surefire.booterclient.ForkStarter.awaitResultsDone(ForkStarter.java:532) > Feb 14 03:52:55 [ERROR] at > org.apache.maven.plugin.surefire.booterclient.ForkStarter.runSuitesForkPerTestSet(ForkStarter.java:479) > Feb 14 03:52:55 [ERROR] at > org.apache.maven.plugin.surefire.booterclient.ForkStarter.run(ForkStarter.java:322) > Feb 14 03:52:55 [ERROR] at > org.apache.maven.plugin.surefire.booterclient.ForkStarter.run(ForkStarter.java:266) > Feb 14 03:52:55 [ERROR] at > org.apache.maven.plugin.surefire.AbstractSurefireMojo.executeProvider(AbstractSurefireMojo.java:1314) > Feb 14 03:52:55 [ERROR] at > org.apache.maven.plugin.surefire.AbstractSurefireMojo.executeAfterPreconditionsChecked(AbstractSurefireMojo.java:1159) > Feb 14 03:52:55 [ERROR] at > org.apache.maven.plugin.surefire.AbstractSurefireMojo.execute(AbstractSurefireMojo.java:932) > Feb 14 03:52:55 [ERROR] at > org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:132) > Feb 14 03:52:55 [ERROR] at > org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:208) > Feb 14 03:52:55 [ERROR] at > org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:153) > Feb 14 03:52:55 [ERROR] at > 
org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:145) > Feb 14 03:52:55 [ERROR] at > org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:116) > Feb 14 03:52:55 [ERROR] at > org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:80) > Feb 14 03:52:55 [ERROR] at > org.apache.maven.lifecycle.internal.builder.singlethreaded.SingleThreadedBuilder.build(SingleThreadedBuilder.java:51) > Feb 14 03:52:55 [ERROR] at > org.apache.maven.lifecycle.internal.LifecycleStarter.execute(LifecycleStarter.java:120) > Feb 14 03:52:55 [ERROR] at > org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:355) > Feb 14 03:52:55 [ERROR] at > org.apache.maven.DefaultMaven.execute(DefaultMaven.java:155) > Feb 14 03:52:55 [ERROR] at >
[jira] [Created] (FLINK-26106) BoundedSourceITCase failed due to JVM exits with code 239
Yun Gao created FLINK-26106: --- Summary: BoundedSourceITCase failed due to JVM exits with code 239 Key: FLINK-26106 URL: https://issues.apache.org/jira/browse/FLINK-26106 Project: Flink Issue Type: Bug Components: API / Core Affects Versions: 1.15.0 Reporter: Yun Gao {code:java} Feb 14 03:52:55 [ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:3.0.0-M5:test (integration-tests) on project flink-tests: There are test failures. Feb 14 03:52:55 [ERROR] Feb 14 03:52:55 [ERROR] Please refer to /__w/1/s/flink-tests/target/surefire-reports for the individual test results. Feb 14 03:52:55 [ERROR] Please refer to dump files (if any exist) [date].dump, [date]-jvmRun[N].dump and [date].dumpstream. Feb 14 03:52:55 [ERROR] ExecutionException The forked VM terminated without properly saying goodbye. VM crash or System.exit called? Feb 14 03:52:55 [ERROR] Command was /bin/sh -c cd /__w/1/s/flink-tests/target && /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java -Xms256m -Xmx2048m -Dmvn.forkNumber=2 -XX:+UseG1GC -Duser.country=US -Duser.language=en -jar /__w/1/s/flink-tests/target/surefire/surefirebooter4517679582273332440.jar /__w/1/s/flink-tests/target/surefire 2022-02-14T02-22-54_233-jvmRun2 surefire8233172397230542561tmp surefire_1343105378974503190tmp Feb 14 03:52:55 [ERROR] Error occurred in starting fork, check output in log Feb 14 03:52:55 [ERROR] Process Exit Code: 239 Feb 14 03:52:55 [ERROR] Crashed tests: Feb 14 03:52:55 [ERROR] BoundedSourceITCase Feb 14 03:52:55 [ERROR] org.apache.maven.surefire.booter.SurefireBooterForkException: ExecutionException The forked VM terminated without properly saying goodbye. VM crash or System.exit called? 
Feb 14 03:52:55 [ERROR] Command was /bin/sh -c cd /__w/1/s/flink-tests/target && /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java -Xms256m -Xmx2048m -Dmvn.forkNumber=2 -XX:+UseG1GC -Duser.country=US -Duser.language=en -jar /__w/1/s/flink-tests/target/surefire/surefirebooter4517679582273332440.jar /__w/1/s/flink-tests/target/surefire 2022-02-14T02-22-54_233-jvmRun2 surefire8233172397230542561tmp surefire_1343105378974503190tmp Feb 14 03:52:55 [ERROR] Error occurred in starting fork, check output in log Feb 14 03:52:55 [ERROR] Process Exit Code: 239 Feb 14 03:52:55 [ERROR] Crashed tests: Feb 14 03:52:55 [ERROR] BoundedSourceITCase Feb 14 03:52:55 [ERROR] at org.apache.maven.plugin.surefire.booterclient.ForkStarter.awaitResultsDone(ForkStarter.java:532) Feb 14 03:52:55 [ERROR] at org.apache.maven.plugin.surefire.booterclient.ForkStarter.runSuitesForkPerTestSet(ForkStarter.java:479) Feb 14 03:52:55 [ERROR] at org.apache.maven.plugin.surefire.booterclient.ForkStarter.run(ForkStarter.java:322) Feb 14 03:52:55 [ERROR] at org.apache.maven.plugin.surefire.booterclient.ForkStarter.run(ForkStarter.java:266) Feb 14 03:52:55 [ERROR] at org.apache.maven.plugin.surefire.AbstractSurefireMojo.executeProvider(AbstractSurefireMojo.java:1314) Feb 14 03:52:55 [ERROR] at org.apache.maven.plugin.surefire.AbstractSurefireMojo.executeAfterPreconditionsChecked(AbstractSurefireMojo.java:1159) Feb 14 03:52:55 [ERROR] at org.apache.maven.plugin.surefire.AbstractSurefireMojo.execute(AbstractSurefireMojo.java:932) Feb 14 03:52:55 [ERROR] at org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:132) Feb 14 03:52:55 [ERROR] at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:208) Feb 14 03:52:55 [ERROR] at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:153) Feb 14 03:52:55 [ERROR] at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:145) Feb 14 03:52:55 [ERROR] at 
org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:116) Feb 14 03:52:55 [ERROR] at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:80) Feb 14 03:52:55 [ERROR] at org.apache.maven.lifecycle.internal.builder.singlethreaded.SingleThreadedBuilder.build(SingleThreadedBuilder.java:51) Feb 14 03:52:55 [ERROR] at org.apache.maven.lifecycle.internal.LifecycleStarter.execute(LifecycleStarter.java:120) Feb 14 03:52:55 [ERROR] at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:355) Feb 14 03:52:55 [ERROR] at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:155) Feb 14 03:52:55 [ERROR] at org.apache.maven.cli.MavenCli.execute(MavenCli.java:584) Feb 14 03:52:55 [ERROR] at org.apache.maven.cli.MavenCli.doMain(MavenCli.java:216) {code} https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=31347=logs=4d4a0d10-fca2-5507-8eed-c07f0bdf4887=7b25afdf-cc6c-566f-5459-359dc2585798=12999 -- This message was sent by Atlassian Jira (v8.20.1#820001)
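Editor's aside (an interpretation, not stated in the log above): a forked test JVM that exits with status 239 is commonly a process that called `System.exit(-17)`, since the parent process observes the low eight bits of the exit status and `-17 & 0xFF == 239`; in Flink, -17 is the code used by its fatal-error exit handler. A quick sanity check of the arithmetic:

```java
public class ExitCodeDemo {
    public static void main(String[] args) {
        // A child JVM that calls System.exit(-17) is reported by the
        // parent process as the unsigned byte -17 & 0xFF, i.e. 239.
        int exitCode = -17;
        int observed = exitCode & 0xFF;
        if (observed != 239) {
            throw new AssertionError("expected 239, got " + observed);
        }
        System.out.println(observed); // prints 239
    }
}
```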
[GitHub] [flink-table-store] JingsongLi commented on a change in pull request #15: [FLINK-25820] Introduce Table Store Flink Source
JingsongLi commented on a change in pull request #15:
URL: https://github.com/apache/flink-table-store/pull/15#discussion_r805559798

## File path: flink-table-store-connector/src/test/java/org/apache/flink/table/store/connector/source/FileStoreSourceSplitGeneratorTest.java

## @@ -0,0 +1,99 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.store.connector.source;
+
+import org.apache.flink.table.store.file.ValueKind;
+import org.apache.flink.table.store.file.manifest.ManifestEntry;
+import org.apache.flink.table.store.file.mergetree.sst.SstFileMeta;
+import org.apache.flink.table.store.file.operation.FileStoreScan;
+import org.apache.flink.table.store.file.stats.FieldStats;
+
+import org.junit.jupiter.api.Test;
+
+import javax.annotation.Nullable;
+
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.List;
+import java.util.stream.Collectors;
+
+import static org.apache.flink.table.store.file.mergetree.compact.CompactManagerTest.row;
+import static org.assertj.core.api.Assertions.assertThat;
+
+/** Test for {@link FileStoreSourceSplitGenerator}. */
+public class FileStoreSourceSplitGeneratorTest {
+
+    @Test
+    public void test() {
+        FileStoreScan.Plan plan =
+                new FileStoreScan.Plan() {
+                    @Nullable
+                    @Override
+                    public Long snapshotId() {
+                        return null;
+                    }
+
+                    @Override
+                    public List<ManifestEntry> files() {
+                        return Arrays.asList(
+                                makeEntry(1, 0, "f0"),
+                                makeEntry(1, 0, "f1"),
+                                makeEntry(1, 1, "f2"),
+                                makeEntry(2, 0, "f3"),
+                                makeEntry(2, 0, "f4"),
+                                makeEntry(2, 0, "f5"),
+                                makeEntry(2, 1, "f6"));
+                    }
+                };
+        List<FileStoreSourceSplit> splits = new FileStoreSourceSplitGenerator().createSplits(plan);
+        assertThat(splits.size()).isEqualTo(4);
+        assertSplit(splits.get(0), "01", 2, 0, Arrays.asList("f3", "f4", "f5"));
+        assertSplit(splits.get(1), "02", 2, 1, Collections.singletonList("f6"));
+        assertSplit(splits.get(2), "03", 1, 0, Arrays.asList("f0", "f1"));
+        assertSplit(splits.get(3), "04", 1, 1, Collections.singletonList("f2"));

Review comment: What do you mean?

-- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
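The expected result in the test above — seven files collapsing into four splits — follows from grouping the scanned entries first by partition and then by bucket, one split per (partition, bucket) pair. The sketch below illustrates just that grouping step in isolation; `Entry` is a simplified stand-in for `ManifestEntry` (ints instead of partition rows, a file name instead of file metadata), not the actual table-store type.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

/** Simplified stand-in for ManifestEntry: partition and bucket as ints, file as a name. */
class Entry {
    final int partition;
    final int bucket;
    final String file;

    Entry(int partition, int bucket, String file) {
        this.partition = partition;
        this.bucket = bucket;
        this.file = file;
    }

    /** Groups entries by partition, then by bucket; each inner file list becomes one split. */
    static Map<Integer, Map<Integer, List<String>>> groupByPartFiles(List<Entry> entries) {
        Map<Integer, Map<Integer, List<String>>> grouped = new LinkedHashMap<>();
        for (Entry e : entries) {
            grouped.computeIfAbsent(e.partition, p -> new LinkedHashMap<>())
                    .computeIfAbsent(e.bucket, b -> new ArrayList<>())
                    .add(e.file);
        }
        return grouped;
    }
}
```

Feeding it the seven entries from the test yields four inner lists, matching the four expected splits (the split ordering in the real generator depends on the plan's own grouping, which this sketch does not reproduce).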
[GitHub] [flink-table-store] JingsongLi commented on a change in pull request #15: [FLINK-25820] Introduce Table Store Flink Source
JingsongLi commented on a change in pull request #15:
URL: https://github.com/apache/flink-table-store/pull/15#discussion_r805559171

## File path: flink-table-store-connector/src/main/java/org/apache/flink/table/store/connector/source/FileStoreSourceSplitReader.java

## @@ -0,0 +1,202 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.store.connector.source;
+
+import org.apache.flink.connector.base.source.reader.RecordsWithSplitIds;
+import org.apache.flink.connector.base.source.reader.splitreader.SplitReader;
+import org.apache.flink.connector.base.source.reader.splitreader.SplitsAddition;
+import org.apache.flink.connector.base.source.reader.splitreader.SplitsChange;
+import org.apache.flink.connector.file.src.impl.FileRecords;
+import org.apache.flink.connector.file.src.reader.BulkFormat;
+import org.apache.flink.connector.file.src.util.MutableRecordAndPosition;
+import org.apache.flink.connector.file.src.util.Pool;
+import org.apache.flink.connector.file.src.util.RecordAndPosition;
+import org.apache.flink.table.data.RowData;
+import org.apache.flink.table.store.file.KeyValue;
+import org.apache.flink.table.store.file.operation.FileStoreRead;
+import org.apache.flink.table.store.file.utils.RecordReader;
+
+import javax.annotation.Nullable;
+
+import java.io.IOException;
+import java.util.LinkedList;
+import java.util.Queue;
+
+/** The {@link SplitReader} implementation for the file store source. */
+public class FileStoreSourceSplitReader
+        implements SplitReader<RecordAndPosition<RowData>, FileStoreSourceSplit> {
+
+    private final FileStoreRead fileStoreRead;
+    private final boolean keyAsRecord;
+
+    private final Queue<FileStoreSourceSplit> splits;
+
+    private final Pool<FileStoreRecordIterator> pool;
+
+    @Nullable private RecordReader currentReader;
+    @Nullable private String currentSplitId;
+    private long currentNumRead;
+    private RecordReader.RecordIterator currentFirstBatch;
+
+    public FileStoreSourceSplitReader(FileStoreRead fileStoreRead, boolean keyAsRecord) {
+        this.fileStoreRead = fileStoreRead;
+        this.keyAsRecord = keyAsRecord;
+        this.splits = new LinkedList<>();
+        this.pool = new Pool<>(1);
+        this.pool.add(new FileStoreRecordIterator());
+    }
+
+    @Override
+    public RecordsWithSplitIds<RecordAndPosition<RowData>> fetch() throws IOException {
+        checkSplitOrStartNext();
+
+        // pool first, avoid thread safety issues

Review comment: `pool first, pool size is 1, the underlying implementation does not allow multiple batches to be read at the same time`

-- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
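The review comment above clarifies why the pool is sized 1: the underlying reader does not allow multiple batches in flight at the same time, so a single pooled iterator forces the consumer to recycle batch N before batch N+1 can be produced. The sketch below illustrates that handoff pattern with a blocking queue; `SingleSlotPool` is a simplified stand-in, not Flink's `Pool` class.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

/** Illustrative single-slot pool: at most one batch object is in flight at a time. */
class SingleSlotPool<T> {
    private final BlockingQueue<T> slot = new ArrayBlockingQueue<>(1);

    SingleSlotPool(T initial) {
        slot.add(initial);
    }

    /** Blocks until the previously handed-out batch has been recycled. */
    T pollEntry() {
        try {
            return slot.take();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            throw new RuntimeException("Interrupted while waiting for a free batch", e);
        }
    }

    /** Returns a batch to the pool, making the next fetch possible. */
    void recycle(T entry) {
        if (!slot.offer(entry)) {
            throw new IllegalStateException("Pool already full: entry recycled twice?");
        }
    }
}
```

In a `fetch()` loop, the reader would call `pollEntry()` before producing a batch, so backpressure from a slow consumer naturally pauses reading instead of requiring explicit synchronization.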
[GitHub] [flink] flinkbot edited a comment on pull request #18656: [FLINK-25249][connector/kafka] Reimplement KafkaTestEnvironment with KafkaContainer
flinkbot edited a comment on pull request #18656: URL: https://github.com/apache/flink/pull/18656#issuecomment-1032337760 ## CI report: * e4f165728f6058ed0a6e1b910ed347e92f2747d9 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31202) * 026018918ca60db860e0a0dd6454ef04a1f1861e Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31358) * fefd1a1e95647640cafbc29b333da5ed196c3e1a Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31376) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run azure` re-run the last Azure build -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot edited a comment on pull request #18516: [FLINK-25288][tests] add savepoint and metric test cases in source suite of connector testframe
flinkbot edited a comment on pull request #18516: URL: https://github.com/apache/flink/pull/18516#issuecomment-1022021934 ## CI report: * 8e38d73d82dc9ba396d483e9257e4244cf72ad0e Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31353) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run azure` re-run the last Azure build -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot edited a comment on pull request #18400: [FLINK-25198][docs] Add doc about name and description of operator
flinkbot edited a comment on pull request #18400: URL: https://github.com/apache/flink/pull/18400#issuecomment-1016057839 ## CI report: * 0ee72a0732026c046d991d959033995c7baf44e1 Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29680) * b49a58bb711123c9cc370345b864cd529a56f8fe Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31372) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run azure` re-run the last Azure build -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink-table-store] JingsongLi commented on a change in pull request #15: [FLINK-25820] Introduce Table Store Flink Source
JingsongLi commented on a change in pull request #15:
URL: https://github.com/apache/flink-table-store/pull/15#discussion_r805557046

## File path: flink-table-store-connector/src/main/java/org/apache/flink/table/store/connector/source/FileStoreSourceSplitGenerator.java

## @@ -0,0 +1,75 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.store.connector.source;
+
+import org.apache.flink.table.store.file.operation.FileStoreScan;
+
+import java.util.List;
+import java.util.stream.Collectors;
+
+/**
+ * The {@code FileStoreSplitGenerator}'s task is to plan all files to be read and to split them into
+ * a set of {@link FileStoreSourceSplit}.
+ */
+public class FileStoreSourceSplitGenerator {
+
+    /**
+     * The current Id as a mutable string representation. This covers more values than the integer
+     * value range, so we should never overflow.
+     */
+    private final char[] currentId = "00".toCharArray();
+
+    public List<FileStoreSourceSplit> createSplits(FileStoreScan scan) {
+        return createSplits(scan.plan());
+    }
+
+    public List<FileStoreSourceSplit> createSplits(FileStoreScan.Plan plan) {
+        return plan.groupByPartFiles().entrySet().stream()
+                .flatMap(
+                        pe ->
+                                pe.getValue().entrySet().stream()
+                                        .map(
+                                                be ->
+                                                        new FileStoreSourceSplit(
+                                                                getNextId(),
+                                                                pe.getKey(),
+                                                                be.getKey(),
+                                                                be.getValue())))
+                .collect(Collectors.toList());
+    }
+
+    protected final String getNextId() {
+        // because we just increment numbers, we increment the char representation directly,
+        // rather than incrementing an integer and converting it to a string representation
+        // every time again (requires quite some expensive conversion logic).
+        incrementCharArrayByOne(currentId, currentId.length - 1);
+        return new String(currentId);
+    }
+
+    private static void incrementCharArrayByOne(char[] array, int pos) {
+        char c = array[pos];
+        c++;
+
+        if (c > '9') {
+            c = '0';
+            incrementCharArrayByOne(array, pos - 1);
+        }
+        array[pos] = c;
+    }

Review comment: Just throw a readable exception is OK... There are too many splits...

-- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
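The generator above derives split IDs by incrementing a fixed-width char array in place, and the review suggests failing with a readable exception once that ID space is exhausted rather than silently wrapping around. A standalone sketch of that approach (the class and constructor here are illustrative, not the actual table-store code):

```java
import java.util.Arrays;

/** Illustrative fixed-width numeric ID generator that fails loudly on overflow. */
class SplitIdGenerator {
    private final char[] currentId;

    SplitIdGenerator(int width) {
        currentId = new char[width];
        Arrays.fill(currentId, '0');
    }

    /** Returns the next ID, e.g. "01", "02", ... for width 2. */
    String next() {
        increment(currentId.length - 1);
        return new String(currentId);
    }

    private void increment(int pos) {
        if (pos < 0) {
            // Readable failure instead of silently wrapping back to "00...0".
            throw new IllegalStateException(
                    "Too many splits: exceeded " + currentId.length + "-digit ID space");
        }
        if (++currentId[pos] > '9') {
            currentId[pos] = '0';
            increment(pos - 1);
        }
    }
}
```

Incrementing the char array directly keeps the hot path allocation-free (no int-to-string formatting per split), while the `pos < 0` guard turns overflow into an actionable error message instead of duplicate IDs.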
[jira] [Updated] (FLINK-26105) Running HA (hashmap, async) end-to-end test failed on azure
[ https://issues.apache.org/jira/browse/FLINK-26105?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yun Gao updated FLINK-26105: Priority: Critical (was: Major) > Running HA (hashmap, async) end-to-end test failed on azure > --- > > Key: FLINK-26105 > URL: https://issues.apache.org/jira/browse/FLINK-26105 > Project: Flink > Issue Type: Bug > Components: Runtime / Coordination >Affects Versions: 1.15.0 >Reporter: Yun Gao >Priority: Critical > Labels: test-stability > > {code:java} > Feb 14 01:31:29 Killed TM @ 255483 > Feb 14 01:31:29 Starting new TM. > Feb 14 01:31:42 Killed TM @ 258722 > Feb 14 01:31:42 Starting new TM. > Feb 14 01:32:00 Checking for non-empty .out files... > Feb 14 01:32:00 No non-empty .out files. > Feb 14 01:32:00 FAILURE: A JM did not take over. > Feb 14 01:32:00 One or more tests FAILED. > Feb 14 01:32:00 Stopping job timeout watchdog (with pid=250820) > Feb 14 01:32:00 Killing JM watchdog @ 252644 > Feb 14 01:32:00 Killing TM watchdog @ 253262 > Feb 14 01:32:00 [FAIL] Test script contains errors. > Feb 14 01:32:00 Checking of logs skipped. > Feb 14 01:32:00 > Feb 14 01:32:00 [FAIL] 'Running HA (hashmap, async) end-to-end test' failed > after 2 minutes and 51 seconds! Test exited with exit code 1 > Feb 14 01:32:00 > 01:32:00 ##[group]Environment Information > Feb 14 01:32:01 Searching for .dump, .dumpstream and related files in > '/home/vsts/work/1/s' > dmesg: read kernel buffer failed: Operation not permitted > Feb 14 01:32:06 Stopping taskexecutor daemon (pid: 259377) on host > fv-az313-602. > Feb 14 01:32:07 Stopping standalonesession daemon (pid: 256528) on host > fv-az313-602. > Feb 14 01:32:08 Stopping zookeeper... > Feb 14 01:32:08 Stopping zookeeper daemon (pid: 251023) on host fv-az313-602. > Feb 14 01:32:09 Skipping taskexecutor daemon (pid: 251636), because it is not > running anymore on fv-az313-602. 
> Feb 14 01:32:09 Skipping taskexecutor daemon (pid: 255483), because it is not > running anymore on fv-az313-602. > Feb 14 01:32:09 Skipping taskexecutor daemon (pid: 258722), because it is not > running anymore on fv-az313-602. > The STDIO streams did not close within 10 seconds of the exit event from > process '/usr/bin/bash'. This may indicate a child process inherited the > STDIO streams and has not yet exited. > ##[error]Bash exited with code '1'. > {code} > https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=31347=logs=e9d3d34f-3d15-59f4-0e3e-35067d100dfe=f8a6d3eb-38cf-5cca-9a99-d0badeb5fe62=8020 -- This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Created] (FLINK-26105) Running HA (hashmap, async) end-to-end test failed on azure
Yun Gao created FLINK-26105: --- Summary: Running HA (hashmap, async) end-to-end test failed on azure Key: FLINK-26105 URL: https://issues.apache.org/jira/browse/FLINK-26105 Project: Flink Issue Type: Bug Components: Runtime / Coordination Affects Versions: 1.15.0 Reporter: Yun Gao {code:java} Feb 14 01:31:29 Killed TM @ 255483 Feb 14 01:31:29 Starting new TM. Feb 14 01:31:42 Killed TM @ 258722 Feb 14 01:31:42 Starting new TM. Feb 14 01:32:00 Checking for non-empty .out files... Feb 14 01:32:00 No non-empty .out files. Feb 14 01:32:00 FAILURE: A JM did not take over. Feb 14 01:32:00 One or more tests FAILED. Feb 14 01:32:00 Stopping job timeout watchdog (with pid=250820) Feb 14 01:32:00 Killing JM watchdog @ 252644 Feb 14 01:32:00 Killing TM watchdog @ 253262 Feb 14 01:32:00 [FAIL] Test script contains errors. Feb 14 01:32:00 Checking of logs skipped. Feb 14 01:32:00 Feb 14 01:32:00 [FAIL] 'Running HA (hashmap, async) end-to-end test' failed after 2 minutes and 51 seconds! Test exited with exit code 1 Feb 14 01:32:00 01:32:00 ##[group]Environment Information Feb 14 01:32:01 Searching for .dump, .dumpstream and related files in '/home/vsts/work/1/s' dmesg: read kernel buffer failed: Operation not permitted Feb 14 01:32:06 Stopping taskexecutor daemon (pid: 259377) on host fv-az313-602. Feb 14 01:32:07 Stopping standalonesession daemon (pid: 256528) on host fv-az313-602. Feb 14 01:32:08 Stopping zookeeper... Feb 14 01:32:08 Stopping zookeeper daemon (pid: 251023) on host fv-az313-602. Feb 14 01:32:09 Skipping taskexecutor daemon (pid: 251636), because it is not running anymore on fv-az313-602. Feb 14 01:32:09 Skipping taskexecutor daemon (pid: 255483), because it is not running anymore on fv-az313-602. Feb 14 01:32:09 Skipping taskexecutor daemon (pid: 258722), because it is not running anymore on fv-az313-602. The STDIO streams did not close within 10 seconds of the exit event from process '/usr/bin/bash'. 
This may indicate a child process inherited the STDIO streams and has not yet exited. ##[error]Bash exited with code '1'. {code} https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=31347=logs=e9d3d34f-3d15-59f4-0e3e-35067d100dfe=f8a6d3eb-38cf-5cca-9a99-d0badeb5fe62=8020 -- This message was sent by Atlassian Jira (v8.20.1#820001)
[GitHub] [flink] flinkbot edited a comment on pull request #18656: [FLINK-25249][connector/kafka] Reimplement KafkaTestEnvironment with KafkaContainer
flinkbot edited a comment on pull request #18656: URL: https://github.com/apache/flink/pull/18656#issuecomment-1032337760 ## CI report: * e4f165728f6058ed0a6e1b910ed347e92f2747d9 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31202) * 026018918ca60db860e0a0dd6454ef04a1f1861e Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31358) * fefd1a1e95647640cafbc29b333da5ed196c3e1a UNKNOWN Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run azure` re-run the last Azure build -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot edited a comment on pull request #18496: [FLINK-25289][tests] add sink test suite in connector testframe
flinkbot edited a comment on pull request #18496: URL: https://github.com/apache/flink/pull/18496#issuecomment-1020867632 ## CI report: * 0034fb25f7fbbbcf302fb18626d7983f32732ca5 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31339) * b8513c81bd9bc1e30efa4ea1fae35d30fd33472c Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31375) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run azure` re-run the last Azure build -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot edited a comment on pull request #18400: [FLINK-25198][docs] Add doc about name and description of operator
flinkbot edited a comment on pull request #18400: URL: https://github.com/apache/flink/pull/18400#issuecomment-1016057839 ## CI report: * 0ee72a0732026c046d991d959033995c7baf44e1 Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29680) * b49a58bb711123c9cc370345b864cd529a56f8fe Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31372) * 377f14caf340fcfbba8fa409a23529769bfcff2a UNKNOWN Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run azure` re-run the last Azure build -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] twalthr commented on pull request #18710: [FLINK-26055][table][annotations] Modified CompiledPlan#getFlinkVersion return type to FlinkVersion
twalthr commented on pull request #18710: URL: https://github.com/apache/flink/pull/18710#issuecomment-1038728000 @slinkydeveloper It seems the build is still red. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot edited a comment on pull request #18496: [FLINK-25289][tests] add sink test suite in connector testframe
flinkbot edited a comment on pull request #18496: URL: https://github.com/apache/flink/pull/18496#issuecomment-1020867632 ## CI report: * 0034fb25f7fbbbcf302fb18626d7983f32732ca5 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31339) * b8513c81bd9bc1e30efa4ea1fae35d30fd33472c UNKNOWN Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run azure` re-run the last Azure build -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] MrWhiteSike edited a comment on pull request #18655: [FLINK-25799] [docs] Translate table/filesystem.md page into Chinese.
MrWhiteSike edited a comment on pull request #18655: URL: https://github.com/apache/flink/pull/18655#issuecomment-1032274801 Hi [RocMarshal](https://github.com/RocMarshal) [@wuchong](https://github.com/wuchong), could you help review this when you are available? It would be much appreciated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Created] (FLINK-26104) KeyError: 'type_info' in PyFlink test
Huang Xingbo created FLINK-26104:
Summary: KeyError: 'type_info' in PyFlink test
Key: FLINK-26104
URL: https://issues.apache.org/jira/browse/FLINK-26104
Project: Flink
Issue Type: Bug
Components: API / Python
Affects Versions: 1.14.3
Reporter: Huang Xingbo

{code:java}
2022-02-14T04:33:10.9891373Z Feb 14 04:33:10 E Caused by: java.lang.RuntimeException: Failed to create stage bundle factory! INFO:root:Initializing Python harness: /__w/1/s/flink-python/pyflink/fn_execution/beam/beam_boot.py --id=103-1 --provision_endpoint=localhost:46669
2022-02-14T04:33:10.9892470Z Feb 14 04:33:10 E INFO:root:Starting up Python harness in a standalone process.
2022-02-14T04:33:10.9893079Z Feb 14 04:33:10 E Traceback (most recent call last):
2022-02-14T04:33:10.9894030Z Feb 14 04:33:10 E File "/__w/1/s/flink-python/dev/.conda/lib/python3.7/runpy.py", line 193, in _run_module_as_main
2022-02-14T04:33:10.9894791Z Feb 14 04:33:10 E "__main__", mod_spec)
2022-02-14T04:33:10.9895653Z Feb 14 04:33:10 E File "/__w/1/s/flink-python/dev/.conda/lib/python3.7/runpy.py", line 85, in _run_code
2022-02-14T04:33:10.9896395Z Feb 14 04:33:10 E exec(code, run_globals)
2022-02-14T04:33:10.9904913Z Feb 14 04:33:10 E File "/__w/1/s/flink-python/pyflink/fn_execution/beam/beam_boot.py", line 116, in
2022-02-14T04:33:10.9930244Z Feb 14 04:33:10 E from pyflink.fn_execution.beam import beam_sdk_worker_main
2022-02-14T04:33:10.9931563Z Feb 14 04:33:10 E File "/__w/1/s/flink-python/pyflink/fn_execution/beam/beam_sdk_worker_main.py", line 21, in
2022-02-14T04:33:10.9932630Z Feb 14 04:33:10 E import pyflink.fn_execution.beam.beam_operations # noqa # pylint: disable=unused-import
2022-02-14T04:33:10.9933754Z Feb 14 04:33:10 E File "/__w/1/s/flink-python/pyflink/fn_execution/beam/beam_operations.py", line 23, in
2022-02-14T04:33:10.9934415Z Feb 14 04:33:10 E from pyflink.fn_execution import flink_fn_execution_pb2
2022-02-14T04:33:10.9935335Z Feb 14 04:33:10 E File "/__w/1/s/flink-python/pyflink/fn_execution/flink_fn_execution_pb2.py", line 2581, in
2022-02-14T04:33:10.9936378Z Feb 14 04:33:10 E _SCHEMA_FIELDTYPE.fields_by_name['time_info'].containing_oneof = _SCHEMA_FIELDTYPE.oneofs_by_name['type_info']
2022-02-14T04:33:10.9946519Z Feb 14 04:33:10 E KeyError: 'type_info'
2022-02-14T04:33:10.9947110Z Feb 14 04:33:10 E
2022-02-14T04:33:10.9947911Z Feb 14 04:33:10 E at org.apache.flink.streaming.api.runners.python.beam.BeamPythonFunctionRunner.createStageBundleFactory(BeamPythonFunctionRunner.java:566)
2022-02-14T04:33:10.9949048Z Feb 14 04:33:10 E at org.apache.flink.streaming.api.runners.python.beam.BeamPythonFunctionRunner.open(BeamPythonFunctionRunner.java:255)
2022-02-14T04:33:10.9950162Z Feb 14 04:33:10 E at org.apache.flink.streaming.api.operators.python.AbstractPythonFunctionOperator.open(AbstractPythonFunctionOperator.java:131)
2022-02-14T04:33:10.9951344Z Feb 14 04:33:10 E at org.apache.flink.streaming.api.operators.python.AbstractOneInputPythonFunctionOperator.open(AbstractOneInputPythonFunctionOperator.java:116)
2022-02-14T04:33:10.9952487Z Feb 14 04:33:10 E at org.apache.flink.streaming.api.operators.python.PythonKeyedProcessOperator.open(PythonKeyedProcessOperator.java:121)
2022-02-14T04:33:10.9953561Z Feb 14 04:33:10 E at org.apache.flink.streaming.runtime.tasks.RegularOperatorChain.initializeStateAndOpenOperators(RegularOperatorChain.java:110)
2022-02-14T04:33:10.9954565Z Feb 14 04:33:10 E at org.apache.flink.streaming.runtime.tasks.StreamTask.restoreGates(StreamTask.java:711)
2022-02-14T04:33:10.9955522Z Feb 14 04:33:10 E at org.apache.flink.streaming.runtime.tasks.StreamTaskActionExecutor$1.call(StreamTaskActionExecutor.java:55)
2022-02-14T04:33:10.9956492Z Feb 14 04:33:10 E at org.apache.flink.streaming.runtime.tasks.StreamTask.restoreInternal(StreamTask.java:687)
2022-02-14T04:33:10.9957378Z Feb 14 04:33:10 E at org.apache.flink.streaming.runtime.tasks.StreamTask.restore(StreamTask.java:654)
2022-02-14T04:33:10.9958252Z Feb 14 04:33:10 E at org.apache.flink.runtime.taskmanager.Task.runWithSystemExitMonitoring(Task.java:958)
2022-02-14T04:33:10.9959108Z Feb 14 04:33:10 E at
[GitHub] [flink] flinkbot edited a comment on pull request #18400: [FLINK-25198][docs] Add doc about name and description of operator
flinkbot edited a comment on pull request #18400: URL: https://github.com/apache/flink/pull/18400#issuecomment-1016057839 ## CI report: * 0ee72a0732026c046d991d959033995c7baf44e1 Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29680) * b49a58bb711123c9cc370345b864cd529a56f8fe Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31372) * 377f14caf340fcfbba8fa409a23529769bfcff2a UNKNOWN Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run azure` re-run the last Azure build -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot edited a comment on pull request #18745: [FLINK-24607] Make OperatorCoordinator closing sequence more robust.
flinkbot edited a comment on pull request #18745: URL: https://github.com/apache/flink/pull/18745#issuecomment-1038713655 ## CI report: * 1cb9f56136e9aae7583737bb17428c3c72ce0805 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31374) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run azure` re-run the last Azure build -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot edited a comment on pull request #18400: [FLINK-25198][docs] Add doc about name and description of operator
flinkbot edited a comment on pull request #18400: URL: https://github.com/apache/flink/pull/18400#issuecomment-1016057839 ## CI report: * 0ee72a0732026c046d991d959033995c7baf44e1 Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29680) * b49a58bb711123c9cc370345b864cd529a56f8fe Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31372) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run azure` re-run the last Azure build -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] wuchong commented on pull request #18745: [FLINK-24607] Make OperatorCoordinator closing sequence more robust.
wuchong commented on pull request #18745: URL: https://github.com/apache/flink/pull/18745#issuecomment-1038713848 @becketqin, do you want this PR to get merged in the 1.15 release? -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot commented on pull request #18745: [FLINK-24607] Make OperatorCoordinator closing sequence more robust.
flinkbot commented on pull request #18745: URL: https://github.com/apache/flink/pull/18745#issuecomment-1038713655 ## CI report: * 1cb9f56136e9aae7583737bb17428c3c72ce0805 UNKNOWN Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run azure` re-run the last Azure build -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] wuchong commented on pull request #18745: [FLINK-24607] Make OperatorCoordinator closing sequence more robust.
wuchong commented on pull request #18745: URL: https://github.com/apache/flink/pull/18745#issuecomment-1038713270 cc @leonardBang -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot commented on pull request #18745: [FLINK-24607] Make OperatorCoordinator closing sequence more robust.
flinkbot commented on pull request #18745: URL: https://github.com/apache/flink/pull/18745#issuecomment-1038712533 Thanks a lot for your contribution to the Apache Flink project. I'm the @flinkbot. I help the community to review your pull request. We will use this comment to track the progress of the review. ## Automated Checks Last check on commit 1cb9f56136e9aae7583737bb17428c3c72ce0805 (Mon Feb 14 06:48:53 UTC 2022) **Warnings:** * No documentation files were touched! Remember to keep the Flink docs up to date! Mention the bot in a comment to re-run the automated checks. ## Review Progress * ❓ 1. The [description] looks good. * ❓ 2. There is [consensus] that the contribution should go into Flink. * ❓ 3. Needs [attention] from. * ❓ 4. The change fits into the overall [architecture]. * ❓ 5. Overall code [quality] is good. Please see the [Pull Request Review Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full explanation of the review process. The Bot is tracking the review progress through labels. Labels are applied according to the order of the review items. For consensus, approval by a Flink committer or PMC member is required. Bot commands The @flinkbot bot supports the following commands: - `@flinkbot approve description` to approve one or more aspects (aspects: `description`, `consensus`, `architecture` and `quality`) - `@flinkbot approve all` to approve all aspects - `@flinkbot approve-until architecture` to approve everything until `architecture` - `@flinkbot attention @username1 [@username2 ..]` to require somebody's attention - `@flinkbot disapprove architecture` to remove an approval you gave earlier -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] becketqin commented on pull request #18745: [FLINK-24607] Make OperatorCoordinator closing sequence more robust.
becketqin commented on pull request #18745: URL: https://github.com/apache/flink/pull/18745#issuecomment-1038710684 Ping @wuchong @pnowojski for review. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Updated] (FLINK-24607) SourceCoordinator may miss to close SplitEnumerator when failover frequently
[ https://issues.apache.org/jira/browse/FLINK-24607?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated FLINK-24607: --- Labels: pull-request-available (was: ) > SourceCoordinator may miss to close SplitEnumerator when failover frequently > > > Key: FLINK-24607 > URL: https://issues.apache.org/jira/browse/FLINK-24607 > Project: Flink > Issue Type: Bug > Components: Runtime / Coordination > Affects Versions: 1.13.3 > Reporter: Jark Wu > Assignee: Jiangjie Qin > Priority: Critical > Labels: pull-request-available > Fix For: 1.15.0, 1.13.6, 1.14.4 > > Attachments: jobmanager.log > > > We are having a connection leak problem when using the mysql-cdc [1] source. From the JM log we > observed that many enumerators are not closed.
> {code}
> ➜ test123 cat jobmanager.log | grep "SourceCoordinator \[\] - Restoring SplitEnumerator" | wc -l
> 264
> ➜ test123 cat jobmanager.log | grep "SourceCoordinator \[\] - Starting split enumerator" | wc -l
> 264
> ➜ test123 cat jobmanager.log | grep "MySqlSourceEnumerator \[\] - Starting enumerator" | wc -l
> 263
> ➜ test123 cat jobmanager.log | grep "SourceCoordinator \[\] - Closing SourceCoordinator" | wc -l
> 264
> ➜ test123 cat jobmanager.log | grep "MySqlSourceEnumerator \[\] - Closing enumerator" | wc -l
> 195
> {code}
> We added a "Closing enumerator" log in {{MySqlSourceEnumerator#close()}} and a "Starting enumerator" log in {{MySqlSourceEnumerator#start()}}. From the above result you can see that the SourceCoordinator is restored and closed 264 times, while the split enumerator is started 264 times but closed only 195 times. It seems that {{SourceCoordinator}} fails to close the enumerator when the job fails over frequently. > I also went through the code of {{SourceCoordinator}} and found a suspicious point: > the {{started}} flag and the {{enumerator}} are assigned in the main thread, whereas {{SourceCoordinator#close()}} is executed asynchronously by > {{DeferrableCoordinator#closeAsync}}.
That means the close method checks > the {{started}} and {{enumerator}} variables asynchronously. Is there a concurrency > problem here that may lead to a dirty read and cause the close of the > {{enumerator}} to be missed? > I'm still not sure, because it's hard to reproduce locally, and we can't > deploy a custom Flink version to the production env. > [1]: https://github.com/ververica/flink-cdc-connectors -- This message was sent by Atlassian Jira (v8.20.1#820001)
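The suspected visibility race above (fields assigned on the coordinator's main thread, then read by an asynchronous close) can be sketched outside Flink. Everything below is illustrative Python, not the actual `SourceCoordinator` code: serializing `start()` and `close()` on the same single-threaded executor guarantees the closing step observes the enumerator that was started.

```python
from concurrent.futures import ThreadPoolExecutor


class FakeEnumerator:
    """Stand-in for a SplitEnumerator in this sketch."""

    def __init__(self):
        self.closed = False

    def close(self):
        self.closed = True


class Coordinator:
    """Illustrative only: start() and close() run on one single-threaded
    executor, so close() always sees the enumerator start() published and
    the dirty read suspected in the report cannot occur."""

    def __init__(self):
        # The single worker thread plays the role of the coordinator thread.
        self._executor = ThreadPoolExecutor(max_workers=1)
        self._enumerator = None
        self._started = False

    def start(self):
        def do_start():
            self._enumerator = FakeEnumerator()
            self._started = True
        self._executor.submit(do_start).result()

    def close_async(self):
        # Submitting close to the same executor (instead of closing from an
        # arbitrary thread) serializes it after start().
        def do_close():
            if self._started and self._enumerator is not None:
                self._enumerator.close()
        return self._executor.submit(do_close)


coord = Coordinator()
coord.start()
coord.close_async().result()
coord._executor.shutdown()
assert coord._enumerator.closed  # the enumerator is reliably closed
```

If close ran instead on an unrelated thread without synchronization, the reads of `started` and `enumerator` would have no ordering guarantee relative to the writes, which is exactly the kind of dirty read the report speculates about.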
[GitHub] [flink] becketqin opened a new pull request #18745: [FLINK-24607] Make OperatorCoordinator closing sequence more robust.
becketqin opened a new pull request #18745: URL: https://github.com/apache/flink/pull/18745 ## What is the purpose of the change Currently the closing sequence of the `OperatorCoordinator` does not swallow exceptions thrown in the middle, so when an exception is thrown, the rest of the closing sequence is skipped. This patch makes the closing sequence of `OperatorCoordinator` more robust by swallowing exceptions thrown in the middle, and by retrying the interruption of blocking threads multiple times in case the closing sequence is blocked. Some of the methods in `IOUtils`, such as `closeQuietly()`, seem better placed in `ComponentClosingUtils`; that will be done in a separate PR. ## Brief change log 0dfe9b7 Add util methods to shut down executor services. 1cb9f56 Make OperatorCoordinator closure more robust. ## Verifying this change This change added the following unit tests: 1. `ComponentClosingUtilsTest` class. 2. `SourceCoordinatorTest.testBlockOnClose()` test method. ## Does this pull request potentially affect one of the following parts: - Dependencies (does it add or upgrade a dependency): (no) - The public API, i.e., is any changed class annotated with `@Public(Evolving)`: (no) - The serializers: (no) - The runtime per-record code paths (performance sensitive): (no) - Anything that affects deployment or recovery: JobManager (and its components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (yes) - The S3 file system connector: (no) ## Documentation - Does this pull request introduce a new feature? (no) - If yes, how is the feature documented? (not applicable) -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
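The "swallow exceptions so later steps still run" idea described in the PR can be sketched as a generic pattern (this is not the actual `ComponentClosingUtils` code): each closing step runs in its own try/except, failures are collected, and the first one is rethrown only after every step has been attempted.

```python
def close_all_quietly(close_steps):
    """Run every closing step even if earlier ones fail, then rethrow the
    first failure so errors are not silently lost."""
    errors = []
    for step in close_steps:
        try:
            step()
        except Exception as exc:  # swallow so later steps still run
            errors.append(exc)
    if errors:
        raise errors[0]


closed = []

def close_gateways():
    closed.append("subtask-gateways")

def close_enumerator():
    raise RuntimeError("enumerator close failed")  # a step that blows up

def close_executor():
    closed.append("coordinator-executor")

try:
    close_all_quietly([close_gateways, close_enumerator, close_executor])
except RuntimeError:
    pass  # the first error surfaces, but only after all steps ran

# The failing middle step no longer skips the rest of the sequence.
assert closed == ["subtask-gateways", "coordinator-executor"]
```

The step names here are hypothetical; the point is only the control flow: without the per-step try/except, the `RuntimeError` would abort the loop and `close_executor` would never run, which is the fragility the PR addresses.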
[jira] [Resolved] (FLINK-25940) pyflink/datastream/tests/test_data_stream.py::StreamingModeDataStreamTests::test_keyed_process_function_with_state failed on AZP
[ https://issues.apache.org/jira/browse/FLINK-25940?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Huang Xingbo resolved FLINK-25940. -- Resolution: Fixed Merged into master via c92cda97108c6d6818e0fd6c22dba274884f3479 > pyflink/datastream/tests/test_data_stream.py::StreamingModeDataStreamTests::test_keyed_process_function_with_state > failed on AZP > > > Key: FLINK-25940 > URL: https://issues.apache.org/jira/browse/FLINK-25940 > Project: Flink > Issue Type: Bug > Components: API / Python > Affects Versions: 1.15.0 > Reporter: Till Rohrmann > Assignee: Huang Xingbo > Priority: Critical > Labels: pull-request-available, test-stability > > The test > {{pyflink/datastream/tests/test_data_stream.py::StreamingModeDataStreamTests::test_keyed_process_function_with_state}} > fails on AZP:
> {code}
> 2022-02-02T17:44:12 Feb 02 17:44:12 === FAILURES ===
> _ StreamingModeDataStreamTests.test_keyed_process_function_with_state _
>
> self = <StreamingModeDataStreamTests testMethod=test_keyed_process_function_with_state>
>
>     def test_keyed_process_function_with_state(self):
>         self.env.get_config().set_auto_watermark_interval(2000)
>         self.env.set_stream_time_characteristic(TimeCharacteristic.EventTime)
>         data_stream = self.env.from_collection([(1, 'hi', '1603708211000'),
>                                                 (2, 'hello', '1603708224000'),
>                                                 (3, 'hi', '1603708226000'),
>                                                 (4, 'hello', '1603708289000'),
>                                                 (5, 'hi', '1603708291000'),
>                                                 (6, 'hello', '1603708293000')],
>                                                type_info=Types.ROW([Types.INT(), Types.STRING(),
>                                                                     Types.STRING()]))
>
>         class MyTimestampAssigner(TimestampAssigner):
>
>             def extract_timestamp(self, value, record_timestamp) -> int:
>                 return int(value[2])
>
>         class MyProcessFunction(KeyedProcessFunction):
>
>             def __init__(self):
>                 self.value_state = None
>                 self.list_state = None
>                 self.map_state = None
>
>             def open(self, runtime_context: RuntimeContext):
>                 value_state_descriptor = ValueStateDescriptor('value_state', Types.INT())
>                 self.value_state = runtime_context.get_state(value_state_descriptor)
>                 list_state_descriptor = ListStateDescriptor('list_state', Types.INT())
>                 self.list_state = runtime_context.get_list_state(list_state_descriptor)
>                 map_state_descriptor = MapStateDescriptor('map_state', Types.INT(), Types.STRING())
>                 state_ttl_config = StateTtlConfig \
[GitHub] [flink] HuangXingBo closed pull request #18742: [FLINK-25940][python] Fix the unstable test test_keyed_process_function_with_state in PyFlink
HuangXingBo closed pull request #18742: URL: https://github.com/apache/flink/pull/18742 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Updated] (FLINK-26103) Introduce log store
[ https://issues.apache.org/jira/browse/FLINK-26103?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated FLINK-26103: --- Labels: pull-request-available (was: ) > Introduce log store > --- > > Key: FLINK-26103 > URL: https://issues.apache.org/jira/browse/FLINK-26103 > Project: Flink > Issue Type: Sub-task > Components: Table Store >Reporter: Jingsong Lee >Assignee: Jingsong Lee >Priority: Major > Labels: pull-request-available > Fix For: table-store-0.1.0 > > > Introduce log store: > * Introduce log store interfaces > * Implement Kafka log store -- This message was sent by Atlassian Jira (v8.20.1#820001)
[GitHub] [flink-table-store] JingsongLi opened a new pull request #20: [FLINK-26103] Introduce log store
JingsongLi opened a new pull request #20: URL: https://github.com/apache/flink-table-store/pull/20 Introduce log store: - Refactor SinkRecord - Introduce log store interfaces - Implement Kafka log store -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
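The PR above only names the pieces being added: log store interfaces plus a Kafka-backed implementation. As a rough, hypothetical sketch of what such a split looks like — none of these names come from flink-table-store — an interface can fix the append/read contract while backends vary:

```python
from abc import ABC, abstractmethod


class LogStore(ABC):
    """Hypothetical interface: an append-only, offset-addressed log."""

    @abstractmethod
    def append(self, record: bytes) -> int:
        """Append a record and return its offset."""

    @abstractmethod
    def read_from(self, offset: int):
        """Return all records at or after the given offset."""


class InMemoryLogStore(LogStore):
    """Stands in for a Kafka-backed implementation in this sketch: a real
    backend would replace the list with a producer/consumer on a topic."""

    def __init__(self):
        self._records = []

    def append(self, record: bytes) -> int:
        self._records.append(record)
        return len(self._records) - 1  # offset of the appended record

    def read_from(self, offset: int):
        return self._records[offset:]


store = InMemoryLogStore()
o1 = store.append(b"insert k1=v1")
o2 = store.append(b"insert k2=v2")
assert (o1, o2) == (0, 1)
assert store.read_from(1) == [b"insert k2=v2"]
```

The value of the interface/implementation split is that writers (such as the refactored SinkRecord path the PR mentions) can target the contract without knowing which log backend is configured.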
[GitHub] [flink] wenlong88 commented on pull request #18400: [FLINK-25198][docs] Add doc about name and description of operator
wenlong88 commented on pull request #18400: URL: https://github.com/apache/flink/pull/18400#issuecomment-1038705401 @gaoyunhaii thanks for the detailed review, I have addressed the comments -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Updated] (FLINK-26100) Set up Flink ML Document Website
[ https://issues.apache.org/jira/browse/FLINK-26100?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated FLINK-26100: --- Labels: pull-request-available (was: ) > Set up Flink ML Document Website > > > Key: FLINK-26100 > URL: https://issues.apache.org/jira/browse/FLINK-26100 > Project: Flink > Issue Type: New Feature > Components: Library / Machine Learning > Affects Versions: ml-2.0.0 > Reporter: Yunfeng Zhou > Priority: Major > Labels: pull-request-available > > Set up Flink ML's document website based on the Flink and StateFun > documentation websites. -- This message was sent by Atlassian Jira (v8.20.1#820001)
[GitHub] [flink-ml] yunfengzhou-hub opened a new pull request #57: [FLINK-26100] Set up Flink ML Document Website
yunfengzhou-hub opened a new pull request #57: URL: https://github.com/apache/flink-ml/pull/57 ## What is the purpose of the change - Set up the Flink ML document website ## Brief change log - Add a `docs` folder containing all configurations needed to set up the framework for Flink ML's document website. - Add basic documentation about the example operator, quick start, and release announcement blog. - Add guidelines in the README for building and starting the website, as well as for contributing documents to the `docs` folder. ## Does this pull request potentially affect one of the following parts: - Dependencies (does it add or upgrade a dependency): (no) - The public API, i.e., is any changed class annotated with @public(Evolving): (no) - Does this pull request introduce a new feature? (no) -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Closed] (FLINK-26102) connector test by using 'flink run --python'
[ https://issues.apache.org/jira/browse/FLINK-26102?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dian Fu closed FLINK-26102. --- Resolution: Not A Problem Hi [~waittting], you should post this kind of question in the [user mailing list|https://flink.apache.org/gettinghelp.html#user-mailing-list] instead of JIRA. Besides, I suggest you post more information about the question to help others identify the problem, e.g. the exception stack, the sample code, etc. > connector test by using 'flink run --python' > - > > Key: FLINK-26102 > URL: https://issues.apache.org/jira/browse/FLINK-26102 > Project: Flink > Issue Type: Bug > Affects Versions: 1.13.3 > Reporter: waittting > Priority: Major > > When I used 'flink run --python' to test my connector, I got an error '{*}No > operations allowed after connection closed.{*}' My connector's code does use > jdbc conn, but when I use sql-client to test the connector with the same SQL, > it's ok. So I wonder if there is a problem with the execution environment of > 'flink run -python' -- This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Comment Edited] (FLINK-26102) connector test by using 'flink run --python'
[ https://issues.apache.org/jira/browse/FLINK-26102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17491814#comment-17491814 ] Dian Fu edited comment on FLINK-26102 at 2/14/22, 6:34 AM: --- Hi [~waittting], you should post this kind of question in the [user mailing list|https://flink.apache.org/gettinghelp.html#user-mailing-list] instead of JIRA. Besides, I suggest you post more information about the question to help others identify the problem, e.g. the exception stack, the sample code, etc. was (Author: dianfu): Hi [~waittting], you should post this kind of question in go to the [user mailing list|https://flink.apache.org/gettinghelp.html#user-mailing-list] instead of JIRA. Besides, I suggest you post more information about the question to help others identify the problem, e.g. the exception stack, the sample code, etc. > connector test by using 'flink run --python' > - > > Key: FLINK-26102 > URL: https://issues.apache.org/jira/browse/FLINK-26102 > Project: Flink > Issue Type: Bug >Affects Versions: 1.13.3 >Reporter: waittting >Priority: Major > > When I used 'flink run --python' to test my connector, I got an error '{*}No > operations allowed after connection closed.{*}' My connector's code does use > jdbc conn, but when I use sql-client to test the connector with the same SQL, > it's ok. So I wonder if there is a problem with the execution environment of > 'flink run -python' -- This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Assigned] (FLINK-26103) Introduce log store
[ https://issues.apache.org/jira/browse/FLINK-26103?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jingsong Lee reassigned FLINK-26103: Assignee: Jingsong Lee > Introduce log store > --- > > Key: FLINK-26103 > URL: https://issues.apache.org/jira/browse/FLINK-26103 > Project: Flink > Issue Type: Sub-task > Components: Table Store >Reporter: Jingsong Lee >Assignee: Jingsong Lee >Priority: Major > Fix For: table-store-0.1.0 > > > Introduce log store: > * Introduce log store interfaces > * Implement Kafka log store -- This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Created] (FLINK-26103) Introduce log store
Jingsong Lee created FLINK-26103: Summary: Introduce log store Key: FLINK-26103 URL: https://issues.apache.org/jira/browse/FLINK-26103 Project: Flink Issue Type: Sub-task Components: Table Store Reporter: Jingsong Lee Fix For: table-store-0.1.0 Introduce log store: * Introduce log store interfaces * Implement Kafka log store -- This message was sent by Atlassian Jira (v8.20.1#820001)
[GitHub] [flink] flinkbot edited a comment on pull request #18400: [FLINK-25198][docs] Add doc about name and description of operator
flinkbot edited a comment on pull request #18400: URL: https://github.com/apache/flink/pull/18400#issuecomment-1016057839 ## CI report: * 0ee72a0732026c046d991d959033995c7baf44e1 Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29680) * b49a58bb711123c9cc370345b864cd529a56f8fe Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31372) * 5b1db23198aec87c376a6793562ce0753a02b8f3 UNKNOWN Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run azure` re-run the last Azure build -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot edited a comment on pull request #18400: [FLINK-25198][docs] Add doc about name and description of operator
flinkbot edited a comment on pull request #18400: URL: https://github.com/apache/flink/pull/18400#issuecomment-1016057839 ## CI report: * 0ee72a0732026c046d991d959033995c7baf44e1 Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29680) * b49a58bb711123c9cc370345b864cd529a56f8fe UNKNOWN Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run azure` re-run the last Azure build -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] MrWhiteSike edited a comment on pull request #18655: [FLINK-25799] [docs] Translate table/filesystem.md page into Chinese.
MrWhiteSike edited a comment on pull request #18655: URL: https://github.com/apache/flink/pull/18655#issuecomment-1032274801 Hi [@wuchong](https://github.com/wuchong), could you help review this when you are available? It would be much appreciated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot edited a comment on pull request #18738: [FLINK-24745][format][json] Add support for Oracle OGG JSON format parser
flinkbot edited a comment on pull request #18738: URL: https://github.com/apache/flink/pull/18738#issuecomment-1037051889 ## CI report: * 2a0939dfb92ff0485a51a01416b3ed7a86caf79e Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31348) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run azure` re-run the last Azure build -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Assigned] (FLINK-26042) PyFlinkEmbeddedSubInterpreterTests. test_udf_without_arguments failed on azure
[ https://issues.apache.org/jira/browse/FLINK-26042?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Huang Xingbo reassigned FLINK-26042: Assignee: Huang Xingbo
> PyFlinkEmbeddedSubInterpreterTests. test_udf_without_arguments failed on azure
> --
>
> Key: FLINK-26042
> URL: https://issues.apache.org/jira/browse/FLINK-26042
> Project: Flink
> Issue Type: Bug
> Components: API / Python
> Affects Versions: 1.15.0
> Reporter: Yun Gao
> Assignee: Huang Xingbo
> Priority: Major
> Labels: test-stability
>
> {code:java}
> === FAILURES ===
> _ PyFlinkEmbeddedSubInterpreterTests.test_udf_without_arguments _
>
> self = <PyFlinkEmbeddedSubInterpreterTests testMethod=test_udf_without_arguments>
>
>     def test_udf_without_arguments(self):
>         one = udf(lambda: 1, result_type=DataTypes.BIGINT(), deterministic=True)
>         two = udf(lambda: 2, result_type=DataTypes.BIGINT(), deterministic=False)
>
>         table_sink = source_sink_utils.TestAppendSink(['a', 'b'],
>                                                       [DataTypes.BIGINT(), DataTypes.BIGINT()])
>         self.t_env.register_table_sink("Results", table_sink)
>
>         t = self.t_env.from_elements([(1, 2), (2, 5), (3, 1)], ['a', 'b'])
>         t.select(one(), two()).execute_insert("Results").wait()
>
> pyflink/table/tests/test_udf.py:252:
> _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
> pyflink/table/table_result.py:76: in wait
>     get_method(self._j_table_result, "await")()
> .tox/py37-cython/lib/python3.7/site-packages/py4j/java_gateway.py:1322: in __call__
>     answer, self.gateway_client, self.target_id, self.name)
> pyflink/util/exceptions.py:146: in deco
>     return f(*a, **kw)
> _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
>
> answer = 'x'
> gateway_client = <...>
> target_id = 'o26503', name = 'await'
>
>     def get_return_value(answer, gateway_client, target_id=None, name=None):
>         """Converts an answer received from the Java gateway into a Python object.
>
>         For example, string representation of integers are converted to Python
>         integer, string representation of objects are converted to JavaObject
>         instances, etc.
>
>         :param answer: the string returned by the Java gateway
>         :param gateway_client: the gateway client used to communicate with the Java
>             Gateway. Only necessary if the answer is a reference (e.g., object,
>             list, map)
>         :param target_id: the name of the object from which the answer comes from
>             (e.g., *object1* in `object1.hello()`). Optional. >
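The failing test above wraps two zero-argument lambdas as scalar UDFs, each carrying a declared result type and a determinism flag. As a rough, stdlib-only sketch of that wrapping pattern (the `udf` and `ScalarWrapper` names below are illustrative stand-ins, not PyFlink's actual implementation):

```python
# Illustrative stand-in for the zero-argument UDF pattern exercised by
# test_udf_without_arguments. This is NOT PyFlink's implementation; it only
# sketches how a wrapper can carry a result type and a determinism flag.
from dataclasses import dataclass
from typing import Any, Callable


@dataclass
class ScalarWrapper:
    func: Callable[..., Any]    # the wrapped Python callable
    result_type: str            # declared result type (e.g. "BIGINT")
    deterministic: bool = True  # whether repeated calls may be folded

    def __call__(self, *args: Any) -> Any:
        # Delegate to the wrapped callable; zero-argument UDFs pass no args.
        return self.func(*args)


def udf(f: Callable[..., Any], result_type: str,
        deterministic: bool = True) -> ScalarWrapper:
    """Wrap a plain callable, mirroring the shape of the test's udf(...) calls."""
    return ScalarWrapper(f, result_type, deterministic)


# Zero-argument UDFs, as in the failing test:
one = udf(lambda: 1, result_type="BIGINT", deterministic=True)
two = udf(lambda: 2, result_type="BIGINT", deterministic=False)

print(one(), two())  # prints: 1 2
```

In the real test the wrapped functions are invoked inside a Table API `select(...)` and their results written to a registered sink; the sketch only shows the Python-side wrapping, not the Java-gateway round trip where the failure surfaces.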