[GitHub] [flink] predatorray opened a new pull request #16928: [FLINK-23899][docs-zh] Translate the "Elastic Scaling" page into Chinese
predatorray opened a new pull request #16928:
URL: https://github.com/apache/flink/pull/16928

## What is the purpose of the change

Translated the "Elastic Scaling" page into Chinese.

## Brief change log

Translated the "Elastic Scaling" page into Chinese.

## Verifying this change

This change is a translation without any test coverage.

## Does this pull request potentially affect one of the following parts:

- Dependencies (does it add or upgrade a dependency): no
- The public API, i.e., is any changed class annotated with `@Public(Evolving)`: no
- The serializers: no
- The runtime per-record code paths (performance sensitive): no
- Anything that affects deployment or recovery: JobManager (and its components), Checkpointing, Kubernetes/Yarn, ZooKeeper: no
- The S3 file system connector: no

## Documentation

- Does this pull request introduce a new feature? no
- If yes, how is the feature documented? not applicable

-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org
[jira] [Updated] (FLINK-18592) StreamingFileSink fails due to truncating HDFS file failure
[ https://issues.apache.org/jira/browse/FLINK-18592?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Paul Lin updated FLINK-18592:
-----------------------------
    Priority: Major  (was: Minor)

> StreamingFileSink fails due to truncating HDFS file failure
> -----------------------------------------------------------
>
>                 Key: FLINK-18592
>                 URL: https://issues.apache.org/jira/browse/FLINK-18592
>             Project: Flink
>          Issue Type: Bug
>          Components: Connectors / FileSystem
>    Affects Versions: 1.10.1
>            Reporter: JIAN WANG
>            Priority: Major
>              Labels: auto-deprioritized-major, pull-request-available
>             Fix For: 1.10.4, 1.14.0, 1.11.5
>
> I hit this issue on Flink 1.10.1. I use Flink on YARN (3.0.0-cdh6.3.2) with StreamingFileSink.
> The sink is built like this:
> {code}
> public static StreamingFileSink<String> build(String dir, BucketAssigner<String, String> assigner, String prefix) {
>     return StreamingFileSink.forRowFormat(new Path(dir), new SimpleStringEncoder<String>())
>             .withRollingPolicy(
>                     DefaultRollingPolicy
>                             .builder()
>                             .withRolloverInterval(TimeUnit.HOURS.toMillis(2))
>                             .withInactivityInterval(TimeUnit.MINUTES.toMillis(10))
>                             .withMaxPartSize(1024L * 1024L * 1024L * 50) // Max 50GB
>                             .build())
>             .withBucketAssigner(assigner)
>             .withOutputFileConfig(OutputFileConfig.builder().withPartPrefix(prefix).build())
>             .build();
> }
> {code}
> The error is:
> {noformat}
> java.io.IOException: Problem while truncating file:
> hdfs:///business_log/hashtag/2020-06-25/.hashtag-122-37.inprogress.8e65f69c-b5ba-4466-a844-ccc0a5a93de2
> {noformat}
> Because of this issue, the job cannot restart from the latest checkpoint or savepoint.
> Currently, my workaround is to keep the latest 3 checkpoints; when the job fails, I manually restart it from the second-to-last checkpoint.

-- 
This message was sent by Atlassian Jira
(v8.3.4#803005)
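The reporter's workaround of keeping the last three checkpoints corresponds to Flink's retained-checkpoints setting; a minimal configuration sketch (the option name is from Flink's configuration reference, the value mirrors the workaround described above):

```yaml
# flink-conf.yaml — retain the last 3 completed checkpoints so that an
# older checkpoint is still available if restoring from the latest fails
state.checkpoints.num-retained: 3
```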
[jira] [Commented] (FLINK-23556) SQLClientSchemaRegistryITCase fails with " Subject ... not found"
[ https://issues.apache.org/jira/browse/FLINK-23556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17402730#comment-17402730 ]

Biao Geng commented on FLINK-23556:
-----------------------------------

Hi [~xtsong], I think my pull request is ready. I ran the e2e tests 3 times and this case works fine now. I would appreciate it a lot if you could help me find a reviewer for it. Thanks!

The PR link: [https://github.com/apache/flink/pull/16864]

> SQLClientSchemaRegistryITCase fails with "Subject ... not found"
> ----------------------------------------------------------------
>
>                 Key: FLINK-23556
>                 URL: https://issues.apache.org/jira/browse/FLINK-23556
>             Project: Flink
>          Issue Type: Bug
>          Components: Table SQL / Ecosystem
>    Affects Versions: 1.14.0
>            Reporter: Dawid Wysakowicz
>            Assignee: Biao Geng
>            Priority: Blocker
>              Labels: pull-request-available, stale-blocker, test-stability
>             Fix For: 1.14.0
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=21129=logs=91bf6583-3fb2-592f-e4d4-d79d79c3230a=cc5499f8-bdde-5157-0d76-b6528ecd808e=25337
> {code}
> Jul 28 23:37:48 [ERROR] Tests run: 2, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 209.44 s <<< FAILURE! - in org.apache.flink.tests.util.kafka.SQLClientSchemaRegistryITCase
> Jul 28 23:37:48 [ERROR] testWriting(org.apache.flink.tests.util.kafka.SQLClientSchemaRegistryITCase)  Time elapsed: 81.146 s <<< ERROR!
> Jul 28 23:37:48 > io.confluent.kafka.schemaregistry.client.rest.exceptions.RestClientException: > Subject 'test-user-behavior-d18d4af2-3830-4620-9993-340c13f50cc2-value' not > found.; error code: 40401 > Jul 28 23:37:48 at > io.confluent.kafka.schemaregistry.client.rest.RestService.sendHttpRequest(RestService.java:292) > Jul 28 23:37:48 at > io.confluent.kafka.schemaregistry.client.rest.RestService.httpRequest(RestService.java:352) > Jul 28 23:37:48 at > io.confluent.kafka.schemaregistry.client.rest.RestService.getAllVersions(RestService.java:769) > Jul 28 23:37:48 at > io.confluent.kafka.schemaregistry.client.rest.RestService.getAllVersions(RestService.java:760) > Jul 28 23:37:48 at > io.confluent.kafka.schemaregistry.client.CachedSchemaRegistryClient.getAllVersions(CachedSchemaRegistryClient.java:364) > Jul 28 23:37:48 at > org.apache.flink.tests.util.kafka.SQLClientSchemaRegistryITCase.getAllVersions(SQLClientSchemaRegistryITCase.java:230) > Jul 28 23:37:48 at > org.apache.flink.tests.util.kafka.SQLClientSchemaRegistryITCase.testWriting(SQLClientSchemaRegistryITCase.java:195) > Jul 28 23:37:48 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native > Method) > Jul 28 23:37:48 at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > Jul 28 23:37:48 at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > Jul 28 23:37:48 at java.lang.reflect.Method.invoke(Method.java:498) > Jul 28 23:37:48 at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) > Jul 28 23:37:48 at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > Jul 28 23:37:48 at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) > Jul 28 23:37:48 at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > Jul 28 23:37:48 at > 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) > Jul 28 23:37:48 at > org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) > Jul 28 23:37:48 at > java.util.concurrent.FutureTask.run(FutureTask.java:266) > Jul 28 23:37:48 at java.lang.Thread.run(Thread.java:748) > Jul 28 23:37:48 > {code} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (FLINK-23907) Type Migration: introducing primitive functional interfaces
[ https://issues.apache.org/jira/browse/FLINK-23907?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Oleg Smirnov updated FLINK-23907:
---------------------------------
    Description: 
Hey! We are a collaborative group of researchers from JetBrains Research and Oregon State University, and we are testing our data-driven [plugin|https://github.com/JetBrains-Research/data-driven-type-migration], which is based on IntelliJ's [Type Migration|https://www.jetbrains.com/help/idea/type-migration.html] framework and adjusts it using custom structural-replace templates that express the adaptations required to perform the type changes.

I want to apply several type changes using it and open a PR, introducing primitive functional interfaces in order to prevent unnecessary boxing (like *{{BooleanSupplier}}* instead of *{{Supplier<Boolean>}}*, {{*OptionalInt*}} instead of {{*Optional<Integer>*}}, {{*Predicate<T>*}} instead of {{*Function<T, Boolean>*}}, etc.), since it can affect the performance of the code (_Effective Java_, Items 44, 61).

The patch itself is already prepared (because it is done automatically using the plugin), so I guess I will need to open this ticket, receive your approval, and then open the PR? It would help us a lot to evaluate the usefulness of our approach! Thank you in advance!

  was:
Hey! We are a collaborative group of researchers from JetBrains Research and Oregon State University, and we are testing our data-driven [plugin|https://github.com/JetBrains-Research/data-driven-type-migration], which is based on IntelliJ's [Type Migration|https://www.jetbrains.com/help/idea/type-migration.html] framework and adjusts it using custom structural-replace templates that express the adaptations required to perform the type changes. I want to apply several type changes using it and open the PR, thus introducing primitive functional interfaces in order to prevent unnecessary boxing (like BooleanSupplier instead of Supplier<Boolean>, OptionalInt instead of Optional<Integer>, Predicate<T> instead of Function<T, Boolean>, etc.), since it can affect the performance of the code (Effective Java, Items 44, 61). The patch itself is already prepared (because it is done automatically using the plugin), so I guess I will need to open this ticket, receive your approval, and then open the PR? It would help us a lot to evaluate the usefulness of our approach! Thank you in advance!

> Type Migration: introducing primitive functional interfaces
> -----------------------------------------------------------
>
>                 Key: FLINK-23907
>                 URL: https://issues.apache.org/jira/browse/FLINK-23907
>             Project: Flink
>          Issue Type: Improvement
>            Reporter: Oleg Smirnov
>            Priority: Minor
>              Labels: refactoring, type-migration
>
> Hey! We are a collaborative group of researchers from JetBrains Research and Oregon State University, and we are testing our data-driven [plugin|https://github.com/JetBrains-Research/data-driven-type-migration], which is based on IntelliJ's [Type Migration|https://www.jetbrains.com/help/idea/type-migration.html] framework and adjusts it using custom structural-replace templates that express the adaptations required to perform the type changes.
> I want to apply several type changes using it and open a PR, introducing primitive functional interfaces in order to prevent unnecessary boxing (like *{{BooleanSupplier}}* instead of *{{Supplier<Boolean>}}*, {{*OptionalInt*}} instead of {{*Optional<Integer>*}}, {{*Predicate<T>*}} instead of {{*Function<T, Boolean>*}}, etc.), since it can affect the performance of the code (_Effective Java_, Items 44, 61).
> The patch itself is already prepared (because it is done automatically using the plugin), so I guess I will need to open this ticket, receive your approval, and then open the PR?
> It would help us a lot to evaluate the usefulness of our approach!
> Thank you in advance!

-- 
This message was sent by Atlassian Jira
(v8.3.4#803005)
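The kind of type change proposed in this ticket can be illustrated with a minimal, self-contained Java sketch; the class, field, and method names below are invented for illustration and are not taken from the Flink codebase. The boxed variants allocate wrapper objects on every call, while the primitive specializations from `java.util.function` do not:

```java
import java.util.Optional;
import java.util.OptionalInt;
import java.util.function.BooleanSupplier;
import java.util.function.Function;
import java.util.function.IntPredicate;
import java.util.function.Supplier;

public class PrimitiveInterfacesDemo {

    // Boxed variants: every call goes through wrapper objects.
    static final Supplier<Boolean> BOXED_FLAG = () -> Boolean.TRUE;
    static final Function<Integer, Boolean> BOXED_IS_EVEN = i -> i % 2 == 0;

    // Primitive variants: no wrapper objects on the call path.
    static final BooleanSupplier PRIMITIVE_FLAG = () -> true;
    static final IntPredicate PRIMITIVE_IS_EVEN = i -> i % 2 == 0;

    // Returns Optional<Integer>: the found int is boxed into an Integer.
    static Optional<Integer> boxedFind(int[] xs, Function<Integer, Boolean> p) {
        for (int x : xs) {
            if (p.apply(x)) {          // boxes x for the call, boxes the result
                return Optional.of(x); // boxes x again for the Optional
            }
        }
        return Optional.empty();
    }

    // Returns OptionalInt: the int stays primitive end to end.
    static OptionalInt primitiveFind(int[] xs, IntPredicate p) {
        for (int x : xs) {
            if (p.test(x)) {               // no boxing
                return OptionalInt.of(x);  // no boxing
            }
        }
        return OptionalInt.empty();
    }

    public static void main(String[] args) {
        int[] xs = {1, 3, 4, 7};
        System.out.println(PRIMITIVE_FLAG.getAsBoolean());        // prints "true"
        System.out.println(primitiveFind(xs, PRIMITIVE_IS_EVEN)); // prints "OptionalInt[4]"
    }
}
```

Both variants compute the same results; the difference is purely in allocation behavior, which is what Items 44 and 61 of Effective Java warn about on hot paths.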
[jira] [Commented] (FLINK-23905) Reduce the load on JobManager when submitting large-scale job with a big user jar
[ https://issues.apache.org/jira/browse/FLINK-23905?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17402729#comment-17402729 ]

huntercc commented on FLINK-23905:
----------------------------------

Thanks for your reply and the practical advice, [~trohrmann]. In fact, we have adopted a similar method by configuring the yarn.ship-files parameter, which greatly shortens the time spent in this step. However, I'm worried that this approach may introduce dependency conflicts, especially when we use the YARN session mode. I would suggest that it would be better if this part of the work could be transparent to users.

> Reduce the load on JobManager when submitting large-scale job with a big user jar
> ---------------------------------------------------------------------------------
>
>                 Key: FLINK-23905
>                 URL: https://issues.apache.org/jira/browse/FLINK-23905
>             Project: Flink
>          Issue Type: Improvement
>          Components: Runtime / Coordination
>            Reporter: huntercc
>            Priority: Major
>
> As described in FLINK-20612 and FLINK-21731, there are some time-consuming steps in the job startup phase. Recently, we found that when submitting a large-scale job with a large user jar, the time spent moving tasks from DEPLOYING to RUNNING accounts for a high proportion of the total startup time.
> In the task initialization stage, the user jar needs to be pulled from the JobManager through the BlobService. The JobManager has to spend a lot of computing power distributing the files, which leads to a heavy load during the start-up stage. Worse, the JobManager may fail to respond in time to RPC requests sent by the TaskManagers due to the high load, causing timeout exceptions (such as Akka timeouts), which lead to job restarts and further prolong the start-up time of the job.

-- 
This message was sent by Atlassian Jira
(v8.3.4#803005)
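The workaround mentioned in the comment above can be sketched as a configuration fragment. The `yarn.ship-files` option is the one named in the comment; the directory path below is purely illustrative:

```yaml
# flink-conf.yaml — pre-ship large dependency jars as YARN local resources, so
# TaskManagers localize them through YARN instead of pulling the fat user jar
# from the JobManager's BlobServer. /opt/flink/extra-deps is a hypothetical path.
yarn.ship-files: /opt/flink/extra-deps
```

As the commenter notes, shipped jars end up on the common classpath, which is where the dependency-conflict concern in session mode comes from.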
[jira] [Updated] (FLINK-23907) Type Migration: introducing primitive functional interfaces
[ https://issues.apache.org/jira/browse/FLINK-23907?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Oleg Smirnov updated FLINK-23907:
---------------------------------
    Description: 
Hey! We are a collaborative group of researchers from JetBrains Research and Oregon State University, and we are testing our data-driven [plugin|https://github.com/JetBrains-Research/data-driven-type-migration], which is based on IntelliJ's [Type Migration|https://www.jetbrains.com/help/idea/type-migration.html] framework and adjusts it using custom structural-replace templates that express the adaptations required to perform the type changes.

I want to apply several type changes using it and open a PR, introducing primitive functional interfaces in order to prevent unnecessary boxing (like BooleanSupplier instead of Supplier<Boolean>, OptionalInt instead of Optional<Integer>, Predicate<T> instead of Function<T, Boolean>, etc.), since it can affect the performance of the code (Effective Java, Items 44, 61).

The patch itself is already prepared (because it is done automatically using the plugin), so I guess I will need to open this ticket, receive your approval, and then open the PR? It would help us a lot to evaluate the usefulness of our approach! Thank you in advance!

  was:
Hey! We are a group of researchers, and we are testing our data-driven [plugin|https://github.com/JetBrains-Research/data-driven-type-migration], which is based on IntelliJ's [Type Migration|https://www.jetbrains.com/help/idea/type-migration.html] framework and adjusts it using custom structural-replace templates that express the adaptations required to perform the type change. I want to apply several type changes using it and open the PR, thus introducing primitive functional interfaces in order to prevent unnecessary boxing (like BooleanSupplier instead of Supplier<Boolean>, OptionalInt instead of Optional<Integer>, etc.), since it can affect the performance of the code (Effective Java, Items 44, 61). The patch itself is already prepared, so I guess I will need to open this ticket, receive your approval, and then open the PR? Thank you in advance!

> Type Migration: introducing primitive functional interfaces
> -----------------------------------------------------------
>
>                 Key: FLINK-23907
>                 URL: https://issues.apache.org/jira/browse/FLINK-23907
>             Project: Flink
>          Issue Type: Improvement
>            Reporter: Oleg Smirnov
>            Priority: Minor
>              Labels: refactoring, type-migration
>
> Hey! We are a collaborative group of researchers from JetBrains Research and Oregon State University, and we are testing our data-driven [plugin|https://github.com/JetBrains-Research/data-driven-type-migration], which is based on IntelliJ's [Type Migration|https://www.jetbrains.com/help/idea/type-migration.html] framework and adjusts it using custom structural-replace templates that express the adaptations required to perform the type changes.
> I want to apply several type changes using it and open a PR, introducing primitive functional interfaces in order to prevent unnecessary boxing (like BooleanSupplier instead of Supplier<Boolean>, OptionalInt instead of Optional<Integer>, Predicate<T> instead of Function<T, Boolean>, etc.), since it can affect the performance of the code (Effective Java, Items 44, 61).
> The patch itself is already prepared (because it is done automatically using the plugin), so I guess I will need to open this ticket, receive your approval, and then open the PR?
> It would help us a lot to evaluate the usefulness of our approach!
> Thank you in advance!

-- 
This message was sent by Atlassian Jira
(v8.3.4#803005)
[jira] [Commented] (FLINK-16152) Translate "Operator/index" into Chinese
[ https://issues.apache.org/jira/browse/FLINK-16152?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17402715#comment-17402715 ]

wuguihu commented on FLINK-16152:
---------------------------------

Hello [~jark], excuse me for taking up your time. I have finished this ticket. Could you review it for me? Thank you very much!

> Translate "Operator/index" into Chinese
> ---------------------------------------
>
>                 Key: FLINK-16152
>                 URL: https://issues.apache.org/jira/browse/FLINK-16152
>             Project: Flink
>          Issue Type: Sub-task
>          Components: chinese-translation, Documentation
>            Reporter: Yun Gao
>            Assignee: wuguihu
>            Priority: Major
>              Labels: auto-unassigned, pull-request-available
>             Fix For: 1.14.0
>
> The page is located at _docs/dev/stream/operators/index.zh.md_

-- 
This message was sent by Atlassian Jira
(v8.3.4#803005)
[jira] [Updated] (FLINK-5601) Window operator does not checkpoint watermarks
[ https://issues.apache.org/jira/browse/FLINK-5601?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Flink Jira Bot updated FLINK-5601:
----------------------------------
    Labels: auto-deprioritized-critical auto-unassigned pull-request-available stale-major  (was: auto-deprioritized-critical auto-unassigned pull-request-available)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help the community manage its development. I see this issue has been marked as Major but is unassigned, and neither it nor its Sub-Tasks have been updated for 60 days. I have gone ahead and added a "stale-major" label to the issue. If this ticket is a Major, please either assign yourself or give an update. Afterwards, please remove the label, or in 7 days the issue will be deprioritized.

> Window operator does not checkpoint watermarks
> ----------------------------------------------
>
>                 Key: FLINK-5601
>                 URL: https://issues.apache.org/jira/browse/FLINK-5601
>             Project: Flink
>          Issue Type: Improvement
>          Components: Runtime / Checkpointing
>    Affects Versions: 1.5.0, 1.6.0, 1.7.0, 1.8.0, 1.9.0, 1.10.0, 1.11.0
>            Reporter: Ufuk Celebi
>            Priority: Major
>              Labels: auto-deprioritized-critical, auto-unassigned, pull-request-available, stale-major
>
> During release testing [~stefanrichte...@gmail.com] and I noticed that watermarks are not checkpointed in the window operator.
> This can lead to non-determinism when restoring checkpoints. I was running an adjusted {{SessionWindowITCase}} via Kafka for testing migration and rescaling and ran into failures, because the data generator required deterministic behaviour.
> What happened was that on restore, late elements were sometimes not dropped, because the watermarks needed to be re-established after the restore first.
> [~aljoscha] Do you know whether there is a special reason for explicitly not checkpointing watermarks?

-- 
This message was sent by Atlassian Jira
(v8.3.4#803005)
[jira] [Updated] (FLINK-18568) Add Support for Azure Data Lake Store Gen 2 in Streaming File Sink
[ https://issues.apache.org/jira/browse/FLINK-18568?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Flink Jira Bot updated FLINK-18568: --- Labels: auto-deprioritized-major stale-assigned (was: auto-deprioritized-major) I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help the community manage its development. I see this issue is assigned but has not received an update in 30 days, so it has been labeled "stale-assigned". If you are still working on the issue, please remove the label and add a comment updating the community on your progress. If this issue is waiting on feedback, please consider this a reminder to the committer/reviewer. Flink is a very active project, and so we appreciate your patience. If you are no longer working on the issue, please unassign yourself so someone else may work on it. > Add Support for Azure Data Lake Store Gen 2 in Streaming File Sink > -- > > Key: FLINK-18568 > URL: https://issues.apache.org/jira/browse/FLINK-18568 > Project: Flink > Issue Type: Improvement > Components: API / DataStream, Connectors / Common >Affects Versions: 1.12.0 >Reporter: Israel Ekpo >Assignee: Srinivasulu Punuru >Priority: Minor > Labels: auto-deprioritized-major, stale-assigned > Fix For: 1.14.0 > > > The objective of this improvement is to add support for Azure Data Lake Store > Gen 2 (ADLS Gen2) [2] as one of the supported filesystems for the Streaming > File Sink [1] > [1] > https://ci.apache.org/projects/flink/flink-docs-release-1.11/dev/connectors/streamfile_sink.html > [2] https://hadoop.apache.org/docs/current/hadoop-azure/abfs.html -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (FLINK-23845) [DOCS]PushGateway metrics group not delete when job shutdown
[ https://issues.apache.org/jira/browse/FLINK-23845?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Flink Jira Bot updated FLINK-23845:
-----------------------------------
    Labels: pull-request-available stale-blocker  (was: pull-request-available)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help the community manage its development. I see this issue has been marked as a Blocker but is unassigned, and neither it nor its Sub-Tasks have been updated for 1 day. I have gone ahead and marked it "stale-blocker". If this ticket is a Blocker, please either assign yourself or give an update. Afterwards, please remove the label, or in 7 days the issue will be deprioritized.

> [DOCS]PushGateway metrics group not delete when job shutdown
> ------------------------------------------------------------
>
>                 Key: FLINK-23845
>                 URL: https://issues.apache.org/jira/browse/FLINK-23845
>             Project: Flink
>          Issue Type: Bug
>          Components: Documentation
>            Reporter: camilesing
>            Priority: Blocker
>              Labels: pull-request-available, stale-blocker
>
> See https://issues.apache.org/jira/browse/FLINK-20691. In any case, the problem has always existed; we should document it to help others avoid running into it.

-- 
This message was sent by Atlassian Jira
(v8.3.4#803005)
[jira] [Updated] (FLINK-22981) FlinkKafkaProducerMigrationTest.testRestoreProducer fail with timeout
[ https://issues.apache.org/jira/browse/FLINK-22981?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Flink Jira Bot updated FLINK-22981: --- Labels: auto-deprioritized-major test-stability (was: stale-major test-stability) Priority: Minor (was: Major) This issue was labeled "stale-major" 7 days ago and has not received any updates so it is being deprioritized. If this ticket is actually Major, please raise the priority and ask a committer to assign you the issue or revive the public discussion. > FlinkKafkaProducerMigrationTest.testRestoreProducer fail with timeout > -- > > Key: FLINK-22981 > URL: https://issues.apache.org/jira/browse/FLINK-22981 > Project: Flink > Issue Type: Bug > Components: Connectors / Kafka >Affects Versions: 1.13.1 >Reporter: Dawid Wysakowicz >Priority: Minor > Labels: auto-deprioritized-major, test-stability > > https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=18953=logs=c5612577-f1f7-5977-6ff6-7432788526f7=53f6305f-55e6-561c-8f1e-3a1dde2c77df=6831 > {code} > Jun 13 22:40:34 [ERROR] Tests run: 10, Failures: 0, Errors: 1, Skipped: 5, > Time elapsed: 221.7 s <<< FAILURE! - in > org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducerMigrationTest > Jun 13 22:40:34 [ERROR] testRestoreProducer[Migration Savepoint: > 1.12](org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducerMigrationTest) > Time elapsed: 85.865 s <<< ERROR! > Jun 13 22:40:34 org.apache.kafka.common.errors.TimeoutException: > org.apache.kafka.common.errors.TimeoutException: Timeout expired after > 6milliseconds while awaiting InitProducerId > Jun 13 22:40:34 Caused by: org.apache.kafka.common.errors.TimeoutException: > Timeout expired after 6milliseconds while awaiting InitProducerId > Jun 13 22:40:34 > {code} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [flink] flinkbot edited a comment on pull request #16864: [FLINK-23556][tests] Make SQLClientSchemaRegistryITCase more stable
flinkbot edited a comment on pull request #16864: URL: https://github.com/apache/flink/pull/16864#issuecomment-900421523 ## CI report: * 6aee1804b521f84a72e483ef7830ea2a191eff43 Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=22604) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build - `@flinkbot run azure` re-run the last Azure build -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot edited a comment on pull request #16927: [FLINK-18592][Connectors/FileSystem] StreamingFileSink fails due to truncating HDFS file failure
flinkbot edited a comment on pull request #16927: URL: https://github.com/apache/flink/pull/16927#issuecomment-903143712 ## CI report: * d0fbb5aa88be503514642bb0571a4af183a21edf Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=22603) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build - `@flinkbot run azure` re-run the last Azure build -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot edited a comment on pull request #16864: [FLINK-23556][tests] Make SQLClientSchemaRegistryITCase more stable
flinkbot edited a comment on pull request #16864: URL: https://github.com/apache/flink/pull/16864#issuecomment-900421523 ## CI report: * e47cc45fd35764b1b65c6a0dd41bf863b8dcee49 Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=22584) * 6aee1804b521f84a72e483ef7830ea2a191eff43 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=22604) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build - `@flinkbot run azure` re-run the last Azure build -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot edited a comment on pull request #16925: [BP-1.13][FLINK-22333][tests] Harden Elasticsearch7DynamicSinkITCase.testWritingDocuments by setting parallelism to 4
flinkbot edited a comment on pull request #16925: URL: https://github.com/apache/flink/pull/16925#issuecomment-903114231 ## CI report: * 3edefca893bec20e8fd17c75ac708368bb1893be Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=22601) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build - `@flinkbot run azure` re-run the last Azure build -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot edited a comment on pull request #16926: [BP-1.12][FLINK-22333][tests] Harden Elasticsearch7DynamicSinkITCase.testWritingDocuments by setting parallelism to 4
flinkbot edited a comment on pull request #16926: URL: https://github.com/apache/flink/pull/16926#issuecomment-903114240 ## CI report: * 345ab64e359f37b511140766ca708fa7dfe28207 Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=22602) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build - `@flinkbot run azure` re-run the last Azure build -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot edited a comment on pull request #16927: [FLINK-18592][Connectors/FileSystem] StreamingFileSink fails due to truncating HDFS file failure
flinkbot edited a comment on pull request #16927: URL: https://github.com/apache/flink/pull/16927#issuecomment-903143712 ## CI report: * d0fbb5aa88be503514642bb0571a4af183a21edf Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=22603) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build - `@flinkbot run azure` re-run the last Azure build -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Commented] (FLINK-23725) HadoopFsCommitter, file rename failure
[ https://issues.apache.org/jira/browse/FLINK-23725?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17402652#comment-17402652 ]

Paul Lin commented on FLINK-23725:
----------------------------------

[~todd5167] Would you like to provide a fix? If not, I could take this issue.

> HadoopFsCommitter, file rename failure
> --------------------------------------
>
>                 Key: FLINK-23725
>                 URL: https://issues.apache.org/jira/browse/FLINK-23725
>             Project: Flink
>          Issue Type: Bug
>          Components: Connectors / FileSystem, Connectors / Hadoop Compatibility, FileSystems
>    Affects Versions: 1.11.1, 1.12.1
>            Reporter: todd
>            Priority: Major
>
> When writing an HDFS file, if the target part file already exists, the failed rename only returns false. Should we throw an exception indicating that the part file already exists, or at least log it?
>
> {code}
> // org.apache.flink.runtime.fs.hdfs.HadoopRecoverableFsDataOutputStream.HadoopFsCommitter#commit
> public void commit() throws IOException {
>     final Path src = recoverable.tempFile();
>     final Path dest = recoverable.targetFile();
>     final long expectedLength = recoverable.offset();
>     try {
>         // always returns false or true
>         fs.rename(src, dest);
>     } catch (IOException e) {
>         throw new IOException(
>                 "Committing file by rename failed: " + src + " to " + dest, e);
>     }
> }
> {code}

-- 
This message was sent by Atlassian Jira
(v8.3.4#803005)
[jira] [Comment Edited] (FLINK-23725) HadoopFsCommitter, file rename failure
[ https://issues.apache.org/jira/browse/FLINK-23725?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17402650#comment-17402650 ]

Paul Lin edited comment on FLINK-23725 at 8/21/21, 5:18 PM:
------------------------------------------------------------

I've also met this issue. If the file name already exists, the file committer silently skips the commit, which may lead to data loss. The root cause is that #rename does not throw an exception if the target file already exists or the source file doesn't exist; instead it returns false to indicate that the operation failed, as [Hadoop ClientProtocol|https://github.com/apache/hadoop/blob/b6d19718204af02da6e2ed0b83d5936824371fc0/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/ClientProtocol.java#L520] mentions. I think in both cases we should throw an exception.

  was (Author: paul lin):
I've also met this issue. If the file name already exists, FileCommiter would silently skip the commit, which may lead to data loss. The root cause is that #rename would not throw exceptions if the target file already exists or the src file doesn't exist, instead it returns false to indicate the operation is failed, as [Hadoop ClientProtocal|[https://github.com/apache/hadoop/blob/b6d19718204af02da6e2ed0b83d5936824371fc0/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/ClientProtocol.java#L520]|https://github.com/apache/hadoop/blob/b6d19718204af02da6e2ed0b83d5936824371fc0/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/ClientProtocol.java#L520)] mentioned. I think in both cases we should throw an exception.
[jira] [Comment Edited] (FLINK-23725) HadoopFsCommitter, file rename failure
[ https://issues.apache.org/jira/browse/FLINK-23725?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17402650#comment-17402650 ] Paul Lin edited comment on FLINK-23725 at 8/21/21, 5:17 PM: I've also met this issue. If the file name already exists, FileCommitter would silently skip the commit, which may lead to data loss. The root cause is that #rename does not throw an exception if the target file already exists or the src file doesn't exist; instead it returns false to indicate that the operation failed, as the [Hadoop ClientProtocol|https://github.com/apache/hadoop/blob/b6d19718204af02da6e2ed0b83d5936824371fc0/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/ClientProtocol.java#L520] Javadoc mentions. I think in both cases we should throw an exception. was (Author: paul lin): I've also met this issue. If the file name already exists, FileCommitter would silently skip the commit, which may lead to data loss. The root cause is that #rename does not throw an exception if the target file already exists or the src file doesn't exist; instead it returns false to indicate that the operation failed, as the [Hadoop ClientProtocol|https://github.com/apache/hadoop/blob/b6d19718204af02da6e2ed0b83d5936824371fc0/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/ClientProtocol.java#L520] Javadoc mentions. I think in both cases we should throw an exception.
[jira] [Commented] (FLINK-23725) HadoopFsCommitter, file rename failure
[ https://issues.apache.org/jira/browse/FLINK-23725?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17402650#comment-17402650 ] Paul Lin commented on FLINK-23725: -- I've also met this issue. If the file name already exists, FileCommitter would silently skip the commit, which may lead to data loss. The root cause is that #rename does not throw an exception if the target file already exists or the src file doesn't exist; instead it returns false to indicate that the operation failed, as the [Hadoop ClientProtocol|https://github.com/apache/hadoop/blob/b6d19718204af02da6e2ed0b83d5936824371fc0/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/ClientProtocol.java#L520] Javadoc mentions. I think in both cases we should throw an exception.
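The defensive commit-by-rename these comments argue for can be sketched with the JDK's own file APIs. This is an illustrative stand-in, not Flink's actual HadoopFsCommitter: `java.nio.file.Files.move` plays the role of Hadoop's `FileSystem#rename`, except that it already throws on the two failure cases the comment highlights, which we then wrap the way the JIRA discussion proposes.

```java
import java.io.IOException;
import java.nio.file.FileAlreadyExistsException;
import java.nio.file.Files;
import java.nio.file.NoSuchFileException;
import java.nio.file.Path;

public class SafeCommit {

    /**
     * Illustrative commit-by-rename that fails loudly. Unlike Hadoop's
     * FileSystem#rename (which signals failure by returning false),
     * Files.move throws FileAlreadyExistsException when the target exists
     * and NoSuchFileException when the source is missing; both are wrapped
     * into a descriptive IOException as the JIRA comment suggests.
     */
    static void commitByRename(Path src, Path dest) throws IOException {
        try {
            Files.move(src, dest);
        } catch (FileAlreadyExistsException | NoSuchFileException e) {
            throw new IOException(
                    "Committing file by rename failed: " + src + " to " + dest, e);
        }
    }

    public static void main(String[] args) throws IOException {
        // Commit a freshly written in-progress part file.
        Path dir = Files.createTempDirectory("commit-test");
        Path src = Files.createFile(dir.resolve("part-0.inprogress"));
        commitByRename(src, dir.resolve("part-0"));
        System.out.println(Files.exists(dir.resolve("part-0"))); // prints true
    }
}
```

A fix inside Flink itself would instead check the boolean returned by `fs.rename(src, dest)` and throw when it is false.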
[GitHub] [flink] flinkbot edited a comment on pull request #16864: [FLINK-23556][tests] Make SQLClientSchemaRegistryITCase more stable
flinkbot edited a comment on pull request #16864: URL: https://github.com/apache/flink/pull/16864#issuecomment-900421523 ## CI report: * e47cc45fd35764b1b65c6a0dd41bf863b8dcee49 Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=22584) * 6aee1804b521f84a72e483ef7830ea2a191eff43 UNKNOWN Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build - `@flinkbot run azure` re-run the last Azure build -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot commented on pull request #16927: [FLINK-18592][Connectors/FileSystem] StreamingFileSink fails due to truncating HDFS file failure
flinkbot commented on pull request #16927: URL: https://github.com/apache/flink/pull/16927#issuecomment-903143712 ## CI report: * d0fbb5aa88be503514642bb0571a4af183a21edf UNKNOWN Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build - `@flinkbot run azure` re-run the last Azure build -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot edited a comment on pull request #16924: [FLINK-22333][tests] Harden Elasticsearch7DynamicSinkITCase.testWritingDocuments by setting parallelism to 4
flinkbot edited a comment on pull request #16924: URL: https://github.com/apache/flink/pull/16924#issuecomment-903114219 ## CI report: * e0e672092cd5069bc988fc6199efbd7a7d4b3ebf Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=22600) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build - `@flinkbot run azure` re-run the last Azure build -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot edited a comment on pull request #16919: [BP-1.13][FLINK-21538][tests] Set default parallelism to 4 for Elasticsearch6DynamicSinkITCase.testWritingDocuments
flinkbot edited a comment on pull request #16919: URL: https://github.com/apache/flink/pull/16919#issuecomment-903111288 ## CI report: * 6a1074e68be13e0dfe740b496f5a625cac7d17bd UNKNOWN * 23feb97bcdbcd2586a02e2d352b5e6890500cc03 Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=22595) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build - `@flinkbot run azure` re-run the last Azure build -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot edited a comment on pull request #16918: [FLINK-21538][tests] Set default parallelism to 4 for Elasticsearch6DynamicSinkITCase.testWritingDocuments
flinkbot edited a comment on pull request #16918: URL: https://github.com/apache/flink/pull/16918#issuecomment-903111276 ## CI report: * c769365c8d629a0ae5862682e82d4b99f9ab19e9 Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=22594) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build - `@flinkbot run azure` re-run the last Azure build -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot edited a comment on pull request #16921: [FLINK-23906][tests] Increase the default akka.ask.timeout for the MiniCluster to 5 minutes
flinkbot edited a comment on pull request #16921: URL: https://github.com/apache/flink/pull/16921#issuecomment-903111309 ## CI report: * 5ae9f4c72ca6e2e7425e4eb8b0de0a76d3b25ab5 Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=22597) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build - `@flinkbot run azure` re-run the last Azure build -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot edited a comment on pull request #16920: [BP-1.12][FLINK-21538][tests] Set default parallelism to 4 for Elasticsearch6DynamicSinkITCase.testWritingDocuments
flinkbot edited a comment on pull request #16920: URL: https://github.com/apache/flink/pull/16920#issuecomment-903111299 ## CI report: * 974ee37ffb42ad322192e453707480b00618d4df Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=22596) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build - `@flinkbot run azure` re-run the last Azure build -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot edited a comment on pull request #16917: [FLINK-20461][tests] Check replication factor before asking for JobResult
flinkbot edited a comment on pull request #16917: URL: https://github.com/apache/flink/pull/16917#issuecomment-903106985 ## CI report: * f08a9fc073f7f96c05304e4801c41c216d2e62e8 Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=22593) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build - `@flinkbot run azure` re-run the last Azure build -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot commented on pull request #16927: [FLINK-18592][Connectors/FileSystem] StreamingFileSink fails due to truncating HDFS file failure
flinkbot commented on pull request #16927: URL: https://github.com/apache/flink/pull/16927#issuecomment-903138507 Thanks a lot for your contribution to the Apache Flink project. I'm the @flinkbot. I help the community to review your pull request. We will use this comment to track the progress of the review. ## Automated Checks Last check on commit d0fbb5aa88be503514642bb0571a4af183a21edf (Sat Aug 21 16:08:14 UTC 2021) **Warnings:** * No documentation files were touched! Remember to keep the Flink docs up to date! * **This pull request references an unassigned [Jira ticket](https://issues.apache.org/jira/browse/FLINK-18592).** According to the [code contribution guide](https://flink.apache.org/contributing/contribute-code.html), tickets need to be assigned before starting with the implementation work. Mention the bot in a comment to re-run the automated checks. ## Review Progress * ❓ 1. The [description] looks good. * ❓ 2. There is [consensus] that the contribution should go into to Flink. * ❓ 3. Needs [attention] from. * ❓ 4. The change fits into the overall [architecture]. * ❓ 5. Overall code [quality] is good. Please see the [Pull Request Review Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full explanation of the review process. The Bot is tracking the review progress through labels. Labels are applied according to the order of the review items. 
For consensus, approval by a Flink committer or PMC member is required Bot commands The @flinkbot bot supports the following commands: - `@flinkbot approve description` to approve one or more aspects (aspects: `description`, `consensus`, `architecture` and `quality`) - `@flinkbot approve all` to approve all aspects - `@flinkbot approve-until architecture` to approve everything until `architecture` - `@flinkbot attention @username1 [@username2 ..]` to require somebody's attention - `@flinkbot disapprove architecture` to remove an approval you gave earlier -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] link3280 opened a new pull request #16927: [FLINK-18592][Connectors/FileSystem] StreamingFileSink fails due to truncating HDFS file failure
link3280 opened a new pull request #16927: URL: https://github.com/apache/flink/pull/16927 ## What is the purpose of the change In the case of HDFS, upon job recovery, StreamingFileSink would not wait for lease recoveries to complete before truncating a file (currently it tries to truncate the file after a timeout, regardless of whether the lease has been recovered). This may lead to an IOException because the file length could be behind the actual length and the checkpointed length. What's worse, the job may fall into an endless restart loop, because a new invocation of #recoverLease will interrupt the previous one (see [HBase's RecoverLeaseFSUtils](https://github.com/apache/hbase/blob/a9a1b9524daa9e33541c655620b9c07d5a93d533/hbase-asyncfs/src/main/java/org/apache/hadoop/hbase/util/RecoverLeaseFSUtils.java#L68)). Moreover, we should wait for block recoveries which may be triggered by truncate calls (as mentioned in the [Hadoop FileSystem Javadoc](https://hadoop.apache.org/docs/current/api/org/apache/hadoop/fs/FileSystem.html#truncate)), before appending to the recovered files. This PR fixes the problem, but with two hard-coded timeout thresholds, since making these timeouts configurable requires interface changes. If the lease recovery or the block recovery fails, an IOException is thrown, which triggers a restart of the job. ## Brief change log - Wait for lease recoveries to complete before truncating in-progress files. - Wait for possible block recoveries to complete before appending to recovered files. ## Verifying this change I simply tested it on an HDFS cluster with 3 nodes, but it may require further tests. This change can be verified as follows: 1. Start a Flink job writing recoverable HDFS files. 2. Manually kill a DataNode which StreamingFileSink is writing to (to trigger lease recovery and block recovery). 3. Restart the job from the latest successful checkpoint. The files should be properly recovered. 
## Does this pull request potentially affect one of the following parts: - Dependencies (does it add or upgrade a dependency): no - The public API, i.e., is any changed class annotated with `@Public(Evolving)`: no - The serializers: no - The runtime per-record code paths (performance sensitive): no - Anything that affects deployment or recovery: JobManager (and its components), Checkpointing, Kubernetes/Yarn, ZooKeeper: no - The S3 file system connector: no ## Documentation - Does this pull request introduce a new feature? no - If yes, how is the feature documented? (not applicable / docs / JavaDocs / not documented) -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
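The "wait for lease/block recovery with a hard-coded timeout" behavior the PR describes boils down to a bounded poll. Below is a minimal, self-contained sketch of that pattern; in the real change the condition would presumably be something like `DistributedFileSystem#isFileClosed` on the recovered path (an assumption on my part, and `RecoveryWait`/`waitUntil` are hypothetical names, not the PR's actual code, which throws an IOException on timeout to trigger a job restart).

```java
import java.util.concurrent.TimeUnit;
import java.util.function.BooleanSupplier;

public final class RecoveryWait {

    /**
     * Polls {@code condition} every {@code pollMs} milliseconds until it
     * holds or {@code timeoutMs} elapses. Returns whether the condition
     * became true within the timeout.
     */
    public static boolean waitUntil(BooleanSupplier condition, long timeoutMs, long pollMs)
            throws InterruptedException {
        final long deadline = System.nanoTime() + TimeUnit.MILLISECONDS.toNanos(timeoutMs);
        while (!condition.getAsBoolean()) {
            if (System.nanoTime() >= deadline) {
                return false; // timed out; the PR would throw IOException here
            }
            Thread.sleep(pollMs);
        }
        return true;
    }

    public static void main(String[] args) throws InterruptedException {
        // Simulated recovery: the "file" becomes closed after ~50 ms.
        final long start = System.nanoTime();
        boolean recovered = waitUntil(
                () -> System.nanoTime() - start > TimeUnit.MILLISECONDS.toNanos(50),
                1000, 10);
        System.out.println(recovered); // prints true
    }
}
```

The important design point from the PR is that truncation (and later appending) must not proceed until this wait succeeds; proceeding on timeout is exactly what caused the reported IOException.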
[jira] [Commented] (FLINK-20461) YARNFileReplicationITCase.testPerJobModeWithDefaultFileReplication
[ https://issues.apache.org/jira/browse/FLINK-20461?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17402622#comment-17402622 ] Gabor Somogyi commented on FLINK-20461: --- Thanks for checking it too, added my findings to the PR. I think we're on track :) > YARNFileReplicationITCase.testPerJobModeWithDefaultFileReplication > -- > > Key: FLINK-20461 > URL: https://issues.apache.org/jira/browse/FLINK-20461 > Project: Flink > Issue Type: Bug > Components: Deployment / YARN >Affects Versions: 1.11.3, 1.12.0, 1.13.0, 1.14.0 >Reporter: Huang Xingbo >Assignee: Till Rohrmann >Priority: Critical > Labels: pull-request-available, test-stability > Fix For: 1.14.0 > > > [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=10450=logs=fc5181b0-e452-5c8f-68de-1097947f6483=62110053-334f-5295-a0ab-80dd7e2babbf] > {code:java} > [ERROR] > testPerJobModeWithDefaultFileReplication(org.apache.flink.yarn.YARNFileReplicationITCase) > Time elapsed: 32.501 s <<< ERROR! java.io.FileNotFoundException: File does > not exist: > hdfs://localhost:46072/user/agent04_azpcontainer/.flink/application_1606950278664_0001/flink-dist_2.11-1.12-SNAPSHOT.jar > at > org.apache.hadoop.hdfs.DistributedFileSystem$27.doCall(DistributedFileSystem.java:1441) > at > org.apache.hadoop.hdfs.DistributedFileSystem$27.doCall(DistributedFileSystem.java:1434) > at > org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) > at > org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1434) > at > org.apache.flink.yarn.YARNFileReplicationITCase.extraVerification(YARNFileReplicationITCase.java:148) > at > org.apache.flink.yarn.YARNFileReplicationITCase.deployPerJob(YARNFileReplicationITCase.java:113) > {code} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [flink] gaborgsomogyi commented on pull request #16917: [FLINK-20461][tests] Check replication factor before asking for JobResult
gaborgsomogyi commented on pull request #16917: URL: https://github.com/apache/flink/pull/16917#issuecomment-903127554 Lately I've put the test into a loop with extra logs and checked a similar path. The test failed after a huge amount of time, and my findings are the following: * `waitApplicationFinishedElseKillIt` returned w/o exception because the state reached `FINISHED` * YARN started to clean up directories etc... * Depending on how fast this clean-up is, `flinkUberjar` is either there or already deleted All in all I agree with the direction, but personally I would add `Assert.fail` in the case where `flinkUberjar` is missing (before getting file status [here](https://github.com/apache/flink/blob/82c1cc12ec6830d6e9cff27eb77dbebbe354f703/flink-yarn-tests/src/test/java/org/apache/flink/yarn/YARNFileReplicationITCase.java#L165)). I think that would help later analysis (job already failed or killed). I'm just starting the loop w/ the suggested change and let's see whether this solves it or not. Will come back w/ the result in a couple of days... -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Commented] (FLINK-23457) Sending the buffer of the right size for broadcast
[ https://issues.apache.org/jira/browse/FLINK-23457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17402619#comment-17402619 ] zhengyu.lou commented on FLINK-23457: - Could you please provide some more details and assign this ticket to me? Thanks [~akalashnikov] > Sending the buffer of the right size for broadcast > -- > > Key: FLINK-23457 > URL: https://issues.apache.org/jira/browse/FLINK-23457 > Project: Flink > Issue Type: Sub-task >Reporter: Anton Kalashnikov >Priority: Major > > It is not enough to know just the number of available buffers (credits) for > the downstream because the size of these buffers can be different. So we are > proposing to resolve this problem in the following way: If the downstream > buffer size is changed then the upstream should send a buffer of a size > not greater than the new one, regardless of how big the current buffer on the > upstream is. (pollBuffer should receive a parameter like bufferSize and return a > buffer not greater than it) > Downstream will be able to support any buffer size < max buffer size, so it > should be just good enough to request a BufferBuilder with the new size after > getting the announcement, leaving existing BufferBuilder/BufferConsumers > unchanged. In other words, the code in {{PipelinedSubpartition(View)}} doesn't > need to be changed (apart from forwarding the new buffer size to the > {{BufferWritingResultPartition}}). All buffer size adjustments can be > implemented exclusively in {{BufferWritingResultPartition}}. > If different downstream subtasks have different throughput and hence > different desired buffer sizes, then a single upstream subtask has to support > having two different subpartitions with different buffer sizes. -- This message was sent by Atlassian Jira (v8.3.4#803005)
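The proposed `pollBuffer(bufferSize)` semantics — never hand the upstream a buffer larger than the downstream-announced size — can be sketched as a size-capped pool. This is a hypothetical illustration of the contract only; the class and method names are invented and do not reflect Flink's actual `BufferWritingResultPartition` implementation.

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class SizeCappedBufferPool {

    private final Deque<byte[]> buffers = new ArrayDeque<>();

    public void offer(byte[] buffer) {
        buffers.addLast(buffer);
    }

    /**
     * Returns a buffer of at most {@code maxSize} bytes, splitting the head
     * buffer and re-queuing the remainder when it is too large. Returns null
     * when the pool is empty.
     */
    public byte[] pollBuffer(int maxSize) {
        byte[] next = buffers.pollFirst();
        if (next == null || next.length <= maxSize) {
            return next;
        }
        // Head buffer is larger than the downstream can take: hand out the
        // first maxSize bytes and keep the remainder at the front.
        byte[] head = new byte[maxSize];
        byte[] tail = new byte[next.length - maxSize];
        System.arraycopy(next, 0, head, 0, maxSize);
        System.arraycopy(next, maxSize, tail, 0, tail.length);
        buffers.addFirst(tail);
        return head;
    }

    public static void main(String[] args) {
        SizeCappedBufferPool pool = new SizeCappedBufferPool();
        pool.offer(new byte[10]);
        System.out.println(pool.pollBuffer(4).length);   // prints 4
        System.out.println(pool.pollBuffer(100).length); // prints 6
    }
}
```

This mirrors the ticket's point: after a downstream announces a smaller buffer size, the upstream caps what it emits without touching already-queued BufferConsumers.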
[jira] [Commented] (FLINK-22848) Deprecate unquoted options for SET / RESET
[ https://issues.apache.org/jira/browse/FLINK-22848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17402618#comment-17402618 ] Ingo Bürk commented on FLINK-22848: --- [~louzhengyu] We cannot work on this yet, we need to wait at least until after 1.14, but we probably need to discuss how quickly to deprecate this first anyway. > Deprecate unquoted options for SET / RESET > -- > > Key: FLINK-22848 > URL: https://issues.apache.org/jira/browse/FLINK-22848 > Project: Flink > Issue Type: Improvement > Components: Table SQL / Client >Affects Versions: 1.14.0 >Reporter: Ingo Bürk >Priority: Minor > Labels: auto-deprioritized-major > > Eventually we should agree to a version in which to deprecate, and a version > in which to remove, the unquoted syntax for SET / RESET: > {code:java} > // To be deprecated / removed > SET a = b; > RESET a; > // New > SET 'a' = 'b'; > RESET 'a';{code} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [flink] flinkbot edited a comment on pull request #16916: [BP-1.13][FLINK-23871][Runtime/Coordination] Dispatcher should handle finishing job exception when recover
flinkbot edited a comment on pull request #16916: URL: https://github.com/apache/flink/pull/16916#issuecomment-903093580 ## CI report: * 06993697801c13f27e95dd303861af057720a93f Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=22589) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build - `@flinkbot run azure` re-run the last Azure build -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot edited a comment on pull request #16894: [FLINK-23871][Runtime/Coordination]Dispatcher should handle finishing…
flinkbot edited a comment on pull request #16894: URL: https://github.com/apache/flink/pull/16894#issuecomment-901800802 ## CI report: * 1c80d779378fc92ccc8667b5e271caea5d5312b3 Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=22588) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build - `@flinkbot run azure` re-run the last Azure build -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Commented] (FLINK-22848) Deprecate unquoted options for SET / RESET
[ https://issues.apache.org/jira/browse/FLINK-22848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17402615#comment-17402615 ] zhengyu.lou commented on FLINK-22848: - Can you assign this ticket to me? Thanks [~airblader] > Deprecate unquoted options for SET / RESET > -- > > Key: FLINK-22848 > URL: https://issues.apache.org/jira/browse/FLINK-22848 > Project: Flink > Issue Type: Improvement > Components: Table SQL / Client >Affects Versions: 1.14.0 >Reporter: Ingo Bürk >Priority: Minor > Labels: auto-deprioritized-major > > Eventually we should agree to a version in which to deprecate, and a version > in which to remove, the unquoted syntax for SET / RESET: > {code:java} > // To be deprecated / removed > SET a = b; > RESET a; > // New > SET 'a' = 'b'; > RESET 'a';{code} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [flink] flinkbot edited a comment on pull request #16926: [BP-1.12][FLINK-22333][tests] Harden Elasticsearch7DynamicSinkITCase.testWritingDocuments by setting parallelism to 4
flinkbot edited a comment on pull request #16926: URL: https://github.com/apache/flink/pull/16926#issuecomment-903114240 ## CI report: * 345ab64e359f37b511140766ca708fa7dfe28207 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=22602) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build - `@flinkbot run azure` re-run the last Azure build -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot edited a comment on pull request #16925: [BP-1.13][FLINK-22333][tests] Harden Elasticsearch7DynamicSinkITCase.testWritingDocuments by setting parallelism to 4
flinkbot edited a comment on pull request #16925: URL: https://github.com/apache/flink/pull/16925#issuecomment-903114231 ## CI report: * 3edefca893bec20e8fd17c75ac708368bb1893be Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=22601) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build - `@flinkbot run azure` re-run the last Azure build -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot edited a comment on pull request #16924: [FLINK-22333][tests] Harden Elasticsearch7DynamicSinkITCase.testWritingDocuments by setting parallelism to 4
flinkbot edited a comment on pull request #16924: URL: https://github.com/apache/flink/pull/16924#issuecomment-903114219 ## CI report: * e0e672092cd5069bc988fc6199efbd7a7d4b3ebf Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=22600) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build - `@flinkbot run azure` re-run the last Azure build -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot edited a comment on pull request #16919: [BP-1.13][FLINK-21538][tests] Set default parallelism to 4 for Elasticsearch6DynamicSinkITCase.testWritingDocuments
flinkbot edited a comment on pull request #16919: URL: https://github.com/apache/flink/pull/16919#issuecomment-903111288 ## CI report: * 6a1074e68be13e0dfe740b496f5a625cac7d17bd UNKNOWN * 23feb97bcdbcd2586a02e2d352b5e6890500cc03 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=22595) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build - `@flinkbot run azure` re-run the last Azure build -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] link3280 commented on pull request #16891: [FLINK-23868][Client] JobExecutionResult printed when suppressSysout is on
link3280 commented on pull request #16891: URL: https://github.com/apache/flink/pull/16891#issuecomment-903117545 Thanks a lot for your inputs! @zentol @tillrohrmann @tisonkun In addition to the background, I was trying to submit jobs through Flink client interfaces in a gateway process (like Ververica's Flink SQL gateway) and found unexpected outputs in the process's stdout. I think `suppressSysout` was meant for cases where jobs are submitted in a process whose stdout is not directly shown to end users (as CliFrontend's is). @tisonkun -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Created] (FLINK-23907) Type Migration: introducing primitive functional interfaces
Oleg Smirnov created FLINK-23907: Summary: Type Migration: introducing primitive functional interfaces Key: FLINK-23907 URL: https://issues.apache.org/jira/browse/FLINK-23907 Project: Flink Issue Type: Improvement Reporter: Oleg Smirnov Hey! We are a group of researchers, and we are testing our data-driven [plugin|https://github.com/JetBrains-Research/data-driven-type-migration], which is based on IntelliJ's [Type Migration|https://www.jetbrains.com/help/idea/type-migration.html] framework and adjusts it using custom structural-replace templates that express the adaptations required to perform the type change. I want to apply several type changes using it and open the PR, thus introducing primitive functional interfaces in order to prevent unnecessary boxing (like BooleanSupplier instead of Supplier, OptionalInt instead of Optional, etc.), since it can affect the performance of the code (Effective Java, Items 44, 61). The patch itself is already prepared, so I guess I will need to open this ticket, receive your approval, and then open the PR? Thank you in advance! -- This message was sent by Atlassian Jira (v8.3.4#803005)
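To illustrate the kind of type change the ticket proposes, here is a minimal sketch (method names are hypothetical, not taken from the Flink codebase) contrasting a boxed `Supplier<Boolean>` with the primitive `BooleanSupplier`, and `Optional<Integer>`-style usage with `OptionalInt`:

```java
import java.util.OptionalInt;
import java.util.function.BooleanSupplier;
import java.util.function.Supplier;

public class BoxingExample {
    // Boxed variant: every call goes through a Boolean object (boxing/unboxing).
    static boolean isReadyBoxed(Supplier<Boolean> check) {
        return check.get();
    }

    // Primitive variant: BooleanSupplier returns a bare boolean, no boxing.
    static boolean isReadyPrimitive(BooleanSupplier check) {
        return check.getAsBoolean();
    }

    // Likewise, OptionalInt carries an int directly instead of an Integer box.
    static OptionalInt firstPositive(int[] values) {
        for (int v : values) {
            if (v > 0) {
                return OptionalInt.of(v);
            }
        }
        return OptionalInt.empty();
    }

    public static void main(String[] args) {
        System.out.println(isReadyBoxed(() -> true));     // prints "true"
        System.out.println(isReadyPrimitive(() -> true)); // prints "true"
        System.out.println(firstPositive(new int[] {-1, 3, 5}).getAsInt()); // prints "3"
    }
}
```

Both variants behave identically at the call site (a lambda like `() -> true` satisfies either interface), which is why such migrations are largely mechanical — the gain is avoiding the allocation and unboxing on hot paths.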
[GitHub] [flink] flinkbot commented on pull request #16925: [BP-1.13][FLINK-22333][tests] Harden Elasticsearch7DynamicSinkITCase.testWritingDocuments by setting parallelism to 4
flinkbot commented on pull request #16925: URL: https://github.com/apache/flink/pull/16925#issuecomment-903114231 ## CI report: * 3edefca893bec20e8fd17c75ac708368bb1893be UNKNOWN
[GitHub] [flink] flinkbot commented on pull request #16926: [BP-1.12][FLINK-22333][tests] Harden Elasticsearch7DynamicSinkITCase.testWritingDocuments by setting parallelism to 4
flinkbot commented on pull request #16926: URL: https://github.com/apache/flink/pull/16926#issuecomment-903114240 ## CI report: * 345ab64e359f37b511140766ca708fa7dfe28207 UNKNOWN
[GitHub] [flink] flinkbot commented on pull request #16924: [FLINK-22333][tests] Harden Elasticsearch7DynamicSinkITCase.testWritingDocuments by setting parallelism to 4
flinkbot commented on pull request #16924: URL: https://github.com/apache/flink/pull/16924#issuecomment-903114219 ## CI report: * e0e672092cd5069bc988fc6199efbd7a7d4b3ebf UNKNOWN
[GitHub] [flink] flinkbot edited a comment on pull request #16921: [FLINK-23906][tests] Increase the default akka.ask.timeout for the MiniCluster to 5 minutes
flinkbot edited a comment on pull request #16921: URL: https://github.com/apache/flink/pull/16921#issuecomment-903111309 ## CI report: * 5ae9f4c72ca6e2e7425e4eb8b0de0a76d3b25ab5 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=22597)
[GitHub] [flink] flinkbot edited a comment on pull request #16919: [BP-1.13][FLINK-21538][tests] Set default parallelism to 4 for Elasticsearch6DynamicSinkITCase.testWritingDocuments
flinkbot edited a comment on pull request #16919: URL: https://github.com/apache/flink/pull/16919#issuecomment-903111288 ## CI report: * 6a1074e68be13e0dfe740b496f5a625cac7d17bd UNKNOWN * 23feb97bcdbcd2586a02e2d352b5e6890500cc03 UNKNOWN
[GitHub] [flink] flinkbot edited a comment on pull request #16920: [BP-1.12][FLINK-21538][tests] Set default parallelism to 4 for Elasticsearch6DynamicSinkITCase.testWritingDocuments
flinkbot edited a comment on pull request #16920: URL: https://github.com/apache/flink/pull/16920#issuecomment-903111299 ## CI report: * 974ee37ffb42ad322192e453707480b00618d4df Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=22596)
[GitHub] [flink] flinkbot edited a comment on pull request #16918: [FLINK-21538][tests] Set default parallelism to 4 for Elasticsearch6DynamicSinkITCase.testWritingDocuments
flinkbot edited a comment on pull request #16918: URL: https://github.com/apache/flink/pull/16918#issuecomment-903111276 ## CI report: * c769365c8d629a0ae5862682e82d4b99f9ab19e9 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=22594)
[GitHub] [flink] flinkbot edited a comment on pull request #16922: [BP-1.13][FLINK-23906][tests] Increase the default akka.ask.timeout for the MiniCluster to 5 minutes
flinkbot edited a comment on pull request #16922: URL: https://github.com/apache/flink/pull/16922#issuecomment-903111319 ## CI report: * 2474a2795e1ca71c33e6a7e239ea9ff22a1b Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=22598)
[GitHub] [flink] flinkbot edited a comment on pull request #16923: [BP-1.12][FLINK-23906][tests] Increase the default akka.ask.timeout for the MiniCluster to 5 minutes
flinkbot edited a comment on pull request #16923: URL: https://github.com/apache/flink/pull/16923#issuecomment-903111328 ## CI report: * b4504e3bdd8acb0e664560c8b4f0eec931f00d3a Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=22599)
[GitHub] [flink] flinkbot commented on pull request #16926: [BP-1.12][FLINK-22333][tests] Harden Elasticsearch7DynamicSinkITCase.testWritingDocuments by setting parallelism to 4
flinkbot commented on pull request #16926: URL: https://github.com/apache/flink/pull/16926#issuecomment-903111714 Thanks a lot for your contribution to the Apache Flink project. I'm the @flinkbot. I help the community to review your pull request. We will use this comment to track the progress of the review. ## Automated Checks Last check on commit 345ab64e359f37b511140766ca708fa7dfe28207 (Sat Aug 21 12:47:19 UTC 2021) **Warnings:** * No documentation files were touched! Remember to keep the Flink docs up to date! Mention the bot in a comment to re-run the automated checks. ## Review Progress * ❓ 1. The [description] looks good. * ❓ 2. There is [consensus] that the contribution should go into Flink. * ❓ 3. Needs [attention] from. * ❓ 4. The change fits into the overall [architecture]. * ❓ 5. Overall code [quality] is good. Please see the [Pull Request Review Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full explanation of the review process. The Bot is tracking the review progress through labels. Labels are applied according to the order of the review items. For consensus, approval by a Flink committer or PMC member is required. Bot commands The @flinkbot bot supports the following commands: - `@flinkbot approve description` to approve one or more aspects (aspects: `description`, `consensus`, `architecture` and `quality`) - `@flinkbot approve all` to approve all aspects - `@flinkbot approve-until architecture` to approve everything until `architecture` - `@flinkbot attention @username1 [@username2 ..]` to require somebody's attention - `@flinkbot disapprove architecture` to remove an approval you gave earlier
[GitHub] [flink] flinkbot commented on pull request #16925: [BP-1.13][FLINK-22333][tests] Harden Elasticsearch7DynamicSinkITCase.testWritingDocuments by setting parallelism to 4
flinkbot commented on pull request #16925: URL: https://github.com/apache/flink/pull/16925#issuecomment-903111487 Thanks a lot for your contribution to the Apache Flink project. I'm the @flinkbot. I help the community to review your pull request. We will use this comment to track the progress of the review. ## Automated Checks Last check on commit 3edefca893bec20e8fd17c75ac708368bb1893be (Sat Aug 21 12:45:16 UTC 2021) **Warnings:** * No documentation files were touched! Remember to keep the Flink docs up to date! Mention the bot in a comment to re-run the automated checks.
[GitHub] [flink] tillrohrmann opened a new pull request #16926: [BP-1.12][FLINK-22333][tests] Harden Elasticsearch7DynamicSinkITCase.testWritingDocuments by setting parallelism to 4
tillrohrmann opened a new pull request #16926: URL: https://github.com/apache/flink/pull/16926 Backport of #16924 to `release-1.12`.
[GitHub] [flink] flinkbot commented on pull request #16922: [BP-1.13][FLINK-23906][tests] Increase the default akka.ask.timeout for the MiniCluster to 5 minutes
flinkbot commented on pull request #16922: URL: https://github.com/apache/flink/pull/16922#issuecomment-903111319 ## CI report: * 2474a2795e1ca71c33e6a7e239ea9ff22a1b UNKNOWN
[GitHub] [flink] flinkbot commented on pull request #16923: [BP-1.12][FLINK-23906][tests] Increase the default akka.ask.timeout for the MiniCluster to 5 minutes
flinkbot commented on pull request #16923: URL: https://github.com/apache/flink/pull/16923#issuecomment-903111328 ## CI report: * b4504e3bdd8acb0e664560c8b4f0eec931f00d3a UNKNOWN
[GitHub] [flink] flinkbot commented on pull request #16921: [FLINK-23906][tests] Increase the default akka.ask.timeout for the MiniCluster to 5 minutes
flinkbot commented on pull request #16921: URL: https://github.com/apache/flink/pull/16921#issuecomment-903111309 ## CI report: * 5ae9f4c72ca6e2e7425e4eb8b0de0a76d3b25ab5 UNKNOWN
[GitHub] [flink] flinkbot commented on pull request #16920: [BP-1.12][FLINK-21538][tests] Set default parallelism to 4 for Elasticsearch6DynamicSinkITCase.testWritingDocuments
flinkbot commented on pull request #16920: URL: https://github.com/apache/flink/pull/16920#issuecomment-903111299 ## CI report: * 974ee37ffb42ad322192e453707480b00618d4df UNKNOWN
[GitHub] [flink] flinkbot commented on pull request #16919: [BP-1.13][FLINK-21538][tests] Set default parallelism to 4 for Elasticsearch6DynamicSinkITCase.testWritingDocuments
flinkbot commented on pull request #16919: URL: https://github.com/apache/flink/pull/16919#issuecomment-903111288 ## CI report: * 6a1074e68be13e0dfe740b496f5a625cac7d17bd UNKNOWN
[GitHub] [flink] flinkbot commented on pull request #16918: [FLINK-21538][tests] Set default parallelism to 4 for Elasticsearch6DynamicSinkITCase.testWritingDocuments
flinkbot commented on pull request #16918: URL: https://github.com/apache/flink/pull/16918#issuecomment-903111276 ## CI report: * c769365c8d629a0ae5862682e82d4b99f9ab19e9 UNKNOWN
[GitHub] [flink] tillrohrmann opened a new pull request #16925: [BP-1.13][FLINK-22333][tests] Harden Elasticsearch7DynamicSinkITCase.testWritingDocuments by setting parallelism to 4
tillrohrmann opened a new pull request #16925: URL: https://github.com/apache/flink/pull/16925 Backport of #16924 to `release-1.13`.
[GitHub] [flink] flinkbot edited a comment on pull request #16917: [FLINK-20461][tests] Check replication factor before asking for JobResult
flinkbot edited a comment on pull request #16917: URL: https://github.com/apache/flink/pull/16917#issuecomment-903106985 ## CI report: * f08a9fc073f7f96c05304e4801c41c216d2e62e8 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=22593)
[GitHub] [flink] flinkbot commented on pull request #16924: [FLINK-22333][tests] Harden Elasticsearch7DynamicSinkITCase.testWritingDocuments by setting parallelism to 4
flinkbot commented on pull request #16924: URL: https://github.com/apache/flink/pull/16924#issuecomment-903111031 Thanks a lot for your contribution to the Apache Flink project. I'm the @flinkbot. I help the community to review your pull request. We will use this comment to track the progress of the review. ## Automated Checks Last check on commit e0e672092cd5069bc988fc6199efbd7a7d4b3ebf (Sat Aug 21 12:41:12 UTC 2021) **Warnings:** * No documentation files were touched! Remember to keep the Flink docs up to date! Mention the bot in a comment to re-run the automated checks.
[jira] [Updated] (FLINK-22333) Elasticsearch7DynamicSinkITCase.testWritingDocuments failed due to deploy task timeout.
[ https://issues.apache.org/jira/browse/FLINK-22333?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated FLINK-22333: --- Labels: pull-request-available test-stability (was: test-stability) > Elasticsearch7DynamicSinkITCase.testWritingDocuments failed due to deploy > task timeout. > --- > > Key: FLINK-22333 > URL: https://issues.apache.org/jira/browse/FLINK-22333 > Project: Flink > Issue Type: Bug > Components: Runtime / Coordination >Affects Versions: 1.13.0 >Reporter: Guowei Ma >Assignee: Till Rohrmann >Priority: Major > Labels: pull-request-available, test-stability > Fix For: 1.14.0, 1.12.6, 1.13.3 > > > https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=16694=logs=d44f43ce-542c-597d-bf94-b0718c71e5e8=03dca39c-73e8-5aaf-601d-328ae5c35f20=12329 > {code:java} > 2021-04-16T23:37:23.5719280Z Apr 16 23:37:23 > org.apache.flink.runtime.client.JobExecutionException: Job execution failed. > 2021-04-16T23:37:23.5739250Z Apr 16 23:37:23 at > org.apache.flink.runtime.jobmaster.JobResult.toJobExecutionResult(JobResult.java:144) > 2021-04-16T23:37:23.5759329Z Apr 16 23:37:23 at > org.apache.flink.runtime.minicluster.MiniClusterJobClient.lambda$getJobExecutionResult$3(MiniClusterJobClient.java:137) > 2021-04-16T23:37:23.5779145Z Apr 16 23:37:23 at > java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:616) > 2021-04-16T23:37:23.5799204Z Apr 16 23:37:23 at > java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:591) > 2021-04-16T23:37:23.5819302Z Apr 16 23:37:23 at > java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488) > 2021-04-16T23:37:23.5839106Z Apr 16 23:37:23 at > java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975) > 2021-04-16T23:37:23.5859276Z Apr 16 23:37:23 at > org.apache.flink.runtime.rpc.akka.AkkaInvocationHandler.lambda$invokeRpc$0(AkkaInvocationHandler.java:237) > 2021-04-16T23:37:23.5868964Z Apr 16 23:37:23 at 
> java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:774) > 2021-04-16T23:37:23.5869925Z Apr 16 23:37:23 at > java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:750) > 2021-04-16T23:37:23.5919839Z Apr 16 23:37:23 at > java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488) > 2021-04-16T23:37:23.5959562Z Apr 16 23:37:23 at > java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975) > 2021-04-16T23:37:23.5989732Z Apr 16 23:37:23 at > org.apache.flink.runtime.concurrent.FutureUtils$1.onComplete(FutureUtils.java:1081) > 2021-04-16T23:37:23.6019422Z Apr 16 23:37:23 at > akka.dispatch.OnComplete.internal(Future.scala:264) > 2021-04-16T23:37:23.6039067Z Apr 16 23:37:23 at > akka.dispatch.OnComplete.internal(Future.scala:261) > 2021-04-16T23:37:23.6060126Z Apr 16 23:37:23 at > akka.dispatch.japi$CallbackBridge.apply(Future.scala:191) > 2021-04-16T23:37:23.6089258Z Apr 16 23:37:23 at > akka.dispatch.japi$CallbackBridge.apply(Future.scala:188) > 2021-04-16T23:37:23.6119150Z Apr 16 23:37:23 at > scala.concurrent.impl.CallbackRunnable.run(Promise.scala:36) > 2021-04-16T23:37:23.6139149Z Apr 16 23:37:23 at > org.apache.flink.runtime.concurrent.Executors$DirectExecutionContext.execute(Executors.java:73) > 2021-04-16T23:37:23.6159077Z Apr 16 23:37:23 at > scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:44) > 2021-04-16T23:37:23.6189432Z Apr 16 23:37:23 at > scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:252) > 2021-04-16T23:37:23.6215243Z Apr 16 23:37:23 at > akka.pattern.PromiseActorRef.$bang(AskSupport.scala:572) > 2021-04-16T23:37:23.6219148Z Apr 16 23:37:23 at > akka.pattern.PipeToSupport$PipeableFuture$$anonfun$pipeTo$1.applyOrElse(PipeToSupport.scala:22) > 2021-04-16T23:37:23.6220221Z Apr 16 23:37:23 at > akka.pattern.PipeToSupport$PipeableFuture$$anonfun$pipeTo$1.applyOrElse(PipeToSupport.scala:21) > 
2021-04-16T23:37:23.6249411Z Apr 16 23:37:23 at > scala.concurrent.Future$$anonfun$andThen$1.apply(Future.scala:436) > 2021-04-16T23:37:23.6259145Z Apr 16 23:37:23 at > scala.concurrent.Future$$anonfun$andThen$1.apply(Future.scala:435) > 2021-04-16T23:37:23.6289272Z Apr 16 23:37:23 at > scala.concurrent.impl.CallbackRunnable.run(Promise.scala:36) > 2021-04-16T23:37:23.6309243Z Apr 16 23:37:23 at > akka.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:55) > 2021-04-16T23:37:23.6359306Z Apr 16 23:37:23 at > akka.dispatch.BatchingExecutor$BlockableBatch$$anonfun$run$1.apply$mcV$sp(BatchingExecutor.scala:91) > 2021-04-16T23:37:23.6369399Z Apr 16 23:37:23 at >
[GitHub] [flink] tillrohrmann opened a new pull request #16924: [FLINK-22333][tests] Harden Elasticsearch7DynamicSinkITCase.testWritingDocuments by setting parallelism to 4
tillrohrmann opened a new pull request #16924: URL: https://github.com/apache/flink/pull/16924 This commit hardens the `Elasticsearch7DynamicSinkITCase.testWritingDocuments` test by setting its parallelism to 4. Otherwise the test is run with as many CPUs as are available on the machine, which can slow down the test on our CI infrastructure.
[jira] [Updated] (FLINK-23906) Increase akka.ask.timeout for tests using the MiniCluster
[ https://issues.apache.org/jira/browse/FLINK-23906?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Till Rohrmann updated FLINK-23906: -- Description: We have seen over the last couple of weeks/months an increased number of test failures because of {{TimeoutException}} that were triggered because the {{akka.ask.timeout}} was exceeded. The reason for this was that on our CI infrastructure it can happen that there are pauses of more than 10s (not sure about the exact reason) or our infrastructure simply being slow. In order to harden all tests relying on the {{MiniCluster}} I propose to increase the {{akka.ask.timeout}} to 5 minutes if nothing else has been configured. was: We have seen over the last couple of weeks/months an increased number of test failures because of {{TimeoutException}} that were triggered because the {{akka.ask.timeout}} was exceeded. The reason for this was that on our CI infrastructure it can happen that there are pauses of more than 10s (not sure about the exact reason) or our infrastructure simply being slow. In order to harden all tests relying on the {{MiniCluster}} I propose to increase the {{akka.ask.timeout}} to a minute if nothing else has been configured. > Increase akka.ask.timeout for tests using the MiniCluster > - > > Key: FLINK-23906 > URL: https://issues.apache.org/jira/browse/FLINK-23906 > Project: Flink > Issue Type: Improvement > Components: Runtime / Coordination, Tests >Affects Versions: 1.14.0, 1.12.5, 1.13.2 >Reporter: Till Rohrmann >Assignee: Till Rohrmann >Priority: Critical > Labels: pull-request-available, test-stability > Fix For: 1.14.0, 1.12.6, 1.13.3 > > > We have seen over the last couple of weeks/months an increased number of test > failures because of {{TimeoutException}} that were triggered because the > {{akka.ask.timeout}} was exceeded. 
The reason for this was that on our CI > infrastructure it can happen that there are pauses of more than 10s (not sure > about the exact reason) or our infrastructure simply being slow. > In order to harden all tests relying on the {{MiniCluster}} I propose to > increase the {{akka.ask.timeout}} to 5 minutes if nothing else has been > configured.
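For illustration, the proposed default corresponds to a configuration entry along these lines (a sketch only; the ticket describes applying this as a MiniCluster test default in code when nothing else has been configured, not asking users to set it):

```yaml
# Sketch of the timeout discussed in FLINK-23906.
# akka.ask.timeout accepts Flink duration syntax such as "10 s" or "5 min".
akka.ask.timeout: 5 min
```

A longer ask timeout only changes how long RPC futures wait before failing with a TimeoutException, so on healthy runs the tests are unaffected; only runs on slow or paused CI workers gain the extra headroom.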
[jira] [Assigned] (FLINK-22333) Elasticsearch7DynamicSinkITCase.testWritingDocuments failed due to deploy task timeout.
[ https://issues.apache.org/jira/browse/FLINK-22333?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Till Rohrmann reassigned FLINK-22333: - Assignee: Till Rohrmann > Elasticsearch7DynamicSinkITCase.testWritingDocuments failed due to deploy > task timeout. > --- > > Key: FLINK-22333 > URL: https://issues.apache.org/jira/browse/FLINK-22333 > Project: Flink > Issue Type: Bug > Components: Runtime / Coordination >Affects Versions: 1.13.0 >Reporter: Guowei Ma >Assignee: Till Rohrmann >Priority: Major > Labels: test-stability > Fix For: 1.14.0, 1.12.6, 1.13.3 > > > https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=16694=logs=d44f43ce-542c-597d-bf94-b0718c71e5e8=03dca39c-73e8-5aaf-601d-328ae5c35f20=12329 > {code:java} > 2021-04-16T23:37:23.5719280Z Apr 16 23:37:23 > org.apache.flink.runtime.client.JobExecutionException: Job execution failed. > 2021-04-16T23:37:23.5739250Z Apr 16 23:37:23 at > org.apache.flink.runtime.jobmaster.JobResult.toJobExecutionResult(JobResult.java:144) > 2021-04-16T23:37:23.5759329Z Apr 16 23:37:23 at > org.apache.flink.runtime.minicluster.MiniClusterJobClient.lambda$getJobExecutionResult$3(MiniClusterJobClient.java:137) > 2021-04-16T23:37:23.5779145Z Apr 16 23:37:23 at > java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:616) > 2021-04-16T23:37:23.5799204Z Apr 16 23:37:23 at > java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:591) > 2021-04-16T23:37:23.5819302Z Apr 16 23:37:23 at > java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488) > 2021-04-16T23:37:23.5839106Z Apr 16 23:37:23 at > java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975) > 2021-04-16T23:37:23.5859276Z Apr 16 23:37:23 at > org.apache.flink.runtime.rpc.akka.AkkaInvocationHandler.lambda$invokeRpc$0(AkkaInvocationHandler.java:237) > 2021-04-16T23:37:23.5868964Z Apr 16 23:37:23 at > 
java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:774) > 2021-04-16T23:37:23.5869925Z Apr 16 23:37:23 at > java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:750) > 2021-04-16T23:37:23.5919839Z Apr 16 23:37:23 at > java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488) > 2021-04-16T23:37:23.5959562Z Apr 16 23:37:23 at > java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975) > 2021-04-16T23:37:23.5989732Z Apr 16 23:37:23 at > org.apache.flink.runtime.concurrent.FutureUtils$1.onComplete(FutureUtils.java:1081) > 2021-04-16T23:37:23.6019422Z Apr 16 23:37:23 at > akka.dispatch.OnComplete.internal(Future.scala:264) > 2021-04-16T23:37:23.6039067Z Apr 16 23:37:23 at > akka.dispatch.OnComplete.internal(Future.scala:261) > 2021-04-16T23:37:23.6060126Z Apr 16 23:37:23 at > akka.dispatch.japi$CallbackBridge.apply(Future.scala:191) > 2021-04-16T23:37:23.6089258Z Apr 16 23:37:23 at > akka.dispatch.japi$CallbackBridge.apply(Future.scala:188) > 2021-04-16T23:37:23.6119150Z Apr 16 23:37:23 at > scala.concurrent.impl.CallbackRunnable.run(Promise.scala:36) > 2021-04-16T23:37:23.6139149Z Apr 16 23:37:23 at > org.apache.flink.runtime.concurrent.Executors$DirectExecutionContext.execute(Executors.java:73) > 2021-04-16T23:37:23.6159077Z Apr 16 23:37:23 at > scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:44) > 2021-04-16T23:37:23.6189432Z Apr 16 23:37:23 at > scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:252) > 2021-04-16T23:37:23.6215243Z Apr 16 23:37:23 at > akka.pattern.PromiseActorRef.$bang(AskSupport.scala:572) > 2021-04-16T23:37:23.6219148Z Apr 16 23:37:23 at > akka.pattern.PipeToSupport$PipeableFuture$$anonfun$pipeTo$1.applyOrElse(PipeToSupport.scala:22) > 2021-04-16T23:37:23.6220221Z Apr 16 23:37:23 at > akka.pattern.PipeToSupport$PipeableFuture$$anonfun$pipeTo$1.applyOrElse(PipeToSupport.scala:21) > 
2021-04-16T23:37:23.6249411Z Apr 16 23:37:23 at > scala.concurrent.Future$$anonfun$andThen$1.apply(Future.scala:436) > 2021-04-16T23:37:23.6259145Z Apr 16 23:37:23 at > scala.concurrent.Future$$anonfun$andThen$1.apply(Future.scala:435) > 2021-04-16T23:37:23.6289272Z Apr 16 23:37:23 at > scala.concurrent.impl.CallbackRunnable.run(Promise.scala:36) > 2021-04-16T23:37:23.6309243Z Apr 16 23:37:23 at > akka.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:55) > 2021-04-16T23:37:23.6359306Z Apr 16 23:37:23 at > akka.dispatch.BatchingExecutor$BlockableBatch$$anonfun$run$1.apply$mcV$sp(BatchingExecutor.scala:91) > 2021-04-16T23:37:23.6369399Z Apr 16 23:37:23 at >
[jira] [Updated] (FLINK-22333) Elasticsearch7DynamicSinkITCase.testWritingDocuments failed due to deploy task timeout.
[ https://issues.apache.org/jira/browse/FLINK-22333?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Till Rohrmann updated FLINK-22333: -- Fix Version/s: 1.13.3 1.12.6 1.14.0 > Elasticsearch7DynamicSinkITCase.testWritingDocuments failed due to deploy > task timeout. > --- > > Key: FLINK-22333 > URL: https://issues.apache.org/jira/browse/FLINK-22333 > Project: Flink > Issue Type: Bug > Components: Runtime / Coordination >Affects Versions: 1.13.0 >Reporter: Guowei Ma >Priority: Major > Labels: test-stability > Fix For: 1.14.0, 1.12.6, 1.13.3 > > > https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=16694=logs=d44f43ce-542c-597d-bf94-b0718c71e5e8=03dca39c-73e8-5aaf-601d-328ae5c35f20=12329 > {code:java} > 2021-04-16T23:37:23.5719280Z Apr 16 23:37:23 > org.apache.flink.runtime.client.JobExecutionException: Job execution failed. > 2021-04-16T23:37:23.5739250Z Apr 16 23:37:23 at > org.apache.flink.runtime.jobmaster.JobResult.toJobExecutionResult(JobResult.java:144) > 2021-04-16T23:37:23.5759329Z Apr 16 23:37:23 at > org.apache.flink.runtime.minicluster.MiniClusterJobClient.lambda$getJobExecutionResult$3(MiniClusterJobClient.java:137) > 2021-04-16T23:37:23.5779145Z Apr 16 23:37:23 at > java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:616) > 2021-04-16T23:37:23.5799204Z Apr 16 23:37:23 at > java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:591) > 2021-04-16T23:37:23.5819302Z Apr 16 23:37:23 at > java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488) > 2021-04-16T23:37:23.5839106Z Apr 16 23:37:23 at > java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975) > 2021-04-16T23:37:23.5859276Z Apr 16 23:37:23 at > org.apache.flink.runtime.rpc.akka.AkkaInvocationHandler.lambda$invokeRpc$0(AkkaInvocationHandler.java:237) > 2021-04-16T23:37:23.5868964Z Apr 16 23:37:23 at > 
java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:774) > 2021-04-16T23:37:23.5869925Z Apr 16 23:37:23 at > java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:750) > 2021-04-16T23:37:23.5919839Z Apr 16 23:37:23 at > java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488) > 2021-04-16T23:37:23.5959562Z Apr 16 23:37:23 at > java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975) > 2021-04-16T23:37:23.5989732Z Apr 16 23:37:23 at > org.apache.flink.runtime.concurrent.FutureUtils$1.onComplete(FutureUtils.java:1081) > 2021-04-16T23:37:23.6019422Z Apr 16 23:37:23 at > akka.dispatch.OnComplete.internal(Future.scala:264) > 2021-04-16T23:37:23.6039067Z Apr 16 23:37:23 at > akka.dispatch.OnComplete.internal(Future.scala:261) > 2021-04-16T23:37:23.6060126Z Apr 16 23:37:23 at > akka.dispatch.japi$CallbackBridge.apply(Future.scala:191) > 2021-04-16T23:37:23.6089258Z Apr 16 23:37:23 at > akka.dispatch.japi$CallbackBridge.apply(Future.scala:188) > 2021-04-16T23:37:23.6119150Z Apr 16 23:37:23 at > scala.concurrent.impl.CallbackRunnable.run(Promise.scala:36) > 2021-04-16T23:37:23.6139149Z Apr 16 23:37:23 at > org.apache.flink.runtime.concurrent.Executors$DirectExecutionContext.execute(Executors.java:73) > 2021-04-16T23:37:23.6159077Z Apr 16 23:37:23 at > scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:44) > 2021-04-16T23:37:23.6189432Z Apr 16 23:37:23 at > scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:252) > 2021-04-16T23:37:23.6215243Z Apr 16 23:37:23 at > akka.pattern.PromiseActorRef.$bang(AskSupport.scala:572) > 2021-04-16T23:37:23.6219148Z Apr 16 23:37:23 at > akka.pattern.PipeToSupport$PipeableFuture$$anonfun$pipeTo$1.applyOrElse(PipeToSupport.scala:22) > 2021-04-16T23:37:23.6220221Z Apr 16 23:37:23 at > akka.pattern.PipeToSupport$PipeableFuture$$anonfun$pipeTo$1.applyOrElse(PipeToSupport.scala:21) > 
2021-04-16T23:37:23.6249411Z Apr 16 23:37:23 at > scala.concurrent.Future$$anonfun$andThen$1.apply(Future.scala:436) > 2021-04-16T23:37:23.6259145Z Apr 16 23:37:23 at > scala.concurrent.Future$$anonfun$andThen$1.apply(Future.scala:435) > 2021-04-16T23:37:23.6289272Z Apr 16 23:37:23 at > scala.concurrent.impl.CallbackRunnable.run(Promise.scala:36) > 2021-04-16T23:37:23.6309243Z Apr 16 23:37:23 at > akka.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:55) > 2021-04-16T23:37:23.6359306Z Apr 16 23:37:23 at > akka.dispatch.BatchingExecutor$BlockableBatch$$anonfun$run$1.apply$mcV$sp(BatchingExecutor.scala:91) > 2021-04-16T23:37:23.6369399Z Apr 16 23:37:23 at >
[jira] [Commented] (FLINK-22333) Elasticsearch7DynamicSinkITCase.testWritingDocuments failed due to deploy task timeout.
[ https://issues.apache.org/jira/browse/FLINK-22333?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17402601#comment-17402601 ] Till Rohrmann commented on FLINK-22333: --- Same analysis as for FLINK-21538 and with the same conclusions. > Elasticsearch7DynamicSinkITCase.testWritingDocuments failed due to deploy > task timeout. > --- > > Key: FLINK-22333 > URL: https://issues.apache.org/jira/browse/FLINK-22333 > Project: Flink > Issue Type: Bug > Components: Runtime / Coordination >Affects Versions: 1.13.0 >Reporter: Guowei Ma >Priority: Major > Labels: test-stability > > https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=16694=logs=d44f43ce-542c-597d-bf94-b0718c71e5e8=03dca39c-73e8-5aaf-601d-328ae5c35f20=12329 > {code:java} > 2021-04-16T23:37:23.5719280Z Apr 16 23:37:23 > org.apache.flink.runtime.client.JobExecutionException: Job execution failed. > 2021-04-16T23:37:23.5739250Z Apr 16 23:37:23 at > org.apache.flink.runtime.jobmaster.JobResult.toJobExecutionResult(JobResult.java:144) > 2021-04-16T23:37:23.5759329Z Apr 16 23:37:23 at > org.apache.flink.runtime.minicluster.MiniClusterJobClient.lambda$getJobExecutionResult$3(MiniClusterJobClient.java:137) > 2021-04-16T23:37:23.5779145Z Apr 16 23:37:23 at > java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:616) > 2021-04-16T23:37:23.5799204Z Apr 16 23:37:23 at > java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:591) > 2021-04-16T23:37:23.5819302Z Apr 16 23:37:23 at > java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488) > 2021-04-16T23:37:23.5839106Z Apr 16 23:37:23 at > java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975) > 2021-04-16T23:37:23.5859276Z Apr 16 23:37:23 at > org.apache.flink.runtime.rpc.akka.AkkaInvocationHandler.lambda$invokeRpc$0(AkkaInvocationHandler.java:237) > 2021-04-16T23:37:23.5868964Z Apr 16 23:37:23 at > 
java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:774) > 2021-04-16T23:37:23.5869925Z Apr 16 23:37:23 at > java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:750) > 2021-04-16T23:37:23.5919839Z Apr 16 23:37:23 at > java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488) > 2021-04-16T23:37:23.5959562Z Apr 16 23:37:23 at > java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975) > 2021-04-16T23:37:23.5989732Z Apr 16 23:37:23 at > org.apache.flink.runtime.concurrent.FutureUtils$1.onComplete(FutureUtils.java:1081) > 2021-04-16T23:37:23.6019422Z Apr 16 23:37:23 at > akka.dispatch.OnComplete.internal(Future.scala:264) > 2021-04-16T23:37:23.6039067Z Apr 16 23:37:23 at > akka.dispatch.OnComplete.internal(Future.scala:261) > 2021-04-16T23:37:23.6060126Z Apr 16 23:37:23 at > akka.dispatch.japi$CallbackBridge.apply(Future.scala:191) > 2021-04-16T23:37:23.6089258Z Apr 16 23:37:23 at > akka.dispatch.japi$CallbackBridge.apply(Future.scala:188) > 2021-04-16T23:37:23.6119150Z Apr 16 23:37:23 at > scala.concurrent.impl.CallbackRunnable.run(Promise.scala:36) > 2021-04-16T23:37:23.6139149Z Apr 16 23:37:23 at > org.apache.flink.runtime.concurrent.Executors$DirectExecutionContext.execute(Executors.java:73) > 2021-04-16T23:37:23.6159077Z Apr 16 23:37:23 at > scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:44) > 2021-04-16T23:37:23.6189432Z Apr 16 23:37:23 at > scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:252) > 2021-04-16T23:37:23.6215243Z Apr 16 23:37:23 at > akka.pattern.PromiseActorRef.$bang(AskSupport.scala:572) > 2021-04-16T23:37:23.6219148Z Apr 16 23:37:23 at > akka.pattern.PipeToSupport$PipeableFuture$$anonfun$pipeTo$1.applyOrElse(PipeToSupport.scala:22) > 2021-04-16T23:37:23.6220221Z Apr 16 23:37:23 at > akka.pattern.PipeToSupport$PipeableFuture$$anonfun$pipeTo$1.applyOrElse(PipeToSupport.scala:21) > 
2021-04-16T23:37:23.6249411Z Apr 16 23:37:23 at > scala.concurrent.Future$$anonfun$andThen$1.apply(Future.scala:436) > 2021-04-16T23:37:23.6259145Z Apr 16 23:37:23 at > scala.concurrent.Future$$anonfun$andThen$1.apply(Future.scala:435) > 2021-04-16T23:37:23.6289272Z Apr 16 23:37:23 at > scala.concurrent.impl.CallbackRunnable.run(Promise.scala:36) > 2021-04-16T23:37:23.6309243Z Apr 16 23:37:23 at > akka.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:55) > 2021-04-16T23:37:23.6359306Z Apr 16 23:37:23 at > akka.dispatch.BatchingExecutor$BlockableBatch$$anonfun$run$1.apply$mcV$sp(BatchingExecutor.scala:91) > 2021-04-16T23:37:23.6369399Z Apr 16 23:37:23 at >
[GitHub] [flink] flinkbot commented on pull request #16922: [BP-1.13][FLINK-23906][tests] Increase the default akka.ask.timeout for the MiniCluster to 5 minutes
flinkbot commented on pull request #16922: URL: https://github.com/apache/flink/pull/16922#issuecomment-903107995 Thanks a lot for your contribution to the Apache Flink project. I'm the @flinkbot. I help the community to review your pull request. We will use this comment to track the progress of the review. ## Automated Checks Last check on commit 2474a2795e1ca71c33e6a7e239ea9ff22a1b (Sat Aug 21 12:15:46 UTC 2021) **Warnings:** * No documentation files were touched! Remember to keep the Flink docs up to date! Mention the bot in a comment to re-run the automated checks. ## Review Progress * ❓ 1. The [description] looks good. * ❓ 2. There is [consensus] that the contribution should go into Flink. * ❓ 3. Needs [attention] from. * ❓ 4. The change fits into the overall [architecture]. * ❓ 5. Overall code [quality] is good. Please see the [Pull Request Review Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full explanation of the review process. The Bot is tracking the review progress through labels. Labels are applied according to the order of the review items. For consensus, approval by a Flink committer or PMC member is required. Bot commands The @flinkbot bot supports the following commands: - `@flinkbot approve description` to approve one or more aspects (aspects: `description`, `consensus`, `architecture` and `quality`) - `@flinkbot approve all` to approve all aspects - `@flinkbot approve-until architecture` to approve everything until `architecture` - `@flinkbot attention @username1 [@username2 ..]` to require somebody's attention - `@flinkbot disapprove architecture` to remove an approval you gave earlier -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot commented on pull request #16923: [BP-1.12][FLINK-23906][tests] Increase the default akka.ask.timeout for the MiniCluster to 5 minutes
flinkbot commented on pull request #16923: URL: https://github.com/apache/flink/pull/16923#issuecomment-903107987 Thanks a lot for your contribution to the Apache Flink project. I'm the @flinkbot. I help the community to review your pull request. We will use this comment to track the progress of the review. ## Automated Checks Last check on commit b4504e3bdd8acb0e664560c8b4f0eec931f00d3a (Sat Aug 21 12:15:44 UTC 2021) **Warnings:** * No documentation files were touched! Remember to keep the Flink docs up to date! Mention the bot in a comment to re-run the automated checks. ## Review Progress * ❓ 1. The [description] looks good. * ❓ 2. There is [consensus] that the contribution should go into Flink. * ❓ 3. Needs [attention] from. * ❓ 4. The change fits into the overall [architecture]. * ❓ 5. Overall code [quality] is good. Please see the [Pull Request Review Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full explanation of the review process. The Bot is tracking the review progress through labels. Labels are applied according to the order of the review items. For consensus, approval by a Flink committer or PMC member is required. Bot commands The @flinkbot bot supports the following commands: - `@flinkbot approve description` to approve one or more aspects (aspects: `description`, `consensus`, `architecture` and `quality`) - `@flinkbot approve all` to approve all aspects - `@flinkbot approve-until architecture` to approve everything until `architecture` - `@flinkbot attention @username1 [@username2 ..]` to require somebody's attention - `@flinkbot disapprove architecture` to remove an approval you gave earlier
[GitHub] [flink] flinkbot commented on pull request #16921: [FLINK-23906][tests] Increase the default akka.ask.timeout for the MiniCluster to 5 minutes
flinkbot commented on pull request #16921: URL: https://github.com/apache/flink/pull/16921#issuecomment-903107733 Thanks a lot for your contribution to the Apache Flink project. I'm the @flinkbot. I help the community to review your pull request. We will use this comment to track the progress of the review. ## Automated Checks Last check on commit 5ae9f4c72ca6e2e7425e4eb8b0de0a76d3b25ab5 (Sat Aug 21 12:13:41 UTC 2021) **Warnings:** * No documentation files were touched! Remember to keep the Flink docs up to date! Mention the bot in a comment to re-run the automated checks. ## Review Progress * ❓ 1. The [description] looks good. * ❓ 2. There is [consensus] that the contribution should go into Flink. * ❓ 3. Needs [attention] from. * ❓ 4. The change fits into the overall [architecture]. * ❓ 5. Overall code [quality] is good. Please see the [Pull Request Review Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full explanation of the review process. The Bot is tracking the review progress through labels. Labels are applied according to the order of the review items. For consensus, approval by a Flink committer or PMC member is required. Bot commands The @flinkbot bot supports the following commands: - `@flinkbot approve description` to approve one or more aspects (aspects: `description`, `consensus`, `architecture` and `quality`) - `@flinkbot approve all` to approve all aspects - `@flinkbot approve-until architecture` to approve everything until `architecture` - `@flinkbot attention @username1 [@username2 ..]` to require somebody's attention - `@flinkbot disapprove architecture` to remove an approval you gave earlier
[GitHub] [flink] tillrohrmann opened a new pull request #16923: [BP-1.12][FLINK-23906][tests] Increase the default akka.ask.timeout for the MiniCluster to 5 minutes
tillrohrmann opened a new pull request #16923: URL: https://github.com/apache/flink/pull/16923 Backport of #16921 to `release-1.12`.
[GitHub] [flink] tillrohrmann opened a new pull request #16922: [BP-1.13][FLINK-23906][tests] Increase the default akka.ask.timeout for the MiniCluster to 5 minutes
tillrohrmann opened a new pull request #16922: URL: https://github.com/apache/flink/pull/16922 Backport of #16921 to `release-1.13`.
[jira] [Updated] (FLINK-23906) Increase akka.ask.timeout for tests using the MiniCluster
[ https://issues.apache.org/jira/browse/FLINK-23906?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated FLINK-23906: --- Labels: pull-request-available test-stability (was: test-stability) > Increase akka.ask.timeout for tests using the MiniCluster > - > > Key: FLINK-23906 > URL: https://issues.apache.org/jira/browse/FLINK-23906 > Project: Flink > Issue Type: Improvement > Components: Runtime / Coordination, Tests >Affects Versions: 1.14.0, 1.12.5, 1.13.2 >Reporter: Till Rohrmann >Assignee: Till Rohrmann >Priority: Critical > Labels: pull-request-available, test-stability > Fix For: 1.14.0, 1.12.6, 1.13.3 > > > We have seen over the last couple of weeks/months an increased number of test > failures because of {{TimeoutException}} that were triggered because the > {{akka.ask.timeout}} was exceeded. The reason for this was that on our CI > infrastructure it can happen that there are pauses of more than 10s (not sure > about the exact reason) or our infrastructure simply being slow. > In order to harden all tests relying on the {{MiniCluster}} I propose to > increase the {{akka.ask.timeout}} to a minute if nothing else has been > configured. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [flink] tillrohrmann opened a new pull request #16921: [FLINK-23906][tests] Increase the default akka.ask.timeout for the MiniCluster to 5 minutes
tillrohrmann opened a new pull request #16921: URL: https://github.com/apache/flink/pull/16921 This commit sets the akka.ask.timeout, if not explicitly configured, to 5 minutes when using the MiniCluster. The idea behind this change is to harden all our tests that rely on the MiniCluster and run into TimeoutExceptions on our slow CI infrastructure.
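The default-if-unset behaviour the PR describes can be sketched as follows. This is a simplified illustration only: `AskTimeoutDefault`, `effectiveAskTimeout`, and the plain `Map` are hypothetical stand-ins for Flink's actual `Configuration` handling inside the MiniCluster.

```java
import java.util.HashMap;
import java.util.Map;

public class AskTimeoutDefault {
    static final String ASK_TIMEOUT_KEY = "akka.ask.timeout";

    // Apply the 5-minute default only when the user has not set the
    // option explicitly, mirroring the behaviour described in the PR.
    static String effectiveAskTimeout(Map<String, String> userConfig) {
        return userConfig.getOrDefault(ASK_TIMEOUT_KEY, "5 min");
    }

    public static void main(String[] args) {
        Map<String, String> explicit = new HashMap<>();
        explicit.put(ASK_TIMEOUT_KEY, "10 s");

        // No explicit setting: the hardened MiniCluster default applies.
        System.out.println(effectiveAskTimeout(new HashMap<>()));
        // Explicit setting wins over the default.
        System.out.println(effectiveAskTimeout(explicit));
    }
}
```

The key design point is that the longer timeout is a fallback, not an override: tests that deliberately configure a short `akka.ask.timeout` keep their configured value.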
[GitHub] [flink] flinkbot commented on pull request #16917: [FLINK-20461][tests] Check replication factor before asking for JobResult
flinkbot commented on pull request #16917: URL: https://github.com/apache/flink/pull/16917#issuecomment-903106985 ## CI report: * f08a9fc073f7f96c05304e4801c41c216d2e62e8 UNKNOWN Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build - `@flinkbot run azure` re-run the last Azure build
[jira] [Updated] (FLINK-23906) Increase akka.ask.timeout for tests using the MiniCluster
[ https://issues.apache.org/jira/browse/FLINK-23906?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Till Rohrmann updated FLINK-23906: -- Description: We have seen over the last couple of weeks/months an increased number of test failures because of {{TimeoutException}} that were triggered because the {{akka.ask.timeout}} was exceeded. The reason for this was that on our CI infrastructure it can happen that there are pauses of more than 10s (not sure about the exact reason) or our infrastructure simply being slow. In order to harden all tests relying on the {{MiniCluster}} I propose to increase the {{akka.ask.timeout}} to a minute if nothing else has been configured. was: We have seen over the last couple of weeks/months an increased number of test failures because of {{TimeoutException}} that were triggered because the {{akka.ask.timeout}} was exceeded. The reason for this was that on our CI infrastructure it can happen that there are pauses of more than 10s (not sure about the exact reason). In order to harden all tests relying on the {{MiniCluster}} I propose to increase the {{akka.ask.timeout}} to a minute if nothing else has been configured. > Increase akka.ask.timeout for tests using the MiniCluster > - > > Key: FLINK-23906 > URL: https://issues.apache.org/jira/browse/FLINK-23906 > Project: Flink > Issue Type: Improvement > Components: Runtime / Coordination, Tests >Affects Versions: 1.14.0, 1.12.5, 1.13.2 >Reporter: Till Rohrmann >Assignee: Till Rohrmann >Priority: Critical > Labels: test-stability > Fix For: 1.14.0, 1.12.6, 1.13.3 > > > We have seen over the last couple of weeks/months an increased number of test > failures because of {{TimeoutException}} that were triggered because the > {{akka.ask.timeout}} was exceeded. The reason for this was that on our CI > infrastructure it can happen that there are pauses of more than 10s (not sure > about the exact reason) or our infrastructure simply being slow. 
> In order to harden all tests relying on the {{MiniCluster}} I propose to > increase the {{akka.ask.timeout}} to a minute if nothing else has been > configured.
[jira] [Commented] (FLINK-21538) Elasticsearch6DynamicSinkITCase.testWritingDocuments fails when submitting job
[ https://issues.apache.org/jira/browse/FLINK-21538?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17402598#comment-17402598 ] Till Rohrmann commented on FLINK-21538: --- For the 2) point I've created FLINK-23906. > Elasticsearch6DynamicSinkITCase.testWritingDocuments fails when submitting job > -- > > Key: FLINK-21538 > URL: https://issues.apache.org/jira/browse/FLINK-21538 > Project: Flink > Issue Type: Bug > Components: Connectors / ElasticSearch, Runtime / Coordination >Affects Versions: 1.12.1, 1.13.0 >Reporter: Dawid Wysakowicz >Assignee: Till Rohrmann >Priority: Minor > Labels: auto-deprioritized-major, auto-unassigned, > pull-request-available, test-stability > Fix For: 1.14.0, 1.12.6, 1.13.3 > > > https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=13868=logs=3d12d40f-c62d-5ec4-6acc-0efe94cc3e89=5d6e4255-0ea8-5e2a-f52c-c881b7872361 > {code} > 2021-02-27T00:16:06.9493539Z > org.apache.flink.runtime.client.JobExecutionException: Job execution failed. 
> 2021-02-27T00:16:06.9494494Z at > org.apache.flink.runtime.jobmaster.JobResult.toJobExecutionResult(JobResult.java:144) > 2021-02-27T00:16:06.9495733Z at > org.apache.flink.runtime.minicluster.MiniClusterJobClient.lambda$getJobExecutionResult$2(MiniClusterJobClient.java:117) > 2021-02-27T00:16:06.9496596Z at > java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:616) > 2021-02-27T00:16:06.9497354Z at > java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:591) > 2021-02-27T00:16:06.9525795Z at > java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488) > 2021-02-27T00:16:06.9526744Z at > java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975) > 2021-02-27T00:16:06.9527784Z at > org.apache.flink.runtime.rpc.akka.AkkaInvocationHandler.lambda$invokeRpc$0(AkkaInvocationHandler.java:237) > 2021-02-27T00:16:06.9528552Z at > java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:774) > 2021-02-27T00:16:06.9529271Z at > java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:750) > 2021-02-27T00:16:06.9530013Z at > java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488) > 2021-02-27T00:16:06.9530482Z at > java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975) > 2021-02-27T00:16:06.9531068Z at > org.apache.flink.runtime.concurrent.FutureUtils$1.onComplete(FutureUtils.java:1046) > 2021-02-27T00:16:06.9531544Z at > akka.dispatch.OnComplete.internal(Future.scala:264) > 2021-02-27T00:16:06.9531908Z at > akka.dispatch.OnComplete.internal(Future.scala:261) > 2021-02-27T00:16:06.9532449Z at > akka.dispatch.japi$CallbackBridge.apply(Future.scala:191) > 2021-02-27T00:16:06.9532860Z at > akka.dispatch.japi$CallbackBridge.apply(Future.scala:188) > 2021-02-27T00:16:06.9533245Z at > scala.concurrent.impl.CallbackRunnable.run(Promise.scala:60) > 2021-02-27T00:16:06.9533721Z at > 
org.apache.flink.runtime.concurrent.Executors$DirectExecutionContext.execute(Executors.java:73) > 2021-02-27T00:16:06.9534225Z at > scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:68) > 2021-02-27T00:16:06.9534697Z at > scala.concurrent.impl.Promise$DefaultPromise.$anonfun$tryComplete$1(Promise.scala:284) > 2021-02-27T00:16:06.9535217Z at > scala.concurrent.impl.Promise$DefaultPromise.$anonfun$tryComplete$1$adapted(Promise.scala:284) > 2021-02-27T00:16:06.9535718Z at > scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:284) > 2021-02-27T00:16:06.9536127Z at > akka.pattern.PromiseActorRef.$bang(AskSupport.scala:573) > 2021-02-27T00:16:06.9536861Z at > akka.pattern.PipeToSupport$PipeableFuture$$anonfun$pipeTo$1.applyOrElse(PipeToSupport.scala:22) > 2021-02-27T00:16:06.9537394Z at > akka.pattern.PipeToSupport$PipeableFuture$$anonfun$pipeTo$1.applyOrElse(PipeToSupport.scala:21) > 2021-02-27T00:16:06.9537916Z at > scala.concurrent.Future.$anonfun$andThen$1(Future.scala:532) > 2021-02-27T00:16:06.9605804Z at > scala.concurrent.impl.Promise.liftedTree1$1(Promise.scala:29) > 2021-02-27T00:16:06.9606794Z at > scala.concurrent.impl.Promise.$anonfun$transform$1(Promise.scala:29) > 2021-02-27T00:16:06.9607642Z at > scala.concurrent.impl.CallbackRunnable.run(Promise.scala:60) > 2021-02-27T00:16:06.9608419Z at > akka.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:55) > 2021-02-27T00:16:06.9609252Z at > akka.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:91) > 2021-02-27T00:16:06.9610024Z at > scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:12) >
[jira] [Created] (FLINK-23906) Increase akka.ask.timeout for tests using the MiniCluster
Till Rohrmann created FLINK-23906: - Summary: Increase akka.ask.timeout for tests using the MiniCluster Key: FLINK-23906 URL: https://issues.apache.org/jira/browse/FLINK-23906 Project: Flink Issue Type: Improvement Components: Runtime / Coordination, Tests Affects Versions: 1.13.2, 1.12.5, 1.14.0 Reporter: Till Rohrmann Assignee: Till Rohrmann Fix For: 1.14.0, 1.12.6, 1.13.3 We have seen over the last couple of weeks/months an increased number of test failures because of {{TimeoutException}} that were triggered because the {{akka.ask.timeout}} was exceeded. The reason for this was that on our CI infrastructure it can happen that there are pauses of more than 10s (not sure about the exact reason). In order to harden all tests relying on the {{MiniCluster}} I propose to increase the {{akka.ask.timeout}} to a minute if nothing else has been configured.
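Since the proposed default only kicks in "if nothing else has been configured", a test setup that needs a specific timeout can still pin it explicitly. A sketch of such an override, assuming the standard `flink-conf.yaml` option name (the value `60 s` here is an illustrative choice, not from the ticket):

```yaml
# flink-conf.yaml -- an explicitly configured akka.ask.timeout takes
# precedence; the MiniCluster default applies only when this is absent.
akka.ask.timeout: 60 s
```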
[GitHub] [flink] flinkbot commented on pull request #16920: [BP-1.12][FLINK-21538][tests] Set default parallelism to 4 for Elasticsearch6DynamicSinkITCase.testWritingDocuments
flinkbot commented on pull request #16920: URL: https://github.com/apache/flink/pull/16920#issuecomment-903106464 Thanks a lot for your contribution to the Apache Flink project. I'm the @flinkbot. I help the community to review your pull request. We will use this comment to track the progress of the review. ## Automated Checks Last check on commit 974ee37ffb42ad322192e453707480b00618d4df (Sat Aug 21 12:03:34 UTC 2021) **Warnings:** * No documentation files were touched! Remember to keep the Flink docs up to date! Mention the bot in a comment to re-run the automated checks. ## Review Progress * ❓ 1. The [description] looks good. * ❓ 2. There is [consensus] that the contribution should go into Flink. * ❓ 3. Needs [attention] from. * ❓ 4. The change fits into the overall [architecture]. * ❓ 5. Overall code [quality] is good. Please see the [Pull Request Review Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full explanation of the review process. The Bot is tracking the review progress through labels. Labels are applied according to the order of the review items. For consensus, approval by a Flink committer or PMC member is required. Bot commands The @flinkbot bot supports the following commands: - `@flinkbot approve description` to approve one or more aspects (aspects: `description`, `consensus`, `architecture` and `quality`) - `@flinkbot approve all` to approve all aspects - `@flinkbot approve-until architecture` to approve everything until `architecture` - `@flinkbot attention @username1 [@username2 ..]` to require somebody's attention - `@flinkbot disapprove architecture` to remove an approval you gave earlier
[GitHub] [flink] flinkbot commented on pull request #16919: [BP-1.13][FLINK-21538][tests] Set default parallelism to 4 for Elasticsearch6DynamicSinkITCase.testWritingDocuments
flinkbot commented on pull request #16919: URL: https://github.com/apache/flink/pull/16919#issuecomment-903106168 Automated checks ran on commit 6a1074e68be13e0dfe740b496f5a625cac7d17bd (Sat Aug 21 12:01:31 UTC 2021). **Warnings:** * No documentation files were touched! Remember to keep the Flink docs up to date!
[GitHub] [flink] tillrohrmann opened a new pull request #16920: [BP-1.12][FLINK-21538][tests] Set default parallelism to 4 for Elasticsearch6DynamicSinkITCase.testWritingDocuments
tillrohrmann opened a new pull request #16920: URL: https://github.com/apache/flink/pull/16920 Backport of #16918 to `release-1.12`.
[GitHub] [flink] tillrohrmann opened a new pull request #16919: [BP-1.13][FLINK-21538][tests] Set default parallelism to 4 for Elasticsearch6DynamicSinkITCase.testWritingDocuments
tillrohrmann opened a new pull request #16919: URL: https://github.com/apache/flink/pull/16919 Backport of #16918 to `release-1.13`.
[GitHub] [flink] flinkbot commented on pull request #16918: [FLINK-21538][tests] Set default parallelism to 4 for Elasticsearch6DynamicSinkITCase.testWritingDocuments
flinkbot commented on pull request #16918: URL: https://github.com/apache/flink/pull/16918#issuecomment-903105608 Automated checks ran on commit c769365c8d629a0ae5862682e82d4b99f9ab19e9 (Sat Aug 21 11:57:30 UTC 2021). **Warnings:** * No documentation files were touched! Remember to keep the Flink docs up to date!
[jira] [Updated] (FLINK-21538) Elasticsearch6DynamicSinkITCase.testWritingDocuments fails when submitting job
[ https://issues.apache.org/jira/browse/FLINK-21538?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated FLINK-21538: --- Labels: auto-deprioritized-major auto-unassigned pull-request-available test-stability (was: auto-deprioritized-major auto-unassigned test-stability) > Elasticsearch6DynamicSinkITCase.testWritingDocuments fails when submitting job > -- > > Key: FLINK-21538 > URL: https://issues.apache.org/jira/browse/FLINK-21538 > Project: Flink > Issue Type: Bug > Components: Connectors / ElasticSearch, Runtime / Coordination >Affects Versions: 1.12.1, 1.13.0 >Reporter: Dawid Wysakowicz >Assignee: Till Rohrmann >Priority: Minor > Labels: auto-deprioritized-major, auto-unassigned, > pull-request-available, test-stability > Fix For: 1.14.0, 1.12.6, 1.13.3 > > > https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=13868=logs=3d12d40f-c62d-5ec4-6acc-0efe94cc3e89=5d6e4255-0ea8-5e2a-f52c-c881b7872361 > {code} > 2021-02-27T00:16:06.9493539Z > org.apache.flink.runtime.client.JobExecutionException: Job execution failed. 
> 2021-02-27T00:16:06.9494494Z at > org.apache.flink.runtime.jobmaster.JobResult.toJobExecutionResult(JobResult.java:144) > 2021-02-27T00:16:06.9495733Z at > org.apache.flink.runtime.minicluster.MiniClusterJobClient.lambda$getJobExecutionResult$2(MiniClusterJobClient.java:117) > 2021-02-27T00:16:06.9496596Z at > java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:616) > 2021-02-27T00:16:06.9497354Z at > java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:591) > 2021-02-27T00:16:06.9525795Z at > java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488) > 2021-02-27T00:16:06.9526744Z at > java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975) > 2021-02-27T00:16:06.9527784Z at > org.apache.flink.runtime.rpc.akka.AkkaInvocationHandler.lambda$invokeRpc$0(AkkaInvocationHandler.java:237) > 2021-02-27T00:16:06.9528552Z at > java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:774) > 2021-02-27T00:16:06.9529271Z at > java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:750) > 2021-02-27T00:16:06.9530013Z at > java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488) > 2021-02-27T00:16:06.9530482Z at > java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975) > 2021-02-27T00:16:06.9531068Z at > org.apache.flink.runtime.concurrent.FutureUtils$1.onComplete(FutureUtils.java:1046) > 2021-02-27T00:16:06.9531544Z at > akka.dispatch.OnComplete.internal(Future.scala:264) > 2021-02-27T00:16:06.9531908Z at > akka.dispatch.OnComplete.internal(Future.scala:261) > 2021-02-27T00:16:06.9532449Z at > akka.dispatch.japi$CallbackBridge.apply(Future.scala:191) > 2021-02-27T00:16:06.9532860Z at > akka.dispatch.japi$CallbackBridge.apply(Future.scala:188) > 2021-02-27T00:16:06.9533245Z at > scala.concurrent.impl.CallbackRunnable.run(Promise.scala:60) > 2021-02-27T00:16:06.9533721Z at > 
org.apache.flink.runtime.concurrent.Executors$DirectExecutionContext.execute(Executors.java:73) > 2021-02-27T00:16:06.9534225Z at > scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:68) > 2021-02-27T00:16:06.9534697Z at > scala.concurrent.impl.Promise$DefaultPromise.$anonfun$tryComplete$1(Promise.scala:284) > 2021-02-27T00:16:06.9535217Z at > scala.concurrent.impl.Promise$DefaultPromise.$anonfun$tryComplete$1$adapted(Promise.scala:284) > 2021-02-27T00:16:06.9535718Z at > scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:284) > 2021-02-27T00:16:06.9536127Z at > akka.pattern.PromiseActorRef.$bang(AskSupport.scala:573) > 2021-02-27T00:16:06.9536861Z at > akka.pattern.PipeToSupport$PipeableFuture$$anonfun$pipeTo$1.applyOrElse(PipeToSupport.scala:22) > 2021-02-27T00:16:06.9537394Z at > akka.pattern.PipeToSupport$PipeableFuture$$anonfun$pipeTo$1.applyOrElse(PipeToSupport.scala:21) > 2021-02-27T00:16:06.9537916Z at > scala.concurrent.Future.$anonfun$andThen$1(Future.scala:532) > 2021-02-27T00:16:06.9605804Z at > scala.concurrent.impl.Promise.liftedTree1$1(Promise.scala:29) > 2021-02-27T00:16:06.9606794Z at > scala.concurrent.impl.Promise.$anonfun$transform$1(Promise.scala:29) > 2021-02-27T00:16:06.9607642Z at > scala.concurrent.impl.CallbackRunnable.run(Promise.scala:60) > 2021-02-27T00:16:06.9608419Z at > akka.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:55) > 2021-02-27T00:16:06.9609252Z at > akka.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:91) > 2021-02-27T00:16:06.9610024Z at >
[GitHub] [flink] tillrohrmann opened a new pull request #16918: [FLINK-21538][tests] Set default parallelism to 4 for Elasticsearch6DynamicSinkITCase.testWritingDocuments
tillrohrmann opened a new pull request #16918: URL: https://github.com/apache/flink/pull/16918 This commit sets the default parallelism of Elasticsearch6DynamicSinkITCase.testWritingDocuments to 4 in order to reduce the load on our CI infrastructure.
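For context, capping a test job's parallelism is a one-line change on the execution environment. The sketch below is hypothetical (it is not the actual PR diff; the class name and job are made up for illustration), but it shows the mechanism the fix relies on:

```java
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

// Hypothetical illustration: pin the parallelism explicitly instead of
// inheriting the environment default, which can be as high as the number
// of CPU cores (e.g. 32 on CI machines) and overload the MiniCluster.
public class LowParallelismJob {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();
        env.setParallelism(4); // reduce the load on the (Mini)cluster

        env.fromElements("a", "b", "c")
           .map(String::toUpperCase)
           .print();

        env.execute("low-parallelism-test-job");
    }
}
```

With an explicit parallelism the test exercises the same sink logic while scheduling only four subtasks per operator, regardless of the machine it runs on.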
[jira] [Closed] (FLINK-23894) Resolve/hide akka warning regarding RemoteActorRefProvider
[ https://issues.apache.org/jira/browse/FLINK-23894?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chesnay Schepler closed FLINK-23894. Fix Version/s: 1.14.0 Resolution: Fixed master: 82c1cc12ec6830d6e9cff27eb77dbebbe354f703 > Resolve/hide akka warning regarding RemoteActorRefProvider > -- > > Key: FLINK-23894 > URL: https://issues.apache.org/jira/browse/FLINK-23894 > Project: Flink > Issue Type: Sub-task >Reporter: Chesnay Schepler >Assignee: Chesnay Schepler >Priority: Major > Labels: pull-request-available > Fix For: 1.14.0 > > > {code} > 2021-08-20 10:30:31,111 WARN RemoteActorRefProvider [] - Using the 'remote' > ActorRefProvider directly, which is a low-level layer. For most use cases, > the 'cluster' abstraction on top of remoting is more suitable instead. > 2021-08-20 10:30:31,112 WARN RemoteActorRefProvider [] - Akka Cluster not > in use - Using Akka Cluster is recommended if you need remote watch and > deploy. > {code} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [flink] zentol merged pull request #16909: [FLINK-23894][akka] Disable warning from RemoteActorRefProvider
zentol merged pull request #16909: URL: https://github.com/apache/flink/pull/16909
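Warnings like the two quoted in FLINK-23894 are controlled through Akka's HOCON configuration. A hedged sketch of the relevant setting (the actual Flink commit may configure this differently; `akka.remote.warn-about-direct-use` is the key in Akka 2.6's reference configuration, while the second "Akka Cluster not in use" warning may require a separate switch):

```hocon
# Hedged sketch -- illustrative only, not necessarily the committed change.
akka {
  remote {
    # Silences: "Using the 'remote' ActorRefProvider directly, which is a
    # low-level layer ..."
    warn-about-direct-use = off
  }
}
```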
[jira] [Updated] (FLINK-21538) Elasticsearch6DynamicSinkITCase.testWritingDocuments fails when submitting job
[ https://issues.apache.org/jira/browse/FLINK-21538?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Till Rohrmann updated FLINK-21538: -- Fix Version/s: 1.13.3 1.12.6 1.14.0 > Elasticsearch6DynamicSinkITCase.testWritingDocuments fails when submitting job > -- > > Key: FLINK-21538 > URL: https://issues.apache.org/jira/browse/FLINK-21538 > Project: Flink > Issue Type: Bug > Components: Connectors / ElasticSearch, Runtime / Coordination > Affects Versions: 1.12.1, 1.13.0 > Reporter: Dawid Wysakowicz > Assignee: Till Rohrmann > Priority: Minor > Labels: auto-deprioritized-major, auto-unassigned, test-stability > Fix For: 1.14.0, 1.12.6, 1.13.3 > > https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=13868=logs=3d12d40f-c62d-5ec4-6acc-0efe94cc3e89=5d6e4255-0ea8-5e2a-f52c-c881b7872361
[jira] [Commented] (FLINK-21538) Elasticsearch6DynamicSinkITCase.testWritingDocuments fails when submitting job
[ https://issues.apache.org/jira/browse/FLINK-21538?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17402592#comment-17402592 ] Till Rohrmann commented on FLINK-21538: --- Looking at this test failure, two things are interesting: 1) The tests don't configure a parallelism, so we run a job with a parallelism of 32, which slows down the execution. 2) The execution is not super fast on the CI infrastructure, which is why we run into the 10s {{akka.ask.timeout}}. I would suggest two things: 1) Configure a lower parallelism to reduce the complexity of the test. 2) Set a higher default {{akka.ask.timeout}} when using the {{MiniCluster}}. This should also resolve a lot of other test instabilities that are caused by timeouts due to slow CI infrastructure. > Elasticsearch6DynamicSinkITCase.testWritingDocuments fails when submitting job > -- > > Key: FLINK-21538 > URL: https://issues.apache.org/jira/browse/FLINK-21538 > Project: Flink > Issue Type: Bug > Components: Connectors / ElasticSearch, Runtime / Coordination > Affects Versions: 1.12.1, 1.13.0 > Reporter: Dawid Wysakowicz > Priority: Minor > Labels: auto-deprioritized-major, auto-unassigned, test-stability > > https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=13868=logs=3d12d40f-c62d-5ec4-6acc-0efe94cc3e89=5d6e4255-0ea8-5e2a-f52c-c881b7872361
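Both suggestions in the comment above can also be expressed as cluster configuration rather than per-test code. A hedged sketch of the relevant `flink-conf.yaml` keys (the values are illustrative, not necessarily the ones ultimately chosen):

```yaml
# flink-conf.yaml -- illustrative values only
# Cap the default job parallelism instead of falling back to the CPU count.
parallelism.default: 4
# Raise the RPC ask timeout above the 10 s default so a slow CI machine
# does not fail job submission with a timeout.
akka.ask.timeout: 60 s
```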
[jira] [Assigned] (FLINK-21538) Elasticsearch6DynamicSinkITCase.testWritingDocuments fails when submitting job
[ https://issues.apache.org/jira/browse/FLINK-21538?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Till Rohrmann reassigned FLINK-21538: - Assignee: Till Rohrmann > Elasticsearch6DynamicSinkITCase.testWritingDocuments fails when submitting job > -- > > Key: FLINK-21538 > URL: https://issues.apache.org/jira/browse/FLINK-21538 > Project: Flink > Issue Type: Bug > Components: Connectors / ElasticSearch, Runtime / Coordination > Affects Versions: 1.12.1, 1.13.0 > Reporter: Dawid Wysakowicz > Assignee: Till Rohrmann > Priority: Minor > Labels: auto-deprioritized-major, auto-unassigned, test-stability > > https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=13868=logs=3d12d40f-c62d-5ec4-6acc-0efe94cc3e89=5d6e4255-0ea8-5e2a-f52c-c881b7872361
[GitHub] [flink] flinkbot commented on pull request #16917: [FLINK-20461][tests] Check replication factor before asking for JobResult
flinkbot commented on pull request #16917: URL: https://github.com/apache/flink/pull/16917#issuecomment-903103600 Automated checks ran on commit f08a9fc073f7f96c05304e4801c41c216d2e62e8 (Sat Aug 21 11:39:37 UTC 2021). **Warnings:** * No documentation files were touched! Remember to keep the Flink docs up to date!
[jira] [Closed] (FLINK-23048) GroupWindowITCase.testEventTimeSlidingGroupWindowOverTimeNonOverlappingSplitPane fails due to akka timeout
[ https://issues.apache.org/jira/browse/FLINK-23048?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Till Rohrmann closed FLINK-23048.
---------------------------------
    Resolution: Cannot Reproduce

> GroupWindowITCase.testEventTimeSlidingGroupWindowOverTimeNonOverlappingSplitPane fails due to akka timeout
> ----------------------------------------------------------------------------------------------------------
>
>                 Key: FLINK-23048
>                 URL: https://issues.apache.org/jira/browse/FLINK-23048
>             Project: Flink
>          Issue Type: Bug
>          Components: Runtime / Coordination
>    Affects Versions: 1.12.4
>            Reporter: Xintong Song
>            Assignee: Till Rohrmann
>            Priority: Major
>              Labels: test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=19176=logs=56781494-ebb0-5eae-f732-b9c397ec6ede=6568c985-5fcc-5b89-1ebd-0385b8088b14=7957
> {code}
> [ERROR] Tests run: 12, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 48.296 s <<< FAILURE! - in org.apache.flink.table.runtime.stream.table.GroupWindowITCase
> [ERROR] testEventTimeSlidingGroupWindowOverTimeNonOverlappingSplitPane(org.apache.flink.table.runtime.stream.table.GroupWindowITCase)  Time elapsed: 40.358 s  <<< ERROR!
> org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
> 	at org.apache.flink.runtime.jobmaster.JobResult.toJobExecutionResult(JobResult.java:144)
> 	at org.apache.flink.runtime.minicluster.MiniClusterJobClient.lambda$getJobExecutionResult$2(MiniClusterJobClient.java:117)
> 	at java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:616)
> 	at java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:591)
> 	at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
> 	at java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975)
> 	at org.apache.flink.runtime.rpc.akka.AkkaInvocationHandler.lambda$invokeRpc$0(AkkaInvocationHandler.java:237)
> 	at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:774)
> 	at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:750)
> 	at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
> 	at java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975)
> 	at org.apache.flink.runtime.concurrent.FutureUtils$1.onComplete(FutureUtils.java:1061)
> 	at akka.dispatch.OnComplete.internal(Future.scala:264)
> 	at akka.dispatch.OnComplete.internal(Future.scala:261)
> 	at akka.dispatch.japi$CallbackBridge.apply(Future.scala:191)
> 	at akka.dispatch.japi$CallbackBridge.apply(Future.scala:188)
> 	at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:36)
> 	at org.apache.flink.runtime.concurrent.Executors$DirectExecutionContext.execute(Executors.java:73)
> 	at scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:44)
> 	at scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:252)
> 	at akka.pattern.PromiseActorRef.$bang(AskSupport.scala:572)
> 	at akka.pattern.PipeToSupport$PipeableFuture$$anonfun$pipeTo$1.applyOrElse(PipeToSupport.scala:22)
> 	at akka.pattern.PipeToSupport$PipeableFuture$$anonfun$pipeTo$1.applyOrElse(PipeToSupport.scala:21)
> 	at scala.concurrent.Future$$anonfun$andThen$1.apply(Future.scala:436)
> 	at scala.concurrent.Future$$anonfun$andThen$1.apply(Future.scala:435)
> 	at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:36)
> 	at akka.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:55)
> 	at akka.dispatch.BatchingExecutor$BlockableBatch$$anonfun$run$1.apply$mcV$sp(BatchingExecutor.scala:91)
> 	at akka.dispatch.BatchingExecutor$BlockableBatch$$anonfun$run$1.apply(BatchingExecutor.scala:91)
> 	at akka.dispatch.BatchingExecutor$BlockableBatch$$anonfun$run$1.apply(BatchingExecutor.scala:91)
> 	at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:72)
> 	at akka.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:90)
> 	at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:40)
> 	at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:44)
> 	at akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
> 	at akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
> 	at akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
> 	at akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
> Caused by: org.apache.flink.runtime.JobException: Recovery is
[jira] [Assigned] (FLINK-23048) GroupWindowITCase.testEventTimeSlidingGroupWindowOverTimeNonOverlappingSplitPane fails due to akka timeout
[ https://issues.apache.org/jira/browse/FLINK-23048?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Till Rohrmann reassigned FLINK-23048:
-------------------------------------
    Assignee: Till Rohrmann
[jira] [Updated] (FLINK-23048) GroupWindowITCase.testEventTimeSlidingGroupWindowOverTimeNonOverlappingSplitPane fails due to akka timeout
[ https://issues.apache.org/jira/browse/FLINK-23048?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Till Rohrmann updated FLINK-23048:
----------------------------------
    Fix Version/s: (was: 1.12.6)
[jira] [Commented] (FLINK-23048) GroupWindowITCase.testEventTimeSlidingGroupWindowOverTimeNonOverlappingSplitPane fails due to akka timeout
[ https://issues.apache.org/jira/browse/FLINK-23048?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17402589#comment-17402589 ]

Till Rohrmann commented on FLINK-23048:
---------------------------------------

I suspect this is caused by our CI infrastructure, where pauses of 10 s can happen. Since the logs are no longer available, however, this is hard to verify. I suggest closing this ticket as "Cannot Reproduce" and investigating further once it reoccurs.
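The comment above attributes the failure to CI pauses exceeding the default Akka ask timeout (10 s in Flink 1.12, which matches the pause length mentioned). As a sketch only, a test environment prone to such stalls could raise the timeout in `flink-conf.yaml`; the 60 s value below is an arbitrary illustration, not a recommendation from this thread:

```yaml
# Sketch for flink-conf.yaml (assumption: Flink 1.12 option names).
# Raise the RPC ask timeout so a ~10 s infrastructure pause does not
# fail in-flight JobMaster/TaskManager RPCs during tests.
akka.ask.timeout: 60 s
```

Note this only papers over slow CI machines; it does not address a genuine coordination bug, which is why the ticket was closed as "Cannot Reproduce" rather than fixed.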
[jira] [Updated] (FLINK-20461) YARNFileReplicationITCase.testPerJobModeWithDefaultFileReplication
[ https://issues.apache.org/jira/browse/FLINK-20461?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

ASF GitHub Bot updated FLINK-20461:
-----------------------------------
    Labels: pull-request-available test-stability  (was: test-stability)

> YARNFileReplicationITCase.testPerJobModeWithDefaultFileReplication
> ------------------------------------------------------------------
>
>                 Key: FLINK-20461
>                 URL: https://issues.apache.org/jira/browse/FLINK-20461
>             Project: Flink
>          Issue Type: Bug
>          Components: Deployment / YARN
>    Affects Versions: 1.11.3, 1.12.0, 1.13.0, 1.14.0
>            Reporter: Huang Xingbo
>            Assignee: Till Rohrmann
>            Priority: Critical
>              Labels: pull-request-available, test-stability
>             Fix For: 1.14.0
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=10450=logs=fc5181b0-e452-5c8f-68de-1097947f6483=62110053-334f-5295-a0ab-80dd7e2babbf
> {code:java}
> [ERROR] testPerJobModeWithDefaultFileReplication(org.apache.flink.yarn.YARNFileReplicationITCase)  Time elapsed: 32.501 s  <<< ERROR!
> java.io.FileNotFoundException: File does not exist: hdfs://localhost:46072/user/agent04_azpcontainer/.flink/application_1606950278664_0001/flink-dist_2.11-1.12-SNAPSHOT.jar
> 	at org.apache.hadoop.hdfs.DistributedFileSystem$27.doCall(DistributedFileSystem.java:1441)
> 	at org.apache.hadoop.hdfs.DistributedFileSystem$27.doCall(DistributedFileSystem.java:1434)
> 	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> 	at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1434)
> 	at org.apache.flink.yarn.YARNFileReplicationITCase.extraVerification(YARNFileReplicationITCase.java:148)
> 	at org.apache.flink.yarn.YARNFileReplicationITCase.deployPerJob(YARNFileReplicationITCase.java:113)
> {code}

--
This message was sent by Atlassian Jira
(v8.3.4#803005)