[jira] [Updated] (FLINK-22791) HybridSource user documentation
[ https://issues.apache.org/jira/browse/FLINK-22791?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Xintong Song updated FLINK-22791:
---------------------------------
    Component/s: Documentation

> HybridSource user documentation
> -------------------------------
>
>                 Key: FLINK-22791
>                 URL: https://issues.apache.org/jira/browse/FLINK-22791
>             Project: Flink
>          Issue Type: Sub-task
>          Components: Connectors / Common, Documentation
>            Reporter: Thomas Weise
>            Priority: Blocker
>             Fix For: 1.14.0
>
> Add user documentation for HybridSource.

--
This message was sent by Atlassian Jira
(v8.3.4#803005)
[GitHub] [flink] flinkbot commented on pull request #16869: [runtime] fix packge import mode for DefaultLeaderRetrievalService
flinkbot commented on pull request #16869:
URL: https://github.com/apache/flink/pull/16869#issuecomment-900832930

## CI report:

* c9d650fd898ee818953858fa1954bda4b5f1fce2 UNKNOWN

Bot commands
  The @flinkbot bot supports the following commands:
  - `@flinkbot run travis` re-run the last Travis build
  - `@flinkbot run azure` re-run the last Azure build

--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org
[GitHub] [flink] flinkbot edited a comment on pull request #16745: [FLINK-22246]when use HiveCatalog create table , can't set Table owne…
flinkbot edited a comment on pull request #16745:
URL: https://github.com/apache/flink/pull/16745#issuecomment-894632163

## CI report:

* 8012720d8036bdae16feaafed425f3024dfc14f9 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=22420)
[GitHub] [flink] flinkbot edited a comment on pull request #16683: [FLINK-23846][docs]improve PushGatewayReporter config description
flinkbot edited a comment on pull request #16683:
URL: https://github.com/apache/flink/pull/16683#issuecomment-891515480

## CI report:

* 63aa6e2f3c05fac9150399fe2a9c519cbe3e1d1e UNKNOWN
* a51a065dc64d6b495ff12761b4babb117529da2b Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=22422)
[jira] [Updated] (FLINK-23834) Test StreamTableEnvironment batch mode manually
[ https://issues.apache.org/jira/browse/FLINK-23834?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Xintong Song updated FLINK-23834:
---------------------------------
    Priority: Blocker  (was: Major)

> Test StreamTableEnvironment batch mode manually
> -----------------------------------------------
>
>                 Key: FLINK-23834
>                 URL: https://issues.apache.org/jira/browse/FLINK-23834
>             Project: Flink
>          Issue Type: Improvement
>          Components: Table SQL / API
>            Reporter: Timo Walther
>            Priority: Blocker
>              Labels: release-testing
>             Fix For: 1.14.0
>
> Test a program that mixes DataStream API and Table API batch mode. Including some connectors.
[GitHub] [flink] wsry commented on pull request #16844: [FLINK-23724][network] Fix the network buffer leak when ResultPartition is released
wsry commented on pull request #16844:
URL: https://github.com/apache/flink/pull/16844#issuecomment-900828858

> One last question:
>
> What if we move the change to `releaseInternal`, like this? In this way, we do not need the refactoring or the change to fail(). WDYT?
>
> ```java
> @Override
> protected void releaseInternal() {
>     if (broadcastBufferBuilder != null) {
>         broadcastBufferBuilder.close();
>         broadcastBufferBuilder = null;
>     }
>     for (int i = 0; i < unicastBufferBuilders.length; ++i) {
>         if (unicastBufferBuilders[i] != null) {
>             unicastBufferBuilders[i].close();
>             unicastBufferBuilders[i] = null;
>         }
>     }
>     // Release all subpartitions
>     for (ResultSubpartition subpartition : subpartitions) {
>         try {
>             subpartition.release();
>         }
>         // Catch this in order to ensure that release is called on all subpartitions
>         catch (Throwable t) {
>             LOG.error("Error during release of result subpartition: " + t.getMessage(), t);
>         }
>     }
> }
> ```

I guess the release method can also be called from other threads, so we may still have a concurrency issue.
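To make the concurrency concern concrete: the check-close-null pattern in the quoted `releaseInternal` is racy if release can be invoked from multiple threads. The following is a generic, self-contained sketch (not Flink's actual classes; `GuardedRelease` and its fields are hypothetical stand-ins) contrasting the unguarded pattern with a lock-guarded one:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class GuardedRelease {
    private final Object lock = new Object();
    private final AtomicInteger closeCount = new AtomicInteger();
    private Object bufferBuilder = new Object(); // stand-in for a BufferBuilder

    // Unsafe: two threads can both pass the null check before either assigns
    // null, so the resource may be "closed" twice, or one thread may observe
    // a field that the other just nulled out.
    void releaseUnsafe() {
        if (bufferBuilder != null) {
            closeCount.incrementAndGet(); // simulate closing the builder
            bufferBuilder = null;
        }
    }

    // Safe: the check, close, and null-out happen as one atomic step under a
    // lock shared by every release path.
    void releaseGuarded() {
        synchronized (lock) {
            if (bufferBuilder != null) {
                closeCount.incrementAndGet();
                bufferBuilder = null;
            }
        }
    }

    int closeCount() {
        return closeCount.get();
    }

    public static void main(String[] args) {
        GuardedRelease g = new GuardedRelease();
        g.releaseGuarded();
        g.releaseGuarded();
        System.out.println("closed " + g.closeCount() + " time(s)"); // prints "closed 1 time(s)"
    }
}
```

With the guarded variant, release may be called any number of times from any thread and the builder is closed exactly once; that is the property the bare null check cannot guarantee, which is the gist of wsry's objection.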
[jira] [Created] (FLINK-23851) AggregationsITCase.testDistinctAfterAggregate fails due to AskTimeoutException
Xintong Song created FLINK-23851:
------------------------------------

             Summary: AggregationsITCase.testDistinctAfterAggregate fails due to AskTimeoutException
                 Key: FLINK-23851
                 URL: https://issues.apache.org/jira/browse/FLINK-23851
             Project: Flink
          Issue Type: Bug
          Components: Runtime / Coordination
    Affects Versions: 1.13.2
            Reporter: Xintong Song
             Fix For: 1.13.3


https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=22417&view=logs&j=56781494-ebb0-5eae-f732-b9c397ec6ede&t=6568c985-5fcc-5b89-1ebd-0385b8088b14&l=8303

{code}
Aug 17 21:27:07 Caused by: akka.pattern.AskTimeoutException: Ask timed out on [Actor[akka://flink/user/rpc/taskmanager_288#1055484738]] after [1 ms]. Message of type [org.apache.flink.runtime.rpc.messages.LocalRpcInvocation]. A typical reason for `AskTimeoutException` is that the recipient actor didn't send a reply.
Aug 17 21:27:07 	at akka.pattern.PromiseActorRef$$anonfun$2.apply(AskSupport.scala:635)
Aug 17 21:27:07 	at akka.pattern.PromiseActorRef$$anonfun$2.apply(AskSupport.scala:635)
Aug 17 21:27:07 	at akka.pattern.PromiseActorRef$$anonfun$1.apply$mcV$sp(AskSupport.scala:648)
Aug 17 21:27:07 	at akka.actor.Scheduler$$anon$4.run(Scheduler.scala:205)
Aug 17 21:27:07 	at scala.concurrent.Future$InternalCallbackExecutor$.unbatchedExecute(Future.scala:601)
Aug 17 21:27:07 	at scala.concurrent.BatchingExecutor$class.execute(BatchingExecutor.scala:109)
Aug 17 21:27:07 	at scala.concurrent.Future$InternalCallbackExecutor$.execute(Future.scala:599)
Aug 17 21:27:07 	at akka.actor.LightArrayRevolverScheduler$TaskHolder.executeTask(LightArrayRevolverScheduler.scala:328)
Aug 17 21:27:07 	at akka.actor.LightArrayRevolverScheduler$$anon$4.executeBucket$1(LightArrayRevolverScheduler.scala:279)
Aug 17 21:27:07 	at akka.actor.LightArrayRevolverScheduler$$anon$4.nextTick(LightArrayRevolverScheduler.scala:283)
{code}
[GitHub] [flink] wsry commented on pull request #16844: [FLINK-23724][network] Fix the network buffer leak when ResultPartition is released
wsry commented on pull request #16844:
URL: https://github.com/apache/flink/pull/16844#issuecomment-900827949

> @wsry thanks for your changes. They look correct for me but I have a couple of notices. First of all, I suggest splitting this one commit into two: the first one with the refactoring, the second one with the fixing of the bug.
> Secondly, I don't really understand the current contract of `fail` and `close`. You can take a look at my comments in PR for more details.

I think it makes sense to split the patch into two commits.
[jira] [Commented] (FLINK-23848) PulsarSourceITCase is failed on Azure
[ https://issues.apache.org/jira/browse/FLINK-23848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17400800#comment-17400800 ]

Xintong Song commented on FLINK-23848:
--------------------------------------

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=22415&view=logs&j=a5ef94ef-68c2-57fd-3794-dc108ed1c495&t=2c68b137-b01d-55c9-e603-3ff3f320364b&l=24424

> PulsarSourceITCase is failed on Azure
> -------------------------------------
>
>                 Key: FLINK-23848
>                 URL: https://issues.apache.org/jira/browse/FLINK-23848
>             Project: Flink
>          Issue Type: Bug
>          Components: Connectors / Pulsar
>    Affects Versions: 1.14.0
>            Reporter: Jark Wu
>            Priority: Major
>              Labels: test-stability
>             Fix For: 1.14.0
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=22412&view=logs&j=fc5181b0-e452-5c8f-68de-1097947f6483&t=995c650b-6573-581c-9ce6-7ad4cc038461
> {code}
> 2021-08-17T20:11:53.7228789Z Aug 17 20:11:53 [INFO] Running org.apache.flink.connector.pulsar.source.PulsarSourceITCase
> 2021-08-17T20:17:38.2429467Z Aug 17 20:17:38 [ERROR] Tests run: 8, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 344.515 s <<< FAILURE! - in org.apache.flink.connector.pulsar.source.PulsarSourceITCase
> 2021-08-17T20:17:38.2430693Z Aug 17 20:17:38 [ERROR] testMultipleSplits{TestEnvironment, ExternalContext}[2] Time elapsed: 66.766 s <<< ERROR!
> 2021-08-17T20:17:38.2431387Z Aug 17 20:17:38 java.lang.RuntimeException: Failed to fetch next result
> 2021-08-17T20:17:38.2432035Z Aug 17 20:17:38 	at org.apache.flink.streaming.api.operators.collect.CollectResultIterator.nextResultFromFetcher(CollectResultIterator.java:109)
> 2021-08-17T20:17:38.2433345Z Aug 17 20:17:38 	at org.apache.flink.streaming.api.operators.collect.CollectResultIterator.hasNext(CollectResultIterator.java:80)
> 2021-08-17T20:17:38.2434175Z Aug 17 20:17:38 	at org.apache.flink.connectors.test.common.utils.TestDataMatchers$MultipleSplitDataMatcher.matchesSafely(TestDataMatchers.java:151)
> 2021-08-17T20:17:38.2435028Z Aug 17 20:17:38 	at org.apache.flink.connectors.test.common.utils.TestDataMatchers$MultipleSplitDataMatcher.matchesSafely(TestDataMatchers.java:133)
> 2021-08-17T20:17:38.2438387Z Aug 17 20:17:38 	at org.hamcrest.TypeSafeDiagnosingMatcher.matches(TypeSafeDiagnosingMatcher.java:55)
> 2021-08-17T20:17:38.2439100Z Aug 17 20:17:38 	at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:12)
> 2021-08-17T20:17:38.2439708Z Aug 17 20:17:38 	at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:8)
> 2021-08-17T20:17:38.2440299Z Aug 17 20:17:38 	at org.apache.flink.connectors.test.common.testsuites.SourceTestSuiteBase.testMultipleSplits(SourceTestSuiteBase.java:156)
> 2021-08-17T20:17:38.2441007Z Aug 17 20:17:38 	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 2021-08-17T20:17:38.2441526Z Aug 17 20:17:38 	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 2021-08-17T20:17:38.2442068Z Aug 17 20:17:38 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 2021-08-17T20:17:38.2442759Z Aug 17 20:17:38 	at java.lang.reflect.Method.invoke(Method.java:498)
> 2021-08-17T20:17:38.2443247Z Aug 17 20:17:38 	at org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:688)
> 2021-08-17T20:17:38.2443812Z Aug 17 20:17:38 	at org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60)
> 2021-08-17T20:17:38.241Z Aug 17 20:17:38 	at org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131)
> 2021-08-17T20:17:38.2445101Z Aug 17 20:17:38 	at org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:149)
> 2021-08-17T20:17:38.2445688Z Aug 17 20:17:38 	at org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestableMethod(TimeoutExtension.java:140)
> 2021-08-17T20:17:38.2446328Z Aug 17 20:17:38 	at org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestTemplateMethod(TimeoutExtension.java:92)
> 2021-08-17T20:17:38.2447303Z Aug 17 20:17:38 	at org.junit.jupiter.engine.execution.ExecutableInvoker$ReflectiveInterceptorCall.lambda$ofVoidMethod$0(ExecutableInvoker.java:115)
> 2021-08-17T20:17:38.2448336Z Aug 17 20:17:38 	at org.junit.jupiter.engine.execution.ExecutableInvoker.lambda$invoke$0(ExecutableInvoker.java:105)
> 2021-08-17T20:17:38.2448999Z Aug 17 20:17:38 	at org.junit.jupiter.engine.execution.InvocationInterceptorChain$InterceptedInvocation.proceed(InvocationInterceptorChain.java:106)
> 2021-08-17T20:17:38.2449689Z Aug 17 20:17:38 	at org.junit.jupiter.engine.execution.InvocationInterceptorChain.proceed(InvocationInterceptorChain.java:64)
> 2021-08-17T20:17:38.2450363Z Aug 17
[jira] [Commented] (FLINK-23829) SavepointITCase JVM crash on azure
[ https://issues.apache.org/jira/browse/FLINK-23829?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17400799#comment-17400799 ]

Xintong Song commented on FLINK-23829:
--------------------------------------

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=22415&view=logs&j=2c3cbe13-dee0-5837-cf47-3053da9a8a78&t=b78d9d30-509a-5cea-1fef-db7abaa325ae&l=5108

> SavepointITCase JVM crash on azure
> ----------------------------------
>
>                 Key: FLINK-23829
>                 URL: https://issues.apache.org/jira/browse/FLINK-23829
>             Project: Flink
>          Issue Type: Bug
>          Components: Runtime / Checkpointing
>    Affects Versions: 1.14.0
>            Reporter: Xintong Song
>            Priority: Major
>              Labels: test-stability
>             Fix For: 1.14.0
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=22293&view=logs&j=5c8e7682-d68f-54d1-16a2-a09310218a49&t=86f654fa-ab48-5c1a-25f4-7e7f6afb9bba&l=5224
> {code}
> Aug 16 16:26:11 [ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.22.2:test (integration-tests) on project flink-tests: There are test failures.
> Aug 16 16:26:11 [ERROR]
> Aug 16 16:26:11 [ERROR] Please refer to /__w/1/s/flink-tests/target/surefire-reports for the individual test results.
> Aug 16 16:26:11 [ERROR] Please refer to dump files (if any exist) [date].dump, [date]-jvmRun[N].dump and [date].dumpstream.
> Aug 16 16:26:11 [ERROR] ExecutionException The forked VM terminated without properly saying goodbye. VM crash or System.exit called?
> Aug 16 16:26:11 [ERROR] Command was /bin/sh -c cd /__w/1/s/flink-tests/target && /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java -Xms256m -Xmx2048m -Dmvn.forkNumber=2 -XX:+UseG1GC -jar /__w/1/s/flink-tests/target/surefire/surefirebooter8870094541887019356.jar /__w/1/s/flink-tests/target/surefire 2021-08-16T15-45-06_363-jvmRun2 surefire8582412554358604743tmp surefire_2118489584967019297925tmp
> Aug 16 16:26:11 [ERROR] Error occurred in starting fork, check output in log
> Aug 16 16:26:11 [ERROR] Process Exit Code: 239
> Aug 16 16:26:11 [ERROR] Crashed tests:
> Aug 16 16:26:11 [ERROR] org.apache.flink.test.checkpointing.SavepointITCase
> Aug 16 16:26:11 [ERROR] org.apache.maven.surefire.booter.SurefireBooterForkException: ExecutionException The forked VM terminated without properly saying goodbye. VM crash or System.exit called?
> Aug 16 16:26:11 [ERROR] Command was /bin/sh -c cd /__w/1/s/flink-tests/target && /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java -Xms256m -Xmx2048m -Dmvn.forkNumber=2 -XX:+UseG1GC -jar /__w/1/s/flink-tests/target/surefire/surefirebooter8870094541887019356.jar /__w/1/s/flink-tests/target/surefire 2021-08-16T15-45-06_363-jvmRun2 surefire8582412554358604743tmp surefire_2118489584967019297925tmp
> Aug 16 16:26:11 [ERROR] Error occurred in starting fork, check output in log
> Aug 16 16:26:11 [ERROR] Process Exit Code: 239
> Aug 16 16:26:11 [ERROR] Crashed tests:
> Aug 16 16:26:11 [ERROR] org.apache.flink.test.checkpointing.SavepointITCase
> Aug 16 16:26:11 [ERROR] 	at org.apache.maven.plugin.surefire.booterclient.ForkStarter.awaitResultsDone(ForkStarter.java:510)
> Aug 16 16:26:11 [ERROR] 	at org.apache.maven.plugin.surefire.booterclient.ForkStarter.runSuitesForkPerTestSet(ForkStarter.java:457)
> Aug 16 16:26:11 [ERROR] 	at org.apache.maven.plugin.surefire.booterclient.ForkStarter.run(ForkStarter.java:298)
> Aug 16 16:26:11 [ERROR] 	at org.apache.maven.plugin.surefire.booterclient.ForkStarter.run(ForkStarter.java:246)
> Aug 16 16:26:11 [ERROR] 	at org.apache.maven.plugin.surefire.AbstractSurefireMojo.executeProvider(AbstractSurefireMojo.java:1183)
> Aug 16 16:26:11 [ERROR] 	at org.apache.maven.plugin.surefire.AbstractSurefireMojo.executeAfterPreconditionsChecked(AbstractSurefireMojo.java:1011)
> Aug 16 16:26:11 [ERROR] 	at org.apache.maven.plugin.surefire.AbstractSurefireMojo.execute(AbstractSurefireMojo.java:857)
> Aug 16 16:26:11 [ERROR] 	at org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:132)
> Aug 16 16:26:11 [ERROR] 	at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:208)
> Aug 16 16:26:11 [ERROR] 	at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:153)
> Aug 16 16:26:11 [ERROR] 	at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:145)
> Aug 16 16:26:11 [ERROR] 	at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:116)
> Aug 16 16:26:11 [ERROR] 	at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:80)
> Aug 16 16:26:11 [ERROR] 	at org.apache.maven.lifecycle.internal.builder.singlethreaded.SingleThreadedBuilder.build(SingleThreadedBuilder.java:51)
> Aug 16 16:26:11 [ERROR] 	at
[GitHub] [flink] flinkbot commented on pull request #16869: fix packge import mode for DefaultLeaderRetrievalService
flinkbot commented on pull request #16869:
URL: https://github.com/apache/flink/pull/16869#issuecomment-900825384

Thanks a lot for your contribution to the Apache Flink project. I'm the @flinkbot. I help the community to review your pull request. We will use this comment to track the progress of the review.

## Automated Checks
Last check on commit c9d650fd898ee818953858fa1954bda4b5f1fce2 (Wed Aug 18 05:36:19 UTC 2021)

**Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up to date!
* **Invalid pull request title: No valid Jira ID provided**

Mention the bot in a comment to re-run the automated checks.

## Review Progress

* ❓ 1. The [description] looks good.
* ❓ 2. There is [consensus] that the contribution should go into Flink.
* ❓ 3. Needs [attention] from.
* ❓ 4. The change fits into the overall [architecture].
* ❓ 5. Overall code [quality] is good.

Please see the [Pull Request Review Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full explanation of the review process.

The Bot is tracking the review progress through labels. Labels are applied according to the order of the review items. For consensus, approval by a Flink committer or PMC member is required.

Bot commands
  The @flinkbot bot supports the following commands:
  - `@flinkbot approve description` to approve one or more aspects (aspects: `description`, `consensus`, `architecture` and `quality`)
  - `@flinkbot approve all` to approve all aspects
  - `@flinkbot approve-until architecture` to approve everything until `architecture`
  - `@flinkbot attention @username1 [@username2 ..]` to require somebody's attention
  - `@flinkbot disapprove architecture` to remove an approval you gave earlier
[GitHub] [flink] flinkbot edited a comment on pull request #16823: [FLINK-23845][docs]improve PushGatewayReporter config:deleteOnShutdown de…
flinkbot edited a comment on pull request #16823:
URL: https://github.com/apache/flink/pull/16823#issuecomment-898865990

## CI report:

* 656fc2a8cd23e86b8abf656d80593e4df7c2e8a5 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=22421)
[GitHub] [flink] flinkbot edited a comment on pull request #16629: [FLINK-23847][connectors][kafka] improve error message when valueDeseriali…
flinkbot edited a comment on pull request #16629:
URL: https://github.com/apache/flink/pull/16629#issuecomment-888753020

## CI report:

* a3f89c52f22638b58005b76670229e2952d6bf83 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=22424)
[GitHub] [flink] wangjunyou opened a new pull request #16869: fix packge import mode for DefaultLeaderRetrievalService
wangjunyou opened a new pull request #16869:
URL: https://github.com/apache/flink/pull/16869

## What is the purpose of the change

*Change the import style: statically import Preconditions.checkState in DefaultLeaderRetrievalService.java*

## Brief change log

- *remove import org.apache.flink.util.Preconditions*
- *add import static org.apache.flink.util.Preconditions.checkState*

## Verifying this change

*This modification is covered by the existing tests*

## Does this pull request potentially affect one of the following parts:

- Dependencies (does it add or upgrade a dependency): (no)
- The public API, i.e., is any changed class annotated with `@Public(Evolving)`: (no)
- The serializers: (no)
- The runtime per-record code paths (performance sensitive): (no)
- Anything that affects deployment or recovery: JobManager (and its components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (no)
- The S3 file system connector: (no)

## Documentation

- Does this pull request introduce a new feature? (no)
- If yes, how is the feature documented? (not applicable)
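For readers unfamiliar with the pattern, the change described above amounts to calling `checkState(...)` directly via a static import instead of `Preconditions.checkState(...)`. A self-contained sketch, in which a local `checkState` stands in for `org.apache.flink.util.Preconditions.checkState` (the class and field below are hypothetical, for illustration only):

```java
public class StaticImportDemo {
    // Stand-in for org.apache.flink.util.Preconditions.checkState:
    // throws IllegalStateException when the condition is false.
    static void checkState(boolean condition, Object message) {
        if (!condition) {
            throw new IllegalStateException(String.valueOf(message));
        }
    }

    private boolean running = true;

    void stop() {
        // With "import static ...Preconditions.checkState", the call site
        // reads checkState(...) rather than Preconditions.checkState(...).
        checkState(running, "Service is not running.");
        running = false;
    }

    public static void main(String[] args) {
        StaticImportDemo demo = new StaticImportDemo();
        demo.stop();
        try {
            demo.stop(); // second stop violates the precondition
        } catch (IllegalStateException e) {
            System.out.println("caught: " + e.getMessage()); // prints "caught: Service is not running."
        }
    }
}
```

The behavior is identical either way; the static import only shortens the call sites, which is why the PR touches imports and nothing else.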
[GitHub] [flink] flinkbot edited a comment on pull request #14893: [FLINK-21321][Runtime/StateBackends] improve RocksDB incremental rescale performance by using deleteRange operator
flinkbot edited a comment on pull request #14893:
URL: https://github.com/apache/flink/pull/14893#issuecomment-774815157

## CI report:

* 521b6d5ad32becde67f8477258633b1e1a1ac59b Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=13102)
* f06580b2165ffc8bd6b13df1a9ce3bf2b241a3c3 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=22427)
[jira] [Comment Edited] (FLINK-23727) Skip null values in SimpleStringSchema#deserialize
[ https://issues.apache.org/jira/browse/FLINK-23727?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17400787#comment-17400787 ]

Paul Lin edited comment on FLINK-23727 at 8/18/21, 5:13 AM:
------------------------------------------------------------

[~paul8263] WRT solutions, I've considered adding a flag as you suggested, but I doubt there are cases in which users would actually want an NPE; that's clearly not a best practice. Moreover, the Flink deserializer interface states that implementations should return null instead of throwing exceptions if a record cannot be deserialized. So I think we can just fix the NPE; there is no need for an extra parameter.

was (Author: paul lin):
[~paul8263] WRT solutions, I've considered adding a flag as you suggested, but I doubt in what cases users would need NPE? That's obviously not a best practice. Moreover, Flink deserializer interface states that implementations should return null if the record can not be deserialized instead of throwing exceptions. So I think we could just fix the NPE, no need for an extra parameter.

> Skip null values in SimpleStringSchema#deserialize
> --------------------------------------------------
>
>                 Key: FLINK-23727
>                 URL: https://issues.apache.org/jira/browse/FLINK-23727
>             Project: Flink
>          Issue Type: Bug
>          Components: Connectors / Common
>    Affects Versions: 1.13.2
>            Reporter: Paul Lin
>            Priority: Major
>
> In Kafka use cases, it's valid to send a message with a key and a null payload as a tombstone. But SimpleStringSchema, which is frequently used as a message value deserializer, throws NPE when the input value is null. We should tolerate null values in SimpleStringSchema (simply return null to skip the records), otherwise users need to implement a custom one.
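A minimal sketch of the null-tolerant `deserialize` behavior Paul Lin proposes for FLINK-23727. This is not the actual Flink `SimpleStringSchema`; the class name below is a hypothetical stand-in illustrating "return null to skip the record":

```java
import java.nio.charset.StandardCharsets;

/** Hypothetical stand-in for SimpleStringSchema with the proposed null tolerance. */
public class NullTolerantStringSchema {
    public String deserialize(byte[] message) {
        // Kafka tombstones arrive as a null payload; skip them by returning
        // null instead of throwing a NullPointerException.
        if (message == null) {
            return null;
        }
        return new String(message, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        NullTolerantStringSchema schema = new NullTolerantStringSchema();
        System.out.println(schema.deserialize("hello".getBytes(StandardCharsets.UTF_8))); // prints "hello"
        System.out.println(schema.deserialize(null)); // tombstone: prints "null", no NPE
    }
}
```

Returning null (rather than adding a configuration flag) matches the deserializer contract cited in the comment, so downstream code that already handles "undeserializable record" handles tombstones for free.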
[jira] [Commented] (FLINK-21200) User defined functions refuse to accept multi-set arguments without type hints
[ https://issues.apache.org/jira/browse/FLINK-21200?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17400784#comment-17400784 ]

Timo Walther commented on FLINK-21200:
--------------------------------------

[~Kenyore] This issue has rather low priority. The current behavior is kind of intended. Multisets are not used frequently. Do you think it should receive higher importance?

> User defined functions refuse to accept multi-set arguments without type hints
> ------------------------------------------------------------------------------
>
>                 Key: FLINK-21200
>                 URL: https://issues.apache.org/jira/browse/FLINK-21200
>             Project: Flink
>          Issue Type: Bug
>          Components: Table SQL / API
>    Affects Versions: 1.11.3, 1.12.2, 1.13.0
>            Reporter: Caizhi Weng
>            Priority: Minor
>              Labels: auto-deprioritized-major
>
> The [document|https://ci.apache.org/projects/flink/flink-docs-release-1.11/dev/table/functions/udfs.html#type-inference] states that the default conversion Java class of the {{t MULTISET}} type should be {{java.util.Map}}. However, user defined functions with this type of argument refuse to accept {{MULTISET}} as argument.
> To reproduce this bug, add the following test case to {{org.apache.flink.table.planner.runtime.batch.sql.agg.WindowAggregateITCase}}:
> {code:scala}
> @Test
> def myTest(): Unit = {
>   tEnv.executeSql("CREATE TEMPORARY FUNCTION myFun AS 'org.apache.flink.table.planner.GetMultisetValue'")
>   checkResult(
>     s"""
>        |SELECT myFun(c, '10') FROM
>        |(SELECT
>        |  TUMBLE_START(ts, INTERVAL '3' SECOND) AS win_start,
>        |  COLLECT(c) AS c
>        |FROM Table3WithTimestamp
>        |GROUP BY TUMBLE(ts, INTERVAL '3' SECOND))
>        |""".stripMargin,
>     Seq())
> }
> {code}
> and add this user defined function to the {{org.apache.flink.table.planner}} package:
> {code:java}
> package org.apache.flink.table.planner;
>
> import org.apache.flink.table.functions.ScalarFunction;
>
> import java.util.Map;
>
> public class GetMultisetValue extends ScalarFunction {
>     public static Integer eval(Map<String, Integer> data, String key) {
>         return data.getOrDefault(key, 0);
>     }
> }
> {code}
> The following exception occurs when running the test:
> {code}
> org.apache.flink.table.api.ValidationException: SQL validation failed.
> Invalid function call:
> default_catalog.default_database.myFun(MULTISET NOT NULL, CHAR(2) NOT NULL)
> 	at org.apache.flink.table.planner.calcite.FlinkPlannerImpl.org$apache$flink$table$planner$calcite$FlinkPlannerImpl$$validate(FlinkPlannerImpl.scala:152)
> 	at org.apache.flink.table.planner.calcite.FlinkPlannerImpl.validate(FlinkPlannerImpl.scala:111)
> 	at org.apache.flink.table.planner.operations.SqlToOperationConverter.convert(SqlToOperationConverter.java:189)
> 	at org.apache.flink.table.planner.delegation.ParserImpl.parse(ParserImpl.java:77)
> 	at org.apache.flink.table.api.internal.TableEnvironmentImpl.sqlQuery(TableEnvironmentImpl.java:649)
> 	at org.apache.flink.table.planner.runtime.utils.BatchTestBase.parseQuery(BatchTestBase.scala:295)
> 	at org.apache.flink.table.planner.runtime.utils.BatchTestBase.check(BatchTestBase.scala:137)
> 	at org.apache.flink.table.planner.runtime.utils.BatchTestBase.checkResult(BatchTestBase.scala:104)
> 	at org.apache.flink.table.planner.runtime.batch.sql.agg.WindowAggregateITCase.myTest(WindowAggregateITCase.scala:52)
> 	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 	at java.lang.reflect.Method.invoke(Method.java:498)
> 	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> 	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> 	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> 	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> 	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> 	at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> 	at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
> 	at org.junit.rules.RunRules.evaluate(RunRules.java:20)
> 	at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
> 	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
> 	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
> 	at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
> 	at
[GitHub] [flink] flinkbot edited a comment on pull request #14893: [FLINK-21321][Runtime/StateBackends] improve RocksDB incremental rescale performance by using deleteRange operator
flinkbot edited a comment on pull request #14893: URL: https://github.com/apache/flink/pull/14893#issuecomment-774815157 ## CI report: * 521b6d5ad32becde67f8477258633b1e1a1ac59b Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=13102) * f06580b2165ffc8bd6b13df1a9ce3bf2b241a3c3 UNKNOWN Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build - `@flinkbot run azure` re-run the last Azure build -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] lgo commented on pull request #14893: [FLINK-21321][Runtime/StateBackends] improve RocksDB incremental rescale performance by using deleteRange operator
lgo commented on pull request #14893: URL: https://github.com/apache/flink/pull/14893#issuecomment-900806314 I merged in `master` to resolve conflicts and would like to revive this change, because the RocksDB version was bumped to a version where this operation is stable. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Commented] (FLINK-21321) Change RocksDB incremental checkpoint re-scaling to use deleteRange
[ https://issues.apache.org/jira/browse/FLINK-21321?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17400773#comment-17400773 ] Joey Pereira commented on FLINK-21321: -- [~yunta] would you be open to merging this change now that the RocksDB version has been bumped? > Change RocksDB incremental checkpoint re-scaling to use deleteRange > --- > > Key: FLINK-21321 > URL: https://issues.apache.org/jira/browse/FLINK-21321 > Project: Flink > Issue Type: Improvement > Components: Runtime / State Backends >Reporter: Joey Pereira >Priority: Minor > Labels: pull-request-available > > In FLINK-8790, it was suggested to use RocksDB's {{deleteRange}} API to more > efficiently clip the databases for the desired target group. > During the PR for that ticket, > [#5582|https://github.com/apache/flink/pull/5582], the change did not end up > using the {{deleteRange}} method as it was an experimental feature in > RocksDB. > At this point, {{deleteRange}} is in a far less experimental state, though I > believe it is still formally "experimental". It is heavily used by many others, > like CockroachDB and TiKV, and they have teased out several bugs in complex > interactions over the years. > For certain re-scaling situations where restores trigger > {{restoreWithScaling}} and the DB clipping logic, this would likely reduce an > O(N) operation (N = state size/records) to O(1). For large-state apps, this > would potentially represent a non-trivial amount of time spent on re-scaling. > In the case of my workplace, we have an operator with 100s of billions of > records in state and re-scaling was taking a long time (>>30min, but it has > been a while since doing it). -- This message was sent by Atlassian Jira (v8.3.4#803005)
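The clipping optimization discussed above can be illustrated with a small in-memory analogue (a hypothetical sketch using a `TreeMap`, not Flink's restore code or RocksDB's actual API): the point is the call pattern — one operation per range boundary instead of one delete per record. RocksDB's `deleteRange` additionally makes each boundary call cheap by writing a single range tombstone.

```java
import java.util.TreeMap;

// In-memory analogue of clipping a state backend to a target key-group range.
// RocksDB's deleteRange(begin, end) removes [begin, end) with one range
// tombstone, instead of iterating over and deleting each key individually.
public class RangeClipDemo {

    static void clipToRange(TreeMap<Integer, String> db, int begin, int end) {
        // Two boundary operations replace a per-record delete loop; in RocksDB
        // these would roughly be deleteRange(MIN, begin) and deleteRange(end, MAX).
        db.headMap(begin).clear(); // drop keys < begin
        db.tailMap(end).clear();   // drop keys >= end
    }

    public static void main(String[] args) {
        TreeMap<Integer, String> db = new TreeMap<>();
        for (int k = 0; k < 100; k++) {
            db.put(k, "v" + k);
        }
        clipToRange(db, 40, 60);
        System.out.println("kept " + db.size() + " keys: " + db.firstKey() + ".." + db.lastKey());
    }
}
```

With a per-record loop the cost grows with state size; with range deletes it stays constant in the number of records, which is the O(N) → O(1) improvement the issue describes.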
[jira] [Commented] (FLINK-23727) Skip null values in SimpleStringSchema#deserialize
[ https://issues.apache.org/jira/browse/FLINK-23727?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17400771#comment-17400771 ] Yao Zhang commented on FLINK-23727: --- Hi [~Paul Lin], I encountered the same issue with a very early Flink version. Personally, I suggest we rework SimpleStringSchema for better usability. We could add a parameter indicating whether null values should be tolerated. > Skip null values in SimpleStringSchema#deserialize > -- > > Key: FLINK-23727 > URL: https://issues.apache.org/jira/browse/FLINK-23727 > Project: Flink > Issue Type: Bug > Components: Connectors / Common >Affects Versions: 1.13.2 >Reporter: Paul Lin >Priority: Major > > In Kafka use cases, it's valid to send a message with a key and a null > payload as a tombstone. But SimpleStringSchema, which is frequently used as a > message value deserializer, throws NPE when the input value is null. We > should tolerate null values in SimpleStringSchema (simply return null to skip > the records), otherwise users need to implement a custom one. -- This message was sent by Atlassian Jira (v8.3.4#803005)
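The null-tolerant behavior proposed in this issue can be sketched as follows (a hypothetical standalone deserializer, not Flink's actual SimpleStringSchema API): a Kafka tombstone arrives as a null payload, and the schema maps it to null instead of throwing a NullPointerException.

```java
import java.nio.charset.StandardCharsets;

// Hypothetical sketch of a null-tolerant string deserializer: Kafka tombstone
// messages carry a null payload, which should map to null rather than an NPE.
public class TolerantStringDeserializer {

    public static String deserialize(byte[] message) {
        if (message == null) {
            return null; // tombstone record: skip it instead of failing the job
        }
        return new String(message, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        System.out.println(deserialize("hello".getBytes(StandardCharsets.UTF_8)));
        System.out.println(deserialize(null));
    }
}
```

A downstream operator can then filter out the nulls, mirroring the "simply return null to skip the records" behavior suggested in the issue.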
[GitHub] [flink] curcur edited a comment on pull request #16844: [FLINK-23724][network] Fix the network buffer leak when ResultPartition is released
curcur edited a comment on pull request #16844: URL: https://github.com/apache/flink/pull/16844#issuecomment-900799942 One last question: What if we move the change to `releaseInternal`, like this? That way, we do not need to refactor or change fail(). WDYT?
```
@Override
protected void releaseInternal() {
    if (broadcastBufferBuilder != null) {
        broadcastBufferBuilder.close();
        broadcastBufferBuilder = null;
    }

    for (int i = 0; i < unicastBufferBuilders.length; ++i) {
        if (unicastBufferBuilders[i] != null) {
            unicastBufferBuilders[i].close();
            unicastBufferBuilders[i] = null;
        }
    }

    // Release all subpartitions
    for (ResultSubpartition subpartition : subpartitions) {
        try {
            subpartition.release();
        }
        // Catch this in order to ensure that release is called on all subpartitions
        catch (Throwable t) {
            LOG.error("Error during release of result subpartition: " + t.getMessage(), t);
        }
    }
}
```
-- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Created] (FLINK-23850) Test Kafka table connector with new runtime provider
Qingsheng Ren created FLINK-23850: - Summary: Test Kafka table connector with new runtime provider Key: FLINK-23850 URL: https://issues.apache.org/jira/browse/FLINK-23850 Project: Flink Issue Type: Improvement Components: Connectors / Kafka Affects Versions: 1.14.0 Reporter: Qingsheng Ren Fix For: 1.14.0 The runtime provider of the Kafka table connector has been replaced with the new KafkaSource and KafkaSink. The table connector needs to be tested to make sure there are no surprises for Table/SQL API users. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [flink] curcur commented on pull request #16844: [FLINK-23724][network] Fix the network buffer leak when ResultPartition is released
curcur commented on pull request #16844: URL: https://github.com/apache/flink/pull/16844#issuecomment-900799942 One last question: What if we move the change to `releaseInternal`, like this? That way, we do not need to refactor or change fail(). WDYT?
```
@Override
protected void releaseInternal() {
    // Release all subpartitions
    for (ResultSubpartition subpartition : subpartitions) {
        try {
            subpartition.release();
        }
        // Catch this in order to ensure that release is called on all subpartitions
        catch (Throwable t) {
            LOG.error("Error during release of result subpartition: " + t.getMessage(), t);
        }
    }

    if (broadcastBufferBuilder != null) {
        broadcastBufferBuilder.close();
        broadcastBufferBuilder = null;
    }

    for (int i = 0; i < unicastBufferBuilders.length; ++i) {
        if (unicastBufferBuilders[i] != null) {
            unicastBufferBuilders[i].close();
            unicastBufferBuilders[i] = null;
        }
    }
}
```
-- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] curcur commented on a change in pull request #16844: [FLINK-23724][network] Fix the network buffer leak when ResultPartition is released
curcur commented on a change in pull request #16844: URL: https://github.com/apache/flink/pull/16844#discussion_r690885852 ## File path: flink-runtime/src/main/java/org/apache/flink/runtime/taskmanager/Task.java ##
```
@@ -1182,7 +1186,10 @@ void cancelOrFailAndCancelInvokableInternal(ExecutionState targetState, Throwabl
         Runnable canceler =
                 new TaskCanceler(
                         LOG,
-                        this::closeNetworkResources,
+                        () -> {
+                            failAllResultPartitions();
+                            closeAllInputGates();
+                        },
```
Review comment: explain why changing resource release -> fail + release buffer pool (but not buffers). -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] curcur commented on a change in pull request #16844: [FLINK-23724][network] Fix the network buffer leak when ResultPartition is released
curcur commented on a change in pull request #16844: URL: https://github.com/apache/flink/pull/16844#discussion_r690885852 ## File path: flink-runtime/src/main/java/org/apache/flink/runtime/taskmanager/Task.java ##
```
@@ -1182,7 +1186,10 @@ void cancelOrFailAndCancelInvokableInternal(ExecutionState targetState, Throwabl
         Runnable canceler =
                 new TaskCanceler(
                         LOG,
-                        this::closeNetworkResources,
+                        () -> {
+                            failAllResultPartitions();
+                            closeAllInputGates();
+                        },
```
Review comment: explain why changing resource release -> fail + release resources. ## File path: flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/ResultPartition.java ##
```
@@ -250,15 +250,20 @@ public void release(Throwable cause) {
     /** Releases all produced data including both those stored in memory and persisted on disk. */
     protected abstract void releaseInternal();

-    @Override
-    public void close() {
+    private void closeBufferPool() {
         if (bufferPool != null) {
             bufferPool.lazyDestroy();
         }
     }

+    @Override
+    public void close() {
+        closeBufferPool();
+    }
+
     @Override
     public void fail(@Nullable Throwable throwable) {
+        closeBufferPool();
```
Review comment: add a comment on why closeBufferPool() is added in fail(). -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] curcur edited a comment on pull request #16844: [FLINK-23724][network] Fix the network buffer leak when ResultPartition is released
curcur edited a comment on pull request #16844: URL: https://github.com/apache/flink/pull/16844#issuecomment-900796934 Synced up offline. Thanks Yingjie @wsry for the explanation! I am fine with the change. Please add three comments/explanations, either in the code or in this PR (I will also comment inline). 1. Why `closeBufferPool()` is added in `fail()` 2. Why changing "closeResources" -> `fail` is necessary in the task canceler. 3. Why the refactoring is necessary. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] curcur commented on pull request #16844: [FLINK-23724][network] Fix the network buffer leak when ResultPartition is released
curcur commented on pull request #16844: URL: https://github.com/apache/flink/pull/16844#issuecomment-900796934 Thanks Yingjie @wsry for the explanation! I am fine with the change. Please add three comments/explanations, either in the code or in this PR (I will also comment inline). 1. Why `closeBufferPool()` is added in `fail()` 2. Why changing "closeResources" -> `fail` is necessary in the task canceler. 3. Why the refactoring is necessary. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Commented] (FLINK-23525) Docker command fails on Azure: Exit code 137 returned from process: file name '/usr/bin/docker'
[ https://issues.apache.org/jira/browse/FLINK-23525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17400767#comment-17400767 ] Xintong Song commented on FLINK-23525: -- https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=22415=logs=4d4a0d10-fca2-5507-8eed-c07f0bdf4887=7b25afdf-cc6c-566f-5459-359dc2585798=10340 > Docker command fails on Azure: Exit code 137 returned from process: file name > '/usr/bin/docker' > --- > > Key: FLINK-23525 > URL: https://issues.apache.org/jira/browse/FLINK-23525 > Project: Flink > Issue Type: Bug > Components: Build System / Azure Pipelines >Affects Versions: 1.14.0, 1.13.1 >Reporter: Dawid Wysakowicz >Priority: Critical > Labels: auto-deprioritized-blocker, test-stability > Fix For: 1.14.0 > > Attachments: screenshot-1.png > > > https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=21053=logs=4d4a0d10-fca2-5507-8eed-c07f0bdf4887=7b25afdf-cc6c-566f-5459-359dc2585798=10034 > {code} > ##[error]Exit code 137 returned from process: file name '/usr/bin/docker', > arguments 'exec -i -u 1001 -w /home/vsts_azpcontainer > 9dca235e075b70486fac576ee17cee722940edf575e5478e0a52def5b46c28b5 > /__a/externals/node/bin/node /__w/_temp/containerHandlerInvoker.js'. > {code} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (FLINK-23796) UnalignedCheckpointRescaleITCase JVM crash on Azure
[ https://issues.apache.org/jira/browse/FLINK-23796?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17400766#comment-17400766 ] Xintong Song commented on FLINK-23796: -- https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=22415=logs=39d5b1d5-3b41-54dc-6458-1e2ddd1cdcf3=0c010d0c-3dec-5bf1-d408-7b18988b1b2b=5142 > UnalignedCheckpointRescaleITCase JVM crash on Azure > --- > > Key: FLINK-23796 > URL: https://issues.apache.org/jira/browse/FLINK-23796 > Project: Flink > Issue Type: Bug > Components: Runtime / Checkpointing >Affects Versions: 1.14.0 >Reporter: Xintong Song >Priority: Major > Labels: test-stability > Fix For: 1.14.0 > > > https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=0=logs=39d5b1d5-3b41-54dc-6458-1e2ddd1cdcf3=0c010d0c-3dec-5bf1-d408-7b18988b1b2b=5182 > {code} > Aug 16 01:03:17 [ERROR] Failed to execute goal > org.apache.maven.plugins:maven-surefire-plugin:2.22.2:test > (integration-tests) on project flink-tests: There are test failures. > Aug 16 01:03:17 [ERROR] > Aug 16 01:03:17 [ERROR] Please refer to > /__w/1/s/flink-tests/target/surefire-reports for the individual test results. > Aug 16 01:03:17 [ERROR] Please refer to dump files (if any exist) > [date].dump, [date]-jvmRun[N].dump and [date].dumpstream. > Aug 16 01:03:17 [ERROR] ExecutionException The forked VM terminated without > properly saying goodbye. VM crash or System.exit called? 
> Aug 16 01:03:17 [ERROR] Command was /bin/sh -c cd /__w/1/s/flink-tests/target > && /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java -Xms256m -Xmx2048m > -Dmvn.forkNumber=1 -XX:+UseG1GC -jar > /__w/1/s/flink-tests/target/surefire/surefirebooter4077406518503777021.jar > /__w/1/s/flink-tests/target/surefire 2021-08-15T23-59-56_973-jvmRun1 > surefire4438021626717472043tmp surefire_1445134621790231688950tmp > Aug 16 01:03:17 [ERROR] Error occurred in starting fork, check output in log > Aug 16 01:03:17 [ERROR] Process Exit Code: 137 > Aug 16 01:03:17 [ERROR] Crashed tests: > Aug 16 01:03:17 [ERROR] > org.apache.flink.test.checkpointing.UnalignedCheckpointRescaleITCase > Aug 16 01:03:17 [ERROR] > org.apache.maven.surefire.booter.SurefireBooterForkException: > ExecutionException The forked VM terminated without properly saying goodbye. > VM crash or System.exit called? > Aug 16 01:03:17 [ERROR] Command was /bin/sh -c cd /__w/1/s/flink-tests/target > && /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java -Xms256m -Xmx2048m > -Dmvn.forkNumber=1 -XX:+UseG1GC -jar > /__w/1/s/flink-tests/target/surefire/surefirebooter4077406518503777021.jar > /__w/1/s/flink-tests/target/surefire 2021-08-15T23-59-56_973-jvmRun1 > surefire4438021626717472043tmp surefire_1445134621790231688950tmp > Aug 16 01:03:17 [ERROR] Error occurred in starting fork, check output in log > Aug 16 01:03:17 [ERROR] Process Exit Code: 137 > {code} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (FLINK-22889) JdbcExactlyOnceSinkE2eTest.testInsert hangs on azure
[ https://issues.apache.org/jira/browse/FLINK-22889?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17400764#comment-17400764 ] Xintong Song commented on FLINK-22889: -- https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=22415=logs=e1276d0f-df12-55ec-86b5-c0ad597d83c9=66648bdf-9af9-503d-c9a7-11f783a19935=14154 > JdbcExactlyOnceSinkE2eTest.testInsert hangs on azure > > > Key: FLINK-22889 > URL: https://issues.apache.org/jira/browse/FLINK-22889 > Project: Flink > Issue Type: Bug > Components: Connectors / JDBC >Affects Versions: 1.14.0, 1.13.1 >Reporter: Dawid Wysakowicz >Assignee: Roman Khachatryan >Priority: Critical > Labels: pull-request-available, test-stability > Fix For: 1.14.0 > > > https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=18690=logs=ba53eb01-1462-56a3-8e98-0dd97fbcaab5=bfbc6239-57a0-5db0-63f3-41551b4f7d51=16658 -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (FLINK-23849) Support react to the node decommissioning change state on yarn and do graceful restart
zlzhang0122 created FLINK-23849: --- Summary: Support react to the node decommissioning change state on yarn and do graceful restart Key: FLINK-23849 URL: https://issues.apache.org/jira/browse/FLINK-23849 Project: Flink Issue Type: Improvement Components: Deployment / YARN Affects Versions: 1.13.2, 1.13.1, 1.12.2 Reporter: zlzhang0122 Fix For: 1.15.0 Now we are not interested in node updates in YarnContainerEventHandler.onNodesUpdated, but sometimes we want to evict the running Flink process on the node and gracefully restart it on another node for unexpected reasons, such as the physical machine needing to be recycled or the cloud computing cluster needing to be migrated. Thus, we can react to the node decommissioning state change, call stopWithSavepoint, and then restart the job. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (FLINK-23828) KafkaSourceITCase.testIdleReader fails on azure
[ https://issues.apache.org/jira/browse/FLINK-23828?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17400763#comment-17400763 ] Xintong Song commented on FLINK-23828: -- https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=22418=logs=c5f0071e-1851-543e-9a45-9ac140befc32=15a22db7-8faa-5b34-3920-d33c9f0ca23c=7112 > KafkaSourceITCase.testIdleReader fails on azure > --- > > Key: FLINK-23828 > URL: https://issues.apache.org/jira/browse/FLINK-23828 > Project: Flink > Issue Type: Bug > Components: Connectors / Kafka >Affects Versions: 1.14.0 >Reporter: Xintong Song >Priority: Major > Labels: test-stability > Fix For: 1.14.0 > > > https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=22284=logs=c5f0071e-1851-543e-9a45-9ac140befc32=15a22db7-8faa-5b34-3920-d33c9f0ca23c=7355 > {code} > Aug 16 14:25:00 [ERROR] Tests run: 12, Failures: 1, Errors: 0, Skipped: 0, > Time elapsed: 67.241 s <<< FAILURE! - in > org.apache.flink.connector.kafka.source.KafkaSourceITCase > Aug 16 14:25:00 [ERROR] testIdleReader{TestEnvironment, ExternalContext}[1] > Time elapsed: 0.918 s <<< FAILURE! 
> Aug 16 14:25:00 java.lang.AssertionError: > Aug 16 14:25:00 > Aug 16 14:25:00 Expected: Records consumed by Flink should be identical to > test data and preserve the order in multiple splits > Aug 16 14:25:00 but: Unexpected record 'la3OaJDch7vuUXDmGOYf' > Aug 16 14:25:00 at > org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:20) > Aug 16 14:25:00 at > org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:8) > Aug 16 14:25:00 at > org.apache.flink.connectors.test.common.testsuites.SourceTestSuiteBase.testIdleReader(SourceTestSuiteBase.java:193) > Aug 16 14:25:00 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native > Method) > Aug 16 14:25:00 at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > Aug 16 14:25:00 at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > Aug 16 14:25:00 at java.lang.reflect.Method.invoke(Method.java:498) > Aug 16 14:25:00 at > org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:688) > Aug 16 14:25:00 at > org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60) > Aug 16 14:25:00 at > org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131) > Aug 16 14:25:00 at > org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:149) > Aug 16 14:25:00 at > org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestableMethod(TimeoutExtension.java:140) > Aug 16 14:25:00 at > org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestTemplateMethod(TimeoutExtension.java:92) > Aug 16 14:25:00 at > org.junit.jupiter.engine.execution.ExecutableInvoker$ReflectiveInterceptorCall.lambda$ofVoidMethod$0(ExecutableInvoker.java:115) > Aug 16 14:25:00 at > org.junit.jupiter.engine.execution.ExecutableInvoker.lambda$invoke$0(ExecutableInvoker.java:105) > Aug 16 14:25:00 at > 
org.junit.jupiter.engine.execution.InvocationInterceptorChain$InterceptedInvocation.proceed(InvocationInterceptorChain.java:106) > Aug 16 14:25:00 at > org.junit.jupiter.engine.execution.InvocationInterceptorChain.proceed(InvocationInterceptorChain.java:64) > Aug 16 14:25:00 at > org.junit.jupiter.engine.execution.InvocationInterceptorChain.chainAndInvoke(InvocationInterceptorChain.java:45) > Aug 16 14:25:00 at > org.junit.jupiter.engine.execution.InvocationInterceptorChain.invoke(InvocationInterceptorChain.java:37) > Aug 16 14:25:00 at > org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:104) > Aug 16 14:25:00 at > org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:98) > Aug 16 14:25:00 at > org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeTestMethod$6(TestMethodTestDescriptor.java:210) > Aug 16 14:25:00 at > org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) > Aug 16 14:25:00 at > org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.invokeTestMethod(TestMethodTestDescriptor.java:206) > Aug 16 14:25:00 at > org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:131) > Aug 16 14:25:00 at > org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:65) > Aug 16 14:25:00 at >
[jira] [Updated] (FLINK-23828) KafkaSourceITCase.testIdleReader fails on azure
[ https://issues.apache.org/jira/browse/FLINK-23828?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xintong Song updated FLINK-23828: - Priority: Critical (was: Major) > KafkaSourceITCase.testIdleReader fails on azure > --- > > Key: FLINK-23828 > URL: https://issues.apache.org/jira/browse/FLINK-23828 > Project: Flink > Issue Type: Bug > Components: Connectors / Kafka >Affects Versions: 1.14.0 >Reporter: Xintong Song >Priority: Critical > Labels: test-stability > Fix For: 1.14.0 > > > https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=22284=logs=c5f0071e-1851-543e-9a45-9ac140befc32=15a22db7-8faa-5b34-3920-d33c9f0ca23c=7355 > {code} > Aug 16 14:25:00 [ERROR] Tests run: 12, Failures: 1, Errors: 0, Skipped: 0, > Time elapsed: 67.241 s <<< FAILURE! - in > org.apache.flink.connector.kafka.source.KafkaSourceITCase > Aug 16 14:25:00 [ERROR] testIdleReader{TestEnvironment, ExternalContext}[1] > Time elapsed: 0.918 s <<< FAILURE! > Aug 16 14:25:00 java.lang.AssertionError: > Aug 16 14:25:00 > Aug 16 14:25:00 Expected: Records consumed by Flink should be identical to > test data and preserve the order in multiple splits > Aug 16 14:25:00 but: Unexpected record 'la3OaJDch7vuUXDmGOYf' > Aug 16 14:25:00 at > org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:20) > Aug 16 14:25:00 at > org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:8) > Aug 16 14:25:00 at > org.apache.flink.connectors.test.common.testsuites.SourceTestSuiteBase.testIdleReader(SourceTestSuiteBase.java:193) > Aug 16 14:25:00 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native > Method) > Aug 16 14:25:00 at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > Aug 16 14:25:00 at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > Aug 16 14:25:00 at java.lang.reflect.Method.invoke(Method.java:498) > Aug 16 14:25:00 at > 
org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:688) > Aug 16 14:25:00 at > org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60) > Aug 16 14:25:00 at > org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131) > Aug 16 14:25:00 at > org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:149) > Aug 16 14:25:00 at > org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestableMethod(TimeoutExtension.java:140) > Aug 16 14:25:00 at > org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestTemplateMethod(TimeoutExtension.java:92) > Aug 16 14:25:00 at > org.junit.jupiter.engine.execution.ExecutableInvoker$ReflectiveInterceptorCall.lambda$ofVoidMethod$0(ExecutableInvoker.java:115) > Aug 16 14:25:00 at > org.junit.jupiter.engine.execution.ExecutableInvoker.lambda$invoke$0(ExecutableInvoker.java:105) > Aug 16 14:25:00 at > org.junit.jupiter.engine.execution.InvocationInterceptorChain$InterceptedInvocation.proceed(InvocationInterceptorChain.java:106) > Aug 16 14:25:00 at > org.junit.jupiter.engine.execution.InvocationInterceptorChain.proceed(InvocationInterceptorChain.java:64) > Aug 16 14:25:00 at > org.junit.jupiter.engine.execution.InvocationInterceptorChain.chainAndInvoke(InvocationInterceptorChain.java:45) > Aug 16 14:25:00 at > org.junit.jupiter.engine.execution.InvocationInterceptorChain.invoke(InvocationInterceptorChain.java:37) > Aug 16 14:25:00 at > org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:104) > Aug 16 14:25:00 at > org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:98) > Aug 16 14:25:00 at > org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeTestMethod$6(TestMethodTestDescriptor.java:210) > Aug 16 14:25:00 at > 
org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) > Aug 16 14:25:00 at > org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.invokeTestMethod(TestMethodTestDescriptor.java:206) > Aug 16 14:25:00 at > org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:131) > Aug 16 14:25:00 at > org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:65) > Aug 16 14:25:00 at > org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:139) > Aug 16 14:25:00 at >
[jira] [Commented] (FLINK-23351) FileReadingWatermarkITCase.testWatermarkEmissionWithChaining fails due to "too few watermarks emitted" on azure
[ https://issues.apache.org/jira/browse/FLINK-23351?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17400762#comment-17400762 ] Xintong Song commented on FLINK-23351: -- https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=22401=logs=a57e0635-3fad-5b08-57c7-a4142d7d6fa9=2ef0effc-1da1-50e5-c2bd-aab434b1c5b7=10740 > FileReadingWatermarkITCase.testWatermarkEmissionWithChaining fails due to > "too few watermarks emitted" on azure > --- > > Key: FLINK-23351 > URL: https://issues.apache.org/jira/browse/FLINK-23351 > Project: Flink > Issue Type: Bug > Components: API / DataStream >Affects Versions: 1.14.0 >Reporter: Xintong Song >Assignee: Arvid Heise >Priority: Critical > Labels: test-stability > Fix For: 1.14.0 > > > https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=20267=logs=a549b384-c55a-52c0-c451-00e0477ab6db=81f2da51-a161-54c7-5b84-6001fed26530=10500 > {code} > Jul 10 22:53:45 [ERROR] Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, > Time elapsed: 21.065 s <<< FAILURE! - in > org.apache.flink.test.streaming.api.FileReadingWatermarkITCase > Jul 10 22:53:45 [ERROR] > testWatermarkEmissionWithChaining(org.apache.flink.test.streaming.api.FileReadingWatermarkITCase) > Time elapsed: 20.25 s <<< FAILURE! 
> Jul 10 22:53:45 java.lang.AssertionError: too few watermarks emitted in 3057 > ms expected:<305.0> but was:<124.0> > Jul 10 22:53:45 at org.junit.Assert.fail(Assert.java:89) > Jul 10 22:53:45 at org.junit.Assert.failNotEquals(Assert.java:835) > Jul 10 22:53:45 at org.junit.Assert.assertEquals(Assert.java:555) > Jul 10 22:53:45 at > org.apache.flink.test.streaming.api.FileReadingWatermarkITCase.testWatermarkEmissionWithChaining(FileReadingWatermarkITCase.java:79) > Jul 10 22:53:45 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native > Method) > Jul 10 22:53:45 at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > Jul 10 22:53:45 at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > Jul 10 22:53:45 at java.lang.reflect.Method.invoke(Method.java:498) > Jul 10 22:53:45 at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) > Jul 10 22:53:45 at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > Jul 10 22:53:45 at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) > Jul 10 22:53:45 at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > Jul 10 22:53:45 at > org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54) > Jul 10 22:53:45 at > org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) > Jul 10 22:53:45 at > org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) > Jul 10 22:53:45 at > org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) > Jul 10 22:53:45 at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) > Jul 10 22:53:45 at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) > Jul 10 22:53:45 at > org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) > Jul 10 22:53:45 at > 
org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) > Jul 10 22:53:45 at > org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) > Jul 10 22:53:45 at > org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) > Jul 10 22:53:45 at > org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) > Jul 10 22:53:45 at > org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) > Jul 10 22:53:45 at > org.junit.runners.ParentRunner.run(ParentRunner.java:413) > Jul 10 22:53:45 at > org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365) > Jul 10 22:53:45 at > org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273) > Jul 10 22:53:45 at > org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238) > Jul 10 22:53:45 at > org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159) > Jul 10 22:53:45 at > org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384) > Jul 10 22:53:45 at > org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345) > Jul 10 22:53:45 at >
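The "too few watermarks" assertion above compares an expected count derived from elapsed time against the observed count. As a rough sketch of that arithmetic: with an assumed 10 ms auto-watermark interval (inferred from the reported numbers, 3057 ms / 10 ms ≈ 305, not taken from the test source), the test expects roughly one watermark per interval but saw only 124:

```java
// Sketch of the arithmetic behind the "too few watermarks emitted" failure.
// The 10 ms watermark interval is an assumption inferred from the reported
// numbers; the real test may compute its expectation differently.
public class Main {
    static double expectedWatermarks(long elapsedMillis, long watermarkIntervalMillis) {
        // One watermark is expected per auto-watermark interval.
        return (double) elapsedMillis / watermarkIntervalMillis;
    }

    public static void main(String[] args) {
        long elapsedMillis = 3057;   // runtime reported by the failure
        long intervalMillis = 10;    // assumed auto-watermark interval
        double expected = expectedWatermarks(elapsedMillis, intervalMillis);
        double actual = 124.0;       // count reported by the failure
        System.out.println("expected=" + expected + " actual=" + actual);
        System.out.println("too few: " + (actual < expected));
    }
}
```

Under this reading, the failure indicates the source emitted watermarks at well under half the configured rate, pointing at scheduling or backpressure in the chained file-reading pipeline rather than a wrong expected value.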
[jira] [Commented] (FLINK-22198) KafkaTableITCase hang.
[ https://issues.apache.org/jira/browse/FLINK-22198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17400761#comment-17400761 ] Xintong Song commented on FLINK-22198: -- https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=22396=logs=c5f0071e-1851-543e-9a45-9ac140befc32=15a22db7-8faa-5b34-3920-d33c9f0ca23c=7380 > KafkaTableITCase hang. > -- > > Key: FLINK-22198 > URL: https://issues.apache.org/jira/browse/FLINK-22198 > Project: Flink > Issue Type: Bug > Components: Connectors / Kafka >Affects Versions: 1.14.0, 1.12.4 >Reporter: Guowei Ma >Assignee: Qingsheng Ren >Priority: Blocker > Labels: pull-request-available, stale-blocker, test-stability > Fix For: 1.14.0 > > > https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=16287=logs=c5f0071e-1851-543e-9a45-9ac140befc32=1fb1a56f-e8b5-5a82-00a0-a2db7757b4f5=6625 > There are no artifacts. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (FLINK-23525) Docker command fails on Azure: Exit code 137 returned from process: file name '/usr/bin/docker'
[ https://issues.apache.org/jira/browse/FLINK-23525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17400760#comment-17400760 ] Xintong Song commented on FLINK-23525: -- https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=22383=logs=d44f43ce-542c-597d-bf94-b0718c71e5e8=ed165f3f-d0f6-524b-5279-86f8ee7d0e2d=14053 > Docker command fails on Azure: Exit code 137 returned from process: file name > '/usr/bin/docker' > --- > > Key: FLINK-23525 > URL: https://issues.apache.org/jira/browse/FLINK-23525 > Project: Flink > Issue Type: Bug > Components: Build System / Azure Pipelines >Affects Versions: 1.14.0, 1.13.1 >Reporter: Dawid Wysakowicz >Priority: Critical > Labels: auto-deprioritized-blocker, test-stability > Fix For: 1.14.0 > > Attachments: screenshot-1.png > > > https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=21053=logs=4d4a0d10-fca2-5507-8eed-c07f0bdf4887=7b25afdf-cc6c-566f-5459-359dc2585798=10034 > {code} > ##[error]Exit code 137 returned from process: file name '/usr/bin/docker', > arguments 'exec -i -u 1001 -w /home/vsts_azpcontainer > 9dca235e075b70486fac576ee17cee722940edf575e5478e0a52def5b46c28b5 > /__a/externals/node/bin/node /__w/_temp/containerHandlerInvoker.js'. > {code}
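For context on the recurring exit code: POSIX shells report 128 + N when a process is terminated by signal N, so 137 decodes to SIGKILL (signal 9). In CI containers this is most often the kernel OOM killer or an enforced container memory limit, which fits the Azure Pipelines pattern seen here. A minimal illustration of the decoding:

```java
// Decoding "Exit code 137": shells report 128 + N for a process killed
// by signal N, so 137 means SIGKILL (9) -- typically the OOM killer or
// a container memory limit in CI environments.
public class Main {
    static int signalFromExitCode(int exitCode) {
        // Exit codes above 128 encode a fatal signal; otherwise no signal.
        return exitCode > 128 ? exitCode - 128 : 0;
    }

    public static void main(String[] args) {
        System.out.println("signal: " + signalFromExitCode(137)); // signal: 9
    }
}
```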
[jira] [Commented] (FLINK-23525) Docker command fails on Azure: Exit code 137 returned from process: file name '/usr/bin/docker'
[ https://issues.apache.org/jira/browse/FLINK-23525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17400759#comment-17400759 ] Xintong Song commented on FLINK-23525: -- https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=22378=logs=d44f43ce-542c-597d-bf94-b0718c71e5e8=ed165f3f-d0f6-524b-5279-86f8ee7d0e2d=14487
[jira] [Commented] (FLINK-23525) Docker command fails on Azure: Exit code 137 returned from process: file name '/usr/bin/docker'
[ https://issues.apache.org/jira/browse/FLINK-23525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17400758#comment-17400758 ] Xintong Song commented on FLINK-23525: -- https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=22375=logs=a57e0635-3fad-5b08-57c7-a4142d7d6fa9=5360d54c-8d94-5d85-304e-a89267eb785a=9746
[jira] [Commented] (FLINK-22387) UpsertKafkaTableITCase hangs when setting up kafka
[ https://issues.apache.org/jira/browse/FLINK-22387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17400755#comment-17400755 ] Xintong Song commented on FLINK-22387: -- https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=22374=logs=c5f0071e-1851-543e-9a45-9ac140befc32=15a22db7-8faa-5b34-3920-d33c9f0ca23c=7123 > UpsertKafkaTableITCase hangs when setting up kafka > -- > > Key: FLINK-22387 > URL: https://issues.apache.org/jira/browse/FLINK-22387 > Project: Flink > Issue Type: Bug > Components: Connectors / Kafka, Table SQL / Ecosystem >Affects Versions: 1.14.0, 1.13.1, 1.12.4 >Reporter: Dawid Wysakowicz >Assignee: Shengkai Fang >Priority: Blocker > Labels: pull-request-available, test-stability > Fix For: 1.14.0, 1.12.6, 1.13.3 > > > https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=16901=logs=c5f0071e-1851-543e-9a45-9ac140befc32=1fb1a56f-e8b5-5a82-00a0-a2db7757b4f5=6932 > {code} > 2021-04-20T20:01:32.2276988Z Apr 20 20:01:32 "main" #1 prio=5 os_prio=0 > tid=0x7fe87400b000 nid=0x4028 runnable [0x7fe87df22000] > 2021-04-20T20:01:32.2277666Z Apr 20 20:01:32java.lang.Thread.State: > RUNNABLE > 2021-04-20T20:01:32.2278338Z Apr 20 20:01:32 at > org.testcontainers.shaded.okio.Buffer.getByte(Buffer.java:312) > 2021-04-20T20:01:32.2279325Z Apr 20 20:01:32 at > org.testcontainers.shaded.okio.RealBufferedSource.readHexadecimalUnsignedLong(RealBufferedSource.java:310) > 2021-04-20T20:01:32.2280656Z Apr 20 20:01:32 at > org.testcontainers.shaded.okhttp3.internal.http1.Http1ExchangeCodec$ChunkedSource.readChunkSize(Http1ExchangeCodec.java:492) > 2021-04-20T20:01:32.2281603Z Apr 20 20:01:32 at > org.testcontainers.shaded.okhttp3.internal.http1.Http1ExchangeCodec$ChunkedSource.read(Http1ExchangeCodec.java:471) > 2021-04-20T20:01:32.2282163Z Apr 20 20:01:32 at > org.testcontainers.shaded.okhttp3.internal.Util.skipAll(Util.java:204) > 2021-04-20T20:01:32.2282870Z Apr 20 20:01:32 at > 
org.testcontainers.shaded.okhttp3.internal.Util.discard(Util.java:186) > 2021-04-20T20:01:32.2283494Z Apr 20 20:01:32 at > org.testcontainers.shaded.okhttp3.internal.http1.Http1ExchangeCodec$ChunkedSource.close(Http1ExchangeCodec.java:511) > 2021-04-20T20:01:32.2284460Z Apr 20 20:01:32 at > org.testcontainers.shaded.okio.ForwardingSource.close(ForwardingSource.java:43) > 2021-04-20T20:01:32.2285183Z Apr 20 20:01:32 at > org.testcontainers.shaded.okhttp3.internal.connection.Exchange$ResponseBodySource.close(Exchange.java:313) > 2021-04-20T20:01:32.2285756Z Apr 20 20:01:32 at > org.testcontainers.shaded.okio.RealBufferedSource.close(RealBufferedSource.java:476) > 2021-04-20T20:01:32.2286287Z Apr 20 20:01:32 at > org.testcontainers.shaded.okhttp3.internal.Util.closeQuietly(Util.java:139) > 2021-04-20T20:01:32.2286795Z Apr 20 20:01:32 at > org.testcontainers.shaded.okhttp3.ResponseBody.close(ResponseBody.java:192) > 2021-04-20T20:01:32.2287270Z Apr 20 20:01:32 at > org.testcontainers.shaded.okhttp3.Response.close(Response.java:290) > 2021-04-20T20:01:32.2287913Z Apr 20 20:01:32 at > org.testcontainers.shaded.com.github.dockerjava.okhttp.OkDockerHttpClient$OkResponse.close(OkDockerHttpClient.java:285) > 2021-04-20T20:01:32.2288606Z Apr 20 20:01:32 at > org.testcontainers.shaded.com.github.dockerjava.core.DefaultInvocationBuilder.lambda$null$0(DefaultInvocationBuilder.java:272) > 2021-04-20T20:01:32.2289295Z Apr 20 20:01:32 at > org.testcontainers.shaded.com.github.dockerjava.core.DefaultInvocationBuilder$$Lambda$340/2058508175.close(Unknown > Source) > 2021-04-20T20:01:32.2289886Z Apr 20 20:01:32 at > com.github.dockerjava.api.async.ResultCallbackTemplate.close(ResultCallbackTemplate.java:77) > 2021-04-20T20:01:32.2290567Z Apr 20 20:01:32 at > org.testcontainers.utility.ResourceReaper.start(ResourceReaper.java:202) > 2021-04-20T20:01:32.2291051Z Apr 20 20:01:32 at > org.testcontainers.DockerClientFactory.client(DockerClientFactory.java:205) > 
2021-04-20T20:01:32.2291879Z Apr 20 20:01:32 - locked <0xe9cd50f8> > (a [Ljava.lang.Object;) > 2021-04-20T20:01:32.2292313Z Apr 20 20:01:32 at > org.testcontainers.LazyDockerClient.getDockerClient(LazyDockerClient.java:14) > 2021-04-20T20:01:32.2292870Z Apr 20 20:01:32 at > org.testcontainers.LazyDockerClient.authConfig(LazyDockerClient.java:12) > 2021-04-20T20:01:32.2293383Z Apr 20 20:01:32 at > org.testcontainers.containers.GenericContainer.start(GenericContainer.java:310) > 2021-04-20T20:01:32.2293890Z Apr 20 20:01:32 at > org.testcontainers.containers.GenericContainer.starting(GenericContainer.java:1029) > 2021-04-20T20:01:32.2294578Z Apr 20 20:01:32 at >
[jira] [Commented] (FLINK-23848) PulsarSourceITCase is failed on Azure
[ https://issues.apache.org/jira/browse/FLINK-23848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17400752#comment-17400752 ] Xintong Song commented on FLINK-23848: -- https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=22359=logs=fc5181b0-e452-5c8f-68de-1097947f6483=995c650b-6573-581c-9ce6-7ad4cc038461=24751 > PulsarSourceITCase is failed on Azure > - > > Key: FLINK-23848 > URL: https://issues.apache.org/jira/browse/FLINK-23848 > Project: Flink > Issue Type: Bug > Components: Connectors / Pulsar >Affects Versions: 1.14.0 >Reporter: Jark Wu >Priority: Major > Labels: test-stability > Fix For: 1.14.0 > > > https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=22412=logs=fc5181b0-e452-5c8f-68de-1097947f6483=995c650b-6573-581c-9ce6-7ad4cc038461 > {code} > 2021-08-17T20:11:53.7228789Z Aug 17 20:11:53 [INFO] Running > org.apache.flink.connector.pulsar.source.PulsarSourceITCase > 2021-08-17T20:17:38.2429467Z Aug 17 20:17:38 [ERROR] Tests run: 8, Failures: > 0, Errors: 1, Skipped: 0, Time elapsed: 344.515 s <<< FAILURE! - in > org.apache.flink.connector.pulsar.source.PulsarSourceITCase > 2021-08-17T20:17:38.2430693Z Aug 17 20:17:38 [ERROR] > testMultipleSplits{TestEnvironment, ExternalContext}[2] Time elapsed: 66.766 > s <<< ERROR! 
> 2021-08-17T20:17:38.2431387Z Aug 17 20:17:38 java.lang.RuntimeException: > Failed to fetch next result > 2021-08-17T20:17:38.2432035Z Aug 17 20:17:38 at > org.apache.flink.streaming.api.operators.collect.CollectResultIterator.nextResultFromFetcher(CollectResultIterator.java:109) > 2021-08-17T20:17:38.2433345Z Aug 17 20:17:38 at > org.apache.flink.streaming.api.operators.collect.CollectResultIterator.hasNext(CollectResultIterator.java:80) > 2021-08-17T20:17:38.2434175Z Aug 17 20:17:38 at > org.apache.flink.connectors.test.common.utils.TestDataMatchers$MultipleSplitDataMatcher.matchesSafely(TestDataMatchers.java:151) > 2021-08-17T20:17:38.2435028Z Aug 17 20:17:38 at > org.apache.flink.connectors.test.common.utils.TestDataMatchers$MultipleSplitDataMatcher.matchesSafely(TestDataMatchers.java:133) > 2021-08-17T20:17:38.2438387Z Aug 17 20:17:38 at > org.hamcrest.TypeSafeDiagnosingMatcher.matches(TypeSafeDiagnosingMatcher.java:55) > 2021-08-17T20:17:38.2439100Z Aug 17 20:17:38 at > org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:12) > 2021-08-17T20:17:38.2439708Z Aug 17 20:17:38 at > org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:8) > 2021-08-17T20:17:38.2440299Z Aug 17 20:17:38 at > org.apache.flink.connectors.test.common.testsuites.SourceTestSuiteBase.testMultipleSplits(SourceTestSuiteBase.java:156) > 2021-08-17T20:17:38.2441007Z Aug 17 20:17:38 at > sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > 2021-08-17T20:17:38.2441526Z Aug 17 20:17:38 at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > 2021-08-17T20:17:38.2442068Z Aug 17 20:17:38 at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > 2021-08-17T20:17:38.2442759Z Aug 17 20:17:38 at > java.lang.reflect.Method.invoke(Method.java:498) > 2021-08-17T20:17:38.2443247Z Aug 17 20:17:38 at > org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:688) > 2021-08-17T20:17:38.2443812Z Aug 17 
20:17:38 at > org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60) > 2021-08-17T20:17:38.241Z Aug 17 20:17:38 at > org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131) > 2021-08-17T20:17:38.2445101Z Aug 17 20:17:38 at > org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:149) > 2021-08-17T20:17:38.2445688Z Aug 17 20:17:38 at > org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestableMethod(TimeoutExtension.java:140) > 2021-08-17T20:17:38.2446328Z Aug 17 20:17:38 at > org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestTemplateMethod(TimeoutExtension.java:92) > 2021-08-17T20:17:38.2447303Z Aug 17 20:17:38 at > org.junit.jupiter.engine.execution.ExecutableInvoker$ReflectiveInterceptorCall.lambda$ofVoidMethod$0(ExecutableInvoker.java:115) > 2021-08-17T20:17:38.2448336Z Aug 17 20:17:38 at > org.junit.jupiter.engine.execution.ExecutableInvoker.lambda$invoke$0(ExecutableInvoker.java:105) > 2021-08-17T20:17:38.2448999Z Aug 17 20:17:38 at > org.junit.jupiter.engine.execution.InvocationInterceptorChain$InterceptedInvocation.proceed(InvocationInterceptorChain.java:106) > 2021-08-17T20:17:38.2449689Z Aug 17 20:17:38 at > org.junit.jupiter.engine.execution.InvocationInterceptorChain.proceed(InvocationInterceptorChain.java:64) > 2021-08-17T20:17:38.2450363Z Aug 17
[jira] [Commented] (FLINK-23697) flink standalone Isolation is poor, why not do as spark does
[ https://issues.apache.org/jira/browse/FLINK-23697?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17400753#comment-17400753 ] jackylau commented on FLINK-23697: -- hi [~trohrmann], could we support this in the future? > flink standalone Isolation is poor, why not do as spark does > > > Key: FLINK-23697 > URL: https://issues.apache.org/jira/browse/FLINK-23697 > Project: Flink > Issue Type: Bug >Reporter: jackylau >Priority: Major > > Flink standalone isolation is poor; why not do as Spark does? > Spark abstracts a cluster manager, and its executor is comparable to a Flink TaskManager. > The Spark worker (standalone) is a process, and each executor runs as a child > process of that worker.
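The comment above describes Spark's standalone isolation model: the worker launches each executor as a separate child JVM, so one executor's crash or heap exhaustion cannot take down the worker or its siblings. A minimal sketch of that launch pattern, using a hypothetical `ExecutorMain` entry point for illustration (this is not Flink's or Spark's actual API):

```java
import java.io.File;

// Sketch of the worker/child-executor model described in FLINK-23697:
// each executor is forked as its own JVM with its own heap limit.
// `ExecutorMain` is a hypothetical placeholder class name.
public class Main {
    static ProcessBuilder executorProcess(String mainClass, String memoryLimit) {
        String javaBin = System.getProperty("java.home")
                + File.separator + "bin" + File.separator + "java";
        // A dedicated JVM per executor isolates memory and crash failures.
        return new ProcessBuilder(javaBin, "-Xmx" + memoryLimit, mainClass);
    }

    public static void main(String[] args) {
        ProcessBuilder pb = executorProcess("ExecutorMain", "512m");
        System.out.println(String.join(" ", pb.command()));
        // pb.start() would actually fork the child process; omitted here
        // because ExecutorMain is only a placeholder.
    }
}
```

By contrast, a standalone Flink TaskManager runs all of its task slots inside one JVM, which is the weaker isolation the reporter is pointing at.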
[jira] [Updated] (FLINK-23848) PulsarSourceITCase is failed on Azure
[ https://issues.apache.org/jira/browse/FLINK-23848?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xintong Song updated FLINK-23848: - Labels: test-stability (was: )
[jira] [Updated] (FLINK-23848) PulsarSourceITCase is failed on Azure
[ https://issues.apache.org/jira/browse/FLINK-23848?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xintong Song updated FLINK-23848: - Affects Version/s: 1.14.0
[jira] [Updated] (FLINK-23848) PulsarSourceITCase is failed on Azure
[ https://issues.apache.org/jira/browse/FLINK-23848?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xintong Song updated FLINK-23848: - Issue Type: Bug (was: Improvement)
[jira] [Updated] (FLINK-23848) PulsarSourceITCase is failed on Azure
[ https://issues.apache.org/jira/browse/FLINK-23848?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xintong Song updated FLINK-23848: - Fix Version/s: 1.14.0
[GitHub] [flink] flinkbot edited a comment on pull request #16860: [FLINK-22198][connector/kafka] Redirect KafkaContainer output to log4j and print debug log if test hangs
flinkbot edited a comment on pull request #16860: URL: https://github.com/apache/flink/pull/16860#issuecomment-900154319 ## CI report: * d6bfdd97c618e764305515917075f97a38ebcd32 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=22381) * b2f0a44536b7206856c366dfddd3f58e01766033 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=22426) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build - `@flinkbot run azure` re-run the last Azure build -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot edited a comment on pull request #16851: [FLINK-23776][streaming] Fix handling of timestamp-less records in Source metrics.
flinkbot edited a comment on pull request #16851: URL: https://github.com/apache/flink/pull/16851#issuecomment-899772155 ## CI report: * 2178ac743f87edaf1792db765a0fc0b5b058c66e Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=22419) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build - `@flinkbot run azure` re-run the last Azure build -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot edited a comment on pull request #16740: [FLINK-23614][table-planner] The resulting scale of TRUNCATE(DECIMAL,…
flinkbot edited a comment on pull request #16740: URL: https://github.com/apache/flink/pull/16740#issuecomment-894176423 ## CI report: * 56b337596cb3b2db4fe3d4aa985b4ce8851102ba Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=21956) * 9296b4edc8df450cdace492e137e58a1dba609c4 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=22425) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build - `@flinkbot run azure` re-run the last Azure build -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot edited a comment on pull request #16683: [FLINK-23846][docs]improve PushGatewayReporter config description
flinkbot edited a comment on pull request #16683: URL: https://github.com/apache/flink/pull/16683#issuecomment-891515480 ## CI report: * 3b3f16a9694732d6211a63d324d4bb579933fe87 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=21354) * 63aa6e2f3c05fac9150399fe2a9c519cbe3e1d1e UNKNOWN * a51a065dc64d6b495ff12761b4babb117529da2b Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=22422) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build - `@flinkbot run azure` re-run the last Azure build -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot edited a comment on pull request #16629: [FLINK-23847][connectors][kafka] improve error message when valueDeseriali…
flinkbot edited a comment on pull request #16629: URL: https://github.com/apache/flink/pull/16629#issuecomment-888753020 ## CI report: * cabb650967f9a523bca917ae6409398e5c6ccd2a Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=21136) * a3f89c52f22638b58005b76670229e2952d6bf83 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=22424) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build - `@flinkbot run azure` re-run the last Azure build -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Commented] (FLINK-17104) Support registering custom JobStatusListeners from config
[ https://issues.apache.org/jira/browse/FLINK-17104?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17400743#comment-17400743 ] Wenhao Ji commented on FLINK-17104: --- It has been a while since I opened the discussion about this feature. I hope everyone who is watching this ticket could participate in the [discussion on the mailing list|https://lists.apache.org/list.html?d...@flink.apache.org:2020-9:JobStatusListeners] . And I also have created a [POC|https://github.com/predatorray/flink/commit/2cab8bb1119162213632db984d2eb7529b8140e7] for this and hope you can share some suggestions about it. > Support registering custom JobStatusListeners from config > - > > Key: FLINK-17104 > URL: https://issues.apache.org/jira/browse/FLINK-17104 > Project: Flink > Issue Type: New Feature > Components: API / Core >Reporter: Canbin Zheng >Priority: Minor > Labels: pull-request-available, stale-minor > > Currently, a variety of users are asking for registering custom > JobStatusListener support to get timely feedback on the status transition of > the jobs. This could be an important feature for effective Flink cluster > monitoring systems. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [flink] wsry commented on pull request #16844: [FLINK-23724][network] Fix the network buffer leak when ResultPartition is released
wsry commented on pull request #16844: URL: https://github.com/apache/flink/pull/16844#issuecomment-900783119 > Thanks @wsry for fixing up! But I have a small concern about this change. > > 1. It is necessary to recycle BufferBuilder in the BufferWritingResultPartition#close method, but not sure why we add `closeBufferPool()` in the `fail` method. > 2. Originally, `fail()` is responsible for propagating failure and release `**buffers**`, but not the `**buffer pool**`. `closeNetworkResources` is responsible to close network-related resources after `fail()` is called. >In the `TaskCanceler`, `closeNetworkResources` is called to recycle resources and finally, BufferWritingResultPartition#close() will also be called as well as `closeBufferPool()` in `super.close()`. > > I am confused why we break this contract and include resource clean-up in fail(), and have to include fail in a cancler? > > Put in another way, why simply add point 1 (the close method added in BufferWritingResultPartition) is not enough? > > Also, how about the `release` method (the same method to release buffer consumers) instead of `close` method. That may be a better place? > > Let's discuss and sync up offline tomorrow. 1. The close buffer pool operation in the fail method is just for this: >>> // Early release of input and output buffer pools. We do this >>> // in order to unblock async Threads, which produce/consume the >>> // intermediate streams outside of the main Task Thread (like >>> // the Kafka consumer). 2. In my understanding, the fail method is similar to the release method: it releases the partition (including all subpartitions), which means the partition cannot be consumed anymore. The subpartition releases the buffers and the data, and marks itself released, in the same method. The reason why we do not release buffer builders in the canceler thread is to avoid race conditions and locking. -- This is an automated message from the Apache Git Service. 
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
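The "early release of buffer pools to unblock async threads" behavior quoted in the discussion above can be illustrated with a toy sketch. The pool below is a simplified stand-in for Flink's LocalBufferPool, not the actual class: a producer thread blocked waiting for a buffer is woken up as soon as the pool is destroyed, without having to wait for the task thread's later close().

```java
import java.util.concurrent.*;

public class EarlyPoolReleaseDemo {
    // Toy buffer pool: requestBuffer() blocks until a buffer is available
    // or the pool is destroyed, mirroring the "early release" contract.
    static class ToyBufferPool {
        private final BlockingQueue<Object> buffers = new LinkedBlockingQueue<>();
        private volatile boolean destroyed;

        Object requestBuffer() throws InterruptedException {
            while (!destroyed) {
                Object b = buffers.poll(10, TimeUnit.MILLISECONDS);
                if (b != null) {
                    return b;
                }
            }
            // Destroying the pool unblocks any waiting requester.
            throw new CancellationException("buffer pool destroyed");
        }

        void lazyDestroy() {
            destroyed = true;
        }
    }

    public static void main(String[] args) throws Exception {
        ToyBufferPool pool = new ToyBufferPool();
        ExecutorService producer = Executors.newSingleThreadExecutor();
        // Async producer (like a Kafka consumer thread) blocked on a buffer.
        Future<?> blocked = producer.submit(() -> {
            try {
                pool.requestBuffer();
            } catch (InterruptedException ignored) {
            }
            return null;
        });
        pool.lazyDestroy(); // what fail() does early, before close() runs
        try {
            blocked.get(1, TimeUnit.SECONDS);
        } catch (ExecutionException e) {
            System.out.println(e.getCause().getClass().getSimpleName());
        }
        producer.shutdown();
    }
}
```

The point of the sketch is only the ordering: destroying the pool from fail() is what prevents an async thread from staying blocked until the task thread finally calls close().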
[GitHub] [flink] flinkbot edited a comment on pull request #16860: [FLINK-22198][connector/kafka] Redirect KafkaContainer output to log4j and print debug log if test hangs
flinkbot edited a comment on pull request #16860: URL: https://github.com/apache/flink/pull/16860#issuecomment-900154319 ## CI report: * d6bfdd97c618e764305515917075f97a38ebcd32 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=22381) * b2f0a44536b7206856c366dfddd3f58e01766033 UNKNOWN Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build - `@flinkbot run azure` re-run the last Azure build -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Updated] (FLINK-23847) [Kafka] Error msg are obscure when KafkaConsumer init, valueDeserializer is null
[ https://issues.apache.org/jira/browse/FLINK-23847?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated FLINK-23847: --- Labels: pull-request-available (was: ) > [Kafka] Error msg are obscure when KafkaConsumer init, valueDeserializer is > null > > > Key: FLINK-23847 > URL: https://issues.apache.org/jira/browse/FLINK-23847 > Project: Flink > Issue Type: Bug > Components: Connectors / Kafka >Reporter: camilesing >Priority: Critical > Labels: pull-request-available > > As the title, i think the msg can be clearer. > > _this.deserializer = checkNotNull(deserializer, "valueDeserializer");_ > _->_ > _this.deserializer = checkNotNull(deserializer, "valueDeserializer cannot be > null");_ -- This message was sent by Atlassian Jira (v8.3.4#803005)
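The one-line change proposed in the ticket relies on `checkNotNull` attaching its message to the thrown NullPointerException. A minimal sketch of that behavior, using a local stand-in for Flink's `Preconditions.checkNotNull` rather than the actual class:

```java
public class CheckNotNullDemo {
    // Minimal stand-in for Flink's Preconditions.checkNotNull(T, String):
    // throws a NullPointerException carrying the given message when null.
    static <T> T checkNotNull(T reference, String errorMessage) {
        if (reference == null) {
            throw new NullPointerException(errorMessage);
        }
        return reference;
    }

    public static void main(String[] args) {
        Object valueDeserializer = null; // e.g. forgotten when building the consumer
        try {
            checkNotNull(valueDeserializer, "valueDeserializer cannot be null");
        } catch (NullPointerException e) {
            // A descriptive message tells the user what to fix, not just a bare NPE.
            System.out.println(e.getMessage());
        }
    }
}
```

With the bare `"valueDeserializer"` message, the resulting stack trace gives little hint about what was missing; the longer message makes the failure self-explanatory.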
[GitHub] [flink] flinkbot edited a comment on pull request #16740: [FLINK-23614][table-planner] The resulting scale of TRUNCATE(DECIMAL,…
flinkbot edited a comment on pull request #16740: URL: https://github.com/apache/flink/pull/16740#issuecomment-894176423 ## CI report: * 56b337596cb3b2db4fe3d4aa985b4ce8851102ba Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=21956) * 9296b4edc8df450cdace492e137e58a1dba609c4 UNKNOWN Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build - `@flinkbot run azure` re-run the last Azure build -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot edited a comment on pull request #16683: [FLINK-23846][docs]improve PushGatewayReporter config description
flinkbot edited a comment on pull request #16683: URL: https://github.com/apache/flink/pull/16683#issuecomment-891515480 ## CI report: * 3b3f16a9694732d6211a63d324d4bb579933fe87 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=21354) * 63aa6e2f3c05fac9150399fe2a9c519cbe3e1d1e UNKNOWN * a51a065dc64d6b495ff12761b4babb117529da2b UNKNOWN Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build - `@flinkbot run azure` re-run the last Azure build -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot edited a comment on pull request #16629: [FLINK-23847][connectors][kafka] improve error message when valueDeseriali…
flinkbot edited a comment on pull request #16629: URL: https://github.com/apache/flink/pull/16629#issuecomment-888753020 ## CI report: * cabb650967f9a523bca917ae6409398e5c6ccd2a Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=21136) * a3f89c52f22638b58005b76670229e2952d6bf83 UNKNOWN Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build - `@flinkbot run azure` re-run the last Azure build -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Commented] (FLINK-23493) python tests hang on Azure
[ https://issues.apache.org/jira/browse/FLINK-23493?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17400730#comment-17400730 ] Huang Xingbo commented on FLINK-23493: -- https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=22416_data=ew0KICAic291cmNlIjogIlNsYWNrUGlwZWxpbmVzQXBwIiwNCiAgInNvdXJjZV9ldmVudF9uYW1lIjogIm1zLnZzcy1waXBlbGluZXMucnVuLXN0YXRlLWNoYW5nZWQtZXZlbnQiDQp9 > python tests hang on Azure > -- > > Key: FLINK-23493 > URL: https://issues.apache.org/jira/browse/FLINK-23493 > Project: Flink > Issue Type: Bug > Components: API / Python >Affects Versions: 1.14.0, 1.13.1, 1.12.4 >Reporter: Dawid Wysakowicz >Assignee: Huang Xingbo >Priority: Blocker > Labels: test-stability > Fix For: 1.14.0 > > > https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=20898=logs=821b528f-1eed-5598-a3b4-7f748b13f261=4fad9527-b9a5-5015-1b70-8356e5c91490=22829 -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (FLINK-23749) Testing Window Join
[ https://issues.apache.org/jira/browse/FLINK-23749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17400728#comment-17400728 ] lixiaobao commented on FLINK-23749: --- Can you assign this issue to me? I'd like to work on this ticket. Thank you very much! > Testing Window Join > --- > > Key: FLINK-23749 > URL: https://issues.apache.org/jira/browse/FLINK-23749 > Project: Flink > Issue Type: Improvement > Components: Tests >Reporter: JING ZHANG >Priority: Blocker > Labels: release-testing > Fix For: 1.14.0 > > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (FLINK-23837) Differences between documentation and implementation for taskmanager.memory.network.max
[ https://issues.apache.org/jira/browse/FLINK-23837?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17400726#comment-17400726 ] Yao Zhang commented on FLINK-23837: --- Hi all, I looked into the default flink-conf.yaml file: {code:yaml} # The amount of memory going to the network stack. These numbers usually need # no tuning. Adjusting them may be necessary in case of an "Insufficient number # of network buffers" error. The default min is 64MB, the default max is 1GB. # # taskmanager.memory.network.fraction: 0.1 # taskmanager.memory.network.min: 64mb # taskmanager.memory.network.max: 1gb {code} It says the default max is 1GB, but this configuration is not enabled by default. So the default conf and the Flink docs contradict what is actually in the code. > Differences between documentation and implementation for > taskmanager.memory.network.max > > > Key: FLINK-23837 > URL: https://issues.apache.org/jira/browse/FLINK-23837 > Project: Flink > Issue Type: Bug > Components: Runtime / Configuration >Affects Versions: 1.12.0 >Reporter: Wawrzyniec Nowak >Priority: Minor > > The default value for taskmanager.memory.network.max is [documented > |https://ci.apache.org/projects/flink/flink-docs-release-1.13/docs/deployment/config/#taskmanager-memory-network-max] > as 1gb but in the logs it says that > {code:java} > org.apache.flink.runtime.taskexecutor.TaskExecutorResourceUtils - The > configuration option taskmanager.memory.network.max required for local > execution is not set, setting it to its default value 64mb. > {code} > And it looks like 64mb is actually the default: > https://github.com/apache/flink/blob/9e8c551958f30d5b54cbb26d2a232d96e1e0988c/flink-runtime/src/main/java/org/apache/flink/runtime/taskexecutor/TaskExecutorResourceUtils.java#L244 > Please either adjust documentation or change implementation to set 1gb by > default -- This message was sent by Atlassian Jira (v8.3.4#803005)
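For readers following the ticket, the min/max interplay works roughly like this: network memory is the configured fraction of total memory, clamped into [min, max], so an effective max of 64mb silently caps the fraction. A hedged sketch of that clamping rule; the method name and values are illustrative, not Flink's actual implementation:

```java
public class NetworkMemoryDemo {
    // Sketch of how taskmanager.memory.network.{fraction,min,max} combine:
    // take the fraction of the total, then clamp to [min, max]. All values
    // here are in megabytes and purely illustrative.
    static long deriveNetworkMemoryMb(long totalMb, double fraction, long minMb, long maxMb) {
        long derived = (long) (totalMb * fraction);
        return Math.max(minMb, Math.min(maxMb, derived));
    }

    public static void main(String[] args) {
        // With a 64mb effective max (as logged for local execution),
        // the 0.1 fraction of 4096mb is capped down to 64mb:
        System.out.println(deriveNetworkMemoryMb(4096, 0.1, 64, 64));
        // With the documented 1gb (1024mb) max, the full fraction is kept:
        System.out.println(deriveNetworkMemoryMb(4096, 0.1, 64, 1024));
    }
}
```

This is why the documented-vs-actual max matters in practice: the same fraction setting yields very different network memory depending on which max is in effect.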
[jira] [Commented] (FLINK-23813) DeleteExecutor NPE
[ https://issues.apache.org/jira/browse/FLINK-23813?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17400727#comment-17400727 ] Junning Liang commented on FLINK-23813: --- Hi [~roman_khachatryan], this is the deepest stacktrace. !image-2021-08-18-10-49-43-941.png|width=957,height=317! > DeleteExecutor NPE > -- > > Key: FLINK-23813 > URL: https://issues.apache.org/jira/browse/FLINK-23813 > Project: Flink > Issue Type: Bug > Components: Connectors / JDBC >Affects Versions: 1.14.0, 1.12.5, 1.13.2 >Reporter: Junning Liang >Priority: Critical > Fix For: 1.14.0, 1.12.6, 1.13.3 > > Attachments: image-2021-08-18-10-49-43-941.png > > > Encountered a situation where I get an NPE from JDBCUpsertOutputFormat. > This occurs when JDBC disconnects and tries to reconnect. > I need to write data to MySQL in upsert mode in SQL, so it must group by > the unique key, and the JdbcBatchingOutputFormat of the JDBC sink would use > TableJdbcUpsertOutputFormat. > > JDBC would disconnect when the data interval exceeds the set connection > timeout. I see that when JDBC reconnects, only > JdbcBatchingOutputFormat#jdbcStatementExecutor (insert) would > prepare its statements again, but TableJdbcUpsertOutputFormat#deleteExecutor > would not, which leads to the NPE. > If JdbcBatchingOutputFormat had a protected function to reset the > PreparedStatement, and TableJdbcUpsertOutputFormat overrode this function to > reset deleteExecutor, it would work well. -- This message was sent by Atlassian Jira (v8.3.4#803005)
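The fix proposed in the issue description — a protected re-prepare hook in the base format that the upsert subclass overrides — can be sketched as follows. Class and method names echo the ticket but are illustrative stand-ins, not Flink's actual API:

```java
public class ReconnectHookDemo {
    static class BatchingOutputFormat {
        boolean insertPrepared;

        // Hook called after a JDBC reconnect; subclasses extend it to
        // re-prepare any extra statement executors they own.
        protected void reopenPreparedStatements() {
            insertPrepared = true; // re-prepare the insert/upsert executor
        }
    }

    static class UpsertOutputFormat extends BatchingOutputFormat {
        boolean deletePrepared;

        @Override
        protected void reopenPreparedStatements() {
            super.reopenPreparedStatements();
            // Without this override, the delete executor keeps its stale
            // PreparedStatement after reconnect, which is the reported NPE.
            deletePrepared = true;
        }
    }

    public static void main(String[] args) {
        UpsertOutputFormat format = new UpsertOutputFormat();
        format.reopenPreparedStatements(); // simulate the post-reconnect path
        System.out.println(format.insertPrepared + " " + format.deletePrepared);
    }
}
```

The design point is that the base class owns the reconnect path, so any subclass with additional executors only needs to override one hook rather than duplicating reconnect logic.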
[jira] [Updated] (FLINK-23813) DeleteExecutor NPE
[ https://issues.apache.org/jira/browse/FLINK-23813?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Junning Liang updated FLINK-23813: -- Attachment: image-2021-08-18-10-49-43-941.png > DeleteExecutor NPE > -- > > Key: FLINK-23813 > URL: https://issues.apache.org/jira/browse/FLINK-23813 > Project: Flink > Issue Type: Bug > Components: Connectors / JDBC >Affects Versions: 1.14.0, 1.12.5, 1.13.2 >Reporter: Junning Liang >Priority: Critical > Fix For: 1.14.0, 1.12.6, 1.13.3 > > Attachments: image-2021-08-18-10-49-43-941.png > > > Encountered a situation where I get an NPE from JDBCUpsertOutputFormat. > This occurs when JDBC disconnects and tries to reconnect. > I need to write data to MySQL in upsert mode in SQL, so it must group by > the unique key, and the JdbcBatchingOutputFormat of the JDBC sink would use > TableJdbcUpsertOutputFormat. > > JDBC would disconnect when the data interval exceeds the set connection > timeout. I see that when JDBC reconnects, only > JdbcBatchingOutputFormat#jdbcStatementExecutor (insert) would > prepare its statements again, but TableJdbcUpsertOutputFormat#deleteExecutor > would not, which leads to the NPE. > If JdbcBatchingOutputFormat had a protected function to reset the > PreparedStatement, and TableJdbcUpsertOutputFormat overrode this function to > reset deleteExecutor, it would work well. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [flink] wsry commented on a change in pull request #16844: [FLINK-23724][network] Fix the network buffer leak when ResultPartition is released
wsry commented on a change in pull request #16844: URL: https://github.com/apache/flink/pull/16844#discussion_r690859332 ## File path: flink-runtime/src/main/java/org/apache/flink/runtime/taskmanager/Task.java ## @@ -972,24 +972,28 @@ private void releaseResources() { for (ResultPartitionWriter partitionWriter : consumableNotifyingPartitionWriters) { taskEventDispatcher.unregisterPartition(partitionWriter.getPartitionId()); -if (isCanceledOrFailed()) { -partitionWriter.fail(getFailureCause()); -} } -closeNetworkResources(); +if (isCanceledOrFailed()) { +failAllResultPartitions(); +} +closeAllResultPartitions(); Review comment: In my understanding, the fail method is more like release, which means the result partition cannot be consumed, while the close method closes all network resources (network buffers). We can still consume a ResultPartition after the network buffers are recycled, for example, the blocking partition. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Commented] (FLINK-16154) Translate "Operator/Join" into Chinese
[ https://issues.apache.org/jira/browse/FLINK-16154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17400720#comment-17400720 ] lixiaobao commented on FLINK-16154: --- OK, I will translate another page, thanks. > Translate "Operator/Join" into Chinese > -- > > Key: FLINK-16154 > URL: https://issues.apache.org/jira/browse/FLINK-16154 > Project: Flink > Issue Type: Sub-task > Components: chinese-translation, Documentation >Reporter: Yun Gao >Priority: Major > Labels: auto-unassigned, pull-request-available > Fix For: 1.14.0 > > > The page is located at _"docs/dev/stream/operators/joining.zh.md"_ -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [flink] lirui-apache commented on pull request #16745: [FLINK-22246]when use HiveCatalog create table , can't set Table owne…
lirui-apache commented on pull request #16745: URL: https://github.com/apache/flink/pull/16745#issuecomment-900769304 > > Could you post links to the authentication design you mentioned? I tried with hive 2.3.6 and found this is actually allowed. For example, in a kerberized env, you can kinit as `user1` but run Hive CLI as `user2`. And choose `SessionStateConfigUserAuthenticator` as the authentication provider. Then you can create tables whose owner is `user2`. Besides, Hive 3.x supports [altering table owner](https://issues.apache.org/jira/browse/HIVE-18762), so I doubt Hive requires table owner to be the same as the UGI creating the table in a secure cluster. > > i dont have link. According to development experience, in a security cluster, the authorized user must be the same as the authenticated user or proxy user, and the authenticated user cannot be changed. Therefore, in a security cluster, authorized users cannot be specified. > ok, i have updated the PR Firstly, I don't think there's such a thing as "authorized user". User identity is solely determined by authentication, and authorization is to determine what the user can access. Secondly, the question here is not about whether we should do authorization with the authenticated user, it's about whether we should require the table owner to be the same as the authenticated user creating that table. If Hive itself doesn't enforce such requirement, we shouldn't do it in Flink. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] wsry commented on a change in pull request #16844: [FLINK-23724][network] Fix the network buffer leak when ResultPartition is released
wsry commented on a change in pull request #16844: URL: https://github.com/apache/flink/pull/16844#discussion_r690857837 ## File path: flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/ResultPartition.java ## @@ -250,15 +250,20 @@ public void release(Throwable cause) { /** Releases all produced data including both those stored in memory and persisted on disk. */ protected abstract void releaseInternal(); -@Override -public void close() { +private void closeBufferPool() { if (bufferPool != null) { bufferPool.lazyDestroy(); } } +@Override +public void close() { +closeBufferPool(); +} + @Override public void fail(@Nullable Throwable throwable) { +closeBufferPool(); Review comment: In my understanding, the close method of the result partition will always be called by the task thread to release the network resources, regardless of success or failure. I think for releasing network resources, it is enough. I guess the only reason to call the close method in the TaskCanceler thread is as the comment suggested: >>>// Early release of input and output buffer pools. We do this >>>// in order to unblock async Threads, which produce/consume the >>>// intermediate streams outside of the main Task Thread (like >>>// the Kafka consumer). I did not look into this and just moved the BufferPool close operation to the fail method to avoid breaking this behavior. About the fail method, I think it is more like another release method that does not know the partition id. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Commented] (FLINK-16154) Translate "Operator/Join" into Chinese
[ https://issues.apache.org/jira/browse/FLINK-16154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17400719#comment-17400719 ] Wenzhong Duan commented on FLINK-16154: --- Hi [~q977734161], the translations are already done. But they have not been reviewed. > Translate "Operator/Join" into Chinese > -- > > Key: FLINK-16154 > URL: https://issues.apache.org/jira/browse/FLINK-16154 > Project: Flink > Issue Type: Sub-task > Components: chinese-translation, Documentation >Reporter: Yun Gao >Priority: Major > Labels: auto-unassigned, pull-request-available > Fix For: 1.14.0 > > > The page is located at _"docs/dev/stream/operators/joining.zh.md"_ -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Issue Comment Deleted] (FLINK-16154) Translate "Operator/Join" into Chinese
[ https://issues.apache.org/jira/browse/FLINK-16154?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yun Gao updated FLINK-16154: Comment: was deleted (was: Hi [~q977734161] Very thanks, I see there is already an PR open, so let's first check with the author about whether he want to continue~) > Translate "Operator/Join" into Chinese > -- > > Key: FLINK-16154 > URL: https://issues.apache.org/jira/browse/FLINK-16154 > Project: Flink > Issue Type: Sub-task > Components: chinese-translation, Documentation >Reporter: Yun Gao >Priority: Major > Labels: auto-unassigned, pull-request-available > Fix For: 1.14.0 > > > The page is located at _"docs/dev/stream/operators/joining.zh.md"_ -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [flink] paul8263 commented on pull request #16740: [FLINK-23614][table-planner] The resulting scale of TRUNCATE(DECIMAL,…
paul8263 commented on pull request #16740: URL: https://github.com/apache/flink/pull/16740#issuecomment-900766636 Hi @tsreaper , I solved the conflicts in 9296b4e. When I ran the unit tests I encountered an error: org.apache.flink.changelog.fs.FsStateChangelogStorageFactory in org.apache.flink.changelog.fs is not public, cannot access it from external packages. In FLINK-23279, org.apache.flink.changelog.fs.FsStateChangelogStorageFactory was added in StreamFaultToleranceTestBase.java. It seems that changes in other commits might lead to the test failure.
[GitHub] [flink] paul8263 removed a comment on pull request #16740: [FLINK-23614][table-planner] The resulting scale of TRUNCATE(DECIMAL,…
paul8263 removed a comment on pull request #16740: URL: https://github.com/apache/flink/pull/16740#issuecomment-900766636 Hi @tsreaper , I solved the conflicts in 9296b4e. When I ran the unit tests I encountered an error: org.apache.flink.changelog.fs.FsStateChangelogStorageFactory in org.apache.flink.changelog.fs is not public, cannot access it from external packages. In FLINK-23279, org.apache.flink.changelog.fs.FsStateChangelogStorageFactory was added in StreamFaultToleranceTestBase.java. It seems that changes in other commits might lead to the test failure.
[GitHub] [flink] wuchong commented on pull request #16852: [FLINK-13636][docs-zh]Translate the "Flink DataStream API Programming Guide" page into Chinese
wuchong commented on pull request #16852: URL: https://github.com/apache/flink/pull/16852#issuecomment-900766528 cc @RocMarshal , do you have time to review this?
[jira] [Closed] (FLINK-23844) Fix spelling mistakes for "async"
[ https://issues.apache.org/jira/browse/FLINK-23844?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jark Wu closed FLINK-23844. --- Fix Version/s: 1.14.0 Assignee: wuguihu Resolution: Fixed Fixed in master: d5cdd6f3b207c01b3ff7dd363d07527e0185f347 > Fix spelling mistakes for "async" > - > > Key: FLINK-23844 > URL: https://issues.apache.org/jira/browse/FLINK-23844 > Project: Flink > Issue Type: Bug >Reporter: wuguihu >Assignee: wuguihu >Priority: Major > Labels: pull-request-available > Fix For: 1.14.0 > > > Fix spelling mistakes for "async" > The 'aysnc' should be changed to 'async'. > > 1. > flink-connectors/flink-connector-hbase-2.2/src/main/java/org/apache/flink/connector/hbase2/source/HBaseRowDataAsyncLookupFunction.java > {code:java} > Line 111: > hbase-aysnc-lookup-worker ==> hbase-async-lookup-worker > {code} > 2. > flink-streaming-java/src/test/java/org/apache/flink/streaming/api/operators/async/AsyncWaitOperatorTest.java > {code:java} > Line 949: > AysncWaitOperator ==> AsyncWaitOperator{code}
[GitHub] [flink] paul8263 commented on pull request #16740: [FLINK-23614][table-planner] The resulting scale of TRUNCATE(DECIMAL,…
paul8263 commented on pull request #16740: URL: https://github.com/apache/flink/pull/16740#issuecomment-900766227 Hi @tsreaper , I solved the conflicts in 9296b4e. When I ran the unit tests I encountered an error: org.apache.flink.changelog.fs.FsStateChangelogStorageFactory in org.apache.flink.changelog.fs is not public, cannot access it from external packages. In FLINK-23279, org.apache.flink.changelog.fs.FsStateChangelogStorageFactory was added in StreamFaultToleranceTestBase.java. It seems that changes in other commits might lead to the test failure.
[GitHub] [flink] wuchong merged pull request #16868: [FLINK-23844]Fix spelling mistakes for "async"
wuchong merged pull request #16868: URL: https://github.com/apache/flink/pull/16868
[GitHub] [flink] flinkbot edited a comment on pull request #16823: [FLINK-23845][docs]improve PushGatewayReporter config:deleteOnShutdown de…
flinkbot edited a comment on pull request #16823: URL: https://github.com/apache/flink/pull/16823#issuecomment-898865990 ## CI report: * 3ca95e2c4af83ecc4125f4c86747b7006cc41961 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=22177) * 656fc2a8cd23e86b8abf656d80593e4df7c2e8a5 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=22421) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build - `@flinkbot run azure` re-run the last Azure build
[jira] [Created] (FLINK-23848) PulsarSourceITCase is failed on Azure
Jark Wu created FLINK-23848: --- Summary: PulsarSourceITCase is failed on Azure Key: FLINK-23848 URL: https://issues.apache.org/jira/browse/FLINK-23848 Project: Flink Issue Type: Improvement Components: Connectors / Pulsar Reporter: Jark Wu https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=22412=logs=fc5181b0-e452-5c8f-68de-1097947f6483=995c650b-6573-581c-9ce6-7ad4cc038461
{code}
2021-08-17T20:11:53.7228789Z Aug 17 20:11:53 [INFO] Running org.apache.flink.connector.pulsar.source.PulsarSourceITCase
2021-08-17T20:17:38.2429467Z Aug 17 20:17:38 [ERROR] Tests run: 8, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 344.515 s <<< FAILURE! - in org.apache.flink.connector.pulsar.source.PulsarSourceITCase
2021-08-17T20:17:38.2430693Z Aug 17 20:17:38 [ERROR] testMultipleSplits{TestEnvironment, ExternalContext}[2] Time elapsed: 66.766 s <<< ERROR!
2021-08-17T20:17:38.2431387Z Aug 17 20:17:38 java.lang.RuntimeException: Failed to fetch next result
2021-08-17T20:17:38.2432035Z Aug 17 20:17:38 at org.apache.flink.streaming.api.operators.collect.CollectResultIterator.nextResultFromFetcher(CollectResultIterator.java:109)
2021-08-17T20:17:38.2433345Z Aug 17 20:17:38 at org.apache.flink.streaming.api.operators.collect.CollectResultIterator.hasNext(CollectResultIterator.java:80)
2021-08-17T20:17:38.2434175Z Aug 17 20:17:38 at org.apache.flink.connectors.test.common.utils.TestDataMatchers$MultipleSplitDataMatcher.matchesSafely(TestDataMatchers.java:151)
2021-08-17T20:17:38.2435028Z Aug 17 20:17:38 at org.apache.flink.connectors.test.common.utils.TestDataMatchers$MultipleSplitDataMatcher.matchesSafely(TestDataMatchers.java:133)
2021-08-17T20:17:38.2438387Z Aug 17 20:17:38 at org.hamcrest.TypeSafeDiagnosingMatcher.matches(TypeSafeDiagnosingMatcher.java:55)
2021-08-17T20:17:38.2439100Z Aug 17 20:17:38 at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:12)
2021-08-17T20:17:38.2439708Z Aug 17 20:17:38 at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:8)
2021-08-17T20:17:38.2440299Z Aug 17 20:17:38 at org.apache.flink.connectors.test.common.testsuites.SourceTestSuiteBase.testMultipleSplits(SourceTestSuiteBase.java:156)
2021-08-17T20:17:38.2441007Z Aug 17 20:17:38 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
2021-08-17T20:17:38.2441526Z Aug 17 20:17:38 at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
2021-08-17T20:17:38.2442068Z Aug 17 20:17:38 at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
2021-08-17T20:17:38.2442759Z Aug 17 20:17:38 at java.lang.reflect.Method.invoke(Method.java:498)
2021-08-17T20:17:38.2443247Z Aug 17 20:17:38 at org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:688)
2021-08-17T20:17:38.2443812Z Aug 17 20:17:38 at org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60)
2021-08-17T20:17:38.241Z Aug 17 20:17:38 at org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131)
2021-08-17T20:17:38.2445101Z Aug 17 20:17:38 at org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:149)
2021-08-17T20:17:38.2445688Z Aug 17 20:17:38 at org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestableMethod(TimeoutExtension.java:140)
2021-08-17T20:17:38.2446328Z Aug 17 20:17:38 at org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestTemplateMethod(TimeoutExtension.java:92)
2021-08-17T20:17:38.2447303Z Aug 17 20:17:38 at org.junit.jupiter.engine.execution.ExecutableInvoker$ReflectiveInterceptorCall.lambda$ofVoidMethod$0(ExecutableInvoker.java:115)
2021-08-17T20:17:38.2448336Z Aug 17 20:17:38 at org.junit.jupiter.engine.execution.ExecutableInvoker.lambda$invoke$0(ExecutableInvoker.java:105)
2021-08-17T20:17:38.2448999Z Aug 17 20:17:38 at org.junit.jupiter.engine.execution.InvocationInterceptorChain$InterceptedInvocation.proceed(InvocationInterceptorChain.java:106)
2021-08-17T20:17:38.2449689Z Aug 17 20:17:38 at org.junit.jupiter.engine.execution.InvocationInterceptorChain.proceed(InvocationInterceptorChain.java:64)
2021-08-17T20:17:38.2450363Z Aug 17 20:17:38 at org.junit.jupiter.engine.execution.InvocationInterceptorChain.chainAndInvoke(InvocationInterceptorChain.java:45)
2021-08-17T20:17:38.2451001Z Aug 17 20:17:38 at org.junit.jupiter.engine.execution.InvocationInterceptorChain.invoke(InvocationInterceptorChain.java:37)
2021-08-17T20:17:38.2451614Z Aug 17 20:17:38 at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:104)
2021-08-17T20:17:38.2452440Z Aug 17 20:17:38 at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:98)
{code}
[GitHub] [flink] flinkbot edited a comment on pull request #16745: [FLINK-22246]when use HiveCatalog create table , can't set Table owne…
flinkbot edited a comment on pull request #16745: URL: https://github.com/apache/flink/pull/16745#issuecomment-894632163 ## CI report: * ec30ee61c2adc94b71eede27342a6e4a42a23e56 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=22167) * 8012720d8036bdae16feaafed425f3024dfc14f9 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=22420)
[GitHub] [flink] flinkbot edited a comment on pull request #16683: [FLINK-23846][docs]improve PushGatewayReporter config description
flinkbot edited a comment on pull request #16683: URL: https://github.com/apache/flink/pull/16683#issuecomment-891515480 ## CI report: * 3b3f16a9694732d6211a63d324d4bb579933fe87 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=21354) * 63aa6e2f3c05fac9150399fe2a9c519cbe3e1d1e UNKNOWN
[jira] [Commented] (FLINK-16154) Translate "Operator/Join" into Chinese
[ https://issues.apache.org/jira/browse/FLINK-16154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17400715#comment-17400715 ] Yun Gao commented on FLINK-16154: - Hi [~q977734161], many thanks. I see there is already a PR open, so let's first check with the author whether he wants to continue~ > Translate "Operator/Join" into Chinese > -- > > Key: FLINK-16154 > URL: https://issues.apache.org/jira/browse/FLINK-16154 > Project: Flink > Issue Type: Sub-task > Components: chinese-translation, Documentation >Reporter: Yun Gao >Priority: Major > Labels: auto-unassigned, pull-request-available > Fix For: 1.14.0 > > > The page is located at _"docs/dev/stream/operators/joining.zh.md"_
[GitHub] [flink] gaoyunhaii commented on pull request #14858: [FLINK-16154] Translate "Operator/Join" into Chinese
gaoyunhaii commented on pull request #14858: URL: https://github.com/apache/flink/pull/14858#issuecomment-900764728 Hi @dijkwxyz sorry for the very late reply, do you still want to work on this PR~?
[jira] [Assigned] (FLINK-16152) Translate "Operator/index" into Chinese
[ https://issues.apache.org/jira/browse/FLINK-16152?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jark Wu reassigned FLINK-16152: --- Assignee: wuguihu > Translate "Operator/index" into Chinese > --- > > Key: FLINK-16152 > URL: https://issues.apache.org/jira/browse/FLINK-16152 > Project: Flink > Issue Type: Sub-task > Components: chinese-translation, Documentation >Reporter: Yun Gao >Assignee: wuguihu >Priority: Major > Labels: auto-unassigned, pull-request-available > Fix For: 1.14.0 > > > The page is located at _docs/dev/stream/operators/index.zh.md_
[jira] [Updated] (FLINK-23847) [Kafka] Error msg are obscure when KafkaConsumer init, valueDeserializer is null
[ https://issues.apache.org/jira/browse/FLINK-23847?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] camilesing updated FLINK-23847: --- Component/s: (was: Documentation) Connectors / Kafka > [Kafka] Error msg are obscure when KafkaConsumer init, valueDeserializer is > null > > > Key: FLINK-23847 > URL: https://issues.apache.org/jira/browse/FLINK-23847 > Project: Flink > Issue Type: Bug > Components: Connectors / Kafka >Reporter: camilesing >Priority: Critical > > As the title says, I think the message can be clearer. > > _this.deserializer = checkNotNull(deserializer, "valueDeserializer");_ > _->_ > _this.deserializer = checkNotNull(deserializer, "valueDeserializer cannot be > null");_
[GitHub] [flink] gaoyunhaii commented on pull request #14819: [FLINK-16152] Translate "Operator/index" into Chinese
gaoyunhaii commented on pull request #14819: URL: https://github.com/apache/flink/pull/14819#issuecomment-900764458 Hi @dijkwxyz sorry for the very late reply, do you still want to work on this PR~?
[jira] [Updated] (FLINK-23847) [Kafka] Error msg are obscure when KafkaConsumer init, valueDeserializer is null
[ https://issues.apache.org/jira/browse/FLINK-23847?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] camilesing updated FLINK-23847: --- Summary: [Kafka] Error msg are obscure when KafkaConsumer init, valueDeserializer is null (was: [DOCS] Error msg are obscure when KafkaConsumer init, valueDeserializer is null) > [Kafka] Error msg are obscure when KafkaConsumer init, valueDeserializer is > null > > > Key: FLINK-23847 > URL: https://issues.apache.org/jira/browse/FLINK-23847 > Project: Flink > Issue Type: Bug > Components: Documentation >Reporter: camilesing >Priority: Critical > > As the title says, I think the message can be clearer. > > _this.deserializer = checkNotNull(deserializer, "valueDeserializer");_ > _->_ > _this.deserializer = checkNotNull(deserializer, "valueDeserializer cannot be > null");_
[jira] [Created] (FLINK-23847) [DOCS] Error msg are obscure when KafkaConsumer init, valueDeserializer is null
camilesing created FLINK-23847: -- Summary: [DOCS] Error msg are obscure when KafkaConsumer init, valueDeserializer is null Key: FLINK-23847 URL: https://issues.apache.org/jira/browse/FLINK-23847 Project: Flink Issue Type: Bug Components: Documentation Reporter: camilesing As the title says, I think the message can be clearer. _this.deserializer = checkNotNull(deserializer, "valueDeserializer");_ _->_ _this.deserializer = checkNotNull(deserializer, "valueDeserializer cannot be null");_
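The suggested change turns a bare parameter name into a self-explanatory failure message. A minimal sketch of the difference — using `java.util.Objects.requireNonNull` as a stand-in for Flink's `Preconditions.checkNotNull`, which treats its second argument the same way (as the NullPointerException message); the surrounding class and method names are illustrative only:

```java
import java.util.Objects;

// Sketch of the error-message improvement proposed in FLINK-23847.
public class DeserializerCheckSketch {

    static Object initConsumer(Object deserializer) {
        // Before: checkNotNull(deserializer, "valueDeserializer") — the
        // failure message is just the parameter name, which tells the
        // user little about what went wrong.
        // After: spell out the expectation explicitly.
        return Objects.requireNonNull(deserializer, "valueDeserializer cannot be null");
    }

    public static void main(String[] args) {
        try {
            initConsumer(null);
        } catch (NullPointerException e) {
            // The message now states the contract directly.
            System.out.println(e.getMessage());
        }
    }
}
```

The one-line change costs nothing at the call site but saves users a trip into the source when the consumer is misconfigured.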
[GitHub] [flink] xintongsong commented on a change in pull request #16860: [FLINK-22198][connector/kafka] Redirect KafkaContainer output to log4j and print debug log if test hangs
xintongsong commented on a change in pull request #16860: URL: https://github.com/apache/flink/pull/16860#discussion_r690849889 ## File path: flink-connectors/flink-connector-kafka/src/test/java/org/apache/flink/streaming/connectors/kafka/table/KafkaTableTestBase.java ## @@ -95,4 +142,71 @@ public void deleteTestTopic(String topic) { admin.deleteTopics(Collections.singletonList(topic)); } } + +// For Debug Logging Purpose -- + +protected void scheduleTimeoutLogger(Duration period, Runnable loggingAction) { Review comment: These methods can be `private`. - scheduleTimeoutLogger - cancelTimeoutLogger - describeExternalTopics - logTopicPartitionStatus
[jira] [Updated] (FLINK-23846) [DOCS]PushGatewayReporter config description obscure
[ https://issues.apache.org/jira/browse/FLINK-23846?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated FLINK-23846: --- Labels: pull-request-available (was: ) > [DOCS]PushGatewayReporter config description obscure > > > Key: FLINK-23846 > URL: https://issues.apache.org/jira/browse/FLINK-23846 > Project: Flink > Issue Type: Bug > Components: Documentation >Reporter: camilesing >Priority: Critical > Labels: pull-request-available > > the randomJobNameSuffix config description: _Specifies whether a random > suffix should be appended to the job name_ > > When I first saw it, I did not know what it meant, so I searched a lot of > information and experimented until I understood it. > > I think the config description can be clearer.
[GitHub] [flink] camilesing commented on pull request #16683: [FLINK-23846][docs]improve PushGatewayReporter config description
camilesing commented on pull request #16683: URL: https://github.com/apache/flink/pull/16683#issuecomment-900760531 @infoverload thanks for your comment, I've applied your suggestions.
[jira] [Comment Edited] (FLINK-16154) Translate "Operator/Join" into Chinese
[ https://issues.apache.org/jira/browse/FLINK-16154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17400712#comment-17400712 ] lixiaobao edited comment on FLINK-16154 at 8/18/21, 2:14 AM: - Hi [~gaoyunhaii] [~jark], I see this page is not translated yet. Could you assign this issue to me? I'd like to work on this ticket. Thank you very much! was (Author: q977734161): Hi Jark Wu, I see this page is not translated yet. Could you assign this issue to me? I'd like to work on this ticket. Thank you very much! > Translate "Operator/Join" into Chinese > -- > > Key: FLINK-16154 > URL: https://issues.apache.org/jira/browse/FLINK-16154 > Project: Flink > Issue Type: Sub-task > Components: chinese-translation, Documentation >Reporter: Yun Gao >Priority: Major > Labels: auto-unassigned, pull-request-available > Fix For: 1.14.0 > > > The page is located at _"docs/dev/stream/operators/joining.zh.md"_
[jira] [Commented] (FLINK-16154) Translate "Operator/Join" into Chinese
[ https://issues.apache.org/jira/browse/FLINK-16154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17400712#comment-17400712 ] lixiaobao commented on FLINK-16154: --- Hi Jark Wu, I see this page is not translated yet. Could you assign this issue to me? I'd like to work on this ticket. Thank you very much! > Translate "Operator/Join" into Chinese > -- > > Key: FLINK-16154 > URL: https://issues.apache.org/jira/browse/FLINK-16154 > Project: Flink > Issue Type: Sub-task > Components: chinese-translation, Documentation >Reporter: Yun Gao >Priority: Major > Labels: auto-unassigned, pull-request-available > Fix For: 1.14.0 > > > The page is located at _"docs/dev/stream/operators/joining.zh.md"_
[jira] [Commented] (FLINK-23556) SQLClientSchemaRegistryITCase fails with " Subject ... not found"
[ https://issues.apache.org/jira/browse/FLINK-23556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17400706#comment-17400706 ] Xintong Song commented on FLINK-23556: -- Thanks for the updates, [~bgeng777]. Do you need a review for the PR, or do you want to spend more time confirming your hypothesis? > SQLClientSchemaRegistryITCase fails with " Subject ... not found" > - > > Key: FLINK-23556 > URL: https://issues.apache.org/jira/browse/FLINK-23556 > Project: Flink > Issue Type: Bug > Components: Table SQL / Ecosystem >Affects Versions: 1.14.0 >Reporter: Dawid Wysakowicz >Assignee: Biao Geng >Priority: Blocker > Labels: pull-request-available, stale-blocker, test-stability > Fix For: 1.14.0 > > > https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=21129=logs=91bf6583-3fb2-592f-e4d4-d79d79c3230a=cc5499f8-bdde-5157-0d76-b6528ecd808e=25337 > {code} > Jul 28 23:37:48 [ERROR] Tests run: 2, Failures: 0, Errors: 1, Skipped: 0, > Time elapsed: 209.44 s <<< FAILURE! - in > org.apache.flink.tests.util.kafka.SQLClientSchemaRegistryITCase > Jul 28 23:37:48 [ERROR] > testWriting(org.apache.flink.tests.util.kafka.SQLClientSchemaRegistryITCase) > Time elapsed: 81.146 s <<< ERROR! 
> Jul 28 23:37:48 > io.confluent.kafka.schemaregistry.client.rest.exceptions.RestClientException: > Subject 'test-user-behavior-d18d4af2-3830-4620-9993-340c13f50cc2-value' not > found.; error code: 40401 > Jul 28 23:37:48 at > io.confluent.kafka.schemaregistry.client.rest.RestService.sendHttpRequest(RestService.java:292) > Jul 28 23:37:48 at > io.confluent.kafka.schemaregistry.client.rest.RestService.httpRequest(RestService.java:352) > Jul 28 23:37:48 at > io.confluent.kafka.schemaregistry.client.rest.RestService.getAllVersions(RestService.java:769) > Jul 28 23:37:48 at > io.confluent.kafka.schemaregistry.client.rest.RestService.getAllVersions(RestService.java:760) > Jul 28 23:37:48 at > io.confluent.kafka.schemaregistry.client.CachedSchemaRegistryClient.getAllVersions(CachedSchemaRegistryClient.java:364) > Jul 28 23:37:48 at > org.apache.flink.tests.util.kafka.SQLClientSchemaRegistryITCase.getAllVersions(SQLClientSchemaRegistryITCase.java:230) > Jul 28 23:37:48 at > org.apache.flink.tests.util.kafka.SQLClientSchemaRegistryITCase.testWriting(SQLClientSchemaRegistryITCase.java:195) > Jul 28 23:37:48 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native > Method) > Jul 28 23:37:48 at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > Jul 28 23:37:48 at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > Jul 28 23:37:48 at java.lang.reflect.Method.invoke(Method.java:498) > Jul 28 23:37:48 at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) > Jul 28 23:37:48 at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > Jul 28 23:37:48 at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) > Jul 28 23:37:48 at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > Jul 28 23:37:48 at > 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) > Jul 28 23:37:48 at > org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) > Jul 28 23:37:48 at > java.util.concurrent.FutureTask.run(FutureTask.java:266) > Jul 28 23:37:48 at java.lang.Thread.run(Thread.java:748) > Jul 28 23:37:48 > {code}
[jira] [Created] (FLINK-23846) [DOCS]PushGatewayReporter config description obscure
camilesing created FLINK-23846: -- Summary: [DOCS]PushGatewayReporter config description obscure Key: FLINK-23846 URL: https://issues.apache.org/jira/browse/FLINK-23846 Project: Flink Issue Type: Bug Components: Documentation Reporter: camilesing the randomJobNameSuffix config description: _Specifies whether a random suffix should be appended to the job name_ When I first saw it, I did not know what it meant, so I searched a lot of information and experimented until I understood it. I think the config description can be clearer.
[jira] [Updated] (FLINK-23845) [DOCS]PushGateway metrics group not delete when job shutdown
[ https://issues.apache.org/jira/browse/FLINK-23845?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated FLINK-23845: --- Labels: pull-request-available (was: ) > [DOCS]PushGateway metrics group not delete when job shutdown > > > Key: FLINK-23845 > URL: https://issues.apache.org/jira/browse/FLINK-23845 > Project: Flink > Issue Type: Bug > Components: Documentation >Reporter: camilesing >Priority: Blocker > Labels: pull-request-available > > see https://issues.apache.org/jira/browse/FLINK-20691 . In any case, the problem > has always existed; we should document it so that others do not run into it.
[GitHub] [flink] flinkbot edited a comment on pull request #16823: [FLINK-23845][docs]improve PushGatewayReporter config:deleteOnShutdown de…
flinkbot edited a comment on pull request #16823: URL: https://github.com/apache/flink/pull/16823#issuecomment-898865990 ## CI report: * 3ca95e2c4af83ecc4125f4c86747b7006cc41961 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=22177) * 656fc2a8cd23e86b8abf656d80593e4df7c2e8a5 UNKNOWN
[GitHub] [flink] flinkbot edited a comment on pull request #16745: [FLINK-22246]when use HiveCatalog create table , can't set Table owne…
flinkbot edited a comment on pull request #16745: URL: https://github.com/apache/flink/pull/16745#issuecomment-894632163 ## CI report: * ec30ee61c2adc94b71eede27342a6e4a42a23e56 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=22167) * 8012720d8036bdae16feaafed425f3024dfc14f9 UNKNOWN
[jira] [Commented] (FLINK-23845) [DOCS]PushGateway metrics group not delete when job shutdown
[ https://issues.apache.org/jira/browse/FLINK-23845?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17400702#comment-17400702 ] camilesing commented on FLINK-23845: https://github.com/apache/flink/pull/16823 > [DOCS]PushGateway metrics group not delete when job shutdown > > > Key: FLINK-23845 > URL: https://issues.apache.org/jira/browse/FLINK-23845 > Project: Flink > Issue Type: Bug > Components: Documentation >Reporter: camilesing >Priority: Blocker > > see https://issues.apache.org/jira/browse/FLINK-20691 . In any case, the problem > has always existed; we should document it so that others do not run into it.
[GitHub] [flink] cuibo01 commented on pull request #16745: [FLINK-22246]when use HiveCatalog create table , can't set Table owne…
cuibo01 commented on pull request #16745: URL: https://github.com/apache/flink/pull/16745#issuecomment-900743034 > Could you post links to the authentication design you mentioned? I tried with hive 2.3.6 and found this is actually allowed. For example, in a kerberized env, you can kinit as `user1` but run Hive CLI as `user2`. And choose `SessionStateConfigUserAuthenticator` as the authentication provider. Then you can create tables whose owner is `user2`. Besides, Hive 3.x supports [altering table owner](https://issues.apache.org/jira/browse/HIVE-18762), so I doubt Hive requires table owner to be the same as the UGI creating the table in a secure cluster. I don't have a link. From development experience, in a secure cluster the authorized user must be the same as the authenticated user or a proxy user, and the authenticated user cannot be changed. Therefore, in a secure cluster, an arbitrary authorized user cannot be specified. OK, I have updated the PR.