[GitHub] [flink] link3280 opened a new pull request #16891: [FLINK-23868][Client] JobExecutionResult printed even if suppressSysout is on
link3280 opened a new pull request #16891: URL: https://github.com/apache/flink/pull/16891

## What is the purpose of the change

Environments print job execution results to stdout by default and provide a flag `suppressSysout` to disable this behavior. The flag is useful when submitting jobs through the REST API or other programmatic approaches. However, JobExecutionResult is still printed when the flag is on, which looks like a bug to me.

## Brief change log

Skip printing JobExecutionResult if suppressSysout is on.

## Verifying this change

This change is a trivial rework/code cleanup without any test coverage.

## Does this pull request potentially affect one of the following parts:

- Dependencies (does it add or upgrade a dependency): no
- The public API, i.e., is any changed class annotated with `@Public(Evolving)`: yes
- The serializers: no
- The runtime per-record code paths (performance sensitive): no
- Anything that affects deployment or recovery: JobManager (and its components), Checkpointing, Kubernetes/Yarn, ZooKeeper: no
- The S3 file system connector: no

## Documentation

- Does this pull request introduce a new feature? no
- If yes, how is the feature documented? (not applicable / docs / JavaDocs / not documented)

-- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
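The change described in the PR amounts to gating the result printout on the flag. The sketch below illustrates that idea only; the class and method names are hypothetical stand-ins, not Flink's actual code.

```java
// Hypothetical sketch of the described fix: gate the JobExecutionResult
// printout on the suppressSysout flag. Names are illustrative, not Flink's API.
public class SuppressSysoutSketch {

    // Returns the text that would be written to stdout for a finished job.
    static String formatResult(long netRuntimeMs, boolean suppressSysout) {
        if (suppressSysout) {
            return ""; // flag on: print nothing, as the PR proposes
        }
        return "Job Runtime: " + netRuntimeMs + " ms";
    }

    public static void main(String[] args) {
        System.out.println(formatResult(1234L, false));
        System.out.println(formatResult(1234L, true).isEmpty());
    }
}
```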
[GitHub] [flink] hapihu edited a comment on pull request #16878: [FLINK-23859][typo][flink-core][flink-connectors]fix typo for code
hapihu edited a comment on pull request #16878: URL: https://github.com/apache/flink/pull/16878#issuecomment-901628257

```bash
#8、 codespell flink-formats/ -S '*.xml' -S '*.iml' -S '*.txt' -S '*/target/*'
flink-formats/flink-orc/src/main/java/org/apache/flink/orc/vector/AbstractOrcColumnVector.java:85: Unsupport ==> Unsupported
flink-formats/flink-orc-nohive/src/main/java/org/apache/flink/orc/nohive/vector/AbstractOrcNoHiveVector.java:70: Unsupport ==> Unsupported
flink-formats/flink-avro/src/main/java/org/apache/flink/formats/avro/RowDataToAvroConverters.java:70: accroding ==> according
flink-formats/flink-avro/src/main/java/org/apache/flink/formats/avro/typeutils/AvroFactory.java:71: initalize ==> initialize
```
[GitHub] [flink] hapihu edited a comment on pull request #16878: [FLINK-23859][typo][flink-core][flink-connectors]fix typo for code
hapihu edited a comment on pull request #16878: URL: https://github.com/apache/flink/pull/16878#issuecomment-901629744

```bash
# 1、 codespell flink-java/ -S '*.xml' -S '*.iml' -S '*.txt' -S '*/target/*'
flink-java/src/main/java/org/apache/flink/api/java/DataSet.java:1538: writen ==> written
```
[GitHub] [flink] hapihu commented on pull request #16878: [FLINK-23859][typo][flink-core][flink-connectors]fix typo for code
hapihu commented on pull request #16878: URL: https://github.com/apache/flink/pull/16878#issuecomment-901625661

```bash
7、codespell flink-examples/ -S '*.xml' -S '*.iml' -S '*.txt' -S '*/target/*'
flink-examples/flink-examples-streaming/src/main/java/org/apache/flink/streaming/examples/join/WindowJoin.java:42: steams ==> streams
flink-examples/flink-examples-streaming/src/main/java/org/apache/flink/streaming/examples/join/WindowJoin.java:44: steams ==> streams
flink-examples/flink-examples-streaming/src/main/scala/org/apache/flink/streaming/scala/examples/join/WindowJoin.scala:31: steams ==> streams
flink-examples/flink-examples-streaming/src/main/scala/org/apache/flink/streaming/scala/examples/join/WindowJoin.scala:34: steams ==> streams
flink-examples/flink-examples-batch/src/main/java/org/apache/flink/examples/java/misc/CollectionExecutionExample.java:83: availble ==> available
```
[GitHub] [flink] hapihu commented on pull request #16878: [FLINK-23859][typo][flink-core][flink-connectors]fix typo for code
hapihu commented on pull request #16878: URL: https://github.com/apache/flink/pull/16878#issuecomment-901623657

```bash
5、codespell flink-dist/ -S '*.xml' -S '*.iml' -S '*.txt' -S '*/target/*'
flink-dist/src/main/flink-bin/bin/config.sh:378: Auxilliary ==> Auxiliary
flink-dist/src/main/flink-bin/bin/config.sh:505: becuase ==> because
```

```bash
6、codespell flink-end-to-end-tests/ -S '*.xml' -S '*.iml' -S '*.txt' -S '*/target/*' -S '*.ans' -S '*.sql'
flink-end-to-end-tests/flink-netty-shuffle-memory-control-test/src/main/java/org/apache/flink/streaming/tests/NettyShuffleMemoryControlTestProgram.java:85: positve ==> positive
flink-end-to-end-tests/flink-python-test/python/python_job.py:38: pipleline ==> pipeline
flink-end-to-end-tests/test-scripts/common_yarn_docker.sh:200: succesfully ==> successfully
flink-end-to-end-tests/test-scripts/kafka-common.sh:98: propery ==> property
```
[jira] [Updated] (FLINK-23868) JobExecutionResult printed even if suppressSysout is on
[ https://issues.apache.org/jira/browse/FLINK-23868?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Paul Lin updated FLINK-23868:
-
Description: Environments print job execution results to stdout by default and provide a flag `suppressSysout` to disable this behavior. This flag is useful when submitting jobs through the REST API or other programmatic approaches. However, JobExecutionResult is still printed when this flag is on, which looks like a bug to me.

(was: Environments prints job execution results to stdout by default and provided a flag `suppressSysout` to disable the behavior. This flag is useful when submitting jobs through REST API or by programmatic approaches. However, JobExecutionResult is still printed when this flag is on, which looks like a bug to me.)

> JobExecutionResult printed even if suppressSysout is on
>
> Key: FLINK-23868
> URL:
> Project: Flink
> Issue Type: Bug
> Components: Client / Job Submission
> Affects Versions: 1.14.0, 1.13.2
> Reporter: Paul Lin
> Priority: Minor
>
> Environments print job execution results to stdout by default and provide a
> flag `suppressSysout` to disable this behavior. This flag is useful when
> submitting jobs through the REST API or other programmatic approaches. However,
> JobExecutionResult is still printed when this flag is on, which looks like a
> bug to me.

-- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (FLINK-23868) JobExecutionResult printed even if suppressSysout is on
Paul Lin created FLINK-23868:

Summary: JobExecutionResult printed even if suppressSysout is on
Key: FLINK-23868
URL: https://issues.apache.org/jira/browse/FLINK-23868
Project: Flink
Issue Type: Bug
Components: Client / Job Submission
Affects Versions: 1.13.2, 1.14.0
Reporter: Paul Lin

Environments print job execution results to stdout by default and provide a flag `suppressSysout` to disable this behavior. This flag is useful when submitting jobs through the REST API or other programmatic approaches. However, JobExecutionResult is still printed when this flag is on, which looks like a bug to me.
[jira] [Commented] (FLINK-20726) Introduce Pulsar connector
[ https://issues.apache.org/jira/browse/FLINK-20726?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17401476#comment-17401476 ]

Nicholas Jiang commented on FLINK-20726:

[~Jianyun Zhao], is this missing the documentation for the Pulsar sink?

> Introduce Pulsar connector
> --
>
> Key: FLINK-20726
> URL: https://issues.apache.org/jira/browse/FLINK-20726
> Project: Flink
> Issue Type: New Feature
> Components: Connectors / Common
> Affects Versions: 1.13.0
> Reporter: Jianyun Zhao
> Priority: Major
> Labels: auto-unassigned
> Fix For: 1.14.0
>
> Pulsar is an important player in messaging middleware, and it is essential
> for Flink to support Pulsar.
> Our existing code is maintained at
> [streamnative/pulsar-flink|https://github.com/streamnative/pulsar-flink];
> next we will split it into several PRs to merge back into the community.
[jira] [Updated] (FLINK-23856) Support all JSON methods in PyFlink
[ https://issues.apache.org/jira/browse/FLINK-23856?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dian Fu updated FLINK-23856: Component/s: API / Python > Support all JSON methods in PyFlink > --- > > Key: FLINK-23856 > URL: https://issues.apache.org/jira/browse/FLINK-23856 > Project: Flink > Issue Type: Sub-task > Components: API / Python >Reporter: Ingo Bürk >Priority: Major >
[jira] [Closed] (FLINK-23757) Support JSON_EXISTS / JSON_VALUE methods in pyFlink
[ https://issues.apache.org/jira/browse/FLINK-23757?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dian Fu closed FLINK-23757. --- Fix Version/s: 1.14.0 Resolution: Fixed Merged to master via ba9b4f1039037d4a2da4f90a2f83612ccf2fa708 > Support JSON_EXISTS / JSON_VALUE methods in pyFlink > --- > > Key: FLINK-23757 > URL: https://issues.apache.org/jira/browse/FLINK-23757 > Project: Flink > Issue Type: Sub-task > Components: API / Python >Reporter: Ingo Bürk >Assignee: Ingo Bürk >Priority: Major > Labels: pull-request-available > Fix For: 1.14.0 > >
[GitHub] [flink] dianfu closed pull request #16874: [FLINK-23757][python] Support json_exists and json_value
dianfu closed pull request #16874: URL: https://github.com/apache/flink/pull/16874
[GitHub] [flink] flinkbot edited a comment on pull request #16886: [FLINK-23727][core] Skip null values in SimpleStringSchema#deserialize
flinkbot edited a comment on pull request #16886: URL: https://github.com/apache/flink/pull/16886#issuecomment-901539563

## CI report:

* dcf915f4bd81ce935f935792de2effd0e8ffe634 Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=22467)

Bot commands

The @flinkbot bot supports the following commands:
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
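The PR title above (FLINK-23727, "Skip null values in SimpleStringSchema#deserialize") describes a small null guard. The following is a minimal sketch of that idea under the assumption that null payloads (e.g. Kafka tombstone records) should yield null instead of a NullPointerException; it is an illustration, not Flink's actual implementation.

```java
import java.nio.charset.StandardCharsets;

// Minimal sketch of a null-tolerant string deserializer, assuming the intent
// of FLINK-23727. Class and method names are illustrative, not Flink's code.
public class NullSafeStringSchema {

    static String deserialize(byte[] message) {
        if (message == null) {
            return null; // skip null values instead of throwing an NPE
        }
        return new String(message, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        System.out.println(deserialize("hello".getBytes(StandardCharsets.UTF_8)));
        System.out.println(deserialize(null));
    }
}
```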
[jira] [Created] (FLINK-23867) FlinkKafkaInternalProducerITCase.testCommitTransactionAfterClosed fails with IllegalStateException
Xintong Song created FLINK-23867: Summary: FlinkKafkaInternalProducerITCase.testCommitTransactionAfterClosed fails with IllegalStateException Key: FLINK-23867 URL: https://issues.apache.org/jira/browse/FLINK-23867 Project: Flink Issue Type: Bug Components: Connectors / Kafka Affects Versions: 1.13.2 Reporter: Xintong Song Fix For: 1.13.3 https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=22465=logs=72d4811f-9f0d-5fd0-014a-0bc26b72b642=c1d93a6a-ba91-515d-3196-2ee8019fbda7=6862 {code} Aug 18 23:20:14 [ERROR] Tests run: 10, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 51.905 s <<< FAILURE! - in org.apache.flink.streaming.connectors.kafka.FlinkKafkaInternalProducerITCase Aug 18 23:20:14 [ERROR] testCommitTransactionAfterClosed(org.apache.flink.streaming.connectors.kafka.FlinkKafkaInternalProducerITCase) Time elapsed: 7.848 s <<< ERROR! Aug 18 23:20:14 java.lang.Exception: Unexpected exception, expected but was Aug 18 23:20:14 at org.junit.internal.runners.statements.ExpectException.evaluate(ExpectException.java:28) Aug 18 23:20:14 at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298) Aug 18 23:20:14 at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292) Aug 18 23:20:14 at java.util.concurrent.FutureTask.run(FutureTask.java:266) Aug 18 23:20:14 at java.lang.Thread.run(Thread.java:748) Aug 18 23:20:14 Caused by: java.lang.AssertionError: The message should have been successfully sent expected null, but was: Aug 18 23:20:14 at org.junit.Assert.fail(Assert.java:88) Aug 18 23:20:14 at org.junit.Assert.failNotNull(Assert.java:755) Aug 18 23:20:14 at org.junit.Assert.assertNull(Assert.java:737) Aug 18 23:20:14 at org.apache.flink.streaming.connectors.kafka.FlinkKafkaInternalProducerITCase.getClosedProducer(FlinkKafkaInternalProducerITCase.java:228) Aug 18 23:20:14 at 
org.apache.flink.streaming.connectors.kafka.FlinkKafkaInternalProducerITCase.testCommitTransactionAfterClosed(FlinkKafkaInternalProducerITCase.java:177) Aug 18 23:20:14 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) Aug 18 23:20:14 at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) Aug 18 23:20:14 at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) Aug 18 23:20:14 at java.lang.reflect.Method.invoke(Method.java:498) Aug 18 23:20:14 at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50) Aug 18 23:20:14 at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) Aug 18 23:20:14 at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47) Aug 18 23:20:14 at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) Aug 18 23:20:14 at org.junit.internal.runners.statements.ExpectException.evaluate(ExpectException.java:19) {code}
[jira] [Commented] (FLINK-23848) PulsarSourceITCase is failed on Azure
[ https://issues.apache.org/jira/browse/FLINK-23848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17401472#comment-17401472 ] Xintong Song commented on FLINK-23848: -- https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=22463=logs=a5ef94ef-68c2-57fd-3794-dc108ed1c495=2c68b137-b01d-55c9-e603-3ff3f320364b=24746 > PulsarSourceITCase is failed on Azure > - > > Key: FLINK-23848 > URL: https://issues.apache.org/jira/browse/FLINK-23848 > Project: Flink > Issue Type: Bug > Components: Connectors / Pulsar >Affects Versions: 1.14.0 >Reporter: Jark Wu >Priority: Critical > Labels: test-stability > Fix For: 1.14.0 > > > https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=22412=logs=fc5181b0-e452-5c8f-68de-1097947f6483=995c650b-6573-581c-9ce6-7ad4cc038461 > {code} > 2021-08-17T20:11:53.7228789Z Aug 17 20:11:53 [INFO] Running > org.apache.flink.connector.pulsar.source.PulsarSourceITCase > 2021-08-17T20:17:38.2429467Z Aug 17 20:17:38 [ERROR] Tests run: 8, Failures: > 0, Errors: 1, Skipped: 0, Time elapsed: 344.515 s <<< FAILURE! - in > org.apache.flink.connector.pulsar.source.PulsarSourceITCase > 2021-08-17T20:17:38.2430693Z Aug 17 20:17:38 [ERROR] > testMultipleSplits{TestEnvironment, ExternalContext}[2] Time elapsed: 66.766 > s <<< ERROR! 
> 2021-08-17T20:17:38.2431387Z Aug 17 20:17:38 java.lang.RuntimeException: > Failed to fetch next result > 2021-08-17T20:17:38.2432035Z Aug 17 20:17:38 at > org.apache.flink.streaming.api.operators.collect.CollectResultIterator.nextResultFromFetcher(CollectResultIterator.java:109) > 2021-08-17T20:17:38.2433345Z Aug 17 20:17:38 at > org.apache.flink.streaming.api.operators.collect.CollectResultIterator.hasNext(CollectResultIterator.java:80) > 2021-08-17T20:17:38.2434175Z Aug 17 20:17:38 at > org.apache.flink.connectors.test.common.utils.TestDataMatchers$MultipleSplitDataMatcher.matchesSafely(TestDataMatchers.java:151) > 2021-08-17T20:17:38.2435028Z Aug 17 20:17:38 at > org.apache.flink.connectors.test.common.utils.TestDataMatchers$MultipleSplitDataMatcher.matchesSafely(TestDataMatchers.java:133) > 2021-08-17T20:17:38.2438387Z Aug 17 20:17:38 at > org.hamcrest.TypeSafeDiagnosingMatcher.matches(TypeSafeDiagnosingMatcher.java:55) > 2021-08-17T20:17:38.2439100Z Aug 17 20:17:38 at > org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:12) > 2021-08-17T20:17:38.2439708Z Aug 17 20:17:38 at > org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:8) > 2021-08-17T20:17:38.2440299Z Aug 17 20:17:38 at > org.apache.flink.connectors.test.common.testsuites.SourceTestSuiteBase.testMultipleSplits(SourceTestSuiteBase.java:156) > 2021-08-17T20:17:38.2441007Z Aug 17 20:17:38 at > sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > 2021-08-17T20:17:38.2441526Z Aug 17 20:17:38 at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > 2021-08-17T20:17:38.2442068Z Aug 17 20:17:38 at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > 2021-08-17T20:17:38.2442759Z Aug 17 20:17:38 at > java.lang.reflect.Method.invoke(Method.java:498) > 2021-08-17T20:17:38.2443247Z Aug 17 20:17:38 at > org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:688) > 2021-08-17T20:17:38.2443812Z Aug 17 
20:17:38 at > org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60) > 2021-08-17T20:17:38.241Z Aug 17 20:17:38 at > org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131) > 2021-08-17T20:17:38.2445101Z Aug 17 20:17:38 at > org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:149) > 2021-08-17T20:17:38.2445688Z Aug 17 20:17:38 at > org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestableMethod(TimeoutExtension.java:140) > 2021-08-17T20:17:38.2446328Z Aug 17 20:17:38 at > org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestTemplateMethod(TimeoutExtension.java:92) > 2021-08-17T20:17:38.2447303Z Aug 17 20:17:38 at > org.junit.jupiter.engine.execution.ExecutableInvoker$ReflectiveInterceptorCall.lambda$ofVoidMethod$0(ExecutableInvoker.java:115) > 2021-08-17T20:17:38.2448336Z Aug 17 20:17:38 at > org.junit.jupiter.engine.execution.ExecutableInvoker.lambda$invoke$0(ExecutableInvoker.java:105) > 2021-08-17T20:17:38.2448999Z Aug 17 20:17:38 at > org.junit.jupiter.engine.execution.InvocationInterceptorChain$InterceptedInvocation.proceed(InvocationInterceptorChain.java:106) > 2021-08-17T20:17:38.2449689Z Aug 17 20:17:38 at > org.junit.jupiter.engine.execution.InvocationInterceptorChain.proceed(InvocationInterceptorChain.java:64) > 2021-08-17T20:17:38.2450363Z Aug 17
[jira] [Commented] (FLINK-23828) KafkaSourceITCase.testIdleReader fails on azure
[ https://issues.apache.org/jira/browse/FLINK-23828?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17401471#comment-17401471 ] Xintong Song commented on FLINK-23828: -- https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=22463=logs=ce8f3cc3-c1ea-5281-f5eb-df9ebd24947f=918e890f-5ed9-5212-a25e-962628fb4bc5=7079 > KafkaSourceITCase.testIdleReader fails on azure > --- > > Key: FLINK-23828 > URL: https://issues.apache.org/jira/browse/FLINK-23828 > Project: Flink > Issue Type: Bug > Components: Connectors / Kafka >Affects Versions: 1.14.0 >Reporter: Xintong Song >Priority: Critical > Labels: test-stability > Fix For: 1.14.0 > > > https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=22284=logs=c5f0071e-1851-543e-9a45-9ac140befc32=15a22db7-8faa-5b34-3920-d33c9f0ca23c=7355 > {code} > Aug 16 14:25:00 [ERROR] Tests run: 12, Failures: 1, Errors: 0, Skipped: 0, > Time elapsed: 67.241 s <<< FAILURE! - in > org.apache.flink.connector.kafka.source.KafkaSourceITCase > Aug 16 14:25:00 [ERROR] testIdleReader{TestEnvironment, ExternalContext}[1] > Time elapsed: 0.918 s <<< FAILURE! 
> Aug 16 14:25:00 java.lang.AssertionError: > Aug 16 14:25:00 > Aug 16 14:25:00 Expected: Records consumed by Flink should be identical to > test data and preserve the order in multiple splits > Aug 16 14:25:00 but: Unexpected record 'la3OaJDch7vuUXDmGOYf' > Aug 16 14:25:00 at > org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:20) > Aug 16 14:25:00 at > org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:8) > Aug 16 14:25:00 at > org.apache.flink.connectors.test.common.testsuites.SourceTestSuiteBase.testIdleReader(SourceTestSuiteBase.java:193) > Aug 16 14:25:00 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native > Method) > Aug 16 14:25:00 at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > Aug 16 14:25:00 at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > Aug 16 14:25:00 at java.lang.reflect.Method.invoke(Method.java:498) > Aug 16 14:25:00 at > org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:688) > Aug 16 14:25:00 at > org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60) > Aug 16 14:25:00 at > org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131) > Aug 16 14:25:00 at > org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:149) > Aug 16 14:25:00 at > org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestableMethod(TimeoutExtension.java:140) > Aug 16 14:25:00 at > org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestTemplateMethod(TimeoutExtension.java:92) > Aug 16 14:25:00 at > org.junit.jupiter.engine.execution.ExecutableInvoker$ReflectiveInterceptorCall.lambda$ofVoidMethod$0(ExecutableInvoker.java:115) > Aug 16 14:25:00 at > org.junit.jupiter.engine.execution.ExecutableInvoker.lambda$invoke$0(ExecutableInvoker.java:105) > Aug 16 14:25:00 at > 
org.junit.jupiter.engine.execution.InvocationInterceptorChain$InterceptedInvocation.proceed(InvocationInterceptorChain.java:106) > Aug 16 14:25:00 at > org.junit.jupiter.engine.execution.InvocationInterceptorChain.proceed(InvocationInterceptorChain.java:64) > Aug 16 14:25:00 at > org.junit.jupiter.engine.execution.InvocationInterceptorChain.chainAndInvoke(InvocationInterceptorChain.java:45) > Aug 16 14:25:00 at > org.junit.jupiter.engine.execution.InvocationInterceptorChain.invoke(InvocationInterceptorChain.java:37) > Aug 16 14:25:00 at > org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:104) > Aug 16 14:25:00 at > org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:98) > Aug 16 14:25:00 at > org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeTestMethod$6(TestMethodTestDescriptor.java:210) > Aug 16 14:25:00 at > org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) > Aug 16 14:25:00 at > org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.invokeTestMethod(TestMethodTestDescriptor.java:206) > Aug 16 14:25:00 at > org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:131) > Aug 16 14:25:00 at > org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:65) > Aug 16 14:25:00 at >
[jira] [Commented] (FLINK-23391) KafkaSourceReaderTest.testKafkaSourceMetrics fails on azure
[ https://issues.apache.org/jira/browse/FLINK-23391?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17401470#comment-17401470 ] Xintong Song commented on FLINK-23391: -- https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=22463=logs=4be4ed2b-549a-533d-aa33-09e28e360cc8=f7d83ad5-3324-5307-0eb3-819065cdcb38=8303 > KafkaSourceReaderTest.testKafkaSourceMetrics fails on azure > --- > > Key: FLINK-23391 > URL: https://issues.apache.org/jira/browse/FLINK-23391 > Project: Flink > Issue Type: Bug > Components: Connectors / Kafka >Affects Versions: 1.13.1 >Reporter: Xintong Song >Priority: Major > Labels: test-stability > Fix For: 1.13.3 > > > https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=20456=logs=c5612577-f1f7-5977-6ff6-7432788526f7=53f6305f-55e6-561c-8f1e-3a1dde2c77df=6783 > {code} > Jul 14 23:00:26 [ERROR] Tests run: 10, Failures: 0, Errors: 1, Skipped: 0, > Time elapsed: 99.93 s <<< FAILURE! - in > org.apache.flink.connector.kafka.source.reader.KafkaSourceReaderTest > Jul 14 23:00:26 [ERROR] > testKafkaSourceMetrics(org.apache.flink.connector.kafka.source.reader.KafkaSourceReaderTest) > Time elapsed: 60.225 s <<< ERROR! > Jul 14 23:00:26 java.util.concurrent.TimeoutException: Offsets are not > committed successfully. 
Dangling offsets: > {15213={KafkaSourceReaderTest-0=OffsetAndMetadata{offset=10, > leaderEpoch=null, metadata=''}}} > Jul 14 23:00:26 at > org.apache.flink.core.testutils.CommonTestUtils.waitUtil(CommonTestUtils.java:210) > Jul 14 23:00:26 at > org.apache.flink.connector.kafka.source.reader.KafkaSourceReaderTest.testKafkaSourceMetrics(KafkaSourceReaderTest.java:275) > Jul 14 23:00:26 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native > Method) > Jul 14 23:00:26 at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > Jul 14 23:00:26 at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > Jul 14 23:00:26 at java.lang.reflect.Method.invoke(Method.java:498) > Jul 14 23:00:26 at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50) > Jul 14 23:00:26 at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > Jul 14 23:00:26 at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47) > Jul 14 23:00:26 at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > Jul 14 23:00:26 at > org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) > Jul 14 23:00:26 at > org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:239) > Jul 14 23:00:26 at > org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45) > Jul 14 23:00:26 at > org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55) > Jul 14 23:00:26 at org.junit.rules.RunRules.evaluate(RunRules.java:20) > Jul 14 23:00:26 at > org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325) > Jul 14 23:00:26 at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78) > Jul 14 23:00:26 at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57) > Jul 14 23:00:26 at > 
org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) > Jul 14 23:00:26 at > org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) > Jul 14 23:00:26 at > org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) > Jul 14 23:00:26 at > org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) > Jul 14 23:00:26 at > org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268) > Jul 14 23:00:26 at > org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) > Jul 14 23:00:26 at > org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) > Jul 14 23:00:26 at > org.junit.runners.ParentRunner.run(ParentRunner.java:363) > Jul 14 23:00:26 at org.junit.runners.Suite.runChild(Suite.java:128) > Jul 14 23:00:26 at org.junit.runners.Suite.runChild(Suite.java:27) > Jul 14 23:00:26 at > org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) > Jul 14 23:00:26 at > org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) > Jul 14 23:00:26 at > org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) > Jul 14 23:00:26 at > org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) > Jul 14 23:00:26 at > org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268) > Jul 14 23:00:26 at
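The timeout above is raised by `CommonTestUtils.waitUtil`, a poll-until-condition helper. The sketch below shows the general polling pattern only; the signature and names are assumptions for illustration, not Flink's exact API.

```java
import java.util.concurrent.TimeoutException;
import java.util.function.BooleanSupplier;

// Minimal sketch of a wait-until polling helper in the style of
// CommonTestUtils.waitUtil. Illustrative only, not Flink's implementation.
public class WaitUntilSketch {

    static void waitUntil(BooleanSupplier condition, long timeoutMs, String errorMsg)
            throws InterruptedException, TimeoutException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (!condition.getAsBoolean()) {
            if (System.currentTimeMillis() >= deadline) {
                // e.g. "Offsets are not committed successfully." in the failure above
                throw new TimeoutException(errorMsg);
            }
            Thread.sleep(10); // back off briefly between checks
        }
    }

    public static void main(String[] args) throws Exception {
        waitUntil(() -> true, 100, "should not time out");
        System.out.println("condition met");
    }
}
```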
[GitHub] [flink] flinkbot edited a comment on pull request #16889: [FLINK-16152][doc-zh]Translate "Operator/index"(docs/content.zh/docs/dev/datastream/operators/overview.md) into Chinese
flinkbot edited a comment on pull request #16889: URL: https://github.com/apache/flink/pull/16889#issuecomment-901589961

## CI report:

* d73a6ff8b8eb25198ec374300d96744f36144dad Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=22473)

Bot commands

The @flinkbot bot supports the following commands:
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
[GitHub] [flink] flinkbot edited a comment on pull request #16888: [FLINK-23765][python] Fix the NPE in Python UDTF
flinkbot edited a comment on pull request #16888: URL: https://github.com/apache/flink/pull/16888#issuecomment-901589942 ## CI report: * 796d67e230bf26f1f4437eb53be4560ec49ec846 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=22472)
[GitHub] [flink] flinkbot edited a comment on pull request #16890: [FLINK-23765][python] Fix the NPE in Python UDTF
flinkbot edited a comment on pull request #16890: URL: https://github.com/apache/flink/pull/16890#issuecomment-901589996 ## CI report: * 05d7697f85fa7b58cf55bc1dc3bcb0b963875f59 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=22474)
[jira] [Comment Edited] (FLINK-23861) flink sql client support dynamic params
[ https://issues.apache.org/jira/browse/FLINK-23861?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17401465#comment-17401465 ] zhangbinzaifendou edited comment on FLINK-23861 at 8/19/21, 4:29 AM: - [~jark] Thank you for your reply! 1 There is no detailed test for the specific execution time, but from the perspective of the method execution process, the process is very lengthy and not elegant enough. -c k=v is simple and convenient. 2 Spark, Hive and other systems have related parameter configurations, such as --conf --hiveconf. 3 Avoid using single quotation marks and maintain good usage habits. was (Author: zhangbinzaifendou): [~jark] Thank you for your reply! 1 There is no detailed test for the specific execution time, but from the perspective of the method execution process, the process is very lengthy and not elegant enough. The -c/--conf k=v is simple and convenient. 2 Spark, Hive and other systems have related parameter configurations, such as --conf --hiveconf. 3 Avoid using single quotation marks and maintain good usage habits. > flink sql client support dynamic params > --- > > Key: FLINK-23861 > URL: https://issues.apache.org/jira/browse/FLINK-23861 > Project: Flink > Issue Type: Improvement > Components: Table SQL / Client >Affects Versions: 1.13.2 >Reporter: zhangbinzaifendou >Priority: Minor > Labels: pull-request-available > Attachments: image-2021-08-18-23-41-13-629.png, > image-2021-08-18-23-42-04-257.png, image-2021-08-18-23-43-04-323.png > > > 1 Every time the set command is executed, the method call process is very > long and a new createTableEnvironment object is created > 2 As a result of the previous discussion in FLINK-22770, I don't think it's a > good habit for users to put quotes around keys and values. -- This message was sent by Atlassian Jira (v8.3.4#803005)
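The comment above argues for Spark-style `-c`/`--conf key=value` flags instead of quoted `SET 'k' = 'v';` statements. As a hedged illustration only (the class name and flag handling below are hypothetical, not Flink's actual CLI code), such flags could be parsed into a configuration map like this:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class DynamicParams {

    /** Parses repeated "-c key=value" / "--conf key=value" flags into an ordered map (illustrative only). */
    public static Map<String, String> parse(String[] args) {
        Map<String, String> conf = new LinkedHashMap<>();
        for (int i = 0; i < args.length - 1; i++) {
            if ("-c".equals(args[i]) || "--conf".equals(args[i])) {
                String kv = args[++i];
                int eq = kv.indexOf('=');
                if (eq <= 0) {
                    throw new IllegalArgumentException("Expected key=value, got: " + kv);
                }
                // No quoting is needed around key or value, unlike SET 'k' = 'v';
                conf.put(kv.substring(0, eq), kv.substring(eq + 1));
            }
        }
        return conf;
    }

    public static void main(String[] unused) {
        Map<String, String> conf = parse(new String[] {
                "-c", "table.exec.mini-batch.enabled=true",
                "--conf", "parallelism.default=4"});
        System.out.println(conf);
    }
}
```

This sketches why the proposal avoids both the quoting question from FLINK-22770 and the per-`SET` environment rebuild: the flags are resolved once, before any environment is created.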
[jira] [Commented] (FLINK-23525) Docker command fails on Azure: Exit code 137 returned from process: file name '/usr/bin/docker'
[ https://issues.apache.org/jira/browse/FLINK-23525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17401460#comment-17401460 ] Xintong Song commented on FLINK-23525: -- https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=22463=logs=4d4a0d10-fca2-5507-8eed-c07f0bdf4887=7b25afdf-cc6c-566f-5459-359dc2585798=10336 > Docker command fails on Azure: Exit code 137 returned from process: file name > '/usr/bin/docker' > --- > > Key: FLINK-23525 > URL: https://issues.apache.org/jira/browse/FLINK-23525 > Project: Flink > Issue Type: Bug > Components: Build System / Azure Pipelines >Affects Versions: 1.14.0, 1.13.1 >Reporter: Dawid Wysakowicz >Priority: Critical > Labels: auto-deprioritized-blocker, test-stability > Fix For: 1.14.0 > > Attachments: screenshot-1.png > > > https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=21053=logs=4d4a0d10-fca2-5507-8eed-c07f0bdf4887=7b25afdf-cc6c-566f-5459-359dc2585798=10034 > {code} > ##[error]Exit code 137 returned from process: file name '/usr/bin/docker', > arguments 'exec -i -u 1001 -w /home/vsts_azpcontainer > 9dca235e075b70486fac576ee17cee722940edf575e5478e0a52def5b46c28b5 > /__a/externals/node/bin/node /__w/_temp/containerHandlerInvoker.js'. > {code} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (FLINK-22387) UpsertKafkaTableITCase hangs when setting up kafka
[ https://issues.apache.org/jira/browse/FLINK-22387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17401462#comment-17401462 ] Xintong Song commented on FLINK-22387: -- https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=22463=logs=1fc6e7bf-633c-5081-c32a-9dea24b05730=576aba0a-d787-51b6-6a92-cf233f360582=7180 > UpsertKafkaTableITCase hangs when setting up kafka > -- > > Key: FLINK-22387 > URL: https://issues.apache.org/jira/browse/FLINK-22387 > Project: Flink > Issue Type: Bug > Components: Connectors / Kafka, Table SQL / Ecosystem >Affects Versions: 1.14.0, 1.13.1, 1.12.4 >Reporter: Dawid Wysakowicz >Assignee: Shengkai Fang >Priority: Blocker > Labels: pull-request-available, test-stability > Fix For: 1.14.0, 1.12.6, 1.13.3 > > > https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=16901=logs=c5f0071e-1851-543e-9a45-9ac140befc32=1fb1a56f-e8b5-5a82-00a0-a2db7757b4f5=6932 > {code} > 2021-04-20T20:01:32.2276988Z Apr 20 20:01:32 "main" #1 prio=5 os_prio=0 > tid=0x7fe87400b000 nid=0x4028 runnable [0x7fe87df22000] > 2021-04-20T20:01:32.2277666Z Apr 20 20:01:32java.lang.Thread.State: > RUNNABLE > 2021-04-20T20:01:32.2278338Z Apr 20 20:01:32 at > org.testcontainers.shaded.okio.Buffer.getByte(Buffer.java:312) > 2021-04-20T20:01:32.2279325Z Apr 20 20:01:32 at > org.testcontainers.shaded.okio.RealBufferedSource.readHexadecimalUnsignedLong(RealBufferedSource.java:310) > 2021-04-20T20:01:32.2280656Z Apr 20 20:01:32 at > org.testcontainers.shaded.okhttp3.internal.http1.Http1ExchangeCodec$ChunkedSource.readChunkSize(Http1ExchangeCodec.java:492) > 2021-04-20T20:01:32.2281603Z Apr 20 20:01:32 at > org.testcontainers.shaded.okhttp3.internal.http1.Http1ExchangeCodec$ChunkedSource.read(Http1ExchangeCodec.java:471) > 2021-04-20T20:01:32.2282163Z Apr 20 20:01:32 at > org.testcontainers.shaded.okhttp3.internal.Util.skipAll(Util.java:204) > 2021-04-20T20:01:32.2282870Z Apr 20 20:01:32 at > 
org.testcontainers.shaded.okhttp3.internal.Util.discard(Util.java:186) > 2021-04-20T20:01:32.2283494Z Apr 20 20:01:32 at > org.testcontainers.shaded.okhttp3.internal.http1.Http1ExchangeCodec$ChunkedSource.close(Http1ExchangeCodec.java:511) > 2021-04-20T20:01:32.2284460Z Apr 20 20:01:32 at > org.testcontainers.shaded.okio.ForwardingSource.close(ForwardingSource.java:43) > 2021-04-20T20:01:32.2285183Z Apr 20 20:01:32 at > org.testcontainers.shaded.okhttp3.internal.connection.Exchange$ResponseBodySource.close(Exchange.java:313) > 2021-04-20T20:01:32.2285756Z Apr 20 20:01:32 at > org.testcontainers.shaded.okio.RealBufferedSource.close(RealBufferedSource.java:476) > 2021-04-20T20:01:32.2286287Z Apr 20 20:01:32 at > org.testcontainers.shaded.okhttp3.internal.Util.closeQuietly(Util.java:139) > 2021-04-20T20:01:32.2286795Z Apr 20 20:01:32 at > org.testcontainers.shaded.okhttp3.ResponseBody.close(ResponseBody.java:192) > 2021-04-20T20:01:32.2287270Z Apr 20 20:01:32 at > org.testcontainers.shaded.okhttp3.Response.close(Response.java:290) > 2021-04-20T20:01:32.2287913Z Apr 20 20:01:32 at > org.testcontainers.shaded.com.github.dockerjava.okhttp.OkDockerHttpClient$OkResponse.close(OkDockerHttpClient.java:285) > 2021-04-20T20:01:32.2288606Z Apr 20 20:01:32 at > org.testcontainers.shaded.com.github.dockerjava.core.DefaultInvocationBuilder.lambda$null$0(DefaultInvocationBuilder.java:272) > 2021-04-20T20:01:32.2289295Z Apr 20 20:01:32 at > org.testcontainers.shaded.com.github.dockerjava.core.DefaultInvocationBuilder$$Lambda$340/2058508175.close(Unknown > Source) > 2021-04-20T20:01:32.2289886Z Apr 20 20:01:32 at > com.github.dockerjava.api.async.ResultCallbackTemplate.close(ResultCallbackTemplate.java:77) > 2021-04-20T20:01:32.2290567Z Apr 20 20:01:32 at > org.testcontainers.utility.ResourceReaper.start(ResourceReaper.java:202) > 2021-04-20T20:01:32.2291051Z Apr 20 20:01:32 at > org.testcontainers.DockerClientFactory.client(DockerClientFactory.java:205) > 
2021-04-20T20:01:32.2291879Z Apr 20 20:01:32 - locked <0xe9cd50f8> > (a [Ljava.lang.Object;) > 2021-04-20T20:01:32.2292313Z Apr 20 20:01:32 at > org.testcontainers.LazyDockerClient.getDockerClient(LazyDockerClient.java:14) > 2021-04-20T20:01:32.2292870Z Apr 20 20:01:32 at > org.testcontainers.LazyDockerClient.authConfig(LazyDockerClient.java:12) > 2021-04-20T20:01:32.2293383Z Apr 20 20:01:32 at > org.testcontainers.containers.GenericContainer.start(GenericContainer.java:310) > 2021-04-20T20:01:32.2293890Z Apr 20 20:01:32 at > org.testcontainers.containers.GenericContainer.starting(GenericContainer.java:1029) > 2021-04-20T20:01:32.2294578Z Apr 20 20:01:32 at >
[jira] [Commented] (FLINK-23796) UnalignedCheckpointRescaleITCase JVM crash on Azure
[ https://issues.apache.org/jira/browse/FLINK-23796?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17401458#comment-17401458 ] Xintong Song commented on FLINK-23796: -- https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=22463=logs=39d5b1d5-3b41-54dc-6458-1e2ddd1cdcf3=0c010d0c-3dec-5bf1-d408-7b18988b1b2b=5142 > UnalignedCheckpointRescaleITCase JVM crash on Azure > --- > > Key: FLINK-23796 > URL: https://issues.apache.org/jira/browse/FLINK-23796 > Project: Flink > Issue Type: Bug > Components: Runtime / Checkpointing >Affects Versions: 1.14.0 >Reporter: Xintong Song >Priority: Critical > Labels: test-stability > Fix For: 1.14.0 > > > https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=0=logs=39d5b1d5-3b41-54dc-6458-1e2ddd1cdcf3=0c010d0c-3dec-5bf1-d408-7b18988b1b2b=5182 > {code} > Aug 16 01:03:17 [ERROR] Failed to execute goal > org.apache.maven.plugins:maven-surefire-plugin:2.22.2:test > (integration-tests) on project flink-tests: There are test failures. > Aug 16 01:03:17 [ERROR] > Aug 16 01:03:17 [ERROR] Please refer to > /__w/1/s/flink-tests/target/surefire-reports for the individual test results. > Aug 16 01:03:17 [ERROR] Please refer to dump files (if any exist) > [date].dump, [date]-jvmRun[N].dump and [date].dumpstream. > Aug 16 01:03:17 [ERROR] ExecutionException The forked VM terminated without > properly saying goodbye. VM crash or System.exit called? 
> Aug 16 01:03:17 [ERROR] Command was /bin/sh -c cd /__w/1/s/flink-tests/target > && /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java -Xms256m -Xmx2048m > -Dmvn.forkNumber=1 -XX:+UseG1GC -jar > /__w/1/s/flink-tests/target/surefire/surefirebooter4077406518503777021.jar > /__w/1/s/flink-tests/target/surefire 2021-08-15T23-59-56_973-jvmRun1 > surefire4438021626717472043tmp surefire_1445134621790231688950tmp > Aug 16 01:03:17 [ERROR] Error occurred in starting fork, check output in log > Aug 16 01:03:17 [ERROR] Process Exit Code: 137 > Aug 16 01:03:17 [ERROR] Crashed tests: > Aug 16 01:03:17 [ERROR] > org.apache.flink.test.checkpointing.UnalignedCheckpointRescaleITCase > Aug 16 01:03:17 [ERROR] > org.apache.maven.surefire.booter.SurefireBooterForkException: > ExecutionException The forked VM terminated without properly saying goodbye. > VM crash or System.exit called? > Aug 16 01:03:17 [ERROR] Command was /bin/sh -c cd /__w/1/s/flink-tests/target > && /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java -Xms256m -Xmx2048m > -Dmvn.forkNumber=1 -XX:+UseG1GC -jar > /__w/1/s/flink-tests/target/surefire/surefirebooter4077406518503777021.jar > /__w/1/s/flink-tests/target/surefire 2021-08-15T23-59-56_973-jvmRun1 > surefire4438021626717472043tmp surefire_1445134621790231688950tmp > Aug 16 01:03:17 [ERROR] Error occurred in starting fork, check output in log > Aug 16 01:03:17 [ERROR] Process Exit Code: 137 > {code} -- This message was sent by Atlassian Jira (v8.3.4#803005)
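Both this report and FLINK-23525 show "Process Exit Code: 137". By the common shell convention, exit codes above 128 encode 128 + signal number, so 137 means signal 9 (SIGKILL) — typically the Linux OOM killer or the container runtime terminating the JVM. A small sketch of that decoding (the range bound below is an assumption about plausible signal numbers, not a formal limit):

```java
public class ExitCode {

    /** Returns the terminating signal if the exit code encodes one (128 + signo), else -1. */
    public static int signalOf(int exitCode) {
        // 129..159 covers the plausible signal range on Linux; anything else is a normal exit code.
        return (exitCode > 128 && exitCode < 160) ? exitCode - 128 : -1;
    }

    public static void main(String[] args) {
        // 9 is SIGKILL, which cannot be caught by the JVM.
        System.out.println("exit 137 -> signal " + signalOf(137));
    }
}
```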
[jira] [Commented] (FLINK-23848) PulsarSourceITCase is failed on Azure
[ https://issues.apache.org/jira/browse/FLINK-23848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17401459#comment-17401459 ] Xintong Song commented on FLINK-23848: -- https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=22463=logs=5cae8624-c7eb-5c51-92d3-4d2dacedd221=5acec1b4-945b-59ca-34f8-168928ce5199=24426 > PulsarSourceITCase is failed on Azure > - > > Key: FLINK-23848 > URL: https://issues.apache.org/jira/browse/FLINK-23848 > Project: Flink > Issue Type: Bug > Components: Connectors / Pulsar >Affects Versions: 1.14.0 >Reporter: Jark Wu >Priority: Major > Labels: test-stability > Fix For: 1.14.0 > > > https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=22412=logs=fc5181b0-e452-5c8f-68de-1097947f6483=995c650b-6573-581c-9ce6-7ad4cc038461 > {code} > 2021-08-17T20:11:53.7228789Z Aug 17 20:11:53 [INFO] Running > org.apache.flink.connector.pulsar.source.PulsarSourceITCase > 2021-08-17T20:17:38.2429467Z Aug 17 20:17:38 [ERROR] Tests run: 8, Failures: > 0, Errors: 1, Skipped: 0, Time elapsed: 344.515 s <<< FAILURE! - in > org.apache.flink.connector.pulsar.source.PulsarSourceITCase > 2021-08-17T20:17:38.2430693Z Aug 17 20:17:38 [ERROR] > testMultipleSplits{TestEnvironment, ExternalContext}[2] Time elapsed: 66.766 > s <<< ERROR! 
> 2021-08-17T20:17:38.2431387Z Aug 17 20:17:38 java.lang.RuntimeException: > Failed to fetch next result > 2021-08-17T20:17:38.2432035Z Aug 17 20:17:38 at > org.apache.flink.streaming.api.operators.collect.CollectResultIterator.nextResultFromFetcher(CollectResultIterator.java:109) > 2021-08-17T20:17:38.2433345Z Aug 17 20:17:38 at > org.apache.flink.streaming.api.operators.collect.CollectResultIterator.hasNext(CollectResultIterator.java:80) > 2021-08-17T20:17:38.2434175Z Aug 17 20:17:38 at > org.apache.flink.connectors.test.common.utils.TestDataMatchers$MultipleSplitDataMatcher.matchesSafely(TestDataMatchers.java:151) > 2021-08-17T20:17:38.2435028Z Aug 17 20:17:38 at > org.apache.flink.connectors.test.common.utils.TestDataMatchers$MultipleSplitDataMatcher.matchesSafely(TestDataMatchers.java:133) > 2021-08-17T20:17:38.2438387Z Aug 17 20:17:38 at > org.hamcrest.TypeSafeDiagnosingMatcher.matches(TypeSafeDiagnosingMatcher.java:55) > 2021-08-17T20:17:38.2439100Z Aug 17 20:17:38 at > org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:12) > 2021-08-17T20:17:38.2439708Z Aug 17 20:17:38 at > org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:8) > 2021-08-17T20:17:38.2440299Z Aug 17 20:17:38 at > org.apache.flink.connectors.test.common.testsuites.SourceTestSuiteBase.testMultipleSplits(SourceTestSuiteBase.java:156) > 2021-08-17T20:17:38.2441007Z Aug 17 20:17:38 at > sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > 2021-08-17T20:17:38.2441526Z Aug 17 20:17:38 at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > 2021-08-17T20:17:38.2442068Z Aug 17 20:17:38 at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > 2021-08-17T20:17:38.2442759Z Aug 17 20:17:38 at > java.lang.reflect.Method.invoke(Method.java:498) > 2021-08-17T20:17:38.2443247Z Aug 17 20:17:38 at > org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:688) > 2021-08-17T20:17:38.2443812Z Aug 17 
20:17:38 at > org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60) > 2021-08-17T20:17:38.241Z Aug 17 20:17:38 at > org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131) > 2021-08-17T20:17:38.2445101Z Aug 17 20:17:38 at > org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:149) > 2021-08-17T20:17:38.2445688Z Aug 17 20:17:38 at > org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestableMethod(TimeoutExtension.java:140) > 2021-08-17T20:17:38.2446328Z Aug 17 20:17:38 at > org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestTemplateMethod(TimeoutExtension.java:92) > 2021-08-17T20:17:38.2447303Z Aug 17 20:17:38 at > org.junit.jupiter.engine.execution.ExecutableInvoker$ReflectiveInterceptorCall.lambda$ofVoidMethod$0(ExecutableInvoker.java:115) > 2021-08-17T20:17:38.2448336Z Aug 17 20:17:38 at > org.junit.jupiter.engine.execution.ExecutableInvoker.lambda$invoke$0(ExecutableInvoker.java:105) > 2021-08-17T20:17:38.2448999Z Aug 17 20:17:38 at > org.junit.jupiter.engine.execution.InvocationInterceptorChain$InterceptedInvocation.proceed(InvocationInterceptorChain.java:106) > 2021-08-17T20:17:38.2449689Z Aug 17 20:17:38 at > org.junit.jupiter.engine.execution.InvocationInterceptorChain.proceed(InvocationInterceptorChain.java:64) > 2021-08-17T20:17:38.2450363Z Aug 17
[jira] [Updated] (FLINK-23848) PulsarSourceITCase is failed on Azure
[ https://issues.apache.org/jira/browse/FLINK-23848?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xintong Song updated FLINK-23848: - Priority: Critical (was: Major) > PulsarSourceITCase is failed on Azure > - > > Key: FLINK-23848 > URL: https://issues.apache.org/jira/browse/FLINK-23848 > Project: Flink > Issue Type: Bug > Components: Connectors / Pulsar >Affects Versions: 1.14.0 >Reporter: Jark Wu >Priority: Critical > Labels: test-stability > Fix For: 1.14.0 > > > https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=22412=logs=fc5181b0-e452-5c8f-68de-1097947f6483=995c650b-6573-581c-9ce6-7ad4cc038461 > {code} > 2021-08-17T20:11:53.7228789Z Aug 17 20:11:53 [INFO] Running > org.apache.flink.connector.pulsar.source.PulsarSourceITCase > 2021-08-17T20:17:38.2429467Z Aug 17 20:17:38 [ERROR] Tests run: 8, Failures: > 0, Errors: 1, Skipped: 0, Time elapsed: 344.515 s <<< FAILURE! - in > org.apache.flink.connector.pulsar.source.PulsarSourceITCase > 2021-08-17T20:17:38.2430693Z Aug 17 20:17:38 [ERROR] > testMultipleSplits{TestEnvironment, ExternalContext}[2] Time elapsed: 66.766 > s <<< ERROR! 
> 2021-08-17T20:17:38.2431387Z Aug 17 20:17:38 java.lang.RuntimeException: > Failed to fetch next result > 2021-08-17T20:17:38.2432035Z Aug 17 20:17:38 at > org.apache.flink.streaming.api.operators.collect.CollectResultIterator.nextResultFromFetcher(CollectResultIterator.java:109) > 2021-08-17T20:17:38.2433345Z Aug 17 20:17:38 at > org.apache.flink.streaming.api.operators.collect.CollectResultIterator.hasNext(CollectResultIterator.java:80) > 2021-08-17T20:17:38.2434175Z Aug 17 20:17:38 at > org.apache.flink.connectors.test.common.utils.TestDataMatchers$MultipleSplitDataMatcher.matchesSafely(TestDataMatchers.java:151) > 2021-08-17T20:17:38.2435028Z Aug 17 20:17:38 at > org.apache.flink.connectors.test.common.utils.TestDataMatchers$MultipleSplitDataMatcher.matchesSafely(TestDataMatchers.java:133) > 2021-08-17T20:17:38.2438387Z Aug 17 20:17:38 at > org.hamcrest.TypeSafeDiagnosingMatcher.matches(TypeSafeDiagnosingMatcher.java:55) > 2021-08-17T20:17:38.2439100Z Aug 17 20:17:38 at > org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:12) > 2021-08-17T20:17:38.2439708Z Aug 17 20:17:38 at > org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:8) > 2021-08-17T20:17:38.2440299Z Aug 17 20:17:38 at > org.apache.flink.connectors.test.common.testsuites.SourceTestSuiteBase.testMultipleSplits(SourceTestSuiteBase.java:156) > 2021-08-17T20:17:38.2441007Z Aug 17 20:17:38 at > sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > 2021-08-17T20:17:38.2441526Z Aug 17 20:17:38 at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > 2021-08-17T20:17:38.2442068Z Aug 17 20:17:38 at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > 2021-08-17T20:17:38.2442759Z Aug 17 20:17:38 at > java.lang.reflect.Method.invoke(Method.java:498) > 2021-08-17T20:17:38.2443247Z Aug 17 20:17:38 at > org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:688) > 2021-08-17T20:17:38.2443812Z Aug 17 
20:17:38 at > org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60) > 2021-08-17T20:17:38.241Z Aug 17 20:17:38 at > org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131) > 2021-08-17T20:17:38.2445101Z Aug 17 20:17:38 at > org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:149) > 2021-08-17T20:17:38.2445688Z Aug 17 20:17:38 at > org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestableMethod(TimeoutExtension.java:140) > 2021-08-17T20:17:38.2446328Z Aug 17 20:17:38 at > org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestTemplateMethod(TimeoutExtension.java:92) > 2021-08-17T20:17:38.2447303Z Aug 17 20:17:38 at > org.junit.jupiter.engine.execution.ExecutableInvoker$ReflectiveInterceptorCall.lambda$ofVoidMethod$0(ExecutableInvoker.java:115) > 2021-08-17T20:17:38.2448336Z Aug 17 20:17:38 at > org.junit.jupiter.engine.execution.ExecutableInvoker.lambda$invoke$0(ExecutableInvoker.java:105) > 2021-08-17T20:17:38.2448999Z Aug 17 20:17:38 at > org.junit.jupiter.engine.execution.InvocationInterceptorChain$InterceptedInvocation.proceed(InvocationInterceptorChain.java:106) > 2021-08-17T20:17:38.2449689Z Aug 17 20:17:38 at > org.junit.jupiter.engine.execution.InvocationInterceptorChain.proceed(InvocationInterceptorChain.java:64) > 2021-08-17T20:17:38.2450363Z Aug 17 20:17:38 at > org.junit.jupiter.engine.execution.InvocationInterceptorChain.chainAndInvoke(InvocationInterceptorChain.java:45) > 2021-08-17T20:17:38.2451001Z Aug 17
[jira] [Created] (FLINK-23866) KafkaTransactionLogITCase.testGetTransactionsToAbort fails with IllegalStateException
Xintong Song created FLINK-23866: Summary: KafkaTransactionLogITCase.testGetTransactionsToAbort fails with IllegalStateException Key: FLINK-23866 URL: https://issues.apache.org/jira/browse/FLINK-23866 Project: Flink Issue Type: Bug Components: Connectors / Kafka Affects Versions: 1.14.0 Reporter: Xintong Song Fix For: 1.14.0 https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=22463=logs=c5f0071e-1851-543e-9a45-9ac140befc32=15a22db7-8faa-5b34-3920-d33c9f0ca23c=7098 {code} Aug 18 23:14:24 [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 67.32 s <<< FAILURE! - in org.apache.flink.connector.kafka.sink.KafkaTransactionLogITCase Aug 18 23:14:24 [ERROR] testGetTransactionsToAbort Time elapsed: 22.35 s <<< ERROR! Aug 18 23:14:24 java.lang.IllegalStateException: You can only check the position for partitions assigned to this consumer. Aug 18 23:14:24 at org.apache.kafka.clients.consumer.KafkaConsumer.position(KafkaConsumer.java:1717) Aug 18 23:14:24 at org.apache.kafka.clients.consumer.KafkaConsumer.position(KafkaConsumer.java:1684) Aug 18 23:14:24 at org.apache.flink.connector.kafka.sink.KafkaTransactionLog.hasReadAllRecords(KafkaTransactionLog.java:144) Aug 18 23:14:24 at org.apache.flink.connector.kafka.sink.KafkaTransactionLog.getTransactionsToAbort(KafkaTransactionLog.java:133) Aug 18 23:14:24 at org.apache.flink.connector.kafka.sink.KafkaTransactionLogITCase.testGetTransactionsToAbort(KafkaTransactionLogITCase.java:110) Aug 18 23:14:24 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) Aug 18 23:14:24 at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) Aug 18 23:14:24 at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) Aug 18 23:14:24 at java.lang.reflect.Method.invoke(Method.java:498) Aug 18 23:14:24 at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) Aug 18 23:14:24 at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) Aug 18 23:14:24 at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) Aug 18 23:14:24 at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) Aug 18 23:14:24 at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) Aug 18 23:14:24 at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) Aug 18 23:14:24 at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) Aug 18 23:14:24 at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) Aug 18 23:14:24 at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) Aug 18 23:14:24 at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) Aug 18 23:14:24 at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) Aug 18 23:14:24 at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) Aug 18 23:14:24 at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) Aug 18 23:14:24 at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) Aug 18 23:14:24 at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) Aug 18 23:14:24 at org.testcontainers.containers.FailureDetectingExternalResource$1.evaluate(FailureDetectingExternalResource.java:30) Aug 18 23:14:24 at org.junit.rules.RunRules.evaluate(RunRules.java:20) Aug 18 23:14:24 at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) Aug 18 23:14:24 at org.junit.runners.ParentRunner.run(ParentRunner.java:413) Aug 18 23:14:24 at org.junit.runner.JUnitCore.run(JUnitCore.java:137) Aug 18 23:14:24 at org.junit.runner.JUnitCore.run(JUnitCore.java:115) Aug 18 23:14:24 at org.junit.vintage.engine.execution.RunnerExecutor.execute(RunnerExecutor.java:43) Aug 18 23:14:24 at java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:183) Aug 18 23:14:24 at 
java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193) Aug 18 23:14:24 at java.util.Iterator.forEachRemaining(Iterator.java:116) Aug 18 23:14:24 at java.util.Spliterators$IteratorSpliterator.forEachRemaining(Spliterators.java:1801) Aug 18 23:14:24 at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:482) Aug 18 23:14:24 at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:472) Aug 18 23:14:24 at java.util.stream.ForEachOps$ForEachOp.evaluateSequential(ForEachOps.java:150) Aug 18 23:14:24 at
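The IllegalStateException above ("You can only check the position for partitions assigned to this consumer") is thrown when `KafkaConsumer.position()` is queried for a partition outside the consumer's current assignment. A hypothetical guard over plain collections, illustrating that invariant without a broker (this is not the actual Kafka client code):

```java
import java.util.Map;
import java.util.Set;

public class PositionGuard {

    /** Mirrors the precondition KafkaConsumer.position() enforces: the partition must be assigned. */
    public static long position(Set<String> assignment, Map<String, Long> positions, String partition) {
        if (!assignment.contains(partition)) {
            throw new IllegalStateException(
                    "You can only check the position for partitions assigned to this consumer.");
        }
        // Fall back to 0 when no position has been recorded yet (illustrative choice).
        return positions.getOrDefault(partition, 0L);
    }
}
```

In the real client the usual fix is to call `assign(...)` (or wait for a rebalance-driven assignment) before querying positions for those partitions.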
[GitHub] [flink] flinkbot commented on pull request #16888: [FLINK-23765][python] Fix the NPE in Python UDTF
flinkbot commented on pull request #16888: URL: https://github.com/apache/flink/pull/16888#issuecomment-901589942 ## CI report: * 796d67e230bf26f1f4437eb53be4560ec49ec846 UNKNOWN
[jira] [Comment Edited] (FLINK-22198) KafkaTableITCase hang.
[ https://issues.apache.org/jira/browse/FLINK-22198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17401457#comment-17401457 ] Xintong Song edited comment on FLINK-22198 at 8/19/21, 3:56 AM: Thanks for the updates. bq. I don't have the permission to change the state of this ticket. Xintong Song Could you help to change it to in-progress? Thanks~ Only the assignee can do it. There is a problem in jira that a `user` can be assigned to the ticket but cannot move it to in-progress, unless he/she is a `contributor`. I've granted you the `contributor` permission. You should be able to do it now. was (Author: xintongsong): Thanks for the updates. bq. I don't have the permission to change the state of this ticket. Xintong Song Could you help to change it to in-progress? Thanks~ Only the assignee can do it. There was a problem in jira that a `user` can be assigned to the ticket but cannot move it to in-progress, unless he/she is a `contributor`. I've granted you the `contributor` permission. You should be able to do it now. > KafkaTableITCase hang. > -- > > Key: FLINK-22198 > URL: https://issues.apache.org/jira/browse/FLINK-22198 > Project: Flink > Issue Type: Bug > Components: Connectors / Kafka >Affects Versions: 1.14.0, 1.12.4 >Reporter: Guowei Ma >Assignee: Qingsheng Ren >Priority: Blocker > Labels: pull-request-available, stale-blocker, test-stability > Fix For: 1.14.0 > > > https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=16287=logs=c5f0071e-1851-543e-9a45-9ac140befc32=1fb1a56f-e8b5-5a82-00a0-a2db7757b4f5=6625 > There are no artifacts. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (FLINK-22889) JdbcExactlyOnceSinkE2eTest.testInsert hangs on azure
[ https://issues.apache.org/jira/browse/FLINK-22889?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17401455#comment-17401455 ] Xintong Song commented on FLINK-22889: -- https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=22463=logs=d44f43ce-542c-597d-bf94-b0718c71e5e8=ed165f3f-d0f6-524b-5279-86f8ee7d0e2d=14819 > JdbcExactlyOnceSinkE2eTest.testInsert hangs on azure > > > Key: FLINK-22889 > URL: https://issues.apache.org/jira/browse/FLINK-22889 > Project: Flink > Issue Type: Bug > Components: Connectors / JDBC >Affects Versions: 1.14.0, 1.13.1 >Reporter: Dawid Wysakowicz >Assignee: Roman Khachatryan >Priority: Critical > Labels: pull-request-available, test-stability > Fix For: 1.14.0 > > > https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=18690=logs=ba53eb01-1462-56a3-8e98-0dd97fbcaab5=bfbc6239-57a0-5db0-63f3-41551b4f7d51=16658 -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (FLINK-22198) KafkaTableITCase hang.
[ https://issues.apache.org/jira/browse/FLINK-22198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17401454#comment-17401454 ] Qingsheng Ren commented on FLINK-22198: --- Thanks [~xtsong] for the reminder. I had a discussion with [~lindong] and we checked the log of the latest instance. These logs caught our attention: {code:java} 13:27:11,089 INFO [Log partition=key_full_value_topic_avro-0, dir=/var/lib/kafka/data] Found deletable segments with base offsets [0] due to retention time 60480ms breach (kafka.log.Log) 13:27:11,101 INFO [ProducerStateManager partition=key_full_value_topic_avro-0] Writing producer snapshot at offset 3 (kafka.log.ProducerStateManager) 13:27:11,104 INFO [Log partition=key_full_value_topic_avro-0, dir=/var/lib/kafka/data] Rolled new log segment at offset 3 in 15 ms. (kafka.log.Log) 13:27:11,106 INFO [Log partition=key_full_value_topic_avro-0, dir=/var/lib/kafka/data] Scheduling segments for deletion List(LogSegment(baseOffset=0, size=233, lastModifiedTime=1629293231000, largestTime=1583845931123)) (kafka.log.Log) 13:27:11,107 INFO [Log partition=key_full_value_topic_avro-0, dir=/var/lib/kafka/data] Incrementing log start offset to 3 (kafka.log.Log) {code} Basically a retention time based log deletion was triggered, so messages written to Kafka is deleted, then the test case hanged because it cannot receive expected messages. These logs proves the hypothesis made by [~lindong] that the clock is skewed on producer side. We are still investigating the reason and will update the comment once we have progress. I don't have the permission to change the state of this ticket. [~xtsong] Could you help to change it to {{in-progress}}? Thanks~ > KafkaTableITCase hang. 
> -- > > Key: FLINK-22198 > URL: https://issues.apache.org/jira/browse/FLINK-22198 > Project: Flink > Issue Type: Bug > Components: Connectors / Kafka >Affects Versions: 1.14.0, 1.12.4 >Reporter: Guowei Ma >Assignee: Qingsheng Ren >Priority: Blocker > Labels: pull-request-available, stale-blocker, test-stability > Fix For: 1.14.0 > > > https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=16287=logs=c5f0071e-1851-543e-9a45-9ac140befc32=1fb1a56f-e8b5-5a82-00a0-a2db7757b4f5=6625 > There is no any artifacts. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [flink] flinkbot commented on pull request #16889: [FLINK-16152][doc-zh]Translate "Operator/index"(docs/content.zh/docs/dev/datastream/operators/overview.md) into Chinese
flinkbot commented on pull request #16889: URL: https://github.com/apache/flink/pull/16889#issuecomment-901583304 Thanks a lot for your contribution to the Apache Flink project. I'm the @flinkbot. I help the community to review your pull request. We will use this comment to track the progress of the review. ## Automated Checks Last check on commit d73a6ff8b8eb25198ec374300d96744f36144dad (Thu Aug 19 03:35:56 UTC 2021) ✅no warnings Mention the bot in a comment to re-run the automated checks. ## Review Progress * ❓ 1. The [description] looks good. * ❓ 2. There is [consensus] that the contribution should go into to Flink. * ❓ 3. Needs [attention] from. * ❓ 4. The change fits into the overall [architecture]. * ❓ 5. Overall code [quality] is good. Please see the [Pull Request Review Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full explanation of the review process. The Bot is tracking the review progress through labels. Labels are applied according to the order of the review items. For consensus, approval by a Flink committer of PMC member is required Bot commands The @flinkbot bot supports the following commands: - `@flinkbot approve description` to approve one or more aspects (aspects: `description`, `consensus`, `architecture` and `quality`) - `@flinkbot approve all` to approve all aspects - `@flinkbot approve-until architecture` to approve everything until `architecture` - `@flinkbot attention @username1 [@username2 ..]` to require somebody's attention - `@flinkbot disapprove architecture` to remove an approval you gave earlier -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot commented on pull request #16890: [FLINK-23765][python] Fix the NPE in Python UDTF
flinkbot commented on pull request #16890: URL: https://github.com/apache/flink/pull/16890#issuecomment-901583296 Thanks a lot for your contribution to the Apache Flink project. I'm the @flinkbot. I help the community to review your pull request. We will use this comment to track the progress of the review. ## Automated Checks Last check on commit 05d7697f85fa7b58cf55bc1dc3bcb0b963875f59 (Thu Aug 19 03:35:54 UTC 2021) **Warnings:** * No documentation files were touched! Remember to keep the Flink docs up to date! Mention the bot in a comment to re-run the automated checks. ## Review Progress * ❓ 1. The [description] looks good. * ❓ 2. There is [consensus] that the contribution should go into to Flink. * ❓ 3. Needs [attention] from. * ❓ 4. The change fits into the overall [architecture]. * ❓ 5. Overall code [quality] is good. Please see the [Pull Request Review Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full explanation of the review process. The Bot is tracking the review progress through labels. Labels are applied according to the order of the review items. For consensus, approval by a Flink committer of PMC member is required Bot commands The @flinkbot bot supports the following commands: - `@flinkbot approve description` to approve one or more aspects (aspects: `description`, `consensus`, `architecture` and `quality`) - `@flinkbot approve all` to approve all aspects - `@flinkbot approve-until architecture` to approve everything until `architecture` - `@flinkbot attention @username1 [@username2 ..]` to require somebody's attention - `@flinkbot disapprove architecture` to remove an approval you gave earlier -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot edited a comment on pull request #16887: [FLINK-23755][table] Modify the default value of table.dynamic-table-options.enabled to true
flinkbot edited a comment on pull request #16887: URL: https://github.com/apache/flink/pull/16887#issuecomment-901570019 ## CI report: * 2d7ead01d06725c50cc41af410a20af132dd2635 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=22469) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build - `@flinkbot run azure` re-run the last Azure build -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] HuangXingBo opened a new pull request #16890: [FLINK-23765][python] Fix the NPE in Python UDTF
HuangXingBo opened a new pull request #16890: URL: https://github.com/apache/flink/pull/16890 ## What is the purpose of the change *This pull request will fix the NPE in Python UDTF* ## Brief change log - *Change the logic of emitResult in Python Table Function Operator for facing parts of results have been received* ## Verifying this change This change added tests and can be verified as follows: - *Original Java UT and python IT* ## Does this pull request potentially affect one of the following parts: - Dependencies (does it add or upgrade a dependency): (no) - The public API, i.e., is any changed class annotated with `@Public(Evolving)`: (no) - The serializers: (no) - The runtime per-record code paths (performance sensitive): (no) - Anything that affects deployment or recovery: JobManager (and its components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (no) - The S3 file system connector: (no) ## Documentation - Does this pull request introduce a new feature? (no) - If yes, how is the feature documented? (not applicable) -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] hapihu opened a new pull request #16889: [FLINK-16152][doc-zh]Translate "Operator/index"(docs/content.zh/docs/dev/datastream/operators/overview.md) into Chinese
hapihu opened a new pull request #16889: URL: https://github.com/apache/flink/pull/16889 Translate "Operator/index"(docs/content.zh/docs/dev/datastream/operators/overview.md) into Chinese -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot commented on pull request #16888: [FLINK-23765][python] Fix the NPE in Python UDTF
flinkbot commented on pull request #16888: URL: https://github.com/apache/flink/pull/16888#issuecomment-901582081 Thanks a lot for your contribution to the Apache Flink project. I'm the @flinkbot. I help the community to review your pull request. We will use this comment to track the progress of the review. ## Automated Checks Last check on commit 796d67e230bf26f1f4437eb53be4560ec49ec846 (Thu Aug 19 03:31:43 UTC 2021) **Warnings:** * No documentation files were touched! Remember to keep the Flink docs up to date! Mention the bot in a comment to re-run the automated checks. ## Review Progress * ❓ 1. The [description] looks good. * ❓ 2. There is [consensus] that the contribution should go into to Flink. * ❓ 3. Needs [attention] from. * ❓ 4. The change fits into the overall [architecture]. * ❓ 5. Overall code [quality] is good. Please see the [Pull Request Review Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full explanation of the review process. The Bot is tracking the review progress through labels. Labels are applied according to the order of the review items. For consensus, approval by a Flink committer of PMC member is required Bot commands The @flinkbot bot supports the following commands: - `@flinkbot approve description` to approve one or more aspects (aspects: `description`, `consensus`, `architecture` and `quality`) - `@flinkbot approve all` to approve all aspects - `@flinkbot approve-until architecture` to approve everything until `architecture` - `@flinkbot attention @username1 [@username2 ..]` to require somebody's attention - `@flinkbot disapprove architecture` to remove an approval you gave earlier -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] HuangXingBo opened a new pull request #16888: [FLINK-23765][python] Fix the NPE in Python UDTF
HuangXingBo opened a new pull request #16888: URL: https://github.com/apache/flink/pull/16888 ## What is the purpose of the change *This pull request will fix the NPE in Python UDTF* ## Brief change log - *Change the logic of emitResult in Python Table Function Operator for facing parts of results have been received* ## Verifying this change This change added tests and can be verified as follows: - *Original Java UT and python IT* ## Does this pull request potentially affect one of the following parts: - Dependencies (does it add or upgrade a dependency): (no) - The public API, i.e., is any changed class annotated with `@Public(Evolving)`: (no) - The serializers: (no) - The runtime per-record code paths (performance sensitive): (no) - Anything that affects deployment or recovery: JobManager (and its components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (no) - The S3 file system connector: (no) ## Documentation - Does this pull request introduce a new feature? (no) - If yes, how is the feature documented? (not applicable) -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Updated] (FLINK-23865) Class cast error caused by nested Pojo in legacy outputConversion
[ https://issues.apache.org/jira/browse/FLINK-23865?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] zoucao updated FLINK-23865: --- Description: code: {code:java} Table table = tbEnv.fromValues(DataTypes.ROW( DataTypes.FIELD("innerPojo", DataTypes.ROW(DataTypes.FIELD("c", STRING(, DataTypes.FIELD("b", STRING()), DataTypes.FIELD("a", INT())), Row.of(Row.of("str-c"), "str-b", 1)); DataStream pojoDataStream = tbEnv.toAppendStream(table, Pojo.class); - public static class Pojo{ public InnerPojo innerPojo; public String b; public int a; public Pojo() { } } public static class InnerPojo { public String c; public InnerPojo() { } }{code} error: {code:java} java.lang.ClassCastException: org.apache.flink.table.types.logical.IntType cannot be cast to org.apache.flink.table.types.logical.RowType java.lang.ClassCastException: org.apache.flink.table.types.logical.IntType cannot be cast to org.apache.flink.table.types.logical.RowType at org.apache.flink.table.planner.sinks.TableSinkUtils$$anonfun$1.apply(TableSinkUtils.scala:163) at org.apache.flink.table.planner.sinks.TableSinkUtils$$anonfun$1.apply(TableSinkUtils.scala:155) {code} The fields of PojoTypeInfo is in the alphabet order, such that in `expandPojoTypeToSchema`, 'pojoType' and 'queryLogicalType' should have own index,but now we use the pojo field index to get 'queryLogicalType', this will casue the field type mismatch. 
It should be fixed like : {code:java} val queryIndex = queryLogicalType.getFieldIndex(name) val nestedLogicalType = queryLogicalType.getFields()(queryIndex).getType.asInstanceOf[RowType]{code} was: code: {code:java} Table table = tbEnv.fromValues(DataTypes.ROW( DataTypes.FIELD("innerPojo", DataTypes.ROW(DataTypes.FIELD("c", STRING(, DataTypes.FIELD("b", STRING()), DataTypes.FIELD("a", INT())), Row.of(Row.of("str-c"), "str-b", 1)); DataStream pojoDataStream = tbEnv.toAppendStream(table, Pojo.class); - public static class Pojo{ public InnerPojo innerPojo; public String b; public int a; public Pojo() { } } public static class InnerPojo { public String c; public InnerPojo() { } }{code} error: {code:java} java.lang.ClassCastException: org.apache.flink.table.types.logical.IntType cannot be cast to org.apache.flink.table.types.logical.RowTypejava.lang.ClassCastException: org.apache.flink.table.types.logical.IntType cannot be cast to org.apache.flink.table.types.logical.RowType at org.apache.flink.table.planner.sinks.TableSinkUtils$$anonfun$1.apply(TableSinkUtils.scala:163) at org.apache.flink.table.planner.sinks.TableSinkUtils$$anonfun$1.apply(TableSinkUtils.scala:155) {code} The fields of PojoTypeInfo is in the alphabet order, such that in `expandPojoTypeToSchema`, 'pojoType' and 'queryLogicalType' should have own index,but now we use the pojo field index to get 'queryLogicalType', this will casue the field type mismatch. 
It should be fixed like : {code:java} val queryIndex = queryLogicalType.getFieldIndex(name) val nestedLogicalType = queryLogicalType.getFields()(queryIndex).getType.asInstanceOf[RowType]{code} > Class cast error caused by nested Pojo in legacy outputConversion > - > > Key: FLINK-23865 > URL: https://issues.apache.org/jira/browse/FLINK-23865 > Project: Flink > Issue Type: Bug > Components: Table SQL / Planner >Affects Versions: 1.13.2 >Reporter: zoucao >Priority: Major > > code: > {code:java} > Table table = tbEnv.fromValues(DataTypes.ROW( > DataTypes.FIELD("innerPojo", DataTypes.ROW(DataTypes.FIELD("c", > STRING(, > DataTypes.FIELD("b", STRING()), > DataTypes.FIELD("a", INT())), > Row.of(Row.of("str-c"), "str-b", 1)); > DataStream pojoDataStream = tbEnv.toAppendStream(table, Pojo.class); > - > public static class Pojo{ > public InnerPojo innerPojo; > public String b; > public int a; > public Pojo() { > } > } > public static class InnerPojo { > public String c; > public InnerPojo() { > } > }{code} > error: > {code:java} > java.lang.ClassCastException: org.apache.flink.table.types.logical.IntType > cannot be cast to org.apache.flink.table.types.logical.RowType > java.lang.ClassCastException: org.apache.flink.table.types.logical.IntType > cannot be cast to org.apache.flink.table.types.logical.RowType at > org.apache.flink.table.planner.sinks.TableSinkUtils$$anonfun$1.apply(TableSinkUtils.scala:163) > at > org.apache.flink.table.planner.sinks.TableSinkUtils$$anonfun$1.apply(TableSinkUtils.scala:155) > {code} > The fields of PojoTypeInfo is in the alphabet order, such that in > `expandPojoTypeToSchema`,
[jira] [Created] (FLINK-23865) Class cast error caused by nested Pojo in legacy outputConversion
zoucao created FLINK-23865: -- Summary: Class cast error caused by nested Pojo in legacy outputConversion Key: FLINK-23865 URL: https://issues.apache.org/jira/browse/FLINK-23865 Project: Flink Issue Type: Bug Components: Table SQL / Planner Affects Versions: 1.13.2 Reporter: zoucao code: {code:java} Table table = tbEnv.fromValues(DataTypes.ROW( DataTypes.FIELD("innerPojo", DataTypes.ROW(DataTypes.FIELD("c", STRING(, DataTypes.FIELD("b", STRING()), DataTypes.FIELD("a", INT())), Row.of(Row.of("str-c"), "str-b", 1)); DataStream pojoDataStream = tbEnv.toAppendStream(table, Pojo.class); - public static class Pojo{ public InnerPojo innerPojo; public String b; public int a; public Pojo() { } } public static class InnerPojo { public String c; public InnerPojo() { } }{code} error: {code:java} java.lang.ClassCastException: org.apache.flink.table.types.logical.IntType cannot be cast to org.apache.flink.table.types.logical.RowTypejava.lang.ClassCastException: org.apache.flink.table.types.logical.IntType cannot be cast to org.apache.flink.table.types.logical.RowType at org.apache.flink.table.planner.sinks.TableSinkUtils$$anonfun$1.apply(TableSinkUtils.scala:163) at org.apache.flink.table.planner.sinks.TableSinkUtils$$anonfun$1.apply(TableSinkUtils.scala:155) {code} The fields of PojoTypeInfo is in the alphabet order, such that in `expandPojoTypeToSchema`, 'pojoType' and 'queryLogicalType' should have own index,but now we use the pojo field index to get 'queryLogicalType', this will casue the field type mismatch. It should be fixed like : {code:java} val queryIndex = queryLogicalType.getFieldIndex(name) val nestedLogicalType = queryLogicalType.getFields()(queryIndex).getType.asInstanceOf[RowType]{code} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [flink] zhuzhurk closed pull request #16842: [FLINK-23806][runtime] Avoid StackOverflowException when a large scale job failed to acquire enough slots in time
zhuzhurk closed pull request #16842: URL: https://github.com/apache/flink/pull/16842 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot commented on pull request #16887: [FLINK-23755][table] Modify the default value of table.dynamic-table-options.enabled to true
flinkbot commented on pull request #16887: URL: https://github.com/apache/flink/pull/16887#issuecomment-901570019 ## CI report: * 2d7ead01d06725c50cc41af410a20af132dd2635 UNKNOWN Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build - `@flinkbot run azure` re-run the last Azure build -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot edited a comment on pull request #15898: [FLINK-21439][core] WIP: Adds Exception History for AdaptiveScheduler
flinkbot edited a comment on pull request #15898: URL: https://github.com/apache/flink/pull/15898#issuecomment-838996225 ## CI report: * 521e19eecadf39a226c5b5be4ed5348485656eab Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=21085) * c34ba95f4220031885b7d11ff7c601d3b000c9b1 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=22466) * 903a79eeb0d944922d19124c14c6bd4eb0b3 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=22468) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build - `@flinkbot run azure` re-run the last Azure build -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] SteNicholas commented on pull request #16887: [FLINK-23755][table] Modify the default value of table.dynamic-table-options.enabled to true
SteNicholas commented on pull request #16887: URL: https://github.com/apache/flink/pull/16887#issuecomment-901563667 @wuchong , thanks for the review. I have removed the note for `table.dynamic-table-options.enabled` in `hints.md`. Please take a look. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Commented] (FLINK-23861) flink sql client support dynamic params
[ https://issues.apache.org/jira/browse/FLINK-23861?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17401443#comment-17401443 ] Jark Wu commented on FLINK-23861: - IIUC, your main concern is the performance of SET command because it {{createTableEnvironment}} ? How long will it take? > flink sql client support dynamic params > --- > > Key: FLINK-23861 > URL: https://issues.apache.org/jira/browse/FLINK-23861 > Project: Flink > Issue Type: Improvement > Components: Table SQL / Client >Affects Versions: 1.13.2 >Reporter: zhangbinzaifendou >Priority: Minor > Labels: pull-request-available > Attachments: image-2021-08-18-23-41-13-629.png, > image-2021-08-18-23-42-04-257.png, image-2021-08-18-23-43-04-323.png > > > 1 Every time the set command is executed, the method call process is very > long and a new createTableEnvironment object is created > 2 As a result of the previous discussion in FLINK-22770, I don't think it's a > good habit for users to put quotes around keys and values. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Comment Edited] (FLINK-23861) flink sql client support dynamic params
[ https://issues.apache.org/jira/browse/FLINK-23861?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17401443#comment-17401443 ] Jark Wu edited comment on FLINK-23861 at 8/19/21, 2:30 AM: --- IIUC, your main concern is the performance of SET command because it {{createTableEnvironment}} ? How long will it take from your observation? was (Author: jark): IIUC, your main concern is the performance of SET command because it {{createTableEnvironment}} ? How long will it take? > flink sql client support dynamic params > --- > > Key: FLINK-23861 > URL: https://issues.apache.org/jira/browse/FLINK-23861 > Project: Flink > Issue Type: Improvement > Components: Table SQL / Client >Affects Versions: 1.13.2 >Reporter: zhangbinzaifendou >Priority: Minor > Labels: pull-request-available > Attachments: image-2021-08-18-23-41-13-629.png, > image-2021-08-18-23-42-04-257.png, image-2021-08-18-23-43-04-323.png > > > 1 Every time the set command is executed, the method call process is very > long and a new createTableEnvironment object is created > 2 As a result of the previous discussion in FLINK-22770, I don't think it's a > good habit for users to put quotes around keys and values. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [flink] flinkbot edited a comment on pull request #15898: [FLINK-21439][core] WIP: Adds Exception History for AdaptiveScheduler
flinkbot edited a comment on pull request #15898: URL: https://github.com/apache/flink/pull/15898#issuecomment-838996225 ## CI report: * 521e19eecadf39a226c5b5be4ed5348485656eab Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=21085) * c34ba95f4220031885b7d11ff7c601d3b000c9b1 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=22466) * 903a79eeb0d944922d19124c14c6bd4eb0b3 UNKNOWN Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build - `@flinkbot run azure` re-run the last Azure build -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Commented] (FLINK-23829) SavepointITCase JVM crash on azure
[ https://issues.apache.org/jira/browse/FLINK-23829?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17401442#comment-17401442 ] Yun Gao commented on FLINK-23829: - Ok, I changed the status~ > SavepointITCase JVM crash on azure > -- > > Key: FLINK-23829 > URL: https://issues.apache.org/jira/browse/FLINK-23829 > Project: Flink > Issue Type: Bug > Components: Runtime / Checkpointing >Affects Versions: 1.14.0 >Reporter: Xintong Song >Assignee: Yun Gao >Priority: Blocker > Labels: test-stability > Fix For: 1.14.0 > > > https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=22293=logs=5c8e7682-d68f-54d1-16a2-a09310218a49=86f654fa-ab48-5c1a-25f4-7e7f6afb9bba=5224 > {code} > Aug 16 16:26:11 [ERROR] Failed to execute goal > org.apache.maven.plugins:maven-surefire-plugin:2.22.2:test > (integration-tests) on project flink-tests: There are test failures. > Aug 16 16:26:11 [ERROR] > Aug 16 16:26:11 [ERROR] Please refer to > /__w/1/s/flink-tests/target/surefire-reports for the individual test results. > Aug 16 16:26:11 [ERROR] Please refer to dump files (if any exist) > [date].dump, [date]-jvmRun[N].dump and [date].dumpstream. > Aug 16 16:26:11 [ERROR] ExecutionException The forked VM terminated without > properly saying goodbye. VM crash or System.exit called? 
> Aug 16 16:26:11 [ERROR] Command was /bin/sh -c cd /__w/1/s/flink-tests/target > && /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java -Xms256m -Xmx2048m > -Dmvn.forkNumber=2 -XX:+UseG1GC -jar > /__w/1/s/flink-tests/target/surefire/surefirebooter8870094541887019356.jar > /__w/1/s/flink-tests/target/surefire 2021-08-16T15-45-06_363-jvmRun2 > surefire8582412554358604743tmp surefire_2118489584967019297925tmp > Aug 16 16:26:11 [ERROR] Error occurred in starting fork, check output in log > Aug 16 16:26:11 [ERROR] Process Exit Code: 239 > Aug 16 16:26:11 [ERROR] Crashed tests: > Aug 16 16:26:11 [ERROR] org.apache.flink.test.checkpointing.SavepointITCase > Aug 16 16:26:11 [ERROR] > org.apache.maven.surefire.booter.SurefireBooterForkException: > ExecutionException The forked VM terminated without properly saying goodbye. > VM crash or System.exit called? > Aug 16 16:26:11 [ERROR] Command was /bin/sh -c cd /__w/1/s/flink-tests/target > && /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java -Xms256m -Xmx2048m > -Dmvn.forkNumber=2 -XX:+UseG1GC -jar > /__w/1/s/flink-tests/target/surefire/surefirebooter8870094541887019356.jar > /__w/1/s/flink-tests/target/surefire 2021-08-16T15-45-06_363-jvmRun2 > surefire8582412554358604743tmp surefire_2118489584967019297925tmp > Aug 16 16:26:11 [ERROR] Error occurred in starting fork, check output in log > Aug 16 16:26:11 [ERROR] Process Exit Code: 239 > Aug 16 16:26:11 [ERROR] Crashed tests: > Aug 16 16:26:11 [ERROR] org.apache.flink.test.checkpointing.SavepointITCase > Aug 16 16:26:11 [ERROR] at > org.apache.maven.plugin.surefire.booterclient.ForkStarter.awaitResultsDone(ForkStarter.java:510) > Aug 16 16:26:11 [ERROR] at > org.apache.maven.plugin.surefire.booterclient.ForkStarter.runSuitesForkPerTestSet(ForkStarter.java:457) > Aug 16 16:26:11 [ERROR] at > org.apache.maven.plugin.surefire.booterclient.ForkStarter.run(ForkStarter.java:298) > Aug 16 16:26:11 [ERROR] at > 
org.apache.maven.plugin.surefire.booterclient.ForkStarter.run(ForkStarter.java:246) > Aug 16 16:26:11 [ERROR] at > org.apache.maven.plugin.surefire.AbstractSurefireMojo.executeProvider(AbstractSurefireMojo.java:1183) > Aug 16 16:26:11 [ERROR] at > org.apache.maven.plugin.surefire.AbstractSurefireMojo.executeAfterPreconditionsChecked(AbstractSurefireMojo.java:1011) > Aug 16 16:26:11 [ERROR] at > org.apache.maven.plugin.surefire.AbstractSurefireMojo.execute(AbstractSurefireMojo.java:857) > Aug 16 16:26:11 [ERROR] at > org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:132) > Aug 16 16:26:11 [ERROR] at > org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:208) > Aug 16 16:26:11 [ERROR] at > org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:153) > Aug 16 16:26:11 [ERROR] at > org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:145) > Aug 16 16:26:11 [ERROR] at > org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:116) > Aug 16 16:26:11 [ERROR] at > org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:80) > Aug 16 16:26:11 [ERROR] at > org.apache.maven.lifecycle.internal.builder.singlethreaded.SingleThreadedBuilder.build(SingleThreadedBuilder.java:51) > Aug 16 16:26:11 [ERROR] at > org.apache.maven.lifecycle.internal.LifecycleStarter.execute(LifecycleStarter.java:120) > Aug 16 16:26:11 [ERROR] at >
[GitHub] [flink] flinkbot commented on pull request #16887: [FLINK-23755][table] Modify the default value of table.dynamic-table-options.enabled to true
flinkbot commented on pull request #16887: URL: https://github.com/apache/flink/pull/16887#issuecomment-901559968 Thanks a lot for your contribution to the Apache Flink project. I'm the @flinkbot. I help the community to review your pull request. We will use this comment to track the progress of the review. ## Automated Checks Last check on commit ec75477331d4ad0be1a111841d583d2f85c63ae1 (Thu Aug 19 02:27:43 UTC 2021) **Warnings:** * **This pull request references an unassigned [Jira ticket](https://issues.apache.org/jira/browse/FLINK-23755).** According to the [code contribution guide](https://flink.apache.org/contributing/contribute-code.html), tickets need to be assigned before starting with the implementation work. Mention the bot in a comment to re-run the automated checks. ## Review Progress * ❓ 1. The [description] looks good. * ❓ 2. There is [consensus] that the contribution should go into Flink. * ❓ 3. Needs [attention] from. * ❓ 4. The change fits into the overall [architecture]. * ❓ 5. Overall code [quality] is good. Please see the [Pull Request Review Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full explanation of the review process. The Bot is tracking the review progress through labels. Labels are applied according to the order of the review items. For consensus, approval by a Flink committer or PMC member is required Bot commands The @flinkbot bot supports the following commands: - `@flinkbot approve description` to approve one or more aspects (aspects: `description`, `consensus`, `architecture` and `quality`) - `@flinkbot approve all` to approve all aspects - `@flinkbot approve-until architecture` to approve everything until `architecture` - `@flinkbot attention @username1 [@username2 ..]` to require somebody's attention - `@flinkbot disapprove architecture` to remove an approval you gave earlier -- This is an automated message from the Apache Git Service. 
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Commented] (FLINK-23829) SavepointITCase JVM crash on azure
[ https://issues.apache.org/jira/browse/FLINK-23829?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17401438#comment-17401438 ] Xintong Song commented on FLINK-23829: -- Thanks, [~gaoyunhaii]. BTW, could you please move the ticket to in-progress? > SavepointITCase JVM crash on azure > -- > > Key: FLINK-23829 > URL: https://issues.apache.org/jira/browse/FLINK-23829 > Project: Flink > Issue Type: Bug > Components: Runtime / Checkpointing >Affects Versions: 1.14.0 >Reporter: Xintong Song >Assignee: Yun Gao >Priority: Blocker > Labels: test-stability > Fix For: 1.14.0 > > > https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=22293=logs=5c8e7682-d68f-54d1-16a2-a09310218a49=86f654fa-ab48-5c1a-25f4-7e7f6afb9bba=5224 > {code} > Aug 16 16:26:11 [ERROR] Failed to execute goal > org.apache.maven.plugins:maven-surefire-plugin:2.22.2:test > (integration-tests) on project flink-tests: There are test failures. > Aug 16 16:26:11 [ERROR] > Aug 16 16:26:11 [ERROR] Please refer to > /__w/1/s/flink-tests/target/surefire-reports for the individual test results. > Aug 16 16:26:11 [ERROR] Please refer to dump files (if any exist) > [date].dump, [date]-jvmRun[N].dump and [date].dumpstream. > Aug 16 16:26:11 [ERROR] ExecutionException The forked VM terminated without > properly saying goodbye. VM crash or System.exit called? 
> Aug 16 16:26:11 [ERROR] Command was /bin/sh -c cd /__w/1/s/flink-tests/target > && /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java -Xms256m -Xmx2048m > -Dmvn.forkNumber=2 -XX:+UseG1GC -jar > /__w/1/s/flink-tests/target/surefire/surefirebooter8870094541887019356.jar > /__w/1/s/flink-tests/target/surefire 2021-08-16T15-45-06_363-jvmRun2 > surefire8582412554358604743tmp surefire_2118489584967019297925tmp > Aug 16 16:26:11 [ERROR] Error occurred in starting fork, check output in log > Aug 16 16:26:11 [ERROR] Process Exit Code: 239 > Aug 16 16:26:11 [ERROR] Crashed tests: > Aug 16 16:26:11 [ERROR] org.apache.flink.test.checkpointing.SavepointITCase > Aug 16 16:26:11 [ERROR] > org.apache.maven.surefire.booter.SurefireBooterForkException: > ExecutionException The forked VM terminated without properly saying goodbye. > VM crash or System.exit called? > Aug 16 16:26:11 [ERROR] Command was /bin/sh -c cd /__w/1/s/flink-tests/target > && /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java -Xms256m -Xmx2048m > -Dmvn.forkNumber=2 -XX:+UseG1GC -jar > /__w/1/s/flink-tests/target/surefire/surefirebooter8870094541887019356.jar > /__w/1/s/flink-tests/target/surefire 2021-08-16T15-45-06_363-jvmRun2 > surefire8582412554358604743tmp surefire_2118489584967019297925tmp > Aug 16 16:26:11 [ERROR] Error occurred in starting fork, check output in log > Aug 16 16:26:11 [ERROR] Process Exit Code: 239 > Aug 16 16:26:11 [ERROR] Crashed tests: > Aug 16 16:26:11 [ERROR] org.apache.flink.test.checkpointing.SavepointITCase > Aug 16 16:26:11 [ERROR] at > org.apache.maven.plugin.surefire.booterclient.ForkStarter.awaitResultsDone(ForkStarter.java:510) > Aug 16 16:26:11 [ERROR] at > org.apache.maven.plugin.surefire.booterclient.ForkStarter.runSuitesForkPerTestSet(ForkStarter.java:457) > Aug 16 16:26:11 [ERROR] at > org.apache.maven.plugin.surefire.booterclient.ForkStarter.run(ForkStarter.java:298) > Aug 16 16:26:11 [ERROR] at > 
org.apache.maven.plugin.surefire.booterclient.ForkStarter.run(ForkStarter.java:246) > Aug 16 16:26:11 [ERROR] at > org.apache.maven.plugin.surefire.AbstractSurefireMojo.executeProvider(AbstractSurefireMojo.java:1183) > Aug 16 16:26:11 [ERROR] at > org.apache.maven.plugin.surefire.AbstractSurefireMojo.executeAfterPreconditionsChecked(AbstractSurefireMojo.java:1011) > Aug 16 16:26:11 [ERROR] at > org.apache.maven.plugin.surefire.AbstractSurefireMojo.execute(AbstractSurefireMojo.java:857) > Aug 16 16:26:11 [ERROR] at > org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:132) > Aug 16 16:26:11 [ERROR] at > org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:208) > Aug 16 16:26:11 [ERROR] at > org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:153) > Aug 16 16:26:11 [ERROR] at > org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:145) > Aug 16 16:26:11 [ERROR] at > org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:116) > Aug 16 16:26:11 [ERROR] at > org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:80) > Aug 16 16:26:11 [ERROR] at > org.apache.maven.lifecycle.internal.builder.singlethreaded.SingleThreadedBuilder.build(SingleThreadedBuilder.java:51) > Aug 16 16:26:11 [ERROR] at >
[GitHub] [flink] wuchong commented on a change in pull request #16887: [FLINK-23755][table] Modify the default value of table.dynamic-table-options.enabled to true
wuchong commented on a change in pull request #16887: URL: https://github.com/apache/flink/pull/16887#discussion_r691727770 ## File path: docs/content/docs/dev/table/sql/queries/hints.md ## @@ -50,8 +50,7 @@ these options can be specified flexibly in per-table scope within each query. Thus it is very suitable to use for the ad-hoc queries in interactive terminal, for example, in the SQL-CLI, you can specify to ignore the parse error for a CSV source just by adding a dynamic option `/*+ OPTIONS('csv.ignore-parse-errors'='true') */`. -Note: Dynamic table options default is forbidden to use because it may change the semantics of the query. -You need to set the config option `table.dynamic-table-options.enabled` to be `true` explicitly (default is false), +Note: Dynamic table options default is allowed, i.e. set the config option `table.dynamic-table-options.enabled` to be `true`, Review comment: It is enabled by default, so I think we don't need this note now. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
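For context on the hint being discussed in this diff, the `OPTIONS` hint attaches per-query table options to a source. A minimal sketch (the table name `csv_source` is a hypothetical example, not taken from the PR):

```sql
-- With table.dynamic-table-options.enabled = true (the new default proposed
-- in FLINK-23755), parse errors of a CSV source can be ignored ad hoc:
SELECT * FROM csv_source /*+ OPTIONS('csv.ignore-parse-errors'='true') */;
```

The hint only overrides options for this one query; the catalog definition of the table is unchanged.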
[jira] [Commented] (FLINK-23556) SQLClientSchemaRegistryITCase fails with " Subject ... not found"
[ https://issues.apache.org/jira/browse/FLINK-23556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17401437#comment-17401437 ] Xintong Song commented on FLINK-23556: -- Sounds good, thanks. BTW, could you move the ticket to in-progress? > SQLClientSchemaRegistryITCase fails with " Subject ... not found" > - > > Key: FLINK-23556 > URL: https://issues.apache.org/jira/browse/FLINK-23556 > Project: Flink > Issue Type: Bug > Components: Table SQL / Ecosystem >Affects Versions: 1.14.0 >Reporter: Dawid Wysakowicz >Assignee: Biao Geng >Priority: Blocker > Labels: pull-request-available, stale-blocker, test-stability > Fix For: 1.14.0 > > > https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=21129=logs=91bf6583-3fb2-592f-e4d4-d79d79c3230a=cc5499f8-bdde-5157-0d76-b6528ecd808e=25337 > {code} > Jul 28 23:37:48 [ERROR] Tests run: 2, Failures: 0, Errors: 1, Skipped: 0, > Time elapsed: 209.44 s <<< FAILURE! - in > org.apache.flink.tests.util.kafka.SQLClientSchemaRegistryITCase > Jul 28 23:37:48 [ERROR] > testWriting(org.apache.flink.tests.util.kafka.SQLClientSchemaRegistryITCase) > Time elapsed: 81.146 s <<< ERROR! 
> Jul 28 23:37:48 > io.confluent.kafka.schemaregistry.client.rest.exceptions.RestClientException: > Subject 'test-user-behavior-d18d4af2-3830-4620-9993-340c13f50cc2-value' not > found.; error code: 40401 > Jul 28 23:37:48 at > io.confluent.kafka.schemaregistry.client.rest.RestService.sendHttpRequest(RestService.java:292) > Jul 28 23:37:48 at > io.confluent.kafka.schemaregistry.client.rest.RestService.httpRequest(RestService.java:352) > Jul 28 23:37:48 at > io.confluent.kafka.schemaregistry.client.rest.RestService.getAllVersions(RestService.java:769) > Jul 28 23:37:48 at > io.confluent.kafka.schemaregistry.client.rest.RestService.getAllVersions(RestService.java:760) > Jul 28 23:37:48 at > io.confluent.kafka.schemaregistry.client.CachedSchemaRegistryClient.getAllVersions(CachedSchemaRegistryClient.java:364) > Jul 28 23:37:48 at > org.apache.flink.tests.util.kafka.SQLClientSchemaRegistryITCase.getAllVersions(SQLClientSchemaRegistryITCase.java:230) > Jul 28 23:37:48 at > org.apache.flink.tests.util.kafka.SQLClientSchemaRegistryITCase.testWriting(SQLClientSchemaRegistryITCase.java:195) > Jul 28 23:37:48 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native > Method) > Jul 28 23:37:48 at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > Jul 28 23:37:48 at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > Jul 28 23:37:48 at java.lang.reflect.Method.invoke(Method.java:498) > Jul 28 23:37:48 at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) > Jul 28 23:37:48 at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > Jul 28 23:37:48 at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) > Jul 28 23:37:48 at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > Jul 28 23:37:48 at > 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) > Jul 28 23:37:48 at > org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) > Jul 28 23:37:48 at > java.util.concurrent.FutureTask.run(FutureTask.java:266) > Jul 28 23:37:48 at java.lang.Thread.run(Thread.java:748) > Jul 28 23:37:48 > {code} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (FLINK-23743) Test the fine-grained resource management
[ https://issues.apache.org/jira/browse/FLINK-23743?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yangze Guo updated FLINK-23743: --- Parent: FLINK-21924 Issue Type: Sub-task (was: Improvement) > Test the fine-grained resource management > - > > Key: FLINK-23743 > URL: https://issues.apache.org/jira/browse/FLINK-23743 > Project: Flink > Issue Type: Sub-task > Components: Runtime / Coordination >Reporter: Yangze Guo >Priority: Blocker > Labels: release-testing > Fix For: 1.14.0 > > > The newly introduced fine-grained resource management[1] allows you to > control the resource consumption of your workload in finer granularity. The > feature documentation with the current set of limitations can be found here > [2]. > In order to test this new feature I recommend to follow the documentation and > to try it out wrt the stated limitations. Everything which is not explicitly > contained in the set of limitations should work. > [1] https://issues.apache.org/jira/browse/FLINK-21924 > [2] https://github.com/apache/flink/pull/16561 -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (FLINK-23493) python tests hang on Azure
[ https://issues.apache.org/jira/browse/FLINK-23493?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17401436#comment-17401436 ] Xintong Song commented on FLINK-23493: -- [~hxbks2ks], please move the ticket to in-progress, since you have started investigating it. > python tests hang on Azure > -- > > Key: FLINK-23493 > URL: https://issues.apache.org/jira/browse/FLINK-23493 > Project: Flink > Issue Type: Bug > Components: API / Python >Affects Versions: 1.14.0, 1.13.1, 1.12.4 >Reporter: Dawid Wysakowicz >Assignee: Huang Xingbo >Priority: Blocker > Labels: test-stability > Fix For: 1.14.0 > > > https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=20898=logs=821b528f-1eed-5598-a3b4-7f748b13f261=4fad9527-b9a5-5015-1b70-8356e5c91490=22829 -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (FLINK-22198) KafkaTableITCase hang.
[ https://issues.apache.org/jira/browse/FLINK-22198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17401434#comment-17401434 ] Xintong Song commented on FLINK-22198: -- [~renqs], BTW, please move the ticket to in-progress as we're investigating it. > KafkaTableITCase hang. > -- > > Key: FLINK-22198 > URL: https://issues.apache.org/jira/browse/FLINK-22198 > Project: Flink > Issue Type: Bug > Components: Connectors / Kafka >Affects Versions: 1.14.0, 1.12.4 >Reporter: Guowei Ma >Assignee: Qingsheng Ren >Priority: Blocker > Labels: pull-request-available, stale-blocker, test-stability > Fix For: 1.14.0 > > > https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=16287=logs=c5f0071e-1851-543e-9a45-9ac140befc32=1fb1a56f-e8b5-5a82-00a0-a2db7757b4f5=6625 > There is no any artifacts. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (FLINK-23743) Test the fine-grained resource management
[ https://issues.apache.org/jira/browse/FLINK-23743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17401435#comment-17401435 ] Yangze Guo commented on FLINK-23743: [~joemoe] I'm fine with it. I just saw the previous testing issues, e.g. FLINK-22135, are isolated. > Test the fine-grained resource management > - > > Key: FLINK-23743 > URL: https://issues.apache.org/jira/browse/FLINK-23743 > Project: Flink > Issue Type: Improvement > Components: Runtime / Coordination >Reporter: Yangze Guo >Priority: Blocker > Labels: release-testing > Fix For: 1.14.0 > > > The newly introduced fine-grained resource management[1] allows you to > control the resource consumption of your workload in finer granularity. The > feature documentation with the current set of limitations can be found here > [2]. > In order to test this new feature I recommend to follow the documentation and > to try it out wrt the stated limitations. Everything which is not explicitly > contained in the set of limitations should work. > [1] https://issues.apache.org/jira/browse/FLINK-21924 > [2] https://github.com/apache/flink/pull/16561 -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (FLINK-22198) KafkaTableITCase hang.
[ https://issues.apache.org/jira/browse/FLINK-22198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17401432#comment-17401432 ] Xintong Song commented on FLINK-22198: -- [~renqs] [~lindong] I think the latest instance occurred after the log changes being merged. Could you help take another look? > KafkaTableITCase hang. > -- > > Key: FLINK-22198 > URL: https://issues.apache.org/jira/browse/FLINK-22198 > Project: Flink > Issue Type: Bug > Components: Connectors / Kafka >Affects Versions: 1.14.0, 1.12.4 >Reporter: Guowei Ma >Assignee: Qingsheng Ren >Priority: Blocker > Labels: pull-request-available, stale-blocker, test-stability > Fix For: 1.14.0 > > > https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=16287=logs=c5f0071e-1851-543e-9a45-9ac140befc32=1fb1a56f-e8b5-5a82-00a0-a2db7757b4f5=6625 > There is no any artifacts. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (FLINK-22198) KafkaTableITCase hang.
[ https://issues.apache.org/jira/browse/FLINK-22198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17401431#comment-17401431 ] Xintong Song commented on FLINK-22198: -- https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=22447=logs=c5f0071e-1851-543e-9a45-9ac140befc32=15a22db7-8faa-5b34-3920-d33c9f0ca23c=7121 > KafkaTableITCase hang. > -- > > Key: FLINK-22198 > URL: https://issues.apache.org/jira/browse/FLINK-22198 > Project: Flink > Issue Type: Bug > Components: Connectors / Kafka >Affects Versions: 1.14.0, 1.12.4 >Reporter: Guowei Ma >Assignee: Qingsheng Ren >Priority: Blocker > Labels: pull-request-available, stale-blocker, test-stability > Fix For: 1.14.0 > > > https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=16287=logs=c5f0071e-1851-543e-9a45-9ac140befc32=1fb1a56f-e8b5-5a82-00a0-a2db7757b4f5=6625 > There is no any artifacts. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (FLINK-23755) Modify the default value of table.dynamic-table-options.enabled to true
[ https://issues.apache.org/jira/browse/FLINK-23755?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated FLINK-23755: --- Labels: pull-request-available (was: ) > Modify the default value of table.dynamic-table-options.enabled to true > --- > > Key: FLINK-23755 > URL: https://issues.apache.org/jira/browse/FLINK-23755 > Project: Flink > Issue Type: Improvement > Components: Table SQL / API >Reporter: Jingsong Lee >Priority: Major > Labels: pull-request-available > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [flink] SteNicholas opened a new pull request #16887: [FLINK-23755][table] Modify the default value of table.dynamic-table-options.enabled to true
SteNicholas opened a new pull request #16887: URL: https://github.com/apache/flink/pull/16887 ## What is the purpose of the change *The table option `table.dynamic-table-options.enabled` is used to enable the `OPTIONS` hint used to specify table options dynamically, whose default value is false. It's recommended to modify the default value of `table.dynamic-table-options.enabled` to true.* ## Brief change log - *Modify the default value of `TableConfigOptions.TABLE_DYNAMIC_TABLE_OPTIONS_ENABLED` to true.* ## Verifying this change ## Does this pull request potentially affect one of the following parts: - Dependencies (does it add or upgrade a dependency): (yes / **no**) - The public API, i.e., is any changed class annotated with `@Public(Evolving)`: (yes / **no**) - The serializers: (yes / **no** / don't know) - The runtime per-record code paths (performance sensitive): (yes / **no** / don't know) - Anything that affects deployment or recovery: JobManager (and its components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (yes / **no** / don't know) - The S3 file system connector: (yes / **no** / don't know) ## Documentation - Does this pull request introduce a new feature? (yes / **no**) - If yes, how is the feature documented? (**not applicable** / docs / JavaDocs / not documented) -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Commented] (FLINK-23727) Skip null values in SimpleStringSchema#deserialize
[ https://issues.apache.org/jira/browse/FLINK-23727?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17401428#comment-17401428 ] Paul Lin commented on FLINK-23727: -- [~paul8263] Thanks for your contribution, but the PR involves serialization, which is beyond the scope of this issue. I'm not sure if it's valid to pass nulls to a serializer, because stream elements should not be null. Plus, I've prepared a PR for this issue, and am just waiting for the issue to be assigned, as the contribution guide requires. > Skip null values in SimpleStringSchema#deserialize > -- > > Key: FLINK-23727 > URL: https://issues.apache.org/jira/browse/FLINK-23727 > Project: Flink > Issue Type: Bug > Components: Connectors / Common >Affects Versions: 1.13.2 >Reporter: Paul Lin >Priority: Major > Labels: pull-request-available > > In Kafka use cases, it's valid to send a message with a key and a null > payload as a tombstone. But SimpleStringSchema, which is frequently used as a > message value deserializer, throws NPE when the input value is null. We > should tolerate null values in SimpleStringSchema (simply return null to skip > the records), otherwise users need to implement a custom one. -- This message was sent by Atlassian Jira (v8.3.4#803005)
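The behavior proposed in the issue above can be sketched in plain Java (this is an illustrative stand-in, not the actual Flink `SimpleStringSchema` class): return null for a null payload, such as a Kafka tombstone record, instead of letting `new String(null)` throw a NullPointerException.

```java
import java.nio.charset.StandardCharsets;

/**
 * Hypothetical sketch of the null-tolerant deserialization proposed in
 * FLINK-23727. A Kafka tombstone arrives as a message with a key and a
 * null value; the deserializer should skip it rather than crash.
 */
public class NullTolerantStringSchema {

    /** Returns null for a null payload (tombstone); otherwise decodes UTF-8. */
    public static String deserialize(byte[] message) {
        if (message == null) {
            return null; // tombstone record: skip instead of throwing NPE
        }
        return new String(message, StandardCharsets.UTF_8);
    }
}
```

A sink or downstream operator would then filter out the null results, so tombstones flow through without a custom schema implementation.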
[GitHub] [flink] RocMarshal commented on pull request #16852: [FLINK-13636][docs-zh]Translate the "Flink DataStream API Programming Guide" page into Chinese
RocMarshal commented on pull request #16852: URL: https://github.com/apache/flink/pull/16852#issuecomment-901551930 Thanks @hapihu for the contribution and @wuchong for the trust. Please let me check it step by step. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Commented] (FLINK-22387) UpsertKafkaTableITCase hangs when setting up kafka
[ https://issues.apache.org/jira/browse/FLINK-22387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17401427#comment-17401427 ] Xintong Song commented on FLINK-22387: -- https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=22431=logs=c5f0071e-1851-543e-9a45-9ac140befc32=15a22db7-8faa-5b34-3920-d33c9f0ca23c=7245 > UpsertKafkaTableITCase hangs when setting up kafka > -- > > Key: FLINK-22387 > URL: https://issues.apache.org/jira/browse/FLINK-22387 > Project: Flink > Issue Type: Bug > Components: Connectors / Kafka, Table SQL / Ecosystem >Affects Versions: 1.14.0, 1.13.1, 1.12.4 >Reporter: Dawid Wysakowicz >Assignee: Shengkai Fang >Priority: Blocker > Labels: pull-request-available, test-stability > Fix For: 1.14.0, 1.12.6, 1.13.3 > > > https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=16901=logs=c5f0071e-1851-543e-9a45-9ac140befc32=1fb1a56f-e8b5-5a82-00a0-a2db7757b4f5=6932 > {code} > 2021-04-20T20:01:32.2276988Z Apr 20 20:01:32 "main" #1 prio=5 os_prio=0 > tid=0x7fe87400b000 nid=0x4028 runnable [0x7fe87df22000] > 2021-04-20T20:01:32.2277666Z Apr 20 20:01:32java.lang.Thread.State: > RUNNABLE > 2021-04-20T20:01:32.2278338Z Apr 20 20:01:32 at > org.testcontainers.shaded.okio.Buffer.getByte(Buffer.java:312) > 2021-04-20T20:01:32.2279325Z Apr 20 20:01:32 at > org.testcontainers.shaded.okio.RealBufferedSource.readHexadecimalUnsignedLong(RealBufferedSource.java:310) > 2021-04-20T20:01:32.2280656Z Apr 20 20:01:32 at > org.testcontainers.shaded.okhttp3.internal.http1.Http1ExchangeCodec$ChunkedSource.readChunkSize(Http1ExchangeCodec.java:492) > 2021-04-20T20:01:32.2281603Z Apr 20 20:01:32 at > org.testcontainers.shaded.okhttp3.internal.http1.Http1ExchangeCodec$ChunkedSource.read(Http1ExchangeCodec.java:471) > 2021-04-20T20:01:32.2282163Z Apr 20 20:01:32 at > org.testcontainers.shaded.okhttp3.internal.Util.skipAll(Util.java:204) > 2021-04-20T20:01:32.2282870Z Apr 20 20:01:32 at > 
org.testcontainers.shaded.okhttp3.internal.Util.discard(Util.java:186) > 2021-04-20T20:01:32.2283494Z Apr 20 20:01:32 at > org.testcontainers.shaded.okhttp3.internal.http1.Http1ExchangeCodec$ChunkedSource.close(Http1ExchangeCodec.java:511) > 2021-04-20T20:01:32.2284460Z Apr 20 20:01:32 at > org.testcontainers.shaded.okio.ForwardingSource.close(ForwardingSource.java:43) > 2021-04-20T20:01:32.2285183Z Apr 20 20:01:32 at > org.testcontainers.shaded.okhttp3.internal.connection.Exchange$ResponseBodySource.close(Exchange.java:313) > 2021-04-20T20:01:32.2285756Z Apr 20 20:01:32 at > org.testcontainers.shaded.okio.RealBufferedSource.close(RealBufferedSource.java:476) > 2021-04-20T20:01:32.2286287Z Apr 20 20:01:32 at > org.testcontainers.shaded.okhttp3.internal.Util.closeQuietly(Util.java:139) > 2021-04-20T20:01:32.2286795Z Apr 20 20:01:32 at > org.testcontainers.shaded.okhttp3.ResponseBody.close(ResponseBody.java:192) > 2021-04-20T20:01:32.2287270Z Apr 20 20:01:32 at > org.testcontainers.shaded.okhttp3.Response.close(Response.java:290) > 2021-04-20T20:01:32.2287913Z Apr 20 20:01:32 at > org.testcontainers.shaded.com.github.dockerjava.okhttp.OkDockerHttpClient$OkResponse.close(OkDockerHttpClient.java:285) > 2021-04-20T20:01:32.2288606Z Apr 20 20:01:32 at > org.testcontainers.shaded.com.github.dockerjava.core.DefaultInvocationBuilder.lambda$null$0(DefaultInvocationBuilder.java:272) > 2021-04-20T20:01:32.2289295Z Apr 20 20:01:32 at > org.testcontainers.shaded.com.github.dockerjava.core.DefaultInvocationBuilder$$Lambda$340/2058508175.close(Unknown > Source) > 2021-04-20T20:01:32.2289886Z Apr 20 20:01:32 at > com.github.dockerjava.api.async.ResultCallbackTemplate.close(ResultCallbackTemplate.java:77) > 2021-04-20T20:01:32.2290567Z Apr 20 20:01:32 at > org.testcontainers.utility.ResourceReaper.start(ResourceReaper.java:202) > 2021-04-20T20:01:32.2291051Z Apr 20 20:01:32 at > org.testcontainers.DockerClientFactory.client(DockerClientFactory.java:205) > 
2021-04-20T20:01:32.2291879Z Apr 20 20:01:32 - locked <0xe9cd50f8> > (a [Ljava.lang.Object;) > 2021-04-20T20:01:32.2292313Z Apr 20 20:01:32 at > org.testcontainers.LazyDockerClient.getDockerClient(LazyDockerClient.java:14) > 2021-04-20T20:01:32.2292870Z Apr 20 20:01:32 at > org.testcontainers.LazyDockerClient.authConfig(LazyDockerClient.java:12) > 2021-04-20T20:01:32.2293383Z Apr 20 20:01:32 at > org.testcontainers.containers.GenericContainer.start(GenericContainer.java:310) > 2021-04-20T20:01:32.2293890Z Apr 20 20:01:32 at > org.testcontainers.containers.GenericContainer.starting(GenericContainer.java:1029) > 2021-04-20T20:01:32.2294578Z Apr 20 20:01:32 at >
[GitHub] [flink] bytesandwich commented on pull request #15898: [FLINK-21439][core] WIP: Adds Exception History for AdaptiveScheduler
bytesandwich commented on pull request #15898: URL: https://github.com/apache/flink/pull/15898#issuecomment-901549266 Hi @XComp I just switched jobs so I was out for a bit. I see how you envision the test and I implemented it that way. PTAL! -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot edited a comment on pull request #16886: [FLINK-23727][core] Skip null values in SimpleStringSchema#deserialize
flinkbot edited a comment on pull request #16886: URL: https://github.com/apache/flink/pull/16886#issuecomment-901539563 ## CI report: * dcf915f4bd81ce935f935792de2effd0e8ffe634 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=22467) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build - `@flinkbot run azure` re-run the last Azure build -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] camilesing commented on pull request #16823: [FLINK-23845][docs]improve PushGatewayReporter config:deleteOnShutdown de…
camilesing commented on pull request #16823: URL: https://github.com/apache/flink/pull/16823#issuecomment-901540056 @tsreaper hi, can you review it? thx -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] camilesing edited a comment on pull request #16629: [FLINK-23847][connectors][kafka] improve error message when valueDeseriali…
camilesing edited a comment on pull request #16629: URL: https://github.com/apache/flink/pull/16629#issuecomment-901539082 @tsreaper hi, can you review it? thx -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot commented on pull request #16886: [FLINK-23727][core] Skip null values in SimpleStringSchema#deserialize
flinkbot commented on pull request #16886: URL: https://github.com/apache/flink/pull/16886#issuecomment-901539563 ## CI report: * dcf915f4bd81ce935f935792de2effd0e8ffe634 UNKNOWN Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build - `@flinkbot run azure` re-run the last Azure build -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] camilesing commented on pull request #16629: [FLINK-23847][connectors][kafka] improve error message when valueDeseriali…
camilesing commented on pull request #16629: URL: https://github.com/apache/flink/pull/16629#issuecomment-901539082 @tsreape hi, can you review it? thx -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot edited a comment on pull request #15898: [FLINK-21439][core] WIP: Adds Exception History for AdaptiveScheduler
flinkbot edited a comment on pull request #15898: URL: https://github.com/apache/flink/pull/15898#issuecomment-838996225 ## CI report: * 521e19eecadf39a226c5b5be4ed5348485656eab Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=21085) * c34ba95f4220031885b7d11ff7c601d3b000c9b1 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=22466) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build - `@flinkbot run azure` re-run the last Azure build -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot edited a comment on pull request #15898: [FLINK-21439][core] WIP: Adds Exception History for AdaptiveScheduler
flinkbot edited a comment on pull request #15898: URL: https://github.com/apache/flink/pull/15898#issuecomment-838996225 ## CI report: * 521e19eecadf39a226c5b5be4ed5348485656eab Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=21085) * c34ba95f4220031885b7d11ff7c601d3b000c9b1 UNKNOWN
[GitHub] [flink] flinkbot commented on pull request #16886: [FLINK-23727][core] Skip null values in SimpleStringSchema#deserialize
flinkbot commented on pull request #16886: URL: https://github.com/apache/flink/pull/16886#issuecomment-901524380 Thanks a lot for your contribution to the Apache Flink project. I'm the @flinkbot. I help the community to review your pull request. We will use this comment to track the progress of the review. ## Automated Checks Last check on commit dcf915f4bd81ce935f935792de2effd0e8ffe634 (Thu Aug 19 00:47:44 UTC 2021) **Warnings:** * No documentation files were touched! Remember to keep the Flink docs up to date! * **This pull request references an unassigned [Jira ticket](https://issues.apache.org/jira/browse/FLINK-23727).** According to the [code contribution guide](https://flink.apache.org/contributing/contribute-code.html), tickets need to be assigned before starting with the implementation work. Mention the bot in a comment to re-run the automated checks. ## Review Progress * ❓ 1. The [description] looks good. * ❓ 2. There is [consensus] that the contribution should go into Flink. * ❓ 3. Needs [attention] from. * ❓ 4. The change fits into the overall [architecture]. * ❓ 5. Overall code [quality] is good. Please see the [Pull Request Review Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full explanation of the review process. The Bot is tracking the review progress through labels. Labels are applied according to the order of the review items. 
For consensus, approval by a Flink committer or PMC member is required Bot commands The @flinkbot bot supports the following commands: - `@flinkbot approve description` to approve one or more aspects (aspects: `description`, `consensus`, `architecture` and `quality`) - `@flinkbot approve all` to approve all aspects - `@flinkbot approve-until architecture` to approve everything until `architecture` - `@flinkbot attention @username1 [@username2 ..]` to require somebody's attention - `@flinkbot disapprove architecture` to remove an approval you gave earlier
[jira] [Updated] (FLINK-23727) Skip null values in SimpleStringSchema#deserialize
[ https://issues.apache.org/jira/browse/FLINK-23727?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated FLINK-23727: --- Labels: pull-request-available (was: ) > Skip null values in SimpleStringSchema#deserialize > -- > > Key: FLINK-23727 > URL: https://issues.apache.org/jira/browse/FLINK-23727 > Project: Flink > Issue Type: Bug > Components: Connectors / Common >Affects Versions: 1.13.2 >Reporter: Paul Lin >Priority: Major > Labels: pull-request-available > > In Kafka use cases, it's valid to send a message with a key and a null > payload as a tombstone. But SimpleStringSchema, which is frequently used as a > message value deserializer, throws NPE when the input value is null. We > should tolerate null values in SimpleStringSchema (simply return null to skip > the records), otherwise users need to implement a custom one. -- This message was sent by Atlassian Jira (v8.3.4#803005)
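The behavior the ticket asks for can be sketched as follows. This is a hypothetical, standalone rendition — the class and method names here are illustrative and are not Flink's actual `SimpleStringSchema`:

```java
import java.nio.charset.StandardCharsets;

// Illustrative sketch of the behavior requested in FLINK-23727: return null
// for Kafka tombstone values instead of throwing a NullPointerException.
public class NullTolerantStringDeserializer {

    public static String deserialize(byte[] message) {
        if (message == null) {
            return null; // tombstone record: skip it rather than fail the job
        }
        return new String(message, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        if (deserialize(null) != null) {
            throw new AssertionError("null payload should map to null");
        }
        if (!"value".equals(deserialize("value".getBytes(StandardCharsets.UTF_8)))) {
            throw new AssertionError("non-null payload should round-trip");
        }
        System.out.println("ok");
    }
}
```

Returning null (rather than an empty string) lets downstream code distinguish a tombstone from a genuinely empty message.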
[GitHub] [flink] paul8263 opened a new pull request #16886: [FLINK-23727][core] Skip null values in SimpleStringSchema#deserialize
paul8263 opened a new pull request #16886: URL: https://github.com/apache/flink/pull/16886 ## What is the purpose of the change Fix the issue that SimpleStringSchema#deserialize throws a NullPointerException on null message values by skipping them (returning null). ## Brief change log - flink-core/src/main/java/org/apache/flink/api/common/serialization/SimpleStringSchema.java - flink-core/src/test/java/org/apache/flink/api/common/serialization/SimpleStringSchemaTest.java ## Verifying this change This change added tests and can be verified as follows: - Added more test cases in flink-core/src/test/java/org/apache/flink/api/common/serialization/SimpleStringSchemaTest.java ## Does this pull request potentially affect one of the following parts: - Dependencies (does it add or upgrade a dependency): no - The public API, i.e., is any changed class annotated with `@Public(Evolving)`: no - The serializers: no - The runtime per-record code paths (performance sensitive): no - Anything that affects deployment or recovery: JobManager (and its components), Checkpointing, Kubernetes/Yarn, ZooKeeper: no - The S3 file system connector: no ## Documentation - Does this pull request introduce a new feature? no
[GitHub] [flink] flinkbot edited a comment on pull request #16885: [FLINK-23862][runtime] Cleanup StreamTask if cancelled after restore …
flinkbot edited a comment on pull request #16885: URL: https://github.com/apache/flink/pull/16885#issuecomment-901300956 ## CI report: * 507d2f8129e46bac8a31f5cc6371e1d627c16b2b Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=22462)
[jira] [Updated] (FLINK-22644) Translate "Native Kubernetes" page into Chinese
[ https://issues.apache.org/jira/browse/FLINK-22644?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Flink Jira Bot updated FLINK-22644: --- Labels: auto-unassigned pull-request-available stale-major (was: auto-unassigned pull-request-available) I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help the community manage its development. I see this issue has been marked as Major but is unassigned and neither it nor its Sub-Tasks have been updated for 60 days. I have gone ahead and added a "stale-major" label to the issue. If this ticket is Major, please either assign yourself or give an update. Afterwards, please remove the label, or in 7 days the issue will be deprioritized. > Translate "Native Kubernetes" page into Chinese > --- > > Key: FLINK-22644 > URL: https://issues.apache.org/jira/browse/FLINK-22644 > Project: Flink > Issue Type: Improvement > Components: Documentation >Reporter: Yuchen Cheng >Priority: Major > Labels: auto-unassigned, pull-request-available, stale-major > > The page url is > https://ci.apache.org/projects/flink/flink-docs-release-1.13/docs/deployment/resource-providers/native_kubernetes/ -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (FLINK-18574) DownloadPipelineArtifact fails occasionally
[ https://issues.apache.org/jira/browse/FLINK-18574?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Flink Jira Bot updated FLINK-18574: --- Labels: stale-critical test-stability (was: test-stability) I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help the community manage its development. I see this issues has been marked as Critical but is unassigned and neither itself nor its Sub-Tasks have been updated for 14 days. I have gone ahead and marked it "stale-critical". If this ticket is critical, please either assign yourself or give an update. Afterwards, please remove the label or in 7 days the issue will be deprioritized. > DownloadPipelineArtifact failes occasionally > > > Key: FLINK-18574 > URL: https://issues.apache.org/jira/browse/FLINK-18574 > Project: Flink > Issue Type: Bug > Components: Build System / Azure Pipelines >Reporter: Dian Fu >Priority: Critical > Labels: stale-critical, test-stability > > https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=4414=logs=bdd9ea51-4de2-506a-d4d9-f3930e4d2355=c1dcc74d-b153-580b-95a4-73d5cd91b503 > {code} > 2020-07-11T20:38:15.9784577Z ##[section]Starting: DownloadPipelineArtifact > 2020-07-11T20:38:15.9799212Z > == > 2020-07-11T20:38:15.9799811Z Task : Download Pipeline Artifacts > 2020-07-11T20:38:15.9800305Z Description : Download build and pipeline > artifacts > 2020-07-11T20:38:15.9800753Z Version : 2.3.1 > 2020-07-11T20:38:15.9801328Z Author : Microsoft Corporation > 2020-07-11T20:38:15.9801942Z Help : > https://docs.microsoft.com/azure/devops/pipelines/tasks/utility/download-pipeline-artifact > 2020-07-11T20:38:15.9802614Z > == > 2020-07-11T20:38:16.5226960Z Download from the specified build: #4414 > 2020-07-11T20:38:16.5246147Z Download artifact to: > /home/agent07/myagent/_work/2/flink_artifact > 2020-07-11T20:38:20.6397951Z Information, ApplicationInsightsTelemetrySender > will correlate events with X-TFS-Session 
38415228-535b-4652-8afc-41e3fcfa5c67 > 2020-07-11T20:38:20.6546003Z Information, DedupManifestArtifactClient will > correlate http requests with X-TFS-Session > 38415228-535b-4652-8afc-41e3fcfa5c67 > 2020-07-11T20:38:20.6563787Z Information, Minimatch patterns: [**] > 2020-07-11T20:38:20.7577072Z Information, > ArtifactHttpRetryMessageHandler.SendAsync: > https://vsblobprodsu6weu.vsblob.visualstudio.com/A2d3c0ac8-fecf-45be-8407-6d87302181a9/_apis/dedup/nodes/4C7343B40F7AB12049F6929A6A4ECD738EFADAC85D010AF5877ACD0682CC6C5802 > attempt 1/6 failed with StatusCode RedirectMethod, IsRetryableResponse False > 2020-07-11T20:38:35.8527079Z Information, Filtered 130283 files from the > Minimatch filters supplied. > 2020-07-11T20:38:35.8865820Z Information, Downloaded 0.0 MB out of 1,170.3 MB > (0%). > 2020-07-11T20:38:40.8869716Z Information, Downloaded 4.6 MB out of 1,170.3 MB > (0%). > 2020-07-11T20:38:45.9379430Z Information, Downloaded 24.5 MB out of 1,170.3 > MB (2%). > 2020-07-11T20:38:50.9539945Z Information, Downloaded 41.8 MB out of 1,170.3 > MB (4%). > 2020-07-11T20:38:53.5482129Z Warning, > [https://nsevsblobprodsu6weus42.blob.core.windows.net/db2d3c0ac8fecf45be84076d87302181a9/454487F48A36900D2D7FE8928376FDB49F9F3BF90B3A5474B7EF8D9ABE8BDC7201?sv=2019-02-02=b=HGudFZh7Cd3TOBUMGO7rIq0DJxPQORzHkSVs3iYeDPc%3D=https=2020-07-12T21%3A16%3A18Z=r=x-e2eid-e61f9696-89054b42-93868ec3-c0739ba7-session-38415228-535b4652-8afc41e3-fcfa5c67] > Try 1/5, retryable exception caught. Retrying in 00:00:01. > System.Net.Http.HttpRequestException: An error occurred while sending the > request. 
> 2020-07-11T20:38:53.5484031Z ---> System.Net.Http.CurlException: Couldn't > connect to server > 2020-07-11T20:38:53.5484420Zat > System.Net.Http.CurlHandler.ThrowIfCURLEError(CURLcode error) > 2020-07-11T20:38:53.5484929Zat > System.Net.Http.CurlHandler.MultiAgent.FinishRequest(StrongToWeakReference`1 > easyWrapper, CURLcode messageResult) > 2020-07-11T20:38:53.5485733Z--- End of inner exception stack trace --- > 2020-07-11T20:38:53.5486394Zat > System.Net.Http.HttpClient.FinishSendAsyncBuffered(Task`1 sendTask, > HttpRequestMessage request, CancellationTokenSource cts, Boolean disposeCts) > 2020-07-11T20:38:53.5487612Zat > Microsoft.VisualStudio.Services.Common.TaskCancellationExtensions.EnforceCancellation[TResult](Task`1 > task, CancellationToken cancellationToken, Func`1 makeMessage, String file, > String member, Int32 line) > 2020-07-11T20:38:53.5488656Zat > Microsoft.VisualStudio.Services.BlobStore.WebApi.DedupStoreHttpClient.<>c__DisplayClass57_0.d.MoveNext() >
[jira] [Updated] (FLINK-22416) UpsertKafkaTableITCase hangs when collecting results
[ https://issues.apache.org/jira/browse/FLINK-22416?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Flink Jira Bot updated FLINK-22416: --- Labels: stale-assigned test-stability (was: test-stability) I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help the community manage its development. I see this issue is assigned but has not received an update in 30 days, so it has been labeled "stale-assigned". If you are still working on the issue, please remove the label and add a comment updating the community on your progress. If this issue is waiting on feedback, please consider this a reminder to the committer/reviewer. Flink is a very active project, and so we appreciate your patience. If you are no longer working on the issue, please unassign yourself so someone else may work on it. > UpsertKafkaTableITCase hangs when collecting results > > > Key: FLINK-22416 > URL: https://issues.apache.org/jira/browse/FLINK-22416 > Project: Flink > Issue Type: Bug > Components: Connectors / Kafka, Table SQL / Ecosystem >Affects Versions: 1.13.0 >Reporter: Dawid Wysakowicz >Assignee: Qingsheng Ren >Priority: Major > Labels: stale-assigned, test-stability > Fix For: 1.14.0 > > Attachments: idea-test.png, threads_report.txt > > > https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=17037=logs=c5f0071e-1851-543e-9a45-9ac140befc32=1fb1a56f-e8b5-5a82-00a0-a2db7757b4f5=7002 > {code} > 2021-04-22T11:16:35.6812919Z Apr 22 11:16:35 [ERROR] > testSourceSinkWithKeyAndPartialValue[format = > csv](org.apache.flink.streaming.connectors.kafka.table.UpsertKafkaTableITCase) > Time elapsed: 30.01 s <<< ERROR! 
> 2021-04-22T11:16:35.6814151Z Apr 22 11:16:35 > org.junit.runners.model.TestTimedOutException: test timed out after 30 seconds > 2021-04-22T11:16:35.6814781Z Apr 22 11:16:35 at > java.lang.Thread.sleep(Native Method) > 2021-04-22T11:16:35.6815444Z Apr 22 11:16:35 at > org.apache.flink.streaming.api.operators.collect.CollectResultFetcher.sleepBeforeRetry(CollectResultFetcher.java:237) > 2021-04-22T11:16:35.6816250Z Apr 22 11:16:35 at > org.apache.flink.streaming.api.operators.collect.CollectResultFetcher.next(CollectResultFetcher.java:113) > 2021-04-22T11:16:35.6817033Z Apr 22 11:16:35 at > org.apache.flink.streaming.api.operators.collect.CollectResultIterator.nextResultFromFetcher(CollectResultIterator.java:106) > 2021-04-22T11:16:35.6817719Z Apr 22 11:16:35 at > org.apache.flink.streaming.api.operators.collect.CollectResultIterator.hasNext(CollectResultIterator.java:80) > 2021-04-22T11:16:35.6818351Z Apr 22 11:16:35 at > org.apache.flink.table.api.internal.TableResultImpl$CloseableRowIteratorWrapper.hasNext(TableResultImpl.java:370) > 2021-04-22T11:16:35.6818980Z Apr 22 11:16:35 at > org.apache.flink.streaming.connectors.kafka.table.KafkaTableTestUtils.collectRows(KafkaTableTestUtils.java:52) > 2021-04-22T11:16:35.6819978Z Apr 22 11:16:35 at > org.apache.flink.streaming.connectors.kafka.table.UpsertKafkaTableITCase.testSourceSinkWithKeyAndPartialValue(UpsertKafkaTableITCase.java:147) > 2021-04-22T11:16:35.6820803Z Apr 22 11:16:35 at > sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > 2021-04-22T11:16:35.6821365Z Apr 22 11:16:35 at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > 2021-04-22T11:16:35.6822072Z Apr 22 11:16:35 at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > 2021-04-22T11:16:35.6822656Z Apr 22 11:16:35 at > java.lang.reflect.Method.invoke(Method.java:498) > 2021-04-22T11:16:35.6823124Z Apr 22 11:16:35 at > 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50) > 2021-04-22T11:16:35.6823672Z Apr 22 11:16:35 at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > 2021-04-22T11:16:35.6824202Z Apr 22 11:16:35 at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47) > 2021-04-22T11:16:35.6824709Z Apr 22 11:16:35 at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > 2021-04-22T11:16:35.6825230Z Apr 22 11:16:35 at > org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) > 2021-04-22T11:16:35.6825716Z Apr 22 11:16:35 at > org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) > 2021-04-22T11:16:35.6826204Z Apr 22 11:16:35 at > org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48) > 2021-04-22T11:16:35.6826807Z Apr 22 11:16:35 at > org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298) > 2021-04-22T11:16:35.6827378Z Apr 22 11:16:35 at >
[GitHub] [flink] flinkbot edited a comment on pull request #16885: [FLINK-23862][runtime] Cleanup StreamTask if cancelled after restore …
flinkbot edited a comment on pull request #16885: URL: https://github.com/apache/flink/pull/16885#issuecomment-901300956 ## CI report: * 42f46f71429e3bd703b451cdab8a486542abf283 Azure: [CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=22461) * 507d2f8129e46bac8a31f5cc6371e1d627c16b2b Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=22462)
[GitHub] [flink] flinkbot edited a comment on pull request #16838: [FLINK-23801][connector/kafka] Report numBytesIn and pendingRecords in KafkaSource
flinkbot edited a comment on pull request #16838: URL: https://github.com/apache/flink/pull/16838#issuecomment-899337835 ## CI report: * 92f938c7b00a357e2fc58fd2532385a6a4b72516 Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=22460)
[GitHub] [flink] rkhachatryan commented on a change in pull request #16606: [FLINK-21357][runtime/statebackend]Periodic materialization for generalized incremental checkpoints
rkhachatryan commented on a change in pull request #16606: URL: https://github.com/apache/flink/pull/16606#discussion_r691590608 ## File path: flink-state-backends/flink-statebackend-changelog/src/main/java/org/apache/flink/state/changelog/materializer/PeriodicMaterializer.java ## @@ -0,0 +1,263 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.flink.state.changelog.materializer; + +import org.apache.flink.annotation.VisibleForTesting; +import org.apache.flink.core.fs.FileSystemSafetyNet; +import org.apache.flink.runtime.checkpoint.CheckpointOptions; +import org.apache.flink.runtime.checkpoint.CheckpointType; +import org.apache.flink.runtime.mailbox.MailboxExecutor; +import org.apache.flink.runtime.state.CheckpointStorageLocationReference; +import org.apache.flink.runtime.state.CheckpointStorageWorkerView; +import org.apache.flink.runtime.state.CheckpointStreamFactory; +import org.apache.flink.runtime.state.KeyedStateHandle; +import org.apache.flink.runtime.state.SnapshotResult; +import org.apache.flink.runtime.state.Snapshotable; +import org.apache.flink.runtime.state.StateObject; +import org.apache.flink.runtime.state.changelog.ChangelogStateHandle; +import org.apache.flink.runtime.state.changelog.SequenceNumber; +import org.apache.flink.runtime.state.changelog.StateChangelogWriter; +import org.apache.flink.util.concurrent.ExecutorThreadFactory; +import org.apache.flink.util.concurrent.FutureUtils; + +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +import javax.annotation.Nonnull; + +import java.util.ArrayList; +import java.util.List; +import java.util.concurrent.ExecutorService; +import java.util.concurrent.Executors; +import java.util.concurrent.RunnableFuture; +import java.util.concurrent.ScheduledExecutorService; +import java.util.concurrent.TimeUnit; +import java.util.concurrent.atomic.AtomicBoolean; +import java.util.concurrent.atomic.AtomicInteger; + +import static java.util.Collections.emptyList; +import static java.util.Collections.singletonList; +import static org.apache.flink.util.Preconditions.checkState; + +/** Periodically make materialization for the delegated state backend. 
*/ +public class PeriodicMaterializer { +private static final Logger LOG = LoggerFactory.getLogger(PeriodicMaterializer.class); + +/** + * ChangelogStateBackend only supports CheckpointType.CHECKPOINT; The rest of information in + * CheckpointOptions is not used in Snapshotable#snapshot(). More details in FLINK-23441. + */ +private static final CheckpointOptions CHECKPOINT_OPTIONS = +new CheckpointOptions( +CheckpointType.CHECKPOINT, CheckpointStorageLocationReference.getDefault()); + +/** task mailbox executor, execute from Task Thread. */ +private final MailboxExecutor mailboxExecutor; + +/** Async thread pool, to complete async phase of materialization. */ +private final ExecutorService asyncOperationsThreadPool; + +/** scheduled executor, periodically trigger materialization. */ +private final ScheduledExecutorService periodicExecutor; + +private final CheckpointStreamFactory streamFactory; + +private final Snapshotable keyedStateBackend; + +private final StateChangelogWriter stateChangelogWriter; + +private final AsyncExceptionHandler asyncExceptionHandler; + +private final int allowedNumberOfFailures; + +/** Materialization failure retries. */ +private final AtomicInteger retries; + +/** Making sure only one materialization on going at a time. */ +private final AtomicBoolean materializationOnGoing; + +private long materializedId; + +private MaterializedState materializedState; + +public PeriodicMaterializer( +MailboxExecutor mailboxExecutor, +ExecutorService asyncOperationsThreadPool, +Snapshotable keyedStateBackend, +CheckpointStorageWorkerView checkpointStorageWorkerView, +StateChangelogWriter stateChangelogWriter, +AsyncExceptionHandler asyncExceptionHandler, +MaterializedState materializedState, +long periodicMaterializeInitDelay, +long
[GitHub] [flink] rkhachatryan commented on a change in pull request #16606: [FLINK-21357][runtime/statebackend]Periodic materialization for generalized incremental checkpoints
rkhachatryan commented on a change in pull request #16606: URL: https://github.com/apache/flink/pull/16606#discussion_r691587190 ## File path: flink-state-backends/flink-statebackend-changelog/src/main/java/org/apache/flink/state/changelog/materializer/PeriodicMaterializer.java ## @@ -0,0 +1,263 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.flink.state.changelog.materializer; + +import org.apache.flink.annotation.VisibleForTesting; +import org.apache.flink.core.fs.FileSystemSafetyNet; +import org.apache.flink.runtime.checkpoint.CheckpointOptions; +import org.apache.flink.runtime.checkpoint.CheckpointType; +import org.apache.flink.runtime.mailbox.MailboxExecutor; +import org.apache.flink.runtime.state.CheckpointStorageLocationReference; +import org.apache.flink.runtime.state.CheckpointStorageWorkerView; +import org.apache.flink.runtime.state.CheckpointStreamFactory; +import org.apache.flink.runtime.state.KeyedStateHandle; +import org.apache.flink.runtime.state.SnapshotResult; +import org.apache.flink.runtime.state.Snapshotable; +import org.apache.flink.runtime.state.StateObject; +import org.apache.flink.runtime.state.changelog.ChangelogStateHandle; +import org.apache.flink.runtime.state.changelog.SequenceNumber; +import org.apache.flink.runtime.state.changelog.StateChangelogWriter; +import org.apache.flink.util.concurrent.ExecutorThreadFactory; +import org.apache.flink.util.concurrent.FutureUtils; + +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +import javax.annotation.Nonnull; + +import java.util.ArrayList; +import java.util.List; +import java.util.concurrent.ExecutorService; +import java.util.concurrent.Executors; +import java.util.concurrent.RunnableFuture; +import java.util.concurrent.ScheduledExecutorService; +import java.util.concurrent.TimeUnit; +import java.util.concurrent.atomic.AtomicBoolean; +import java.util.concurrent.atomic.AtomicInteger; + +import static java.util.Collections.emptyList; +import static java.util.Collections.singletonList; +import static org.apache.flink.util.Preconditions.checkState; + +/** Periodically make materialization for the delegated state backend. 
*/ +public class PeriodicMaterializer { +private static final Logger LOG = LoggerFactory.getLogger(PeriodicMaterializer.class); + +/** + * ChangelogStateBackend only supports CheckpointType.CHECKPOINT; The rest of information in + * CheckpointOptions is not used in Snapshotable#snapshot(). More details in FLINK-23441. + */ +private static final CheckpointOptions CHECKPOINT_OPTIONS = +new CheckpointOptions( +CheckpointType.CHECKPOINT, CheckpointStorageLocationReference.getDefault()); + +/** task mailbox executor, execute from Task Thread. */ +private final MailboxExecutor mailboxExecutor; + +/** Async thread pool, to complete async phase of materialization. */ +private final ExecutorService asyncOperationsThreadPool; + +/** scheduled executor, periodically trigger materialization. */ +private final ScheduledExecutorService periodicExecutor; + +private final CheckpointStreamFactory streamFactory; + +private final Snapshotable keyedStateBackend; + +private final StateChangelogWriter stateChangelogWriter; + +private final AsyncExceptionHandler asyncExceptionHandler; + +private final int allowedNumberOfFailures; + +/** Materialization failure retries. */ +private final AtomicInteger retries; + +/** Making sure only one materialization on going at a time. */ +private final AtomicBoolean materializationOnGoing; + +private long materializedId; + +private MaterializedState materializedState; + +public PeriodicMaterializer( +MailboxExecutor mailboxExecutor, +ExecutorService asyncOperationsThreadPool, +Snapshotable keyedStateBackend, +CheckpointStorageWorkerView checkpointStorageWorkerView, +StateChangelogWriter stateChangelogWriter, +AsyncExceptionHandler asyncExceptionHandler, +MaterializedState materializedState, +long periodicMaterializeInitDelay, +long
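The fields in the paste above (notably the `AtomicBoolean materializationOnGoing` guard) suggest a single-flight pattern: a periodic trigger may fire while a previous materialization is still in flight, and at most one run may proceed at a time. A minimal, hypothetical sketch of that pattern — not Flink's actual PeriodicMaterializer:

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Hypothetical sketch of the guard pattern: at most one materialization
// runs at a time; overlapping triggers are skipped, not queued.
public class SingleFlightGuard {
    private final AtomicBoolean onGoing = new AtomicBoolean(false);

    /** Runs the task if no other run is in flight; returns whether it ran. */
    public boolean tryRun(Runnable task) {
        if (!onGoing.compareAndSet(false, true)) {
            return false; // a previous run is still in progress; skip this trigger
        }
        try {
            task.run();
            return true;
        } finally {
            onGoing.set(false); // allow the next periodic trigger to proceed
        }
    }

    public static void main(String[] args) {
        SingleFlightGuard guard = new SingleFlightGuard();
        boolean[] nestedRan = new boolean[1];
        // A trigger arriving while a run is in flight must be rejected.
        boolean ran = guard.tryRun(() -> nestedRan[0] = guard.tryRun(() -> {}));
        if (!ran || nestedRan[0]) {
            throw new AssertionError("nested trigger should have been skipped");
        }
        System.out.println("ok");
    }
}
```

`compareAndSet` makes the check-and-claim atomic, which a plain `boolean` flag would not, so concurrent triggers from the scheduled executor and the mailbox thread cannot both start a run.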
[GitHub] [flink] flinkbot edited a comment on pull request #16884: [FLINK-22221][runtime] Log instantiations of FileChannelManager to pr…
flinkbot edited a comment on pull request #16884: URL: https://github.com/apache/flink/pull/16884#issuecomment-901259128 ## CI report: * afd09ac3fd8ff1a0dff955aa8e142a0d5b3ec638 Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=22459)
[GitHub] [flink] rkhachatryan commented on a change in pull request #16606: [FLINK-21357][runtime/statebackend]Periodic materialization for generalized incremental checkpoints
rkhachatryan commented on a change in pull request #16606: URL: https://github.com/apache/flink/pull/16606#discussion_r691577741

## File path: flink-state-backends/flink-statebackend-changelog/src/main/java/org/apache/flink/state/changelog/materializer/PeriodicMaterializer.java

## @@ -0,0 +1,263 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.state.changelog.materializer;
+
+import org.apache.flink.annotation.VisibleForTesting;
+import org.apache.flink.core.fs.FileSystemSafetyNet;
+import org.apache.flink.runtime.checkpoint.CheckpointOptions;
+import org.apache.flink.runtime.checkpoint.CheckpointType;
+import org.apache.flink.runtime.mailbox.MailboxExecutor;
+import org.apache.flink.runtime.state.CheckpointStorageLocationReference;
+import org.apache.flink.runtime.state.CheckpointStorageWorkerView;
+import org.apache.flink.runtime.state.CheckpointStreamFactory;
+import org.apache.flink.runtime.state.KeyedStateHandle;
+import org.apache.flink.runtime.state.SnapshotResult;
+import org.apache.flink.runtime.state.Snapshotable;
+import org.apache.flink.runtime.state.StateObject;
+import org.apache.flink.runtime.state.changelog.ChangelogStateHandle;
+import org.apache.flink.runtime.state.changelog.SequenceNumber;
+import org.apache.flink.runtime.state.changelog.StateChangelogWriter;
+import org.apache.flink.util.concurrent.ExecutorThreadFactory;
+import org.apache.flink.util.concurrent.FutureUtils;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import javax.annotation.Nonnull;
+
+import java.util.ArrayList;
+import java.util.List;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.Executors;
+import java.util.concurrent.RunnableFuture;
+import java.util.concurrent.ScheduledExecutorService;
+import java.util.concurrent.TimeUnit;
+import java.util.concurrent.atomic.AtomicBoolean;
+import java.util.concurrent.atomic.AtomicInteger;
+
+import static java.util.Collections.emptyList;
+import static java.util.Collections.singletonList;
+import static org.apache.flink.util.Preconditions.checkState;
+
+/** Periodically make materialization for the delegated state backend. */
+public class PeriodicMaterializer {
+    private static final Logger LOG = LoggerFactory.getLogger(PeriodicMaterializer.class);
+
+    /**
+     * ChangelogStateBackend only supports CheckpointType.CHECKPOINT; The rest of information in
+     * CheckpointOptions is not used in Snapshotable#snapshot(). More details in FLINK-23441.
+     */
+    private static final CheckpointOptions CHECKPOINT_OPTIONS =
+            new CheckpointOptions(
+                    CheckpointType.CHECKPOINT, CheckpointStorageLocationReference.getDefault());
+
+    /** task mailbox executor, execute from Task Thread. */
+    private final MailboxExecutor mailboxExecutor;
+
+    /** Async thread pool, to complete async phase of materialization. */
+    private final ExecutorService asyncOperationsThreadPool;
+
+    /** scheduled executor, periodically trigger materialization. */
+    private final ScheduledExecutorService periodicExecutor;
+
+    private final CheckpointStreamFactory streamFactory;
+
+    private final Snapshotable keyedStateBackend;
+
+    private final StateChangelogWriter stateChangelogWriter;
+
+    private final AsyncExceptionHandler asyncExceptionHandler;
+
+    private final int allowedNumberOfFailures;
+
+    /** Materialization failure retries. */
+    private final AtomicInteger retries;
+
+    /** Making sure only one materialization on going at a time. */
+    private final AtomicBoolean materializationOnGoing;
+
+    private long materializedId;
+
+    private MaterializedState materializedState;
+
+    public PeriodicMaterializer(
+            MailboxExecutor mailboxExecutor,
+            ExecutorService asyncOperationsThreadPool,
+            Snapshotable keyedStateBackend,
+            CheckpointStorageWorkerView checkpointStorageWorkerView,
+            StateChangelogWriter stateChangelogWriter,
+            AsyncExceptionHandler asyncExceptionHandler,
+            MaterializedState materializedState,
+            long periodicMaterializeInitDelay,
+            long
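The diff above guards concurrency with an `AtomicBoolean materializationOnGoing` so that a periodic trigger never starts a second materialization while one is still in flight. As a rough illustration of that pattern, here is a hypothetical, simplified sketch — the class and method names below are illustrative only and are not Flink's actual API:

```java
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.atomic.AtomicInteger;

/**
 * Hypothetical sketch of the single-in-flight guard used by the periodic
 * materializer; not the real Flink class.
 */
public class MaterializerGuardSketch {
    // Mirrors the materializationOnGoing field from the diff: only one
    // materialization may be in flight at a time.
    final AtomicBoolean materializationOnGoing = new AtomicBoolean(false);
    final AtomicInteger completed = new AtomicInteger();
    final AtomicInteger skipped = new AtomicInteger();

    /** Invoked by the periodic executor; returns true if a run was started. */
    boolean triggerMaterialization() {
        // compareAndSet succeeds only if no other run is ongoing.
        if (!materializationOnGoing.compareAndSet(false, true)) {
            skipped.incrementAndGet(); // a previous run is still in flight
            return false;
        }
        try {
            // The synchronous snapshot phase and async upload phase would run here.
            completed.incrementAndGet();
            return true;
        } finally {
            // In the real class the flag is reset only after the async phase finishes.
            materializationOnGoing.set(false);
        }
    }

    public static void main(String[] args) {
        MaterializerGuardSketch m = new MaterializerGuardSketch();
        m.materializationOnGoing.set(true); // simulate an in-flight run
        System.out.println("startedWhileBusy=" + m.triggerMaterialization());
        m.materializationOnGoing.set(false);
        System.out.println("startedWhenIdle=" + m.triggerMaterialization());
        System.out.println("completed=" + m.completed.get() + ", skipped=" + m.skipped.get());
    }
}
```

The compare-and-set makes the trigger cheap to skip: an overlapping run is simply counted and dropped rather than queued, which is what keeps at most one materialization ongoing per task.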
[GitHub] [flink] rkhachatryan commented on a change in pull request #16606: [FLINK-21357][runtime/statebackend]Periodic materialization for generalized incremental checkpoints
rkhachatryan commented on a change in pull request #16606: URL: https://github.com/apache/flink/pull/16606#discussion_r691573533

## File path: flink-state-backends/flink-statebackend-changelog/src/main/java/org/apache/flink/state/changelog/materializer/PeriodicMaterializer.java
[GitHub] [flink] flinkbot edited a comment on pull request #16882: [FLINK-23528][kinesis] Do not interrupt main thread during cancellation.
flinkbot edited a comment on pull request #16882: URL: https://github.com/apache/flink/pull/16882#issuecomment-901186762 ## CI report: * 9ebf91c53e256e47aaa6a7af1b94bd99d499e53a Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=22456) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build - `@flinkbot run azure` re-run the last Azure build -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot edited a comment on pull request #16883: [FLINK-22217][runtime] Add quotes around job name in job related log …
flinkbot edited a comment on pull request #16883: URL: https://github.com/apache/flink/pull/16883#issuecomment-901213387

## CI report:

* 6e6bd1dadc897ec438c3f176f7579690f30cf860 Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=22457)
[GitHub] [flink] flinkbot edited a comment on pull request #16877: [FLINK-23861][sql-client]flink sql client support dynamic params
flinkbot edited a comment on pull request #16877: URL: https://github.com/apache/flink/pull/16877#issuecomment-901089685

## CI report:

* d28f807fa7579005eeb231eb9cc5e772f222f85b Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=22454)
[GitHub] [flink] flinkbot edited a comment on pull request #16885: [FLINK-23862][runtime] Cleanup StreamTask if cancelled after restore …
flinkbot edited a comment on pull request #16885: URL: https://github.com/apache/flink/pull/16885#issuecomment-901300956

## CI report:

* 42f46f71429e3bd703b451cdab8a486542abf283 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=22461)
* 507d2f8129e46bac8a31f5cc6371e1d627c16b2b Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=22462)
[jira] [Commented] (FLINK-23841) [doc]Modify incorrect English statements for page 'execution_configuration'
[ https://issues.apache.org/jira/browse/FLINK-23841?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17401305#comment-17401305 ]

wuguihu commented on FLINK-23841:

Hi [~chesnay], sorry for taking up your time. I have created a pull request for this issue; would you please review it? If there is any problem, please let me know and I will revise it promptly. Thank you very much!

> [doc]Modify incorrect English statements for page 'execution_configuration'
> ---
>
> Key: FLINK-23841
> URL: https://issues.apache.org/jira/browse/FLINK-23841
> Project: Flink
> Issue Type: Bug
> Components: Documentation
> Reporter: wuguihu
> Priority: Minor
> Labels: pull-request-available
>
> https://ci.apache.org/projects/flink/flink-docs-master/docs/dev/datastream/execution/execution_configuration/
>
> The English statement in line 82 is not correct.
> {code:java}
> // line 82
> Note that types registered with `registerKryoType()` are not available to
> Flink's Kryo serializer instance.
> {code}
> It should be modified as follows:
> {code:java}
> Note that types registered with `registerKryoType()` are not available to
> Flink's POJO serializer instance.
> {code}

-- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (FLINK-23859) [typo][flink-core][flink-connectors]fix typo for code
[ https://issues.apache.org/jira/browse/FLINK-23859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17401304#comment-17401304 ]

wuguihu commented on FLINK-23859:

Hi [~chesnay], sorry for taking up your time. I have created a pull request for this issue; would you please review it? If there is any problem, please let me know and I will revise it promptly. Thank you very much!

> [typo][flink-core][flink-connectors]fix typo for code
> ---
>
> Key: FLINK-23859
> URL: https://issues.apache.org/jira/browse/FLINK-23859
> Project: Flink
> Issue Type: Bug
> Reporter: wuguihu
> Priority: Minor
> Labels: pull-request-available
>
> There are some typo issues in these modules.
>
> {code:java}
> # Use the Codespell tool to check typo issues.
> pip install codespell
> codespell -h
> {code}
>
> 1、 codespell flink-java/src
> {code:java}
> flink-java/src/main/java/org/apache/flink/api/java/operators/PartitionOperator.java:125: partioning ==> partitioning
> flink-java/src/main/java/org/apache/flink/api/java/operators/PartitionOperator.java:128: neccessary ==> necessary
> {code}
>
> 2、 codespell flink-clients/
> {code:java}
> flink-clients/src/test/java/org/apache/flink/client/program/DefaultPackagedProgramRetrieverTest.java:545: acessible ==> accessible
> {code}
>
> 3、codespell flink-connectors/ -S '*.xml' -S '*.iml' -S '*.txt'
> {code:java}
> flink-connectors/flink-connector-base/src/main/java/org/apache/flink/connector/base/source/reader/SourceReaderOptions.java:25: tht ==> that
> flink-connectors/flink-connector-jdbc/src/test/java/org/apache/flink/connector/jdbc/catalog/PostgresCatalogTestBase.java:192: doens't ==> doesn't
> flink-connectors/flink-connector-jdbc/src/main/java/org/apache/flink/connector/jdbc/dialect/JdbcDialect.java:96: PostgresSQL ==> postgresql
> flink-connectors/flink-connector-cassandra/src/test/java/org/apache/flink/batch/connectors/cassandra/CustomCassandraAnnotatedPojo.java:38: instanciation ==> instantiation
> flink-connectors/flink-connector-hive/src/main/java/org/apache/flink/table/planner/delegation/hive/HiveParserCalcitePlanner.java:822: partion ==> partition
> flink-connectors/flink-connector-hive/src/main/java/org/apache/flink/table/planner/delegation/hive/HiveParserTypeCheckProcFactory.java:943: funtion ==> function
> flink-connectors/flink-connector-hive/src/main/java/org/apache/flink/table/planner/delegation/hive/copy/HiveASTParseDriver.java:55: funtion ==> function
> flink-connectors/flink-connector-hive/src/main/java/org/apache/flink/table/planner/delegation/hive/copy/HiveASTParseDriver.java:51: characteres ==> characters
> flink-connectors/flink-connector-kinesis/src/main/java/org/apache/flink/streaming/connectors/kinesis/util/KinesisConfigUtil.java:436: paremeters ==> parameters
> flink-connectors/flink-connector-hive/src/main/java/org/apache/flink/table/planner/delegation/hive/copy/HiveParserBaseSemanticAnalyzer.java:2369: Unkown ==> Unknown
> flink-connectors/flink-connector-kinesis/src/main/java/org/apache/flink/streaming/connectors/kinesis/config/ConsumerConfigConstants.java:75: reprsents ==> represents
> flink-connectors/flink-connector-hive/src/main/java/org/apache/flink/table/functions/hive/HiveFunctionWrapper.java:28: functino ==> function
> flink-connectors/flink-connector-hbase-2.2/src/main/java/org/apache/flink/connector/hbase2/source/HBaseRowDataAsyncLookupFunction.java:62: implemenation
> flink-connectors/flink-connector-pulsar/src/main/java/org/apache/flink/connector/pulsar/source/enumerator/cursor/StartCursor.java:70: ture ==> true
> flink-connectors/flink-connector-pulsar/src/test/resources/containers/txnStandalone.conf:907: partions ==> partitions
> flink-connectors/flink-connector-pulsar/src/test/resources/containers/txnStandalone.conf:468: implementatation ==> implementation
> flink-connectors/flink-connector-files/src/main/java/org/apache/flink/connector/file/src/enumerate/BlockSplittingRecursiveEnumerator.java:141: bloc ==> block
> flink-connectors/flink-connector-files/src/main/java/org/apache/flink/connector/file/src/reader/SimpleStreamFormat.java:37: te ==> the
> flink-connectors/flink-connector-kafka/src/main/java/org/apache/flink/connector/kafka/source/reader/deserializer/KafkaRecordDeserializationSchema.java:70: determin ==> determine
> flink-connectors/flink-connector-kafka/src/main/java/org/apache/flink/connector/kafka/source/enumerator/subscriber/TopicListSubscriber.java:36: hav ==> have
> flink-connectors/flink-connector-hive/src/main/java/org/apache/flink/table/planner/delegation/hive/copy/HiveParserQBSubQuery.java:555: correlatd ==> correlated
>
[GitHub] [flink] flinkbot edited a comment on pull request #16885: [FLINK-23862][runtime] Cleanup StreamTask if cancelled after restore …
flinkbot edited a comment on pull request #16885: URL: https://github.com/apache/flink/pull/16885#issuecomment-901300956

## CI report:

* 42f46f71429e3bd703b451cdab8a486542abf283 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=22461)
* 507d2f8129e46bac8a31f5cc6371e1d627c16b2b UNKNOWN
[GitHub] [flink] flinkbot edited a comment on pull request #16879: [FLINK-23840][runtime] Improve confusing log / exception message when…
flinkbot edited a comment on pull request #16879: URL: https://github.com/apache/flink/pull/16879#issuecomment-901136305

## CI report:

* 6f8434f7ee71aea629edfc4d1e3f77e8c17a1a33 Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=22452)
[GitHub] [flink] flinkbot edited a comment on pull request #16885: [FLINK-23862][runtime] Cleanup StreamTask if cancelled after restore …
flinkbot edited a comment on pull request #16885: URL: https://github.com/apache/flink/pull/16885#issuecomment-901300956

## CI report:

* 42f46f71429e3bd703b451cdab8a486542abf283 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=22461)
[GitHub] [flink] flinkbot edited a comment on pull request #16880: [hotfix][flink-kafka-connector][docs] Fix some typos in flink-kafka-connector annotation
flinkbot edited a comment on pull request #16880: URL: https://github.com/apache/flink/pull/16880#issuecomment-901136372

## CI report:

* d8f377bba18c95536edd372c5438903f849cea69 Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=22453)
[GitHub] [flink] flinkbot edited a comment on pull request #16865: [FLINK-23836][doc-zh]Translate "/dev/datastream/execution/execution_c…
flinkbot edited a comment on pull request #16865: URL: https://github.com/apache/flink/pull/16865#issuecomment-900450716

## CI report:

* 2e14eb9abe1a001787baf70a87f604f387e5f753 Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=22458)
[GitHub] [flink] flinkbot commented on pull request #16885: [FLINK-23862][runtime] Cleanup StreamTask if cancelled after restore …
flinkbot commented on pull request #16885: URL: https://github.com/apache/flink/pull/16885#issuecomment-901300956

## CI report:

* 42f46f71429e3bd703b451cdab8a486542abf283 UNKNOWN