[jira] [Closed] (FLINK-15670) Provide a Kafka Source/Sink pair that aligns Kafka's Partitions and Flink's KeyGroups
[ https://issues.apache.org/jira/browse/FLINK-15670?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Piotr Nowojski closed FLINK-15670.
Resolution: Fixed

Merged to master as 03f5d54b99..78b7c71d83

> Provide a Kafka Source/Sink pair that aligns Kafka's Partitions and Flink's KeyGroups
>
> Key: FLINK-15670
> URL: https://issues.apache.org/jira/browse/FLINK-15670
> Project: Flink
> Issue Type: New Feature
> Components: API / DataStream, Connectors / Kafka
> Reporter: Stephan Ewen
> Assignee: Yuan Mei
> Priority: Major
> Labels: pull-request-available, usability
> Fix For: 1.11.0
> Time Spent: 10m
> Remaining Estimate: 0h
>
> This Source/Sink pair would serve two purposes:
> 1. You can read topics that are already partitioned by key and process them without partitioning them again (avoid shuffles)
> 2. You can use this to shuffle through Kafka, thereby decomposing the job into smaller jobs and independent pipelined regions that fail over independently.

-- This message was sent by Atlassian Jira (v8.3.4#803005)
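The alignment the issue describes can be illustrated with Flink's key-group arithmetic. The following is a hedged sketch, not the merged KafkaShuffle code: `hashCode()` stands in for Flink's murmur-based key-group assignment, so the numbers are illustrative only. The point is that if the producer chooses the Kafka partition with the same function Flink uses to map a key group to a subtask, records in partition i land exactly on the subtask that owns key group i, and no keyBy reshuffle is needed.

```java
public class KeyGroupAlignment {

    // Stand-in for Flink's key-group assignment (the real one applies a murmur
    // hash on top of key.hashCode(); see KeyGroupRangeAssignment).
    static int keyGroupFor(Object key, int maxParallelism) {
        return Math.floorMod(key.hashCode(), maxParallelism);
    }

    // Flink's formula mapping a key group to an operator subtask index.
    static int subtaskFor(int keyGroup, int maxParallelism, int parallelism) {
        return keyGroup * parallelism / maxParallelism;
    }

    public static void main(String[] args) {
        int maxParallelism = 128;
        int parallelism = 4;
        // Producer side: derive the Kafka partition from the key group, so the
        // topic's partitioning mirrors Flink's key-group ownership.
        int keyGroup = keyGroupFor("user-42", maxParallelism);
        int partition = subtaskFor(keyGroup, maxParallelism, parallelism);
        // Consumer side: subtask `partition` already owns that key group, so it
        // can read the partition directly without another shuffle.
        System.out.println("keyGroup=" + keyGroup + " partition=" + partition);
    }
}
```

Purpose 2 (shuffling through Kafka) is the same idea in reverse: because the partitioning function is shared, writing through a topic preserves key ownership across the two smaller jobs.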
[GitHub] [flink] pnowojski commented on pull request #11725: [FLINK-15670][API] Provide a Kafka Source/Sink pair as KafkaShuffle
pnowojski commented on pull request #11725: URL: https://github.com/apache/flink/pull/11725#issuecomment-629984389

LGTM, the private Azure run passed, modulo an end-to-end test timeout, but that couldn't possibly be caused by the latest update: the previous version differed only in a Javadoc change and was green. Merging.

This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] pnowojski merged pull request #11725: [FLINK-15670][API] Provide a Kafka Source/Sink pair as KafkaShuffle
pnowojski merged pull request #11725: URL: https://github.com/apache/flink/pull/11725
[jira] [Updated] (FLINK-17791) Combine collecting sink and iterator to support collecting query results under all execution and network environments
[ https://issues.apache.org/jira/browse/FLINK-17791?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Caizhi Weng updated FLINK-17791:
Labels: pull-request-available (was: )

> Combine collecting sink and iterator to support collecting query results under all execution and network environments
>
> Key: FLINK-17791
> URL: https://issues.apache.org/jira/browse/FLINK-17791
> Project: Flink
> Issue Type: Sub-task
> Components: API / DataStream, Table SQL / Planner
> Reporter: Caizhi Weng
> Priority: Major
> Labels: pull-request-available
> Fix For: 1.11.0
>
> After introducing the specialized collecting sink and iterator, the last thing we need to do is to combine them in the Table / DataStream API so that the whole collecting mechanism works for users.
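The two pieces being combined here — a sink that collects query results and an iterator the client pulls them from — can be sketched with a plain in-process blocking queue. This is a simplified stand-in for illustration only; the real Flink implementation ships results from the cluster to the client over the network with checkpoint-aware semantics.

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import java.util.NoSuchElementException;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class CollectSketch {
    // Sentinel marking the end of the result stream.
    private static final Object END = new Object();
    private final BlockingQueue<Object> queue = new LinkedBlockingQueue<>();

    // "Sink" side: called once per result row; close() signals completion.
    void invoke(Object row) { queue.add(row); }
    void close() { queue.add(END); }

    // "Iterator" side: blocks until the next result (or the end marker) arrives.
    Iterator<Object> iterator() {
        return new Iterator<Object>() {
            private Object next;
            private boolean done;

            @Override public boolean hasNext() {
                if (done) return false;
                if (next == null) {
                    try {
                        next = queue.take();
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                        throw new RuntimeException(e);
                    }
                    if (next == END) { done = true; next = null; }
                }
                return !done;
            }

            @Override public Object next() {
                if (!hasNext()) throw new NoSuchElementException();
                Object r = next;
                next = null;
                return r;
            }
        };
    }

    public static void main(String[] args) {
        CollectSketch sink = new CollectSketch();
        sink.invoke("a"); sink.invoke("b"); sink.close();
        List<Object> out = new ArrayList<>();
        sink.iterator().forEachRemaining(out::add);
        System.out.println(out); // [a, b]
    }
}
```

The design point the issue is after: because the iterator blocks rather than snapshotting, it works the same whether the job is still producing results or has finished, which is what lets a single collect API cover all execution environments.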
[GitHub] [flink] leonardBang commented on a change in pull request #12184: [FLINK-17027] Introduce a new Elasticsearch 7 connector with new property keys
leonardBang commented on a change in pull request #12184: URL: https://github.com/apache/flink/pull/12184#discussion_r426404152

## File path: flink-connectors/flink-connector-elasticsearch6/src/main/java/org/apache/flink/streaming/connectors/elasticsearch/table/Elasticsearch6DynamicSinkFactory.java

## @@ -0,0 +1,152 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.streaming.connectors.elasticsearch.table;
+
+import org.apache.flink.annotation.Internal;
+import org.apache.flink.api.common.serialization.SerializationSchema;
+import org.apache.flink.configuration.ConfigOption;
+import org.apache.flink.configuration.Configuration;
+import org.apache.flink.table.api.ValidationException;
+import org.apache.flink.table.connector.format.SinkFormat;
+import org.apache.flink.table.connector.sink.DynamicTableSink;
+import org.apache.flink.table.data.RowData;
+import org.apache.flink.table.factories.DynamicTableSinkFactory;
+import org.apache.flink.table.factories.FactoryUtil;
+import org.apache.flink.table.factories.SerializationFormatFactory;
+
+import java.util.Set;
+import java.util.function.Supplier;
+import java.util.stream.Collectors;
+import java.util.stream.Stream;
+
+import static org.apache.flink.streaming.connectors.elasticsearch.table.ElasticsearchOptions.BULK_FLASH_MAX_SIZE_OPTION;
+import static org.apache.flink.streaming.connectors.elasticsearch.table.ElasticsearchOptions.BULK_FLUSH_BACKOFF_DELAY_OPTION;
+import static org.apache.flink.streaming.connectors.elasticsearch.table.ElasticsearchOptions.BULK_FLUSH_BACKOFF_MAX_RETRIES_OPTION;
+import static org.apache.flink.streaming.connectors.elasticsearch.table.ElasticsearchOptions.BULK_FLUSH_BACKOFF_TYPE_OPTION;
+import static org.apache.flink.streaming.connectors.elasticsearch.table.ElasticsearchOptions.BULK_FLUSH_INTERVAL_OPTION;
+import static org.apache.flink.streaming.connectors.elasticsearch.table.ElasticsearchOptions.BULK_FLUSH_MAX_ACTIONS_OPTION;
+import static org.apache.flink.streaming.connectors.elasticsearch.table.ElasticsearchOptions.CONNECTION_MAX_RETRY_TIMEOUT_OPTION;
+import static org.apache.flink.streaming.connectors.elasticsearch.table.ElasticsearchOptions.CONNECTION_PATH_PREFIX;
+import static org.apache.flink.streaming.connectors.elasticsearch.table.ElasticsearchOptions.DOCUMENT_TYPE_OPTION;
+import static org.apache.flink.streaming.connectors.elasticsearch.table.ElasticsearchOptions.FAILURE_HANDLER_OPTION;
+import static org.apache.flink.streaming.connectors.elasticsearch.table.ElasticsearchOptions.FLUSH_ON_CHECKPOINT_OPTION;
+import static org.apache.flink.streaming.connectors.elasticsearch.table.ElasticsearchOptions.FORMAT_OPTION;
+import static org.apache.flink.streaming.connectors.elasticsearch.table.ElasticsearchOptions.HOSTS_OPTION;
+import static org.apache.flink.streaming.connectors.elasticsearch.table.ElasticsearchOptions.INDEX_OPTION;
+import static org.apache.flink.streaming.connectors.elasticsearch.table.ElasticsearchOptions.KEY_DELIMITER_OPTION;
+
+/**
+ * A {@link DynamicTableSinkFactory} for discovering {@link Elasticsearch6DynamicSink}.
+ */
+@Internal
+public class Elasticsearch6DynamicSinkFactory implements DynamicTableSinkFactory {
+	private static final Set<ConfigOption<?>> requiredOptions = Stream.of(
+		HOSTS_OPTION,
+		INDEX_OPTION,
+		DOCUMENT_TYPE_OPTION
+	).collect(Collectors.toSet());
+	private static final Set<ConfigOption<?>> optionalOptions = Stream.of(
+		KEY_DELIMITER_OPTION,
+		FAILURE_HANDLER_OPTION,
+		FLUSH_ON_CHECKPOINT_OPTION,
+		BULK_FLASH_MAX_SIZE_OPTION,
+		BULK_FLUSH_MAX_ACTIONS_OPTION,
+		BULK_FLUSH_INTERVAL_OPTION,
+		BULK_FLUSH_BACKOFF_TYPE_OPTION,
+		BULK_FLUSH_BACKOFF_MAX_RETRIES_OPTION,
+		BULK_FLUSH_BACKOFF_DELAY_OPTION,
+		CONNECTION_MAX_RETRY_TIMEOUT_OPTION,
+		CONNECTION_PATH_PREFIX,
+		FORMAT_OPTION
+	).collect(Collectors.toSet());
+
+	@Override
+	public DynamicTableSink createDynamicTableSink(Context context) {
+		ElasticsearchValidationUtils.va
[jira] [Created] (FLINK-17791) Combine collecting sink and iterator to support collecting query results under all execution and network environments
Caizhi Weng created FLINK-17791:

Summary: Combine collecting sink and iterator to support collecting query results under all execution and network environments
Key: FLINK-17791
URL: https://issues.apache.org/jira/browse/FLINK-17791
Project: Flink
Issue Type: Sub-task
Components: API / DataStream, Table SQL / Planner
Reporter: Caizhi Weng
Fix For: 1.11.0

After introducing the specialized collecting sink and iterator, the last thing we need to do is to combine them in the Table / DataStream API so that the whole collecting mechanism works for users.

-- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (FLINK-17790) flink-connector-kafka-base does not compile on Java11
Robert Metzger created FLINK-17790:

Summary: flink-connector-kafka-base does not compile on Java11
Key: FLINK-17790
URL: https://issues.apache.org/jira/browse/FLINK-17790
Project: Flink
Issue Type: Bug
Components: Connectors / Kafka, Table SQL / Ecosystem
Affects Versions: 1.11.0
Reporter: Robert Metzger
Fix For: 1.11.0

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=1657&view=logs&j=946871de-358d-5815-3994-8175615bc253&t=e0240c62-4570-5d1c-51af-dd63d2093da1

[ERROR] /__w/3/s/flink-connectors/flink-connector-kafka-base/src/main/java/org/apache/flink/streaming/connectors/kafka/table/KafkaOptions.java:[271,41] incompatible types: cannot infer type-variable(s) U,T,T,T,T (argument mismatch; bad return type in lambda expression java.util.Optional cannot be converted to java.util.Optional>)
[INFO] 1 error
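This class of javac error, where a type variable that one JDK's compiler happens to infer is rejected by another across chained generic calls with wildcards, is commonly worked around by pinning the type argument with an explicit type witness. The following is a minimal sketch of that workaround pattern, not the actual KafkaOptions code or fix:

```java
import java.util.List;
import java.util.Optional;

public class TypeWitnessDemo {
    // Generics are invariant, so Optional<List<String>> and Optional<List<?>>
    // are unrelated types. When inference resolves map()'s result to the former
    // but the context requires the latter, an explicit type witness on the call
    // removes the ambiguity.
    static Optional<List<?>> widen(Optional<List<String>> in) {
        return in.<List<?>>map(l -> l); // type witness pins U = List<?>
    }

    public static void main(String[] args) {
        System.out.println(widen(Optional.of(List.of("a", "b"))).isPresent()); // true
    }
}
```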
[jira] [Created] (FLINK-17789) DelegatingConfiguration should remove the prefix instead of adding it in toMap
Jingsong Lee created FLINK-17789:

Summary: DelegatingConfiguration should remove the prefix instead of adding it in toMap
Key: FLINK-17789
URL: https://issues.apache.org/jira/browse/FLINK-17789
Project: Flink
Issue Type: Bug
Components: API / Core
Reporter: Jingsong Lee
Fix For: 1.11.0

{code:java}
Configuration conf = new Configuration();
conf.setString("k0", "v0");
conf.setString("prefix.k1", "v1");

DelegatingConfiguration dc = new DelegatingConfiguration(conf, "prefix.");

System.out.println(dc.getString("k0", "empty")); // empty
System.out.println(dc.getString("k1", "empty")); // v1
System.out.println(dc.toMap().get("k1")); // should be v1, but is null
System.out.println(dc.toMap().get("prefix.prefix.k1")); // should be null, but is v1
{code}

-- This message was sent by Atlassian Jira (v8.3.4#803005)
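The fix the issue asks for is that toMap() on a prefixed view should keep only the keys under the prefix and strip it, so the map view agrees with getString(). A minimal standalone sketch of that intended semantics, using plain maps rather than the actual DelegatingConfiguration class:

```java
import java.util.HashMap;
import java.util.Map;

public class PrefixedView {
    // toMap() for a prefixed view of `backing`: keep only keys under `prefix`
    // and remove the prefix, matching the keys getString() resolves.
    static Map<String, String> toMap(Map<String, String> backing, String prefix) {
        Map<String, String> view = new HashMap<>();
        for (Map.Entry<String, String> e : backing.entrySet()) {
            if (e.getKey().startsWith(prefix)) {
                view.put(e.getKey().substring(prefix.length()), e.getValue());
            }
        }
        return view;
    }

    public static void main(String[] args) {
        Map<String, String> conf = new HashMap<>();
        conf.put("k0", "v0");
        conf.put("prefix.k1", "v1");
        Map<String, String> view = toMap(conf, "prefix.");
        System.out.println(view.get("k1"));               // v1
        System.out.println(view.get("prefix.prefix.k1")); // null
    }
}
```

With this behavior both lookups from the issue's snippet come out as the reporter expects: "k1" resolves to "v1" and "prefix.prefix.k1" is absent.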
[jira] [Assigned] (FLINK-17787) BucketStateSerializerTest fails on output mismatch
[ https://issues.apache.org/jira/browse/FLINK-17787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Robert Metzger reassigned FLINK-17787:
Assignee: Aljoscha Krettek

> BucketStateSerializerTest fails on output mismatch
>
> Key: FLINK-17787
> URL: https://issues.apache.org/jira/browse/FLINK-17787
> Project: Flink
> Issue Type: Bug
> Components: API / DataStream, Tests
> Affects Versions: 1.11.0
> Reporter: Robert Metzger
> Assignee: Aljoscha Krettek
> Priority: Blocker
> Labels: test-stability
> Fix For: 1.11.0
>
> CI: https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=1653&view=logs&j=0da23115-68bb-5dcd-192c-bd4c8adebde1&t=4ed44b66-cdd6-5dcf-5f6a-88b07dda665d
> {code}
> [INFO] Results:
> [INFO]
> [ERROR] Failures:
> [ERROR] BucketStateSerializerTest.testDeserializationEmpty:139 expected: but was:
> [ERROR] BucketStateSerializerTest.testDeserializationFullNoInProgress:260->testDeserializationFull:289 expected: but was:
> [ERROR] BucketStateSerializerTest.testDeserializationOnlyInProgress:184 expected: but was:
> [ERROR] Errors:
> [ERROR] BucketStateSerializerTest.testDeserializationFull:255->testDeserializationFull:284->restoreBucket:332 » FileNotFound
> [INFO]
> [ERROR] Tests run: 1491, Failures: 3, Errors: 1, Skipped: 53
> 2020-05-17T21:04:39.6023179Z [ERROR] Tests run: 8, Failures: 3, Errors: 1, Skipped: 4, Time elapsed: 0.508 s <<< FAILURE! - in org.apache.flink.streaming.api.functions.sink.filesystem.BucketStateSerializerTest
> 2020-05-17T21:04:39.6024418Z [ERROR] testDeserializationOnlyInProgress[Previous Version = 1](org.apache.flink.streaming.api.functions.sink.filesystem.BucketStateSerializerTest) Time elapsed: 0.416 s <<< FAILURE!
> 2020-05-17T21:04:39.6025661Z org.junit.ComparisonFailure: expected: but was:
> 2020-05-17T21:04:39.6026162Z at org.junit.Assert.assertEquals(Assert.java:115)
> 2020-05-17T21:04:39.6026509Z at org.junit.Assert.assertEquals(Assert.java:144)
> 2020-05-17T21:04:39.6027088Z at org.apache.flink.streaming.api.functions.sink.filesystem.BucketStateSerializerTest.testDeserializationOnlyInProgress(BucketStateSerializerTest.java:184)
> 2020-05-17T21:04:39.6027655Z at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 2020-05-17T21:04:39.6028099Z at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 2020-05-17T21:04:39.6028587Z at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 2020-05-17T21:04:39.6029077Z at java.lang.reflect.Method.invoke(Method.java:498)
> 2020-05-17T21:04:39.6029717Z at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> 2020-05-17T21:04:39.6030222Z at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> 2020-05-17T21:04:39.6030724Z at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> 2020-05-17T21:04:39.6031206Z at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> 2020-05-17T21:04:39.6031670Z at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
> 2020-05-17T21:04:39.6032129Z at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
> 2020-05-17T21:04:39.6033125Z at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
> 2020-05-17T21:04:39.6033818Z at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
> 2020-05-17T21:04:39.6034557Z at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
> 2020-05-17T21:04:39.6035086Z at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
> 2020-05-17T21:04:39.6035502Z at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
> 2020-05-17T21:04:39.6035937Z at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
> 2020-05-17T21:04:39.6036360Z at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
> 2020-05-17T21:04:39.6036732Z at org.junit.runners.Suite.runChild(Suite.java:128)
> 2020-05-17T21:04:39.6037236Z at org.junit.runners.Suite.runChild(Suite.java:27)
> 2020-05-17T21:04:39.6037625Z at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
> 2020-05-17T21:04:39.6038028Z at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
> 2020-05-17T21:04:39.6038471Z at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
> 2020-05-17T21:04:39.6038900Z at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
> 2020-05-17T21:04:39.6039471Z at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
> 2020-05-17T21
[jira] [Resolved] (FLINK-17787) BucketStateSerializerTest fails on output mismatch
[ https://issues.apache.org/jira/browse/FLINK-17787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Robert Metzger resolved FLINK-17787.
Resolution: Fixed

Yep: https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=1655&view=results

> BucketStateSerializerTest fails on output mismatch
>
> Key: FLINK-17787
> URL: https://issues.apache.org/jira/browse/FLINK-17787
> Project: Flink
> Issue Type: Bug
> Components: API / DataStream, Tests
> Affects Versions: 1.11.0
> Reporter: Robert Metzger
> Priority: Blocker
> Labels: test-stability
> Fix For: 1.11.0
[jira] [Commented] (FLINK-17787) BucketStateSerializerTest fails on output mismatch
[ https://issues.apache.org/jira/browse/FLINK-17787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17109940#comment-17109940 ]

Robert Metzger commented on FLINK-17787:

This was probably resolved through https://github.com/apache/flink/commit/f33819b95f8d0eef5dd6a00c86b5cff6aea24e87

> BucketStateSerializerTest fails on output mismatch
>
> Key: FLINK-17787
> URL: https://issues.apache.org/jira/browse/FLINK-17787
> Project: Flink
> Issue Type: Bug
> Components: API / DataStream, Tests
> Affects Versions: 1.11.0
> Reporter: Robert Metzger
> Priority: Blocker
> Labels: test-stability
> Fix For: 1.11.0
[jira] [Commented] (FLINK-17768) UnalignedCheckpointITCase.shouldPerformUnalignedCheckpointOnLocalAndRemoteChannel is unstable
[ https://issues.apache.org/jira/browse/FLINK-17768?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17109939#comment-17109939 ]

Piotr Nowojski commented on FLINK-17768:

Test disabled on master via {{591aebcdd9}}

> UnalignedCheckpointITCase.shouldPerformUnalignedCheckpointOnLocalAndRemoteChannel is unstable
>
> Key: FLINK-17768
> URL: https://issues.apache.org/jira/browse/FLINK-17768
> Project: Flink
> Issue Type: Bug
> Components: Runtime / Checkpointing
> Affects Versions: 1.11.0
> Reporter: Dian Fu
> Assignee: Piotr Nowojski
> Priority: Blocker
> Labels: test-stability
> Fix For: 1.11.0
>
> UnalignedCheckpointITCase.shouldPerformUnalignedCheckpointOnLocalAndRemoteChannel and shouldPerformUnalignedCheckpointOnParallelRemoteChannel failed in Azure:
> {code}
> 2020-05-16T12:41:32.3546620Z [ERROR] shouldPerformUnalignedCheckpointOnLocalAndRemoteChannel(org.apache.flink.test.checkpointing.UnalignedCheckpointITCase) Time elapsed: 18.865 s <<< ERROR!
> 2020-05-16T12:41:32.3548739Z java.util.concurrent.ExecutionException: org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
> 2020-05-16T12:41:32.3550177Z at java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357)
> 2020-05-16T12:41:32.3551416Z at java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1908)
> 2020-05-16T12:41:32.3552959Z at org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.java:1665)
> 2020-05-16T12:41:32.3554979Z at org.apache.flink.streaming.api.environment.LocalStreamEnvironment.execute(LocalStreamEnvironment.java:74)
> 2020-05-16T12:41:32.3556584Z at org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.java:1645)
> 2020-05-16T12:41:32.3558068Z at org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.java:1627)
> 2020-05-16T12:41:32.3559431Z at org.apache.flink.test.checkpointing.UnalignedCheckpointITCase.execute(UnalignedCheckpointITCase.java:158)
> 2020-05-16T12:41:32.3560954Z at org.apache.flink.test.checkpointing.UnalignedCheckpointITCase.shouldPerformUnalignedCheckpointOnLocalAndRemoteChannel(UnalignedCheckpointITCase.java:145)
> 2020-05-16T12:41:32.3562203Z at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 2020-05-16T12:41:32.3563433Z at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 2020-05-16T12:41:32.3564846Z at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 2020-05-16T12:41:32.3565894Z at java.lang.reflect.Method.invoke(Method.java:498)
> 2020-05-16T12:41:32.3566870Z at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> 2020-05-16T12:41:32.3568064Z at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> 2020-05-16T12:41:32.3569727Z at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> 2020-05-16T12:41:32.3570818Z at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> 2020-05-16T12:41:32.3571840Z at org.junit.rules.Verifier$1.evaluate(Verifier.java:35)
> 2020-05-16T12:41:32.3572771Z at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
> 2020-05-16T12:41:32.3574008Z at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
> 2020-05-16T12:41:32.3575406Z at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
> 2020-05-16T12:41:32.3576476Z at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> 2020-05-16T12:41:32.3577253Z at java.lang.Thread.run(Thread.java:748)
> 2020-05-16T12:41:32.3578228Z Caused by: org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
> 2020-05-16T12:41:32.3579520Z at org.apache.flink.runtime.jobmaster.JobResult.toJobExecutionResult(JobResult.java:147)
> 2020-05-16T12:41:32.3580935Z at org.apache.flink.client.program.PerJobMiniClusterFactory$PerJobMiniClusterJobClient.lambda$getJobExecutionResult$2(PerJobMiniClusterFactory.java:186)
> 2020-05-16T12:41:32.3582361Z at java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:616)
> 2020-05-16T12:41:32.3583456Z at java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:591)
> 2020-05-16T12:41:32.3584816Z at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
> 2020-05-16T12:41:32.3585874Z at java.util.conc
[jira] [Commented] (FLINK-17730) HadoopS3RecoverableWriterITCase.testRecoverAfterMultiplePersistsStateWithMultiPart times out
[ https://issues.apache.org/jira/browse/FLINK-17730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17109935#comment-17109935 ]

Robert Metzger commented on FLINK-17730:

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=1630&view=logs&j=d44f43ce-542c-597d-bf94-b0718c71e5e8&t=34f486e1-e1e4-5dd2-9c06-bfdd9b9c74a8

> HadoopS3RecoverableWriterITCase.testRecoverAfterMultiplePersistsStateWithMultiPart times out
>
> Key: FLINK-17730
> URL: https://issues.apache.org/jira/browse/FLINK-17730
> Project: Flink
> Issue Type: Bug
> Components: Build System / Azure Pipelines, FileSystems, Tests
> Reporter: Robert Metzger
> Assignee: Robert Metzger
> Priority: Major
> Labels: test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=1374&view=logs&j=d44f43ce-542c-597d-bf94-b0718c71e5e8&t=34f486e1-e1e4-5dd2-9c06-bfdd9b9c74a8
> After 5 minutes:
> {code}
> 2020-05-15T06:56:38.1688341Z "main" #1 prio=5 os_prio=0 tid=0x7fa10800b800 nid=0x1161 runnable [0x7fa110959000]
> 2020-05-15T06:56:38.1688709Z java.lang.Thread.State: RUNNABLE
> 2020-05-15T06:56:38.1689028Z at java.net.SocketInputStream.socketRead0(Native Method)
> 2020-05-15T06:56:38.1689496Z at java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
> 2020-05-15T06:56:38.1689921Z at java.net.SocketInputStream.read(SocketInputStream.java:171)
> 2020-05-15T06:56:38.1690316Z at java.net.SocketInputStream.read(SocketInputStream.java:141)
> 2020-05-15T06:56:38.1690723Z at sun.security.ssl.InputRecord.readFully(InputRecord.java:465)
> 2020-05-15T06:56:38.1691196Z at sun.security.ssl.InputRecord.readV3Record(InputRecord.java:593)
> 2020-05-15T06:56:38.1691608Z at sun.security.ssl.InputRecord.read(InputRecord.java:532)
> 2020-05-15T06:56:38.1692023Z at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:975)
> 2020-05-15T06:56:38.1692558Z - locked <0xb94644f8> (a java.lang.Object)
> 2020-05-15T06:56:38.1692946Z at sun.security.ssl.SSLSocketImpl.readDataRecord(SSLSocketImpl.java:933)
> 2020-05-15T06:56:38.1693371Z at sun.security.ssl.AppInputStream.read(AppInputStream.java:105)
> 2020-05-15T06:56:38.1694151Z - locked <0xb9464d20> (a sun.security.ssl.AppInputStream)
> 2020-05-15T06:56:38.1694908Z at org.apache.http.impl.io.SessionInputBufferImpl.streamRead(SessionInputBufferImpl.java:137)
> 2020-05-15T06:56:38.1695475Z at org.apache.http.impl.io.SessionInputBufferImpl.read(SessionInputBufferImpl.java:198)
> 2020-05-15T06:56:38.1696007Z at org.apache.http.impl.io.ContentLengthInputStream.read(ContentLengthInputStream.java:176)
> 2020-05-15T06:56:38.1696509Z at org.apache.http.conn.EofSensorInputStream.read(EofSensorInputStream.java:135)
> 2020-05-15T06:56:38.1696993Z at com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:90)
> 2020-05-15T06:56:38.1697466Z at com.amazonaws.event.ProgressInputStream.read(ProgressInputStream.java:180)
> 2020-05-15T06:56:38.1698069Z at com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:90)
> 2020-05-15T06:56:38.1698567Z at com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:90)
> 2020-05-15T06:56:38.1699041Z at com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:90)
> 2020-05-15T06:56:38.1699624Z at com.amazonaws.event.ProgressInputStream.read(ProgressInputStream.java:180)
> 2020-05-15T06:56:38.1700090Z at com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:90)
> 2020-05-15T06:56:38.1700584Z at com.amazonaws.util.LengthCheckInputStream.read(LengthCheckInputStream.java:107)
> 2020-05-15T06:56:38.1701282Z at com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:90)
> 2020-05-15T06:56:38.1701800Z at com.amazonaws.services.s3.internal.S3AbortableInputStream.read(S3AbortableInputStream.java:125)
> 2020-05-15T06:56:38.1702328Z at com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:90)
> 2020-05-15T06:56:38.1702804Z at org.apache.hadoop.fs.s3a.S3AInputStream.lambda$read$3(S3AInputStream.java:445)
> 2020-05-15T06:56:38.1703270Z at org.apache.hadoop.fs.s3a.S3AInputStream$$Lambda$42/1204178174.execute(Unknown Source)
> 2020-05-15T06:56:38.1703677Z at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:109)
> 2020-05-15T06:56:38.1704090Z at org.apache.hadoop.fs.s3a.Invoker.lambda$retry$3(Invoker.java:260)
> 2020-05-15T06:56:38.1704607Z at org.apache.hadoop.fs.s3a.Invoker$$Lambda$23/1991724700.execute(Unknown Source)
> 2020-05-15T06:56:38.1705115Z at org.apache.hadoop.fs.s3a.Invoker.re
[jira] [Commented] (FLINK-16947) ArtifactResolutionException: Could not transfer artifact. Entry [...] has not been leased from this pool
[ https://issues.apache.org/jira/browse/FLINK-16947?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17109934#comment-17109934 ] Robert Metzger commented on FLINK-16947: https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=1634&view=logs&j=fc5181b0-e452-5c8f-68de-1097947f6483&t=6b04ca5f-0b52-511d-19c9-52bf0d9fbdfa > ArtifactResolutionException: Could not transfer artifact. Entry [...] has > not been leased from this pool > - > > Key: FLINK-16947 > URL: https://issues.apache.org/jira/browse/FLINK-16947 > Project: Flink > Issue Type: Bug > Components: Build System / Azure Pipelines >Reporter: Piotr Nowojski >Assignee: Robert Metzger >Priority: Blocker > Labels: test-stability > Fix For: 1.12.0 > > > https://dev.azure.com/rmetzger/Flink/_build/results?buildId=6982&view=logs&j=c88eea3b-64a0-564d-0031-9fdcd7b8abee&t=1e2bbe5b-4657-50be-1f07-d84bfce5b1f5 > Build of flink-metrics-availability-test failed with: > {noformat} > [ERROR] Failed to execute goal > org.apache.maven.plugins:maven-surefire-plugin:2.22.1:test (end-to-end-tests) > on project flink-metrics-availability-test: Unable to generate classpath: > org.apache.maven.artifact.resolver.ArtifactResolutionException: Could not > transfer artifact org.apache.maven.surefire:surefire-grouper:jar:2.22.1 > from/to google-maven-central > (https://maven-central-eu.storage-download.googleapis.com/maven2/): Entry > [id:13][route:{s}->https://maven-central-eu.storage-download.googleapis.com:443][state:null] > has not been leased from this pool > [ERROR] org.apache.maven.surefire:surefire-grouper:jar:2.22.1 > [ERROR] > [ERROR] from the specified remote repositories: > [ERROR] google-maven-central > (https://maven-central-eu.storage-download.googleapis.com/maven2/, > releases=true, snapshots=false), > [ERROR] apache.snapshots (https://repository.apache.org/snapshots, > releases=false, snapshots=true) > [ERROR] Path to dependency: > [ERROR] 1) dummy:dummy:jar:1.0 > [ERROR] 
2) org.apache.maven.surefire:surefire-junit47:jar:2.22.1 > [ERROR] 3) org.apache.maven.surefire:common-junit48:jar:2.22.1 > [ERROR] 4) org.apache.maven.surefire:surefire-grouper:jar:2.22.1 > [ERROR] -> [Help 1] > [ERROR] > [ERROR] To see the full stack trace of the errors, re-run Maven with the -e > switch. > [ERROR] Re-run Maven using the -X switch to enable full debug logging. > [ERROR] > [ERROR] For more information about the errors and possible solutions, please > read the following articles: > [ERROR] [Help 1] > http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException > [ERROR] > [ERROR] After correcting the problems, you can resume the build with the > command > [ERROR] mvn -rf :flink-metrics-availability-test > {noformat} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (FLINK-12030) KafkaITCase.testMultipleSourcesOnePartition is unstable: This server does not host this topic-partition
[ https://issues.apache.org/jira/browse/FLINK-12030?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17109932#comment-17109932 ] Robert Metzger commented on FLINK-12030: https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=1654&view=logs&j=d44f43ce-542c-597d-bf94-b0718c71e5e8&t=34f486e1-e1e4-5dd2-9c06-bfdd9b9c74a8 > KafkaITCase.testMultipleSourcesOnePartition is unstable: This server does not > host this topic-partition > --- > > Key: FLINK-12030 > URL: https://issues.apache.org/jira/browse/FLINK-12030 > Project: Flink > Issue Type: Bug > Components: Connectors / Kafka, Tests >Affects Versions: 1.11.0 >Reporter: Aljoscha Krettek >Assignee: Jiangjie Qin >Priority: Critical > Labels: test-stability > Fix For: 1.11.0 > > > This is a relevant part from the log: > {code} > 14:11:45,305 INFO org.apache.flink.streaming.connectors.kafka.KafkaITCase >- > > Test > testMetricsAndEndOfStream(org.apache.flink.streaming.connectors.kafka.KafkaITCase) > is running. > > 14:11:45,310 INFO org.apache.flink.streaming.connectors.kafka.KafkaTestBase >- > === > == Writing sequence of 300 into testEndOfStream with p=1 > === > 14:11:45,311 INFO org.apache.flink.streaming.connectors.kafka.KafkaTestBase >- Writing attempt #1 > 14:11:45,316 INFO > org.apache.flink.streaming.connectors.kafka.KafkaTestEnvironmentImpl - > Creating topic testEndOfStream-1 > 14:11:45,863 WARN > org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer - Property > [transaction.timeout.ms] not specified. Setting it to 360 ms > 14:11:45,910 WARN > org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer - Using > AT_LEAST_ONCE semantic, but checkpointing is not enabled. Switching to NONE > semantic. 
> 14:11:45,921 INFO > org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer - Starting > FlinkKafkaInternalProducer (1/1) to produce into default topic > testEndOfStream-1 > 14:11:46,006 ERROR org.apache.flink.streaming.connectors.kafka.KafkaTestBase >- Write attempt failed, trying again > org.apache.flink.runtime.client.JobExecutionException: Job execution failed. > at > org.apache.flink.runtime.jobmaster.JobResult.toJobExecutionResult(JobResult.java:146) > at > org.apache.flink.runtime.minicluster.MiniCluster.executeJobBlocking(MiniCluster.java:638) > at > org.apache.flink.streaming.util.TestStreamEnvironment.execute(TestStreamEnvironment.java:79) > at > org.apache.flink.streaming.connectors.kafka.KafkaConsumerTestBase.writeSequence(KafkaConsumerTestBase.java:1918) > at > org.apache.flink.streaming.connectors.kafka.KafkaConsumerTestBase.runEndOfStreamTest(KafkaConsumerTestBase.java:1537) > at > org.apache.flink.streaming.connectors.kafka.KafkaITCase.testMetricsAndEndOfStream(KafkaITCase.java:136) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50) > at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47) > at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > at > org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298) > at > org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292) > at java.util.concurrent.FutureTask.run(FutureTask.java:266) > at java.lang.Thread.run(Thread.java:748) 
> Caused by: org.apache.flink.streaming.connectors.kafka.FlinkKafkaException: > Failed to send data to Kafka: This server does not host this topic-partition. > at > org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.checkErroneous(FlinkKafkaProducer.java:1002) > at > org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.flush(FlinkKafkaProducer.java:787) > at > org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.close(FlinkKafkaProducer.java:658) > at > org.apache.flink.api.common.functions.util.FunctionUt
[jira] [Commented] (FLINK-5763) Make savepoints self-contained and relocatable
[ https://issues.apache.org/jira/browse/FLINK-5763?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17109933#comment-17109933 ] Congxian Qiu(klion26) commented on FLINK-5763: -- Add a release note. > Make savepoints self-contained and relocatable > -- > > Key: FLINK-5763 > URL: https://issues.apache.org/jira/browse/FLINK-5763 > Project: Flink > Issue Type: Improvement > Components: Runtime / State Backends >Reporter: Ufuk Celebi >Assignee: Congxian Qiu(klion26) >Priority: Critical > Labels: pull-request-available, usability > Fix For: 1.11.0 > > Time Spent: 10m > Remaining Estimate: 0h > > After a user has triggered a savepoint, a single savepoint file will be > returned as a handle to the savepoint. A savepoint to {{}} creates a > savepoint file like {{/savepoint-}}. > This file contains the metadata of the corresponding checkpoint, but not the > actual program state. While this works well for short term management > (pause-and-resume a job), it makes it hard to manage savepoints over longer > periods of time. > h4. Problems > h5. Scattered Checkpoint Files > For file system based checkpoints (FsStateBackend, RocksDBStateBackend) this > results in the savepoint referencing files from the checkpoint directory > (usually different than ). For users, it is virtually impossible to > tell which checkpoint files belong to a savepoint and which are lingering > around. This can easily lead to accidentally invalidating a savepoint by > deleting checkpoint files. > h5. Savepoints Not Relocatable > Even if a user is able to figure out which checkpoint files belong to a > savepoint, moving these files will invalidate the savepoint as well, because > the metadata file references absolute file paths. > h5. Forced to Use CLI for Disposal > Because of the scattered files, the user is in practice forced to use Flink’s > CLI to dispose a savepoint. 
This should be possible to handle in the scope of > the user’s environment via a file system delete operation. > h4. Proposal > In order to solve the described problems, savepoints should contain all their > state, both metadata and program state, inside a single directory. > Furthermore the metadata must only hold relative references to the checkpoint > files. This makes it obvious which files make up the state of a savepoint and > it is possible to move savepoints around by moving the savepoint directory. > h5. Desired File Layout > Triggering a savepoint to {{}} creates a directory as follows: > {code} > /savepoint-- > +-- _metadata > +-- data- [1 or more] > {code} > We include the JobID in the savepoint directory name in order to give some > hints about which job a savepoint belongs to. > h5. CLI > - Trigger: When triggering a savepoint to {{}} the savepoint > directory will be returned as the handle to the savepoint. > - Restore: Users can restore by pointing to the directory or the _metadata > file. The data files should be required to be in the same directory as the > _metadata file. > - Dispose: The disposal command should be deprecated and eventually removed. > While deprecated, disposal can happen by specifying the directory or the > _metadata file (same as restore). -- This message was sent by Atlassian Jira (v8.3.4#803005)
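The proposal above hinges on the `_metadata` file holding only relative references to the data files. A minimal, self-contained sketch of that idea (hypothetical helper names, not Flink's actual savepoint API): resolving each data-file reference against the savepoint directory at restore time is exactly what makes a moved directory keep working.

```java
import java.nio.file.Path;
import java.nio.file.Paths;

/**
 * Sketch (hypothetical, not Flink's real classes): if the metadata stores
 * only relative names like "data-0001", restore resolves them against
 * whatever directory the savepoint currently lives in, so the whole
 * directory can be relocated freely.
 */
public class RelocatableSavepoint {
    static Path resolveDataFile(Path savepointDir, String relativeRef) {
        // The metadata never records an absolute path; moving the
        // directory therefore moves the referenced state with it.
        return savepointDir.resolve(relativeRef);
    }

    public static void main(String[] args) {
        Path moved = Paths.get("/new/location/savepoint-abc123-0001");
        System.out.println(resolveDataFile(moved, "data-0001"));
    }
}
```

With absolute paths in the metadata (the pre-FLINK-5763 behavior described above), the same move would leave the references dangling.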
[jira] [Updated] (FLINK-5763) Make savepoints self-contained and relocatable
[ https://issues.apache.org/jira/browse/FLINK-5763?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Congxian Qiu(klion26) updated FLINK-5763: - Release Note: With FLINK-5763, savepoints are self-contained and relocatable, so users can move a savepoint from one location to another without any additional manual processing. This feature is currently not supported when entropy injection is enabled. > Make savepoints self-contained and relocatable > -- > > Key: FLINK-5763 > URL: https://issues.apache.org/jira/browse/FLINK-5763 > Project: Flink > Issue Type: Improvement > Components: Runtime / State Backends >Reporter: Ufuk Celebi >Assignee: Congxian Qiu(klion26) >Priority: Critical > Labels: pull-request-available, usability > Fix For: 1.11.0 > > Time Spent: 10m > Remaining Estimate: 0h > > After a user has triggered a savepoint, a single savepoint file will be > returned as a handle to the savepoint. A savepoint to {{}} creates a > savepoint file like {{/savepoint-}}. > This file contains the metadata of the corresponding checkpoint, but not the > actual program state. While this works well for short term management > (pause-and-resume a job), it makes it hard to manage savepoints over longer > periods of time. > h4. Problems > h5. Scattered Checkpoint Files > For file system based checkpoints (FsStateBackend, RocksDBStateBackend) this > results in the savepoint referencing files from the checkpoint directory > (usually different than ). For users, it is virtually impossible to > tell which checkpoint files belong to a savepoint and which are lingering > around. This can easily lead to accidentally invalidating a savepoint by > deleting checkpoint files. > h5. Savepoints Not Relocatable > Even if a user is able to figure out which checkpoint files belong to a > savepoint, moving these files will invalidate the savepoint as well, because > the metadata file references absolute file paths. > h5.
Forced to Use CLI for Disposal > Because of the scattered files, the user is in practice forced to use Flink’s > CLI to dispose a savepoint. This should be possible to handle in the scope of > the user’s environment via a file system delete operation. > h4. Proposal > In order to solve the described problems, savepoints should contain all their > state, both metadata and program state, inside a single directory. > Furthermore the metadata must only hold relative references to the checkpoint > files. This makes it obvious which files make up the state of a savepoint and > it is possible to move savepoints around by moving the savepoint directory. > h5. Desired File Layout > Triggering a savepoint to {{}} creates a directory as follows: > {code} > /savepoint-- > +-- _metadata > +-- data- [1 or more] > {code} > We include the JobID in the savepoint directory name in order to give some > hints about which job a savepoint belongs to. > h5. CLI > - Trigger: When triggering a savepoint to {{}} the savepoint > directory will be returned as the handle to the savepoint. > - Restore: Users can restore by pointing to the directory or the _metadata > file. The data files should be required to be in the same directory as the > _metadata file. > - Dispose: The disposal command should be deprecated and eventually removed. > While deprecated, disposal can happen by specifying the directory or the > _metadata file (same as restore). -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [flink] zentol commented on a change in pull request #12138: [FLINK-16611] [datadog-metrics] Make number of metrics per request configurable
zentol commented on a change in pull request #12138: URL: https://github.com/apache/flink/pull/12138#discussion_r426394581 ## File path: flink-metrics/flink-metrics-datadog/src/main/java/org/apache/flink/metrics/datadog/DatadogHttpReporter.java ## @@ -157,6 +181,11 @@ private void addGaugesAndUnregisterOnException(DSeries request) { // Flink uses Gauge to store many types other than Number g.getMetricValue(); request.addGauge(g); + ++currentCount; + if (currentCount % maxMetricsPerRequestValue == 0 || currentCount >= totalGauges) { + client.send(request); Review comment: yes, you'd of course have to wrap the subList in a DSeries again, requiring a new constructor.
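The review thread above is about capping how many metrics go into each HTTP request. A minimal, self-contained sketch of the chunking idea under discussion (plain generic lists stand in for Datadog's `DSeries`; in the reporter itself each sub-list would be wrapped in a new `DSeries` and passed to `client.send`, as the comment suggests):

```java
import java.util.ArrayList;
import java.util.List;

/**
 * Sketch: split a metric series into bounded chunks so that each
 * outgoing request carries at most maxPerRequest metrics. Generic
 * lists stand in for Datadog's DSeries here.
 */
public class MetricChunker {
    static <T> List<List<T>> chunk(List<T> series, int maxPerRequest) {
        List<List<T>> chunks = new ArrayList<>();
        for (int start = 0; start < series.size(); start += maxPerRequest) {
            int end = Math.min(start + maxPerRequest, series.size());
            // Copy the subList so each chunk is independent of the backing list.
            chunks.add(new ArrayList<>(series.subList(start, end)));
        }
        return chunks;
    }

    public static void main(String[] args) {
        List<Integer> gauges = List.of(1, 2, 3, 4, 5);
        // Two full chunks of 2 metrics, plus one trailing chunk of 1.
        System.out.println(chunk(gauges, 2)); // prints [[1, 2], [3, 4], [5]]
    }
}
```

Iterating by offset rather than counting with `% maxMetricsPerRequestValue` also handles the trailing partial batch without the extra `currentCount >= totalGauges` condition from the diff.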
[jira] [Commented] (FLINK-17787) BucketStateSerializerTest fails on output mismatch
[ https://issues.apache.org/jira/browse/FLINK-17787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17109931#comment-17109931 ] Robert Metzger commented on FLINK-17787: https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=1654&view=logs&j=0da23115-68bb-5dcd-192c-bd4c8adebde1&t=4ed44b66-cdd6-5dcf-5f6a-88b07dda665d > BucketStateSerializerTest fails on output mismatch > -- > > Key: FLINK-17787 > URL: https://issues.apache.org/jira/browse/FLINK-17787 > Project: Flink > Issue Type: Bug > Components: API / DataStream, Tests >Affects Versions: 1.11.0 >Reporter: Robert Metzger >Priority: Blocker > Labels: test-stability > Fix For: 1.11.0 > > > CI: > https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=1653&view=logs&j=0da23115-68bb-5dcd-192c-bd4c8adebde1&t=4ed44b66-cdd6-5dcf-5f6a-88b07dda665d > {code} > [INFO] Results: > [INFO] > [ERROR] Failures: > [ERROR] BucketStateSerializerTest.testDeserializationEmpty:139 > expected: but > was: > [ERROR] > BucketStateSerializerTest.testDeserializationFullNoInProgress:260->testDeserializationFull:289 > expected: but > was: > [ERROR] BucketStateSerializerTest.testDeserializationOnlyInProgress:184 > expected: but > was: > [ERROR] Errors: > [ERROR] > BucketStateSerializerTest.testDeserializationFull:255->testDeserializationFull:284->restoreBucket:332 > » FileNotFound > [INFO] > [ERROR] Tests run: 1491, Failures: 3, Errors: 1, Skipped: 53 > 2020-05-17T21:04:39.6023179Z [ERROR] Tests run: 8, Failures: 3, Errors: 1, > Skipped: 4, Time elapsed: 0.508 s <<< FAILURE! - in > org.apache.flink.streaming.api.functions.sink.filesystem.BucketStateSerializerTest > 2020-05-17T21:04:39.6024418Z [ERROR] > testDeserializationOnlyInProgress[Previous Version = > 1](org.apache.flink.streaming.api.functions.sink.filesystem.BucketStateSerializerTest) > Time elapsed: 0.416 s <<< FAILURE! 
> 2020-05-17T21:04:39.6025661Z org.junit.ComparisonFailure: > expected: but > was: > 2020-05-17T21:04:39.6026162Z at > org.junit.Assert.assertEquals(Assert.java:115) > 2020-05-17T21:04:39.6026509Z at > org.junit.Assert.assertEquals(Assert.java:144) > 2020-05-17T21:04:39.6027088Z at > org.apache.flink.streaming.api.functions.sink.filesystem.BucketStateSerializerTest.testDeserializationOnlyInProgress(BucketStateSerializerTest.java:184) > 2020-05-17T21:04:39.6027655Z at > sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > 2020-05-17T21:04:39.6028099Z at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > 2020-05-17T21:04:39.6028587Z at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > 2020-05-17T21:04:39.6029077Z at > java.lang.reflect.Method.invoke(Method.java:498) > 2020-05-17T21:04:39.6029717Z at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50) > 2020-05-17T21:04:39.6030222Z at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > 2020-05-17T21:04:39.6030724Z at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47) > 2020-05-17T21:04:39.6031206Z at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > 2020-05-17T21:04:39.6031670Z at > org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325) > 2020-05-17T21:04:39.6032129Z at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78) > 2020-05-17T21:04:39.6033125Z at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57) > 2020-05-17T21:04:39.6033818Z at > org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) > 2020-05-17T21:04:39.6034557Z at > org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) > 2020-05-17T21:04:39.6035086Z at > org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) > 2020-05-17T21:04:39.6035502Z at 
> org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) > 2020-05-17T21:04:39.6035937Z at > org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268) > 2020-05-17T21:04:39.6036360Z at > org.junit.runners.ParentRunner.run(ParentRunner.java:363) > 2020-05-17T21:04:39.6036732Z at > org.junit.runners.Suite.runChild(Suite.java:128) > 2020-05-17T21:04:39.6037236Z at > org.junit.runners.Suite.runChild(Suite.java:27) > 2020-05-17T21:04:39.6037625Z at > org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) > 2020-05-17T21:04:39.6038028Z at > org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) > 2020-05-17T21:04:39.6038471Z at > org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) > 2020-05-17T21:04:39.6038900Z at > org.junit.runners.ParentRunner.access$
[jira] [Commented] (FLINK-16572) CheckPubSubEmulatorTest is flaky on Azure
[ https://issues.apache.org/jira/browse/FLINK-16572?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17109928#comment-17109928 ] Robert Metzger commented on FLINK-16572: https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=1654&view=logs&j=c88eea3b-64a0-564d-0031-9fdcd7b8abee&t=1e2bbe5b-4657-50be-1f07-d84bfce5b1f5 > CheckPubSubEmulatorTest is flaky on Azure > - > > Key: FLINK-16572 > URL: https://issues.apache.org/jira/browse/FLINK-16572 > Project: Flink > Issue Type: Bug > Components: Connectors / Google Cloud PubSub, Tests >Affects Versions: 1.11.0 >Reporter: Aljoscha Krettek >Assignee: Richard Deurwaarder >Priority: Critical > Labels: pull-request-available, test-stability > Fix For: 1.11.0 > > Time Spent: 10m > Remaining Estimate: 0h > > Log: > https://dev.azure.com/aljoschakrettek/Flink/_build/results?buildId=56&view=logs&j=1f3ed471-1849-5d3c-a34c-19792af4ad16&t=ce095137-3e3b-5f73-4b79-c42d3d5f8283&l=7842
[GitHub] [flink-web] yangjf2019 commented on pull request #268: [FLINK-13343][docs-zh] Translate "Contribute Code" page into Chinese
yangjf2019 commented on pull request #268: URL: https://github.com/apache/flink-web/pull/268#issuecomment-629972237 Hi, @klion26 thank you for your help, I will continue to work hard.
[jira] [Commented] (FLINK-17768) UnalignedCheckpointITCase.shouldPerformUnalignedCheckpointOnLocalAndRemoteChannel is instable
[ https://issues.apache.org/jira/browse/FLINK-17768?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17109926#comment-17109926 ] Robert Metzger commented on FLINK-17768: https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=1647&view=logs&j=5c8e7682-d68f-54d1-16a2-a09310218a49&t=45cc9205-bdb7-5b54-63cd-89fdc0983323 > UnalignedCheckpointITCase.shouldPerformUnalignedCheckpointOnLocalAndRemoteChannel > is instable > - > > Key: FLINK-17768 > URL: https://issues.apache.org/jira/browse/FLINK-17768 > Project: Flink > Issue Type: Bug > Components: Runtime / Checkpointing >Affects Versions: 1.11.0 >Reporter: Dian Fu >Assignee: Piotr Nowojski >Priority: Blocker > Labels: test-stability > Fix For: 1.11.0 > > > UnalignedCheckpointITCase.shouldPerformUnalignedCheckpointOnLocalAndRemoteChannel > and shouldPerformUnalignedCheckpointOnParallelRemoteChannel failed in azure: > {code} > 2020-05-16T12:41:32.3546620Z [ERROR] > shouldPerformUnalignedCheckpointOnLocalAndRemoteChannel(org.apache.flink.test.checkpointing.UnalignedCheckpointITCase) > Time elapsed: 18.865 s <<< ERROR! > 2020-05-16T12:41:32.3548739Z java.util.concurrent.ExecutionException: > org.apache.flink.runtime.client.JobExecutionException: Job execution failed. 
> 2020-05-16T12:41:32.3550177Z at > java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357) > 2020-05-16T12:41:32.3551416Z at > java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1908) > 2020-05-16T12:41:32.3552959Z at > org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.java:1665) > 2020-05-16T12:41:32.3554979Z at > org.apache.flink.streaming.api.environment.LocalStreamEnvironment.execute(LocalStreamEnvironment.java:74) > 2020-05-16T12:41:32.3556584Z at > org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.java:1645) > 2020-05-16T12:41:32.3558068Z at > org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.java:1627) > 2020-05-16T12:41:32.3559431Z at > org.apache.flink.test.checkpointing.UnalignedCheckpointITCase.execute(UnalignedCheckpointITCase.java:158) > 2020-05-16T12:41:32.3560954Z at > org.apache.flink.test.checkpointing.UnalignedCheckpointITCase.shouldPerformUnalignedCheckpointOnLocalAndRemoteChannel(UnalignedCheckpointITCase.java:145) > 2020-05-16T12:41:32.3562203Z at > sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > 2020-05-16T12:41:32.3563433Z at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > 2020-05-16T12:41:32.3564846Z at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > 2020-05-16T12:41:32.3565894Z at > java.lang.reflect.Method.invoke(Method.java:498) > 2020-05-16T12:41:32.3566870Z at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50) > 2020-05-16T12:41:32.3568064Z at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > 2020-05-16T12:41:32.3569727Z at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47) > 2020-05-16T12:41:32.3570818Z at > 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > 2020-05-16T12:41:32.3571840Z at > org.junit.rules.Verifier$1.evaluate(Verifier.java:35) > 2020-05-16T12:41:32.3572771Z at > org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48) > 2020-05-16T12:41:32.3574008Z at > org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298) > 2020-05-16T12:41:32.3575406Z at > org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292) > 2020-05-16T12:41:32.3576476Z at > java.util.concurrent.FutureTask.run(FutureTask.java:266) > 2020-05-16T12:41:32.3577253Z at java.lang.Thread.run(Thread.java:748) > 2020-05-16T12:41:32.3578228Z Caused by: > org.apache.flink.runtime.client.JobExecutionException: Job execution failed. > 2020-05-16T12:41:32.3579520Z at > org.apache.flink.runtime.jobmaster.JobResult.toJobExecutionResult(JobResult.java:147) > 2020-05-16T12:41:32.3580935Z at > org.apache.flink.client.program.PerJobMiniClusterFactory$PerJobMiniClusterJobClient.lambda$getJobExecutionResult$2(PerJobMiniClusterFactory.java:186) > 2020-05-16T12:41:32.3582361Z at > java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:616) > 2020-05-16T12:41:32.3583456Z at > java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:591) > 2020-05-16T12:41:32.3584816Z at > java.util.c
[GitHub] [flink] flinkbot edited a comment on pull request #12132: [FLINK-17593][Connectors/FileSystem] Support arbitrary recovery mechanism for PartFileWriter
flinkbot edited a comment on pull request #12132: URL: https://github.com/apache/flink/pull/12132#issuecomment-628151415 ## CI report: * 449b8494248924ab0c9a4a5187458933902a13a3 Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1626) * 10ef0c696350fcd84866fde27f19ed2a0312ee4b UNKNOWN Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build - `@flinkbot run azure` re-run the last Azure build
[jira] [Commented] (FLINK-12030) KafkaITCase.testMultipleSourcesOnePartition is unstable: This server does not host this topic-partition
[ https://issues.apache.org/jira/browse/FLINK-12030?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17109925#comment-17109925 ] Robert Metzger commented on FLINK-12030: https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=1650&view=logs&j=d44f43ce-542c-597d-bf94-b0718c71e5e8&t=34f486e1-e1e4-5dd2-9c06-bfdd9b9c74a8 > KafkaITCase.testMultipleSourcesOnePartition is unstable: This server does not > host this topic-partition > --- > > Key: FLINK-12030 > URL: https://issues.apache.org/jira/browse/FLINK-12030 > Project: Flink > Issue Type: Bug > Components: Connectors / Kafka, Tests >Affects Versions: 1.11.0 >Reporter: Aljoscha Krettek >Assignee: Jiangjie Qin >Priority: Critical > Labels: test-stability > Fix For: 1.11.0 > > > This is a relevant part from the log: > {code} > 14:11:45,305 INFO org.apache.flink.streaming.connectors.kafka.KafkaITCase >- > > Test > testMetricsAndEndOfStream(org.apache.flink.streaming.connectors.kafka.KafkaITCase) > is running. > > 14:11:45,310 INFO org.apache.flink.streaming.connectors.kafka.KafkaTestBase >- > === > == Writing sequence of 300 into testEndOfStream with p=1 > === > 14:11:45,311 INFO org.apache.flink.streaming.connectors.kafka.KafkaTestBase >- Writing attempt #1 > 14:11:45,316 INFO > org.apache.flink.streaming.connectors.kafka.KafkaTestEnvironmentImpl - > Creating topic testEndOfStream-1 > 14:11:45,863 WARN > org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer - Property > [transaction.timeout.ms] not specified. Setting it to 360 ms > 14:11:45,910 WARN > org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer - Using > AT_LEAST_ONCE semantic, but checkpointing is not enabled. Switching to NONE > semantic. 
> 14:11:45,921 INFO > org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer - Starting > FlinkKafkaInternalProducer (1/1) to produce into default topic > testEndOfStream-1 > 14:11:46,006 ERROR org.apache.flink.streaming.connectors.kafka.KafkaTestBase >- Write attempt failed, trying again > org.apache.flink.runtime.client.JobExecutionException: Job execution failed. > at > org.apache.flink.runtime.jobmaster.JobResult.toJobExecutionResult(JobResult.java:146) > at > org.apache.flink.runtime.minicluster.MiniCluster.executeJobBlocking(MiniCluster.java:638) > at > org.apache.flink.streaming.util.TestStreamEnvironment.execute(TestStreamEnvironment.java:79) > at > org.apache.flink.streaming.connectors.kafka.KafkaConsumerTestBase.writeSequence(KafkaConsumerTestBase.java:1918) > at > org.apache.flink.streaming.connectors.kafka.KafkaConsumerTestBase.runEndOfStreamTest(KafkaConsumerTestBase.java:1537) > at > org.apache.flink.streaming.connectors.kafka.KafkaITCase.testMetricsAndEndOfStream(KafkaITCase.java:136) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50) > at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47) > at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > at > org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298) > at > org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292) > at java.util.concurrent.FutureTask.run(FutureTask.java:266) > at java.lang.Thread.run(Thread.java:748) 
> Caused by: org.apache.flink.streaming.connectors.kafka.FlinkKafkaException: > Failed to send data to Kafka: This server does not host this topic-partition. > at > org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.checkErroneous(FlinkKafkaProducer.java:1002) > at > org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.flush(FlinkKafkaProducer.java:787) > at > org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.close(FlinkKafkaProducer.java:658) > at > org.apache.flink.api.common.functions.util.FunctionUt
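[Editor's note] The log above warns that `transaction.timeout.ms` was not specified and falls back to a default. A minimal sketch of how such producer properties are typically assembled before being handed to a Kafka producer or a Flink Kafka sink; this uses plain `java.util.Properties` only, and the 900000 ms value is purely illustrative, not a recommendation:

```java
import java.util.Properties;

public class ProducerConfigSketch {

    // Build the property set a Kafka producer (or FlinkKafkaProducer-style
    // sink) would receive. "transaction.timeout.ms" is the standard Kafka
    // producer key; setting it explicitly avoids the fallback warning
    // seen in the test log above.
    static Properties producerProperties(String brokers) {
        Properties props = new Properties();
        props.setProperty("bootstrap.servers", brokers);
        props.setProperty("transaction.timeout.ms", "900000"); // illustrative value
        return props;
    }

    public static void main(String[] args) {
        Properties p = producerProperties("localhost:9092");
        if (!"900000".equals(p.getProperty("transaction.timeout.ms"))) {
            throw new AssertionError("transaction timeout not set");
        }
        System.out.println("ok");
    }
}
```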
[GitHub] [flink] flinkbot commented on pull request #12209: [FLINK-17520] [core] Extend CompositeTypeSerializerSnapshot to support migration
flinkbot commented on pull request #12209: URL: https://github.com/apache/flink/pull/12209#issuecomment-629971018 ## CI report: * c91cdd32db0464d8b60d0efc0763078388a15daf UNKNOWN
[GitHub] [flink] AHeise commented on pull request #12186: [FLINK-16383][task] Do not relay notifyCheckpointComplete to closed operators
AHeise commented on pull request #12186: URL: https://github.com/apache/flink/pull/12186#issuecomment-629970819 @flinkbot run azure
[jira] [Commented] (FLINK-9992) FsStorageLocationReferenceTest#testEncodeAndDecode failed in Travis CI
[ https://issues.apache.org/jira/browse/FLINK-9992?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17109923#comment-17109923 ] Robert Metzger commented on FLINK-9992: --- CI https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=1650&view=logs&j=0da23115-68bb-5dcd-192c-bd4c8adebde1&t=4ed44b66-cdd6-5dcf-5f6a-88b07dda665d {code} 2020-05-17T20:28:40.9416617Z java.lang.IllegalArgumentException: java.net.URISyntaxException: Illegal character in hostname at index 6: grb://町㤊䓜[å¤ç£¦â“«ß·Ã³æ¿³à§”Დタ翿äšæ¹°23508/沃䩕椢ხæ»ã¶…â¼¾å€ã“à´ªã¦/Ⓞ梫ῌ侶/䇱⳵㫎⿉㌔䛀å„䩎毗ã ã’©ä²žç „å´/ϓ暋ᾩޱ䕿ɸç¬ä¿–ؾ盿㲆㎨竡 2020-05-17T20:28:40.9418886Zat org.apache.flink.core.fs.Path.initialize(Path.java:247) 2020-05-17T20:28:40.9419552Zat org.apache.flink.core.fs.Path.(Path.java:215) 2020-05-17T20:28:40.9420068Zat org.apache.flink.runtime.state.filesystem.FsStorageLocationReferenceTest.randomPath(FsStorageLocationReferenceTest.java:88) 2020-05-17T20:28:40.9421532Zat org.apache.flink.runtime.state.filesystem.FsStorageLocationReferenceTest.testEncodeAndDecode(FsStorageLocationReferenceTest.java:41) 2020-05-17T20:28:40.9422055Zat sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 2020-05-17T20:28:40.9422503Zat sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 2020-05-17T20:28:40.9422973Zat sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 2020-05-17T20:28:40.9423481Zat java.lang.reflect.Method.invoke(Method.java:498) 2020-05-17T20:28:40.9423897Zat org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50) 2020-05-17T20:28:40.9424356Zat org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) 2020-05-17T20:28:40.9424819Zat org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47) 2020-05-17T20:28:40.9425297Zat org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) 2020-05-17T20:28:40.9425705Zat 
org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55) 2020-05-17T20:28:40.9426093Zat org.junit.rules.RunRules.evaluate(RunRules.java:20) 2020-05-17T20:28:40.9426713Zat org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325) 2020-05-17T20:28:40.9427146Zat org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78) 2020-05-17T20:28:40.9427720Zat org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57) 2020-05-17T20:28:40.9428132Zat org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) 2020-05-17T20:28:40.9428521Zat org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) 2020-05-17T20:28:40.9428930Zat org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) 2020-05-17T20:28:40.9429334Zat org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) 2020-05-17T20:28:40.9429734Zat org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268) 2020-05-17T20:28:40.9430259Zat org.junit.runners.ParentRunner.run(ParentRunner.java:363) 2020-05-17T20:28:40.9430691Zat org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365) 2020-05-17T20:28:40.9431262Zat org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273) 2020-05-17T20:28:40.9431769Zat org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238) 2020-05-17T20:28:40.9432223Zat org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159) 2020-05-17T20:28:40.9432722Zat org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384) 2020-05-17T20:28:40.9433582Zat org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345) 2020-05-17T20:28:40.9434253Zat org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126) 2020-05-17T20:28:40.9434937Zat org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418) 2020-05-17T20:28:40.9436429Z Caused by: 
java.net.URISyntaxException: Illegal character in hostname at index 6: grb://町㤊䓜[å¤ç£¦â“«ß·Ã³æ¿³à§”Დタ翿äšæ¹°23508/沃䩕椢ხæ»ã¶…â¼¾å€ã“à´ªã¦/Ⓞ梫ῌ侶/䇱⳵㫎⿉㌔䛀å„䩎毗ã ã’©ä²žç „å´/ϓ暋ᾩޱ䕿ɸç¬ä¿–ؾ盿㲆㎨竡 2020-05-17T20:28:40.9436974Zat java.net.URI$Parser.fail(URI.java:2848) 2020-05-17T20:28:40.9437307Zat java.net.URI$Parser.parseHostname(URI.java:3387) 2020-05-17T20:28:40.9437741Zat java.net.URI$Parser.parseServer(URI.java:3236) 2020-05-17T20:28:40.9438088Zat java.net.URI$Parser.parseAuthority(URI.java:3155) 2020-05-17T20:28:40.9438422Zat java.net.URI$Parser.parseHierarchical(URI.java:3097) 2020-05-17T20:28:40.9438810Zat java.net.URI$Parser.
[jira] [Reopened] (FLINK-9992) FsStorageLocationReferenceTest#testEncodeAndDecode failed in Travis CI
[ https://issues.apache.org/jira/browse/FLINK-9992?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Robert Metzger reopened FLINK-9992: --- > FsStorageLocationReferenceTest#testEncodeAndDecode failed in Travis CI > -- > > Key: FLINK-9992 > URL: https://issues.apache.org/jira/browse/FLINK-9992 > Project: Flink > Issue Type: Bug > Components: FileSystems >Reporter: vinoyang >Priority: Critical > > {code:java} > testEncodeAndDecode(org.apache.flink.runtime.state.filesystem.FsStorageLocationReferenceTest) > Time elapsed: 0.027 sec <<< ERROR! > java.lang.IllegalArgumentException: java.net.URISyntaxException: Illegal > character in hostname at index 5: > gl://碪⯶㪴]ឪ嵿⎐䪀筪ᆶ歑ᆂ玚䇷ノⳡ೯43575/䡷ᦼ☶⨩䚩筶ࢊණ⣁尯/彡䫼畒伈森削㔞/缳漸⩧勎㓘癐⍖ᾐ䘽㼺䨶/粉掩㤡⪌⎏㆐罠Ꮨㆆ䤱ൎ堉儾 > at java.net.URI$Parser.fail(URI.java:2848) > at java.net.URI$Parser.parseHostname(URI.java:3387) > at java.net.URI$Parser.parseServer(URI.java:3236) > at java.net.URI$Parser.parseAuthority(URI.java:3155) > at java.net.URI$Parser.parseHierarchical(URI.java:3097) > at java.net.URI$Parser.parse(URI.java:3053) > at java.net.URI.(URI.java:746) > at org.apache.flink.core.fs.Path.initialize(Path.java:247) > at org.apache.flink.core.fs.Path.(Path.java:217) > at > org.apache.flink.runtime.state.filesystem.FsStorageLocationReferenceTest.randomPath(FsStorageLocationReferenceTest.java:88) > at > org.apache.flink.runtime.state.filesystem.FsStorageLocationReferenceTest.testEncodeAndDecode(FsStorageLocationReferenceTest.java:41) > {code} > log is here : https://travis-ci.org/apache/flink/jobs/409430886 > -- This message was sent by Atlassian Jira (v8.3.4#803005)
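[Editor's note] The failure above comes from feeding a randomly generated path containing characters such as `[` into `java.net.URI`, which rejects them in the authority component. A self-contained sketch reproducing the same class of `URISyntaxException` (the URI string here is a made-up stand-in for the test's random path):

```java
import java.net.URI;
import java.net.URISyntaxException;

public class IllegalHostnameDemo {
    public static void main(String[] args) {
        // '[' is only legal in an authority when it opens a bracketed IPv6
        // literal, so parsing fails just as in the test's random path.
        boolean failed = false;
        try {
            new URI("gl://bad[host/some/path");
        } catch (URISyntaxException e) {
            failed = true;
        }
        if (!failed) {
            throw new AssertionError("expected URISyntaxException");
        }
        System.out.println("ok");
    }
}
```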
[jira] [Created] (FLINK-17788) scala shell in yarn mode is broken
Jeff Zhang created FLINK-17788: -- Summary: scala shell in yarn mode is broken Key: FLINK-17788 URL: https://issues.apache.org/jira/browse/FLINK-17788 Project: Flink Issue Type: Bug Components: Scala Shell Affects Versions: 1.11.0 Reporter: Jeff Zhang When I start the scala shell in yarn mode, one yarn app is launched; after I write some flink code and trigger a flink job, a second yarn app is launched but fails to start due to some conflicts. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [flink] pnowojski commented on pull request #8693: [FLINK-8871] Support to cancel checkpoing via notification
pnowojski commented on pull request #8693: URL: https://github.com/apache/flink/pull/8693#issuecomment-629969390 There is a test failure:
```
SubtaskCheckpointCoordinatorTest.testNotifyCheckpointAbortedDuringAsyncPhase:195 » IllegalArgument
```
[GitHub] [flink] leonardBang commented on pull request #12176: [FLINK-17029][jdbc]Introduce a new JDBC connector with new property keys
leonardBang commented on pull request #12176: URL: https://github.com/apache/flink/pull/12176#issuecomment-629969292 Thanks for the awesome review! @wuchong
[jira] [Commented] (FLINK-17768) UnalignedCheckpointITCase.shouldPerformUnalignedCheckpointOnLocalAndRemoteChannel is instable
[ https://issues.apache.org/jira/browse/FLINK-17768?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17109920#comment-17109920 ] Robert Metzger commented on FLINK-17768: https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=1653&view=logs&j=5c8e7682-d68f-54d1-16a2-a09310218a49&t=45cc9205-bdb7-5b54-63cd-89fdc0983323 > UnalignedCheckpointITCase.shouldPerformUnalignedCheckpointOnLocalAndRemoteChannel > is instable > - > > Key: FLINK-17768 > URL: https://issues.apache.org/jira/browse/FLINK-17768 > Project: Flink > Issue Type: Bug > Components: Runtime / Checkpointing >Affects Versions: 1.11.0 >Reporter: Dian Fu >Assignee: Piotr Nowojski >Priority: Blocker > Labels: test-stability > Fix For: 1.11.0 > > > UnalignedCheckpointITCase.shouldPerformUnalignedCheckpointOnLocalAndRemoteChannel > and shouldPerformUnalignedCheckpointOnParallelRemoteChannel failed in azure: > {code} > 2020-05-16T12:41:32.3546620Z [ERROR] > shouldPerformUnalignedCheckpointOnLocalAndRemoteChannel(org.apache.flink.test.checkpointing.UnalignedCheckpointITCase) > Time elapsed: 18.865 s <<< ERROR! > 2020-05-16T12:41:32.3548739Z java.util.concurrent.ExecutionException: > org.apache.flink.runtime.client.JobExecutionException: Job execution failed. 
> 2020-05-16T12:41:32.3550177Z at > java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357) > 2020-05-16T12:41:32.3551416Z at > java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1908) > 2020-05-16T12:41:32.3552959Z at > org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.java:1665) > 2020-05-16T12:41:32.3554979Z at > org.apache.flink.streaming.api.environment.LocalStreamEnvironment.execute(LocalStreamEnvironment.java:74) > 2020-05-16T12:41:32.3556584Z at > org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.java:1645) > 2020-05-16T12:41:32.3558068Z at > org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.java:1627) > 2020-05-16T12:41:32.3559431Z at > org.apache.flink.test.checkpointing.UnalignedCheckpointITCase.execute(UnalignedCheckpointITCase.java:158) > 2020-05-16T12:41:32.3560954Z at > org.apache.flink.test.checkpointing.UnalignedCheckpointITCase.shouldPerformUnalignedCheckpointOnLocalAndRemoteChannel(UnalignedCheckpointITCase.java:145) > 2020-05-16T12:41:32.3562203Z at > sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > 2020-05-16T12:41:32.3563433Z at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > 2020-05-16T12:41:32.3564846Z at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > 2020-05-16T12:41:32.3565894Z at > java.lang.reflect.Method.invoke(Method.java:498) > 2020-05-16T12:41:32.3566870Z at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50) > 2020-05-16T12:41:32.3568064Z at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > 2020-05-16T12:41:32.3569727Z at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47) > 2020-05-16T12:41:32.3570818Z at > 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > 2020-05-16T12:41:32.3571840Z at > org.junit.rules.Verifier$1.evaluate(Verifier.java:35) > 2020-05-16T12:41:32.3572771Z at > org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48) > 2020-05-16T12:41:32.3574008Z at > org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298) > 2020-05-16T12:41:32.3575406Z at > org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292) > 2020-05-16T12:41:32.3576476Z at > java.util.concurrent.FutureTask.run(FutureTask.java:266) > 2020-05-16T12:41:32.3577253Z at java.lang.Thread.run(Thread.java:748) > 2020-05-16T12:41:32.3578228Z Caused by: > org.apache.flink.runtime.client.JobExecutionException: Job execution failed. > 2020-05-16T12:41:32.3579520Z at > org.apache.flink.runtime.jobmaster.JobResult.toJobExecutionResult(JobResult.java:147) > 2020-05-16T12:41:32.3580935Z at > org.apache.flink.client.program.PerJobMiniClusterFactory$PerJobMiniClusterJobClient.lambda$getJobExecutionResult$2(PerJobMiniClusterFactory.java:186) > 2020-05-16T12:41:32.3582361Z at > java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:616) > 2020-05-16T12:41:32.3583456Z at > java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:591) > 2020-05-16T12:41:32.3584816Z at > java.util.c
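[Editor's note] The unstable test above exercises Flink's unaligned checkpoints, which were under development for 1.11. For orientation, a sketch of how the feature is switched on via `flink-conf.yaml`; the key names are assumed from the 1.11 development cycle and should be checked against the release's configuration reference:

```yaml
# flink-conf.yaml sketch (key names assumed, verify against the 1.11 docs)
execution.checkpointing.interval: 10s
execution.checkpointing.unaligned: true
```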
[jira] [Commented] (FLINK-17787) BucketStateSerializerTest fails on output mismatch
[ https://issues.apache.org/jira/browse/FLINK-17787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17109918#comment-17109918 ] Robert Metzger commented on FLINK-17787: https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=1651&view=logs&j=0da23115-68bb-5dcd-192c-bd4c8adebde1&t=4ed44b66-cdd6-5dcf-5f6a-88b07dda665d > BucketStateSerializerTest fails on output mismatch > -- > > Key: FLINK-17787 > URL: https://issues.apache.org/jira/browse/FLINK-17787 > Project: Flink > Issue Type: Bug > Components: API / DataStream, Tests >Affects Versions: 1.11.0 >Reporter: Robert Metzger >Priority: Major > Labels: test-stability > Fix For: 1.11.0 > > > CI: > https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=1653&view=logs&j=0da23115-68bb-5dcd-192c-bd4c8adebde1&t=4ed44b66-cdd6-5dcf-5f6a-88b07dda665d > {code} > [INFO] Results: > [INFO] > [ERROR] Failures: > [ERROR] BucketStateSerializerTest.testDeserializationEmpty:139 > expected: but > was: > [ERROR] > BucketStateSerializerTest.testDeserializationFullNoInProgress:260->testDeserializationFull:289 > expected: but > was: > [ERROR] BucketStateSerializerTest.testDeserializationOnlyInProgress:184 > expected: but > was: > [ERROR] Errors: > [ERROR] > BucketStateSerializerTest.testDeserializationFull:255->testDeserializationFull:284->restoreBucket:332 > » FileNotFound > [INFO] > [ERROR] Tests run: 1491, Failures: 3, Errors: 1, Skipped: 53 > 2020-05-17T21:04:39.6023179Z [ERROR] Tests run: 8, Failures: 3, Errors: 1, > Skipped: 4, Time elapsed: 0.508 s <<< FAILURE! - in > org.apache.flink.streaming.api.functions.sink.filesystem.BucketStateSerializerTest > 2020-05-17T21:04:39.6024418Z [ERROR] > testDeserializationOnlyInProgress[Previous Version = > 1](org.apache.flink.streaming.api.functions.sink.filesystem.BucketStateSerializerTest) > Time elapsed: 0.416 s <<< FAILURE! 
> 2020-05-17T21:04:39.6025661Z org.junit.ComparisonFailure: > expected: but > was: > 2020-05-17T21:04:39.6026162Z at > org.junit.Assert.assertEquals(Assert.java:115) > 2020-05-17T21:04:39.6026509Z at > org.junit.Assert.assertEquals(Assert.java:144) > 2020-05-17T21:04:39.6027088Z at > org.apache.flink.streaming.api.functions.sink.filesystem.BucketStateSerializerTest.testDeserializationOnlyInProgress(BucketStateSerializerTest.java:184) > 2020-05-17T21:04:39.6027655Z at > sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > 2020-05-17T21:04:39.6028099Z at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > 2020-05-17T21:04:39.6028587Z at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > 2020-05-17T21:04:39.6029077Z at > java.lang.reflect.Method.invoke(Method.java:498) > 2020-05-17T21:04:39.6029717Z at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50) > 2020-05-17T21:04:39.6030222Z at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > 2020-05-17T21:04:39.6030724Z at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47) > 2020-05-17T21:04:39.6031206Z at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > 2020-05-17T21:04:39.6031670Z at > org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325) > 2020-05-17T21:04:39.6032129Z at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78) > 2020-05-17T21:04:39.6033125Z at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57) > 2020-05-17T21:04:39.6033818Z at > org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) > 2020-05-17T21:04:39.6034557Z at > org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) > 2020-05-17T21:04:39.6035086Z at > org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) > 2020-05-17T21:04:39.6035502Z at 
> org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) > 2020-05-17T21:04:39.6035937Z at > org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268) > 2020-05-17T21:04:39.6036360Z at > org.junit.runners.ParentRunner.run(ParentRunner.java:363) > 2020-05-17T21:04:39.6036732Z at > org.junit.runners.Suite.runChild(Suite.java:128) > 2020-05-17T21:04:39.6037236Z at > org.junit.runners.Suite.runChild(Suite.java:27) > 2020-05-17T21:04:39.6037625Z at > org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) > 2020-05-17T21:04:39.6038028Z at > org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) > 2020-05-17T21:04:39.6038471Z at > org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) > 2020-05-17T21:04:39.6038900Z at > org.junit.runners.ParentRunner.access$00
[jira] [Updated] (FLINK-17787) BucketStateSerializerTest fails on output mismatch
[ https://issues.apache.org/jira/browse/FLINK-17787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Robert Metzger updated FLINK-17787: --- Priority: Blocker (was: Major) > BucketStateSerializerTest fails on output mismatch > -- > > Key: FLINK-17787 > URL: https://issues.apache.org/jira/browse/FLINK-17787 > Project: Flink > Issue Type: Bug > Components: API / DataStream, Tests >Affects Versions: 1.11.0 >Reporter: Robert Metzger >Priority: Blocker > Labels: test-stability > Fix For: 1.11.0 > > > CI: > https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=1653&view=logs&j=0da23115-68bb-5dcd-192c-bd4c8adebde1&t=4ed44b66-cdd6-5dcf-5f6a-88b07dda665d > {code} > [INFO] Results: > [INFO] > [ERROR] Failures: > [ERROR] BucketStateSerializerTest.testDeserializationEmpty:139 > expected: but > was: > [ERROR] > BucketStateSerializerTest.testDeserializationFullNoInProgress:260->testDeserializationFull:289 > expected: but > was: > [ERROR] BucketStateSerializerTest.testDeserializationOnlyInProgress:184 > expected: but > was: > [ERROR] Errors: > [ERROR] > BucketStateSerializerTest.testDeserializationFull:255->testDeserializationFull:284->restoreBucket:332 > » FileNotFound > [INFO] > [ERROR] Tests run: 1491, Failures: 3, Errors: 1, Skipped: 53 > 2020-05-17T21:04:39.6023179Z [ERROR] Tests run: 8, Failures: 3, Errors: 1, > Skipped: 4, Time elapsed: 0.508 s <<< FAILURE! - in > org.apache.flink.streaming.api.functions.sink.filesystem.BucketStateSerializerTest > 2020-05-17T21:04:39.6024418Z [ERROR] > testDeserializationOnlyInProgress[Previous Version = > 1](org.apache.flink.streaming.api.functions.sink.filesystem.BucketStateSerializerTest) > Time elapsed: 0.416 s <<< FAILURE! 
> 2020-05-17T21:04:39.6025661Z org.junit.ComparisonFailure: > expected: but > was: > 2020-05-17T21:04:39.6026162Z at > org.junit.Assert.assertEquals(Assert.java:115) > 2020-05-17T21:04:39.6026509Z at > org.junit.Assert.assertEquals(Assert.java:144) > 2020-05-17T21:04:39.6027088Z at > org.apache.flink.streaming.api.functions.sink.filesystem.BucketStateSerializerTest.testDeserializationOnlyInProgress(BucketStateSerializerTest.java:184) > 2020-05-17T21:04:39.6027655Z at > sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > 2020-05-17T21:04:39.6028099Z at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > 2020-05-17T21:04:39.6028587Z at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > 2020-05-17T21:04:39.6029077Z at > java.lang.reflect.Method.invoke(Method.java:498) > 2020-05-17T21:04:39.6029717Z at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50) > 2020-05-17T21:04:39.6030222Z at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > 2020-05-17T21:04:39.6030724Z at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47) > 2020-05-17T21:04:39.6031206Z at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > 2020-05-17T21:04:39.6031670Z at > org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325) > 2020-05-17T21:04:39.6032129Z at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78) > 2020-05-17T21:04:39.6033125Z at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57) > 2020-05-17T21:04:39.6033818Z at > org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) > 2020-05-17T21:04:39.6034557Z at > org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) > 2020-05-17T21:04:39.6035086Z at > org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) > 2020-05-17T21:04:39.6035502Z at 
> org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) > 2020-05-17T21:04:39.6035937Z at > org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268) > 2020-05-17T21:04:39.6036360Z at > org.junit.runners.ParentRunner.run(ParentRunner.java:363) > 2020-05-17T21:04:39.6036732Z at > org.junit.runners.Suite.runChild(Suite.java:128) > 2020-05-17T21:04:39.6037236Z at > org.junit.runners.Suite.runChild(Suite.java:27) > 2020-05-17T21:04:39.6037625Z at > org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) > 2020-05-17T21:04:39.6038028Z at > org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) > 2020-05-17T21:04:39.6038471Z at > org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) > 2020-05-17T21:04:39.6038900Z at > org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) > 2020-05-17T21:04:39.6039471Z at > org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268) > 2020-05-17T21:04:39.6039923Z at > org.junit.rules.Ext
[GitHub] [flink] pnowojski commented on a change in pull request #8693: [FLINK-8871] Support to cancel checkpoing via notification
pnowojski commented on a change in pull request #8693: URL: https://github.com/apache/flink/pull/8693#discussion_r426378044 ## File path: flink-runtime/src/main/java/org/apache/flink/runtime/state/CheckpointListener.java ## @@ -38,4 +38,12 @@ * @throws Exception */ void notifyCheckpointComplete(long checkpointId) throws Exception; + + /** +* This method is called as a notification once a distributed checkpoint has been aborted. +* +* @param checkpointId The ID of the checkpoint that has been aborted. +* @throws Exception +*/ + void notifyCheckpointAborted(long checkpointId) throws Exception; Review comment: Let's put it into the JIRA's release notes and then we will see if it makes it into the final release notes (I would say so, as it's a breaking change)
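[Editor's note] The review thread above adds `notifyCheckpointAborted` to `CheckpointListener`, so every implementer of the interface must now provide both callbacks. A minimal sketch of what an implementation looks like; the interface is mirrored locally here so the snippet is self-contained and does not require the Flink classpath:

```java
// Local stand-in for org.apache.flink.runtime.state.CheckpointListener,
// mirroring the two callbacks discussed in the review thread.
interface CheckpointListener {
    void notifyCheckpointComplete(long checkpointId) throws Exception;
    void notifyCheckpointAborted(long checkpointId) throws Exception;
}

public class RecordingListener implements CheckpointListener {
    long lastCompleted = -1;
    long lastAborted = -1;

    @Override
    public void notifyCheckpointComplete(long checkpointId) {
        lastCompleted = checkpointId; // e.g. commit external side effects here
    }

    @Override
    public void notifyCheckpointAborted(long checkpointId) {
        lastAborted = checkpointId;   // e.g. roll back pending transactions here
    }

    public static void main(String[] args) throws Exception {
        RecordingListener l = new RecordingListener();
        l.notifyCheckpointComplete(7);
        l.notifyCheckpointAborted(8);
        if (l.lastCompleted != 7 || l.lastAborted != 8) {
            throw new AssertionError("callbacks not recorded");
        }
        System.out.println("ok");
    }
}
```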
[GitHub] [flink] wanglijie95 commented on a change in pull request #12181: [FLINK-17645][runtime] Fix SafetyNetCloseableRegistry constructor bug.
wanglijie95 commented on a change in pull request #12181: URL: https://github.com/apache/flink/pull/12181#discussion_r426389591 ## File path: flink-core/src/test/java/org/apache/flink/core/fs/SafetyNetCloseableRegistryTest.java ## @@ -198,4 +211,27 @@ public void testReaperThreadSpawnAndStop() throws Exception { } Assert.assertFalse(SafetyNetCloseableRegistry.isReaperThreadRunning()); } + + /** +* Test for FLINK-17645. Review comment: I got it. I will delete this line.
[GitHub] [flink] zentol merged pull request #12207: [hotfix][doc] Fix broken link in the native Kubernetes document
zentol merged pull request #12207: URL: https://github.com/apache/flink/pull/12207
[jira] [Commented] (FLINK-17768) UnalignedCheckpointITCase.shouldPerformUnalignedCheckpointOnLocalAndRemoteChannel is instable
[ https://issues.apache.org/jira/browse/FLINK-17768?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17109916#comment-17109916 ] Robert Metzger commented on FLINK-17768: https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=1653&view=logs&j=5c8e7682-d68f-54d1-16a2-a09310218a49&t=45cc9205-bdb7-5b54-63cd-89fdc0983323 > UnalignedCheckpointITCase.shouldPerformUnalignedCheckpointOnLocalAndRemoteChannel > is instable > - > > Key: FLINK-17768 > URL: https://issues.apache.org/jira/browse/FLINK-17768 > Project: Flink > Issue Type: Bug > Components: Runtime / Checkpointing >Affects Versions: 1.11.0 >Reporter: Dian Fu >Assignee: Piotr Nowojski >Priority: Blocker > Labels: test-stability > Fix For: 1.11.0 > > > UnalignedCheckpointITCase.shouldPerformUnalignedCheckpointOnLocalAndRemoteChannel > and shouldPerformUnalignedCheckpointOnParallelRemoteChannel failed in azure: > {code} > 2020-05-16T12:41:32.3546620Z [ERROR] > shouldPerformUnalignedCheckpointOnLocalAndRemoteChannel(org.apache.flink.test.checkpointing.UnalignedCheckpointITCase) > Time elapsed: 18.865 s <<< ERROR! > 2020-05-16T12:41:32.3548739Z java.util.concurrent.ExecutionException: > org.apache.flink.runtime.client.JobExecutionException: Job execution failed. 
> 2020-05-16T12:41:32.3550177Z at > java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357) > 2020-05-16T12:41:32.3551416Z at > java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1908) > 2020-05-16T12:41:32.3552959Z at > org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.java:1665) > 2020-05-16T12:41:32.3554979Z at > org.apache.flink.streaming.api.environment.LocalStreamEnvironment.execute(LocalStreamEnvironment.java:74) > 2020-05-16T12:41:32.3556584Z at > org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.java:1645) > 2020-05-16T12:41:32.3558068Z at > org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.java:1627) > 2020-05-16T12:41:32.3559431Z at > org.apache.flink.test.checkpointing.UnalignedCheckpointITCase.execute(UnalignedCheckpointITCase.java:158) > 2020-05-16T12:41:32.3560954Z at > org.apache.flink.test.checkpointing.UnalignedCheckpointITCase.shouldPerformUnalignedCheckpointOnLocalAndRemoteChannel(UnalignedCheckpointITCase.java:145) > 2020-05-16T12:41:32.3562203Z at > sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > 2020-05-16T12:41:32.3563433Z at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > 2020-05-16T12:41:32.3564846Z at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > 2020-05-16T12:41:32.3565894Z at > java.lang.reflect.Method.invoke(Method.java:498) > 2020-05-16T12:41:32.3566870Z at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50) > 2020-05-16T12:41:32.3568064Z at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > 2020-05-16T12:41:32.3569727Z at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47) > 2020-05-16T12:41:32.3570818Z at > 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > 2020-05-16T12:41:32.3571840Z at > org.junit.rules.Verifier$1.evaluate(Verifier.java:35) > 2020-05-16T12:41:32.3572771Z at > org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48) > 2020-05-16T12:41:32.3574008Z at > org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298) > 2020-05-16T12:41:32.3575406Z at > org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292) > 2020-05-16T12:41:32.3576476Z at > java.util.concurrent.FutureTask.run(FutureTask.java:266) > 2020-05-16T12:41:32.3577253Z at java.lang.Thread.run(Thread.java:748) > 2020-05-16T12:41:32.3578228Z Caused by: > org.apache.flink.runtime.client.JobExecutionException: Job execution failed. > 2020-05-16T12:41:32.3579520Z at > org.apache.flink.runtime.jobmaster.JobResult.toJobExecutionResult(JobResult.java:147) > 2020-05-16T12:41:32.3580935Z at > org.apache.flink.client.program.PerJobMiniClusterFactory$PerJobMiniClusterJobClient.lambda$getJobExecutionResult$2(PerJobMiniClusterFactory.java:186) > 2020-05-16T12:41:32.3582361Z at > java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:616) > 2020-05-16T12:41:32.3583456Z at > java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:591) > 2020-05-16T12:41:32.3584816Z at > java.util.c
[jira] [Commented] (FLINK-12030) KafkaITCase.testMultipleSourcesOnePartition is unstable: This server does not host this topic-partition
[ https://issues.apache.org/jira/browse/FLINK-12030?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17109915#comment-17109915 ] Robert Metzger commented on FLINK-12030: https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=1653&view=logs&j=d44f43ce-542c-597d-bf94-b0718c71e5e8&t=34f486e1-e1e4-5dd2-9c06-bfdd9b9c74a8 {code} 2020-05-17T20:56:20.3972904Z 20:56:20,386 [main] ERROR org.apache.flink.streaming.connectors.kafka.table.Kafka010TableITCase [] - 2020-05-17T20:56:20.3974368Z 2020-05-17T20:56:20.3975255Z Test testKafkaSourceSink[legacy = false, topicId = 1](org.apache.flink.streaming.connectors.kafka.table.Kafka010TableITCase) failed with: 2020-05-17T20:56:20.3976273Z java.util.concurrent.ExecutionException: org.apache.flink.runtime.client.JobExecutionException: Job execution failed. 2020-05-17T20:56:20.3977113Zat java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357) 2020-05-17T20:56:20.3977826Zat java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1908) 2020-05-17T20:56:20.3978648Zat org.apache.flink.table.planner.runtime.utils.TableEnvUtil$.execInsertSqlAndWaitResult(TableEnvUtil.scala:31) 2020-05-17T20:56:20.3979513Zat org.apache.flink.table.planner.runtime.utils.TableEnvUtil.execInsertSqlAndWaitResult(TableEnvUtil.scala) 2020-05-17T20:56:20.3980379Zat org.apache.flink.streaming.connectors.kafka.table.KafkaTableTestBase.testKafkaSourceSink(KafkaTableTestBase.java:145) 2020-05-17T20:56:20.3981313Zat sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 2020-05-17T20:56:20.3981940Zat sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 2020-05-17T20:56:20.3982863Zat sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 2020-05-17T20:56:20.3983605Zat java.lang.reflect.Method.invoke(Method.java:498) 2020-05-17T20:56:20.3984754Zat 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50) 2020-05-17T20:56:20.3985595Zat org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) 2020-05-17T20:56:20.3986382Zat org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47) 2020-05-17T20:56:20.3987139Zat org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) 2020-05-17T20:56:20.3987839Zat org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55) 2020-05-17T20:56:20.3988434Zat org.junit.rules.RunRules.evaluate(RunRules.java:20) 2020-05-17T20:56:20.3989040Zat org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325) 2020-05-17T20:56:20.3989713Zat org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78) 2020-05-17T20:56:20.3990472Zat org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57) 2020-05-17T20:56:20.3991285Zat org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) 2020-05-17T20:56:20.3992133Zat org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) 2020-05-17T20:56:20.3992815Zat org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) 2020-05-17T20:56:20.3993538Zat org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) 2020-05-17T20:56:20.3994536Zat org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268) 2020-05-17T20:56:20.3995182Zat org.junit.runners.ParentRunner.run(ParentRunner.java:363) 2020-05-17T20:56:20.3995742Zat org.junit.runners.Suite.runChild(Suite.java:128) 2020-05-17T20:56:20.3996307Zat org.junit.runners.Suite.runChild(Suite.java:27) 2020-05-17T20:56:20.3996875Zat org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) 2020-05-17T20:56:20.3997522Zat org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) 2020-05-17T20:56:20.3998184Zat org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) 2020-05-17T20:56:20.3998822Zat 
org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) 2020-05-17T20:56:20.3999444Zat org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268) 2020-05-17T20:56:20.4000318Zat org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) 2020-05-17T20:56:20.4001173Zat org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) 2020-05-17T20:56:20.4001836Zat org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48) 2020-05-17T20:56:20.4002420Zat org.junit.rules.RunRules.evaluate(RunRules.java:20) 2020-05-17T20:56:20.4002773Zat org.junit.runners.ParentRunner.run(ParentRunner.java:363) 2020-05-17T20:56:20.4003258Zat org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365) 2020-05-17T20:56:20.40037
[jira] [Created] (FLINK-17787) BucketStateSerializerTest fails on output mismatch
Robert Metzger created FLINK-17787: -- Summary: BucketStateSerializerTest fails on output mismatch Key: FLINK-17787 URL: https://issues.apache.org/jira/browse/FLINK-17787 Project: Flink Issue Type: Bug Components: API / DataStream Affects Versions: 1.11.0 Reporter: Robert Metzger Fix For: 1.11.0 CI: https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=1653&view=logs&j=0da23115-68bb-5dcd-192c-bd4c8adebde1&t=4ed44b66-cdd6-5dcf-5f6a-88b07dda665d {code} [INFO] Results: [INFO] [ERROR] Failures: [ERROR] BucketStateSerializerTest.testDeserializationEmpty:139 expected: but was: [ERROR] BucketStateSerializerTest.testDeserializationFullNoInProgress:260->testDeserializationFull:289 expected: but was: [ERROR] BucketStateSerializerTest.testDeserializationOnlyInProgress:184 expected: but was: [ERROR] Errors: [ERROR] BucketStateSerializerTest.testDeserializationFull:255->testDeserializationFull:284->restoreBucket:332 » FileNotFound [INFO] [ERROR] Tests run: 1491, Failures: 3, Errors: 1, Skipped: 53 2020-05-17T21:04:39.6023179Z [ERROR] Tests run: 8, Failures: 3, Errors: 1, Skipped: 4, Time elapsed: 0.508 s <<< FAILURE! - in org.apache.flink.streaming.api.functions.sink.filesystem.BucketStateSerializerTest 2020-05-17T21:04:39.6024418Z [ERROR] testDeserializationOnlyInProgress[Previous Version = 1](org.apache.flink.streaming.api.functions.sink.filesystem.BucketStateSerializerTest) Time elapsed: 0.416 s <<< FAILURE! 
2020-05-17T21:04:39.6025661Z org.junit.ComparisonFailure: expected: but was: 2020-05-17T21:04:39.6026162Zat org.junit.Assert.assertEquals(Assert.java:115) 2020-05-17T21:04:39.6026509Zat org.junit.Assert.assertEquals(Assert.java:144) 2020-05-17T21:04:39.6027088Zat org.apache.flink.streaming.api.functions.sink.filesystem.BucketStateSerializerTest.testDeserializationOnlyInProgress(BucketStateSerializerTest.java:184) 2020-05-17T21:04:39.6027655Zat sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 2020-05-17T21:04:39.6028099Zat sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 2020-05-17T21:04:39.6028587Zat sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 2020-05-17T21:04:39.6029077Zat java.lang.reflect.Method.invoke(Method.java:498) 2020-05-17T21:04:39.6029717Zat org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50) 2020-05-17T21:04:39.6030222Zat org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) 2020-05-17T21:04:39.6030724Zat org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47) 2020-05-17T21:04:39.6031206Zat org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) 2020-05-17T21:04:39.6031670Zat org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325) 2020-05-17T21:04:39.6032129Zat org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78) 2020-05-17T21:04:39.6033125Zat org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57) 2020-05-17T21:04:39.6033818Zat org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) 2020-05-17T21:04:39.6034557Zat org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) 2020-05-17T21:04:39.6035086Zat org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) 2020-05-17T21:04:39.6035502Zat org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) 2020-05-17T21:04:39.6035937Zat 
org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268) 2020-05-17T21:04:39.6036360Zat org.junit.runners.ParentRunner.run(ParentRunner.java:363) 2020-05-17T21:04:39.6036732Zat org.junit.runners.Suite.runChild(Suite.java:128) 2020-05-17T21:04:39.6037236Zat org.junit.runners.Suite.runChild(Suite.java:27) 2020-05-17T21:04:39.6037625Zat org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) 2020-05-17T21:04:39.6038028Zat org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) 2020-05-17T21:04:39.6038471Zat org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) 2020-05-17T21:04:39.6038900Zat org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) 2020-05-17T21:04:39.6039471Zat org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268) 2020-05-17T21:04:39.6039923Zat org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48) 2020-05-17T21:04:39.6040325Zat org.junit.rules.RunRules.evaluate(RunRules.java:20) 2020-05-17T21:04:39.6040900Zat org.junit.runners.ParentRunner.run(ParentRunner.java:363) 2020-05-17T21:04:39.6041333Zat org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365) 2020-05-17T21:04:39.6041856Zat org.apache.mave
[GitHub] [flink] yangyichao-mango removed a comment on pull request #12196: [FLINK-17353][docs] Fix Broken links in Flink docs master
yangyichao-mango removed a comment on pull request #12196: URL: https://github.com/apache/flink/pull/12196#issuecomment-629932386 > @yangyichao-mango thanks for your contribution. could you please share the result after execute `sh docs/check_links.sh` here (you have to execute `sh docs/build_docs.sh -p ` before execute the check_links script) ![image](https://user-images.githubusercontent.com/29545877/82173191-f18ebc80-98fe-11ea-8972-0173a641996f.png) That is the result after executing `sh docs/check_links.sh`. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] yangyichao-mango removed a comment on pull request #12196: [FLINK-17353][docs] Fix Broken links in Flink docs master
yangyichao-mango removed a comment on pull request #12196: URL: https://github.com/apache/flink/pull/12196#issuecomment-629932483 > @yangyichao-mango thanks for your contribution. could you please share the result after execute `sh docs/check_links.sh` here (you have to execute `sh docs/build_docs.sh -p ` before execute the check_links script) @klion26
[jira] [Updated] (FLINK-17787) BucketStateSerializerTest fails on output mismatch
[ https://issues.apache.org/jira/browse/FLINK-17787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Robert Metzger updated FLINK-17787: --- Component/s: Tests > BucketStateSerializerTest fails on output mismatch > -- > > Key: FLINK-17787 > URL: https://issues.apache.org/jira/browse/FLINK-17787 > Project: Flink > Issue Type: Bug > Components: API / DataStream, Tests >Affects Versions: 1.11.0 >Reporter: Robert Metzger >Priority: Major > Fix For: 1.11.0 > > > CI: > https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=1653&view=logs&j=0da23115-68bb-5dcd-192c-bd4c8adebde1&t=4ed44b66-cdd6-5dcf-5f6a-88b07dda665d > {code} > [INFO] Results: > [INFO] > [ERROR] Failures: > [ERROR] BucketStateSerializerTest.testDeserializationEmpty:139 > expected: but > was: > [ERROR] > BucketStateSerializerTest.testDeserializationFullNoInProgress:260->testDeserializationFull:289 > expected: but > was: > [ERROR] BucketStateSerializerTest.testDeserializationOnlyInProgress:184 > expected: but > was: > [ERROR] Errors: > [ERROR] > BucketStateSerializerTest.testDeserializationFull:255->testDeserializationFull:284->restoreBucket:332 > » FileNotFound > [INFO] > [ERROR] Tests run: 1491, Failures: 3, Errors: 1, Skipped: 53 > 2020-05-17T21:04:39.6023179Z [ERROR] Tests run: 8, Failures: 3, Errors: 1, > Skipped: 4, Time elapsed: 0.508 s <<< FAILURE! - in > org.apache.flink.streaming.api.functions.sink.filesystem.BucketStateSerializerTest > 2020-05-17T21:04:39.6024418Z [ERROR] > testDeserializationOnlyInProgress[Previous Version = > 1](org.apache.flink.streaming.api.functions.sink.filesystem.BucketStateSerializerTest) > Time elapsed: 0.416 s <<< FAILURE! 
> 2020-05-17T21:04:39.6025661Z org.junit.ComparisonFailure: > expected: but > was: > 2020-05-17T21:04:39.6026162Z at > org.junit.Assert.assertEquals(Assert.java:115) > 2020-05-17T21:04:39.6026509Z at > org.junit.Assert.assertEquals(Assert.java:144) > 2020-05-17T21:04:39.6027088Z at > org.apache.flink.streaming.api.functions.sink.filesystem.BucketStateSerializerTest.testDeserializationOnlyInProgress(BucketStateSerializerTest.java:184) > 2020-05-17T21:04:39.6027655Z at > sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > 2020-05-17T21:04:39.6028099Z at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > 2020-05-17T21:04:39.6028587Z at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > 2020-05-17T21:04:39.6029077Z at > java.lang.reflect.Method.invoke(Method.java:498) > 2020-05-17T21:04:39.6029717Z at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50) > 2020-05-17T21:04:39.6030222Z at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > 2020-05-17T21:04:39.6030724Z at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47) > 2020-05-17T21:04:39.6031206Z at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > 2020-05-17T21:04:39.6031670Z at > org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325) > 2020-05-17T21:04:39.6032129Z at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78) > 2020-05-17T21:04:39.6033125Z at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57) > 2020-05-17T21:04:39.6033818Z at > org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) > 2020-05-17T21:04:39.6034557Z at > org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) > 2020-05-17T21:04:39.6035086Z at > org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) > 2020-05-17T21:04:39.6035502Z at 
> org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) > 2020-05-17T21:04:39.6035937Z at > org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268) > 2020-05-17T21:04:39.6036360Z at > org.junit.runners.ParentRunner.run(ParentRunner.java:363) > 2020-05-17T21:04:39.6036732Z at > org.junit.runners.Suite.runChild(Suite.java:128) > 2020-05-17T21:04:39.6037236Z at > org.junit.runners.Suite.runChild(Suite.java:27) > 2020-05-17T21:04:39.6037625Z at > org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) > 2020-05-17T21:04:39.6038028Z at > org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) > 2020-05-17T21:04:39.6038471Z at > org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) > 2020-05-17T21:04:39.6038900Z at > org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) > 2020-05-17T21:04:39.6039471Z at > org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268) > 2020-05-17T21:04:39.6039923Z at > org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48) >
[jira] [Updated] (FLINK-17787) BucketStateSerializerTest fails on output mismatch
[ https://issues.apache.org/jira/browse/FLINK-17787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Robert Metzger updated FLINK-17787: --- Labels: test-stability (was: ) > BucketStateSerializerTest fails on output mismatch > -- > > Key: FLINK-17787 > URL: https://issues.apache.org/jira/browse/FLINK-17787 > Project: Flink > Issue Type: Bug > Components: API / DataStream, Tests >Affects Versions: 1.11.0 >Reporter: Robert Metzger >Priority: Major > Labels: test-stability > Fix For: 1.11.0 > > > CI: > https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=1653&view=logs&j=0da23115-68bb-5dcd-192c-bd4c8adebde1&t=4ed44b66-cdd6-5dcf-5f6a-88b07dda665d > {code} > [INFO] Results: > [INFO] > [ERROR] Failures: > [ERROR] BucketStateSerializerTest.testDeserializationEmpty:139 > expected: but > was: > [ERROR] > BucketStateSerializerTest.testDeserializationFullNoInProgress:260->testDeserializationFull:289 > expected: but > was: > [ERROR] BucketStateSerializerTest.testDeserializationOnlyInProgress:184 > expected: but > was: > [ERROR] Errors: > [ERROR] > BucketStateSerializerTest.testDeserializationFull:255->testDeserializationFull:284->restoreBucket:332 > » FileNotFound > [INFO] > [ERROR] Tests run: 1491, Failures: 3, Errors: 1, Skipped: 53 > 2020-05-17T21:04:39.6023179Z [ERROR] Tests run: 8, Failures: 3, Errors: 1, > Skipped: 4, Time elapsed: 0.508 s <<< FAILURE! - in > org.apache.flink.streaming.api.functions.sink.filesystem.BucketStateSerializerTest > 2020-05-17T21:04:39.6024418Z [ERROR] > testDeserializationOnlyInProgress[Previous Version = > 1](org.apache.flink.streaming.api.functions.sink.filesystem.BucketStateSerializerTest) > Time elapsed: 0.416 s <<< FAILURE! 
> 2020-05-17T21:04:39.6025661Z org.junit.ComparisonFailure: > expected: but > was: > 2020-05-17T21:04:39.6026162Z at > org.junit.Assert.assertEquals(Assert.java:115) > 2020-05-17T21:04:39.6026509Z at > org.junit.Assert.assertEquals(Assert.java:144) > 2020-05-17T21:04:39.6027088Z at > org.apache.flink.streaming.api.functions.sink.filesystem.BucketStateSerializerTest.testDeserializationOnlyInProgress(BucketStateSerializerTest.java:184) > 2020-05-17T21:04:39.6027655Z at > sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > 2020-05-17T21:04:39.6028099Z at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > 2020-05-17T21:04:39.6028587Z at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > 2020-05-17T21:04:39.6029077Z at > java.lang.reflect.Method.invoke(Method.java:498) > 2020-05-17T21:04:39.6029717Z at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50) > 2020-05-17T21:04:39.6030222Z at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > 2020-05-17T21:04:39.6030724Z at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47) > 2020-05-17T21:04:39.6031206Z at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > 2020-05-17T21:04:39.6031670Z at > org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325) > 2020-05-17T21:04:39.6032129Z at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78) > 2020-05-17T21:04:39.6033125Z at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57) > 2020-05-17T21:04:39.6033818Z at > org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) > 2020-05-17T21:04:39.6034557Z at > org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) > 2020-05-17T21:04:39.6035086Z at > org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) > 2020-05-17T21:04:39.6035502Z at 
> org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) > 2020-05-17T21:04:39.6035937Z at > org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268) > 2020-05-17T21:04:39.6036360Z at > org.junit.runners.ParentRunner.run(ParentRunner.java:363) > 2020-05-17T21:04:39.6036732Z at > org.junit.runners.Suite.runChild(Suite.java:128) > 2020-05-17T21:04:39.6037236Z at > org.junit.runners.Suite.runChild(Suite.java:27) > 2020-05-17T21:04:39.6037625Z at > org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) > 2020-05-17T21:04:39.6038028Z at > org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) > 2020-05-17T21:04:39.6038471Z at > org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) > 2020-05-17T21:04:39.6038900Z at > org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) > 2020-05-17T21:04:39.6039471Z at > org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268) > 2020-05-17T21:04:39.6039923Z at > org.junit.rules.Exter
[GitHub] [flink] flinkbot commented on pull request #12209: [FLINK-17520] [core] Extend CompositeTypeSerializerSnapshot to support migration
flinkbot commented on pull request #12209: URL: https://github.com/apache/flink/pull/12209#issuecomment-629966084 Thanks a lot for your contribution to the Apache Flink project. I'm the @flinkbot. I help the community to review your pull request. We will use this comment to track the progress of the review. ## Automated Checks Last check on commit c91cdd32db0464d8b60d0efc0763078388a15daf (Mon May 18 06:06:28 UTC 2020) ✅no warnings Mention the bot in a comment to re-run the automated checks. ## Review Progress * ❓ 1. The [description] looks good. * ❓ 2. There is [consensus] that the contribution should go into Flink. * ❓ 3. Needs [attention] from. * ❓ 4. The change fits into the overall [architecture]. * ❓ 5. Overall code [quality] is good. Please see the [Pull Request Review Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full explanation of the review process. The Bot is tracking the review progress through labels. Labels are applied according to the order of the review items. For consensus, approval by a Flink committer or PMC member is required Bot commands The @flinkbot bot supports the following commands: - `@flinkbot approve description` to approve one or more aspects (aspects: `description`, `consensus`, `architecture` and `quality`) - `@flinkbot approve all` to approve all aspects - `@flinkbot approve-until architecture` to approve everything until `architecture` - `@flinkbot attention @username1 [@username2 ..]` to require somebody's attention - `@flinkbot disapprove architecture` to remove an approval you gave earlier
[jira] [Closed] (FLINK-17166) Modify the log4j-console.properties to also output logs into the files for WebUI
[ https://issues.apache.org/jira/browse/FLINK-17166?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chesnay Schepler closed FLINK-17166. Resolution: Fixed master: a06bb9ee38c18211d8832ff21952c19d59ee9a61 > Modify the log4j-console.properties to also output logs into the files for > WebUI > > > Key: FLINK-17166 > URL: https://issues.apache.org/jira/browse/FLINK-17166 > Project: Flink > Issue Type: Sub-task > Components: Runtime / Configuration >Reporter: Andrey Zagrebin >Assignee: Yang Wang >Priority: Major > Labels: pull-request-available > Fix For: 1.11.0 > > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [flink] zentol merged pull request #11839: [FLINK-17166][dist] Modify the log4j-console.properties to also output logs into the files for WebUI
zentol merged pull request #11839: URL: https://github.com/apache/flink/pull/11839
[jira] [Updated] (FLINK-17520) Extend CompositeTypeSerializerSnapshot to allow composite serializers to signal migration based on outer configuration
[ https://issues.apache.org/jira/browse/FLINK-17520?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated FLINK-17520: --- Labels: pull-request-available (was: ) > Extend CompositeTypeSerializerSnapshot to allow composite serializers to > signal migration based on outer configuration > -- > > Key: FLINK-17520 > URL: https://issues.apache.org/jira/browse/FLINK-17520 > Project: Flink > Issue Type: Improvement > Components: API / Type Serialization System >Reporter: Tzu-Li (Gordon) Tai >Assignee: Tzu-Li (Gordon) Tai >Priority: Critical > Labels: pull-request-available > Fix For: 1.11.0 > > > Compatibility of composite serializers is governed by the overall resolved > compatibility of all its nested serializers, as well as any additional > configuration (or what we call the "outer configuration" or "outer snapshot"). > The compatibility resolution logic for these composite serializers is > implemented in the {{CompositeTypeSerializerSnapshot}} abstract class. > One current limitation of this base class is that the implementation assumes > that the outer configuration is always either compatible, or incompatible. > We should relax this to also allow signaling migration, purely based on the > outer configuration. This is already a requirement by FLINK-16998. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [flink] tzulitai opened a new pull request #12209: [FLINK-17520] [core] Extend CompositeTypeSerializerSnapshot to support migration
tzulitai opened a new pull request #12209: URL: https://github.com/apache/flink/pull/12209 ## What is the purpose of the change Previously, the abstraction provided by `CompositeTypeSerializerSnapshot` only allowed a mismatch in the outer configuration of composite serializers to be resolved as incompatible (restore fails) or compatible (restore continues as is). The resolution logic is captured as an `isOuterSnapshotCompatible` method, returning a flag. A requirement to additionally allow migration based on the outer configuration has come up in https://issues.apache.org/jira/browse/FLINK-16998. This PR extends the functionality of the abstraction by deprecating the old `boolean isOuterSnapshotCompatible(serializer)` in favor of a new `OuterSchemaCompatibility resolveOuterSchemaCompatibility(serializer)`. The change is backwards compatible - existing subclasses are not broken API- or behaviour-wise. ## Brief change log Existing tests (e.g. subclasses of `TypeSerializerUpgradeTestBase`) should cover this change. An additional unit test in `CompositeTypeSerializerSnapshotTest` was also added for this. ## Does this pull request potentially affect one of the following parts: - Dependencies (does it add or upgrade a dependency): NO - The public API, i.e., is any changed class annotated with `@Public(Evolving)`: **YES** - The serializers: **YES** - The runtime per-record code paths (performance sensitive): NO - Anything that affects deployment or recovery: NO - The S3 file system connector: NO ## Documentation - Does this pull request introduce a new feature? YES - If yes, how is the feature documented? DOCS + JAVADOCS
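The shape of the change described in this PR can be sketched with a small, self-contained model. This is not the real Flink API: the enum and method below only loosely mirror `OuterSchemaCompatibility` and `resolveOuterSchemaCompatibility` for illustration, showing why a three-way verdict (rather than the old boolean) is needed to let the outer configuration signal "compatible after migration":

```java
import java.util.List;

// Hypothetical three-way verdict, modeled after the pattern the PR describes;
// the old boolean could only express the first and last of these states.
enum Compatibility { COMPATIBLE_AS_IS, COMPATIBLE_AFTER_MIGRATION, INCOMPATIBLE }

class CompositeSnapshotModel {

    // Combines the outer-configuration verdict with the verdicts of the
    // nested serializers: any INCOMPATIBLE wins, otherwise any
    // COMPATIBLE_AFTER_MIGRATION forces migration, otherwise the composite
    // is compatible as is.
    static Compatibility resolve(Compatibility outer, List<Compatibility> nested) {
        if (outer == Compatibility.INCOMPATIBLE
                || nested.contains(Compatibility.INCOMPATIBLE)) {
            return Compatibility.INCOMPATIBLE;
        }
        if (outer == Compatibility.COMPATIBLE_AFTER_MIGRATION
                || nested.contains(Compatibility.COMPATIBLE_AFTER_MIGRATION)) {
            return Compatibility.COMPATIBLE_AFTER_MIGRATION;
        }
        return Compatibility.COMPATIBLE_AS_IS;
    }
}
```

Under this model, an outer-configuration mismatch that is recoverable by state migration no longer has to be reported as flatly incompatible, which is the relaxation FLINK-16998 asked for.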
[GitHub] [flink] flinkbot edited a comment on pull request #12205: [FLINK-17780][checkpointing] Add task name to log statements of ChannelStateWriter.
flinkbot edited a comment on pull request #12205: URL: https://github.com/apache/flink/pull/12205#issuecomment-629865234 ## CI report: * 95793199f3be07aaf21dcefd2f7c7f8998be683d Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1656)
[GitHub] [flink] flinkbot edited a comment on pull request #11877: [FLINK-16641][network] Announce sender's backlog to solve the deadlock issue without exclusive buffers
flinkbot edited a comment on pull request #11877: URL: https://github.com/apache/flink/pull/11877#issuecomment-618273998 ## CI report: * 3f89d29a4cee4917ff8087e16ab35c5d1274220c Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1674)
[GitHub] [flink] wanglijie95 commented on a change in pull request #12181: [FLINK-17645][runtime] Fix SafetyNetCloseableRegistry constructor bug.
wanglijie95 commented on a change in pull request #12181: URL: https://github.com/apache/flink/pull/12181#discussion_r426375851 ## File path: flink-core/src/test/java/org/apache/flink/core/fs/SafetyNetCloseableRegistryTest.java ## @@ -27,18 +27,31 @@ import org.junit.Rule; import org.junit.Test; import org.junit.rules.TemporaryFolder; +import org.junit.runner.RunWith; +import org.mockito.Mock; +import org.powermock.core.classloader.annotations.PrepareForTest; +import org.powermock.modules.junit4.PowerMockRunner; import java.io.Closeable; import java.io.IOException; import java.util.concurrent.atomic.AtomicInteger; +import static org.powermock.api.mockito.PowerMockito.doNothing; +import static org.powermock.api.mockito.PowerMockito.doThrow; +import static org.powermock.api.mockito.PowerMockito.whenNew; + /** * Tests for the {@link SafetyNetCloseableRegistry}. */ +@RunWith(PowerMockRunner.class) Review comment: Thanks for the review, @zhuzhurk. I will consider your comment. But `CloseableReaperThread` is a final class and can't be inherited; I will change that so it can be overridden.
[GitHub] [flink] flinkbot edited a comment on pull request #12193: [FLINK-17759][runtime] Remove unused RestartIndividualStrategy
flinkbot edited a comment on pull request #12193: URL: https://github.com/apache/flink/pull/12193#issuecomment-629673659

## CI report:

* a36bef2ce9bc2c3724217931c5d30b212e8ef856 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1671)
[GitHub] [flink] flinkbot edited a comment on pull request #11837: [FLINK-16160][table-planner-blink] Fix proctime()/rowtime() doesn't w…
flinkbot edited a comment on pull request #11837: URL: https://github.com/apache/flink/pull/11837#issuecomment-616956896

## CI report:

* 2b6338c91d9a1e176fcedf3b1f45dcc1adc0059c Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1355)
* d1d16a0c0d9e8b01c6bbf4d39374d25911373f07 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1680)
[GitHub] [flink] curcur commented on pull request #11725: [FLINK-15670][API] Provide a Kafka Source/Sink pair as KafkaShuffle
curcur commented on pull request #11725: URL: https://github.com/apache/flink/pull/11725#issuecomment-629949773

BTW, I have updated the Javadocs and pushed.
[GitHub] [flink] flinkbot edited a comment on pull request #12192: [FLINK-17758][runtime] Remove unused AdaptedRestartPipelinedRegionStrategyNG
flinkbot edited a comment on pull request #12192: URL: https://github.com/apache/flink/pull/12192#issuecomment-629673624

## CI report:

* 27144288995d450def38b46b6f526fb023620af4 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1670)
[GitHub] [flink] flinkbot edited a comment on pull request #12208: [FLINK-17651][table-planner-blink] DecomposeGroupingSetsRule generat…
flinkbot edited a comment on pull request #12208: URL: https://github.com/apache/flink/pull/12208#issuecomment-629928875

## CI report:

* f25b5668db8f6c57472286fd839ed8a548ee051b Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1669)
[GitHub] [flink] flinkbot edited a comment on pull request #12199: [FLINK-17774] [table] supports all kinds of changes for select result
flinkbot edited a comment on pull request #12199: URL: https://github.com/apache/flink/pull/12199#issuecomment-629793563

## CI report:

* 3a1729c5513f3a86af9e6b9d0dec00325e375c7b Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1618)
* f11005c6596ecf41efd898ba324374948b2eb8cb Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1678)
[GitHub] [flink] flinkbot edited a comment on pull request #12103: [FLINK-16998][core] Add a changeflag to Row
flinkbot edited a comment on pull request #12103: URL: https://github.com/apache/flink/pull/12103#issuecomment-627456433

## CI report:

* 05ab513e7a7aed7481001668eecddf26b8fd05cb Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1621)
* b20298d51eda267f008430478e375804ffa0f9df Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1677)
[GitHub] [flink] flinkbot edited a comment on pull request #11837: [FLINK-16160][table-planner-blink] Fix proctime()/rowtime() doesn't w…
flinkbot edited a comment on pull request #11837: URL: https://github.com/apache/flink/pull/11837#issuecomment-616956896

## CI report:

* 2b6338c91d9a1e176fcedf3b1f45dcc1adc0059c Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1355)
* d1d16a0c0d9e8b01c6bbf4d39374d25911373f07 UNKNOWN
[jira] [Closed] (FLINK-17669) Use new WatermarkStrategy/WatermarkGenerator in Kafka connector
[ https://issues.apache.org/jira/browse/FLINK-17669?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Aljoscha Krettek closed FLINK-17669.
Resolution: Fixed
master: 5be27b090c80e97c611db83e04b5d82a8abced9d

> Use new WatermarkStrategy/WatermarkGenerator in Kafka connector
> Key: FLINK-17669
> URL: https://issues.apache.org/jira/browse/FLINK-17669
> Project: Flink
> Issue Type: Sub-task
> Components: Connectors / Kafka
> Reporter: Aljoscha Krettek
> Assignee: Aljoscha Krettek
> Priority: Major
> Fix For: 1.11.0
[jira] [Closed] (FLINK-17661) Add APIs for using new WatermarkStrategy/WatermarkGenerator
[ https://issues.apache.org/jira/browse/FLINK-17661?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Aljoscha Krettek closed FLINK-17661.
Resolution: Fixed
master: d755e1b4e7252f968e0a66836353f30b2e4f64bd

> Add APIs for using new WatermarkStrategy/WatermarkGenerator
> Key: FLINK-17661
> URL: https://issues.apache.org/jira/browse/FLINK-17661
> Project: Flink
> Issue Type: Sub-task
> Components: API / DataStream
> Reporter: Aljoscha Krettek
> Assignee: Aljoscha Krettek
> Priority: Major
> Fix For: 1.11.0
[jira] [Closed] (FLINK-17766) Use checkpoint lock instead of fine-grained locking in Kafka AbstractFetcher
[ https://issues.apache.org/jira/browse/FLINK-17766?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Aljoscha Krettek closed FLINK-17766.
Resolution: Fixed
master: 59714b9d6addb1dbf2171cab937a0e3fec52f2b1

> Use checkpoint lock instead of fine-grained locking in Kafka AbstractFetcher
> Key: FLINK-17766
> URL: https://issues.apache.org/jira/browse/FLINK-17766
> Project: Flink
> Issue Type: Sub-task
> Components: Connectors / Kafka
> Reporter: Aljoscha Krettek
> Assignee: Aljoscha Krettek
> Priority: Major
> Fix For: 1.11.0
>
> In {{emitRecordsWithTimestamps()}}, we are currently locking on the partition
> state object itself to prevent concurrent access (and to make sure that
> changes are visible across threads). However, after recent changes
> (FLINK-17307) we hold the checkpoint lock for emitting the whole "bundle" of
> records from Kafka. We can now also just use the checkpoint lock in the
> periodic emitter callback and then don't need the fine-grained locking on the
> state for record emission.
[jira] [Closed] (FLINK-17658) Add new TimestampAssigner and WatermarkGenerator interfaces
[ https://issues.apache.org/jira/browse/FLINK-17658?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Aljoscha Krettek closed FLINK-17658.
Resolution: Fixed
master: 12b984f13a11c0f314d060dc7267e11e60e4930e

> Add new TimestampAssigner and WatermarkGenerator interfaces
> Key: FLINK-17658
> URL: https://issues.apache.org/jira/browse/FLINK-17658
> Project: Flink
> Issue Type: Sub-task
> Components: API / Core
> Reporter: Aljoscha Krettek
> Assignee: Aljoscha Krettek
> Priority: Major
> Fix For: 1.11.0
[jira] [Closed] (FLINK-17659) Add common watermark strategies and WatermarkStrategies helper
[ https://issues.apache.org/jira/browse/FLINK-17659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Aljoscha Krettek closed FLINK-17659.
Resolution: Fixed
master: eaef2d7f7acfd9b2063d1c421c97ad521734536b

> Add common watermark strategies and WatermarkStrategies helper
> Key: FLINK-17659
> URL: https://issues.apache.org/jira/browse/FLINK-17659
> Project: Flink
> Issue Type: Sub-task
> Components: API / Core
> Reporter: Aljoscha Krettek
> Assignee: Aljoscha Krettek
> Priority: Major
> Fix For: 1.11.0
>
> {{WatermarkStrategies}} is a builder-style helper for constructing common
> {{WatermarkStrategy}} subclasses, along with timestamp assigners and idleness
> configuration.
[GitHub] [flink] klion26 commented on pull request #12174: [FLINK-17736] Add flink cep example
klion26 commented on pull request #12174: URL: https://github.com/apache/flink/pull/12174#issuecomment-629945677

@dengziming thanks for the contribution. However, as the [contribution guide](https://flink.apache.org/contributing/contribute-code.html#create-jira-ticket-and-reach-consensus) says, we should reach consensus on the Jira ticket before opening a PR.
[GitHub] [flink] aljoscha closed pull request #12147: [FLINK-17653] FLIP-126: Unify (and separate) Watermark Assigners
aljoscha closed pull request #12147: URL: https://github.com/apache/flink/pull/12147
[GitHub] [flink] aljoscha commented on pull request #12147: [FLINK-17653] FLIP-126: Unify (and separate) Watermark Assigners
aljoscha commented on pull request #12147: URL: https://github.com/apache/flink/pull/12147#issuecomment-629944823

Thanks for the patient review! I merged this.
[GitHub] [flink] flinkbot edited a comment on pull request #12199: [FLINK-17774] [table] supports all kinds of changes for select result
flinkbot edited a comment on pull request #12199: URL: https://github.com/apache/flink/pull/12199#issuecomment-629793563

## CI report:

* 3a1729c5513f3a86af9e6b9d0dec00325e375c7b Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1618)
* f11005c6596ecf41efd898ba324374948b2eb8cb UNKNOWN
[GitHub] [flink] flinkbot edited a comment on pull request #12103: [FLINK-16998][core] Add a changeflag to Row
flinkbot edited a comment on pull request #12103: URL: https://github.com/apache/flink/pull/12103#issuecomment-627456433

## CI report:

* 05ab513e7a7aed7481001668eecddf26b8fd05cb Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1621)
* b20298d51eda267f008430478e375804ffa0f9df UNKNOWN
[GitHub] [flink] flinkbot edited a comment on pull request #11877: [FLINK-16641][network] Announce sender's backlog to solve the deadlock issue without exclusive buffers
flinkbot edited a comment on pull request #11877: URL: https://github.com/apache/flink/pull/11877#issuecomment-618273998

## CI report:

* d8b233f483d45b8901cc28770be9da71a39929ef Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1500)
* 3f89d29a4cee4917ff8087e16ab35c5d1274220c Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1674)
[GitHub] [flink] twalthr commented on a change in pull request #12201: [hotfix] Remove raw class usages in Configuration.
twalthr commented on a change in pull request #12201: URL: https://github.com/apache/flink/pull/12201#discussion_r426365127

File path: flink-core/src/main/java/org/apache/flink/configuration/Configuration.java

@@ -909,12 +909,10 @@ private void loggingFallback(FallbackKey fallbackKey, ConfigOption configOpti
 List listOfRawProperties = StructuredOptionsSplitter.splitEscaped(o.toString(), ',');
 return listOfRawProperties.stream()
 .map(s -> StructuredOptionsSplitter.splitEscaped(s, ':'))
-.map(pair -> {
+.peek(pair -> {

Review comment: not sure if we should use `peek` here: https://stackoverflow.com/questions/44370676/java-8-peek-vs-map
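For context, the semantic difference the comment alludes to can be sketched in plain Java: `map` transforms each element and its return value flows downstream, while `peek` only observes elements (its return value is ignored), which is why it is intended for debugging side effects rather than pipeline logic.

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class PeekVsMap {
    public static void main(String[] args) {
        // map transforms: the value it returns is what flows downstream.
        List<Integer> lengths = Stream.of("a", "bb", "ccc")
                .map(String::length)
                .collect(Collectors.toList());
        System.out.println(lengths); // [1, 2, 3]

        // peek observes: the original elements pass through unchanged,
        // and the lambda runs only for its side effect.
        AtomicInteger seen = new AtomicInteger();
        List<String> unchanged = Stream.of("a", "bb", "ccc")
                .peek(s -> seen.incrementAndGet())
                .collect(Collectors.toList());
        System.out.println(unchanged);  // [a, bb, ccc]
        System.out.println(seen.get()); // 3
    }
}
```

Note that the JDK documents `peek` as a debugging aid: under some terminal operations the pipeline is allowed to elide it entirely, which is the usual argument against using it for required side effects.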
[GitHub] [flink] Myasuka commented on a change in pull request #8693: [FLINK-8871] Support to cancel checkpoing via notification
Myasuka commented on a change in pull request #8693: URL: https://github.com/apache/flink/pull/8693#discussion_r426365219

File path: flink-runtime/src/main/java/org/apache/flink/runtime/state/CheckpointListener.java

@@ -38,4 +38,12 @@
 * @throws Exception
 */
 void notifyCheckpointComplete(long checkpointId) throws Exception;
+
+/**
+ * This method is called as a notification once a distributed checkpoint has been aborted.
+ *
+ * @param checkpointId The ID of the checkpoint that has been aborted.
+ * @throws Exception
+ */
+void notifyCheckpointAborted(long checkpointId) throws Exception;

Review comment: Once this PR is merged, I plan to add release notes in JIRA. From my previous experience, this interface change might not be published in the final release notes on the official Flink website.
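A minimal sketch of how an implementation might react to the new callback. The interface is re-declared here in simplified form so the snippet is self-contained; `StagingSink` and its staging logic are hypothetical, not Flink code.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Simplified re-declaration for illustration; the real interface lives in
// org.apache.flink.runtime.state.CheckpointListener.
interface CheckpointListener {
    void notifyCheckpointComplete(long checkpointId) throws Exception;
    void notifyCheckpointAborted(long checkpointId) throws Exception;
}

// Hypothetical sink that stages output per checkpoint and discards the
// staged records when the checkpoint is aborted.
class StagingSink implements CheckpointListener {
    private final Map<Long, List<String>> staged = new HashMap<>();

    void stage(long checkpointId, String record) {
        staged.computeIfAbsent(checkpointId, k -> new ArrayList<>()).add(record);
    }

    int pendingCheckpoints() {
        return staged.size();
    }

    @Override
    public void notifyCheckpointComplete(long checkpointId) {
        staged.remove(checkpointId); // checkpoint committed; drop the staging buffer
    }

    @Override
    public void notifyCheckpointAborted(long checkpointId) {
        staged.remove(checkpointId); // checkpoint failed; discard staged output
    }
}
```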
[jira] [Closed] (FLINK-17029) Introduce a new JDBC connector with new property keys
[ https://issues.apache.org/jira/browse/FLINK-17029?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Jark Wu closed FLINK-17029.
Fix Version/s: 1.11.0
Resolution: Fixed
master (1.11.0): ce843a2e601cbc2ddba8d3feacaa930aea810877

> Introduce a new JDBC connector with new property keys
> Key: FLINK-17029
> URL: https://issues.apache.org/jira/browse/FLINK-17029
> Project: Flink
> Issue Type: Sub-task
> Components: Connectors / JDBC, Table SQL / Ecosystem
> Reporter: Jark Wu
> Assignee: Leonard Xu
> Priority: Major
> Labels: pull-request-available
> Fix For: 1.11.0
>
> This new JDBC connector should use new interfaces proposed by FLIP-95, e.g.
> DynamicTableSource, DynamicTableSink, and Factory.
> The newly proposed keys:
> ||Old key||New key||
> |connector.type|connector|
> |connector.url|url|
> |connector.table|table-name|
> |connector.driver|driver|
> |connector.username|username|
> |connector.password|password|
> |connector.read.partition.column|scan.partition.column|
> |connector.read.partition.num|scan.partition.num|
> |connector.read.partition.lower-bound|scan.partition.lower-bound|
> |connector.read.partition.upper-bound|scan.partition.upper-bound|
> |connector.read.fetch-size|scan.fetch-size|
> |connector.lookup.cache.max-rows|lookup.cache.max-rows|
> |connector.lookup.cache.ttl|lookup.cache.ttl|
> |connector.lookup.max-retries|lookup.max-retries|
> |connector.write.flush.max-rows|sink.buffer-flush.max-rows|
> |connector.write.flush.interval|sink.buffer-flush.interval|
> |connector.write.max-retries|sink.max-retries|
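The old-to-new key mapping from the issue description can be expressed as a simple lookup table. This is an illustrative sketch of a migration helper, not Flink code; the data is taken verbatim from the table above.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative migration table for the JDBC connector property keys.
class JdbcPropertyKeyMigration {
    static final Map<String, String> OLD_TO_NEW = new HashMap<>();
    static {
        OLD_TO_NEW.put("connector.type", "connector");
        OLD_TO_NEW.put("connector.url", "url");
        OLD_TO_NEW.put("connector.table", "table-name");
        OLD_TO_NEW.put("connector.driver", "driver");
        OLD_TO_NEW.put("connector.username", "username");
        OLD_TO_NEW.put("connector.password", "password");
        OLD_TO_NEW.put("connector.read.partition.column", "scan.partition.column");
        OLD_TO_NEW.put("connector.read.partition.num", "scan.partition.num");
        OLD_TO_NEW.put("connector.read.partition.lower-bound", "scan.partition.lower-bound");
        OLD_TO_NEW.put("connector.read.partition.upper-bound", "scan.partition.upper-bound");
        OLD_TO_NEW.put("connector.read.fetch-size", "scan.fetch-size");
        OLD_TO_NEW.put("connector.lookup.cache.max-rows", "lookup.cache.max-rows");
        OLD_TO_NEW.put("connector.lookup.cache.ttl", "lookup.cache.ttl");
        OLD_TO_NEW.put("connector.lookup.max-retries", "lookup.max-retries");
        OLD_TO_NEW.put("connector.write.flush.max-rows", "sink.buffer-flush.max-rows");
        OLD_TO_NEW.put("connector.write.flush.interval", "sink.buffer-flush.interval");
        OLD_TO_NEW.put("connector.write.max-retries", "sink.max-retries");
    }

    // Returns the new key, or the input unchanged if it is not an old key.
    static String migrate(String key) {
        return OLD_TO_NEW.getOrDefault(key, key);
    }
}
```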
[GitHub] [flink] wuchong closed pull request #12176: [FLINK-17029][jdbc]Introduce a new JDBC connector with new property keys
wuchong closed pull request #12176: URL: https://github.com/apache/flink/pull/12176
[GitHub] [flink] guoweiM edited a comment on pull request #12132: [FLINK-17593][Connectors/FileSystem] Support arbitrary recovery mechanism for PartFileWriter
guoweiM edited a comment on pull request #12132: URL: https://github.com/apache/flink/pull/12132#issuecomment-629923445

Hi @aljoscha,
1. First of all, I think you are right: we should merge to master only after all participants agree with the PR.
2. I could split the commit into two commits, "Update BucketStateSerializerTest" and "Rework"; I think that is good practice. Before doing so, I want us to agree on how to change it: use the `Bucket`, not `InProgressFileWriter` or `RecoverableFsDataOutputStream`, to produce the pending file/in-progress file. The other two main concerns are the following.
3. Some thoughts on using a special `RecoverableWriter` to test the bucket state migration: currently `Bucket`/`BucketState`/`BucketStateSerializer` depend only on the interface, not on any particular implementation. In my opinion, any implementation of `RecoverableWriter` is a "special" one. The code path we want to test is bucket state (pendingFile & inProgressWriter) serialization (snapshot) and deserialization (restore); which implementation to use is a matter of convenience. For example, we always use the LocalFileSystem rather than S3 or HDFS. So I think any implementation that exercises this code path is fine.
4. Regarding the custom file copying/cleanup: I think it is important that PendingFile/InProgressFileWriter instances restored from v1 or v2 bytes actually work; the old version of `BucketStateSerializerTest` also verified this, which is why we do it.

Of course, these are all my personal thoughts. Correct me if I am missing something.
[jira] [Closed] (FLINK-16999) Data structure should cover all conversions declared in logical types
[ https://issues.apache.org/jira/browse/FLINK-16999?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Timo Walther closed FLINK-16999.
Fix Version/s: 1.11.0
Resolution: Fixed

> Data structure should cover all conversions declared in logical types
> Key: FLINK-16999
> URL: https://issues.apache.org/jira/browse/FLINK-16999
> Project: Flink
> Issue Type: Sub-task
> Components: Table SQL / API, Table SQL / Planner
> Reporter: Timo Walther
> Assignee: Timo Walther
> Priority: Major
> Labels: pull-request-available
> Fix For: 1.11.0
>
> In order to ensure that we don't lose any type precision or conversion class
> information in sources and sinks, this issue will add a type integrity test
> for data structure converters. UDFs will also benefit from this test.
[jira] [Commented] (FLINK-16999) Data structure should cover all conversions declared in logical types
[ https://issues.apache.org/jira/browse/FLINK-16999?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17109828#comment-17109828 ]
Timo Walther commented on FLINK-16999:

Enabled in 1.11.0: adfd45fbbd826488e6a00cc8d8f6436fa2179579

> Data structure should cover all conversions declared in logical types
> Key: FLINK-16999
> URL: https://issues.apache.org/jira/browse/FLINK-16999
> Project: Flink
> Issue Type: Sub-task
> Components: Table SQL / API, Table SQL / Planner
> Reporter: Timo Walther
> Assignee: Timo Walther
> Priority: Major
> Labels: pull-request-available
>
> In order to ensure that we don't lose any type precision or conversion class
> information in sources and sinks, this issue will add a type integrity test
> for data structure converters. UDFs will also benefit from this test.
[jira] [Commented] (FLINK-15792) Make Flink logs accessible via kubectl logs per default
[ https://issues.apache.org/jira/browse/FLINK-15792?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17109827#comment-17109827 ]
Yang Wang commented on FLINK-15792:

As discussed in the PR[1] for FLINK-17166, we did not find a reasonable way to send STDOUT/STDERR to the console and a log file simultaneously. So in FLINK-17166 we started with only writing the logs to console and files simultaneously, and left STDOUT/STDERR as an open question to be addressed later. For this ticket we are in the same situation: if the logs go to the console per default, we will not be able to access STDOUT/STDERR in the Flink web dashboard, which I think is a burden for users. So I suggest keeping this in the documentation[2] for users who want to use {{kubectl logs}} to view the logs.

[1]. https://github.com/apache/flink/pull/11839
[2]. https://ci.apache.org/projects/flink/flink-docs-master/ops/deployment/native_kubernetes.html#log-files

> Make Flink logs accessible via kubectl logs per default
> Key: FLINK-15792
> URL: https://issues.apache.org/jira/browse/FLINK-15792
> Project: Flink
> Issue Type: Sub-task
> Components: Deployment / Kubernetes
> Affects Versions: 1.10.0
> Reporter: Till Rohrmann
> Priority: Major
> Fix For: 1.11.0, 1.10.2
>
> I think we should make Flink's logs accessible via {{kubectl logs}} per
> default. Firstly, this is the idiomatic way to obtain the logs from a
> container on Kubernetes. Secondly, especially if something does not work and
> the container cannot start or stops abruptly, there is no way to log into the
> container and look for the log file. This makes debugging the setup quite hard.
> I think the best way would be to create the Flink Docker image in such a way
> that it logs to stdout. In order to allow access to the log file from the web
> UI, it should also create a log file. One way to achieve this is to add a
> ConsoleAppender to the respective logging configuration. Another way could be
> to start the process in console mode and then tee the stdout output into the
> log file.
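The "tee" approach from the issue description can be sketched in plain Java: a stream that duplicates every byte into two underlying streams. The class and its wiring are illustrative, not Flink or Docker-image code.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.io.PrintStream;

// Duplicates everything written to it into two underlying streams,
// e.g. the original stdout and a FileOutputStream for the log file.
class TeeOutputStream extends OutputStream {
    private final OutputStream first;
    private final OutputStream second;

    TeeOutputStream(OutputStream first, OutputStream second) {
        this.first = first;
        this.second = second;
    }

    @Override
    public void write(int b) throws IOException {
        first.write(b);
        second.write(b);
    }

    @Override
    public void flush() throws IOException {
        first.flush();
        second.flush();
    }
}

public class TeeDemo {
    public static void main(String[] args) {
        ByteArrayOutputStream copy = new ByteArrayOutputStream();
        PrintStream tee = new PrintStream(new TeeOutputStream(System.out, copy), true);
        tee.println("written once, visible twice");
        // In the Docker-image scenario the second stream would be a
        // FileOutputStream for the log file, installed via System.setOut(tee).
    }
}
```

The ConsoleAppender alternative mentioned in the description achieves the same end purely in the logging configuration, without touching the process streams, but it only covers log statements, not raw STDOUT/STDERR.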
[GitHub] [flink] TsReaper commented on pull request #12202: [FLINK-14807][table] Support collecting query results under all execution and network environments
TsReaper commented on pull request #12202: URL: https://github.com/apache/flink/pull/12202#issuecomment-629936566

Azure passed in https://dev.azure.com/tsreaper96/Flink/_build/results?buildId=27&view=results
[GitHub] [flink] flinkbot edited a comment on pull request #12203: [FLINK-17687][tests] Collect TM log files before tearing down Mesos
flinkbot edited a comment on pull request #12203: URL: https://github.com/apache/flink/pull/12203#issuecomment-629845268

## CI report:

* 9e6f064d217572b17f71450b6397b5587691551c Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1648)
[GitHub] [flink] flinkbot edited a comment on pull request #12193: [FLINK-17759][runtime] Remove unused RestartIndividualStrategy
flinkbot edited a comment on pull request #12193: URL: https://github.com/apache/flink/pull/12193#issuecomment-629673659

## CI report:

* 3f6a40a4f0933023bc3172529ed0c7d5a6c422fb Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1569)
* a36bef2ce9bc2c3724217931c5d30b212e8ef856 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1671)
[GitHub] [flink] flinkbot edited a comment on pull request #12192: [FLINK-17758][runtime] Remove unused AdaptedRestartPipelinedRegionStrategyNG
flinkbot edited a comment on pull request #12192: URL: https://github.com/apache/flink/pull/12192#issuecomment-629673624

## CI report:

* 15ca64e3a20101239b30b90064f3b0f35238c828 Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1568)
* 27144288995d450def38b46b6f526fb023620af4 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1670)
[GitHub] [flink] flinkbot edited a comment on pull request #11877: [FLINK-16641][network] Announce sender's backlog to solve the deadlock issue without exclusive buffers
flinkbot edited a comment on pull request #11877: URL: https://github.com/apache/flink/pull/11877#issuecomment-618273998

## CI report:

* d8b233f483d45b8901cc28770be9da71a39929ef Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1500)
* 3f89d29a4cee4917ff8087e16ab35c5d1274220c UNKNOWN
[GitHub] [flink] flinkbot edited a comment on pull request #12171: [FLINK-17715][sql client] Supports function DDLs in SQL-CLI
flinkbot edited a comment on pull request #12171: URL: https://github.com/apache/flink/pull/12171#issuecomment-629218919

## CI report:

* f807fd2e590f6bc6fe0a8022c94486a96de3dcb4 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1665)
[GitHub] [flink] Myasuka commented on a change in pull request #8693: [FLINK-8871] Support to cancel checkpoing via notification
Myasuka commented on a change in pull request #8693: URL: https://github.com/apache/flink/pull/8693#discussion_r426360979

File path: flink-streaming-java/src/main/java/org/apache/flink/streaming/runtime/tasks/SubtaskCheckpointCoordinatorImpl.java

@@ -183,6 +235,42 @@ public void notifyCheckpointComplete(long checkpointId, OperatorChain oper
 env.getTaskStateManager().notifyCheckpointComplete(checkpointId);
 }
+
+@Override
+public void notifyCheckpointAborted(long checkpointId, OperatorChain operatorChain, Supplier isRunning) throws Exception {
+
+ if (isRunning.get()) {
+ LOG.debug("Notification of aborted checkpoint for task {}", taskName);
+ // only happens when the task always received checkpoints to abort but never trigger or executing.
+ if (abortedCheckpointIds.size() >= maxRecordAbortedCheckpoints) {
+ abortedCheckpointIds.pollFirst();

Review comment: I think this suggestion sounds reasonable; changed to `ArrayDeque`.
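The bounded bookkeeping discussed in the snippet can be sketched as follows. The field names mirror the diff above, but the class itself is an illustrative standalone sketch, not the Flink implementation: when the number of remembered aborted checkpoint IDs reaches the cap, the oldest entry is evicted before the new one is recorded.

```java
import java.util.ArrayDeque;

// Illustrative sketch: keep at most maxRecordAbortedCheckpoints aborted IDs,
// evicting the oldest when the cap is reached.
class AbortedCheckpointIds {
    private final int maxRecordAbortedCheckpoints;
    private final ArrayDeque<Long> abortedCheckpointIds = new ArrayDeque<>();

    AbortedCheckpointIds(int maxRecordAbortedCheckpoints) {
        this.maxRecordAbortedCheckpoints = maxRecordAbortedCheckpoints;
    }

    void recordAborted(long checkpointId) {
        if (abortedCheckpointIds.size() >= maxRecordAbortedCheckpoints) {
            abortedCheckpointIds.pollFirst(); // drop the oldest recorded ID
        }
        abortedCheckpointIds.addLast(checkpointId);
    }

    boolean wasAborted(long checkpointId) {
        return abortedCheckpointIds.contains(checkpointId);
    }
}
```

An `ArrayDeque` fits here because both the eviction (`pollFirst`) and the insertion (`addLast`) are O(1), and the structure never grows past the configured bound.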
[jira] [Commented] (FLINK-14255) Integrate hive to streaming file sink
[ https://issues.apache.org/jira/browse/FLINK-14255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17109820#comment-17109820 ] Jingsong Lee commented on FLINK-14255:
--
master (parquet and orc): 1f668dd3df1a3d9bb8837c05ebdd5e473c55b1ea

> Integrate hive to streaming file sink
> -
>
> Key: FLINK-14255
> URL: https://issues.apache.org/jira/browse/FLINK-14255
> Project: Flink
> Issue Type: Sub-task
> Components: Table SQL / Planner
> Reporter: Jingsong Lee
> Assignee: Jingsong Lee
> Priority: Major
> Labels: pull-request-available
> Fix For: 1.11.0
>
> Time Spent: 20m
> Remaining Estimate: 0h
>
> Based on StreamingFileSink.
> Extends format support and partition commit support.

-- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [flink] JingsongLi merged pull request #12206: [FLINK-14255][hive] Integrate hive to streaming file sink with parquet and orc
JingsongLi merged pull request #12206: URL: https://github.com/apache/flink/pull/12206
[GitHub] [flink] JingsongLi commented on pull request #12206: [FLINK-14255][hive] Integrate hive to streaming file sink with parquet and orc
JingsongLi commented on pull request #12206: URL: https://github.com/apache/flink/pull/12206#issuecomment-629934589

Thanks @lirui-apache for the review, merging...
[GitHub] [flink] zhuzhurk commented on a change in pull request #12181: [FLINK-17645][runtime] Fix SafetyNetCloseableRegistry constructor bug.
zhuzhurk commented on a change in pull request #12181: URL: https://github.com/apache/flink/pull/12181#discussion_r426358510

## File path: flink-core/src/test/java/org/apache/flink/core/fs/SafetyNetCloseableRegistryTest.java

```diff
@@ -198,4 +211,27 @@ public void testReaperThreadSpawnAndStop() throws Exception {
 	}
 	Assert.assertFalse(SafetyNetCloseableRegistry.isReaperThreadRunning());
 }
+
+/**
+ * Test for FLINK-17645.
```

Review comment: I think this line is not needed. It can be tracked via git log.
[GitHub] [flink] yangyichao-mango closed pull request #12196: [FLINK-17353][docs] Fix Broken links in Flink docs master
yangyichao-mango closed pull request #12196: URL: https://github.com/apache/flink/pull/12196
[GitHub] [flink] yangyichao-mango commented on pull request #12196: [FLINK-17353][docs] Fix Broken links in Flink docs master
yangyichao-mango commented on pull request #12196: URL: https://github.com/apache/flink/pull/12196#issuecomment-629932611

That is the result after executing `sh docs/check_links.sh`. Thanks for the review. @klion26

![image](https://user-images.githubusercontent.com/29545877/82173270-30bd0d80-98ff-11ea-9721-22e06d4b6211.png)
[GitHub] [flink] yangyichao-mango commented on pull request #12196: [FLINK-17353][docs] Fix Broken links in Flink docs master
yangyichao-mango commented on pull request #12196: URL: https://github.com/apache/flink/pull/12196#issuecomment-629932483

> @yangyichao-mango thanks for your contribution. could you please share the result after execute `sh docs/check_links.sh` here (you have to execute `sh docs/build_docs.sh -p ` before execute the check_links script)

@klion26
[GitHub] [flink] flinkbot edited a comment on pull request #12208: [FLINK-17651][table-planner-blink] DecomposeGroupingSetsRule generat…
flinkbot edited a comment on pull request #12208: URL: https://github.com/apache/flink/pull/12208#issuecomment-629928875

## CI report:
* f25b5668db8f6c57472286fd839ed8a548ee051b Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1669)
[GitHub] [flink] flinkbot edited a comment on pull request #12207: [hotfix][doc] Fix broken link in the native Kubernetes document
flinkbot edited a comment on pull request #12207: URL: https://github.com/apache/flink/pull/12207#issuecomment-629928836

## CI report:
* ac05af3e3c748efa3478dab616be670578721f92 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1668)
[GitHub] [flink] flinkbot edited a comment on pull request #12206: [FLINK-14255][hive] Integrate hive to streaming file sink with parquet and orc
flinkbot edited a comment on pull request #12206: URL: https://github.com/apache/flink/pull/12206#issuecomment-629916046

## CI report:
* 521334c523a5004883f94e1fb996cdd18081500c UNKNOWN
* 2ea5f52f890c9cb5728fd7772f59bea4b38f07ca Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1664)
* 299012acbbf33c20a524bfa366bb0a1d07551ec1 UNKNOWN
[GitHub] [flink] yangyichao-mango commented on pull request #12196: [FLINK-17353][docs] Fix Broken links in Flink docs master
yangyichao-mango commented on pull request #12196: URL: https://github.com/apache/flink/pull/12196#issuecomment-629932386

> @yangyichao-mango thanks for your contribution. could you please share the result after execute `sh docs/check_links.sh` here (you have to execute `sh docs/build_docs.sh -p ` before execute the check_links script)

![image](https://user-images.githubusercontent.com/29545877/82173191-f18ebc80-98fe-11ea-8972-0173a641996f.png)

That is the result after executing `sh docs/check_links.sh`.
[GitHub] [flink] flinkbot edited a comment on pull request #12193: [FLINK-17759][runtime] Remove unused RestartIndividualStrategy
flinkbot edited a comment on pull request #12193: URL: https://github.com/apache/flink/pull/12193#issuecomment-629673659

## CI report:
* 3f6a40a4f0933023bc3172529ed0c7d5a6c422fb Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1569)
* a36bef2ce9bc2c3724217931c5d30b212e8ef856 UNKNOWN
[GitHub] [flink] flinkbot edited a comment on pull request #11839: [FLINK-17166][dist] Modify the log4j-console.properties to also output logs into the files for WebUI
flinkbot edited a comment on pull request #11839: URL: https://github.com/apache/flink/pull/11839#issuecomment-617042256

## CI report:
* ab5e8324f0ace63d1e5b3f292dd6d517b056fd21 UNKNOWN
* 216b86cf20cccbc2462b36c8edc5b5a396ebd3c0 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1646)
* de71393ce72505e00313c75d5c43060bb86589f7 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1667)
[GitHub] [flink] yangyichao-mango commented on pull request #12196: [FLINK-17353][docs] Fix Broken links in Flink docs master
yangyichao-mango commented on pull request #12196: URL: https://github.com/apache/flink/pull/12196#issuecomment-629932115

![image](https://user-images.githubusercontent.com/29545877/82173155-dd4abf80-98fe-11ea-8f36-5f10f6f10781.png)