[GitHub] [flink] wuchong commented on a change in pull request #11959: [FLINK-16997][table-common] Add new factory interfaces and discovery utilities
wuchong commented on a change in pull request #11959: URL: https://github.com/apache/flink/pull/11959#discussion_r418922875

## File path: flink-table/flink-table-common/src/main/java/org/apache/flink/table/factories/FactoryUtil.java

## @@ -0,0 +1,570 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.factories;
+
+import org.apache.flink.annotation.Internal;
+import org.apache.flink.configuration.ConfigOption;
+import org.apache.flink.configuration.ConfigOptions;
+import org.apache.flink.configuration.Configuration;
+import org.apache.flink.configuration.DelegatingConfiguration;
+import org.apache.flink.configuration.ReadableConfig;
+import org.apache.flink.table.api.TableException;
+import org.apache.flink.table.api.ValidationException;
+import org.apache.flink.table.catalog.Catalog;
+import org.apache.flink.table.catalog.CatalogTable;
+import org.apache.flink.table.catalog.ObjectIdentifier;
+import org.apache.flink.table.connector.format.ScanFormat;
+import org.apache.flink.table.connector.format.SinkFormat;
+import org.apache.flink.table.connector.sink.DynamicTableSink;
+import org.apache.flink.table.connector.source.DynamicTableSource;
+import org.apache.flink.table.utils.EncodingUtils;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import javax.annotation.Nullable;
+
+import java.util.HashSet;
+import java.util.LinkedList;
+import java.util.List;
+import java.util.Map;
+import java.util.Optional;
+import java.util.ServiceConfigurationError;
+import java.util.ServiceLoader;
+import java.util.Set;
+import java.util.stream.Collectors;
+
+/**
+ * Utility for working with {@link Factory}s.
+ */
+@Internal

Review comment: Shall we mark the `FactoryUtil` as `PublicEvolving`? This class appears in many Javadocs of public interfaces.

This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
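The `FactoryUtil` under review discovers `Factory` implementations through Java's `ServiceLoader` (note the `ServiceLoader` and `ServiceConfigurationError` imports above) and then filters candidates by identifier. A minimal sketch of that discover-and-filter pattern — using an in-memory candidate list in place of a real `META-INF/services` registration so the example stays self-contained; the names here are illustrative and not Flink's actual API:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.stream.Collectors;

public class FactoryDiscovery {

    /** Minimal stand-in for Flink's factory SPI: each factory announces an identifier. */
    interface Factory {
        String factoryIdentifier();
    }

    static class KafkaFactory implements Factory {
        public String factoryIdentifier() { return "kafka"; }
    }

    static class ElasticsearchFactory implements Factory {
        public String factoryIdentifier() { return "elasticsearch"; }
    }

    /**
     * Filters candidates by identifier, failing loudly on zero or multiple matches --
     * the same ambiguity checks a discovery utility has to perform after loading
     * all factories from the classpath.
     */
    static Factory discover(List<Factory> candidates, String identifier) {
        List<Factory> matches = candidates.stream()
                .filter(f -> f.factoryIdentifier().equals(identifier))
                .collect(Collectors.toList());
        if (matches.isEmpty()) {
            throw new IllegalStateException("No factory found for identifier: " + identifier);
        }
        if (matches.size() > 1) {
            throw new IllegalStateException("Ambiguous factories for identifier: " + identifier);
        }
        return matches.get(0);
    }

    public static void main(String[] args) {
        List<Factory> candidates = new ArrayList<>();
        candidates.add(new KafkaFactory());
        candidates.add(new ElasticsearchFactory());
        System.out.println(discover(candidates, "kafka").factoryIdentifier());
    }
}
```

In the real utility the candidate list comes from `ServiceLoader.load(Factory.class)`, which is why a missing service file or a duplicate registration surfaces as the "no factory" / "ambiguous factory" errors sketched here.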
[GitHub] [flink] flinkbot edited a comment on pull request #11725: [FLINK-15670][API] Provide a Kafka Source/Sink pair as KafkaShuffle
flinkbot edited a comment on pull request #11725: URL: https://github.com/apache/flink/pull/11725#issuecomment-613252882 ## CI report: * a9c54057daa5bb907302534b04be5f4742d1b586 Travis: [FAILURE](https://travis-ci.com/github/flink-ci/flink/builds/162928464) Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=473) * 0d84af72bc6f7159452da67f34d8825a0d040d02 UNKNOWN Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build - `@flinkbot run azure` re-run the last Azure build
[jira] [Commented] (FLINK-17027) Introduce a new Elasticsearch connector with new property keys
[ https://issues.apache.org/jira/browse/FLINK-17027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17097831#comment-17097831 ] Jark Wu commented on FLINK-17027: - Hi [~molsion], this issue is still blocked by FLIP-95.
> Introduce a new Elasticsearch connector with new property keys
> --
>
> Key: FLINK-17027
> URL: https://issues.apache.org/jira/browse/FLINK-17027
> Project: Flink
> Issue Type: Sub-task
> Components: Connectors / ElasticSearch, Table SQL / Ecosystem
> Reporter: Jark Wu
> Priority: Major
> Fix For: 1.11.0
>
> This new Elasticsearch connector should use the new interfaces proposed by FLIP-95, e.g. DynamicTableSource, DynamicTableSink, and Factory.
> The new proposed keys:
> ||Old key||New key||Note||
> |connector.type|connector| |
> |connector.version|N/A|merged into 'connector' key|
> |connector.hosts|hosts| |
> |connector.index|index| |
> |connector.document-type|document-type| |
> |connector.failure-handler|failure-handler| |
> |connector.connection-max-retry-timeout|connection.max-retry-timeout| |
> |connector.connection-path-prefix|connection.path-prefix| |
> |connector.key-delimiter|document-id.key-delimiter|They can also be used by sources in the future. In addition, we prefix 'document-id' to make the meaning more understandable.|
> |connector.key-null-literal|document-id.key-null-literal| |
> |connector.flush-on-checkpoint|sink.flush-on-checkpoint| |
> |connector.bulk-flush.max-actions|sink.bulk-flush.max-actions|we still use bulk-flush, because it's an Elasticsearch terminology.|
> |connector.bulk-flush.max-size|sink.bulk-flush.max-size| |
> |connector.bulk-flush.interval|sink.bulk-flush.interval| |
> |connector.bulk-flush.back-off.type|sink.bulk-flush.back-off.strategy| |
> |connector.bulk-flush.back-off.max-retries|sink.bulk-flush.back-off.max-retries| |
> |connector.bulk-flush.back-off.delay|sink.bulk-flush.back-off.delay| |
-- This message was sent by Atlassian Jira (v8.3.4#803005)
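The key renaming in the issue description above is mechanical enough to express as code. A hypothetical helper (not part of Flink) that rewrites old-style Elasticsearch option keys into the proposed new ones, using the mapping from the table:

```java
import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.Map;

public class EsKeyMigration {

    /** Old-to-new key mapping, taken directly from the table in FLINK-17027. */
    private static final Map<String, String> RENAMES = new HashMap<>();
    static {
        RENAMES.put("connector.type", "connector");
        RENAMES.put("connector.hosts", "hosts");
        RENAMES.put("connector.index", "index");
        RENAMES.put("connector.document-type", "document-type");
        RENAMES.put("connector.failure-handler", "failure-handler");
        RENAMES.put("connector.connection-max-retry-timeout", "connection.max-retry-timeout");
        RENAMES.put("connector.connection-path-prefix", "connection.path-prefix");
        RENAMES.put("connector.key-delimiter", "document-id.key-delimiter");
        RENAMES.put("connector.key-null-literal", "document-id.key-null-literal");
        RENAMES.put("connector.flush-on-checkpoint", "sink.flush-on-checkpoint");
        RENAMES.put("connector.bulk-flush.max-actions", "sink.bulk-flush.max-actions");
        RENAMES.put("connector.bulk-flush.max-size", "sink.bulk-flush.max-size");
        RENAMES.put("connector.bulk-flush.interval", "sink.bulk-flush.interval");
        RENAMES.put("connector.bulk-flush.back-off.type", "sink.bulk-flush.back-off.strategy");
        RENAMES.put("connector.bulk-flush.back-off.max-retries", "sink.bulk-flush.back-off.max-retries");
        RENAMES.put("connector.bulk-flush.back-off.delay", "sink.bulk-flush.back-off.delay");
    }

    /**
     * Rewrites old-style options into new-style ones. As a simplification,
     * 'connector.version' is dropped here; per the table it is merged into
     * the 'connector' key rather than carried over as a separate option.
     */
    public static Map<String, String> migrate(Map<String, String> oldOptions) {
        Map<String, String> result = new LinkedHashMap<>();
        for (Map.Entry<String, String> e : oldOptions.entrySet()) {
            if (e.getKey().equals("connector.version")) {
                continue; // merged into the 'connector' key
            }
            result.put(RENAMES.getOrDefault(e.getKey(), e.getKey()), e.getValue());
        }
        return result;
    }
}
```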
[jira] [Commented] (FLINK-17028) Introduce a new HBase connector with new property keys
[ https://issues.apache.org/jira/browse/FLINK-17028?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17097830#comment-17097830 ] Jark Wu commented on FLINK-17028: - Hi [~molsion], this issue is still blocked by FLIP-95.
> Introduce a new HBase connector with new property keys
> --
>
> Key: FLINK-17028
> URL: https://issues.apache.org/jira/browse/FLINK-17028
> Project: Flink
> Issue Type: Sub-task
> Components: Connectors / HBase, Table SQL / Ecosystem
> Reporter: Jark Wu
> Priority: Major
>
> This new Kafka connector should use new interfaces proposed by FLIP-95, e.g. DynamicTableSource, DynamicTableSink, and Factory.
> The new proposed keys:
> ||Old key||New key||Note||
> |connector.type|connector| |
> |connector.version|N/A|merged into 'connector' key|
> |connector.table-name|table-name| |
> |connector.zookeeper.quorum|zookeeper.quorum| |
> |connector.zookeeper.znode.parent|zookeeper.znode-parent| |
> |connector.write.buffer-flush.max-size|sink.buffer-flush.max-size| |
> |connector.write.buffer-flush.max-rows|sink.buffer-flush.max-rows| |
> |connector.write.buffer-flush.interval|sink.buffer-flush.interval| |
[GitHub] [flink] flinkbot edited a comment on pull request #11974: [FLINK-16970][build][metrics]Bundle JMXReporter separately from dist …
flinkbot edited a comment on pull request #11974: URL: https://github.com/apache/flink/pull/11974#issuecomment-622652948 ## CI report: * 4261809a08f0cfb516ea52db15de27f154b3ad73 Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=529)
[jira] [Commented] (FLINK-12675) Event time synchronization in Kafka consumer
[ https://issues.apache.org/jira/browse/FLINK-12675?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17097822#comment-17097822 ] Akshay Aggarwal commented on FLINK-12675: - Thanks [~thw]. I've raised a PR, can someone help review? > Event time synchronization in Kafka consumer > > > Key: FLINK-12675 > URL: https://issues.apache.org/jira/browse/FLINK-12675 > Project: Flink > Issue Type: Sub-task > Components: Connectors / Kafka >Reporter: Thomas Weise >Assignee: Akshay Aggarwal >Priority: Major > Labels: pull-request-available > Attachments: 0001-Kafka-event-time-alignment.patch > > > Integrate the source watermark tracking into the Kafka consumer and implement > the sync mechanism (different consumer model, compared to Kinesis).
[jira] [Commented] (FLINK-17027) Introduce a new Elasticsearch connector with new property keys
[ https://issues.apache.org/jira/browse/FLINK-17027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17097813#comment-17097813 ] molsion mo commented on FLINK-17027: I want to take this task if no one is working on it. > Introduce a new Elasticsearch connector with new property keys > -- > > Key: FLINK-17027 > URL: https://issues.apache.org/jira/browse/FLINK-17027 > Project: Flink > Issue Type: Sub-task > Components: Connectors / ElasticSearch, Table SQL / Ecosystem >Reporter: Jark Wu >Priority: Major > Fix For: 1.11.0
[GitHub] [flink] flinkbot edited a comment on pull request #11975: [FLINK-17496][kinesis] Upgrade amazon-kinesis-producer to 0.14.0
flinkbot edited a comment on pull request #11975: URL: https://github.com/apache/flink/pull/11975#issuecomment-622662116 ## CI report: * 89d8665ddbcdd11cff8652f39fea4d0601f809ac Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=530)
[jira] [Commented] (FLINK-17028) Introduce a new HBase connector with new property keys
[ https://issues.apache.org/jira/browse/FLINK-17028?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17097803#comment-17097803 ] molsion mo commented on FLINK-17028: I have read and understood FLIP-95, and I was a contributor to the Flink project. I want to take this task if no one is working on it. In addition, the description is wrong: it is the HBase connector, not the Kafka connector. > Introduce a new HBase connector with new property keys > -- > > Key: FLINK-17028 > URL: https://issues.apache.org/jira/browse/FLINK-17028 > Project: Flink > Issue Type: Sub-task > Components: Connectors / HBase, Table SQL / Ecosystem >Reporter: Jark Wu >Priority: Major
[GitHub] [flink] flinkbot commented on pull request #11975: [FLINK-17496][kinesis] Upgrade amazon-kinesis-producer to 0.14.0
flinkbot commented on pull request #11975: URL: https://github.com/apache/flink/pull/11975#issuecomment-622662116 ## CI report: * 89d8665ddbcdd11cff8652f39fea4d0601f809ac UNKNOWN
[GitHub] [flink] flinkbot commented on pull request #11975: [FLINK-17496][kinesis] Upgrade amazon-kinesis-producer to 0.14.0
flinkbot commented on pull request #11975: URL: https://github.com/apache/flink/pull/11975#issuecomment-622660356 Thanks a lot for your contribution to the Apache Flink project. I'm the @flinkbot. I help the community to review your pull request. We will use this comment to track the progress of the review. ## Automated Checks Last check on commit 89d8665ddbcdd11cff8652f39fea4d0601f809ac (Sat May 02 03:11:25 UTC 2020) **Warnings:** * **1 pom.xml files were touched**: Check for build and licensing issues. * No documentation files were touched! Remember to keep the Flink docs up to date! Mention the bot in a comment to re-run the automated checks. ## Review Progress * ❓ 1. The [description] looks good. * ❓ 2. There is [consensus] that the contribution should go into Flink. * ❓ 3. Needs [attention] from. * ❓ 4. The change fits into the overall [architecture]. * ❓ 5. Overall code [quality] is good. Please see the [Pull Request Review Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full explanation of the review process. The Bot is tracking the review progress through labels. Labels are applied according to the order of the review items. For consensus, approval by a Flink committer or PMC member is required Bot commands The @flinkbot bot supports the following commands: - `@flinkbot approve description` to approve one or more aspects (aspects: `description`, `consensus`, `architecture` and `quality`) - `@flinkbot approve all` to approve all aspects - `@flinkbot approve-until architecture` to approve everything until `architecture` - `@flinkbot attention @username1 [@username2 ..]` to require somebody's attention - `@flinkbot disapprove architecture` to remove an approval you gave earlier
[GitHub] [flink] tweise opened a new pull request #11975: [FLINK-17496][kinesis] Upgrade amazon-kinesis-producer to 0.14.0
tweise opened a new pull request #11975: URL: https://github.com/apache/flink/pull/11975 ## What is the purpose of the change Kinesis producer 0.13.1 introduced a performance regression that can be addressed with the upgrade to 0.14.0 ## Verifying this change The problem was discovered and the fix verified as part of 1.10.1 release testing in an internal benchmark environment. ## Does this pull request potentially affect one of the following parts: - Dependencies (does it add or upgrade a dependency): (yes) - The public API, i.e., is any changed class annotated with `@Public(Evolving)`: (no) - The serializers: (no) - The runtime per-record code paths (performance sensitive): (no) - Anything that affects deployment or recovery: JobManager (and its components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: (no) - The S3 file system connector: (no) ## Documentation - Does this pull request introduce a new feature? (no) - If yes, how is the feature documented? (not applicable / docs / JavaDocs / not documented)
[jira] [Updated] (FLINK-17496) Performance regression with amazon-kinesis-producer 0.13.1 in Flink 1.10.x
[ https://issues.apache.org/jira/browse/FLINK-17496?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated FLINK-17496: --- Labels: pull-request-available (was: ) > Performance regression with amazon-kinesis-producer 0.13.1 in Flink 1.10.x > -- > > Key: FLINK-17496 > URL: https://issues.apache.org/jira/browse/FLINK-17496 > Project: Flink > Issue Type: Bug > Components: Connectors / Kinesis >Affects Versions: 1.10.0 > Environment: The KPL upgrade in 1.10.0 has introduced a performance > issue, which can be addressed by reverting to 0.12.9 or forward fix with > 0.14.0. >Reporter: Thomas Weise >Assignee: Thomas Weise >Priority: Major > Labels: pull-request-available >
[jira] [Created] (FLINK-17496) Performance regression with amazon-kinesis-producer 0.13.1 in Flink 1.10.x
Thomas Weise created FLINK-17496: Summary: Performance regression with amazon-kinesis-producer 0.13.1 in Flink 1.10.x Key: FLINK-17496 URL: https://issues.apache.org/jira/browse/FLINK-17496 Project: Flink Issue Type: Bug Components: Connectors / Kinesis Affects Versions: 1.10.0 Environment: The KPL upgrade in 1.10.0 has introduced a performance issue, which can be addressed by reverting to 0.12.9 or forward fix with 0.14.0. Reporter: Thomas Weise Assignee: Thomas Weise
[GitHub] [flink] flinkbot edited a comment on pull request #11974: [FLINK-16970][build][metrics]Bundle JMXReporter separately from dist …
flinkbot edited a comment on pull request #11974: URL: https://github.com/apache/flink/pull/11974#issuecomment-622652948 ## CI report: * 4261809a08f0cfb516ea52db15de27f154b3ad73 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=529)
[GitHub] [flink] flinkbot commented on pull request #11974: [FLINK-16970][build][metrics]Bundle JMXReporter separately from dist …
flinkbot commented on pull request #11974: URL: https://github.com/apache/flink/pull/11974#issuecomment-622652948 ## CI report: * 4261809a08f0cfb516ea52db15de27f154b3ad73 UNKNOWN
[GitHub] [flink] wanglijie95 commented on pull request #11499: [FLINK-16681][Connectors/JDBC]Jdbc JDBCOutputFormat and JDBCLookupFunction PreparedStatement loss connection, if long time not records to
wanglijie95 commented on pull request #11499: URL: https://github.com/apache/flink/pull/11499#issuecomment-622651532 @LakeShen Hi LakeShen, Flink release 1.11 will enter the code freeze period on May 15. Maybe you can finish this PR before May 15 so it can be merged into 1.11.
[GitHub] [flink] flinkbot commented on pull request #11974: [FLINK-16970][build][metrics]Bundle JMXReporter separately from dist …
flinkbot commented on pull request #11974: URL: https://github.com/apache/flink/pull/11974#issuecomment-622650910 ## Automated Checks Last check on commit 4261809a08f0cfb516ea52db15de27f154b3ad73 (Sat May 02 01:43:41 UTC 2020) **Warnings:** * **1 pom.xml files were touched**: Check for build and licensing issues. * No documentation files were touched! Remember to keep the Flink docs up to date! Mention the bot in a comment to re-run the automated checks.
[GitHub] [flink] molsionmo opened a new pull request #11974: [FLINK-16970][build][metrics]Bundle JMXReporter separately from dist …
molsionmo opened a new pull request #11974: URL: https://github.com/apache/flink/pull/11974 ## What is the purpose of the change *Bundle JMXReporter separately from the dist jar, and put it in the /opt folder* ## Verifying this change This change is a trivial rework / code cleanup without any test coverage. ## Does this pull request potentially affect one of the following parts: - Dependencies (does it add or upgrade a dependency): (no) - The public API, i.e., is any changed class annotated with `@Public(Evolving)`: (no) - The serializers: (no) - The runtime per-record code paths (performance sensitive): (no) - Anything that affects deployment or recovery: JobManager (and its components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: (no) - The S3 file system connector: (no) ## Documentation - Does this pull request introduce a new feature? (no) - If yes, how is the feature documented? (not documented)
[GitHub] [flink] flinkbot edited a comment on pull request #11963: [FLINK-16408] Bind user code class loader to lifetime of Job on TaskExecutor
flinkbot edited a comment on pull request #11963: URL: https://github.com/apache/flink/pull/11963#issuecomment-621969588 ## CI report: * e07ec1bd9c216cdac0ac4faa79bc5509ab4129b6 UNKNOWN * 994fb5984ec6312126d76b75206d0023e8ae4212 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=528)
[GitHub] [flink] flinkbot edited a comment on pull request #11973: [FLINK-17488] Now JdbcConnectionOptions supports autoCommit control
flinkbot edited a comment on pull request #11973: URL: https://github.com/apache/flink/pull/11973#issuecomment-622555129 ## CI report: * ccb23fef7822524883d71a604d4ef9e673307bc7 Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=527) * 0a189aac6198b16763d5784f423b793c0e22d4b0 UNKNOWN
[GitHub] [flink] flinkbot edited a comment on pull request #11963: [FLINK-16408] Bind user code class loader to lifetime of Job on TaskExecutor
flinkbot edited a comment on pull request #11963: URL: https://github.com/apache/flink/pull/11963#issuecomment-621969588 ## CI report: * e07ec1bd9c216cdac0ac4faa79bc5509ab4129b6 UNKNOWN * 7401f5580774397bd121736b1068f03009a2d980 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=521) * 994fb5984ec6312126d76b75206d0023e8ae4212 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=528)
[GitHub] [flink] flinkbot edited a comment on pull request #11963: [FLINK-16408] Bind user code class loader to lifetime of Job on TaskExecutor
flinkbot edited a comment on pull request #11963: URL: https://github.com/apache/flink/pull/11963#issuecomment-621969588 ## CI report: * e07ec1bd9c216cdac0ac4faa79bc5509ab4129b6 UNKNOWN * 7401f5580774397bd121736b1068f03009a2d980 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=521) * 994fb5984ec6312126d76b75206d0023e8ae4212 UNKNOWN
[GitHub] [flink] flinkbot edited a comment on pull request #11971: [FLINK-17271] Translate new DataStream API tutorial
flinkbot edited a comment on pull request #11971: URL: https://github.com/apache/flink/pull/11971#issuecomment-622521983 ## CI report: * f4082786bad837955dea7f68fce466c6b7719912 Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=524)
[jira] [Commented] (FLINK-16478) add restApi to modify loglevel
[ https://issues.apache.org/jira/browse/FLINK-16478?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17097714#comment-17097714 ] Xingxing Di commented on FLINK-16478: - {quote}In general I think such a feature would be helpful for our users. What I would be interested in is how exactly it should be implemented. I guess a proper design with an implementation plan could help here. {quote} [~trohrmann] [~felixzheng] [~xiaodao], I wrote a new design doc with more details. Could you help me review it? I will improve it asap. [Flink should support dynamic log level setting|https://docs.google.com/document/d/1Q02VSSBzlZaZzvxuChIo1uinw8KDQsyTZUut6_IDErY] > add restApi to modify loglevel > --- > > Key: FLINK-16478 > URL: https://issues.apache.org/jira/browse/FLINK-16478 > Project: Flink > Issue Type: Improvement > Components: Runtime / REST >Reporter: xiaodao >Priority: Minor > > Sometimes we may need to change the log level to get more information to resolve a bug; currently we need to stop the job, modify conf/log4j.properties, and resubmit it. I think it's better to add a REST API to modify the log level.
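The discussion above is about flipping log levels at runtime through REST instead of restarting the job. As an illustrative stdlib analogue — Flink logs through slf4j/log4j, so this is not how it would actually be wired, but `java.util.logging` shows the runtime-mutable-level mechanism a REST handler would invoke:

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public class DynamicLogLevel {

    /**
     * Changes a named logger's level at runtime. In the proposed design,
     * a REST endpoint handler would call something like this on request,
     * with no config-file edit or process restart needed.
     */
    public static Level setLevel(String loggerName, String level) {
        Logger logger = Logger.getLogger(loggerName);
        Level parsed = Level.parse(level.toUpperCase());
        logger.setLevel(parsed);
        return logger.getLevel();
    }

    public static void main(String[] args) {
        // Turn on debug-like output for one component without touching any config file.
        System.out.println(setLevel("org.example.connector", "FINE"));
    }
}
```

The same idea applies to log4j, which exposes `Configurator`-style APIs for runtime level changes; the REST layer only needs to validate the level string and delegate.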
[jira] [Commented] (FLINK-10672) Task stuck while writing output to flink
[ https://issues.apache.org/jira/browse/FLINK-10672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17097696#comment-17097696 ] Kyle Weaver commented on FLINK-10672: - Hi Yun, sorry I haven't had much time to look at this lately. If you still have the test environment available, you can send info to ib...@apache.org. > Task stuck while writing output to flink > > > Key: FLINK-10672 > URL: https://issues.apache.org/jira/browse/FLINK-10672 > Project: Flink > Issue Type: Bug > Components: Runtime / Coordination >Affects Versions: 1.5.4 > Environment: OS: Debuan rodente 4.17 > Flink version: 1.5.4 > ||Key||Value|| > |jobmanager.heap.mb|1024| > |jobmanager.rpc.address|localhost| > |jobmanager.rpc.port|6123| > |metrics.reporter.jmx.class|org.apache.flink.metrics.jmx.JMXReporter| > |metrics.reporter.jmx.port|9250-9260| > |metrics.reporters|jmx| > |parallelism.default|1| > |rest.port|8081| > |taskmanager.heap.mb|1024| > |taskmanager.numberOfTaskSlots|1| > |web.tmpdir|/tmp/flink-web-bdb73d6c-5b9e-47b5-9ebf-eed0a7c82c26| > > h1. Overview > ||Data Port||All Slots||Free Slots||CPU Cores||Physical Memory||JVM Heap > Size||Flink Managed Memory|| > |43501|1|0|12|62.9 GB|922 MB|642 MB| > h1. Memory > h2. JVM (Heap/Non-Heap) > ||Type||Committed||Used||Maximum|| > |Heap|922 MB|575 MB|922 MB| > |Non-Heap|68.8 MB|64.3 MB|-1 B| > |Total|991 MB|639 MB|922 MB| > h2. Outside JVM > ||Type||Count||Used||Capacity|| > |Direct|3,292|105 MB|105 MB| > |Mapped|0|0 B|0 B| > h1. Network > h2. Memory Segments > ||Type||Count|| > |Available|3,194| > |Total|3,278| > h1. 
Garbage Collection > ||Collector||Count||Time|| > |G1_Young_Generation|13|336| > |G1_Old_Generation|1|21| >Reporter: Ankur Goenka >Assignee: Yun Gao >Priority: Major > Labels: beam > Attachments: 0.14_all_jobs.jpg, 1uruvakHxBu.png, 3aDKQ24WvKk.png, > Po89UGDn58V.png, WithBroadcastJob.png, jmx_dump.json, jmx_dump_detailed.json, > jstack_129827.log, jstack_163822.log, jstack_66985.log > > > I am running a fairly complex pipeline with 200+ tasks. > The pipeline works fine with small data (order of 10kb input) but gets stuck > with slightly larger data (300kb input). > > The task gets stuck while writing the output to Flink; more specifically, it > gets stuck while requesting a memory segment in the local buffer pool. The Task > Manager UI shows that it has enough memory and memory segments to work with. > The relevant stack trace is: > {quote}"grpc-default-executor-0" #138 daemon prio=5 os_prio=0 > tid=0x7fedb0163800 nid=0x30b7f in Object.wait() [0x7fedb4f9] > java.lang.Thread.State: TIMED_WAITING (on object monitor) > at (C/C++) 0x7fef201c7dae (Unknown Source) > at (C/C++) 0x7fef1f2aea07 (Unknown Source) > at (C/C++) 0x7fef1f241cd3 (Unknown Source) > at java.lang.Object.wait(Native Method) > - waiting on <0xf6d56450> (a java.util.ArrayDeque) > at > org.apache.flink.runtime.io.network.buffer.LocalBufferPool.requestMemorySegment(LocalBufferPool.java:247) > - locked <0xf6d56450> (a java.util.ArrayDeque) > at > org.apache.flink.runtime.io.network.buffer.LocalBufferPool.requestBufferBuilderBlocking(LocalBufferPool.java:204) > at > org.apache.flink.runtime.io.network.api.writer.RecordWriter.requestNewBufferBuilder(RecordWriter.java:213) > at > org.apache.flink.runtime.io.network.api.writer.RecordWriter.sendToTarget(RecordWriter.java:144) > at > org.apache.flink.runtime.io.network.api.writer.RecordWriter.emit(RecordWriter.java:107) > at > org.apache.flink.runtime.operators.shipping.OutputCollector.collect(OutputCollector.java:65) > at > 
org.apache.flink.runtime.operators.util.metrics.CountingCollector.collect(CountingCollector.java:35) > at > org.apache.beam.runners.flink.translation.functions.FlinkExecutableStagePruningFunction.flatMap(FlinkExecutableStagePruningFunction.java:42) > at > org.apache.beam.runners.flink.translation.functions.FlinkExecutableStagePruningFunction.flatMap(FlinkExecutableStagePruningFunction.java:26) > at > org.apache.flink.runtime.operators.chaining.ChainedFlatMapDriver.collect(ChainedFlatMapDriver.java:80) > at > org.apache.flink.runtime.operators.util.metrics.CountingCollector.collect(CountingCollector.java:35) > at > org.apache.beam.runners.flink.translation.functions.FlinkExecutableStageFunction$MyDataReceiver.accept(FlinkExecutableStageFunction.java:230) > - locked <0xf6a60bd0> (a java.lang.Object) > at > org.apache.beam.sdk.fn.data.BeamFnDataInboundObserver.accept(BeamFnDataInboundObserver.java:81) > at > org.apache.beam.sdk.fn.data.BeamFnDataInboundObserver.accept(BeamFnDataInboundObserver.java:32) > at > org.apache.beam.sdk.fn.data.BeamFnDataGrpcMultiplexer$InboundObserver.onNext(BeamFnDataGrpcMultiplexer.j
[GitHub] [flink] edu05 commented on pull request #11952: [FLINK-16638][runtime][checkpointing] Flink checkStateMappingCompleteness doesn't include UserDefinedOperatorIDs
edu05 commented on pull request #11952: URL: https://github.com/apache/flink/pull/11952#issuecomment-622582457 @flinkbot run azure This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot edited a comment on pull request #11973: [FLINK-17488] Now JdbcConnectionOptions supports autoCommit control
flinkbot edited a comment on pull request #11973: URL: https://github.com/apache/flink/pull/11973#issuecomment-622555129 ## CI report: * ccb23fef7822524883d71a604d4ef9e673307bc7 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=527) * 0a189aac6198b16763d5784f423b793c0e22d4b0 UNKNOWN Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build - `@flinkbot run azure` re-run the last Azure build
[GitHub] [flink] flinkbot commented on pull request #11973: [FLINK-17488] Now JdbcConnectionOptions supports autoCommit control
flinkbot commented on pull request #11973: URL: https://github.com/apache/flink/pull/11973#issuecomment-622555129 ## CI report: * ccb23fef7822524883d71a604d4ef9e673307bc7 UNKNOWN
[GitHub] [flink] flinkbot commented on pull request #11973: [FLINK-17488] Now JdbcConnectionOptions supports autoCommit control
flinkbot commented on pull request #11973: URL: https://github.com/apache/flink/pull/11973#issuecomment-622550880 Thanks a lot for your contribution to the Apache Flink project. I'm the @flinkbot. I help the community to review your pull request. We will use this comment to track the progress of the review. ## Automated Checks Last check on commit ccb23fef7822524883d71a604d4ef9e673307bc7 (Fri May 01 20:21:41 UTC 2020) **Warnings:** * No documentation files were touched! Remember to keep the Flink docs up to date! * **This pull request references an unassigned [Jira ticket](https://issues.apache.org/jira/browse/FLINK-17488).** According to the [code contribution guide](https://flink.apache.org/contributing/contribute-code.html), tickets need to be assigned before starting with the implementation work. Mention the bot in a comment to re-run the automated checks. ## Review Progress * ❓ 1. The [description] looks good. * ❓ 2. There is [consensus] that the contribution should go into Flink. * ❓ 3. Needs [attention] from. * ❓ 4. The change fits into the overall [architecture]. * ❓ 5. Overall code [quality] is good. Please see the [Pull Request Review Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full explanation of the review process. The Bot is tracking the review progress through labels. Labels are applied according to the order of the review items. 
For consensus, approval by a Flink committer or PMC member is required Bot commands The @flinkbot bot supports the following commands: - `@flinkbot approve description` to approve one or more aspects (aspects: `description`, `consensus`, `architecture` and `quality`) - `@flinkbot approve all` to approve all aspects - `@flinkbot approve-until architecture` to approve everything until `architecture` - `@flinkbot attention @username1 [@username2 ..]` to require somebody's attention - `@flinkbot disapprove architecture` to remove an approval you gave earlier
[GitHub] [flink] pavel-hp opened a new pull request #11973: [FLINK-17488] Now JdbcConnectionOptions supports autoCommit control
pavel-hp opened a new pull request #11973: URL: https://github.com/apache/flink/pull/11973 - Now JdbcConnectionOptions supports autoCommit control Default behaviour hasn't changed. - Added logging information to be able to track batches ## What is the purpose of the change *(For example: This pull request makes task deployment go through the blob server, rather than through RPC. That way we avoid re-transferring them on each deployment (during recovery).)* ## Brief change log *(for example:)* - *The TaskInfo is stored in the blob store on job creation time as a persistent artifact* - *Deployments RPC transmits only the blob storage reference* - *TaskManagers retrieve the TaskInfo from the blob cache* ## Verifying this change *(Please pick either of the following options)* This change is a trivial rework / code cleanup without any test coverage. *(or)* This change is already covered by existing tests, such as *(please describe tests)*. *(or)* This change added tests and can be verified as follows: *(example:)* - *Added integration tests for end-to-end deployment with large payloads (100MB)* - *Extended integration test for recovery after master (JobManager) failure* - *Added test that validates that TaskInfo is transferred only once across recoveries* - *Manually verified the change by running a 4 node cluser with 2 JobManagers and 4 TaskManagers, a stateful streaming program, and killing one JobManager and two TaskManagers during the execution, verifying that recovery happens correctly.* ## Does this pull request potentially affect one of the following parts: - Dependencies (does it add or upgrade a dependency): (yes / no) - The public API, i.e., is any changed class annotated with `@Public(Evolving)`: (yes / no) - The serializers: (yes / no / don't know) - The runtime per-record code paths (performance sensitive): (yes / no / don't know) - Anything that affects deployment or recovery: JobManager (and its components), Checkpointing, Kubernetes/Yarn/Mesos, 
ZooKeeper: (yes / no / don't know) - The S3 file system connector: (yes / no / don't know) ## Documentation - Does this pull request introduce a new feature? (yes / no) - If yes, how is the feature documented? (not applicable / docs / JavaDocs / not documented)
[jira] [Updated] (FLINK-17488) JdbcSink has to support setting autoCommit mode of DB
[ https://issues.apache.org/jira/browse/FLINK-17488?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated FLINK-17488: --- Labels: pull-request-available (was: ) > JdbcSink has to support setting autoCommit mode of DB > - > > Key: FLINK-17488 > URL: https://issues.apache.org/jira/browse/FLINK-17488 > Project: Flink > Issue Type: Improvement > Components: Connectors / JDBC >Affects Versions: 1.10.0 >Reporter: Khokhlov Pavel >Priority: Major > Labels: pull-request-available > > Just played with the new > {noformat} > org.apache.flink.api.java.io.jdbc.JdbcSink{noformat} > ({{1.11-SNAPSHOT}}) > [https://ci.apache.org/projects/flink/flink-docs-master/dev/connectors/jdbc.html] > and batch mode with the MySQL driver (8.0.19). > Noticed that *JdbcSink* supports only *autoCommit true*, and the developer cannot > change that behaviour. But it is very important, from a transactional and > performance point of view, to support autoCommit *false* and call commit explicitly. > When a connection is created, it is in auto-commit mode. This means that > each individual SQL statement is treated as a transaction and is > automatically committed right after it is executed. > For example, the Confluent connector disables it by default: > [https://github.com/confluentinc/kafka-connect-jdbc/blob/da9619af1d7442dd91793dbc4dc65b8e7414e7b5/src/main/java/io/confluent/connect/jdbc/sink/JdbcDbWriter.java#L50] > > As I see, you added it only for JDBCInputFormat in FLINK-12198.
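The transactional pattern the ticket asks for, one commit per batch instead of one implicit commit per statement, looks roughly like this in plain JDBC. This is a sketch of the pattern, not the actual JdbcSink change; `flushBatch` is a hypothetical helper name, and the connection is assumed to come from whatever pool or driver the sink uses.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.List;

class ExplicitCommitSketch {
    // Hypothetical helper: writes one batch of rows inside a single transaction.
    static void flushBatch(Connection conn, String sql, List<Object[]> rows) throws SQLException {
        conn.setAutoCommit(false); // one transaction per batch, not per statement
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            for (Object[] row : rows) {
                for (int i = 0; i < row.length; i++) {
                    ps.setObject(i + 1, row[i]);
                }
                ps.addBatch();
            }
            ps.executeBatch();
            conn.commit();     // the whole batch becomes visible atomically
        } catch (SQLException e) {
            conn.rollback();   // undo the partial batch on failure
            throw e;
        }
    }
}
```

With auto-commit enabled (the current JdbcSink behaviour), every statement in the loop would be its own transaction, so a mid-batch failure leaves a partially written batch that cannot be rolled back.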
[GitHub] [flink] flinkbot edited a comment on pull request #11972: [FLINK-17058] Adding ProcessingTimeoutTrigger of nested triggers.
flinkbot edited a comment on pull request #11972: URL: https://github.com/apache/flink/pull/11972#issuecomment-622529341 ## CI report: * c06b5ebbdbe753d44ce2111dfde996ff9e0868f5 Travis: [FAILURE](https://travis-ci.com/github/flink-ci/flink/builds/163266345) Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=525)
[GitHub] [flink] flinkbot edited a comment on pull request #11971: [FLINK-17271] Translate new DataStream API tutorial
flinkbot edited a comment on pull request #11971: URL: https://github.com/apache/flink/pull/11971#issuecomment-622521983 ## CI report: * f4082786bad837955dea7f68fce466c6b7719912 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=524)
[GitHub] [flink] flinkbot commented on pull request #11972: [FLINK-17058] Adding ProcessingTimeoutTrigger of nested triggers.
flinkbot commented on pull request #11972: URL: https://github.com/apache/flink/pull/11972#issuecomment-622529341 ## CI report: * c06b5ebbdbe753d44ce2111dfde996ff9e0868f5 UNKNOWN
[jira] [Commented] (FLINK-17058) Adding TimeoutTrigger support nested triggers
[ https://issues.apache.org/jira/browse/FLINK-17058?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17097624#comment-17097624 ] Roey Shem Tov commented on FLINK-17058: --- [~aljoscha] sorry for the late response; it took me a while to understand Flink's test mechanism (by the way, great test infrastructure). I opened a pull request with the tests, although I changed a few things relative to what we talked about: # onProcessingTime calls the nested trigger's onProcessingTime but returns TriggerResult.FIRE, because when the processing timer fires, the window should FIRE. # The onElement method checks the TriggerResult of the nested trigger; if the nested trigger returns any fire result, we clear the state (because the timeout should be reset). # Added a new flag, shouldClearAtTimeout, which controls whether the nested trigger is cleared when the timeout fires. For example, with ProcessingTimeoutTrigger.of(CountTrigger.of(4)) and a timeout of 10 seconds, if only 3 records arrived after 10 seconds, should emitting the window reset the record counter to zero, or keep it at 3? Please let me know if anything is missing. > Adding TimeoutTrigger support nested triggers > - > > Key: FLINK-17058 > URL: https://issues.apache.org/jira/browse/FLINK-17058 > Project: Flink > Issue Type: Improvement > Components: API / DataStream >Reporter: Roey Shem Tov >Assignee: Roey Shem Tov >Priority: Minor > Labels: pull-request-available > Attachments: ProcessingTimeoutTrigger.java, > ProcessingTimeoutTrigger.java > > > Hello, > first Jira ticket I'm opening here, so if there are any mistakes in how I'm doing it, please correct me. > My suggestion is to add a TimeoutTrigger that wraps a nested trigger (as PurgingTrigger does). 
> The TimeoutTrigger will support ProcessingTime/EventTime and will have 2 timeouts: > # Constant timeout - when the first element of the window arrives, it opens a timeout of X millis; after that, the window will be evaluated. > # Continual timeout - each arriving record extends the timeout for the evaluation of the window. > > I found it very useful in our use of Flink, and I would like to work on it (if possible). > What do you think?
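The behaviour described in the comment above, fire when the nested trigger fires or when a timeout measured from the first element elapses, then reset, can be sketched in plain Java. This is a conceptual model only, not the Flink `Trigger` API; all names are illustrative, and the clock is injected so the timeout logic stays testable.

```java
import java.util.function.LongSupplier;

// Conceptual sketch of the proposed wrapper: fires when the nested condition
// fires, or when a timeout measured from the first element elapses.
class TimeoutTriggerSketch {
    interface Condition { boolean onElement(); }

    private final Condition nested;
    private final long timeoutMillis;
    private final LongSupplier clock;   // injected for testability
    private long firstElementTime = -1; // -1 means "no timeout currently open"

    TimeoutTriggerSketch(Condition nested, long timeoutMillis, LongSupplier clock) {
        this.nested = nested;
        this.timeoutMillis = timeoutMillis;
        this.clock = clock;
    }

    /** Returns true if the window should fire for this element. */
    boolean onElement() {
        long now = clock.getAsLong();
        if (firstElementTime < 0) {
            firstElementTime = now; // the first element opens the timeout
        }
        // Note: the nested trigger's own state is NOT reset on a timeout fire
        // here, which corresponds to shouldClearAtTimeout = false above.
        boolean fire = nested.onElement() || now - firstElementTime >= timeoutMillis;
        if (fire) {
            firstElementTime = -1;  // firing resets the timeout
        }
        return fire;
    }
}
```

A count-of-N condition wrapped this way fires either when N elements arrive or when the timeout since the first element expires, whichever happens first.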
[jira] [Updated] (FLINK-17289) Translate tutorials/etl.md to chinese
[ https://issues.apache.org/jira/browse/FLINK-17289?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] David Anderson updated FLINK-17289: --- Description: This is one of the new tutorials, and it needs translation. The file is docs/training/etl.zh.md. (was: This is one of the new tutorials, and it needs translation. The file is docs/tutorials/etl.zh.md.) > Translate tutorials/etl.md to chinese > - > > Key: FLINK-17289 > URL: https://issues.apache.org/jira/browse/FLINK-17289 > Project: Flink > Issue Type: Improvement > Components: chinese-translation, Documentation / Training >Reporter: David Anderson >Assignee: Li Ying >Priority: Major > > This is one of the new tutorials, and it needs translation. The file is > docs/training/etl.zh.md.
[jira] [Updated] (FLINK-17291) Translate tutorial on event-driven applications to chinese
[ https://issues.apache.org/jira/browse/FLINK-17291?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] David Anderson updated FLINK-17291: --- Description: Translate docs/training/event_driven.zh.md to Chinese. (was: Translate docs/tutorials/event_driven.zh.md to Chinese.) > Translate tutorial on event-driven applications to chinese > -- > > Key: FLINK-17291 > URL: https://issues.apache.org/jira/browse/FLINK-17291 > Project: Flink > Issue Type: Improvement > Components: chinese-translation, Documentation / Training >Reporter: David Anderson >Assignee: RocMarshal >Priority: Major > > Translate docs/training/event_driven.zh.md to Chinese.
[jira] [Updated] (FLINK-17271) Translate new DataStream API tutorial
[ https://issues.apache.org/jira/browse/FLINK-17271?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] David Anderson updated FLINK-17271: --- Description: docs/training/datastream_api.zh.md needs to be translated. (was: docs/tutorials/datastream_api.zh.md needs to be translated.) > Translate new DataStream API tutorial > - > > Key: FLINK-17271 > URL: https://issues.apache.org/jira/browse/FLINK-17271 > Project: Flink > Issue Type: Improvement > Components: chinese-translation, Documentation / Training >Reporter: David Anderson >Assignee: Bai Xu >Priority: Major > Labels: pull-request-available > > docs/training/datastream_api.zh.md needs to be translated.
[jira] [Updated] (FLINK-17290) Translate Streaming Analytics tutorial to chinese
[ https://issues.apache.org/jira/browse/FLINK-17290?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] David Anderson updated FLINK-17290: --- Description: The file to be translated is docs/training/streaming-analytics.zh.md. The content covers event time, watermarks, and windowing. (was: The file to be translated is docs/tutorials/streaming-analytics.zh.md. The content covers event time, watermarks, and windowing.) > Translate Streaming Analytics tutorial to chinese > - > > Key: FLINK-17290 > URL: https://issues.apache.org/jira/browse/FLINK-17290 > Project: Flink > Issue Type: Improvement > Components: chinese-translation, Documentation / Training >Reporter: David Anderson >Assignee: Herman, Li >Priority: Major > > The file to be translated is docs/training/streaming-analytics.zh.md. The > content covers event time, watermarks, and windowing.
[jira] [Updated] (FLINK-17292) Translate Fault Tolerance tutorial to Chinese
[ https://issues.apache.org/jira/browse/FLINK-17292?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] David Anderson updated FLINK-17292: --- Description: This ticket is about translating the new tutorial in docs/training/fault_tolerance.zh.md. (was: This ticket is about translating the new tutorial in docs/tutorials/fault_tolerance.zh.md.) > Translate Fault Tolerance tutorial to Chinese > - > > Key: FLINK-17292 > URL: https://issues.apache.org/jira/browse/FLINK-17292 > Project: Flink > Issue Type: Improvement > Components: chinese-translation, Documentation / Training >Reporter: David Anderson >Priority: Major > > This ticket is about translating the new tutorial in > docs/training/fault_tolerance.zh.md.
[jira] [Commented] (FLINK-17292) Translate Fault Tolerance tutorial to Chinese
[ https://issues.apache.org/jira/browse/FLINK-17292?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17097618#comment-17097618 ] David Anderson commented on FLINK-17292: Yes, [~xbaith], that's right. > Translate Fault Tolerance tutorial to Chinese > - > > Key: FLINK-17292 > URL: https://issues.apache.org/jira/browse/FLINK-17292 > Project: Flink > Issue Type: Improvement > Components: chinese-translation, Documentation / Training >Reporter: David Anderson >Priority: Major > > This ticket is about translating the new tutorial in > docs/training/fault_tolerance.zh.md.
[GitHub] [flink] flinkbot commented on pull request #11972: [FLINK-17058] Adding ProcessingTimeoutTrigger of nested triggers.
flinkbot commented on pull request #11972: URL: https://github.com/apache/flink/pull/11972#issuecomment-622522199 Thanks a lot for your contribution to the Apache Flink project. I'm the @flinkbot. I help the community to review your pull request. We will use this comment to track the progress of the review. ## Automated Checks Last check on commit 47f954acdbf8cecd643620a63bfa265a5e210272 (Fri May 01 19:10:31 UTC 2020) **Warnings:** * No documentation files were touched! Remember to keep the Flink docs up to date! Mention the bot in a comment to re-run the automated checks. ## Review Progress * ❓ 1. The [description] looks good. * ❓ 2. There is [consensus] that the contribution should go into Flink. * ❓ 3. Needs [attention] from. * ❓ 4. The change fits into the overall [architecture]. * ❓ 5. Overall code [quality] is good. Please see the [Pull Request Review Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full explanation of the review process. The Bot is tracking the review progress through labels. Labels are applied according to the order of the review items. For consensus, approval by a Flink committer or PMC member is required
[GitHub] [flink] flinkbot commented on pull request #11971: [FLINK-17271] Translate new DataStream API tutorial
flinkbot commented on pull request #11971: URL: https://github.com/apache/flink/pull/11971#issuecomment-622521983 ## CI report: * f4082786bad837955dea7f68fce466c6b7719912 UNKNOWN
[GitHub] [flink] flinkbot edited a comment on pull request #11952: [FLINK-16638][runtime][checkpointing] Flink checkStateMappingCompleteness doesn't include UserDefinedOperatorIDs
flinkbot edited a comment on pull request #11952: URL: https://github.com/apache/flink/pull/11952#issuecomment-621589853 ## CI report: * 2113227fdc0076eb9e2f72ae7883c2d5d245cbab Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=523)
[GitHub] [flink] roeyshemtov opened a new pull request #11972: [FLINK-17058] Adding ProcessingTimeoutTrigger of nested triggers.
roeyshemtov opened a new pull request #11972: URL: https://github.com/apache/flink/pull/11972 ## What is the purpose of the change Adds the new ProcessingTimeoutTrigger feature, as described in the FLINK-17058 Jira ticket. ## Verifying this change Added unit tests for the new code. ## Does this pull request potentially affect one of the following parts: It does not affect any of the listed parts. ## Documentation - Does this pull request introduce a new feature? yes - If yes, how is the feature documented? Not documented yet; where should I document it?
[jira] [Comment Edited] (FLINK-17292) Translate Fault Tolerance tutorial to Chinese
[ https://issues.apache.org/jira/browse/FLINK-17292?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17097611#comment-17097611 ] Bai Xu edited comment on FLINK-17292 at 5/1/20, 7:02 PM: - Hi [~alpinegizmo]. It's me again. Let me drive this simple ticket. And maybe the URL of the training document is "docs/training/fault_tolerance.zh.md"? Thank you :) was (Author: xbaith): Hi [~alpinegizmo]. It's me again. Let me drive this simple ticket. And maybe the URL of the training document is "docs/training/fault_tolerance.zh.md"? Thank you :) > Translate Fault Tolerance tutorial to Chinese > - > > Key: FLINK-17292 > URL: https://issues.apache.org/jira/browse/FLINK-17292 > Project: Flink > Issue Type: Improvement > Components: chinese-translation, Documentation / Training >Reporter: David Anderson >Priority: Major > > This ticket is about translating the new tutorial in > docs/tutorials/fault_tolerance.zh.md.
[jira] [Commented] (FLINK-17292) Translate Fault Tolerance tutorial to Chinese
[ https://issues.apache.org/jira/browse/FLINK-17292?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17097611#comment-17097611 ] Bai Xu commented on FLINK-17292: Hi [~alpinegizmo]. It's me again. Let me drive this simple ticket. And maybe the URL of the training document is "docs/training/fault_tolerance.zh.md"? Thank you :) > Translate Fault Tolerance tutorial to Chinese > - > > Key: FLINK-17292 > URL: https://issues.apache.org/jira/browse/FLINK-17292 > Project: Flink > Issue Type: Improvement > Components: chinese-translation, Documentation / Training >Reporter: David Anderson >Priority: Major > > This ticket is about translating the new tutorial in > docs/tutorials/fault_tolerance.zh.md.
[jira] [Updated] (FLINK-17058) Adding TimeoutTrigger support nested triggers
[ https://issues.apache.org/jira/browse/FLINK-17058?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated FLINK-17058: --- Labels: pull-request-available (was: ) > Adding TimeoutTrigger support nested triggers > - > > Key: FLINK-17058 > URL: https://issues.apache.org/jira/browse/FLINK-17058 > Project: Flink > Issue Type: Improvement > Components: API / DataStream >Reporter: Roey Shem Tov >Assignee: Roey Shem Tov >Priority: Minor > Labels: pull-request-available > Attachments: ProcessingTimeoutTrigger.java, > ProcessingTimeoutTrigger.java > > > Hello, > first Jira ticket I'm opening here, so if there are any mistakes in how I'm doing it, please correct me. > My suggestion is to add a TimeoutTrigger that wraps a nested trigger (as PurgingTrigger does). > The TimeoutTrigger will support ProcessingTime/EventTime and will have 2 timeouts: > # Constant timeout - when the first element of the window arrives, it opens a timeout of X millis; after that, the window will be evaluated. > # Continual timeout - each arriving record extends the timeout for the evaluation of the window. > > I found it very useful in our use of Flink, and I would like to work on it (if possible). > What do you think?
[jira] [Comment Edited] (FLINK-17271) Translate new DataStream API tutorial
[ https://issues.apache.org/jira/browse/FLINK-17271?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17097585#comment-17097585 ] Bai Xu edited comment on FLINK-17271 at 5/1/20, 6:39 PM: - [~alpinegizmo] Thanks for your patience! I have finished this translation; please check it out if you are willing, or notify other committers who could help. was (Author: xbaith): [~alpinegizmo] Thanks for your patient! I have finished this translation and please check it out if you are willing or notify other committers who could help. > Translate new DataStream API tutorial > - > > Key: FLINK-17271 > URL: https://issues.apache.org/jira/browse/FLINK-17271 > Project: Flink > Issue Type: Improvement > Components: chinese-translation, Documentation / Training >Reporter: David Anderson >Assignee: Bai Xu >Priority: Major > Labels: pull-request-available > > docs/tutorials/datastream_api.zh.md needs to be translated.
[jira] [Commented] (FLINK-17404) Running HA per-job cluster (rocks, non-incremental) gets stuck killing a non-existing pid
[ https://issues.apache.org/jira/browse/FLINK-17404?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17097587#comment-17097587 ] Kostas Kloudas commented on FLINK-17404: Thanks [~rmetzger] > Running HA per-job cluster (rocks, non-incremental) gets stuck killing a > non-existing pid > - > > Key: FLINK-17404 > URL: https://issues.apache.org/jira/browse/FLINK-17404 > Project: Flink > Issue Type: Bug > Components: Test Infrastructure, Tests >Affects Versions: 1.11.0 >Reporter: Robert Metzger >Assignee: Robert Metzger >Priority: Critical > Labels: test-stability > > CI log: https://api.travis-ci.org/v3/job/678609505/log.txt > {code} > Waiting for text Completed checkpoint [1-9]* for job > to appear 2 of times in logs... > grep: > /home/travis/build/apache/flink/flink-dist/target/flink-1.11-SNAPSHOT-bin/flink-1.11-SNAPSHOT/log/*standalonejob-2*.log: > No such file or directory > grep: > /home/travis/build/apache/flink/flink-dist/target/flink-1.11-SNAPSHOT-bin/flink-1.11-SNAPSHOT/log/*standalonejob-2*.log: > No such file or directory > Starting standalonejob daemon on host > travis-job-e606668f-b674-49c0-8590-e3508e22b99d. > grep: > /home/travis/build/apache/flink/flink-dist/target/flink-1.11-SNAPSHOT-bin/flink-1.11-SNAPSHOT/log/*standalonejob-2*.log: > No such file or directory > grep: > /home/travis/build/apache/flink/flink-dist/target/flink-1.11-SNAPSHOT-bin/flink-1.11-SNAPSHOT/log/*standalonejob-2*.log: > No such file or directory > Killed TM @ 18864 > kill: usage: kill [-s sigspec | -n signum | -sigspec] pid | jobspec ... or > kill -l [sigspec] > Killed TM @ > No output has been received in the last 10m0s, this potentially indicates a > stalled build or something wrong with the build itself. 
> Check the details on how to adjust your build configuration on: > https://docs.travis-ci.com/user/common-build-problems/#build-times-out-because-no-output-was-received > The build has been terminated > {code} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (FLINK-17271) Translate new DataStream API tutorial
[ https://issues.apache.org/jira/browse/FLINK-17271?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17097585#comment-17097585 ] Bai Xu commented on FLINK-17271: [~alpinegizmo] Thanks for your patience! I have finished this translation; please check it out if you are willing, or notify other committers who could help. > Translate new DataStream API tutorial > - > > Key: FLINK-17271 > URL: https://issues.apache.org/jira/browse/FLINK-17271 > Project: Flink > Issue Type: Improvement > Components: chinese-translation, Documentation / Training >Reporter: David Anderson >Assignee: Bai Xu >Priority: Major > Labels: pull-request-available > > docs/tutorials/datastream_api.zh.md needs to be translated. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [flink] flinkbot commented on pull request #11971: [FLINK-17271] Translate new DataStream API tutorial
flinkbot commented on pull request #11971: URL: https://github.com/apache/flink/pull/11971#issuecomment-622508565 Thanks a lot for your contribution to the Apache Flink project. I'm the @flinkbot. I help the community to review your pull request. We will use this comment to track the progress of the review. ## Automated Checks Last check on commit f4082786bad837955dea7f68fce466c6b7719912 (Fri May 01 18:34:43 UTC 2020) ✅ no warnings Mention the bot in a comment to re-run the automated checks. ## Review Progress * ❓ 1. The [description] looks good. * ❓ 2. There is [consensus] that the contribution should go into Flink. * ❓ 3. Needs [attention] from. * ❓ 4. The change fits into the overall [architecture]. * ❓ 5. Overall code [quality] is good. Please see the [Pull Request Review Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full explanation of the review process. The Bot is tracking the review progress through labels. Labels are applied according to the order of the review items. For consensus, approval by a Flink committer or PMC member is required Bot commands The @flinkbot bot supports the following commands: - `@flinkbot approve description` to approve one or more aspects (aspects: `description`, `consensus`, `architecture` and `quality`) - `@flinkbot approve all` to approve all aspects - `@flinkbot approve-until architecture` to approve everything until `architecture` - `@flinkbot attention @username1 [@username2 ..]` to require somebody's attention - `@flinkbot disapprove architecture` to remove an approval you gave earlier This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Updated] (FLINK-17271) Translate new DataStream API tutorial
[ https://issues.apache.org/jira/browse/FLINK-17271?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated FLINK-17271: --- Labels: pull-request-available (was: ) > Translate new DataStream API tutorial > - > > Key: FLINK-17271 > URL: https://issues.apache.org/jira/browse/FLINK-17271 > Project: Flink > Issue Type: Improvement > Components: chinese-translation, Documentation / Training >Reporter: David Anderson >Assignee: Bai Xu >Priority: Major > Labels: pull-request-available > > docs/tutorials/datastream_api.zh.md needs to be translated. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [flink] XBaith opened a new pull request #11971: [FLINK-17271] Translate new DataStream API tutorial
XBaith opened a new pull request #11971: URL: https://github.com/apache/flink/pull/11971 ## What is the purpose of the change *This pull request translates the new "DataStream API tutorial" page into Chinese.* ## Verifying this change *This change is a docs work without any test coverage.* ## Does this pull request potentially affect one of the following parts: - Dependencies (does it add or upgrade a dependency): (no) - The public API, i.e., is any changed class annotated with `@Public(Evolving)`: (no) - The serializers: (no) - The runtime per-record code paths (performance sensitive): (no) - Anything that affects deployment or recovery: JobManager (and its components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: (no) - The S3 file system connector: (no) ## Documentation - Does this pull request introduce a new feature? (no) - If yes, how is the feature documented? (not applicable) This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot edited a comment on pull request #11952: [FLINK-16638][runtime][checkpointing] Flink checkStateMappingCompleteness doesn't include UserDefinedOperatorIDs
flinkbot edited a comment on pull request #11952: URL: https://github.com/apache/flink/pull/11952#issuecomment-621589853 ## CI report: * cb00ba58dee155aceb27bd4b8bab837a77265699 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=514) * 2113227fdc0076eb9e2f72ae7883c2d5d245cbab Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=523) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build - `@flinkbot run azure` re-run the last Azure build This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Assigned] (FLINK-17271) Translate new DataStream API tutorial
[ https://issues.apache.org/jira/browse/FLINK-17271?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] David Anderson reassigned FLINK-17271: -- Assignee: Bai Xu > Translate new DataStream API tutorial > - > > Key: FLINK-17271 > URL: https://issues.apache.org/jira/browse/FLINK-17271 > Project: Flink > Issue Type: Improvement > Components: chinese-translation, Documentation / Training >Reporter: David Anderson >Assignee: Bai Xu >Priority: Major > > docs/tutorials/datastream_api.zh.md needs to be translated. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [flink] flinkbot edited a comment on pull request #11936: [FLINK-17386][Security][hotfix] fix LinkageError not captured
flinkbot edited a comment on pull request #11936: URL: https://github.com/apache/flink/pull/11936#issuecomment-620683562 ## CI report: * a02867a0bce8f1a0174a6a3133b50b76167dbcc9 Travis: [SUCCESS](https://travis-ci.com/github/flink-ci/flink/builds/163208458) Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=520) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build - `@flinkbot run azure` re-run the last Azure build This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot edited a comment on pull request #11952: [FLINK-16638][runtime][checkpointing] Flink checkStateMappingCompleteness doesn't include UserDefinedOperatorIDs
flinkbot edited a comment on pull request #11952: URL: https://github.com/apache/flink/pull/11952#issuecomment-621589853 ## CI report: * cb00ba58dee155aceb27bd4b8bab837a77265699 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=514) * 2113227fdc0076eb9e2f72ae7883c2d5d245cbab UNKNOWN Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build - `@flinkbot run azure` re-run the last Azure build This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] edu05 commented on pull request #11952: [FLINK-16638][runtime][checkpointing] Flink checkStateMappingCompleteness doesn't include UserDefinedOperatorIDs
edu05 commented on pull request #11952: URL: https://github.com/apache/flink/pull/11952#issuecomment-622489226 @flinkbot run azure This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Updated] (FLINK-17488) JdbcSink has to support setting autoCommit mode of DB
[ https://issues.apache.org/jira/browse/FLINK-17488?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Khokhlov Pavel updated FLINK-17488: --- Priority: Major (was: Critical) > JdbcSink has to support setting autoCommit mode of DB > - > > Key: FLINK-17488 > URL: https://issues.apache.org/jira/browse/FLINK-17488 > Project: Flink > Issue Type: Improvement > Components: Connectors / JDBC >Affects Versions: 1.10.0 >Reporter: Khokhlov Pavel >Priority: Major > > Just played with the new > {noformat} > org.apache.flink.api.java.io.jdbc.JdbcSink{noformat} > ({{1.11-SNAPSHOT)}} > ([https://ci.apache.org/projects/flink/flink-docs-master/dev/connectors/jdbc.html]) > and batch mode with the mysql driver (8.0.19). > Noticed that *JdbcSink* supports only *autoCommit true*, and the developer cannot change that behaviour. But it's very important, from a transactional and performance point of view, to support autoCommit *false* and call commit explicitly. > When a connection is created, it is in auto-commit mode. This means that > each individual SQL statement is treated as a transaction and is > automatically committed right after it is executed. > For example, the Confluent connector disables it by default. > [https://github.com/confluentinc/kafka-connect-jdbc/blob/da9619af1d7442dd91793dbc4dc65b8e7414e7b5/src/main/java/io/confluent/connect/jdbc/sink/JdbcDbWriter.java#L50] > > As far as I can see, it was added only for JDBCInputFormat, in FLINK-12198. > -- This message was sent by Atlassian Jira (v8.3.4#803005)
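The transactional/performance point above can be illustrated without a real database: with autoCommit=true every statement is its own commit, while with autoCommit=false a whole batch shares one commit. The `CommitCountingWriter` below is a made-up stand-in for a JDBC connection, used only so the sketch runs without a driver.

```java
// Hedged model of autoCommit=true vs. explicit commits for a batched JDBC sink.
// CommitCountingWriter is hypothetical; it only counts commits to show the difference.
class CommitCountingWriter {
    int commits = 0;
    private final boolean autoCommit;
    private int buffered = 0;

    CommitCountingWriter(boolean autoCommit) {
        this.autoCommit = autoCommit;
    }

    /** Writes one record; with autoCommit, each statement commits immediately. */
    void write(String record) {
        buffered++;
        if (autoCommit) {
            commits++; // each statement is its own transaction
            buffered = 0;
        }
    }

    /** Flushes the batch; with autoCommit=false this is the only commit point. */
    void flush() {
        if (!autoCommit && buffered > 0) {
            commits++; // one transaction per batch
            buffered = 0;
        }
    }
}
```

With a real `java.sql.Connection` the equivalent would be `connection.setAutoCommit(false)` at setup and `connection.commit()` at each batch flush, which is exactly the knob the ticket asks JdbcSink to expose.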
[GitHub] [flink] flinkbot edited a comment on pull request #11844: [FLINK-17125][python][doc] Add a Usage Notes Page to Answer Common Questions Encountered by PyFlink Users
flinkbot edited a comment on pull request #11844: URL: https://github.com/apache/flink/pull/11844#issuecomment-617177959 ## CI report: * 135e33f23db360cbb1b4cbc4870e7640e78241cf Travis: [SUCCESS](https://travis-ci.com/github/flink-ci/flink/builds/163211195) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build - `@flinkbot run azure` re-run the last Azure build This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot edited a comment on pull request #11687: [FLINK-16536][network][checkpointing] Implement InputChannel state recovery for unaligned checkpoint
flinkbot edited a comment on pull request #11687: URL: https://github.com/apache/flink/pull/11687#issuecomment-611445542 ## CI report: * 5bad018f00f85a7359345187a12d7938aa510d25 UNKNOWN * d51ce7f47381d99b843278cd701dcff223761a0b UNKNOWN * cba096ae3d8eba4a0d39c64659f76ad10a62be27 UNKNOWN * 23818c4b30eb9714154080253a090fc63d983925 UNKNOWN * 9b6278f15a4fc65d78671a145305df2c7310cc6a UNKNOWN * 4f7399680818f5d29e917e17720e00900822a43d UNKNOWN * 44c8e741ef3e7c17492736d441369a56646b6713 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=498) * 25bf7d665d0b232621648b7d517376a60aab4311 UNKNOWN Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build - `@flinkbot run azure` re-run the last Azure build This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot edited a comment on pull request #11936: [FLINK-17386][Security][hotfix] fix LinkageError not captured
flinkbot edited a comment on pull request #11936: URL: https://github.com/apache/flink/pull/11936#issuecomment-620683562 ## CI report: * a02867a0bce8f1a0174a6a3133b50b76167dbcc9 Travis: [SUCCESS](https://travis-ci.com/github/flink-ci/flink/builds/163208458) Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=520) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build - `@flinkbot run azure` re-run the last Azure build This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Commented] (FLINK-17271) Translate new DataStream API tutorial
[ https://issues.apache.org/jira/browse/FLINK-17271?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17097473#comment-17097473 ] Bai Xu commented on FLINK-17271: Hi [~alpinegizmo], could you please assign this to me? Thank you! > Translate new DataStream API tutorial > - > > Key: FLINK-17271 > URL: https://issues.apache.org/jira/browse/FLINK-17271 > Project: Flink > Issue Type: Improvement > Components: chinese-translation, Documentation / Training >Reporter: David Anderson >Priority: Major > > docs/tutorials/datastream_api.zh.md needs to be translated. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (FLINK-17291) Translate tutorial on event-driven applications to chinese
[ https://issues.apache.org/jira/browse/FLINK-17291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17097467#comment-17097467 ] RocMarshal commented on FLINK-17291: [~alpinegizmo] I will translate the "docs/training/event_driven.zh.md" located in the new directory structure of the master branch into Chinese. Thank you for your trust. > Translate tutorial on event-driven applications to chinese > -- > > Key: FLINK-17291 > URL: https://issues.apache.org/jira/browse/FLINK-17291 > Project: Flink > Issue Type: Improvement > Components: chinese-translation, Documentation / Training >Reporter: David Anderson >Assignee: RocMarshal >Priority: Major > > Translate docs/tutorials/event_driven.zh.md to Chinese. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [flink] curcur commented on a change in pull request #11725: [FLINK-15670][API] Provide a Kafka Source/Sink pair as KafkaShuffle
curcur commented on a change in pull request #11725: URL: https://github.com/apache/flink/pull/11725#discussion_r418596170 ## File path: flink-connectors/flink-connector-kafka/src/main/java/org/apache/flink/streaming/connectors/kafka/FlinkKafkaShuffleProducer.java ## @@ -0,0 +1,194 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + *http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.flink.streaming.connectors.kafka; + +import org.apache.flink.annotation.Internal; +import org.apache.flink.api.common.serialization.TypeInformationSerializationSchema; +import org.apache.flink.api.common.typeutils.TypeSerializer; +import org.apache.flink.api.java.functions.KeySelector; +import org.apache.flink.core.memory.DataOutputSerializer; +import org.apache.flink.runtime.state.KeyGroupRangeAssignment; +import org.apache.flink.streaming.api.watermark.Watermark; +import org.apache.flink.util.Preconditions; +import org.apache.flink.util.PropertiesUtil; + +import org.apache.kafka.clients.producer.ProducerRecord; + +import java.io.IOException; +import java.io.Serializable; +import java.util.Properties; + +import static org.apache.flink.streaming.connectors.kafka.FlinkKafkaShuffle.PARTITION_NUMBER; + +/** + * Flink Kafka Shuffle Producer Function. 
+ * It is different from {@link FlinkKafkaProducer} in the way it handles elements and watermarks + */ +@Internal +public class FlinkKafkaShuffleProducer<IN, KEY> extends FlinkKafkaProducer<IN> { + private final KafkaSerializer<IN> kafkaSerializer; + private final KeySelector<IN, KEY> keySelector; + private final int numberOfPartitions; + + FlinkKafkaShuffleProducer( + String defaultTopicId, + TypeInformationSerializationSchema<IN> schema, + Properties props, + KeySelector<IN, KEY> keySelector, + Semantic semantic, + int kafkaProducersPoolSize) { + super(defaultTopicId, (element, timestamp) -> null, props, semantic, kafkaProducersPoolSize); + + this.kafkaSerializer = new KafkaSerializer<>(schema.getSerializer()); + this.keySelector = keySelector; + + Preconditions.checkArgument( + props.getProperty(PARTITION_NUMBER) != null, + "Missing partition number for Kafka Shuffle"); + numberOfPartitions = PropertiesUtil.getInt(props, PARTITION_NUMBER, Integer.MIN_VALUE); + } + + /** +* This is the function invoked to handle each element. 
+* @param transaction transaction state; +*elements are written to Kafka in transactions to guarantee different levels of data consistency +* @param next element to handle +* @param context context needed to handle the element +* @throws FlinkKafkaException +*/ + @Override + public void invoke(KafkaTransactionState transaction, IN next, Context context) throws FlinkKafkaException { + checkErroneous(); + + // write timestamp to Kafka if timestamp is available + Long timestamp = context.timestamp(); + + int[] partitions = getPartitions(transaction); + int partitionIndex; + try { + partitionIndex = KeyGroupRangeAssignment + .assignKeyToParallelOperator(keySelector.getKey(next), partitions.length, partitions.length); + } catch (Exception e) { + throw new RuntimeException("Fail to assign a partition number to record"); + } + + ProducerRecord<byte[], byte[]> record = new ProducerRecord<>( + defaultTopicId, partitionIndex, timestamp, null, kafkaSerializer.serializeRecord(next, timestamp)); + pendingRecords.incrementAndGet(); + transaction.producer.send(record, callback); + } + + /** +* This is the function invoked to handle each watermark. +* @param transaction transaction state; +*watermarks are written to Kafka (if needed) in transactions +* @param watermark watermark to handle +* @throws FlinkKafkaException +*/ + @Over
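The `invoke` method in the diff above maps each record's key to a Kafka partition via `KeyGroupRangeAssignment.assignKeyToParallelOperator`. The arithmetic behind that call can be sketched in a simplified, self-contained form; note that real Flink additionally applies a murmur hash on top of `hashCode()`, which is omitted here to keep the sketch dependency-free.

```java
// Simplified sketch of Flink's key-group based assignment (KeyGroupRangeAssignment).
// Assumption: plain hashCode() stands in for Flink's murmur-hashed key-group computation.
class KeyGroupSketch {
    /** Maps a key into one of maxParallelism key groups. */
    static int assignToKeyGroup(Object key, int maxParallelism) {
        return Math.abs(key.hashCode() % maxParallelism);
    }

    /** Maps a key group to an operator (here: Kafka partition) index. */
    static int computeOperatorIndex(int maxParallelism, int parallelism, int keyGroup) {
        return keyGroup * parallelism / maxParallelism;
    }

    static int assignKeyToParallelOperator(Object key, int maxParallelism, int parallelism) {
        return computeOperatorIndex(maxParallelism, parallelism, assignToKeyGroup(key, maxParallelism));
    }
}
```

Since the PR passes `partitions.length` as both `maxParallelism` and `parallelism`, each key group maps one-to-one onto a partition index, which is what lets the downstream consumer recover the keyed partitioning.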
[GitHub] [flink] curcur commented on a change in pull request #11725: [FLINK-15670][API] Provide a Kafka Source/Sink pair as KafkaShuffle
curcur commented on a change in pull request #11725: URL: https://github.com/apache/flink/pull/11725#discussion_r418595393 ## File path: flink-streaming-java/src/main/java/org/apache/flink/streaming/api/datastream/DataStream.java ## @@ -1333,6 +1334,32 @@ public ExecutionConfig getExecutionConfig() { return sink; } + /** +* Adds a {@link StreamShuffleSink} to this DataStream. {@link StreamShuffleSink} is attached with +* {@link SinkFunction} that can manipulate watermarks. +* +* @param sinkFunction +* The object containing the sink's invoke function for both the element and watermark. +* @return The closed DataStream. +*/ + public DataStreamSink addSinkShuffle(SinkFunction sinkFunction) { Review comment: Mark as resolved since the entire API is re-organized. This method is wrapped inside `FlinkKafkaShuffle` This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] curcur commented on a change in pull request #11725: [FLINK-15670][API] Provide a Kafka Source/Sink pair as KafkaShuffle
curcur commented on a change in pull request #11725: URL: https://github.com/apache/flink/pull/11725#discussion_r418595542 ## File path: flink-streaming-java/src/main/java/org/apache/flink/streaming/api/transformations/SinkTransformation.java ## @@ -67,6 +68,22 @@ public SinkTransformation( this(input, name, SimpleOperatorFactory.of(operator), parallelism); } + /** +* Creates a new {@code SinkTransformation} from the given input {@code Transformation}. +* +* @param input The input {@code Transformation} +* @param name The name of the {@code Transformation}, this will be shown in Visualizations and the Log +* @param operator The sink shuffle operator +* @param parallelism The parallelism of this {@code SinkTransformation} +*/ + public SinkTransformation( Review comment: Mark as resolved since the entire API is re-organized. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] sjwiesman commented on pull request #11878: [FLINK-17125][python] Add a Usage Notes Page to Answer Common Questions Encountered by PyFlink Users
sjwiesman commented on pull request #11878: URL: https://github.com/apache/flink/pull/11878#issuecomment-622433861 Sure This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot edited a comment on pull request #11963: [FLINK-16408] Bind user code class loader to lifetime of Job on TaskExecutor
flinkbot edited a comment on pull request #11963: URL: https://github.com/apache/flink/pull/11963#issuecomment-621969588 ## CI report: * e07ec1bd9c216cdac0ac4faa79bc5509ab4129b6 UNKNOWN * 7401f5580774397bd121736b1068f03009a2d980 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=521) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build - `@flinkbot run azure` re-run the last Azure build This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot edited a comment on pull request #11969: [FLINK-17489][core] Support any kind of array in StringUtils.arrayAwareToString()
flinkbot edited a comment on pull request #11969: URL: https://github.com/apache/flink/pull/11969#issuecomment-622355748 ## CI report: * ddc6ff6bc782c78bae1688b8b61e51038e9c2fb9 Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=515) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build - `@flinkbot run azure` re-run the last Azure build This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot edited a comment on pull request #11844: [FLINK-17125][python][doc] Add a Usage Notes Page to Answer Common Questions Encountered by PyFlink Users
flinkbot edited a comment on pull request #11844: URL: https://github.com/apache/flink/pull/11844#issuecomment-617177959 ## CI report: * ebadc9001eb68754a95b3eee329b26c7590dfdfc Travis: [SUCCESS](https://travis-ci.com/github/flink-ci/flink/builds/163020392) * 135e33f23db360cbb1b4cbc4870e7640e78241cf Travis: [PENDING](https://travis-ci.com/github/flink-ci/flink/builds/163211195) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build - `@flinkbot run azure` re-run the last Azure build This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot edited a comment on pull request #11936: [FLINK-17386][Security][hotfix] fix LinkageError not captured
flinkbot edited a comment on pull request #11936: URL: https://github.com/apache/flink/pull/11936#issuecomment-620683562 ## CI report: * 1dcd163a3f869576f9504233e152a2e76baef51c Travis: [FAILURE](https://travis-ci.com/github/flink-ci/flink/builds/163136724) Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=510) * a02867a0bce8f1a0174a6a3133b50b76167dbcc9 Travis: [PENDING](https://travis-ci.com/github/flink-ci/flink/builds/163208458) Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=520)
[GitHub] [flink] flinkbot edited a comment on pull request #11963: [FLINK-16408] Bind user code class loader to lifetime of Job on TaskExecutor
flinkbot edited a comment on pull request #11963: URL: https://github.com/apache/flink/pull/11963#issuecomment-621969588 ## CI report: * e07ec1bd9c216cdac0ac4faa79bc5509ab4129b6 UNKNOWN * 7359ca7d6960774eb8294b49d915cbd1563644fe Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=497) * 7401f5580774397bd121736b1068f03009a2d980 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=521)
[GitHub] [flink] flinkbot edited a comment on pull request #11844: [FLINK-17125][python][doc] Add a Usage Notes Page to Answer Common Questions Encountered by PyFlink Users
flinkbot edited a comment on pull request #11844: URL: https://github.com/apache/flink/pull/11844#issuecomment-617177959 ## CI report: * ebadc9001eb68754a95b3eee329b26c7590dfdfc Travis: [SUCCESS](https://travis-ci.com/github/flink-ci/flink/builds/163020392) * 135e33f23db360cbb1b4cbc4870e7640e78241cf UNKNOWN
[GitHub] [flink] HuangXingBo commented on pull request #11878: [FLINK-17125][python] Add a Usage Notes Page to Answer Common Questions Encountered by PyFlink Users
HuangXingBo commented on pull request #11878: URL: https://github.com/apache/flink/pull/11878#issuecomment-622409672 Thanks a lot for the fix @sjwiesman. I have cherry-picked these commits to release-1.10 (https://github.com/apache/flink/pull/11844) and changed the content of `Adding Jar Files`, which differs between release-1.10 and master. Could you help review? Thanks.
[GitHub] [flink] flinkbot edited a comment on pull request #11936: [FLINK-17386][Security][hotfix] fix LinkageError not captured
flinkbot edited a comment on pull request #11936: URL: https://github.com/apache/flink/pull/11936#issuecomment-620683562 ## CI report: * 1dcd163a3f869576f9504233e152a2e76baef51c Travis: [FAILURE](https://travis-ci.com/github/flink-ci/flink/builds/163136724) Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=510) * a02867a0bce8f1a0174a6a3133b50b76167dbcc9 UNKNOWN
[GitHub] [flink] flinkbot edited a comment on pull request #11963: [FLINK-16408] Bind user code class loader to lifetime of Job on TaskExecutor
flinkbot edited a comment on pull request #11963: URL: https://github.com/apache/flink/pull/11963#issuecomment-621969588 ## CI report: * e07ec1bd9c216cdac0ac4faa79bc5509ab4129b6 UNKNOWN * 7359ca7d6960774eb8294b49d915cbd1563644fe Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=497) * 7401f5580774397bd121736b1068f03009a2d980 UNKNOWN
[GitHub] [flink] flinkbot edited a comment on pull request #11970: [FLINK-17483][legal] Update flink-sql-connector-elasticsearch7 NOTICE file to correctly reflect bundled dependencies
flinkbot edited a comment on pull request #11970: URL: https://github.com/apache/flink/pull/11970#issuecomment-622395974 ## CI report: * 151f32be272c47867647c3ef9877534c37887900 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=518)
[jira] [Commented] (FLINK-16868) Table/SQL doesn't support custom trigger
[ https://issues.apache.org/jira/browse/FLINK-16868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17097390#comment-17097390 ] Jark Wu commented on FLINK-16868: - This is a new feature, not a bug, so I removed the fix versions. > Table/SQL doesn't support custom trigger > > > Key: FLINK-16868 > URL: https://issues.apache.org/jira/browse/FLINK-16868 > Project: Flink > Issue Type: New Feature > Components: Table SQL / API, Table SQL / Runtime >Reporter: Jimmy Wong >Priority: Major > > Table/SQL doesn't support custom triggers, such as CountTrigger and > ContinuousEventTimeTrigger/ContinuousProcessingTimeTrigger. Do we have plans > to make it? > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (FLINK-16868) Table/SQL doesn't support custom trigger
[ https://issues.apache.org/jira/browse/FLINK-16868?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jark Wu updated FLINK-16868: Fix Version/s: (was: 1.9.4) (was: 1.10.2) (was: 1.11.0)
[jira] [Commented] (FLINK-17466) toRetractStream doesn't work correctly with Pojo conversion class
[ https://issues.apache.org/jira/browse/FLINK-17466?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17097389#comment-17097389 ] Jark Wu commented on FLINK-17466: - Yes. I checked in master and this is still another bug. Sad... > toRetractStream doesn't work correctly with Pojo conversion class > - > > Key: FLINK-17466 > URL: https://issues.apache.org/jira/browse/FLINK-17466 > Project: Flink > Issue Type: Bug > Components: Table SQL / Planner >Affects Versions: 1.10.0 >Reporter: Gyula Fora >Priority: Critical > Fix For: 1.11.0, 1.10.2 > > Attachments: retract-issue.patch > > > The toRetractStream(table, Pojo.class) does not map the query columns > properly to the pojo fields. > This either leads to exceptions due to type incompatibility or simply > incorrect results. > It can be simply reproduced by the following test code: > {code:java} > @Test > public void testRetract() throws Exception { > EnvironmentSettings settings = EnvironmentSettings > .newInstance() > .useBlinkPlanner() > .inStreamingMode() > .build(); > StreamExecutionEnvironment env = > StreamExecutionEnvironment.getExecutionEnvironment(); > StreamTableEnvironment tableEnv = StreamTableEnvironment > .create(StreamExecutionEnvironment.getExecutionEnvironment(), settings); > tableEnv.createTemporaryView("person", env.fromElements(new Person())); > tableEnv.toRetractStream(tableEnv.sqlQuery("select name, age from person"), > Person.class).print(); > tableEnv.execute("Test"); > } > public static class Person { > public String name = "bob"; > public int age = 1; > }{code} > Runtime Error: > {noformat} > java.lang.ClassCastException: org.apache.flink.table.dataformat.BinaryString > cannot be cast to java.lang.Integer{noformat} > Changing the query to "select age,name from person" in this case would > resolve the problem, but it also highlights the possible underlying issue.
[jira] [Updated] (FLINK-17466) toRetractStream doesn't work correctly with Pojo conversion class
[ https://issues.apache.org/jira/browse/FLINK-17466?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jark Wu updated FLINK-17466: Fix Version/s: 1.10.2 1.11.0
[jira] [Updated] (FLINK-16868) Table/SQL doesn't support custom trigger
[ https://issues.apache.org/jira/browse/FLINK-16868?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yu Li updated FLINK-16868: -- Fix Version/s: (was: 1.9.2) (was: 1.9.1) (was: 1.9.0) 1.9.4 1.10.2 1.11.0 The 1.9.0~1.9.2 versions have all been released. I guess this issue was observed in 1.9.x, so I am updating the fix version to unreleased versions. Feel free to correct me, but please make sure the fix version reflects the facts. Thanks.
[jira] [Updated] (FLINK-16612) Submit job through the rest api, job name will be lost
[ https://issues.apache.org/jira/browse/FLINK-16612?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yu Li updated FLINK-16612: -- Fix Version/s: (was: 1.9.2) 1.10.2 1.11.0 It's a little odd that the affected version is 1.10.0 while the fix version is 1.9.2; I guess it was a typo. Given that 1.9.2 is an already released version, I'm changing the fix version to 1.10.2 and 1.11.0. > Submit job through the rest api, job name will be lost > -- > > Key: FLINK-16612 > URL: https://issues.apache.org/jira/browse/FLINK-16612 > Project: Flink > Issue Type: Bug > Components: Client / Job Submission >Affects Versions: 1.10.0 > Environment: In flink1.10 >Reporter: Junli Zhang >Priority: Major > Labels: Client, JobName, RESTful > Fix For: 1.11.0, 1.10.2 > > Attachments: image-2020-03-16-18-04-59-891.png, > image-2020-03-16-18-06-05-051.png > > > Bug: When a job is submitted through the REST API, the job name is lost. > Reason: in the method OptimizerPlanEnvironment.executeAsync(String jobName) > > !image-2020-03-16-18-04-59-891.png! > > Fix: change to: this.pipeline = createProgramPlan(jobName); > !image-2020-03-16-18-06-05-051.png! >
[jira] [Closed] (FLINK-16901) Flink Kinesis connector NOTICE should have contents of AWS KPL's THIRD_PARTY_NOTICES file manually merged in
[ https://issues.apache.org/jira/browse/FLINK-16901?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yu Li closed FLINK-16901. - Resolution: Fixed Closing the issue since all work is done. > Flink Kinesis connector NOTICE should have contents of AWS KPL's > THIRD_PARTY_NOTICES file manually merged in > > > Key: FLINK-16901 > URL: https://issues.apache.org/jira/browse/FLINK-16901 > Project: Flink > Issue Type: Bug > Components: Connectors / Kinesis >Affects Versions: 1.10.0 >Reporter: Tzu-Li (Gordon) Tai >Assignee: Yu Li >Priority: Blocker > Labels: legal, pull-request-available > Fix For: 1.10.1, 1.11.0 > > > The Flink Kinesis connector artifact bundles AWS KPL's > [THIRD_PARTY_NOTICES|https://github.com/awslabs/amazon-kinesis-producer/blob/master/THIRD_PARTY_NOTICES] > file under the {{META-INF}} folder. > The contents of this should be manually merged into the artifact's own NOTICE > file, and the {{THIRD_PARTY_NOTICES}} file itself excluded. > --- > Since Stateful Functions' {{statefun-flink-distribution}} bundles the Flink > Kinesis connector, the {{THIRD_PARTY_NOTICES}} is bundled there as well. For > now, since we're already about to release Stateful Functions, we'll have to > apply these changes downstream in {{statefun-flink-distribution}}'s NOTICE > file. > Once this is fixed upstream in the Flink Kinesis connector's NOTICE file, we > can revert the changes in StateFun.
[jira] [Updated] (FLINK-16901) Flink Kinesis connector NOTICE should have contents of AWS KPL's THIRD_PARTY_NOTICES file manually merged in
[ https://issues.apache.org/jira/browse/FLINK-16901?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yu Li updated FLINK-16901: -- Issue Type: Bug (was: Improvement) Labels: legal pull-request-available (was: pull-request-available) Changed the issue type to `Bug` since the previous behavior didn't follow our [licensing policy|https://cwiki.apache.org/confluence/display/FLINK/Licensing].
[jira] [Updated] (FLINK-16901) Flink Kinesis connector NOTICE should have contents of AWS KPL's THIRD_PARTY_NOTICES file manually merged in
[ https://issues.apache.org/jira/browse/FLINK-16901?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yu Li updated FLINK-16901: -- Fix Version/s: 1.11.0 Checked and confirmed that the master branch has the same issue, and merged the fix into master via 261e72119b69c4fc3e22d9bcdec50f6ca2fdc2e9
[jira] [Created] (FLINK-17495) Add custom labels on PrometheusReporter like PrometheusPushGatewayReporter's groupingKey
jinhai created FLINK-17495: -- Summary: Add custom labels on PrometheusReporter like PrometheusPushGatewayReporter's groupingKey Key: FLINK-17495 URL: https://issues.apache.org/jira/browse/FLINK-17495 Project: Flink Issue Type: Improvement Components: Runtime / Metrics Reporter: jinhai We need to add some custom labels on Prometheus metrics so that we can query by them. As of version 1.10 we can configure jobName/groupingKey on PrometheusPushGatewayReporter, but not on PrometheusReporter. Can we add an AbstractPrometheusReporter#addDimension method to support this, so that there will be no differences between the two reporters except for the metrics exposing mechanism (pulling vs. pushing)?
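For reference, the push-gateway reporter's grouping keys are set in `flink-conf.yaml`. The sketch below shows that existing 1.10 option next to a hypothetical equivalent for the pull-based reporter — the `labels` key does not exist in Flink; it only illustrates what the requested support could look like:

```yaml
# Existing in Flink 1.10: PrometheusPushGatewayReporter with custom grouping keys
metrics.reporter.promgateway.class: org.apache.flink.metrics.prometheus.PrometheusPushGatewayReporter
metrics.reporter.promgateway.jobName: myJob
metrics.reporter.promgateway.groupingKey: env=prod;team=data

# Hypothetical: the same labels on the pull-based PrometheusReporter (not supported today)
metrics.reporter.prom.class: org.apache.flink.metrics.prometheus.PrometheusReporter
# metrics.reporter.prom.labels: env=prod;team=data   # illustrative only
```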
[GitHub] [flink] flinkbot commented on pull request #11970: [FLINK-17483][legal] Update flink-sql-connector-elasticsearch7 NOTICE file to correctly reflect bundled dependencies
flinkbot commented on pull request #11970: URL: https://github.com/apache/flink/pull/11970#issuecomment-622395974 ## CI report: * 151f32be272c47867647c3ef9877534c37887900 UNKNOWN
[GitHub] [flink] curcur commented on a change in pull request #11725: [FLINK-15670][API] Provide a Kafka Source/Sink pair as KafkaShuffle
curcur commented on a change in pull request #11725: URL: https://github.com/apache/flink/pull/11725#discussion_r418544339 ## File path: flink-streaming-java/src/main/java/org/apache/flink/streaming/api/functions/sink/SinkFunction.java ## @@ -52,6 +54,20 @@ default void invoke(IN value, Context context) throws Exception { invoke(value); } + /** +* This function is called for every watermark. +* +* You have to override this method when implementing a {@code SinkFunction} to handle watermark. +* This method has to be used together with {@link StreamShuffleSink} +* +* @param watermark The watermark to handle. +* @throws Exception This method may throw exceptions. Throwing an exception will cause the operation +* to fail and may trigger recovery. +*/ + default void invoke(Watermark watermark) throws Exception { Review comment: Mark as resolved since the entire API is re-organized. SinkFunction is untouched in the new version.
[GitHub] [flink] curcur commented on a change in pull request #11725: [FLINK-15670][API] Provide a Kafka Source/Sink pair as KafkaShuffle
curcur commented on a change in pull request #11725: URL: https://github.com/apache/flink/pull/11725#discussion_r418544086 ## File path: flink-streaming-java/src/main/java/org/apache/flink/streaming/api/operators/StreamShuffleSink.java ## @@ -0,0 +1,104 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + *http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.flink.streaming.api.operators; + +import org.apache.flink.annotation.Internal; +import org.apache.flink.streaming.api.functions.sink.SinkFunction; +import org.apache.flink.streaming.api.watermark.Watermark; +import org.apache.flink.streaming.runtime.streamrecord.LatencyMarker; +import org.apache.flink.streaming.runtime.streamrecord.StreamRecord; +import org.apache.flink.streaming.runtime.tasks.ProcessingTimeService; + +/** + * A {@link StreamOperator} for executing {@link SinkFunction} that handle both elements and watermarks. + * + * @param + */ +@Internal +public class StreamShuffleSink extends AbstractUdfStreamOperator> Review comment: Mark this as resolved since the entire API part has been re-organized.
[jira] [Resolved] (FLINK-14816) Add thread dump feature for taskmanager
[ https://issues.apache.org/jira/browse/FLINK-14816?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Till Rohrmann resolved FLINK-14816. --- Resolution: Fixed Fixed via 02944090d7d267790aa9265ddce3d8a56d324878 4dc1f7e8c22b970cd0656d40419d3745c3277c71 > Add thread dump feature for taskmanager > --- > > Key: FLINK-14816 > URL: https://issues.apache.org/jira/browse/FLINK-14816 > Project: Flink > Issue Type: New Feature > Components: Runtime / Web Frontend >Affects Versions: 1.9.1 >Reporter: lamber-ken >Assignee: lamber-ken >Priority: Major > Labels: pull-request-available, usability > Fix For: 1.11.0 > > Attachments: screenshot-1.png > > Time Spent: 10m > Remaining Estimate: 0h > > Add a thread dump feature for the taskmanager, so users can get thread information > easily. > !screenshot-1.png!
[GitHub] [flink] curcur commented on a change in pull request #11725: [FLINK-15670][API] Provide a Kafka Source/Sink pair as KafkaShuffle
curcur commented on a change in pull request #11725: URL: https://github.com/apache/flink/pull/11725#discussion_r418543715 ## File path: flink-core/src/main/java/org/apache/flink/util/PropertiesUtil.java ## @@ -19,6 +19,7 @@ Review comment: Oh, yep, used as a HashTable, translated to a HashMap. Thanks :-)
[jira] [Created] (FLINK-17494) Possible direct memory leak in cassandra sink
nobleyd created FLINK-17494: --- Summary: Possible direct memory leak in cassandra sink Key: FLINK-17494 URL: https://issues.apache.org/jira/browse/FLINK-17494 Project: Flink Issue Type: Bug Components: Connectors / Cassandra Affects Versions: 1.10.0, 1.9.3 Reporter: nobleyd # The Cassandra sink uses direct memory. # Start a standalone cluster (1 machine) for the test. # After the cluster has started, check the Flink web UI and record the task manager's memory info, specifically the direct memory part. # Start a job that reads from Kafka and writes to Cassandra using the Cassandra sink; you can see the direct memory count in the 'Outside JVM' part go up. # Stop the job; the direct memory count does not decrease (use 'jmap -histo:live pid' to force the task manager to GC). # Repeat several times, and the direct memory count keeps growing.
[GitHub] [flink] flinkbot commented on pull request #11970: [FLINK-17483][legal] Update flink-sql-connector-elasticsearch7 NOTICE file to correctly reflect bundled dependencies
flinkbot commented on pull request #11970: URL: https://github.com/apache/flink/pull/11970#issuecomment-622389283 Thanks a lot for your contribution to the Apache Flink project. I'm the @flinkbot. I help the community to review your pull request. We will use this comment to track the progress of the review. ## Automated Checks Last check on commit 151f32be272c47867647c3ef9877534c37887900 (Fri May 01 13:31:42 UTC 2020) **Warnings:** * No documentation files were touched! Remember to keep the Flink docs up to date! Mention the bot in a comment to re-run the automated checks. ## Review Progress * ❓ 1. The [description] looks good. * ❓ 2. There is [consensus] that the contribution should go into Flink. * ❓ 3. Needs [attention] from. * ❓ 4. The change fits into the overall [architecture]. * ❓ 5. Overall code [quality] is good. Please see the [Pull Request Review Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full explanation of the review process. The Bot is tracking the review progress through labels. Labels are applied according to the order of the review items. For consensus, approval by a Flink committer or PMC member is required Bot commands The @flinkbot bot supports the following commands: - `@flinkbot approve description` to approve one or more aspects (aspects: `description`, `consensus`, `architecture` and `quality`) - `@flinkbot approve all` to approve all aspects - `@flinkbot approve-until architecture` to approve everything until `architecture` - `@flinkbot attention @username1 [@username2 ..]` to require somebody's attention - `@flinkbot disapprove architecture` to remove an approval you gave earlier
[GitHub] [flink] curcur commented on a change in pull request #11725: [FLINK-15670][API] Provide a Kafka Source/Sink pair as KafkaShuffle
curcur commented on a change in pull request #11725: URL: https://github.com/apache/flink/pull/11725#discussion_r418541284 ## File path: flink-core/src/main/java/org/apache/flink/util/PropertiesUtil.java ## @@ -108,6 +109,27 @@ public static boolean getBoolean(Properties config, String key, boolean defaultV } } + /** +* Flatten a recursive {@link Properties} to a first level property map. +* In some cases, {KafkaProducer#propsToMap} for example, Properties is used purely as a HashMap +* without considering its default properties. +* +* @param config Properties to be flattened +* @return Properties without defaults; all properties are put in the first-level +*/ + public static Properties flatten(Properties config) { Review comment: > On a second thought, wouldn't it make more sense to provide a correct `propsToMap` implementation? If it's only used in KafkaProducer, then we could fix it there. If not, I'd consider that function more useful than this `flatten`. That name would still be confusing, since it is a Properties, not a map. The flattened properties are actually used in the Kafka client lib, so it's not that easy to fix. ## File path: flink-core/src/main/java/org/apache/flink/util/PropertiesUtil.java ## @@ -108,6 +109,27 @@ public static boolean getBoolean(Properties config, String key, boolean defaultV } } + /** +* Flatten a recursive {@link Properties} to a first level property map. +* In some cases, {KafkaProducer#propsToMap} for example, Properties is used purely as a HashMap +* without considering its default properties. +* +* @param config Properties to be flattened +* @return Properties without defaults; all properties are put in the first-level +*/ + public static Properties flatten(Properties config) { Review comment: > Should be covered with one test case. Sure, will add one later.
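The defaults-chain pitfall discussed in the review thread above is easy to demonstrate. Below is a hypothetical standalone sketch of such a `flatten` helper (class and method bodies are mine, not the PR's): a `Properties` built on a defaults chain hides those defaults from plain `Hashtable`-style iteration, while `stringPropertyNames()` walks the chain and lets us copy everything to the first level:

```java
import java.util.Properties;

public class FlattenDemo {

    // Copies every visible property (including inherited defaults) into a
    // fresh Properties that has no defaults chain.
    public static Properties flatten(Properties config) {
        Properties flat = new Properties();
        for (String key : config.stringPropertyNames()) { // walks the defaults chain too
            flat.setProperty(key, config.getProperty(key));
        }
        return flat;
    }

    public static void main(String[] args) {
        Properties defaults = new Properties();
        defaults.setProperty("bootstrap.servers", "localhost:9092");

        Properties config = new Properties(defaults); // defaults live in a separate chain
        config.setProperty("acks", "all");

        // Hashtable-style views see only the first level, so the default entry is invisible
        System.out.println(config.size());          // prints 1
        // The flattened copy surfaces both entries at the first level
        System.out.println(flatten(config).size()); // prints 2
    }
}
```

A `propsToMap`-style consumer that iterates the raw `Hashtable` entries would silently drop `bootstrap.servers` here, which is exactly the behavior the proposed utility works around.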
[GitHub] [flink] carp84 commented on pull request #11970: [FLINK-17483][legal] Update flink-sql-connector-elasticsearch7 NOTICE file to correctly reflect bundled dependencies
carp84 commented on pull request #11970: URL: https://github.com/apache/flink/pull/11970#issuecomment-622388870 CC @zentol @rmetzger