[PR] Add release announcement for Flink CDC 3.1.1 [flink-web]
PatrickRen opened a new pull request, #746: URL: https://github.com/apache/flink-web/pull/746 This pull request adds the release announcement for Flink CDC 3.1.1. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] [hotfix][checkpointing] Normalize file-merging sub dir [flink]
zoltar9264 commented on PR #24928: URL: https://github.com/apache/flink/pull/24928#issuecomment-2167344359 This fix is already part of [PR-24933](https://github.com/apache/flink/pull/24933), so this PR is closed.
Re: [PR] [hotfix][checkpointing] Normalize file-merging sub dir [flink]
zoltar9264 closed pull request #24928: [hotfix][checkpointing] Normalize file-merging sub dir URL: https://github.com/apache/flink/pull/24928
[jira] [Commented] (FLINK-35042) Streaming File Sink s3 end-to-end test failed as TM lost
[ https://issues.apache.org/jira/browse/FLINK-35042?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17854944#comment-17854944 ] Matthias Pohl commented on FLINK-35042: --- I noticed that the build failure in the description is unrelated to FLINK-34150 because it appeared on April 8, 2024 whereas FLINK-34150 only was merged on May 10, 2024. But the build failure I shared might be related. So, it could be that these two are actually two different issues. > Streaming File Sink s3 end-to-end test failed as TM lost > > > Key: FLINK-35042 > URL: https://issues.apache.org/jira/browse/FLINK-35042 > Project: Flink > Issue Type: Bug > Components: Build System / CI >Affects Versions: 1.20.0 >Reporter: Weijie Guo >Priority: Major > > https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=58782&view=logs&j=fb37c667-81b7-5c22-dd91-846535e99a97&t=011e961e-597c-5c96-04fe-7941c8b83f23&l=14344 > FAIL 'Streaming File Sink s3 end-to-end test' failed after 15 minutes and 20 > seconds! Test exited with exit code 1 > I have checked the JM log, it seems that a taskmanager is no longer reachable: > {code:java} > 2024-04-08T01:12:04.3922210Z Apr 08 01:12:04 2024-04-08 00:58:15,517 INFO > org.apache.flink.runtime.executiongraph.ExecutionGraph [] - Sink: > Unnamed (4/4) > (14b44f534745ffb2f1ef03fca34f7f0d_0a448493b4782967b150582570326227_3_0) > switched from RUNNING to FAILED on localhost:44987-47f5af @ localhost > (dataPort=34489). > 2024-04-08T01:12:04.3924522Z Apr 08 01:12:04 > org.apache.flink.runtime.jobmaster.JobMasterException: TaskManager with id > localhost:44987-47f5af is no longer reachable. 
> 2024-04-08T01:12:04.3925421Z Apr 08 01:12:04 at > org.apache.flink.runtime.jobmaster.JobMaster$TaskManagerHeartbeatListener.notifyTargetUnreachable(JobMaster.java:1511) > ~[flink-dist-1.20-SNAPSHOT.jar:1.20-SNAPSHOT] > 2024-04-08T01:12:04.3926185Z Apr 08 01:12:04 at > org.apache.flink.runtime.heartbeat.DefaultHeartbeatMonitor.reportHeartbeatRpcFailure(DefaultHeartbeatMonitor.java:126) > ~[flink-dist-1.20-SNAPSHOT.jar:1.20-SNAPSHOT] > 2024-04-08T01:12:04.3926925Z Apr 08 01:12:04 at > org.apache.flink.runtime.heartbeat.HeartbeatManagerImpl.runIfHeartbeatMonitorExists(HeartbeatManagerImpl.java:275) > ~[flink-dist-1.20-SNAPSHOT.jar:1.20-SNAPSHOT] > 2024-04-08T01:12:04.3929898Z Apr 08 01:12:04 at > org.apache.flink.runtime.heartbeat.HeartbeatManagerImpl.reportHeartbeatTargetUnreachable(HeartbeatManagerImpl.java:267) > ~[flink-dist-1.20-SNAPSHOT.jar:1.20-SNAPSHOT] > 2024-04-08T01:12:04.3930692Z Apr 08 01:12:04 at > org.apache.flink.runtime.heartbeat.HeartbeatManagerImpl.handleHeartbeatRpcFailure(HeartbeatManagerImpl.java:262) > ~[flink-dist-1.20-SNAPSHOT.jar:1.20-SNAPSHOT] > 2024-04-08T01:12:04.3931442Z Apr 08 01:12:04 at > org.apache.flink.runtime.heartbeat.HeartbeatManagerImpl.lambda$handleHeartbeatRpc$0(HeartbeatManagerImpl.java:248) > ~[flink-dist-1.20-SNAPSHOT.jar:1.20-SNAPSHOT] > 2024-04-08T01:12:04.3931917Z Apr 08 01:12:04 at > java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:774) > ~[?:1.8.0_402] > 2024-04-08T01:12:04.3934759Z Apr 08 01:12:04 at > java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:750) > ~[?:1.8.0_402] > 2024-04-08T01:12:04.3935252Z Apr 08 01:12:04 at > java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:456) > ~[?:1.8.0_402] > 2024-04-08T01:12:04.3935989Z Apr 08 01:12:04 at > org.apache.flink.runtime.rpc.pekko.PekkoRpcActor.lambda$handleRunAsync$4(PekkoRpcActor.java:460) > ~[flink-rpc-akka9681a48a-ca1a-45b0-bb71-4bdb5d2aed93.jar:1.20-SNAPSHOT] > 
2024-04-08T01:12:04.3936731Z Apr 08 01:12:04 at > org.apache.flink.runtime.concurrent.ClassLoadingUtils.runWithContextClassLoader(ClassLoadingUtils.java:68) > ~[flink-dist-1.20-SNAPSHOT.jar:1.20-SNAPSHOT] > 2024-04-08T01:12:04.3938103Z Apr 08 01:12:04 at > org.apache.flink.runtime.rpc.pekko.PekkoRpcActor.handleRunAsync(PekkoRpcActor.java:460) > ~[flink-rpc-akka9681a48a-ca1a-45b0-bb71-4bdb5d2aed93.jar:1.20-SNAPSHOT] > 2024-04-08T01:12:04.3942549Z Apr 08 01:12:04 at > org.apache.flink.runtime.rpc.pekko.PekkoRpcActor.handleRpcMessage(PekkoRpcActor.java:225) > ~[flink-rpc-akka9681a48a-ca1a-45b0-bb71-4bdb5d2aed93.jar:1.20-SNAPSHOT] > 2024-04-08T01:12:04.3945371Z Apr 08 01:12:04 at > org.apache.flink.runtime.rpc.pekko.FencedPekkoRpcActor.handleRpcMessage(FencedPekkoRpcActor.java:88) > ~[flink-rpc-akka9681a48a-ca1a-45b0-bb71-4bdb5d2aed93.jar:1.20-SNAPSHOT] > 2024-04-08T01:12:04.3946244Z Apr 08 01:12:04 at > org.apache.flink.runtime.rpc.pekko.PekkoRpcActor.handleMessage(PekkoRpcActor.java:174) > ~[flink-rpc-akka9681a48a-ca1a-45b0-bb71-4bdb5d2aed93.jar:1.20-SNAPSHOT] > 2024-04-08T01:12:
[jira] [Comment Edited] (FLINK-35042) Streaming File Sink s3 end-to-end test failed as TM lost
[ https://issues.apache.org/jira/browse/FLINK-35042?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17854944#comment-17854944 ] Matthias Pohl edited comment on FLINK-35042 at 6/14/24 6:37 AM: I noticed that the build failure in the description is unrelated to FLINK-34150 because it appeared on April 8, 2024 whereas FLINK-34150 only was merged on May 10, 2024. But the build failure I shared might be related. So, it could be that these two are actually two different issues. was (Author: mapohl): I noticed that the build failure in the description is unrelated to FLINK-34150 because it appeared on April 8, 2024 whereas FLINK-24150 only was merged on May 10, 2024. But the build failure I shared might be related. So, it could be that these two are actually two different issues.
Re: [PR] [FLINK-35593][Kubernetes Operator] Add Apache 2 License to docker image [flink-kubernetes-operator]
rmetzger commented on PR #839: URL: https://github.com/apache/flink-kubernetes-operator/pull/839#issuecomment-2167326372 Ah, I see. That guide was written before Flink started having many repositories. Back then everything was in apache/flink. Here, the component tags are used for things like [helm] or [docker].
Re: [PR] [FLINK-34918][table] Support `ALTER CATALOG COMMENT` syntax [flink]
liyubin117 commented on code in PR #24932: URL: https://github.com/apache/flink/pull/24932#discussion_r1639333418 ## flink-table/flink-sql-parser/src/main/java/org/apache/flink/sql/parser/ddl/SqlAlterCatalogComment.java: ## @@ -0,0 +1,62 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.flink.sql.parser.ddl; + +import org.apache.calcite.sql.SqlCharStringLiteral; +import org.apache.calcite.sql.SqlIdentifier; +import org.apache.calcite.sql.SqlNode; +import org.apache.calcite.sql.SqlWriter; +import org.apache.calcite.sql.parser.SqlParserPos; +import org.apache.calcite.util.ImmutableNullableList; + +import java.util.List; + +import static java.util.Objects.requireNonNull; + +/** ALTER CATALOG catalog_name COMMENT 'comment'. 
*/ +public class SqlAlterCatalogComment extends SqlAlterCatalog { + +private final SqlCharStringLiteral comment; + +public SqlAlterCatalogComment( +SqlParserPos position, SqlIdentifier catalogName, SqlCharStringLiteral comment) { +super(position, catalogName); +this.comment = requireNonNull(comment, "comment cannot be null"); +} + +@Override +public List<SqlNode> getOperandList() { +return ImmutableNullableList.of(catalogName, comment); +} + +public SqlCharStringLiteral getComment() { +return comment; +} + +public String getCommentAsString() { +return comment.getValueAs(String.class); +} Review Comment: done as `enhanced create catalog`
Re: [PR] [FLINK-34918][table] Support `ALTER CATALOG COMMENT` syntax [flink]
liyubin117 commented on code in PR #24932: URL: https://github.com/apache/flink/pull/24932#discussion_r1639333058 ## flink-table/flink-table-common/src/main/java/org/apache/flink/table/catalog/CatalogDescriptor.java: ## @@ -48,18 +51,29 @@ public Configuration getConfiguration() { return configuration; } -private CatalogDescriptor(String catalogName, Configuration configuration) { +public String getComment() { +return comment; +} Review Comment: done ## flink-table/flink-table-common/src/main/java/org/apache/flink/table/catalog/CatalogDescriptor.java: ## @@ -40,6 +40,9 @@ public class CatalogDescriptor { /* The configuration used to discover and construct the catalog. */ private final Configuration configuration; +/* Catalog comment. */ +private final String comment; Review Comment: done
[jira] [Commented] (FLINK-35592) MysqlDebeziumTimeConverter miss timezone convert to timestamp
[ https://issues.apache.org/jira/browse/FLINK-35592?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17854943#comment-17854943 ] Qingsheng Ren commented on FLINK-35592: --- release-3.1: f476a5b1dd99fc63962bc75048c0b12957e53da8 > MysqlDebeziumTimeConverter miss timezone convert to timestamp > - > > Key: FLINK-35592 > URL: https://issues.apache.org/jira/browse/FLINK-35592 > Project: Flink > Issue Type: Improvement > Components: Flink CDC >Reporter: ZhengYu Chen >Assignee: ZhengYu Chen >Priority: Major > Fix For: cdc-3.1.1 > > > MysqlDebeziumTimeConverter misses the timezone conversion when converting to a > timestamp. If a timestamp is formatted as mmddhhmmss, the timezone conversion is lost. -- This message was sent by Atlassian Jira (v8.20.10#820010)
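The issue above boils down to formatting an epoch instant without applying the intended time zone. A minimal illustration with plain `java.time` (this is not the actual `MysqlDebeziumTimeConverter` code; the helper name is hypothetical) of why the configured zone must be threaded through the conversion:

```java
import java.time.Instant;
import java.time.ZoneId;
import java.time.format.DateTimeFormatter;

public class TimestampZoneDemo {

    private static final DateTimeFormatter FMT = DateTimeFormatter.ofPattern("yyyyMMddHHmmss");

    // Rendering an epoch instant as a local date-time string is only
    // well-defined relative to a zone: the converter has to apply the
    // configured zone, otherwise the offset information is silently dropped.
    public static String format(long epochMillis, String zone) {
        return Instant.ofEpochMilli(epochMillis).atZone(ZoneId.of(zone)).format(FMT);
    }

    public static void main(String[] args) {
        System.out.println(format(0L, "UTC"));           // 19700101000000
        System.out.println(format(0L, "Asia/Shanghai")); // 19700101080000
    }
}
```

The same instant yields two different strings, which is exactly the information a zone-unaware conversion loses.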
Re: [PR] [FLINK-34918][table] Support `ALTER CATALOG COMMENT` syntax [flink]
liyubin117 commented on code in PR #24932: URL: https://github.com/apache/flink/pull/24932#discussion_r1639333766 ## flink-table/flink-sql-parser/src/main/codegen/includes/parserImpls.ftl: ## @@ -176,6 +176,15 @@ SqlAlterCatalog SqlAlterCatalog() : catalogName, propertyList); } +| + +{ +String p = SqlParserUtil.parseString(token.image); +comment = SqlLiteral.createCharString(p, getPos()); Review Comment: done
Re: [PR] [FLINK-34918][table] Support `ALTER CATALOG COMMENT` syntax [flink]
liyubin117 commented on code in PR #24932: URL: https://github.com/apache/flink/pull/24932#discussion_r1639333418 ## flink-table/flink-sql-parser/src/main/java/org/apache/flink/sql/parser/ddl/SqlAlterCatalogComment.java: ## @@ -0,0 +1,62 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.flink.sql.parser.ddl; + +import org.apache.calcite.sql.SqlCharStringLiteral; +import org.apache.calcite.sql.SqlIdentifier; +import org.apache.calcite.sql.SqlNode; +import org.apache.calcite.sql.SqlWriter; +import org.apache.calcite.sql.parser.SqlParserPos; +import org.apache.calcite.util.ImmutableNullableList; + +import java.util.List; + +import static java.util.Objects.requireNonNull; + +/** ALTER CATALOG catalog_name COMMENT 'comment'. 
*/ +public class SqlAlterCatalogComment extends SqlAlterCatalog { + +private final SqlCharStringLiteral comment; + +public SqlAlterCatalogComment( +SqlParserPos position, SqlIdentifier catalogName, SqlCharStringLiteral comment) { +super(position, catalogName); +this.comment = requireNonNull(comment, "comment cannot be null"); +} + +@Override +public List getOperandList() { +return ImmutableNullableList.of(catalogName, comment); +} + +public SqlCharStringLiteral getComment() { +return comment; +} + +public String getCommentAsString() { +return comment.getValueAs(String.class); +} Review Comment: done ## flink-table/flink-sql-parser/src/main/java/org/apache/flink/sql/parser/ddl/SqlAlterCatalogComment.java: ## @@ -0,0 +1,62 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.flink.sql.parser.ddl; + +import org.apache.calcite.sql.SqlCharStringLiteral; +import org.apache.calcite.sql.SqlIdentifier; +import org.apache.calcite.sql.SqlNode; +import org.apache.calcite.sql.SqlWriter; +import org.apache.calcite.sql.parser.SqlParserPos; +import org.apache.calcite.util.ImmutableNullableList; + +import java.util.List; + +import static java.util.Objects.requireNonNull; + +/** ALTER CATALOG catalog_name COMMENT 'comment'. */ +public class SqlAlterCatalogComment extends SqlAlterCatalog { + +private final SqlCharStringLiteral comment; + +public SqlAlterCatalogComment( +SqlParserPos position, SqlIdentifier catalogName, SqlCharStringLiteral comment) { +super(position, catalogName); +this.comment = requireNonNull(comment, "comment cannot be null"); +} + +@Override +public List<SqlNode> getOperandList() { +return ImmutableNullableList.of(catalogName, comment); +} + +public SqlCharStringLiteral getComment() { Review Comment: done
Re: [PR] [FLINK-34918][table] Support `ALTER CATALOG COMMENT` syntax [flink]
liyubin117 commented on code in PR #24932: URL: https://github.com/apache/flink/pull/24932#discussion_r1639332192 ## flink-table/flink-table-common/src/main/java/org/apache/flink/table/catalog/CatalogDescriptor.java: ## @@ -48,18 +51,29 @@ public Configuration getConfiguration() { return configuration; } -private CatalogDescriptor(String catalogName, Configuration configuration) { +public String getComment() { +return comment; +} + +private CatalogDescriptor(String catalogName, Configuration configuration, String comment) { Review Comment: done ## flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/operations/DescribeCatalogOperation.java: ## @@ -84,8 +84,7 @@ public TableResultInternal execute(Context ctx) { "type", properties.getOrDefault( CommonCatalogOptions.CATALOG_TYPE.key(), "")), -// TODO: Show the catalog comment until FLINK-34918 is resolved -Arrays.asList("comment", ""))); +Arrays.asList("comment", catalogDescriptor.getComment()))); Review Comment: done
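The hunks above add a `comment` field to `CatalogDescriptor` so that `DESCRIBE CATALOG` can surface it instead of a hard-coded empty string. A simplified, standalone sketch of that shape (field names mirror the diff; the factory method and the empty-string fallback are assumptions for illustration, not Flink's actual class):

```java
import java.util.Collections;
import java.util.Map;

/** Simplified stand-in for a catalog descriptor that carries an optional comment. */
public final class CatalogDescriptorSketch {

    private final String catalogName;
    private final Map<String, String> options;
    private final String comment; // empty when the user supplied none

    private CatalogDescriptorSketch(String catalogName, Map<String, String> options, String comment) {
        this.catalogName = catalogName;
        this.options = options;
        this.comment = comment;
    }

    public static CatalogDescriptorSketch of(String name, Map<String, String> options, String comment) {
        return new CatalogDescriptorSketch(name, options, comment == null ? "" : comment);
    }

    /** A DESCRIBE-style command can now print this instead of a placeholder. */
    public String getComment() {
        return comment;
    }

    public String getCatalogName() {
        return catalogName;
    }

    public static void main(String[] args) {
        CatalogDescriptorSketch d = of("my_catalog", Collections.emptyMap(), "demo catalog");
        System.out.println(d.getCatalogName() + ": " + d.getComment()); // my_catalog: demo catalog
    }
}
```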
Re: [PR] [FLINK-35593][Kubernetes Operator] Add Apache 2 License to docker image [flink-kubernetes-operator]
anupamaggarwal commented on PR #839: URL: https://github.com/apache/flink-kubernetes-operator/pull/839#issuecomment-2167310593 > +1 to merge. I don't think the `[Kubernetes Operator]` tag in the commit message is needed, since this is the operator repo. ack @rmetzger I was following https://flink.apache.org/how-to-contribute/code-style-and-quality-pull-requests/#1-jira-issue-and-naming ?
Re: [PR] [FLINK-33192] Fix Memory Leak in WindowOperator due to Improper Timer Cleanup [flink]
fredia commented on PR #24917: URL: https://github.com/apache/flink/pull/24917#issuecomment-2167284732 @kartikeypant Thanks for the PR, overall LGTM, would you like to add a test about this in `WindowOperatorTest`?
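For context on the bug class this PR targets: window state that is purged without also deleting its registered cleanup timer leaks the timer entry. A toy model of that bookkeeping invariant (not the real `WindowOperator`; class and method names are illustrative), showing exactly what a regression test would pin down:

```java
import java.util.HashMap;
import java.util.Map;

/** Toy model of per-window timer bookkeeping: purging a window must also drop its timer. */
public class WindowTimerRegistry {

    private final Map<String, Long> cleanupTimers = new HashMap<>();

    public void registerCleanupTimer(String windowId, long cleanupTime) {
        cleanupTimers.put(windowId, cleanupTime);
    }

    /** Omitting this remove() when window state is cleared is exactly the leak. */
    public void purgeWindow(String windowId) {
        cleanupTimers.remove(windowId);
    }

    public int pendingTimerCount() {
        return cleanupTimers.size();
    }

    public static void main(String[] args) {
        WindowTimerRegistry registry = new WindowTimerRegistry();
        registry.registerCleanupTimer("w1", 100L);
        registry.registerCleanupTimer("w2", 200L);
        registry.purgeWindow("w1");
        System.out.println(registry.pendingTimerCount()); // 1
    }
}
```

A unit test would assert that after purging every window the pending-timer count returns to zero.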
Re: [PR] [FLINK-35237] Allow Sink to Choose HashFunction in PrePartitionOperator [flink-cdc]
dingxin-tech commented on code in PR #3414: URL: https://github.com/apache/flink-cdc/pull/3414#discussion_r1639313828 ## flink-cdc-common/src/main/java/org/apache/flink/cdc/common/sink/HashFunctionProvider.java: ## @@ -0,0 +1,27 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.flink.cdc.common.sink; + +import org.apache.flink.cdc.common.schema.Schema; + +import java.io.Serializable; + +/** use for PrePartitionOperator. */ Review Comment: Sure, thank you for the very clear suggested comment.
Re: [PR] [FLINK-35237] Allow Sink to Choose HashFunction in PrePartitionOperator [flink-cdc]
dingxin-tech commented on code in PR #3414: URL: https://github.com/apache/flink-cdc/pull/3414#discussion_r1639312924 ## flink-cdc-common/src/main/java/org/apache/flink/cdc/common/sink/HashFunction.java: ## @@ -0,0 +1,28 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.flink.cdc.common.sink; + +import org.apache.flink.cdc.common.event.DataChangeEvent; + +import java.util.function.Function; + +/** use for PrePartitionOperator when calculating hash code of primary key. */ Review Comment: Actually, `Sink` in `cdc-common` cannot reference `PrePartitionOperator` in `cdc-runtime`.
Re: [PR] [FLINK-35593][Kubernetes Operator] Add Apache 2 License to docker image [flink-kubernetes-operator]
rmetzger commented on PR #839: URL: https://github.com/apache/flink-kubernetes-operator/pull/839#issuecomment-2167276203 +1 to merge. I don't think the `[Kubernetes Operator]` tag in the commit message is needed, since this is the operator repo.
Re: [PR] [FLINK-35167][cdc-connector] Introduce MaxCompute pipeline DataSink [flink-cdc]
dingxin-tech commented on code in PR #3254: URL: https://github.com/apache/flink-cdc/pull/3254#discussion_r1639307538 ## docs/content.zh/docs/connectors/maxcompute.md: ## @@ -0,0 +1,342 @@ +--- +title: "MaxCompute" +weight: 7 +type: docs +aliases: + - /connectors/maxcompute +--- + + + +# MaxCompute Connector + +The MaxCompute Pipeline connector can be used as the *Data Sink* of a pipeline to write data to [MaxCompute](https://www.aliyun.com/product/odps). +This document describes how to set up the MaxCompute Pipeline connector. + +## Connector Features + +* Automatic table creation +* Schema change synchronization +* Real-time data synchronization + +## Example + +A pipeline that reads data from MySQL and synchronizes it to MaxCompute can be defined as follows: + +```yaml
source:
  type: mysql
  name: MySQL Source
  hostname: 127.0.0.1
  port: 3306
  username: admin
  password: pass
  tables: adb.\.*, bdb.user_table_[0-9]+, [app|web].order_\.*
  server-id: 5401-5404

sink:
  type: maxcompute
  name: MaxCompute Sink
  accessId: ak
  accessKey: sk
  endpoint: endpoint
  project: flink_cdc
  bucketSize: 8

pipeline:
  name: MySQL to MaxCompute Pipeline
  parallelism: 2
```

## Connector Options

| Option | Required | Default | Type | Description |
|---|---|---|---|---|
| type | required | (none) | String | Specifies the connector to use; must be set to 'maxcompute'. |
| name | optional | (none) | String | The name of the sink. |
| accessId | required | (none) | String | AccessKey ID of an Alibaba Cloud account or RAM user, obtainable on the [AccessKey management page](https://ram.console.aliyun.com/manage/ak). |
| accessKey | required | (none) | String | AccessKey Secret corresponding to the AccessKey ID, obtainable on the [AccessKey management page](https://ram.console.aliyun.com/manage/ak). |
| endpoint | required | (none) | String | Connection endpoint of the MaxCompute service. Configure it according to the region and network connection type chosen when the MaxCompute project was created; see [Endpoints](https://help.aliyun.com/zh/maxcompute/user-guide/endpoints). |
| project | required | (none) | String | MaxCompute project name, shown in the [MaxCompute console](https://maxcompute.console.aliyun.com/) under Workspace > Project Management. |
| tunnelEndpoint | optional | (none) | String | Connection endpoint of the MaxCompute Tunnel service. Usually auto-routed from the project's region; only needed in special network environments such as proxies. |
| quotaName | optional | (none) | String | Name of the exclusive resource group used for MaxCompute data transfer; the shared resource group is used if unset. See [Use MaxCompute exclusive resource groups](https://help.aliyun.com/zh/maxcompute/user-guide/purchase-and-use-exclusive-resource-groups-for-dts). |
| stsToken | optional | (none) | String | Required when authenticating with a short-lived STS token issued for a RAM role. |
| bucketsNum | optional | 16 | Integer | Number of buckets used when automatically creating MaxCompute transaction tables. For usage, see Review Comment: Fixed. ## flink-cdc-connect/flink-cdc-pipeline-connectors/flink-cdc-pipeline-connector-maxcompute/src/main/resources/META-INF/services/org.apache.flink.cdc.common.factories.Factory: ## @@ -0,0 +1,17 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. 
You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +org.apache.flink.cdc.connectors.maxcompute.MaxComputeDataSinkFactory Review Comment: Done.
Re: [PR] [cdc-common] add field of defaultValue to Column. [flink-cdc]
yuxiqian commented on code in PR #2944: URL: https://github.com/apache/flink-cdc/pull/2944#discussion_r1639295066 ## flink-cdc-runtime/src/main/java/org/apache/flink/cdc/runtime/serializer/schema/PhysicalColumnSerializer.java: ## Review Comment: This seems to be a breaking change to the current serialization format. Should we bump the serializer version here and keep backwards compatibility?
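Version-gating is the usual remedy for a format break like the one flagged above. The sketch below is a generic illustration in plain `java.io`, not the Flink CDC serializer API, and it assumes a version tag was already part of the old on-wire format; version 2 appends the optional `defaultValue` field while version-1 payloads stop after the column name.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.io.UncheckedIOException;

// Generic illustration of version-gated (de)serialization; NOT the
// Flink CDC serializer interfaces.
class ColumnFormat {
    static final int CURRENT_VERSION = 2;

    // Serialize a column under the given format version (1 = legacy, 2 = with defaultValue).
    static byte[] write(int version, String name, String defaultValue) {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (DataOutputStream out = new DataOutputStream(bos)) {
            out.writeInt(version);      // version tag written up front
            out.writeUTF(name);
            if (version >= 2) {         // field introduced by the new format
                out.writeBoolean(defaultValue != null);
                if (defaultValue != null) {
                    out.writeUTF(defaultValue);
                }
            }
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
        return bos.toByteArray();
    }

    // Deserialize either version; legacy payloads yield a null defaultValue.
    static String[] read(byte[] bytes) {
        try (DataInputStream in = new DataInputStream(new ByteArrayInputStream(bytes))) {
            int version = in.readInt();
            String name = in.readUTF();
            String defaultValue = null;
            if (version >= 2 && in.readBoolean()) {
                defaultValue = in.readUTF();
            }
            return new String[] {name, defaultValue};
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        byte[] legacy = write(1, "id", null);          // bytes an old writer produced
        byte[] fresh = write(2, "name", "unknown");
        System.out.println(read(legacy)[1]);           // null
        System.out.println(read(fresh)[1]);            // unknown
    }
}
```

Real compatibility work in Flink may instead go through a serializer snapshot mechanism; the point of the sketch is only the read-side branch on the recorded version.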
Re: [PR] [FLINK-35167][cdc-connector] Introduce MaxCompute pipeline DataSink [flink-cdc]
dingxin-tech commented on code in PR #3254: URL: https://github.com/apache/flink-cdc/pull/3254#discussion_r1639306007 ## flink-cdc-connect/flink-cdc-pipeline-connectors/flink-cdc-pipeline-connector-maxcompute/src/main/java/org/apache/flink/cdc/connectors/maxcompute/utils/TypeConvertUtils.java: ## @@ -0,0 +1,540 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.flink.cdc.connectors.maxcompute.utils; + +import org.apache.flink.cdc.common.data.ArrayData; +import org.apache.flink.cdc.common.data.MapData; +import org.apache.flink.cdc.common.data.RecordData; +import org.apache.flink.cdc.common.schema.Schema; +import org.apache.flink.cdc.common.types.ArrayType; +import org.apache.flink.cdc.common.types.DataType; +import org.apache.flink.cdc.common.types.DecimalType; +import org.apache.flink.cdc.common.types.MapType; +import org.apache.flink.cdc.common.types.RowType; +import org.apache.flink.cdc.common.utils.SchemaUtils; + +import com.aliyun.odps.Column; +import com.aliyun.odps.OdpsType; +import com.aliyun.odps.TableSchema; +import com.aliyun.odps.data.ArrayRecord; +import com.aliyun.odps.data.Binary; +import com.aliyun.odps.data.SimpleStruct; +import com.aliyun.odps.data.Struct; +import com.aliyun.odps.table.utils.Preconditions; +import com.aliyun.odps.type.StructTypeInfo; +import com.aliyun.odps.type.TypeInfo; +import com.aliyun.odps.type.TypeInfoFactory; + +import java.math.BigDecimal; +import java.time.Instant; +import java.time.LocalDate; +import java.time.LocalDateTime; +import java.time.LocalTime; +import java.util.ArrayList; +import java.util.Arrays; +import java.util.HashMap; +import java.util.HashSet; +import java.util.List; +import java.util.Map; +import java.util.Set; +import java.util.stream.Collectors; + +import static org.apache.flink.cdc.common.types.DataTypeChecks.getFieldCount; +import static org.apache.flink.cdc.common.types.DataTypeChecks.getPrecision; +import static org.apache.flink.cdc.common.types.DataTypeChecks.getScale; + +/** + * Data type mapping table This table shows the mapping relationship from Flink types to MaxCompute + * types and the corresponding Java type representation. 
+ *
+ * | Flink Type                     | MaxCompute Type | Flink Java Type          | MaxCompute Java Type |
+ * |--------------------------------|-----------------|--------------------------|----------------------|
+ * | CHAR/VARCHAR/STRING            | STRING          | StringData               | String               |
+ * | BOOLEAN                        | BOOLEAN         | Boolean                  | Boolean              |
+ * | BINARY/VARBINARY               | BINARY          | byte[]                   | odps.data.Binary     |
+ * | DECIMAL                        | DECIMAL         | DecimalData              | BigDecimal           |
+ * | TINYINT                        | TINYINT         | Byte                     | Byte                 |
+ * | SMALLINT                       | SMALLINT        | Short                    | Short                |
+ * | INTEGER                        | INTEGER         | Integer                  | Integer              |
+ * | BIGINT                         | BIGINT          | Long                     | Long                 |
+ * | FLOAT                          | FLOAT           | Float                    | Float                |
+ * | DOUBLE                         | DOUBLE          | Double                   | Double               |
+ * | TIME_WITHOUT_TIME_ZONE         | STRING          | Integer                  | String               |
+ * | DATE                           | DATE            | Integer                  | LocalDate            |
+ * | TIMESTAMP_WITHOUT_TIME_ZONE    | TIMESTAMP_NTZ   | TimestampData            | LocalDateTime        |
+ * | TIMESTAMP_WITH_LOCAL_TIME_ZONE | TIMESTAMP       | LocalZonedTimestampData  | Instant              |
+ * | TIMESTAMP_WITH_TIME_ZONE       | TIMESTAMP       | ZonedTimestampData       | Instant              |
+ * | ARRAY                          | ARRAY           | ArrayData                | ArrayList            |
+ * | MAP                            | MAP             | MapData                  | HashMap              |
+ * | ROW
Re: [PR] [FLINK-34487][ci] Adds Python Wheels nightly GHA workflow [flink]
XComp commented on code in PR #24426: URL: https://github.com/apache/flink/pull/24426#discussion_r1639305664 ## .github/workflows/nightly.yml: ## @@ -28,69 +28,131 @@ jobs: name: "Pre-compile Checks" uses: ./.github/workflows/template.pre-compile-checks.yml - java8: -name: "Java 8" -uses: ./.github/workflows/template.flink-ci.yml -with: - workflow-caller-id: java8 - environment: 'PROFILE="-Dinclude_hadoop_aws"' - jdk_version: 8 -secrets: - s3_bucket: ${{ secrets.IT_CASE_S3_BUCKET }} - s3_access_key: ${{ secrets.IT_CASE_S3_ACCESS_KEY }} - s3_secret_key: ${{ secrets.IT_CASE_S3_SECRET_KEY }} - java11: -name: "Java 11" -uses: ./.github/workflows/template.flink-ci.yml -with: - workflow-caller-id: java11 - environment: 'PROFILE="-Dinclude_hadoop_aws -Djdk11 -Pjava11-target"' - jdk_version: 11 -secrets: - s3_bucket: ${{ secrets.IT_CASE_S3_BUCKET }} - s3_access_key: ${{ secrets.IT_CASE_S3_ACCESS_KEY }} - s3_secret_key: ${{ secrets.IT_CASE_S3_SECRET_KEY }} - java17: -name: "Java 17" -uses: ./.github/workflows/template.flink-ci.yml -with: - workflow-caller-id: java17 - environment: 'PROFILE="-Dinclude_hadoop_aws -Djdk11 -Djdk17 -Pjava17-target"' - jdk_version: 17 -secrets: - s3_bucket: ${{ secrets.IT_CASE_S3_BUCKET }} - s3_access_key: ${{ secrets.IT_CASE_S3_ACCESS_KEY }} - s3_secret_key: ${{ secrets.IT_CASE_S3_SECRET_KEY }} - java21: -name: "Java 21" -uses: ./.github/workflows/template.flink-ci.yml -with: - workflow-caller-id: java21 - environment: 'PROFILE="-Dinclude_hadoop_aws -Djdk11 -Djdk17 -Djdk21 -Pjava21-target"' - jdk_version: 21 -secrets: - s3_bucket: ${{ secrets.IT_CASE_S3_BUCKET }} - s3_access_key: ${{ secrets.IT_CASE_S3_ACCESS_KEY }} - s3_secret_key: ${{ secrets.IT_CASE_S3_SECRET_KEY }} - hadoop313: -name: "Hadoop 3.1.3" -uses: ./.github/workflows/template.flink-ci.yml -with: - workflow-caller-id: hadoop313 - environment: 'PROFILE="-Dflink.hadoop.version=3.2.3 -Phadoop3-tests,hive3"' - jdk_version: 8 -secrets: - s3_bucket: ${{ secrets.IT_CASE_S3_BUCKET }} - 
s3_access_key: ${{ secrets.IT_CASE_S3_ACCESS_KEY }} - s3_secret_key: ${{ secrets.IT_CASE_S3_SECRET_KEY }} - adaptive-scheduler: -name: "AdaptiveScheduler" -uses: ./.github/workflows/template.flink-ci.yml -with: - workflow-caller-id: adaptive-scheduler - environment: 'PROFILE="-Penable-adaptive-scheduler"' - jdk_version: 8 -secrets: - s3_bucket: ${{ secrets.IT_CASE_S3_BUCKET }} - s3_access_key: ${{ secrets.IT_CASE_S3_ACCESS_KEY }} - s3_secret_key: ${{ secrets.IT_CASE_S3_SECRET_KEY }} +# java8: Review Comment: That's just disabled code to speed up the test run. It will be reverted before merging the PR.
Re: [PR] [FLINK-35167][cdc-connector] Introduce MaxCompute pipeline DataSink [flink-cdc]
dingxin-tech commented on code in PR #3254: URL: https://github.com/apache/flink-cdc/pull/3254#discussion_r1639304610 ## flink-cdc-connect/flink-cdc-pipeline-connectors/flink-cdc-pipeline-connector-maxcompute/src/test/java/org/apache/flink/cdc/connectors/maxcompute/utils/SchemaEvolutionUtilsTest.java: ## @@ -0,0 +1,97 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.flink.cdc.connectors.maxcompute.utils; + +import org.apache.flink.cdc.common.event.AddColumnEvent; +import org.apache.flink.cdc.common.event.TableId; +import org.apache.flink.cdc.common.schema.Column; +import org.apache.flink.cdc.common.schema.Schema; +import org.apache.flink.cdc.common.types.DataTypes; +import org.apache.flink.cdc.connectors.maxcompute.MockedMaxComputeOptions; + +import org.apache.flink.shaded.guava31.com.google.common.collect.ImmutableList; +import org.apache.flink.shaded.guava31.com.google.common.collect.ImmutableMap; + +import com.aliyun.odps.OdpsException; +import org.junit.Assert; +import org.junit.BeforeClass; +import org.junit.jupiter.api.Test; + +/** e2e test of SchemaEvolutionUtils. */ +public class SchemaEvolutionUtilsTest { Review Comment: Sure. 
[jira] [Updated] (FLINK-35600) Data read duplication during the full-to-incremental conversion phase
[ https://issues.apache.org/jira/browse/FLINK-35600?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Di Wu updated FLINK-35600: -- Description: Assume that the table has been split into 3 Chunks Timeline t1: chunk1 is read t2: a piece of data A belonging to chunk2 is inserted in MySQL t3: chunk2 is read, and data A has been sent downstream t4: chunk3 is read At this time, startOffset will be set to lowwatermark t5: *BinlogSplitReader.pollSplitRecords* receives data A, and uses the method *shouldEmit* to determine whether the data is sent downstream In this method {code:java} private boolean hasEnterPureBinlogPhase(TableId tableId, BinlogOffset position) { if (pureBinlogPhaseTables.contains(tableId)) { return true; } // the existed tables those have finished snapshot reading if (maxSplitHighWatermarkMap.containsKey(tableId) && position.isAtOrAfter(maxSplitHighWatermarkMap.get(tableId))) { pureBinlogPhaseTables.add(tableId); return true; } } {code} *maxSplitHighWatermarkMap.get(tableId)* obtains the HighWatermark data without ts_sec variable, and the default value is 0 *position.isAtOrAfter(maxSplitHighWatermarkMap.get(tableId))* So this expression is judged as true *Data A continues to be sent downstream, and the data is repeated* was: Assume that the table has been split into 3 Chunks Timeline t1: chunk1 is read t2: a piece of data A belonging to chunk2 is inserted in MySQL t3: chunk2 is read, and data A has been sent downstream t4: chunk3 is read At this time, startOffset will be set to lowwatermark t5: BinlogSplitReader.pollSplitRecords receives data A, and uses the method shouldEmit to determine whether the data is sent downstream In this method {code:java} private boolean hasEnterPureBinlogPhase(TableId tableId, BinlogOffset position) { if (pureBinlogPhaseTables.contains(tableId)) { return true; } // the existed tables those have finished snapshot reading if (maxSplitHighWatermarkMap.containsKey(tableId) && 
position.isAtOrAfter(maxSplitHighWatermarkMap.get(tableId))) { pureBinlogPhaseTables.add(tableId); return true; } } {code} *maxSplitHighWatermarkMap.get(tableId)* obtains the HighWatermark data without ts_sec variable, and the default value is 0 *position.isAtOrAfter(maxSplitHighWatermarkMap.get(tableId))* So this expression is judged as true *Data A continues to be sent downstream, and the data is repeated* > Data read duplication during the full-to-incremental conversion phase > - > > Key: FLINK-35600 > URL: https://issues.apache.org/jira/browse/FLINK-35600 > Project: Flink > Issue Type: Bug > Components: Flink CDC >Affects Versions: cdc-3.1.0 >Reporter: Di Wu >Priority: Major > Labels: pull-request-available > > Assume that the table has been split into 3 Chunks > Timeline > t1: chunk1 is read > t2: a piece of data A belonging to chunk2 is inserted in MySQL > t3: chunk2 is read, and data A has been sent downstream > t4: chunk3 is read > At this time, startOffset will be set to lowwatermark > t5: *BinlogSplitReader.pollSplitRecords* receives data A, and uses the method > *shouldEmit* to determine whether the data is sent downstream > In this method > {code:java} > private boolean hasEnterPureBinlogPhase(TableId tableId, BinlogOffset > position) { > if (pureBinlogPhaseTables.contains(tableId)) { > return true; > } > // the existed tables those have finished snapshot reading > if (maxSplitHighWatermarkMap.containsKey(tableId) > && position.isAtOrAfter(maxSplitHighWatermarkMap.get(tableId))) { > pureBinlogPhaseTables.add(tableId); > return true; > } > } {code} > *maxSplitHighWatermarkMap.get(tableId)* obtains the HighWatermark data > without ts_sec variable, and the default value is 0 > *position.isAtOrAfter(maxSplitHighWatermarkMap.get(tableId))* > So this expression is judged as true > *Data A continues to be sent downstream, and the data is repeated* -- This message was sent by Atlassian Jira (v8.20.10#820010)
Re: [PR] [FLINK-35597][test] Fix unstable LocatableSplitAssignerTest#testConcurrentSplitAssignmentForMultipleHosts [flink]
1996fanrui commented on PR #24936: URL: https://github.com/apache/flink/pull/24936#issuecomment-2167246199 > @1996fanrui @GOODBOY008 Thanks for your attention. I found that the CI e2e tests have been stuck for hours. As the code is about to be frozen, could you help cancel some tasks? For example, this task has already failed but is still running the e2e test. https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=60262&view=results No worries, the bugfix can still be merged after the feature freeze; it is only new features that cannot. Let's wait for the CI for a while.
[jira] [Updated] (FLINK-35600) Data read duplication during the full-to-incremental conversion phase
[ https://issues.apache.org/jira/browse/FLINK-35600?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated FLINK-35600: --- Labels: pull-request-available (was: ) > Data read duplication during the full-to-incremental conversion phase > - > > Key: FLINK-35600 > URL: https://issues.apache.org/jira/browse/FLINK-35600 > Project: Flink > Issue Type: Bug > Components: Flink CDC >Affects Versions: cdc-3.1.0 >Reporter: Di Wu >Priority: Major > Labels: pull-request-available > > Assume that the table has been split into 3 Chunks > Timeline > t1: chunk1 is read > t2: a piece of data A belonging to chunk2 is inserted in MySQL > t3: chunk2 is read, and data A has been sent downstream > t4: chunk3 is read > At this time, startOffset will be set to lowwatermark > t5: BinlogSplitReader.pollSplitRecords receives data A, and uses the method > shouldEmit to determine whether the data is sent downstream > In this method > {code:java} > private boolean hasEnterPureBinlogPhase(TableId tableId, BinlogOffset > position) { > if (pureBinlogPhaseTables.contains(tableId)) { > return true; > } > // the existed tables those have finished snapshot reading > if (maxSplitHighWatermarkMap.containsKey(tableId) > && position.isAtOrAfter(maxSplitHighWatermarkMap.get(tableId))) { > pureBinlogPhaseTables.add(tableId); > return true; > } > } {code} > *maxSplitHighWatermarkMap.get(tableId)* obtains the HighWatermark data > without ts_sec variable, and the default value is 0 > *position.isAtOrAfter(maxSplitHighWatermarkMap.get(tableId))* > So this expression is judged as true > *Data A continues to be sent downstream, and the data is repeated* -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (FLINK-35600) Data read duplication during the full-to-incremental conversion phase
Di Wu created FLINK-35600: - Summary: Data read duplication during the full-to-incremental conversion phase Key: FLINK-35600 URL: https://issues.apache.org/jira/browse/FLINK-35600 Project: Flink Issue Type: Bug Components: Flink CDC Affects Versions: cdc-3.1.0 Reporter: Di Wu Assume that the table has been split into 3 Chunks Timeline t1: chunk1 is read t2: a piece of data A belonging to chunk2 is inserted in MySQL t3: chunk2 is read, and data A has been sent downstream t4: chunk3 is read At this time, startOffset will be set to lowwatermark t5: BinlogSplitReader.pollSplitRecords receives data A, and uses the method shouldEmit to determine whether the data is sent downstream In this method {code:java} private boolean hasEnterPureBinlogPhase(TableId tableId, BinlogOffset position) { if (pureBinlogPhaseTables.contains(tableId)) { return true; } // the existed tables those have finished snapshot reading if (maxSplitHighWatermarkMap.containsKey(tableId) && position.isAtOrAfter(maxSplitHighWatermarkMap.get(tableId))) { pureBinlogPhaseTables.add(tableId); return true; } } {code} *maxSplitHighWatermarkMap.get(tableId)* obtains the HighWatermark data without ts_sec variable, and the default value is 0 *position.isAtOrAfter(maxSplitHighWatermarkMap.get(tableId))* So this expression is judged as true *Data A continues to be sent downstream, and the data is repeated* -- This message was sent by Atlassian Jira (v8.20.10#820010)
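The race described in FLINK-35600 boils down to one comparison, which the standalone sketch below isolates. The `BinlogOffset` here is a stand-in for the real Flink CDC class, reduced to the single `ts_sec` field that matters: because the stored high watermark omits `ts_sec` (defaulting to 0), any replayed binlog position compares as at-or-after it, so data A passes `shouldEmit` and is sent downstream a second time.

```java
// Stand-in for the comparison at the heart of FLINK-35600; NOT the real
// Flink CDC BinlogOffset class. Only ts_sec is modeled, because that is
// the field the stored high watermark loses (defaulting to 0).
class OffsetSketch {
    static final class BinlogOffset {
        final long tsSec;
        BinlogOffset(long tsSec) { this.tsSec = tsSec; }
        // Mirrors position.isAtOrAfter(watermark), reduced to the ts_sec axis.
        boolean isAtOrAfter(BinlogOffset other) { return this.tsSec >= other.tsSec; }
    }

    // Mirrors hasEnterPureBinlogPhase(): a record is emitted once its
    // position is at or after the table's recorded high watermark.
    static boolean shouldEmit(BinlogOffset position, BinlogOffset highWatermark) {
        return position.isAtOrAfter(highWatermark);
    }

    public static void main(String[] args) {
        BinlogOffset storedWatermark = new BinlogOffset(0);        // ts_sec missing -> 0
        BinlogOffset replayedA = new BinlogOffset(1_718_000_000L); // data A replayed from binlog
        System.out.println(shouldEmit(replayedA, storedWatermark)); // true -> A emitted twice
    }
}
```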
[jira] [Commented] (FLINK-35564) The topic cannot be distributed on subtask when calculatePartitionOwner returns -1
[ https://issues.apache.org/jira/browse/FLINK-35564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17854929#comment-17854929 ] Yufan Sheng commented on FLINK-35564: - Hi, thanks for mentioning this. I think this bug has been fixed in the latest main branch, but we may never backport the fix to the 1.17 branch; I think you can upgrade to the latest connector to fix this issue. https://github.com/apache/flink-connector-pulsar/blob/b37a8b32f30683664ff25888d403c4de414043e1/flink-connector-pulsar/src/main/java/org/apache/flink/connector/pulsar/source/enumerator/assigner/SplitAssignerImpl.java#L173
> The topic cannot be distributed on a subtask when calculatePartitionOwner
> returns -1
> --------------------------------------------------------------------------
>
> Key: FLINK-35564
> URL: https://issues.apache.org/jira/browse/FLINK-35564
> Project: Flink
> Issue Type: Bug
> Components: Connectors / Pulsar
> Affects Versions: 1.17.2
> Reporter: 中国无锡周良
> Priority: Major
>
> The topic cannot be distributed on a subtask when calculatePartitionOwner
> returns -1:
> {code:java}
> @VisibleForTesting
> static int calculatePartitionOwner(String topic, int partitionId, int parallelism) {
>     int startIndex = ((topic.hashCode() * 31) & 0x7FFFFFFF) % parallelism;
>     /*
>      * Here, the assumption is that the ids of Pulsar partitions are always ascending
>      * starting from 0. Therefore, the id can be used directly as the offset clockwise
>      * from the start index.
>      */
>     return (startIndex + partitionId) % parallelism;
> }
> {code}
> Here startIndex is a non-negative number calculated from topic.hashCode(), in the range [0, parallelism-1].
> For a non-partitioned topic, partitionId is NON_PARTITION_ID = -1, but in
> {code:java}
> @Override
> public Optional<SplitsAssignment<PulsarPartitionSplit>> createAssignment(List<Integer> readers) {
>     if (pendingPartitionSplits.isEmpty() || readers.isEmpty()) {
>         return Optional.empty();
>     }
>     Map<Integer, List<PulsarPartitionSplit>> assignMap = new HashMap<>(pendingPartitionSplits.size());
>     for (Integer reader : readers) {
>         Set<PulsarPartitionSplit> splits = pendingPartitionSplits.remove(reader);
>         if (splits != null && !splits.isEmpty()) {
>             assignMap.put(reader, new ArrayList<>(splits));
>         }
>     }
>     if (assignMap.isEmpty()) {
>         return Optional.empty();
>     } else {
>         return Optional.of(new SplitsAssignment<>(assignMap));
>     }
> }
> {code}
> pendingPartitionSplits can never contain an owner index of -1, right? For such a topic, the calculation above returns -1, so pendingPartitionSplits.remove(reader) is always null and the topic is never assigned to a subtask. I simulated this topic locally and found that its messages were indeed not processed.
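The arithmetic behind the report can be checked in isolation: Java's `%` operator keeps the sign of its left operand, so once `partitionId` is the non-partitioned marker -1 and the topic hashes to a `startIndex` of 0, the owner comes out as -1, an index no reader subtask ever matches. The snippet below is a standalone illustration, not a copy of the connector; the mask is written as `0x7FFFFFFF`, the usual non-negative-hash idiom.

```java
// Standalone illustration of why FLINK-35564 yields an unassignable owner.
class OwnerCalc {
    static final int NON_PARTITION_ID = -1; // marker for non-partitioned topics

    static int calculatePartitionOwner(String topic, int partitionId, int parallelism) {
        int startIndex = ((topic.hashCode() * 31) & 0x7FFFFFFF) % parallelism;
        // Java's % keeps the dividend's sign, so startIndex + (-1) can yield -1,
        // an owner index no reader subtask ever matches.
        return (startIndex + partitionId) % parallelism;
    }

    public static void main(String[] args) {
        int parallelism = 4;
        // "t".hashCode() is 116, 116 * 31 = 3596, and 3596 % 4 = 0, so startIndex is 0.
        System.out.println(calculatePartitionOwner("t", NON_PARTITION_ID, parallelism)); // -1
        System.out.println(calculatePartitionOwner("t", 0, parallelism));                // 0
    }
}
```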
Re: [PR] [FLINK-35597][test] Fix unstable LocatableSplitAssignerTest#testConcurrentSplitAssignmentForMultipleHosts [flink]
liyubin117 commented on PR #24936: URL: https://github.com/apache/flink/pull/24936#issuecomment-2167237003 @1996fanrui @GOODBOY008 Thanks for your attention. I found that the CI e2e tests have been stuck for hours. As the code is about to be frozen, could you help cancel some tasks? For example, this task has already failed but is still running the e2e test. https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=60262&view=results
[jira] [Updated] (FLINK-35593) Apache Kubernetes Operator Docker image does not contain Apache LICENSE
[ https://issues.apache.org/jira/browse/FLINK-35593?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated FLINK-35593: --- Labels: pull-request-available (was: ) > Apache Kubernetes Operator Docker image does not contain Apache LICENSE > --- > > Key: FLINK-35593 > URL: https://issues.apache.org/jira/browse/FLINK-35593 > Project: Flink > Issue Type: Improvement > Components: Kubernetes Operator >Affects Versions: 1.8.0 >Reporter: Anupam Aggarwal >Assignee: Anupam Aggarwal >Priority: Minor > Labels: pull-request-available > > The Apache > [LICENSE|https://github.com/apache/flink-kubernetes-operator/blob/main/LICENSE] > is not bundled along with the Apache Flink Kubernetes Operator docker image. > {code:java} > ❯ docker run -it apache/flink-kubernetes-operator:1.8.0 bash > flink@cc372b31d067:/flink-kubernetes-operator$ ls -latr > total 104732 > -rw-r--r-- 1 flink flink 40962 Mar 14 15:19 > flink-kubernetes-standalone-1.8.0.jar > -rw-r--r-- 1 flink flink 107055161 Mar 14 15:21 > flink-kubernetes-operator-1.8.0-shaded.jar > -rw-r--r-- 1 flink flink 62402 Mar 14 15:21 > flink-kubernetes-webhook-1.8.0-shaded.jar > -rw-r--r-- 1 flink flink 63740 Mar 14 15:21 NOTICE > drwxr-xr-x 2 flink flink 4096 Mar 14 15:21 licenses > drwxr-xr-x 1 root root 4096 Mar 14 15:21 . > drwxr-xr-x 1 root root 4096 Jun 13 12:49 .. 
{code} > The Apache Flink docker image by contrast bundles the license (LICENSE) > {code:java} > ❯ docker run -it apache/flink:latest bash > sed: can't read /config.yaml: No such file or directory > lflink@24c2dff32a45:~$ ls -latr > total 224 > -rw-r--r-- 1 flink flink 1309 Mar 4 15:34 README.txt > drwxrwxr-x 2 flink flink 4096 Mar 4 15:34 log > -rw-r--r-- 1 flink flink 11357 Mar 4 15:34 LICENSE > drwxrwxr-x 2 flink flink 4096 Mar 7 05:49 lib > drwxrwxr-x 6 flink flink 4096 Mar 7 05:49 examples > drwxrwxr-x 1 flink flink 4096 Mar 7 05:49 conf > drwxrwxr-x 2 flink flink 4096 Mar 7 05:49 bin > drwxrwxr-x 10 flink flink 4096 Mar 7 05:49 plugins > drwxrwxr-x 3 flink flink 4096 Mar 7 05:49 opt > -rw-rw-r-- 1 flink flink 156327 Mar 7 05:49 NOTICE > drwxrwxr-x 2 flink flink 4096 Mar 7 05:49 licenses > drwxr-xr-x 1 root root 4096 Mar 19 05:01 .. > drwxr-xr-x 1 flink flink 4096 Mar 19 05:02 . > flink@24c2dff32a45:~$ {code} > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[PR] [FLINK-35593][Kubernetes Operator] Add Apache 2 License to docker image [flink-kubernetes-operator]
anupamaggarwal opened a new pull request, #839: URL: https://github.com/apache/flink-kubernetes-operator/pull/839 ## What is the purpose of the change Adds Apache 2 LICENSE to Apache Kubernetes Operator Docker image ## Brief change log - Adds LICENSE file to docker stage image ## Verifying this change - Run `mvn clean verify -T1C -DskipTests` - `docker build . -t apache/flink-kubernetes-operator:latest` - From folder where apache-kubernetes-operator code is checked out, verified checksum ``` ❯ docker run -it apache/flink-kubernetes-operator:latest cksum LICENSE 3425401509 11357 LICENSE ❯ cksum LICENSE 3425401509 11357 LICENSE ``` ## Does this pull request potentially affect one of the following parts: - Dependencies (does it add or upgrade a dependency): (no) - The public API, i.e., is any changes to the `CustomResourceDescriptors`: (no) - Core observer or reconciler logic that is regularly executed: (no) ## Documentation - Does this pull request introduce a new feature? (no) - If yes, how is the feature documented? (not applicable) -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
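The change described above amounts to copying the license file into the image at build time. A hedged sketch of the relevant Dockerfile lines follows; the base image, stage layout, and target directory are illustrative, and the operator's actual Dockerfile may differ:

```dockerfile
# Hypothetical excerpt; only the COPY of LICENSE reflects the described change.
FROM eclipse-temurin:17-jre
WORKDIR /flink-kubernetes-operator
# Ship LICENSE next to the NOTICE file and licenses/ directory the image
# already contains, matching the layout of the Apache Flink image.
COPY LICENSE NOTICE ./
COPY licenses/ ./licenses/
```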
Re: [PR] [FLINK-34918][table] Introduce comment for CatalogStore & Support enhanced `CREATE CATALOG` syntax [flink]
liyubin117 commented on code in PR #24934: URL: https://github.com/apache/flink/pull/24934#discussion_r1639245472 ## flink-table/flink-table-api-java/src/test/java/org/apache/flink/table/catalog/CatalogManagerTest.java: ## @@ -367,14 +368,28 @@ void testCatalogStore() throws Exception { catalogManager.createCatalog("cat1", CatalogDescriptor.of("cat1", configuration)); catalogManager.createCatalog("cat2", CatalogDescriptor.of("cat2", configuration)); catalogManager.createCatalog("cat3", CatalogDescriptor.of("cat3", configuration)); +catalogManager.createCatalog( Review Comment: done -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] [FLINK-34918][table] Introduce comment for CatalogStore & Support enhanced `CREATE CATALOG` syntax [flink]
liyubin117 commented on code in PR #24934: URL: https://github.com/apache/flink/pull/24934#discussion_r1639245307 ## flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/catalog/CatalogManager.java: ## @@ -295,31 +295,42 @@ public DataTypeFactory getDataTypeFactory() { * * @param catalogName the given catalog name under which to create the given catalog * @param catalogDescriptor catalog descriptor for creating catalog + * @param ignoreIfExists if false exception will be thrown if a catalog exists. * @throws CatalogException If the catalog already exists in the catalog store or initialized * catalogs, or if an error occurs while creating the catalog or storing the {@link * CatalogDescriptor} */ -public void createCatalog(String catalogName, CatalogDescriptor catalogDescriptor) +public void createCatalog( +String catalogName, CatalogDescriptor catalogDescriptor, boolean ignoreIfExists) throws CatalogException { checkArgument( !StringUtils.isNullOrWhitespaceOnly(catalogName), "Catalog name cannot be null or empty."); checkNotNull(catalogDescriptor, "Catalog descriptor cannot be null"); if (catalogStoreHolder.catalogStore().contains(catalogName)) { -throw new CatalogException( -format("Catalog %s already exists in catalog store.", catalogName)); +if (!ignoreIfExists) { Review Comment: good idea ## flink-core/src/test/java/org/apache/flink/core/io/LocatableSplitAssignerTest.java: ## @@ -428,7 +428,7 @@ void testConcurrentSplitAssignmentForMultipleHosts() throws InterruptedException assertThat(ia.getNextInputSplit("testhost", 0)).isNull(); // at least one fraction of hosts needs be local, no matter how bad the thread races -assertThat(ia.getNumberOfRemoteAssignments()) +assertThat(ia.getNumberOfLocalAssignments()) Review Comment: done -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. 
To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] [FLINK-34918][table] Introduce comment for CatalogStore & Support enhanced `CREATE CATALOG` syntax [flink]
liyubin117 commented on code in PR #24934: URL: https://github.com/apache/flink/pull/24934#discussion_r1639244971 ## flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/operations/converters/SqlCreateCatalogConverter.java: ## @@ -41,6 +43,10 @@ public Operation convertSqlNode(SqlCreateCatalog node, ConvertContext context) { ((SqlTableOption) p).getKeyString(), ((SqlTableOption) p).getValueString())); -return new CreateCatalogOperation(node.catalogName(), properties); +return new CreateCatalogOperation( +node.catalogName(), +properties, +getCatalogComment(node.getComment()), Review Comment: done ## flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/utils/OperationConverterUtils.java: ## @@ -85,6 +87,13 @@ public static List buildModifyColumnChange( } } +public static @Nullable String getCatalogComment(Optional catalogComment) { +return catalogComment +.map(SqlCharStringLiteral.class::cast) +.map(c -> c.getValueAs(NlsString.class).getValue()) +.orElse(null); +} + Review Comment: done -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] [FLINK-34918][table] Introduce comment for CatalogStore & Support enhanced `CREATE CATALOG` syntax [flink]
liyubin117 commented on code in PR #24934: URL: https://github.com/apache/flink/pull/24934#discussion_r1639244453 ## flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/operations/ddl/CreateCatalogOperation.java: ## @@ -39,10 +41,18 @@ public class CreateCatalogOperation implements CreateOperation { private final String catalogName; private final Map properties; +private final @Nullable String comment; Review Comment: done ## flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/operations/ddl/CreateCatalogOperation.java: ## @@ -53,11 +63,23 @@ public Map getProperties() { return properties; } +public @Nullable String getComment() { +return comment; +} Review Comment: done -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] [FLINK-34918][table] Introduce comment for CatalogStore & Support enhanced `CREATE CATALOG` syntax [flink]
liyubin117 commented on code in PR #24934: URL: https://github.com/apache/flink/pull/24934#discussion_r1639243865 ## flink-table/flink-table-common/src/main/java/org/apache/flink/table/catalog/CatalogDescriptor.java: ## @@ -48,18 +55,34 @@ public Configuration getConfiguration() { return configuration; } -private CatalogDescriptor(String catalogName, Configuration configuration) { +public Optional getComment() { +return Optional.ofNullable(comment); +} + +public CatalogDescriptor setComment(String comment) { +return new CatalogDescriptor(catalogName, configuration, comment); +} Review Comment: done ## flink-table/flink-sql-parser/src/test/java/org/apache/flink/sql/parser/FlinkSqlParserImplTest.java: ## @@ -134,6 +134,28 @@ void testCreateCatalog() { + " 'key1' = 'value1',\n" + " 'key2' = 'value2'\n" + ")"); +sql("create catalog c1 comment 'hello'\n" ++ " WITH (\n" ++ " 'key1'='value1',\n" ++ " 'key2'='value2'\n" ++ " )\n") +.ok( +"CREATE CATALOG `C1`\n" ++ "COMMENT 'hello' WITH (\n" ++ " 'key1' = 'value1',\n" ++ " 'key2' = 'value2'\n" ++ ")"); +sql("create catalog if not exists c1 comment 'hello'\n" ++ " WITH (\n" ++ " 'key1'='value1',\n" ++ " 'key2'='value2'\n" ++ " )\n") +.ok( +"CREATE CATALOG IF NOT EXISTS `C1`\n" ++ "COMMENT 'hello' WITH (\n" ++ " 'key1' = 'value1',\n" ++ " 'key2' = 'value2'\n" ++ ")"); Review Comment: done -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] [FLINK-34918][table] Introduce comment for CatalogStore & Support enhanced `CREATE CATALOG` syntax [flink]
liyubin117 commented on code in PR #24934: URL: https://github.com/apache/flink/pull/24934#discussion_r1639240721 ## flink-table/flink-sql-parser/src/main/java/org/apache/flink/sql/parser/ddl/SqlCreateCatalog.java: ## @@ -45,11 +48,18 @@ public class SqlCreateCatalog extends SqlCreate { private final SqlNodeList propertyList; +private final @Nullable SqlNode comment; Review Comment: StringLiteral() returns SqlNode, so we should keep it as `SqlNode` ![baf72e629b5f98822abd464ce74da00](https://github.com/apache/flink/assets/7313035/ff49c24e-ccad-42d7-b34b-aa100965af98)
Re: [PR] Out of use [flink-cdc]
Jiabao-Sun closed pull request #3318: Out of use URL: https://github.com/apache/flink-cdc/pull/3318
[jira] [Resolved] (FLINK-35121) CDC pipeline connector should verify requiredOptions and optionalOptions
[ https://issues.apache.org/jira/browse/FLINK-35121?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jiabao Sun resolved FLINK-35121. Assignee: yux Resolution: Implemented Implemented by flink-cdc master: 2bd2e4ce24ec0cc6a11129e3e3b32af6a09dd977 > CDC pipeline connector should verify requiredOptions and optionalOptions > > > Key: FLINK-35121 > URL: https://issues.apache.org/jira/browse/FLINK-35121 > Project: Flink > Issue Type: Improvement > Components: Flink CDC >Affects Versions: cdc-3.1.0 >Reporter: Hongshun Wang >Assignee: yux >Priority: Major > Labels: pull-request-available > Fix For: cdc-3.2.0 > > > At present, though we provide > org.apache.flink.cdc.common.factories.Factory#requiredOptions and > org.apache.flink.cdc.common.factories.Factory#optionalOptions, but both are > not used anywhere. This means not verifying requiredOptions and > optionalOptions. > Thus, like what DynamicTableFactory does, provide > FactoryHelper to help verify requiredOptions and optionalOptions. > -- This message was sent by Atlassian Jira (v8.20.10#820010)
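The validation gap described in FLINK-35121 — `requiredOptions` and `optionalOptions` are declared but never checked — can be sketched with a small stand-alone example. This is a hypothetical model of what a `FactoryHelper`-style check does; the class and method names below are illustrative assumptions, not the actual flink-cdc API.

```java
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

/** Toy model of a FactoryHelper-style option check (names are illustrative, not the flink-cdc API). */
public class FactoryHelperSketch {

    /** Fails if a required option is missing or an unknown option is supplied. */
    static void validate(Set<String> required, Set<String> optional, Map<String, String> config) {
        for (String key : required) {
            if (!config.containsKey(key)) {
                throw new IllegalArgumentException("Missing required option: " + key);
            }
        }
        Set<String> allowed = new HashSet<>(required);
        allowed.addAll(optional);
        for (String key : config.keySet()) {
            if (!allowed.contains(key)) {
                throw new IllegalArgumentException("Unsupported option: " + key);
            }
        }
    }

    public static void main(String[] args) {
        Set<String> required = Set.of("hostname", "port");
        Set<String> optional = Set.of("server-time-zone");

        // A complete configuration passes silently.
        validate(required, optional, Map.of("hostname", "localhost", "port", "3306"));

        boolean rejected = false;
        try {
            // "port" is missing and "typo-option" is unknown, so this must fail.
            validate(required, optional, Map.of("hostname", "localhost", "typo-option", "x"));
        } catch (IllegalArgumentException e) {
            rejected = true;
        }
        System.out.println(rejected ? "rejected" : "accepted");
    }
}
```

The same two-pass shape (check required keys first, then reject unknown keys) is what `DynamicTableFactory` validation does in Flink's table stack, which is the precedent the issue cites.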
Re: [PR] [FLINK-35121][pipeline-connector][cdc-base] CDC pipeline connector provide ability to verify requiredOptions and optionalOptions [flink-cdc]
Jiabao-Sun commented on PR #3412: URL: https://github.com/apache/flink-cdc/pull/3412#issuecomment-2167182112 Closed by #3382
Re: [PR] [FLINK-35121][pipeline-connector][cdc-base] CDC pipeline connector provide ability to verify requiredOptions and optionalOptions [flink-cdc]
Jiabao-Sun closed pull request #3412: [FLINK-35121][pipeline-connector][cdc-base] CDC pipeline connector provide ability to verify requiredOptions and optionalOptions URL: https://github.com/apache/flink-cdc/pull/3412
Re: [PR] [FLINK-35121][common] Adds validation for pipeline definition options [flink-cdc]
Jiabao-Sun merged PR #3382: URL: https://github.com/apache/flink-cdc/pull/3382
[jira] [Updated] (FLINK-35599) Add JDBC Sink Plugin to Flink-CDC-Pipeline
[ https://issues.apache.org/jira/browse/FLINK-35599?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ZhengJunZhou updated FLINK-35599: - Component/s: Flink CDC Affects Version/s: cdc-3.2.0 Description: TODO Issue Type: New Feature (was: Bug) Priority: Minor (was: Major) Summary: Add JDBC Sink Plugin to Flink-CDC-Pipeline (was: test) > Add JDBC Sink Plugin to Flink-CDC-Pipeline > -- > > Key: FLINK-35599 > URL: https://issues.apache.org/jira/browse/FLINK-35599 > Project: Flink > Issue Type: New Feature > Components: Flink CDC >Affects Versions: cdc-3.2.0 >Reporter: ZhengJunZhou >Priority: Minor > > TODO -- This message was sent by Atlassian Jira (v8.20.10#820010)
Re: [PR] [FLINK-35598][sql-parser] Fix error comparison type ExtendedSqlRowTypeNameSpec#equalsDeep [flink]
wzx140 commented on PR #24937: URL: https://github.com/apache/flink/pull/24937#issuecomment-2167176948 @1996fanrui Could you please review it? Thanks a lot!
[jira] [Created] (FLINK-35599) test
ZhengJunZhou created FLINK-35599: Summary: test Key: FLINK-35599 URL: https://issues.apache.org/jira/browse/FLINK-35599 Project: Flink Issue Type: Bug Reporter: ZhengJunZhou
[jira] [Updated] (FLINK-35598) Fix error comparison type ExtendedSqlRowTypeNameSpec#equalsDeep
[ https://issues.apache.org/jira/browse/FLINK-35598?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated FLINK-35598: --- Labels: pull-request-available pull_request_available (was: pull_request_available) > Fix error comparison type ExtendedSqlRowTypeNameSpec#equalsDeep > --- > > Key: FLINK-35598 > URL: https://issues.apache.org/jira/browse/FLINK-35598 > Project: Flink > Issue Type: Bug > Components: Table SQL / Legacy Planner >Affects Versions: 1.20.0 >Reporter: Frank Wong >Priority: Major > Labels: pull-request-available, pull_request_available > > {code:java} > ExtendedSqlRowTypeNameSpec nameSpec = new ExtendedSqlRowTypeNameSpec( > SqlParserPos.ZERO, > Arrays.asList( > new SqlIdentifier("column1", SqlParserPos.ZERO), > new SqlIdentifier("column2", SqlParserPos.ZERO)), > Arrays.asList( > new SqlDataTypeSpec(new SqlBasicTypeNameSpec( > SqlTypeName.INTEGER, > SqlParserPos.ZERO), SqlParserPos.ZERO), > new SqlDataTypeSpec(new SqlBasicTypeNameSpec( > SqlTypeName.INTEGER, > SqlParserPos.ZERO), SqlParserPos.ZERO)), > Collections.emptyList(), true > ); > // Throw exception > nameSpec.equalsDeep(nameSpec, Litmus.THROW);{code} -- This message was sent by Atlassian Jira (v8.20.10#820010)
Re: [PR] [FLINK-35598] Fix error comparison type ExtendedSqlRowTypeNameSpec#equalsDeep [flink]
flinkbot commented on PR #24937: URL: https://github.com/apache/flink/pull/24937#issuecomment-2167173883 ## CI report: * 0438cd927e430131c07b0555265696e34e1606a0 UNKNOWN Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run azure` re-run the last Azure build
[jira] [Updated] (FLINK-35598) Fix error comparison type ExtendedSqlRowTypeNameSpec#equalsDeep
[ https://issues.apache.org/jira/browse/FLINK-35598?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated FLINK-35598: --- Labels: pull-request-available (was: ) > Fix error comparison type ExtendedSqlRowTypeNameSpec#equalsDeep > --- > > Key: FLINK-35598 > URL: https://issues.apache.org/jira/browse/FLINK-35598 > Project: Flink > Issue Type: Bug > Components: Table SQL / Legacy Planner >Affects Versions: 1.20.0 >Reporter: Frank Wong >Priority: Major > Labels: pull-request-available > > {code:java} > ExtendedSqlRowTypeNameSpec nameSpec = new ExtendedSqlRowTypeNameSpec( > SqlParserPos.ZERO, > Arrays.asList( > new SqlIdentifier("column1", SqlParserPos.ZERO), > new SqlIdentifier("column2", SqlParserPos.ZERO)), > Arrays.asList( > new SqlDataTypeSpec(new SqlBasicTypeNameSpec( > SqlTypeName.INTEGER, > SqlParserPos.ZERO), SqlParserPos.ZERO), > new SqlDataTypeSpec(new SqlBasicTypeNameSpec( > SqlTypeName.INTEGER, > SqlParserPos.ZERO), SqlParserPos.ZERO)), > Collections.emptyList(), true > ); > // Throw exception > nameSpec.equalsDeep(nameSpec, Litmus.THROW);{code} -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (FLINK-35598) Fix error comparison type ExtendedSqlRowTypeNameSpec#equalsDeep
[ https://issues.apache.org/jira/browse/FLINK-35598?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Frank Wong updated FLINK-35598: --- Labels: pull_request_available (was: pull-request-available) > Fix error comparison type ExtendedSqlRowTypeNameSpec#equalsDeep > --- > > Key: FLINK-35598 > URL: https://issues.apache.org/jira/browse/FLINK-35598 > Project: Flink > Issue Type: Bug > Components: Table SQL / Legacy Planner >Affects Versions: 1.20.0 >Reporter: Frank Wong >Priority: Major > Labels: pull_request_available > > {code:java} > ExtendedSqlRowTypeNameSpec nameSpec = new ExtendedSqlRowTypeNameSpec( > SqlParserPos.ZERO, > Arrays.asList( > new SqlIdentifier("column1", SqlParserPos.ZERO), > new SqlIdentifier("column2", SqlParserPos.ZERO)), > Arrays.asList( > new SqlDataTypeSpec(new SqlBasicTypeNameSpec( > SqlTypeName.INTEGER, > SqlParserPos.ZERO), SqlParserPos.ZERO), > new SqlDataTypeSpec(new SqlBasicTypeNameSpec( > SqlTypeName.INTEGER, > SqlParserPos.ZERO), SqlParserPos.ZERO)), > Collections.emptyList(), true > ); > // Throw exception > nameSpec.equalsDeep(nameSpec, Litmus.THROW);{code} -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (FLINK-35598) Fix error comparison type ExtendedSqlRowTypeNameSpec#equalsDeep
Frank Wong created FLINK-35598: -- Summary: Fix error comparison type ExtendedSqlRowTypeNameSpec#equalsDeep Key: FLINK-35598 URL: https://issues.apache.org/jira/browse/FLINK-35598 Project: Flink Issue Type: Bug Components: Table SQL / Legacy Planner Affects Versions: 1.20.0 Reporter: Frank Wong {code:java} ExtendedSqlRowTypeNameSpec nameSpec = new ExtendedSqlRowTypeNameSpec( SqlParserPos.ZERO, Arrays.asList( new SqlIdentifier("column1", SqlParserPos.ZERO), new SqlIdentifier("column2", SqlParserPos.ZERO)), Arrays.asList( new SqlDataTypeSpec(new SqlBasicTypeNameSpec( SqlTypeName.INTEGER, SqlParserPos.ZERO), SqlParserPos.ZERO), new SqlDataTypeSpec(new SqlBasicTypeNameSpec( SqlTypeName.INTEGER, SqlParserPos.ZERO), SqlParserPos.ZERO)), Collections.emptyList(), true ); // Throw exception nameSpec.equalsDeep(nameSpec, Litmus.THROW);{code} -- This message was sent by Atlassian Jira (v8.20.10#820010)
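The repro in the issue shows `equalsDeep` failing even when a spec is compared with itself, which points at one member being compared against the wrong counterpart. The toy classes below are a hypothetical model of that bug class and its fix — they are not the actual Calcite/Flink `ExtendedSqlRowTypeNameSpec` code:

```java
import java.util.List;

/** Toy model of the equalsDeep bug: comparing the wrong member makes even self-comparison fail. */
public class EqualsDeepSketch {
    final List<String> fieldNames;
    final List<String> fieldTypes;

    EqualsDeepSketch(List<String> fieldNames, List<String> fieldTypes) {
        this.fieldNames = fieldNames;
        this.fieldTypes = fieldTypes;
    }

    /** Buggy: the second clause compares this object's fieldNames against that.fieldTypes. */
    boolean equalsDeepBuggy(EqualsDeepSketch that) {
        return fieldNames.equals(that.fieldNames) && fieldNames.equals(that.fieldTypes);
    }

    /** Fixed: each member is compared against its own counterpart. */
    boolean equalsDeepFixed(EqualsDeepSketch that) {
        return fieldNames.equals(that.fieldNames) && fieldTypes.equals(that.fieldTypes);
    }

    public static void main(String[] args) {
        EqualsDeepSketch spec =
                new EqualsDeepSketch(List.of("column1", "column2"), List.of("INTEGER", "INTEGER"));
        // Deep equality must be reflexive; the buggy variant violates that, mirroring the
        // Litmus.THROW failure reported in the issue.
        System.out.println(spec.equalsDeepBuggy(spec) + " " + spec.equalsDeepFixed(spec));
    }
}
```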
Re: [PR] [FLINK-35237] Allow Sink to Choose HashFunction in PrePartitionOperator [flink-cdc]
lvyanquan commented on code in PR #3414: URL: https://github.com/apache/flink-cdc/pull/3414#discussion_r1639192510 ## flink-cdc-common/src/main/java/org/apache/flink/cdc/common/sink/HashFunction.java: ## @@ -0,0 +1,28 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.flink.cdc.common.sink; + +import org.apache.flink.cdc.common.event.DataChangeEvent; + +import java.util.function.Function; + +/** use for PrePartitionOperator when calculating hash code of primary key. */ Review Comment: {@link PrePartitionOperator} ## flink-cdc-common/src/main/java/org/apache/flink/cdc/common/sink/HashFunctionProvider.java: ## @@ -0,0 +1,27 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. 
You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.flink.cdc.common.sink; + +import org.apache.flink.cdc.common.schema.Schema; + +import java.io.Serializable; + +/** use for PrePartitionOperator. */ Review Comment: Provide a {@link HashFunction} to help {@link PrePartitionOperator} shuffle a DataChangeEvent to its designated subtask. This is usually beneficial for load balancing; when writing to different partitions/buckets in a {@link DataSink}, add a custom implementation to further improve efficiency.
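The interfaces discussed above let a sink plug in its own partitioning logic. The sketch below is a minimal stand-alone illustration of a primary-key hash for the pre-partition step; `ChangeEvent` and `byPrimaryKey` are toy stand-ins, not the real flink-cdc `HashFunction`/`HashFunctionProvider`, which operate on `DataChangeEvent` and `Schema`.

```java
import java.util.List;
import java.util.Objects;
import java.util.function.Function;

/** Toy sketch of a HashFunction routing change events to subtasks by primary key. */
public class HashFunctionSketch {

    /** Simplified stand-in for a DataChangeEvent: just its primary-key values. */
    record ChangeEvent(List<Object> primaryKey) {}

    /** Mirrors the HashFunction idea: map an event to a hash code. */
    static Function<ChangeEvent, Integer> byPrimaryKey() {
        return event -> Objects.hash(event.primaryKey().toArray());
    }

    public static void main(String[] args) {
        int parallelism = 4;
        Function<ChangeEvent, Integer> hash = byPrimaryKey();

        ChangeEvent a = new ChangeEvent(List.of(42, "user"));
        ChangeEvent b = new ChangeEvent(List.of(42, "user"));

        int subtaskA = Math.floorMod(hash.apply(a), parallelism);
        int subtaskB = Math.floorMod(hash.apply(b), parallelism);
        // Events with equal primary keys always land on the same subtask,
        // preserving per-key ordering across the shuffle.
        System.out.println(subtaskA == subtaskB);
    }
}
```

A sink that writes to bucketed tables could replace the key-based hash with its own bucket computation, which is exactly the customization point this PR introduces.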
Re: [PR] [FLINK-35543][HIVE] Upgrade Hive 2.3 connector to version 2.3.10 [flink]
pan3793 commented on PR #24905: URL: https://github.com/apache/flink/pull/24905#issuecomment-2167155520 thank you, @1996fanrui
Re: [PR] [FLINK-32091][checkpoint] Add file size metrics for file-merging [flink]
fredia commented on PR #24922: URL: https://github.com/apache/flink/pull/24922#issuecomment-2167153534 > Thanks for the PR! Overall LGTM. How about adding some docs under `docs/ops/metrics#checkpointing`? @Zakelly Thanks for the review, added into `docs/ops/metrics#checkpointing` and `docs/zh/ops/metrics#checkpointing`.
[jira] [Commented] (FLINK-35543) Upgrade Hive 2.3 connector to version 2.3.10
[ https://issues.apache.org/jira/browse/FLINK-35543?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17854919#comment-17854919 ] Rui Fan commented on FLINK-35543: - Merged to master(1.20.0) via : 6c05981948ea72f79fef94282d6e8c648951092b > Upgrade Hive 2.3 connector to version 2.3.10 > > > Key: FLINK-35543 > URL: https://issues.apache.org/jira/browse/FLINK-35543 > Project: Flink > Issue Type: Improvement > Components: Connectors / Hive >Reporter: Cheng Pan >Assignee: Cheng Pan >Priority: Major > Labels: pull-request-available > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Resolved] (FLINK-35543) Upgrade Hive 2.3 connector to version 2.3.10
[ https://issues.apache.org/jira/browse/FLINK-35543?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Rui Fan resolved FLINK-35543. - Fix Version/s: 1.20.0 Resolution: Fixed > Upgrade Hive 2.3 connector to version 2.3.10 > > > Key: FLINK-35543 > URL: https://issues.apache.org/jira/browse/FLINK-35543 > Project: Flink > Issue Type: Improvement > Components: Connectors / Hive >Reporter: Cheng Pan >Assignee: Cheng Pan >Priority: Major > Labels: pull-request-available > Fix For: 1.20.0 > > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Assigned] (FLINK-35543) Upgrade Hive 2.3 connector to version 2.3.10
[ https://issues.apache.org/jira/browse/FLINK-35543?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Rui Fan reassigned FLINK-35543: --- Assignee: Cheng Pan > Upgrade Hive 2.3 connector to version 2.3.10 > > > Key: FLINK-35543 > URL: https://issues.apache.org/jira/browse/FLINK-35543 > Project: Flink > Issue Type: Improvement > Components: Connectors / Hive >Reporter: Cheng Pan >Assignee: Cheng Pan >Priority: Major > Labels: pull-request-available > -- This message was sent by Atlassian Jira (v8.20.10#820010)
Re: [PR] [FLINK-35543][HIVE] Upgrade Hive 2.3 connector to version 2.3.10 [flink]
1996fanrui merged PR #24905: URL: https://github.com/apache/flink/pull/24905
Re: [PR] [FLINK-35237] Allow Sink to Choose HashFunction in PrePartitionOperator [flink-cdc]
lvyanquan commented on code in PR #3414: URL: https://github.com/apache/flink-cdc/pull/3414#discussion_r1639176999 ## flink-cdc-common/src/main/java/org/apache/flink/cdc/common/sink/HashFunction.java: ## @@ -0,0 +1,28 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.flink.cdc.common.sink; + +import org.apache.flink.cdc.common.event.DataChangeEvent; + +import java.util.function.Function; + +/** use for PrePartitionOperator when calculating hash code of primary key. */ Review Comment: Used to calculate hash code of given DataChangeEvent by primary key, which will help {@link PrePartitionOperator} to shuffle DataChangeEvent to designated subtask. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] [FLINK-34918][table] Introduce comment for CatalogStore & Support enhanced `CREATE CATALOG` syntax [flink]
LadyForest commented on code in PR #24934: URL: https://github.com/apache/flink/pull/24934#discussion_r1639160703 ## flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/operations/ddl/CreateCatalogOperation.java: ## @@ -39,10 +41,18 @@ public class CreateCatalogOperation implements CreateOperation { private final String catalogName; private final Map properties; +private final @Nullable String comment; Review Comment: please keep a consistent annotation style ```suggestion @Nullable private final String comment; ``` ## flink-table/flink-sql-parser/src/main/java/org/apache/flink/sql/parser/ddl/SqlCreateCatalog.java: ## @@ -45,11 +48,18 @@ public class SqlCreateCatalog extends SqlCreate { private final SqlNodeList propertyList; +private final @Nullable SqlNode comment; Review Comment: ```suggestion @Nullable private final SqlCharStringLiteral comment; ``` ## flink-core/src/test/java/org/apache/flink/core/io/LocatableSplitAssignerTest.java: ## @@ -428,7 +428,7 @@ void testConcurrentSplitAssignmentForMultipleHosts() throws InterruptedException assertThat(ia.getNextInputSplit("testhost", 0)).isNull(); // at least one fraction of hosts needs be local, no matter how bad the thread races -assertThat(ia.getNumberOfRemoteAssignments()) +assertThat(ia.getNumberOfLocalAssignments()) Review Comment: Do not include commits that don't belong to this PR. 
## flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/utils/OperationConverterUtils.java: ## @@ -85,6 +87,13 @@ public static List buildModifyColumnChange( } } +public static @Nullable String getCatalogComment(Optional catalogComment) { +return catalogComment +.map(SqlCharStringLiteral.class::cast) +.map(c -> c.getValueAs(NlsString.class).getValue()) +.orElse(null); +} + Review Comment: it can be inlined, so remove this ## flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/operations/ddl/CreateCatalogOperation.java: ## @@ -53,11 +63,23 @@ public Map getProperties() { return properties; } +public @Nullable String getComment() { +return comment; +} Review Comment: Remove this getter if there is no use. ## flink-table/flink-sql-parser/src/main/java/org/apache/flink/sql/parser/ddl/SqlCreateCatalog.java: ## @@ -70,11 +80,28 @@ public SqlNodeList getPropertyList() { return propertyList; } +public Optional getComment() { Review Comment: ```suggestion public Optional getComment() { ``` ## flink-table/flink-sql-parser/src/main/java/org/apache/flink/sql/parser/ddl/SqlCreateCatalog.java: ## @@ -45,11 +48,18 @@ public class SqlCreateCatalog extends SqlCreate { private final SqlNodeList propertyList; +private final @Nullable SqlNode comment; + public SqlCreateCatalog( -SqlParserPos position, SqlIdentifier catalogName, SqlNodeList propertyList) { -super(OPERATOR, position, false, false); +SqlParserPos position, +SqlIdentifier catalogName, +SqlNodeList propertyList, +@Nullable SqlNode comment, Review Comment: ```suggestion @Nullable SqlCharStringLiteral comment, ``` ## flink-table/flink-sql-parser/src/test/java/org/apache/flink/sql/parser/FlinkSqlParserImplTest.java: ## @@ -134,6 +134,28 @@ void testCreateCatalog() { + " 'key1' = 'value1',\n" + " 'key2' = 'value2'\n" + ")"); +sql("create catalog c1 comment 'hello'\n" ++ " WITH (\n" ++ " 'key1'='value1',\n" ++ " 'key2'='value2'\n" ++ " )\n") +.ok( +"CREATE CATALOG `C1`\n" ++ "COMMENT 
'hello' WITH (\n" ++ " 'key1' = 'value1',\n" ++ " 'key2' = 'value2'\n" ++ ")"); +sql("create catalog if not exists c1 comment 'hello'\n" ++ " WITH (\n" ++ " 'key1'='value1',\n" ++ " 'key2'='value2'\n" ++ " )\n") +.ok( +"CREATE CATALOG IF NOT EXISTS `C1`\n" ++ "COMMENT 'hello' WITH (\n" ++ " 'key1' = 'value1',\n" ++ " 'key2' = 'value2'\n" ++ ")"); Review Comment: Add one more test for `CREATE CATALOG IF NOT EXISTS` without comment ##
Re: [PR] [FLINK-35448] Translate pod templates documentation into Chinese [flink-kubernetes-operator]
caicancai commented on PR #830: URL: https://github.com/apache/flink-kubernetes-operator/pull/830#issuecomment-2167139959 @1996fanrui @RocMarshal Thank you for your review and patient reply
Re: [PR] [FLINK-35167][cdc-connector] Introduce MaxCompute pipeline DataSink [flink-cdc]
dingxin-tech commented on code in PR #3254: URL: https://github.com/apache/flink-cdc/pull/3254#discussion_r1639174241 ## flink-cdc-connect/flink-cdc-pipeline-connectors/flink-cdc-pipeline-connector-maxcompute/src/main/java/org/apache/flink/cdc/connectors/maxcompute/sink/MaxComputeEventSink.java: ## @@ -0,0 +1,82 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.flink.cdc.connectors.maxcompute.sink; + +import org.apache.flink.api.connector.sink2.Sink; +import org.apache.flink.api.connector.sink2.SinkWriter; +import org.apache.flink.cdc.common.event.Event; +import org.apache.flink.cdc.connectors.maxcompute.common.Constant; +import org.apache.flink.cdc.connectors.maxcompute.coordinator.SessionManageCoordinatedOperatorFactory; +import org.apache.flink.cdc.connectors.maxcompute.options.MaxComputeExecutionOptions; +import org.apache.flink.cdc.connectors.maxcompute.options.MaxComputeOptions; +import org.apache.flink.cdc.connectors.maxcompute.options.MaxComputeWriteOptions; +import org.apache.flink.cdc.runtime.typeutils.EventTypeInfo; +import org.apache.flink.streaming.api.connector.sink2.WithPreWriteTopology; +import org.apache.flink.streaming.api.datastream.DataStream; +import org.apache.flink.streaming.api.datastream.SingleOutputStreamOperator; + +import java.io.IOException; + +/** A {@link Sink} of {@link Event} to MaxCompute. 
*/ +public class MaxComputeEventSink implements Sink, WithPreWriteTopology { +private static final long serialVersionUID = 1L; +private final MaxComputeOptions options; +private final MaxComputeWriteOptions writeOptions; +private final MaxComputeExecutionOptions executionOptions; + +public MaxComputeEventSink( +MaxComputeOptions options, +MaxComputeWriteOptions writeOptions, +MaxComputeExecutionOptions executionOptions) { +this.options = options; +this.writeOptions = writeOptions; +this.executionOptions = executionOptions; +} + +@Override +public DataStream addPreWriteTopology(DataStream inputDataStream) { +SingleOutputStreamOperator stream = +inputDataStream.transform( +"SessionManageOperator", +new EventTypeInfo(), +new SessionManageCoordinatedOperatorFactory( +options, writeOptions, executionOptions)); +stream.uid(Constant.PIPELINE_SESSION_MANAGE_OPERATOR_UID); + +//stream = +//stream.transform( +//"PartitionByBucket", +//new PartitioningEventTypeInfo(), +//new PartitionOperator( +//stream.getParallelism(), options.getBucketSize())) +//.partitionCustom(new EventPartitioner(), new +// PartitioningEventKeySelector()) +//.transform( +//"PostPartition", +//new EventTypeInfo(), +//new PostPartitionOperator(stream.getParallelism())) +//.name("PartitionByBucket"); Review Comment: Sorry, I comment out this part of code for debug use but forget to un-comment. I think this code will stay here until the runtime optimization your mentioned is passed, and then I will re-implement this feature in that way. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Resolved] (FLINK-35448) Translate pod templates documentation into Chinese
[ https://issues.apache.org/jira/browse/FLINK-35448?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Rui Fan resolved FLINK-35448. - Fix Version/s: kubernetes-operator-1.9.0 Assignee: Caican Cai Resolution: Fixed > Translate pod templates documentation into Chinese > -- > > Key: FLINK-35448 > URL: https://issues.apache.org/jira/browse/FLINK-35448 > Project: Flink > Issue Type: Sub-task >Reporter: Caican Cai >Assignee: Caican Cai >Priority: Minor > Labels: pull-request-available > Fix For: kubernetes-operator-1.9.0 > > > Translate pod templates documentation into Chinese > > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (FLINK-33463) Support the implementation of dynamic source tables based on the new source
[ https://issues.apache.org/jira/browse/FLINK-33463?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17854916#comment-17854916 ] RocMarshal commented on FLINK-33463: Hi, [~fanrui] [~eskabetxe] Should we split this Jira out as a standalone Jira and mark FLIP-239 as completed? If so, I'd like to do the following items: - split this Jira out as a standalone Jira - create a new Jira to track 'Support the implementation of dynamic sink tables based on the new sink' And I'm glad to hear more opinions about it. Thanks a lot~ :) > Support the implementation of dynamic source tables based on the new source > --- > > Key: FLINK-33463 > URL: https://issues.apache.org/jira/browse/FLINK-33463 > Project: Flink > Issue Type: Sub-task > Components: Table SQL / JDBC >Reporter: RocMarshal >Assignee: RocMarshal >Priority: Major > Labels: pull-request-available > Fix For: jdbc-3.3.0 > > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (FLINK-35448) Translate pod templates documentation into Chinese
[ https://issues.apache.org/jira/browse/FLINK-35448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17854915#comment-17854915 ] Rui Fan commented on FLINK-35448: - Merged to main(1.9.0) via: d920a235a3357311b96e3c087d5918d83da27853 > Translate pod templates documentation into Chinese > -- > > Key: FLINK-35448 > URL: https://issues.apache.org/jira/browse/FLINK-35448 > Project: Flink > Issue Type: Sub-task >Reporter: Caican Cai >Priority: Minor > Labels: pull-request-available > > Translate pod templates documentation into Chinese > > -- This message was sent by Atlassian Jira (v8.20.10#820010)
Re: [PR] [FLINK-35448] Translate pod templates documentation into Chinese [flink-kubernetes-operator]
1996fanrui merged PR #830: URL: https://github.com/apache/flink-kubernetes-operator/pull/830 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Commented] (FLINK-35157) Sources with watermark alignment get stuck once some subtasks finish
[ https://issues.apache.org/jira/browse/FLINK-35157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17854914#comment-17854914 ] elon_X commented on FLINK-35157: [~fanrui] Sure, I will backport this fix to 1.17, 1.18, and 1.19. Thank you :D

> Sources with watermark alignment get stuck once some subtasks finish
>
> Key: FLINK-35157
> URL: https://issues.apache.org/jira/browse/FLINK-35157
> Project: Flink
> Issue Type: Bug
> Components: Runtime / Coordination
> Affects Versions: 1.17.2, 1.19.0, 1.18.1
> Reporter: Gyula Fora
> Assignee: elon_X
> Priority: Critical
> Labels: pull-request-available
> Fix For: 1.20.0
>
> Attachments: image-2024-04-24-21-36-16-146.png
>
> The current watermark alignment logic can easily get stuck if some subtasks finish while others are still running.
> The reason is that once a source subtask finishes, the subtask is not excluded from alignment, effectively blocking the rest of the job from making progress beyond last wm + alignment time for the finished sources.
> This can be easily reproduced by the following simple pipeline:
> {noformat}
> StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
> env.setParallelism(2);
> DataStream<Long> s = env.fromSource(
>                 new NumberSequenceSource(0, 100),
>                 WatermarkStrategy.<Long>forMonotonousTimestamps()
>                         .withTimestampAssigner((SerializableTimestampAssigner<Long>) (aLong, l) -> aLong)
>                         .withWatermarkAlignment("g1", Duration.ofMillis(10), Duration.ofSeconds(2)),
>                 "Sequence Source")
>         .filter((FilterFunction<Long>) aLong -> {
>             Thread.sleep(200);
>             return true;
>         });
> s.print();
> env.execute();
> {noformat}
> The solution could be to send out a max watermark event once the sources finish or to exclude them from the source coordinator

-- This message was sent by Atlassian Jira (v8.20.10#820010)
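The failure mode and fix direction described above can be condensed into a tiny, Flink-independent sketch. The combine logic and the Long.MAX_VALUE convention below are simplifications for illustration, not the coordinator's actual code:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of FLINK-35157: the aligned group watermark is the minimum over all
// subtask reports, so a finished subtask that stops updating pins the group
// watermark forever. Modelling "finished" as a Long.MAX_VALUE report (the max
// watermark) removes it from the minimum and lets the group advance.
public class WatermarkAlignmentSketch {

    // Combined watermark of an alignment group: the minimum of all reports.
    static long groupWatermark(Map<Integer, Long> reportedBySubtask) {
        return reportedBySubtask.values().stream().min(Long::compare).orElse(Long.MAX_VALUE);
    }

    public static void main(String[] args) {
        Map<Integer, Long> reports = new HashMap<>();
        reports.put(0, 100L); // still running
        reports.put(1, 40L);  // finished at wm=40, never updated again

        // Buggy behaviour: the finished subtask pins the group watermark at 40,
        // so subtask 0 cannot progress past 40 + the allowed alignment drift.
        System.out.println(groupWatermark(reports)); // 40

        // Fix: on finish the subtask emits the max watermark (or is excluded
        // by the coordinator), so the group watermark follows subtask 0.
        reports.put(1, Long.MAX_VALUE);
        System.out.println(groupWatermark(reports)); // 100
    }
}
```

Whether the finished subtask is excluded by the coordinator or made to report the max watermark, the effect on the combined minimum is the same, which is why the issue lists both options.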
Re: [PR] [FLINK-35167][cdc-connector] Introduce MaxCompute pipeline DataSink [flink-cdc]
dingxin-tech commented on code in PR #3254: URL: https://github.com/apache/flink-cdc/pull/3254#discussion_r1639171927

## flink-cdc-connect/flink-cdc-pipeline-connectors/flink-cdc-pipeline-connector-maxcompute/src/main/java/org/apache/flink/cdc/connectors/maxcompute/sink/MaxComputeEventWriter.java:

@@ -0,0 +1,197 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.cdc.connectors.maxcompute.sink;
+
+import org.apache.flink.api.connector.sink2.Sink;
+import org.apache.flink.api.connector.sink2.SinkWriter;
+import org.apache.flink.cdc.common.event.CreateTableEvent;
+import org.apache.flink.cdc.common.event.DataChangeEvent;
+import org.apache.flink.cdc.common.event.Event;
+import org.apache.flink.cdc.common.event.OperationType;
+import org.apache.flink.cdc.common.event.SchemaChangeEvent;
+import org.apache.flink.cdc.common.event.TableId;
+import org.apache.flink.cdc.common.schema.Schema;
+import org.apache.flink.cdc.common.utils.SchemaUtils;
+import org.apache.flink.cdc.connectors.maxcompute.common.Constant;
+import org.apache.flink.cdc.connectors.maxcompute.common.SessionIdentifier;
+import org.apache.flink.cdc.connectors.maxcompute.coordinator.SessionManageOperator;
+import org.apache.flink.cdc.connectors.maxcompute.coordinator.message.CommitSessionRequest;
+import org.apache.flink.cdc.connectors.maxcompute.coordinator.message.CommitSessionResponse;
+import org.apache.flink.cdc.connectors.maxcompute.options.MaxComputeExecutionOptions;
+import org.apache.flink.cdc.connectors.maxcompute.options.MaxComputeOptions;
+import org.apache.flink.cdc.connectors.maxcompute.options.MaxComputeWriteOptions;
+import org.apache.flink.cdc.connectors.maxcompute.utils.MaxComputeUtils;
+import org.apache.flink.cdc.connectors.maxcompute.utils.TypeConvertUtils;
+import org.apache.flink.cdc.connectors.maxcompute.writer.MaxComputeWriter;
+import org.apache.flink.cdc.runtime.operators.schema.event.CoordinationResponseUtils;
+import org.apache.flink.runtime.operators.coordination.CoordinationResponse;
+
+import com.aliyun.odps.data.ArrayRecord;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.concurrent.ExecutionException;
+import java.util.concurrent.Future;
+
+/** A {@link SinkWriter} for {@link Event} for MaxCompute. */
+public class MaxComputeEventWriter implements SinkWriter<Event> {
+
+    private static final Logger LOG = LoggerFactory.getLogger(MaxComputeEventWriter.class);
+
+    private final Sink.InitContext context;
+    private final MaxComputeOptions options;
+    private final MaxComputeWriteOptions writeOptions;
+    private final MaxComputeExecutionOptions executionOptions;
+    private final Map<TableId, MaxComputeWriter> writerMap;
+    private final Map<TableId, Schema> schemaCache;
+
+    public MaxComputeEventWriter(
+            MaxComputeOptions options,
+            MaxComputeWriteOptions writeOptions,
+            MaxComputeExecutionOptions executionOptions,
+            Sink.InitContext context) {
+        this.context = context;
+        this.options = options;
+        this.writeOptions = writeOptions;
+        this.executionOptions = executionOptions;
+        this.writerMap = new HashMap<>();
+        this.schemaCache = new HashMap<>();
+    }
+
+    @Override
+    public
Re: [PR] [FLINK-34545][cdc-pipeline-connector]Add OceanBase pipeline connector to Flink CDC [flink-cdc]
yuanoOo commented on code in PR #3360: URL: https://github.com/apache/flink-cdc/pull/3360#discussion_r1639167796

## docs/content/docs/connectors/pipeline-connectors/oceanbase.md:

@@ -0,0 +1,343 @@
+---
+title: "OceanBase"
+weight: 7
+type: docs
+aliases:
+- /connectors/pipeline-connectors/oceanbase
+---
+
+# OceanBase Connector
+
+OceanBase connector can be used as the *Data Sink* of the pipeline, and write data to [OceanBase](https://github.com/oceanbase/oceanbase). This document describes how to set up the OceanBase connector.
+
+## What can the connector do?
+* Create table automatically if not exist
+* Schema change synchronization
+* Data synchronization
+
+## Example
+
+The pipeline for reading data from MySQL and sink to OceanBase can be defined as follows:
+
+```yaml
+source:
+  type: mysql
+  hostname: mysql
+  port: 3306
+  username: mysqluser
+  password: mysqlpw
+  tables: mysql_2_oceanbase_test_17l13vc.\.*
+  server-id: 5400-5404
+  server-time-zone: UTC
+
+sink:
+  type: oceanbase
+  url: jdbc:mysql://oceanbase:2881/test
+  username: root@test
+  password:
+
+pipeline:
+  name: MySQL to OceanBase Pipeline
+  parallelism: 1
+```
+
+## Connector Options
+
+| Option | Required by Table API | Default | Type | Description |
+|--------|-----------------------|---------|------|-------------|
+| url | Yes | | String | JDBC url. |
+| username | Yes | | String | The username. |
+| password | Yes | | String | The password. |
+| schema-name | Yes | | String | The schema name or database name. |
+| table-name | Yes | | String | The table name. |
+| driver-class-name | No | com.mysql.cj.jdbc.Driver | String | The driver class name, use 'com.mysql.cj.jdbc.Driver' by default. If other value is set, you need to introduce the driver manually. |

Review Comment:
   Great, I fixed it.

-- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. 
To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] [FLINK-35597][test] Fix unstable LocatableSplitAssignerTest#testConcurrentSplitAssignmentForMultipleHosts [flink]
GOODBOY008 commented on PR #24936: URL: https://github.com/apache/flink/pull/24936#issuecomment-2167128712 @1996fanrui It's indeed my mistake. @liyubin117 Great catch. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] [FLINK-33461][Connector/JDBC] Support streaming related semantics for the new JDBC source [flink-connector-jdbc]
RocMarshal commented on PR #119: URL: https://github.com/apache/flink-connector-jdbc/pull/119#issuecomment-2167128684 👍 Thank you @1996fanrui @eskabetxe very much for your review and attention for the feature ! -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] [FLINK-35448] Translate pod templates documentation into Chinese [flink-kubernetes-operator]
caicancai commented on PR #830: URL: https://github.com/apache/flink-kubernetes-operator/pull/830#issuecomment-2167127192 @1996fanrui CI fails, but it doesn't seem to be caused by my PR -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Updated] (FLINK-35237) Allow Sink to Choose HashFunction in PrePartitionOperator
[ https://issues.apache.org/jira/browse/FLINK-35237?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated FLINK-35237: --- Labels: pull-request-available (was: ) > Allow Sink to Choose HashFunction in PrePartitionOperator > - > > Key: FLINK-35237 > URL: https://issues.apache.org/jira/browse/FLINK-35237 > Project: Flink > Issue Type: Improvement > Components: Flink CDC >Reporter: zhangdingxin >Priority: Major > Labels: pull-request-available > Fix For: cdc-3.2.0 > > > The {{PrePartitionOperator}} in its current implementation only supports a > fixed {{HashFunction}} > ({{{}org.apache.flink.cdc.runtime.partitioning.PrePartitionOperator.HashFunction{}}}). > This limits the ability of Sink implementations to customize the > partitioning logic for {{{}DataChangeEvent{}}}s. For example, in the case of > partitioned tables, it would be advantageous to allow hashing based on > partition keys, hashing according to table names, or using the database > engine's internal primary key hash functions (such as with MaxCompute > DataSink). > When users require such custom partitioning logic, they are compelled to > implement their PartitionOperator, which undermines the utility of > {{{}PrePartitionOperator{}}}. > To address this limitation, it would be highly desirable to enable the > {{PrePartitionOperator}} to support user-specified custom > {{{}HashFunction{}}}s (Function). A possible > solution could involve a mechanism analogous to the {{DataSink}} interface, > allowing the specification of a {{HashFunctionProvider}} class path in the > configuration file. This enhancement would greatly facilitate users in > tailoring partition strategies to meet their specific application needs. 
> In this case, I want to create new classes {{HashFunctionProvider}} and {{HashFunction}}:
> {code:java}
> public interface HashFunctionProvider {
>     HashFunction getHashFunction(Schema schema);
> }
>
> public interface HashFunction extends Function<DataChangeEvent, Integer> {
>     Integer apply(DataChangeEvent event);
> }
> {code}
> add a {{getHashFunctionProvider}} method to {{DataSink}}
> {code:java}
> public interface DataSink {
>
>     /** Get the {@link EventSinkProvider} for writing changed data to external systems. */
>     EventSinkProvider getEventSinkProvider();
>
>     /** Get the {@link MetadataApplier} for applying metadata changes to external systems. */
>     MetadataApplier getMetadataApplier();
>
>     default HashFunctionProvider getHashFunctionProvider() {
>         return new DefaultHashFunctionProvider();
>     }
> }
> {code}
> and re-implement the {{PrePartitionOperator}} {{recreateHashFunction}} method.
> {code:java}
> private HashFunction recreateHashFunction(TableId tableId) {
>     return hashFunctionProvider.getHashFunction(loadLatestSchemaFromRegistry(tableId));
> }
> {code}

-- This message was sent by Atlassian Jira (v8.20.10#820010)
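The interfaces sketched in the issue can be exercised end to end without any Flink dependencies. The following is a hypothetical, self-contained rendering of that pattern; {{DataChangeEvent}} and {{Schema}} here are simplified stand-ins for the real CDC classes, and only the plumbing between provider, function, and caller is shown:

```java
import java.util.function.Function;

// Minimal sketch of the pluggable HashFunctionProvider proposed in
// FLINK-35237. The nested record types are simplified stand-ins.
public class HashFunctionProviderSketch {

    record DataChangeEvent(String tableId, String key) {}

    record Schema(String tableId) {}

    interface HashFunction extends Function<DataChangeEvent, Integer> {}

    interface HashFunctionProvider {
        HashFunction getHashFunction(Schema schema);
    }

    // Default behaviour: hash by record key, mirroring the single fixed
    // hash that PrePartitionOperator supports today.
    static class DefaultHashFunctionProvider implements HashFunctionProvider {
        public HashFunction getHashFunction(Schema schema) {
            return event -> event.key().hashCode();
        }
    }

    // Sink-specific override: hash by table name so that all events of one
    // table are routed to the same partition.
    static class TableNameHashFunctionProvider implements HashFunctionProvider {
        public HashFunction getHashFunction(Schema schema) {
            return event -> event.tableId().hashCode();
        }
    }

    public static void main(String[] args) {
        Schema schema = new Schema("db.orders");
        HashFunction byTable = new TableNameHashFunctionProvider().getHashFunction(schema);
        DataChangeEvent e1 = new DataChangeEvent("db.orders", "k1");
        DataChangeEvent e2 = new DataChangeEvent("db.orders", "k2");
        // Same table, different keys: table-name hashing maps both alike.
        System.out.println(byTable.apply(e1).equals(byTable.apply(e2))); // true
    }
}
```

A sink such as the MaxCompute DataSink would then override {{getHashFunctionProvider}} to return, for example, a bucket- or table-name-based provider, while every other sink keeps the default key-hash behaviour.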
[jira] [Comment Edited] (FLINK-34914) FLIP-436: Introduce Catalog-related Syntax
[ https://issues.apache.org/jira/browse/FLINK-34914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17854911#comment-17854911 ] Weijie Guo edited comment on FLINK-34914 at 6/14/24 2:16 AM: - I'm glad to hear that, but with a friendly remind: The feature freeze date is June 15, 2024, 00:00 CEST(UTC+2). I think we have to hurry D) was (Author: weijie guo): I'm glad to hear that, but with a friendly remind: The feature freeze date is June 15, 2024, 00:00 CEST(UTC+2). I think we have to hurry :) > FLIP-436: Introduce Catalog-related Syntax > -- > > Key: FLINK-34914 > URL: https://issues.apache.org/jira/browse/FLINK-34914 > Project: Flink > Issue Type: New Feature > Components: Table SQL / API >Affects Versions: 1.20.0 >Reporter: Yubin Li >Assignee: Yubin Li >Priority: Major > > Umbrella issue for: > https://cwiki.apache.org/confluence/display/FLINK/FLIP-436%3A+Introduce+Catalog-related+Syntax -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (FLINK-34914) FLIP-436: Introduce Catalog-related Syntax
[ https://issues.apache.org/jira/browse/FLINK-34914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17854911#comment-17854911 ] Weijie Guo commented on FLINK-34914: I'm glad to hear that, but with a friendly remind: The feature freeze date is June 15, 2024, 00:00 CEST(UTC+2). I think we have to hurry :) > FLIP-436: Introduce Catalog-related Syntax > -- > > Key: FLINK-34914 > URL: https://issues.apache.org/jira/browse/FLINK-34914 > Project: Flink > Issue Type: New Feature > Components: Table SQL / API >Affects Versions: 1.20.0 >Reporter: Yubin Li >Assignee: Yubin Li >Priority: Major > > Umbrella issue for: > https://cwiki.apache.org/confluence/display/FLINK/FLIP-436%3A+Introduce+Catalog-related+Syntax -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (FLINK-35597) Fix unstable LocatableSplitAssignerTest#testConcurrentSplitAssignmentForMultipleHosts
[ https://issues.apache.org/jira/browse/FLINK-35597?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yubin Li updated FLINK-35597: - Description: To ensure that at least a fraction of the assignments are local, it should be {{ia.getNumberOfLocalAssignments()}} as before. > Fix unstable > LocatableSplitAssignerTest#testConcurrentSplitAssignmentForMultipleHosts > - > > Key: FLINK-35597 > URL: https://issues.apache.org/jira/browse/FLINK-35597 > Project: Flink > Issue Type: Bug > Components: Tests >Affects Versions: 1.20.0 >Reporter: Yubin Li >Priority: Major > Labels: pull-request-available > > To ensure that at least a fraction of the assignments are local, it should be > {{ia.getNumberOfLocalAssignments()}} as before. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (FLINK-35597) Fix unstable LocatableSplitAssignerTest#testConcurrentSplitAssignmentForMultipleHosts
[ https://issues.apache.org/jira/browse/FLINK-35597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17854910#comment-17854910 ] Jane Chan commented on FLINK-35597: --- https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=60255&view=logs&j=0da23115-68bb-5dcd-192c-bd4c8adebde1&t=24c3384f-1bcb-57b3-224f-51bf973bbee8 > Fix unstable > LocatableSplitAssignerTest#testConcurrentSplitAssignmentForMultipleHosts > - > > Key: FLINK-35597 > URL: https://issues.apache.org/jira/browse/FLINK-35597 > Project: Flink > Issue Type: Bug > Components: Tests >Affects Versions: 1.20.0 >Reporter: Yubin Li >Priority: Major > Labels: pull-request-available > -- This message was sent by Atlassian Jira (v8.20.10#820010)
Re: [PR] [FLINK-35448] Translate pod templates documentation into Chinese [flink-kubernetes-operator]
caicancai commented on code in PR #830: URL: https://github.com/apache/flink-kubernetes-operator/pull/830#discussion_r1639121989 ## docs/content/docs/custom-resource/pod-template.md: ## @@ -125,4 +124,4 @@ arr1: [{name: a, p2: v2}, {name: c, p2: v2}] merged: [{name: a, p1: v1, p2: v2}, {name: b, p1: v1}, {name: c, p2: v2}] ``` -Merging by name can we be very convenient when merging container specs or when the base and override templates are not defined together. +当合并容器规范或者当基础模板和覆盖模板没有一起定义时,按名称合并可以非常方便。 Review Comment: Yes, sorry, I overlooked it. It's been fixed -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] [FLINK-34545][cdc-pipeline-connector]Add OceanBase pipeline connector to Flink CDC [flink-cdc]
whhe commented on code in PR #3360: URL: https://github.com/apache/flink-cdc/pull/3360#discussion_r1639119977

## docs/content/docs/connectors/pipeline-connectors/oceanbase.md:

@@ -0,0 +1,343 @@ (same excerpt of the OceanBase connector documentation, ending at the `driver-class-name` option)

Review Comment:
   The MySQL driver is also excluded in flink cdc jar files, so any driver needs to be introduced manually.

-- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. 
To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Closed] (FLINK-35591) Azure Pipelines not running for master since c9def981
[ https://issues.apache.org/jira/browse/FLINK-35591?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Weijie Guo closed FLINK-35591. -- Fix Version/s: 1.20.0 Resolution: Fixed > Azure Pipelines not running for master since c9def981 > - > > Key: FLINK-35591 > URL: https://issues.apache.org/jira/browse/FLINK-35591 > Project: Flink > Issue Type: Bug > Components: Build System / CI >Affects Versions: 1.20.0 >Reporter: Weijie Guo >Assignee: Lorenzo Affetti >Priority: Blocker > Fix For: 1.20.0 > > Attachments: image-2024-06-13-12-31-18-076.png > > > No Azure CI builds are triggered for master since > [c9def981|https://dev.azure.com/apache-flink/apache-flink/_build?definitionId=1&_a=summary]. > The PR CI workflows appear to be not affected. > Might be the same reason as FLINK-34026. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (FLINK-35591) Azure Pipelines not running for master since c9def981
[ https://issues.apache.org/jira/browse/FLINK-35591?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17854907#comment-17854907 ] Weijie Guo commented on FLINK-35591: Thanks [~lorenzo.affetti]! I have checked it works now. > Azure Pipelines not running for master since c9def981 > - > > Key: FLINK-35591 > URL: https://issues.apache.org/jira/browse/FLINK-35591 > Project: Flink > Issue Type: Bug > Components: Build System / CI >Affects Versions: 1.20.0 >Reporter: Weijie Guo >Assignee: Lorenzo Affetti >Priority: Blocker > Attachments: image-2024-06-13-12-31-18-076.png > > > No Azure CI builds are triggered for master since > [c9def981|https://dev.azure.com/apache-flink/apache-flink/_build?definitionId=1&_a=summary]. > The PR CI workflows appear to be not affected. > Might be the same reason as FLINK-34026. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Assigned] (FLINK-35591) Azure Pipelines not running for master since c9def981
[ https://issues.apache.org/jira/browse/FLINK-35591?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Weijie Guo reassigned FLINK-35591: -- Assignee: Lorenzo Affetti > Azure Pipelines not running for master since c9def981 > - > > Key: FLINK-35591 > URL: https://issues.apache.org/jira/browse/FLINK-35591 > Project: Flink > Issue Type: Bug > Components: Build System / CI >Affects Versions: 1.20.0 >Reporter: Weijie Guo >Assignee: Lorenzo Affetti >Priority: Blocker > Attachments: image-2024-06-13-12-31-18-076.png > > > No Azure CI builds are triggered for master since > [c9def981|https://dev.azure.com/apache-flink/apache-flink/_build?definitionId=1&_a=summary]. > The PR CI workflows appear to be not affected. > Might be the same reason as FLINK-34026. -- This message was sent by Atlassian Jira (v8.20.10#820010)
Re: [PR] [FLINK-35597][test] Fix unstable LocatableSplitAssignerTest#testConcurrentSplitAssignmentForMultipleHosts [flink]
flinkbot commented on PR #24936: URL: https://github.com/apache/flink/pull/24936#issuecomment-2167073330 ## CI report: * 971edbe445dc16987a149bda399cf235f2ed5554 UNKNOWN Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run azure` re-run the last Azure build -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Commented] (FLINK-33461) Support streaming related semantics for the new JDBC source
[ https://issues.apache.org/jira/browse/FLINK-33461?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17854906#comment-17854906 ] Rui Fan commented on FLINK-33461: - Merged to main(jdbc-3.3.0) via 190238a05bebe9a092e9cec84627127781d4d859 > Support streaming related semantics for the new JDBC source > --- > > Key: FLINK-33461 > URL: https://issues.apache.org/jira/browse/FLINK-33461 > Project: Flink > Issue Type: Sub-task > Components: Connectors / JDBC >Reporter: RocMarshal >Assignee: RocMarshal >Priority: Major > Labels: pull-request-available > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Resolved] (FLINK-33461) Support streaming related semantics for the new JDBC source
[ https://issues.apache.org/jira/browse/FLINK-33461?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Rui Fan resolved FLINK-33461. - Fix Version/s: jdbc-3.3.0 Resolution: Fixed > Support streaming related semantics for the new JDBC source > --- > > Key: FLINK-33461 > URL: https://issues.apache.org/jira/browse/FLINK-33461 > Project: Flink > Issue Type: Sub-task > Components: Connectors / JDBC >Reporter: RocMarshal >Assignee: RocMarshal >Priority: Major > Labels: pull-request-available > Fix For: jdbc-3.3.0 > > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (FLINK-33462) Sort out the document page about the new Jdbc source.
[ https://issues.apache.org/jira/browse/FLINK-33462?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17854905#comment-17854905 ] Rui Fan commented on FLINK-33462: - Merged to main(jdbc-3.3.0) via 1bab53304c65384b4fbe6a5fe71de71a344a78fe > Sort out the document page about the new Jdbc source. > - > > Key: FLINK-33462 > URL: https://issues.apache.org/jira/browse/FLINK-33462 > Project: Flink > Issue Type: Sub-task > Components: Connectors / JDBC >Reporter: RocMarshal >Assignee: RocMarshal >Priority: Major > Labels: pull-request-available > Fix For: jdbc-3.3.0 > > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (FLINK-33462) Sort out the document page about the new Jdbc source.
[ https://issues.apache.org/jira/browse/FLINK-33462?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Rui Fan updated FLINK-33462: Fix Version/s: jdbc-3.3.0 > Sort out the document page about the new Jdbc source. > - > > Key: FLINK-33462 > URL: https://issues.apache.org/jira/browse/FLINK-33462 > Project: Flink > Issue Type: Sub-task > Components: Connectors / JDBC >Reporter: RocMarshal >Assignee: RocMarshal >Priority: Major > Labels: pull-request-available > Fix For: jdbc-3.3.0 > > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Resolved] (FLINK-33462) Sort out the document page about the new Jdbc source.
[ https://issues.apache.org/jira/browse/FLINK-33462?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Rui Fan resolved FLINK-33462. - Resolution: Fixed > Sort out the document page about the new Jdbc source. > - > > Key: FLINK-33462 > URL: https://issues.apache.org/jira/browse/FLINK-33462 > Project: Flink > Issue Type: Sub-task > Components: Connectors / JDBC >Reporter: RocMarshal >Assignee: RocMarshal >Priority: Major > Labels: pull-request-available > Fix For: jdbc-3.3.0 > > -- This message was sent by Atlassian Jira (v8.20.10#820010)
Re: [PR] [FLINK-34487][ci] Adds Python Wheels nightly GHA workflow [flink]
HuangXingBo commented on code in PR #24426: URL: https://github.com/apache/flink/pull/24426#discussion_r1639115041

## .github/workflows/nightly.yml:

@@ -28,69 +28,131 @@ jobs:
     name: "Pre-compile Checks"
     uses: ./.github/workflows/template.pre-compile-checks.yml
-  java8:
-    name: "Java 8"
-    uses: ./.github/workflows/template.flink-ci.yml
-    with:
-      workflow-caller-id: java8
-      environment: 'PROFILE="-Dinclude_hadoop_aws"'
-      jdk_version: 8
-    secrets:
-      s3_bucket: ${{ secrets.IT_CASE_S3_BUCKET }}
-      s3_access_key: ${{ secrets.IT_CASE_S3_ACCESS_KEY }}
-      s3_secret_key: ${{ secrets.IT_CASE_S3_SECRET_KEY }}
-  java11:
-    name: "Java 11"
-    uses: ./.github/workflows/template.flink-ci.yml
-    with:
-      workflow-caller-id: java11
-      environment: 'PROFILE="-Dinclude_hadoop_aws -Djdk11 -Pjava11-target"'
-      jdk_version: 11
-    secrets:
-      s3_bucket: ${{ secrets.IT_CASE_S3_BUCKET }}
-      s3_access_key: ${{ secrets.IT_CASE_S3_ACCESS_KEY }}
-      s3_secret_key: ${{ secrets.IT_CASE_S3_SECRET_KEY }}
-  java17:
-    name: "Java 17"
-    uses: ./.github/workflows/template.flink-ci.yml
-    with:
-      workflow-caller-id: java17
-      environment: 'PROFILE="-Dinclude_hadoop_aws -Djdk11 -Djdk17 -Pjava17-target"'
-      jdk_version: 17
-    secrets:
-      s3_bucket: ${{ secrets.IT_CASE_S3_BUCKET }}
-      s3_access_key: ${{ secrets.IT_CASE_S3_ACCESS_KEY }}
-      s3_secret_key: ${{ secrets.IT_CASE_S3_SECRET_KEY }}
-  java21:
-    name: "Java 21"
-    uses: ./.github/workflows/template.flink-ci.yml
-    with:
-      workflow-caller-id: java21
-      environment: 'PROFILE="-Dinclude_hadoop_aws -Djdk11 -Djdk17 -Djdk21 -Pjava21-target"'
-      jdk_version: 21
-    secrets:
-      s3_bucket: ${{ secrets.IT_CASE_S3_BUCKET }}
-      s3_access_key: ${{ secrets.IT_CASE_S3_ACCESS_KEY }}
-      s3_secret_key: ${{ secrets.IT_CASE_S3_SECRET_KEY }}
-  hadoop313:
-    name: "Hadoop 3.1.3"
-    uses: ./.github/workflows/template.flink-ci.yml
-    with:
-      workflow-caller-id: hadoop313
-      environment: 'PROFILE="-Dflink.hadoop.version=3.2.3 -Phadoop3-tests,hive3"'
-      jdk_version: 8
-    secrets:
-      s3_bucket: ${{ secrets.IT_CASE_S3_BUCKET }}
-      s3_access_key: ${{ secrets.IT_CASE_S3_ACCESS_KEY }}
-      s3_secret_key: ${{ secrets.IT_CASE_S3_SECRET_KEY }}
-  adaptive-scheduler:
-    name: "AdaptiveScheduler"
-    uses: ./.github/workflows/template.flink-ci.yml
-    with:
-      workflow-caller-id: adaptive-scheduler
-      environment: 'PROFILE="-Penable-adaptive-scheduler"'
-      jdk_version: 8
-    secrets:
-      s3_bucket: ${{ secrets.IT_CASE_S3_BUCKET }}
-      s3_access_key: ${{ secrets.IT_CASE_S3_ACCESS_KEY }}
-      s3_secret_key: ${{ secrets.IT_CASE_S3_SECRET_KEY }}
+# java8:

Review Comment:
   Is this code no longer needed? Why not just delete it?

-- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] [FLINK-33461][Connector/JDBC] Support streaming related semantics for the new JDBC source [flink-connector-jdbc]
1996fanrui merged PR #119:
URL: https://github.com/apache/flink-connector-jdbc/pull/119
Re: [PR] [FLINK-34487][ci] Adds Python Wheels nightly GHA workflow [flink]
HuangXingBo commented on PR #24426:
URL: https://github.com/apache/flink/pull/24426#issuecomment-2167064115

Thanks @morazow for the update. I downloaded the wheel package and found that the repaired wheel is tagged `manylinux_2_5`. This may be because `dev/glibc_version_fix.sh` was not used: if the glibc version on the build host is too new, it is impossible to generate a manylinux1 package.
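As background for the comment above: under PEP 600, the legacy `manylinux1` tag corresponds to glibc 2.5, and a wheel's platform tag records the minimum glibc it requires. A minimal sketch of reading that tag from a PEP 427 wheel filename (the filename below is a hypothetical example, not an actual Flink artifact):

```python
import re

def wheel_platform_tag(wheel_filename: str) -> str:
    """Extract the platform tag from a PEP 427 wheel filename."""
    stem = wheel_filename[: -len(".whl")]
    # Filename layout: name-version(-build)?-python-abi-platform
    return stem.split("-")[-1]

def min_glibc(platform_tag: str):
    """Return the (major, minor) glibc requirement encoded in a manylinux tag."""
    # Legacy aliases defined before PEP 600 introduced manylinux_X_Y naming.
    legacy = {"manylinux1": (2, 5), "manylinux2010": (2, 12), "manylinux2014": (2, 17)}
    for prefix, version in legacy.items():
        if platform_tag.startswith(prefix):
            return version
    m = re.match(r"manylinux_(\d+)_(\d+)_", platform_tag)
    return (int(m.group(1)), int(m.group(2))) if m else None

tag = wheel_platform_tag("apache_flink-1.20.dev0-cp38-cp38-manylinux_2_5_x86_64.whl")
print(tag, min_glibc(tag))  # manylinux_2_5_x86_64 (2, 5)
```

Note that `manylinux_2_5` and `manylinux1` encode the same glibc floor; the practical difference is that older pip versions only recognize the legacy spelling.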
[jira] [Updated] (FLINK-35597) Fix unstable LocatableSplitAssignerTest#testConcurrentSplitAssignmentForMultipleHosts
[ https://issues.apache.org/jira/browse/FLINK-35597?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

ASF GitHub Bot updated FLINK-35597:
-----------------------------------
    Labels: pull-request-available  (was: )

> Fix unstable LocatableSplitAssignerTest#testConcurrentSplitAssignmentForMultipleHosts
> -------------------------------------------------------------------------------------
>
>                 Key: FLINK-35597
>                 URL: https://issues.apache.org/jira/browse/FLINK-35597
>             Project: Flink
>          Issue Type: Bug
>          Components: Tests
>    Affects Versions: 1.20.0
>            Reporter: Yubin Li
>            Priority: Major
>              Labels: pull-request-available

-- 
This message was sent by Atlassian Jira
(v8.20.10#820010)
[PR] [FLINK-35597][test] Fix unstable LocatableSplitAssignerTest#testConcurrentSplitAssignmentForMultipleHosts [flink]
liyubin117 opened a new pull request, #24936:
URL: https://github.com/apache/flink/pull/24936

## Brief change log

* the assertion should use `ia.getNumberOfLocalAssignments()`, as before
Re: [PR] [FLINK-35121][common] Adds validation for pipeline definition options [flink-cdc]
Jiabao-Sun commented on code in PR #3382:
URL: https://github.com/apache/flink-cdc/pull/3382#discussion_r1639108698

## flink-cdc-composer/src/test/java/org/apache/flink/cdc/composer/definition/PipelineValidationTest.java:

@@ -0,0 +1,85 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.cdc.composer.definition;
+
+import org.apache.flink.cdc.common.configuration.Configuration;
+import org.apache.flink.table.api.ValidationException;
+
+import org.junit.jupiter.api.Assertions;
+import org.junit.jupiter.api.Test;
+
+import java.util.HashMap;
+import java.util.Map;
+
+/** Tests for {@link PipelineDef} validation. */
+public class PipelineValidationTest {
+
+    @Test
+    public void testNormalConfigValidation() {

Review Comment:
   minor: In JUnit 5 tests, we can minimize the visibility of test classes and methods.
Re: [PR] [FLINK-35448] Translate pod templates documentation into Chinese [flink-kubernetes-operator]
1996fanrui commented on code in PR #830:
URL: https://github.com/apache/flink-kubernetes-operator/pull/830#discussion_r1639108031

## docs/content/docs/custom-resource/pod-template.md:

@@ -125,4 +124,4 @@
 arr1: [{name: a, p2: v2}, {name: c, p2: v2}]
 merged: [{name: a, p1: v1, p2: v2}, {name: b, p1: v1}, {name: c, p2: v2}]
 ```
-Merging by name can we be very convenient when merging container specs or when the base and override templates are not defined together.
+当合并容器规范或者当基础模板和覆盖模板没有一起定义时,按名称合并可以非常方便。

Review Comment:
   You didn't update the English doc, right?
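The diff under review documents merge-by-name semantics for pod template arrays. A rough model of that behaviour (an illustrative sketch, not the operator's actual implementation), reproducing the `arr1`/`merged` example from the docs:

```python
def merge_by_name(base, override):
    """Merge two lists of dicts, matching entries by their 'name' key.

    Entries present in both lists are merged key-by-key (override wins);
    entries unique to either list are kept. Ordering is base entries
    first, followed by entries that only appear in the override.
    """
    merged = {item["name"]: dict(item) for item in base}
    for item in override:
        # setdefault keeps an existing base entry or starts a new one.
        merged.setdefault(item["name"], {}).update(item)
    return list(merged.values())

base = [{"name": "a", "p1": "v1"}, {"name": "b", "p1": "v1"}]
override = [{"name": "a", "p2": "v2"}, {"name": "c", "p2": "v2"}]
print(merge_by_name(base, override))
# [{'name': 'a', 'p1': 'v1', 'p2': 'v2'}, {'name': 'b', 'p1': 'v1'}, {'name': 'c', 'p2': 'v2'}]
```

This mirrors why merging by name is convenient when base and override templates define containers in different orders: position no longer matters, only the `name` key.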
Re: [PR] [FLINK-35448] Translate pod templates documentation into Chinese [flink-kubernetes-operator]
1996fanrui commented on code in PR #830:
URL: https://github.com/apache/flink-kubernetes-operator/pull/830#discussion_r1639106337

## docs/content/docs/custom-resource/pod-template.md:

@@ -93,16 +90,18 @@ spec:
 ```
 {{< hint info >}}
-When using the operator with Flink native Kubernetes integration, please refer to [pod template field precedence](
-https://nightlies.apache.org/flink/flink-docs-master/docs/deployment/resource-providers/native_kubernetes/#fields-overwritten-by-flink).
+当使用具有 Flink 原生 Kubernetes 集成的 operator 时,请参考 [pod template 字段优先级](
+https://nightlies.apache.org/flink/flink-docs-master/docs/deployment/resource-providers/native_kubernetes/#fields-overwritten-by-flink)。
 {{< /hint >}}
+
 ## Array Merging Behaviour
-When layering pod templates (defining both a top level and jobmanager specific podtemplate for example) the corresponding yamls are merged together.
+当分层 pod templates(例如同时定义顶级和任务管理器特定的 pod 模板)时,相应的 yaml 会合并在一起。
-The default behaviour of the pod template mechanism is to merge array arrays by merging the objects in the respective array positions.
-This requires that containers in the podTemplates are defined in the same order otherwise the results may be undefined.
+Pod 模板机制的默认行为是通过合并相应数组位置的对象合并 json 类型的数组。

Review Comment:
   I tried to convert it using a json formatter just now, it's not json.
   ![image](https://github.com/apache/flink-kubernetes-operator/assets/38427477/8aefe09e-8648-405e-a722-e342c3fbdbce)
[jira] [Created] (FLINK-35597) Fix unstable LocatableSplitAssignerTest#testConcurrentSplitAssignmentForMultipleHosts
Yubin Li created FLINK-35597:
--------------------------------

             Summary: Fix unstable LocatableSplitAssignerTest#testConcurrentSplitAssignmentForMultipleHosts
                 Key: FLINK-35597
                 URL: https://issues.apache.org/jira/browse/FLINK-35597
             Project: Flink
          Issue Type: Bug
          Components: Tests
    Affects Versions: 1.20.0
            Reporter: Yubin Li
[jira] [Updated] (FLINK-35157) Sources with watermark alignment get stuck once some subtasks finish
[ https://issues.apache.org/jira/browse/FLINK-35157?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Rui Fan updated FLINK-35157:
----------------------------
    Fix Version/s: 1.20.0

> Sources with watermark alignment get stuck once some subtasks finish
> --------------------------------------------------------------------
>
>                 Key: FLINK-35157
>                 URL: https://issues.apache.org/jira/browse/FLINK-35157
>             Project: Flink
>          Issue Type: Bug
>          Components: Runtime / Coordination
>    Affects Versions: 1.17.2, 1.19.0, 1.18.1
>            Reporter: Gyula Fora
>            Assignee: elon_X
>            Priority: Critical
>              Labels: pull-request-available
>             Fix For: 1.20.0
>
>         Attachments: image-2024-04-24-21-36-16-146.png
>
> The current watermark alignment logic can easily get stuck if some subtasks finish while others are still running.
> The reason is that once a source subtask finishes, the subtask is not excluded from alignment, effectively blocking the rest of the job from making progress beyond last wm + alignment time for the finished sources.
> This can be easily reproduced by the following simple pipeline:
> {noformat}
> StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
> env.setParallelism(2);
> DataStream<Long> s = env.fromSource(
>                 new NumberSequenceSource(0, 100),
>                 WatermarkStrategy.<Long>forMonotonousTimestamps()
>                         .withTimestampAssigner((SerializableTimestampAssigner<Long>) (aLong, l) -> aLong)
>                         .withWatermarkAlignment("g1", Duration.ofMillis(10), Duration.ofSeconds(2)),
>                 "Sequence Source")
>         .filter((FilterFunction<Long>) aLong -> {
>             Thread.sleep(200);
>             return true;
>         });
> s.print();
> env.execute();
> {noformat}
> The solution could be to send out a max watermark event once the sources finish or to exclude them from the source coordinator
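The failure mode in FLINK-35157 is that a finished subtask's last watermark keeps capping the alignment minimum for the whole group. A toy model of the proposed fix (a hypothetical sketch, not Flink's coordinator code) treats finished subtasks as if they had emitted a max watermark, so they stop holding the group back:

```python
MAX_WATERMARK = 2**63 - 1  # analogous to Flink's Watermark.MAX_WATERMARK

def aligned_max_allowed(watermarks, finished, alignment_ms):
    """Compute the max timestamp running sources may emit under alignment.

    watermarks:   subtask id -> last reported watermark
    finished:     set of subtask ids that have completed
    alignment_ms: max allowed drift ahead of the group minimum
    A finished subtask counts as MAX_WATERMARK, so it no longer
    pins the group's minimum to its last real watermark.
    """
    effective = [MAX_WATERMARK if sub in finished else wm
                 for sub, wm in watermarks.items()]
    return min(effective) + alignment_ms

# Subtask 0 finished at wm=100. Without the fix the group would be
# capped at 100 + alignment forever; with it, subtask 1 drives progress.
print(aligned_max_allowed({0: 100, 1: 5000}, finished={0}, alignment_ms=10))  # 5010
```

Without the exclusion (`finished=set()`), the same inputs cap progress at 110, which is exactly the stuck behaviour the issue describes.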
[jira] [Commented] (FLINK-35157) Sources with watermark alignment get stuck once some subtasks finish
[ https://issues.apache.org/jira/browse/FLINK-35157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17854902#comment-17854902 ]

Rui Fan commented on FLINK-35157:
---------------------------------

Hi [~elon], would you mind backporting this fix to 1.17, 1.18 and 1.19? thanks :)