Re: [VOTE] FLIP-448: Introduce Pluggable Workflow Scheduler Interface for Materialized Table
Sorry for the re-post, just to format this email content.

Hi Dev,

Thank you to everyone for the feedback on FLIP-448: Introduce Pluggable Workflow Scheduler Interface for Materialized Table [1][2]. I'd like to start a vote for it. The vote will be open for at least 72 hours unless there is an objection or not enough votes.

[1] https://cwiki.apache.org/confluence/display/FLINK/FLIP-448%3A+Introduce+Pluggable+Workflow+Scheduler+Interface+for+Materialized+Table
[2] https://lists.apache.org/thread/57xfo6p25rbrhcg01dhyok46zt6jc5q1

Best,
Ron

Ron Liu wrote on Thu, May 9, 2024, at 13:52:

> Hi Dev,
>
> Thank you to everyone for the feedback on FLIP-448: Introduce Pluggable
> Workflow Scheduler Interface for Materialized Table [1][2]. I'd like to
> start a vote for it. The vote will be open for at least 72 hours unless
> there is an objection or not enough votes.
>
> [1] https://cwiki.apache.org/confluence/display/FLINK/FLIP-448%3A+Introduce+Pluggable+Workflow+Scheduler+Interface+for+Materialized+Table
> [2] https://lists.apache.org/thread/57xfo6p25rbrhcg01dhyok46zt6jc5q1
>
> Best,
> Ron
[VOTE] FLIP-448: Introduce Pluggable Workflow Scheduler Interface for Materialized Table
Hi Dev,

Thank you to everyone for the feedback on FLIP-448: Introduce Pluggable Workflow Scheduler Interface for Materialized Table [1][2]. I'd like to start a vote for it. The vote will be open for at least 72 hours unless there is an objection or not enough votes.

[1] https://cwiki.apache.org/confluence/display/FLINK/FLIP-448%3A+Introduce+Pluggable+Workflow+Scheduler+Interface+for+Materialized+Table
[2] https://lists.apache.org/thread/57xfo6p25rbrhcg01dhyok46zt6jc5q1

Best,
Ron
Re: [VOTE] FLIP-452: Allow Skipping Invocation of Function Calls While Constant-folding
+1 (binding) Timo Walther 于2024年5月8日周三 17:15写道: > > +1 (binding) > > Thanks, > Timo > > On 08.05.24 11:10, Stefan Richter wrote: > > Hi Alan, > > > > Thanks for this proposal, the ability to exclude functions from constant > > folding makes sense to me. > > > > +1 (binding) > > > > Best, > > Stefan > > > >> On 8. May 2024, at 02:01, Alan Sheinberg > >> wrote: > >> > >> Hi everyone, > >> > >> I'd like to start a vote on FLIP-452 [1]. It covers adding a new method > >> FunctionDefinition.supportsConstantFolding() as part of the Flink Table/SQL > >> API to allow skipping invocation of functions while constant-folding. It > >> has been discussed in this thread [2]. > >> > >> I would like to start a vote. The vote will be open for at least 72 hours > >> unless there is an objection or insufficient votes. > >> > >> [1] > >> https://www.google.com/url?q=https://cwiki.apache.org/confluence/display/FLINK/FLIP-452%253A%2BAllow%2BSkipping%2BInvocation%2Bof%2BFunction%2BCalls%2BWhile%2BConstant-folding=gmail-imap=171573131400=AOvVaw3sVTK3M3Qs45haptzQbUmo > >> > >> [2] > >> https://www.google.com/url?q=https://lists.apache.org/thread/ko5ndv5kr87nm011psll2hzzd0nn3ztz=gmail-imap=171573131400=AOvVaw3YKYwhLhbgWkX5hbzHRW31 > >> > >> Thanks, > >> Alan > > > -- Best, Benchao Li
[RESULT][VOTE] FLIP-447: Upgrade FRocksDB from 6.20.3 to 8.10.0
Hi everyone,

Thanks for your review and the votes! I am happy to announce that FLIP-447: Upgrade FRocksDB from 6.20.3 to 8.10.0 [1] has been accepted.

The proposal has been accepted with 15 approving votes (9 binding) and there are no disapprovals:

- Zakelly Lan (binding)
- Yanfei Lei (binding)
- Rui Fan (binding)
- Yuan Mei (binding)
- Hangxiang Yu (binding)
- Stefan Richter (binding)
- Muhammet Orazov (non-binding)
- Yun Tang (binding)
- Gabor Somogyi (non-binding)
- Roc Marshal (non-binding)
- gongzhongqiang (non-binding)
- Roman Khachatryan (binding)
- Piotr Nowojski (binding)
- ConradJam (non-binding)
- zhourenxiang (non-binding)

Thanks to all involved.

[1] https://cwiki.apache.org/confluence/display/FLINK/FLIP-447%3A+Upgrade+FRocksDB+from+6.20.3++to+8.10.0
[2] https://lists.apache.org/thread/r92qoxkt1kwtkbx9p45cpx4jto7s3l0d

--
Best,
Yue
[jira] [Created] (FLINK-35315) MemoryManagerConcurrentModReleaseTest executes more than 15 minutes
Rui Fan created FLINK-35315:
----------------------------

Summary: MemoryManagerConcurrentModReleaseTest executes more than 15 minutes
Key: FLINK-35315
URL: https://issues.apache.org/jira/browse/FLINK-35315
Project: Flink
Issue Type: Bug
Components: Runtime / Network, Tests
Affects Versions: 1.20.0
Reporter: Rui Fan
Attachments: image-2024-05-09-11-53-10-037.png

[https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=59395=results]

It seems MemoryManagerConcurrentModReleaseTest.testConcurrentModificationWhileReleasing executes for more than 15 minutes. The root cause may be ConcurrentModificationException.

[https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=59395=logs=0da23115-68bb-5dcd-192c-bd4c8adebde1=24c3384f-1bcb-57b3-224f-51bf973bbee8=10060]

(screenshot attached: image-2024-05-09-11-53-10-037.png)

--
This message was sent by Atlassian Jira (v8.20.10#820010)
Re: [VOTE] FLIP-447: Upgrade FRocksDB from 6.20.3 to 8.10.0
+1 (non-binding) ConradJam 于2024年5月7日周二 11:35写道: > +1 (no-binding) > > Piotr Nowojski 于2024年5月6日周一 20:17写道: > > > +1 (binding) > > > > Piotrek > > > > pon., 6 maj 2024 o 12:35 Roman Khachatryan > napisał(a): > > > > > +1 (binding) > > > > > > Regards, > > > Roman > > > > > > > > > On Mon, May 6, 2024 at 11:56 AM gongzhongqiang < > > gongzhongqi...@apache.org> > > > wrote: > > > > > > > +1 (non-binding) > > > > > > > > Best, > > > > Zhongqiang Gong > > > > > > > > yue ma 于2024年5月6日周一 10:54写道: > > > > > > > > > Hi everyone, > > > > > > > > > > Thanks for all the feedback, I'd like to start a vote on the > > FLIP-447: > > > > > Upgrade FRocksDB from 6.20.3 to 8.10.0 [1]. The discussion thread > is > > > here > > > > > [2]. > > > > > > > > > > The vote will be open for at least 72 hours unless there is an > > > objection > > > > or > > > > > insufficient votes. > > > > > > > > > > [1] > > > > > > > > > > > > > > > > > > > > https://cwiki.apache.org/confluence/display/FLINK/FLIP-447%3A+Upgrade+FRocksDB+from+6.20.3++to+8.10.0 > > > > > [2] > https://lists.apache.org/thread/lrxjfpjjwlq4sjzm1oolx58n1n8r48hw > > > > > > > > > > -- > > > > > Best, > > > > > Yue > > > > > > > > > > > > > > > > > -- > Best > > ConradJam > -- Best, renxiang
[jira] [Created] (FLINK-35314) Add Flink CDC pipeline transform user document
Wenkai Qi created FLINK-35314:
------------------------------

Summary: Add Flink CDC pipeline transform user document
Key: FLINK-35314
URL: https://issues.apache.org/jira/browse/FLINK-35314
Project: Flink
Issue Type: New Feature
Components: Flink CDC
Reporter: Wenkai Qi

The document outline is as follows:

1. Definition
2. Parameters
3. Metadata Fields
4. Functions
5. Example
6. Problem
[jira] [Created] (FLINK-35313) Add upsert changelog mode to avoid UPDATE_BEFORE records push down
ude created FLINK-35313:
------------------------

Summary: Add upsert changelog mode to avoid UPDATE_BEFORE records push down
Key: FLINK-35313
URL: https://issues.apache.org/jira/browse/FLINK-35313
Project: Flink
Issue Type: New Feature
Components: Flink CDC
Reporter: ude

I am trying to use Flink SQL to write MySQL CDC data into Redis as a dimension table for other business use. When executing an UPDATE DML, the CDC data is converted into two records, -D (UPDATE_BEFORE) and +I (UPDATE_AFTER), before being written to the Redis sink. However, deleting first causes other data streams to see missing (NULL) values when joining, which is unacceptable.

I think we can add support for upsert changelog mode [1] by adding a changelogMode option with a mandatory primary key configuration. Basically, with changelogMode=upsert we will avoid UPDATE_BEFORE rows and we will require a primary key for the table.

[1] https://nightlies.apache.org/flink/flink-docs-master/docs/dev/table/concepts/dynamic_tables/#table-to-stream-conversion
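A minimal sketch of the requested semantics (illustrative Java only, not Flink's API; all class and field names below are hypothetical): in upsert mode, the before-image rows never reach the sink, so a keyed store like Redis is overwritten in place instead of being transiently deleted.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch only, not Flink's API: what "upsert" changelog mode
// means for a key-value sink. All names here are hypothetical.
class UpsertModeSketch {

    enum Kind { INSERT, UPDATE_BEFORE, UPDATE_AFTER, DELETE }

    static final class Row {
        final Kind kind; final String key; final String value;
        Row(Kind kind, String key, String value) {
            this.kind = kind; this.key = key; this.value = value;
        }
    }

    // In upsert mode the before-image is dropped: the sink overwrites by
    // primary key, so it never needs (and never sees) UPDATE_BEFORE rows.
    static List<Row> toUpsertStream(List<Row> changelog) {
        List<Row> out = new ArrayList<>();
        for (Row r : changelog) {
            if (r.kind != Kind.UPDATE_BEFORE) {
                out.add(r);
            }
        }
        return out;
    }

    public static void main(String[] args) {
        List<Row> changelog = new ArrayList<>();
        changelog.add(new Row(Kind.INSERT, "user1", "a"));
        changelog.add(new Row(Kind.UPDATE_BEFORE, "user1", "a")); // would delete the Redis key
        changelog.add(new Row(Kind.UPDATE_AFTER, "user1", "b"));
        List<Row> upserts = toUpsertStream(changelog);
        System.out.println(upserts.size()); // prints 2: the transient delete is gone
    }
}
```

This mirrors the ticket's proposal: with a mandatory primary key, dropping UPDATE_BEFORE is safe because the key identifies which row the UPDATE_AFTER replaces.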
[jira] [Created] (FLINK-35312) Insufficient number of arguments were supplied for the procedure or function cdc.fn_cdc_get_all_changes_
yux created FLINK-35312:
------------------------

Summary: Insufficient number of arguments were supplied for the procedure or function cdc.fn_cdc_get_all_changes_
Key: FLINK-35312
URL: https://issues.apache.org/jira/browse/FLINK-35312
Project: Flink
Issue Type: Bug
Components: Flink CDC
Reporter: yux

Flink version: 1.17.0
Flink CDC version: 2.4.1
Database and its version: SQL Server 2014
Minimal reproduce step: 1

What did you expect to see?

Caused by: java.lang.RuntimeException: SplitFetcher thread 22 received unexpected exception while polling the records
    at org.apache.flink.connector.base.source.reader.fetcher.SplitFetcher.runOnce(SplitFetcher.java:165)
    at org.apache.flink.connector.base.source.reader.fetcher.SplitFetcher.run(SplitFetcher.java:114)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    ... 1 more
Caused by: org.apache.kafka.connect.errors.RetriableException: An exception occurred in the change event producer. This connector will be restarted.
    at io.debezium.pipeline.ErrorHandler.setProducerThrowable(ErrorHandler.java:46)
    at io.debezium.connector.sqlserver.SqlServerStreamingChangeEventSource.executeIteration(SqlServerStreamingChangeEventSource.java:458)
    at io.debezium.connector.sqlserver.SqlServerStreamingChangeEventSource.execute(SqlServerStreamingChangeEventSource.java:138)
    at com.ververica.cdc.connectors.sqlserver.source.reader.fetch.SqlServerStreamFetchTask$LsnSplitReadTask.execute(SqlServerStreamFetchTask.java:161)
    at com.ververica.cdc.connectors.sqlserver.source.reader.fetch.SqlServerScanFetchTask.execute(SqlServerScanFetchTask.java:123)
    at com.ververica.cdc.connectors.base.source.reader.external.IncrementalSourceScanFetcher.lambda$submitTask$0(IncrementalSourceScanFetcher.java:95)
    ... 5 more
Caused by: com.microsoft.sqlserver.jdbc.SQLServerException: An insufficient number of arguments were supplied for the procedure or function cdc.fn_cdc_get_all_changes_ ... .
    at com.microsoft.sqlserver.jdbc.SQLServerException.makeFromDatabaseError(SQLServerException.java:265)
    at com.microsoft.sqlserver.jdbc.SQLServerResultSet$FetchBuffer.nextRow(SQLServerResultSet.java:5471)
    at com.microsoft.sqlserver.jdbc.SQLServerResultSet.fetchBufferNext(SQLServerResultSet.java:1794)
    at com.microsoft.sqlserver.jdbc.SQLServerResultSet.next(SQLServerResultSet.java:1052)
    at io.debezium.pipeline.source.spi.ChangeTableResultSet.next(ChangeTableResultSet.java:63)
    at io.debezium.connector.sqlserver.SqlServerStreamingChangeEventSource.lambda$executeIteration$1(SqlServerStreamingChangeEventSource.java:269)
    at io.debezium.jdbc.JdbcConnection.prepareQuery(JdbcConnection.java:606)
    at io.debezium.connector.sqlserver.SqlServerConnection.getChangesForTables(SqlServerConnection.java:329)
    at io.debezium.connector.sqlserver.SqlServerStreamingChangeEventSource.executeIteration(SqlServerStreamingChangeEventSource.java:251)
    ... 9 more
flink-connector-kafka weekly CI job failing
Hi,

I noticed the flink-connector-kafka weekly CI job is failing:
https://github.com/apache/flink-connector-kafka/actions/runs/8954222477

Looks like flink-connector-kafka main has a compile error against Flink 1.20-SNAPSHOT. I tried locally and get a different compile failure:

KafkaSerializerUpgradeTest.java:[23,45] cannot find symbol
[ERROR] symbol: class TypeSerializerMatchers
[ERROR] location: package org.apache.flink.api.common.typeutils

Should 1.20-SNAPSHOT be removed from the weekly tests for now?

Thanks,
Rob
Best Practices? Fault Isolation for Processing Large Number of Same-Shaped Input Kafka Topics in a Big Flink Job
Hi everyone,

I'm currently prototyping on a project where we need to process a large number of Kafka input topics (say, a couple of hundred), all of which share the same DataType/Schema. Our objective is to run the same Flink SQL on all of the input topics, but I am concerned about doing this in a single large Flink SQL application for fault-isolation purposes. We'd like to limit the "blast radius" in case of data issues or "poison pills" in any particular Kafka topic: if one topic runs into a problem, it shouldn't compromise or halt the processing of the others. At the same time, we are concerned about the operational toil of managing hundreds of Flink jobs that are really one logical application.

Has anyone here tackled a similar challenge? If so:

1. How did you design your solution to handle a vast number of topics without creating a heavy management burden?
2. What strategies or patterns have you found effective in isolating issues within a specific topic so that they do not affect the processing of others?
3. Are there specific configurations or tools within the Flink ecosystem that you'd recommend to efficiently manage this scenario?

Any examples, suggestions, or references to relevant documentation would be helpful. Thank you in advance for your time and help!
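One middle-ground pattern between "one giant job" and "one job per topic" is to deterministically shard the topics into a fixed number of job groups, so a poison pill only halts the jobs in its group. A minimal sketch of the sharding step (plain Java, topic names and group count are made up for illustration; job submission itself is out of scope here):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

// Sketch: bound the blast radius by partitioning N topics into K job groups.
// Each group would be submitted as one Flink job reading its topic list.
class TopicSharding {

    static Map<Integer, List<String>> shard(List<String> topics, int groups) {
        Map<Integer, List<String>> byGroup = new TreeMap<>();
        for (String topic : topics) {
            // Math.floorMod keeps the group index non-negative for any hashCode.
            int g = Math.floorMod(topic.hashCode(), groups);
            byGroup.computeIfAbsent(g, k -> new ArrayList<>()).add(topic);
        }
        return byGroup;
    }

    public static void main(String[] args) {
        List<String> topics = new ArrayList<>();
        for (int i = 0; i < 200; i++) {
            topics.add("events-" + i); // hypothetical topic names
        }
        Map<Integer, List<String>> plan = shard(topics, 10);
        int total = plan.values().stream().mapToInt(List::size).sum();
        System.out.println(total); // 200: every topic lands in exactly one group
    }
}
```

Because the assignment is deterministic, redeploying produces the same plan, and a failing topic can later be moved into a quarantine group of its own without disturbing the rest.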
Re: [RESULT][VOTE] FLIP-454: New Apicurio Avro format
Thanks David for driving the FLIP forward, but we need 3 +1 (binding) votes according to the Flink Bylaws [1] before the community can accept it.

Best,
Leonard

[1] https://cwiki.apache.org/confluence/display/FLINK/Flink+Bylaws

> On May 8, 2024, at 11:05 PM, David Radley wrote:
>
> Hi everyone,
> I am happy to say that FLIP-454: New Apicurio Avro format [1] has been
> accepted and voted through this thread [2].
>
> The proposal has been accepted with 4 approving votes and there
> are no vetos:
>
> - Ahmed Hamdy (non-binding)
> - Jeyhun Karimov (non-binding)
> - Mark Nuttall (non-binding)
> - Nic Townsend (non-binding)
>
> Martijn:
> Please could you update the FLIP with:
> - the voting thread link
> - the accepted status
> - the Jira number (https://issues.apache.org/jira/browse/FLINK-35311).
> As the involved committer, are you willing to assign me the Jira to work on
> and merge once you approve the changes?
>
> [1] https://cwiki.apache.org/confluence/display/FLINK/FLIP-454%3A+New+Apicurio+Avro+format
> [2] https://lists.apache.org/list?dev@flink.apache.org:lte=1M:apicurio
>
> Thanks to all involved.
>
> Kind regards,
> David
>
> Unless otherwise stated above:
>
> IBM United Kingdom Limited
> Registered in England and Wales with number 741598
> Registered office: PO Box 41, North Harbour, Portsmouth, Hants. PO6 3AU
Re: [RESULT][VOTE] FLIP-454: New Apicurio Avro format
Hi David, thanks for the update.

We should wait to reach the threshold of "binding" votes as per the process [1].

[1] https://cwiki.apache.org/confluence/display/FLINK/Flink+Bylaws#FlinkBylaws-Approvals

Best Regards
Ahmed Hamdy

On Wed, 8 May 2024 at 16:06, David Radley wrote:
> Hi everyone,
> I am happy to say that FLIP-454: New Apicurio Avro format [1] has been
> accepted and voted through this thread [2].
>
> The proposal has been accepted with 4 approving votes and there
> are no vetos:
>
> - Ahmed Hamdy (non-binding)
> - Jeyhun Karimov (non-binding)
> - Mark Nuttall (non-binding)
> - Nic Townsend (non-binding)
>
> Martijn:
> Please could you update the FLIP with:
> - the voting thread link
> - the accepted status
> - the Jira number (https://issues.apache.org/jira/browse/FLINK-35311).
> As the involved committer, are you willing to assign me the Jira to work
> on and merge once you approve the changes?
>
> [1] https://cwiki.apache.org/confluence/display/FLINK/FLIP-454%3A+New+Apicurio+Avro+format
> [2] https://lists.apache.org/list?dev@flink.apache.org:lte=1M:apicurio
>
> Thanks to all involved.
>
> Kind regards,
> David
>
> Unless otherwise stated above:
>
> IBM United Kingdom Limited
> Registered in England and Wales with number 741598
> Registered office: PO Box 41, North Harbour, Portsmouth, Hants. PO6 3AU
Re: [DISCUSS] Flink CDC 3.2 Release Planning
+1 for the proposal code freeze date and RM candidate. Best, Leonard > 2024年5月8日 下午10:27,gongzhongqiang 写道: > > Hi Qingsheng > > Thank you for driving the release. > Agree with the goal and I'm willing to help. > > Best, > Zhongqiang Gong > > Qingsheng Ren 于2024年5月8日周三 14:22写道: > >> Hi devs, >> >> As we are in the midst of the release voting process for Flink CDC 3.1.0, I >> think it's a good time to kick off the upcoming Flink CDC 3.2 release >> cycle. >> >> In this release cycle I would like to focus on the stability of Flink CDC, >> especially for the newly introduced YAML-based data integration >> framework. To ensure we can iterate and improve swiftly, I propose to make >> 3.2 a relatively short release cycle, targeting a feature freeze by May 24, >> 2024. >> >> For developers that are interested in participating and contributing new >> features in this release cycle, please feel free to list your planning >> features in the wiki page [1]. >> >> I'm happy to volunteer as a release manager and of course open to work >> together with someone on this. >> >> What do you think? >> >> Best, >> Qingsheng >> >> [1] >> https://cwiki.apache.org/confluence/display/FLINK/Flink+CDC+3.2+Release >>
[RESULT][VOTE] FLIP-454: New Apicurio Avro format
Hi everyone,

I am happy to say that FLIP-454: New Apicurio Avro format [1] has been accepted and voted through this thread [2].

The proposal has been accepted with 4 approving votes and there are no vetos:

- Ahmed Hamdy (non-binding)
- Jeyhun Karimov (non-binding)
- Mark Nuttall (non-binding)
- Nic Townsend (non-binding)

Martijn:
Please could you update the FLIP with:
- the voting thread link
- the accepted status
- the Jira number (https://issues.apache.org/jira/browse/FLINK-35311).
As the involved committer, are you willing to assign me the Jira to work on and merge once you approve the changes?

[1] https://cwiki.apache.org/confluence/display/FLINK/FLIP-454%3A+New+Apicurio+Avro+format
[2] https://lists.apache.org/list?dev@flink.apache.org:lte=1M:apicurio

Thanks to all involved.

Kind regards,
David

Unless otherwise stated above:

IBM United Kingdom Limited
Registered in England and Wales with number 741598
Registered office: PO Box 41, North Harbour, Portsmouth, Hants. PO6 3AU
[jira] [Created] (FLINK-35311) FLIP-454: New Apicurio Avro format
david radley created FLINK-35311:
---------------------------------

Summary: FLIP-454: New Apicurio Avro format
Key: FLINK-35311
URL: https://issues.apache.org/jira/browse/FLINK-35311
Project: Flink
Issue Type: Improvement
Components: Connectors / Kafka
Affects Versions: 1.18.1, 1.19.0, 1.17.2
Reporter: david radley
Fix For: 2.0.0, 1.20.0

This Jira is for the accepted FLIP-454 [1]. It involves changes to two repositories: core Flink and the Flink Kafka connector.

[1] https://cwiki.apache.org/confluence/display/FLINK/FLIP-454%3A+New+Apicurio+Avro+format
[jira] [Created] (FLINK-35310) Replace RBAC verb wildcards with actual verbs
Tim created FLINK-35310:
------------------------

Summary: Replace RBAC verb wildcards with actual verbs
Key: FLINK-35310
URL: https://issues.apache.org/jira/browse/FLINK-35310
Project: Flink
Issue Type: Improvement
Components: Kubernetes Operator
Environment: Running on Kubernetes using the flink-operator version 1.8.0
Reporter: Tim

We are deploying the Flink operator on a managed Kubernetes cluster which utilizes Kyverno policy management [1] and all its default rules. Not complying with certain rules leads to a restriction in deploying. As we are using Helm to build the manifest files (which is super useful), I noticed that in the RBAC template wildcards ("*") are being used for all verbs. This violates the following Kyverno ruleset: https://kyverno.io/policies/other/restrict-wildcard-verbs/restrict-wildcard-verbs/

Besides that, I think it would also be cleaner to explicitly list the needed verbs instead of just using the star symbol as a wildcard. I have already attempted this change in a fork as a demonstration of how it could be made to conform. Please take a look; I would greatly appreciate a change in that direction.

[1] https://kyverno.io/
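For illustration only, the shape of the change in a Role/ClusterRole rule: spelling out verbs instead of the wildcard. The apiGroups, resources, and verb list below are guesses for a typical operator rule, not taken from the operator's actual RBAC template; the real set would need to be derived from the API calls the operator makes.

```yaml
# Before (flagged by Kyverno's restrict-wildcard-verbs policy):
#   verbs: ["*"]
# After (illustrative explicit verbs; verify against the operator's needs):
rules:
  - apiGroups: ["apps"]
    resources: ["deployments"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
```

Explicit verbs also make it obvious during review when a new permission is being added, which a wildcard silently absorbs.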
Re: [DISCUSS] Flink CDC 3.2 Release Planning
Hi Qingsheng Thank you for driving the release. Agree with the goal and I'm willing to help. Best, Zhongqiang Gong Qingsheng Ren 于2024年5月8日周三 14:22写道: > Hi devs, > > As we are in the midst of the release voting process for Flink CDC 3.1.0, I > think it's a good time to kick off the upcoming Flink CDC 3.2 release > cycle. > > In this release cycle I would like to focus on the stability of Flink CDC, > especially for the newly introduced YAML-based data integration > framework. To ensure we can iterate and improve swiftly, I propose to make > 3.2 a relatively short release cycle, targeting a feature freeze by May 24, > 2024. > > For developers that are interested in participating and contributing new > features in this release cycle, please feel free to list your planning > features in the wiki page [1]. > > I'm happy to volunteer as a release manager and of course open to work > together with someone on this. > > What do you think? > > Best, > Qingsheng > > [1] > https://cwiki.apache.org/confluence/display/FLINK/Flink+CDC+3.2+Release >
Re: [DISCUSS] FLIP-444: Native file copy support
Hi Piotr +1 for the proposal, it seems to have a lot of gains. Best Regards Ahmed Hamdy On Mon, 6 May 2024 at 12:06, Zakelly Lan wrote: > Hi Piotrek, > > Thanks for your answers! > > Good question. The intention and use case behind `DuplicatingFileSystem` is > > different. It marks if `FileSystem` can quickly copy/duplicate files > > in the remote `FileSystem`. For example an equivalent of a hard link or > > bumping a reference count in the remote system. That's a bit different > > to copy paths between remote and local file systems. > > > > However, it could arguably be unified under one interface where we would > > re-use or re-name `canFastDuplicate(Path, Path)` to > > `canFastCopy(Path, Path)` with the following use cases: > > - `canFastCopy(remoteA, remoteB)` returns true - current equivalent of > > `DuplicatingFileSystem` - quickly duplicate/hard link remote path > > - `canFastCopy(local, remote)` returns true - FS can natively upload > local > > file to a remote location > > - `canFastCopy(remote, local)` returns true - FS can natively download > > local file from a remote location > > > > Maybe indeed that's a better solution vs having two separate interfaces > for > > copying and duplicating? > > > > I'd prefer a unified one interface, `canFastCopy(Path, Path)` looks good to > me. This also resolves my question 1 about the destination. > > > Best, > Zakelly > > On Mon, May 6, 2024 at 6:36 PM Piotr Nowojski > wrote: > > > Hi All! > > > > Thanks for your comments. > > > > Muhammet and Hong, about the config options. > > > > > Could you please also add the configuration property for this? An > example > > showing how users would set this parameter would be helpful. > > > > > 1/ Configure the implementation of PathsCopyingFileSystem used > > > 2/ Configure the location of the s5cmd binary (version control etc.) > > > > Ops, sorry I added the config options that I had in mind to the FLIP. I > > don't know why I have omitted this. 
Basically I suggest that in order to > > use native file copying: > > 1. `FileSystem` must support it via implementing `PathsCopyingFileSystem` > > interface > > 2. That `FileSystem` would have to be configured to actually use it. For > > example S3 file system would return `true` that it can copy paths > > only if `s3.s5cmd.path` has been specified. > > > > > Would this affect any filesystem connectors that use FileSystem[1][2] > > dependencies? > > > > Definitely not out of the box. Any place in Flink that is currently > > uploading/downloading files from a FileSystem could use this feature, but > > it > > would have to be implemented. The same way this FLIP will implement > native > > files copying when downloading state during recovery, > > but the old code path will be still used for uploading state files > during a > > checkpoint. > > > > > How adding a s5cmd will affect memory footprint? Since this is a native > > binary, memory consumption will not be controlled by JVM or Flink. > > > > As you mentioned the memory usage of `s5cmd` will not be controlled, so > the > > memory footprint will grow. S5cmd integration with Flink > > has been tested quite extensively on our production environment already, > > and we haven't observed any issues so far despite the fact we > > are using quite small pods. But of course if your setup is working on the > > edge of OOM, this could tip you over that edge. > > > > Zakelly: > > > > > 1. What is the semantic of `canCopyPath`? Should it be associated with > a > > > specific destination path? e.g. It can be copied to local, but not to > the > > > remote FS. > > > > For the S3 (both for SDKv2 and s5cmd implementations), the copying > > direction (upload/download) doesn't matter. I don't know about other > > file systems, I haven't investigated anything besides S3. Nevertheless I > > wouldn't worry too much about it, since we can start with the simple > > `canCopyPath` that handles both directions. 
If this will become important > > in the future, adding directional `canDownloadPath` or `canUploadPath` > > would be a backward compatible change, so we can safely extend it in the > > future if needed. > > > > > 2. Is the existing interface `DuplicatingFileSystem` feasible/enough > for > > this case? > > > > Good question. The intention and use case behind `DuplicatingFileSystem` > is > > different. It marks if `FileSystem` can quickly copy/duplicate files > > in the remote `FileSystem`. For example an equivalent of a hard link or > > bumping a reference count in the remote system. That's a bit different > > to copy paths between remote and local file systems. > > > > However, it could arguably be unified under one interface where we would > > re-use or re-name `canFastDuplicate(Path, Path)` to > > `canFastCopy(Path, Path)` with the following use cases: > > - `canFastCopy(remoteA, remoteB)` returns true - current equivalent of > > `DuplicatingFileSystem` - quickly duplicate/hard link remote path > > - `canFastCopy(local, remote)`
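The unified interface discussed above can be sketched as follows. This is plain illustrative Java, not Flink's actual API: the method name `canFastCopy` and the `s3.s5cmd.path`-style gating follow the thread, but the types, the use of String paths, and the class names are all hypothetical.

```java
// Sketch of the unified "fast copy" capability discussed in the thread:
// one predicate covering remote->remote duplication, upload, and download.
class FastCopySketch {

    interface PathsCopyingFileSystem {
        // True if this file system can copy source -> destination natively,
        // regardless of direction (as noted for S3 in the thread).
        boolean canFastCopy(String source, String destination);
    }

    // An S3-like file system that only advertises native copying when an
    // external copy tool (e.g. an s5cmd binary path) has been configured.
    static class S3LikeFileSystem implements PathsCopyingFileSystem {
        private final String s5cmdPath; // e.g. the value of an "s3.s5cmd.path" option

        S3LikeFileSystem(String s5cmdPath) {
            this.s5cmdPath = s5cmdPath;
        }

        @Override
        public boolean canFastCopy(String source, String destination) {
            return s5cmdPath != null && !s5cmdPath.isEmpty();
        }
    }

    public static void main(String[] args) {
        PathsCopyingFileSystem configured = new S3LikeFileSystem("/usr/local/bin/s5cmd");
        PathsCopyingFileSystem unconfigured = new S3LikeFileSystem(null);
        System.out.println(configured.canFastCopy("s3://bucket/a", "/local/a"));   // true
        System.out.println(unconfigured.canFastCopy("s3://bucket/a", "/local/a")); // false
    }
}
```

If directionality ever matters, `canDownloadPath`/`canUploadPath` variants could be added later without breaking callers, exactly as the thread suggests.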
Re: [DISCUSS] Flink CDC 3.2 Release Planning
Thanks Qinsheng for driving, +1 for the feature freeze date. I am happy to assist with any release duties. Best Regards Ahmed Hamdy On Wed, 8 May 2024 at 10:50, Jiabao Sun wrote: > Thanks Qingsheng, > > Improving stability is crucial for Flink CDC, looking forward to this > release. > If assistance is needed, I am happy to help with it. > > Best, > Jiabao > > Qingsheng Ren 于2024年5月8日周三 14:22写道: > > > Hi devs, > > > > As we are in the midst of the release voting process for Flink CDC > 3.1.0, I > > think it's a good time to kick off the upcoming Flink CDC 3.2 release > > cycle. > > > > In this release cycle I would like to focus on the stability of Flink > CDC, > > especially for the newly introduced YAML-based data integration > > framework. To ensure we can iterate and improve swiftly, I propose to > make > > 3.2 a relatively short release cycle, targeting a feature freeze by May > 24, > > 2024. > > > > For developers that are interested in participating and contributing new > > features in this release cycle, please feel free to list your planning > > features in the wiki page [1]. > > > > I'm happy to volunteer as a release manager and of course open to work > > together with someone on this. > > > > What do you think? > > > > Best, > > Qingsheng > > > > [1] > > https://cwiki.apache.org/confluence/display/FLINK/Flink+CDC+3.2+Release > > >
Re: [DISCUSS] Flink CDC 3.2 Release Planning
Hey Qingsheng,

Thanks for your efforts, agreed! I would be happy to help.

Best,
Muhammet

On 2024-05-08 06:21, Qingsheng Ren wrote:
> Hi devs,
>
> As we are in the midst of the release voting process for Flink CDC 3.1.0, I
> think it's a good time to kick off the upcoming Flink CDC 3.2 release cycle.
>
> In this release cycle I would like to focus on the stability of Flink CDC,
> especially for the newly introduced YAML-based data integration framework.
> To ensure we can iterate and improve swiftly, I propose to make 3.2 a
> relatively short release cycle, targeting a feature freeze by May 24, 2024.
>
> For developers that are interested in participating and contributing new
> features in this release cycle, please feel free to list your planning
> features in the wiki page [1].
>
> I'm happy to volunteer as a release manager and of course open to work
> together with someone on this.
>
> What do you think?
>
> Best,
> Qingsheng
>
> [1] https://cwiki.apache.org/confluence/display/FLINK/Flink+CDC+3.2+Release
Re: [DISCUSS] Flink CDC 3.2 Release Planning
Thanks Qingsheng, Improving stability is crucial for Flink CDC, looking forward to this release. If assistance is needed, I am happy to help with it. Best, Jiabao Qingsheng Ren 于2024年5月8日周三 14:22写道: > Hi devs, > > As we are in the midst of the release voting process for Flink CDC 3.1.0, I > think it's a good time to kick off the upcoming Flink CDC 3.2 release > cycle. > > In this release cycle I would like to focus on the stability of Flink CDC, > especially for the newly introduced YAML-based data integration > framework. To ensure we can iterate and improve swiftly, I propose to make > 3.2 a relatively short release cycle, targeting a feature freeze by May 24, > 2024. > > For developers that are interested in participating and contributing new > features in this release cycle, please feel free to list your planning > features in the wiki page [1]. > > I'm happy to volunteer as a release manager and of course open to work > together with someone on this. > > What do you think? > > Best, > Qingsheng > > [1] > https://cwiki.apache.org/confluence/display/FLINK/Flink+CDC+3.2+Release >
Re: [VOTE] FLIP-452: Allow Skipping Invocation of Function Calls While Constant-folding
+1 (binding)

Thanks,
Timo

On 08.05.24 11:10, Stefan Richter wrote:
> Hi Alan,
>
> Thanks for this proposal, the ability to exclude functions from constant
> folding makes sense to me.
>
> +1 (binding)
>
> Best,
> Stefan
>
>> On 8. May 2024, at 02:01, Alan Sheinberg wrote:
>>
>> Hi everyone,
>>
>> I'd like to start a vote on FLIP-452 [1]. It covers adding a new method
>> FunctionDefinition.supportsConstantFolding() as part of the Flink Table/SQL
>> API to allow skipping invocation of functions while constant-folding. It
>> has been discussed in this thread [2].
>>
>> I would like to start a vote. The vote will be open for at least 72 hours
>> unless there is an objection or insufficient votes.
>>
>> [1] https://cwiki.apache.org/confluence/display/FLINK/FLIP-452%3A+Allow+Skipping+Invocation+of+Function+Calls+While+Constant-folding
>>
>> [2] https://lists.apache.org/thread/ko5ndv5kr87nm011psll2hzzd0nn3ztz
>>
>> Thanks,
>> Alan
Re: [VOTE] FLIP-452: Allow Skipping Invocation of Function Calls While Constant-folding
Hi Alan, Thanks for this proposal, the ability to exclude functions from constant folding makes sense to me. +1 (binding) Best, Stefan > On 8. May 2024, at 02:01, Alan Sheinberg > wrote: > > Hi everyone, > > I'd like to start a vote on FLIP-452 [1]. It covers adding a new method > FunctionDefinition.supportsConstantFolding() as part of the Flink Table/SQL > API to allow skipping invocation of functions while constant-folding. It > has been discussed in this thread [2]. > > I would like to start a vote. The vote will be open for at least 72 hours > unless there is an objection or insufficient votes. > > [1] > https://www.google.com/url?q=https://cwiki.apache.org/confluence/display/FLINK/FLIP-452%253A%2BAllow%2BSkipping%2BInvocation%2Bof%2BFunction%2BCalls%2BWhile%2BConstant-folding=gmail-imap=171573131400=AOvVaw3sVTK3M3Qs45haptzQbUmo > > [2] > https://www.google.com/url?q=https://lists.apache.org/thread/ko5ndv5kr87nm011psll2hzzd0nn3ztz=gmail-imap=171573131400=AOvVaw3YKYwhLhbgWkX5hbzHRW31 > > Thanks, > Alan
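To make the FLIP-452 idea concrete, here is a self-contained sketch (not Flink's real classes; the interface and function names below are mocked for illustration) of how a planner could consult a `supportsConstantFolding()` flag before pre-evaluating a call whose arguments are all constant:

```java
// Sketch of the decision FLIP-452 adds: a function definition can opt out
// of being pre-evaluated (constant-folded) at planning time.
class ConstantFoldingSketch {

    interface FunctionDefinition {
        // Default mirrors the existing behavior: constant calls may be folded.
        default boolean supportsConstantFolding() {
            return true;
        }
    }

    // A function with side effects (e.g. one that calls an external service)
    // opts out so it is invoked at runtime rather than during planning.
    static class RemoteLookupFunction implements FunctionDefinition {
        @Override
        public boolean supportsConstantFolding() {
            return false;
        }
    }

    // A pure function keeps the default and remains foldable.
    static class UpperFunction implements FunctionDefinition {}

    static boolean shouldFoldAtPlanTime(FunctionDefinition f, boolean argsAreConstant) {
        return argsAreConstant && f.supportsConstantFolding();
    }

    public static void main(String[] args) {
        System.out.println(shouldFoldAtPlanTime(new UpperFunction(), true));        // true
        System.out.println(shouldFoldAtPlanTime(new RemoteLookupFunction(), true)); // false
    }
}
```

The default-true method keeps existing user functions unchanged, while functions with side effects or non-deterministic externals can opt out with a one-line override.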
Re: [DISCUSS] FLIP-448: Introduce Pluggable Workflow Scheduler Interface for Materialized Table
Hi, Dev Thank you all for joining this thread and giving your comments and suggestions; they have helped improve this proposal, and I look forward to further feedback. If there are no further comments, I'd like to close the discussion and start the vote one day later. Best, Ron On Tue, May 7, 2024 at 20:51, Ron Liu wrote: > Hi, dev > > Following the recent PoC[1], and drawing on the excellent code design > within Flink, I have made the following optimizations to the Public > Interfaces section of the FLIP: > > 1. I have renamed WorkflowOperation to RefreshWorkflow. This change better > conveys its purpose. RefreshWorkflow is used to provide the necessary > information required for creating, modifying, and deleting workflows. Using > WorkflowOperation could mislead people into thinking it is a command > operation, whereas in fact, it does not represent an operation but merely > provides the essential context information for performing operations on > workflows. The specific operations are completed within WorkflowScheduler. > Additionally, I felt that using WorkflowOperation could potentially > conflict with the Operation[2] interface in the table. > 2. I have refined the signatures of the modifyRefreshWorkflow and > deleteRefreshWorkflow interface methods in WorkflowScheduler. The parameter > T refreshHandler is now provided by ModifyRefreshWorkflow and > DeleteRefreshWorkflow, which makes the overall interface design more > symmetrical and clean. > > [1] https://github.com/lsyldliu/flink/tree/FLIP-448-PoC > [2] > https://github.com/apache/flink/blob/29736b8c01924b7da03d4bcbfd9c812a8e5a08b4/flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/operations/Operation.java > > Best, > Ron > > On Tue, May 7, 2024 at 14:30, Ron Liu wrote: > >> > 4. It appears that in the section on `public interfaces`, within >> `WorkflowOperation`, `CreatePeriodicWorkflowOperation` should be changed to >> >> `CreateWorkflowOperation`, right? 
>> >> After discussing with Xuyang offline, we need to support both periodic >> workflows and one-time workflows, and they need different information: for >> example, a periodic workflow needs a cron expression, while a one-time workflow needs >> the refresh partition, the downstream cascaded materialized tables, etc. Therefore, >> CreateWorkflowOperation correspondingly will have two different >> implementation classes, which will be cleaner for both the implementer and >> the caller. >> >> Best, >> Ron >> >> On Mon, May 6, 2024 at 20:48, Ron Liu wrote: >> >>> Hi, Xuyang >>> >>> Thanks for joining this discussion. >>> >>> > 1. In the sequence diagram, it appears that there is a missing step >>> for obtaining the refresh handler from the catalog during the suspend >>> operation. >>> >>> Good catch. >>> >>> > 2. The term "cascade refresh" does not seem to be mentioned in >>> FLIP-435. The workflow it creates is marked as a "one-time workflow". This >>> is different >>> >>> from a "periodic workflow," and it appears to be a one-off execution. Is >>> this actually referring to the Refresh command in FLIP-435? >>> >>> Cascade refresh is future work; we don't propose the corresponding >>> syntax in FLIP-435. However, intuitively, it would be an extension of the >>> Refresh command in FLIP-435. >>> >>> > 3. The workflow-scheduler.type has no default value; should it be set >>> to CRON by default? >>> >>> Firstly, CRON is not a workflow scheduler. Secondly, I believe that >>> configuring the scheduler should be an action that users are aware of, and >>> default values should not be set. >>> >>> > 4. It appears that in the section on `public interfaces`, within >>> `WorkflowOperation`, `CreatePeriodicWorkflowOperation` should be changed to >>> >>> `CreateWorkflowOperation`, right? >>> >>> Sorry, I don't get your point. Could you describe it in more detail? >>> >>> Best, >>> Ron >>> >>> On Mon, May 6, 2024 at 20:26, Xuyang wrote: Hi, Ron. Thanks for driving this. After reading the entire FLIP, I have the following questions: 1. 
In the sequence diagram, it appears that there is a missing step for obtaining the refresh handler from the catalog during the suspend operation. 2. The term "cascade refresh" does not seem to be mentioned in FLIP-435. The workflow it creates is marked as a "one-time workflow". This is different from a "periodic workflow," and it appears to be a one-off execution. Is this actually referring to the Refresh command in FLIP-435? 3. The workflow-scheduler.type has no default value; should it be set to CRON by default? 4. It appears that in the section on `public interfaces`, within `WorkflowOperation`, `CreatePeriodicWorkflowOperation` should be changed to `CreateWorkflowOperation`, right? -- Best! Xuyang At 2024-04-22 14:41:39, "Ron Liu" wrote: >Hi, Dev > >I would like to start a
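Taken together, the refinements Ron describes in this thread (RefreshWorkflow as a pure context carrier, the refresh handler supplied by the modify/delete workflow objects so the scheduler methods stay symmetrical) could be sketched roughly as below. Every name and signature here is an illustrative guess reconstructed from the discussion, not the final FLIP-448 API, and the in-memory scheduler is a toy stand-in.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of the FLIP-448 shape discussed in this thread.
// All signatures are assumptions, not the final API.
interface RefreshHandler {}

// Carries the context needed to operate on a workflow; the operations
// themselves are performed inside WorkflowScheduler.
interface RefreshWorkflow {}

// A periodic workflow needs a cron expression (a one-time variant would
// instead carry refresh partitions, cascaded tables, etc.).
class CreatePeriodicRefreshWorkflow implements RefreshWorkflow {
    final String cronExpression;
    CreatePeriodicRefreshWorkflow(String cronExpression) { this.cronExpression = cronExpression; }
}

// Modify/delete supply the handler obtained at creation time, so the
// scheduler methods need no separate handler parameter.
class ModifyRefreshWorkflow<T extends RefreshHandler> implements RefreshWorkflow {
    final T refreshHandler;
    ModifyRefreshWorkflow(T handler) { this.refreshHandler = handler; }
}

class DeleteRefreshWorkflow<T extends RefreshHandler> implements RefreshWorkflow {
    final T refreshHandler;
    DeleteRefreshWorkflow(T handler) { this.refreshHandler = handler; }
}

interface WorkflowScheduler<T extends RefreshHandler> {
    T createRefreshWorkflow(CreatePeriodicRefreshWorkflow workflow);
    void modifyRefreshWorkflow(ModifyRefreshWorkflow<T> workflow);
    void deleteRefreshWorkflow(DeleteRefreshWorkflow<T> workflow);
}

// Toy in-memory scheduler showing the create/modify/delete call flow.
class InMemoryHandler implements RefreshHandler {
    final long id;
    InMemoryHandler(long id) { this.id = id; }
}

class InMemoryScheduler implements WorkflowScheduler<InMemoryHandler> {
    private final Map<Long, String> workflows = new HashMap<>();
    private long nextId = 0;

    public InMemoryHandler createRefreshWorkflow(CreatePeriodicRefreshWorkflow w) {
        long id = nextId++;
        workflows.put(id, w.cronExpression);
        return new InMemoryHandler(id);
    }
    public void modifyRefreshWorkflow(ModifyRefreshWorkflow<InMemoryHandler> w) {
        workflows.put(w.refreshHandler.id, "modified");
    }
    public void deleteRefreshWorkflow(DeleteRefreshWorkflow<InMemoryHandler> w) {
        workflows.remove(w.refreshHandler.id);
    }
    int size() { return workflows.size(); }
}

public class WorkflowSketch {
    public static void main(String[] args) {
        InMemoryScheduler scheduler = new InMemoryScheduler();
        InMemoryHandler handler =
            scheduler.createRefreshWorkflow(new CreatePeriodicRefreshWorkflow("0 0 * * *"));
        scheduler.modifyRefreshWorkflow(new ModifyRefreshWorkflow<>(handler));
        scheduler.deleteRefreshWorkflow(new DeleteRefreshWorkflow<>(handler));
        System.out.println(scheduler.size()); // prints: 0
    }
}
```

The symmetry Ron mentions is visible here: all three scheduler methods take a single RefreshWorkflow-derived argument, and the handler round-trips from create through modify/delete.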
Re: [VOTE] Apache Flink CDC Release 3.1.0, release candidate #1
Hi Qingsheng, I opened a blocker issue[1] to add a CI check for the NOTICE file and to fix the NOTICE file. Best, Zhongqiang Gong [1] https://issues.apache.org/jira/browse/FLINK-35309 On Wed, May 8, 2024 at 14:28, Qingsheng Ren wrote: > Thanks everyone for your verification! > > I'll investigate the issue and prepare another RC once it is resolved. Much > appreciated to Jingsong! > > This release candidate is now canceled. > > Best, > Qingsheng > > On Wed, May 8, 2024 at 2:20 PM Jingsong Li wrote: > > > -1 > > > > Thanks Qingsheng for preparing this RC. > > > > If you bundle third-party dependencies (non Flink dependencies) in > > your published jar, you need to write them in the NOTICE file. > > > > I recommend adding a test using flink-ci-tools to verify if the NOTICE > > file is correct. Of course, you cannot rely too much on tools and > > still need to manually verify if it is correct. > > > > Best, > > Jingsong > > > > On Sat, May 4, 2024 at 7:45 PM Ahmed Hamdy wrote: > > > > > > Hi Qisheng, > > > > > > +1 (non-binding) > > > > > > - Verified checksums and hashes > > > - Verified signatures > > > - Verified github tag exists > > > - Verified no binaries in source > > > - build source > > > > > > > > > Best Regards > > > Ahmed Hamdy > > > > > > > > > On Fri, 3 May 2024 at 23:03, Jeyhun Karimov > > wrote: > > > > > > > Hi Qinsheng, > > > > > > > > Thanks for driving the release. > > > > +1 (non-binding) > > > > > > > > - No binaries in source > > > > - Verified Signatures > > > > - Github tag exists > > > > - Build source > > > > > > > > Regards, > > > > Jeyhun > > > > > > > > On Thu, May 2, 2024 at 10:52 PM Muhammet Orazov > > > > wrote: > > > > > > > > > Hey Qingsheng, > > > > > > > > > > Thanks a lot! 
+1 (non-binding) > > > > > > > > > > - Checked sha512sum hash > > > > > - Checked GPG signature > > > > > - Reviewed release notes > > > > > - Reviewed GitHub web pr (added minor suggestions) > > > > > - Built the source with JDK 11 & 8 > > > > > - Checked that src doesn't contain binary files > > > > > > > > > > Best, > > > > > Muhammet > > > > > > > > > > On 2024-04-30 05:11, Qingsheng Ren wrote: > > > > > > Hi everyone, > > > > > > > > > > > > Please review and vote on the release candidate #1 for the > version > > > > > > 3.1.0 of > > > > > > Apache Flink CDC, as follows: > > > > > > [ ] +1, Approve the release > > > > > > [ ] -1, Do not approve the release (please provide specific > > comments) > > > > > > > > > > > > **Release Overview** > > > > > > > > > > > > As an overview, the release consists of the following: > > > > > > a) Flink CDC source release to be deployed to dist.apache.org > > > > > > b) Maven artifacts to be deployed to the Maven Central Repository > > > > > > > > > > > > **Staging Areas to Review** > > > > > > > > > > > > The staging areas containing the above mentioned artifacts are as > > > > > > follows, > > > > > > for your review: > > > > > > * All artifacts for a) can be found in the corresponding dev > > repository > > > > > > at > > > > > > dist.apache.org [1], which are signed with the key with > > fingerprint > > > > > > A1BD477F79D036D2C30CA7DBCA8AEEC2F6EB040B [2] > > > > > > * All artifacts for b) can be found at the Apache Nexus > Repository > > [3] > > > > > > > > > > > > Other links for your review: > > > > > > * JIRA release notes [4] > > > > > > * Source code tag "release-3.1.0-rc1" with commit hash > > > > > > 63b42cb937d481f558209ab3c8547959cf039643 [5] > > > > > > * PR for release announcement blog post of Flink CDC 3.1.0 in > > flink-web > > > > > > [6] > > > > > > > > > > > > **Vote Duration** > > > > > > > > > > > > The voting time will run for at least 72 hours, adopted by > majority > > > > > > approval with at least 
3 PMC affirmative votes. > > > > > > > > > > Thanks, > > > > > > Qingsheng Ren > > > > > > > > > > > > [1] > > https://dist.apache.org/repos/dist/dev/flink/flink-cdc-3.1.0-rc1/ > > > > > > [2] https://dist.apache.org/repos/dist/release/flink/KEYS > > > > > > [3] > > > > > > > > https://repository.apache.org/content/repositories/orgapacheflink-1731 > > > > > > [4] > > > > > > > > > > > > > > > > > > https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12354387 > > > > > > [5] > > https://github.com/apache/flink-cdc/releases/tag/release-3.1.0-rc1 > > > > > > [6] https://github.com/apache/flink-web/pull/739 > > > > > >
[jira] [Created] (FLINK-35309) Enable Notice file ci check and fix Notice
Zhongqiang Gong created FLINK-35309: --- Summary: Enable Notice file ci check and fix Notice Key: FLINK-35309 URL: https://issues.apache.org/jira/browse/FLINK-35309 Project: Flink Issue Type: Improvement Components: Flink CDC Affects Versions: 3.1.0 Reporter: Zhongqiang Gong Changes: * Add CI to check the NOTICE file * Fix NOTICE file issues -- This message was sent by Atlassian Jira (v8.20.10#820010)
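The essence of the check proposed in this issue is that every bundled third-party artifact must be named in the NOTICE file. A minimal sketch of that idea is below; it is purely illustrative, with a hand-written dependency list and inline NOTICE text as stand-ins, and is not how flink-ci-tools actually works.

```java
import java.util.List;
import java.util.stream.Collectors;

// Illustrative NOTICE sanity check: every artifact the build bundles must
// be named in the NOTICE text. The dependency list and NOTICE content are
// stand-ins; a real check would derive both from the build output.
public class NoticeCheckSketch {
    static List<String> missingFromNotice(String noticeText, List<String> bundledDeps) {
        return bundledDeps.stream()
                .filter(dep -> !noticeText.contains(dep))
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        String notice = String.join("\n",
                "This product bundles the following dependencies:",
                "- com.google.guava:guava:31.1-jre");
        List<String> bundled = List.of(
                "com.google.guava:guava:31.1-jre",
                "org.apache.commons:commons-lang3:3.12.0");
        // commons-lang3 is bundled but absent from NOTICE, so it is flagged.
        System.out.println(missingFromNotice(notice, bundled));
        // prints: [org.apache.commons:commons-lang3:3.12.0]
    }
}
```

As Jingsong notes in the thread, a check like this only catches omissions mechanically; a human still has to verify the NOTICE entries are actually correct.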
[jira] [Created] (FLINK-35308) StarRocks sink maps TINYINT type to BOOLEAN incorrectly
yux created FLINK-35308: --- Summary: StarRocks sink maps TINYINT type to BOOLEAN incorrectly Key: FLINK-35308 URL: https://issues.apache.org/jira/browse/FLINK-35308 Project: Flink Issue Type: Bug Components: Flink CDC Reporter: yux

In a MySQL -> StarRocks pipeline job, the following MySQL source table schema:

CREATE TABLE fallen_angel(
    ID VARCHAR(177) NOT NULL,
    BOOLEAN_COL BOOLEAN,
    TINYINT_COL TINYINT(1),
    PRIMARY KEY (ID)
);

is mapped to the StarRocks sink as follows:

Field        Type          Null  Key    Default  Extra
ID           varchar(531)  NO    true   NULL
BOOLEAN_COL  boolean       YES   false  NULL
TINYINT_COL  boolean       YES   false  NULL

where TINYINT_COL's type is incorrectly mapped to BOOLEAN.

-- This message was sent by Atlassian Jira (v8.20.10#820010)
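The bug reported above boils down to a mapping rule that treats MySQL's display width as type information: in MySQL, the "(1)" on TINYINT(1) is presentation-only, so the integer type should be preserved unless the user explicitly opts into the common "tinyint(1) means boolean" convention. The sketch below illustrates that rule; the function name and flag are illustrative stand-ins, not Flink CDC's actual code.

```java
// Illustrative sketch of a corrected MySQL-to-StarRocks type-mapping rule.
// Names and the opt-in flag are stand-ins, not the connector's real API.
public class TypeMappingSketch {
    static String mapMySqlType(String typeName, int displayWidth, boolean treatTinyInt1AsBoolean) {
        if ("TINYINT".equalsIgnoreCase(typeName)) {
            if (displayWidth == 1 && treatTinyInt1AsBoolean) {
                return "BOOLEAN"; // opt-in convention only
            }
            return "TINYINT"; // default: preserve the integer type
        }
        if ("BOOLEAN".equalsIgnoreCase(typeName)) {
            return "BOOLEAN";
        }
        return typeName; // pass other types through unchanged in this sketch
    }

    public static void main(String[] args) {
        System.out.println(mapMySqlType("TINYINT", 1, false)); // prints: TINYINT
        System.out.println(mapMySqlType("TINYINT", 1, true));  // prints: BOOLEAN
        System.out.println(mapMySqlType("TINYINT", 4, false)); // prints: TINYINT
    }
}
```

One wrinkle worth noting: MySQL's BOOLEAN is itself an alias for TINYINT(1), so at the metadata level the two columns in the report can look identical, which is exactly why an explicit opt-in flag (rather than an unconditional TINYINT(1)->BOOLEAN rule) is the safer design.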
Re: [VOTE] Apache Flink CDC Release 3.1.0, release candidate #1
Thanks everyone for your verification! I'll investigate the issue and prepare another RC once it is resolved. Much appreciated to Jingsong! This release candidate is now canceled. Best, Qingsheng On Wed, May 8, 2024 at 2:20 PM Jingsong Li wrote: > -1 > > Thanks Qingsheng for preparing this RC. > > If you bundle third-party dependencies (non Flink dependencies) in > your published jar, you need to write them in the NOTICE file. > > I recommend adding a test using flink-ci-tools to verify if the NOTICE > file is correct. Of course, you cannot rely too much on tools and > still need to manually verify if it is correct. > > Best, > Jingsong > > On Sat, May 4, 2024 at 7:45 PM Ahmed Hamdy wrote: > > > > Hi Qisheng, > > > > +1 (non-binding) > > > > - Verified checksums and hashes > > - Verified signatures > > - Verified github tag exists > > - Verified no binaries in source > > - build source > > > > > > Best Regards > > Ahmed Hamdy > > > > > > On Fri, 3 May 2024 at 23:03, Jeyhun Karimov > wrote: > > > > > Hi Qinsheng, > > > > > > Thanks for driving the release. > > > +1 (non-binding) > > > > > > - No binaries in source > > > - Verified Signatures > > > - Github tag exists > > > - Build source > > > > > > Regards, > > > Jeyhun > > > > > > On Thu, May 2, 2024 at 10:52 PM Muhammet Orazov > > > wrote: > > > > > > > Hey Qingsheng, > > > > > > > > Thanks a lot! 
+1 (non-binding) > > > > > > > > - Checked sha512sum hash > > > > - Checked GPG signature > > > > - Reviewed release notes > > > > - Reviewed GitHub web pr (added minor suggestions) > > > > - Built the source with JDK 11 & 8 > > > > - Checked that src doesn't contain binary files > > > > > > > > Best, > > > > Muhammet > > > > > > > > On 2024-04-30 05:11, Qingsheng Ren wrote: > > > > > Hi everyone, > > > > > > > > > > Please review and vote on the release candidate #1 for the version > > > > > 3.1.0 of > > > > > Apache Flink CDC, as follows: > > > > > [ ] +1, Approve the release > > > > > [ ] -1, Do not approve the release (please provide specific > comments) > > > > > > > > > > **Release Overview** > > > > > > > > > > As an overview, the release consists of the following: > > > > > a) Flink CDC source release to be deployed to dist.apache.org > > > > > b) Maven artifacts to be deployed to the Maven Central Repository > > > > > > > > > > **Staging Areas to Review** > > > > > > > > > > The staging areas containing the above mentioned artifacts are as > > > > > follows, > > > > > for your review: > > > > > * All artifacts for a) can be found in the corresponding dev > repository > > > > > at > > > > > dist.apache.org [1], which are signed with the key with > fingerprint > > > > > A1BD477F79D036D2C30CA7DBCA8AEEC2F6EB040B [2] > > > > > * All artifacts for b) can be found at the Apache Nexus Repository > [3] > > > > > > > > > > Other links for your review: > > > > > * JIRA release notes [4] > > > > > * Source code tag "release-3.1.0-rc1" with commit hash > > > > > 63b42cb937d481f558209ab3c8547959cf039643 [5] > > > > > * PR for release announcement blog post of Flink CDC 3.1.0 in > flink-web > > > > > [6] > > > > > > > > > > **Vote Duration** > > > > > > > > > > The voting time will run for at least 72 hours, adopted by majority > > > > > approval with at least 3 PMC affirmative votes. 
> > > > > > > > > > Thanks, > > > > > Qingsheng Ren > > > > > > > > > > [1] > https://dist.apache.org/repos/dist/dev/flink/flink-cdc-3.1.0-rc1/ > > > > > [2] https://dist.apache.org/repos/dist/release/flink/KEYS > > > > > [3] > > > > > > https://repository.apache.org/content/repositories/orgapacheflink-1731 > > > > > [4] > > > > > > > > > > > > > https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12354387 > > > > > [5] > https://github.com/apache/flink-cdc/releases/tag/release-3.1.0-rc1 > > > > > [6] https://github.com/apache/flink-web/pull/739 > > > >
[DISCUSS] Flink CDC 3.2 Release Planning
Hi devs, As we are in the midst of the release voting process for Flink CDC 3.1.0, I think it's a good time to kick off the upcoming Flink CDC 3.2 release cycle. In this release cycle I would like to focus on the stability of Flink CDC, especially for the newly introduced YAML-based data integration framework. To ensure we can iterate and improve swiftly, I propose to make 3.2 a relatively short release cycle, targeting a feature freeze by May 24, 2024. Developers who are interested in participating and contributing new features in this release cycle should feel free to list their planned features on the wiki page [1]. I'm happy to volunteer as a release manager and am of course open to working together with someone on this. What do you think? Best, Qingsheng [1] https://cwiki.apache.org/confluence/display/FLINK/Flink+CDC+3.2+Release
Re: [VOTE] Apache Flink CDC Release 3.1.0, release candidate #1
-1 Thanks Qingsheng for preparing this RC. If you bundle third-party dependencies (non Flink dependencies) in your published jar, you need to write them in the NOTICE file. I recommend adding a test using flink-ci-tools to verify if the NOTICE file is correct. Of course, you cannot rely too much on tools and still need to manually verify if it is correct. Best, Jingsong On Sat, May 4, 2024 at 7:45 PM Ahmed Hamdy wrote: > > Hi Qisheng, > > +1 (non-binding) > > - Verified checksums and hashes > - Verified signatures > - Verified github tag exists > - Verified no binaries in source > - build source > > > Best Regards > Ahmed Hamdy > > > On Fri, 3 May 2024 at 23:03, Jeyhun Karimov wrote: > > > Hi Qinsheng, > > > > Thanks for driving the release. > > +1 (non-binding) > > > > - No binaries in source > > - Verified Signatures > > - Github tag exists > > - Build source > > > > Regards, > > Jeyhun > > > > On Thu, May 2, 2024 at 10:52 PM Muhammet Orazov > > wrote: > > > > > Hey Qingsheng, > > > > > > Thanks a lot! 
+1 (non-binding) > > > > > > - Checked sha512sum hash > > > - Checked GPG signature > > > - Reviewed release notes > > > - Reviewed GitHub web pr (added minor suggestions) > > > - Built the source with JDK 11 & 8 > > > - Checked that src doesn't contain binary files > > > > > > Best, > > > Muhammet > > > > > > On 2024-04-30 05:11, Qingsheng Ren wrote: > > > > Hi everyone, > > > > > > > > Please review and vote on the release candidate #1 for the version > > > > 3.1.0 of > > > > Apache Flink CDC, as follows: > > > > [ ] +1, Approve the release > > > > [ ] -1, Do not approve the release (please provide specific comments) > > > > > > > > **Release Overview** > > > > > > > > As an overview, the release consists of the following: > > > > a) Flink CDC source release to be deployed to dist.apache.org > > > > b) Maven artifacts to be deployed to the Maven Central Repository > > > > > > > > **Staging Areas to Review** > > > > > > > > The staging areas containing the above mentioned artifacts are as > > > > follows, > > > > for your review: > > > > * All artifacts for a) can be found in the corresponding dev repository > > > > at > > > > dist.apache.org [1], which are signed with the key with fingerprint > > > > A1BD477F79D036D2C30CA7DBCA8AEEC2F6EB040B [2] > > > > * All artifacts for b) can be found at the Apache Nexus Repository [3] > > > > > > > > Other links for your review: > > > > * JIRA release notes [4] > > > > * Source code tag "release-3.1.0-rc1" with commit hash > > > > 63b42cb937d481f558209ab3c8547959cf039643 [5] > > > > * PR for release announcement blog post of Flink CDC 3.1.0 in flink-web > > > > [6] > > > > > > > > **Vote Duration** > > > > > > > > The voting time will run for at least 72 hours, adopted by majority > > > > approval with at least 3 PMC affirmative votes. 
> > > > > > > > Thanks, > > > > Qingsheng Ren > > > > > > > > [1] https://dist.apache.org/repos/dist/dev/flink/flink-cdc-3.1.0-rc1/ > > > > [2] https://dist.apache.org/repos/dist/release/flink/KEYS > > > > [3] > > > > https://repository.apache.org/content/repositories/orgapacheflink-1731 > > > > [4] > > > > > https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12354387 > > > > [5] https://github.com/apache/flink-cdc/releases/tag/release-3.1.0-rc1 > > > > [6] https://github.com/apache/flink-web/pull/739 > > > > >