Re: [DISCUSS] FLIP-445: Support dynamic parallelism inference for HiveSource
Thanks for driving the discussion. +1 for the proposal and +1 for the `InferMode.NONE` option. Best, Lijie Ron liu 于2024年4月18日周四 11:36写道: > Hi, Xia > > Thanks for driving this FLIP. > > This proposal looks good to me overall. However, I have the following minor > questions: > > 1. FLIP introduced `table.exec.hive.infer-source-parallelism.mode` as a new > parameter, and the value is the enum class `InferMode`, I think the > InferMode class should also be introduced in the Public Interfaces section! > 2. You mentioned in FLIP that the default value of > `table.exec.hive.infer-source-parallelism.max` is 1024, I checked through > the code that the default value is 1000? > 3. I also agree with Muhammet's idea that there is no need to introduce the > option `table.exec.hive.infer-source-parallelism.enabled`, and that > expanding the InferMode values will fulfill the need. There is another > issue to consider here though, how are > `table.exec.hive.infer-source-parallelism` and > `table.exec.hive.infer-source-parallelism.mode` compatible? > 4. In FLIP-367 it is supported to be able to set the Source's parallelism > individually, if in the future HiveSource also supports this feature, > however, the default value of > `table.exec.hive.infer-source-parallelism.mode` is `InferMode. DYNAMIC`, at > this point will the parallelism be dynamically derived or will the manually > set parallelism take effect, and who has the higher priority? > > Best, > Ron > > Xia Sun 于2024年4月17日周三 12:08写道: > > > Hi Jeyhun, Muhammet, > > Thanks for all the feedback! > > > > > Could you please mention the default values for the new configurations > > > (e.g., table.exec.hive.infer-source-parallelism.mode, > > > table.exec.hive.infer-source-parallelism.enabled, > > > etc) ? > > > > > > Thanks for your suggestion. I have supplemented the explanation regarding > > the default values. 
> > > > > Since we are introducing the mode as a configuration option, > > > could it make sense to have `InferMode.NONE` option also? > > > The `NONE` option would disable the inference. > > > > > > This is a good idea. Looking ahead, it could eliminate the need for > > introducing > > a new configuration option. I haven't identified any potential > > compatibility issues > > as yet. If there are no further ideas from others, I'll go ahead and > update > > the FLIP to > > introducing InferMode.NONE. > > > > Best, > > Xia > > > > Muhammet Orazov 于2024年4月17日周三 10:31写道: > > > > > Hello Xia, > > > > > > Thanks for the FLIP! > > > > > > Since we are introducing the mode as a configuration option, > > > could it make sense to have `InferMode.NONE` option also? > > > The `NONE` option would disable the inference. > > > > > > This way we deprecate the `table.exec.hive.infer-source-parallelism` > > > and no additional `table.exec.hive.infer-source-parallelism.enabled` > > > option is required. > > > > > > What do you think? > > > > > > Best, > > > Muhammet > > > > > > On 2024-04-16 07:07, Xia Sun wrote: > > > > Hi everyone, > > > > I would like to start a discussion on FLIP-445: Support dynamic > > > > parallelism > > > > inference for HiveSource[1]. > > > > > > > > FLIP-379[2] has introduced dynamic source parallelism inference for > > > > batch > > > > jobs, which can utilize runtime information to more accurately decide > > > > the > > > > source parallelism. As a follow-up task, we plan to implement the > > > > dynamic > > > > parallelism inference interface for HiveSource, and also switch the > > > > default > > > > static parallelism inference to dynamic parallelism inference. > > > > > > > > Looking forward to your feedback and suggestions, thanks. 
> > > > > > > > [1] > > > > > > > > > > https://cwiki.apache.org/confluence/display/FLINK/FLIP-445%3A+Support+dynamic+parallelism+inference+for+HiveSource > > > > [2] > > > > > > > > > > https://cwiki.apache.org/confluence/display/FLINK/FLIP-379%3A+Dynamic+source+parallelism+inference+for+batch+jobs > > > > > > > > Best regards, > > > > Xia > > > > > >
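As a side note for readers of the archive, the `InferMode` option discussed in this thread can be sketched as a plain enum. This is a hypothetical illustration only: the value set follows the discussion (`STATIC`, `DYNAMIC`, `NONE`), but the eventual Flink class, package, and semantics may differ.

```java
// Hypothetical sketch of the InferMode values discussed in this thread;
// the real Flink enum/class may differ.
public enum InferMode {
    STATIC,   // keep the existing static parallelism inference
    DYNAMIC,  // infer parallelism at runtime (FLIP-379 style)
    NONE;     // disable source parallelism inference entirely

    public static void main(String[] args) {
        // NONE lets users opt out without a separate *.enabled option.
        System.out.println(InferMode.valueOf("NONE"));
    }
}
```

With a `NONE` value, deprecating `table.exec.hive.infer-source-parallelism` needs no extra boolean option, which is exactly the simplification Muhammet proposed.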
[jira] [Created] (FLINK-35151) Flink mysql cdc will stuck when suspend binlog split and ChangeEventQueue is full
Xin Gong created FLINK-35151:
Summary: Flink mysql cdc will stuck when suspend binlog split and ChangeEventQueue is full
Key: FLINK-35151
URL: https://issues.apache.org/jira/browse/FLINK-35151
Project: Flink
Issue Type: Bug
Components: Flink CDC
Environment: Reproduced on the master branch.
Reporter: Xin Gong
Attachments: dumpstack.txt

Flink MySQL CDC can get stuck when the binlog split is suspended while the ChangeEventQueue is full. The root cause is that binlog production is too fast. After MySqlSourceReader receives a binlog split update event, MySqlSplitReader#suspendBinlogReaderIfNeed executes BinlogSplitReader#stopBinlogReadTask, setting currentTaskRunning to false. When the current reader is a BinlogSplitReader and MySqlSplitReader#pollSplitRecords finds dataIt to be null, it executes closeBinlogReader, which calls statefulTaskContext.getBinaryLogClient().disconnect(). This can deadlock, because BinaryLogClient#connectLock is not released while MySqlStreamingChangeEventSource is blocked adding an element to the full queue.

-- This message was sent by Atlassian Jira (v8.20.10#820010)
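The lock-versus-full-queue interaction described above can be reproduced outside Flink with a minimal sketch. Here a `ReentrantLock` and a bounded `ArrayBlockingQueue` stand in for `BinaryLogClient#connectLock` and the ChangeEventQueue; all names are illustrative, and the waits are bounded so the demo terminates, whereas the real deadlock blocks indefinitely.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class FullQueueLockDemo {
    // Returns whether a "disconnect" thread can take the lock while the
    // producer holds it and is stuck offering into a full queue.
    static boolean disconnectCanAcquire() throws InterruptedException {
        BlockingQueue<Integer> queue = new ArrayBlockingQueue<>(1);
        ReentrantLock connectLock = new ReentrantLock(); // stands in for BinaryLogClient#connectLock
        CountDownLatch lockHeld = new CountDownLatch(1);

        queue.put(0); // the ChangeEventQueue stand-in is already full

        Thread producer = new Thread(() -> {
            connectLock.lock(); // the streaming source holds the lock while emitting
            lockHeld.countDown();
            try {
                // Blocks because the queue is full (bounded here for the demo).
                queue.offer(1, 300, TimeUnit.MILLISECONDS);
            } catch (InterruptedException ignored) {
            } finally {
                connectLock.unlock();
            }
        });
        producer.start();
        lockHeld.await();

        // disconnect() needs the same lock and cannot make progress meanwhile.
        boolean acquired = connectLock.tryLock(50, TimeUnit.MILLISECONDS);
        if (acquired) {
            connectLock.unlock();
        }
        producer.join();
        return acquired;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("disconnect could acquire lock: " + disconnectCanAcquire());
    }
}
```

The producer never releases the lock while blocked on the full queue, so the disconnect path starves, matching the attached dump.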
Re: [DISCUSS] FLIP-445: Support dynamic parallelism inference for HiveSource
Hi, Xia Thanks for driving this FLIP. This proposal looks good to me overall. However, I have the following minor questions: 1. FLIP introduced `table.exec.hive.infer-source-parallelism.mode` as a new parameter, and the value is the enum class `InferMode`, I think the InferMode class should also be introduced in the Public Interfaces section! 2. You mentioned in FLIP that the default value of `table.exec.hive.infer-source-parallelism.max` is 1024, I checked through the code that the default value is 1000? 3. I also agree with Muhammet's idea that there is no need to introduce the option `table.exec.hive.infer-source-parallelism.enabled`, and that expanding the InferMode values will fulfill the need. There is another issue to consider here though, how are `table.exec.hive.infer-source-parallelism` and `table.exec.hive.infer-source-parallelism.mode` compatible? 4. In FLIP-367 it is supported to be able to set the Source's parallelism individually, if in the future HiveSource also supports this feature, however, the default value of `table.exec.hive.infer-source-parallelism.mode` is `InferMode. DYNAMIC`, at this point will the parallelism be dynamically derived or will the manually set parallelism take effect, and who has the higher priority? Best, Ron Xia Sun 于2024年4月17日周三 12:08写道: > Hi Jeyhun, Muhammet, > Thanks for all the feedback! > > > Could you please mention the default values for the new configurations > > (e.g., table.exec.hive.infer-source-parallelism.mode, > > table.exec.hive.infer-source-parallelism.enabled, > > etc) ? > > > Thanks for your suggestion. I have supplemented the explanation regarding > the default values. > > > Since we are introducing the mode as a configuration option, > > could it make sense to have `InferMode.NONE` option also? > > The `NONE` option would disable the inference. > > > This is a good idea. Looking ahead, it could eliminate the need for > introducing > a new configuration option. 
I haven't identified any potential > compatibility issues > as yet. If there are no further ideas from others, I'll go ahead and update > the FLIP to > introducing InferMode.NONE. > > Best, > Xia > > Muhammet Orazov 于2024年4月17日周三 10:31写道: > > > Hello Xia, > > > > Thanks for the FLIP! > > > > Since we are introducing the mode as a configuration option, > > could it make sense to have `InferMode.NONE` option also? > > The `NONE` option would disable the inference. > > > > This way we deprecate the `table.exec.hive.infer-source-parallelism` > > and no additional `table.exec.hive.infer-source-parallelism.enabled` > > option is required. > > > > What do you think? > > > > Best, > > Muhammet > > > > On 2024-04-16 07:07, Xia Sun wrote: > > > Hi everyone, > > > I would like to start a discussion on FLIP-445: Support dynamic > > > parallelism > > > inference for HiveSource[1]. > > > > > > FLIP-379[2] has introduced dynamic source parallelism inference for > > > batch > > > jobs, which can utilize runtime information to more accurately decide > > > the > > > source parallelism. As a follow-up task, we plan to implement the > > > dynamic > > > parallelism inference interface for HiveSource, and also switch the > > > default > > > static parallelism inference to dynamic parallelism inference. > > > > > > Looking forward to your feedback and suggestions, thanks. > > > > > > [1] > > > > > > https://cwiki.apache.org/confluence/display/FLINK/FLIP-445%3A+Support+dynamic+parallelism+inference+for+HiveSource > > > [2] > > > > > > https://cwiki.apache.org/confluence/display/FLINK/FLIP-379%3A+Dynamic+source+parallelism+inference+for+batch+jobs > > > > > > Best regards, > > > Xia > > >
[jira] [Created] (FLINK-35150) The specified upload does not exist. The upload ID may be invalid
qyw created FLINK-35150:
Summary: The specified upload does not exist. The upload ID may be invalid
Key: FLINK-35150
URL: https://issues.apache.org/jira/browse/FLINK-35150
Project: Flink
Issue Type: Bug
Components: Connectors / FileSystem
Affects Versions: 1.15.0
Reporter: qyw
Attachments: image-2024-04-18-10-51-05-071.png, image-2024-04-18-11-03-08-998.png, image-2024-04-18-11-07-15-555.png

Using Flink's S3 Hadoop filesystem to write to S3 in CSV mode, with the patch from [FLINK-28513|https://issues.apache.org/jira/browse/FLINK-28513] applied. I don't understand why the sync method of S3RecoverableFsDataOutputStream performs a completeMultipartUpload operation. If completeMultipartUpload runs there, a later close that uploads the rest of the stream will inevitably fail, because the parts belonging to that upload ID have already been merged. Concretely: when a CSV record is larger than S3_MULTIPART_MIN_PART_SIZE, an uploadPart is started when switching files; when BulkPartWriter then performs closeForCommit, the uploadPart inside S3RecoverableFsDataOutputStream#closeForCommit fails, because the earlier sync call already issued completeMultipartUpload.
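The failure mode reduces to the multipart-upload lifecycle rule: once the upload is completed, the upload ID is invalid and any further part upload fails. A toy model of that rule in plain Java (this is an illustration, not the AWS SDK or the Flink filesystem code):

```java
import java.util.ArrayList;
import java.util.List;

// Toy model of the S3 multipart-upload lifecycle: after complete() merges
// the parts, the upload ID is gone and further uploadPart() calls fail --
// the behavior behind "The specified upload does not exist".
public class MultipartUploadModel {
    private final List<String> parts = new ArrayList<>();
    private boolean completed;

    public void uploadPart(String data) {
        if (completed) {
            throw new IllegalStateException(
                    "The specified upload does not exist. The upload ID may be invalid.");
        }
        parts.add(data);
    }

    public String complete() {
        completed = true; // the upload ID is now invalid
        return String.join("", parts);
    }

    public static void main(String[] args) {
        MultipartUploadModel upload = new MultipartUploadModel();
        upload.uploadPart("part-1");
        System.out.println(upload.complete());
        try {
            // sync() already completed the upload: this models the reported failure
            upload.uploadPart("part-2");
        } catch (IllegalStateException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

This is why issuing completeMultipartUpload from sync, before closeForCommit has uploaded the remaining parts, cannot work.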
Re: [VOTE] FLIP-435: Introduce a New Materialized Table for Simplifying Data Pipelines
+1 (binding) Best, Yun Tang From: Jark Wu Sent: Thursday, April 18, 2024 9:54 To: dev@flink.apache.org Subject: Re: [VOTE] FLIP-435: Introduce a New Materialized Table for Simplifying Data Pipelines +1 (binding) Best, Jark On Wed, 17 Apr 2024 at 20:52, Leonard Xu wrote: > +1(binding) > > Best, > Leonard > > > 2024年4月17日 下午8:31,Lincoln Lee 写道: > > > > +1(binding) > > > > Best, > > Lincoln Lee > > > > > > Ferenc Csaky 于2024年4月17日周三 19:58写道: > > > >> +1 (non-binding) > >> > >> Best, > >> Ferenc > >> > >> > >> > >> > >> On Wednesday, April 17th, 2024 at 10:26, Ahmed Hamdy < > hamdy10...@gmail.com> > >> wrote: > >> > >>> > >>> > >>> + 1 (non-binding) > >>> > >>> Best Regards > >>> Ahmed Hamdy > >>> > >>> > >>> On Wed, 17 Apr 2024 at 08:28, Yuepeng Pan panyuep...@apache.org wrote: > >>> > +1(non-binding). > > Best, > Yuepeng Pan > > At 2024-04-17 14:27:27, "Ron liu" ron9@gmail.com wrote: > > > Hi Dev, > > > > Thank you to everyone for the feedback on FLIP-435: Introduce a New > > Materialized Table for Simplifying Data Pipelines[1][2]. > > > > I'd like to start a vote for it. The vote will be open for at least > >> 72 > > hours unless there is an objection or not enough votes. > > > > [1] > > > >> > https://cwiki.apache.org/confluence/display/FLINK/FLIP-435%3A+Introduce+a+New+Materialized+Table+for+Simplifying+Data+Pipelines > > > [2] https://lists.apache.org/thread/c1gnn3bvbfs8v1trlf975t327s4rsffs > > > > Best, > > Ron > >> > >
Re:[VOTE] Release flink-connector-jdbc v3.2.0, release candidate #1
+1 (non-binding) - Checked source-build for source code tag v3.2.0-rc1 - Did test for some examples with mysql 5.8. - Checked release note page. Thanks for driving it! Best, Yuepeng Pan At 2024-04-17 18:02:06, "Danny Cranmer" wrote: >Hi everyone, > >Please review and vote on the release candidate #1 for the version 3.2.0, >as follows: >[ ] +1, Approve the release >[ ] -1, Do not approve the release (please provide specific comments) > >This release supports Flink 1.18 and 1.19. > >The complete staging area is available for your review, which includes: >* JIRA release notes [1], >* the official Apache source release to be deployed to dist.apache.org [2], >which are signed with the key with fingerprint 125FD8DB [3], >* all artifacts to be deployed to the Maven Central Repository [4], >* source code tag v3.2.0-rc1 [5], >* website pull request listing the new release [6]. >* CI run of tag [7]. > >The vote will be open for at least 72 hours. It is adopted by majority >approval, with at least 3 PMC affirmative votes. > >Thanks, >Danny > >[1] >https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522=12353143 >[2] >https://dist.apache.org/repos/dist/dev/flink/flink-connector-jdbc-3.2.0-rc1 >[3] https://dist.apache.org/repos/dist/release/flink/KEYS >[4] https://repository.apache.org/content/repositories/orgapacheflink-1714/ >[5] https://github.com/apache/flink-connector-jdbc/releases/tag/v3.2.0-rc1 >[6] https://github.com/apache/flink-web/pull/734 >[7] https://github.com/apache/flink-connector-jdbc/actions/runs/8719743185
[jira] [Created] (FLINK-35149) Fix DataSinkTranslator#sinkTo ignoring pre-write topology if not TwoPhaseCommittingSink
Hongshun Wang created FLINK-35149:
Summary: Fix DataSinkTranslator#sinkTo ignoring pre-write topology if not TwoPhaseCommittingSink
Key: FLINK-35149
URL: https://issues.apache.org/jira/browse/FLINK-35149
Project: Flink
Issue Type: Bug
Components: Flink CDC
Reporter: Hongshun Wang
Fix For: 3.1.0

Currently, when the sink is not an instance of TwoPhaseCommittingSink, input.transform is used rather than stream, which means the pre-write topology is ignored.
{code:java}
private void sinkTo(
        DataStream input,
        Sink sink,
        String sinkName,
        OperatorID schemaOperatorID) {
    DataStream stream = input;
    // Pre write topology
    if (sink instanceof WithPreWriteTopology) {
        stream = ((WithPreWriteTopology) sink).addPreWriteTopology(stream);
    }

    if (sink instanceof TwoPhaseCommittingSink) {
        addCommittingTopology(sink, stream, sinkName, schemaOperatorID);
    } else {
        input.transform(
                SINK_WRITER_PREFIX + sinkName,
                CommittableMessageTypeInfo.noOutput(),
                new DataSinkWriterOperatorFactory<>(sink, schemaOperatorID));
    }
}
{code}
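Stripped of Flink types, the bug pattern is building a transformed value and then wiring the next stage against the original variable. A standalone sketch (hypothetical names, no Flink dependencies):

```java
import java.util.function.UnaryOperator;

public class DroppedStageDemo {
    static final UnaryOperator<String> PRE_WRITE = s -> s + " -> preWrite";

    static String wireSink(boolean useTransformed) {
        String input = "source";
        String stream = PRE_WRITE.apply(input); // stream now carries the extra stage
        // Bug pattern: wiring the sink against `input` silently drops the
        // pre-write stage; the fix is to wire it against `stream`.
        String base = useTransformed ? stream : input;
        return base + " -> sinkWriter";
    }

    public static void main(String[] args) {
        System.out.println("buggy: " + wireSink(false)); // source -> sinkWriter
        System.out.println("fixed: " + wireSink(true));  // source -> preWrite -> sinkWriter
    }
}
```

In the ticket's terms, the `else` branch should call transform on `stream`, not on `input`.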
Re: [VOTE] FLIP-435: Introduce a New Materialized Table for Simplifying Data Pipelines
+1 (binding) Best, Jark On Wed, 17 Apr 2024 at 20:52, Leonard Xu wrote: > +1(binding) > > Best, > Leonard > > > 2024年4月17日 下午8:31,Lincoln Lee 写道: > > > > +1(binding) > > > > Best, > > Lincoln Lee > > > > > > Ferenc Csaky 于2024年4月17日周三 19:58写道: > > > >> +1 (non-binding) > >> > >> Best, > >> Ferenc > >> > >> > >> > >> > >> On Wednesday, April 17th, 2024 at 10:26, Ahmed Hamdy < > hamdy10...@gmail.com> > >> wrote: > >> > >>> > >>> > >>> + 1 (non-binding) > >>> > >>> Best Regards > >>> Ahmed Hamdy > >>> > >>> > >>> On Wed, 17 Apr 2024 at 08:28, Yuepeng Pan panyuep...@apache.org wrote: > >>> > +1(non-binding). > > Best, > Yuepeng Pan > > At 2024-04-17 14:27:27, "Ron liu" ron9@gmail.com wrote: > > > Hi Dev, > > > > Thank you to everyone for the feedback on FLIP-435: Introduce a New > > Materialized Table for Simplifying Data Pipelines[1][2]. > > > > I'd like to start a vote for it. The vote will be open for at least > >> 72 > > hours unless there is an objection or not enough votes. > > > > [1] > > > >> > https://cwiki.apache.org/confluence/display/FLINK/FLIP-435%3A+Introduce+a+New+Materialized+Table+for+Simplifying+Data+Pipelines > > > [2] https://lists.apache.org/thread/c1gnn3bvbfs8v1trlf975t327s4rsffs > > > > Best, > > Ron > >> > >
[jira] [Created] (FLINK-35148) Improve InstantiationUtil for checking nullary public constructor
Mingliang Liu created FLINK-35148:
Summary: Improve InstantiationUtil for checking nullary public constructor
Key: FLINK-35148
URL: https://issues.apache.org/jira/browse/FLINK-35148
Project: Flink
Issue Type: Improvement
Components: API / Core
Affects Versions: 1.18.1, 1.19.0
Reporter: Mingliang Liu

{{InstantiationUtil#hasPublicNullaryConstructor}} checks whether the given class has a public nullary constructor. The implementation can be improved a bit: the {{Modifier#isPublic}} check within the for-loop can be skipped, as {{Class#getConstructors()}} only returns public constructors. We can also add a negative unit test for this.
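A minimal sketch of the simplified check (not the actual `InstantiationUtil` code), together with the positive and negative cases the ticket suggests testing:

```java
import java.lang.reflect.Constructor;

public class NullaryCheck {
    static class HasNullary { public HasNullary() {} }
    static class NoNullary { public NoNullary(int x) {} }

    // Simplified version of the check: Class#getConstructors() already
    // returns only public constructors, so no Modifier.isPublic test is
    // needed inside the loop.
    static boolean hasPublicNullaryConstructor(Class<?> clazz) {
        for (Constructor<?> c : clazz.getConstructors()) {
            if (c.getParameterCount() == 0) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        System.out.println(hasPublicNullaryConstructor(HasNullary.class)); // true
        System.out.println(hasPublicNullaryConstructor(NoNullary.class));  // false
    }
}
```

`NoNullary` is the negative case: it has a public constructor, but not a nullary one.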
[jira] [Created] (FLINK-35147) SinkMaterializer throws StateMigrationException when widening the field type in the output table
Sharon Xie created FLINK-35147:
Summary: SinkMaterializer throws StateMigrationException when widening the field type in the output table
Key: FLINK-35147
URL: https://issues.apache.org/jira/browse/FLINK-35147
Project: Flink
Issue Type: Bug
Components: Table SQL / API
Reporter: Sharon Xie
Attachments: image-2024-04-17-14-15-21-647.png, image-2024-04-17-14-15-35-297.png

When a field type in the output table is changed from int -> bigint or timestamp(3) -> timestamp(6), SinkMaterializer would fail to restore state. This is unexpected, as the change is backward compatible: the new type should be able to "accept" all the old values that had the narrower type. Note that the planner works fine and would accept such a change.

To reproduce:

```
CREATE TABLE ltable (
  `id` integer primary key,
  `num` int
) WITH (
  'connector' = 'upsert-kafka',
  'properties.bootstrap.servers' = 'kafka.test:9092',
  'key.format' = 'json',
  'value.format' = 'json',
  'topic' = 'test1'
);

CREATE TABLE rtable (
  `id` integer primary key,
  `ts` timestamp(3)
) WITH (
  'connector' = 'upsert-kafka',
  'properties.bootstrap.servers' = 'kafka.test:9092',
  'key.format' = 'json',
  'value.format' = 'json',
  'topic' = 'test2'
);

CREATE TABLE output (
  `id` integer primary key,
  `num` int,
  `ts` timestamp(3)
) WITH (
  'connector' = 'upsert-kafka',
  'properties.bootstrap.servers' = 'kafka.test:9092',
  'key.format' = 'json',
  'value.format' = 'json',
  'topic' = 'test3'
);

insert into `output`
select ltable.id, num, ts
from ltable join rtable on ltable.id = rtable.id
```

Run it, stop with a savepoint, then update the output table with:

```
CREATE TABLE output (
  `id` integer primary key,
  -- changing either of the types below causes the issue
  `num` bigint,
  `ts` timestamp(6)
) WITH (
  'connector' = 'upsert-kafka',
  'properties.bootstrap.servers' = 'kafka.test:9092',
  'key.format' = 'json',
  'value.format' = 'json',
  'topic' = 'test3'
);
```

Restart the job with the savepoint created.

Sample screenshots:
!image-2024-04-17-14-15-35-297.png!
!image-2024-04-17-14-15-21-647.png!
[jira] [Created] (FLINK-35146) CompileAndExecuteRemotePlanITCase.testCompileAndExecutePlan
Ryan Skraba created FLINK-35146:
Summary: CompileAndExecuteRemotePlanITCase.testCompileAndExecutePlan
Key: FLINK-35146
URL: https://issues.apache.org/jira/browse/FLINK-35146
Project: Flink
Issue Type: Bug
Affects Versions: 1.19.1
Reporter: Ryan Skraba

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=58960=logs=fb37c667-81b7-5c22-dd91-846535e99a97=011e961e-597c-5c96-04fe-7941c8b83f23=16690

{code}
Apr 17 06:27:47 06:27:47.363 [ERROR] Tests run: 2, Failures: 1, Errors: 0, Skipped: 1, Time elapsed: 64.51 s <<< FAILURE! -- in org.apache.flink.table.sql.CompileAndExecuteRemotePlanITCase
Apr 17 06:27:47 06:27:47.364 [ERROR] org.apache.flink.table.sql.CompileAndExecuteRemotePlanITCase.testCompileAndExecutePlan[executionMode] -- Time elapsed: 56.55 s <<< FAILURE!
Apr 17 06:27:47 org.opentest4j.AssertionFailedError: Did not get expected results before timeout, actual result: null. ==> expected: but was:
Apr 17 06:27:47 	at org.junit.jupiter.api.AssertionFailureBuilder.build(AssertionFailureBuilder.java:151)
Apr 17 06:27:47 	at org.junit.jupiter.api.AssertionFailureBuilder.buildAndThrow(AssertionFailureBuilder.java:132)
Apr 17 06:27:47 	at org.junit.jupiter.api.AssertTrue.failNotTrue(AssertTrue.java:63)
Apr 17 06:27:47 	at org.junit.jupiter.api.AssertTrue.assertTrue(AssertTrue.java:36)
Apr 17 06:27:47 	at org.junit.jupiter.api.Assertions.assertTrue(Assertions.java:214)
Apr 17 06:27:47 	at org.apache.flink.table.sql.SqlITCaseBase.checkResultFile(SqlITCaseBase.java:216)
Apr 17 06:27:47 	at org.apache.flink.table.sql.SqlITCaseBase.runAndCheckSQL(SqlITCaseBase.java:149)
Apr 17 06:27:47 	at org.apache.flink.table.sql.SqlITCaseBase.runAndCheckSQL(SqlITCaseBase.java:133)
Apr 17 06:27:47 	at org.apache.flink.table.sql.CompileAndExecuteRemotePlanITCase.testCompileAndExecutePlan(CompileAndExecuteRemotePlanITCase.java:70)
Apr 17 06:27:47 	at java.lang.reflect.Method.invoke(Method.java:498)
Apr 17 06:27:47 	at org.apache.flink.util.ExternalResource$1.evaluate(ExternalResource.java:48)
Apr 17 06:27:47 	at org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45)
Apr 17 06:27:47 {code}
Re: [VOTE] Release flink-connector-aws v4.3.0, release candidate #1
Hey, Thanks for root causing this. I agree this needs to be fixed. -1 binding Thanks, Danny On Wed, 17 Apr 2024, 15:43 Aleksandr Pilipenko, wrote: > Hi Danny, > > There is a blocker bug reported in Kinesis connector [1] > We should pause the release until it is resolved. > > [1] > https://issues.apache.org/jira/browse/FLINK-35115 > > Thanks, > Aleksandr >
Re: [VOTE] Release flink-connector-aws v4.3.0, release candidate #1
Hi Danny, There is a blocker bug reported in Kinesis connector [1] We should pause the release until it is resolved. [1] https://issues.apache.org/jira/browse/FLINK-35115 Thanks, Aleksandr
[jira] [Created] (FLINK-35145) Add timeout for cluster termination
Zhanghao Chen created FLINK-35145:
Summary: Add timeout for cluster termination
Key: FLINK-35145
URL: https://issues.apache.org/jira/browse/FLINK-35145
Project: Flink
Issue Type: Improvement
Components: Runtime / Coordination
Affects Versions: 1.20.0
Reporter: Zhanghao Chen
Fix For: 1.20.0

Currently, cluster termination may block forever, as there is no timeout for it. For example, for an Application cluster with ZK HA enabled, when the ZK cluster is down, the cluster will reach termination status, but the termination process will block while trying to clean up HA data on ZK. A similar phenomenon can be observed when an HDFS/S3 outage occurs. I propose adding a timeout for the cluster termination process in the ClusterEntrypoint#shutDownAsync method.
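One way to bound such a termination future is `CompletableFuture#orTimeout` (Java 9+). A sketch under the assumption that termination is modeled as a future; the method and class names here are illustrative, not the proposed Flink change:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;

public class ShutdownTimeoutDemo {
    // Bounds a termination future that may hang forever (e.g. HA cleanup
    // against an unavailable ZK/HDFS/S3), mapping a timeout to a fallback.
    static String terminateWithTimeout(CompletableFuture<String> termination, long timeoutMillis) {
        return termination
                .orTimeout(timeoutMillis, TimeUnit.MILLISECONDS)
                .exceptionally(t -> "forced shutdown after timeout")
                .join();
    }

    public static void main(String[] args) {
        // A cleanup future that never completes, standing in for blocked HA cleanup.
        CompletableFuture<String> hanging = new CompletableFuture<>();
        System.out.println(terminateWithTimeout(hanging, 100));

        // A cleanup that finishes in time is unaffected.
        System.out.println(terminateWithTimeout(
                CompletableFuture.completedFuture("clean shutdown"), 100));
    }
}
```

The same shape would let shutDownAsync fall back to a forced exit instead of waiting indefinitely on external systems.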
Re: [VOTE] Release flink-connector-mongodb v1.2.0, release candidate #1
Thanks for driving this. +1 (non-binding) - Validated checksum hash - Verified signature - Tag is present - Reviewed web PR Regards, Jeyhun On Wed, Apr 17, 2024 at 3:26 PM gongzhongqiang wrote: > +1 (non-binding) > > - Flink website pr reviewed > - Check source code without binary files > - Validated checksum hash and signature > - CI passed with tag v1.2.0-rc1 > > Best, > Zhongqiang Gong > > Danny Cranmer 于2024年4月17日周三 18:44写道: > > > Hi everyone, > > > > Please review and vote on the release candidate #1 for v1.2.0, as > follows: > > [ ] +1, Approve the release > > [ ] -1, Do not approve the release (please provide specific comments) > > > > This release supports Flink 1.18 and 1.19. > > > > The complete staging area is available for your review, which includes: > > * JIRA release notes [1], > > * the official Apache source release to be deployed to dist.apache.org > > [2], > > which are signed with the key with fingerprint 125FD8DB [3], > > * all artifacts to be deployed to the Maven Central Repository [4], > > * source code tag v1.2.0-rc1 [5], > > * website pull request listing the new release [6]. > > * CI build of tag [7]. > > > > The vote will be open for at least 72 hours. It is adopted by majority > > approval, with at least 3 PMC affirmative votes. > > > > Thanks, > > Danny > > > > [1] > > > > > https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522=12354192 > > [2] > > > > > https://dist.apache.org/repos/dist/dev/flink/flink-connector-mongodb-1.2.0-rc1 > > [3] https://dist.apache.org/repos/dist/release/flink/KEYS > > [4] > > https://repository.apache.org/content/repositories/orgapacheflink-1715/ > > [5] > > > https://github.com/apache/flink-connector-mongodb/releases/tag/v1.2.0-rc1 > > [6] https://github.com/apache/flink-web/pull/735 > > [7] > > > https://github.com/apache/flink-connector-mongodb/actions/runs/8720057880 > > >
Re: [VOTE] FLIP-442: General Improvement to Configuration for Flink 2.0
+1 (non binding) Regards, Jeyhun On Wed, Apr 17, 2024 at 2:22 PM Zhu Zhu wrote: > +1 (binding) > > Thanks, > Zhu > > Yuxin Tan 于2024年4月17日周三 18:36写道: > > > +1 (non-binding) > > > > Best, > > Yuxin > > > > > > Zakelly Lan 于2024年4月17日周三 16:51写道: > > > > > +1 binding > > > > > > > > > Best, > > > Zakelly > > > > > > On Wed, Apr 17, 2024 at 2:05 PM Rui Fan <1996fan...@gmail.com> wrote: > > > > > > > +1(binding) > > > > > > > > Best, > > > > Rui > > > > > > > > On Wed, Apr 17, 2024 at 1:02 PM Xuannan Su > > > wrote: > > > > > > > > > Hi everyone, > > > > > > > > > > Thanks for all the feedback about the FLIP-442: General Improvement > > to > > > > > Configuration for Flink 2.0 [1] [2]. > > > > > > > > > > I'd like to start a vote for it. The vote will be open for at least > > 72 > > > > > hours(excluding weekends,until APR 22, 12:00AM GMT) unless there is > > an > > > > > objection or an insufficient number of votes. > > > > > > > > > > [1] > > > > > > > > > > > > > > > https://cwiki.apache.org/confluence/display/FLINK/FLIP-442%3A+General+Improvement+to+Configuration+for+Flink+2.0 > > > > > [2] > https://lists.apache.org/thread/15k0stwyoknhxvd643ctwjw3fd17pqwk > > > > > > > > > > > > > > > Best regards, > > > > > Xuannan > > > > > > > > > > > > > > >
Re: [ANNOUNCE] New Apache Flink Committer - Zakelly Lan
Congratulations, Zakelly! Best, Zhongqiang Gong Yuan Mei 于2024年4月15日周一 10:51写道: > Hi everyone, > > On behalf of the PMC, I'm happy to let you know that Zakelly Lan has become > a new Flink Committer! > > Zakelly has been continuously contributing to the Flink project since 2020, > with a focus area on Checkpointing, State as well as frocksdb (the default > on-disk state db). > > He leads several FLIPs to improve checkpoints and state APIs, including > File Merging for Checkpoints and configuration/API reorganizations. He is > also one of the main contributors to the recent efforts of "disaggregated > state management for Flink 2.0" and drives the entire discussion in the > mailing thread, demonstrating outstanding technical depth and breadth of > knowledge. > > Beyond his technical contributions, Zakelly is passionate about helping the > community in numerous ways. He spent quite some time setting up the Flink > Speed Center and rebuilding the benchmark pipeline after the original one > was out of lease. He helps build frocksdb and tests for the upcoming > frocksdb release (bump rocksdb from 6.20.3->8.10). > > Please join me in congratulating Zakelly for becoming an Apache Flink > committer! > > Best, > Yuan (on behalf of the Flink PMC) >
Re: [VOTE] Release flink-connector-mongodb v1.2.0, release candidate #1
+1 (non-binding) - Flink website pr reviewed - Check source code without binary files - Validated checksum hash and signature - CI passed with tag v1.2.0-rc1 Best, Zhongqiang Gong Danny Cranmer 于2024年4月17日周三 18:44写道: > Hi everyone, > > Please review and vote on the release candidate #1 for v1.2.0, as follows: > [ ] +1, Approve the release > [ ] -1, Do not approve the release (please provide specific comments) > > This release supports Flink 1.18 and 1.19. > > The complete staging area is available for your review, which includes: > * JIRA release notes [1], > * the official Apache source release to be deployed to dist.apache.org > [2], > which are signed with the key with fingerprint 125FD8DB [3], > * all artifacts to be deployed to the Maven Central Repository [4], > * source code tag v1.2.0-rc1 [5], > * website pull request listing the new release [6]. > * CI build of tag [7]. > > The vote will be open for at least 72 hours. It is adopted by majority > approval, with at least 3 PMC affirmative votes. > > Thanks, > Danny > > [1] > > https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522=12354192 > [2] > > https://dist.apache.org/repos/dist/dev/flink/flink-connector-mongodb-1.2.0-rc1 > [3] https://dist.apache.org/repos/dist/release/flink/KEYS > [4] > https://repository.apache.org/content/repositories/orgapacheflink-1715/ > [5] > https://github.com/apache/flink-connector-mongodb/releases/tag/v1.2.0-rc1 > [6] https://github.com/apache/flink-web/pull/735 > [7] > https://github.com/apache/flink-connector-mongodb/actions/runs/8720057880 >
Re: [VOTE] FLIP-435: Introduce a New Materialized Table for Simplifying Data Pipelines
+1(binding) Best, Leonard > 2024年4月17日 下午8:31,Lincoln Lee 写道: > > +1(binding) > > Best, > Lincoln Lee > > > Ferenc Csaky 于2024年4月17日周三 19:58写道: > >> +1 (non-binding) >> >> Best, >> Ferenc >> >> >> >> >> On Wednesday, April 17th, 2024 at 10:26, Ahmed Hamdy >> wrote: >> >>> >>> >>> + 1 (non-binding) >>> >>> Best Regards >>> Ahmed Hamdy >>> >>> >>> On Wed, 17 Apr 2024 at 08:28, Yuepeng Pan panyuep...@apache.org wrote: >>> +1(non-binding). Best, Yuepeng Pan At 2024-04-17 14:27:27, "Ron liu" ron9@gmail.com wrote: > Hi Dev, > > Thank you to everyone for the feedback on FLIP-435: Introduce a New > Materialized Table for Simplifying Data Pipelines[1][2]. > > I'd like to start a vote for it. The vote will be open for at least >> 72 > hours unless there is an objection or not enough votes. > > [1] >> https://cwiki.apache.org/confluence/display/FLINK/FLIP-435%3A+Introduce+a+New+Materialized+Table+for+Simplifying+Data+Pipelines > [2] https://lists.apache.org/thread/c1gnn3bvbfs8v1trlf975t327s4rsffs > > Best, > Ron >>
Re: Question around Flink's AdaptiveBatchScheduler
Thanks Venkata and Xia for providing further clarification. I think your example illustrates the significance of this proposal very well. Please feel free to go ahead and address the concerns. Best, Junrui Venkatakrishnan Sowrirajan 于2024年4月16日周二 07:01写道: > Thanks for adding your thoughts to this discussion. > > If we all agree that the source vertex parallelism shouldn't be bound by > the downstream max parallelism > (jobmanager.adaptive-batch-scheduler.max-parallelism) > based on the rationale and the issues described above, I can take a stab at > addressing the issue. > > Let me file a ticket to track this issue. Otherwise, I'm looking forward to > hearing more thoughts from others as well, especially Lijie and Junrui who > have more context on the AdaptiveBatchScheduler. > > Regards > Venkata krishnan > > > On Mon, Apr 15, 2024 at 12:54 AM Xia Sun wrote: > > > Hi Venkat, > > I agree that the parallelism of source vertex should not be upper bounded > > by the job's global max parallelism. The case you mentioned, >> High > filter > > selectivity with huge amounts of data to read excellently supports this > > viewpoint. (In fact, in the current implementation, if the source > > parallelism is pre-specified at job create stage, rather than relying on > > the dynamic parallelism inference of the AdaptiveBatchScheduler, the > source > > vertex's parallelism can indeed exceed the job's global max parallelism.) > > > > As Lijie and Junrui pointed out, the key issue is "semantic consistency." > > Currently, if a vertex has not set maxParallelism, the > > AdaptiveBatchScheduler will use > > `execution.batch.adaptive.auto-parallelism.max-parallelism` as the > vertex's > > maxParallelism. Since the current implementation does not distinguish > > between source vertices and downstream vertices, source vertices are also > > subject to this limitation. 
> > > > Therefore, I believe that if the issue of "semantic consistency" can be > > well explained in the code and configuration documentation, the > > AdaptiveBatchScheduler should support that the parallelism of source > > vertices can exceed the job's global max parallelism. > > > > Best, > > Xia > > > > Venkatakrishnan Sowrirajan 于2024年4月14日周日 10:31写道: > > > > > Let me state why I think "*jobmanager.adaptive-batch-scheduler.default-source-parallelism*" should not be bound by the "*jobmanager.adaptive-batch-scheduler.max-parallelism*". > > > > > >- Source vertex is unique and does not have any upstream vertices > > >- Downstream vertices read shuffled data partitioned by key, which > is > > >not the case for the Source vertex > > >- Limiting source parallelism by downstream vertices' max > parallelism > > is > > >incorrect > > > > > > If we say that for "semantic consistency" the source vertex parallelism has > > to > > > be bound by the overall job's max parallelism, it can lead to the following > > > issues: > > > > > >- High filter selectivity with huge amounts of data to read - > setting > > >high "*jobmanager.adaptive-batch-scheduler.max-parallelism*" so that > > >source parallelism can be set higher can lead to small blocks and > > >sub-optimal performance. > > >- Setting high > "*jobmanager.adaptive-batch-scheduler.max-parallelism*" > > >requires careful tuning of network buffer configurations which is > > >unnecessary in cases where it is not required just so that the > source > > >parallelism can be set high. > > > > > > Regards > > > Venkata krishnan > > > > > > On Thu, Apr 11, 2024 at 9:30 PM Junrui Lee > wrote: > > > > > > > Hello Venkata krishnan, > > > > > > > > I think the term "semantic inconsistency" defined by > > > > jobmanager.adaptive-batch-scheduler.max-parallelism refers to > > > maintaining a > > > > uniform upper limit on parallelism across all vertices within a job. 
> As > > > the > > > > source vertices are part of the global execution graph, they should > > also > > > > respect this rule to ensure consistent application of parallelism > > > > constraints. > > > > > > > > Best, > > > > Junrui > > > > > > > > Venkatakrishnan Sowrirajan 于2024年4月12日周五 02:10写道: > > > > > > > > > Gentle bump on this question. cc @Becket Qin > > > as > > > > > well. > > > > > > > > > > Regards > > > > > Venkata krishnan > > > > > > > > > > > > > > > On Tue, Mar 12, 2024 at 10:11 PM Venkatakrishnan Sowrirajan < > > > > > vsowr...@asu.edu> wrote: > > > > > > > > > > > Thanks for the response Lijie and Junrui. Sorry for the late > reply. > > > Few > > > > > > follow up questions. > > > > > > > > > > > > > Source can actually ignore this limit > > > > > > because it has no upstream, but this will lead to semantic > > > > inconsistency. > > > > > > > > > > > > Lijie, can you please elaborate on the above comment further? > What > > do > > > > you > > > > > > mean when you say it will lead to "semantic inconsistency"? > > > > > > > > > > > > > Secondly, we first need to limit the max parallelism of
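The two semantics debated in this thread can be stated in a few lines of code. The sketch below is purely illustrative (the class, method name, and boolean flag are hypothetical, not Flink APIs): it contrasts capping the inferred source parallelism by the job-wide `jobmanager.adaptive-batch-scheduler.max-parallelism` with letting source vertices exceed it, as proposed.

```java
// Hypothetical illustration of the two semantics discussed above; this is
// NOT Flink's scheduler code, just a sketch of the decision being debated.
public class SourceParallelismSketch {

    // inferred:  parallelism suggested by dynamic source parallelism inference
    // globalMax: jobmanager.adaptive-batch-scheduler.max-parallelism
    static int decide(int inferred, int globalMax, boolean capSourceByGlobalMax) {
        // Current semantics: every vertex, including sources, respects globalMax.
        // Proposed semantics: source vertices (which read input splits rather
        // than shuffled partitions) may exceed it.
        return capSourceByGlobalMax ? Math.min(inferred, globalMax) : inferred;
    }

    public static void main(String[] args) {
        // "High filter selectivity" case: inference wants 2000 source subtasks,
        // but the job-wide max parallelism is tuned to 128 for the shuffle stages.
        System.out.println(decide(2000, 128, true));  // current behavior: 128
        System.out.println(decide(2000, 128, false)); // proposed behavior: 2000
    }
}
```

Under the current behavior, raising the source parallelism forces raising the job-wide max, which in turn affects network buffer tuning for every downstream vertex — exactly the side effect Venkat describes.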
Re: [VOTE] FLIP-435: Introduce a New Materialized Table for Simplifying Data Pipelines
+1(binding) Best, Lincoln Lee Ferenc Csaky 于2024年4月17日周三 19:58写道: > +1 (non-binding) > > Best, > Ferenc > > > > > On Wednesday, April 17th, 2024 at 10:26, Ahmed Hamdy > wrote: > > > > > > > + 1 (non-binding) > > > > Best Regards > > Ahmed Hamdy > > > > > > On Wed, 17 Apr 2024 at 08:28, Yuepeng Pan panyuep...@apache.org wrote: > > > > > +1(non-binding). > > > > > > Best, > > > Yuepeng Pan > > > > > > At 2024-04-17 14:27:27, "Ron liu" ron9@gmail.com wrote: > > > > > > > Hi Dev, > > > > > > > > Thank you to everyone for the feedback on FLIP-435: Introduce a New > > > > Materialized Table for Simplifying Data Pipelines[1][2]. > > > > > > > > I'd like to start a vote for it. The vote will be open for at least > 72 > > > > hours unless there is an objection or not enough votes. > > > > > > > > [1] > > > > > > > https://cwiki.apache.org/confluence/display/FLINK/FLIP-435%3A+Introduce+a+New+Materialized+Table+for+Simplifying+Data+Pipelines > > > > > > > [2] https://lists.apache.org/thread/c1gnn3bvbfs8v1trlf975t327s4rsffs > > > > > > > > Best, > > > > Ron >
Re: [VOTE] Release flink-connector-mongodb v1.2.0, release candidate #1
Thanks Danny for driving this. +1 (non-binding) - Validated checksum hash - Verified signature - Tag is present - Build successful with jdk8, jdk11 and jdk17 - Checked the dist jar was built by jdk8 - Reviewed web PR Best, Jiabao Danny Cranmer 于2024年4月17日周三 18:39写道: > Hi everyone, > > Please review and vote on the release candidate #1 for v1.2.0, as follows: > [ ] +1, Approve the release > [ ] -1, Do not approve the release (please provide specific comments) > > This release supports Flink 1.18 and 1.19. > > The complete staging area is available for your review, which includes: > * JIRA release notes [1], > * the official Apache source release to be deployed to dist.apache.org > [2], > which are signed with the key with fingerprint 125FD8DB [3], > * all artifacts to be deployed to the Maven Central Repository [4], > * source code tag v1.2.0-rc1 [5], > * website pull request listing the new release [6]. > * CI build of tag [7]. > > The vote will be open for at least 72 hours. It is adopted by majority > approval, with at least 3 PMC affirmative votes. > > Thanks, > Danny > > [1] > > https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522=12354192 > [2] > > https://dist.apache.org/repos/dist/dev/flink/flink-connector-mongodb-1.2.0-rc1 > [3] https://dist.apache.org/repos/dist/release/flink/KEYS > [4] > https://repository.apache.org/content/repositories/orgapacheflink-1715/ > [5] > https://github.com/apache/flink-connector-mongodb/releases/tag/v1.2.0-rc1 > [6] https://github.com/apache/flink-web/pull/735 > [7] > https://github.com/apache/flink-connector-mongodb/actions/runs/8720057880 >
Re: [VOTE] FLIP-442: General Improvement to Configuration for Flink 2.0
+1 (binding) Thanks, Zhu Yuxin Tan 于2024年4月17日周三 18:36写道: > +1 (non-binding) > > Best, > Yuxin > > > Zakelly Lan 于2024年4月17日周三 16:51写道: > > > +1 binding > > > > > > Best, > > Zakelly > > > > On Wed, Apr 17, 2024 at 2:05 PM Rui Fan <1996fan...@gmail.com> wrote: > > > > > +1(binding) > > > > > > Best, > > > Rui > > > > > > On Wed, Apr 17, 2024 at 1:02 PM Xuannan Su > > wrote: > > > > > > > Hi everyone, > > > > > > > > Thanks for all the feedback about the FLIP-442: General Improvement > to > > > > Configuration for Flink 2.0 [1] [2]. > > > > > > > > I'd like to start a vote for it. The vote will be open for at least > 72 > > > > hours(excluding weekends,until APR 22, 12:00AM GMT) unless there is > an > > > > objection or an insufficient number of votes. > > > > > > > > [1] > > > > > > > > > > https://cwiki.apache.org/confluence/display/FLINK/FLIP-442%3A+General+Improvement+to+Configuration+for+Flink+2.0 > > > > [2] https://lists.apache.org/thread/15k0stwyoknhxvd643ctwjw3fd17pqwk > > > > > > > > > > > > Best regards, > > > > Xuannan > > > > > > > > > >
Re: [VOTE] FLIP-435: Introduce a New Materialized Table for Simplifying Data Pipelines
+1 (non-binding) Best, Ferenc On Wednesday, April 17th, 2024 at 10:26, Ahmed Hamdy wrote: > > > + 1 (non-binding) > > Best Regards > Ahmed Hamdy > > > On Wed, 17 Apr 2024 at 08:28, Yuepeng Pan panyuep...@apache.org wrote: > > > +1(non-binding). > > > > Best, > > Yuepeng Pan > > > > At 2024-04-17 14:27:27, "Ron liu" ron9@gmail.com wrote: > > > > > Hi Dev, > > > > > > Thank you to everyone for the feedback on FLIP-435: Introduce a New > > > Materialized Table for Simplifying Data Pipelines[1][2]. > > > > > > I'd like to start a vote for it. The vote will be open for at least 72 > > > hours unless there is an objection or not enough votes. > > > > > > [1] > > > > https://cwiki.apache.org/confluence/display/FLINK/FLIP-435%3A+Introduce+a+New+Materialized+Table+for+Simplifying+Data+Pipelines > > > > > [2] https://lists.apache.org/thread/c1gnn3bvbfs8v1trlf975t327s4rsffs > > > > > > Best, > > > Ron
[jira] [Created] (FLINK-35143) Expose newly added tables capture in mysql pipeline connector
Hongshun Wang created FLINK-35143: - Summary: Expose newly added tables capture in mysql pipeline connector Key: FLINK-35143 URL: https://issues.apache.org/jira/browse/FLINK-35143 Project: Flink Issue Type: Improvement Components: Flink CDC Reporter: Hongshun Wang Currently, the MySQL pipeline connector still does not allow capturing newly added tables. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[VOTE] Release flink-connector-mongodb v1.2.0, release candidate #1
Hi everyone, Please review and vote on the release candidate #1 for v1.2.0, as follows: [ ] +1, Approve the release [ ] -1, Do not approve the release (please provide specific comments) This release supports Flink 1.18 and 1.19. The complete staging area is available for your review, which includes: * JIRA release notes [1], * the official Apache source release to be deployed to dist.apache.org [2], which are signed with the key with fingerprint 125FD8DB [3], * all artifacts to be deployed to the Maven Central Repository [4], * source code tag v1.2.0-rc1 [5], * website pull request listing the new release [6]. * CI build of tag [7]. The vote will be open for at least 72 hours. It is adopted by majority approval, with at least 3 PMC affirmative votes. Thanks, Danny [1] https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12354192 [2] https://dist.apache.org/repos/dist/dev/flink/flink-connector-mongodb-1.2.0-rc1 [3] https://dist.apache.org/repos/dist/release/flink/KEYS [4] https://repository.apache.org/content/repositories/orgapacheflink-1715/ [5] https://github.com/apache/flink-connector-mongodb/releases/tag/v1.2.0-rc1 [6] https://github.com/apache/flink-web/pull/735 [7] https://github.com/apache/flink-connector-mongodb/actions/runs/8720057880
[jira] [Created] (FLINK-35144) Support multi source sync for FlinkCDC
Congxian Qiu created FLINK-35144: Summary: Support multi source sync for FlinkCDC Key: FLINK-35144 URL: https://issues.apache.org/jira/browse/FLINK-35144 Project: Flink Issue Type: Improvement Components: Flink CDC Affects Versions: cdc-3.1.0 Reporter: Congxian Qiu Currently, a FlinkCDC pipeline can only support a single source, so we need to start multiple pipelines when there are multiple sources. For upstream systems that use sharding, we need to sync multiple sources in one pipeline, which the current pipeline can't do because it only supports a single source. This issue aims to support syncing multiple sources in one pipeline. -- This message was sent by Atlassian Jira (v8.20.10#820010)
Re: [DISCUSS] Connector releases for Flink 1.19
Thank you Danny and Sergey for pushing this! I can help with the HBase connector if necessary, will comment the details to the relevant Jira ticket. Best, Ferenc On Wednesday, April 17th, 2024 at 11:17, Danny Cranmer wrote: > > > Hello all, > > I have created a parent Jira to cover the releases [1]. I have assigned AWS > and MongoDB to myself and OpenSearch to Sergey. Please assign the > relevant issue to yourself as you pick up the tasks. > > Thanks! > > [1] https://issues.apache.org/jira/browse/FLINK-35131 > > On Tue, Apr 16, 2024 at 2:41 PM Muhammet Orazov > mor+fl...@morazow.com.invalid wrote: > > > Thanks Sergey and Danny for clarifying, indeed it > > requires committer to go through the process. > > > > Anyway, please let me know if I can be any help. > > > > Best, > > Muhammet > > > > On 2024-04-16 11:19, Danny Cranmer wrote: > > > > > Hello, > > > > > > I have opened the VOTE thread for the AWS connectors release [1]. > > > > > > > If I'm not mistaking (please correct me if I'm wrong) this request is > > > > not > > > > about version update it is about new releases for connectors > > > > > > Yes, correct. If there are any other code changes required then help > > > would be appreciated. > > > > > > > Are you going to create an umbrella issue for it? > > > > > > We do not usually create JIRA issues for releases. That being said it > > > sounds like a good idea to have one place to track the status of the > > > connector releases and pre-requisite code changes. > > > > > > > I would like to work on this task, thanks for initiating it! > > > > > > The actual release needs to be performed by a committer. However, help > > > getting the connectors building against Flink 1.19 and testing the RC > > > is > > > appreciated. 
> > > > > > Thanks, > > > Danny > > > > > > [1] https://lists.apache.org/thread/0nw9smt23crx4gwkf6p1dd4jwvp1g5s0 > > > > > > On Tue, Apr 16, 2024 at 6:34 AM Sergey Nuyanzin snuyan...@gmail.com > > > wrote: > > > > > > > Thanks for volunteering Muhammet! > > > > And thanks Danny for starting the activity. > > > > > > > > If I'm not mistaking (please correct me if I'm wrong) > > > > > > > > this request is not about version update it is about new releases for > > > > connectors > > > > btw for jdbc connector support of 1.19 and 1.20-SNAPSHOT is already > > > > done > > > > > > > > I would volunteer for Opensearch connector since currently I'm working > > > > on > > > > support of Opensearch v2 > > > > and I think it would make sense to have a release after it is done > > > > > > > > On Tue, Apr 16, 2024 at 4:29 AM Muhammet Orazov > > > > mor+fl...@morazow.com.invalid wrote: > > > > > > > > > Hello Danny, > > > > > > > > > > I would like to work on this task, thanks for initiating it! > > > > > > > > > > I could update the versions on JDBC and Pulsar connectors. > > > > > > > > > > Are you going to create an umbrella issue for it? > > > > > > > > > > Best, > > > > > Muhammet > > > > > > > > > > On 2024-04-15 13:44, Danny Cranmer wrote: > > > > > > > > > > > Hello all, > > > > > > > > > > > > Flink 1.19 was released on 2024-03-18 [1] and the connectors have > > > > > > not > > > > > > yet > > > > > > caught up. I propose we start releasing the connectors with support > > > > > > for > > > > > > Flink 1.19 as per the connector support guidelines [2]. > > > > > > > > > > > > I will make a start on flink-connector-aws, then pickup others in > > > > > > the > > > > > > coming days. Please respond to the thread if you are/want to work on > > > > > > a > > > > > > particular connector to avoid duplicate work. 
> > > > > > > > > > > > Thanks, > > > > > > Danny > > > > > > > > > > > > [1] > > > > https://flink.apache.org/2024/03/18/announcing-the-release-of-apache-flink-1.19/ > > > > > > > > [2] > > > > https://cwiki.apache.org/confluence/display/FLINK/Externalized+Connector+development#ExternalizedConnectordevelopment-Flinkcompatibility > > > > > > > > [3] https://github.com/apache/flink-connector-aws > > > > > > > > -- > > > > Best regards, > > > > Sergey
[jira] [Created] (FLINK-35141) Release flink-connector-pulsar vX.X.X for Flink 1.19
Danny Cranmer created FLINK-35141: - Summary: Release flink-connector-pulsar vX.X.X for Flink 1.19 Key: FLINK-35141 URL: https://issues.apache.org/jira/browse/FLINK-35141 Project: Flink Issue Type: Sub-task Components: Connectors / Pulsar Reporter: Danny Cranmer https://github.com/apache/flink-connector-pulsar -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (FLINK-35134) Release flink-connector-elasticsearch vX.X.X for Flink 1.18/1.19
Danny Cranmer created FLINK-35134: - Summary: Release flink-connector-elasticsearch vX.X.X for Flink 1.18/1.19 Key: FLINK-35134 URL: https://issues.apache.org/jira/browse/FLINK-35134 Project: Flink Issue Type: Sub-task Components: Connectors / ElasticSearch Reporter: Danny Cranmer https://github.com/apache/flink-connector-elasticsearch -- This message was sent by Atlassian Jira (v8.20.10#820010)
[VOTE] Release flink-connector-jdbc v3.2.0, release candidate #1
Hi everyone, Please review and vote on the release candidate #1 for the version 3.2.0, as follows: [ ] +1, Approve the release [ ] -1, Do not approve the release (please provide specific comments) This release supports Flink 1.18 and 1.19. The complete staging area is available for your review, which includes: * JIRA release notes [1], * the official Apache source release to be deployed to dist.apache.org [2], which are signed with the key with fingerprint 125FD8DB [3], * all artifacts to be deployed to the Maven Central Repository [4], * source code tag v3.2.0-rc1 [5], * website pull request listing the new release [6]. * CI run of tag [7]. The vote will be open for at least 72 hours. It is adopted by majority approval, with at least 3 PMC affirmative votes. Thanks, Danny [1] https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12353143 [2] https://dist.apache.org/repos/dist/dev/flink/flink-connector-jdbc-3.2.0-rc1 [3] https://dist.apache.org/repos/dist/release/flink/KEYS [4] https://repository.apache.org/content/repositories/orgapacheflink-1714/ [5] https://github.com/apache/flink-connector-jdbc/releases/tag/v3.2.0-rc1 [6] https://github.com/apache/flink-web/pull/734 [7] https://github.com/apache/flink-connector-jdbc/actions/runs/8719743185
[jira] [Created] (FLINK-35136) Release flink-connector-hbase vX.X.X for Flink 1.19
Danny Cranmer created FLINK-35136: - Summary: Release flink-connector-hbase vX.X.X for Flink 1.19 Key: FLINK-35136 URL: https://issues.apache.org/jira/browse/FLINK-35136 Project: Flink Issue Type: Sub-task Components: Connectors / HBase Reporter: Danny Cranmer -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (FLINK-35142) Release flink-connector-rabbitmq vX.X.X for Flink 1.19
Danny Cranmer created FLINK-35142: - Summary: Release flink-connector-rabbitmq vX.X.X for Flink 1.19 Key: FLINK-35142 URL: https://issues.apache.org/jira/browse/FLINK-35142 Project: Flink Issue Type: Sub-task Components: Connectors / RabbitMQ Reporter: Danny Cranmer https://github.com/apache/flink-connector-rabbitmq -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (FLINK-35140) Release flink-connector-opensearch vX.X.X for Flink 1.19
Danny Cranmer created FLINK-35140: - Summary: Release flink-connector-opensearch vX.X.X for Flink 1.19 Key: FLINK-35140 URL: https://issues.apache.org/jira/browse/FLINK-35140 Project: Flink Issue Type: Sub-task Components: Connectors / Opensearch Reporter: Danny Cranmer -- This message was sent by Atlassian Jira (v8.20.10#820010)
Re: [VOTE] Release flink-connector-aws v4.3.0, release candidate #1
Hi Danny, Thanks for driving this. +1 (non-binding) - Checksum verified - Signature and Keys match - Licenses checked - Build from source - Run Kinesis example - Reviewed web PR Best Regards Ahmed Hamdy On Tue, 16 Apr 2024 at 13:01, Jeyhun Karimov wrote: > +1 (non-binding) > > - Verified tags > - Verified Lisence > - Reviewed web pr > > Regards, > Jeyhun > > On Tue, Apr 16, 2024 at 12:59 PM Danny Cranmer > wrote: > > > Hi everyone, > > > > Please review and vote on the release candidate #1 for > flink-connector-aws > > v4.3.0, as follows: > > [ ] +1, Approve the release > > [ ] -1, Do not approve the release (please provide specific comments) > > > > This release supports Apache Flink 1.18 and 1.19. > > > > The complete staging area is available for your review, which includes: > > * JIRA release notes [1], > > * the official Apache source release to be deployed to dist.apache.org > > [2], > > which are signed with the key with fingerprint 125FD8DB [3], > > * all artifacts to be deployed to the Maven Central Repository [4], > > * source code tag v4.3.0-rc1 [5], > > * website pull request listing the new release [6]. > > * CI build of the tag against Flink 1.18.1 and 1.19.0 [7]. > > > > The vote will be open for at least 72 hours. It is adopted by majority > > approval, with at least 3 PMC affirmative votes. > > > > Thanks, > > Danny > > > > [1] > > > > > https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522=12353793 > > [2] > > > https://dist.apache.org/repos/dist/dev/flink/flink-connector-aws-4.3.0-rc1 > > [3] https://dist.apache.org/repos/dist/release/flink/KEYS > > [4] > > https://repository.apache.org/content/repositories/orgapacheflink-1711/ > > [5] > https://github.com/apache/flink-connector-aws/releases/tag/v4.3.0-rc1 > > [6] https://github.com/apache/flink-web/pull/733 > > [7] > https://github.com/apache/flink-connector-aws/actions/runs/8703830985 > > >
Re: [DISCUSS] Connector releases for Flink 1.19
Hello all, I have created a parent Jira to cover the releases [1]. I have assigned AWS and MongoDB to myself and OpenSearch to Sergey. Please assign the relevant issue to yourself as you pick up the tasks. Thanks! [1] https://issues.apache.org/jira/browse/FLINK-35131 On Tue, Apr 16, 2024 at 2:41 PM Muhammet Orazov wrote: > Thanks Sergey and Danny for clarifying, indeed it > requires committer to go through the process. > > Anyway, please let me know if I can be any help. > > Best, > Muhammet > > > On 2024-04-16 11:19, Danny Cranmer wrote: > > Hello, > > > > I have opened the VOTE thread for the AWS connectors release [1]. > > > >> If I'm not mistaking (please correct me if I'm wrong) this request is > >> not > > about version update it is about new releases for connectors > > > > Yes, correct. If there are any other code changes required then help > > would be appreciated. > > > >> Are you going to create an umbrella issue for it? > > > > We do not usually create JIRA issues for releases. That being said it > > sounds like a good idea to have one place to track the status of the > > connector releases and pre-requisite code changes. > > > >> I would like to work on this task, thanks for initiating it! > > > > The actual release needs to be performed by a committer. However, help > > getting the connectors building against Flink 1.19 and testing the RC > > is > > appreciated. > > > > Thanks, > > Danny > > > > [1] https://lists.apache.org/thread/0nw9smt23crx4gwkf6p1dd4jwvp1g5s0 > > > > > > > > On Tue, Apr 16, 2024 at 6:34 AM Sergey Nuyanzin > > wrote: > > > >> Thanks for volunteering Muhammet! > >> And thanks Danny for starting the activity. 
> >> > >> If I'm not mistaking (please correct me if I'm wrong) > >> > >> this request is not about version update it is about new releases for > >> connectors > >> btw for jdbc connector support of 1.19 and 1.20-SNAPSHOT is already > >> done > >> > >> I would volunteer for Opensearch connector since currently I'm working > >> on > >> support of Opensearch v2 > >> and I think it would make sense to have a release after it is done > >> > >> On Tue, Apr 16, 2024 at 4:29 AM Muhammet Orazov > >> wrote: > >> > >>> Hello Danny, > >>> > >>> I would like to work on this task, thanks for initiating it! > >>> > >>> I could update the versions on JDBC and Pulsar connectors. > >>> > >>> Are you going to create an umbrella issue for it? > >>> > >>> Best, > >>> Muhammet > >>> > >>> On 2024-04-15 13:44, Danny Cranmer wrote: > >>> > Hello all, > >>> > > >>> > Flink 1.19 was released on 2024-03-18 [1] and the connectors have not > >>> > yet > >>> > caught up. I propose we start releasing the connectors with support > for > >>> > Flink 1.19 as per the connector support guidelines [2]. > >>> > > >>> > I will make a start on flink-connector-aws, then pickup others in the > >>> > coming days. Please respond to the thread if you are/want to work on > a > >>> > particular connector to avoid duplicate work. > >>> > > >>> > Thanks, > >>> > Danny > >>> > > >>> > [1] > >>> > > >>> > https://flink.apache.org/2024/03/18/announcing-the-release-of-apache-flink-1.19/ > >>> > [2] > >>> > > >>> > https://cwiki.apache.org/confluence/display/FLINK/Externalized+Connector+development#ExternalizedConnectordevelopment-Flinkcompatibility > >>> > [3] https://github.com/apache/flink-connector-aws > >>> > >> > >> > >> -- > >> Best regards, > >> Sergey > >> >
[jira] [Created] (FLINK-35133) Release flink-connector-cassandra v4.3.0 for Flink 1.18/1.19
Danny Cranmer created FLINK-35133: - Summary: Release flink-connector-cassandra v4.3.0 for Flink 1.18/1.19 Key: FLINK-35133 URL: https://issues.apache.org/jira/browse/FLINK-35133 Project: Flink Issue Type: Sub-task Reporter: Danny Cranmer https://github.com/apache/flink-connector-cassandra -- This message was sent by Atlassian Jira (v8.20.10#820010)
Re: [VOTE] FLIP-442: General Improvement to Configuration for Flink 2.0
+1 (non-binding) Best, Yuxin Zakelly Lan 于2024年4月17日周三 16:51写道: > +1 binding > > > Best, > Zakelly > > On Wed, Apr 17, 2024 at 2:05 PM Rui Fan <1996fan...@gmail.com> wrote: > > > +1(binding) > > > > Best, > > Rui > > > > On Wed, Apr 17, 2024 at 1:02 PM Xuannan Su > wrote: > > > > > Hi everyone, > > > > > > Thanks for all the feedback about the FLIP-442: General Improvement to > > > Configuration for Flink 2.0 [1] [2]. > > > > > > I'd like to start a vote for it. The vote will be open for at least 72 > > > hours(excluding weekends,until APR 22, 12:00AM GMT) unless there is an > > > objection or an insufficient number of votes. > > > > > > [1] > > > > > > https://cwiki.apache.org/confluence/display/FLINK/FLIP-442%3A+General+Improvement+to+Configuration+for+Flink+2.0 > > > [2] https://lists.apache.org/thread/15k0stwyoknhxvd643ctwjw3fd17pqwk > > > > > > > > > Best regards, > > > Xuannan > > > > > >
[jira] [Created] (FLINK-35139) Release flink-connector-mongodb vX.X.X for Flink 1.19
Danny Cranmer created FLINK-35139: - Summary: Release flink-connector-mongodb vX.X.X for Flink 1.19 Key: FLINK-35139 URL: https://issues.apache.org/jira/browse/FLINK-35139 Project: Flink Issue Type: Sub-task Components: Connectors / MongoDB Reporter: Danny Cranmer https://github.com/apache/flink-connector-mongodb -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (FLINK-35138) Release flink-connector-kafka vX.X.X for Flink 1.19
Danny Cranmer created FLINK-35138: - Summary: Release flink-connector-kafka vX.X.X for Flink 1.19 Key: FLINK-35138 URL: https://issues.apache.org/jira/browse/FLINK-35138 Project: Flink Issue Type: Sub-task Reporter: Danny Cranmer https://github.com/apache/flink-connector-kafka -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (FLINK-35137) Release flink-connector-jdbc vX.X.X for Flink 1.19
Danny Cranmer created FLINK-35137: - Summary: Release flink-connector-jdbc vX.X.X for Flink 1.19 Key: FLINK-35137 URL: https://issues.apache.org/jira/browse/FLINK-35137 Project: Flink Issue Type: Sub-task Reporter: Danny Cranmer https://github.com/apache/flink-connector-jdbc -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (FLINK-35135) Release flink-connector-gcp-pubsub vX.X.X for Flink 1.18/1.19
Danny Cranmer created FLINK-35135: - Summary: Release flink-connector-gcp-pubsub vX.X.X for Flink 1.18/1.19 Key: FLINK-35135 URL: https://issues.apache.org/jira/browse/FLINK-35135 Project: Flink Issue Type: Sub-task Reporter: Danny Cranmer https://github.com/apache/flink-connector-gcp-pubsub -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (FLINK-35132) Release flink-connector-aws v4.3.0 for Flink 1.18/1.19
Danny Cranmer created FLINK-35132: - Summary: Release flink-connector-aws v4.3.0 for Flink 1.18/1.19 Key: FLINK-35132 URL: https://issues.apache.org/jira/browse/FLINK-35132 Project: Flink Issue Type: Sub-task Reporter: Danny Cranmer Fix For: aws-connector-4.3.0 -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (FLINK-35131) Support and Release Connectors for Flink 1.19
Danny Cranmer created FLINK-35131: - Summary: Support and Release Connectors for Flink 1.19 Key: FLINK-35131 URL: https://issues.apache.org/jira/browse/FLINK-35131 Project: Flink Issue Type: Improvement Reporter: Danny Cranmer This is the parent task to contain connector support and releases for Flink 1.19. -- This message was sent by Atlassian Jira (v8.20.10#820010)
Re: [VOTE] FLIP-442: General Improvement to Configuration for Flink 2.0
+1 (binding) Best, Xintong On Wed, Apr 17, 2024 at 4:47 PM Zakelly Lan wrote: > +1 binding > > > Best, > Zakelly > > On Wed, Apr 17, 2024 at 2:05 PM Rui Fan <1996fan...@gmail.com> wrote: > > > +1(binding) > > > > Best, > > Rui > > > > On Wed, Apr 17, 2024 at 1:02 PM Xuannan Su > wrote: > > > > > Hi everyone, > > > > > > Thanks for all the feedback about the FLIP-442: General Improvement to > > > Configuration for Flink 2.0 [1] [2]. > > > > > > I'd like to start a vote for it. The vote will be open for at least 72 > > > hours(excluding weekends,until APR 22, 12:00AM GMT) unless there is an > > > objection or an insufficient number of votes. > > > > > > [1] > > > > > > https://cwiki.apache.org/confluence/display/FLINK/FLIP-442%3A+General+Improvement+to+Configuration+for+Flink+2.0 > > > [2] https://lists.apache.org/thread/15k0stwyoknhxvd643ctwjw3fd17pqwk > > > > > > > > > Best regards, > > > Xuannan > > > > > >
[DISCUSS][QUESTION] Drop jdk 8 support for Flink connector Opensearch
Hi everyone, I'm working on support for Opensearch v2.x in the Flink Opensearch connector [1]. Unfortunately, after several breaking changes (e.g. [2], [3]) on the Opensearch side, it is no longer possible to use the same connector build for both Opensearch v1 and v2. This forces us to go a similar way as with Elasticsearch 6/7 and build a dedicated Opensearch v2 module. However, the main pain point here is that Opensearch 2.x is built with JDK 11 and requires JDK 11 to build and use the Flink connector as well. Also, in the README [4] of most of the connectors it is mentioned explicitly that JDK 11 is required to build connectors. At the same time, it looks like we would need to release a connector for Opensearch v1 with JDK 8 and for Opensearch v2 with JDK 11. The suggestion is to drop support of JDK 8 for the Opensearch connector to make the release/testing of both modules (for Opensearch v1 and Opensearch v2) easier. Other opinions are welcome. [1] https://github.com/apache/flink-connector-opensearch/pull/38 [2] opensearch-project/OpenSearch#9082 [3] opensearch-project/OpenSearch#5902 [4] https://github.com/apache/flink-connector-opensearch/blob/main/README.md?plain=1#L18 -- Best regards, Sergey
Re: [VOTE] FLIP-442: General Improvement to Configuration for Flink 2.0
+1 binding Best, Zakelly On Wed, Apr 17, 2024 at 2:05 PM Rui Fan <1996fan...@gmail.com> wrote: > +1(binding) > > Best, > Rui > > On Wed, Apr 17, 2024 at 1:02 PM Xuannan Su wrote: > > > Hi everyone, > > > > Thanks for all the feedback about the FLIP-442: General Improvement to > > Configuration for Flink 2.0 [1] [2]. > > > > I'd like to start a vote for it. The vote will be open for at least 72 > > hours(excluding weekends,until APR 22, 12:00AM GMT) unless there is an > > objection or an insufficient number of votes. > > > > [1] > > > https://cwiki.apache.org/confluence/display/FLINK/FLIP-442%3A+General+Improvement+to+Configuration+for+Flink+2.0 > > [2] https://lists.apache.org/thread/15k0stwyoknhxvd643ctwjw3fd17pqwk > > > > > > Best regards, > > Xuannan > > >
[jira] [Created] (FLINK-35130) Simplify AvailabilityNotifierImpl to support speculative scheduler and improve performance
Yuxin Tan created FLINK-35130: - Summary: Simplify AvailabilityNotifierImpl to support speculative scheduler and improve performance Key: FLINK-35130 URL: https://issues.apache.org/jira/browse/FLINK-35130 Project: Flink Issue Type: Improvement Components: Runtime / Network Affects Versions: 1.20.0 Reporter: Yuxin Tan The AvailabilityNotifierImpl in SingleInputGate has maps storing the channel ids. But the map key is the result partition id, which changes with the attempt number when speculative execution is enabled. This can be resolved by using `inputChannels` to look up the channel, since the map key of `inputChannels` does not vary with the attempts. -- This message was sent by Atlassian Jira (v8.20.10#820010)
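A tiny, hypothetical illustration of the lookup problem the ticket describes (the class and field names below are invented for the example, not the actual Flink types): a map keyed by an id that embeds the attempt number misses entries registered under a different attempt, while a key that ignores the attempt stays stable across speculative executions.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch, not Flink code: shows why keying a notifier map by an
// attempt-dependent id breaks lookups under speculative execution.
public class NotifierKeySketch {
    // Key that varies with the attempt number (the problematic scheme).
    record AttemptAwareId(int partition, int attempt) {}

    public static void main(String[] args) {
        Map<AttemptAwareId, String> byResultPartitionId = new HashMap<>();
        // The speculative attempt 1 registers the channel...
        byResultPartitionId.put(new AttemptAwareId(7, 1), "channel-7");
        // ...but a notification carrying the attempt-0 id finds nothing.
        System.out.println(byResultPartitionId.get(new AttemptAwareId(7, 0))); // null

        // Keying by the stable partition number alone (analogous to resolving
        // the channel via inputChannels) is insensitive to the attempt number.
        Map<Integer, String> byPartitionNumber = new HashMap<>();
        byPartitionNumber.put(7, "channel-7");
        System.out.println(byPartitionNumber.get(7)); // channel-7
    }
}
```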
Re: [VOTE] FLIP-435: Introduce a New Materialized Table for Simplifying Data Pipelines
+ 1 (non-binding) Best Regards Ahmed Hamdy On Wed, 17 Apr 2024 at 08:28, Yuepeng Pan wrote: > +1(non-binding). > > > > > Best, > Yuepeng Pan > > At 2024-04-17 14:27:27, "Ron liu" wrote: > >Hi Dev, > > > >Thank you to everyone for the feedback on FLIP-435: Introduce a New > >Materialized Table for Simplifying Data Pipelines[1][2]. > > > >I'd like to start a vote for it. The vote will be open for at least 72 > >hours unless there is an objection or not enough votes. > > > >[1] > > > https://cwiki.apache.org/confluence/display/FLINK/FLIP-435%3A+Introduce+a+New+Materialized+Table+for+Simplifying+Data+Pipelines > >[2] https://lists.apache.org/thread/c1gnn3bvbfs8v1trlf975t327s4rsffs > > > >Best, > >Ron >
Re: [VOTE] FLIP-435: Introduce a New Materialized Table for Simplifying Data Pipelines
+1 (non-binding).

Best,
Yuepeng Pan

At 2024-04-17 14:27:27, "Ron liu" wrote:
> Hi Dev,
>
> Thank you to everyone for the feedback on FLIP-435: Introduce a New
> Materialized Table for Simplifying Data Pipelines [1][2].
>
> I'd like to start a vote for it. The vote will be open for at least 72
> hours unless there is an objection or not enough votes.
>
> [1] https://cwiki.apache.org/confluence/display/FLINK/FLIP-435%3A+Introduce+a+New+Materialized+Table+for+Simplifying+Data+Pipelines
> [2] https://lists.apache.org/thread/c1gnn3bvbfs8v1trlf975t327s4rsffs
>
> Best,
> Ron
Re: [VOTE] FLIP-435: Introduce a New Materialized Table for Simplifying Data Pipelines
+1 (binding)

Best,
Rui

On Wed, Apr 17, 2024 at 2:45 PM Feng Jin wrote:
> +1 (non-binding)
>
> Best,
> Feng Jin
>
> On Wed, Apr 17, 2024 at 2:28 PM Ron liu wrote:
> > +1 (binding)
> >
> > Best,
> > Ron
> >
> > Ron liu wrote on Wed, Apr 17, 2024 at 14:27:
> > > Hi Dev,
> > >
> > > Thank you to everyone for the feedback on FLIP-435: Introduce a New
> > > Materialized Table for Simplifying Data Pipelines [1][2].
> > >
> > > I'd like to start a vote for it. The vote will be open for at least 72
> > > hours unless there is an objection or not enough votes.
> > >
> > > [1] https://cwiki.apache.org/confluence/display/FLINK/FLIP-435%3A+Introduce+a+New+Materialized+Table+for+Simplifying+Data+Pipelines
> > > [2] https://lists.apache.org/thread/c1gnn3bvbfs8v1trlf975t327s4rsffs
> > >
> > > Best,
> > > Ron
Re: [VOTE] FLIP-435: Introduce a New Materialized Table for Simplifying Data Pipelines
+1 (non-binding)

Best,
Feng Jin

On Wed, Apr 17, 2024 at 2:28 PM Ron liu wrote:
> +1 (binding)
>
> Best,
> Ron
>
> Ron liu wrote on Wed, Apr 17, 2024 at 14:27:
> > Hi Dev,
> >
> > Thank you to everyone for the feedback on FLIP-435: Introduce a New
> > Materialized Table for Simplifying Data Pipelines [1][2].
> >
> > I'd like to start a vote for it. The vote will be open for at least 72
> > hours unless there is an objection or not enough votes.
> >
> > [1] https://cwiki.apache.org/confluence/display/FLINK/FLIP-435%3A+Introduce+a+New+Materialized+Table+for+Simplifying+Data+Pipelines
> > [2] https://lists.apache.org/thread/c1gnn3bvbfs8v1trlf975t327s4rsffs
> >
> > Best,
> > Ron
Re: [VOTE] FLIP-435: Introduce a New Materialized Table for Simplifying Data Pipelines
+1 (binding)

Best,
Ron

Ron liu wrote on Wed, Apr 17, 2024 at 14:27:
> Hi Dev,
>
> Thank you to everyone for the feedback on FLIP-435: Introduce a New
> Materialized Table for Simplifying Data Pipelines [1][2].
>
> I'd like to start a vote for it. The vote will be open for at least 72
> hours unless there is an objection or not enough votes.
>
> [1] https://cwiki.apache.org/confluence/display/FLINK/FLIP-435%3A+Introduce+a+New+Materialized+Table+for+Simplifying+Data+Pipelines
> [2] https://lists.apache.org/thread/c1gnn3bvbfs8v1trlf975t327s4rsffs
>
> Best,
> Ron
[VOTE] FLIP-435: Introduce a New Materialized Table for Simplifying Data Pipelines
Hi Dev,

Thank you to everyone for the feedback on FLIP-435: Introduce a New Materialized Table for Simplifying Data Pipelines [1][2].

I'd like to start a vote for it. The vote will be open for at least 72 hours unless there is an objection or not enough votes.

[1] https://cwiki.apache.org/confluence/display/FLINK/FLIP-435%3A+Introduce+a+New+Materialized+Table+for+Simplifying+Data+Pipelines
[2] https://lists.apache.org/thread/c1gnn3bvbfs8v1trlf975t327s4rsffs

Best,
Ron
Re: [VOTE] FLIP-442: General Improvement to Configuration for Flink 2.0
+1 (binding)

Best,
Rui

On Wed, Apr 17, 2024 at 1:02 PM Xuannan Su wrote:
> Hi everyone,
>
> Thanks for all the feedback about FLIP-442: General Improvement to
> Configuration for Flink 2.0 [1][2].
>
> I'd like to start a vote for it. The vote will be open for at least 72
> hours (excluding weekends, until Apr 22, 12:00AM GMT) unless there is an
> objection or an insufficient number of votes.
>
> [1] https://cwiki.apache.org/confluence/display/FLINK/FLIP-442%3A+General+Improvement+to+Configuration+for+Flink+2.0
> [2] https://lists.apache.org/thread/15k0stwyoknhxvd643ctwjw3fd17pqwk
>
> Best regards,
> Xuannan