Re: Kafka connector releases

2024-11-06 Thread Leonard Xu
Thanks @Yanquan for tracking this thread, and thanks @Arvid for the information;
it makes sense to me.

Qingsheng will drive the flink-2.0-preview release of the Flink Kafka connector,
and I’d like to assist with it too.

Best,
Leonard


> 2024年11月6日 下午6:58,Arvid Heise  写道:
> 
> Hi Yanquan,
> 
> the current state of the 3.4.0 release is that it's still pending on the
> lineage PR [1] which I expect to be merged next week (the author is on
> vacation). The release cut would then happen right afterwards.
> 
> After the release cut, we can then bump to 4.0.0-SNAPSHOT and Flink
> 2.0-preview. @Qingsheng Ren  and Leonard wanted to
> drive that release. I already prepared a bit by thoroughly annotating
> everything with Deprecated but the whole test side needs a bigger cleanup.
> It's probably also a good time to bump other dependencies.
> Could you please sync with the two release managers? At least Qingsheng is
> responsive in the Flink slack - I talked to him quite a bit there.
> 
> If there is a pressing need to start earlier, we can also cut the 3.4
> branch (which is then effectively the 3.3 branch) earlier and backport the
> lineage PR (it's just one commit ultimately). I'd leave that decision to
> the two release managers for 4.0.0 mentioned before.
> 
> One thing to note for 4.0.0 is that we need to solve the transaction
> management issues with the KafkaSink [2]. It's blocking larger users from
> adopting the KafkaSink which will be the only option for Flink 2.0. I have
> started designing a solution.
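For context on the KafkaSink mentioned above, here is a minimal sketch of an exactly-once KafkaSink configuration as it roughly looks with the current 3.x connector API; the bootstrap servers, topic name, and transactional id prefix are placeholders, and the prefix is what ties a job's Kafka transactions together.

```java
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.connector.base.DeliveryGuarantee;
import org.apache.flink.connector.kafka.sink.KafkaRecordSerializationSchema;
import org.apache.flink.connector.kafka.sink.KafkaSink;

public class ExactlyOnceKafkaSinkSketch {

    public static KafkaSink<String> buildSink() {
        // EXACTLY_ONCE makes the sink write through Kafka transactions; each
        // subtask derives its transactional ids from the configured prefix,
        // which is where the transaction management discussed above comes in.
        return KafkaSink.<String>builder()
                .setBootstrapServers("localhost:9092")            // placeholder
                .setRecordSerializer(
                        KafkaRecordSerializationSchema.builder()
                                .setTopic("example-topic")        // placeholder
                                .setValueSerializationSchema(new SimpleStringSchema())
                                .build())
                .setDeliveryGuarantee(DeliveryGuarantee.EXACTLY_ONCE)
                .setTransactionalIdPrefix("example-prefix")       // placeholder
                .build();
    }
}
```

With EXACTLY_ONCE, checkpointing must be enabled and the checkpoint interval has to stay well below the broker-side transaction timeout, which is part of why the operational story matters for larger users; FLINK-34554 has the details of the issue itself.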
> 
> Best,
> 
> Arvid
> 
> 
> [1] https://github.com/apache/flink-connector-kafka/pull/130
> [2] https://issues.apache.org/jira/browse/FLINK-34554
> 
> On Tue, Nov 5, 2024 at 4:48 AM Yanquan Lv  wrote:
> 
>> Hi, Arvid.
>> 
>> It has been a month and we are glad to see that we have completed the
>> release of Kafka 3.3.0 targeting 1.19 and 1.20.
>> 
>> Considering that Flink 2.0-preview1 has already been released, I would
>> like to know about our plans and progress for bumping to 2.0-preview1.
>> I tested the changes required to bump to 2.0-preview1 locally and found
>> that the adaptation changes needed in the production code, based on the
>> FlinkKafkaProducer deprecation work, were relatively clear and the amount of
>> change was not significant. The headache, however, was that many
>> adjustments were needed in the test code.
>> 
>> I would like to know if there is already work in the community to bump to
>> 2.0-preview1. If not, I can help complete this task (but some suggestions
>> may be needed for testing the adaptation in the code).
>> 
>> 
>> 
>> 
>> 
>>> 2024年9月27日 16:23,Arvid Heise  写道:
>>> 
>>> Dear Flink devs,
>>> 
>>> I'd like to initiate three(!) Kafka connector releases. The main reason
>> for
>>> having three releases is that we have been slacking a bit in keeping up
>>> with the latest changes.
>>> 
>>> Here is the summary:
>>> 1. Release kafka-3.3.0 targeting 1.19 and 1.20 (asap)
>>> - Incorporates lots of deprecations for Flink 2 including everything that
>>> is related to FlinkKafkaProducer (SinkFunction), FlinkKafkaConsumer
>>> (SourceFunction), and KafkaShuffle
>>> - Lots of bugfixes that are very relevant for 1.19 users (and probably
>> work
>>> with older releases)
>>> 
>>> 2. Release kafka-3.4.0 targeting 1.20 (~1-2 weeks later)
>>> - Incorporates lineage tracing which is only available in 1.20 [1] (FLIP
>>> incorrectly says that it's available in 1.19)
>>> - We have discussed some alternatives to this release in [2] but
>> basically
>>> having a separate release is the cleanest solution.
>>> - I'd like to linearize the releases to avoid having to do back or even
>>> forward ports
>>> 
>>> 3. Release kafka-4.0.0 targeting 2.0-preview (~1-2 weeks later)
>>> - Much requested to get the connector out asap for the preview. (I think
>>> the old jar using the removed interfaces should still work)
>>> - Remove all deprecated things
>>> - General spring cleaning (trying to get rid of arch unit violations,
>>> migrate to JUnit5)
>>> - Should we relocate the TableAPI stuff to o.a.f.connectors?
>>> 
>>> I'd appreciate any feedback and volunteers for RM ;) If you have pending
>>> PRs that should be part of any of those releases, please also write them.
>>> 
>>> Best,
>>> 
>>> Arvid
>>> 
>>> [1]
>>> 
>> https://cwiki.apache.org/confluence/display/FLINK/FLIP-314%3A+Support+Customized+Job+Lineage+Listener
>>> [2]
>>> 
>> https://github.com/apache/flink-connector-kafka/pull/111#issuecomment-2306382878
>> 
>> 



Re: [ANNOUNCE] New Apache Flink Committer - Junrui Li

2024-11-06 Thread Leonard Xu
Congratulations Junrui !


Best,
Leonard

> 2024年11月6日 下午8:52,Hong Liang  写道:
> 
> Congratulations Junrui!
> 
> Hong
> 
> On Wed, Nov 6, 2024 at 12:36 PM Yanquan Lv  wrote:
> 
>> Congratulations, Junrui!
>> 
>> Best,
>> Yanquan
>> 
>>> 2024年11月5日 19:59,Zhu Zhu  写道:
>>> 
>>> Hi everyone,
>>> 
>>> On behalf of the PMC, I'm happy to announce that Junrui Li has become a
>>> new Flink Committer!
>>> 
>>> Junrui has been an active contributor to the Apache Flink project for two
>>> years. He had been the driver and major developer of 8 FLIPs, contributed
>>> 100+ commits with tens of thousands of code lines.
>>> 
>>> His contributions mainly focus on enhancing Flink batch execution
>>> capabilities, including enabling parallelism inference by
>> default(FLIP-283),
>>> supporting progress recovery after JM failover(FLIP-383), and supporting
>>> adaptive optimization of logical execution plan (FLIP-468/469).
>> Furthermore,
>>> Junrui did a lot of work to improve Flink's configuration layer,
>> addressing
>>> technical debt and enhancing its user-friendliness. He is also active in
>>> mailing lists, participating in discussions and answering user questions.
>>> 
>>> Please join me in congratulating Junrui Li for becoming an Apache Flink
>>> committer.
>>> 
>>> Best,
>>> Zhu (on behalf of the Flink PMC)
>> 
>> 



[jira] [Created] (FLINK-36656) Flink CDC treats MySQL Sharding table with boolean type conversion error

2024-11-04 Thread Leonard Xu (Jira)
Leonard Xu created FLINK-36656:
--

 Summary: Flink CDC treats MySQL Sharding table with boolean type 
conversion error
 Key: FLINK-36656
 URL: https://issues.apache.org/jira/browse/FLINK-36656
 Project: Flink
  Issue Type: Bug
  Components: Flink CDC
Affects Versions: cdc-3.2.0
Reporter: Leonard Xu
 Fix For: cdc-3.3.0, cdc-3.2.1






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [DISCUSS] Release HBase connector with partial JDK support

2024-10-31 Thread Leonard Xu
+1 for the release and release manager candidate

Best,
Leonard

> 2024年10月31日 下午8:04,Yanquan Lv  写道:
> 
> +1 for the release HBase connector 4.0 with 1.18 and 1.19 support.
> 
>> 2024年10月29日 00:03,Ferenc Csaky  写道:
>> 
>> Hi,
>> 
>> Based on this discussion, I would like to suggest moving on with
>> the originally planned release of the HBase connector 4.0, which
>> will support 1.18 and 1.19.
>> 
>> I volunteer to be the release manager.
>> 
>> Thanks,
>> Ferenc
>> 
>> 
>> 
>> On Wednesday, October 23rd, 2024 at 13:19, Ferenc Csaky 
>>  wrote:
>> 
>>> 
>>> 
>>> Hi Marton, Yanquan,
>>> 
>>> Thank you for your responses! Regarding the points brought up to
>>> discuss:
>>> 
>>> 1. Supporting 1.20 definitely makes sense, but since there is quite
>>> a big gap to work down here now, I am not sure it should be done in
>>> 1 step. As I understand it, the externalized connector dev model
>>> [1] does not explicitly forbid that, but AFAIK there were external
>>> connector releases that supported 3 different Flink minor versions.
>>> 
>>> In this case, I think it would technically be possible, but IMO
>>> supporting 3 Flink versions adds more complexity to maintain. So
>>> what I would suggest is to release 4.0 with Flink 1.18 and 1.19
>>> support, and after that there can be a 4.1 that supports 1.19 and
>>> 1.20. 4.0 will only have patch support, probably minimizing Flink
>>> version specific problems.
>>> 
>>> 2. Flink 1.17 had no JDK17 support, so those Hadoop related
>>> problems should not play a role if something needs to be released
>>> that supports 1.17. But if connector 4.0 is released, 3.x versions
>>> will not get any new releases (not even patches), because 1.17 is out
>>> of support already.
>>> 
>>> Best,
>>> Ferenc
>>> 
>>> [1] 
>>> https://cwiki.apache.org/confluence/display/FLINK/Externalized+Connector+development
>>> 
>>> On Wednesday, 23 October 2024 at 05:19, Yanquan Lv decq12y...@gmail.com 
>>> wrote:
>>> 
 Hi Feri,
 Thank you for bringing up this discussion.
 I agree with releasing a version that bumps to a newer version of Flink with 
 partial JDK version support. I have two points to be discussed.
 1. I have heard many inquiries about supporting higher versions of Flink 
 in Slack, Chinese communities, etc., and a large part of them hope to use 
 it on Flink 1.20. Should we consider explicitly supporting Flink 1.20 in 
 version 4.0? Otherwise users will have to wait for a relatively long 
 release cycle.
 2. Currently, supporting Flink 1.17 is difficult, but are there any plans to 
 support it in the future? Do we need to wait for Hadoop-related 
 repositories to release specific versions?
 
> 2024年10月22日 19:44,Ferenc Csaky ferenc.cs...@pm.me.INVALID 写道:
> 
> Hello devs,
> 
> I would like to start a discussion regarding a new HBase connector 
> release. Currently, the
> externalized HBase connector has only 1 release: 3.0.0 that supports 
> Flink 1.16 and 1.17.
> 
> By stating this, it is obvious that the connector has been outdated for 
> quite a while. There
> is a long-lasting ticket [1] to release a newer HBase version, which also 
> contains a major version
> bump as HBase 1.x support is removed, but covering JDK17 with the current 
> Hadoop related
> dependency mix is impossible, because there are parts that do not play 
> well with it when you
> try to compile with JDK17+, and there are no runtime tests either.
> 
> Solving that properly will require bumping the HBase, Hadoop, and 
> Zookeeper versions as well,
> but that will require more digging and some refactoring, at least on the 
> test side.
> 
> To cut some corners and move forward I think at this point it would make 
> sense to release
> version 4.0 that supports Flink 1.18 and 1.19 but only on top of JDK8 and 
> JDK11 just to close the
> current gap a bit. I am thinking about including the limitations in the 
> java compat docs [2] to
> highlight them for users.
> 
> WDYT?
> 
> Best,
> Ferenc
> 
> [1] https://issues.apache.org/jira/browse/FLINK-35136
> [2] 
> https://nightlies.apache.org/flink/flink-docs-release-1.19/docs/deployment/java_compatibility/
> 



Re: [DISCUSS] FlinkCDC bump version to Flink 1.19

2024-10-30 Thread Leonard Xu
Thanks Yanquan for opening the ticket; let’s track this work under the Jira ticket.

Best,
Leonard

> 2024年10月30日 下午8:25,Yanquan Lv  写道:
> 
> Thanks ConradJam, Leonard, and Hang for your positive feedback; I haven't 
> seen any demand from the community to block this either.
> We’ve created a Jira ticket [1] to track the plan to bump to Flink 1.19, and hope to 
> complete this work in FlinkCDC 3.3.
> 
> [1] https://issues.apache.org/jira/browse/FLINK-36586
> 
> 
>> 2024年10月28日 16:38,Hang Ruan  写道:
>> 
>> Hi, Yanquan.
>> 
>> +1 for supporting two minor Flink versions.
>> Supporting too many Flink versions is hard work. We could support the
>> last two minor Flink versions like the external connectors do [1].
>> 
>> Best,
>> Hang
>> 
>> [1]
>> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=231116690#ExternalizedConnectordevelopment-Flinkcompatibility
>> 
>> Leonard Xu  于2024年10月28日周一 15:27写道:
>> 
>>> +1 from my side, supporting two minor Flink versions (1.19.* & 1.20.*)
>>> makes sense to me.
>>> 
>>> Best,
>>> Leonard
>>> 
>>>> 2024年10月17日 下午1:58,ConradJam  写道:
>>>> 
>>>> I am currently working on the Iceberg CDC Pipeline Connector, and support
>>>> for it would be ideal, as the Iceberg Flink Connector already supports
>>>> SinkV2. There's no reason why we shouldn't support it as well. So I'm +1
>>>> 
>>>> Yanquan Lv  于2024年10月17日周四 09:55写道:
>>>> 
>>>>> Dear Flink devs, I would like to initiate a discussion about bumping
>>>>> the Flink version of FlinkCDC to Flink 1.19.
>>>>> FlinkCDC's version was bumped to Flink 1.18 in November 2023[1],
>>> Almost a
>>>>> year has passed, during which Flink released two major versions,
>>> 1.19/1.20,
>>>>> and 1.20 is an LTS version. Therefore, I think it's time to bump the
>>> version
>>>>> to 1.19.
>>>>> Bumping the version to 1.19 has the following benefits:
>>>>> 1) FLIP-371[2] and FLIP-372[3] introduced some new Sink APIs and
>>> modified
>>>>> the signatures of some classes in Flink 1.19. Bump to 1.19 allows us to
>>>>> directly reference these new interfaces to expand the downstream
>>> ecosystem.
>>>>> For example, Iceberg recently implemented Flink Sink using these new
>>>>> APIs[4], and we can start expanding Iceberg as the sink for the CDC
>>>>> Pipeline.
>>>>> 2) We can leverage some changes introduced by this jira[5] to
>>> proactively
>>>>> trigger a checkpoint upon receiving schema changes,  thereby we can
>>> reduce
>>>>> the complexity of supporting table structure evolution and achieve
>>>>> end-to-end exactly once.
>>>>> I would like to know if there are some needs to delay bumping this
>>> version
>>>>> from the community, otherwise we could add the bump version plan to
>>>>> FlinkCDC 3.3.
>>>>> 
>>>>> [1] https://github.com/apache/flink-cdc/pull/2463
>>>>> [2]
>>>>> 
>>> https://cwiki.apache.org/confluence/display/FLINK/FLIP-371%3A+Provide+initialization+context+for+Committer+creation+in+TwoPhaseCommittingSink
>>>>> [3]
>>>>> 
>>> https://cwiki.apache.org/confluence/display/FLINK/FLIP-372%3A+Enhance+and+synchronize+Sink+API+to+match+the+Source+API
>>>>> [4] https://github.com/apache/iceberg/pull/10179
>>>>> [5] https://issues.apache.org/jira/browse/FLINK-32514
>>>>> Sorry for resend again due to some issues with the email server.
>>>> 
>>>> 
>>>> 
>>>> --
>>>> Best
>>>> 
>>>> ConradJam
>>> 
>>> 
> 



Re: [DISCUSS] FlinkCDC bump version to Flink 1.19

2024-10-28 Thread Leonard Xu
+1 from my side, supporting two minor Flink versions (1.19.* & 1.20.*) makes 
sense to me.

Best,
Leonard

> 2024年10月17日 下午1:58,ConradJam  写道:
> 
> I am currently working on the Iceberg CDC Pipeline Connector, and support
> for it would be ideal, as the Iceberg Flink Connector already supports
> SinkV2. There's no reason why we shouldn't support it as well. So I'm +1
> 
> Yanquan Lv  于2024年10月17日周四 09:55写道:
> 
>> Dear Flink devs, I would like to initiate a discussion about bumping
>> the Flink version of FlinkCDC to Flink 1.19.
>> FlinkCDC's version was bumped to Flink 1.18 in November 2023 [1]. Almost a
>> year has passed, during which Flink released two major versions, 1.19/1.20,
>> and 1.20 is an LTS version. Therefore, I think it's time to bump the version
>> to 1.19.
>> Bumping the version to 1.19 has the following benefits:
>> 1) FLIP-371 [2] and FLIP-372 [3] introduced some new Sink APIs and modified
>> the signatures of some classes in Flink 1.19. Bumping to 1.19 allows us to
>> directly reference these new interfaces to expand the downstream ecosystem.
>> For example, Iceberg recently implemented its Flink Sink using these new
>> APIs [4], and we can start expanding Iceberg as a sink for the CDC
>> Pipeline.
>> 2) We can leverage some changes introduced by this Jira ticket [5] to proactively
>> trigger a checkpoint upon receiving schema changes, thereby reducing
>> the complexity of supporting table structure evolution and achieving
>> end-to-end exactly-once semantics.
>> I would like to know if there is any need from the community to delay bumping
>> this version; otherwise we could add the version bump plan to
>> FlinkCDC 3.3.
>> 
>> [1] https://github.com/apache/flink-cdc/pull/2463
>> [2]
>> https://cwiki.apache.org/confluence/display/FLINK/FLIP-371%3A+Provide+initialization+context+for+Committer+creation+in+TwoPhaseCommittingSink
>> [3]
>> https://cwiki.apache.org/confluence/display/FLINK/FLIP-372%3A+Enhance+and+synchronize+Sink+API+to+match+the+Source+API
>> [4] https://github.com/apache/iceberg/pull/10179
>> [5] https://issues.apache.org/jira/browse/FLINK-32514
>> Sorry for resend again due to some issues with the email server.
> 
> 
> 
> -- 
> Best
> 
> ConradJam



Re: [VOTE] Release flink-connector-kafka v3.3.0, release candidate #1

2024-10-15 Thread Leonard Xu
Thanks Arvid for the quick update,  +1 (binding)

- verified signatures
- verified hashsums
- checked Github release tag 
- checked release notes
- checked the staging jars 
- reviewed the web PR 


Best,
Leonard



> 2024年10月16日 上午3:29,Matthias Pohl  写道:
> 
> +1 (binding)
> 
> * Downloaded all artifacts
> * Extracted and built sources
> * Diff of git tag checkout with downloaded sources
> * Verified SHA512 checksums & GPG certification
> * Checked that all POMs have the right expected version
> * Generated diffs to compare pom file changes with NOTICE files
> 
> Thanks Arvid. Looks good from my side.
> 
> On Tue, Oct 15, 2024 at 8:56 AM Arvid Heise  wrote:
> 
>> Okay, it turns out that the upload script uses sed to extract the version and
>> that doesn't work with Mac's sed. I will update the docs to reflect the
>> need to use gsed for the release.
>> 
>> Because we didn't need to touch code to fix it, I'd leave the RC1 still
>> active. Instead, I have deleted the old staging jars and replaced them with
>> new ones [1] [2].
>> 
>> However, I'd reset all votes to 0. Everyone that has already cast +1 can
>> probably do a quick redo by just checking those staging jars.
>> 
>> Thank you very much for your help and patience
>> 
>> Arvid
>> 
>> [1]
>> 
>> https://repository.apache.org/content/repositories/staging/org/apache/flink/flink-connector-kafka/3.3.0-1.19/
>> [2]
>> 
>> https://repository.apache.org/content/repositories/staging/org/apache/flink/flink-connector-kafka/3.3.0-1.20/
>> 
>> On Tue, Oct 15, 2024 at 8:21 AM Arvid Heise  wrote:
>> 
>>> That's a good catch Leonard. I'll check if I can change that on the fly
>> or
>>> else I will create a new RC.
>>> 
>>> On Tue, Oct 15, 2024 at 7:56 AM Leonard Xu  wrote:
>>> 
>>>> - verified signatures
>>>> - verified hashsums
>>>> - checked Github release tag
>>>> - reviewed the web PR
>>>> 
>>>> TODO:
>>>> Quick question about the url before vote.
>>>> 
>>>> 
>>>> 
>> https://repository.apache.org/content/repositories/staging/org/apache/flink/flink-connector-kafka/3.3.0-1.19.1
>>>> 
>>>> 
>> https://repository.apache.org/content/repositories/staging/org/apache/flink/flink-connector-kafka/3.3.0-1.19.1/flink-connector-kafka-3.3.0-1.19.1.jar
>>>> 
>>>> The suffix of the directory and file name should be -1.19 instead of -1.19.1;
>>>> the 1.19 means all supported Flink versions like 1.19.0, 1.19.1, 1.19.2,
>> etc.
>>>> But a specific Flink version suffix like
>>>> -1.19.1 will confuse users a lot: can this connector be used for
>>>> Flink 1.19.2?
>>>> 
>>>> 
>>>> Best,
>>>> Leonard
>>>> 
>>>> 
>>>> 
>>>> 
>>>> 
>>>>> 2024年10月14日 下午6:01,Ahmed Hamdy  写道:
>>>>> 
>>>>> Hi Arvid,
>>>>> yeah sure, thanks for following up
>>>>> 
>>>>> Best Regards
>>>>> Ahmed Hamdy
>>>>> 
>>>>> 
>>>>> On Mon, 14 Oct 2024 at 10:52, Danny Cranmer 
>>>> wrote:
>>>>> 
>>>>>> Hey Arvid,
>>>>>> 
>>>>>> +1 (binding)
>>>>>> 
>>>>>> - I left a couple of comments on the website PR that need addressing
>>>>>> - Release notes look good
>>>>>> - Source archive signatures and checksums are correct
>>>>>> - There are no binaries in the source archive
>>>>>> - Contents of mvn dist look good
>>>>>> - Binary signatures and checksums are correct
>>>>>> - CI build of tag successful [1]
>>>>>> - NOTICE and LICENSE files look correct
>>>>>> 
>>>>>> Thanks,
>>>>>> Danny
>>>>>> 
>>>>>> [1]
>>>>>> 
>>>> 
>> https://github.com/apache/flink-connector-kafka/actions/runs/11281680238
>>>>>> 
>>>>>> On Mon, Oct 14, 2024 at 10:29 AM Arvid Heise 
>> wrote:
>>>>>> 
>>>>>>> Hi Ahmed,
>>>>>>> 
>>>>>>> thanks for pointing out that web PR looks amiss. I consulted with
>>>> Chesnay
>>>>>>> and it looks like our Hugo setup is not perfect. First, we didn't
>> pin
>

Re: [VOTE] Release flink-connector-kafka v3.3.0, release candidate #1

2024-10-14 Thread Leonard Xu
- verified signatures
- verified hashsums
- checked Github release tag 
- reviewed the web PR 

TODO:
Quick question about the url before vote.

https://repository.apache.org/content/repositories/staging/org/apache/flink/flink-connector-kafka/3.3.0-1.19.1
https://repository.apache.org/content/repositories/staging/org/apache/flink/flink-connector-kafka/3.3.0-1.19.1/flink-connector-kafka-3.3.0-1.19.1.jar

The suffix of the directory and file name should be -1.19 instead of -1.19.1; the 1.19 
means all supported Flink versions like 1.19.0, 1.19.1, 1.19.2, etc. But a 
specific Flink version suffix like 
-1.19.1 will confuse users a lot: can this connector be used for Flink 
1.19.2?


Best,
Leonard





> 2024年10月14日 下午6:01,Ahmed Hamdy  写道:
> 
> Hi Arvid,
> yeah sure, thanks for following up
> 
> Best Regards
> Ahmed Hamdy
> 
> 
> On Mon, 14 Oct 2024 at 10:52, Danny Cranmer  wrote:
> 
>> Hey Arvid,
>> 
>> +1 (binding)
>> 
>> - I left a couple of comments on the website PR that need addressing
>> - Release notes look good
>> - Source archive signatures and checksums are correct
>> - There are no binaries in the source archive
>> - Contents of mvn dist look good
>> - Binary signatures and checksums are correct
>> - CI build of tag successful [1]
>> - NOTICE and LICENSE files look correct
>> 
>> Thanks,
>> Danny
>> 
>> [1]
>> https://github.com/apache/flink-connector-kafka/actions/runs/11281680238
>> 
>> On Mon, Oct 14, 2024 at 10:29 AM Arvid Heise  wrote:
>> 
>>> Hi Ahmed,
>>> 
>>> thanks for pointing out that the web PR looks amiss. I consulted with Chesnay
>>> and it looks like our Hugo setup is not perfect. First, we didn't pin the
>>> Hugo version, which I have now done. But second, it seems like we still get
>>> different signatures on the scripts for some reason. I will not be able
>> to
>>> solve it, and it seems to have been ongoing for a while (the history shows many of
>>> these merge-conflict wars).
>>> 
>>> So I'm creating a follow-up ticket and leaving it as-is, okay?
>>> 
>>> Best,
>>> 
>>> Arvid
>>> 
>>> On Sat, Oct 12, 2024 at 9:54 PM Ahmed Hamdy 
>> wrote:
>>> 
 HI Arvid,
 +1 (non-binding)
 
 - Checksum & signatures are correct
 - Build from source
 - Verified tag exists in github
 - Verified no binaries in release
 - Reviewed web PR, nit: The PR is in draft mode and I see we are
>> touching
 index files on unrelated resources. Is this intended or did I miss
 something?
 
 
 Best Regards
 Ahmed Hamdy
 
 
 On Sat, 12 Oct 2024 at 05:23, Yanquan Lv  wrote:
 
> +1 (non-binding)
> I checked:
> - Review JIRA release notes
> - Verify hashes
> - Verify signatures
> - Build from source with JDK 8/11/17
> - Source code artifacts matching the current release
> 
> 
> 
>> 2024年10月11日 23:33,Arvid Heise  写道:
>> 
>> Hi everyone,
>> 
>> 
>> Please review and vote on release candidate #1 for
 flink-connector-kafka
>> v3.3.0, as follows:
>> 
>> [ ] +1, Approve the release
>> 
>> [ ] -1, Do not approve the release (please provide specific
>> comments)
>> 
>> 
>> 
>> 
>> 
>> The complete staging area is available for your review, which
>>> includes:
>> 
>> * JIRA release notes [1],
>> 
>> * the official Apache source release to be deployed to
>>> dist.apache.org
> [2],
>> which are signed with the key with fingerprint 538B49E9BCF0B72F
>> [3],
>> 
>> * all artifacts to be deployed to the Maven Central Repository [4],
>> 
>> * source code tag v3.3.0-rc1 [5],
>> 
>> * website pull request listing the new release [6].
>> 
>> * CI build of the tag [7].
>> 
>> 
>> 
>> The vote will be open for at least 72 hours. It is adopted by
>>> majority
>> approval, with at least 3 PMC affirmative votes.
>> 
>> 
>> 
>> Thanks,
>> 
>> Arvid Heise
>> 
>> 
>> 
>> [1]
>> 
> 
 
>>> 
>> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12354606
>> 
>> [2]
>> 
> 
 
>>> 
>> https://dist.apache.org/repos/dist/dev/flink/flink-connector-kafka-3.3.0-rc1
>> 
>> [3] https://dist.apache.org/repos/dist/release/flink/KEYS
>> 
>> [4]
>> 
> 
 
>>> 
>> https://repository.apache.org/content/repositories/staging/org/apache/flink/flink-connector-kafka/3.3.0-1.19.1/
>> +
>> 
> 
 
>>> 
>> https://repository.apache.org/content/repositories/staging/org/apache/flink/flink-connector-kafka/3.3.0-1.20.0/
>> 
>> [5]
> 
>>> https://github.com/apache/flink-connector-kafka/releases/tag/v3.3.0-rc1
>> 
>> [6] https://github.com/apache/flink-web/pull/757
>> 
>> [7]
>> 
> 
 
>>> 
>> https://github.com/apache/flink-connector-kafka/actions/runs/11281680238/
> 
> 
 
>>> 
>> 



Re: Kafka connector releases

2024-09-27 Thread Leonard Xu
Thanks Arvid for volunteering!

+1 for all three releases and the RM candidate. Qingsheng and I would like 
to help with the 4.0.0-preview, which follows the Flink 2.0 preview; 
please feel free to ping us if you need any help.

Btw, for other external connectors whose highest supported Flink version is still 
1.17 or 1.18, I also hope we can have a dedicated plan to bump their versions 
ASAP; we can 
start a new thread to track those external connector releases.

Best,
Leonard


> 2024年9月27日 下午6:54,Qingsheng Ren  写道:
> 
> Thanks for the work, Arvid!
> 
> I'm not sure if there's any incompatibility issue for 3.2 + 1.20. If
> they are fully compatible, what about not dropping support for 1.18 in
> 3.2.1, and we release one more version 3.2.1-1.20? Then we can use
> 3.3.0 for the new lineage feature in 1.20 and drop support for 1.18
> and 1.19.
> 
> And for the 4.0.0-preview version I'd like to help with it :-)
> 
> Best,
> Qingsheng
> 
> On Fri, Sep 27, 2024 at 6:13 PM Arvid Heise  wrote:
>> 
>> Hi David,
>> 
>> thank you very much for your reply.
>> 
>>> Some thoughts on whether we need the 3 deliverables. And whether we could
>> follow more traditional fixpack numbering:
>>> I see that there is already a release for 1.19
>> https://mvnrepository.com/artifact/org.apache.flink/flink-connector-kafka/3.2.0-1.19
>> .
>> I am wondering why we need the first deliverable. If we need it for the bug
>> fixes, why not have a 3.2.1?
>> I forgot the most important part of the first release: drop Flink 1.18
>> support and add Flink 1.20 support. Hence I wouldn't want to mix that into
>> a bugfix release. I think this is in line with the previous minor releases.
>> 
>>> I assume that kafka-3.4.0 will not work with previous Flink releases.
>> Would it be worth having a config switch to enable the lineage in the
>> connector so that we could use it with 1.19?  We could maybe do a 3.3 if
>> this was the case.
>> Yes, as outlined in the discussion linked in the original message, we need
>> to mix in new interfaces. AFAIK classloading will fail if the interfaces are
>> not present, even if the methods are not used. So I don't see how we can
>> use a config switch to make it happen (except with code duplication).
>> However, I'm grateful for any ideas to avoid this release.
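To illustrate the classloading point: the JVM resolves every interface a class implements when the class itself is loaded, so a connector class that mixes in an interface present only in a newer Flink version fails to load on an older classpath even if the new methods are never called. Below is a minimal, self-contained sketch; the sink class name is hypothetical and not an actual connector class.

```java
public class MixinLoadingSketch {

    public static void main(String[] args) {
        try {
            // Loading a class forces its superclass and all implemented
            // interfaces to be loaded as well. If the lineage interface only
            // exists on the Flink 1.20 classpath, this fails on 1.19 with
            // NoClassDefFoundError before any method is invoked.
            Class<?> sink = Class.forName("org.example.LineageAwareKafkaSink"); // hypothetical class
            System.out.println("Loaded " + sink.getName() + "; lineage interface is available.");
        } catch (ClassNotFoundException | NoClassDefFoundError e) {
            // A config switch evaluated at runtime never gets a chance to run:
            // the failure already happened at class loading time.
            System.out.println("Sink class could not be loaded: " + e);
        }
    }
}
```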
>> 
>> Best,
>> 
>> Arvid
>> 
>> On Fri, Sep 27, 2024 at 11:11 AM David Radley 
>> wrote:
>> 
>>> Hi Arvid,
>>> Some thoughts on whether we need the 3 deliverables. And whether we could
>>> follow more traditional fixpack numbering:
>>> I see that there is already a release for 1.19
>>> https://mvnrepository.com/artifact/org.apache.flink/flink-connector-kafka/3.2.0-1.19
>>> . I am wondering why we need the first deliverable. If we need it for the
>>> bug fixes, why not have a 3.2.1?
>>> I assume that kafka-3.4.0 will not work with previous Flink releases.
>>> Would it be worth having a config switch to enable the lineage in the
>>> connector so that we could use it with 1.19?  We could maybe do a 3.3 if
>>> this was the case.
>>> 
>>> WDYT?
>>>  Kind regards, David.
>>> 
>>> 
>>> 
>>> From: Arvid Heise 
>>> Date: Friday, 27 September 2024 at 09:24
>>> To: dev@flink.apache.org 
>>> Subject: [EXTERNAL] Kafka connector releases
>>> Dear Flink devs,
>>> 
>>> I'd like to initiate three(!) Kafka connector releases. The main reason for
>>> having three releases is that we have been slacking a bit in keeping up
>>> with the latest changes.
>>> 
>>> Here is the summary:
>>> 1. Release kafka-3.3.0 targeting 1.19 and 1.20 (asap)
>>> - Incorporates lots of deprecations for Flink 2 including everything that
>>> is related to FlinkKafkaProducer (SinkFunction), FlinkKafkaConsumer
>>> (SourceFunction), and KafkaShuffle
>>> - Lots of bugfixes that are very relevant for 1.19 users (and probably work
>>> with older releases)
>>> 
>>> 2. Release kafka-3.4.0 targeting 1.20 (~1-2 weeks later)
>>> - Incorporates lineage tracing which is only available in 1.20 [1] (FLIP
>>> incorrectly says that it's available in 1.19)
>>> - We have discussed some alternatives to this release in [2] but basically
>>> having a separate release is the cleanest solution.
>>> - I'd like to linearize the releases to avoid having to do back or even
>>> forward ports
>>> 
>>> 3. Release kafka-4.0.0 targeting 2.0-preview (~1-2 weeks later)
>>> - Much requested to get the connector out asap for the preview. (I think
>>> the old jar using the removed interfaces should still work)
>>> - Remove all deprecated things
>>> - General spring cleaning (trying to get rid of arch unit violations,
>>> migrate to JUnit5)
>>> - Should we relocate the TableAPI stuff to o.a.f.connectors?
>>> 
>>> I'd appreciate any feedback and volunteers for RM ;) If you have pending
>>> PRs that should be part of any of those releases, please also write them.
>>> 
>>> Best,
>>> 
>>> Arvid
>>> 
>>> [1]
>>> 
>>> https://cwiki.apache.org/confluence/display/FLINK/FLIP-314%3A+Support+Customized+Job+Lineage+Listener
>>> [2]
>>> 
>>> https://github.

Re: Flink CDC 3.3 Kick Off Proposal

2024-09-11 Thread Leonard Xu
Thanks Yanquan for kicking off this discussion.

Generally +1 for the proposed feature-freeze date and release manager candidate 
from my side.


I’d like to be one of the release managers and offer necessary help.

Best,
Leonard



> 2024年9月11日 下午5:24,Xiqian YU  写道:
> 
> Hi Yanquan,
> 
> Thanks for kicking off the next major release! I’d like to suggest adding the 
> following ticket to the CDC 3.3 planning list:
> 
> 
>  *   FLINK-36041: Eliminate Calcite dependency during runtime[1]
> 
> [1] https://issues.apache.org/jira/browse/FLINK-36041
> 
> Regards,
> Xiqian
> 
> De : Yanquan Lv 
> Date : mercredi, 11 septembre 2024 à 17:01
> À : dev@flink.apache.org 
> Objet : Flink CDC 3.3 Kick Off Proposal
> Hi devs,
> 
> As the vote for FlinkCDC 3.2 release was unanimously approved, it's a good
> time to kick off the upcoming Flink CDC 3.3 release cycle. In the previous
> version, we improved the compatibility of table schema changes. In this
> release cycle, We plan to develop in these directions:
> 
> 1) Expand the upstream and downstream data systems that Flink CDC YAML can
> configure, such as Kafka Source and JDBC Sink;
> 
> 2) Explore the integration of AI models, provide the ability to generate
> text and vector data through the transform module;
> 
> 3) Expand the data synchronization pipeline to support batch
> synchronization scenarios.
> 
> To ensure that we can complete the above plan, we plan to complete the
> development of Flink CDC 3.3 by October 31, 2024. For developers that are
> interested in participating and contributing new features in this release
> cycle, please feel free to list your planned features in this mail thread
> and we will track them on the wiki page [1].
> 
> I have participated in some development work in the community in the past
> two major versions and am very interested in the development plan for
> version 3.3. So I volunteer to become the release manager for this version,
> and am of course open to working together with someone on this.
> 
> What do you think?
> 
> Best,
> Yanquan
> 
> 
> [1] https://cwiki.apache.org/confluence/display/FLINK/Flink+CDC+3.3+Release



Re: [VOTE] Apache Flink CDC Release 3.2.0, release candidate #1

2024-09-03 Thread Leonard Xu
+1 (binding)

- verified signatures
- verified hashsums
- checked Github release tag 
- built from source code with JDK 1.8 succeeded
- checked release notes, all blockers in RC0 have been fixed
- ran a CDC YAML job that syncs data from MySQL to Kafka; the result is as expected 
- reviewed the web PR and left some minor comments
 
Best,
Leonard


> 2024年8月30日 下午5:24,Qingsheng Ren  写道:
> 
> Hi everyone,
> 
> Please review and vote on the release candidate #1 for the version 3.2.0 of
> Apache Flink CDC, as follows:
> [ ] +1, Approve the release
> [ ] -1, Do not approve the release (please provide specific comments)
> 
> **Release Overview**
> 
> As an overview, the release consists of the following:
> a) Flink CDC source release to be deployed to dist.apache.org
> b) Maven artifacts to be deployed to the Maven Central Repository
> 
> **Staging Areas to Review**
> 
> The staging areas containing the above mentioned artifacts are as follows,
> for your review:
> * All artifacts for a) can be found in the corresponding dev repository at
> dist.apache.org [1], which are signed with the key with fingerprint
> A1BD477F79D036D2C30CA7DBCA8AEEC2F6EB040B [2]
> * All artifacts for b) can be found at the Apache Nexus Repository [3]
> 
> Other links for your review:
> * JIRA release notes [4]
> * Source code tag "release-3.2.0-rc1" with commit hash
> 6b9dda39c7066611e3df95db3aa0a81be36cbc0e [5]
> * PR for release announcement blog post of Flink CDC 3.2.0 in flink-web [6]
> 
> **Vote Duration**
> 
> The voting time will run for at least 72 hours.
> It is adopted by majority approval, with at least 3 PMC affirmative votes.
> 
> Thanks,
> Qingsheng
> 
> [1] https://dist.apache.org/repos/dist/dev/flink/flink-cdc-3.2.0-rc1/
> [2] https://dist.apache.org/repos/dist/release/flink/KEYS
> [3] https://repository.apache.org/content/repositories/orgapacheflink-1754
> [4]
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12354594
> [5] https://github.com/apache/flink-cdc/tree/release-3.2.0-rc1
> [6] https://github.com/apache/flink-web/pull/753



Re: [jira](FLINK-36127) Apply for a jira account and repair bug

2024-08-22 Thread Leonard Xu
Welcome Yunming to the Flink community!

Your Jira account has been created and I’ve assigned ticket FLINK-36127 to you.

Best,
Leonard

> 2024年8月22日 下午8:14,Yu Chen  写道:
> 
> Hi Yunming,
> 
> Thanks for helping to resolve ticket FLINK-36127, and welcome to the Flink 
> community.
> 
> I noticed that Leonard has been processing requests recently, and cc’ed him.
> 
> 
> Best,
> Yu Chen.
> 
> 
>> 2024年8月22日 20:01,Yunming Tang  写道:
>> 
>> Dear committers,
>> I'm willing to fix the bug in FLINK-36127 and I've requested a Jira
>> account, which is still being processed. My email is yunming.e...@gmail.com;
>> could you please help approve the request?
>> 
>> Best regards,
>> Yunming
> 



Re: [jira](FLINK-35444) Apply for a jira account and repair bug

2024-08-22 Thread Leonard Xu
Welcome @kyzheng to the Flink community! I just processed the pending Jira account 
requests these days, so I hope your Jira account has been created.

I'm not sure of the result because the Jira account request doesn't contain the 
applicant's email address; feel free to ping me if your account is still not ready 
tomorrow.

Also, please leave a comment under FLINK-35444 and I'll assign this ticket to 
you. You are welcome to contribute to the Flink CDC project again.


Best,
Leonard

> 2024年8月22日 下午4:18,Gabor Somogyi  写道:
> 
> Hi,
> 
> I think you can file a PR without Jira access. Once the PR is there we can
> handle the rest.
> Please read through the contribution guide [1], which can give you good
> pointers.
> 
> [1] https://flink.apache.org/how-to-contribute/contribute-code/
> 
> BR,
> G
> 
> 
> On Thu, Aug 22, 2024 at 10:06 AM 郑恺原  wrote:
> 
>> Dear committers,
>> 
>> I'm willing to fix the bug in FLINK-35444 and I requested a Jira
>> account several days ago, but it's still being processed.
>> 
>> My email is kyzheng...@163.com, could you help approve this request?
>> Many thanks!
>> 



Re: [VOTE] Apache Flink CDC Release 3.2.0, release candidate #0

2024-08-18 Thread Leonard Xu
Thanks Yanquan for the verification and for raising the issues. I think we need 
to cancel this RC; we'll prepare RC1 after fixing the two issues.

CC: Qingsheng

Best,
Leonard


> 2024年8月19日 上午10:28,Yanquan Lv  写道:
> 
> -1 as there are two stably reproducible problems in main connectors.
> Met a NotSerializableException that blocks users from using Kafka as a pipeline
> sink [1].
> Met a NullPointerException that will always lead to failure when a job is restarted
> using Paimon as the pipeline sink [2].
> 
> [1] https://issues.apache.org/jira/browse/FLINK-36082
> [2] https://issues.apache.org/jira/browse/FLINK-36088
> 
> Yanquan Lv  于2024年8月18日周日 17:43写道:
> 
>> Hi Qingsheng, I've tested and met a NotSerializableException [1] that will
>> lead to failure when using Kafka as the pipeline sink in version 3.2.0.
>> I think it may be a blocker as this happens during the submission phase.
>> 
>> [1] https://issues.apache.org/jira/browse/FLINK-36082
>> 
>> Qingsheng Ren  于2024年8月15日周四 15:13写道:
>> 
>>> Hi everyone,
>>> 
>>> Please review and vote on the release candidate #0 for the version 3.2.0
>>> of
>>> Apache Flink CDC,
>>> as follows:
>>> [ ] +1, Approve the release
>>> [ ] -1, Do not approve the release (please provide specific comments)
>>> 
>>> **Release Overview**
>>> 
>>> As an overview, the release consists of the following:
>>> a) Flink CDC source release to be deployed to dist.apache.org
>>> b) Maven artifacts to be deployed to the Maven Central Repository
>>> 
>>> **Staging Areas to Review**
>>> 
>>> The staging areas containing the above mentioned artifacts are as follows,
>>> for your review:
>>> * All artifacts for a) can be found in the corresponding dev repository at
>>> dist.apache.org [1], which are signed with the key with fingerprint
>>> A1BD477F79D036D2C30CA7DBCA8AEEC2F6EB040B [2]
>>> * All artifacts for b) can be found at the Apache Nexus Repository [3]
>>> 
>>> Other links for your review:
>>> * JIRA release notes [4]
>>> * Source code tag "release-3.2.0-rc0" with commit hash
>>> c03938e8de46b2d00a5984467d0e9bdca4a1 [5]
>>> * PR for release announcement blog post of Flink CDC 3.2.0 in flink-web
>>> [6]
>>> 
>>> **Vote Duration**
>>> 
>>> The voting time will run for at least 72 hours.
>>> It is adopted by majority approval, with at least 3 PMC affirmative votes.
>>> 
>>> Thanks,
>>> Qingsheng
>>> 
>>> [1] https://dist.apache.org/repos/dist/dev/flink/flink-cdc-3.2.0-rc0/
>>> [2] https://dist.apache.org/repos/dist/release/flink/KEYS
>>> [3]
>>> https://repository.apache.org/content/repositories/orgapacheflink-1753
>>> [4]
>>> 
>>> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12354594
>>> [5] https://github.com/apache/flink-cdc/tree/release-3.2.0-rc0
>>> [6] https://github.com/apache/flink-web/pull/753
>>> 
>> 



Re: [ANNOUNCE] New Apache Flink Committer - Xuannan Su

2024-08-17 Thread Leonard Xu
Congratulations, Xuannan!


Best,
Leonard
 


Re: [ANNOUNCE] Flink CDC release-3.2 branch cut

2024-08-13 Thread Leonard Xu
Thanks Qingsheng, I’ve merged the PR now that CI is green; feel free to prepare 
the release candidate.

Best,
Leonard

> 2024年8月13日 下午2:31,Qingsheng Ren  写道:
> 
> Thanks for letting me know, Leonard! Please go ahead and merge it.
> 
> Best,
> Qingsheng
> 
> On Tue, Aug 13, 2024 at 2:28 PM Leonard Xu  wrote:
> Thanks Qingsheng for the release management!
> 
>  I just found that I missed a PR [1] that supports the data event metadata type, which 
> was planned to be merged to the release-3.2 branch; the PR has been reviewed and 
> approved. May I merge it to the release-3.2 branch as well?
> 
> 
> Best,
> Leonard
> [1] https://github.com/apache/flink-cdc/pull/3468
> 
> > 2024年8月12日 下午9:11,Qingsheng Ren  写道:
> > 
> > Hi devs,
> > 
> > The release-3.2 branch of Flink CDC has been forked out from master branch,
> > with commit ID 104209558a30dc12d300bc23009cf59be105671c.
> > 
> > With the branch cut completed, we are now inviting volunteers from the
> > community to help us with release testing. Your participation is crucial in
> > ensuring the stability and quality of this release.
> > 
> > Due to the limited bandwidth for code reviews, we have focused our efforts
> > on improvements in the common and runtime modules for this release, such as
> > transformation improvements and the schema evolution strategy. Some features
> > are not fully included in this release; the connectors for MaxCompute,
> > OceanBase, Elasticsearch and JDBC could not be completed. We will give
> > greater focus to ecosystem features in the next release cycle.
> > 
> > We appreciate your understanding and continued support. Your contributions
> > and feedback are invaluable to us, and we look forward to collaborating
> > with you during the release testing phase!
> > 
> > Best,
> > Qingsheng
> 



Re: [ANNOUNCE] Flink CDC release-3.2 branch cut

2024-08-12 Thread Leonard Xu
Thanks Qingsheng for the release management!

 I just found that I missed a PR [1] that supports the data event metadata type, which was 
planned to be merged to the release-3.2 branch; the PR has been reviewed and approved. 
May I merge it to the release-3.2 branch as well?


Best,
Leonard
[1]https://github.com/apache/flink-cdc/pull/3468

> 2024年8月12日 下午9:11,Qingsheng Ren  写道:
> 
> Hi devs,
> 
> The release-3.2 branch of Flink CDC has been forked out from master branch,
> with commit ID 104209558a30dc12d300bc23009cf59be105671c.
> 
> With the branch cut completed, we are now inviting volunteers from the
> community to help us with release testing. Your participation is crucial in
> ensuring the stability and quality of this release.
> 
> Due to the limited bandwidth for code reviews, we have focused our efforts
> on improvements in the common and runtime modules for this release, such as
> transformation improvements and the schema evolution strategy. Some features
> are not fully included in this release; the connectors for MaxCompute,
> OceanBase, Elasticsearch and JDBC could not be completed. We will give
> greater focus to ecosystem features in the next release cycle.
> 
> We appreciate your understanding and continued support. Your contributions
> and feedback are invaluable to us, and we look forward to collaborating
> with you during the release testing phase!
> 
> Best,
> Qingsheng



Re: [DISCUSS] Release flink-connector-mongodb 2.0.0

2024-08-08 Thread Leonard Xu
Thanks Jiabao for kicking off this thread,  +1 for the proposal.


Feel free to ping me if you need help from PMC member side.

Best,
Leonard

> 2024年8月8日 下午5:04,Jiabao Sun  写道:
> 
> Hi,
> 
> I would like to propose creating a release for flink-connector-mongodb
> 2.0.0.
> 
> From the last release (1.2.0), we mainly achieved:
> 
> 1. Support upsert into sharded collections[1]
> 2. Support MongoDB 7.0[2]
> 3. Support fine-grained configuration to control filter push down[3]
> 
> I can help in driving the release but perhaps we need some more PMC
> members' attention and help.
> 
> Please let me know if the proposal is a good idea.
> 
> Best,
> Jiabao
> 
> [1] https://github.com/apache/flink-connector-mongodb/pull/37
> [2] https://github.com/apache/flink-connector-mongodb/pull/36
> [3] https://github.com/apache/flink-connector-mongodb/pull/23



Re: [VOTE] Release 1.20.0, release candidate #2

2024-07-31 Thread Leonard Xu
+1 (binding)

- checked Github release tag
- verified signatures and hashsums
- built from source code succeeded
- reviewed the web PR, left minor comment
- checked release notes; minor: there are some issues that need their Fix 
Version updated [1]
- started the SQL Client and used the MySQL CDC connector to read a changelog from a database; 
the result is as expected

Best,
Leonard
[1] https://issues.apache.org/jira/projects/FLINK/versions/12354210


> 2024年7月31日 下午4:24,Qingsheng Ren  写道:
> 
> +1 (binding)
> 
> - Built from source
> - Reviewed web PR and release note
> - Verified checksum and signature
> - Checked GitHub release tag
> - Tested submitting SQL job with SQL client reading and writing Kafka
> 
> Best,
> Qingsheng
> 
> On Tue, Jul 30, 2024 at 2:26 PM Xintong Song  wrote:
> 
>> +1 (binding)
>> 
>> - reviewed flink-web PR
>> - verified checksum and signature
>> - verified source archives don't contain binaries
>> - built from source
>> - tried example jobs on a standalone cluster, and everything looks fine
>> 
>> Best,
>> 
>> Xintong
>> 
>> 
>> 
>> On Tue, Jul 30, 2024 at 12:13 AM Jing Ge 
>> wrote:
>> 
>>> Thanks Weijie!
>>> 
>>> +1 (binding)
>>> 
>>> - verified signatures
>>> - verified checksums
>>> - checked Github release tag
>>> - reviewed the PRs
>>> - checked the repo
>>> - started a local cluster, tried with WordCount, everything was fine.
>>> 
>>> Best regards,
>>> Jing
>>> 
>>> 
>>> On Mon, Jul 29, 2024 at 1:47 PM Samrat Deb 
>> wrote:
>>> 
 Thank you Weijie for driving 1.20 release
 
 +1 (non-binding)
 
 - Verified checksums and sha512
 - Verified signatures
 - Verified Github release tags
 - Build from source
 - Start the flink cluster locally run few jobs (Statemachine and word
 Count)
 
 Bests,
 Samrat
 
 
 On Mon, Jul 29, 2024 at 3:15 PM Ahmed Hamdy 
>>> wrote:
 
> Thanks Weijie for driving
> 
> +1 (non-binding)
> 
> - Verified checksums
> - Verified signature matches Rui Fan's
> - Verified tag exists on Github
> - Build from source
> - Verified no binaries in source archive
> - Reviewed release notes PR (some nits)
> 
> Best Regards
> Ahmed Hamdy
> 
> 
> On Thu, 25 Jul 2024 at 12:21, weijie guo 
> wrote:
> 
>> Hi everyone,
>> 
>> 
>> Please review and vote on the release candidate #2 for the version
> 1.20.0,
>> 
>> as follows:
>> 
>> 
>> [ ] +1, Approve the release
>> 
>> [ ] -1, Do not approve the release (please provide specific
>> comments)
>> 
>> 
>> The complete staging area is available for your review, which
>>> includes:
>> 
>> * JIRA release notes [1], and the pull request adding release note
>>> for
>> users [2]
>> 
>> * the official Apache source release and binary convenience
>> releases
>>> to
> be
>> 
>> deployed to dist.apache.org [3], which are signed with the key
>> with
>> 
>> fingerprint B2D64016B940A7E0B9B72E0D7D0528B28037D8BC  [4],
>> 
>> * all artifacts to be deployed to the Maven Central Repository [5],
>> 
>> * source code tag "release-1.20.0-rc2" [6],
>> 
>> * website pull request listing the new release and adding
>>> announcement
> blog
>> 
>> post [7].
>> 
>> 
>> The vote will be open for at least 72 hours. It is adopted by
>>> majority
>> 
>> approval, with at least 3 PMC affirmative votes.
>> 
>> 
>> [1]
>> 
>> 
> 
 
>>> 
>> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12354210
>> 
>> [2] https://github.com/apache/flink/pull/25091
>> 
>> [3] https://dist.apache.org/repos/dist/dev/flink/flink-1.20.0-rc2/
>> 
>> [4] https://dist.apache.org/repos/dist/release/flink/KEYS
>> 
>> [5]
>> 
 
>> https://repository.apache.org/content/repositories/orgapacheflink-1752/
>> 
>> [6]
>> https://github.com/apache/flink/releases/tag/release-1.20.0-rc2
>> 
>> [7] https://github.com/apache/flink-web/pull/751
>> 
>> 
>> Best,
>> 
>> Robert, Rui, Ufuk, Weijie
>> 
> 
 
>>> 
>> 



Re: [VOTE] FLIP-460: Display source/sink I/O metrics on Flink Web UI

2024-07-16 Thread Leonard Xu
+1(binding)

Best,
Leonard

> 2024年7月16日 下午3:15,Robert Metzger  写道:
> 
> +1 (binding)
> 
> Nice to see this fixed ;)
> 
> 
> 
> On Tue, Jul 16, 2024 at 8:46 AM Yong Fang  wrote:
> 
>> +1 (binding)
>> 
>> Best,
>> FangYong
>> 
>> 
>> On Tue, Jul 16, 2024 at 1:14 PM Zhanghao Chen 
>> wrote:
>> 
>>> Hi everyone,
>>> 
>>> 
>>> Thanks for all the feedback about the FLIP-460: Display source/sink I/O
>>> metrics on Flink Web UI [1]. The discussion
>>> thread is here [2]. I'd like to start a vote on it.
>>> 
>>> The vote will be open for at least 72 hours unless there is an objection
>>> or insufficient votes.
>>> 
>>> [1]
>>> 
>> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=309496355
>>> [2] https://lists.apache.org/thread/sy271nhd2jr1r942f29xbvbgq7fsd841
>>> 
>>> Best,
>>> Zhanghao Chen
>>> 
>> 



Re: [DISCUSS] FLIP-460: Display source/sink I/O metrics on Flink Web UI

2024-07-14 Thread Leonard Xu
Thanks Zhanghao for kicking off the thread.

>  It is especially confusing for simple ETL jobs where there's a single 
> chained operator with 0 input rate and 0 output rate. 


This case has confused Flink users for a long time; +1 for the FLIP.


Best,
Leonard



Re: [ANNOUNCE] Apache Flink 1.19.1 released

2024-06-18 Thread Leonard Xu
Congratulations! Thanks Hong for the release work, and thanks to everyone involved!

Best,
Leonard

> 2024年6月19日 上午4:20,Hong Liang  写道:
> 
> The Apache Flink community is very happy to announce the release of Apache
> Flink 1.19.1, which is the first bugfix release for the Apache Flink 1.19
> series.
> 
> Apache Flink® is an open-source stream processing framework for
> distributed, high-performing, always-available, and accurate data streaming
> applications.
> 
> The release is available for download at:
> https://flink.apache.org/downloads.html
> 
> Please check out the release blog post for an overview of the improvements
> for this bugfix release:
> https://flink.apache.org/2024/06/14/apache-flink-1.19.1-release-announcement/
> 
> The full release notes are available in Jira:
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?version=12354399&projectId=12315522
> 
> We would like to thank all contributors of the Apache Flink community who
> made this release possible!
> 
> Feel free to reach out to the release managers (or respond to this thread)
> with feedback on the release process. Our goal is to constantly improve the
> release process. Feedback on what could be improved or things that didn't
> go so well are appreciated.
> 
> Regards,
> Hong



Re: [ANNOUNCE] Apache Flink CDC 3.1.1 released

2024-06-18 Thread Leonard Xu
Congratulations! Thanks Qingsheng for the release work and all contributors 
involved.

Best,
Leonard 

> 2024年6月18日 下午11:50,Qingsheng Ren  写道:
> 
> The Apache Flink community is very happy to announce the release of Apache
> Flink CDC 3.1.1.
> 
> Apache Flink CDC is a distributed data integration tool for real time data
> and batch data, bringing the simplicity and elegance of data integration
> via YAML to describe the data movement and transformation in a data
> pipeline.
> 
> Please check out the release blog post for an overview of the release:
> https://flink.apache.org/2024/06/18/apache-flink-cdc-3.1.1-release-announcement/
> 
> The release is available for download at:
> https://flink.apache.org/downloads.html
> 
> Maven artifacts for Flink CDC can be found at:
> https://search.maven.org/search?q=g:org.apache.flink%20cdc
> 
> The full release notes are available in Jira:
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12354763
> 
> We would like to thank all contributors of the Apache Flink community who
> made this release possible!
> 
> Regards,
> Qingsheng Ren



Re: [VOTE] Apache Flink CDC Release 3.1.1, release candidate #0

2024-06-18 Thread Leonard Xu
+1 (binding)

- verified signatures
- verified hashsums
- checked release notes
- reviewed the web PR
- tested that Flink CDC works with Flink 1.19
- tested route and transform in a MySQL to Doris pipeline

Best,
Leonard



[ANNOUNCE] New Apache Flink Committer - Zhongqiang Gong

2024-06-16 Thread Leonard Xu
Hi everyone,
On behalf of the PMC, I'm happy to announce that Zhongqiang Gong has become a 
new Flink Committer!

Zhongqiang has been an active Flink community member since November 2021, 
contributing numerous PRs to both the Flink and Flink CDC repositories. As a 
core contributor to Flink CDC, he developed the Oracle and SQL Server CDC 
Connectors and managed essential website and CI migrations during the donation 
of Flink CDC to Apache Flink.

Beyond his technical contributions, Zhongqiang actively participates in 
discussions on the Flink dev mailing list and responds to threads on the user 
and user-zh mailing lists. As an Apache StreamPark (incubating) Committer, he 
promotes Flink SQL and Flink CDC technologies at meetups and within the 
StreamPark community.

Please join me in congratulating Zhongqiang Gong for becoming an Apache Flink 
committer!

Best,
Leonard (on behalf of the Flink PMC)

[ANNOUNCE] New Apache Flink Committer - Hang Ruan

2024-06-16 Thread Leonard Xu
Hi everyone,
On behalf of the PMC, I'm happy to let you know that Hang Ruan has become a new 
Flink Committer !

Hang Ruan has been contributing to the Flink project since August 
2021. Since then, he has continuously contributed to Flink, Flink CDC, and 
various Flink connector repositories, including flink-connector-kafka, 
flink-connector-elasticsearch, flink-connector-aws, flink-connector-rabbitmq, 
flink-connector-pulsar, and flink-connector-mongodb. Hang Ruan focuses on the 
improvements related to connectors and catalogs and initiated FLIP-274. He is 
most recognized as a core contributor and maintainer for the Flink CDC project, 
contributing many features such as newly added table support in MySQL CDC and the 
schema evolution feature.

Beyond his technical contributions, Hang Ruan is an active member of the Flink 
community. He regularly engages in discussions on the Flink dev mailing list 
and the user-zh and user mailing lists, participates in FLIP discussions, 
assists with user Q&A, and consistently volunteers for release verifications.

Please join me in congratulating Hang Ruan for becoming an Apache Flink 
committer!

Best,
Leonard (on behalf of the Flink PMC)

Re: Flink Kubernetes Operator 1.9.0 release planning

2024-06-12 Thread Leonard Xu
+1 for the release plan and release manager candidate, thanks Gyula.

Best,
Leonard

> 2024年6月12日 下午11:10,Peter Huang  写道:
> 
> +1 Thanks Gyula for driving this release!
> 
> 
> Best Regards
> Peter Huang
> 
> On Tue, Jun 11, 2024 at 12:28 PM Márton Balassi 
> wrote:
> 
>> +1 for cutting the release and Gyula as the release manager.
>> 
>> On Tue, Jun 11, 2024 at 10:41 AM David Radley 
>> wrote:
>> 
>>> I agree – thanks for driving this Gyula.
>>> 
>>> From: Rui Fan <1996fan...@gmail.com>
>>> Date: Tuesday, 11 June 2024 at 02:52
>>> To: dev@flink.apache.org 
>>> Cc: Mate Czagany 
>>> Subject: [EXTERNAL] Re: Flink Kubernetes Operator 1.9.0 release planning
>>> Thanks Gyula for driving this release!
>>> 
 I suggest we cut the release branch this week after merging current
 outstanding smaller PRs.
>>> 
>>> It makes sense to me.
>>> 
>>> Best,
>>> Rui
>>> 
>>> On Mon, Jun 10, 2024 at 3:05 PM Gyula Fóra  wrote:
>>> 
 Hi all!
 
 I want to kick off the discussion / release process for the Flink
 Kubernetes Operator 1.9.0 version.
 
 The last, 1.8.0, version was released in March and since then we have
>>> had a
 number of important fixes. Furthermore there are some bigger pieces of
 outstanding work in the form of open PRs such as the Savepoint CRD work
 which should only be merged to 1.10.0 to gain more exposure/stability.
 
 I suggest we cut the release branch this week after merging current
 outstanding smaller PRs.
 
 I volunteer as the release manager but if someone else would like to do
>>> it,
 I would also be happy to assist.
 
 Please let me know what you think.
 
 Cheers,
 Gyula
 
>>> 
>>> Unless otherwise stated above:
>>> 
>>> IBM United Kingdom Limited
>>> Registered in England and Wales with number 741598
>>> Registered office: PO Box 41, North Harbour, Portsmouth, Hants. PO6 3AU
>>> 
>> 



Re: [VOTE] Release 1.19.1, release candidate #1

2024-06-11 Thread Leonard Xu
+1 (binding)

- verified signatures
- verified hashsums
- checked Github release tag 
- checked release notes
- reviewed all Jira issues for 1.19.1 have been resolved
- reviewed the web PR 

Best,
Leonard

> 2024年6月11日 下午3:19,Sergey Nuyanzin  写道:
> 
> +1 (non-binding)
> 
> - Downloaded all the artifacts
> - Verified checksums and signatures
> - Verified that source archives do not contain any binaries
> - Built from source with jdk8
> - Ran a simple wordcount job on local standalone cluster
> 
> On Tue, Jun 11, 2024 at 8:36 AM Matthias Pohl  wrote:
> 
>> +1 (binding)
>> 
>> * Downloaded all artifacts
>> * Extracted sources and ran compilation on sources
>> * Diff of git tag checkout with downloaded sources
>> * Verified SHA512 & GPG checksums
>> * Checked that all POMs have the right expected version
>> * Generated diffs to compare pom file changes with NOTICE files
>> * Verified WordCount in batch mode and streaming mode with a standalone
>> session cluster to verify the logs: no suspicious behavior observed
>> 
>> Best,
>> Matthias
>> 
>> On Mon, Jun 10, 2024 at 12:54 PM Hong Liang  wrote:
>> 
>>> Thanks for testing the release candidate, everyone. Nice to see coverage
>> on
>>> different types of testing being done.
>>> 
>>> I've addressed the comments on the web PR - thanks Rui Fan for good
>>> comments, and for the reminder from Ahmed :)
>>> 
>>> We have <24 hours on the vote wait time, and still waiting on 1 more
>>> binding vote!
>>> 
>>> Regards,
>>> Hong
>>> 
>>> On Sat, Jun 8, 2024 at 11:33 PM Ahmed Hamdy 
>> wrote:
>>> 
 Hi Hong,
 Thanks for driving
 
 +1 (non-binding)
 
 - Verified signatures and hashes
 - Checked github release tag
 - Verified licenses
 - Checked that the source code does not contain binaries
 - Reviewed Web PR, nit: Could we address the comment of adding
>>> FLINK-34633
 in the release
 
 
 Best Regards
 Ahmed Hamdy
 
 
 On Sat, 8 Jun 2024 at 22:22, Jeyhun Karimov 
>>> wrote:
 
> Hi Hong,
> 
> Thanks for driving the release.
> +1 (non-binding)
> 
> - Verified gpg signature
> - Reviewed the PR
> - Verified sha512
> - Checked github release tag
> - Checked that the source code does not contain binaries
> 
> Regards,
> Jeyhun
> 
> On Sat, Jun 8, 2024 at 1:52 PM weijie guo >> 
> wrote:
> 
>> Thanks Hong!
>> 
>> +1(binding)
>> 
>> - Verified gpg signature
>> - Verified sha512 hash
>> - Checked gh release tag
>> - Checked all artifacts deployed to maven repo
>> - Ran a simple wordcount job on local standalone cluster
>> - Compiled from source code with JDK 1.8.0_291.
>> 
>> Best regards,
>> 
>> Weijie
>> 
>> 
>> Xiqian YU  于2024年6月7日周五 18:23写道:
>> 
>>> +1 (non-binding)
>>> 
>>> 
>>>  *   Checked download links & release tags
>>>  *   Verified that package checksums matched
>>>  *   Compiled Flink from source code with JDK 8 / 11
>>>  *   Ran E2e data integration test jobs on local cluster
>>> 
>>> Regards,
>>> yux
>>> 
>>> De : Rui Fan <1996fan...@gmail.com>
>>> Date : vendredi, 7 juin 2024 à 17:14
>>> À : dev@flink.apache.org 
>>> Objet : Re: [VOTE] Release 1.19.1, release candidate #1
>>> +1(binding)
>>> 
>>> - Reviewed the flink-web PR (Left some comments)
>>> - Checked Github release tag
>>> - Verified signatures
>>> - Verified sha512 (hashsums)
>>> - The source archives do not contain any binaries
>>> - Build the source with Maven 3 and java8 (Checked the license as
 well)
>>> - Start the cluster locally with jdk8, and run the
 StateMachineExample
>> job,
>>> it works fine.
>>> 
>>> Best,
>>> Rui
>>> 
>>> On Thu, Jun 6, 2024 at 11:39 PM Hong Liang 
>>> wrote:
>>> 
 Hi everyone,
 Please review and vote on the release candidate #1 for the
>> flink
>> v1.19.1,
 as follows:
 [ ] +1, Approve the release
 [ ] -1, Do not approve the release (please provide specific
 comments)
 
 
 The complete staging area is available for your review, which
> includes:
 * JIRA release notes [1],
 * the official Apache source release and binary convenience
 releases
> to
>>> be
 deployed to dist.apache.org [2], which are signed with the key
 with
 fingerprint B78A5EA1 [3],
 * all artifacts to be deployed to the Maven Central Repository
>>> [4],
 * source code tag "release-1.19.1-rc1" [5],
 * website pull request listing the new release and adding
> announcement
>>> blog
 post [6].
 
 The vote will be open for at least 72 hours. It is adopted by
> majority
 approval, with at least 3 PMC affirmative votes.
 
 Thanks,
 Hong
 
 [1]

Re: Does flink cdc support oracle rac

2024-06-04 Thread Leonard Xu
Hey, please send email to the user...@apache.org address to discuss user questions 
if you’d like to use Chinese; the d...@apache.org list is used to discuss 
development-related tasks, and English is required on this mailing list.

Best,
Leonard 

> 2024年6月3日 下午2:03,bbcca...@163.com 写道:
> 
> According to the Debezium 1.9.8 documentation, Oracle RAC is supported, but no matter 
> how I configure the Flink CDC Oracle connector for RAC, it cannot read any data from 
> the source database. Is my configuration wrong? How should it be configured?
> 
> debeziumProperties.setProperty("rac.nodes", 
>     "172.16.140.254,172.16.140.253,172.16.140.252");
> debeziumProperties.setProperty("debezium.log.mining.strategy", 
>     "online_catalog");
> debeziumProperties.setProperty("debezium.log.mining.continuous.mine", 
>     "true");
> 
> DebeziumSourceFunction sourceFunction = OracleSource.builder()
>     .hostname("172.16.140.150")
>     .port(1521)
>     .database("testracdb")
>     .schemaList("kfz")          // monitor the "kfz" schema
>     .tableList("kfz.test_fb")   // monitor the "kfz.test_fb" table
>     .username("sync")
>     .password("123456")
>     .startupOptions(StartupOptions.latest())
>     .debeziumProperties(debeziumProperties)
>     .deserializer(new TestFbDeserializationSchema())
>     .build();
> 
> 
> bbcca...@163.com



Re: [ANNOUNCE] New Apache Flink PMC Member - Weijie Guo

2024-06-04 Thread Leonard Xu
Congratulations!

Best,
Leonard

> 2024年6月4日 下午4:02,Yangze Guo  写道:
> 
> Congratulations!
> 
> Best,
> Yangze Guo
> 
> On Tue, Jun 4, 2024 at 4:00 PM Weihua Hu  wrote:
>> 
>> Congratulations, Weijie!
>> 
>> Best,
>> Weihua
>> 
>> 
>> On Tue, Jun 4, 2024 at 3:03 PM Yuxin Tan  wrote:
>> 
>>> Congratulations, Weijie!
>>> 
>>> Best,
>>> Yuxin
>>> 
>>> 
>>> Yuepeng Pan  于2024年6月4日周二 14:57写道:
>>> 
 Congratulations !
 
 
 Best,
 Yuepeng Pan
 
 At 2024-06-04 14:45:45, "Xintong Song"  wrote:
> Hi everyone,
> 
> On behalf of the PMC, I'm very happy to announce that Weijie Guo has
 joined
> the Flink PMC!
> 
> Weijie has been an active member of the Apache Flink community for many
> years. He has made significant contributions in many components,
>>> including
> runtime, shuffle, sdk, connectors, etc. He has driven / participated in
> many FLIPs, authored and reviewed hundreds of PRs, been consistently
 active
> on mailing lists, and also helped with release management of 1.20 and
> several other bugfix releases.
> 
> Congratulations and welcome Weijie!
> 
> Best,
> 
> Xintong (on behalf of the Flink PMC)
 
>>> 



Re: [DISCUSS] Add Flink CDC Channel to Apache Flink Slack Workspace

2024-06-03 Thread Leonard Xu
I’ve created the flink-cdc channel in the Apache Flink Slack Workspace via 
https://issues.apache.org/jira/browse/FLINK-35514 
<https://issues.apache.org/jira/browse/FLINK-35514>

Best,
Leonard

> 2024年5月29日 下午9:53,Ahmed Hamdy  写道:
> 
> Thanks Zhongqiang, +1 for sure.
> Best Regards
> Ahmed Hamdy
> 
> 
> On Wed, 29 May 2024 at 13:48, ConradJam  wrote:
> 
>> +1 best
>> 
>> Hang Ruan  于2024年5月29日周三 11:28写道:
>> 
>>> Hi, zhongqiang.
>>> 
>>> Thanks for the proposal. +1 for it.
>>> 
>>> Best,
>>> Hang
>>> 
>>> Leonard Xu  于2024年5月28日周二 11:58写道:
>>> 
>>>> 
>>>> Thanks Zhongqiang for the proposal. We need the channel, and I should
>>>> have created it already but haven't yet; +1 from my side.
>>>> 
>>>> Best,
>>>> Leonard
>>>> 
>>>>> 2024年5月28日 上午11:54,gongzhongqiang  写道:
>>>>> 
>>>>> Hi devs,
>>>>> 
>>>>> I would like to propose adding a dedicated Flink CDC channel to the
>>>> Apache
>>>>> Flink Slack workspace.
>>>>> 
>>>>> Creating a channel focused on Flink CDC will help community members
>>>> easily
>>>>> find previous discussions
>>>>> and target new discussions and questions to the correct place. Flink
>>> CDC
>>>> is
>>>>> a sufficiently distinct component
>>>>> within the Apache Flink ecosystem, and having a dedicated channel
>> will
>>>> make
>>>>> it viable and useful for
>>>>> those specifically working with or interested in this technology.
>>>>> 
>>>>> Looking forward to your feedback and support on this proposal.
>>>>> 
>>>>> 
>>>>> Best,
>>>>> Zhongqiang Gong
>>>> 
>>>> 
>>> 
>> 
>> 
>> --
>> Best
>> 
>> ConradJam
>> 



Re: [VOTE] Release flink-connector-jdbc v3.2.0, release candidate #2

2024-06-03 Thread Leonard Xu

> -1 (non-binding)
> blocked by https://issues.apache.org/jira/browse/FLINK-35496.


I just replied in https://issues.apache.org/jira/browse/FLINK-35496 
<https://issues.apache.org/jira/browse/FLINK-35496>. I think the understanding of the 
API annotation is incorrect in this case; I hope to see your comments there.

Best,
Leonard




> Best, 
> Yuepeng Pan
> 
> 
> On 2024/05/22 12:13:51 Leonard Xu wrote:
>> +1 (binding)
>> 
>> - verified signatures
>> - verified hashsums
>> - built from source code with java 1.8 succeeded
>> - checked Github release tag 
>> - checked release notes
>> - reviewed the web PR
>> 
>> Best,
>> Leonard
>> 
>>> 2024年4月21日 下午9:42,Hang Ruan  写道:
>>> 
>>> +1 (non-binding)
>>> 
>>> - Validated checksum hash
>>> - Verified signature
>>> - Verified that no binaries exist in the source archive
>>> - Build the source with Maven and jdk8
>>> - Verified web PR
>>> - Check that the jar is built by jdk8
>>> 
>>> Best,
>>> Hang
>>> 
>>> Ahmed Hamdy  于2024年4月18日周四 21:37写道:
>>> 
>>>> +1 (non-binding)
>>>> 
>>>> - Verified Checksums and hashes
>>>> - Verified Signatures
>>>> - No binaries in source
>>>> - Build source
>>>> - Github tag exists
>>>> - Reviewed Web PR
>>>> 
>>>> 
>>>> Best Regards
>>>> Ahmed Hamdy
>>>> 
>>>> 
>>>> On Thu, 18 Apr 2024 at 11:22, Danny Cranmer 
>>>> wrote:
>>>> 
>>>>> Sorry for typos:
>>>>> 
>>>>>> Please review and vote on the release candidate #1 for the version
>>>> 3.2.0,
>>>>> as follows:
>>>>> Should be "release candidate #2"
>>>>> 
>>>>>> * source code tag v3.2.0-rc1 [5],
>>>>> Should be "source code tag v3.2.0-rc2"
>>>>> 
>>>>> Thanks,
>>>>> Danny
>>>>> 
>>>>> On Thu, Apr 18, 2024 at 11:19 AM Danny Cranmer 
>>>>> wrote:
>>>>> 
>>>>>> Hi everyone,
>>>>>> 
>>>>>> Please review and vote on the release candidate #1 for the version
>>>> 3.2.0,
>>>>>> as follows:
>>>>>> [ ] +1, Approve the release
>>>>>> [ ] -1, Do not approve the release (please provide specific comments)
>>>>>> 
>>>>>> This release supports Flink 1.18 and 1.19.
>>>>>> 
>>>>>> The complete staging area is available for your review, which includes:
>>>>>> * JIRA release notes [1],
>>>>>> * the official Apache source release to be deployed to dist.apache.org
>>>>>> [2], which are signed with the key with fingerprint 125FD8DB [3],
>>>>>> * all artifacts to be deployed to the Maven Central Repository [4],
>>>>>> * source code tag v3.2.0-rc1 [5],
>>>>>> * website pull request listing the new release [6].
>>>>>> * CI run of tag [7].
>>>>>> 
>>>>>> The vote will be open for at least 72 hours. It is adopted by majority
>>>>>> approval, with at least 3 PMC affirmative votes.
>>>>>> 
>>>>>> Thanks,
>>>>>> Danny
>>>>>> 
>>>>>> [1]
>>>>>> 
>>>>> 
>>>> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12353143
>>>>>> [2]
>>>>>> 
>>>>> 
>>>> https://dist.apache.org/repos/dist/dev/flink/flink-connector-jdbc-3.2.0-rc2
>>>>>> [3] https://dist.apache.org/repos/dist/release/flink/KEYS
>>>>>> [4]
>>>>>> 
>>>> https://repository.apache.org/content/repositories/orgapacheflink-1718/
>>>>>> [5]
>>>>> https://github.com/apache/flink-connector-jdbc/releases/tag/v3.2.0-rc2
>>>>>> [6] https://github.com/apache/flink-web/pull/734
>>>>>> [7]
>>>>> https://github.com/apache/flink-connector-jdbc/actions/runs/8736019099
>>>>>> 
>>>>> 
>>>> 
>> 
>> 



Re: [DISCUSS] Flink CDC 3.2 Release Planning

2024-06-03 Thread Leonard Xu
Hey, Qingsheng

Thanks for driving the 3.2 release forward; it has involved more contributions and 
work than we planned at the beginning.
I’d like to help with the release management as one of the release managers, so feel 
free to ping me if you need any help from my side.

Best,
Leonard

> 2024年6月3日 下午4:21,Qingsheng Ren  写道:
> 
> Hi devs,
> 
> Considering the current timeline and my workload, it would be great to
> have someone help with managing this release cycle of Flink CDC. Feel
> free to contact me if you are interested!
> 
> Best,
> Qingsheng
> 
> On Mon, May 27, 2024 at 7:39 PM Leonard Xu  wrote:
>> 
>> Hey, Xiqian
>> 
>>> In the 3.2 kick-off planning, it is expected to add some strongly demanded features 
>>> to meet YAML pipeline job users’ needs [1]. However, 0 out of 10 3.2 
>>> feature tickets have been completed so far, and it’s very unlikely for us 
>>> to catch up with the planned feature freeze (May 25th) and release deadline 
>>> (June 1st) [2].
>>> 
>>> I hereby suggest postponing the release schedule by 2 weeks, extending the feature 
>>> freeze day to June 8th and the release day to June 15th. Considering its wide 
>>> impact, please leave your comments & concerns about this suggestion.
>> 
>> Thanks for the proposal, +1 for extending by 2 weeks as the 3.1 release 
>> required more work than expected.
>> 
>> 
>> Best,
>> Leonard
>> 
>>> 
>>> Regards,
>>> Xiqian
>>> 
>>> [1] https://issues.apache.org/jira/secure/RapidBoard.jspa?rapidView=607
>>> [2] 
>>> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=303794651
>>> 
>>> 
>>> De : Peter Huang 
>>> Date : jeudi, 9 mai 2024 à 23:31
>>> À : dev@flink.apache.org 
>>> Objet : Re: [DISCUSS] Flink CDC 3.2 Release Planning
>>> Thanks Qingsheng for driving the release!
>>> 
>>> +1. I also would like to provide some help on CDC 3.2.
>>> 
>>> 
>>> Best Regards
>>> Peter Huang
>>> 
>>> On Thu, May 9, 2024 at 3:21 AM Xiqian YU  wrote:
>>> 
>>>> Thanks Qingsheng for driving the release!
>>>> 
>>>> +1. Would love to provide my help on CDC 3.2.
>>>> 
>>>> Regards,
>>>> Xiqian
>>>> 
>>>> De : Hang Ruan 
>>>> Date : jeudi, 9 mai 2024 à 17:50
>>>> À : dev@flink.apache.org 
>>>> Objet : Re: [DISCUSS] Flink CDC 3.2 Release Planning
>>>> Thanks Qinsheng for driving.
>>>> 
>>>> I would like to provide some helps for this verison too. +1.
>>>> 
>>>> Best,
>>>> Hang
>>>> 
>>>> Hongshun Wang  于2024年5月9日周四 14:16写道:
>>>> 
>>>>> Thanks Qinsheng for driving,
>>>>> +1 from my side.
>>>>> 
>>>>> Besi,
>>>>> Hongshun
>>>>> 
>>>>> On Wed, May 8, 2024 at 11:41 PM Leonard Xu  wrote:
>>>>> 
>>>>>> +1 for the proposal code freeze date and RM candidate.
>>>>>> 
>>>>>> Best,
>>>>>> Leonard
>>>>>> 
>>>>>>> 2024年5月8日 下午10:27,gongzhongqiang  写道:
>>>>>>> 
>>>>>>> Hi Qingsheng
>>>>>>> 
>>>>>>> Thank you for driving the release.
>>>>>>> Agree with the goal and I'm willing to help.
>>>>>>> 
>>>>>>> Best,
>>>>>>> Zhongqiang Gong
>>>>>>> 
>>>>>>> Qingsheng Ren  于2024年5月8日周三 14:22写道:
>>>>>>> 
>>>>>>>> Hi devs,
>>>>>>>> 
>>>>>>>> As we are in the midst of the release voting process for Flink CDC
>>>>>> 3.1.0, I
>>>>>>>> think it's a good time to kick off the upcoming Flink CDC 3.2
>>>> release
>>>>>>>> cycle.
>>>>>>>> 
>>>>>>>> In this release cycle I would like to focus on the stability of
>>>> Flink
>>>>>> CDC,
>>>>>>>> especially for the newly introduced YAML-based data integration
>>>>>>>> framework. To ensure we can iterate and improve swiftly, I propose
>>>> to
>>>>>> make
>>>>>>>> 3.2 a relatively short release cycle, targeting a feature freeze by
>>>>> May
>>>>>> 24,
>>>>>>>> 2024.
>>>>>>>> 
>>>>>>>> For developers that are interested in participating and contributing
>>>>> new
>>>>>>>> features in this release cycle, please feel free to list your
>>>> planning
>>>>>>>> features in the wiki page [1].
>>>>>>>> 
>>>>>>>> I'm happy to volunteer as a release manager and of course open to
>>>> work
>>>>>>>> together with someone on this.
>>>>>>>> 
>>>>>>>> What do you think?
>>>>>>>> 
>>>>>>>> Best,
>>>>>>>> Qingsheng
>>>>>>>> 
>>>>>>>> [1]
>>>>>>>> 
>>>>> https://cwiki.apache.org/confluence/display/FLINK/Flink+CDC+3.2+Release
>>>>>>>> 
>>>>>> 
>>>>>> 
>>>>> 
>>>> 
>> 



Re: [DISCUSS] Flink CDC 3.1.1 Release

2024-05-28 Thread Leonard Xu
Thanks Xiqian for kicking off the discussion, +1 from my side.

Best,
Leonard


> 2024年5月28日 下午7:43,Xiqian YU  写道:
> 
> Hi devs,
> 
> I would like to make a proposal about creating a new Flink CDC 3.1 patch 
> release (3.1.1). It’s been a week since the last CDC version 3.1.0 was 
> released [1], and since then 7 tickets have been closed, 4 of which are of 
> high priority.
> 
> Currently, 5 items are open: 1 of them is a blocker that stops users from 
> restoring from existing checkpoints after upgrading [2]; a PR for it is ready 
> and will be merged soon. The other 4 have approved PRs and will also be 
> merged soon [3][4][5][6]. I propose that a patch version be released after 
> all pending tickets are closed.
> 
> Please reply if there are any unresolved blocking issues you’d like to 
> include in this release.
> 
> Regards,
> Xiqian
> 
> [1] 
> https://flink.apache.org/2024/05/17/apache-flink-cdc-3.1.0-release-announcement/
> [2] https://issues.apache.org/jira/browse/FLINK-35464
> [3] https://issues.apache.org/jira/browse/FLINK-35149
> [4] https://issues.apache.org/jira/browse/FLINK-35323
> [5] https://issues.apache.org/jira/browse/FLINK-35430
> [6] https://issues.apache.org/jira/browse/FLINK-35447
> 



Re: [DISCUSS] Add Flink CDC Channel to Apache Flink Slack Workspace

2024-05-27 Thread Leonard Xu


Thanks Zhongqiang for the proposal. We need the channel, and I should have 
created it already but haven't yet; +1 from my side.

Best,
Leonard

> 2024年5月28日 上午11:54,gongzhongqiang  写道:
> 
> Hi devs,
> 
> I would like to propose adding a dedicated Flink CDC channel to the Apache
> Flink Slack workspace.
> 
> Creating a channel focused on Flink CDC will help community members easily
> find previous discussions
> and target new discussions and questions to the correct place. Flink CDC is
> a sufficiently distinct component
> within the Apache Flink ecosystem, and having a dedicated channel will make
> it viable and useful for
> those specifically working with or interested in this technology.
> 
> Looking forward to your feedback and support on this proposal.
> 
> 
> Best,
> Zhongqiang Gong



Re: [DISCUSS] Flink CDC 3.2 Release Planning

2024-05-27 Thread Leonard Xu
Hey, Xiqian

> In the 3.2 kick-off planning, it is expected to add some strongly demanded features to 
> meet YAML pipeline job users’ needs [1]. However, 0 out of 10 3.2 feature 
> tickets have been completed so far, and it’s very unlikely for us to catch 
> up with the planned feature freeze (May 25th) and release deadline (June 1st) [2].
> 
> I hereby suggest postponing the release schedule by 2 weeks, extending the feature 
> freeze day to June 8th and the release day to June 15th. Considering its wide 
> impact, please leave your comments & concerns about this suggestion.

Thanks for the proposal, +1 for extending by 2 weeks as the 3.1 release required 
more work than expected.


Best,
Leonard

> 
> Regards,
> Xiqian
> 
> [1] https://issues.apache.org/jira/secure/RapidBoard.jspa?rapidView=607
> [2] https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=303794651
> 
> 
> De : Peter Huang 
> Date : jeudi, 9 mai 2024 à 23:31
> À : dev@flink.apache.org 
> Objet : Re: [DISCUSS] Flink CDC 3.2 Release Planning
> Thanks Qingsheng for driving the release!
> 
> +1. I also would like to provide some help on CDC 3.2.
> 
> 
> Best Regards
> Peter Huang
> 
> On Thu, May 9, 2024 at 3:21 AM Xiqian YU  wrote:
> 
>> Thanks Qingsheng for driving the release!
>> 
>> +1. Would love to provide my help on CDC 3.2.
>> 
>> Regards,
>> Xiqian
>> 
>> De : Hang Ruan 
>> Date : jeudi, 9 mai 2024 à 17:50
>> À : dev@flink.apache.org 
>> Objet : Re: [DISCUSS] Flink CDC 3.2 Release Planning
>> Thanks Qingsheng for driving.
>> 
>> I would like to provide some help for this version too. +1.
>> 
>> Best,
>> Hang
>> 
>> Hongshun Wang  于2024年5月9日周四 14:16写道:
>> 
>>> Thanks Qingsheng for driving,
>>> +1 from my side.
>>> 
>>> Best,
>>> Hongshun
>>> 
>>> On Wed, May 8, 2024 at 11:41 PM Leonard Xu  wrote:
>>> 
>>>> +1 for the proposal code freeze date and RM candidate.
>>>> 
>>>> Best,
>>>> Leonard
>>>> 
>>>>> 2024年5月8日 下午10:27,gongzhongqiang  写道:
>>>>> 
>>>>> Hi Qingsheng
>>>>> 
>>>>> Thank you for driving the release.
>>>>> Agree with the goal and I'm willing to help.
>>>>> 
>>>>> Best,
>>>>> Zhongqiang Gong
>>>>> 
>>>>> Qingsheng Ren  于2024年5月8日周三 14:22写道:
>>>>> 
>>>>>> Hi devs,
>>>>>> 
>>>>>> As we are in the midst of the release voting process for Flink CDC
>>>> 3.1.0, I
>>>>>> think it's a good time to kick off the upcoming Flink CDC 3.2
>> release
>>>>>> cycle.
>>>>>> 
>>>>>> In this release cycle I would like to focus on the stability of
>> Flink
>>>> CDC,
>>>>>> especially for the newly introduced YAML-based data integration
>>>>>> framework. To ensure we can iterate and improve swiftly, I propose
>> to
>>>> make
>>>>>> 3.2 a relatively short release cycle, targeting a feature freeze by
>>> May
>>>> 24,
>>>>>> 2024.
>>>>>> 
>>>>>> For developers that are interested in participating and contributing
>>> new
>>>>>> features in this release cycle, please feel free to list your
>> planning
>>>>>> features in the wiki page [1].
>>>>>> 
>>>>>> I'm happy to volunteer as a release manager and of course open to
>> work
>>>>>> together with someone on this.
>>>>>> 
>>>>>> What do you think?
>>>>>> 
>>>>>> Best,
>>>>>> Qingsheng
>>>>>> 
>>>>>> [1]
>>>>>> 
>>> https://cwiki.apache.org/confluence/display/FLINK/Flink+CDC+3.2+Release
>>>>>> 
>>>> 
>>>> 
>>> 
>> 



Re: [DISCUSS] Flink 1.19.1 release

2024-05-26 Thread Leonard Xu
+1 for the 1.19.1 release and +1 for Hong as release manager.

Best,
Leonard

> 2024年5月25日 上午2:55,Danny Cranmer  写道:
> 
> +1 for the 1.19.1 release and +1 for Hong as release manager.



Re: [VOTE] FLIP-457: Improve Table/SQL Configuration for Flink 2.0

2024-05-23 Thread Leonard Xu
+1

Best,
Leonard

> 2024年5月24日 下午1:27,weijie guo  写道:
> 
> +1(binding)
> 
> Best regards,
> 
> Weijie
> 
> 
> Lincoln Lee  于2024年5月24日周五 12:20写道:
> 
>> +1(binding)
>> 
>> Best,
>> Lincoln Lee
>> 
>> 
>> Jane Chan  于2024年5月24日周五 09:52写道:
>> 
>>> Hi all,
>>> 
>>> I'd like to start a vote on FLIP-457[1] after reaching a consensus
>> through
>>> the discussion thread[2].
>>> 
>>> The vote will be open for at least 72 hours unless there is an objection
>> or
>>> insufficient votes.
>>> 
>>> 
>>> [1]
>>> 
>> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=307136992
>>> [2] https://lists.apache.org/thread/1sthbv6q00sq52pp04n2p26d70w4fqj1
>>> 
>>> Best,
>>> Jane
>>> 
>> 



Re: [DISCUSS] Flink CDC Upgrade Debezium version to 2.x

2024-05-22 Thread Leonard Xu
Thanks Zhongqiang for bringing up this discussion.

I also noticed that you sent a mail to Debezium’s dev mailing list; it would 
help us a lot if they could maintain an LTS version for their 1.x series. 

I can accept the proposal to reference DBZ 2.0’s docs as a temporary solution in 
the current situation. 

As for upgrading the Debezium version and bumping the JDK version as well, we have to 
consider that Flink’s default JDK version is still JDK 1.8. It’s a hard decision 
to make at this moment, but I agree we need to bump the DBZ and JDK versions 
eventually.


Best,
Leonard


> 2024年5月23日 下午1:24,gongzhongqiang  写道:
> 
> Hi all,
> 
> I would like to start a discussion about upgrading Debezium to version 2.x.
> 
> Background:
> Currently, the Debezium community no longer maintains versions prior to
> 2.0,
> and the website has taken down the documentation for versions before 2.0.
> However, Flink CDC depends on Debezium version 1.9, and the documentation
> references links to that version.
> 
> 
> Problem:
> - References to Debezium's documentation links report errors [1]
> - The Debezium community will no longer maintain versions prior to 2.0.
> Flink CDC
> synchronizes bug fixes from Debezium 2.0 by overwriting classes, but the
> classes differ significantly between 2.x and 1.9.
> 
> 
> Compatibility and Deprecation:
> - Debezium uses JDK 11 starting from version 2.0 [2]
> 
> 
> Plan:
> - Migrate references in Flink CDC documentation from Debezium 1.9 to 2.0
> - Upgrade Debezium to version 2.x
> 
> [1]
> https://github.com/apache/flink-cdc/actions/runs/9192497396/job/25281283926#step:4:1148
> [2] https://debezium.io/releases/2.0/
> 
> Best,
> Zhongqiang Gong



Re: [DISCUSSION] FLIP-457: Improve Table/SQL Configuration for Flink 2.0

2024-05-22 Thread Leonard Xu
Thanks Jane for the refinement work, +1 from my side. 
I adjusted the table format of the FLIP so that it can display all content on one 
page.
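
For anyone skimming the quoted thread below, here is a minimal sketch of the 
option-migration pattern being discussed: elevating an options holder from 
@Experimental to @PublicEvolving and re-declaring an option under the same key in its 
intended class. All class and option names in this sketch are hypothetical placeholders, 
not the actual FLIP-457 changes.

import org.apache.flink.annotation.PublicEvolving;
import org.apache.flink.configuration.ConfigOption;
import org.apache.flink.configuration.ConfigOptions;

/** Hypothetical new home for a moved option; elevated from @Experimental to @PublicEvolving. */
@PublicEvolving
public class SomeOptimizerConfigOptions {

    /**
     * Re-declared with the same key as the old option, so existing user
     * configurations keep working; the old definition stays @Deprecated in
     * 1.20 and is removed in 2.0.
     */
    public static final ConfigOption<Boolean> SOME_FEATURE_ENABLED =
            ConfigOptions.key("table.optimizer.some-feature-enabled")
                    .booleanType()
                    .defaultValue(false)
                    .withDescription("Enables the hypothetical feature (illustration only).");
}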

Best,
Leonard


> 2024年5月22日 下午3:42,Jane Chan  写道:
> 
> Hi Lincoln,
> 
> Thanks for your suggestion. I've reviewed the comments from the previous PR
> review[1], and the agreement at the time was that any configuration options
> not included in ExecutionConfigOptions and OptimizerConfigOptions should
> have the Experimental annotation explicitly added. Since this annotation
> has been relatively stable from 1.9.0 until now, you make a valid point,
> and we can elevate it to the PublicEvolving level.
> 
> Please let me know if you have any questions.
> 
> [1] https://github.com/apache/flink/pull/8980
> 
> Best,
> Jane
> 
> On Tue, May 21, 2024 at 10:25 PM Lincoln Lee  wrote:
> 
>> Hi Jane,
>> 
>> Thanks for the updates!
>> 
>> Just one small comment on the options in IncrementalAggregateRule
>> & RelNodeBlock, should we also change the API level from Experimental
>> to PublicEvolving?
>> 
>> 
>> Best,
>> Lincoln Lee
>> 
>> 
>> Jane Chan  于2024年5月21日周二 16:41写道:
>> 
>>> Hi all,
>>> 
>>> Thanks for your valuable feedback!
>>> 
>>> To @Xuannan
>>> 
>>> For options to be moved to another module/package, I think we have to
 mark the old option deprecated in 1.20 for it to be removed in 2.0,
 according to the API compatibility guarantees[1]. We can introduce the
 new option in 1.20 with the same option key in the intended class.
>>> 
>>> 
>>> Good point, fixed.
>>> 
>>> To @Lincoln and @Benchao
>>> 
>>> Thanks for sharing the insights into the historical context of which I
>> was
>>> unaware. I've reorganized the sheet.
>>> 
>>> 3. Regarding WindowEmitStrategy, IIUC it is currently unsupported on TVF
 window, so it's recommended to keep it untouched for now and follow up
>> in
 FLINK-29692
>>> 
>>> 
>>> How to tackle the configuration is up to whether to remove the legacy
>>> window aggregate in 2.0, and I've updated the FLIP to leverage this part
>> to
>>> FLINK-29692.
>>> 
>>> Please let me know if that answers your questions or if you have other
>>> comments.
>>> 
>>> Best,
>>> Jane
>>> 
>>> 
>>> On Mon, May 20, 2024 at 1:52 PM Ron Liu  wrote:
>>> 
 Hi, Lincoln
 
> 2. Regarding the options in HashAggCodeGenerator, since this new
>>> feature
 has gone
 through a couple of release cycles and could be considered for
 PublicEvolving now,
 cc @Ron Liu   WDYT?
 
 Thanks for cc'ing me,  +1 for public these options now.
 
 Best,
 Ron
 
 Benchao Li  于2024年5月20日周一 13:08写道:
 
> I agree with Lincoln about the experimental features.
> 
> Some of these configurations do not even have proper implementation,
> take 'table.exec.range-sort.enabled' as an example, there was a
> discussion[1] about it before.
> 
> [1] https://lists.apache.org/thread/q5h3obx36pf9po28r0jzmwnmvtyjmwdr
> 
> Lincoln Lee  于2024年5月20日周一 12:01写道:
>> 
>> Hi Jane,
>> 
>> Thanks for the proposal!
>> 
>> +1 for the changes except for these annotated as experimental ones.
>> 
>> For the options annotated as experimental,
>> 
>> +1 for the moving of IncrementalAggregateRule & RelNodeBlock.
>> 
>> For the rest of the options, there are some suggestions:
>> 
>> 1. for the batch related parameters, it's recommended to either
>>> delete
>> them (leaving the necessary defaults value in place) or leave them
>> as
> they
>> are. Including:
>> FlinkRelMdRowCount
>> FlinkRexUtil
>> BatchPhysicalSortRule
>> JoinDeriveNullFilterRule
>> BatchPhysicalJoinRuleBase
>> BatchPhysicalSortMergeJoinRule
>> 
>> What I understand about the history of these options is that they
>>> were
> once
>> used for fine
>> tuning for tpc testing, and the current flink planner no longer
>>> relies
 on
>> these internal
>> options when testing tpc[1]. In addition, these options are too
>>> obscure
> for
>> SQL users,
>> and some of them are actually magic numbers.
>> 
>> 2. Regarding the options in HashAggCodeGenerator, since this new
 feature
>> has gone
>> through a couple of release cycles and could be considered for
>> PublicEvolving now,
>> cc @Ron Liu   WDYT?
>> 
>> 3. Regarding WindowEmitStrategy, IIUC it is currently unsupported
>> on
 TVF
>> window, so
>> it's recommended to keep it untouched for now and follow up in
>> FLINK-29692[2]. cc @Xuyang 
>> 
>> [1]
>> 
> 
 
>>> 
>> https://github.com/ververica/flink-sql-benchmark/blob/master/tools/common/flink-conf.yaml
>> [2] https://issues.apache.org/jira/browse/FLINK-29692
>> 
>> 
>> Best,
>> Lincoln Lee
>> 
>> 
>> Yubin Li  于2024年5月17日周五 10:49写道:
>> 
>>> Hi Jane,
>>> 
>>> Thanks Jane for driving this proposal!
>>> 
>>> This makes sense.

Re: [VOTE] Release flink-connector-opensearch v1.2.0, release candidate #1

2024-05-22 Thread Leonard Xu
+1 (binding)

- verified signatures
- verified hashsums
- built from source code with JDK 1.8 succeeded
- checked Github release tag 
- checked release notes
- reviewed the web PR

Best,
Leonard

> 2024年5月16日 上午6:58,Andrey Redko  写道:
> 
> +1 (non-binding), thanks Sergey!
> 
> On Wed, May 15, 2024, 5:56 p.m. Sergey Nuyanzin  wrote:
> 
>> Hi everyone,
>> Please review and vote on release candidate #1 for
>> flink-connector-opensearch v1.2.0, as follows:
>> [ ] +1, Approve the release
>> [ ] -1, Do not approve the release (please provide specific comments)
>> 
>> 
>> The complete staging area is available for your review, which includes:
>> * JIRA release notes [1],
>> * the official Apache source release to be deployed to dist.apache.org
>> [2],
>> which are signed with the key with fingerprint
>> F7529FAE24811A5C0DF3CA741596BBF0726835D8 [3],
>> * all artifacts to be deployed to the Maven Central Repository [4],
>> * source code tag v1.2.0-rc1 [5],
>> * website pull request listing the new release [6].
>> * CI build of the tag [7].
>> 
>> The vote will be open for at least 72 hours. It is adopted by majority
>> approval, with at least 3 PMC affirmative votes.
>> 
>> Note that this release is for Opensearch v1.x
>> 
>> Thanks,
>> Release Manager
>> 
>> [1] https://issues.apache.org/jira/projects/FLINK/versions/12353812
>> [2]
>> 
>> https://dist.apache.org/repos/dist/dev/flink/flink-connector-opensearch-1.2.0-rc1
>> [3] https://dist.apache.org/repos/dist/release/flink/KEYS
>> [4] https://repository.apache.org/content/repositories/orgapacheflink-1734
>> [5]
>> 
>> https://github.com/apache/flink-connector-opensearch/releases/tag/v1.2.0-rc1
>> [6] https://github.com/apache/flink-web/pull/740
>> [7]
>> 
>> https://github.com/apache/flink-connector-opensearch/actions/runs/9102334125
>> 



Re: [VOTE] Release flink-connector-opensearch v2.0.0, release candidate #1

2024-05-22 Thread Leonard Xu


> +1 (binding)
> 
> - verified signatures
> - verified hashsums
> - built from source code with JDK 1.8 succeeded
> - checked Github release tag 
> - checked release notes
> - reviewed the web PR

Supplying more information about building from source code with JDK 1.8:

> - built from source code with JDK 1.8 succeeded
It’s correct, as we don’t activate the opensearch2 profile by default.

- built from source code with JDK 1.8 and -Popensearch2 failed
- built from source code with JDK 11 and -Popensearch2 succeeded

Best,
Leonard


> 
> 
>> 2024年5月16日 上午6:58,Andrey Redko  写道:
>> 
>> +1 (non-binding), thanks Sergey!
>> 
>> On Wed, May 15, 2024, 6:00 p.m. Sergey Nuyanzin  wrote:
>> 
>>> Hi everyone,
>>> Please review and vote on release candidate #1 for
>>> flink-connector-opensearch v2.0.0, as follows:
>>> [ ] +1, Approve the release
>>> [ ] -1, Do not approve the release (please provide specific comments)
>>> 
>>> 
>>> The complete staging area is available for your review, which includes:
>>> * JIRA release notes [1],
>>> * the official Apache source release to be deployed to dist.apache.org
>>> [2],
>>> which are signed with the key with fingerprint
>>> F7529FAE24811A5C0DF3CA741596BBF0726835D8 [3],
>>> * all artifacts to be deployed to the Maven Central Repository [4],
>>> * source code tag v2.0.0-rc1 [5],
>>> * website pull request listing the new release [6].
>>> * CI build of the tag [7].
>>> 
>>> The vote will be open for at least 72 hours. It is adopted by majority
>>> approval, with at least 3 PMC affirmative votes.
>>> 
>>> Note that this release is for Opensearch v2.x
>>> 
>>> Thanks,
>>> Release Manager
>>> 
>>> [1] https://issues.apache.org/jira/projects/FLINK/versions/12354674
>>> [2]
>>> 
>>> https://dist.apache.org/repos/dist/dev/flink/flink-connector-opensearch-2.0.0-rc1
>>> [3] https://dist.apache.org/repos/dist/release/flink/KEYS
>>> [4]
>>> https://repository.apache.org/content/repositories/orgapacheflink-1735/
>>> [5]
>>> 
>>> https://github.com/apache/flink-connector-opensearch/releases/tag/v2.0.0-rc1
>>> [6] https://github.com/apache/flink-web/pull/741
>>> [7]
>>> 
>>> https://github.com/apache/flink-connector-opensearch/actions/runs/9102980808
>>> 
> 



Re: [VOTE] Release flink-connector-opensearch v2.0.0, release candidate #1

2024-05-22 Thread Leonard Xu
+1 (binding)

- verified signatures
- verified hashsums
- built from source code with JDK 1.8 succeeded
- checked Github release tag 
- checked release notes
- reviewed the web PR

Best,
Leonard

> 2024年5月16日 上午6:58,Andrey Redko  写道:
> 
> +1 (non-binding), thanks Sergey!
> 
> On Wed, May 15, 2024, 6:00 p.m. Sergey Nuyanzin  wrote:
> 
>> Hi everyone,
>> Please review and vote on release candidate #1 for
>> flink-connector-opensearch v2.0.0, as follows:
>> [ ] +1, Approve the release
>> [ ] -1, Do not approve the release (please provide specific comments)
>> 
>> 
>> The complete staging area is available for your review, which includes:
>> * JIRA release notes [1],
>> * the official Apache source release to be deployed to dist.apache.org
>> [2],
>> which are signed with the key with fingerprint
>> F7529FAE24811A5C0DF3CA741596BBF0726835D8 [3],
>> * all artifacts to be deployed to the Maven Central Repository [4],
>> * source code tag v2.0.0-rc1 [5],
>> * website pull request listing the new release [6].
>> * CI build of the tag [7].
>> 
>> The vote will be open for at least 72 hours. It is adopted by majority
>> approval, with at least 3 PMC affirmative votes.
>> 
>> Note that this release is for Opensearch v2.x
>> 
>> Thanks,
>> Release Manager
>> 
>> [1] https://issues.apache.org/jira/projects/FLINK/versions/12354674
>> [2]
>> 
>> https://dist.apache.org/repos/dist/dev/flink/flink-connector-opensearch-2.0.0-rc1
>> [3] https://dist.apache.org/repos/dist/release/flink/KEYS
>> [4]
>> https://repository.apache.org/content/repositories/orgapacheflink-1735/
>> [5]
>> 
>> https://github.com/apache/flink-connector-opensearch/releases/tag/v2.0.0-rc1
>> [6] https://github.com/apache/flink-web/pull/741
>> [7]
>> 
>> https://github.com/apache/flink-connector-opensearch/actions/runs/9102980808
>> 



Re: [VOTE] Release flink-connector-mongodb v1.2.0, release candidate #2

2024-05-22 Thread Leonard Xu
+1 (binding)

- verified signatures
- verified hashsums
- built from source code with java 1.8 succeeded
- checked Github release tag 
- checked release notes
- reviewed the web PR

Best,
Leonard

> 2024年5月22日 下午4:16,Qingsheng Ren  写道:
> 
> +1 (binding)
> 
> - Verified checksum and signature
> - Built from source with Java 8
> - Verified source release contains no binaries
> - Verified tag exists on GitHub
> - Verified JARs on Maven repo is built by Java 8
> - Reviewed web PR
> 
> Thanks for the awesome work, Danny!
> 
> Best,
> Qingsheng
> 
> On Mon, May 20, 2024 at 2:53 PM Jiabao Sun  wrote:
>> 
>> We need more votes for this release.
>> Much appreciated for helping with this release verification.
>> 
>> Best,
>> Jiabao
>> 
>> Jiabao Sun  于2024年4月21日周日 21:35写道:
>> 
>>> +1 (non-binding)
>>> 
>>> - Validated checksum hash
>>> - Verified signature
>>> - Tag is present
>>> - Build successful with jdk8, jdk11 and jdk17
>>> - Checked the dist jar was built by jdk8
>>> - Reviewed web PR
>>> Best,
>>> Jiabao
>>> 
>>> Hang Ruan  于2024年4月21日周日 21:33写道:
>>> 
 +1 (non-binding)
 
 - Validated checksum hash
 - Verified signature
 - Verified that no binaries exist in the source archive
 - Build the source with Maven and jdk8
 - Verified web PR
 - Check that the jar is built by jdk8
 
 Best,
 Hang
 
 Ahmed Hamdy  于2024年4月18日周四 21:40写道:
 
> +1 (non-binding)
> 
> -  verified hashes and checksums
> - verified signature
> - verified source contains no binaries
> - tag exists in github
> - reviewed web PR
> 
> 
> Best Regards
> Ahmed Hamdy
> 
> 
> On Thu, 18 Apr 2024 at 11:21, Danny Cranmer 
> wrote:
> 
>> Hi everyone,
>> 
>> Please review and vote on the release candidate #2 for v1.2.0, as
> follows:
>> [ ] +1, Approve the release
>> [ ] -1, Do not approve the release (please provide specific comments)
>> 
>> This release supports Flink 1.18 and 1.19.
>> 
>> The complete staging area is available for your review, which
 includes:
>> * JIRA release notes [1],
>> * the official Apache source release to be deployed to
 dist.apache.org
>> [2],
>> which are signed with the key with fingerprint 125FD8DB [3],
>> * all artifacts to be deployed to the Maven Central Repository [4],
>> * source code tag v1.2.0-rc2 [5],
>> * website pull request listing the new release [6].
>> * CI build of tag [7].
>> 
>> The vote will be open for at least 72 hours. It is adopted by majority
>> approval, with at least 3 PMC affirmative votes.
>> 
>> Thanks,
>> Danny
>> 
>> [1]
>> 
>> 
> 
 https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12354192
>> [2]
>> 
>> 
> 
 https://dist.apache.org/repos/dist/dev/flink/flink-connector-mongodb-1.2.0-rc2
>> [3] https://dist.apache.org/repos/dist/release/flink/KEYS
>> [4]
>> 
 https://repository.apache.org/content/repositories/orgapacheflink-1719/
>> [5]
>> 
> 
 https://github.com/apache/flink-connector-mongodb/releases/tag/v1.2.0-rc2
>> [6] https://github.com/apache/flink-web/pull/735
>> [7]
>> 
> 
 https://github.com/apache/flink-connector-mongodb/actions/runs/8735987710
>> 
> 
 
>>> 



Re: [VOTE] Release flink-connector-kafka v3.2.0, release candidate #1

2024-05-22 Thread Leonard Xu
+1 (binding)

- verified signatures
- verified hashsums
- built from source code with java 1.8 succeeded
- checked Github release tag 
- reviewed the web PR
- checked the CI result, 
  minor: the link [7] you posted should be [1]
- checked release notes, 
  minor: the issue FLINK-34961 [2] should be moved to the next version


Best,
Leonard

[1] https://github.com/apache/flink-connector-kafka/actions/runs/8785158288
[2] https://issues.apache.org/jira/browse/FLINK-34961


> 2024年4月29日 上午12:34,Aleksandr Pilipenko  写道:
> 
> +1 (non-binding)
> 
> - Validated checksum
> - Verified signature
> - Checked that no binaries exist in the source archive
> - Build source
> - Verified web PR
> 
> Thanks,
> Aleksandr
> 
> On Sun, 28 Apr 2024 at 11:35, Hang Ruan  wrote:
> 
>> +1 (non-binding)
>> 
>> - Validated checksum hash
>> - Verified signature
>> - Verified that no binaries exist in the source archive
>> - Build the source with Maven and jdk8
>> - Verified web PR
>> - Check that the jar is built by jdk8
>> 
>> Best,
>> Hang
>> 
>> Ahmed Hamdy  于2024年4月24日周三 17:21写道:
>> 
>>> Thanks Danny,
>>> +1 (non-binding)
>>> 
>>> - Verified Checksums and hashes
>>> - Verified Signatures
>>> - Reviewed web PR
>>> - github tag exists
>>> - Build source
>>> 
>>> 
>>> Best Regards
>>> Ahmed Hamdy
>>> 
>>> 
>>> On Tue, 23 Apr 2024 at 03:47, Muhammet Orazov
>>> 
>>> wrote:
>>> 
 Thanks Danny, +1 (non-binding)
 
 - Checked 512 hash
 - Checked gpg signature
 - Reviewed pr
 - Built the source with JDK 11 & 8
 
 Best,
 Muhammet
 
 On 2024-04-22 13:55, Danny Cranmer wrote:
> Hi everyone,
> 
> Please review and vote on release candidate #1 for
> flink-connector-kafka
> v3.2.0, as follows:
> [ ] +1, Approve the release
> [ ] -1, Do not approve the release (please provide specific comments)
> 
> This release supports Flink 1.18 and 1.19.
> 
> The complete staging area is available for your review, which
>> includes:
> * JIRA release notes [1],
> * the official Apache source release to be deployed to
>> dist.apache.org
> [2],
> which are signed with the key with fingerprint 125FD8DB [3],
> * all artifacts to be deployed to the Maven Central Repository [4],
> * source code tag v3.2.0-rc1 [5],
> * website pull request listing the new release [6].
> * CI build of the tag [7].
> 
> The vote will be open for at least 72 hours. It is adopted by
>> majority
> approval, with at least 3 PMC affirmative votes.
> 
> Thanks,
> Danny
> 
> [1]
> 
 
>>> 
>> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12354209
> [2]
> 
 
>>> 
>> https://dist.apache.org/repos/dist/dev/flink/flink-connector-kafka-3.2.0-rc1
> [3] https://dist.apache.org/repos/dist/release/flink/KEYS
> [4]
> 
>> https://repository.apache.org/content/repositories/orgapacheflink-1723
> [5]
> 
>>> https://github.com/apache/flink-connector-kafka/releases/tag/v3.2.0-rc1
> [6] https://github.com/apache/flink-web/pull/738
> [7] https://github.com/apache/flink-connector-kafka
 
>>> 
>> 



Re: [VOTE] Release flink-connector-jdbc v3.2.0, release candidate #2

2024-05-22 Thread Leonard Xu
+1 (binding)

- verified signatures
- verified hashsums
- built from source code with java 1.8 succeeded
- checked Github release tag 
- checked release notes
- reviewed the web PR

Best,
Leonard

> 2024年4月21日 下午9:42,Hang Ruan  写道:
> 
> +1 (non-binding)
> 
> - Validated checksum hash
> - Verified signature
> - Verified that no binaries exist in the source archive
> - Build the source with Maven and jdk8
> - Verified web PR
> - Check that the jar is built by jdk8
> 
> Best,
> Hang
> 
> Ahmed Hamdy  于2024年4月18日周四 21:37写道:
> 
>> +1 (non-binding)
>> 
>> - Verified Checksums and hashes
>> - Verified Signatures
>> - No binaries in source
>> - Build source
>> - Github tag exists
>> - Reviewed Web PR
>> 
>> 
>> Best Regards
>> Ahmed Hamdy
>> 
>> 
>> On Thu, 18 Apr 2024 at 11:22, Danny Cranmer 
>> wrote:
>> 
>>> Sorry for typos:
>>> 
 Please review and vote on the release candidate #1 for the version
>> 3.2.0,
>>> as follows:
>>> Should be "release candidate #2"
>>> 
 * source code tag v3.2.0-rc1 [5],
>>> Should be "source code tag v3.2.0-rc2"
>>> 
>>> Thanks,
>>> Danny
>>> 
>>> On Thu, Apr 18, 2024 at 11:19 AM Danny Cranmer 
>>> wrote:
>>> 
 Hi everyone,
 
 Please review and vote on the release candidate #1 for the version
>> 3.2.0,
 as follows:
 [ ] +1, Approve the release
 [ ] -1, Do not approve the release (please provide specific comments)
 
 This release supports Flink 1.18 and 1.19.
 
 The complete staging area is available for your review, which includes:
 * JIRA release notes [1],
 * the official Apache source release to be deployed to dist.apache.org
 [2], which are signed with the key with fingerprint 125FD8DB [3],
 * all artifacts to be deployed to the Maven Central Repository [4],
 * source code tag v3.2.0-rc1 [5],
 * website pull request listing the new release [6].
 * CI run of tag [7].
 
 The vote will be open for at least 72 hours. It is adopted by majority
 approval, with at least 3 PMC affirmative votes.
 
 Thanks,
 Danny
 
 [1]
 
>>> 
>> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12353143
 [2]
 
>>> 
>> https://dist.apache.org/repos/dist/dev/flink/flink-connector-jdbc-3.2.0-rc2
 [3] https://dist.apache.org/repos/dist/release/flink/KEYS
 [4]
 
>> https://repository.apache.org/content/repositories/orgapacheflink-1718/
 [5]
>>> https://github.com/apache/flink-connector-jdbc/releases/tag/v3.2.0-rc2
 [6] https://github.com/apache/flink-web/pull/734
 [7]
>>> https://github.com/apache/flink-connector-jdbc/actions/runs/8736019099
 
>>> 
>> 



Re: [VOTE] Release flink-connector-gcp-pubsub v3.1.0, release candidate #1

2024-05-22 Thread Leonard Xu


+1 (binding)

- verified signatures
- verified hashsums
- built from source code with java 1.8 succeeded
- checked Github release tag 
- checked release notes
- reviewed the web PR

Best,
Leonard

> 2024年4月21日 下午9:52,Hang Ruan  写道:
> 
> +1 (non-binding)
> 
> - Validated checksum hash
> - Verified signature
> - Verified that no binaries exist in the source archive
> - Build the source with Maven and jdk8
> - Verified web PR
> - Check that the jar is built by jdk8
> 
> Best,
> Hang
> 
> Ahmed Hamdy  于2024年4月18日周四 20:01写道:
> 
>> Hi Danny,
>> +1 (non-binding)
>> 
>> -  verified hashes and checksums
>> - verified signature
>> - verified source contains no binaries
>> - tag exists in github
>> - reviewed web PR
>> 
>> Best Regards
>> Ahmed Hamdy
>> 
>> 
>> On Thu, 18 Apr 2024 at 11:32, Danny Cranmer 
>> wrote:
>> 
>>> Hi everyone,
>>> 
>>> Please review and vote on release candidate #1 for
>>> flink-connector-gcp-pubsub v3.1.0, as follows:
>>> [ ] +1, Approve the release
>>> [ ] -1, Do not approve the release (please provide specific comments)
>>> 
>>> This release supports Flink 1.18 and 1.19.
>>> 
>>> The complete staging area is available for your review, which includes:
>>> * JIRA release notes [1],
>>> * the official Apache source release to be deployed to dist.apache.org
>>> [2],
>>> which are signed with the key with fingerprint 125FD8DB [3],
>>> * all artifacts to be deployed to the Maven Central Repository [4],
>>> * source code tag v3.1.0-rc1 [5],
>>> * website pull request listing the new release [6].
>>> * CI build of the tag [7].
>>> 
>>> The vote will be open for at least 72 hours. It is adopted by majority
>>> approval, with at least 3 PMC affirmative votes.
>>> 
>>> Thanks,
>>> Danny
>>> 
>>> [1]
>>> 
>>> 
>> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12353813
>>> [2]
>>> 
>>> 
>> https://dist.apache.org/repos/dist/dev/flink/flink-connector-gcp-pubsub-3.1.0-rc1
>>> [3] https://dist.apache.org/repos/dist/release/flink/KEYS
>>> [4]
>> https://repository.apache.org/content/repositories/orgapacheflink-1720
>>> [5]
>>> 
>>> 
>> https://github.com/apache/flink-connector-gcp-pubsub/releases/tag/v3.1.0-rc1
>>> [6] https://github.com/apache/flink-web/pull/736/files
>>> [7]
>>> 
>>> 
>> https://github.com/apache/flink-connector-gcp-pubsub/actions/runs/8735952883
>>> 
>> 



Re: [VOTE] Release flink-connector-cassandra v3.2.0, release candidate #1

2024-05-22 Thread Leonard Xu
+1 (binding)

- verified signatures
- verified hashsums
- built from source code with java 1.8 succeeded
- checked Github release tag 
- checked release notes status; only one issue is left open, and it is used for release 
tracking
- reviewed the web PR

Best,
Leonard

> 2024年5月22日 下午6:10,weijie guo  写道:
> 
> +1(non-binding)
> 
> -Validated checksum hash
> -Verified signature
> -Build from source
> 
> Best regards,
> 
> Weijie
> 
> 
> Hang Ruan  于2024年5月22日周三 10:12写道:
> 
>> +1 (non-binding)
>> 
>> - Validated checksum hash
>> - Verified signature
>> - Verified that no binaries exist in the source archive
>> - Build the source with Maven and jdk8
>> - Verified web PR
>> - Check that the jar is built by jdk8
>> 
>> Best,
>> Hang
>> 
>> Muhammet Orazov  于2024年5月22日周三 04:15写道:
>> 
>>> Hey all,
>>> 
>>> Could we please get some more votes to proceed with the release?
>>> 
>>> Thanks and best,
>>> Muhammet
>>> 
>>> On 2024-04-22 13:04, Danny Cranmer wrote:
 Hi everyone,
 
 Please review and vote on release candidate #1 for
 flink-connector-cassandra v3.2.0, as follows:
 [ ] +1, Approve the release
 [ ] -1, Do not approve the release (please provide specific comments)
 
 This release supports Flink 1.18 and 1.19.
 
 The complete staging area is available for your review, which includes:
 * JIRA release notes [1],
 * the official Apache source release to be deployed to dist.apache.org
 [2],
 which are signed with the key with fingerprint 125FD8DB [3],
 * all artifacts to be deployed to the Maven Central Repository [4],
 * source code tag v3.2.0-rc1 [5],
 * website pull request listing the new release [6].
 * CI build of the tag [7].
 
 The vote will be open for at least 72 hours. It is adopted by majority
 approval, with at least 3 PMC affirmative votes.
 
 Thanks,
 Danny
 
 [1]
 
>>> 
>> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12353148
 [2]
 
>>> 
>> https://dist.apache.org/repos/dist/dev/flink/flink-connector-cassandra-3.2.0-rc1
 [3] https://dist.apache.org/repos/dist/release/flink/KEYS
 [4]
 https://repository.apache.org/content/repositories/orgapacheflink-1722
 [5]
 
>>> 
>> https://github.com/apache/flink-connector-cassandra/releases/tag/v3.2.0-rc1
 [6] https://github.com/apache/flink-web/pull/737
 [7]
 
>>> 
>> https://github.com/apache/flink-connector-cassandra/actions/runs/8784310241
>>> 
>> 



Re: [VOTE] Release flink-connector-aws v4.3.0, release candidate #2

2024-05-22 Thread Leonard Xu
+1 (binding)

- verified signatures
- verified hashsums
- built from source code with java 1.8 succeeded
- checked Github release tag 
- checked release notes
- reviewed the web PR

Best,
Leonard

> 2024年4月28日 下午11:56,Aleksandr Pilipenko  写道:
> 
> +1 (non-binding)
> 
> - Verified checksums
> - Verified signatures
> - Checked that no binaries exist in the source archive
> - Reviewed Web PR
> - Built source
> 
> Thanks,
> Aleksandr
> 
> On Mon, 22 Apr 2024 at 09:31, Ahmed Hamdy  wrote:
> 
>> Thanks Danny,
>> +1 (non-binding)
>> 
>> - Verified Checksums
>> - Verified Signatures
>> - No binaries exists in source archive
>> - Built source
>> - Reviewed Web PR
>> - Run basic Kinesis example
>> 
>> 
>> Best Regards
>> Ahmed Hamdy
>> 
>> 
>> On Sun, 21 Apr 2024 at 14:25, Hang Ruan  wrote:
>> 
>>> +1 (non-binding)
>>> 
>>> - Validated checksum hash
>>> - Verified signature
>>> - Verified that no binaries exist in the source archive
>>> - Build the source with Maven and jdk8
>>> - Verified web PR
>>> - Check that the jar is built by jdk8
>>> 
>>> Best,
>>> Hang
>>> 
>>> Danny Cranmer  于2024年4月19日周五 18:08写道:
>>> 
 Hi everyone,
 
 Please review and vote on release candidate #2 for flink-connector-aws
 v4.3.0, as follows:
 [ ] +1, Approve the release
 [ ] -1, Do not approve the release (please provide specific comments)
 
 This version supports Flink 1.18 and 1.19.
 
 The complete staging area is available for your review, which includes:
 * JIRA release notes [1],
 * the official Apache source release to be deployed to dist.apache.org
 [2],
 which are signed with the key with fingerprint 125FD8DB [3],
 * all artifacts to be deployed to the Maven Central Repository [4],
 * source code tag v4.3.0-rc2 [5],
 * website pull request listing the new release [6].
 * CI build of the tag [7].
 
 The vote will be open for at least 72 hours. It is adopted by majority
 approval, with at least 3 PMC affirmative votes.
 
 Thanks,
 Release Manager
 
 [1]
 
 
>>> 
>> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12353793
 [2]
 
>>> 
>> https://dist.apache.org/repos/dist/dev/flink/flink-connector-aws-4.3.0-rc2
 [3] https://dist.apache.org/repos/dist/release/flink/KEYS
 [4]
 
>> https://repository.apache.org/content/repositories/orgapacheflink-1721/
 [5]
>>> https://github.com/apache/flink-connector-aws/releases/tag/v4.3.0-rc2
 [6] https://github.com/apache/flink-web/pull/733
 [7]
>>> https://github.com/apache/flink-connector-aws/actions/runs/8751694197
 
>>> 
>> 



Re: [VOTE] FLIP-451: Introduce timeout configuration to AsyncSink

2024-05-22 Thread Leonard Xu


After discussing with Ahmed, the updated FLIP looks good to me.

+1(binding)


Best,
Leonard

> 2024年5月21日 下午6:12,Hong Liang  写道:
> 
> +1 (binding)
> 
> Thanks Ahmed
> 
> On Tue, May 14, 2024 at 11:51 AM David Radley 
> wrote:
> 
>> Thanks for the clarification Ahmed
>> 
>> +1 (non-binding)
>> 
>> From: Ahmed Hamdy 
>> Date: Monday, 13 May 2024 at 19:58
>> To: dev@flink.apache.org 
>> Subject: [EXTERNAL] Re: [VOTE] FLIP-451: Introduce timeout configuration
>> to AsyncSink
>> Thanks David,
>> I have replied to your question in the discussion thread.
>> Best Regards
>> Ahmed Hamdy
>> 
>> 
>> On Mon, 13 May 2024 at 16:21, David Radley 
>> wrote:
>> 
>>> Hi,
>>> I raised a question on the discussion thread, around retriable errors, as
>>> a possible alternative,
>>>  Kind regards, David.
>>> 
>>> 
>>> From: Aleksandr Pilipenko 
>>> Date: Monday, 13 May 2024 at 16:07
>>> To: dev@flink.apache.org 
>>> Subject: [EXTERNAL] Re: [VOTE] FLIP-451: Introduce timeout configuration
>>> to AsyncSink
>>> Thanks for driving this!
>>> 
>>> +1 (non-binding)
>>> 
>>> Thanks,
>>> Aleksandr
>>> 
>>> On Mon, 13 May 2024 at 14:08, 
>>> wrote:
>>> 
 Thanks Ahmed!
 
 +1 non binding
 On May 13, 2024 at 12:40 +0200, Jeyhun Karimov ,
 wrote:
> Thanks for driving this Ahmed.
> 
> +1 (non-binding)
> 
> Regards,
> Jeyhun
> 
> On Mon, May 13, 2024 at 12:37 PM Muhammet Orazov
>  wrote:
> 
>> Thanks Ahmed, +1 (non-binding)
>> 
>> Best,
>> Muhammet
>> 
>> On 2024-05-13 09:50, Ahmed Hamdy wrote:
 Hi all,
 
 Thanks for the feedback on the discussion thread[1], I would
>> like
 to
 start
 a vote on FLIP-451[2]: Introduce timeout configuration to
>>> AsyncSink
 
 The vote will be open for at least 72 hours unless there is an
 objection or
 insufficient votes.
 
 1-
>>> https://lists.apache.org/thread/ft7wcw7kyftvww25n5fm4l925tlgdfg0
 2-
 
>> 
 
>>> 
>> https://cwiki.apache.org/confluence/display/FLINK/FLIP-451%3A+Introduce+timeout+configuration+to+AsyncSink+API
 Best Regards
 Ahmed Hamdy
>> 
 
>>> 
>>> Unless otherwise stated above:
>>> 
>>> IBM United Kingdom Limited
>>> Registered in England and Wales with number 741598
>>> Registered office: PO Box 41, North Harbour, Portsmouth, Hants. PO6 3AU
>>> 
>> 
>> Unless otherwise stated above:
>> 
>> IBM United Kingdom Limited
>> Registered in England and Wales with number 741598
>> Registered office: PO Box 41, North Harbour, Portsmouth, Hants. PO6 3AU
>> 



Re: [DISCUSS] FLIP-451: Refactor Async sink API

2024-05-22 Thread Leonard Xu
Thanks Ahmed for the update, the FLIP looks good to me now.

Best,
Leonard

> 2024年5月22日 下午4:34,Ahmed Hamdy  写道:
> 
>> 
>> (1) Implicitly point a public API change is not enough, Could you add a
>> section Public Interfaces to enumerate all Public APIs that you proposed
>> and you changed?
>> It’s a standard part of a FLIP template[1].
>> 
> yes this is updated in the FLIP now.
> 
> 
> 
>> (2) About the proposed public interface ResultHandler,
>> Could you explain or show how to use the methods #completeExceptionally and
>> #retryForEntries? I didn’t find
>> detail explanation or Usage example code to understand them.
>> 
> 
> Added to the FLIP now.
> 
> 
> 
>> (3) Could you add necessary java documents for all public API changes like
>> new method AsyncSinkWriterConfiguration#setRequestTimeoutMs ? The java doc
>> of [2] is a good example.
>> 
> 
> sure, Added now.
> 
> (4) Another minor reminder AsyncSinkBase is a @PublicEvolving interface
>> too, please correct it, and please ensure the backward compatibility has
>> been considered for all public interfaces the FLIP changed.
>> 
> Done
> 
> Best Regards
> Ahmed Hamdy
> 
> 
> On Wed, 22 May 2024 at 04:16, Leonard Xu  wrote:
> 
>> Thanks for your reply, Ahmed.
>> 
>>> (2) The FLIP-451 aims to introduce a timeout configuration, but I didn’t
>>>> find the configuration in FLIP even I lookup some historical versions of
>>>> the FLIP. Did I miss some key informations?
>>>> 
>>> 
>>> Yes, I tried to implicitly point that it will be added to the existing
>>> AsyncSinkWriterConfiguration to not inflate the FLIP, but I get it might
>> be
>>> confusing. I have added the changes to the configuration classes in the
>>> FLIP to make it clearer.
>> 
>> (1) Implicitly point a public API change is not enough, Could you add a
>> section Public Interfaces to enumerate all Public APIs that you proposed
>> and you changed?
>> It’s a standard part of a FLIP template[1].
>> 
>> (2) About the proposed public interface ResultHandler,
>> Could you explain or show how to use the methods #completeExceptionally and
>> #retryForEntries? I didn’t find
>> detail explanation or Usage example code to understand them.
>> 
>> (3) Could you add necessary java documents for all public API changes like
>> new method AsyncSinkWriterConfiguration#setRequestTimeoutMs ? The java doc
>> of [2] is a good example.
>> 
>> (4) Another minor reminder AsyncSinkBase is a @PublicEvolving interface
>> too, please correct it, and please ensure the backward compatibility has
>> been considered for all public interfaces the FLIP changed.
>> 
>> 
>> Best,
>> Leonard
>> [1]https://cwiki.apache.org/confluence/display/FLINK/FLIP+Template
>> [2]
>> https://cwiki.apache.org/confluence/display/FLINK/FLIP-171%3A+Async+Sink
>> 
>> 
>>> 
>>> 
>>> On Tue, 21 May 2024 at 14:56, Leonard Xu  wrote:
>>> 
>>>> Thanks Ahmed for kicking off this discussion, sorry for jumping the
>>>> discussion late.
>>>> 
>>>> (1)I’m confused about the discuss thread name ‘FLIP-451: Refactor Async
>>>> sink API’  and FLIP title/vote thread name '
>>>> FLIP-451: Introduce timeout configuration to AsyncSink API <
>>>> 
>> https://cwiki.apache.org/confluence/display/FLINK/FLIP-451%3A+Introduce+timeout+configuration+to+AsyncSink+API
>>> ’,
>>>> they are different for me. Could you help explain the change history?
>>>> 
>>>> (2) The FLIP-451 aims to introduce a timeout configuration, but I didn’t
>>>> find the configuration in FLIP even I lookup some historical versions of
>>>> the FLIP. Did I miss some key informations?
>>>> 
>>>> (3) About the code change part, there’re some un-complete pieces in
>>>> AsyncSinkWriter for example `submitRequestEntries(List
>>>> requestEntries,);` is incorrect and `sendTime` variable I didn’t
>>>> find the place we define it and where we use it.
>>>> 
>>>> Sorry for jumping the discussion thread during vote phase again.
>>>> 
>>>> Best,
>>>> Leonard
>>>> 
>>>> 
>>>>> 2024年5月21日 下午3:49,Ahmed Hamdy  写道:
>>>>> 
>>>>> Hi Hong,
>>>>> Thanks for pointing that out, no we are not
>>>>> deprecating getFatalExceptionCons(). I have updated the FLIP
>>>>

Re: [DISCUSS] FLIP-451: Refactor Async sink API

2024-05-21 Thread Leonard Xu
Thanks for your reply, Ahmed.

> (2) The FLIP-451 aims to introduce a timeout configuration, but I didn’t
>> find the configuration in FLIP even I lookup some historical versions of
>> the FLIP. Did I miss some key informations?
>> 
> 
> Yes, I tried to implicitly point that it will be added to the existing
> AsyncSinkWriterConfiguration to not inflate the FLIP, but I get it might be
> confusing. I have added the changes to the configuration classes in the
> FLIP to make it clearer.

(1) Implicitly pointing to a public API change is not enough. Could you add a
Public Interfaces section that enumerates all public APIs you proposed and
changed?
It’s a standard part of the FLIP template[1].

(2) About the proposed public interface ResultHandler: could you explain or show
how to use the methods #completeExceptionally and #retryForEntries? I didn’t
find a detailed explanation or usage example code to understand them.
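
As an illustration of the kind of usage example being asked for here, below is a
minimal, self-contained sketch of how a handler with this shape could be driven
from a submitRequestEntries-style method. The ResultHandler interface and the
fake async client in the sketch are stand-ins written for this illustration, not
the FLIP's actual API.

```
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CompletableFuture;

// Illustrative stand-in for the handler discussed in the FLIP; the real
// interface and its exact signatures may differ.
interface ResultHandler<T> {
    void complete();
    void completeExceptionally(Exception e);
    void retryForEntries(List<T> entriesToRetry);
}

public class ResultHandlerSketch {

    // Fake async client call; a real sink would call its SDK here. Entries longer
    // than 5 characters are "rejected" so the retry path can be demonstrated.
    static CompletableFuture<List<String>> putRecords(List<String> entries) {
        List<String> failed = new ArrayList<>();
        for (String entry : entries) {
            if (entry.length() > 5) {
                failed.add(entry);
            }
        }
        return CompletableFuture.completedFuture(failed);
    }

    // Roughly how a submitRequestEntries(...) implementation could use the handler.
    static void submitRequestEntries(List<String> entries, ResultHandler<String> handler) {
        putRecords(entries)
                .whenComplete(
                        (failedEntries, error) -> {
                            if (error != null) {
                                // The whole request failed: surface a fatal error.
                                handler.completeExceptionally(new Exception(error));
                            } else if (!failedEntries.isEmpty()) {
                                // Partial failure: hand only the failed entries back for retry.
                                handler.retryForEntries(failedEntries);
                            } else {
                                // Everything was accepted: mark the in-flight request as done.
                                handler.complete();
                            }
                        });
    }

    public static void main(String[] args) {
        ResultHandler<String> handler =
                new ResultHandler<String>() {
                    @Override
                    public void complete() {
                        System.out.println("request completed");
                    }

                    @Override
                    public void completeExceptionally(Exception e) {
                        System.out.println("fatal: " + e.getMessage());
                    }

                    @Override
                    public void retryForEntries(List<String> entries) {
                        System.out.println("retrying: " + entries);
                    }
                };
        submitRequestEntries(List.of("ok", "this-entry-is-rejected"), handler);
    }
}
```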

(3) Could you add the necessary Javadoc for all public API changes, such as the
new method AsyncSinkWriterConfiguration#setRequestTimeoutMs? The Javadoc of [2]
is a good example.

(4) Another minor reminder: AsyncSinkBase is a @PublicEvolving interface too;
please correct it, and please ensure backward compatibility has been considered
for all public interfaces the FLIP changed.


Best,
Leonard
[1]https://cwiki.apache.org/confluence/display/FLINK/FLIP+Template
[2]https://cwiki.apache.org/confluence/display/FLINK/FLIP-171%3A+Async+Sink


> 
> 
> On Tue, 21 May 2024 at 14:56, Leonard Xu  wrote:
> 
>> Thanks Ahmed for kicking off this discussion, sorry for jumping the
>> discussion late.
>> 
>> (1)I’m confused about the discuss thread name ‘FLIP-451: Refactor Async
>> sink API’  and FLIP title/vote thread name '
>> FLIP-451: Introduce timeout configuration to AsyncSink API <
>> https://cwiki.apache.org/confluence/display/FLINK/FLIP-451%3A+Introduce+timeout+configuration+to+AsyncSink+API>’,
>> they are different for me. Could you help explain the change history?
>> 
>> (2) The FLIP-451 aims to introduce a timeout configuration, but I didn’t
>> find the configuration in FLIP even I lookup some historical versions of
>> the FLIP. Did I miss some key informations?
>> 
>> (3) About the code change part, there’re some un-complete pieces in
>> AsyncSinkWriter for example `submitRequestEntries(List
>> requestEntries,);` is incorrect and `sendTime` variable I didn’t
>> find the place we define it and where we use it.
>> 
>> Sorry for jumping the discussion thread during vote phase again.
>> 
>> Best,
>> Leonard
>> 
>> 
>>> 2024年5月21日 下午3:49,Ahmed Hamdy  写道:
>>> 
>>> Hi Hong,
>>> Thanks for pointing that out, no we are not
>>> deprecating getFatalExceptionCons(). I have updated the FLIP
>>> Best Regards
>>> Ahmed Hamdy
>>> 
>>> 
>>> On Mon, 20 May 2024 at 15:40, Hong Liang  wrote:
>>> 
>>>> Hi Ahmed,
>>>> Thanks for putting this together! Should we still be marking
>>>> getFatalExceptionCons() as @Deprecated in this FLIP, if we are not
>>>> providing a replacement?
>>>> 
>>>> Regards,
>>>> Hong
>>>> 
>>>> On Mon, May 13, 2024 at 7:58 PM Ahmed Hamdy 
>> wrote:
>>>> 
>>>>> Hi David,
>>>>> yes there error classification was initially left to sink implementers
>> to
>>>>> handle while we provided utilities to classify[1] and bubble up[2]
>> fatal
>>>>> exceptions to avoid retrying them.
>>>>> Additionally some sink implementations provide an option to short
>> circuit
>>>>> the failures by exposing a `failOnError` flag as in
>>>> KinesisStreamsSink[3],
>>>>> however this FLIP scope doesn't include any changes for retry
>> mechanisms.
>>>>> 
>>>>> 1-
>>>>> 
>>>>> 
>>>> 
>> https://github.com/apache/flink/blob/015867803ff0c128b1c67064c41f37ca0731ed86/flink-connectors/flink-connector-base/src/main/java/org/apache/flink/connector/base/sink/throwable/FatalExceptionClassifier.java#L32
>>>>> 2-
>>>>> 
>>>>> 
>>>> 
>> https://github.com/apache/flink/blob/015867803ff0c128b1c67064c41f37ca0731ed86/flink-connectors/flink-connector-base/src/main/java/org/apache/flink/connector/base/sink/writer/AsyncSinkWriter.java#L533
>>>>> 3-
>>>>> 
>>>>> 
>>>> 
>> https://github.com/apache/flink-connector-aws/blob/c6e0abb65a0e51b40dd218b890a111886fbf797f/flink-connector-

Re: [DISCUSS] Add a JDBC Sink Plugin to Flink-CDC-Pipeline

2024-05-21 Thread Leonard Xu
Thanks Jerry for kicking off this thread. The idea makes sense to me: a JDBC sink
is something users need, and the Flink CDC project should support it soon.

Could you share your design doc (FLIP) first [1]? Then we can continue the design
discussion.

Please feel free to ping me if you have any concerns about FLIP process or 
Flink CDC design part.

Best,
Leonard
[1] https://cwiki.apache.org/confluence/display/FLINK/FLIP+Template 


> 2024年5月15日 下午3:06,Jerry  写道:
> 
> Hi all
> My name is ZhengjunZhou, a user and developer of Flink CDC. In my recent
> projects, I realized that we could enhance the capabilities of
> Flink-CDC-Pipeline by introducing a JDBC Sink plugin, enabling FlinkCDC to
> directly output change data capture (CDC) to various JDBC-supported
> database systems.
> 
> Currently, while FlinkCDC offers support for a wide range of data sources,
> there is no direct solution for sinks, especially for common relational
> databases. I believe that adding a JDBC Sink plugin will significantly
> boost its applicability in data integration scenarios.
> 
> Specifically, this plugin would allow users to configure database
> connections and stream data directly to SQL databases via the standard JDBC
> interface. This could be used for data migration tasks as well as real-time
> data synchronization.
> 
> To further discuss this proposal and gather feedback from the community, I
> have prepared a preliminary design draft and hope to discuss it in detail
> in the upcoming community meeting. Please consider the potential value of
> this feature and provide your insights and guidance.
> 
> Thank you for your time and consideration. I look forward to your active
> feedback and further discussion.
> 
> [1] https://github.com/apache/flink-connector-jdbc



Re: [DISCUSS] FLIP-451: Refactor Async sink API

2024-05-21 Thread Leonard Xu
Thanks Ahmed for kicking off this discussion; sorry for jumping into the
discussion late.

(1) I’m confused about the discussion thread name ‘FLIP-451: Refactor Async sink
API’ and the FLIP title/vote thread name 'FLIP-451: Introduce timeout
configuration to AsyncSink API'; they are different to me. Could you help explain
the change history?

(2) FLIP-451 aims to introduce a timeout configuration, but I didn’t find the
configuration in the FLIP, even after looking up some historical versions of it.
Did I miss some key information?

(3) Regarding the code change part, there are some incomplete pieces in
AsyncSinkWriter: for example, `submitRequestEntries(List requestEntries,);` is
incorrect, and for the `sendTime` variable I didn’t find where we define it or
where we use it.

Sorry for jumping into the discussion thread during the vote phase again.

Best,
Leonard


> 2024年5月21日 下午3:49,Ahmed Hamdy  写道:
> 
> Hi Hong,
> Thanks for pointing that out, no we are not
> deprecating getFatalExceptionCons(). I have updated the FLIP
> Best Regards
> Ahmed Hamdy
> 
> 
> On Mon, 20 May 2024 at 15:40, Hong Liang  wrote:
> 
>> Hi Ahmed,
>> Thanks for putting this together! Should we still be marking
>> getFatalExceptionCons() as @Deprecated in this FLIP, if we are not
>> providing a replacement?
>> 
>> Regards,
>> Hong
>> 
>> On Mon, May 13, 2024 at 7:58 PM Ahmed Hamdy  wrote:
>> 
>>> Hi David,
>>> yes there error classification was initially left to sink implementers to
>>> handle while we provided utilities to classify[1] and bubble up[2] fatal
>>> exceptions to avoid retrying them.
>>> Additionally some sink implementations provide an option to short circuit
>>> the failures by exposing a `failOnError` flag as in
>> KinesisStreamsSink[3],
>>> however this FLIP scope doesn't include any changes for retry mechanisms.
>>> 
>>> 1-
>>> 
>>> 
>> https://github.com/apache/flink/blob/015867803ff0c128b1c67064c41f37ca0731ed86/flink-connectors/flink-connector-base/src/main/java/org/apache/flink/connector/base/sink/throwable/FatalExceptionClassifier.java#L32
>>> 2-
>>> 
>>> 
>> https://github.com/apache/flink/blob/015867803ff0c128b1c67064c41f37ca0731ed86/flink-connectors/flink-connector-base/src/main/java/org/apache/flink/connector/base/sink/writer/AsyncSinkWriter.java#L533
>>> 3-
>>> 
>>> 
>> https://github.com/apache/flink-connector-aws/blob/c6e0abb65a0e51b40dd218b890a111886fbf797f/flink-connector-aws/flink-connector-aws-kinesis-streams/src/main/java/org/apache/flink/connector/kinesis/sink/KinesisStreamsSinkWriter.java#L106
>>> 
>>> Best Regards
>>> Ahmed Hamdy
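
To make the retriable-versus-fatal distinction raised in this sub-thread
concrete, here is a small, self-contained sketch of the decision an async sink
writer has to make for a failed request. It deliberately does not use Flink's own
FatalExceptionClassifier utilities linked above; the predicate and the retry
budget are illustrative assumptions only.

```
import java.io.IOException;
import java.util.List;
import java.util.function.Predicate;

public class RetryDecisionSketch {

    // Assumption for illustration: transient I/O problems are retriable,
    // everything else is treated as fatal.
    static final Predicate<Throwable> IS_RETRIABLE = t -> t instanceof IOException;

    static <T> void handleFailedRequest(
            Throwable error, List<T> failedEntries, int attemptsSoFar, int maxRetries) {
        if (IS_RETRIABLE.test(error) && attemptsSoFar < maxRetries) {
            // A real writer would re-enqueue the failed entries here.
            System.out.println(
                    "Retrying " + failedEntries.size() + " entries, attempt " + (attemptsSoFar + 1));
        } else {
            // A real writer would surface the exception and fail the job here.
            System.out.println("Fatal (or retries exhausted): " + error);
        }
    }

    public static void main(String[] args) {
        handleFailedRequest(new IOException("connection reset"), List.of("a", "b"), 0, 3);
        handleFailedRequest(new IllegalArgumentException("malformed record"), List.of("c"), 0, 3);
    }
}
```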
>>> 
>>> 
>>> On Mon, 13 May 2024 at 16:20, David Radley 
>>> wrote:
>>> 
 Hi,
 I wonder if the way that the async request fails could be a retriable
>> or
 non-retriable error, so it would retry only for retriable (transient)
 errors (like IOExceptions) . I see some talk on the internet around
 retriable SQL errors.
 If this was the case then we may need configuration to limit the
>> number
 of retries of retriable errors.
Kind regards, David
 
 
 From: Muhammet Orazov 
 Date: Monday, 13 May 2024 at 10:30
 To: dev@flink.apache.org 
 Subject: [EXTERNAL] Re: [DISCUSS] FLIP-451: Refactor Async sink API
 Great, thanks for clarifying!
 
 Best,
 Muhammet
 
 
 On 2024-05-06 13:40, Ahmed Hamdy wrote:
> Hi Muhammet,
> Thanks for the feedback.
> 
>> Could you please add more here why it is harder? Would the
>> `completeExceptionally`
>> method be related to it? Maybe you can add usage example for it
>> also.
>> 
> 
> this is mainly due to the current implementation of fatal exception
> failures which depends on base `getFatalExceptionConsumer` method
>> that
> is
> decoupled from the actual called method `submitRequestEntries`, Since
> this
> is now not the primary concern of the FLIP, I have removed it from
>> the
> motivation so that the scope is defined around introducing the
>> timeout
> configuration.
> 
>> Should we add a list of possible connectors that this FLIP would
>> improve?
> 
> Good call, I have added under migration plan.
> 
> Best Regards
> Ahmed Hamdy
> 
> 
> On Mon, 6 May 2024 at 08:49, Muhammet Orazov 
> wrote:
> 
>> Hey Ahmed,
>> 
>> Thanks for the FLIP! +1 (non-binding)
>> 
>>> Additionally the current interface for passing fatal exceptions
>> and
>>> retrying records relies on java consumers which makes it harder to
>>> understand.
>> 
>> Could you please add more here why it is harder? Would the
>> `completeExceptionally`
>> method be related to it? Maybe you can add usage example for it
>> also.
>> 
>>> we should proceed by adding support in all supporting connector
>>> repos.
>> 
>> Should we add a list of possible connectors that this FLIP would improve?

Re: [DISCUSS] FLIP-XXX: Improve JDBC connector extensibility for Table API

2024-05-21 Thread Leonard Xu
Thanks Lorenzo for kicking off this discussion.

+1 for the motivation, and I left some comments as follows:

(1) Please add API annotations for all proposed public interfaces.

(2) JdbcConnectionOptionsParser/JdbcReadOptionsParser/JdbcExecutionOptionsParser
offer two methods, validate and parse, which is a little strange to me since your
POC code calls them at the same time. Could we perform the validation inside the
parse method? A Parser interface that offers just a parse method makes sense to
me. If you want to run connection validation during the job compile phase, it
would be better to introduce a dedicated Validator for that.

(3) The above methods return InternalJdbcConnectionOptions with fixed members. If
the example DB requires extra connection options such as accessKey, accessId,
etc., we need to change InternalJdbcConnectionOptions as well; how do we provide
extensibility for that?
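
A rough sketch of the shape suggested in (2) and (3) above: a parser that
validates as part of parsing, plus an options holder that carries
dialect-specific extras (for example accessKey/accessId) in a free-form map
instead of fixed fields. All class and method names here are invented for
illustration; they are not the FLIP's proposed classes.

```
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

public class JdbcOptionsParserSketch {

    // Options holder with a pass-through map for dialect-specific settings.
    record ParsedConnectionOptions(String url, String username, Map<String, String> extraOptions) {}

    // Parsing validates as it goes; there is no separate validate() call.
    interface ConnectionOptionsParser {
        ParsedConnectionOptions parse(Map<String, String> tableOptions);
    }

    static final Set<String> CORE_KEYS = Set.of("url", "username", "password");

    static final ConnectionOptionsParser DEFAULT_PARSER =
            tableOptions -> {
                String url = tableOptions.get("url");
                if (url == null || url.isBlank()) {
                    throw new IllegalArgumentException("'url' is required");
                }
                String username = tableOptions.getOrDefault("username", "");
                // Anything the core options don't know about (accessKey, accessId, ...)
                // is kept and handed to the dialect untouched.
                Map<String, String> extras = new HashMap<>(tableOptions);
                extras.keySet().removeAll(CORE_KEYS);
                return new ParsedConnectionOptions(url, username, extras);
            };

    public static void main(String[] args) {
        ParsedConnectionOptions parsed =
                DEFAULT_PARSER.parse(
                        Map.of("url", "jdbc:example://host:1234/db", "accessKey", "ak-123"));
        System.out.println(parsed);
    }
}
```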

Best,
Leonard


> 2024年5月15日 下午10:17,Ahmed Hamdy  写道:
> 
> Hi Lorenzo,
> This seems like a very useful addition.
> +1 (non-binding) from my side. I echo Jeyhun's question about backward
> compatibility as it is not mentioned in the FLIP.
> Best Regards
> Ahmed Hamdy
> 
> 
> On Wed, 15 May 2024 at 08:12,  wrote:
> 
>> Hello Muhammet and Jeyhun!
>> Thanks for your comments!
>> 
>> @Jeyhun:
>> 
>>> Could you please elaborate more on how the new approach will be backwards
>> compatible?
>> 
>> In the FLIP I provide how the current Factories in JDBC would be changed
>> with this refactor, do you mean something different? Can you be more
>> specific with your request?
>> On May 14, 2024 at 12:32 +0200, Jeyhun Karimov ,
>> wrote:
>>> Hi Lorenzo,
>>> 
>>> Thanks for driving this FLIP. +1 for it.
>>> 
>>> Could you please elaborate more on how the new approach will be backwards
>>> compatible?
>>> 
>>> Regards,
>>> Jeyhun
>>> 
>>> On Tue, May 14, 2024 at 10:00 AM Muhammet Orazov
>>>  wrote:
>>> 
 Hey Lorenzo,
 
 Thanks for driving this FLIP! +1
 
 It will improve the user experience of using JDBC based
 connectors and help developers to build with different drivers.
 
 Best,
 Muhammet
 
 On 2024-05-13 10:20, lorenzo.affe...@ververica.com.INVALID wrote:
>> Hello dev!
>> 
>> I want to share a draft of my FLIP to refactor the JDBC connector
>> to
>> improve its extensibility [1].
>> The goal is to allow implementers to write new connectors on top
>> of the
>> JDBC one for Table API with clean and maintainable code.
>> 
>> Any feedback from the community is more and welcome.
>> 
>> [1]
>> 
 
>> https://docs.google.com/document/d/1kl_AikMlqPUI-LNiPBraAFVZDRg1LF4bn6uiNtR4dlY/edit?usp=sharing
 
>> 



Re: [VOTE] FLIP-449: Reorganization of flink-connector-jdbc

2024-05-21 Thread Leonard Xu
+1(binding),  thanks Joao Boto for driving this FLIP.

Best,
Leonard

> 2024年5月17日 下午4:34,Ahmed Hamdy  写道:
> 
> Hi all,
> +1 (non-binding)
> Best Regards
> Ahmed Hamdy
> 
> 
> On Fri, 17 May 2024 at 02:13, Jiabao Sun  wrote:
> 
>> Thanks for driving this proposal!
>> 
>> +1 (binding)
>> 
>> Best,
>> Jiabao
>> 
>> 
>> On 2024/05/10 22:18:04 Jeyhun Karimov wrote:
>>> Thanks for driving this!
>>> 
>>> +1 (non-binding)
>>> 
>>> Regards,
>>> Jeyhun
>>> 
>>> On Fri, May 10, 2024 at 12:50 PM Muhammet Orazov
>>>  wrote:
>>> 
 Thanks João for your efforts and driving this!
 
 +1 (non-binding)
 
 Best,
 Muhammet
 
 On 2024-05-09 12:01, Joao Boto wrote:
> Hi everyone,
> 
> Thanks for all the feedback, I'd like to start a vote on the
>> FLIP-449:
> Reorganization of flink-connector-jdbc [1].
> The discussion thread is here [2].
> 
> The vote will be open for at least 72 hours unless there is an
> objection or
> insufficient votes.
> 
> [1]
> 
 
>> https://cwiki.apache.org/confluence/display/FLINK/FLIP-449%3A+Reorganization+of+flink-connector-jdbc
> [2] https://lists.apache.org/thread/jc1yvvo35xwqzlxl5mj77qw3hq6f5sgr
> 
> Best
> Joao Boto
 
>>> 
>> 



Re: [VOTE] FLIP-453: Promote Unified Sink API V2 to Public and Deprecate SinkFunction

2024-05-17 Thread Leonard Xu
+1(binding)

Best,
Leonard

> 2024年5月17日 下午5:40,Hang Ruan  写道:
> 
> +1(non-binding)
> 
> Best,
> Hang
> 
> Yuepeng Pan  于2024年5月17日周五 16:15写道:
> 
>> +1(non-binding)
>> 
>> 
>> Best,
>> Yuepeng Pan
>> 
>> 
>> At 2024-05-15 21:09:04, "Jing Ge"  wrote:
>>> +1(binding) Thanks Martijn!
>>> 
>>> Best regards,
>>> Jing
>>> 
>>> On Wed, May 15, 2024 at 7:00 PM Muhammet Orazov
>>>  wrote:
>>> 
 Thanks Martijn driving this! +1 (non-binding)
 
 Best,
 Muhammet
 
 On 2024-05-14 06:43, Martijn Visser wrote:
> Hi everyone,
> 
> With no more discussions being open in the thread [1] I would like to
> start
> a vote on FLIP-453: Promote Unified Sink API V2 to Public and
>> Deprecate
> SinkFunction [2]
> 
> The vote will be open for at least 72 hours unless there is an
> objection or
> insufficient votes.
> 
> Best regards,
> 
> Martijn
> 
> [1] https://lists.apache.org/thread/hod6bg421bzwhbfv60lwsck7r81dvo59
> [2]
> 
 
>> https://cwiki.apache.org/confluence/display/FLINK/FLIP-453%3A+Promote+Unified+Sink+API+V2+to+Public+and+Deprecate+SinkFunction
 
>> 



Re: [ANNOUNCE] Apache Flink CDC 3.1.0 released

2024-05-17 Thread Leonard Xu
Congratulations!

Thanks Qingsheng for the great work, and thanks to all contributors involved!

Best,
Leonard


> 2024年5月17日 下午5:32,Qingsheng Ren  写道:
> 
> The Apache Flink community is very happy to announce the release of
> Apache Flink CDC 3.1.0.
> 
> Apache Flink CDC is a distributed data integration tool for real time
> data and batch data, bringing the simplicity and elegance of data
> integration via YAML to describe the data movement and transformation
> in a data pipeline.
> 
> Please check out the release blog post for an overview of the release:
> https://flink.apache.org/2024/05/17/apache-flink-cdc-3.1.0-release-announcement/
> 
> The release is available for download at:
> https://flink.apache.org/downloads.html
> 
> Maven artifacts for Flink CDC can be found at:
> https://search.maven.org/search?q=g:org.apache.flink%20cdc
> 
> The full release notes are available in Jira:
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12354387
> 
> We would like to thank all contributors of the Apache Flink community
> who made this release possible!
> 
> Regards,
> Qingsheng Ren



Re: [VOTE] Apache Flink CDC Release 3.1.0, release candidate #3

2024-05-14 Thread Leonard Xu
+1 (binding)

- verified signatures
- verified hashsums
- checked Github release tag 
- checked release notes
- Run pipeline from MySQL to StarRocks with fields projection, the result is 
expected
- Run pipeline from MySQL to StarRocks with filter, the result is expected
- reviewed Jira issues for cdc-3.1.0, and closed an invalid issue with the
incorrect version 3.1.0
- reviewed the web PR and left two minor comments 

Best,
Leonard

> 2024年5月14日 下午6:07,Yanquan Lv  写道:
> 
> +1 (non-binding)
> - Validated checksum hash
> - Build the source with Maven and jdk8
> - Verified web PR
> - Check that the jar is built by jdk8
> - Check synchronizing from mysql to paimon
> - Check synchronizing from mysql to kafka
> 
> Hang Ruan  于2024年5月13日周一 13:55写道:
> 
>> +1 (non-binding)
>> 
>> - Validated checksum hash
>> - Verified signature
>> - Verified that no binaries exist in the source archive
>> - Build the source with Maven and jdk8
>> - Verified web PR
>> - Check that the jar is built by jdk8
>> - Check synchronizing schemas and data from mysql to starrocks following
>> the quickstart
>> 
>> Best,
>> Hang
>> 
>> Qingsheng Ren  于2024年5月11日周六 10:10写道:
>> 
>>> Hi everyone,
>>> 
>>> Please review and vote on the release candidate #3 for the version 3.1.0
>> of
>>> Apache Flink CDC, as follows:
>>> [ ] +1, Approve the release
>>> [ ] -1, Do not approve the release (please provide specific comments)
>>> 
>>> **Release Overview**
>>> 
>>> As an overview, the release consists of the following:
>>> a) Flink CDC source release to be deployed to dist.apache.org
>>> b) Maven artifacts to be deployed to the Maven Central Repository
>>> 
>>> **Staging Areas to Review**
>>> 
>>> The staging areas containing the above mentioned artifacts are as
>> follows,
>>> for your review:
>>> * All artifacts for a) can be found in the corresponding dev repository
>> at
>>> dist.apache.org [1], which are signed with the key with fingerprint
>>> A1BD477F79D036D2C30CA7DBCA8AEEC2F6EB040B [2]
>>> * All artifacts for b) can be found at the Apache Nexus Repository [3]
>>> 
>>> Other links for your review:
>>> * JIRA release notes [4]
>>> * Source code tag "release-3.1.0-rc3" with commit hash
>>> 5452f30b704942d0ede64ff3d4c8699d39c63863 [5]
>>> * PR for release announcement blog post of Flink CDC 3.1.0 in flink-web
>> [6]
>>> 
>>> **Vote Duration**
>>> 
>>> The voting time will run for at least 72 hours, adopted by majority
>>> approval with at least 3 PMC affirmative votes.
>>> 
>>> Thanks,
>>> Qingsheng Ren
>>> 
>>> [1] https://dist.apache.org/repos/dist/dev/flink/flink-cdc-3.1.0-rc3/
>>> [2] https://dist.apache.org/repos/dist/release/flink/KEYS
>>> [3]
>> https://repository.apache.org/content/repositories/orgapacheflink-1733
>>> [4]
>>> 
>>> 
>> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12354387
>>> [5] https://github.com/apache/flink-cdc/releases/tag/release-3.1.0-rc3
>>> [6] https://github.com/apache/flink-web/pull/739
>>> 
>> 



Re: [RESULT][VOTE] FLIP-454: New Apicurio Avro format

2024-05-08 Thread Leonard Xu
Thanks David for driving the FLIP forward, but we need 3 +1 (binding) votes
according to the Flink Bylaws [1] before the community can accept it.


Best,
Leonard
[1] https://cwiki.apache.org/confluence/display/FLINK/Flink+Bylaws

> 2024年5月8日 下午11:05,David Radley  写道:
> 
> Hi everyone,
> I am happy to say that FLIP-454: New Apicurio Avro format [1] has been 
> accepted and voted through this thread [2].
> 
> The proposal has been accepted with 4 approving votes and there
> are no vetos:
> 
> - Ahmed Hamdy (non-binding)
> - Jeyhun Karimov (non-binding)
> - Mark Nuttall (non-binding)
> - Nic Townsend (non-binding)
> 
> Martijn:
> Please could you update the Flip with:
> - the voting thread link
> - the accepted status
> - the Jira number (https://issues.apache.org/jira/browse/FLINK-35311).
> As the involved committer, are you willing to assign me the Jira to work on 
> and merge once you approve the changes?
> 
> [1] 
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-454%3A+New+Apicurio+Avro+format
> [2] https://lists.apache.org/list?dev@flink.apache.org:lte=1M:apicurio
> 
> Thanks to all involved.
> 
> Kind regards,
> David
> 
> Unless otherwise stated above:
> 
> IBM United Kingdom Limited
> Registered in England and Wales with number 741598
> Registered office: PO Box 41, North Harbour, Portsmouth, Hants. PO6 3AU



Re: [DISCUSS] Flink CDC 3.2 Release Planning

2024-05-08 Thread Leonard Xu
+1 for the proposed code freeze date and RM candidate.

Best,
Leonard

> 2024年5月8日 下午10:27,gongzhongqiang  写道:
> 
> Hi Qingsheng
> 
> Thank you for driving the release.
> Agree with the goal and I'm willing to help.
> 
> Best,
> Zhongqiang Gong
> 
> Qingsheng Ren  于2024年5月8日周三 14:22写道:
> 
>> Hi devs,
>> 
>> As we are in the midst of the release voting process for Flink CDC 3.1.0, I
>> think it's a good time to kick off the upcoming Flink CDC 3.2 release
>> cycle.
>> 
>> In this release cycle I would like to focus on the stability of Flink CDC,
>> especially for the newly introduced YAML-based data integration
>> framework. To ensure we can iterate and improve swiftly, I propose to make
>> 3.2 a relatively short release cycle, targeting a feature freeze by May 24,
>> 2024.
>> 
>> For developers that are interested in participating and contributing new
>> features in this release cycle, please feel free to list your planning
>> features in the wiki page [1].
>> 
>> I'm happy to volunteer as a release manager and of course open to work
>> together with someone on this.
>> 
>> What do you think?
>> 
>> Best,
>> Qingsheng
>> 
>> [1]
>> https://cwiki.apache.org/confluence/display/FLINK/Flink+CDC+3.2+Release
>> 



Re: [DISCUSS] FLIP-453: Promote Unified Sink API V2 to Public and Deprecate SinkFunction

2024-05-06 Thread Leonard Xu
+1 from my side, thanks Martijn for the effort.

Best,
Leonard

> 2024年5月4日 下午7:41,Ahmed Hamdy  写道:
> 
> Hi Martijn
> Thanks for the proposal +1 from me.
> Should this change take place in 1.20, what are the planned release steps
> for connectors that only offer a deprecated interface in this case (i.e.
> RabbitMQ, Cassandra, pusbub, Hbase)? Are we going to refrain from releases
> that support 1.20+ till the blockers are implemented?
> Best Regards
> Ahmed Hamdy
> 
> 
> On Fri, 3 May 2024 at 14:32, Péter Váry  wrote:
> 
>>> With regards to FLINK-35149, the fix version indicates a change at Flink
>> CDC; is that indeed correct, or does it require a change in the SinkV2
>> interface?
>> 
>> The fix doesn't need change in SinkV2, so we are good there.
>> The issue is that the new SinkV2 SupportsCommitter/SupportsPreWriteTopology
>> doesn't work with the CDC yet.
>> 
>> Martijn Visser  ezt írta (időpont: 2024. máj.
>> 3.,
>> P, 14:06):
>> 
>>> Hi Ferenc,
>>> 
>>> You're right, 1.20 it is :)
>>> 
>>> I've assigned the HBase one to you!
>>> 
>>> Thanks,
>>> 
>>> Martijn
>>> 
>>> On Fri, May 3, 2024 at 1:55 PM Ferenc Csaky 
>>> wrote:
>>> 
 Hi Martijn,
 
 +1 for the proposal.
 
> targeted for Flink 1.19
 
 I guess you meant Flink 1.20 here.
 
 Also, I volunteer to take updating the HBase sink, feel free to assign
 that task to me.
 
 Best,
 Ferenc
 
 
 
 
 On Friday, May 3rd, 2024 at 10:20, Martijn Visser <
 martijnvis...@apache.org> wrote:
 
> 
> 
> Hi Peter,
> 
> I'll add it for completeness, thanks!
> With regards to FLINK-35149, the fix version indicates a change at
>>> Flink
> CDC; is that indeed correct, or does it require a change in the
>> SinkV2
> interface?
> 
> Best regards,
> 
> Martijn
> 
> 
> On Fri, May 3, 2024 at 7:47 AM Péter Váry
>> peter.vary.apa...@gmail.com
> 
> wrote:
> 
>> Hi Martijn,
>> 
>> We might want to add FLIP-371 [1] to the list. (Or we aim only for
 higher
>> level FLIPs?)
>> 
>> We are in the process of using the new API in Iceberg connector
>> [2] -
 so
>> far, so good.
>> 
>> I know of one minor known issue about the sink [3], which should be
 ready
>> for the release.
>> 
>> All-in-all, I think we are in good shape, and we could move forward
 with
>> the promotion.
>> 
>> Thanks,
>> Peter
>> 
>> [1] -
>> 
>> 
 
>>> 
>> https://cwiki.apache.org/confluence/plugins/servlet/mobile?contentId=263430387
>> [2] - https://github.com/apache/iceberg/pull/10179
>> [3] - https://issues.apache.org/jira/browse/FLINK-35149
>> 
>> On Thu, May 2, 2024, 09:47 Muhammet Orazov
 mor+fl...@morazow.com.invalid
>> wrote:
>> 
>>> Got it, thanks!
>>> 
>>> On 2024-05-02 06:53, Martijn Visser wrote:
>>> 
 Hi Muhammet,
 
 Thanks for joining the discussion! The changes in this FLIP
>> would
 be
 targeted for Flink 1.19, since it's only a matter of changing
>> the
 annotation.
 
 Best regards,
 
 Martijn
 
 On Thu, May 2, 2024 at 7:26 AM Muhammet Orazov
 mor+fl...@morazow.com
 wrote:
 
> Hello Martijn,
> 
> Thanks for the FLIP and detailed history of changes, +1.
> 
> Would FLIP changes target for 2.0? I think it would be good
> to have clear APIs on 2.0 release.
> 
> Best,
> Muhammet
> 
> On 2024-05-01 15:30, Martijn Visser wrote:
> 
>> Hi everyone,
>> 
>> I would like to start a discussion on FLIP-453: Promote
 Unified Sink
>> API V2
>> to Public and Deprecate SinkFunction
>> https://cwiki.apache.org/confluence/x/rIobEg
>> 
>> This FLIP proposes to promote the Unified Sink API V2 from
>> PublicEvolving
>> to Public and to mark the SinkFunction as Deprecated.
>> 
>> I'm looking forward to your thoughts.
>> 
>> Best regards,
>> 
>> Martijn
 
>>> 
>> 



Re: [VOTE] FLIP-436: Introduce Catalog-related Syntax

2024-04-26 Thread Leonard Xu
+1 for the new layout; it's a minor but good improvement.

Best,
Leonard
 

> 2024年4月26日 下午2:03,Jark Wu  写道:
> 
> Thanks for driving this, Jane and Yubin.
> 
> +1. The new layout looks good to me.
> 
> Best,
> Jark
> 
> On Fri, 26 Apr 2024 at 13:57, Jane Chan  wrote:
> 
>> Hi Yubin,
>> 
>> Thanks for your effort. +1 with the display layout change (binding).
>> 
>> Best,
>> Jane
>> 
>> On Wed, Apr 24, 2024 at 5:28 PM Ahmed Hamdy  wrote:
>> 
>>> Hi, +1 (non-binding)
>>> Best Regards
>>> Ahmed Hamdy
>>> 
>>> 
>>> On Wed, 24 Apr 2024 at 09:58, Yubin Li  wrote:
>>> 
 Hi everyone,
 
 During the implementation of the "describe catalog" syntax, it was
 found that the original output style needed to be improved.
 ```
 desc catalog extended cat2;
 
 
>>> 
>> +--+-+
 | catalog_description_item |
 catalog_description_value |
 
 
>>> 
>> +--+-+
 | Name |
 cat2 |
 | Type |
 generic_in_memory |
 |  Comment |
  |
 |   Properties | ('default-database','db'),
 ('type','generic_in_memory') |
 
 
>>> 
>> +--+-+
 4 rows in set
 ```
 After offline discussions with Jane Chan and Jark Wu, we suggest
 improving it to the following form:
 ```
 desc catalog extended cat2;
 +-+---+
 |   info name |info value |
 +-+---+
 |name |  cat2 |
 |type | generic_in_memory |
 | comment |   |
 | option:default-database |db |
 +-+---+
 4 rows in set
 ```
 
 For the following reasons:
 1. The title should be consistent with engines such as Databricks for
 easy understanding, and it should also be consistent with Flink's own
 naming style. Therefore, the title adopts "info name", "info value",
 and the key name should be unified in lowercase, so "Name" is replaced
 by "name".
 Note: Databricks output style [1] as follows:
 ```
> DESCRIBE CATALOG main;
 info_name info_value
   
 Catalog Name  main
  Comment   Main catalog (auto-created)
Owner metastore-admin-users
 Catalog Type   Regular
 ```
 2. There may be many attributes of the catalog, and it is very poor in
 readability when displayed in one line. It should be expanded into
 multiple lines, and the key name is prefixed with "option:" to
 identify that this is an attribute row. And since `type` is an
 important information of the catalog, even if `extended` is not
 specified, it should also be displayed, and correspondingly,
 "option:type" should be removed to avoid redundancy.
 
 WDYT? Looking forward to your reply!
 
 [1]
 
>>> 
>> https://learn.microsoft.com/zh-tw/azure/databricks/sql/language-manual/sql-ref-syntax-aux-describe-catalog
 
 Best,
 Yubin
 
 On Wed, Mar 20, 2024 at 2:15 PM Benchao Li 
>> wrote:
> 
> +1 (binding)
> 
> gongzhongqiang  于2024年3月20日周三 11:40写道:
>> 
>> +1 (non-binding)
>> 
>> Best,
>> Zhongqiang Gong
>> 
>> Yubin Li  于2024年3月19日周二 18:03写道:
>> 
>>> Hi everyone,
>>> 
>>> Thanks for all the feedback, I'd like to start a vote on the
 FLIP-436:
>>> Introduce Catalog-related Syntax [1]. The discussion thread is
>> here
>>> [2].
>>> 
>>> The vote will be open for at least 72 hours unless there is an
>>> objection or insufficient votes.
>>> 
>>> [1]
>>> 
 
>>> 
>> https://cwiki.apache.org/confluence/display/FLINK/FLIP-436%3A+Introduce+Catalog-related+Syntax
>>> [2]
>>> https://lists.apache.org/thread/10k1bjb4sngyjwhmfqfky28lyoo7sv0z
>>> 
>>> Best regards,
>>> Yubin
>>> 
> 
> 
> 
> --
> 
> Best,
> Benchao Li
 
>>> 
>> 



Re: [VOTE] FLIP-435: Introduce a New Materialized Table for Simplifying Data Pipelines

2024-04-17 Thread Leonard Xu
+1(binding)

Best,
Leonard

> 2024年4月17日 下午8:31,Lincoln Lee  写道:
> 
> +1(binding)
> 
> Best,
> Lincoln Lee
> 
> 
> Ferenc Csaky  于2024年4月17日周三 19:58写道:
> 
>> +1 (non-binding)
>> 
>> Best,
>> Ferenc
>> 
>> 
>> 
>> 
>> On Wednesday, April 17th, 2024 at 10:26, Ahmed Hamdy 
>> wrote:
>> 
>>> 
>>> 
>>> + 1 (non-binding)
>>> 
>>> Best Regards
>>> Ahmed Hamdy
>>> 
>>> 
>>> On Wed, 17 Apr 2024 at 08:28, Yuepeng Pan panyuep...@apache.org wrote:
>>> 
 +1(non-binding).
 
 Best,
 Yuepeng Pan
 
 At 2024-04-17 14:27:27, "Ron liu" ron9@gmail.com wrote:
 
> Hi Dev,
> 
> Thank you to everyone for the feedback on FLIP-435: Introduce a New
> Materialized Table for Simplifying Data Pipelines[1][2].
> 
> I'd like to start a vote for it. The vote will be open for at least
>> 72
> hours unless there is an objection or not enough votes.
> 
> [1]
 
 
>> https://cwiki.apache.org/confluence/display/FLINK/FLIP-435%3A+Introduce+a+New+Materialized+Table+for+Simplifying+Data+Pipelines
 
> [2] https://lists.apache.org/thread/c1gnn3bvbfs8v1trlf975t327s4rsffs
> 
> Best,
> Ron
>> 



Re: [Vote] FLIP-438: Amazon SQS Sink Connector

2024-04-16 Thread Leonard Xu
+1 (binding)

Best,
Leonard

> 2024年4月17日 上午2:25,Robert Metzger  写道:
> 
> +1 binding
> 
> On Tue, Apr 16, 2024 at 2:05 PM Jeyhun Karimov  wrote:
> 
>> Thanks Priya for driving the FLIP.
>> 
>> +1 (non-binding)
>> 
>> Regards,
>> Jeyhun
>> 
>> On Tue, Apr 16, 2024 at 12:37 PM Hong Liang  wrote:
>> 
>>> +1 (binding)
>>> 
>>> Thanks Priya for driving this! This has been a requested feature for a
>>> while now, and will benefit the community :)
>>> 
>>> Hong
>>> 
>>> On Tue, Apr 16, 2024 at 3:23 AM Muhammet Orazov
>>>  wrote:
>>> 
 +1 (non-binding)
 
 Thanks Priya for the FLIP and driving it!
 
 Best,
 Muhammet
 
 On 2024-04-12 21:56, Dhingra, Priya wrote:
> Hi devs,
> 
> 
> 
> Thank you to everyone for the feedback on FLIP-438: Amazon SQS Sink
> Connector<
 
>>> 
>> https://cwiki.apache.org/confluence/display/FLINK/FLIP-438%3A+Amazon+SQS+Sink+Connector
> 
> 
> 
> 
> I would like to start a vote for it. The vote will be open for at
>> least
> 72
> 
> hours unless there is an objection or not enough votes.
> 
> 
> 
> 
 
>>> 
>> https://cwiki.apache.org/confluence/display/FLINK/FLIP-438%3A+Amazon+SQS+Sink+Connector
> 
> Regards
> Priya
 
>>> 
>> 



Re: [ANNOUNCE] New Apache Flink Committer - Zakelly Lan

2024-04-15 Thread Leonard Xu
Congratulations Zakelly!


Best,
Leonard
> 2024年4月15日 下午3:56,Samrat Deb  写道:
> 
> Congratulations Zakelly!



Re: [ANNOUNCE] New Apache Flink PMC Member - Lincoln Lee

2024-04-14 Thread Leonard Xu
Congratulations, Lincoln~

Best,
Leonard



> 2024年4月12日 下午4:40,Yuepeng Pan  写道:
> 
> Congratulations, Lincoln!
> 
> Best,Yuepeng Pan
> At 2024-04-12 16:24:01, "Yun Tang"  wrote:
>> Congratulations, Lincoln!
>> 
>> 
>> Best
>> Yun Tang
>> 
>> From: Jark Wu 
>> Sent: Friday, April 12, 2024 15:59
>> To: dev 
>> Cc: Lincoln Lee 
>> Subject: [ANNOUNCE] New Apache Flink PMC Member - Lincoln Lee
>> 
>> Hi everyone,
>> 
>> On behalf of the PMC, I'm very happy to announce that Lincoln Lee has
>> joined the Flink PMC!
>> 
>> Lincoln has been an active member of the Apache Flink community for
>> many years. He mainly works on Flink SQL component and has driven
>> /pushed many FLIPs around SQL, including FLIP-282/373/415/435 in
>> the recent versions. He has a great technical vision of Flink SQL and
>> participated in plenty of discussions in the dev mailing list. Besides
>> that,
>> he is community-minded, such as being the release manager of 1.19,
>> verifying releases, managing release syncs, writing the release
>> announcement etc.
>> 
>> Congratulations and welcome Lincoln!
>> 
>> Best,
>> Jark (on behalf of the Flink PMC)



Re: [ANNOUNCE] New Apache Flink PMC Member - Jing Ge

2024-04-14 Thread Leonard Xu
Congratulations, Jing~

Best,
Leonard

> 2024年4月14日 下午4:23,Xia Sun  写道:
> 
> Congratulations, Jing!
> 
> Best,
> Xia
> 
> Ferenc Csaky  于2024年4月13日周六 00:50写道:
> 
>> Congratulations, Jing!
>> 
>> Best,
>> Ferenc
>> 
>> 
>> 
>> On Friday, April 12th, 2024 at 13:54, Ron liu  wrote:
>> 
>>> 
>>> 
>>> Congratulations, Jing!
>>> 
>>> Best,
>>> Ron
>>> 
>>> Junrui Lee jrlee@gmail.com 于2024年4月12日周五 18:54写道:
>>> 
 Congratulations, Jing!
 
 Best,
 Junrui
 
 Aleksandr Pilipenko z3d...@gmail.com 于2024年4月12日周五 18:28写道:
 
> Congratulations, Jing!
> 
> Best Regards,
> Aleksandr
>> 



Re: [VOTE] FLIP-399: Flink Connector Doris

2024-04-09 Thread Leonard Xu
+1 (binding)

Best,
Leonard

> 2024年4月9日 下午5:11,Muhammet Orazov  写道:
> 
> Hey Wudi,
> 
> Thanks for your efforts.
> 
> +1 (non-binding)
> 
> Best,
> Muhammet
> 
> On 2024-04-09 02:47, wudi wrote:
>> Hi devs,
>> I would like to start a vote about FLIP-399 [1]. The FLIP is about 
>> contributing the Flink Doris Connector[2] to the Flink community. Discussion 
>> thread [3].
>> The vote will be open for at least 72 hours unless there is an objection or
>> insufficient votes.
>> Thanks,
>> Di.Wu
>> [1] 
>> https://cwiki.apache.org/confluence/display/FLINK/FLIP-399%3A+Flink+Connector+Doris
>> [2] https://github.com/apache/doris-flink-connector
>> [3] https://lists.apache.org/thread/p3z4wsw3ftdyfs9p2wd7bbr2gfyl3xnh



Re: [DISCUSS] FLIP-399: Flink Connector Doris

2024-04-06 Thread Leonard Xu
;>>> option is only used for error recovery scenarios, such as when a
>>>>>> transaction is cleared by the server but you want to reuse the upstream
>>>>>> offset from the checkpoint.
>>>>>> 
>>>>>> 3. Also, thank you for pointing out the issue with the parameter. It has
>>>>>> already been addressed[2], but the FLIP changes were overlooked. It has
>>>>>> been updated.
>>>>>> 
>>>>>> [1]
>>>>>> 
>>>> https://github.com/apache/doris-flink-connector/blob/master/flink-doris-connector/src/main/java/org/apache/doris/flink/sink/committer/DorisCommitter.java#L150-L160
>>>>>> [2]
>>>>>> 
>>>> https://github.com/apache/doris-flink-connector/blob/master/flink-doris-connector/src/main/java/org/apache/doris/flink/table/DorisConfigOptions.java#L89-L98
>>>>>> 
>>>>>> Brs
>>>>>> di.wu
>>>>>> 
>>>>>> 
>>>>>> 
>>>>>>> 2024年3月22日 18:28,Feng Jin  写道:
>>>>>>> 
>>>>>>> Hi Di,
>>>>>>> 
>>>>>>> Thank you for the update, as well as quickly implementing corresponding
>>>>>>> capabilities including filter push down and project push down.
>>>>>>> 
>>>>>>> Regarding the transaction timeout, I still have some doubts. I would
>>>> like
>>>>>>> to confirm if we can control this timeout parameter in the connector,
>>>>>> such
>>>>>>> as setting it to 10 minutes or 1 hour.
>>>>>>> Also, when a transaction is cleared by the server, the commit operation
>>>>>> of
>>>>>>> the connector will fail, leading to job failure. In this case, can
>>>> users
>>>>>>> only choose to delete the checkpoint and re-consume historical data?
>>>>>>> 
>>>>>>> There is also a small question regarding the parameters*: *
>>>>>>> *doris.request.connect.timeout.ms <
>>>>>> http://doris.request.connect.timeout.ms>*
>>>>>>> and d*oris.request.read.timeout.ms <
>>>> http://oris.request.read.timeout.ms
>>>>>>> *,
>>>>>>> can we change them to Duration type and remove the "ms" suffix.?
>>>>>>> This way, all time parameters can be kept uniform in type as duration.
>>>>>>> 
>>>>>>> 
>>>>>>> Best,
>>>>>>> Feng
>>>>>>> 
>>>>>>> On Fri, Mar 22, 2024 at 4:46 PM wudi <676366...@qq.com.invalid> wrote:
>>>>>>> 
>>>>>>>> Hi, Feng,
>>>>>>>> Thank you, that's a great suggestion !
>>>>>>>> 
>>>>>>>> I have already implemented FilterPushDown and removed that parameter
>>>> on
>>>>>>>> DorisDynamicTableSource[1], and also updated FLIP.
>>>>>>>> 
>>>>>>>> Regarding the mention of [Doris also aborts transactions], it may not
>>>>>> have
>>>>>>>> been described accurately. It mainly refers to the automatic
>>>> expiration
>>>>>> of
>>>>>>>> long-running transactions in Doris that have not been committed for a
>>>>>>>> prolonged period.
>>>>>>>> 
>>>>>>>> As for two-phase commit, when a commit fails, the checkpoint will also
>>>>>>>> fail, and the job will be continuously retried.
>>>>>>>> 
>>>>>>>> [1]
>>>>>>>> 
>>>>>> 
>>>> https://github.com/apache/doris-flink-connector/blob/master/flink-doris-connector/src/main/java/org/apache/doris/flink/table/DorisDynamicTableSource.java#L58
>>>>>>>> 
>>>>>>>> Brs
>>>>>>>> di.wu
>>>>>>>> 
>>>>>>>> 
>>>>>>>>> 2024年3月15日 14:53,Feng Jin  写道:
>>>>>>>>> 
>>>>>>>>> Hi Di
>>>>>>>>> 
>>>>>>>>> Thank you for initiating this FLIP, +1 for this.
>>>>>>>>> 
>>>>>>>>> Regarding the opti
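
One concrete point in the exchange above is the suggestion to expose
doris.request.connect.timeout.ms and doris.request.read.timeout.ms as
Duration-typed options without the "ms" suffix. Below is a minimal sketch of what
that could look like with Flink's ConfigOptions builder; the option names and
defaults are carried over from the discussion for illustration, and the
connector's actual definitions may differ.

```
import java.time.Duration;

import org.apache.flink.configuration.ConfigOption;
import org.apache.flink.configuration.ConfigOptions;

public class DorisTimeoutOptionsSketch {

    // Duration-typed options accept values such as "30 s" or "5 min" uniformly,
    // instead of a bare long that is implicitly interpreted as milliseconds.
    public static final ConfigOption<Duration> REQUEST_CONNECT_TIMEOUT =
            ConfigOptions.key("doris.request.connect.timeout")
                    .durationType()
                    .defaultValue(Duration.ofSeconds(30))
                    .withDescription("Connect timeout for requests sent to Doris.");

    public static final ConfigOption<Duration> REQUEST_READ_TIMEOUT =
            ConfigOptions.key("doris.request.read.timeout")
                    .durationType()
                    .defaultValue(Duration.ofSeconds(30))
                    .withDescription("Read timeout for requests sent to Doris.");
}
```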

Re: [VOTE] FLIP-437: Support ML Models in Flink SQL

2024-04-03 Thread Leonard Xu
+1(binding)

Best,
Leonard

> 2024年4月3日 下午3:37,Piotr Nowojski  写道:
> 
> +1 (binding)
> 
> Best,
> Piotrek
> 
> śr., 3 kwi 2024 o 04:29 Yu Chen  napisał(a):
> 
>> +1 (non-binding)
>> 
>> Looking forward to this future.
>> 
>> Thanks,
>> Yu Chen
>> 
>>> 2024年4月3日 10:23,Jark Wu  写道:
>>> 
>>> +1 (binding)
>>> 
>>> Best,
>>> Jark
>>> 
>>> On Tue, 2 Apr 2024 at 15:12, Timo Walther  wrote:
>>> 
 +1 (binding)
 
 Thanks,
 Timo
 
 On 29.03.24 17:30, Hao Li wrote:
> Hi devs,
> 
> I'd like to start a vote on the FLIP-437: Support ML Models in Flink
> SQL [1]. The discussion thread is here [2].
> 
> The vote will be open for at least 72 hours unless there is an
>> objection
 or
> insufficient votes.
> 
> [1]
> 
 
>> https://cwiki.apache.org/confluence/display/FLINK/FLIP-437%3A+Support+ML+Models+in+Flink+SQL
> 
> [2] https://lists.apache.org/thread/9z94m2bv4w265xb5l2mrnh4lf9m28ccn
> 
> Thanks,
> Hao
> 
 
 
>> 
>> 



Re: [DISCUSS] FLIP-434: Support optimizations for pre-partitioned data sources

2024-04-02 Thread Leonard Xu
Hey, Jeyhun 

Thanks for kicking off this discussion. I have two questions about streaming 
sources:

(1) The FLIP motivation section says the Kafka broker is already partitioned
w.r.t. some key[s]. Is this the main use case in the Kafka world? Partitioning by
key fields is not the default behavior of Kafka's default partitioner [1], IIUC.

(2) Considering that the FLIP's optimization scope covers both batch and
streaming pre-partitioned sources, could you add a streaming source example to
help me understand the FLIP better? I think the Kafka source is a good candidate
for a streaming example; the file source is a good one for the batch case and
really helped me follow the FLIP.

Best,
Leonard
[1]https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/clients/producer/internals/DefaultPartitioner.java#L31
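
For readers who want to see what "partitioned w.r.t. some key" means on the Kafka
side, here is a tiny sketch of how a keyed record is mapped to a partition (a
murmur2 hash of the key bytes modulo the partition count), assuming the
kafka-clients Utils helpers are available. Records without a key are spread
differently (sticky/round-robin), which is exactly why the question above
matters.

```
import java.nio.charset.StandardCharsets;

import org.apache.kafka.common.utils.Utils;

public class KeyedPartitioningSketch {

    // How Kafka assigns a partition for a record that has a key: a stable hash of
    // the serialized key, so all records with the same key land in one partition.
    static int partitionForKey(String key, int numPartitions) {
        byte[] keyBytes = key.getBytes(StandardCharsets.UTF_8);
        return Utils.toPositive(Utils.murmur2(keyBytes)) % numPartitions;
    }

    public static void main(String[] args) {
        // Deterministic per key: re-running always yields the same partition.
        System.out.println(partitionForKey("user-42", 8));
        System.out.println(partitionForKey("user-43", 8));
    }
}
```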



> 2024年4月3日 上午5:53,Jeyhun Karimov  写道:
> 
> Hi Lincoln,
> 
> Thanks a lot for your comments. Please find my answers below.
> 
> 
> 1. Is this flip targeted only at batch scenarios or does it include
>> streaming?
>> (The flip and the discussion did not explicitly mention this, but in the
>> draft pr, I only
>> saw the implementation for batch scenarios
>> 
>> https://github.com/apache/flink/pull/24437/files#diff-a6d71dd7d9bf0e7776404f54473b504e1de1240e93f820214fa5d1f082fb30c8
>> <
>> https://github.com/apache/flink/pull/24437/files#diff-a6d71dd7d9bf0e7776404f54473b504e1de1240e93f820214fa5d1f082fb30c8%EF%BC%89
>>> 
>> )
>> If we expect this also apply to streaming, then we need to consider the
>> stricter
>> shuffle restrictions of streaming compared to batch (if support is
>> considered,
>> more discussion is needed here, let’s not expand for now). If it only
>> applies to batch,
>> it is recommended to clarify in the flip.
> 
> 
> - The FLIP targets both streaming and batch scenarios.
> Could you please elaborate more on what you mean by additional
> restrictions?
> 
> 
> 2. In the current implementation, the optimized plan seems to have some
>> problems.
>> As described in the class comments:
>> 
>> https://github.com/apache/flink/blob/d6e3b51fdb9a2e565709e8d7bc619234b3768ed1/flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/plan/rules/physical/batch/RemoveRedundantShuffleRule.java#L60
> 
> BatchPhysicalHashAggregate (local)
> 
>   +- BatchPhysicalLocalHashAggregate (local)
>>  +- BatchPhysicalTableSourceScan
>> The `BatchPhysicalLocalHashAggregate` here is redundant (in the case of
>> one-phase
>> hashAgg, localAgg is not necessary, which is the scenario currently handled
>> by
>> `RemoveRedundantLocalHashAggRule` and other rules)
> 
> 
> - Yes, you are completely right. Note that the PR you referenced is just a
> quick PoC.
> Redundant operators you mentioned exist because
> `RemoveRedundantShuffleRule` just removes the Exchange operator,
> without modifying upstream/downstream operators.
> As I mentioned, the implementation is just a PoC and the end implementation
> will make sure that existing redundancy elimination rules remove redundant
> operators.
> 
> 
> Also, in the draft pr,
>> the optimization of `testShouldEliminatePartitioning1` &
>> `testShouldEliminatePartitioning2`
>> seems didn't take effect?
>> 
>> https://github.com/apache/flink/blob/d6e3b51fdb9a2e565709e8d7bc619234b3768ed1/flink-table/flink-table-planner/src/test/resources/org/apache/flink/connector/file/table/BatchFileSystemTableSourceTest.xml#L38
> 
> 
> -  Note that in this example, Exchange operator have a
> property KEEP_INPUT_AS_IS that indicates that data distribution is the same
> as its input.
> Since we have redundant operators (as shown above, two aggregate operators)
> one of the rules (not in this FLIP)
> adds this Exchange operator with KEEP_INPUT_AS_IS in between.
> Similar to my comment above, the end implementation will be except from
> redundant operators.
> 
> In conjunction with question 2, I am wondering if we have a better choice
>> (of course, not simply adding the current `PHYSICAL_OPT_RULES`'s
>> `RemoveRedundantLocalXXRule`s
>> to the `PHYSICAL_REWRITE`).
>> For example, let the source actively provide some traits (including
>> `FlinkRelDistribution`
>> and `RelCollation`) to the planner. The advantage of doing this is to
>> directly reuse the
>> current shuffle remove optimization (as `FlinkExpandConversionRule`
>> implemented),
>> and according to the data distribution characteristics provided by the
>> source, the planner
>> may choose a physical operator with a cheaper costs (for example, according
>> to `RelCollation`,
>> the planner can use sortAgg, no need for a separate local sort operation).
>> WDYT?
> 
> 
> - Good point. Makes sense to me. I will check FlinkExpandConversionRule to
> be utilized in the implementation.
> 
> 
> Regards,
> Jeyhun
> 
> 
> 
> On Tue, Apr 2, 2024 at 6:01 PM Lincoln Lee  wrote:
> 
>> Hi Jeyhun,
>> 
>> Thank you for driving this, it would be very useful optimization!
>> 
>> Sorry for joining the discussion now(I or

Re: [DISCUSS] Externalized Google Cloud Connectors

2024-04-01 Thread Leonard Xu
Hey, Claire

Thanks for starting this discussion. All Flink external connector repos are
sub-projects of Apache Flink, including
https://github.com/apache/flink-connector-aws.

Creating a Flink external connector repo named flink-connectors-gcp as a
sub-project of Apache Beam is not a good idea from my side.

>   Currently, we have no Flink committers on our team. We are actively
>   involved in the Apache Beam community and have a number of ASF members on
>   the team.

Not having a Flink committer should not be a strong concern in this case. The
Flink community welcomes contributors to contribute to and maintain the
connectors, and as a contributor, through continuous connector development and
maintenance work in the community, you will also have the opportunity to become
a committer.

Best,
Leonard


> 2024年2月14日 上午12:24,Claire McCarthy  写道:
> 
> Hi Devs!
> 
> I’d like to kick off a discussion on setting up a repo for a new fleet of
> Google Cloud connectors.
> 
> A bit of context:
> 
>   -
> 
>   We have a team of Google engineers who are looking to build/maintain
>   5-10 GCP connectors for Flink.
>   -
> 
>   We are wondering if it would make sense to host our connectors under the
>   ASF umbrella following a similar repo structure as AWS (
>   https://github.com/apache/flink-connector-aws). In our case:
>   apache/flink-connectors-gcp.
>   -
> 
>   Currently, we have no Flink committers on our team. We are actively
>   involved in the Apache Beam community and have a number of ASF members on
>   the team.
> 
> 
> We saw that one of the original motivations for externalizing connectors
> was to encourage more activity and contributions around connectors by
> easing the contribution overhead. We understand that the decision was
> ultimately made to host the externalized connector repos under the ASF
> organization. For the same reasons (release infra, quality assurance,
> integration with the community, etc.), we would like all GCP connectors to
> live under the ASF organization.
> 
> We want to ask the Flink community what you all think of this idea, and
> what would be the best way for us to go about contributing something like
> this. We are excited to contribute and want to learn and follow your
> practices.
> 
> A specific issue we know of is that our changes need approval from Flink
> committers. Do you have a suggestion for how best to go about a new
> contribution like ours from a team that does not have committers? Is it
> possible, for example, to partner with a committer (or a small cohort) for
> tight engagement? We also know about ASF voting and release process, but
> that doesn't seem to be as much of a potential hurdle.
> 
> Huge thanks in advance for sharing your thoughts!
> 
> 
> Claire



Re: [DISCUSS] Planning Flink 1.20

2024-03-25 Thread Leonard Xu
Wow, happy to see Ufuk and Robert join the release managers group.

+1 for the release manager candidates (Weijie, Rui Fan, Ufuk and Robert) from my
side.


Best,
Leonard



> 2024年3月25日 下午6:09,Robert Metzger  写道:
> 
> Hi, thanks for starting the discussion.
> 
> +1 for the proposed timeline and the three proposed release managers.
> 
> I'm happy to join the release managers group as well, as a backup for Ufuk
> (unless there are objections about the number of release managers)
> 
> On Mon, Mar 25, 2024 at 11:04 AM Ufuk Celebi  wrote:
> 
>> Hey all,
>> 
>> I'd like to join the release managers for 1.20 as well. I'm looking
>> forward to getting more actively involved again.
>> 
>> Cheers,
>> 
>> Ufuk
>> 
>> On Sun, Mar 24, 2024, at 11:27 AM, Ahmed Hamdy wrote:
>>> +1 for the proposed timeline and release managers.
>>> Best Regards
>>> Ahmed Hamdy
>>> 
>>> 
>>> On Fri, 22 Mar 2024 at 07:41, Xintong Song 
>> wrote:
>>> 
>>>> +1 for the proposed timeline and Weijie & Rui as the release managers.
>>>> 
>>>> I think it would be welcomed if another 1-2 volunteers join as the
>> release
>>>> managers, but that's not a must. We used to have only 1-2 release
>> managers
>>>> for each release,
>>>> 
>>>> Best,
>>>> 
>>>> Xintong
>>>> 
>>>> 
>>>> 
>>>> On Fri, Mar 22, 2024 at 2:55 PM Jark Wu  wrote:
>>>> 
>>>>> Thanks for kicking this off.
>>>>> 
>>>>> +1 for the volunteered release managers (Weijie Guo, Rui Fan) and the
>>>>> targeting date (feature freeze: June 15).
>>>>> 
>>>>> Best,
>>>>> Jark
>>>>> 
>>>>> 
>>>>> 
>>>>> 
>>>>> 
>>>>> On Fri, 22 Mar 2024 at 14:00, Rui Fan <1996fan...@gmail.com> wrote:
>>>>> 
>>>>>> Thanks Leonard for this feedback and help!
>>>>>> 
>>>>>> Best,
>>>>>> Rui
>>>>>> 
>>>>>> On Fri, Mar 22, 2024 at 12:36 PM weijie guo <
>> guoweijieres...@gmail.com
>>>>> 
>>>>>> wrote:
>>>>>> 
>>>>>>> Thanks Leonard!
>>>>>>> 
>>>>>>>> I'd like to help you if you need some help like permissions
>> from
>>>> PMC
>>>>>>> side, please feel free to ping me.
>>>>>>> 
>>>>>>> Nice to know. It'll help a lot!
>>>>>>> 
>>>>>>> Best regards,
>>>>>>> 
>>>>>>> Weijie
>>>>>>> 
>>>>>>> 
>>>>>>> Leonard Xu  于2024年3月22日周五 12:09写道:
>>>>>>> 
>>>>>>>> +1 for the proposed release managers (Weijie Guo, Rui Fan),
>> both the
>>>>> two
>>>>>>>> candidates are pretty active committers thus I believe they
>> know the
>>>>>>>> community development process well. The recent releases have
>> four
>>>>>> release
>>>>>>>> managers, and I am also looking forward to having other
>> volunteers
>>>>>>>> join the management of Flink 1.20.
>>>>>>>> 
>>>>>>>> +1 for targeting date (feature freeze: June 15, 2024),
>> referring to
>>>>> the
>>>>>>>> release cycle of recent versions, release cycle of 4 months
>> makes
>>>>> sense
>>>>>> to
>>>>>>>> me.
>>>>>>>> 
>>>>>>>> 
>>>>>>>> I'd like to help you if you need some help like permissions
>> from PMC
>>>>>>>> side, please feel free to ping me.
>>>>>>>> 
>>>>>>>> Best,
>>>>>>>> Leonard
>>>>>>>> 
>>>>>>>> 
>>>>>>>>> 2024年3月19日 下午5:35,Rui Fan <1996fan...@gmail.com> 写道:
>>>>>>>>> 
>>>>>>>>> Hi Weijie,
>>>>>>>>> 
>>>>>>>>> Thanks for kicking off 1.20! I'd like to join you and
>> participate
>>>> in
>>>>

Re: [ANNOUNCE] Apache Flink Kubernetes Operator 1.8.0 released

2024-03-25 Thread Leonard Xu
Congratulations! Thanks to Maximilian for the release work, and to all involved.

Best,
Leonard



> 2024年3月25日 下午7:04,Muhammet Orazov  写道:
> 
> Great! Thanks Maximilian and everyone involved for the effort and release!
> 
> Best,
> Muhammet
> 
> On 2024-03-25 10:35, Maximilian Michels wrote:
>> The Apache Flink community is very happy to announce the release of
>> the Apache Flink Kubernetes Operator version 1.8.0.
>> The Flink Kubernetes Operator allows users to manage their Apache
>> Flink applications on Kubernetes through all aspects of their
>> lifecycle.
>> Release highlights:
>> - Flink Autotuning automatically adjusts TaskManager memory
>> - Flink Autoscaling metrics and decision accuracy improved
>> - Improve standalone Flink Autoscaling
>> - Savepoint trigger nonce for savepoint-based restarts
>> - Operator stability improvements for cluster shutdown
>> Blog post: 
>> https://flink.apache.org/2024/03/21/apache-flink-kubernetes-operator-1.8.0-release-announcement/
>> The release is available for download at:
>> https://flink.apache.org/downloads.html
>> Maven artifacts for Flink Kubernetes Operator can be found at:
>> https://search.maven.org/artifact/org.apache.flink/flink-kubernetes-operator
>> Official Docker image for Flink Kubernetes Operator can be found at:
>> https://hub.docker.com/r/apache/flink-kubernetes-operator
>> The full release notes are available in Jira:
>> https://issues.apache.org/jira/secure/ReleaseNote.jspa?version=12353866&projectId=12315522
>> We would like to thank the Apache Flink community and its contributors
>> who made this release possible!
>> Cheers,
>> Max



Re: [DISCUSS] Flink Website Menu Adjustment

2024-03-25 Thread Leonard Xu
Thanks Zhongqiang for starting this discussion. Updating the documentation menus
according to the sub-projects' activity makes sense to me.

+1 for the proposed menus:

> After:
> 
> With Flink
> With Flink Kubernetes Operator
> With Flink CDC
> With Flink ML
> With Flink Stateful Functions
> Training Course



Best,
Leonard

> 2024年3月25日 下午3:48,gongzhongqiang  写道:
> 
> Hi everyone,
> 
> I'd like to start a discussion on adjusting the Flink website [1] menu to
> improve accuracy and usability. While migrating the Flink CDC documentation
> to the website, I found outdated links; we need to review and update the menus
> to surface the most relevant information for our users.
> 
> 
> Proposal:
> 
> - Remove Paimon [2] from the "Getting Started" and "Documentation" menus:
> Paimon [2] is now an independent top project of ASF. CC: jingsong lees
> 
> - Sort the projects in the subdirectory by the activity of the projects.
> Here I list the number of releases for each project in the past year.
> 
> Flink Kubernetes Operator : 7
> Flink CDC : 5
> Flink ML  : 2
> Flink Stateful Functions : 1
> 
> 
> Expected Outcome :
> 
> - Menu "Getting Started"
> 
> Before:
> 
> With Flink
> 
> With Flink Stateful Functions
> 
> With Flink ML
> 
> With Flink Kubernetes Operator
> 
> With Paimon(incubating) (formerly Flink Table Store)
> 
> With Flink CDC
> 
> Training Course
> 
> 
> After:
> 
> With Flink
> With Flink Kubernetes Operator
> 
> With Flink CDC
> 
> With Flink ML
> 
> With Flink Stateful Functions
> 
> Training Course
> 
> 
> - Menu "Documentation" will same with "Getting Started"
> 
> 
> I look forward to hearing your thoughts and suggestions on this proposal.
> 
> [1] https://flink.apache.org/
> [2] https://github.com/apache/incubator-paimon
> [3] https://github.com/apache/flink-statefun
> 
> 
> 
> Best regards,
> 
> Zhongqiang Gong



Re: [DISCUSS] Planning Flink 1.20

2024-03-21 Thread Leonard Xu
+1 for the proposed release managers (Weijie Guo, Rui Fan); both candidates are
very active committers, so I believe they know the community development process
well. The recent releases have had four release managers, and I am also looking
forward to having other volunteers join the management of Flink 1.20.

+1 for the target date (feature freeze: June 15, 2024); referring to the release
cycle of recent versions, a 4-month release cycle makes sense to me.


I'd be glad to help if you need anything, such as permissions from the PMC side;
please feel free to ping me.

Best,
Leonard


> 2024年3月19日 下午5:35,Rui Fan <1996fan...@gmail.com> 写道:
> 
> Hi Weijie,
> 
> Thanks for kicking off 1.20! I'd like to join you and participate in the
> 1.20 release.
> 
> Best,
> Rui
> 
> On Tue, Mar 19, 2024 at 5:30 PM weijie guo 
> wrote:
> 
>> Hi everyone,
>> 
>> With the release announcement of Flink 1.19, it's a good time to kick off
>> discussion of the next release 1.20.
>> 
>> 
>> - Release managers
>> 
>> 
>> I'd like to volunteer as one of the release managers this time. It has been
>> good practice to have a team of release managers from different
>> backgrounds, so please raise you hand if you'd like to volunteer and get
>> involved.
>> 
>> 
>> 
>> - Timeline
>> 
>> 
>> Flink 1.19 has been released. With a target release cycle of 4 months,
>> we propose a feature freeze date of *June 15, 2024*.
>> 
>> 
>> 
>> - Collecting features
>> 
>> 
>> As usual, we've created a wiki page[1] for collecting new features in 1.20.
>> 
>> 
>> In addition, we already have a number of FLIPs that have been voted or are
>> in the process, including pre-works for version 2.0.
>> 
>> 
>> In the meantime, the release management team will be finalized in the next
>> few days, and we'll continue to create Jira Boards and Sync meetings
>> to make it easy
>> for everyone to get an overview and track progress.
>> 
>> 
>> 
>> Best regards,
>> 
>> Weijie
>> 
>> 
>> 
>> [1] https://cwiki.apache.org/confluence/display/FLINK/1.20+Release
>> 



Re: [VOTE] FLIP-439: Externalize Kudu Connector from Bahir

2024-03-21 Thread Leonard Xu
+1(binding)

Best,
Leonard

> 2024年3月21日 下午5:21,Martijn Visser  写道:
> 
> +1 (binding)
> 
> On Thu, Mar 21, 2024 at 8:01 AM gongzhongqiang 
> wrote:
> 
>> +1 (non-binding)
>> 
>> Bests,
>> Zhongqiang Gong
>> 
>> Ferenc Csaky  于2024年3月20日周三 22:11写道:
>> 
>>> Hello devs,
>>> 
>>> I would like to start a vote about FLIP-439 [1]. The FLIP is about to
>>> externalize the Kudu
>>> connector from the recently retired Apache Bahir project [2] to keep it
>>> maintainable and
>>> make it up to date as well. Discussion thread [3].
>>> 
>>> The vote will be open for at least 72 hours (until 2024 March 23 14:03
>>> UTC) unless there
>>> are any objections or insufficient votes.
>>> 
>>> Thanks,
>>> Ferenc
>>> 
>>> [1]
>>> 
>> https://cwiki.apache.org/confluence/display/FLINK/FLIP-439%3A+Externalize+Kudu+Connector+from+Bahir
>>> [2] https://attic.apache.org/projects/bahir.html
>>> [3] https://lists.apache.org/thread/oydhcfkco2kqp4hdd1glzy5vkw131rkz
>> 



[ANNOUNCE] Donation Flink CDC into Apache Flink has Completed

2024-03-20 Thread Leonard Xu
Hi devs and users,

We are thrilled to announce that the donation of Flink CDC as a sub-project of 
Apache Flink has completed. We invite you to explore the new resources 
available:

- GitHub Repository: https://github.com/apache/flink-cdc
- Flink CDC Documentation: 
https://nightlies.apache.org/flink/flink-cdc-docs-stable

After the Flink community accepted this donation [1], we completed the software 
copyright signing, code repo migration, code cleanup, website migration, CI 
migration, GitHub issues migration, etc. 
Here I am particularly grateful to Hang Ruan, Zhongqiang Gong, Qingsheng Ren, 
Jiabao Sun, LvYanquan, loserwang1024 and other contributors for their 
contributions and help during this process!


For all previous contributors: The contribution process has changed slightly to 
align with the main Flink project. To report bugs or suggest new features, 
please open tickets in Apache Jira (https://issues.apache.org/jira). Note that we 
will no longer accept GitHub issues for these purposes.


Welcome to explore the new repository and documentation. Your feedback and 
contributions are invaluable as we continue to improve Flink CDC.

Thanks everyone for your support and happy exploring Flink CDC!

Best,
Leonard
[1] https://lists.apache.org/thread/cw29fhsp99243yfo95xrkw82s5s418ob



Re: [SUMMARY] Flink 1.19 last sync summary on 03/19/2024

2024-03-19 Thread Leonard Xu


Thanks for your continuous release sync summaries, Lincoln. They help me a lot.


Best,
Leonard

> 2024年3月19日 下午11:49,Lincoln Lee  写道:
> 
> Hi everyone,
> 
> Flink 1.19.0 has been officially released yesterday[1].
> 
> I'd like to share some highlights of the last release sync of 1.19:
> 
> - Remaining works
> 
> The official Docker image is still in progress [2] and will be available once
> the related PR has been merged [3].
> In addition, some follow-up items are being processed [4], and the end of
> support for lower versions will be discussed in a separate mail.
> 
> Thanks to all contributors for your great work on 1.19 and the support for
> the release!
> 
> The new 1.20 release cycle[5] has set off, welcome to continue contributing!
> 
> [1] https://lists.apache.org/thread/sofmxytbh6y20nwot1gywqqc2lqxn4hm
> [2] https://issues.apache.org/jira/browse/FLINK-34701
> [3] https://github.com/docker-library/official-images/pull/16430
> [4] https://issues.apache.org/jira/browse/FLINK-34706
> [5] https://lists.apache.org/thread/80h3nzk08v276xmllswbbbg1z7m3v70t
> 
> 
> Best,
> Yun, Jing, Martijn and Lincoln



Re: [VOTE] FLIP-436: Introduce Catalog-related Syntax

2024-03-19 Thread Leonard Xu
+1(binding)


Best,
Leonard
> 2024年3月19日 下午9:03,Lincoln Lee  写道:
> 
> +1 (binding)
> 
> Best,
> Lincoln Lee
> 
> 
> Feng Jin  于2024年3月19日周二 19:59写道:
> 
>> +1 (non-binding)
>> 
>> Best,
>> Feng
>> 
>> On Tue, Mar 19, 2024 at 7:46 PM Ferenc Csaky 
>> wrote:
>> 
>>> +1 (non-binding).
>>> 
>>> Best,
>>> Ferenc
>>> 
>>> 
>>> 
>>> 
>>> On Tuesday, March 19th, 2024 at 12:39, Jark Wu  wrote:
>>> 
 
 
 +1 (binding)
 
 Best,
 Jark
 
 On Tue, 19 Mar 2024 at 19:05, Yuepeng Pan panyuep...@apache.org wrote:
 
> Hi, Yubin
> 
> Thanks for driving it !
> 
> +1 non-binding.
> 
> Best,
> Yuepeng Pan.
> 
> At 2024-03-19 17:56:42, "Yubin Li" lyb5...@gmail.com wrote:
> 
>> Hi everyone,
>> 
>> Thanks for all the feedback, I'd like to start a vote on the
>>> FLIP-436:
>> Introduce Catalog-related Syntax [1]. The discussion thread is here
>> [2].
>> 
>> The vote will be open for at least 72 hours unless there is an
>> objection or insufficient votes.
>> 
>> [1]
>> 
>>> 
>> https://cwiki.apache.org/confluence/display/FLINK/FLIP-436%3A+Introduce+Catalog-related+Syntax
>> [2]
>> https://lists.apache.org/thread/10k1bjb4sngyjwhmfqfky28lyoo7sv0z
>> 
>> Best regards,
>> Yubin
>>> 
>> 



Re: [ANNOUNCE] Apache Flink 1.19.0 released

2024-03-18 Thread Leonard Xu
Congratulations, thanks release managers and all involved for the great work!


Best,
Leonard

> 2024年3月18日 下午4:32,Jingsong Li  写道:
> 
> Congratulations!
> 
> On Mon, Mar 18, 2024 at 4:30 PM Rui Fan <1996fan...@gmail.com> wrote:
>> 
>> Congratulations, thanks for the great work!
>> 
>> Best,
>> Rui
>> 
>> On Mon, Mar 18, 2024 at 4:26 PM Lincoln Lee  wrote:
>>> 
>>> The Apache Flink community is very happy to announce the release of Apache 
>>> Flink 1.19.0, which is the first release for the Apache Flink 1.19 series.
>>> 
>>> Apache Flink® is an open-source stream processing framework for 
>>> distributed, high-performing, always-available, and accurate data streaming 
>>> applications.
>>> 
>>> The release is available for download at:
>>> https://flink.apache.org/downloads.html
>>> 
>>> Please check out the release blog post for an overview of the improvements 
>>> for this release:
>>> https://flink.apache.org/2024/03/18/announcing-the-release-of-apache-flink-1.19/
>>> 
>>> The full release notes are available in Jira:
>>> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12353282
>>> 
>>> We would like to thank all contributors of the Apache Flink community who 
>>> made this release possible!
>>> 
>>> 
>>> Best,
>>> Yun, Jing, Martijn and Lincoln



Re: Apache Flink : Vector database connector

2024-03-17 Thread Leonard Xu
Hey, Asimansu

Happy to hear you’re interested in integrating vector databases with Apache Flink. 
I’ve discussed a similar topic with Milvus community [1] members, 
and I am also very familiar with CDC, so I know well the value that integrating 
the two can bring to AI users.

If you are willing to contribute the corresponding connector, I will be very 
happy to take the time to help review the FLIP [2] and code in the community. 

Best,
Leonard

[1] https://github.com/milvus-io/milvus
[2] https://cwiki.apache.org/confluence/display/FLINK/FLIP+Connector+Template


> 2024年3月17日 上午2:18,Asimansu Bera  写道:
> 
> Hello Dev,
> 
> Vector databases like Weaviate are utilized in GenAI-based applications to
> store vector data in numeric format, supporting CRUD operations.
> Write-ahead logging (WAL) is employed to capture any changes made to the
> vector databases. For streaming-based applications, it's uncertain if
> tracking changes in vector classes and objects is significant. However,
> incorporating CDC (Change Data Capture) connectors can enhance processing
> capabilities for vector data in streaming applications, especially for
> LLM-based applications focused on data ingestion.
> 
> https://lnkd.in/gByTZYDY
> 
> Thoughts?
> 
> Note: all RDBMS are introducing support for vector data.



Re: [DISCUSS] FLIP-436: Introduce "SHOW CREATE CATALOG" Syntax

2024-03-14 Thread Leonard Xu
Hi Yubin,

Thanks for driving the discussion. Generally +1 for the FLIP, and a big +1 to 
finalizing the whole catalog syntax story in one FLIP; 
I will jump into the discussion again after you have completed the whole 
catalog syntax story.

Best,
Leonard



> 2024年3月14日 下午8:39,Roc Marshal  写道:
> 
> Hi, Yubin
> 
> 
> Thank you for initiating this discussion! +1 for the proposal.
> 
> 
> 
> 
> 
> 
> Best,
> Yuepeng Pan
> 
> 
> 
> 
> 
> 
> 
> 
> 
> At 2024-03-14 18:57:35, "Ferenc Csaky"  wrote:
>> Hi Yubin,
>> 
>> Thank you for initiating this discussion! +1 for the proposal.
>> 
>> I also think it makes sense to group the missing catalog related
>> SQL syntaxes under this FLIP.
>> 
>> Looking forward to these features!
>> 
>> Best,
>> Ferenc
>> 
>> 
>> 
>> 
>> On Thursday, March 14th, 2024 at 08:31, Jane Chan  
>> wrote:
>> 
>>> 
>>> 
>>> Hi Yubin,
>>> 
>>> Thanks for leading the discussion. I'm +1 for the FLIP.
>>> 
>>> As Jark said, it's a good opportunity to enhance the syntax for Catalog
>>> from a more comprehensive perspective. So, I suggest expanding the scope of
>>> this FLIP by focusing on the mechanism instead of one use case to enhance
>>> the overall functionality. WDYT?
>>> 
>>> Best,
>>> Jane
>>> 
>>> On Thu, Mar 14, 2024 at 11:38 AM Hang Ruan ruanhang1...@gmail.com wrote:
>>> 
 Hi, Yubin.
 
 Thanks for the FLIP. +1 for it.
 
 Best,
 Hang
 
 Yubin Li lyb5...@gmail.com 于2024年3月14日周四 10:15写道:
 
> Hi Jingsong, Feng, and Jeyhun
> 
> Thanks for your support and feedback!
> 
>> However, could we add a new method `getCatalogDescriptor()` to
>> CatalogManager instead of directly exposing CatalogStore?
> 
> Good point. Besides the audit tracking issue, the proposed feature
> only requires the `getCatalogDescriptor()` function. Exposing components
> with excessive functionality would bring unnecessary risks, so I have made
> modifications in the FLIP doc [1]. Thanks, Feng :)
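A minimal, illustrative sketch of what such an accessor could look like; the classes below are simplified stand-ins for Flink's CatalogManager, CatalogDescriptor and CatalogStore, not the final FLIP-436 API:

import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

public final class CatalogManagerSketch {

    /** Simplified stand-in for a catalog descriptor: a catalog name plus its options. */
    public static final class CatalogDescriptor {
        final String name;
        final Map<String, String> options;

        CatalogDescriptor(String name, Map<String, String> options) {
            this.name = name;
            this.options = options;
        }
    }

    // Backing store (stand-in for CatalogStore); deliberately not exposed to callers.
    private final Map<String, CatalogDescriptor> catalogStore = new HashMap<>();

    public void registerCatalog(String name, Map<String, String> options) {
        catalogStore.put(name, new CatalogDescriptor(name, options));
    }

    /** Returns the descriptor of a registered catalog, if present. */
    public Optional<CatalogDescriptor> getCatalogDescriptor(String catalogName) {
        // An audit/tracking hook could later wrap exactly this single access point.
        return Optional.ofNullable(catalogStore.get(catalogName));
    }
}

Keeping the store private and exposing only a single read-only lookup is what would make it easy to add audit tracking in CatalogManager later.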
> 
>> Showing the SQL parser implementation in the FLIP for the SQL syntax
>> might be a bit confusing. Also, the formal definition is missing for
>> this SQL clause.
> 
> Thank Jeyhun for pointing it out :) I have updated the doc [1] .
> 
> [1]
 
 https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=296290756
 
> Best,
> Yubin
> 
> On Thu, Mar 14, 2024 at 2:18 AM Jeyhun Karimov je.kari...@gmail.com
> wrote:
> 
>> Hi Yubin,
>> 
>> Thanks for the proposal. +1 for it.
>> I have one comment:
>> 
>> I would like to see the SQL syntax for the proposed statement. Showing
>> the
>> SQL parser implementation in the FLIP
>> for the SQL syntax might be a bit confusing. Also, the formal
>> definition
>> is
>> missing for this SQL clause.
>> Maybe something like [1] might be useful. WDYT?
>> 
>> Regards,
>> Jeyhun
>> 
>> [1]
 
 https://github.com/apache/flink/blob/0da60ca1a4754f858cf7c52dd4f0c97ae0e1b0cb/docs/content/docs/dev/table/sql/show.md?plain=1#L620-L632
 
>> On Wed, Mar 13, 2024 at 3:28 PM Feng Jin jinfeng1...@gmail.com
>> wrote:
>> 
>>> Hi Yubin
>>> 
>>> Thank you for initiating this FLIP.
>>> 
>>> I have just one minor question:
>>> 
>>> I noticed that we added a new function `getCatalogStore` to expose
>>> CatalogStore, and it seems fine.
>>> However, could we add a new method `getCatalogDescriptor()` to
>>> CatalogManager instead of directly exposing CatalogStore?
>>> By only providing the `getCatalogDescriptor()` interface, it may be
>>> easier
>>> for us to implement audit tracking in CatalogManager in the future.
>>> WDYT ?
>>> Although we have only collected some modified events at the
>>> moment.[1]
>>> 
>>> [1].
 
 https://cwiki.apache.org/confluence/display/FLINK/FLIP-294%3A+Support+Customized+Catalog+Modification+Listener
 
>>> Best,
>>> Feng
>>> 
>>> On Wed, Mar 13, 2024 at 5:31 PM Jingsong Li jingsongl...@gmail.com
>>> wrote:
>>> 
 +1 for this.
 
 We are missing a series of catalog related syntaxes.
 Especially after the introduction of catalog store. [1]
 
 [1]
 
 https://cwiki.apache.org/confluence/display/FLINK/FLIP-295%3A+Support+lazy+initialization+of+catalogs+and+persistence+of+catalog+configurations
 
 Best,
 Jingsong
 
 On Wed, Mar 13, 2024 at 5:09 PM Yubin Li lyb5...@gmail.com
 wrote:
 
> Hi devs,
> 
> I'd like to start a discussion about FLIP-436: Introduce "SHOW
> CREATE
> CATALOG" Syntax [1].
> 
> At present, the `SHOW CREATE TABLE` statement provides strong
> support
> for
> users to easily
> reuse created tables. However, despite the increasing importance
> of the
>

Re: [VOTE] Release 1.19.0, release candidate #2

2024-03-14 Thread Leonard Xu

+1 (binding)

- verified signatures
- verified hashsums
- checked Github release tag 
- started the SQL Client and used the MySQL CDC connector to read records from a 
database; the result is as expected (see the sketch below)
- checked Jira issues for 1.19.0 and discussed with RMs that FLINK-29114 won’t 
block this RC
- checked release notes
- reviewed the web PR 
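A minimal sketch of this kind of CDC smoke test (not the exact query used for the check above; it assumes the flink-sql-connector-mysql-cdc jar is on the classpath, and the hostname, credentials, database and table names are placeholders):

import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class MySqlCdcSmokeTest {
    public static void main(String[] args) {
        TableEnvironment tEnv =
                TableEnvironment.create(EnvironmentSettings.inStreamingMode());

        // Placeholder connection settings; adjust to the local MySQL instance under test.
        tEnv.executeSql(
                "CREATE TABLE orders (" +
                "  id INT," +
                "  amount DECIMAL(10, 2)," +
                "  PRIMARY KEY (id) NOT ENFORCED" +
                ") WITH (" +
                "  'connector' = 'mysql-cdc'," +
                "  'hostname' = 'localhost'," +
                "  'port' = '3306'," +
                "  'username' = 'flinkuser'," +
                "  'password' = '***'," +
                "  'database-name' = 'test_db'," +
                "  'table-name' = 'orders'" +
                ")");

        // Reading a few records is enough to confirm the connector works end to end.
        tEnv.executeSql("SELECT * FROM orders").print();
    }
}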

Best,
Leonard

> 2024年3月14日 下午9:36,Sergey Nuyanzin  写道:
> 
> +1 (non-binding)
> 
> - Checked the pre-built jars are generated with jdk8
> - Verified signature and checksum
> - Verified no binary in source
> - Verified source code tag
> - Reviewed release note
> - Reviewed web PR
> - Built from source
> - Run a simple job successfully
> 
> On Thu, Mar 14, 2024 at 2:21 PM Martijn Visser 
> wrote:
> 
>> +1 (binding)
>> 
>> - Validated hashes
>> - Verified signature
>> - Verified that no binaries exist in the source archive
>> - Build the source with Maven via mvn clean install -Pcheck-convergence
>> -Dflink.version=1.19.0
>> - Verified licenses
>> - Verified web PR
>> - Started a cluster and the Flink SQL client, successfully read and wrote
>> with the Kafka connector to Confluent Cloud with AVRO and Schema Registry
>> enabled
>> 
>> On Thu, Mar 14, 2024 at 1:32 PM gongzhongqiang 
>> wrote:
>> 
>>> +1 (non-binding)
>>> 
>>> - Verified no binary files in source code
>>> - Verified signature and checksum
>>> - Build source code and run a simple job successfully
>>> - Reviewed the release announcement PR
>>> 
>>> Best,
>>> 
>>> Zhongqiang Gong
>>> 
>>> Ferenc Csaky  于2024年3月14日周四 20:07写道:
>>> 
 +1 (non-binding)
 
 - Verified checksum and signature
 - Verified no binary in src
 - Built from src
 - Reviewed release note PR
 - Reviewed web PR
 - Tested a simple datagen query and insert to blackhole sink via SQL
 Gateway
 
 Best,
 Ferenc
 
 
 
 
 On Thursday, March 14th, 2024 at 12:14, Jane Chan <
>> qingyue@gmail.com
 
 wrote:
 
> 
> 
> Hi Lincoln,
> 
> Thank you for the prompt response and the effort to provide clarity
>> on
 this
> matter.
> 
> Best,
> Jane
> 
> On Thu, Mar 14, 2024 at 6:02 PM Lincoln Lee lincoln.8...@gmail.com
 wrote:
> 
>> Hi Jane,
>> 
>> Thank you for raising this question. I saw the discussion in the
>> Jira
>> (include Matthias' point)
>> and sought advice from several PMCs (including the previous RMs),
>> the
>> majority of people
>> are in favor of merging the bugfix into the release branch even
>>> during
 the
>> release candidate
>> (RC) voting period, so we should accept all bugfixes (unless there
>>> is a
>> specific community
>> rule preventing it).
>> 
>> Thanks again for contributing to the community!
>> 
>> Best,
>> Lincoln Lee
>> 
>> Matthias Pohl matthias.p...@aiven.io.invalid 于2024年3月14日周四
>> 17:50写道:
>> 
>>> Update on FLINK-34227 [1] which I mentioned above: Chesnay helped
>>> identify
>>> a concurrency issue in the JobMaster shutdown logic which seems
>> to
 be in
>>> the code for quite some time. I created a PR fixing the issue
>>> hoping
 that
>>> the test instability is resolved with it.
>>> 
>>> The concurrency issue doesn't really explain why it only started
>> to
>>> appear
>>> recently in a specific CI setup (GHA with AdaptiveScheduler).
>> There
 is no
>>> hint in the git history indicating that it's caused by some newly
>>> introduced change. That is why I wouldn't make FLINK-34227 a
>> reason
 to
>>> cancel rc2. Instead, the fix can be provided in subsequent patch
>>> releases.
>>> 
>>> Matthias
>>> 
>>> [1] https://issues.apache.org/jira/browse/FLINK-34227
>>> 
>>> On Thu, Mar 14, 2024 at 8:49 AM Jane Chan qingyue@gmail.com
 wrote:
>>> 
 Hi Yun, Jing, Martijn and Lincoln,
 
 I'm seeking guidance on whether merging the bugfix[1][2] at
>> this
 stage
 is
 appropriate. I want to ensure that the actions align with the
 current
 release process and do not disrupt the ongoing preparations.
 
 [1] https://issues.apache.org/jira/browse/FLINK-29114
 [2] https://github.com/apache/flink/pull/24492
 
 Best,
 Jane
 
 On Thu, Mar 14, 2024 at 1:33 PM Yun Tang myas...@live.com
>> wrote:
 
> +1 (non-binding)
> 
> * Verified the signature and checksum.
> * Reviewed the release note PR
> * Reviewed the web announcement PR
> * Start a standalone cluster to submit the state machine example, which works well.
> * Checked the pre-built jars are generated via JDK8
> * Verified the process profiler works well after setting rest.profiling.enabled: true
> 
>

Re: [DISCUSS] FLIP-399: Flink Connector Doris

2024-03-06 Thread Leonard Xu
Thanks wudi for the update. The FLIP generally looks good to me; I only left 
two minor suggestions:

(1) The suffix `.s` in the config option `doris.request.query.timeout.s` looks strange 
to me; could we change the value type of all time-interval-related options to 
Duration? (See the sketch below.)

(2) Could you check and improve all config options like `doris.exec.mem.limit` 
so that they follow Flink's config option naming and value type conventions?
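For (1), a minimal sketch of how such an option could be declared with Flink's ConfigOptions once the `.s` suffix is dropped; the exact key name and default value below are illustrative, not the connector's actual definition:

import java.time.Duration;

import org.apache.flink.configuration.ConfigOption;
import org.apache.flink.configuration.ConfigOptions;

public class DorisOptionsSketch {

    // Illustrative replacement for "doris.request.query.timeout.s": the unit suffix is
    // dropped and the value type becomes Duration, so users can write "60 s", "10 min", etc.
    public static final ConfigOption<Duration> REQUEST_QUERY_TIMEOUT =
            ConfigOptions.key("doris.request.query.timeout")
                    .durationType()
                    .defaultValue(Duration.ofHours(1))
                    .withDescription("Query timeout for requests sent to Doris.");
}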

Best,
Leonard


> 
> 
>> 2024年3月6日 06:12,Jing Ge  写道:
>> 
>> Hi Di,
>> 
>> Thanks for your proposal. +1 for the contribution. I'd like to know your
>> thoughts about the following questions:
>> 
>> 1. According to your clarification of the exactly-once, thanks for it BTW,
>> no PreCommitTopology is required. Does it make sense to let DorisSink[1]
>> implement SupportsCommitter, since the TwoPhaseCommittingSink is
>> deprecated[2] before turning the Doris connector into a Flink connector?
>> 2. OLAP engines are commonly used as the tail/downstream of a data pipeline
>> to support further e.g. ad-hoc query or cube with feasible pre-aggregation.
>> Just out of curiosity, would you like to share some real use cases that
>> will use OLAP engines as the source of a streaming data pipeline? Or it
>> will only be used as the source for the batch?
>> 3. The E2E test only covered sink[3], if I am not mistaken. Would you like
>> to test the source in E2E too?
>> 
>> [1]
>> https://github.com/apache/doris-flink-connector/blob/43e0e5cf9b832854ea228fb093077872e3a311b6/flink-doris-connector/src/main/java/org/apache/doris/flink/sink/DorisSink.java#L55
>> [2]
>> https://cwiki.apache.org/confluence/display/FLINK/FLIP-372%3A+Enhance+and+synchronize+Sink+API+to+match+the+Source+API
>> [3]
>> https://github.com/apache/doris-flink-connector/blob/43e0e5cf9b832854ea228fb093077872e3a311b6/flink-doris-connector/src/test/java/org/apache/doris/flink/tools/cdc/MySQLDorisE2ECase.java#L96
>> 
>> Best regards,
>> Jing
>> 
>> On Tue, Mar 5, 2024 at 11:18 AM wudi <676366...@qq.com.invalid> wrote:
>> 
>>> Hi, Jeyhun Karimov.
>>> Thanks for your question.
>>> 
>>> - How to ensure Exactly-Once?
>>> 1. When the Checkpoint Barrier arrives, DorisSink will trigger the
>>> precommit api of StreamLoad to complete the persistence of data in Doris
>>> (the data will not be visible at this time), and will also pass this TxnID
>>> to the Committer.
>>> 2. When this Checkpoint of the entire Job is completed, the Committer will
>>> call the commit api of StreamLoad and commit TxnID to complete the
>>> visibility of the transaction.
>>> 3. When the task is restarted, transactions that were precommitted but not
>>> committed will be found based on the label prefix and aborted via Doris'
>>> abort API. (At the same time, Doris will also abort transactions
>>> that have not been committed for a long time.)
>>> 
>>> ps: At the same time, this part of the content has been updated in FLIP
>>> 
>>> - Because the default table model in Doris is Duplicate (
>>> https://doris.apache.org/docs/data-table/data-model/), which does not
>>> have a primary key, batch writing may cause data duplication. The UNIQ
>>> model, however, has a primary key, which ensures the idempotence of writes
>>> and thus achieves Exactly-Once.
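A condensed sketch of the precommit / commit / abort flow described in steps 1-3 above; the StreamLoadClient interface below is a hypothetical stand-in for Doris' Stream Load API and for the sink/committer split, not the actual connector code:

import java.util.ArrayList;
import java.util.List;

/** Hypothetical stand-in for Doris' Stream Load transactional API. */
interface StreamLoadClient {
    long precommit(String labelPrefix, List<String> bufferedRows); // persists data, returns txnId
    void commit(long txnId);                                       // makes the txn visible
    void abort(long txnId);                                        // discards a dangling txn
}

/** Sketch of how the 2PC steps line up with Flink's checkpoint lifecycle. */
final class DorisTwoPhaseCommitSketch {
    private final StreamLoadClient client;
    private final String labelPrefix;
    private final List<String> buffer = new ArrayList<>();
    private final List<Long> pendingTxns = new ArrayList<>();

    DorisTwoPhaseCommitSketch(StreamLoadClient client, String labelPrefix) {
        this.client = client;
        this.labelPrefix = labelPrefix;
    }

    void write(String row) {
        buffer.add(row);
    }

    /** Step 1: on the checkpoint barrier, precommit buffered data; rows are durable but invisible. */
    long onCheckpointBarrier() {
        long txnId = client.precommit(labelPrefix, new ArrayList<>(buffer));
        buffer.clear();
        pendingTxns.add(txnId); // handed to the committer as checkpoint state
        return txnId;
    }

    /** Step 2: once the whole checkpoint completes, commit the txn to make data visible. */
    void onCheckpointComplete() {
        pendingTxns.forEach(client::commit);
        pendingTxns.clear();
    }

    /** Step 3: on restart, abort precommitted-but-uncommitted txns found via the label prefix. */
    void onRestore(List<Long> danglingTxns) {
        danglingTxns.forEach(client::abort);
    }
}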
>>> 
>>> Brs,
>>> di.wu
>>> 
>>> 
 2024年3月2日 17:50,Jeyhun Karimov  写道:
 
 Hi,
 
 Thanks for the proposal. +1 for the FLIP.
 I have a few questions:
 
 - How exactly the two (Stream Load's two-phase commit and Flink's
>>> two-phase
 commit) combination will ensure the e2e exactly-once semantics?
 
 - The FLIP proposes to combine Doris's batch writing with the primary key
 table to achieve Exactly-Once semantics. Could you elaborate more on
>>> that?
 Why it is not the default behavior but a workaround?
 
 Regards,
 Jeyhun
 
 On Sat, Mar 2, 2024 at 10:14 AM Yanquan Lv  wrote:
 
> Thanks for driving this.
> The content is very detailed, it is recommended to add a section on Test
> Plan for more completeness.
> 
> Di Wu  于2024年1月25日周四 15:40写道:
> 
>> Hi all,
>> 
>> Previously, we had some discussions about contributing Flink Doris
>> Connector to the Flink community [1]. I want to further promote this
> work.
>> I hope everyone will help participate in this FLIP discussion and
>>> provide
>> more valuable opinions and suggestions.
>> Thanks.
>> 
>> [1] https://lists.apache.org/thread/lvh8g9o6qj8bt3oh60q81z0o1cv3nn8p
>> 
>> Brs,
>> di.wu
>> 
>> 
>> 
>> On 2023/12/07 05:02:46 wudi wrote:
>>> 
>>> Hi all,
>>> 
>>> As discussed in the previous email [1], about contributing the Flink
>> Doris Connector to the Flink community.
>>> 
>>> 
>>> Apache Doris[2] is a high-performance, real-time analytical database
>> based on MPP architecture, for scenarios where Flink is used for data
>> analysis, processing, or real-time writing on Doris, Flink Doris
> Conn

Re: [DISCUSS] FLIP Suggestion: Externalize Kudu Connector from Bahir

2024-03-06 Thread Leonard Xu
Thanks Ferenc for kicking off this discussion, I left some comments here:

(1) About the release version: could you specify the Kudu connector version instead 
of the Flink version 1.18, as the external connector versioning differs from Flink's?

(2) About the connector config options: could you enumerate these options so 
that we can review whether they are reasonable?

(3) Metrics are also a key part of a connector: could you add the supported 
connector metrics to the public interfaces section as well?


Best,
Leonard


> 2024年3月6日 下午11:23,Ferenc Csaky  写道:
> 
> Hello devs,
> 
> Opening this thread to discuss a FLIP [1] about externalizing the Kudu 
> connector, as recently
> the Apache Bahir project were moved to the attic [2]. Some details were 
> discussed already
> in another thread [3]. I am proposing to externalize this connector and keep 
> it maintainable,
> and up to date.
> 
> Best regards,
> Ferenc
> 
> [1] 
> https://docs.google.com/document/d/1vHF_uVe0FTYCb6PRVStovqDeqb_C_FKjt2P5xXa7uhE
> [2] https://bahir.apache.org/
> [3] https://lists.apache.org/thread/2nb8dxxfznkyl4hlhdm3vkomm8rk4oyq



Re: [DISCUSS] Apache Bahir retired

2024-02-26 Thread Leonard Xu
Hey, Ferenc

Thanks for initiating this discussion. Apache Bahir is a great project that 
provided significant assistance to many Apache Flink/Spark users. It's a pity 
that it has been retired.

I believe that connectivity is crucial for building the ecosystem of the Flink 
such a computing engine. The community, or at least I, would actively support 
the introduction and maintenance of new connectors. Therefore, adding a Kudu 
connector or other connectors from Bahir makes sense to me, as long as we 
adhere to the development process for connectors in the Flink community[1].
I recently visited the Bahir Flink repository. Although the last release of 
Bahir Flink was in August ’22 [2] and is compatible with Flink 1.14, its 
latest code is compatible with Flink 1.17 [3]. So, based on the existing 
codebase, developing an official Apache Flink connector for Kudu or other 
connectors should be manageable. One point to consider is that if we're not 
developing a connector entirely from scratch but based on an existing 
repository, we must ensure that there are no copyright issues. Here, "no 
issues" means satisfying both Apache Bahir's and Apache Flink's copyright 
requirements. Honestly, I'm not an expert in copyright or legal matters. If 
you're interested in contributing to the Kudu connector, it might be necessary 
to attract other experienced community members to participate in this aspect.

Best,
Leonard

[1] https://cwiki.apache.org/confluence/display/FLINK/FLIP+Connector+Template
[2] https://github.com/apache/bahir-flink/releases/tag/v1.1.0
[3] https://github.com/apache/bahir-flink/blob/master/pom.xml#L116



> 2024年2月22日 下午6:37,Ferenc Csaky  写道:
> 
> Hello devs,
> 
> Just saw that the Bahir project is retired [1]. Any plans on what's happening 
> with the Flink connectors that were part of this project? We specifically use 
> the Kudu connector and integrate it to our platform at Cloudera, so we would 
> be okay to maintain it. Would it be possible to carry it over as a separate 
> connector repo under the Apache umbrella, similarly to what happened with the 
> external connectors previously?
> 
> Thanks,
> Ferenc



Re: [VOTE] Release flink-connector-jdbc, release candidate #3

2024-02-20 Thread Leonard Xu
Thanks Sergey for driving this release.

+1 (binding)

- verified signatures
- verified hashsums
- built from source code with Maven 3.8.1 and Scala 2.12 succeeded
- checked Github release tag 
- checked release notes
- reviewed all jira tickets has been resolved
- reviewed the web PR and left one minor comment about backporting the bugfix to 
the main branch
**Note** The release date in Jira [1] needs to be updated

Best,
Leonard
[1] https://issues.apache.org/jira/projects/FLINK/versions/12354088


> 2024年2月20日 下午5:15,Sergey Nuyanzin  写道:
> 
> +1 (non-binding)
> 
> - Validated checksum hash
> - Verified signature from another machine
> - Checked that tag is present in Github
> - Built the source
> 
> On Tue, Feb 20, 2024 at 10:13 AM Sergey Nuyanzin 
> wrote:
> 
>> Hi David
>> thanks for checking and sorry for the late reply
>> 
>> yep, that's ok; it just means that you haven't signed my key, which is fine
>> (key signing usually happens during virtual key signing parties)
>> 
>> For release checking it is ok to check that the key which was used to sign
>> the artifacts is included into Flink release KEYS file [1]
>> 
>> [1] https://dist.apache.org/repos/dist/release/flink/KEYS
>> 
>> On Thu, Feb 8, 2024 at 3:50 PM David Radley 
>> wrote:
>> 
>>> Thanks Sergey,
>>> 
>>> It looks better now.
>>> 
>>> gpg --verify flink-connector-jdbc-3.1.2-1.18.jar.asc
>>> 
>>> gpg: assuming signed data in 'flink-connector-jdbc-3.1.2-1.18.jar'
>>> 
>>> gpg: Signature made Thu  1 Feb 10:54:45 2024 GMT
>>> 
>>> gpg:using RSA key F7529FAE24811A5C0DF3CA741596BBF0726835D8
>>> 
>>> gpg: Good signature from "Sergey Nuyanzin (CODE SIGNING KEY)
>>> snuyan...@apache.org" [unknown]
>>> 
>>> gpg: aka "Sergey Nuyanzin (CODE SIGNING KEY)
>>> snuyan...@gmail.com" [unknown]
>>> 
>>> gpg: aka "Sergey Nuyanzin snuyan...@gmail.com>> snuyan...@gmail.com>" [unknown]
>>> 
>>> gpg: WARNING: This key is not certified with a trusted signature!
>>> 
>>> gpg:  There is no indication that the signature belongs to the
>>> owner.
>>> 
>>> I assume the warning is ok,
>>>  Kind regards, David.
>>> 
>>> From: Sergey Nuyanzin 
>>> Date: Thursday, 8 February 2024 at 14:39
>>> To: dev@flink.apache.org 
>>> Subject: [EXTERNAL] Re: FW: RE: [VOTE] Release flink-connector-jdbc,
>>> release candidate #3
>>> Hi David
>>> 
>>> it looks like in your case you didn't specify the jar itself, and it is
>>> probably not in the current dir,
>>> so it should be something like that (assuming that both asc and jar file
>>> are downloaded and are in current folder)
>>> gpg --verify flink-connector-jdbc-3.1.2-1.16.jar.asc
>>> flink-connector-jdbc-3.1.2-1.16.jar
>>> 
>>> Here it is a more complete guide how to do it for Apache projects [1]
>>> 
>>> [1] https://www.apache.org/info/verification.html#CheckingSignatures
>>> 
>>> On Thu, Feb 8, 2024 at 12:38 PM David Radley 
>>> wrote:
>>> 
 Hi,
 I was looking more at the asc files. I imported the keys and tried.
 
 
 gpg --verify flink-connector-jdbc-3.1.2-1.16.jar.asc
 
 gpg: no signed data
 
 gpg: can't hash datafile: No data
 
This seems to be the same for all the asc files. It does not look right;
am I doing something incorrect?
   Kind regards, David.
 
 
 From: David Radley 
 Date: Thursday, 8 February 2024 at 10:46
 To: dev@flink.apache.org 
 Subject: [EXTERNAL] RE: [VOTE] Release flink-connector-jdbc, release
 candidate #3
 +1 (non-binding)
 
I assume that https://github.com/apache/flink-web/pull/707 can be
completed after the release is out.
 
 From: Martijn Visser 
 Date: Friday, 2 February 2024 at 08:38
 To: dev@flink.apache.org 
 Subject: [EXTERNAL] Re: [VOTE] Release flink-connector-jdbc, release
 candidate #3
 +1 (binding)
 
 - Validated hashes
 - Verified signature
 - Verified that no binaries exist in the source archive
 - Build the source with Maven
 - Verified licenses
 - Verified web PRs
 
 On Fri, Feb 2, 2024 at 9:31 AM Yanquan Lv  wrote:
 
> +1 (non-binding)
> 
> - Validated checksum hash
> - Verified signature
> - Build the source with Maven and jdk8/11/17
> - Check that the jar is built by jdk8
> - Verified that no binaries exist in the source archive
> 
> Sergey Nuyanzin  于2024年2月1日周四 19:50写道:
> 
>> Hi everyone,
>> Please review and vote on the release candidate #3 for the version
 3.1.2,
>> as follows:
>> [ ] +1, Approve the release
>> [ ] -1, Do not approve the release (please provide specific
>>> comments)
>> 
>> This version is compatible with Flink 1.16.x, 1.17.x and 1.18.x.
>> 
>> The complete staging area is available for your review, which
>>> includes:
>> * JIRA release notes [1],
>> * the official Apache source release to be deployed to
>>> dist.apache

Re: [VOTE] Release flink-connector-parent 1.1.0 release candidate #2

2024-02-19 Thread Leonard Xu
+1 (binding)

- verified signatures
- verified hashsums
- built from source code succeeded
- checked Github release tag 
- checked release notes
- reviewed all Jira tickets have been resolved
- reviewed the web PR

Best,
Leonard


> 2024年2月20日 上午11:14,Rui Fan <1996fan...@gmail.com> 写道:
> 
> Thanks for driving this, Etienne!
> 
> +1 (non-binding)
> 
> - Verified checksum and signature
> - Verified pom content
> - Build source on my Mac with jdk8
> - Verified no binaries in source
> - Checked staging repo on Maven central
> - Checked source code tag
> - Reviewed web PR
> 
> Best,
> Rui
> 
> On Tue, Feb 20, 2024 at 10:33 AM Qingsheng Ren  wrote:
> 
>> Thanks for driving this, Etienne!
>> 
>> +1 (binding)
>> 
>> - Checked release note
>> - Verified checksum and signature
>> - Verified pom content
>> - Verified no binaries in source
>> - Checked staging repo on Maven central
>> - Checked source code tag
>> - Reviewed web PR
>> - Built Kafka connector from source with parent pom in staging repo
>> 
>> Best,
>> Qingsheng
>> 
>> On Tue, Feb 20, 2024 at 1:34 AM Etienne Chauchot 
>> wrote:
>> 
>>> Hi everyone,
>>> Please review and vote on the release candidate #2 for the version
>>> 1.1.0, as follows:
>>> [ ] +1, Approve the release
>>> [ ] -1, Do not approve the release (please provide specific comments)
>>> 
>>> 
>>> The complete staging area is available for your review, which includes:
>>> * JIRA release notes [1],
>>> * the official Apache source release to be deployed to dist.apache.org
>>> [2], which are signed with the key with fingerprint
>>> D1A76BA19D6294DD0033F6843A019F0B8DD163EA [3],
>>> * all artifacts to be deployed to the Maven Central Repository [4],
>>> * source code tag v1.1.0-rc2 [5],
>>> * website pull request listing the new release [6].
>>> 
>>> * confluence wiki: connector parent upgrade to version 1.1.0 that will
>>> be validated after the artifact is released (there is no PR mechanism on
>>> the wiki) [7]
>>> 
>>> 
>>> The vote will be open for at least 72 hours. It is adopted by majority
>>> approval, with at least 3 PMC affirmative votes.
>>> 
>>> Thanks,
>>> Etienne
>>> 
>>> [1]
>>> 
>>> 
>> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12353442
>>> [2]
>>> 
>>> 
>> https://dist.apache.org/repos/dist/dev/flink/flink-connector-parent-1.1.0-rc2
>>> [3] https://dist.apache.org/repos/dist/release/flink/KEYS
>>> [4]
>> https://repository.apache.org/content/repositories/orgapacheflink-1707
>>> [5]
>>> 
>>> 
>> https://github.com/apache/flink-connector-shared-utils/releases/tag/v1.1.0-rc2
>>> 
>>> [6] https://github.com/apache/flink-web/pull/717
>>> 
>>> [7]
>>> 
>>> 
>> https://cwiki.apache.org/confluence/display/FLINK/Externalized+Connector+development
>>> 
>> 



[ANNOUNCE] Apache flink-connector-mongodb 1.1.0 released

2024-02-19 Thread Leonard Xu
The Apache Flink community is very happy to announce the release of Apache
flink-connector-mongodb 1.1.0. This release is compatible with Flink 1.17.x and 
1.18.x series.

Apache Flink® is an open-source stream processing framework for
distributed, high-performing, always-available, and accurate data streaming
applications.

The release is available for download at:
https://flink.apache.org/downloads.html

The full release notes are available in Jira:
https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12353483

We would like to thank all contributors of the Apache Flink community who
made this release possible!

Best,
Leonard

Re: [ANNOUNCE] New Apache Flink Committer - Jiabao Sun

2024-02-19 Thread Leonard Xu
Congratulations, Jiabao! Well deserved.


Best,
Leonard


> 2024年2月19日 下午6:21,David Radley  写道:
> 
> Congratulations Jiabao!
> 
> From: Swapnal Varma 
> Date: Monday, 19 February 2024 at 10:14
> To: dev@flink.apache.org 
> Subject: [EXTERNAL] Re: [ANNOUNCE] New Apache Flink Committer - Jiabao Sun
> Congratulations Jiabao!
> 
> Best,
> Swapnal
> 
> On Mon, 19 Feb 2024, 15:37 weijie guo,  wrote:
> 
>> Congratulations, Jiabao :)
>> 
>> Best regards,
>> 
>> Weijie
>> 
>> 
>> Hang Ruan  于2024年2月19日周一 18:04写道:
>> 
>>> Congratulations, Jiabao!
>>> 
>>> Best,
>>> Hang
>>> 
>>> Qingsheng Ren  于2024年2月19日周一 17:53写道:
>>> 
 Hi everyone,
 
 On behalf of the PMC, I'm happy to announce Jiabao Sun as a new Flink
 Committer.
 
 Jiabao began contributing in August 2022 and has contributed 60+
>> commits
 for Flink main repo and various connectors. His most notable
>> contribution
 is being the core author and maintainer of MongoDB connector, which is
 fully functional in DataStream and Table/SQL APIs. Jiabao is also the
 author of FLIP-377 and the main contributor of JUnit 5 migration in
>>> runtime
 and table planner modules.
 
 Beyond his technical contributions, Jiabao is an active member of our
 community, participating in the mailing list and consistently
>>> volunteering
 for release verifications and code reviews with enthusiasm.
 
 Please join me in congratulating Jiabao for becoming an Apache Flink
 committer!
 
 Best,
 Qingsheng (on behalf of the Flink PMC)
 
>>> 
>> 
> 
> Unless otherwise stated above:
> 
> IBM United Kingdom Limited
> Registered in England and Wales with number 741598
> Registered office: PO Box 41, North Harbour, Portsmouth, Hants. PO6 3AU



[RESULT][VOTE] Release flink-connector-mongodb 1.1.0, release candidate #2

2024-02-19 Thread Leonard Xu
I'm happy to announce that we have unanimously approved this release.

There are 6 approving votes, 3 of which are binding:

* Jiabao Sun (non-binding) 
* gongzhongqiang (non-binding)
* Hang Ruan (non-binding)
* Danny Cranmer (binding)
* Martijn Visser (binding)
* Leonard Xu (binding)

There are no disapproving votes.

Thanks all! I’ll complete the release and announce it soon after this email.

Best,
Leonard

Re: [VOTE] Release flink-connector-mongodb v1.1.0, release candidate #2

2024-02-19 Thread Leonard Xu
Thanks all for the voting, I’ll summarize the result in another mail.

Best,
Leonard


> 2024年2月19日 下午4:46,Leonard Xu  写道:
> 
> +1 (binding)
> 
> - built from source code succeeded
> - verified signatures
> - verified hashsums 
> - checked the contents contains jar and pom files in apache repo 
> - checked Github release tag 
> - checked release notes
> 
> Best,
> Leonard
> 
>> 2024年2月8日 下午11:37,Martijn Visser  写道:
>> 
>> +1 (binding)
>> 
>> - Validated hashes
>> - Verified signature
>> - Verified that no binaries exist in the source archive
>> - Build the source with Maven
>> - Verified licenses
>> - Verified web PRs
>> 
>> On Wed, Jan 31, 2024 at 10:41 AM Danny Cranmer  
>> wrote:
>>> 
>>> Thanks for driving this Leonard!
>>> 
>>> +1 (binding)
>>> 
>>> - Release notes look ok
>>> - Signatures/checksums of source archive are good
>>> - Verified there are no binaries in the source archive
>>> - Built sources locally successfully
>>> - v1.0.0-rc2 tag exists in github
>>> - Tag build passing on CI [1]
>>> - Contents of Maven dist look complete
>>> - Verified signatures/checksums of binary in maven dist is correct
>>> - Verified NOTICE files and bundled dependencies
>>> 
>>> Thanks,
>>> Danny
>>> 
>>> [1]
>>> https://github.com/apache/flink-connector-mongodb/actions/runs/7709467379
>>> 
>>> On Wed, Jan 31, 2024 at 7:54 AM gongzhongqiang 
>>> wrote:
>>> 
>>>> +1(non-binding)
>>>> 
>>>> - Signatures and Checksums are good
>>>> - No binaries in the source archive
>>>> - Tag is present
>>>> - Build successful with jdk8 on ubuntu 22.04
>>>> 
>>>> 
>>>> Leonard Xu  于2024年1月30日周二 18:23写道:
>>>> 
>>>>> Hey all,
>>>>> 
>>>>> Please help review and vote on the release candidate #2 for the version
>>>>> v1.1.0 of the
>>>>> Apache Flink MongoDB Connector as follows:
>>>>> 
>>>>> [ ] +1, Approve the release
>>>>> [ ] -1, Do not approve the release (please provide specific comments)
>>>>> 
>>>>> The complete staging area is available for your review, which includes:
>>>>> * JIRA release notes [1],
>>>>> * The official Apache source release to be deployed to dist.apache.org
>>>>> [2],
>>>>> which are signed with the key with fingerprint
>>>>> 5B2F6608732389AEB67331F5B197E1F1108998AD [3],
>>>>> * All artifacts to be deployed to the Maven Central Repository [4],
>>>>> * Source code tag v1.1.0-rc2 [5],
>>>>> * Website pull request listing the new release [6].
>>>>> 
>>>>> The vote will be open for at least 72 hours. It is adopted by majority
>>>>> approval, with at least 3 PMC affirmative votes.
>>>>> 
>>>>> 
>>>>> Best,
>>>>> Leonard
>>>>> [1]
>>>>> 
>>>> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12353483
>>>>> [2]
>>>>> 
>>>> https://dist.apache.org/repos/dist/dev/flink/flink-connector-mongodb-1.1.0-rc2/
>>>>> [3] https://dist.apache.org/repos/dist/release/flink/KEYS
>>>>> [4]
>>>>> https://repository.apache.org/content/repositories/orgapacheflink-1705/
>>>>> [5] https://github.com/apache/flink-connector-mongodb/tree/v1.1.0-rc2
>>>>> [6] https://github.com/apache/flink-web/pull/719
>>>> 
> 



Re: [VOTE] Release flink-connector-mongodb v1.1.0, release candidate #2

2024-02-19 Thread Leonard Xu
+1 (binding)

- built from source code succeeded
- verified signatures
- verified hashsums 
- checked the contents contains jar and pom files in apache repo 
- checked Github release tag 
- checked release notes

Best,
Leonard

> 2024年2月8日 下午11:37,Martijn Visser  写道:
> 
> +1 (binding)
> 
> - Validated hashes
> - Verified signature
> - Verified that no binaries exist in the source archive
> - Build the source with Maven
> - Verified licenses
> - Verified web PRs
> 
> On Wed, Jan 31, 2024 at 10:41 AM Danny Cranmer  
> wrote:
>> 
>> Thanks for driving this Leonard!
>> 
>> +1 (binding)
>> 
>> - Release notes look ok
>> - Signatures/checksums of source archive are good
>> - Verified there are no binaries in the source archive
>> - Built sources locally successfully
>> - v1.0.0-rc2 tag exists in github
>> - Tag build passing on CI [1]
>> - Contents of Maven dist look complete
>> - Verified signatures/checksums of binary in maven dist is correct
>> - Verified NOTICE files and bundled dependencies
>> 
>> Thanks,
>> Danny
>> 
>> [1]
>> https://github.com/apache/flink-connector-mongodb/actions/runs/7709467379
>> 
>> On Wed, Jan 31, 2024 at 7:54 AM gongzhongqiang 
>> wrote:
>> 
>>> +1(non-binding)
>>> 
>>> - Signatures and Checksums are good
>>> - No binaries in the source archive
>>> - Tag is present
>>> - Build successful with jdk8 on ubuntu 22.04
>>> 
>>> 
>>> Leonard Xu  于2024年1月30日周二 18:23写道:
>>> 
>>>> Hey all,
>>>> 
>>>> Please help review and vote on the release candidate #2 for the version
>>>> v1.1.0 of the
>>>> Apache Flink MongoDB Connector as follows:
>>>> 
>>>> [ ] +1, Approve the release
>>>> [ ] -1, Do not approve the release (please provide specific comments)
>>>> 
>>>> The complete staging area is available for your review, which includes:
>>>> * JIRA release notes [1],
>>>> * The official Apache source release to be deployed to dist.apache.org
>>>> [2],
>>>> which are signed with the key with fingerprint
>>>> 5B2F6608732389AEB67331F5B197E1F1108998AD [3],
>>>> * All artifacts to be deployed to the Maven Central Repository [4],
>>>> * Source code tag v1.1.0-rc2 [5],
>>>> * Website pull request listing the new release [6].
>>>> 
>>>> The vote will be open for at least 72 hours. It is adopted by majority
>>>> approval, with at least 3 PMC affirmative votes.
>>>> 
>>>> 
>>>> Best,
>>>> Leonard
>>>> [1]
>>>> 
>>> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12353483
>>>> [2]
>>>> 
>>> https://dist.apache.org/repos/dist/dev/flink/flink-connector-mongodb-1.1.0-rc2/
>>>> [3] https://dist.apache.org/repos/dist/release/flink/KEYS
>>>> [4]
>>>> https://repository.apache.org/content/repositories/orgapacheflink-1705/
>>>> [5] https://github.com/apache/flink-connector-mongodb/tree/v1.1.0-rc2
>>>> [6] https://github.com/apache/flink-web/pull/719
>>> 



Re: [VOTE] Release flink-connector-jdbc, release candidate #2

2024-01-31 Thread Leonard Xu
Thanks Sergey for driving this!
 
-1 (binding) as we discussed on the web PR [1]; looking forward to the next RC.

Best,
Leonard
[1] https://github.com/apache/flink-web/pull/707#discussion_r1471090061

> 2024年1月30日 上午8:17,Sergey Nuyanzin  写道:
> 
> Hi everyone,
> Please review and vote on the release candidate #2 for the version
> 3.1.2, as follows:
> [ ] +1, Approve the release
> [ ] -1, Do not approve the release (please provide specific comments)
> 
> This version is compatible with Flink 1.16.x, 1.17.x and 1.18.x.
> 
> The complete staging area is available for your review, which includes:
> * JIRA release notes [1],
> * the official Apache source release to be deployed to dist.apache.org
> [2], which are signed with the key with fingerprint
> 1596BBF0726835D8 [3],
> * all artifacts to be deployed to the Maven Central Repository [4],
> * source code tag v3.1.2-rc2 [5],
> * website pull request listing the new release [6].
> 
> The vote will be open for at least 72 hours. It is adopted by majority
> approval, with at least 3 PMC affirmative votes.
> 
> Thanks,
> Release Manager
> 
> [1]
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12354088
> [2]
> https://dist.apache.org/repos/dist/dev/flink/flink-connector-jdbc-3.1.2-rc2
> [3] https://dist.apache.org/repos/dist/release/flink/KEYS
> [4] https://repository.apache.org/content/repositories/orgapacheflink-1704/
> [5] https://github.com/apache/flink-connector-jdbc/releases/tag/v3.1.2-rc2
> [6] https://github.com/apache/flink-web/pull/707



Re: [VOTE] Release flink-connector-opensearch v1.1.0, release candidate #1

2024-01-30 Thread Leonard Xu
Sorry for late verification, +1(binding)

- built from source code succeeded
- verified signatures
- verified hashsums 
- checked the contents contains jar and pom files in apache repo 
- checked Github release tag 
- checked release notes
- reviewed the web PR


Best,
Leonard

> 2024年1月13日 下午4:41,Jiabao Sun  写道:
> 
> +1 (non-binding)
> 
> - Validated hashes
> - Verified signature
> - Verified tags
> - Verified no binaries in the source archive
> - Reviewed web PR and found that there are some conflicts that need to be resolved
> 
> Best,
> Jiabao
> 
> 
>> 2024年1月12日 23:58,Danny Cranmer  写道:
>> 
>> Apologies I jumped the gun on this one. We only have 2 binding votes.
>> Reopening the thread.
>> 
>> On Fri, Jan 12, 2024 at 3:43 PM Danny Cranmer 
>> wrote:
>> 
>>> Thanks all, this vote is now closed, I will announce the results on a
>>> separate thread.
>>> 
>>> Thanks,
>>> Danny
>>> 
>>> On Fri, Jan 12, 2024 at 3:43 PM Danny Cranmer 
>>> wrote:
>>> 
 +1 (binding)
 
 - Verified signatures and checksums
 - Reviewed release notes
 - Verified no binaries in the source archive
 - Source builds using Maven
 - Reviewed NOTICE files (I suppose the copyright needs to be 2024 now!)
 
 Thanks,
 Danny
 
 On Fri, Jan 12, 2024 at 12:56 PM Martijn Visser 
 wrote:
 
> One non blocking nit: the version for flink.version in the main POM is
> set to 1.17.1. I think this should be 1.17.0 (since that's the lowest
> possible Flink version that's supported).
> 
> +1 (binding)
> 
> - Validated hashes
> - Verified signature
> - Verified that no binaries exist in the source archive
> - Build the source with Maven
> - Verified licenses
> - Verified web PRs
> 
> On Mon, Jan 1, 2024 at 11:57 AM Danny Cranmer 
> wrote:
>> 
>> Hey,
>> 
>> Gordon, apologies for the delay. Yes this is the correct
> understanding, all
>> connectors follow a similar pattern.
>> 
>> Would appreciate some PMC eyes on this release.
>> 
>> Thanks,
>> Danny
>> 
>> On Thu, 23 Nov 2023, 23:28 Tzu-Li (Gordon) Tai, 
> wrote:
>> 
>>> Hi Danny,
>>> 
>>> Thanks for starting a RC for this.
>>> 
>>> From the looks of the staged POMs for 1.1.0-1.18, the flink versions
> for
>>> Flink dependencies still point to 1.17.1.
>>> 
>>> My understanding is that this is fine, as those provided scope
>>> dependencies (e.g. flink-streaming-java) will have their versions
>>> overwritten by the user POM if they do intend to compile their jobs
> against
>>> Flink 1.18.x.
>>> Can you clarify if this is the correct understanding of how we
> intend the
>>> externalized connector artifacts to be published? Related discussion
> on
>>> [1].
>>> 
>>> Thanks,
>>> Gordon
>>> 
>>> [1] https://lists.apache.org/thread/x1pyrrrq7o1wv1lcdovhzpo4qhd4tvb4
>>> 
>>> On Thu, Nov 23, 2023 at 3:14 PM Sergey Nuyanzin > 
>>> wrote:
>>> 
 +1 (non-binding)
 
 - downloaded artifacts
 - built from source
 - verified checksums and signatures
 - reviewed web pr
 
 
 On Mon, Nov 6, 2023 at 5:31 PM Ryan Skraba
> >>> 
 wrote:
 
> Hello! +1 (non-binding) Thanks for the release!
> 
> I've validated the source for the RC1:
> * flink-connector-opensearch-1.1.0-src.tgz at r64995
> * The sha512 checksum is OK.
> * The source file is signed correctly.
> * The signature 0F79F2AFB2351BC29678544591F9C1EC125FD8DB is
> found in
>>> the
> KEYS file, and on https://keyserver.ubuntu.com/
> * The source file is consistent with the GitHub tag v1.1.0-rc1,
> which
> corresponds to commit 0f659cc65131c9ff7c8c35eb91f5189e80414ea1
> - The files explicitly excluded by create_pristine_sources (such
> as
> .gitignore and the submodule tools/releasing/shared) are not
> present.
> * Has a LICENSE file and a NOTICE file
> * Does not contain any compiled binaries.
> 
> * The sources can be compiled and unit tests pass with
> flink.version
 1.17.1
> and flink.version 1.18.0
> 
> * Nexus has three staged artifact ids for 1.1.0-1.17 and
> 1.1.0-1.18
> - flink-connector-opensearch (.jar, -javadoc.jar, -sources.jar,
> -tests.jar and .pom)
> - flink-sql-connector-opensearch (.jar, -sources.jar and .pom)
> - flink-connector-gcp-pubsub-parent (only .pom)
> 
> All my best, Ryan
> 
> On Fri, Nov 3, 2023 at 10:29 AM Danny Cranmer <
> dannycran...@apache.org
 
> wrote:
>> 
>> Hi everyone,
>> 
>> Please review and vote on the release candidate #1 for the
> version
 1.1.0
>

[VOTE] Release flink-connector-mongodb v1.1.0, release candidate #2

2024-01-30 Thread Leonard Xu
Hey all,

Please help review and vote on the release candidate #2 for the version v1.1.0 
of the
Apache Flink MongoDB Connector as follows:

[ ] +1, Approve the release
[ ] -1, Do not approve the release (please provide specific comments)

The complete staging area is available for your review, which includes:
* JIRA release notes [1],
* The official Apache source release to be deployed to dist.apache.org [2],
which are signed with the key with fingerprint
5B2F6608732389AEB67331F5B197E1F1108998AD [3],
* All artifacts to be deployed to the Maven Central Repository [4],
* Source code tag v1.1.0-rc2 [5],
* Website pull request listing the new release [6].

The vote will be open for at least 72 hours. It is adopted by majority
approval, with at least 3 PMC affirmative votes.


Best,
Leonard
[1] 
https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12353483
[2] 
https://dist.apache.org/repos/dist/dev/flink/flink-connector-mongodb-1.1.0-rc2/
[3] https://dist.apache.org/repos/dist/release/flink/KEYS
[4] https://repository.apache.org/content/repositories/orgapacheflink-1705/
[5] https://github.com/apache/flink-connector-mongodb/tree/v1.1.0-rc2
[6] https://github.com/apache/flink-web/pull/719

[jira] [Created] (FLINK-34284) Submit Software License Grant to ASF

2024-01-30 Thread Leonard Xu (Jira)
Leonard Xu created FLINK-34284:
--

 Summary: Submit Software License Grant to ASF
 Key: FLINK-34284
 URL: https://issues.apache.org/jira/browse/FLINK-34284
 Project: Flink
  Issue Type: Sub-task
  Components: Flink CDC
Reporter: Leonard Xu


As the ASF software license grant [1] requires, we need to submit the Software Grant 
Agreement.

[1] https://www.apache.org/licenses/contributor-agreements.html#grants



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [VOTE] Release flink-connector-mongodb 1.1.0, release candidate #1

2024-01-29 Thread Leonard Xu
Thanks Hang for the review!

We should use JDK 8 to compile the jar. I will cancel RC1 and prepare RC2 
soon.

Best,
Leonard

> 2024年1月30日 下午2:39,Hang Ruan  写道:
> 
> Hi, Leonard.
> 
> I find that META-INF/MANIFEST.MF in
> flink-sql-connector-mongodb-1.1.0-1.18.jar shows as follows.
> 
> Manifest-Version: 1.0
> Archiver-Version: Plexus Archiver
> Created-By: Apache Maven 3.8.1
> Built-By: bangjiangxu
> Build-Jdk: 11.0.11
> Specification-Title: Flink : Connectors : SQL : MongoDB
> Specification-Version: 1.1.0-1.18
> Specification-Vendor: The Apache Software Foundation
> Implementation-Title: Flink : Connectors : SQL : MongoDB
> Implementation-Version: 1.1.0-1.18
> Implementation-Vendor-Id: org.apache.flink
> Implementation-Vendor: The Apache Software Foundation
> 
> Maybe we should build the MongoDB connector with JDK 8.
> 
> Best,
> Hang
> 
> Jiabao Sun  于2024年1月29日周一 21:51写道:
> 
>> Thanks Leonard for driving this.
>> 
>> +1(non-binding)
>> 
>> - Release notes look good
>> - Tag is present in Github
>> - Validated checksum hash
>> - Verified signature
>> - Build the source with Maven by jdk8,11,17,21
>> - Verified web PR and left minor comments
>> - Run a filter push down test by sql-client on Flink 1.18.1 and it works
>> well
>> 
>> Best,
>> Jiabao
>> 
>> 
>> On 2024/01/29 12:33:23 Leonard Xu wrote:
>>> Hey all,
>>> 
>>> Please help review and vote on the release candidate #1 for the version
>> 1.1.0 of the
>>> Apache Flink MongoDB Connector as follows:
>>> 
>>> [ ] +1, Approve the release
>>> [ ] -1, Do not approve the release (please provide specific comments)
>>> 
>>> The complete staging area is available for your review, which includes:
>>> * JIRA release notes [1],
>>> * The official Apache source release to be deployed to dist.apache.org
>> [2],
>>> which are signed with the key with fingerprint
>>> 5B2F6608732389AEB67331F5B197E1F1108998AD [3],
>>> * All artifacts to be deployed to the Maven Central Repository [4],
>>> * Source code tag v1.1.0-rc1 [5],
>>> * Website pull request listing the new release [6].
>>> 
>>> The vote will be open for at least 72 hours. It is adopted by majority
>>> approval, with at least 3 PMC affirmative votes.
>>> 
>>> 
>>> Best,
>>> Leonard
>>> [1]
>> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12353483
>>> [2]
>> https://dist.apache.org/repos/dist/dev/flink/flink-connector-mongodb-1.1.0-rc1/
>>> [3] https://dist.apache.org/repos/dist/release/flink/KEYS
>>> [4]
>> https://repository.apache.org/content/repositories/orgapacheflink-1702/
>>> [5] https://github.com/apache/flink-connector-mongodb/tree/v1.1.0-rc1
>>> [6] https://github.com/apache/flink-web/pull/719


