Re: [VOTE] Release 1.18.0, release candidate #3

2023-10-23 Thread Jingsong Li
+1 (binding)

- verified signatures & hash
- built from source code succeeded
- started SQL Client, used Paimon connector to write and read, the
result is expected

Best,
Jingsong
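
For reference, the signature and checksum checks listed in these votes typically boil down to commands along the following lines. This is a sketch only: the staging and KEYS URLs are the ones cited later in this thread, while the artifact file names and the extracted directory name are illustrative.

    # Download the source artifact, its checksum and signature, and the KEYS file
    BASE=https://dist.apache.org/repos/dist/dev/flink/flink-1.18.0-rc3
    wget "$BASE/flink-1.18.0-src.tgz" "$BASE/flink-1.18.0-src.tgz.sha512" "$BASE/flink-1.18.0-src.tgz.asc"
    wget https://dist.apache.org/repos/dist/release/flink/KEYS

    # Verify the SHA512 checksum and the GPG signature
    sha512sum -c flink-1.18.0-src.tgz.sha512
    gpg --import KEYS
    gpg --verify flink-1.18.0-src.tgz.asc flink-1.18.0-src.tgz

    # Build from source (tests skipped for a quick smoke build)
    tar xzf flink-1.18.0-src.tgz && cd flink-1.18.0*/
    mvn clean install -DskipTests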

On Tue, Oct 24, 2023 at 12:15 PM Yuxin Tan  wrote:
>
> +1(non-binding)
>
> - Verified checksum
> - Build from source code
> - Verified signature
> - Started a local cluster and run Streaming & Batch wordcount job, the
> result is expected
> - Verified web PR
>
> Best,
> Yuxin
>
>
> Qingsheng Ren  wrote on Tue, 24 Oct 2023 at 11:19:
>
> > +1 (binding)
> >
> > - Verified checksums and signatures
> > - Built from source with Java 8
> > - Started a standalone cluster and submitted a Flink SQL job that read and
> > wrote with Kafka connector and CSV / JSON format
> > - Reviewed web PR and release note
> >
> > Best,
> > Qingsheng
> >
> > On Mon, Oct 23, 2023 at 10:40 PM Leonard Xu  wrote:
> >
> > > +1 (binding)
> > >
> > > - verified signatures
> > > - verified hashsums
> > > - built from source code succeeded
> > > - checked all dependency artifacts are 1.18
> > > - started SQL Client, used MySQL CDC connector to read changelog from
> > > database , the result is expected
> > > - reviewed the web PR, left minor comments
> > > - reviewed the release notes PR, left minor comments
> > >
> > >
> > > Best,
> > > Leonard
> > >
> > > > On 21 Oct 2023 at 7:28 PM, Rui Fan <1996fan...@gmail.com> wrote:
> > > >
> > > > +1(non-binding)
> > > >
> > > > - Downloaded artifacts from dist[1]
> > > > - Verified SHA512 checksums
> > > > - Verified GPG signatures
> > > > - Build the source with java-1.8 and verified the licenses together
> > > > - Verified web PR
> > > >
> > > > [1] https://dist.apache.org/repos/dist/dev/flink/flink-1.18.0-rc3/
> > > >
> > > > Best,
> > > > Rui
> > > >
> > > > On Fri, Oct 20, 2023 at 10:31 PM Martijn Visser <
> > > martijnvis...@apache.org>
> > > > wrote:
> > > >
> > > >> +1 (binding)
> > > >>
> > > >> - Validated hashes
> > > >> - Verified signature
> > > >> - Verified that no binaries exist in the source archive
> > > >> - Build the source with Maven
> > > >> - Verified licenses
> > > >> - Verified web PR
> > > >> - Started a cluster and the Flink SQL client, successfully read and
> > > >> wrote with the Kafka connector to Confluent Cloud with AVRO and Schema
> > > >> Registry enabled
> > > >>
> > > >> On Fri, Oct 20, 2023 at 2:55 PM Matthias Pohl
> > > >>  wrote:
> > > >>>
> > > >>> +1 (binding)
> > > >>>
> > > >>> * Downloaded artifacts
> > > >>> * Built Flink from sources
> > > >>> * Verified SHA512 checksums GPG signatures
> > > >>> * Compared checkout with provided sources
> > > >>> * Verified pom file versions
> > > >>> * Verified that there are no pom/NOTICE file changes since RC1
> > > >>> * Deployed standalone session cluster and ran WordCount example in
> > > batch
> > > >>> and streaming: Nothing suspicious in log files found
> > > >>>
> > > >>> On Thu, Oct 19, 2023 at 3:00 PM Piotr Nowojski  > >
> > > >> wrote:
> > > >>>
> > >  +1 (binding)
> > > 
> > >  Best,
> > >  Piotrek
> > > 
> > >  On Thu, 19 Oct 2023 at 09:55, Yun Tang  wrote:
> > > 
> > > > +1 (non-binding)
> > > >
> > > >
> > > >  *   Build from source code
> > > >  *   Verify the pre-built jar packages were built with JDK8
> > > >  *   Verify FLIP-291 with a standalone cluster, and it works fine
> > > >> with
> > > > StateMachine example.
> > > >  *   Checked the signature
> > > >  *   Viewed the PRs.
> > > >
> > > > Best
> > > > Yun Tang
> > > > 
> > > > From: Cheng Pan 
> > > > Sent: Thursday, October 19, 2023 14:38
> > > > To: dev@flink.apache.org 
> > > > Subject: RE: [VOTE] Release 1.18.0, release candidate #3
> > > >
> > > > +1 (non-binding)
> > > >
> > > > We(the Apache Kyuubi community), verified that the Kyuubi Flink
> > > >> engine
> > > > works well[1] with Flink 1.18.0 RC3.
> > > >
> > > > [1] https://github.com/apache/kyuubi/pull/5465
> > > >
> > > > Thanks,
> > > > Cheng Pan
> > > >
> > > >
> > > > On 2023/10/19 00:26:24 Jing Ge wrote:
> > > >> Hi everyone,
> > > >>
> > > >> Please review and vote on the release candidate #3 for the version
> > > >> 1.18.0, as follows:
> > > >> [ ] +1, Approve the release
> > > >> [ ] -1, Do not approve the release (please provide specific
> > > >> comments)
> > > >>
> > > >> The complete staging area is available for your review, which
> > > >> includes:
> > > >>
> > > >> * JIRA release notes [1], and the pull request adding release note
> > > >> for
> > > >> users [2]
> > > >> * the official Apache source release and binary convenience
> > > >> releases to
> > > > be
> > > >> deployed to dist.apache.org [3], which are signed with the key
> > > >> with
> > > >> fingerprint 96AE0E32CBE6E0753CE6 [4],
> > > >> * all artifacts to be deployed to the Maven Central Repository
> > [5],

Re: [VOTE] Release 1.18.0, release candidate #3

2023-10-23 Thread Yuxin Tan
+1(non-binding)

- Verified checksum
- Build from source code
- Verified signature
- Started a local cluster and run Streaming & Batch wordcount job, the
result is expected
- Verified web PR

Best,
Yuxin
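
The local-cluster smoke test described in the vote above can be reproduced roughly as follows. This is a sketch assuming an unpacked Flink 1.18 binary distribution; paths are the standard ones shipped in the distribution and the web UI port is the default.

    # Start a local standalone cluster from the unpacked binary distribution
    ./bin/start-cluster.sh

    # Run the bundled WordCount example (the batch variant lives under examples/batch/)
    ./bin/flink run examples/streaming/WordCount.jar

    # Check the result in the TaskManager logs / web UI at http://localhost:8081, then stop
    ./bin/stop-cluster.sh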


Qingsheng Ren  wrote on Tue, 24 Oct 2023 at 11:19:

> +1 (binding)
>
> - Verified checksums and signatures
> - Built from source with Java 8
> - Started a standalone cluster and submitted a Flink SQL job that read and
> wrote with Kafka connector and CSV / JSON format
> - Reviewed web PR and release note
>
> Best,
> Qingsheng
>
> On Mon, Oct 23, 2023 at 10:40 PM Leonard Xu  wrote:
>
> > +1 (binding)
> >
> > - verified signatures
> > - verified hashsums
> > - built from source code succeeded
> > - checked all dependency artifacts are 1.18
> > - started SQL Client, used MySQL CDC connector to read changelog from
> > database , the result is expected
> > - reviewed the web PR, left minor comments
> > - reviewed the release notes PR, left minor comments
> >
> >
> > Best,
> > Leonard
> >
> > > On 21 Oct 2023 at 7:28 PM, Rui Fan <1996fan...@gmail.com> wrote:
> > >
> > > +1(non-binding)
> > >
> > > - Downloaded artifacts from dist[1]
> > > - Verified SHA512 checksums
> > > - Verified GPG signatures
> > > - Build the source with java-1.8 and verified the licenses together
> > > - Verified web PR
> > >
> > > [1] https://dist.apache.org/repos/dist/dev/flink/flink-1.18.0-rc3/
> > >
> > > Best,
> > > Rui
> > >
> > > On Fri, Oct 20, 2023 at 10:31 PM Martijn Visser <
> > martijnvis...@apache.org>
> > > wrote:
> > >
> > >> +1 (binding)
> > >>
> > >> - Validated hashes
> > >> - Verified signature
> > >> - Verified that no binaries exist in the source archive
> > >> - Build the source with Maven
> > >> - Verified licenses
> > >> - Verified web PR
> > >> - Started a cluster and the Flink SQL client, successfully read and
> > >> wrote with the Kafka connector to Confluent Cloud with AVRO and Schema
> > >> Registry enabled
> > >>
> > >> On Fri, Oct 20, 2023 at 2:55 PM Matthias Pohl
> > >>  wrote:
> > >>>
> > >>> +1 (binding)
> > >>>
> > >>> * Downloaded artifacts
> > >>> * Built Flink from sources
> > >>> * Verified SHA512 checksums GPG signatures
> > >>> * Compared checkout with provided sources
> > >>> * Verified pom file versions
> > >>> * Verified that there are no pom/NOTICE file changes since RC1
> > >>> * Deployed standalone session cluster and ran WordCount example in
> > batch
> > >>> and streaming: Nothing suspicious in log files found
> > >>>
> > >>> On Thu, Oct 19, 2023 at 3:00 PM Piotr Nowojski  >
> > >> wrote:
> > >>>
> >  +1 (binding)
> > 
> >  Best,
> >  Piotrek
> > 
> >  On Thu, 19 Oct 2023 at 09:55, Yun Tang  wrote:
> > 
> > > +1 (non-binding)
> > >
> > >
> > >  *   Build from source code
> > >  *   Verify the pre-built jar packages were built with JDK8
> > >  *   Verify FLIP-291 with a standalone cluster, and it works fine
> > >> with
> > > StateMachine example.
> > >  *   Checked the signature
> > >  *   Viewed the PRs.
> > >
> > > Best
> > > Yun Tang
> > > 
> > > From: Cheng Pan 
> > > Sent: Thursday, October 19, 2023 14:38
> > > To: dev@flink.apache.org 
> > > Subject: RE: [VOTE] Release 1.18.0, release candidate #3
> > >
> > > +1 (non-binding)
> > >
> > > We(the Apache Kyuubi community), verified that the Kyuubi Flink
> > >> engine
> > > works well[1] with Flink 1.18.0 RC3.
> > >
> > > [1] https://github.com/apache/kyuubi/pull/5465
> > >
> > > Thanks,
> > > Cheng Pan
> > >
> > >
> > > On 2023/10/19 00:26:24 Jing Ge wrote:
> > >> Hi everyone,
> > >>
> > >> Please review and vote on the release candidate #3 for the version
> > >> 1.18.0, as follows:
> > >> [ ] +1, Approve the release
> > >> [ ] -1, Do not approve the release (please provide specific
> > >> comments)
> > >>
> > >> The complete staging area is available for your review, which
> > >> includes:
> > >>
> > >> * JIRA release notes [1], and the pull request adding release note
> > >> for
> > >> users [2]
> > >> * the official Apache source release and binary convenience
> > >> releases to
> > > be
> > >> deployed to dist.apache.org [3], which are signed with the key
> > >> with
> > >> fingerprint 96AE0E32CBE6E0753CE6 [4],
> > >> * all artifacts to be deployed to the Maven Central Repository
> [5],
> > >> * source code tag "release-1.18.0-rc3" [6],
> > >> * website pull request listing the new release and adding
> > >> announcement
> > > blog
> > >> post [7].
> > >>
> > >> The vote will be open for at least 72 hours. It is adopted by
> > >> majority
> > >> approval, with at least 3 PMC affirmative votes.
> > >>
> > >> Best regards,
> > >> Konstantin, Sergey, Qingsheng, and Jing
> > >>
> > >> [1]
> > >>
> > >
> > 
> > >>
> >
> 

[jira] [Created] (FLINK-33344) Replace Time with Duration in RpcInputSplitProvider

2023-10-23 Thread Jiabao Sun (Jira)
Jiabao Sun created FLINK-33344:
--

 Summary: Replace Time with Duration in RpcInputSplitProvider
 Key: FLINK-33344
 URL: https://issues.apache.org/jira/browse/FLINK-33344
 Project: Flink
  Issue Type: Sub-task
  Components: Runtime / RPC
Reporter: Jiabao Sun
 Fix For: 1.19.0






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [VOTE] Release 1.18.0, release candidate #3

2023-10-23 Thread Qingsheng Ren
+1 (binding)

- Verified checksums and signatures
- Built from source with Java 8
- Started a standalone cluster and submitted a Flink SQL job that read and
wrote with Kafka connector and CSV / JSON format
- Reviewed web PR and release note

Best,
Qingsheng

On Mon, Oct 23, 2023 at 10:40 PM Leonard Xu  wrote:

> +1 (binding)
>
> - verified signatures
> - verified hashsums
> - built from source code succeeded
> - checked all dependency artifacts are 1.18
> - started SQL Client, used MySQL CDC connector to read changelog from
> database , the result is expected
> - reviewed the web PR, left minor comments
> - reviewed the release notes PR, left minor comments
>
>
> Best,
> Leonard
>
> > On 21 Oct 2023 at 7:28 PM, Rui Fan <1996fan...@gmail.com> wrote:
> >
> > +1(non-binding)
> >
> > - Downloaded artifacts from dist[1]
> > - Verified SHA512 checksums
> > - Verified GPG signatures
> > - Build the source with java-1.8 and verified the licenses together
> > - Verified web PR
> >
> > [1] https://dist.apache.org/repos/dist/dev/flink/flink-1.18.0-rc3/
> >
> > Best,
> > Rui
> >
> > On Fri, Oct 20, 2023 at 10:31 PM Martijn Visser <
> martijnvis...@apache.org>
> > wrote:
> >
> >> +1 (binding)
> >>
> >> - Validated hashes
> >> - Verified signature
> >> - Verified that no binaries exist in the source archive
> >> - Build the source with Maven
> >> - Verified licenses
> >> - Verified web PR
> >> - Started a cluster and the Flink SQL client, successfully read and
> >> wrote with the Kafka connector to Confluent Cloud with AVRO and Schema
> >> Registry enabled
> >>
> >> On Fri, Oct 20, 2023 at 2:55 PM Matthias Pohl
> >>  wrote:
> >>>
> >>> +1 (binding)
> >>>
> >>> * Downloaded artifacts
> >>> * Built Flink from sources
> >>> * Verified SHA512 checksums GPG signatures
> >>> * Compared checkout with provided sources
> >>> * Verified pom file versions
> >>> * Verified that there are no pom/NOTICE file changes since RC1
> >>> * Deployed standalone session cluster and ran WordCount example in
> batch
> >>> and streaming: Nothing suspicious in log files found
> >>>
> >>> On Thu, Oct 19, 2023 at 3:00 PM Piotr Nowojski 
> >> wrote:
> >>>
>  +1 (binding)
> 
>  Best,
>  Piotrek
> 
>  On Thu, 19 Oct 2023 at 09:55, Yun Tang  wrote:
> 
> > +1 (non-binding)
> >
> >
> >  *   Build from source code
> >  *   Verify the pre-built jar packages were built with JDK8
> >  *   Verify FLIP-291 with a standalone cluster, and it works fine
> >> with
> > StateMachine example.
> >  *   Checked the signature
> >  *   Viewed the PRs.
> >
> > Best
> > Yun Tang
> > 
> > From: Cheng Pan 
> > Sent: Thursday, October 19, 2023 14:38
> > To: dev@flink.apache.org 
> > Subject: RE: [VOTE] Release 1.18.0, release candidate #3
> >
> > +1 (non-binding)
> >
> > We(the Apache Kyuubi community), verified that the Kyuubi Flink
> >> engine
> > works well[1] with Flink 1.18.0 RC3.
> >
> > [1] https://github.com/apache/kyuubi/pull/5465
> >
> > Thanks,
> > Cheng Pan
> >
> >
> > On 2023/10/19 00:26:24 Jing Ge wrote:
> >> Hi everyone,
> >>
> >> Please review and vote on the release candidate #3 for the version
> >> 1.18.0, as follows:
> >> [ ] +1, Approve the release
> >> [ ] -1, Do not approve the release (please provide specific
> >> comments)
> >>
> >> The complete staging area is available for your review, which
> >> includes:
> >>
> >> * JIRA release notes [1], and the pull request adding release note
> >> for
> >> users [2]
> >> * the official Apache source release and binary convenience
> >> releases to
> > be
> >> deployed to dist.apache.org [3], which are signed with the key
> >> with
> >> fingerprint 96AE0E32CBE6E0753CE6 [4],
> >> * all artifacts to be deployed to the Maven Central Repository [5],
> >> * source code tag "release-1.18.0-rc3" [6],
> >> * website pull request listing the new release and adding
> >> announcement
> > blog
> >> post [7].
> >>
> >> The vote will be open for at least 72 hours. It is adopted by
> >> majority
> >> approval, with at least 3 PMC affirmative votes.
> >>
> >> Best regards,
> >> Konstantin, Sergey, Qingsheng, and Jing
> >>
> >> [1]
> >>
> >
> 
> >>
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12352885
> >> [2] https://github.com/apache/flink/pull/23527
> >> [3] https://dist.apache.org/repos/dist/dev/flink/flink-1.18.0-rc3/
> >> [4] https://dist.apache.org/repos/dist/release/flink/KEYS
> >> [5]
> >
> >> https://repository.apache.org/content/repositories/orgapacheflink-1662
> >> [6]
> >> https://github.com/apache/flink/releases/tag/release-1.18.0-rc3
> >> [7] https://github.com/apache/flink-web/pull/680
> >>
> >
> >
> >
> >
> 
> >>
>
>


Re: FLIP-233

2023-10-23 Thread Leonard Xu
+1 to reopen the FLIP; it has been stalled for more than a year due to 
the author's limited availability.

Glad to see that developers from IBM would like to take over the FLIP. We can 
continue the discussion in the FLIP-233 discussion thread [1].

Best,
Leonard

[1] https://lists.apache.org/thread/cd60ln4pjgml7sv4kh23o1fohcfwvjcz

> On 24 Oct 2023 at 12:41 AM, David Radley  wrote:
> 
> Hi,
> I notice 
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-233%3A+Introduce+HTTP+Connector
>  has been abandoned , due to lack of capacity. I work for IBM and my team is 
> interested in helping to get this connector contributed into Flink. Can we 
> open this Flip again and we can look to get agreement in the discussion 
> thread please,
> 
> Kind regards, David.
> 
> Unless otherwise stated above:
> 
> IBM United Kingdom Limited
> Registered in England and Wales with number 741598
> Registered office: PO Box 41, North Harbour, Portsmouth, Hants. PO6 3AU



Re: FW: RE: Close orphaned/stale PRs

2023-10-23 Thread Venkatakrishnan Sowrirajan
Sorry for the delay. I filed
https://issues.apache.org/jira/projects/FLINK/issues/FLINK-33343 to track
and address the proposal here.

Regards
Venkata krishnan


On Tue, Oct 17, 2023 at 7:49 PM Venkatakrishnan Sowrirajan 
wrote:

> Thanks Martijn, David, Ryan and others for contributing to this great
> discussion!
>
> 1. From a project perspective, we can have a discussion about closing
>> PRs automatically that a) are not followed-up within X number of days
>> after a review and/or b) PRs that don't have a passing build and/or
>> don't follow contribution guidelines and/or C) need to be rebased
>> 2. In order to help understand which PRs are OK to get reviewed, we
>> could consider automatically adding a label "Ready for review" in case
>> 1b (passing build/contribution guidelines met) is the case.
>> 3. In order to help contributors, we could consider automatically
>> adding a label in case their PR isn't mergeable for the situations
>> that are displayed in situation 1
>
>
> I'm +1 on Martijn's proposal. We can get started on this and incrementally
> improve/amend as needed. Thanks everyone once again! Let me file tickets
> for each of the items.
>
> Regards
> Venkata krishnan
>
>
> On Thu, Oct 12, 2023 at 3:32 AM David Radley 
> wrote:
>
>> Hi everyone,
>> Martijn, I like your ideas. I think these labels will help make it
>> obvious what work is actionable. I really feel this sort of process
>> improvement will incrementally help work to flow through appropriately.
>>
>> 2 additional thoughts – I hope these help this discussion:
>>
>>   *   A triaged label on the issue would indicate that a maintainer has
>> agreed this is a valid issue – this would be a better pool of issues for
>> contributors to pickup. I am not sure if maintainers currently do this sort
>> of work.
>>   *   I like the codeowners idea; did you find a way though this within
>> the Apache rules? An extension to this is that increasingly we are moving
>> out parts of the code from the main Flink repository to other repositories;
>> would this be doable. Could experts in those repositories be given write
>> access to those repos; so that each non core repo can work through its
>> issues and merge its prs more independently. This is how LF project Egeria
>> works with its connectors and UIS;  I guess the concern is that in ASF
>> these people would need to be  committers, or could they be a committer on
>> a subset of repos. Another way to manage who can merge prs is to gate the
>> pr process using git actions, so that if an approved approver indicates a
>> pr is good then the raiser can merge – this would give us granularity on
>> write access – PyTorch follows this sort of process.
>>
>>   kind regards, David.
>>
>>
>> From: Martijn Visser 
>> Date: Thursday, 12 October 2023 at 10:32
>> To: dev@flink.apache.org 
>> Subject: [EXTERNAL] Re: FW: RE: Close orphaned/stale PRs
>> Hi everyone,
>>
>> I'm overall +1 on Ryan's comment.
>> When we're talking about component ownership, I've started a
>> discussion on the Infra mailing list in the beginning of the year on
>> it. In principle, the "codeowners" idea goes against ASF principles.
>>
>> Let's summarize things:
>> 1. From a project perspective, we can have a discussion about closing
>> PRs automatically that a) are not followed-up within X number of days
>> after a review and/or b) PRs that don't have a passing build and/or
>> don't follow contribution guidelines and/or C) need to be rebased
>> 2. In order to help understand which PRs are OK to get reviewed, we
>> could consider automatically adding a label "Ready for review" in case
>> 1b (passing build/contribution guidelines met) is the case.
>> 3. In order to help contributors, we could consider automatically
>> adding a label in case their PR isn't mergeable for the situations
>> that are displayed in situation 1
>>
>> When that's done, we can see what the effect is on the PRs queue.
>>
>> Best regards,
>>
>> Martijn
>>
>> On Wed, Oct 4, 2023 at 5:13 PM David Radley 
>> wrote:
>> >
>> > Hi Ryan,
>> >
>> > I agree that good communication is key to determining what can be
>> worked on.
>> >
>> > In terms of metrics , we can use the gh cli to list prs and we can
>> export issues from Jira. A view across them, you could join on the Flink
>> issue (at the start of the pr comment and the flink issue itself – you
>> could then see which prs have an assigned Jira would be expected to be
>> reviewed. There is no explicit reviewer field in the Jira issue; I am not
>> sure if we can easily get this info without having a custom field (which
>> others have tried).
>> >
>> > In terms of what prs a committer could / should review – I would think
>> that component ownership helps scope the subset of prs to review / merge.
>> >
>> > Kind regards, David.
>> >
>> >
>> > From: Ryan Skraba 
>> > Date: Wednesday, 4 October 2023 at 15:09
>> > To: dev@flink.apache.org 
>> > Subject: [EXTERNAL] Re: FW: RE: Close orphaned/stale PRs
>> 

[jira] [Created] (FLINK-33343) Close stale Flink PRs

2023-10-23 Thread Venkata krishnan Sowrirajan (Jira)
Venkata krishnan Sowrirajan created FLINK-33343:
---

 Summary: Close stale Flink PRs
 Key: FLINK-33343
 URL: https://issues.apache.org/jira/browse/FLINK-33343
 Project: Flink
  Issue Type: Bug
Reporter: Venkata krishnan Sowrirajan


What is considered a stale PR? If any of the conditions below is met, then the 
PR is considered stale:

1. PRs that are not followed-up within 'X' number of days after a review
2. PRs that don't have a passing build and/or don't follow contribution 
guidelines after 'X' number of days.
3. PRs that have merge conflicts after 'X' number of days.


We have yet to decide on the value of 'X'. This can be done as part of the PR, 
with the JIRA retroactively updated to match.

To see the complete set of conversations on this topic, see 
[here|https://lists.apache.org/thread/pml95msx21sdc539404xs9tk209sdd55]
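
For illustration only, a check along these lines could be scripted with the GitHub CLI; the 90-day cutoff, the "stale" label and the closing comment below are hypothetical placeholders rather than an agreed policy.

    # List open apache/flink PRs with no update since a (placeholder) cutoff date
    CUTOFF=$(date -d '-90 days' +%Y-%m-%d)   # GNU date; 'X' days is still to be decided
    gh pr list --repo apache/flink --state open \
      --search "updated:<${CUTOFF}" \
      --json number,title,updatedAt --limit 200

    # A follow-up step could then label or close a matching PR, for example:
    # gh pr edit 12345 --repo apache/flink --add-label "stale"
    # gh pr close 12345 --repo apache/flink --comment "Closing as stale; feel free to reopen after a rebase."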



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [ANNOUNCE] Release 1.18.0, release candidate #0

2023-10-23 Thread Tzu-Li (Gordon) Tai
Hi David,

Just to follow-up on that last question: I can confirm that there are no
regressions for the Flink Kafka connector working with Flink 1.18. The
previous nightly build failures were caused by breaking changes in test
code, which has been resolved by now.

I'll be creating new releases for flink-connector-kafka 3.0.1-1.18 as soon as
the 1.18.0 artifacts are released.

Thanks,
Gordon

On Fri, Oct 6, 2023 at 1:40 AM David Radley  wrote:

> Hi Martijn,
> Thanks for your comments. I also think it is better to decouple the
> connectors – I agree they need to have their own release cycles. I was
> worried that moving to Flink 1.18 was somehow causing the Kafka connector
> to fail – i.e. a regression. I think you are saying that there is no
> regression like this.
>   Kind regards, David.
> From: Martijn Visser 
> Date: Thursday, 5 October 2023 at 21:39
> To: dev@flink.apache.org 
> Subject: [EXTERNAL] Re: [ANNOUNCE] Release 1.18.0, release candidate #0
> Hi David,
>
> It’s a deliberate choice to decouple the connectors. We shouldn’t block
> Flink 1.18 on connector statuses. There’s already work being done to fix
> the Flink Kafka connector. Any Flink connector comes after the new minor
> version, similar to how it has been for all other connectors with Flink
> 1.17.
>
> Best regards,
>
> Martijn Visser
>
> Op do 5 okt 2023 om 11:33 schreef David Radley 
>
> > Hi Jing,
> > Yes I agree that if we can get them resolved then that would be ideal.
> >
> > I guess the worry is that at 1.17, we had a released Flink core and Kafka
> > connector.
> > At 1.18 we will have a released Core Flink but no new Kafka connector. So
> > the last released Kafka connector would now be
> >
> https://mvnrepository.com/artifact/org.apache.flink/flink-connector-kafka/3.0.0-1.17
> > which should be the same as the Kafka connector in 1.17. I guess this is
> > the combination that people would pick up to deploy in production – and I
> > assume this has been tested.
> >
> > These issues with the nightly builds refer to the Kafka connector main
> > branch.  If they are not regressions, you are suggesting that
> pragmatically
> > we go forward with the release; I think that makes sense to do, but do
> > these issues affect 3.0.0-1.17?
> >
> > I suspect we should release a new Kafka connector asap, so we have a
> > matching connector built outside of the Flink repo. We may want to not
> > include the Flink core version in the connector – or we might end up
> > wanting to release a Kafka connector when there are no changes just to
> have
> > a match with the Flink core version.
> >
> > Kind regards, David.
> >
> >
> >
> > From: Jing Ge 
> > Date: Wednesday, 4 October 2023 at 17:36
> > To: dev@flink.apache.org 
> > Subject: [EXTERNAL] Re: [ANNOUNCE] Release 1.18.0, release candidate #0
> > Hi David,
> >
> > First of all, we should have enough time to wait for those issues to
> > be resolved. Secondly, it makes less sense to block upstream release by
> > downstream build issues. In case, those issues might need more time, we
> > should move forward with the Flink release without waiting for them.
> WDYT?
> >
> > Best regards,
> > Jing
> >
> > On Wed, Oct 4, 2023 at 6:15 PM David Radley 
> > wrote:
> >
> > > Hi ,
> > > As release 1.18 removes  the kafka connector from the core Flink
> > > repository, I assume we will wait until the kafka connector nightly
> build
> > > issues https://issues.apache.org/jira/browse/FLINK-33104   and
> > > https://issues.apache.org/jira/browse/FLINK-33017   are resolved
> before
> > > releasing 1.18?
> > >
> > >  Kind regards, David.
> > >
> > >
> > > From: Jing Ge 
> > > Date: Wednesday, 27 September 2023 at 15:11
> > > To: dev@flink.apache.org 
> > > Subject: [EXTERNAL] Re: [ANNOUNCE] Release 1.18.0, release candidate #0
> > > Hi Folks,
> > >
> > > @Ryan FYI: CI passed and the PR has been merged. Thanks!
> > >
> > > If there are no more other concerns, I will start publishing 1.18-rc1.
> > >
> > > Best regards,
> > > Jing
> > >
> > > On Mon, Sep 25, 2023 at 1:40 PM Jing Ge  wrote:
> > >
> > > > Hi Ryan,
> > > >
> > > > Thanks for reaching out. It is fine to include it but we need to wait
> > > > until the CI passes. I am not sure how long it will take, since there
> > > seems
> > > > to be some infra issues.
> > > >
> > > > Best regards,
> > > > Jing
> > > >
> > > > On Mon, Sep 25, 2023 at 11:34 AM Ryan Skraba
> > > 
> > > > wrote:
> > > >
> > > >> Hello!  There's a security fix that probably should be applied to
> 1.18
> > > >> in the next RC1 : https://github.com/apache/flink/pull/23461
> (bump
> > to
> > > >> snappy-java).  Do you think this would be possible to include?
> > > >>
> > > >> All my best, Ryan
> > > >>
> > > >> [1]: https://issues.apache.org/jira/browse/FLINK-33149 "Bump
> > > >> snappy-java to 1.1.10.4"
> > > >>
> > > >>
> > > >>
> > > >> On Mon, Sep 25, 2023 at 3:54 PM Jing Ge  >
> > > >> wrote:
> > > >> >
> > > >> > Thanks Zakelly for the update! Appreciate it!
> > > >> >
> > > >> > 

FLIP-233

2023-10-23 Thread David Radley
Hi,
I notice 
https://cwiki.apache.org/confluence/display/FLINK/FLIP-233%3A+Introduce+HTTP+Connector
 has been abandoned due to lack of capacity. I work for IBM and my team is 
interested in helping to get this connector contributed into Flink. Can we reopen 
this FLIP and look to get agreement in the discussion thread, 
please?

 Kind regards, David.

Unless otherwise stated above:

IBM United Kingdom Limited
Registered in England and Wales with number 741598
Registered office: PO Box 41, North Harbour, Portsmouth, Hants. PO6 3AU


Re: [VOTE] Apache Flink Kubernetes Operator Release 1.6.1, release candidate #1

2023-10-23 Thread Samrat Deb
+1 (non-binding)

- Verified checksums and signatures
- Checked Helm repo
- Installed operator
- Tested word count and state machine examples

Bests,
Samrat
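
The Helm-based checks referenced in the quoted verification list below come down to commands like the following. This is a sketch collecting the repo URL, chart name and autoscaling.yaml example exactly as they appear in the quoted steps; the final "kubectl get" is an optional extra check.

    # Add the RC staging area as a Helm repo and install the operator without the webhook
    helm repo add flink-operator-repo-1.6.1-rc1 \
      https://dist.apache.org/repos/dist/dev/flink/flink-kubernetes-operator-1.6.1-rc1/
    helm install flink-kubernetes-operator \
      flink-operator-repo-1.6.1-rc1/flink-kubernetes-operator --set webhook.create=false

    # Submit the autoscaling example and watch the resulting deployment
    kubectl apply -f autoscaling.yaml
    kubectl get flinkdeployments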


On Mon, 23 Oct 2023 at 9:35 PM, Mate Czagany  wrote:

> +1 (non-binding)
>
> - Verified checksums, signatures, no binary found in source
> - Verified Helm chart and Docker images
> - Tested autoscaler on 1.18 with reactive scaling
>
> Regards,
> Mate
>
Gyula Fóra  wrote (on Mon, 23 Oct 2023 at 9:45):
>
> > +1 (binding)
> >
> > - Verified checksums, signatures, source release content
> > - Helm repo works correctly and points to the correct image / version
> > - Installed operator, ran stateful example
> >
> > Gyula
> >
> > On Sat, Oct 21, 2023 at 1:43 PM Rui Fan <1996fan...@gmail.com> wrote:
> >
> > > +1(non-binding)
> > >
> > > - Downloaded artifacts from dist
> > > - Verified SHA512 checksums
> > > - Verified GPG signatures
> > > - Build the source with java-11 and verified the licenses together
> > > - Verified that all POM files point to the same version.
> > > - Verified that chart and appVersion matches the target release
> > > - Verified that helm chart / values.yaml points to the RC docker image
> > > - Verified that RC repo works as Helm repo (helm repo add
> > > flink-operator-repo-1.6.1-rc1
> > >
> > >
> >
> https://dist.apache.org/repos/dist/dev/flink/flink-kubernetes-operator-1.6.1-rc1/
> > > )
> > > - Verified Helm chart can be installed  (helm install
> > > flink-kubernetes-operator
> > > flink-operator-repo-1.6.1-rc1/flink-kubernetes-operator --set
> > > webhook.create=false)
> > > - Submitted the autoscaling demo, the autoscaler works well (kubectl
> > apply
> > > -f autoscaling.yaml)
> > > - Triggered a manual savepoint (update the yaml: savepointTriggerNonce:
> > > 101)
> > >
> > > Best,
> > > Rui
> > >
> > > On Sat, Oct 21, 2023 at 7:33 PM Rui Fan <1996fan...@gmail.com> wrote:
> > >
> > > > Hi Everyone,
> > > >
> > > > Please review and vote on the release candidate #1 for the version
> > 1.6.1
> > > of
> > > > Apache Flink Kubernetes Operator,
> > > > as follows:
> > > > [ ] +1, Approve the release
> > > > [ ] -1, Do not approve the release (please provide specific comments)
> > > >
> > > > **Release Overview**
> > > >
> > > > As an overview, the release consists of the following:
> > > > a) Kubernetes Operator canonical source distribution (including the
> > > > Dockerfile), to be deployed to the release repository at
> > dist.apache.org
> > > > b) Kubernetes Operator Helm Chart to be deployed to the release
> > > repository
> > > > at dist.apache.org
> > > > c) Maven artifacts to be deployed to the Maven Central Repository
> > > > d) Docker image to be pushed to dockerhub
> > > >
> > > > **Staging Areas to Review**
> > > >
> > > > The staging areas containing the above mentioned artifacts are as
> > > follows,
> > > > for your review:
> > > > * All artifacts for a,b) can be found in the corresponding dev
> > repository
> > > > at dist.apache.org [1]
> > > > * All artifacts for c) can be found at the Apache Nexus Repository
> [2]
> > > > * The docker image for d) is staged on github [3]
> > > >
> > > > All artifacts are signed with the
> > > > key B2D64016B940A7E0B9B72E0D7D0528B28037D8BC [4]
> > > >
> > > > Other links for your review:
> > > > * source code tag "release-1.6.1-rc1" [5]
> > > > * PR to update the website Downloads page to
> > > > include Kubernetes Operator links [6]
> > > > * PR to update the doc version of flink-kubernetes-operator[7]
> > > >
> > > > **Vote Duration**
> > > >
> > > > The voting time will run for at least 72 hours.
> > > > It is adopted by majority approval, with at least 3 PMC affirmative
> > > votes.
> > > >
> > > > **Note on Verification**
> > > >
> > > > You can follow the basic verification guide here[8].
> > > > Note that you don't need to verify everything yourself, but please
> make
> > > > note of what you have tested together with your +- vote.
> > > >
> > > > [1]
> > > >
> > >
> >
> https://dist.apache.org/repos/dist/dev/flink/flink-kubernetes-operator-1.6.1-rc1/
> > > > [2]
> > > >
> > https://repository.apache.org/content/repositories/orgapacheflink-1663/
> > > > [3]
> > > >
> > >
> >
> https://github.com/apache/flink-kubernetes-operator/pkgs/container/flink-kubernetes-operator/139454270?tag=51eeae1
> > > > [4] https://dist.apache.org/repos/dist/release/flink/KEYS
> > > > [5]
> > > >
> > >
> >
> https://github.com/apache/flink-kubernetes-operator/tree/release-1.6.1-rc1
> > > > [6] https://github.com/apache/flink-web/pull/690
> > > > [7] https://github.com/apache/flink-kubernetes-operator/pull/687
> > > > [8]
> > > >
> > >
> >
> https://cwiki.apache.org/confluence/display/FLINK/Verifying+a+Flink+Kubernetes+Operator+Release
> > > >
> > > > Best,
> > > > Rui
> > > >
> > >
> >
>


RE: Backport strategy

2023-10-23 Thread David Radley
Hi Martijn,
Thanks for the pointer; that makes sense – many (most?) projects only provide 
fixes at the current release (apart from exceptional circumstances – possibly some 
high-priority security fixes); I am curious why Flink fixes two streams of code.

One thing that I wondered about is the use of the word ‘support’. In 
previous open source projects we have been keen to stress that the open source 
community does not provide support, in line with the Apache 2 license, which 
talks of the code being supplied as-is with no warranty. What do you think 
about not using the word support, in case it is misleading?
  Kind regards, David.


From: Martijn Visser 
Date: Monday, 23 October 2023 at 16:18
To: dev@flink.apache.org 
Subject: [EXTERNAL] Re: Backport strategy
Hi David,

> The policy is that the current and previous minor releases are
supported, and it's documented at
https://flink.apache.org/downloads/#update-policy-for-old-releases
One of the reasons for decoupling the connectors from Flink is that it
could be possible to support older versions of Flink as well. That
depends of course on the complexity of the backport etc which is a
case-by-case situation.

Best regards,

Martijn

On Mon, Oct 23, 2023 at 4:16 PM David Radley  wrote:
>
> Hi,
> I am relatively new to the Flink community. I notice that critical fixes are 
> backported to previous versions. Do we have a documented backport strategy 
> and set of principles?
>
> The reason I ask is that we recently removed the Kafka connector from 
> the core repository, so the Kafka connector should be picked up from its own 
> repository. I noticed this removal and updated issues in the core repo to 
> indicate the code has moved to another repo. One of the issues was 
> https://github.com/apache/flink/pull/21226#issuecomment-1775121605  . This is 
> a critical issue and the request is to backport it to 1.15.3. I assume a 
> backport would involved a 3rd number change 1.15.4 in this case.
>
> It seems to me that we should look to create fixes in the stand alone Kafka 
> connector where possible and to list the compatible Flink versions it can be 
> used with, this could include patch levels of 1.15 1.16 1.17.
>
> WDYT?
>   Kind regards, David.
>
> Unless otherwise stated above:
>
> IBM United Kingdom Limited
> Registered in England and Wales with number 741598
> Registered office: PO Box 41, North Harbour, Portsmouth, Hants. PO6 3AU

Unless otherwise stated above:

IBM United Kingdom Limited
Registered in England and Wales with number 741598
Registered office: PO Box 41, North Harbour, Portsmouth, Hants. PO6 3AU
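
As an aside, the backport workflow discussed in this thread typically looks something like the following. This is illustrative only: the branch name follows Flink's release-x.y convention, while the JIRA id and commit hash are placeholders.

    # Cherry-pick a fix from master onto a maintenance branch
    git fetch origin
    git checkout -b backport-FLINK-12345-1.15 origin/release-1.15
    git cherry-pick -x <commit-sha-from-master>
    # Resolve any conflicts, then push and open a PR against release-1.15
    git push origin backport-FLINK-12345-1.15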


Re: [VOTE] Apache Flink Kubernetes Operator Release 1.6.1, release candidate #1

2023-10-23 Thread Mate Czagany
+1 (non-binding)

- Verified checksums, signatures, no binary found in source
- Verified Helm chart and Docker images
- Tested autoscaler on 1.18 with reactive scaling

Regards,
Mate

Gyula Fóra  wrote (on Mon, 23 Oct 2023 at 9:45):

> +1 (binding)
>
> - Verified checksums, signatures, source release content
> - Helm repo works correctly and points to the correct image / version
> - Installed operator, ran stateful example
>
> Gyula
>
> On Sat, Oct 21, 2023 at 1:43 PM Rui Fan <1996fan...@gmail.com> wrote:
>
> > +1(non-binding)
> >
> > - Downloaded artifacts from dist
> > - Verified SHA512 checksums
> > - Verified GPG signatures
> > - Build the source with java-11 and verified the licenses together
> > - Verified that all POM files point to the same version.
> > - Verified that chart and appVersion matches the target release
> > - Verified that helm chart / values.yaml points to the RC docker image
> > - Verified that RC repo works as Helm repo (helm repo add
> > flink-operator-repo-1.6.1-rc1
> >
> >
> https://dist.apache.org/repos/dist/dev/flink/flink-kubernetes-operator-1.6.1-rc1/
> > )
> > - Verified Helm chart can be installed  (helm install
> > flink-kubernetes-operator
> > flink-operator-repo-1.6.1-rc1/flink-kubernetes-operator --set
> > webhook.create=false)
> > - Submitted the autoscaling demo, the autoscaler works well (kubectl
> apply
> > -f autoscaling.yaml)
> > - Triggered a manual savepoint (update the yaml: savepointTriggerNonce:
> > 101)
> >
> > Best,
> > Rui
> >
> > On Sat, Oct 21, 2023 at 7:33 PM Rui Fan <1996fan...@gmail.com> wrote:
> >
> > > Hi Everyone,
> > >
> > > Please review and vote on the release candidate #1 for the version
> 1.6.1
> > of
> > > Apache Flink Kubernetes Operator,
> > > as follows:
> > > [ ] +1, Approve the release
> > > [ ] -1, Do not approve the release (please provide specific comments)
> > >
> > > **Release Overview**
> > >
> > > As an overview, the release consists of the following:
> > > a) Kubernetes Operator canonical source distribution (including the
> > > Dockerfile), to be deployed to the release repository at
> dist.apache.org
> > > b) Kubernetes Operator Helm Chart to be deployed to the release
> > repository
> > > at dist.apache.org
> > > c) Maven artifacts to be deployed to the Maven Central Repository
> > > d) Docker image to be pushed to dockerhub
> > >
> > > **Staging Areas to Review**
> > >
> > > The staging areas containing the above mentioned artifacts are as
> > follows,
> > > for your review:
> > > * All artifacts for a,b) can be found in the corresponding dev
> repository
> > > at dist.apache.org [1]
> > > * All artifacts for c) can be found at the Apache Nexus Repository [2]
> > > * The docker image for d) is staged on github [3]
> > >
> > > All artifacts are signed with the
> > > key B2D64016B940A7E0B9B72E0D7D0528B28037D8BC [4]
> > >
> > > Other links for your review:
> > > * source code tag "release-1.6.1-rc1" [5]
> > > * PR to update the website Downloads page to
> > > include Kubernetes Operator links [6]
> > > * PR to update the doc version of flink-kubernetes-operator[7]
> > >
> > > **Vote Duration**
> > >
> > > The voting time will run for at least 72 hours.
> > > It is adopted by majority approval, with at least 3 PMC affirmative
> > votes.
> > >
> > > **Note on Verification**
> > >
> > > You can follow the basic verification guide here[8].
> > > Note that you don't need to verify everything yourself, but please make
> > > note of what you have tested together with your +- vote.
> > >
> > > [1]
> > >
> >
> https://dist.apache.org/repos/dist/dev/flink/flink-kubernetes-operator-1.6.1-rc1/
> > > [2]
> > >
> https://repository.apache.org/content/repositories/orgapacheflink-1663/
> > > [3]
> > >
> >
> https://github.com/apache/flink-kubernetes-operator/pkgs/container/flink-kubernetes-operator/139454270?tag=51eeae1
> > > [4] https://dist.apache.org/repos/dist/release/flink/KEYS
> > > [5]
> > >
> >
> https://github.com/apache/flink-kubernetes-operator/tree/release-1.6.1-rc1
> > > [6] https://github.com/apache/flink-web/pull/690
> > > [7] https://github.com/apache/flink-kubernetes-operator/pull/687
> > > [8]
> > >
> >
> https://cwiki.apache.org/confluence/display/FLINK/Verifying+a+Flink+Kubernetes+Operator+Release
> > >
> > > Best,
> > > Rui
> > >
> >
>


Re: Maven and java version variables

2023-10-23 Thread Matthias Pohl
Hi David,
The change that caused the conflict in your PR is caused by FLINK-33291
[1]. I was thinking about adding links to the comments to make the
navigation to the corresponding resources easier as you rightfully
mentioned. I didn't do it in the end because I was afraid that
documentation might be moved in the future and those links wouldn't be
valid anymore. That is why I tried to make the comments descriptive instead.

But I agree: We could definitely do better with the documentation.
...especially (but not only) for CI.

Best,
Matthias

[1] https://issues.apache.org/jira/browse/FLINK-33291

On Mon, Oct 23, 2023 at 2:53 PM Alexander Fedulov <
alexander.fedu...@gmail.com> wrote:

> (under "Prepare for the release")
>
> As for CI:
>
> https://github.com/apache/flink/blob/78b5ddb11dfd2a3a00b58079fe9ee29a80555988/tools/ci/maven-utils.sh#L84
>
> https://github.com/apache/flink/blob/9b63099964b36ad9d78649bb6f5b39473e0031bd/tools/azure-pipelines/build-apache-repo.yml#L39
>
> https://github.com/apache/flink/blob/9b63099964b36ad9d78649bb6f5b39473e0031bd/azure-pipelines.yml#L39
>
> Best,
> Alexander Fedulov
>
>
> On Mon, 23 Oct 2023 at 14:44, Jing Ge  wrote:
>
> > Hi David,
> >
> > Please check [1] in the section Verify Java and Maven Version. Thanks!
> >
> > Best regards,
> > Jing
> >
> >
> > [1]
> >
> https://cwiki.apache.org/confluence/display/FLINK/Creating+a+Flink+Release
> >
> > On Mon, Oct 23, 2023 at 1:25 PM David Radley 
> > wrote:
> >
> > > Hi,
> > >
> > > I have an open pr in the backlog that improves the pom.xml by
> introducing
> > > some Maven variables. The pr is
> > https://github.com/apache/flink/pull/23469
> > > It has been reviewed but not merged. In the meantime another pom change
> > > has been added that caused a conflict. I have amended the code in my pr
> > to
> > > implement the new logic, introducing a new java upper bounds version
> > > variable.
> > > I notice that the pom change that was added introduced this comment:
> > >
> > >  > -->
> > >
> > > 
> > >
> > > I am not sure what the CI setup means and where in the Flink Release
> wiki
> > > the java range is mentioned. It would be great if the comment could be
> > > extended to include links to this information. I am happy to do that as
> > > part of this pr , if needed, if I can be supplied the links.  I think
> > this
> > > pr should be merged asap, so subsequent pom file changes use the Maven
> > > variables.
> > >
> > >   WDYT
> > >
> > > Kind regards, David.
> > >
> > > Unless otherwise stated above:
> > >
> > > IBM United Kingdom Limited
> > > Registered in England and Wales with number 741598
> > > Registered office: PO Box 41, North Harbour, Portsmouth, Hants. PO6 3AU
> > >
> >
>


Re: Backport strategy

2023-10-23 Thread Martijn Visser
Hi David,

The policy is that the current and previous minor releases are
supported, and it's documented at
https://flink.apache.org/downloads/#update-policy-for-old-releases
One of the reasons for decoupling the connectors from Flink is that it
could be possible to support older versions of Flink as well. That
depends of course on the complexity of the backport etc which is a
case-by-case situation.

Best regards,

Martijn

On Mon, Oct 23, 2023 at 4:16 PM David Radley  wrote:
>
> Hi,
> I am relatively new to the Flink community. I notice that critical fixes are 
> backported to previous versions. Do we have a documented backport strategy 
> and set of principles?
>
> The reason I ask is that we recently removed the Kafka connector from 
> the core repository, so the Kafka connector should be picked up from its own 
> repository. I noticed this removal and updated issues in the core repo to 
> indicate the code has moved to another repo. One of the issues was 
> https://github.com/apache/flink/pull/21226#issuecomment-1775121605 . This is 
> a critical issue and the request is to backport it to 1.15.3. I assume a 
> backport would involve a 3rd number change, 1.15.4 in this case.
>
> It seems to me that we should look to create fixes in the stand alone Kafka 
> connector where possible and to list the compatible Flink versions it can be 
> used with, this could include patch levels of 1.15 1.16 1.17.
>
> WDYT?
>   Kind regards, David.
>
> Unless otherwise stated above:
>
> IBM United Kingdom Limited
> Registered in England and Wales with number 741598
> Registered office: PO Box 41, North Harbour, Portsmouth, Hants. PO6 3AU


Re: [VOTE] Release 1.18.0, release candidate #3

2023-10-23 Thread Leonard Xu
+1 (binding)

- verified signatures
- verified hashsums 
- built from source code succeeded
- checked all dependency artifacts are 1.18
- started SQL Client, used MySQL CDC connector to read changelog from database 
, the result is expected
- reviewed the web PR, left minor comments
- reviewed the release notes PR, left minor comments


Best,
Leonard

> On 21 Oct 2023 at 7:28 PM, Rui Fan <1996fan...@gmail.com> wrote:
> 
> +1(non-binding)
> 
> - Downloaded artifacts from dist[1]
> - Verified SHA512 checksums
> - Verified GPG signatures
> - Build the source with java-1.8 and verified the licenses together
> - Verified web PR
> 
> [1] https://dist.apache.org/repos/dist/dev/flink/flink-1.18.0-rc3/
> 
> Best,
> Rui
> 
> On Fri, Oct 20, 2023 at 10:31 PM Martijn Visser 
> wrote:
> 
>> +1 (binding)
>> 
>> - Validated hashes
>> - Verified signature
>> - Verified that no binaries exist in the source archive
>> - Build the source with Maven
>> - Verified licenses
>> - Verified web PR
>> - Started a cluster and the Flink SQL client, successfully read and
>> wrote with the Kafka connector to Confluent Cloud with AVRO and Schema
>> Registry enabled
>> 
>> On Fri, Oct 20, 2023 at 2:55 PM Matthias Pohl
>>  wrote:
>>> 
>>> +1 (binding)
>>> 
>>> * Downloaded artifacts
>>> * Built Flink from sources
>>> * Verified SHA512 checksums GPG signatures
>>> * Compared checkout with provided sources
>>> * Verified pom file versions
>>> * Verified that there are no pom/NOTICE file changes since RC1
>>> * Deployed standalone session cluster and ran WordCount example in batch
>>> and streaming: Nothing suspicious in log files found
>>> 
>>> On Thu, Oct 19, 2023 at 3:00 PM Piotr Nowojski 
>> wrote:
>>> 
 +1 (binding)
 
 Best,
 Piotrek
 
 On Thu, 19 Oct 2023 at 09:55, Yun Tang  wrote:
 
> +1 (non-binding)
> 
> 
>  *   Build from source code
>  *   Verify the pre-built jar packages were built with JDK8
>  *   Verify FLIP-291 with a standalone cluster, and it works fine
>> with
> StateMachine example.
>  *   Checked the signature
>  *   Viewed the PRs.
> 
> Best
> Yun Tang
> 
> From: Cheng Pan 
> Sent: Thursday, October 19, 2023 14:38
> To: dev@flink.apache.org 
> Subject: RE: [VOTE] Release 1.18.0, release candidate #3
> 
> +1 (non-binding)
> 
> We(the Apache Kyuubi community), verified that the Kyuubi Flink
>> engine
> works well[1] with Flink 1.18.0 RC3.
> 
> [1] https://github.com/apache/kyuubi/pull/5465
> 
> Thanks,
> Cheng Pan
> 
> 
> On 2023/10/19 00:26:24 Jing Ge wrote:
>> Hi everyone,
>> 
>> Please review and vote on the release candidate #3 for the version
>> 1.18.0, as follows:
>> [ ] +1, Approve the release
>> [ ] -1, Do not approve the release (please provide specific
>> comments)
>> 
>> The complete staging area is available for your review, which
>> includes:
>> 
>> * JIRA release notes [1], and the pull request adding release note
>> for
>> users [2]
>> * the official Apache source release and binary convenience
>> releases to
> be
>> deployed to dist.apache.org [3], which are signed with the key
>> with
>> fingerprint 96AE0E32CBE6E0753CE6 [4],
>> * all artifacts to be deployed to the Maven Central Repository [5],
>> * source code tag "release-1.18.0-rc3" [6],
>> * website pull request listing the new release and adding
>> announcement
> blog
>> post [7].
>> 
>> The vote will be open for at least 72 hours. It is adopted by
>> majority
>> approval, with at least 3 PMC affirmative votes.
>> 
>> Best regards,
>> Konstantin, Sergey, Qingsheng, and Jing
>> 
>> [1]
>> 
> 
 
>> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12352885
>> [2] https://github.com/apache/flink/pull/23527
>> [3] https://dist.apache.org/repos/dist/dev/flink/flink-1.18.0-rc3/
>> [4] https://dist.apache.org/repos/dist/release/flink/KEYS
>> [5]
> 
>> https://repository.apache.org/content/repositories/orgapacheflink-1662
>> [6]
>> https://github.com/apache/flink/releases/tag/release-1.18.0-rc3
>> [7] https://github.com/apache/flink-web/pull/680
>> 
> 
> 
> 
> 
 
>> 



Backport strategy

2023-10-23 Thread David Radley
Hi,
I am relatively new to the Flink community. I notice that critical fixes are 
backported to previous versions. Do we have a documented backport strategy and 
set of principles?

The reason I ask is that we recently removed the Kafka connector from the 
core repository, so the Kafka connector should be picked up from its own 
repository. I noticed this removal and updated issues in the core repo to 
indicate the code has moved to another repo. One of the issues was 
https://github.com/apache/flink/pull/21226#issuecomment-1775121605 . This is a 
critical issue and the request is to backport it to 1.15.3. I assume a backport 
would involve a 3rd number change, 1.15.4 in this case.

It seems to me that we should look to create fixes in the standalone Kafka 
connector where possible and to list the compatible Flink versions it can be 
used with; this could include patch levels of 1.15, 1.16 and 1.17.

WDYT?
  Kind regards, David.

Unless otherwise stated above:

IBM United Kingdom Limited
Registered in England and Wales with number 741598
Registered office: PO Box 41, North Harbour, Portsmouth, Hants. PO6 3AU


Re: Re: [DISCUSS] Remove legacy Paimon (TableStore) doc link from Flink web navigation

2023-10-23 Thread Yun Tang
Hi Marton and Martijn,

I have removed the link to the legacy Paimon (flink-table-store) but only left 
a link to the incubating-paimon doc. Please move to the PR review[1] for quick 
discussions.

[1] https://github.com/apache/flink-web/pull/665

Best
Yun Tang

From: Márton Balassi 
Sent: Monday, October 23, 2023 17:06
To: dev@flink.apache.org ; myas...@live.com 

Cc: d...@paimon.apache.org 
Subject: Re: Re: [DISCUSS] Remove legacy Paimon (TableStore) doc link from 
Flink web navigation

Hi all,

Thanks for your responses.
@Jingsong Li: Thanks for the reference to the web PR, I missed that.

@Yun Tang: Thanks, I prefer simply removing the TableStore link from the 
documentation navigation of Flink, as it is not a subproject of Flink anymore - 
it is now its own project. It has had 2 of its own releases over a ~half a year 
period.
I am all for having a proper links to Paimon, for example we could create a 
"Sister Projects" subsection in the About section of the 
flink.apache.org webpage and have a paragraph of 
intro/links there or simply add Paimon related content to the Table connector 
docs [1]. We can make these 2 changes separately, but ideally they should be 
merged relatively close in time.

Would you be open to updating your original PR [2] to simply remove the links 
or would you like me to do it instead? I am happy to review your change if you 
have a proposal of where to include Paimon.

[1] 
https://nightlies.apache.org/flink/flink-docs-release-1.17/docs/connectors/table/overview/
[2] https://github.com/apache/flink-web/pull/665

On Sat, Oct 21, 2023 at 7:33 AM Yun Tang <myas...@live.com> wrote:
Hi  devs,

I am supporting to update the links. As the PR [1] author, I originally wanted 
to keep the legacy link to the Flink table store (0.3) as that was part of the 
history of Flink. And I plan to add the new Paimon's main branch link to tell 
users from Flink that this is the latest version. WDYT?

[1] https://github.com/apache/flink-web/pull/665

Best
Yun Tang


From: junjie201...@gmail.com
Sent: Friday, October 20, 2023 12:20
To: Jing Ge ; dev <dev@flink.apache.org>
Cc: dev <d...@paimon.apache.org>
Subject: Re: Re: [DISCUSS] Remove legacy Paimon (TableStore) doc link from 
Flink web navigation

+1

On Tue, Oct 17, 2023 at 11:13 AM Yong Fang <zjur...@gmail.com> wrote:

> +1
>
> On Tue, Oct 17, 2023 at 4:52 PM Leonard Xu <xbjt...@gmail.com> wrote:
>
> > +1
> >
> >
> > > On 17 Oct 2023 at 4:50 PM, Martijn Visser <martijnvis...@apache.org> wrote:
> > >
> > > +1
> > >
> > >> On Tue, Oct 17, 2023 at 10:34 AM Jingsong Li <jingsongl...@gmail.com>
> > wrote:
> > >>
> > >> Hi marton,
> > >>
> > >> Thanks for driving. +1
> > >>
> > >> There is a PR to remove legacy Paimon
> > >> https://github.com/apache/flink-web/pull/665 , but it hasn't been
> > >> updated for a long time.
> > >>
> > >> Best,
> > >> Jingsong
> > >>
> > >>> On Tue, Oct 17, 2023 at 4:28 PM Márton Balassi <mbala...@apache.org>
> > wrote:
> > >>>
> > >>> Hi Flink & Paimon devs,
> > >>>
> > >>> The Flink webpage documentation navigation section still lists the
> > outdated TableStore 0.3 and master docs as subproject docs (see
> > attachment). I am all for advertising Paimon as a sister project of
> Flink,
> > but the current state is misleading in multiple ways.
> > >>>
> > >>> I would like to remove these obsolete links if the communities agree.
> > >>>
> > >>> Best,
> > >>> Marton
> >
> >
>


[jira] [Created] (FLINK-33342) JDK 17 CI run doesn't set java17-target profile

2023-10-23 Thread Matthias Pohl (Jira)
Matthias Pohl created FLINK-33342:
-

 Summary: JDK 17 CI run doesn't set java17-target profile
 Key: FLINK-33342
 URL: https://issues.apache.org/jira/browse/FLINK-33342
 Project: Flink
  Issue Type: Bug
  Components: Build System / Azure Pipelines, Build System / CI
Reporter: Matthias Pohl


In contrast to the jdk11 CI run which has the java11-target profile set (see 
[tools/azure-pipelines/build-apache-repo.yml:138|https://github.com/apache/flink/blob/9b63099964b36ad9d78649bb6f5b39473e0031bd/tools/azure-pipelines/build-apache-repo.yml#L138]),
 it's missing for the jdk17 CI run (see 
[tools/azure-pipelines/build-apache-repo.yml:149|https://github.com/apache/flink/blob/9b63099964b36ad9d78649bb6f5b39473e0031bd/tools/azure-pipelines/build-apache-repo.yml#L149]).

The profile for the source version (i.e. {{java11}} and {{java17}}) are 
automatically activated through the JDK version of the run.
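
For what it's worth, the difference amounts to whether the build is invoked with the target profile. A rough sketch follows; the real invocation lives in tools/azure-pipelines/build-apache-repo.yml and the tools/ci scripts referenced above, and the exact flags may differ.

    # What the jdk11 stage effectively does (the java11 source profile auto-activates via the JDK):
    mvn clean install -DskipTests -Pjava11-target

    # The jdk17 stage currently omits the analogous flag described in this ticket:
    mvn clean install -DskipTests -Pjava17-target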



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-33341) Use available local state for rescaling

2023-10-23 Thread Stefan Richter (Jira)
Stefan Richter created FLINK-33341:
--

 Summary: Use available local state for rescaling
 Key: FLINK-33341
 URL: https://issues.apache.org/jira/browse/FLINK-33341
 Project: Flink
  Issue Type: Improvement
  Components: Runtime / State Backends
Reporter: Stefan Richter
Assignee: Stefan Richter


Local state is currently only used for recovery. However, it would make sense 
to also use available local state in rescaling scenarios to reduce the amount 
of data to download from remote storage.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: Maven and java version variables

2023-10-23 Thread Alexander Fedulov
(under "Prepare for the release")

As for CI:
https://github.com/apache/flink/blob/78b5ddb11dfd2a3a00b58079fe9ee29a80555988/tools/ci/maven-utils.sh#L84
https://github.com/apache/flink/blob/9b63099964b36ad9d78649bb6f5b39473e0031bd/tools/azure-pipelines/build-apache-repo.yml#L39
https://github.com/apache/flink/blob/9b63099964b36ad9d78649bb6f5b39473e0031bd/azure-pipelines.yml#L39

Best,
Alexander Fedulov


On Mon, 23 Oct 2023 at 14:44, Jing Ge  wrote:

> Hi David,
>
> Please check [1] in the section Verify Java and Maven Version. Thanks!
>
> Best regards,
> Jing
>
>
> [1]
> https://cwiki.apache.org/confluence/display/FLINK/Creating+a+Flink+Release
>
> On Mon, Oct 23, 2023 at 1:25 PM David Radley 
> wrote:
>
> > Hi,
> >
> > I have an open pr in the backlog that improves the pom.xml by introducing
> > some Maven variables. The pr is
> https://github.com/apache/flink/pull/23469
> > It has been reviewed but not merged. In the meantime another pom change
> > has been added that caused a conflict. I have amended the code in my pr
> to
> > implement the new logic, introducing a new java upper bounds version
> > variable.
> > I notice that the pom change that was added introduced this comment:
> >
> >  -->
> >
> > 
> >
> > I am not sure what the CI setup means and where in the Flink Release wiki
> > the java range is mentioned. It would be great if the comment could be
> > extended to include links to this information. I am happy to do that as
> > part of this pr , if needed, if I can be supplied the links.  I think
> this
> > pr should be merged asap, so subsequent pom file changes use the Maven
> > variables.
> >
> >   WDYT
> >
> > Kind regards, David.
> >
> > Unless otherwise stated above:
> >
> > IBM United Kingdom Limited
> > Registered in England and Wales with number 741598
> > Registered office: PO Box 41, North Harbour, Portsmouth, Hants. PO6 3AU
> >
>


Re: [DISCUSS] FLIP-360: Merging ExecutionGraphInfoStore and JobResultStore into a single component

2023-10-23 Thread Gyula Fóra
I am a bit confused by the split into CompletedJobStore / JobDetailsStore.
It seems like JobDetailsStore is simply a view on top of CompletedJobStore:
 - Maybe we should not call it a store? Is it storing anything?
 - Why couldn't the cleanup triggering be the responsibility of the
CompletedJobStore? Wouldn't it be simpler to have the storage/cleanup-related
logic in a single place?
 - Ideally the JobDetailsStore / JobDetailsProvider could be a very thin
interface exposed by the CompletedJobStore (a rough sketch of this idea follows
below).
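
For illustration only, a minimal sketch of the "thin view" idea. The method
signatures below are assumptions for the sake of the example, not the FLIP's
actual interfaces, and the package names of the referenced Flink types are from
memory and may differ:

// Hypothetical sketch (not the FLIP's actual interfaces): JobDetailsStore as a
// thin, in-memory view over CompletedJobStore. Flink package names are assumptions.
import java.io.IOException;
import java.util.Collection;
import java.util.Optional;

import org.apache.flink.api.common.JobID;
import org.apache.flink.runtime.jobmaster.JobResult;
import org.apache.flink.runtime.messages.webmonitor.JobDetails;
import org.apache.flink.runtime.scheduler.ExecutionGraphInfo;

/** Owns the heavyweight data and all storage/cleanup logic (may touch disk). */
interface CompletedJobStore {
    void put(ExecutionGraphInfo executionGraphInfo) throws IOException;
    Optional<ExecutionGraphInfo> get(JobID jobId) throws IOException;
    void remove(JobID jobId) throws IOException; // single place for cleanup

    /** Thin, read-only view over the small-footprint, in-memory summaries. */
    JobDetailsProvider getJobDetailsProvider();
}

/** Not really a store: it only exposes what CompletedJobStore keeps in memory. */
interface JobDetailsProvider {
    Collection<JobDetails> getJobDetails();
    Optional<JobResult> getJobResult(JobID jobId);
}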

Gyula

On Sat, Sep 30, 2023 at 2:18 AM Matthias Pohl
 wrote:

> Thanks for sharing your thoughts, Shammon FY. I kind of see your point.
>
> Initially, I didn't put much thought into splitting up the interface into
> two because the dispatcher would have been the only component dealing with
> the two interfaces. Having two interfaces wouldn't have added much value
> (in terms of testability and readability, I thought).
>
> But I iterated over the idea once more and came up with a new proposal that
> involves the two components CompletedJobStore and JobDetailsStore. It's not
> 100% what you suggested (because the retrieval of the ExecutionGraphInfo
> still lives in the CompletedJobStore) but the separation makes sense in my
> opinion:
> - The CompletedJobStore deals with the big data that might require
> accessing the disk.
> - The JobDetailsStore handles the small-footprint data that lives in memory
> all the time. Additionally, it takes care of actually deleting the metadata
> of the completed job in both stores if a TTL is configured.
>
> See FLIP-360 [1] with the newly added class and sequence diagrams and
> additional content. I only updated the Interfaces & Methods section (see
> diff [2]).
>
> I'm looking forward to feedback.
>
> Best,
> Matthias
>
> [1]
>
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-360%3A+merging+the+executiongraphinfostore+and+the+jobresultstore+into+a+single+component+completedjobstore
> [2]
>
> https://cwiki.apache.org/confluence/pages/diffpagesbyversion.action?pageId=263428420=14=13
>
> On Mon, Sep 18, 2023 at 1:20 AM Shammon FY  wrote:
>
> > Hi Matthias,
> >
> > Thanks for initiating this discussion, and +1 for overall from my side.
> > It's really strange to have two different places to store completed jobs,
> > this also brings about the complexity of Flink. I agree with using a
> > unified instance to store the completed job information.
> >
> > In terms of ability, `ExecutionGraphInfoStore` and `JobResultStore` are
> > different: one is mainly used for information display, and the other is
> for
> > failover. So after unifying storage, can we use different interfaces to
> > meet the different requirements for jobs? Adding all these methods for
> > different components into one interface such as `CompletedJobStore` may
> be
> > a little strange. What do you think?
> >
> > Best,
> > Shammon FY
> >
> >
> >
> > On Fri, Sep 8, 2023 at 8:08 PM Gyula Fóra  wrote:
> >
> > > Hi Matthias!
> > >
> > > Thank you for the detailed proposal, overall I am in favor of making
> this
> > > unification to simplify the logic and make the integration for external
> > > components more straightforward.
> > > I will try to read through the proposal more carefully next week and
> > > provide some detailed feedback.
> > >
> > > +1
> > >
> > > Thanks
> > > Gyula
> > >
> > > On Fri, Sep 8, 2023 at 8:36 AM Matthias Pohl  > > .invalid>
> > > wrote:
> > >
> > > > Just a bit more elaboration on the question that we need to answer
> > here:
> > > Do
> > > > we want to expose the internal ArchivedExecutionGraph data structure
> > > > through JSON?
> > > >
> > > > - The JSON approach allows the user to have (almost) full access to
> the
> > > > information (that would be otherwise derived from the REST API).
> > > Therefore,
> > > > there's no need to spin up a cluster to access this information.
> > > > Any information that shall be exposed through the REST API needs to
> be
> > > > well-defined in this JSON structure, though. Large parts of the
> > > > ArchivedExecutionGraph data structure (essentially anything that
> shall
> > be
> > > > used to populate the REST API) become public domain, though, which
> puts
> > > > more constraints on this data structure and makes it harder to change
> > it
> > > in
> > > > the future.
> > > >
> > > > - The binary data approach allows us to keep the data structure
> itself
> > > > internal. We have more control over what we want to expose by
> providing
> > > > access points in the ClusterClient (e.g. just add a command to
> extract
> > > the
> > > > external storage path from the file).
> > > >
> > > > - The compromise (i.e. keeping ExecutionGraphInfoStore and
> > JobResultStore
> > > > separate and just expose the checkpoint information next to the
> > JobResult
> > > > in the JobResultStore file) would keep us the closest to the current
> > > state,
> > > > requires the least code changes and the least exposure of internal
> data
> > > > structures. It 

Re: Maven and java version variables

2023-10-23 Thread Jing Ge
Hi David,

Please check [1] in the section Verify Java and Maven Version. Thanks!

Best regards,
Jing


[1]
https://cwiki.apache.org/confluence/display/FLINK/Creating+a+Flink+Release

On Mon, Oct 23, 2023 at 1:25 PM David Radley 
wrote:

> Hi,
>
> I have an open pr in the backlog that improves the pom.xml by introducing
> some Maven variables. The pr is https://github.com/apache/flink/pull/23469
> It has been reviewed but not merged. In the meantime another pom change
> has been added that caused a conflict. I have amended the code in my pr to
> implement the new logic, introducing a new java upper bounds version
> variable.
> I notice that the pom change that was added introduced this comment:
>
> 
>
> 
>
> I am not sure what the CI setup means and where in the Flink Release wiki
> the java range is mentioned. It would be great if the comment could be
> extended to include links to this information. I am happy to do that as
> part of this pr, if needed, if I can be supplied the links. I think this
> pr should be merged asap, so subsequent pom file changes use the Maven
> variables.
>
>   WDYT
>
> Kind regards, David.
>
> Unless otherwise stated above:
>
> IBM United Kingdom Limited
> Registered in England and Wales with number 741598
> Registered office: PO Box 41, North Harbour, Portsmouth, Hants. PO6 3AU
>


Maven and java version variables

2023-10-23 Thread David Radley
Hi,

I have an open pr in the backlog that improves the pom.xml by introducing some 
Maven variables. The pr is https://github.com/apache/flink/pull/23469
It has been reviewed but not merged. In the meantime another pom change has 
been added that caused a conflict. I have amended the code in my pr to 
implement the new logic, introducing a new java upper bounds version variable.
I notice that the pom change that was added introduced this comment:





I am not sure what the CI setup means and where in the Flink Release wiki the 
java range is mentioned. It would be great if the comment could be extended to 
include links to this information. I am happy to do that as part of this pr, 
if needed, if I can be supplied the links. I think this pr should be merged 
asap, so subsequent pom file changes use the Maven variables.

  WDYT

Kind regards, David.

Unless otherwise stated above:

IBM United Kingdom Limited
Registered in England and Wales with number 741598
Registered office: PO Box 41, North Harbour, Portsmouth, Hants. PO6 3AU


[jira] [Created] (FLINK-33340) Bump Jackson to 2.15.3

2023-10-23 Thread Sergey Nuyanzin (Jira)
Sergey Nuyanzin created FLINK-33340:
---

 Summary: Bump Jackson to 2.15.3
 Key: FLINK-33340
 URL: https://issues.apache.org/jira/browse/FLINK-33340
 Project: Flink
  Issue Type: Technical Debt
Reporter: Sergey Nuyanzin
Assignee: Sergey Nuyanzin


Among others, there are a number of improvements regarding the parsing of 
numbers (jackson-core):
https://github.com/FasterXML/jackson-core/blob/2.16/release-notes/VERSION-2.x




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[DISCUSS] Removal of unused e2e tests

2023-10-23 Thread Alexander Fedulov
FLINK-17375 [1] removed [2] run-pre-commit-tests.sh in Flink 1.12. Since
then the following tests are not executed anymore:
test_state_migration.sh
test_state_evolution.sh
test_streaming_kinesis.sh
test_streaming_classloader.sh
test_streaming_distributed_cache_via_blob.sh

Certain classes that were previously used for classloading and state-evolution
testing only via the aforementioned scripts are still in the project. I would
like to understand whether the removal was deliberate and whether it is OK to
do a cleanup [3].

[1] https://issues.apache.org/jira/browse/FLINK-17375
[2]
https://github.com/apache/flink/pull/12268/files#diff-39f0aea40d2dd3f026544bb4c2502b2e9eab4c825df5f2b68c6d4ca8c39d7b5e
[3] https://issues.apache.org/jira/browse/FLINK-33335

Best,
Alexander Fedulov


Apply for FLIP Wiki Edit Permission

2023-10-23 Thread Dan Zou
Hi,
I want to apply for FLIP Wiki Edit Permission. I am working on FLINK-33267 
(https://issues.apache.org/jira/browse/FLINK-33267), and I would like to create 
a FLIP for it.
My Jira ID is Dan Zou (zou...@apache.org).

Best,
Dan Zou   







[jira] [Created] (FLINK-33339) Update Guava to 32.1.3

2023-10-23 Thread Martijn Visser (Jira)
Martijn Visser created FLINK-33339:
--

 Summary: Update Guava to 32.1.3
 Key: FLINK-33339
 URL: https://issues.apache.org/jira/browse/FLINK-33339
 Project: Flink
  Issue Type: Technical Debt
  Components: BuildSystem / Shaded
Reporter: Martijn Visser
Assignee: Martijn Visser






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: Re: [DISCUSS] Remove legacy Paimon (TableStore) doc link from Flink web navigation

2023-10-23 Thread Márton Balassi
Hi all,

Thanks for your responses.
@Jingsong Li: Thanks for the reference to the web PR, I missed that.

@Yun Tang: Thanks, I would prefer simply removing the TableStore link from the
documentation navigation of Flink, as it is not a subproject of Flink
anymore - it is now its own project and has had two releases of its own over
roughly the past half year.
I am all for having proper links to Paimon. For example, we could create a
"Sister Projects" subsection in the About section of the flink.apache.org
webpage and have a paragraph of intro/links there, or simply add Paimon-related
content to the Table connector docs [1]. We can make these two changes
separately, but ideally they should be merged relatively close in time.

Would you be open to updating your original PR [2] to simply remove the
links, or would you like me to do it instead? I am happy to review your
change if you have a proposal for where to include Paimon.

[1]
https://nightlies.apache.org/flink/flink-docs-release-1.17/docs/connectors/table/overview/
[2] https://github.com/apache/flink-web/pull/665

On Sat, Oct 21, 2023 at 7:33 AM Yun Tang  wrote:

> Hi  devs,
>
> I support updating the links. As the PR [1] author, I originally
> wanted to keep the legacy link to the Flink table store (0.3) as that was
> part of the history of Flink. And I plan to add the new Paimon's main
> branch link to tell users from Flink that this is the latest version. WDYT?
>
> [1] https://github.com/apache/flink-web/pull/665
>
> Best
> Yun Tang
>
> 
> From: junjie201...@gmail.com 
> Sent: Friday, October 20, 2023 12:20
> To: Jing Ge ; dev 
> Cc: dev 
> Subject: Re: Re: [DISCUSS] Remove legacy Paimon (TableStore) doc link from
> Flink web navigation
>
> +1
>
> On Tue, Oct 17, 2023 at 11:13 AM Yong Fang  wrote:
>
> > +1
> >
> > On Tue, Oct 17, 2023 at 4:52 PM Leonard Xu  wrote:
> >
> > > +1
> > >
> > >
> > > > 2023年10月17日 下午4:50,Martijn Visser  写道:
> > > >
> > > > +1
> > > >
> > > > On Tue, Oct 17, 2023 at 10:34 AM Jingsong Li  >
> > > wrote:
> > > >>
> > > >> Hi marton,
> > > >>
> > > >> Thanks for driving. +1
> > > >>
> > > >> There is a PR to remove legacy Paimon
> > > >> https://github.com/apache/flink-web/pull/665 , but it hasn't been
> > > >> updated for a long time.
> > > >>
> > > >> Best,
> > > >> Jingsong
> > > >>
> > > >> On Tue, Oct 17, 2023 at 4:28 PM Márton Balassi  >
> > > wrote:
> > > >>>
> > > >>> Hi Flink & Paimon devs,
> > > >>>
> > > >>> The Flink webpage documentation navigation section still lists the
> > > outdated TableStore 0.3 and master docs as subproject docs (see
> > > attachment). I am all for advertising Paimon as a sister project of
> > Flink,
> > > but the current state is misleading in multiple ways.
> > > >>>
> > > >>> I would like to remove these obsolete links if the communities
> agree.
> > > >>>
> > > >>> Best,
> > > >>> Marton
> > >
> > >
> >
>


[jira] [Created] (FLINK-33338) Bump up RocksDB version to 7.x

2023-10-23 Thread Piotr Nowojski (Jira)
Piotr Nowojski created FLINK-33338:
--

 Summary: Bump up RocksDB version to 7.x
 Key: FLINK-33338
 URL: https://issues.apache.org/jira/browse/FLINK-33338
 Project: Flink
  Issue Type: Sub-task
  Components: Runtime / State Backends
Reporter: Piotr Nowojski


We need to bump RocksDB in order to be able to use new IngestDB and ClipDB 
commands.

If some of the required changes haven't been merged to Facebook/RocksDB, we 
should cherry-pick and include them in our FRocksDB fork.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-33337) Expose IngestDB and ClipDB in the official RocksDB API

2023-10-23 Thread Piotr Nowojski (Jira)
Piotr Nowojski created FLINK-33337:
--

 Summary: Expose IngestDB and ClipDB in the official RocksDB API
 Key: FLINK-33337
 URL: https://issues.apache.org/jira/browse/FLINK-33337
 Project: Flink
  Issue Type: Sub-task
  Components: Runtime / State Backends
Reporter: Piotr Nowojski
Assignee: Yue Ma


Remaining open PR: https://github.com/facebook/rocksdb/pull/11646



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [VOTE] FLIP-370: Support Balanced Tasks Scheduling

2023-10-23 Thread xiangyu feng
Thanks for driving that.
+1 (non-binding)

Regards,
Xiangyu

Yu Chen  于2023年10月23日周一 15:19写道:

> +1 (non-binding)
>
> We really need this capability to balance tasks at the TaskManager level in
> production, as it helps make more efficient use of TaskManager resources.
> Thanks for driving that.
>
> Best,
> Yu Chen
>
> Yangze Guo  于2023年10月23日周一 15:08写道:
>
> > +1 (binding)
> >
> > Best,
> > Yangze Guo
> >
> > On Mon, Oct 23, 2023 at 12:00 PM Rui Fan <1996fan...@gmail.com> wrote:
> > >
> > > +1(binding)
> > >
> > > Thanks to Yuepeng and to everyone who participated in the discussion!
> > >
> > > Best,
> > > Rui
> > >
> > > On Mon, Oct 23, 2023 at 11:55 AM Roc Marshal  wrote:
> > >>
> > >> Hi all,
> > >>
> > >> Thanks for all the feedback on FLIP-370[1][2].
> > >> I'd like to start a vote for  FLIP-370. The vote will last for at
> least
> > 72 hours (Oct. 26th at 10:00 A.M. GMT) unless there is an objection or
> > insufficient votes.
> > >>
> > >> Thanks,
> > >> Yuepeng Pan
> > >>
> > >> [1]
> >
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-370%3A+Support+Balanced+Tasks+Scheduling
> > >> [2] https://lists.apache.org/thread/mx3ot0fmk6zr02ccdby0s8oqxqm2pn1x
> >
>


[jira] [Created] (FLINK-33336) Upgrade ASM to 9.6

2023-10-23 Thread Martijn Visser (Jira)
Martijn Visser created FLINK-33336:
--

 Summary: Upgrade ASM to 9.6
 Key: FLINK-33336
 URL: https://issues.apache.org/jira/browse/FLINK-33336
 Project: Flink
  Issue Type: Technical Debt
  Components: BuildSystem / Shaded
Reporter: Martijn Visser
Assignee: Martijn Visser






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [VOTE] Apache Flink Kubernetes Operator Release 1.6.1, release candidate #1

2023-10-23 Thread Gyula Fóra
+1 (binding)

- Verified checksums, signatures, source release content
- Helm repo works correctly and points to the correct image / version
- Installed operator, ran stateful example

Gyula

On Sat, Oct 21, 2023 at 1:43 PM Rui Fan <1996fan...@gmail.com> wrote:

> +1(non-binding)
>
> - Downloaded artifacts from dist
> - Verified SHA512 checksums
> - Verified GPG signatures
> - Build the source with java-11 and verified the licenses together
> - Verified that all POM files point to the same version.
> - Verified that chart and appVersion matches the target release
> - Verified that helm chart / values.yaml points to the RC docker image
> - Verified that RC repo works as Helm repo (helm repo add
> flink-operator-repo-1.6.1-rc1
>
> https://dist.apache.org/repos/dist/dev/flink/flink-kubernetes-operator-1.6.1-rc1/
> )
> - Verified Helm chart can be installed  (helm install
> flink-kubernetes-operator
> flink-operator-repo-1.6.1-rc1/flink-kubernetes-operator --set
> webhook.create=false)
> - Submitted the autoscaling demo, the autoscaler works well (kubectl apply
> -f autoscaling.yaml)
> - Triggered a manual savepoint (update the yaml: savepointTriggerNonce:
> 101)
>
> Best,
> Rui
>
> On Sat, Oct 21, 2023 at 7:33 PM Rui Fan <1996fan...@gmail.com> wrote:
>
> > Hi Everyone,
> >
> > Please review and vote on the release candidate #1 for the version 1.6.1
> of
> > Apache Flink Kubernetes Operator,
> > as follows:
> > [ ] +1, Approve the release
> > [ ] -1, Do not approve the release (please provide specific comments)
> >
> > **Release Overview**
> >
> > As an overview, the release consists of the following:
> > a) Kubernetes Operator canonical source distribution (including the
> > Dockerfile), to be deployed to the release repository at dist.apache.org
> > b) Kubernetes Operator Helm Chart to be deployed to the release
> repository
> > at dist.apache.org
> > c) Maven artifacts to be deployed to the Maven Central Repository
> > d) Docker image to be pushed to dockerhub
> >
> > **Staging Areas to Review**
> >
> > The staging areas containing the above mentioned artifacts are as
> follows,
> > for your review:
> > * All artifacts for a,b) can be found in the corresponding dev repository
> > at dist.apache.org [1]
> > * All artifacts for c) can be found at the Apache Nexus Repository [2]
> > * The docker image for d) is staged on github [3]
> >
> > All artifacts are signed with the
> > key B2D64016B940A7E0B9B72E0D7D0528B28037D8BC [4]
> >
> > Other links for your review:
> > * source code tag "release-1.6.1-rc1" [5]
> > * PR to update the website Downloads page to
> > include Kubernetes Operator links [6]
> > * PR to update the doc version of flink-kubernetes-operator[7]
> >
> > **Vote Duration**
> >
> > The voting time will run for at least 72 hours.
> > It is adopted by majority approval, with at least 3 PMC affirmative
> votes.
> >
> > **Note on Verification**
> >
> > You can follow the basic verification guide here[8].
> > Note that you don't need to verify everything yourself, but please make
> > note of what you have tested together with your +- vote.
> >
> > [1]
> >
> https://dist.apache.org/repos/dist/dev/flink/flink-kubernetes-operator-1.6.1-rc1/
> > [2]
> > https://repository.apache.org/content/repositories/orgapacheflink-1663/
> > [3]
> >
> https://github.com/apache/flink-kubernetes-operator/pkgs/container/flink-kubernetes-operator/139454270?tag=51eeae1
> > [4] https://dist.apache.org/repos/dist/release/flink/KEYS
> > [5]
> >
> https://github.com/apache/flink-kubernetes-operator/tree/release-1.6.1-rc1
> > [6] https://github.com/apache/flink-web/pull/690
> > [7] https://github.com/apache/flink-kubernetes-operator/pull/687
> > [8]
> >
> https://cwiki.apache.org/confluence/display/FLINK/Verifying+a+Flink+Kubernetes+Operator+Release
> >
> > Best,
> > Rui
> >
>


Re: [VOTE] FLIP-370: Support Balanced Tasks Scheduling

2023-10-23 Thread Yu Chen
+1 (non-binding)

We really need this capability to balance tasks at the TaskManager level in
production, as it helps make more efficient use of TaskManager resources.
Thanks for driving that.

Best,
Yu Chen

Yangze Guo  于2023年10月23日周一 15:08写道:

> +1 (binding)
>
> Best,
> Yangze Guo
>
> On Mon, Oct 23, 2023 at 12:00 PM Rui Fan <1996fan...@gmail.com> wrote:
> >
> > +1(binding)
> >
> > Thanks to Yuepeng and to everyone who participated in the discussion!
> >
> > Best,
> > Rui
> >
> > On Mon, Oct 23, 2023 at 11:55 AM Roc Marshal  wrote:
> >>
> >> Hi all,
> >>
> >> Thanks for all the feedback on FLIP-370[1][2].
> >> I'd like to start a vote for  FLIP-370. The vote will last for at least
> 72 hours (Oct. 26th at 10:00 A.M. GMT) unless there is an objection or
> insufficient votes.
> >>
> >> Thanks,
> >> Yuepeng Pan
> >>
> >> [1]
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-370%3A+Support+Balanced+Tasks+Scheduling
> >> [2] https://lists.apache.org/thread/mx3ot0fmk6zr02ccdby0s8oqxqm2pn1x
>


Re: [VOTE] FLIP-370: Support Balanced Tasks Scheduling

2023-10-23 Thread Yangze Guo
+1 (binding)

Best,
Yangze Guo

On Mon, Oct 23, 2023 at 12:00 PM Rui Fan <1996fan...@gmail.com> wrote:
>
> +1(binding)
>
> Thanks to Yuepeng and to everyone who participated in the discussion!
>
> Best,
> Rui
>
> On Mon, Oct 23, 2023 at 11:55 AM Roc Marshal  wrote:
>>
>> Hi all,
>>
>> Thanks for all the feedback on FLIP-370[1][2].
>> I'd like to start a vote for  FLIP-370. The vote will last for at least 72 
>> hours (Oct. 26th at 10:00 A.M. GMT) unless there is an objection or 
>> insufficient votes.
>>
>> Thanks,
>> Yuepeng Pan
>>
>> [1] 
>> https://cwiki.apache.org/confluence/display/FLINK/FLIP-370%3A+Support+Balanced+Tasks+Scheduling
>> [2] https://lists.apache.org/thread/mx3ot0fmk6zr02ccdby0s8oqxqm2pn1x


Re: Re: [DISCUSS] Remove legacy Paimon (TableStore) doc link from Flink web navigation

2023-10-23 Thread Martijn Visser
Hi Yun,

I think this discussion thread was about removing the links, I don't
think we should keep them. For history purposes, there's always Git
where things can be retrieved from.

Best regards,

Martijn

On Sat, Oct 21, 2023 at 7:33 AM Yun Tang  wrote:
>
> Hi  devs,
>
> I support updating the links. As the PR [1] author, I originally 
> wanted to keep the legacy link to the Flink table store (0.3) as that was 
> part of the history of Flink. And I plan to add the new Paimon's main branch 
> link to tell users from Flink that this is the latest version. WDYT?
>
> [1] https://github.com/apache/flink-web/pull/665
>
> Best
> Yun Tang
>
> 
> From: junjie201...@gmail.com 
> Sent: Friday, October 20, 2023 12:20
> To: Jing Ge ; dev 
> Cc: dev 
> Subject: Re: Re: [DISCUSS] Remove legacy Paimon (TableStore) doc link from 
> Flink web navigation
>
> +1
>
> On Tue, Oct 17, 2023 at 11:13 AM Yong Fang  wrote:
>
> > +1
> >
> > On Tue, Oct 17, 2023 at 4:52 PM Leonard Xu  wrote:
> >
> > > +1
> > >
> > >
> > > > 2023年10月17日 下午4:50,Martijn Visser  写道:
> > > >
> > > > +1
> > > >
> > > > On Tue, Oct 17, 2023 at 10:34 AM Jingsong Li 
> > > wrote:
> > > >>
> > > >> Hi marton,
> > > >>
> > > >> Thanks for driving. +1
> > > >>
> > > >> There is a PR to remove legacy Paimon
> > > >> https://github.com/apache/flink-web/pull/665 , but it hasn't been
> > > >> updated for a long time.
> > > >>
> > > >> Best,
> > > >> Jingsong
> > > >>
> > > >> On Tue, Oct 17, 2023 at 4:28 PM Márton Balassi 
> > > wrote:
> > > >>>
> > > >>> Hi Flink & Paimon devs,
> > > >>>
> > > >>> The Flink webpage documentation navigation section still lists the
> > > outdated TableStore 0.3 and master docs as subproject docs (see
> > > attachment). I am all for advertising Paimon as a sister project of
> > Flink,
> > > but the current state is misleading in multiple ways.
> > > >>>
> > > >>> I would like to remove these obsolete links if the communities agree.
> > > >>>
> > > >>> Best,
> > > >>> Marton
> > >
> > >
> >


Re: [VOTE] Release 1.18.0, release candidate #3

2023-10-23 Thread Matthias Pohl
Hi Amir,
Usually, the plan is to release minor version support for Flink connectors
after the Flink minor version is released. See Martijn's post on that issue
in the 1.18.0 RC0 ML thread [1] for further context.

Best,
Matthias

[1] https://lists.apache.org/thread/687qrtq3894ycw69dvvr4sdjdqxngsj6


Re: [VOTE] FLIP-373: Support Configuring Different State TTLs using SQL Hint

2023-10-23 Thread Zakelly Lan
+1(non-binding)

Best,
Zakelly

On Mon, Oct 23, 2023 at 1:15 PM Benchao Li  wrote:
>
> +1 (binding)
>
> Feng Jin  于2023年10月23日周一 13:07写道:
> >
> > +1(non-binding)
> >
> >
> > Best,
> > Feng
> >
> >
> > On Mon, Oct 23, 2023 at 11:58 AM Xuyang  wrote:
> >
> > > +1(non-binding)
> > >
> > >
> > >
> > >
> > > --
> > >
> > > Best!
> > > Xuyang
> > >
> > >
> > >
> > >
> > >
> > > At 2023-10-23 11:38:15, "Jane Chan"  wrote:
> > > >Hi developers,
> > > >
> > > >Thanks for all the feedback on FLIP-373: Support Configuring Different
> > > >State TTLs using SQL Hint [1].
> > > >Based on the discussion [2], we have reached a consensus, so I'd like to
> > > >start a vote.
> > > >
> > > >The vote will last for at least 72 hours (Oct. 26th at 10:00 A.M. GMT)
> > > >unless there is an objection or insufficient votes.
> > > >
> > > >[1]
> > > >
> > > https://cwiki.apache.org/confluence/display/FLINK/FLIP-373%3A+Support+Configuring+Different+State+TTLs+using+SQL+Hint
> > > >[2] https://lists.apache.org/thread/3s69dhv3rp4s0kysnslqbvyqo3qf7zq5
> > > >
> > > >Best,
> > > >Jane
> > >
>
>
>
> --
>
> Best,
> Benchao Li