Re: [VOTE] FLIP-465: Introduce DESCRIBE FUNCTION

2024-07-04 Thread Martijn Visser
+1 (binding)

On Thu, Jul 4, 2024 at 5:39 AM Yanquan Lv  wrote:

> Hi Natea, thanks for driving it.
> +1 (non-binding).
>
> Jim Hughes  wrote on Thu, Jul 4, 2024 at 04:41:
>
> > Hi Natea,
> >
> > Looks good to me!
> >
> > +1 (non-binding).
> >
> > Cheers,
> >
> > Jim
> >
> > On Wed, Jul 3, 2024 at 3:16 PM Natea Eshetu Beshada
> >  wrote:
> >
> > > Sorry I forgot to include the FLIP [1] and the mailing thread
> discussion
> > > link [2] in my previous email.
> > >
> > > [1]
> > >
> > >
> >
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-465%3A+Introduce+DESCRIBE+FUNCTION
> > > [2] https://lists.apache.org/thread/s46ftnmz4ggmmssgyx6vfhqjttsk9lph
> > >
> > > Thanks,
> > > Natea
> > >
> > > On Wed, Jul 3, 2024 at 12:06 PM Natea Eshetu Beshada <
> > > nbesh...@confluent.io>
> > > wrote:
> > >
> > > > Hello everyone,
> > > >
> > > > I would like to start a vote on FLIP-465 [1]. It proposes adding SQL
> > > > syntax that would allow users to describe the metadata of a given
> > > function.
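
For reference, the proposed syntax looks roughly like the following — a
minimal sketch assuming the DESCRIBE FUNCTION [EXTENDED] grammar from the
FLIP, with a made-up function name and implementing class:

{code:java}
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class DescribeFunctionSketch {
    public static void main(String[] args) {
        TableEnvironment tEnv =
                TableEnvironment.create(EnvironmentSettings.inStreamingMode());
        // Hypothetical UDF registration, just so there is something to describe.
        tEnv.executeSql("CREATE FUNCTION my_upper AS 'com.example.MyUpperFunction'");
        // The syntax proposed by FLIP-465:
        tEnv.executeSql("DESCRIBE FUNCTION my_upper").print();
        // The FLIP also describes an EXTENDED variant with additional metadata:
        tEnv.executeSql("DESCRIBE FUNCTION EXTENDED my_upper").print();
    }
}
{code}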
> > > >
> > > > The vote will be open for at least 72 hours (Saturday, July 6th,
> > > > 2024, 12:30 PST) unless there is an objection or insufficient votes.
> > > >
> > > > Thanks,
> > > > Natea
> > > >
> > > >
> > >
> >
>


Re: [DISCUSS] FLIP-299 Pub/Sub Lite Connector

2024-07-02 Thread Martijn Visser
Hi,

Since PubSub Lite is deprecated, I think we can safely close this FLIP and
not proceed further [1].

Best regards,

Martijn

[1] https://cloud.google.com/pubsub/docs/choosing-pubsub-or-lite

On Sun, Jun 9, 2024 at 12:23 AM Ahmed Hamdy  wrote:

> Hi all,
> I wonder if we can revive the discussion here.
> It has been discussed in another thread [1] that the lack of maintenance for
> connector repos such as gcp-pubsub should be a motive to promote more
> committers who actually commit to maintaining the repos, instead of
> migrating/adding connectors out of support of the community. I don't
> see another blocker for such a FLIP to progress, given the effort done by
> Daniel on the external repo.
> I do have comments on the FLIP itself, and am happy to drive changes
> myself if Daniel is no longer available, but this doesn't seem the core
> point of the discussion at the moment.
>
> I would love to hear your thoughts.
>
> 1-https://lists.apache.org/thread/63wdvo5hvr6koc3wzbqy2kw4krhmkfbx
> Best Regards
> Ahmed Hamdy
>
>
> On Fri, 17 Mar 2023 at 15:58, Daniel Collins  >
> wrote:
>
> > > would the repository ... be removed ... ?
> >
> > Yes, I would remove it once it is merged into a version of flink that is
> > supported by GCP dataproc. It exists now (and I am creating releases and
> > maven artifacts for it) to unblock users in the interim period.
> >
> > -Daniel
> >
> > On Thu, Mar 16, 2023 at 3:32 PM Martijn Visser  >
> > wrote:
> >
> > > Hi Daniel,
> > >
> > > > I don't know how to get to this point, it sounds like more of an
> > > organizational constraint than a technical one though- who is
> responsible
> > > for the same role for the standard Pub/Sub connector? I'm working with
> > the
> > > Pub/Sub team right now on prioritization of supporting the flink
> > connector
> > > and converting it to support the recommended delivery mechanism for
> that
> > > service.
> > >
> > > It's not so much an organizational constraint, it's more if there are
> one
> > > or more committers in the Flink community who have the bandwidth to
> help
> > > with reviewing and merging a new connector. The PubSub connector has
> > pretty
> > > much been unmaintained for the past couple of years; I have done a
> couple
> > > of outreaches to Google, but those were unfruitful.
> > >
> > > I'm hoping that someone from the Flink committers has the bandwidth for
> > > helping you out. @All, if you have bandwidth, please come forward.
> > >
> > > > I imagine our involvement would be similar to support for our
> > > self-managed client libraries
> > >
> > > I think that sounds fine.
> > >
> > > One question I have is if you envision that the code of
> > > https://github.com/googleapis/java-pubsublite-flink moves to
> > > https://github.com/apache/flink-connector-gcp-pubsub, would the
> > repository
> > > https://github.com/googleapis/java-pubsublite-flink be removed or do
> you
> > > propose to have both of them exist? I would be in favour of having one,
> > but
> > > wanted to check with you.
> > >
> > > Best regards,
> > >
> > > Martijn
> > >
> > > On Tue, Mar 14, 2023 at 4:09 AM Daniel Collins
> > > 
> > > wrote:
> > >
> > > > Hi all,
> > > >
> > > > Thank you for the feedback. Responses inline.
> > > >
> > > > > we need feedback from a Committer who would review and help
> maintain
> > it
> > > > going forward. Ideally, this Committer would guide one or more
> > > contributors
> > > > from Google to Committership so that Google could step up and
> maintain
> > > > Flink
> > > > 's PubSub and PubSub Lite Connector in the future.
> > > >
> > > > I don't know how to get to this point, it sounds like more of an
> > > > organizational constraint than a technical one though- who is
> > responsible
> > > > for the same role for the standard Pub/Sub connector? I'm working
> with
> > > the
> > > > Pub/Sub team right now on prioritization of supporting the flink
> > > connector
> > > > and converting it to support the recommended delivery mechanism for
> > that
> > > > service.
> > > >
> > > > > For this, it would be good to understand how you envision the
> > > involvement
> > > > of the PubSub Lite team at Google.
>

Re: [VOTE] FLIP-456: CompiledPlan support for Batch Execution Mode

2024-07-02 Thread Martijn Visser
+1 (binding)

On Mon, Jul 1, 2024 at 7:00 PM Jim Hughes 
wrote:

> Hi Alexey,
>
> +1 (non-binding)
>
> I'm looking forward to parity between streaming and batch mode for
> compiled plans!
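
For context, the entry points involved — a sketch of the FLIP-190 CompiledPlan
API that FLIP-456 extends to batch jobs; the SQL and file path below are
illustrative, and the referenced tables are assumed to exist:

{code:java}
import org.apache.flink.table.api.CompiledPlan;
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.PlanReference;
import org.apache.flink.table.api.TableEnvironment;

public class CompiledPlanSketch {
    public static void main(String[] args) {
        TableEnvironment tEnv =
                TableEnvironment.create(EnvironmentSettings.inStreamingMode());
        // Compile the plan once (source_t/sink_t are assumed to exist):
        CompiledPlan plan =
                tEnv.compilePlanSql("INSERT INTO sink_t SELECT * FROM source_t");
        plan.writeToFile("/tmp/plan.json");
        // ...and load and execute it later, e.g. after a Flink upgrade:
        tEnv.loadPlan(PlanReference.fromFile("/tmp/plan.json")).execute();
    }
}
{code}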
>
> Cheers,
>
> Jim
>
> On Mon, Jul 1, 2024 at 12:55 PM Alexey Leonov-Vendrovskiy <
> vendrov...@gmail.com> wrote:
>
> > Hello everyone,
> >
> > We had a good discussion of FLIP-456: CompiledPlan support for Batch
> > Execution Mode [1]. Discussion thread is here: [2].
> >
> > Let's start voting on it. The vote will be open for at least 72
> > hours unless there is an objection or insufficient votes. The FLIP will
> be
> > considered accepted if 3 binding votes (from active committers according
> to
> > the Flink bylaws [3]) are gathered by the community.
> >
> > Thanks,
> > Alexey
> >
> > [1]
> >
> >
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-456%3A+CompiledPlan+support+for+Batch+Execution+Mode
> > [2] https://lists.apache.org/thread/7gpyqvdnnbjwbh3vbk6b0pj38l91crvv
> > [3]
> >
> >
> https://cwiki.apache.org/confluence/display/FLINK/Flink+Bylaws#FlinkBylaws-Approvals
> >
>


Re: [VOTE] Release flink-shaded 19.0, release candidate #1

2024-07-01 Thread Martijn Visser
+1 (binding)

- Validated hashes
- Verified signature
- Verified that no binaries exist in the source archive
- Built the source with Maven
- Verified licenses
- Verified web PRs

On Fri, Jun 28, 2024 at 2:02 PM Timo Walther  wrote:

> +1 (binding)
>
> Thanks for fixing the JSON functions!
>
> Timo
>
> On 28.06.24 12:54, Dawid Wysakowicz wrote:
> > Hi everyone,
> > Please review and vote on the release candidate 1 for the version 19.0,
> as
> > follows:
> > [ ] +1, Approve the release
> > [ ] -1, Do not approve the release (please provide specific comments)
> >
> >
> > The complete staging area is available for your review, which includes:
> > * JIRA release notes [1],
> > * the official Apache source release to be deployed to dist.apache.org
> [2],
> > which are signed with the key with fingerprint
> > EA93A435B4E2C9B4C9F533F631D2DD10BFC15A2D [3],
> > * all artifacts to be deployed to the Maven Central Repository [4],
> > * source code tag release-19.0-rc1 [5],
> > * website pull request listing the new release [6].
> >
> > The vote will be open for at least 72 hours. It is adopted by majority
> > approval, with at least 3 PMC affirmative votes.
> >
> > Thanks,
> > Dawid
> >
> > [1] https://issues.apache.org/jira/projects/FLINK/versions/12353853
> > [2] https://dist.apache.org/repos/dist/dev/flink/flink-shaded-19.0-rc1
> > [3] https://dist.apache.org/repos/dist/release/flink/KEYS
> > [4]
> https://repository.apache.org/content/repositories/orgapacheflink-1743/
> > [5] https://github.com/apache/flink-shaded/releases/tag/release-19.0-rc1
> > [6] https://github.com/apache/flink-web/pull/749
> >
>
>


Re: [VOTE] FLIP-444: Native file copy support

2024-06-27 Thread Martijn Visser
+1 (binding)

On Thu, Jun 27, 2024 at 3:32 AM Rui Fan <1996fan...@gmail.com> wrote:

> +1(binding)
>
> Best,
> Rui
>
> On Wed, Jun 26, 2024 at 10:22 PM Stefan Richter
>  wrote:
>
> >
> > +1 (binding)
> >
> > Best,
> > Stefan
> >
> >
> >
> > > On 26. Jun 2024, at 16:14, Hong  wrote:
> > >
> > > +1 (binding)
> > >
> > > Hong
> > >
> > >> On 26 Jun 2024, at 12:27, Keith Lee  > > wrote:
> > >>
> > >> +1 (non binding)
> > >>
> > >> Best regards
> > >> Keith Lee
> > >>
> > >>
> > >>> On Wed, Jun 26, 2024 at 9:48 AM Zakelly Lan  > > wrote:
> > >>> +1 (binding)
> > >>> Best,
> > >>> Zakelly
> >  On Wed, Jun 26, 2024 at 3:54 PM Yuepeng Pan  > > wrote:
> >  +1 (non-binding)
> >  Best regards,
> >  Yuepeng Pan
> >  At 2024-06-26 15:27:17, "Piotr Nowojski"  > > wrote:
> > > Thanks for pointing this out Zakelly. After the discussion on the
> dev
> > > mailing list, I have updated the `PathsCopyingFileSystem` to merge
> > its
> > > functionalities with `DuplicatingFileSystem`, but I've just
> > forgotten to
> > > mention that it will be removed/replaced with
> `PathsCopyingFileSystem`.
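
For readers following along: conceptually, the merge discussed above means one
file system capability for "copy these paths natively instead of streaming
bytes through Flink". A rough sketch of such a contract, with all names made
up for illustration (the actual interface is defined in the FLIP [1]):

{code:java}
import java.io.IOException;
import java.util.List;

// Illustrative only, not the FLIP's API: a single contract covering what
// DuplicatingFileSystem did (cheap duplication on the same FS) plus native
// path copying (e.g. delegating to a tool like s5cmd).
public interface NativeCopyingFileSystemSketch {
    /** Whether the given source/destination pairs can be copied natively. */
    boolean canCopyPaths(List<String> sources, List<String> destinations);

    /** Copies the files natively, bypassing Flink's byte-streaming copy. */
    void copyPaths(List<String> sources, List<String> destinations) throws IOException;
}
{code}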
> > > Vote can be resumed.
> > > Best,
> > > Piotrek
> > > Piotr Nowojski  wrote on Tue, 25 Jun 2024 at 18:57:
> > >> Oops, I must have forgotten to update the FLIP as we discussed. I
> > will
> >  fix
> > >> it tomorrow and the vote period will be extended.
> > >> Best,
> > >> Piotrek
> > >> Zakelly Lan  wrote on Tue, 25 Jun 2024 at 13:56:
> > >>> Hi Piotrek,
> > >>> I don't see any statement about removing or renaming the
> > >>> `DuplicatingFileSystem` in the FLIP, shall we do that as
> mentioned
> > in
> >  the
> > >>> discussion thread?
> > >>> Best,
> > >>> Zakelly
> > >>> On Tue, Jun 25, 2024 at 4:58 PM Piotr Nowojski <
> > pnowoj...@apache.org
> > >>> wrote:
> >  Hi all,
> >  I would like to start a vote for the FLIP-444 [1]. The
> discussion
> > >>> thread is
> >  here [2].
> >  The vote will be open for at least 72 hours.
> >  Best,
> >  Piotrek
> >  [1]
> >
> https://cwiki.apache.org/confluence/x/rAn9EQ
> >  [2]
> > >>>
> >
> https://lists.apache.org/thread/lkwmyjt2bnmvgx4qpp82rldwmtd4516c
> >
> >
>


Re: [VOTE] FLIP-463: Schema Definition in CREATE TABLE AS Statement

2024-06-25 Thread Martijn Visser
+1 (binding)

On Sun, Jun 23, 2024 at 9:07 PM Ferenc Csaky 
wrote:

> +1 (non-binding)
>
> Best,
> Ferenc
>
>
>
>
> On Sunday, June 23rd, 2024 at 05:13, Yanquan Lv 
> wrote:
>
> >
> >
> > Thanks Sergio, +1 (non-binding)
> >
> > gongzhongqiang gongzhongqi...@apache.org wrote on Sun, Jun 23, 2024 at 10:06:
> >
> > > +1 (non-binding)
> > >
> > > Best,
> > > Zhongqiang Gong
> > >
> > > Sergio Pena ser...@confluent.io.invalid wrote on Fri, Jun 21, 2024 at 22:18:
> > >
> > > > Hi everyone,
> > > >
> > > > Thanks for all the feedback about FLIP-463: Schema Definition in
> CREATE
> > > > TABLE AS Statement [1]. The discussion thread is here [2].
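
For reference, a sketch of the kind of statement the FLIP enables — here a
CTAS with an explicitly declared constraint; the exact grammar is in the FLIP,
and the table/column names are made up:

{code:java}
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class CtasSchemaSketch {
    public static void main(String[] args) {
        TableEnvironment tEnv =
                TableEnvironment.create(EnvironmentSettings.inStreamingMode());
        // Plain CTAS already exists; FLIP-463 adds a schema part to it,
        // e.g. column definitions and constraints ("orders" is assumed to exist):
        tEnv.executeSql(
                "CREATE TABLE orders_summary ("
                        + "  PRIMARY KEY (order_id) NOT ENFORCED"
                        + ") AS SELECT order_id, amount FROM orders");
    }
}
{code}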
> > > >
> > > > I'd like to start a vote for it. The vote will be open for at least
> 72
> > > > hours unless there is an objection or insufficient votes. The FLIP
> will
> > > > be
> > > > considered accepted if 3 binding votes (from active committers
> according
> > > > to
> > > > the Flink bylaws [3]) are gathered by the community.
> > > >
> > > > [1]
> > >
> > >
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-463%3A+Schema+Definition+in+CREATE+TABLE+AS+Statement
> > >
> > > > [2] https://lists.apache.org/thread/1ryxxyyg3h9v4rbosc80zryvjk6c8k83
> > > > [3]
> > > > https://cwiki.apache.org/confluence/display/FLINK/Flink+Bylaws#FlinkBylaws-Approvals
> > > > Thanks,
> > > > Sergio Peña
>


Re: [VOTE] Release flink-connector-kafka v3.2.0, release candidate #1

2024-06-11 Thread Martijn Visser
Hi David,

That's a blocker for a Flink Kafka connector 4.0, not for 3.2.0. It's not
related to this release.

Best regards,

Martijn

On Tue, Jun 11, 2024 at 3:54 PM David Radley 
wrote:

> Hi,
> Sorry I am a bit late.
> I notice https://issues.apache.org/jira/browse/FLINK-35109 is open and a
> blocker. Can I confirm that we have mitigated the impacts of this issue in
> this release?
>   Kind regards, David.
>
> From: Danny Cranmer 
> Date: Friday, 7 June 2024 at 11:46
> To: dev@flink.apache.org 
> Subject: [EXTERNAL] Re: [VOTE] Release flink-connector-kafka v3.2.0,
> release candidate #1
> Thanks all. This vote is now closed, I will announce the results in a
> separate thread.
>
> On Fri, Jun 7, 2024 at 11:45 AM Danny Cranmer 
> wrote:
>
> > +1 (binding)
> >
> > - Release notes look good
> > - Source archive checksum and signature is correct
> > - Binary checksum and signature is correct
> > - Contents of Maven repo looks good
> > - Verified there are no binaries in the source archive
> > - Builds from source, tests pass using Java 8
> > - CI run passed [1]
> > - Tag exists in repo
> > - NOTICE and LICENSE files present and correct
> >
> > Thanks,
> > Danny
> >
> > [1]
> > https://github.com/apache/flink-connector-kafka/actions/runs/8785158288
> >
> >
> > On Fri, Jun 7, 2024 at 7:19 AM Yanquan Lv  wrote:
> >
> >> +1 (non-binding)
> >>
> >> - verified gpg signatures
> >> - verified sha512 hash
> >> - built from source code with java 8/11/17
> >> - checked Github release tag
> >> - checked the CI result
> >> - checked release notes
> >>
> >> Danny Cranmer  wrote on Mon, Apr 22, 2024 at 21:56:
> >>
> >> > Hi everyone,
> >> >
> >> > Please review and vote on release candidate #1 for
> flink-connector-kafka
> >> > v3.2.0, as follows:
> >> > [ ] +1, Approve the release
> >> > [ ] -1, Do not approve the release (please provide specific comments)
> >> >
> >> > This release supports Flink 1.18 and 1.19.
> >> >
> >> > The complete staging area is available for your review, which
> includes:
> >> > * JIRA release notes [1],
> >> > * the official Apache source release to be deployed to
> dist.apache.org
> >> > [2],
> >> > which are signed with the key with fingerprint 125FD8DB [3],
> >> > * all artifacts to be deployed to the Maven Central Repository [4],
> >> > * source code tag v3.2.0-rc1 [5],
> >> > * website pull request listing the new release [6].
> >> > * CI build of the tag [7].
> >> >
> >> > The vote will be open for at least 72 hours. It is adopted by majority
> >> > approval, with at least 3 PMC affirmative votes.
> >> >
> >> > Thanks,
> >> > Danny
> >> >
> >> > [1]
> >> >
> >> >
> >>
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522=12354209
> >> > [2]
> >> >
> >> >
> >>
> https://dist.apache.org/repos/dist/dev/flink/flink-connector-kafka-3.2.0-rc1
> >> > [3] https://dist.apache.org/repos/dist/release/flink/KEYS
> >> > [4]
> >> https://repository.apache.org/content/repositories/orgapacheflink-1723
> >> > [5]
> >> >
> https://github.com/apache/flink-connector-kafka/releases/tag/v3.2.0-rc1
> >> > [6] https://github.com/apache/flink-web/pull/738
> >> > [7] https://github.com/apache/flink-connector-kafka
> >> >
> >>
> >
>
> Unless otherwise stated above:
>
> IBM United Kingdom Limited
> Registered in England and Wales with number 741598
> Registered office: PO Box 41, North Harbour, Portsmouth, Hants. PO6 3AU
>


[jira] [Created] (FLINK-35566) Consider promoting TypeSerializer from PublicEvolving to Public

2024-06-11 Thread Martijn Visser (Jira)
Martijn Visser created FLINK-35566:
--

 Summary: Consider promoting TypeSerializer from PublicEvolving to 
Public
 Key: FLINK-35566
 URL: https://issues.apache.org/jira/browse/FLINK-35566
 Project: Flink
  Issue Type: Technical Debt
  Components: API / Core
Reporter: Martijn Visser


While working on implementing FLINK-35378, I ran into the problem that
TypeSerializer has been annotated PublicEvolving since Flink 1.0. We should
consider annotating it as Public.
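
Schematically, the promotion amounts to a one-line annotation change (class
bodies elided here; the real class is
org.apache.flink.api.common.typeutils.TypeSerializer):

{code:java}
import org.apache.flink.annotation.Public;
import org.apache.flink.annotation.PublicEvolving;

import java.io.Serializable;

// Today (schematic stand-in for the real class):
@PublicEvolving
abstract class TypeSerializerToday<T> implements Serializable { /* ... */ }

// After the proposed promotion:
@Public
abstract class TypeSerializerProposed<T> implements Serializable { /* ... */ }
{code}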



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [VOTE] Release flink-connector-kafka v3.2.0, release candidate #1

2024-06-06 Thread Martijn Visser
+1 (binding)

- Validated hashes
- Verified signature
- Verified that no binaries exist in the source archive
- Built the source with Maven
- Verified licenses
- Verified web PRs

On Tue, Jun 4, 2024 at 11:08 AM Fabian Paul  wrote:

> +1 (non-binding)
>
> - Verified signature
> - Verified checksum
> - Built release tag from source with JDK 11
> - approved docs PR
>
> Best,
> Fabian
>
> On Wed, May 22, 2024 at 2:22 PM Leonard Xu  wrote:
>
> > +1 (binding)
> >
> > - verified signatures
> > - verified hashsums
> > - built from source code with java 1.8 succeeded
> > - checked Github release tag
> > - reviewed the web PR
> > - checked the CI result,
> >   minor: the link [7] you posted should be [1]
> > - checked release notes,
> >   minor: the issue FLINK-34961 [2] should move to the next version
> >
> >
> > Best,
> > Leonard
> >
> > [1]
> > https://github.com/apache/flink-connector-kafka/actions/runs/8785158288
> > [2] https://issues.apache.org/jira/browse/FLINK-34961
> >
> >
> > > On Apr 29, 2024, at 12:34 AM, Aleksandr Pilipenko  wrote:
> > >
> > > +1 (non-binding)
> > >
> > > - Validated checksum
> > > - Verified signature
> > > - Checked that no binaries exist in the source archive
> > > - Build source
> > > - Verified web PR
> > >
> > > Thanks,
> > > Aleksandr
> > >
> > > On Sun, 28 Apr 2024 at 11:35, Hang Ruan 
> wrote:
> > >
> > >> +1 (non-binding)
> > >>
> > >> - Validated checksum hash
> > >> - Verified signature
> > >> - Verified that no binaries exist in the source archive
> > >> - Build the source with Maven and jdk8
> > >> - Verified web PR
> > >> - Check that the jar is built by jdk8
> > >>
> > >> Best,
> > >> Hang
> > >>
> > >> Ahmed Hamdy  wrote on Wed, Apr 24, 2024 at 17:21:
> > >>
> > >>> Thanks Danny,
> > >>> +1 (non-binding)
> > >>>
> > >>> - Verified Checksums and hashes
> > >>> - Verified Signatures
> > >>> - Reviewed web PR
> > >>> - github tag exists
> > >>> - Build source
> > >>>
> > >>>
> > >>> Best Regards
> > >>> Ahmed Hamdy
> > >>>
> > >>>
> > >>> On Tue, 23 Apr 2024 at 03:47, Muhammet Orazov
> > >>> 
> > >>> wrote:
> > >>>
> >  Thanks Danny, +1 (non-binding)
> > 
> >  - Checked 512 hash
> >  - Checked gpg signature
> >  - Reviewed pr
> >  - Built the source with JDK 11 & 8
> > 
> >  Best,
> >  Muhammet
> > 
> >  On 2024-04-22 13:55, Danny Cranmer wrote:
> > > Hi everyone,
> > >
> > > Please review and vote on release candidate #1 for
> > > flink-connector-kafka
> > > v3.2.0, as follows:
> > > [ ] +1, Approve the release
> > > [ ] -1, Do not approve the release (please provide specific
> comments)
> > >
> > > This release supports Flink 1.18 and 1.19.
> > >
> > > The complete staging area is available for your review, which
> > >> includes:
> > > * JIRA release notes [1],
> > > * the official Apache source release to be deployed to
> > >> dist.apache.org
> > > [2],
> > > which are signed with the key with fingerprint 125FD8DB [3],
> > > * all artifacts to be deployed to the Maven Central Repository [4],
> > > * source code tag v3.2.0-rc1 [5],
> > > * website pull request listing the new release [6].
> > > * CI build of the tag [7].
> > >
> > > The vote will be open for at least 72 hours. It is adopted by
> > >> majority
> > > approval, with at least 3 PMC affirmative votes.
> > >
> > > Thanks,
> > > Danny
> > >
> > > [1]
> > >
> > 
> > >>>
> > >>
> >
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522=12354209
> > > [2]
> > >
> > 
> > >>>
> > >>
> >
> https://dist.apache.org/repos/dist/dev/flink/flink-connector-kafka-3.2.0-rc1
> > > [3] https://dist.apache.org/repos/dist/release/flink/KEYS
> > > [4]
> > >
> > >>
> https://repository.apache.org/content/repositories/orgapacheflink-1723
> > > [5]
> > >
> > >>>
> > https://github.com/apache/flink-connector-kafka/releases/tag/v3.2.0-rc1
> > > [6] https://github.com/apache/flink-web/pull/738
> > > [7] https://github.com/apache/flink-connector-kafka
> > 
> > >>>
> > >>
> >
> >
>


Re: [DISCUSS] Proposing an LTS Release for the 1.x Line

2024-05-24 Thread Martijn Visser
Hi David,

> If there is a maintainer willing to merge backported features to v1, as
it is important to some part of the community, this should be allowed, as
different parts of the community have different priorities and timelines,

I don't think this is a good idea. Backporting a feature can cause issues
in other components that might be outside the span of expertise of the
maintainer that backported said feature, causing the overall stability to
be degraded. I think our starting point should be "We don't backport
features, unless discussed and agreed on the Dev mailing list". That still
opens up the ability to backport features but makes it clear where the bar
lies.

Best regards,

Martijn

On Fri, May 24, 2024 at 11:21 AM David Radley 
wrote:

> Hi,
> I agree with Martijn that we only put features into version 2. Back
> porting to v1 should not be business as usual for features, only for
> security and stability changes.
>
> If there is a maintainer willing to merge backported features to v1, as it
> is important to some part of the community, this should be allowed, as
> different parts of the community have different priorities and timelines,
>  Kind regards, David.
>
>
> From: Alexander Fedulov 
> Date: Thursday, 23 May 2024 at 18:50
> To: dev@flink.apache.org 
> Subject: [EXTERNAL] Re: [DISCUSS] Proposing an LTS Release for the 1.x Line
> Good point, Xintong, I incorporated this item into the FLIP.
>
> Best,
> Alex
>
> On Wed, 22 May 2024 at 10:37, Xintong Song  wrote:
>
> > Thanks, Alex.
> >
> > I see one task that needs to be done once the FLIP is approved, which I'd
> > suggest to also mention in the FLIP: to explain the LTS policy to users on
> > website / documentation (because FLIP is developer-facing) before / upon
> > releasing 1.20.
> >
> > Other than that, the FLIP LGTM.
> >
> > Best,
> >
> > Xintong
> >
> >
> >
> > On Tue, May 21, 2024 at 5:21 PM Alexander Fedulov <
> > alexander.fedu...@gmail.com> wrote:
> >
> > > Hi everyone,
> > >
> > > let's finalize this discussion. As Martijn suggested, I summarized this
> > > thread into a FLIP [1]. Please take a look and let me know if there’s
> > > anything important that I might have missed.
> > >
> > > Best,
> > > Alex
> > >
> > > [1] https://cwiki.apache.org/confluence/x/BApeEg
> > >
> > >
> > > On Tue, 23 Jan 2024 at 03:30, Rui Fan <1996fan...@gmail.com> wrote:
> > >
> > > > Thanks Martijn for the feedback!
> > > >
> > > > Sounds make sense to me! And I don't have strong opinion that allow
> > > > backporting new features to 1.x.
> > > >
> > > > Best,
> > > > Rui
> > > >
> > > > On Mon, Jan 22, 2024 at 8:56 PM Martijn Visser <
> > martijnvis...@apache.org
> > > >
> > > > wrote:
> > > >
> > > > > Hi Rui,
> > > > >
> > > > > I don't think that we should allow backporting of new features from
> > > > > the first minor version of 2.x to 1.x. If a user doesn't yet want
> to
> > > > > upgrade to 2.0, I think that's fine since we'll have a LTS for 1.x.
> > If
> > > > > a newer feature becomes available in 2.x that's interesting for the
> > > > > user, the user at that point can decide if they want to do the
> > > > > migration. It's always a case-by-case tradeoff of effort vs
> benefits,
> > > > > and I think with a LTS version that has bug fixes only we provide
> the
> > > > > users with assurance that existing bugs can get fixed, and that
> they
> > > > > can decide for themselves when they want to migrate to a newer
> > version
> > > > > with better/newer features.
> > > > >
> > > > > Best regards,
> > > > >
> > > > > Martijn
> > > > >
> > > > > On Thu, Jan 11, 2024 at 3:50 AM Rui Fan <1996fan...@gmail.com>
> > wrote:
> > > > > >
> > > > > > Thanks everyone for discussing this topic!
> > > > > >
> > > > > > My question is could we make a trade-off between Flink users
> > > > > > and Flink maintainers?
> > > > > >
> > > > > > 1. From the perspective of a Flink maintainer
> > > > > >
> > > > > > I strongly agree with Martijn's point of view, such as:
> > > > > >
> > > > > > - All

Re: [VOTE] FLIP-443: Interruptible watermark processing

2024-05-24 Thread Martijn Visser
+1 (binding)

On Fri, May 24, 2024 at 7:31 AM weijie guo 
wrote:

> +1(binding)
>
> Thanks for driving this!
>
> Best regards,
>
> Weijie
>
>
> Rui Fan <1996fan...@gmail.com> wrote on Fri, May 24, 2024 at 13:03:
>
> > +1(binding)
> >
> > Best,
> > Rui
> >
> > On Fri, May 24, 2024 at 12:01 PM Yanfei Lei  wrote:
> >
> > > Thanks for driving this!
> > >
> > > +1 (binding)
> > >
> > > Best,
> > > Yanfei
> > >
> > > Zakelly Lan  wrote on Fri, May 24, 2024 at 10:13:
> > >
> > > >
> > > > +1 (binding)
> > > >
> > > > Best,
> > > > Zakelly
> > > >
> > > > On Thu, May 23, 2024 at 8:21 PM Piotr Nowojski  >
> > > wrote:
> > > >
> > > > > Hi all,
> > > > >
> > > > > After reaching what looks like a consensus in the discussion thread
> > > [1], I
> > > > > would like to put FLIP-443 [2] to the vote.
> > > > >
> > > > > The vote will be open for at least 72 hours unless there is an
> > > objection or
> > > > > insufficient votes.
> > > > >
> > > > > [1]
> https://lists.apache.org/thread/flxm7rphvfgqdn2gq2z0bb7kl007olpz
> > > > > [2] https://cwiki.apache.org/confluence/x/qgn9EQ
> > > > >
> > > > > Best,
> > > > > Piotrek
> > > > >
> > >
> >
>


[RESULT][VOTE] FLIP-453: Promote Unified Sink API V2 to Public and Deprecate SinkFunction

2024-05-17 Thread Martijn Visser
Hi all,

I'm happy to announce that we have unanimously approved FLIP-453.
There were 12 votes, of which 4 were binding.

- weijie guo (binding)
- Ferenc Csaky (non-binding)
- Rui Fan (binding)
- Zhongqiang Gong (non-binding)
- Péter Váry (non-binding)
- Rodrigo Meneses (non-binding)
- lorenzo affetti (non-binding)
- Jing Ge (binding)
- Ahmed Hamdy (non-binding)
- Yuepeng Pan (non-binding)
- Hang Ruan (non-binding)
- Leonard Xu (binding)

Best regards,

Martijn


Re: [VOTE] FLIP-453: Promote Unified Sink API V2 to Public and Deprecate SinkFunction

2024-05-17 Thread Martijn Visser
Hi all,

Thanks for all your votes, I hereby close the vote and I'll announce the
results in a separate email.

Best regards,

Martijn

On Fri, May 17, 2024 at 11:48 AM Leonard Xu  wrote:

> +1(binding)
>
> Best,
> Leonard
>
> > On May 17, 2024, at 5:40 PM, Hang Ruan  wrote:
> >
> > +1(non-binding)
> >
> > Best,
> > Hang
> >
> > Yuepeng Pan  wrote on Fri, May 17, 2024 at 16:15:
> >
> >> +1(non-binding)
> >>
> >>
> >> Best,
> >> Yuepeng Pan
> >>
> >>
> >> At 2024-05-15 21:09:04, "Jing Ge"  wrote:
> >>> +1(binding) Thanks Martijn!
> >>>
> >>> Best regards,
> >>> Jing
> >>>
> >>> On Wed, May 15, 2024 at 7:00 PM Muhammet Orazov
> >>>  wrote:
> >>>
> >>>> Thanks Martijn driving this! +1 (non-binding)
> >>>>
> >>>> Best,
> >>>> Muhammet
> >>>>
> >>>> On 2024-05-14 06:43, Martijn Visser wrote:
> >>>>> Hi everyone,
> >>>>>
> >>>>> With no more discussions being open in the thread [1] I would like to
> >>>>> start
> >>>>> a vote on FLIP-453: Promote Unified Sink API V2 to Public and
> >> Deprecate
> >>>>> SinkFunction [2]
> >>>>>
> >>>>> The vote will be open for at least 72 hours unless there is an
> >>>>> objection or
> >>>>> insufficient votes.
> >>>>>
> >>>>> Best regards,
> >>>>>
> >>>>> Martijn
> >>>>>
> >>>>> [1] https://lists.apache.org/thread/hod6bg421bzwhbfv60lwsck7r81dvo59
> >>>>> [2]
> >>>>>
> >>>>
> >>
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-453%3A+Promote+Unified+Sink+API+V2+to+Public+and+Deprecate+SinkFunction
> >>>>
> >>
>
>


Re: [DISCUSS] FLINK-35369: Improve `Table API and SQL` page or add new page to guide new users to right Flink SQL option

2024-05-16 Thread Martijn Visser
Hi Keith Lee,

Yeah, I think it makes sense that large changes to documentation should be
FLIP worthy. We've done it before when we reorganized the project website
as well.

From a strict point of view, we can't retrospectively change an accepted
FLIP: we can create a new FLIP that refers to the old one. I would do that.

Best regards,

Martijn

On Thu, May 16, 2024 at 11:13 PM Keith Lee 
wrote:

> Hi Martijn
>
> Thank you for pointing me to this FLIP. I initially thought of making
> limited scoped improvement hence the Jira.
>
> Given that the FLIP was voted on more than four years ago, would it make
> sense for me to update the FLIP, reopen discussion, and redrive a vote
> for it?
>
> On a side note, should FLIP criteria [1] be updated so that reorganisation
> of public documentation falls under it as well?
>
> Best regards
>
> Keith Lee
>
> [1]
>
> https://cwiki.apache.org/confluence/plugins/servlet/mobile?contentId=65145551#FlinkImprovementProposals-Whatisconsidereda%22majorchange%22thatneedsaFLIP
>
>
> On Thu, 16 May 2024 at 13:03, Martijn Visser 
> wrote:
>
> > Hi Keith Lee,
> >
> > There is actually an open and accepted FLIP [1] that was just never
> > implemented. I think that needs to be taken into account as well, since
> it
> > was voted on. I think it also makes more sense to have a proposal as a
> > FLIP, instead of under a Jira.
> >
> > Best regards,
> >
> > Martijn
> >
> > [1]
> > https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=127405685
> >
> > On Wed, May 15, 2024 at 10:58 PM Keith Lee 
> > wrote:
> >
> > > Hello everyone,
> > >
> > > I'd like to start a discussion on improving `Table API and SQL`
> overview
> > > page or add a new page to give new users quick overview of options
> > > available. I've opted for improvement Jira instead of a FLIP as the
> > change
> > > does not affect public interfaces or Flink runtime.
> > >
> > > https://issues.apache.org/jira/browse/FLINK-35369
> > >
> > > Appreciate your thoughts and suggestions.
> > >
> > > Best regards
> > > Keith Lee
> > >
> >
>


[jira] [Created] (FLINK-35378) [FLIP-453] Promote Unified Sink API V2 to Public and Deprecate SinkFunction

2024-05-16 Thread Martijn Visser (Jira)
Martijn Visser created FLINK-35378:
--

 Summary: [FLIP-453] Promote Unified Sink API V2 to Public and 
Deprecate SinkFunction
 Key: FLINK-35378
 URL: https://issues.apache.org/jira/browse/FLINK-35378
 Project: Flink
  Issue Type: Technical Debt
  Components: API / Core
Reporter: Martijn Visser
Assignee: Martijn Visser


https://cwiki.apache.org/confluence/pages/resumedraft.action?draftId=303794871=af4ace88-98b7-4a53-aece-cd67d2f91a15;



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [DISCUSS] FLINK-35369: Improve `Table API and SQL` page or add new page to guide new users to right Flink SQL option

2024-05-16 Thread Martijn Visser
Hi Keith Lee,

There is actually an open and accepted FLIP [1] that was just never
implemented. I think that needs to be taken into account as well, since it
was voted on. I think it also makes more sense to have a proposal as a
FLIP, instead of under a Jira.

Best regards,

Martijn

[1]
https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=127405685

On Wed, May 15, 2024 at 10:58 PM Keith Lee 
wrote:

> Hello everyone,
>
> I'd like to start a discussion on improving `Table API and SQL` overview
> page or add a new page to give new users quick overview of options
> available. I've opted for improvement Jira instead of a FLIP as the change
> does not affect public interfaces or Flink runtime.
>
> https://issues.apache.org/jira/browse/FLINK-35369
>
> Appreciate your thoughts and suggestions.
>
> Best regards
> Keith Lee
>


[jira] [Created] (FLINK-35350) Add documentation for Kudu

2024-05-14 Thread Martijn Visser (Jira)
Martijn Visser created FLINK-35350:
--

 Summary: Add documentation for Kudu
 Key: FLINK-35350
 URL: https://issues.apache.org/jira/browse/FLINK-35350
 Project: Flink
  Issue Type: Sub-task
  Components: Connectors / Kudu
Reporter: Martijn Visser
 Fix For: kudu-2.0.0


There's currently no documentation for Kudu; this should be added



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[VOTE] FLIP-453: Promote Unified Sink API V2 to Public and Deprecate SinkFunction

2024-05-14 Thread Martijn Visser
Hi everyone,

With no more discussions being open in the thread [1] I would like to start
a vote on FLIP-453: Promote Unified Sink API V2 to Public and Deprecate
SinkFunction [2]

The vote will be open for at least 72 hours unless there is an objection or
insufficient votes.

Best regards,

Martijn

[1] https://lists.apache.org/thread/hod6bg421bzwhbfv60lwsck7r81dvo59
[2]
https://cwiki.apache.org/confluence/display/FLINK/FLIP-453%3A+Promote+Unified+Sink+API+V2+to+Public+and+Deprecate+SinkFunction


Re: [jira] [Updated] (FLINK-35336) SQL failed to restore from savepoint after change in default-parallelism

2024-05-13 Thread Martijn Visser
Hi Keith Lee,

Yes, a FLIP will be required.

Best regards,

Martijn

On Mon, May 13, 2024 at 2:51 PM Keith Lee 
wrote:

> Thank you Martijn for confirming and switching the Jira to New Feature.
>
> I intend to explore approaches on implementing the feature to allow for:
>
> 1. Configurations that will make Flink SQL job restore robust to
> parallelism changes
> 2. Configurations that will allow best effort Flink SQL job restore after
> Flink statement changes.
>
> How best should such a feature request be driven? Will a FLIP be necessary?
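
For comparison, the closest existing escape hatch is skipping unmatched state
on restore — which drops that state rather than remapping it. A sketch of the
programmatic equivalent of the --allowNonRestoredState CLI flag quoted further
down in the issue, reusing the savepoint path from the report:

{code:java}
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class RestoreIgnoringUnclaimedState {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.setString("execution.savepoint.path",
                "/tmp/flink-savepoints/savepoint-4392e6-575fa6b692ff");
        // Same effect as passing --allowNonRestoredState to `flink run`:
        conf.setString("execution.savepoint.ignore-unclaimed-state", "true");
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment(conf);
        // ... build the job and call env.execute() as usual ...
    }
}
{code}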
>
> Best regards
> Keith Lee
>
>
> On Mon, May 13, 2024 at 1:27 PM Martijn Visser (Jira) 
> wrote:
>
> >
> >  [
> >
> https://issues.apache.org/jira/browse/FLINK-35336?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
> > ]
> >
> > Martijn Visser updated FLINK-35336:
> > ---
> > Issue Type: New Feature  (was: Bug)
> >
> > > SQL failed to restore from savepoint after change in
> default-parallelism
> > >
> 
> > >
> > > Key: FLINK-35336
> > > URL: https://issues.apache.org/jira/browse/FLINK-35336
> > > Project: Flink
> > >  Issue Type: New Feature
> > >  Components: Table SQL / Planner
> > >Affects Versions: 1.18.1
> > > Environment: Flink SQL Client, Flink 1.18.1 on MacOS
> > >Reporter: Keith Lee
> > >Priority: Major
> > >
> > > After bumping 'table.exec.resource.default-parallelism' from 1 to 4, I
> > am observing the following exception on restoring job from savepoint with
> > an unmodified statement set.
> > >
> > > {quote}[ERROR] Could not execute SQL statement. Reason:
> > > java.lang.IllegalStateException: Failed to rollback to
> > checkpoint/savepoint
> >
> file:/tmp/flink-savepoints/savepoint-4392e6-575fa6b692ff.
> > Cannot map checkpoint/savepoint state for operator
> > 46ba9b22862c3bbe9373c6abee964b2a to the new program, because the operator
> > is not available in the new program. If you want to allow to skip this,
> you
> > can set the --allowNonRestoredState option on the CLI.
> > > {quote}
> > > When started without savepoints, the jobgraph differs for the jobs
> > despite identical statements being ran.
> > > There are 2 operators when default parallelism is 1.
> > > {quote}A: Source: UserBehaviourKafkaSource[68] -> (Calc[69] ->
> > StreamRecordTimestampInserter[70] -> StreamingFileWriter -> Sink: end,
> > Calc[71] -> LocalWindowAggregate[72])
> > > B: GlobalWindowAggregate[74] -> Calc[75] -> Sink:
> > CampaignAggregationsJDBC[76]
> > > {quote}
> > > Three operators when default parallelism is 4.
> > > {quote}A: Source: UserBehaviourKafkaSource[86] -> (Calc[87] ->
> > StreamRecordTimestampInserter[88] -> StreamingFileWriter, Calc[89] ->
> > LocalWindowAggregate[90])
> > > B: Sink: end
> > > C: GlobalWindowAggregate[92] -> Calc[93] -> Sink:
> > CampaignAggregationsJDBC[94]
> > > {quote}
> > >
> > > Notice that the operator 'Sink: end' is separated out when parallelism
> > is set to 4, causing the incompatibility in job graph. EXPLAIN PLAN did
> not
> > show any difference between syntax tree, physical plan or execution plan.
> > > I have attempted various configurations in `table.optimizer.*`.
> > > Steps to reproduce
> > > {quote}SET 'table.exec.resource.default-parallelism' = '1';
> > > EXECUTE STATEMENT SET BEGIN
> > > INSERT INTO UserErrorExperienceS3Sink (user_id, user_session,
> > interaction_type, interaction_target, interaction_tags, event_date,
> > event_hour, event_time)
> > > SELECT
> > > user_id,
> > > user_session,
> > > interaction_type,
> > > interaction_target,
> > > interaction_tags,
> > > DATE_FORMAT(event_time , 'yyyy-MM-dd'),
> > > DATE_FORMAT(event_time , 'HH'),
> > > event_time
> > > FROM UserBehaviourKafkaSource
> > > WHERE
> > > interaction_result Like '%ERROR%';
> > > INSERT INTO CampaignAggregationsJDBC
> > > SELECT
> > > CONCAT_WS('/', interaction_tags, interaction_result,
> > DATE_FORMAT(window_start, 'yyyy-MM-DD HH:mm:

[jira] [Created] (FLINK-35333) JdbcXaSinkTestBase fails in weekly Flink JDBC Connector tests

2024-05-13 Thread Martijn Visser (Jira)
Martijn Visser created FLINK-35333:
--

 Summary: JdbcXaSinkTestBase fails in weekly Flink JDBC Connector 
tests
 Key: FLINK-35333
 URL: https://issues.apache.org/jira/browse/FLINK-35333
 Project: Flink
  Issue Type: Bug
  Components: Connectors / JDBC
Affects Versions: jdbc-3.2.0
Reporter: Martijn Visser


https://github.com/apache/flink-connector-jdbc/actions/runs/9047366679/job/24859224407#step:15:147

{code:java}
Error:  Failed to execute goal 
org.apache.maven.plugins:maven-compiler-plugin:3.8.0:testCompile 
(default-testCompile) on project flink-connector-jdbc: Compilation failure
Error:  
/home/runner/work/flink-connector-jdbc/flink-connector-jdbc/flink-connector-jdbc/src/test/java/org/apache/flink/connector/jdbc/xa/JdbcXaSinkTestBase.java:[164,37]
  is not 
abstract and does not override abstract method getTaskInfo() in 
org.apache.flink.api.common.functions.RuntimeContext
{code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [DISCUSS] FLIP-453: Promote Unified Sink API V2 to Public and Deprecate SinkFunction

2024-05-13 Thread Martijn Visser
Hi Ahmed,

There's no reason to refrain from releases for Flink 1.* versions: these
connector implementations are still supported in the Flink 1.* series.

Best regards,

Martijn

On Sun, May 12, 2024 at 5:55 PM Ahmed Hamdy  wrote:

> Thanks Martijn
> I believe you missed my question,
>
> Should this change take place in 1.20, what are the planned release steps
> > for connectors that only offer a deprecated interface in this case (i.e.
> > RabbitMQ, Cassandra, pubsub, HBase)? Are we going to refrain from
> releases
> > till the blockers are implemented?
> >
>
> Could you please clarify?
>
> Best Regards
> Ahmed Hamdy
>
>
> On Sun, 12 May 2024 at 14:07, Martijn Visser 
> wrote:
>
> > Hi all,
> >
> > If there are no more considerations, I'll open up a vote in the next
> couple
> > of days.
> >
> > Best regards,
> >
> > Martijn
> >
> > On Wed, May 8, 2024 at 4:08 AM Hongshun Wang 
> > wrote:
> >
> > > Hi Martijn, thanks for the proposal, +1 from me. Some sinks still use
> > > SinkFunction; it's time to take a step forward.
> > >
> > > Best,
> > > Hongshun
> > >
> > > On Mon, May 6, 2024 at 5:44 PM Leonard Xu  wrote:
> > >
> > > > +1 from my side, thanks Martijn for the effort.
> > > >
> > > > Best,
> > > > Leonard
> > > >
> > > > > On May 4, 2024, at 7:41 PM, Ahmed Hamdy  wrote:
> > > > >
> > > > > Hi Martijn
> > > > > Thanks for the proposal +1 from me.
> > > > > Should this change take place in 1.20, what are the planned release
> > > steps
> > > > > for connectors that only offer a deprecated interface in this case
> > > (i.e.
> > > > > RabbitMQ, Cassandra, pubsub, HBase)? Are we going to refrain from
> > > > releases
> > > > > that support 1.20+ till the blockers are implemented?
> > > > > Best Regards
> > > > > Ahmed Hamdy
> > > > >
> > > > >
> > > > > On Fri, 3 May 2024 at 14:32, Péter Váry <
> peter.vary.apa...@gmail.com
> > >
> > > > wrote:
> > > > >
> > > > >>> With regards to FLINK-35149, the fix version indicates a change
> at
> > > > Flink
> > > > >> CDC; is that indeed correct, or does it require a change in the
> > SinkV2
> > > > >> interface?
> > > > >>
> > > > >> The fix doesn't need change in SinkV2, so we are good there.
> > > > >> The issue is that the new SinkV2
> > > > SupportsCommitter/SupportsPreWriteTopology
> > > > >> doesn't work with the CDC yet.
> > > > >>
> > > > >> Martijn Visser  ezt írta (időpont:
> 2024.
> > > máj.
> > > > >> 3.,
> > > > >> P, 14:06):
> > > > >>
> > > > >>> Hi Ferenc,
> > > > >>>
> > > > >>> You're right, 1.20 it is :)
> > > > >>>
> > > > >>> I've assigned the HBase one to you!
> > > > >>>
> > > > >>> Thanks,
> > > > >>>
> > > > >>> Martijn
> > > > >>>
> > > > >>> On Fri, May 3, 2024 at 1:55 PM Ferenc Csaky
> > >  > > > >
> > > > >>> wrote:
> > > > >>>
> > > > >>>> Hi Martijn,
> > > > >>>>
> > > > >>>> +1 for the proposal.
> > > > >>>>
> > > > >>>>> targeted for Flink 1.19
> > > > >>>>
> > > > >>>> I guess you meant Flink 1.20 here.
> > > > >>>>
> > > > >>>> Also, I volunteer to take updating the HBase sink, feel free to
> > > assign
> > > > >>>> that task to me.
> > > > >>>>
> > > > >>>> Best,
> > > > >>>> Ferenc
> > > > >>>>
> > > > >>>>
> > > > >>>>
> > > > >>>>
> > > > >>>> On Friday, May 3rd, 2024 at 10:20, Martijn Visser <
> > > > >>>> martijnvis...@apache.org> wrote:
> > > > >>>>
> > > > >>>>>
> > > > >>>>>
> > > > >

Re: [DISCUSS] FLIP-453: Promote Unified Sink API V2 to Public and Deprecate SinkFunction

2024-05-12 Thread Martijn Visser
Hi all,

If there are no more considerations, I'll open up a vote in the next couple
of days.

Best regards,

Martijn

On Wed, May 8, 2024 at 4:08 AM Hongshun Wang 
wrote:

> Hi Martijn, thanks for the proposal, +1 from me. Some sinks still use
> SinkFunction; it's time to take a step forward.
>
> Best,
> Hongshun
>
> On Mon, May 6, 2024 at 5:44 PM Leonard Xu  wrote:
>
> > +1 from my side, thanks Martijn for the effort.
> >
> > Best,
> > Leonard
> >
> > > On May 4, 2024, at 7:41 PM, Ahmed Hamdy  wrote:
> > >
> > > Hi Martijn
> > > Thanks for the proposal +1 from me.
> > > Should this change take place in 1.20, what are the planned release
> steps
> > > for connectors that only offer a deprecated interface in this case
> (i.e.
> > > RabbitMQ, Cassandra, pubsub, HBase)? Are we going to refrain from
> > releases
> > > that support 1.20+ till the blockers are implemented?
> > > Best Regards
> > > Ahmed Hamdy
> > >
> > >
> > > On Fri, 3 May 2024 at 14:32, Péter Váry 
> > wrote:
> > >
> > >>> With regards to FLINK-35149, the fix version indicates a change at
> > Flink
> > >> CDC; is that indeed correct, or does it require a change in the SinkV2
> > >> interface?
> > >>
> > >> The fix doesn't need change in SinkV2, so we are good there.
> > >> The issue is that the new SinkV2
> > SupportsCommitter/SupportsPreWriteTopology
> > >> doesn't work with the CDC yet.
> > >>
> > >> Martijn Visser  ezt írta (időpont: 2024.
> máj.
> > >> 3.,
> > >> P, 14:06):
> > >>
> > >>> Hi Ferenc,
> > >>>
> > >>> You're right, 1.20 it is :)
> > >>>
> > >>> I've assigned the HBase one to you!
> > >>>
> > >>> Thanks,
> > >>>
> > >>> Martijn
> > >>>
> > >>> On Fri, May 3, 2024 at 1:55 PM Ferenc Csaky
>  > >
> > >>> wrote:
> > >>>
> > >>>> Hi Martijn,
> > >>>>
> > >>>> +1 for the proposal.
> > >>>>
> > >>>>> targeted for Flink 1.19
> > >>>>
> > >>>> I guess you meant Flink 1.20 here.
> > >>>>
> > >>>> Also, I volunteer to take updating the HBase sink, feel free to
> assign
> > >>>> that task to me.
> > >>>>
> > >>>> Best,
> > >>>> Ferenc
> > >>>>
> > >>>>
> > >>>>
> > >>>>
> > >>>> On Friday, May 3rd, 2024 at 10:20, Martijn Visser <
> > >>>> martijnvis...@apache.org> wrote:
> > >>>>
> > >>>>>
> > >>>>>
> > >>>>> Hi Peter,
> > >>>>>
> > >>>>> I'll add it for completeness, thanks!
> > >>>>> With regards to FLINK-35149, the fix version indicates a change at
> > >>> Flink
> > >>>>> CDC; is that indeed correct, or does it require a change in the
> > >> SinkV2
> > >>>>> interface?
> > >>>>>
> > >>>>> Best regards,
> > >>>>>
> > >>>>> Martijn
> > >>>>>
> > >>>>>
> > >>>>> On Fri, May 3, 2024 at 7:47 AM Péter Váry
> > >> peter.vary.apa...@gmail.com
> > >>>>>
> > >>>>> wrote:
> > >>>>>
> > >>>>>> Hi Martijn,
> > >>>>>>
> > >>>>>> We might want to add FLIP-371 [1] to the list. (Or we aim only for
> > >>>> higher
> > >>>>>> level FLIPs?)
> > >>>>>>
> > >>>>>> We are in the process of using the new API in Iceberg connector
> > >> [2] -
> > >>>> so
> > >>>>>> far, so good.
> > >>>>>>
> > >>>>>> I know of one minor known issue about the sink [3], which should
> be
> > >>>> ready
> > >>>>>> for the release.
> > >>>>>>
> > >>>>>> All-in-all, I think we are in good shape, and we could move
> forward
> > >>>> with
> > >>>>>> the promotion.

Re: [DISCUSS] FLIP-453: Promote Unified Sink API V2 to Public and Deprecate SinkFunction

2024-05-03 Thread Martijn Visser
Hi Ferenc,

You're right, 1.20 it is :)

I've assigned the HBase one to you!

Thanks,

Martijn

On Fri, May 3, 2024 at 1:55 PM Ferenc Csaky 
wrote:

> Hi Martijn,
>
> +1 for the proposal.
>
> > targeted for Flink 1.19
>
> I guess you meant Flink 1.20 here.
>
> Also, I volunteer to take updating the HBase sink, feel free to assign
> that task to me.
>
> Best,
> Ferenc
>
>
>
>
> On Friday, May 3rd, 2024 at 10:20, Martijn Visser <
> martijnvis...@apache.org> wrote:
>
> >
> >
> > Hi Peter,
> >
> > I'll add it for completeness, thanks!
> > With regards to FLINK-35149, the fix version indicates a change at Flink
> > CDC; is that indeed correct, or does it require a change in the SinkV2
> > interface?
> >
> > Best regards,
> >
> > Martijn
> >
> >
> > On Fri, May 3, 2024 at 7:47 AM Péter Váry peter.vary.apa...@gmail.com
> >
> > wrote:
> >
> > > Hi Martijn,
> > >
> > > We might want to add FLIP-371 [1] to the list. (Or we aim only for
> higher
> > > level FLIPs?)
> > >
> > > We are in the process of using the new API in Iceberg connector [2] -
> so
> > > far, so good.
> > >
> > > I know of one minor known issue about the sink [3], which should be
> ready
> > > for the release.
> > >
> > > All-in-all, I think we are in good shape, and we could move forward
> with
> > > the promotion.
> > >
> > > Thanks,
> > > Peter
> > >
> > > [1] -
> > >
> > >
> https://cwiki.apache.org/confluence/plugins/servlet/mobile?contentId=263430387
> > > [2] - https://github.com/apache/iceberg/pull/10179
> > > [3] - https://issues.apache.org/jira/browse/FLINK-35149
> > >
> > > On Thu, May 2, 2024, 09:47 Muhammet Orazov
> mor+fl...@morazow.com.invalid
> > > wrote:
> > >
> > > > Got it, thanks!
> > > >
> > > > On 2024-05-02 06:53, Martijn Visser wrote:
> > > >
> > > > > Hi Muhammet,
> > > > >
> > > > > Thanks for joining the discussion! The changes in this FLIP would
> be
> > > > > targeted for Flink 1.19, since it's only a matter of changing the
> > > > > annotation.
> > > > >
> > > > > Best regards,
> > > > >
> > > > > Martijn
> > > > >
> > > > > On Thu, May 2, 2024 at 7:26 AM Muhammet Orazov
> mor+fl...@morazow.com
> > > > > wrote:
> > > > >
> > > > > > Hello Martijn,
> > > > > >
> > > > > > Thanks for the FLIP and detailed history of changes, +1.
> > > > > >
> > > > > > Would FLIP changes target for 2.0? I think it would be good
> > > > > > to have clear APIs on 2.0 release.
> > > > > >
> > > > > > Best,
> > > > > > Muhammet
> > > > > >
> > > > > > On 2024-05-01 15:30, Martijn Visser wrote:
> > > > > >
> > > > > > > Hi everyone,
> > > > > > >
> > > > > > > I would like to start a discussion on FLIP-453: Promote
> Unified Sink
> > > > > > > API V2
> > > > > > > to Public and Deprecate SinkFunction
> > > > > > > https://cwiki.apache.org/confluence/x/rIobEg
> > > > > > >
> > > > > > > This FLIP proposes to promote the Unified Sink API V2 from
> > > > > > > PublicEvolving
> > > > > > > to Public and to mark the SinkFunction as Deprecated.
> > > > > > >
> > > > > > > I'm looking forward to your thoughts.
> > > > > > >
> > > > > > > Best regards,
> > > > > > >
> > > > > > > Martijn
>


Re: [DISCUSS] FLIP-453: Promote Unified Sink API V2 to Public and Deprecate SinkFunction

2024-05-03 Thread Martijn Visser
Hi Peter,

I'll add it for completeness, thanks!
With regards to FLINK-35149, the fix version indicates a change at Flink
CDC; is that indeed correct, or does it require a change in the SinkV2
interface?

Best regards,

Martijn


On Fri, May 3, 2024 at 7:47 AM Péter Váry 
wrote:

> Hi Martijn,
>
> We might want to add FLIP-371 [1] to the list. (Or we aim only for higher
> level FLIPs?)
>
> We are in the process of using the new API in Iceberg connector [2] - so
> far, so good.
>
> I know of one minor known issue about the sink [3], which should be ready
> for the release.
>
> All-in-all, I think we are in good shape, and we could move forward with
> the promotion.
>
> Thanks,
> Peter
>
> [1] -
>
> https://cwiki.apache.org/confluence/plugins/servlet/mobile?contentId=263430387
> [2] - https://github.com/apache/iceberg/pull/10179
> [3] - https://issues.apache.org/jira/browse/FLINK-35149
>
>
> On Thu, May 2, 2024, 09:47 Muhammet Orazov 
> wrote:
>
> > Got it, thanks!
> >
> > On 2024-05-02 06:53, Martijn Visser wrote:
> > > Hi Muhammet,
> > >
> > > Thanks for joining the discussion! The changes in this FLIP would be
> > > targeted for Flink 1.19, since it's only a matter of changing the
> > > annotation.
> > >
> > > Best regards,
> > >
> > > Martijn
> > >
> > > On Thu, May 2, 2024 at 7:26 AM Muhammet Orazov 
> > > wrote:
> > >
> > >> Hello Martijn,
> > >>
> > >> Thanks for the FLIP and detailed history of changes, +1.
> > >>
> > >> Would FLIP changes target for 2.0? I think it would be good
> > >> to have clear APIs on 2.0 release.
> > >>
> > >> Best,
> > >> Muhammet
> > >>
> > >> On 2024-05-01 15:30, Martijn Visser wrote:
> > >> > Hi everyone,
> > >> >
> > >> > I would like to start a discussion on FLIP-453: Promote Unified Sink
> > >> > API V2
> > >> > to Public and Deprecate SinkFunction
> > >> > https://cwiki.apache.org/confluence/x/rIobEg
> > >> >
> > >> > This FLIP proposes to promote the Unified Sink API V2 from
> > >> > PublicEvolving
> > >> > to Public and to mark the SinkFunction as Deprecated.
> > >> >
> > >> > I'm looking forward to your thoughts.
> > >> >
> > >> > Best regards,
> > >> >
> > >> > Martijn
> > >>
> >
>


Re: FW: RE: [DISCUSS] FLIP-XXX Apicurio-avro format

2024-05-02 Thread Martijn Visser
Done :)

On Thu, May 2, 2024 at 11:01 AM David Radley 
wrote:

> Hi Martijn,
> Thank you very much for looking at this. In response to your feedback, I
> produced a reduced version which is on this link.
>
>
> https://docs.google.com/document/d/1J1E-cE-X2H3-kw4rNjLn71OGPQk_Yl1iGX4-eCHWLgE/edit?usp=sharing
>
> The original version you have copied is a bit out-dated and verbose.
> Please could you replace the Flip with content from the above link,
> Kind regards, David,
>
> From: Martijn Visser 
> Date: Wednesday, 1 May 2024 at 16:31
> To: dev@flink.apache.org 
> Subject: [EXTERNAL] Re: FW: RE: [DISCUSS] FLIP-XXX Apicurio-avro format
> Hi David,
>
> I've copied and pasted it into
>
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-454%3A+New+Apicurio+Avro+format;
> please take a look if it's as expected.
>
> Best regards,
>
> Martijn
>
> On Wed, May 1, 2024 at 3:47 PM David Radley 
> wrote:
>
> > Hi Martijn,
> > Any news?
> >Kind regards, David.
> >
> >
> > From: David Radley 
> > Date: Monday, 22 April 2024 at 09:48
> > To: dev@flink.apache.org 
> > Subject: FW: [EXTERNAL] RE: [DISCUSS] FLIP-XXX Apicurio-avro format
> > Hi Martijn,
> > A gentle nudge, is this ok for you or one of the PMC or committers to
> > create a Flip now?
> >Kind regards, David.
> >
> > From: David Radley 
> > Date: Monday, 15 April 2024 at 12:29
> > To: dev@flink.apache.org 
> > Subject: Re: [EXTERNAL] RE: [DISCUSS] FLIP-XXX Apicurio-avro format
> > Hi Martijn,
> > Thanks for looking at this. I have used the template in a new  Google Doc
> >
> https://docs.google.com/document/d/1J1E-cE-X2H3-kw4rNjLn71OGPQk_Yl1iGX4-eCHWLgE/edit?usp=sharing
> .
> > I have significantly reduced the content in the Flip, in line with what I
> > see as the template and its usage. If this is too much or too little, I
> can
> > amend,
> >
> > Kind regards, David.
> >
> > From: Martijn Visser 
> > Date: Friday, 12 April 2024 at 18:11
> > To: dev@flink.apache.org 
> > Subject: Re: [EXTERNAL] RE: [DISCUSS] FLIP-XXX Apicurio-avro format
> > Hi David,
> >
> > I tried, but the format wasn't as the FLIP template expects, so I ended
> up
> > needing to change the entire formatting and that was just too much work
> to
> > be honest. If you could make sure that especially the headers match with
> > the FLIP template, and that all of the contents from the FLIP template is
> > there, that would make things much easier.
> >
> > Thanks,
> >
> > Martijn
> >
> > On Fri, Apr 12, 2024 at 6:08 PM David Radley 
> > wrote:
> >
> > > Hi,
> > > A gentle nudge. Please could a committer/PMC member raise the Flip for
> > > this,
> > >   Kind regards, David.
> > >
> > >
> > > From: David Radley 
> > > Date: Monday, 8 April 2024 at 09:40
> > > To: dev@flink.apache.org 
> > > Subject: [EXTERNAL] RE: [DISCUSS] FLIP-XXX Apicurio-avro format
> > > Hi,
> > > I have posted a Google Doc [0] to the mailing list for a discussion
> > thread
> > > for a Flip proposal to introduce an Apicurio-avro format. The
> discussions
> > > have been resolved, please could a committer/PMC member copy the
> contents
> > > from the Google Doc, and create a FLIP number for this, as per the
> > process
> > > [1],
> > >   Kind regards, David.
> > > [0]
> > >
> > >
> >
> https://docs.google.com/document/d/14LWZPVFQ7F9mryJPdKXb4l32n7B0iWYkcOdEd1xTC7w/edit?usp=sharing
> > >
> > > [1]
> > >
> > >
> >
> https://cwiki.apache.org/confluence/display/FLINK/Flink+Improvement+Proposals#FlinkImprovementProposals-CreateyourOwnFLIP
> > >
> > > From: Jeyhun Karimov 
> > > Date: Friday, 22 March 2024 at 13:05
> > > To: dev@flink.apache.org 
> > > Subject: [EXTERNAL] Re: [DISCUSS] FLIP-XXX Apicurio-avro format
> > > Hi David,
> > >
> > > Thanks a lot for clarification.
> > > Sounds good to me.
> > >
> > > Regards,
> > > Jeyhun
> > >
> > > On Fri, Mar 22, 2024 at 10:54 AM David Radley  >
> > > wrote:
> > >
> > > > Hi Jeyhun,
> > > > Thanks for your feedback.
> > > >
> > > > So for outbound messages, the message includes the global ID. We
> > register
> > > > the schema and match on the artifact id. So if the schema then

Re: [DISCUSS] FLIP-453: Promote Unified Sink API V2 to Public and Deprecate SinkFunction

2024-05-02 Thread Martijn Visser
Hi Muhammet,

Thanks for joining the discussion! The changes in this FLIP would be
targeted for Flink 1.19, since it's only a matter of changing the
annotation.

Best regards,

Martijn

On Thu, May 2, 2024 at 7:26 AM Muhammet Orazov 
wrote:

> Hello Martijn,
>
> Thanks for the FLIP and detailed history of changes, +1.
>
> Would FLIP changes target for 2.0? I think it would be good
> to have clear APIs on 2.0 release.
>
> Best,
> Muhammet
>
> On 2024-05-01 15:30, Martijn Visser wrote:
> > Hi everyone,
> >
> > I would like to start a discussion on FLIP-453: Promote Unified Sink
> > API V2
> > to Public and Deprecate SinkFunction
> > https://cwiki.apache.org/confluence/x/rIobEg
> >
> > This FLIP proposes to promote the Unified Sink API V2 from
> > PublicEvolving
> > to Public and to mark the SinkFunction as Deprecated.
> >
> > I'm looking forward to your thoughts.
> >
> > Best regards,
> >
> > Martijn
>


[DISCUSS] FLIP-453: Promote Unified Sink API V2 to Public and Deprecate SinkFunction

2024-05-01 Thread Martijn Visser
Hi everyone,

I would like to start a discussion on FLIP-453: Promote Unified Sink API V2
to Public and Deprecate SinkFunction
https://cwiki.apache.org/confluence/x/rIobEg

This FLIP proposes to promote the Unified Sink API V2 from PublicEvolving
to Public and to mark the SinkFunction as Deprecated.
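
For readers who haven't followed the Sink V2 work, below is a minimal
sketch of what a sink looks like on the Unified Sink API V2 (assuming the
org.apache.flink.api.connector.sink2 interfaces as of Flink 1.19; the class
is illustrative and not taken from the FLIP). Jobs attach it via
stream.sinkTo(new StdoutSinkV2()) instead of the deprecated
stream.addSink(someSinkFunction):

import org.apache.flink.api.connector.sink2.Sink;
import org.apache.flink.api.connector.sink2.SinkWriter;

import java.io.IOException;

// Minimal stdout sink on the Sink V2 API.
public class StdoutSinkV2 implements Sink<String> {

    @Override
    public SinkWriter<String> createWriter(InitContext context) throws IOException {
        return new SinkWriter<String>() {
            @Override
            public void write(String element, Context context) {
                System.out.println(element); // replaces SinkFunction#invoke
            }

            @Override
            public void flush(boolean endOfInput) {
                // nothing buffered in this sketch
            }

            @Override
            public void close() {}
        };
    }
}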

I'm looking forward to your thoughts.

Best regards,

Martijn


Re: FW: RE: [DISCUSS] FLIP-XXX Apicurio-avro format

2024-05-01 Thread Martijn Visser
Hi David,

I've copied and pasted it into
https://cwiki.apache.org/confluence/display/FLINK/FLIP-454%3A+New+Apicurio+Avro+format;
please take a look if it's as expected.

Best regards,

Martijn

On Wed, May 1, 2024 at 3:47 PM David Radley  wrote:

> Hi Martijn,
> Any news?
>Kind regards, David.
>
>
> From: David Radley 
> Date: Monday, 22 April 2024 at 09:48
> To: dev@flink.apache.org 
> Subject: FW: [EXTERNAL] RE: [DISCUSS] FLIP-XXX Apicurio-avro format
> Hi Martijn,
> A gentle nudge, is this ok for you or one of the PMC or committers to
> create a Flip now?
>Kind regards, David.
>
> From: David Radley 
> Date: Monday, 15 April 2024 at 12:29
> To: dev@flink.apache.org 
> Subject: Re: [EXTERNAL] RE: [DISCUSS] FLIP-XXX Apicurio-avro format
> Hi Martijn,
> Thanks for looking at this. I have used the template in a new Google Doc
> https://docs.google.com/document/d/1J1E-cE-X2H3-kw4rNjLn71OGPQk_Yl1iGX4-eCHWLgE/edit?usp=sharing.
> I have significantly reduced the content in the Flip, in line with what I
> see as the template and its usage. If this is too much or too little, I can
> amend,
>
> Kind regards, David.
>
> From: Martijn Visser 
> Date: Friday, 12 April 2024 at 18:11
> To: dev@flink.apache.org 
> Subject: Re: [EXTERNAL] RE: [DISCUSS] FLIP-XXX Apicurio-avro format
> Hi David,
>
> I tried, but the format wasn't as the FLIP template expects, so I ended up
> needing to change the entire formatting and that was just too much work to
> be honest. If you could make sure that especially the headers match with
> the FLIP template, and that all of the contents from the FLIP template is
> there, that would make things much easier.
>
> Thanks,
>
> Martijn
>
> On Fri, Apr 12, 2024 at 6:08 PM David Radley 
> wrote:
>
> > Hi,
> > A gentle nudge. Please could a committer/PMC member raise the Flip for
> > this,
> >   Kind regards, David.
> >
> >
> > From: David Radley 
> > Date: Monday, 8 April 2024 at 09:40
> > To: dev@flink.apache.org 
> > Subject: [EXTERNAL] RE: [DISCUSS] FLIP-XXX Apicurio-avro format
> > Hi,
> > I have posted a Google Doc [0] to the mailing list for a discussion
> thread
> > for a Flip proposal to introduce an Apicurio-avro format. The discussions
> > have been resolved, please could a committer/PMC member copy the contents
> > from the Google Doc, and create a FLIP number for this, as per the
> process
> > [1],
> >   Kind regards, David.
> > [0]
> >
> >
> https://docs.google.com/document/d/14LWZPVFQ7F9mryJPdKXb4l32n7B0iWYkcOdEd1xTC7w/edit?usp=sharing
> >
> > [1]
> >
> >
> https://cwiki.apache.org/confluence/display/FLINK/Flink+Improvement+Proposals#FlinkImprovementProposals-CreateyourOwnFLIP
> >
> > From: Jeyhun Karimov 
> > Date: Friday, 22 March 2024 at 13:05
> > To: dev@flink.apache.org 
> > Subject: [EXTERNAL] Re: [DISCUSS] FLIP-XXX Apicurio-avro format
> > Hi David,
> >
> > Thanks a lot for clarification.
> > Sounds good to me.
> >
> > Regards,
> > Jeyhun
> >
> > On Fri, Mar 22, 2024 at 10:54 AM David Radley 
> > wrote:
> >
> > > Hi Jeyhun,
> > > Thanks for your feedback.
> > >
> > > So for outbound messages, the message includes the global ID. We
> register
> > > the schema and match on the artifact id. So if the schema then evolved,
> > > > adding a new version, the global ID would still be unique and the same
> > > version would be targeted. If you wanted to change the Flink table
> > > definition in line with a higher version, then you could do this – the
> > > artifact id would need to match for it to use the same schema and a
> > higher
> > > artifact version would need to be provided. I notice that Apicurio has
> > > rules around compatibility that you can configure, I suppose if we
> > attempt
> > > to create an artifact that breaks these rules, then the register
> schema
> > > will fail and the associated operation should fail (e.g. an insert). I
> > have
> > > not tried this.
> > >
> > >
> > > For inbound messages, using the global id in the header – this targets
> > one
> > > version of the schema. I can create different messages on the topic
> built
> > > with different schema versions, and I can create different tables in
> > Flink,
> > > as long as the reader and writer schemas are compatible as per the
> > >
> >
> https://github.com/apache/flink/blob/779459168c46b7b4c600ef52f99a5435f81b9048/flink-formats/flink-avro/src

[jira] [Created] (FLINK-35280) Migrate HBase Sink connector to use the ASync Sink API

2024-05-01 Thread Martijn Visser (Jira)
Martijn Visser created FLINK-35280:
--

 Summary: Migrate HBase Sink connector to use the ASync Sink API
 Key: FLINK-35280
 URL: https://issues.apache.org/jira/browse/FLINK-35280
 Project: Flink
  Issue Type: Technical Debt
  Components: Connectors / HBase
Affects Versions: hbase-3.0.0, hbase-3.0.1, hbase-4.0.0
Reporter: Martijn Visser






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [ DISCUSS ] FLIP-XXX : [Plugin] Enhancing Flink Failure Management in Kubernetes with Dynamic Termination Log Integration

2024-04-25 Thread Martijn Visser
Hi Swathi C,

Also including the Dev mailing list.

If you have a good reason for not being able to use the pluggable enricher
FLIP, you'll have to include that rationale in your own FLIP and explain
it. You might get challenged for it in the Dev mailing list thread
discussion, but that's the point.

Regards,

Martijn

On Thu, Apr 25, 2024 at 8:51 AM Swathi C  wrote:

> Hi Martijn and Ahmed,
>
> This proposed FLIP was mainly focusing on the CRUD failures use case (and
> not job failures) and might not be able to use the pluggable enricher FLIP
> (as that mainly focuses on job failures). Hence, going forward as a
> new FLIP, we might not be able to leverage pluggable enricher FLIP for this
> use case. So, we might not be able to reformulate it for CRUD failures.
>
> So, is it ok with this new proposal or let us know if I'm missing
> anything and if it is related to pluggable enricher FLIP or anyway we can
> use the pluggable enricher FLIP here for CRUD failures.
>
> Regards,
> Swathi C
>
> ---------- Forwarded message ---------
> From: Martijn Visser 
> Date: Thu, Apr 25, 2024 at 2:46 AM
> Subject: Re: [ DISCUSS ] FLIP-XXX : [Plugin] Enhancing Flink Failure
> Management in Kubernetes with Dynamic Termination Log Integration
> To: 
> Cc: , 
>
>
> I would prefer a separate FLIP
>
> On Wed, Apr 24, 2024 at 3:25 PM Swathi C 
> wrote:
>
> > Sure Ahmed and Martijn.
> > Fetching the flink particular job related failure and adding this logic
> to
> > termination-log is definitely a sub-task of pluggable enricher as we can
> > leverage pluggable enricher to achieve this.
> > But for CRUD level failures, which is mainly used to notify if the job
> > manager failed might not be using the pluggable enricher. So, let us know
> > if that needs to be there as a separate FLIP or we can combine that as
> well
> > under the pluggable enricher ( by adding another sub task ) ?
> >
> > Regards,
> > Swathi C
> >
> > On Wed, Apr 24, 2024 at 3:46 PM Ahmed Hamdy 
> wrote:
> >
> > > Hi,
> > > I agree with the Martijn, We can reformulate the FLIP to introduce
> > > termination log as supported pluggable enricher. If you believe the
> scope
> > > of work is a subset (Further implementation) we can just add a Jira
> > ticket
> > > for it. IMO this will also help with implementation taking the existing
> > > enrichers into reference.
> > > Best Regards
> > > Ahmed Hamdy
> > >
> > >
> > > On Tue, 23 Apr 2024 at 15:23, Martijn Visser  >
> > > wrote:
> > >
> > > > From a procedural point of view, we shouldn't make FLIPs sub-tasks
> for
> > > > existing FLIPs that have been voted/are released. That will only
> cause
> > > > confusion down the line. A new FLIP should take existing
> functionality
> > > > (like FLIP-304) into account, and propose how to improve on what that
> > > > original FLIP has introduced or how you're going to leverage what's
> > > already
> > > > there.
> > > >
> > > > On Tue, Apr 23, 2024 at 11:42 AM ramkrishna vasudevan <
> > > > ramvasu.fl...@gmail.com> wrote:
> > > >
> > > > > Hi Gyula and Ahmed,
> > > > >
> > > > > I totally agree that there is an overlap in the final goal that
> both
> > > the
> > > > > FLIPs are achieving here and in fact FLIP-304 is more comprehensive
> > for
> > > > job
> > > > > failures.
> > > > >
> > > > > But as a proposal to move forward can we make Swathi's FLIP/JIRA
> as a
> > > sub
> > > > > task for FLIP-304 and continue with the PR since the main aim is to
> > get
> > > > the
> > > > > cluster failure pushed to the termination log for K8s based
> > > deployments.
> > > > > And once it is completed we can work to make FLIP-304 to support
> job
> > > > > failure propagation to termination log?
> > > > >
> > > > > Regards
> > > > > Ram
> > > > >
> > > > > On Thu, Apr 18, 2024 at 10:07 PM Swathi C <
> swathi.c.apa...@gmail.com
> > >
> > > > > wrote:
> > > > >
> > > > > > Hi Gyula and  Ahmed,
> > > > > >
> > > > > > Thanks for reviewing this.
> > > > > >
> > > > > > @gyula.f...@gmail.com  , currently since
> our
> > > aim
> > > > > as
> > > > >

Re: [ DISCUSS ] FLIP-XXX : [Plugin] Enhancing Flink Failure Management in Kubernetes with Dynamic Termination Log Integration

2024-04-24 Thread Martijn Visser
I would prefer a separate FLIP

On Wed, Apr 24, 2024 at 3:25 PM Swathi C  wrote:

> Sure Ahmed and Martijn.
> Fetching the flink particular job related failure and adding this logic to
> termination-log is definitely a sub-task of pluggable enricher as we can
> leverage pluggable enricher to achieve this.
> But for CRUD level failures, which is mainly used to notify if the job
> manager failed might not be using the pluggable enricher. So, let us know
> if that needs to be there as a separate FLIP or we can combine that as well
> under the pluggable enricher ( by adding another sub task ) ?
>
> Regards,
> Swathi C
>
> On Wed, Apr 24, 2024 at 3:46 PM Ahmed Hamdy  wrote:
>
> > Hi,
> > I agree with the Martijn, We can reformulate the FLIP to introduce
> > termination log as supported pluggable enricher. If you believe the scope
> > of work is a subset (Further implementation) we can just add a Jira
> ticket
> > for it. IMO this will also help with implementation taking the existing
> > enrichers into reference.
> > Best Regards
> > Ahmed Hamdy
> >
> >
> > On Tue, 23 Apr 2024 at 15:23, Martijn Visser 
> > wrote:
> >
> > > From a procedural point of view, we shouldn't make FLIPs sub-tasks for
> > > existing FLIPs that have been voted/are released. That will only cause
> > > confusion down the line. A new FLIP should take existing functionality
> > > (like FLIP-304) into account, and propose how to improve on what that
> > > original FLIP has introduced or how you're going to leverage what's
> > already
> > > there.
> > >
> > > On Tue, Apr 23, 2024 at 11:42 AM ramkrishna vasudevan <
> > > ramvasu.fl...@gmail.com> wrote:
> > >
> > > > Hi Gyula and Ahmed,
> > > >
> > > > I totally agree that there is an overlap in the final goal that both
> > the
> > > > FLIPs are achieving here and in fact FLIP-304 is more comprehensive
> for
> > > job
> > > > failures.
> > > >
> > > > But as a proposal to move forward can we make Swathi's FLIP/JIRA as a
> > sub
> > > > task for FLIP-304 and continue with the PR since the main aim is to
> get
> > > the
> > > > cluster failure pushed to the termination log for K8s based
> > deployments.
> > > > And once it is completed we can work to make FLIP-304 to support job
> > > > failure propagation to termination log?
> > > >
> > > > Regards
> > > > Ram
> > > >
> > > > On Thu, Apr 18, 2024 at 10:07 PM Swathi C  >
> > > > wrote:
> > > >
> > > > > Hi Gyula and  Ahmed,
> > > > >
> > > > > Thanks for reviewing this.
> > > > >
> > > > > @gyula.f...@gmail.com  , currently since our
> > aim
> > > > as
> > > > > part of this FLIP was only to fail the cluster when job
> manager/flink
> > > has
> > > > > issues such that the cluster would no longer be usable, hence, we
> > > > proposed
> > > > > only related to that.
> > > > > You're right that it covers only job main class errors, job manager
> > run
> > > > time
> > > > > failures, if the Job manager wants to write any metadata to any
> other
> > > > > system ( ABFS, S3 , ... )  and the job failures will not be
> covered.
> > > > >
> > > > > FLIP-304 is mainly used to provide Failure enrichers for job
> > failures.
> > > > > Since, this FLIP is mainly for flink Job manager failures, let us
> > know
> > > if
> > > > > we can leverage the goodness of both and try to extend FLIP-304 and
> > add
> > > > our
> > > > > plugin implementation to cover the job level issues ( propagate
> this
> > > info
> > > > > to the /dev/termination-log such that, the container status reports
> > it
> > > > for
> > > > > flink on K8S by implementing Failure Enricher interface and
> > > > > processFailure() to do this ) and use this FLIP proposal for
> generic
> > > > flink
> > > > > cluster (Job manager/cluster ) failures.
> > > > >
> > > > > Regards,
> > > > > Swathi C
> > > > >
> > > > > On Thu, Apr 18, 2024 at 7:36 PM Ahmed Hamdy 
> > > > wrote:
> > > > >
> > > > > > Hi Swathi!

Re: [DISCUSS] FLIP-447: Upgrade FRocksDB from 6.20.3 to 8.10.0

2024-04-24 Thread Martijn Visser
+1

On Wed, Apr 24, 2024 at 5:31 PM Congxian Qiu  wrote:

> Thanks for driving this,  yue
>
> We also observed significant performance improvements in some cases after
> bumped the Rocksdb version, +1 for this work
>
> Best,
> Congxian
>
>
> yue ma  于2024年4月24日周三 19:16写道:
>
> > hi Yanfei,
> >
> > Thanks for your feedback and reminders. I have updated the related
> information.
> > In fact, most of them use the default configurations.
> >
> > Yanfei Lei  于2024年4月23日周二 12:51写道:
> >
> > > Hi Yue & Roman,
> > >
> > > Thanks for initiating this FLIP and all the efforts for the upgrade.
> > >
> > > 8.10.0 introduces some new features, making it possible for Flink to
> > > implement some new exciting features, and the upgrade also makes
> > > FRocksDB easier to maintain, +1 for upgrading.
> > >
> > > I read the FLIP and have a minor comment: it would be better to add
> > > some description of the environment/configuration behind the Nexmark
> > > results.
> > >
> > > Roman Khachatryan  于2024年4月23日周二 12:07写道:
> > >
> > > >
> > > > Hi,
> > > >
> > > > Thanks for writing the proposal and preparing the upgrade.
> > > >
> > > > FRocksDB  definitely needs to be kept in sync with the upstream and
> the
> > > new
> > > > APIs are necessary for faster rescaling.
> > > > We're already using a similar version internally.
> > > >
> > > > I reviewed the FLIP and it looks good to me (disclaimer: I took part
> in
> > > > some steps of this effort).
> > > >
> > > >
> > > > Regards,
> > > > Roman
> > > >
> > > > On Mon, Apr 22, 2024, 08:11 yue ma  wrote:
> > > >
> > > > > Hi Flink devs,
> > > > >
> > > > > I would like to start a discussion on FLIP-447: Upgrade FRocksDB
> from
> > > > > 6.20.3 to 8.10.0
> > > > >
> > > > >
> > > > >
> > >
> >
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-447%3A+Upgrade+FRocksDB+from+6.20.3++to+8.10.0
> > > > >
> > > > > This FLIP proposes upgrading the version of FRocksDB in the Flink
> > > Project
> > > > > from 6.20.3 to 8.10.0.
> > > > > The FLIP mainly introduces the main benefits of upgrading FRocksDB,
> > > > > including the use of IngestDB which can improve Rescaling
> performance
> > > by
> > > > > more than 10 times in certain scenarios, as well as other potential
> > > > > optimization points such as async_io, blob db, and tiered
> storage.The
> > > > > FLIP also presented test results based on RocksDB 8.10, including
> > > > > StateBenchmark and Nexmark tests.
> > > > > Overall, upgrading FRocksDB may result in a small regression of
> write
> > > > > performance (which is a very small part of the overall overhead),
> but
> > > it
> > > > > can bring many important performance benefits.
> > > > > So we hope to upgrade the version of FRocksDB through this FLIP.
> > > > >
> > > > > Looking forward to everyone's feedback and suggestions. Thank you!
> > > > > --
> > > > > Best regards,
> > > > > Yue
> > > > >
> > >
> > >
> > >
> > > --
> > > Best,
> > > Yanfei
> > >
> >
> >
> > --
> > Best,
> > Yue
> >
>


Re: [ DISCUSS ] FLIP-XXX : [Plugin] Enhancing Flink Failure Management in Kubernetes with Dynamic Termination Log Integration

2024-04-23 Thread Martijn Visser
From a procedural point of view, we shouldn't make FLIPs sub-tasks for
existing FLIPs that have been voted/are released. That will only cause
confusion down the line. A new FLIP should take existing functionality
(like FLIP-304) into account, and propose how to improve on what that
original FLIP has introduced or how you're going to leverage what's already
there.

On Tue, Apr 23, 2024 at 11:42 AM ramkrishna vasudevan <
ramvasu.fl...@gmail.com> wrote:

> Hi Gyula and Ahmed,
>
> I totally agree that there is an overlap in the final goal that both the
> FLIPs are achieving here and in fact FLIP-304 is more comprehensive for job
> failures.
>
> But as a proposal to move forward can we make Swathi's FLIP/JIRA as a sub
> task for FLIP-304 and continue with the PR since the main aim is to get the
> cluster failure pushed to the termination log for K8s based deployments.
> And once it is completed we can work to make FLIP-304 to support job
> failure propagation to termination log?
>
> Regards
> Ram
>
> On Thu, Apr 18, 2024 at 10:07 PM Swathi C 
> wrote:
>
> > Hi Gyula and  Ahmed,
> >
> > Thanks for reviewing this.
> >
> > @gyula.f...@gmail.com  , currently since our aim
> as
> > part of this FLIP was only to fail the cluster when job manager/flink has
> > issues such that the cluster would no longer be usable, hence, we
> proposed
> > only related to that.
> > You're right that it covers only job main class errors, job manager run
> time
> > failures, if the Job manager wants to write any metadata to any other
> > system ( ABFS, S3 , ... )  and the job failures will not be covered.
> >
> > FLIP-304 is mainly used to provide Failure enrichers for job failures.
> > Since, this FLIP is mainly for flink Job manager failures, let us know if
> > we can leverage the goodness of both and try to extend FLIP-304 and add
> our
> > plugin implementation to cover the job level issues ( propagate this info
> > to the /dev/termination-log such that, the container status reports it
> for
> > flink on K8S by implementing Failure Enricher interface and
> > processFailure() to do this ) and use this FLIP proposal for generic
> flink
> > cluster (Job manager/cluster ) failures.
> >
> > Regards,
> > Swathi C
> >
> > On Thu, Apr 18, 2024 at 7:36 PM Ahmed Hamdy 
> wrote:
> >
> > > Hi Swathi!
> > > Thanks for the proposal.
> > > Could you please elaborate what this FLIP offers more than Flip-304[1]?
> > > Flip 304 proposes a Pluggable mechanism for enriching Job failures, If
> I
> > am
> > > not mistaken this proposal looks like a subset of it.
> > >
> > > 1-
> > >
> > >
> >
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-304%3A+Pluggable+Failure+Enrichers
> > >
> > > Best Regards
> > > Ahmed Hamdy
> > >
> > >
> > > On Thu, 18 Apr 2024 at 08:23, Gyula Fóra  wrote:
> > >
> > > > Hi Swathi!
> > > >
> > > > Thank you for creating this proposal. I really like the general idea
> of
> > > > increasing the K8s native observability of Flink job errors.
> > > >
> > > > I took a quick look at your reference PR, the termination log related
> > > logic
> > > > is contained completely in the ClusterEntrypoint. What type of errors
> > > will
> > > > this actually cover?
> > > >
> > > > To me this seems to cover only:
> > > >  - Job main class errors (ie startup errors)
> > > >  - JobManager failures
> > > >
> > > > Would regular job errors (that cause only job failover but not JM
> > errors)
> > > > be reported somehow with this plugin?
> > > >
> > > > Thanks
> > > > Gyula
> > > >
> > > > On Tue, Apr 16, 2024 at 8:21 AM Swathi C 
> > > > wrote:
> > > >
> > > > > Hi All,
> > > > >
> > > > > I would like to start a discussion on FLIP-XXX : [Plugin] Enhancing
> > > Flink
> > > > > Failure Management in Kubernetes with Dynamic Termination Log
> > > > Integration.
> > > > >
> > > > >
> > > > >
> > > >
> > >
> >
> https://docs.google.com/document/d/1tWR0Fi3w7VQeD_9VUORh8EEOva3q-V0XhymTkNaXHOc/edit?usp=sharing
> > > > >
> > > > >
> > > > > This FLIP proposes an improvement plugin and focuses mainly on
> Flink
> > on
> > > > > K8S but can be used as a generic plugin and add further
> enhancements.
> > > > >
> > > > > Looking forward to everyone's feedback and suggestions. Thank you
> !!
> > > > >
> > > > > Best Regards,
> > > > > Swathi Chandrashekar
> > > > >
> > > >
> > >
> >
>
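
For readers following this thread: a minimal sketch of what a FLIP-304-style
failure enricher writing to the Kubernetes termination log could look like
(the interface is the one FLIP-304 introduced; the class name, output key,
and best-effort file handling are assumptions for illustration only):

import org.apache.flink.core.failure.FailureEnricher;

import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.Collections;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.CompletableFuture;

// Sketch: mirrors each failure into /dev/termination-log, which the kubelet
// surfaces in the pod's container status. It would be loaded as a plugin via
// a FailureEnricherFactory (not shown).
public class TerminationLogFailureEnricher implements FailureEnricher {

    private static final String KEY = "termination.log.message";

    @Override
    public Set<String> getOutputKeys() {
        return Collections.singleton(KEY);
    }

    @Override
    public CompletableFuture<Map<String, String>> processFailure(Throwable cause, Context context) {
        String message = cause.getClass().getName() + ": " + cause.getMessage();
        try {
            // Best effort: the path only exists on Kubernetes.
            Files.write(Paths.get("/dev/termination-log"),
                    message.getBytes(StandardCharsets.UTF_8));
        } catch (Exception ignored) {
            // Not fatal: the enrichment label below is still reported.
        }
        return CompletableFuture.completedFuture(Collections.singletonMap(KEY, message));
    }
}

Note that this only covers failures routed through the enricher mechanism
(i.e. job failures); the CRUD/JobManager failures discussed above would
still need the separate FLIP.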


Re: [VOTE] FLIP-435: Introduce a New Materialized Table for Simplifying Data Pipelines

2024-04-19 Thread Martijn Visser
+1 (binding)

On Fri, Apr 19, 2024 at 10:07 AM Yuepeng Pan  wrote:

> +1(non-binding)
>
> Best,
> Yuepeng Pan
>
> At 2024-04-19 15:22:04, "gongzhongqiang" 
> wrote:
> >+1(non-binding)
> >
> >
> >Best,
> >
> >Zhongqiang Gong
> >
> >Ron liu  于2024年4月17日周三 14:28写道:
> >
> >> Hi Dev,
> >>
> >> Thank you to everyone for the feedback on FLIP-435: Introduce a New
> >> Materialized Table for Simplifying Data Pipelines[1][2].
> >>
> >> I'd like to start a vote for it. The vote will be open for at least 72
> >> hours unless there is an objection or not enough votes.
> >>
> >> [1]
> >>
> >>
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-435%3A+Introduce+a+New+Materialized+Table+for+Simplifying+Data+Pipelines
> >> [2] https://lists.apache.org/thread/c1gnn3bvbfs8v1trlf975t327s4rsffs
> >>
> >> Best,
> >> Ron
> >>
>


[jira] [Created] (FLINK-35109) Drop support for Flink 1.17 and 1.18 in Flink Kafka connector

2024-04-15 Thread Martijn Visser (Jira)
Martijn Visser created FLINK-35109:
--

 Summary: Drop support for Flink 1.17 and 1.18 in Flink Kafka 
connector
 Key: FLINK-35109
 URL: https://issues.apache.org/jira/browse/FLINK-35109
 Project: Flink
  Issue Type: Technical Debt
  Components: Connectors / Kafka
Reporter: Martijn Visser
 Fix For: kafka-4.0.0


The Flink Kafka connector currently can't compile against Flink 1.20-SNAPSHOT. 
An example failure can be found at 
https://github.com/apache/flink-connector-kafka/actions/runs/8659822490/job/23746484721#step:15:169

The {code:java} TypeSerializerUpgradeTestBase{code} has had issues before, see 
FLINK-32455. See also specifically the comment in 
https://issues.apache.org/jira/browse/FLINK-32455?focusedCommentId=17739785&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-17739785

Next to that, there's also FLINK-25509 which can only be supported with Flink 
1.19 and higher. 

So we should:
* Drop support for 1.17 and 1.18
* Refactor the Flink Kafka connector to use the new 
{code:java}MigrationTest{code}

We will support the Flink Kafka connector for Flink 1.18 via the v3.1 branch; 
this change will be a new v4.0 version with support for Flink 1.19 and the 
upcoming Flink 1.20



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [EXTERNAL] RE: [DISCUSS] FLIP-XXX Apicurio-avro format

2024-04-12 Thread Martijn Visser
Hi David,

I tried, but the format wasn't as the FLIP template expects, so I ended up
needing to change the entire formatting and that was just too much work to
be honest. If you could make sure that especially the headers match with
the FLIP template, and that all of the contents from the FLIP template is
there, that would make things much easier.

Thanks,

Martijn

On Fri, Apr 12, 2024 at 6:08 PM David Radley 
wrote:

> Hi,
> A gentle nudge. Please could a committer/PMC member raise the Flip for
> this,
>   Kind regards, David.
>
>
> From: David Radley 
> Date: Monday, 8 April 2024 at 09:40
> To: dev@flink.apache.org 
> Subject: [EXTERNAL] RE: [DISCUSS] FLIP-XXX Apicurio-avro format
> Hi,
> I have posted a Google Doc [0] to the mailing list for a discussion thread
> for a Flip proposal to introduce an Apicurio-avro format. The discussions
> have been resolved, please could a committer/PMC member copy the contents
> from the Google Doc, and create a FLIP number for this, as per the process
> [1],
>   Kind regards, David.
> [0]
>
> https://docs.google.com/document/d/14LWZPVFQ7F9mryJPdKXb4l32n7B0iWYkcOdEd1xTC7w/edit?usp=sharing
>
> [1]
>
> https://cwiki.apache.org/confluence/display/FLINK/Flink+Improvement+Proposals#FlinkImprovementProposals-CreateyourOwnFLIP
>
> From: Jeyhun Karimov 
> Date: Friday, 22 March 2024 at 13:05
> To: dev@flink.apache.org 
> Subject: [EXTERNAL] Re: [DISCUSS] FLIP-XXX Apicurio-avro format
> Hi David,
>
> Thanks a lot for clarification.
> Sounds good to me.
>
> Regards,
> Jeyhun
>
> On Fri, Mar 22, 2024 at 10:54 AM David Radley 
> wrote:
>
> > Hi Jeyhun,
> > Thanks for your feedback.
> >
> > So for outbound messages, the message includes the global ID. We register
> > the schema and match on the artifact id. So if the schema then evolved,
> > adding a new version, the global ID would still be unique and the same
> > version would be targeted. If you wanted to change the Flink table
> > definition in line with a higher version, then you could do this – the
> > artifact id would need to match for it to use the same schema and a
> higher
> > artifact version would need to be provided. I notice that Apicurio has
> > rules around compatibility that you can configure, I suppose if we
> attempt
> > to create an artifact that breaks these rules, then the register schema
> > will fail and the associated operation should fail (e.g. an insert). I
> have
> > not tried this.
> >
> >
> > For inbound messages, using the global id in the header – this targets
> one
> > version of the schema. I can create different messages on the topic built
> > with different schema versions, and I can create different tables in
> Flink,
> > as long as the reader and writer schemas are compatible as per the
> >
> https://github.com/apache/flink/blob/779459168c46b7b4c600ef52f99a5435f81b9048/flink-formats/flink-avro/src/main/java/org/apache/flink/formats/avro/RegistryAvroDeserializationSchema.java#L109
> > Then this should work.
> >
> > Does this address your question?
> > Kind regards, David.
> >
> >
> > From: Jeyhun Karimov 
> > Date: Thursday, 21 March 2024 at 21:06
> > To: dev@flink.apache.org 
> > Subject: [EXTERNAL] Re: [DISCUSS] FLIP-XXX Apicurio-avro format
> > Hi David,
> >
> > Thanks for the FLIP. +1 for it.
> > I have a minor comment.
> >
> > Can you please elaborate more on mechanisms in place to ensure data
> > consistency and integrity, particularly in the event of schema conflicts?
> > Since each message includes a schema ID for inbound and outbound
> messages,
> > can you elaborate more on message consistency in the context of schema
> > evolution?
> >
> > Regards,
> > Jeyhun
> >
> >
> >
> >
> >
> > On Wed, Mar 20, 2024 at 4:34 PM David Radley 
> wrote:
> >
> > > Thank you very much for your feedback Mark. I have made the changes in
> > the
> > > latest google document. On reflection I agree with you that the
> > > globalIdPlacement format configuration should apply to the
> > deserialization
> > > as well, so it is declarative. I am also going to have a new
> > configuration
> > > option to work with content IDs as well as global IDs. In line with the
> > > deser Apicurio IdHandler and headerHandlers.
> > >
> > >  kind regards, David.
> > >
> > >
> > > On 2024/03/20 15:18:37 Mark Nuttall 
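
To make the ID plumbing described in the quoted message concrete, here is a
sketch of resolving the writer schema for an inbound message from the global
ID carried in a Kafka record header (the header key "apicurio.value.globalId"
and the 8-byte encoding are assumptions based on Apicurio's default
header-based ID placement):

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.common.header.Header;

import java.nio.ByteBuffer;

public final class ApicurioIds {

    // Reads the global ID that the format would use to look up the writer
    // schema in the registry before Avro decoding.
    static long globalIdOf(ConsumerRecord<byte[], byte[]> record) {
        Header header = record.headers().lastHeader("apicurio.value.globalId");
        if (header == null) {
            throw new IllegalStateException("Message carries no Apicurio global ID header");
        }
        return ByteBuffer.wrap(header.value()).getLong();
    }

    private ApicurioIds() {}
}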

Re: Proposal: I hope the official documentation can provide an "Integrations" section.

2024-04-12 Thread Martijn Visser
Hi Kaiming,

Since AutoMQ isn't part of the ASF Flink project, I'm not sure if it makes
sense to document this in the Flink documentation. There are many
technologies compatible with the Kafka API, but we also don't have those
listed in our docs and I don't think we should.

Best regards,

Martijn

On Fri, Apr 12, 2024 at 11:57 AM Kaiming Wan  wrote:

> Hi, I am a developer from AutoMQ who wants to contribute an integration doc
> to flink. Regrettably, I did not see an "Integrations" related section in
> the official documentation. The only section that might be somewhat close
> is "Connectors". However, our situation is quite special. Since AutoMQ is
> fully compatible with Kafka, it does not require a new connector to
> integrate with Flink. Attached is the integration documentation I wrote.
> Where should I contribute content like this? If there is currently no
> suitable section, can I propose that the Flink official documentation open
> an "Integrations” section to carry this kind of content?
>
>


Re: [VOTE] FLIP-399: Flink Connector Doris

2024-04-11 Thread Martijn Visser
+1 (binding)

On Wed, Apr 10, 2024 at 4:34 AM Jing Ge  wrote:

> +1(binding)
>
> Best regards,
> Jing
>
> On Tue, Apr 9, 2024 at 8:54 PM Feng Jin  wrote:
>
> > +1 (non-binding)
> >
> > Best,
> > Feng
> >
> > On Tue, Apr 9, 2024 at 5:56 PM gongzhongqiang  >
> > wrote:
> >
> > > +1 (non-binding)
> > >
> > > Best,
> > >
> > > Zhongqiang Gong
> > >
> > > wudi <676366...@qq.com.invalid> 于2024年4月9日周二 10:48写道:
> > >
> > > > Hi devs,
> > > >
> > > > I would like to start a vote about FLIP-399 [1]. The FLIP is about
> > > > contributing the Flink Doris Connector[2] to the Flink community.
> > > > Discussion thread [3].
> > > >
> > > > The vote will be open for at least 72 hours unless there is an
> > objection
> > > or
> > > > insufficient votes.
> > > >
> > > >
> > > > Thanks,
> > > > Di.Wu
> > > >
> > > >
> > > > [1]
> > > >
> > >
> >
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-399%3A+Flink+Connector+Doris
> > > > [2] https://github.com/apache/doris-flink-connector
> > > > [3] https://lists.apache.org/thread/p3z4wsw3ftdyfs9p2wd7bbr2gfyl3xnh
> > > >
> > > >
> > >
> >
>


[jira] [Created] (FLINK-35009) Change on getTransitivePredecessors breaks connectors

2024-04-04 Thread Martijn Visser (Jira)
Martijn Visser created FLINK-35009:
--

 Summary: Change on getTransitivePredecessors breaks connectors
 Key: FLINK-35009
 URL: https://issues.apache.org/jira/browse/FLINK-35009
 Project: Flink
  Issue Type: Bug
  Components: API / Core, Connectors / Kafka
Affects Versions: 1.18.2, 1.20.0, 1.19.1
Reporter: Martijn Visser


{code:java}
Error:  Failed to execute goal 
org.apache.maven.plugins:maven-compiler-plugin:3.8.0:testCompile 
(default-testCompile) on project flink-connector-kafka: Compilation failure: 
Compilation failure: 
Error:  
/home/runner/work/flink-connector-kafka/flink-connector-kafka/flink-connector-kafka/src/test/java/org/apache/flink/streaming/connectors/kafka/testutils/DataGenerators.java:[214,24]
 
org.apache.flink.streaming.connectors.kafka.testutils.DataGenerators.InfiniteStringsGenerator.MockTransformation
 is not abstract and does not override abstract method 
getTransitivePredecessorsInternal() in org.apache.flink.api.dag.Transformation
Error:  
/home/runner/work/flink-connector-kafka/flink-connector-kafka/flink-connector-kafka/src/test/java/org/apache/flink/streaming/connectors/kafka/testutils/DataGenerators.java:[220,44]
 getTransitivePredecessors() in 
org.apache.flink.streaming.connectors.kafka.testutils.DataGenerators.InfiniteStringsGenerator.MockTransformation
 cannot override getTransitivePredecessors() in 
org.apache.flink.api.dag.Transformation
Error:overridden method is final
{code}

Example: 
https://github.com/apache/flink-connector-kafka/actions/runs/8494349338/job/23269406762#step:15:167
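
A sketch of the connector-side fix implied by the errors above (the method
body is illustrative; the point is the override shape, which belongs inside
MockTransformation):

{code:java}
import java.util.Collections;
import java.util.List;

import org.apache.flink.api.dag.Transformation;

// In DataGenerators.InfiniteStringsGenerator.MockTransformation: implement
// the new abstract hook instead of overriding the now-final public method.
@Override
protected List<Transformation<?>> getTransitivePredecessorsInternal() {
    return Collections.emptyList(); // illustrative body
}
{code}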





--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-35008) Bump org.apache.commons:commons-compress from 1.25.0 to 1.26.0 for Flink Kafka connector

2024-04-04 Thread Martijn Visser (Jira)
Martijn Visser created FLINK-35008:
--

 Summary: Bump org.apache.commons:commons-compress from 1.25.0 to 
1.26.0 for Flink Kafka connector
 Key: FLINK-35008
 URL: https://issues.apache.org/jira/browse/FLINK-35008
 Project: Flink
  Issue Type: Technical Debt
  Components: Connectors / Kafka
Reporter: Martijn Visser
Assignee: Martijn Visser






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-35007) Update Flink Kafka connector to support 1.19 and test 1.20-SNAPSHOT

2024-04-04 Thread Martijn Visser (Jira)
Martijn Visser created FLINK-35007:
--

 Summary: Update Flink Kafka connector to support 1.19 and test 
1.20-SNAPSHOT
 Key: FLINK-35007
 URL: https://issues.apache.org/jira/browse/FLINK-35007
 Project: Flink
  Issue Type: Technical Debt
  Components: Connectors / Kafka
Reporter: Martijn Visser
Assignee: Martijn Visser






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [DISCUSS] Externalized Google Cloud Connectors

2024-04-04 Thread Martijn Visser
Hi Lorenzo,

Bahir is retired, see the homepage. It plays no role (anymore).

>  This, unfortunately, is the tradeoff for developing the connectors
outside of Apache in exchange for development velocity.

I understand that. It can be considered to develop the connectors outside
of the Flink project, in order to achieve development velocity. We've seen
a similar thing happen with the CDC connectors, before that was ultimately
donated to the Flink project. However, there are no guarantees that
external contributions are considered when evaluating committers, because
there's no visibility for the PMC on these external contributions.

Best regards,

Martijn

On Wed, Apr 3, 2024 at 3:26 PM 
wrote:

> @Leonard @Martijn
> Following up on @Claire question, what is the role of Bahir (
> https://bahir.apache.org/) in this scenario?
>
> I am also trying to understand how connectors fit in the Flink project
> scenario :)
>
> Thank you,
> Lorenzo
> On Apr 2, 2024 at 06:13 +0200, Leonard Xu , wrote:
> > Hey, Claire
> >
> > Thanks for starting this discussion, all flink external connector repos are
> sub-projects of Apache Flink, including
> https://github.com/apache/flink-connector-aws.
> >
> > Creating a flink external connector repo named flink-connectors-gcp as
> sub-project of Apache Beam is not a good idea from my side.
> >
> > > Currently, we have no Flink committers on our team. We are actively
> > > involved in the Apache Beam community and have a number of ASF members
> on
> > > the team.
> >
> > Not having a Flink committer should not be a strong reason in this case;
> the Flink community welcomes contributors to contribute and maintain the
> connectors. As a contributor, through continuous connector development and
> maintenance work in the community, you will also have the opportunity to
> become a Committer.
> >
> > Best,
> > Leonard
> >
> >
> > > 2024年2月14日 上午12:24,Claire McCarthy 
> 写道:
> > >
> > > Hi Devs!
> > >
> > > I’d like to kick off a discussion on setting up a repo for a new fleet
> of
> > > Google Cloud connectors.
> > >
> > > A bit of context:
> > >
> > > -
> > >
> > > We have a team of Google engineers who are looking to build/maintain
> > > 5-10 GCP connectors for Flink.
> > > -
> > >
> > > We are wondering if it would make sense to host our connectors under
> the
> > > ASF umbrella following a similar repo structure as AWS (
> > > https://github.com/apache/flink-connector-aws). In our case:
> > > apache/flink-connectors-gcp.
> > > -
> > >
> > > Currently, we have no Flink committers on our team. We are actively
> > > involved in the Apache Beam community and have a number of ASF members
> on
> > > the team.
> > >
> > >
> > > We saw that one of the original motivations for externalizing
> connectors
> > > was to encourage more activity and contributions around connectors by
> > > easing the contribution overhead. We understand that the decision was
> > > ultimately made to host the externalized connector repos under the ASF
> > > organization. For the same reasons (release infra, quality assurance,
> > > integration with the community, etc.), we would like all GCP
> connectors to
> > > live under the ASF organization.
> > >
> > > We want to ask the Flink community what you all think of this idea, and
> > > what would be the best way for us to go about contributing something
> like
> > > this. We are excited to contribute and want to learn and follow your
> > > practices.
> > >
> > > A specific issue we know of is that our changes need approval from
> Flink
> > > committers. Do you have a suggestion for how best to go about a new
> > > contribution like ours from a team that does not have committers? Is it
> > > possible, for example, to partner with a committer (or a small cohort)
> for
> > > tight engagement? We also know about ASF voting and release process,
> but
> > > that doesn't seem to be as much of a potential hurdle.
> > >
> > > Huge thanks in advance for sharing your thoughts!
> > >
> > >
> > > Claire
> >
>


Re: [DISCUSS] FLIP-435: Introduce a New Dynamic Table for Simplifying Data Pipelines

2024-04-03 Thread Martijn Visser
Hi all,

Thanks for the proposal. While the FLIP talks extensively about how Snowflake
has Dynamic Tables and Databricks has Delta Live Tables, my understanding
is that Databricks has CREATE STREAMING TABLE [1] which relates with this
proposal.

I do have concerns about using CREATE DYNAMIC TABLE, specifically about
confusing the users who are familiar with Snowflake's approach where you
can't change the content via DML statements, while that is something that
would work in this proposal. Naming is hard of course, but I would probably
prefer something like CREATE CONTINUOUS TABLE, CREATE REFRESH TABLE or
CREATE LIVE TABLE.

Best regards,

Martijn

[1]
https://docs.databricks.com/en/sql/language-manual/sql-ref-syntax-ddl-create-streaming-table.html
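
For readers skimming the archive, a sketch of the statement shape under
discussion, embedded in the Table API (illustrative only: the DYNAMIC keyword
and the FRESHNESS clause are exactly the draft syntax this thread is
debating, not a released feature):

import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class DynamicTableSketch {
    public static void main(String[] args) {
        TableEnvironment tEnv =
                TableEnvironment.create(EnvironmentSettings.inStreamingMode());
        // Proposed DDL from the FLIP-435 discussion; this does not run on
        // released Flink versions.
        tEnv.executeSql(
                "CREATE DYNAMIC TABLE dwd_orders"
                        + " FRESHNESS = INTERVAL '3' MINUTE"
                        + " AS SELECT o.order_id, o.order_time, p.amount"
                        + " FROM orders AS o LEFT JOIN payments AS p"
                        + " ON o.order_id = p.order_id");
    }
}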

On Wed, Apr 3, 2024 at 5:19 AM Ron liu  wrote:

> Hi, dev
>
> After offline discussion with Becket Qin, Lincoln Lee and Jark Wu,  we have
> improved some parts of the FLIP.
>
> 1. Add Full Refresh Mode section to clarify the semantics of full refresh
> mode.
> 2. Add Future Improvement section explaining why query statement does not
> support references to temporary view and possible solutions.
> 3. The Future Improvement section explains a possible future solution for
> dynamic table to support the modification of query statements to meet the
> common field-level schema evolution requirements of the lakehouse.
> 4. The Refresh section emphasizes that the Refresh command and the
> background refresh job can be executed in parallel, with no restrictions at
> the framework level.
> 5. Convert RefreshHandler into a plug-in interface to support various
> workflow schedulers.
>
> Best,
> Ron
>
> Ron liu  于2024年4月2日周二 10:28写道:
>
> > Hi, Venkata krishnan
> >
> > Thank you for your involvement and suggestions, and I hope that the design
> > goals of this FLIP will be helpful to your business.
> >
> > >>> 1. In the proposed FLIP, given the example for the dynamic table, do
> > the
> > data sources always come from a single lake storage such as Paimon or
> does
> > the same proposal solve for 2 disparate storage systems like Kafka and
> > Iceberg where Kafka events are ETLed to Iceberg similar to Paimon?
> > Basically the lambda architecture that is mentioned in the FLIP as well.
> > I'm wondering if it is possible to switch b/w sources based on the
> > execution mode, for eg: if it is backfill operation, switch to a data
> lake
> > storage system like Iceberg, otherwise an event streaming system like
> > Kafka.
> >
> > Dynamic table is a design abstraction at the framework level and is not
> > tied to the physical implementation of the connector. If a connector
> > supports a combination of Kafka and lake storage, this works fine.
> >
> > >>> 2. What happens in the context of a bootstrap (batch) + nearline
> update
> > (streaming) case that are stateful applications? What I mean by that is,
> > will the state from the batch application be transferred to the nearline
> > application after the bootstrap execution is complete?
> >
> > I think this is another orthogonal thing, something that FLIP-327 tries
> to
> > address, not directly related to Dynamic Table.
> >
> > [1]
> >
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-327%3A+Support+switching+from+batch+to+stream+mode+to+improve+throughput+when+processing+backlog+data
> >
> > Best,
> > Ron
> >
> > Venkatakrishnan Sowrirajan  于2024年3月30日周六 07:06写道:
> >
> >> Ron and Lincoln,
> >>
> >> Great proposal and interesting discussion for adding support for dynamic
> >> tables within Flink.
> >>
> >> At LinkedIn, we are also trying to solve compute/storage convergence for
> >> similar problems discussed as part of this FLIP, specifically periodic
> >> backfill, bootstrap + nearline update use cases using single
> >> implementation
> >> of business logic (single script).
> >>
> >> Few clarifying questions:
> >>
> >> 1. In the proposed FLIP, given the example for the dynamic table, do the
> >> data sources always come from a single lake storage such as Paimon or
> does
> >> the same proposal solve for 2 disparate storage systems like Kafka and
> >> Iceberg where Kafka events are ETLed to Iceberg similar to Paimon?
> >> Basically the lambda architecture that is mentioned in the FLIP as well.
> >> I'm wondering if it is possible to switch b/w sources based on the
> >> execution mode, for eg: if it is backfill operation, switch to a data
> lake
> >> storage system like Iceberg, otherwise an event streaming system like
> >> Kafka.
> >> 2. What happens in the context of a bootstrap (batch) + nearline update
> >> (streaming) case that are stateful applications? What I mean by that is,
> >> will the state from the batch application be transferred to the nearline
> >> application after the bootstrap execution is complete?
> >>
> >> Regards
> >> Venkata krishnan
> >>
> >>
> >> On Mon, Mar 25, 2024 at 8:03 PM Ron liu  wrote:
> >>
> >> > Hi, Timo
> >> >
> >> > Thanks for your quick response, and your suggestion.
> >> >
> >> > Yes, this 

Re: [VOTE] FLIP-437: Support ML Models in Flink SQL

2024-04-03 Thread Martijn Visser
+1 (binding)

On Wed, Apr 3, 2024 at 9:52 AM Leonard Xu  wrote:

> +1(binding)
>
> Best,
> Leonard
>
> > 2024年4月3日 下午3:37,Piotr Nowojski  写道:
> >
> > +1 (binding)
> >
> > Best,
> > Piotrek
> >
> > śr., 3 kwi 2024 o 04:29 Yu Chen  napisał(a):
> >
> >> +1 (non-binding)
> >>
> >> Looking forward to this feature.
> >>
> >> Thanks,
> >> Yu Chen
> >>
> >>> 2024年4月3日 10:23,Jark Wu  写道:
> >>>
> >>> +1 (binding)
> >>>
> >>> Best,
> >>> Jark
> >>>
> >>> On Tue, 2 Apr 2024 at 15:12, Timo Walther  wrote:
> >>>
>  +1 (binding)
> 
>  Thanks,
>  Timo
> 
>  On 29.03.24 17:30, Hao Li wrote:
> > Hi devs,
> >
> > I'd like to start a vote on the FLIP-437: Support ML Models in Flink
> > SQL [1]. The discussion thread is here [2].
> >
> > The vote will be open for at least 72 hours unless there is an
> >> objection
>  or
> > insufficient votes.
> >
> > [1]
> >
> 
> >>
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-437%3A+Support+ML+Models+in+Flink+SQL
> >
> > [2] https://lists.apache.org/thread/9z94m2bv4w265xb5l2mrnh4lf9m28ccn
> >
> > Thanks,
> > Hao
> >
> 
> 
> >>
> >>
>
>


Re: [DISCUSS] Flink Website Menu Adjustment

2024-03-25 Thread Martijn Visser
Hi Zhongqiang Gong,

Are you suggesting to continuously update the menu based on the number of
releases, or just this one time? I wouldn't be in favor of continuously
updating: returning customers expect a certain order in the menu, and I
don't see a lot of value in continuously changing that. I do think that the
order that you have currently proposed is better then the one we have right
now, so I would +1 a one-time update but not a continuously updating order.

Best regards,

Martijn

On Mon, Mar 25, 2024 at 4:15 PM Yanquan Lv  wrote:

> +1 for this proposal.
>
> gongzhongqiang  于2024年3月25日周一 15:49写道:
>
> > Hi everyone,
> >
> > I'd like to start a discussion on adjusting the Flink website [1] menu to
> > improve accuracy and usability.While migrating Flink CDC documentation
> > to the website, I found outdated links, need to review and update menus
> > for the most relevant information for our users.
> >
> >
> > Proposal:
> >
> > - Remove Paimon [2] from the "Getting Started" and "Documentation" menus:
> > Paimon [2] is now an independent top-level project of the ASF. CC: jingsong lees
> >
> > - Sort the projects in the subdirectory by the activity of the projects.
> > Here I list the number of releases for each project in the past year.
> >
> > Flink Kubernetes Operator : 7
> > Flink CDC : 5
> > Flink ML  : 2
> > Flink Stateful Functions : 1
> >
> >
> > Expected Outcome :
> >
> > - Menu "Getting Started"
> >
> > Before:
> >
> > With Flink
> >
> > With Flink Stateful Functions
> >
> > With Flink ML
> >
> > With Flink Kubernetes Operator
> >
> > With Paimon(incubating) (formerly Flink Table Store)
> >
> > With Flink CDC
> >
> > Training Course
> >
> >
> > After:
> >
> > With Flink
> > With Flink Kubernetes Operator
> >
> > With Flink CDC
> >
> > With Flink ML
> >
> > With Flink Stateful Functions
> >
> > Training Course
> >
> >
> > - Menu "Documentation" will same with "Getting Started"
> >
> >
> > I look forward to hearing your thoughts and suggestions on this proposal.
> >
> > [1] https://flink.apache.org/
> > [2] https://github.com/apache/incubator-paimon
> > [3] https://github.com/apache/flink-statefun
> >
> >
> >
> > Best regards,
> >
> > Zhongqiang Gong
> >
>


Re: [VOTE] FLIP-402: Extend ZooKeeper Curator configurations

2024-03-21 Thread Martijn Visser
+1 (binding)

On Wed, Mar 20, 2024 at 1:19 PM Ferenc Csaky 
wrote:

> +1 (non-binding), thanks for driving this!
>
> Best,
> Ferenc
>
>
> On Wednesday, March 20th, 2024 at 10:57, Yang Wang <
> wangyang0...@apache.org> wrote:
>
> >
> >
> > +1 (binding) since ZK HA is still widely used.
> >
> >
> > Best,
> > Yang
> >
> > On Thu, Mar 14, 2024 at 6:27 PM Matthias Pohl
> > matthias.p...@aiven.io.invalid wrote:
> >
> > > Nothing to add from my side. Thanks, Alex.
> > >
> > > +1 (binding)
> > >
> > > On Thu, Mar 7, 2024 at 4:09 PM Alex Nitavsky alexnitav...@gmail.com
> > > wrote:
> > >
> > > > Hi everyone,
> > > >
> > > > I'd like to start a vote on FLIP-402 [1]. It introduces new
> configuration
> > > > options for Apache Flink's ZooKeeper integration for high
> availability by
> > > > reflecting existing Apache Curator configuration options. It has been
> > > > discussed in this thread [2].
> > > >
> > > > I would like to start a vote. The vote will be open for at least 72
> > > > hours
> > > > (until March 10th 18:00 GMT) unless there is an objection or
> > > > insufficient votes.
> > > >
> > > > [1]
> > >
> > >
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-402%3A+Extend+ZooKeeper+Curator+configurations
> > >
> > > > [2] https://lists.apache.org/thread/gqgs2jlq6bmg211gqtgdn8q5hp5v9l1z
> > > >
> > > > Thanks
> > > > Alex
>


Re: [VOTE] FLIP-439: Externalize Kudu Connector from Bahir

2024-03-21 Thread Martijn Visser
+1 (binding)

On Thu, Mar 21, 2024 at 8:01 AM gongzhongqiang 
wrote:

> +1 (non-binding)
>
> Bests,
> Zhongqiang Gong
>
> Ferenc Csaky  于2024年3月20日周三 22:11写道:
>
> > Hello devs,
> >
> > I would like to start a vote about FLIP-439 [1]. The FLIP is about
> > externalizing the Kudu
> > connector from the recently retired Apache Bahir project [2] to keep it
> > maintainable and
> > make it up to date as well. Discussion thread [3].
> >
> > The vote will be open for at least 72 hours (until 2024 March 23 14:03
> > UTC) unless there
> > are any objections or insufficient votes.
> >
> > Thanks,
> > Ferenc
> >
> > [1]
> >
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-439%3A+Externalize+Kudu+Connector+from+Bahir
> > [2] https://attic.apache.org/projects/bahir.html
> > [3] https://lists.apache.org/thread/oydhcfkco2kqp4hdd1glzy5vkw131rkz
>


Re: [VOTE] Release 1.19.0, release candidate #2

2024-03-14 Thread Martijn Visser
+1 (binding)

- Validated hashes
- Verified signature
- Verified that no binaries exist in the source archive
- Build the source with Maven via mvn clean install -Pcheck-convergence
-Dflink.version=1.19.0
- Verified licenses
- Verified web PR
- Started a cluster and the Flink SQL client, successfully read and wrote
with the Kafka connector to Confluent Cloud with AVRO and Schema Registry
enabled

On Thu, Mar 14, 2024 at 1:32 PM gongzhongqiang 
wrote:

> +1 (non-binding)
>
> - Verified no binary files in source code
> - Verified signature and checksum
> - Build source code and run a simple job successfully
> - Reviewed the release announcement PR
>
> Best,
>
> Zhongqiang Gong
>
> Ferenc Csaky  于2024年3月14日周四 20:07写道:
>
> >  +1 (non-binding)
> >
> > - Verified checksum and signature
> > - Verified no binary in src
> > - Built from src
> > - Reviewed release note PR
> > - Reviewed web PR
> > - Tested a simple datagen query and insert to blackhole sink via SQL
> > Gateway
> >
> > Best,
> > Ferenc
> >
> >
> >
> >
> > On Thursday, March 14th, 2024 at 12:14, Jane Chan  >
> > wrote:
> >
> > >
> > >
> > > Hi Lincoln,
> > >
> > > Thank you for the prompt response and the effort to provide clarity on
> > this
> > > matter.
> > >
> > > Best,
> > > Jane
> > >
> > > On Thu, Mar 14, 2024 at 6:02 PM Lincoln Lee lincoln.8...@gmail.com
> > wrote:
> > >
> > > > Hi Jane,
> > > >
> > > > Thank you for raising this question. I saw the discussion in the Jira
> > > > (include Matthias' point)
> > > > and sought advice from several PMCs (including the previous RMs), the
> > > > majority of people
> > > > are in favor of merging the bugfix into the release branch even
> during
> > the
> > > > release candidate
> > > > (RC) voting period, so we should accept all bugfixes (unless there
> is a
> > > > specific community
> > > > rule preventing it).
> > > >
> > > > Thanks again for contributing to the community!
> > > >
> > > > Best,
> > > > Lincoln Lee
> > > >
> > > > Matthias Pohl matthias.p...@aiven.io.invalid 于2024年3月14日周四 17:50写道:
> > > >
> > > > > Update on FLINK-34227 [1] which I mentioned above: Chesnay helped
> > > > > identify
> > > > > a concurrency issue in the JobMaster shutdown logic which seems to
> > be in
> > > > > the code for quite some time. I created a PR fixing the issue
> hoping
> > that
> > > > > the test instability is resolved with it.
> > > > >
> > > > > The concurrency issue doesn't really explain why it only started to
> > > > > appear
> > > > > recently in a specific CI setup (GHA with AdaptiveScheduler). There
> > is no
> > > > > hint in the git history indicating that it's caused by some newly
> > > > > introduced change. That is why I wouldn't make FLINK-34227 a reason
> > to
> > > > > cancel rc2. Instead, the fix can be provided in subsequent patch
> > > > > releases.
> > > > >
> > > > > Matthias
> > > > >
> > > > > [1] https://issues.apache.org/jira/browse/FLINK-34227
> > > > >
> > > > > On Thu, Mar 14, 2024 at 8:49 AM Jane Chan qingyue@gmail.com
> > wrote:
> > > > >
> > > > > > Hi Yun, Jing, Martijn and Lincoln,
> > > > > >
> > > > > > I'm seeking guidance on whether merging the bugfix[1][2] at this
> > stage
> > > > > > is
> > > > > > appropriate. I want to ensure that the actions align with the
> > current
> > > > > > release process and do not disrupt the ongoing preparations.
> > > > > >
> > > > > > [1] https://issues.apache.org/jira/browse/FLINK-29114
> > > > > > [2] https://github.com/apache/flink/pull/24492
> > > > > >
> > > > > > Best,
> > > > > > Jane
> > > > > >
> > > > > > On Thu, Mar 14, 2024 at 1:33 PM Yun Tang myas...@live.com wrote:
> > > > > >
> > > > > > > +1 (non-binding)
> > > > > > >
> > > > > > > * Verified the signature and checksum.
> > > > > > > * Reviewed the release note PR
> > > > > > > * Reviewed the web announcement PR
> > > > > > > * Start a standalone cluster to submit the state machine example,
> > > > > > > which works well.
> > > > > > > * Checked the pre-built jars are generated via JDK8
> > > > > > > * Verified the process profiler works well after setting
> > > > > > > rest.profiling.enabled: true
> > > > > > >
> > > > > > > Best
> > > > > > > Yun Tang
> > > > > > >
> > > > > > > 
> > > > > > > From: Qingsheng Ren re...@apache.org
> > > > > > > Sent: Wednesday, March 13, 2024 12:45
> > > > > > > To: dev@flink.apache.org dev@flink.apache.org
> > > > > > > Subject: Re: [VOTE] Release 1.19.0, release candidate #2
> > > > > > >
> > > > > > > +1 (binding)
> > > > > > >
> > > > > > > - Verified signature and checksum
> > > > > > > - Verified no binary in source
> > > > > > > - Built from source
> > > > > > > - Tested reading and writing Kafka with SQL client and Kafka
> > > > > > > connector
> > > > > > > 3.1.0
> > > > > > > - Verified source code tag
> > > > > > > - Reviewed release note
> > > > > > > - Reviewed web PR
> > > > > > >
> > > > > > > 

Re: [DISCUSS] Externalized Google Cloud Connectors

2024-03-08 Thread Martijn Visser
Hi Claire,

I don't think it's a good idea to actually develop outside of Apache;
contributions that have happened outside of the Apache realm do not play a
role when evaluating potential new committers. I think the best course of
action would be to create a FLIP to add these connectors to the ASF, while
trying to find one or two committers in the Flink project that are willing
to help with the reviews. Would that be possible?

Best regards,

Martijn

On Thu, Feb 15, 2024 at 12:39 PM Claire McCarthy
 wrote:

> Hi Alexander,
>
> Thanks so much for the info!
>
> It sounds like the best path forward is for us to develop outside of Apache
> while, in parallel, working to gain committer status. Our goal will be to
> eventually move anything we build under the Apache umbrella once we're more
> plugged in to the community.
>
> As for migrating the existing Pub/Sub connector to the new Source API, we
> actually have somebody currently building a new Pub/Sub connector from
> scratch (using the new Source API). Once that is ready, we will make sure
> to get that new implementation moved under Apache and help with the
> migration effort.
>
> Thanks again for the response and I'm sure we will be chatting soon!
>
> Best,
> Claire
>
> On Wed, Feb 14, 2024 at 7:36 AM Alexander Fedulov <
> alexander.fedu...@gmail.com> wrote:
>
> > Hi Claire,
> >
> > Thanks for reaching out. It's great that there is interest from Google
> > in spearheading the development of the respective Flink connectors.
> >
> > As of now,there is only one GCP-specific connector developed directly as
> > part
> > of ASF Flink, namely the Pub/Sub one. It has already been externalized
> here
> > [1].
> > Grouping further connectors under apache/flink-connectors-gcp makes
> sense,
> > but
> > it would be nice to first understand which GCP connectors you plan to add
> > before we create this new umbrella project.
> >
> > I do not think establishing a dedicated workgroup to help with the
> > GCP-specific
> > development is a realistic goal, though. The development will most
> probably
> > take
> > place on the regular ASF best effort basis (which involves mailing list
> > discussions,
> > reaching out to people for reviews, etc.) until your developers gain
> > committer status
> > and can work more independently.
> >
> > One immediate open item where the Flink community would definitely
> > appreciate your
> > help is with the migration of the existing Pub/Sub connector to the new
> > Source API.
> > As you can see here [2], it is one of the two remaining connectors where
> we
> > have not
> > yet made progress, and it seems like a great place to start the
> > collaboration.
> > Flink 2.0 aims to remove the SourceFunction API, which the current
> Pub/Sub
> > connector
> > relies on. It would be great if your colleagues could assist with this
> > effort [3].
> >
> > Best,
> > Alexander Fedulov
> >
> > [1] https://github.com/apache/flink-connector-gcp-pubsub
> > [2] https://issues.apache.org/jira/browse/FLINK-28045
> > [3] https://issues.apache.org/jira/browse/FLINK-32673
> >
> >
> >
> > On Tue, 13 Feb 2024 at 17:25, Claire McCarthy
> >  wrote:
> >
> > > Hi Devs!
> > >
> > > I’d like to kick off a discussion on setting up a repo for a new fleet
> of
> > > Google Cloud connectors.
> > >
> > > A bit of context:
> > >
> > > - We have a team of Google engineers who are looking to build/maintain
> > >   5-10 GCP connectors for Flink.
> > > - We are wondering if it would make sense to host our connectors under the
> > >   ASF umbrella following a similar repo structure as AWS
> > >   (https://github.com/apache/flink-connector-aws). In our case:
> > >   apache/flink-connectors-gcp.
> > > - Currently, we have no Flink committers on our team. We are actively
> > >   involved in the Apache Beam community and have a number of ASF members
> > >   on the team.
> > >
> > >
> > > We saw that one of the original motivations for externalizing
> connectors
> > > was to encourage more activity and contributions around connectors by
> > > easing the contribution overhead. We understand that the decision was
> > > ultimately made to host the externalized connector repos under the ASF
> > > organization. For the same reasons (release infra, quality assurance,
> > > integration with the community, etc.), we would like all GCP connectors
> > to
> > > live under the ASF organization.
> > >
> > > We want to ask the Flink community what you all think of this idea, and
> > > what would be the best way for us to go about contributing something
> like
> > > this. We are excited to contribute and want to learn and follow your
> > > practices.
> > >
> > > A specific issue we know of is that our changes need approval from
> Flink
> > > committers. Do you have a suggestion for how best to go about a new
> > > contribution like ours from a team that does not have committers? Is it
> > > possible, for example, to partner 

Re: [DISCUSS] FLIP-419: Optimize multi-sink query plan generation

2024-03-08 Thread Martijn Visser
Hi Jeyhun Karimov,

I see that you've already opened up a VOTE thread, but since you're talking
about having a prototype already and results, I wondered if you could
include the POC and how you've tested these results in the FLIP?

Best regards,

Martijn

On Tue, Jan 30, 2024 at 4:47 AM Jeyhun Karimov  wrote:

> Hi devs,
>
> I just wanted to give an update on this FLIP.
> I updated the doc based on the comments from Jim.
> Also, I developed a prototype and did some testing.
>
> In my small prototype I ran the following tests:
>
> - org.apache.flink.table.planner.plan.stream.sql.DagOptimizationTest#testMultiSinks1
> - org.apache.flink.table.planner.plan.stream.sql.DagOptimizationTest#testMultiSinks2
> - org.apache.flink.table.planner.plan.stream.sql.DagOptimizationTest#testMultiSinks3
> - org.apache.flink.table.planner.plan.stream.sql.DagOptimizationTest#testMultiSinks4
> - org.apache.flink.table.planner.plan.stream.sql.DagOptimizationTest#testMultiSinks5
> - org.apache.flink.table.planner.plan.stream.sql.DagOptimizationTest#testMultiSinksWithUDTF
> - org.apache.flink.table.planner.plan.stream.sql.DagOptimizationTest#testMultiSinksSplitOnUnion1
> - org.apache.flink.table.planner.plan.stream.sql.DagOptimizationTest#testMultiSinksSplitOnUnion2
> - org.apache.flink.table.planner.plan.stream.sql.DagOptimizationTest#testMultiSinksSplitOnUnion3
> - org.apache.flink.table.planner.plan.stream.sql.DagOptimizationTest#testMultiSinksSplitOnUnion4
>
>
> These tests are e2e DAG optimization tests, covering query parsing, validation,
> optimization, and checking the results.
>
> In these e2e optimization tests, my prototype was 15-20% faster than the
> existing Flink optimization structure (with the "cost" of simplifying the
> codebase).
>
>
> Any questions/comments are more than welcome.
>
>
> Regards,
>
> Jeyhun Karimov
>
> On Wed, Jan 17, 2024 at 9:11 PM Jeyhun Karimov 
> wrote:
>
> > Hi Jim,
> >
> > Thanks for your comments. Please find my answers below:
> >
> >1. StreamOptimizeContext may still be needed to pass the fact that we
> >>are optimizing a streaming query.  I don't think this class will go
> >> away
> >>completely.  (I agree it may become more simple if the kind or
> >>mini-batch configuration can be removed.)
> >
> >
> > What I meant is that it might go away if we get rid of
> > *isUpdateBeforeRequired* and *getMiniBatchInterval* fields.
> > Of course if we can get rid of only one of them, then the
> > *StreamOptimizeContext* class will not be removed but get simpler.
> > Will update the doc accordingly.
> >
> >2. How are the mini-batch and changelog inference rules tightly
> coupled?
> >>I looked a little bit and I haven't seen any connection between them.
> >> It
> >>seems like the changelog inference is what needs to run multiple
> times.
> >
> >
> > Sorry for the misunderstanding. The mini-batch and changelog inference
> are
> > not coupled among themselves but with the high-level optimization logic.
> > The idea is to separate the query optimization into 1) optimize 2) enrich
> > with changelog inference 3) enrich with mini-batch interval inference and
> > 4) rewrite
> >
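> > As a rough sketch of this phase separation (all class names below are
> > purely illustrative, not actual planner classes), the four steps could be
> > modeled as an ordered pipeline over the sink-rooted RelNode DAG:
> >
> > import java.util.List;
> > import org.apache.calcite.rel.RelNode;
> >
> > /** Illustrative only: one phase of the proposed multi-sink optimization. */
> > interface DagOptimizePhase {
> >     List<RelNode> apply(List<RelNode> sinkRoots);
> > }
> >
> > /** Runs 1) optimize, 2) changelog inference, 3) mini-batch inference, 4) rewrite. */
> > final class DagOptimizePipeline {
> >     private final List<DagOptimizePhase> phases;
> >
> >     DagOptimizePipeline(List<DagOptimizePhase> phases) {
> >         this.phases = phases;
> >     }
> >
> >     List<RelNode> run(List<RelNode> sinkRoots) {
> >         List<RelNode> current = sinkRoots;
> >         for (DagOptimizePhase phase : phases) {
> >             current = phase.apply(current); // each phase returns enriched roots
> >         }
> >         return current;
> >     }
> > }
> >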
> >3. I think your point about code complexity is unnecessary.
> >> StreamOptimizeContext
> >>extends org.apache.calcite.plan.Context which is used an interface to
> >> pass
> >>information and objects through the Calcite stack.
> >
> >
> > I partially agree. Please see my answer above for the question 1.
> >
> >4. Is an alternative where the complexity of the changelog
> optimization
> >>can be moved into the `FlinkChangelogModeInferenceProgram`?  (If this
> >> is
> >>coupling between the mini-batch and changelog rules, then this would
> >> not
> >>make sense.)
> >
> >
> > Good point. Yes, this is definitely an alternative.
> >
> >5. There are some other smaller refactorings.  I tried some of them
> >>here: https://github.com/apache/flink/pull/24108 Mostly, it is
> syntax
> >>and using lazy vals to avoid recomputing various things.  (Feel free
> to
> >>take whatever actually works; I haven't run the tests.)
> >
> >
> > I took a look at your PR. For sure, some of the refactorings I will reuse
> > (probably rebase by the time I have this ready :))
> >
> >
> > Separately, folks on the Calcite dev list are thinking about multi-query
> >> optimization:
> >> https://lists.apache.org/thread/mcdqwrtpx0os54t2nn9vtk17spkp5o5k
> >> https://issues.apache.org/jira/browse/CALCITE-6188
> >
> >
> > Seems interesting. But Calcite's MQO approach will probably require some
> > drastic changes in our codebase once we adopt it.
> > This approach is more incremental.
> >
> > Hope my comments answer your questions.
> >
> > Regards,
> > Jeyhun Karimov
> >
> > On Wed, Jan 17, 2024 at 2:36 AM Jim Hughes  >
> > wrote:
> >
> >> Hi Jeyhun,
> >>
> >>
> >> Generally, I like the idea of 

Re: [DISCUSS] Add "Special Thanks" Page on the Flink Website

2024-03-08 Thread Martijn Visser
Hi all,

I'm +1 on it. As long as we follow the ASF rules on this, we can thank
those that are/have made contributions.

Best regards,

Martijn

On Wed, Mar 6, 2024 at 7:45 AM Jark Wu  wrote:

> Hi Matthias,
>
> Thanks for your comments! Please see my reply inline.
>
> > What do we do if we have enough VMs? Do we still allow
> companies to add more VMs to the pool even though it's not adding any
> value?
>
> The ASF policy[1] makes it very clear: "Project Thanks pages are to show
> appreciation
> for goods that the project truly needs, not just for goods that someone
> wants to donate."
> Therefore, the community should reject new VMs if it is enough.
>
>
> > The community lacks the openly accessible tools to monitor the VM usage
> independently
> as far as I know (the Azure Pipelines project is owned by Ververica right
> now).
>
> The Azure pipeline account is sponsored by Ververica, and is managed by the
> community.
> AFAIK, Chesnay and Robert both have admin permissions [2] to the Azure
> pipeline project.
> Others can contact the managers to get access to the environment.
>
> > I figured that there could be a chance for us to
> rely on Apache-provided infrastructure entirely with our current workload
> when switching over from Azure Pipelines.
>
> That sounds great. We can return the VMs and mark the donations as
> historical on the Thanks page once the new GitHub Actions CI is ready.
>
> > I am fine with creating a Thank You page to acknowledge the financial
> contributions from Alibaba and Ververica in the past (since Apache allows
> historical donations) considering that the contributions of the two
> companies go way back in time and are quite significant in my opinion. I
> suggest focusing on the past for now because of the option to migrate to
> Apache infrastructure midterm.
>
> Sorry, do you mean we only mention past donations for now?
> IIUC, the new GitHub Actions might be ready after the end of v1.20, which
> will probably be in half a year.
> I'm worried that if we say the sponsorship is ongoing until now (but it's
> not), it will confuse
> people and disrespect the sponsor.
>
> Besides, I'm not sure whether the new GitHub Actions CI will replace the
> machines for running
> flink-ci mirrors [3] and the flink benchmarks [4]. If not, I think it's
> inappropriate to say they are
> historical donations.
>
> Furthermore, we are collecting all kinds of donations. I just noticed that
> AWS donated [5] service costs
> for flink-connector-aws tests that hit real AWS services. This is an
> ongoing donation and I think it's not
> good to mark it as a historical donation. (Thanks for the donation, AWS,
> @Danny
> Cranmer  @HongTeoh!
> We should add it to the Thanks page!)
>
> Best,
> Jark
>
>
> [1]: https://www.apache.org/foundation/marks/linking#projectthanks
> [2]:
>
> https://cwiki.apache.org/confluence/display/FLINK/Continuous+Integration#ContinuousIntegration-Contacts
>
> [3]:
>
> https://cwiki.apache.org/confluence/display/FLINK/Continuous+Integration#ContinuousIntegration-Repositories
>
> [4]: https://lists.apache.org/thread/bkw6ozoflgltwfwmzjtgx522hyssfko6
>
> [5]: https://issues.apache.org/jira/browse/INFRA-24474
>
> On Wed, 6 Mar 2024 at 17:58, Matthias Pohl  wrote:
>
> > Thanks for starting this discussion. I see the value of such a page if we
> > want to encourage companies to sponsor CI infrastructure in case we need
> > this infrastructure (as Yun Tang pointed out). The question is, though:
> Do
> > we need more VMs? The amount of commits to master is constantly
> decreasing
> > since its peak in 2019/2020 [1]. Did we observe shortage of CI runners in
> > the past years? What do we do if we have enough VMs? Do we still allow
> > companies to add more VMs to the pool even though it's not adding any
> > value? Then it becomes a marketing tool for companies. The community
> lacks
> > the openly accessible tools to monitor the VM usage independently as far
> as
> > I know (the Azure Pipelines project is owned by Ververica right now). My
> > concern is (which goes towards what Max is saying) that this can be a
> > source of friction in the community (even if it's not about individuals
> but
> > companies). I'm not sure whether the need for additional infrastructure
> > outweighs the risk of friction.
> >
> > On another note: After monitoring the GitHub Action workflows (FLIP-396
> > [2]) for the past weeks, I figured that there could be a chance for us to
> > rely on Apache-provided infrastructure entirely with our current workload
> > when switching over from Azure Pipelines. But that might be a premature
> > judgement because the monitoring started after the feature freeze of
> Flink
> > 1.19. We should wait with a final conclusion till the end of the 1.20
> > release cycle. Apache Infra increased the amount of VMs they are offering
> > since 2018 (when the Apache Flink community decided to go for Azure
> > Pipelines and custom VMs as far as I know). That's based on a
> conversation
> 

Re: [DISCUSS] Support the Ozone Filesystem

2024-03-08 Thread Martijn Visser
Hi Ferenc,

I'm +0: I have seen no demand for Ozone, but if the community is OK with
it, why not.

Best regards,

Martijn

On Mon, Feb 26, 2024 at 6:08 AM Ferenc Csaky 
wrote:

> Hi,
>
> gentle reminder on this thread, any opinions or thoughts?
>
> Regards,
> Ferenc
>
>
>
>
> On Thursday, February 8th, 2024 at 18:02, Ferenc Csaky
>  wrote:
>
> >
> >
> > Hello devs,
> >
> > I would like to start a discussion regarding Apache Ozone FS support. The
> > jira [1] is stale for quite a while, but supporting it with some
> limitations could
> > be done with minimal effort.
> >
> > Ozone does not have a truncate() impl, so it falls into the same category
> > as Hadoop < 2.7 [2]; on the DataStream API it requires the usage of
> > OnCheckpointRollingPolicy when checkpointing is enabled, to make sure
> > the FileSink will not use truncate().
> >
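> > As a minimal sketch of the DataStream side (standard FileSink API; the
> > ofs:// path below is just an illustrative placeholder):
> >
> > import org.apache.flink.api.common.serialization.SimpleStringEncoder;
> > import org.apache.flink.connector.file.sink.FileSink;
> > import org.apache.flink.core.fs.Path;
> > import org.apache.flink.streaming.api.functions.sink.filesystem.rollingpolicies.OnCheckpointRollingPolicy;
> >
> > // Roll files on every checkpoint, so recovery never needs truncate(),
> > // which Ozone does not implement.
> > FileSink<String> sink =
> >         FileSink.forRowFormat(
> >                         new Path("ofs://ozone-service/volume/bucket/out"),
> >                         new SimpleStringEncoder<String>("UTF-8"))
> >                 .withRollingPolicy(OnCheckpointRollingPolicy.build())
> >                 .build();
> >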
> > The Table API is a bit trickier, because the checkpointing policy cannot
> > be configured explicitly (why?); it behaves differently depending on the
> > write mode [3]. Bulk mode is covered, but for row format, auto-compaction
> > has to be set.
> >
> > Even with the mentioned limitations, I think it would be worth adding
> > support for OFS; it would require one small change to enable "ofs" [4] and
> > documenting the limitations.
> >
> > WDYT?
> >
> > Regards,
> > Ferenc
> >
> > [1] https://issues.apache.org/jira/browse/FLINK-28231
> > [2]
> https://nightlies.apache.org/flink/flink-docs-release-1.18/docs/connectors/datastream/filesystem/#general
> > [3]
> https://github.com/apache/flink/blob/a33a0576364ac3d9c0c038c74362f1faac8d47b8/flink-connectors/flink-connector-files/src/main/java/org/apache/flink/connector/file/table/FileSystemTableSink.java#L226
> > [4]
> https://github.com/apache/flink/blob/a33a0576364ac3d9c0c038c74362f1faac8d47b8/flink-filesystems/flink-hadoop-fs/src/main/java/org/apache/flink/runtime/fs/hdfs/HadoopRecoverableWriter.java#L62
>


[jira] [Created] (FLINK-34621) Bump com.google.guava:guava from 31.1-jre to 32.0.0-jre in /flink-connector-hbase-base

2024-03-07 Thread Martijn Visser (Jira)
Martijn Visser created FLINK-34621:
--

 Summary: Bump com.google.guava:guava from 31.1-jre to 32.0.0-jre 
in /flink-connector-hbase-base
 Key: FLINK-34621
 URL: https://issues.apache.org/jira/browse/FLINK-34621
 Project: Flink
  Issue Type: Technical Debt
  Components: Connectors / HBase
Reporter: Martijn Visser
Assignee: Martijn Visser








Re: Default scale and precision SQL data types

2024-03-03 Thread Martijn Visser
Hi,

I think it would first require a FLIP, given it touches on the core type
system of SQL.

Best regards,

Martijn

On Sat, Mar 2, 2024 at 5:34 PM Sergei Morozov  wrote:

> Hi there,
>
> org.apache.flink.table.api.DataTypes allows the creation of temporal data
> types by specifying precision (e.g. TIME(3)) or omitting it (TIME()). The
> ability to omit precision for temporal types was introduced in
> apache/flink@36fef44
> <
> https://github.com/apache/flink/commit/36fef4457a7f1de47989c8a2485581bcf8633b32
> >
> .
>
> Unfortunately, this isn't possible for other data types (e.g. CHAR,
> DECIMAL).
> Even though they define defaults for length, precision, and scale, their
> values have to be passed to the method explicitly.
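> To make this concrete (the no-arg DECIMAL() below is the hypothetical
> addition, not existing API):
>
> import org.apache.flink.table.api.DataTypes;
> import org.apache.flink.table.types.DataType;
>
> DataType t1 = DataTypes.TIME();         // compiles today: default precision
> DataType t2 = DataTypes.TIME(3);        // explicit precision
>
> DataType d1 = DataTypes.DECIMAL(10, 0); // today: precision/scale mandatory
> // DataType d2 = DataTypes.DECIMAL();   // proposed: use the documented defaults
>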
>
> Would a PR be accepted which will introduce the methods for the remaining
> types similar to the temporal ones?
>
> Thanks.
>


Re: [DISCUSS] Apache Bahir retired

2024-02-28 Thread Martijn Visser
Hi all,

+1 to have a connector FLIP to propose a Kudu connector. I'm +0 overall
because I don't see a lot of activity happening in newly proposed
connectors, but if there's demand for it and people want to volunteer with
contributions, there's no reason to block it.

Best regards,

Martijn

On Wed, Feb 28, 2024 at 4:31 PM Márton Balassi 
wrote:

> Hi team,
>
> Thanks for bringing this up, Feri. I am +1 for maintaining the Kudu
> connector as an external Flink connector.
>
> As per the legal/trademark questions this is actually fair game because one
> does not donate code to a specific Apache project, technically it is
> donated to the Apache Software Foundation. Consequently, moving between ASF
> projects is fine, I would add a line to the NOTICE file stating that this
> code originally lived in Bahir once we forked it.
>
> Although I did not find an easy to link precedent this is also implied in
> the Attic Bahir site [1] ("notify us if you fork outside Apache") and in
> this [2] Apache community dev list chat. We should notify the Attic team in
> any case. :-)
>
> [1] https://attic.apache.org/projects/bahir.html
> [2] https://lists.apache.org/thread/p31mz4x4dcvd43f026d5p05rpglzfyrt
>
> On Tue, Feb 27, 2024 at 10:09 AM Ferenc Csaky 
> wrote:
>
> > Thank you Leonard for sharing your thoughts on this topic.
> >
> > I agree that complying with the Flink community connector
> > development process would be a must, if there are no legal or
> > copyright issues, I would be happy to take that task for this
> > particular case.
> >
> > I am no legal/copyright expert myself, but Bahir uses the Apache
> > 2.0 license as well, so I believe it should be possible without too many
> > complications; I will try to look for help on that front.
> >
> > FYI we are using and supporting a downstream fork of the Kudu connector
> on
> > top of Flink 1.18 without any major modifications, so it is pretty up to
> > date upstream as well.
> >
> > Regards,
> > Ferenc
> >
> >
> >
> >
> > On Monday, February 26th, 2024 at 10:29, Leonard Xu 
> > wrote:
> >
> > >
> > >
> > > Hey, Ferenc
> > >
> > > Thanks for initiating this discussion. Apache Bahir is a great project
> > that provided significant assistance to many Apache Flink/Spark users.
> > It's a pity that it has been retired.
> > >
> > > I believe that connectivity is crucial for building the ecosystem of a
> > > computing engine such as Flink. The community, or at least I, would
> > > actively support the introduction and maintenance of new connectors.
> > > Therefore,
> > adding a Kudu connector or other connectors from Bahir makes sense to me,
> > as long as we adhere to the development process for connectors in the
> Flink
> > community[1].
> > > I recently visited the Bahir Flink repository. Although the last
> release
> > of Bahir Flink was in August ’22[2] which is compatible with Flink 1.14,
> > its latest code is compatible with Flink 1.17[3]. So, based on the
> existing
> > codebase, developing an official Apache Flink connector for Kudu or other
> > connectors should be manageable. One point to consider is that if we're
> not
> > developing a connector entirely from scratch but based on an existing
> > repository, we must ensure that there are no copyright issues. Here, "no
> > issues" means satisfying both Apache Bahir's and Apache Flink's copyright
> > requirements. Honestly, I'm not an expert in copyright or legal matters.
> If
> > you're interested in contributing to the Kudu connector, it might be
> > necessary to attract other experienced community members to participate
> in
> > this aspect.
> > >
> > > Best,
> > > Leonard
> > >
> > > [1]
> >
> https://cwiki.apache.org/confluence/display/FLINK/FLIP+Connector+Template
> > > [2] https://github.com/apache/bahir-flink/releases/tag/v1.1.0
> > > [3] https://github.com/apache/bahir-flink/blob/master/pom.xml#L116
> > >
> > >
> > >
> > > > On Feb 22, 2024, at 6:37 PM, Ferenc Csaky ferenc.cs...@pm.me.INVALID wrote:
> > > >
> > > > Hello devs,
> > > >
> > > > Just saw that the Bahir project is retired [1]. Any plans on what's
> > happening with the Flink connectors that were part of this project? We
> > specifically use the Kudu connector and integrate it to our platform at
> > Cloudera, so we would be okay to maintain it. Would it be possible to
> carry
> > it over as a separate connector repo under the Apache umbrella similarly as
> > it happened with the external connectors previously?
> > > >
> > > > Thanks,
> > > > Ferenc
> >
>


Re: [DISCUSS] Alternative way of posting FLIPs

2024-02-26 Thread Martijn Visser
Hi all,

I've created https://issues.apache.org/jira/browse/FLINK-34515 and
updated 
https://cwiki.apache.org/confluence/display/FLINK/Flink+Improvement+Proposals
to update the instructions for the community.

Best regards,

Martijn

On Thu, Feb 15, 2024 at 10:04 AM Martijn Visser
 wrote:
>
> Hi all,
>
> Thanks for all your input. If there's no other opinions in the next
> couple of days, then I'll update the documentation accordingly.
>
> Best regards,
>
> Martijn
>
> On Mon, Feb 12, 2024 at 4:51 AM Benchao Li  wrote:
> >
> +1 for option 1. Thanks Martijn for taking care of this, I've also
> filed an issue [1] for the infra a few days ago, but it hasn't been
> resolved yet.
> >
> > [1] https://issues.apache.org/jira/browse/INFRA-25451
> >
> > On Sat, Feb 10, 2024 at 2:33 PM, Jane Chan  wrote:
> > >
> > > +1 for option 1.
> > >
> > > Hi Yun,
> > >
> > > > I am concerned about whether we can view the history of design docs
> > > >
> > >
> > > If someone has editor permissions, they can also view the editing history
> > > of Google Docs[1]. If the change history of the design document is equally
> > > important, would it be possible to selectively share the editor link with
> > > the developers involved in the discussion for the FLIP?
> > >
> > > [1]
> > > https://support.google.com/docs/answer/190843?sjid=8405245178977557481-NC
> > >
> > > Best,
> > > Jane
> > >
> > > On Fri, Feb 9, 2024 at 5:33 PM Sergey Nuyanzin  
> > > wrote:
> > >
> > > > +1 for option 1
> > > >
> > > > On Fri, Feb 9, 2024 at 10:18 AM Jing Ge 
> > > > wrote:
> > > >
> > > > > +1 for option 1. The Github discussions look more like an overlap to 
> > > > > the
> > > > ML
> > > > > instead of a wiki tool like Confluence.
> > > > >
> > > > > Best regards,
> > > > > Jing
> > > > >
> > > > > On Fri, Feb 9, 2024 at 10:08 AM Yun Tang  wrote:
> > > > >
> > > > > > For the first solution, I am concerned about whether we can view the
> > > > > > history of design docs, which is supported by Confluence wiki and
> > > > GitHub
> > > > > > discussions. From my understanding, even the discussion history 
> > > > > > could
> > > > let
> > > > > > others know the evolution of this feature and the history of a 
> > > > > > design
> > > > doc
> > > > > > is also really important.
> > > > > >
> > > > > > Best
> > > > > > Yun Tang
> > > > > > 
> > > > > > From: Piotr Nowojski 
> > > > > > Sent: Thursday, February 8, 2024 14:17
> > > > > > To: dev@flink.apache.org 
> > > > > > Subject: Re: [DISCUSS] Alternative way of posting FLIPs
> > > > > >
> > > > > > +1 for the first option as well
> > > > > >
> > > > > > Best,
> > > > > > Piotrek
> > > > > >
> > > > > > On Wed, Feb 7, 2024 at 4:48 PM Matthias Pohl 
> > > > > > wrote:
> > > > > >
> > > > > > > +1 for option 1 since it's a reasonable temporary workaround
> > > > > > >
> > > > > > > Moving to GitHub discussions would either mean moving the current
> > > > FLIP
> > > > > > > collection or having the FLIPs in two locations. Both options do 
> > > > > > > not
> > > > > seem
> > > > > > > to be optimal. Another concern I had was that GitHub Discussions
> > > > > wouldn't
> > > > > > > allow integrating diagrams that easily. But it looks like they
> > > > support
> > > > > > > Mermaid [1] for diagrams.
> > > > > > >
> > > > > > > One flaw of the GoogleDocs approach is, though, that we have to 
> > > > > > > rely
> > > > on
> > > > > > > diagrams being provided as PNG/JPG/SVG rather than draw.io 
> > > > > > > diagrams.
> > > > > > > draw.io
> > > > > > > is more tightly integrated with the Confluence wiki which allows

[jira] [Created] (FLINK-34515) Document new FLIP process

2024-02-26 Thread Martijn Visser (Jira)
Martijn Visser created FLINK-34515:
--

 Summary: Document new FLIP process
 Key: FLINK-34515
 URL: https://issues.apache.org/jira/browse/FLINK-34515
 Project: Flink
  Issue Type: New Feature
  Components: Documentation
Reporter: Martijn Visser
Assignee: Martijn Visser


Per https://lists.apache.org/thread/rkpvlnwj9gv1hvx1dyklx6k88qpnvk2t

Contributors create a Google Doc and make it view-only, and post that Google
Doc to the mailing list for a discussion thread. When the discussions have been
resolved, the contributor asks a committer/PMC member on the dev mailing list to
copy the contents from the Google Doc and create a FLIP number for them. The
contributor can then use that FLIP to actually have a VOTE thread.





Re: FW: RE: [DISCUSS] FLIP-314: Support Customized Job Lineage Listener

2024-02-19 Thread Martijn Visser
I'm a bit confused: did we add new interfaces after FLIP-314 was
accepted? If so, please move the new interfaces to a new FLIP and
start a separate vote. We can't retrospectively change an accepted
FLIP with new interfaces and a new vote.

On Mon, Feb 19, 2024 at 3:22 AM Yong Fang  wrote:
>
> Hi all,
>
> If there are no more feedbacks, I will start a vote for the new interfaces
> in the next day, thanks
>
> Best,
> Fang Yong
>
> On Thu, Feb 8, 2024 at 1:30 PM Yong Fang  wrote:
>
> > Hi devs,
> >
> > According to the online discussion in FLINK-3127 [1] and the
> > offline discussion with Maciej Obuchowski and Zhenqiu Huang, we would like
> > to update the lineage vertex relevant interfaces in FLIP-314 [2] as follows:
> >
> > 1. Introduce `LineageDataset` which represents source and sink in
> > `LineageVertex`. The fields in `LineageDataset` are as follows:
> > /* Name for this particular dataset. */
> > String name;
> > /* Unique name for this dataset's storage, for example, url for jdbc
> > connector and location for lakehouse connector. */
> > String namespace;
> > /* Facets for the lineage vertex to describe the particular
> > information of the dataset, such as schema and config. */
> > Map<String, LineageDatasetFacet> facets;
> >
> > 2. There may be multiple datasets in one `LineageVertex`, for example,
> > Kafka source or hybrid source. So users can get the dataset list from
> > `LineageVertex`:
> > /** Get datasets from the lineage vertex. */
> > List<LineageDataset> datasets();
> >
> > 3. There will be built-in facets for config and schema. To describe
> > columns in table/sql jobs and datastream jobs, we introduce
> > `DatasetSchemaField`.
> > /** Builtin config facet for dataset. */
> > @PublicEvolving
> > public interface DatasetConfigFacet extends LineageDatasetFacet {
> > Map<String, String> config();
> > }
> >
> > /** Field for schema in dataset. */
> > public interface DatasetSchemaField<T> {
> > /** The name of the field. */
> > String name();
> > /** The type of the field. */
> > T type();
> > }
> >
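> > For illustration only, a connector-side implementation might then look like
> > this (the JDBC class and values below are invented, and I'm assuming the
> > accessors end up as methods rather than bare fields):
> >
> > import java.util.Collections;
> > import java.util.Map;
> >
> > /** Invented example of a dataset a JDBC-like sink could report. */
> > public class JdbcLineageDataset implements LineageDataset {
> >     @Override
> >     public String name() {
> >         return "shop.orders"; // the particular table written to
> >     }
> >
> >     @Override
> >     public String namespace() {
> >         return "jdbc:mysql://prod-db:3306"; // unique storage identifier
> >     }
> >
> >     @Override
> >     public Map<String, LineageDatasetFacet> facets() {
> >         return Collections.emptyMap(); // schema/config facets would go here
> >     }
> > }
> >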
> > Thanks for valuable inputs from @Maciej and @Zhenqiu. And looking forward
> > to your feedback, thanks
> >
> > Best,
> > Fang Yong
> >
> > On Mon, Sep 25, 2023 at 1:18 PM Shammon FY  wrote:
> >
> >> Hi David,
> >>
> >> Do you want the detailed topology for Flink job? You can get
> >> `JobDetailsInfo` in `RestCusterClient` with the submitted job id, it has
> >> `String jsonPlan`. You can parse the json plan to get all steps and
> >> relations between them in a Flink job. Hope this can help you, thanks!
> >>
> >> Best,
> >> Shammon FY
> >>
> >> On Tue, Sep 19, 2023 at 11:46 PM David Radley 
> >> wrote:
> >>
> >>> Hi there,
> >>> I am looking at the interfaces. If I am reading it correctly, there is
> >>> one relationship between the source and sink, and this relationship
> >>> represents the operational lineage. Lineage is usually represented as
> >>> asset -> process -> asset – see for example
> >>> https://egeria-project.org/features/lineage-management/overview/#the-lineage-graph
> >>>
> >>> Maybe I am missing it, but it seems to me that it would be useful to
> >>> store the process in the lineage graph.
> >>>
> >>> It is useful to have the top-level lineage as source -> Flink job ->
> >>> sink, where the Flink job is the process, but also to have this asset ->
> >>> process -> asset pattern for each of the steps in the job. If this is
> >>> present, please could you point me to it,
> >>>
> >>>   Kind regards, David.
> >>>
> >>>
> >>>
> >>>
> >>>
> >>> From: David Radley 
> >>> Date: Tuesday, 19 September 2023 at 16:11
> >>> To: dev@flink.apache.org 
> >>> Subject: [EXTERNAL] RE: [DISCUSS] FLIP-314: Support Customized Job
> >>> Lineage Listener
> >>> Hi,
> >>> I notice that there is an experimental lineage integration for Flink
> >>> with OpenLineage https://openlineage.io/docs/integrations/flink  . I
> >>> think this feature would allow for a superior Flink OpenLineage 
> >>> integration,
> >>> Kind regards, David.
> >>>
> >>> From: XTransfer 
> >>> Date: Tuesday, 19 September 2023 at 15:47
> >>> To: dev@flink.apache.org 
> >>> Subject: [EXTERNAL] Re: [DISCUSS] FLIP-314: Support Customized Job
> >>> Lineage Listener
> >>> Thanks Shammon for this proposal.
> >>>
> >>> That’s helpful for collecting the lineage of Flink tasks.
> >>> Looking forward to its implementation.
> >>>
> >>> Best,
> >>> Jiabao
> >>>
> >>>
> >>> > 2023年9月18日 20:56,Leonard Xu  写道:
> >>> >
> >>> > Thanks Shammon for the informations, the comment makes the lifecycle
> >>> clearer.
> >>> > +1
> >>> >
> >>> >
> >>> > Best,
> >>> > Leonard
> >>> >
> >>> >
> >>> >> On Sep 18, 2023, at 7:54 PM, Shammon FY  wrote:
> >>> >>
> >>> >> Hi devs,
> >>> >>
> >>> >> After discussing with @Qingsheng, I fixed a minor issue of the
> >>> lineage lifecycle in `StreamExecutionEnvironment`. I have added the 
> >>> comment
> >>> to explain that the lineage information in `StreamExecutionEnvironment`
> 

[jira] [Created] (FLINK-34461) MongoDB weekly builds fail with time out on Flink 1.18.1 for JDK17

2024-02-19 Thread Martijn Visser (Jira)
Martijn Visser created FLINK-34461:
--

 Summary: MongoDB weekly builds fail with time out on Flink 1.18.1 
for JDK17
 Key: FLINK-34461
 URL: https://issues.apache.org/jira/browse/FLINK-34461
 Project: Flink
  Issue Type: Bug
  Components: Connectors / MongoDB
Affects Versions: mongodb-1.1.0
Reporter: Martijn Visser


The weekly tests for MongoDB consistently time out for the v1.0 branch while 
testing Flink 1.18.1 for JDK17:

https://github.com/apache/flink-connector-mongodb/actions/runs/7770329490/job/21190387348

https://github.com/apache/flink-connector-mongodb/actions/runs/7858349600/job/21443232301

https://github.com/apache/flink-connector-mongodb/actions/runs/7945225005/job/21691624903






Re: FLINK-21672

2024-02-16 Thread Martijn Visser
Well I wouldn't be too comfortable with merging a change about how
Flink works on different JDKs at this stage, especially since we
already have to test for 4 different JDKs at this point.

On Fri, Feb 16, 2024 at 5:00 PM Gyula Fóra  wrote:
>
> Depending on the scope of this change, it may not be considered a feature,
> right Martijn?
> If it's a test improvement, can it still be part of the release?
>
> Gyula
>
> On Fri, Feb 16, 2024 at 4:45 PM Martijn Visser 
> wrote:
>
> > Hi David,
> >
> > Happy to assign it to you. It can't be merged for Flink 1.19 anymore
> > though: feature freeze has started for that one, as announced
> > previously on the mailing list.
> >
> > Best regards,
> >
> > Martijn
> >
> > On Fri, Feb 16, 2024 at 4:32 PM David Radley 
> > wrote:
> > >
> > > Hi,
> > > I see https://issues.apache.org/jira/browse/FLINK-21672 has been open
> > for a while. We at IBM are building Flink with the latest v11  Semeru JDK (
> > https://developer.ibm.com/languages/java/semeru-runtimes/).
> > > Flink fails to build with skipTests. It fails because the
> > > sun.management.VMManagement class cannot be found at build time. I see
> > > some logic in the Flink code to tolerate the lack of com.sun packages,
> > > but not this sun package. We get:
> > >
> > >
> > > ERROR] Failed to execute goal
> > org.apache.maven.plugins:maven-compiler-plugin:3.8.0:compile
> > (default-compile) on project flink-local-recovery-and-allocation-test:
> > Compilation failure: Compilation failure:
> > >
> > > [ERROR]
> > /Users/davidradley/flinkapicurio/flink-end-to-end-tests/flink-local-recovery-and-allocation-test/src/main/java/org/apache/flink/streaming/tests/StickyAllocationAndLocalRecoveryTestJob.java:[418,23]
> > cannot find symbol
> > >
> > > [ERROR]   symbol:   class VMManagement
> > >
> > > [ERROR]   location: package sun.management
> > >
> > > [ERROR]
> > /Users/davidradley/flinkapicurio/flink-end-to-end-tests/flink-local-recovery-and-allocation-test/src/main/java/org/apache/flink/streaming/tests/StickyAllocationAndLocalRecoveryTestJob.java:[418,59]
> > cannot find symbol
> > >
> > > [ERROR]   symbol:   class VMManagement
> > >
> > > [ERROR]   location: package sun.management
> > >
> > >
> > > As per the link in the issue, sun. packages are not supported or part of
> > the JDK after java 1.7.
> > >
> > > I would like to have the priority raised on this Jira and would like to
> > > change the code so it builds successfully by removing the dependency on
> > > this old/unsupported sun package. I am happy to work on this, if you are
> > > willing to support this by assigning me the Jira and merging the fix;
> > > ideally we would like this to be in the next release - Flink 1.19.
> > >  Kind regards, David.
> > >
> > > Unless otherwise stated above:
> > >
> > > IBM United Kingdom Limited
> > > Registered in England and Wales with number 741598
> > > Registered office: PO Box 41, North Harbour, Portsmouth, Hants. PO6 3AU
> >


Re: FLINK-21672

2024-02-16 Thread Martijn Visser
Hi David,

Happy to assign it to you. It can't be merged for Flink 1.19 anymore
though: feature freeze has started for that one, as announced
previously on the mailing list.

Best regards,

Martijn

On Fri, Feb 16, 2024 at 4:32 PM David Radley  wrote:
>
> Hi,
> I see https://issues.apache.org/jira/browse/FLINK-21672 has been open for a 
> while. We at IBM are building Flink with the latest v11  Semeru JDK 
> (https://developer.ibm.com/languages/java/semeru-runtimes/).
> Flink fails to build with skipTests. It fails because the
> sun.management.VMManagement class cannot be found at build time. I see some
> logic in the Flink code to tolerate the lack of com.sun packages, but not
> this sun package. We get:
>
>
> ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-compiler-plugin:3.8.0:compile 
> (default-compile) on project flink-local-recovery-and-allocation-test: 
> Compilation failure: Compilation failure:
>
> [ERROR] 
> /Users/davidradley/flinkapicurio/flink-end-to-end-tests/flink-local-recovery-and-allocation-test/src/main/java/org/apache/flink/streaming/tests/StickyAllocationAndLocalRecoveryTestJob.java:[418,23]
>  cannot find symbol
>
> [ERROR]   symbol:   class VMManagement
>
> [ERROR]   location: package sun.management
>
> [ERROR] 
> /Users/davidradley/flinkapicurio/flink-end-to-end-tests/flink-local-recovery-and-allocation-test/src/main/java/org/apache/flink/streaming/tests/StickyAllocationAndLocalRecoveryTestJob.java:[418,59]
>  cannot find symbol
>
> [ERROR]   symbol:   class VMManagement
>
> [ERROR]   location: package sun.management
>
>
> As per the link in the issue, sun. packages are not supported or part of the 
> JDK after java 1.7.
>
> I would like to have the priority raised on this Jira and would like to
> change the code so it builds successfully by removing the dependency on this
> old/unsupported sun package. I am happy to work on this, if you are
> willing to support this by assigning me the Jira and merging the fix; ideally
> we would like this to be in the next release - Flink 1.19.
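>
> For what it's worth, a portable replacement sketch (assuming Java 9+, which
> this code path can rely on) would be:
>
> // Obtain the current JVM's PID without touching sun.management:
> // ProcessHandle is standard API since Java 9.
> long pid = ProcessHandle.current().pid();
>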
>  Kind regards, David.
>
> Unless otherwise stated above:
>
> IBM United Kingdom Limited
> Registered in England and Wales with number 741598
> Registered office: PO Box 41, North Harbour, Portsmouth, Hants. PO6 3AU


Re: [VOTE] Release flink-connector-parent, release candidate #1

2024-02-15 Thread Martijn Visser
Hi Etienne,

> I fixed the source release [1] as requested; it no longer contains the
> tools/release/shared directory.

I don't think that is the correct way: my understanding is that this
invalidates basically all the votes, because now the checked artifact
has changed. It was requested to file a ticket as a follow-up, not to
immediately change the binary. We can't have a lazy consensus on a
release topic with a changed artifact.

Best regards,

Martijn

On Thu, Feb 15, 2024 at 2:32 PM Etienne Chauchot  wrote:
>
> Hi,
>
> Considering that the code and artifact have not changed since the last vote
> (only the source release and tag have changed) and considering that there
> were already 3 binding votes for this RC1, I'll do this on a lazy
> consensus. I'll release if no one objects by tomorrow, as it will be
> 72h since the last change.
>
> Best
>
> Etienne
>
> On 13/02/2024 at 13:24, Etienne Chauchot wrote:
> >
> > Hi all,
> >
> > I fixed the source release [1] as requested; it no longer contains the
> > tools/release/shared directory.
> >
> > I found out why it contained that directory: the parent_pom
> > branch was referring to an incorrect sub-module mount point for the
> > release_utils branch (cf. FLINK-34364 [2]). Here is the fixing PR (3).
> >
> > And by the way, I noticed that all the connector source releases
> > contained an empty tools/releasing directory, because only
> > tools/releasing/shared is excluded in the source release script and
> > not the whole tools/releasing directory. It seems a bit messy to me, so
> > I think we should fix that in the release scripts later on for the next
> > connector releases.
> >
> > I also found out that the RC1 tag was pointing to my fork instead of
> > the main repo so I remade the tag (4)
> >
> > Apart of that, the code and artifact have not changed so I did not
> > invalidate the RC1.
> >
> > Please confirm that I can proceed to the release.
> >
> > Best
> >
> > Etienne
> >
> > [1]
> > https://dist.apache.org/repos/dist/dev/flink/flink-connector-parent-1.1.0-rc1/
> >
> > [2] https://issues.apache.org/jira/browse/FLINK-34364
> >
> > [3] https://github.com/apache/flink-connector-shared-utils/pull/36
> >
> > [4]
> > https://github.com/apache/flink-connector-shared-utils/releases/tag/v1.1.0-rc1
> >
> >
> > On 05/02/2024 at 12:36, Etienne Chauchot wrote:
> >>
> >> Hi,
> >>
> >> I just got back from vacations. I'll close the vote thread and
> >> proceed to the release later this week.
> >>
> >> Here is the ticket: https://issues.apache.org/jira/browse/FLINK-34364
> >>
> >> Best
> >>
> >> Etienne
> >>
> >> On 04/02/2024 at 05:06, Qingsheng Ren wrote:
> >>> +1 (binding)
> >>>
> >>> - Verified checksum and signature
> >>> - Verified pom content
> >>> - Built flink-connector-kafka from source with the parent pom in staging
> >>>
> >>> Best,
> >>> Qingsheng
> >>>
> >>> On Thu, Feb 1, 2024 at 11:19 PM Chesnay Schepler  
> >>> wrote:
> >>>
>  - checked source/maven pom contents
> 
>  Please file a ticket to exclude tools/release from the source release.
> 
>  +1 (binding)
> 
>  On 29/01/2024 15:59, Maximilian Michels wrote:
> > - Inspected the source for licenses and corresponding headers
> > - Checksums and signature OK
> >
> > +1 (binding)
> >
> > On Tue, Jan 23, 2024 at 4:08 PM Etienne Chauchot
>  wrote:
> >> Hi everyone,
> >>
> >> Please review and vote on the release candidate #1 for the version
> >> 1.1.0, as follows:
> >>
> >> [ ] +1, Approve the release
> >> [ ] -1, Do not approve the release (please provide specific comments)
> >>
> >> The complete staging area is available for your review, which includes:
> >> * JIRA release notes [1],
> >> * the official Apache source release to be deployed to dist.apache.org
> >> [2], which are signed with the key with fingerprint
> >> D1A76BA19D6294DD0033F6843A019F0B8DD163EA [3],
> >> * all artifacts to be deployed to the Maven Central Repository [4],
> >> * source code tag v1.1.0-rc1 [5],
> >> * website pull request listing the new release [6]
> >>
> >> * confluence wiki: connector parent upgrade to version 1.1.0 that will
> >> be validated after the artifact is released (there is no PR mechanism 
> >> on
> >> the wiki) [7]
> >>
> >> The vote will be open for at least 72 hours. It is adopted by majority
> >> approval, with at least 3 PMC affirmative votes.
> >>
> >> Thanks,
> >>
> >> Etienne
> >>
> >> [1]
> >>
>  https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522=12353442
> >> [2]
> >>
>  https://dist.apache.org/repos/dist/dev/flink/flink-connector-parent-1.1.0-rc1
> >> [3]https://dist.apache.org/repos/dist/release/flink/KEYS
> >> [4]
>  https://repository.apache.org/content/repositories/orgapacheflink-1698/
> >> [5]
> >>
>  

Re: [DISCUSS] Alternative way of posting FLIPs

2024-02-15 Thread Martijn Visser
Hi all,

Thanks for all your input. If there's no other opinions in the next
couple of days, then I'll update the documentation accordingly.

Best regards,

Martijn

On Mon, Feb 12, 2024 at 4:51 AM Benchao Li  wrote:
>
> +1 for option 1. Thanks Martijn for taking care of this, I've also
> filed an issue [1] for the infra a few days ago, but it hasn't been
> resolved yet.
>
> [1] https://issues.apache.org/jira/browse/INFRA-25451
>
> On Sat, Feb 10, 2024 at 2:33 PM, Jane Chan  wrote:
> >
> > +1 for option 1.
> >
> > Hi Yun,
> >
> > > I am concerned about whether we can view the history of design docs
> > >
> >
> > If someone has editor permissions, they can also view the editing history
> > of Google Docs[1]. If the change history of the design document is equally
> > important, would it be possible to selectively share the editor link with
> > the developers involved in the discussion for the FLIP?
> >
> > [1]
> > https://support.google.com/docs/answer/190843?sjid=8405245178977557481-NC
> >
> > Best,
> > Jane
> >
> > On Fri, Feb 9, 2024 at 5:33 PM Sergey Nuyanzin  wrote:
> >
> > > +1 for option 1
> > >
> > > On Fri, Feb 9, 2024 at 10:18 AM Jing Ge 
> > > wrote:
> > >
> > > > +1 for option 1. The Github discussions look more like an overlap to the
> > > ML
> > > > instead of a wiki tool like Confluence.
> > > >
> > > > Best regards,
> > > > Jing
> > > >
> > > > On Fri, Feb 9, 2024 at 10:08 AM Yun Tang  wrote:
> > > >
> > > > > For the first solution, I am concerned about whether we can view the
> > > > > history of design docs, which is supported by Confluence wiki and
> > > GitHub
> > > > > discussions. From my understanding, even the discussion history could
> > > let
> > > > > others know the evolution of this feature and the history of a design
> > > doc
> > > > > is also really important.
> > > > >
> > > > > Best
> > > > > Yun Tang
> > > > > 
> > > > > From: Piotr Nowojski 
> > > > > Sent: Thursday, February 8, 2024 14:17
> > > > > To: dev@flink.apache.org 
> > > > > Subject: Re: [DISCUSS] Alternative way of posting FLIPs
> > > > >
> > > > > +1 for the first option as well
> > > > >
> > > > > Best,
> > > > > Piotrek
> > > > >
> > > > > On Wed, Feb 7, 2024 at 4:48 PM Matthias Pohl 
> > > > > wrote:
> > > > >
> > > > > > +1 for option 1 since it's a reasonable temporary workaround
> > > > > >
> > > > > > Moving to GitHub discussions would either mean moving the current
> > > FLIP
> > > > > > collection or having the FLIPs in two locations. Both options do not
> > > > seem
> > > > > > to be optimal. Another concern I had was that GitHub Discussions
> > > > wouldn't
> > > > > > allow integrating diagrams that easily. But it looks like they
> > > support
> > > > > > Mermaid [1] for diagrams.
> > > > > >
> > > > > > One flaw of the GoogleDocs approach is, though, that we have to rely
> > > on
> > > > > > diagrams being provided as PNG/JPG/SVG rather than draw.io diagrams.
> > > > > > draw.io
> > > > > > is more tightly integrated with the Confluence wiki which allows
> > > > > > editing/updating diagrams in the wiki rather than using some 
> > > > > > external
> > > > > tool.
> > > > > > Google Draw is also not that convenient to use in my opinion. 
> > > > > > Anyway,
> > > > > > that's a minor issue, I guess.
> > > > > >
> > > > > > Matthias
> > > > > >
> > > > > > [1]
> > > > > >
> > > > > >
> > > > >
> > > >
> > > https://docs.github.com/en/get-started/writing-on-github/working-with-advanced-formatting/creating-diagrams
> > > > > >
> > > > > > On Wed, Feb 7, 2024 at 3:30 PM Lincoln Lee 
> > > > > wrote:
> > > > > >
> > > > > > > Thanks Martijn moving this forward!
> > > > > > >
> > > > > > > +1 for 

[jira] [Created] (FLINK-34435) Bump org.yaml:snakeyaml from 1.31 to 2.2 for flink-connector-elasticsearch

2024-02-13 Thread Martijn Visser (Jira)
Martijn Visser created FLINK-34435:
--

 Summary: Bump org.yaml:snakeyaml from 1.31 to 2.2 for 
flink-connector-elasticsearch
 Key: FLINK-34435
 URL: https://issues.apache.org/jira/browse/FLINK-34435
 Project: Flink
  Issue Type: Technical Debt
  Components: Connectors / ElasticSearch
Reporter: Martijn Visser
Assignee: Martijn Visser


https://github.com/apache/flink-connector-elasticsearch/pull/90





[jira] [Created] (FLINK-34432) Re-enable forkReuse for flink-table-planner

2024-02-13 Thread Martijn Visser (Jira)
Martijn Visser created FLINK-34432:
--

 Summary: Re-enable forkReuse for flink-table-planner
 Key: FLINK-34432
 URL: https://issues.apache.org/jira/browse/FLINK-34432
 Project: Flink
  Issue Type: Technical Debt
  Components: Table SQL / Client, Test Infrastructure, Tests
Affects Versions: 1.19.0, 1.18.2, 1.20.0
Reporter: Martijn Visser


With FLINK-18356 resolved, we should re-enable forkReuse for 
flink-table-planner to speed up the tests





Re: Issues running with Flink 1.20-SNAPSHOT

2024-02-09 Thread Martijn Visser
Hi,

There's never a published flink-dist snapshot; see
https://repository.apache.org/content/groups/snapshots/org/apache/flink/flink-dist/1.19-SNAPSHOT/
which also doesn't exist. This is most likely a local build problem:
are you using the Maven wrapper?

Best regards,

Martijn

On Fri, Feb 9, 2024 at 4:43 PM David Radley  wrote:
>
> Hello,
> I git cloned the latest Flink to a new folder. I emptied my .m2 folder for
> Flink, then ran:
>
> mvn clean
>
> and saw the error:
>
> [ERROR] Failed to execute goal on project flink-dist_2.12: Could not resolve 
> dependencies for project org.apache.flink:flink-dist_2.12:jar:1.20-SNAPSHOT: 
> The following artifacts could not be resolved: 
> org.apache.flink:flink-dist-scala_2.12:jar:1.20-SNAPSHOT, 
> org.apache.flink:flink-examples-streaming-state-machine:jar:1.20-SNAPSHOT: 
> Could not find artifact 
> org.apache.flink:flink-dist-scala_2.12:jar:1.20-SNAPSHOT in apache.snapshots 
> (https://repository.apache.org/snapshots) -> [Help 1]
>
> I look in 
> https://repository.apache.org/content/groups/snapshots/org/apache/flink/ and 
> do not see flink-dist-scala_2.12:jar
>
> My environment is
>
> mvn -version
> Apache Maven 3.8.6 (84538c9988a25aec085021c365c560670ad80f63)
> Maven home: /Applications/apache-maven-3.8.6
> Java version: 11.0.18, vendor: Eclipse Adoptium, runtime: 
> /Library/Java/JavaVirtualMachines/temurin-11.jdk/Contents/Home
> Default locale: en_GB, platform encoding: UTF-8
> OS name: "mac os x", version: "14.2.1", arch: "aarch64", family: "mac"
>
> It looks like the new snapshot is not completely there. I was expecting to 
> see a folder for flink-dist-scala_2.12:jar
> similar to 
> https://repository.apache.org/content/groups/snapshots/org/apache/flink/flink-core/1.20-SNAPSHOT/
>
>Kind regards, David.
>
>
>
> Unless otherwise stated above:
>
> IBM United Kingdom Limited
> Registered in England and Wales with number 741598
> Registered office: PO Box 41, North Harbour, Portsmouth, Hants. PO6 3AU


[jira] [Created] (FLINK-34415) Move away from Kafka-Zookeeper based tests in favor of Kafka-KRaft

2024-02-08 Thread Martijn Visser (Jira)
Martijn Visser created FLINK-34415:
--

 Summary: Move away from Kafka-Zookeeper based tests in favor of 
Kafka-KRaft
 Key: FLINK-34415
 URL: https://issues.apache.org/jira/browse/FLINK-34415
 Project: Flink
  Issue Type: Technical Debt
  Components: Connectors / Kafka
Reporter: Martijn Visser


The current Flink Kafka connector still uses Zookeeper for Kafka-based testing. 
Since Kafka 3.4, KRaft has been marked as production ready [1]. In order to 
reduce tech debt, we should remove all the dependencies on Zookeeper and only 
use KRaft for the Flink Kafka connector. 

[1] 
https://cwiki.apache.org/confluence/display/KAFKA/KIP-833%3A+Mark+KRaft+as+Production+Ready
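
A hedged sketch of what the KRaft-based test setup could look like (assuming
the Testcontainers Kafka module; the image tag is illustrative):

import org.testcontainers.containers.KafkaContainer;
import org.testcontainers.utility.DockerImageName;

// Single-node Kafka broker in KRaft mode: no Zookeeper container needed.
KafkaContainer kafka =
        new KafkaContainer(DockerImageName.parse("confluentinc/cp-kafka:7.4.0"))
                .withKraft();
kafka.start();
String bootstrapServers = kafka.getBootstrapServers();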






Re: [VOTE] Release flink-connector-mongodb v1.1.0, release candidate #2

2024-02-08 Thread Martijn Visser
+1 (binding)

- Validated hashes
- Verified signature
- Verified that no binaries exist in the source archive
- Built the source with Maven
- Verified licenses
- Verified web PRs

On Wed, Jan 31, 2024 at 10:41 AM Danny Cranmer  wrote:
>
> Thanks for driving this Leonard!
>
> +1 (binding)
>
> - Release notes look ok
> - Signatures/checksums of source archive are good
> - Verified there are no binaries in the source archive
> - Built sources locally successfully
> - v1.0.0-rc2 tag exists in github
> - Tag build passing on CI [1]
> - Contents of Maven dist look complete
> - Verified signatures/checksums of binary in maven dist is correct
> - Verified NOTICE files and bundled dependencies
>
> Thanks,
> Danny
>
> [1]
> https://github.com/apache/flink-connector-mongodb/actions/runs/7709467379
>
> On Wed, Jan 31, 2024 at 7:54 AM gongzhongqiang 
> wrote:
>
> > +1(non-binding)
> >
> > - Signatures and Checksums are good
> > - No binaries in the source archive
> > - Tag is present
> > - Build successful with jdk8 on ubuntu 22.04
> >
> >
> > On Tue, Jan 30, 2024 at 6:23 PM Leonard Xu  wrote:
> >
> > > Hey all,
> > >
> > > Please help review and vote on the release candidate #2 for the version
> > > v1.1.0 of the
> > > Apache Flink MongoDB Connector as follows:
> > >
> > > [ ] +1, Approve the release
> > > [ ] -1, Do not approve the release (please provide specific comments)
> > >
> > > The complete staging area is available for your review, which includes:
> > > * JIRA release notes [1],
> > > * The official Apache source release to be deployed to dist.apache.org
> > > [2],
> > > which are signed with the key with fingerprint
> > > 5B2F6608732389AEB67331F5B197E1F1108998AD [3],
> > > * All artifacts to be deployed to the Maven Central Repository [4],
> > > * Source code tag v1.1.0-rc2 [5],
> > > * Website pull request listing the new release [6].
> > >
> > > The vote will be open for at least 72 hours. It is adopted by majority
> > > approval, with at least 3 PMC affirmative votes.
> > >
> > >
> > > Best,
> > > Leonard
> > > [1]
> > >
> > https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522=12353483
> > > [2]
> > >
> > https://dist.apache.org/repos/dist/dev/flink/flink-connector-mongodb-1.1.0-rc2/
> > > [3] https://dist.apache.org/repos/dist/release/flink/KEYS
> > > [4]
> > > https://repository.apache.org/content/repositories/orgapacheflink-1705/
> > > [5] https://github.com/apache/flink-connector-mongodb/tree/v1.1.0-rc2
> > > [6] https://github.com/apache/flink-web/pull/719
> >


[jira] [Created] (FLINK-34413) Drop support for HBase v1

2024-02-08 Thread Martijn Visser (Jira)
Martijn Visser created FLINK-34413:
--

 Summary: Drop support for HBase v1
 Key: FLINK-34413
 URL: https://issues.apache.org/jira/browse/FLINK-34413
 Project: Flink
  Issue Type: Technical Debt
  Components: Connectors / HBase
Reporter: Martijn Visser


As discussed in 
https://lists.apache.org/thread/6663052dmfnqm8wvqoxx9k8jwcshg1zq 





Re: [DISCUSS] Drop support for HBase v1

2024-02-08 Thread Martijn Visser
Hi all,

I will open a ticket to drop support for HBase v1. If there are no objections 
brought forward next week, we'll move forward with dropping support for HBase 
v1.

Best regards,

Martijn

On 2024/02/01 02:31:00 jialiang tan wrote:
> Hi Martijn, Ferenc
> Thanks all for driving this. As Ferenc said, HBase 1.x is dead, so on the
> way forward it should be safe to drop it. Same view as mine. So +1 for this.
> 
> Best!
> tanjialiang
> 
> 
>  Replied Message 
> From Ferenc Csaky 
> Date 1/30/2024 22:14
> To  
> Subject Re: [DISCUSS] Drop support for HBase v1
> Hi Martijn,
> 
> thanks for starting the discussion. Let me link the older discussion
> regarding the same topic [1]. My opinion did not change, so +1.
> 
> BR,
> Ferenc
> 
> [1] https://lists.apache.org/thread/x7l2gj8g93r4v6x6953cyt6jrs8c4r1b
> 
> 
> 
> 
> On Monday, January 29th, 2024 at 09:37, Martijn Visser <
> martijnvis...@apache.org> wrote:
> 
> 
> 
> Hi all,
> 
> While working on adding support for Flink 1.19 for HBase, we've run into a
> dependency convergence issue because HBase v1 relies on a really old
> version of Guava.
> 
> HBase v2 has been made available since May 2018, and there have been no new
> releases of HBase v1 since August 2022.
> 
> I would like to propose that the Flink HBase connector drops support for
> HBase v1, and will only continue HBase v2 in the future. I don't think this
> requires a full FLIP and vote, but I do want to start a discussion thread
> for this.
> 
> Best regards,
> 
> Martijn
> 


Re: [DISCUSS] FLIP 411: Chaining-agnostic Operator ID generation for improved state compatibility on parallelism change

2024-02-08 Thread Martijn Visser
Hi,

> However, compiled plan is still too complicated for Flink newbies from my 
> point of view.

I don't think that the compiled plan was ever positioned to be a
simple solution. If you want to have an easy approach, we have a
declarative solution in place with SQL and/or the Table API imho.

Best regards,

Martijn

On Thu, Feb 8, 2024 at 9:14 AM Zhanghao Chen  wrote:
>
> Hi Piotr,
>
> Thanks for the comment. I agree that the compiled plan is the ultimate tool
> for Flink SQL if one wants to make any changes to the
> query later, and this FLIP indeed is not essential in this sense. However, 
> compiled plan is still too complicated for Flink newbies from my point of 
> view. As I mentioned previously, our internal platform provides a visualized 
> tool for editing the compiled plan but most users still find it complex. 
> Therefore, the FLIP can still benefit users with better useability and the 
> proposed changes are actually quite lightweight (just copying a new hasher 
> with 2 lines deleted + extending the OperatorIdPair data structure) without 
> much extra effort.
>
> Best,
> Zhanghao Chen
> 
> From: Piotr Nowojski 
> Sent: Thursday, February 8, 2024 14:50
> To: Zhanghao Chen 
> Cc: Chesnay Schepler ; dev@flink.apache.org 
> ; Yu Chen 
> Subject: Re: [DISCUSS] FLIP 411: Chaining-agnostic Operator ID generation for 
> improved state compatibility on parallelism change
>
> Hey
>
> > AFAIK, there's no way to set UIDs for a SQL job,
>
> AFAIK you can't set UID manually, but Flink SQL generates a compiled plan
> of a query with embedded UIDs. As I understand it, using a compiled plan is
> the preferred (only?) way for Flink SQL if one wants to make any changes to
> the query later on or support Flink's runtime upgrades, without losing the
> state.
>
> If that's the case, what would be the usefulness of this FLIP? Only for
> DataStream API for users that didn't know that they should have manually
> configured UIDs? But they have the workaround to actually post-factum add
> the UIDs anyway, right? So maybe indeed Chesnay is right that this FLIP is
> not that helpful/worth the extra effort?
>
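> For completeness, that workaround is a one-liner per operator in the
> DataStream API (a sketch; `source` is assumed to be an existing
> DataStream<String>):
>
> // Pin a stable operator ID so state stays addressable across topology changes.
> DataStream<String> normalized =
>         source.map(String::toLowerCase)
>                 .uid("normalize-map") // explicit, chaining-independent ID
>                 .name("Normalize");
>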
> Best,
> Piotrek
>
> On Thu, Feb 8, 2024 at 3:55 AM Zhanghao Chen 
> wrote:
>
> > Hi Chesnay,
> >
> > AFAIK, there's no way to set UIDs for a SQL job; it'll be great if you can
> > share how you allow UID setting for SQL jobs. We've explored providing a
> > visualized DAG editor for SQL jobs that allows UID setting on our internal
> > platform, but most users found it too complicated to use. Another
> > possible way is to utilize SQL hints, but that's complicated as well. From
> > our experience, many SQL users are not familiar with Flink, what they want
> > is an experience similar to writing a normal SQL in MySQL, without
> > involving much extra concepts like the DAG and the UID. In fact, some
> > DataStream and PyFlink users also share the same concern.
> >
> > On the other hand, some performance tuning is inevitable for
> > long-running jobs in production, and parallelism tuning is among the most
> > common techniques. FLIP-367 [1] and FLIP-146 [2] allow users to tune the
> > parallelism of sources and sinks, and both are well-received in the
> > discussion threads. Users definitely don't want to lose state after
> > parallelism tuning, which is highly risky at present.
> >
> > Putting these together, I think the FLIP has a high value in production.
> > Through offline discussions, I learnt that multiple companies have developed
> > or are trying to develop similar hasher changes in their internal distributions,
> > including ByteDance, Xiaohongshu, and Bilibili. It'll be great if we can
> > improve the SQL experience for all community users as well, WDYT?
> >
> > Best,
> > Zhanghao Chen
> > --
> > *From:* Chesnay Schepler 
> > *Sent:* Thursday, February 8, 2024 2:01
> > *To:* dev@flink.apache.org ; Zhanghao Chen <
> > zhanghao.c...@outlook.com>; Piotr Nowojski ; Yu
> > Chen 
> > *Subject:* Re: [DISCUSS] FLIP 411: Chaining-agnostic Operator ID
> > generation for improved state compatibility on parallelism change
> >
> > The FLIP is a bit weird to be honest. It only applies in cases where
> > users haven't set uids, but that goes against best-practices and as far
> > as I'm told SQL also sets UIDs everywhere.
> >
> > I'm wondering if this is really worth the effort.
> >
> > On 07/02/2024 10:23, Zhanghao Chen wrote:
> > > After offline discussion with @Yu Chen, I've updated the FLIP [1] to include a design
> > that allows for compatible hasher upgrade by adding StreamGraphHasherV2 to
> > the legacy hasher list, which is actually a revival of the idea from
> > FLINK-5290 [2] when StreamGraphHasherV2 was introduced in Flink 1.2. We're
> > targeting to make V3 the default hasher in Flink 1.20 given that
> > state-compatibility is no longer an issue. Take a review when you have a
> > chance, and I'd like to especially thank @Yu Chen<
> > 

Re: jira permission request

2024-02-07 Thread Martijn Visser
Hi,

In order to contribute to Apache Flink, please ping the author of the
Jira ticket that you would like to work on, so it can be assigned to
you. There are no other permissions required.

Best regards,

Martijn

On Tue, Feb 6, 2024 at 2:25 AM 李游  wrote:
>
> HI,
> I want to contribute to Apache Flink.
> Would you please give me the contributor permission?
> My jira id is lxliyou.
> Thanks.


Re: Confluence access request

2024-02-07 Thread Martijn Visser
Hi,

Sorry for the late reply, but this has recently been disabled by
the ASF. There's an open discussion thread at
https://lists.apache.org/thread/rkpvlnwj9gv1hvx1dyklx6k88qpnvk2t on
how to deal with this.

Best regards,

Martijn

On Tue, Jan 30, 2024 at 10:16 AM tanjialiang  wrote:
>
> Hi, devs! I want to prepare a FLIP and start a discussion on the dev mailing 
> list, but I find I don't have access. Can someone give me access to
> Confluence?
>
>
> My Confluence username: tanjialiang
>
>
> Best regards,
> tanjialiang


Re: Confluence access

2024-02-07 Thread Martijn Visser
Hi,

Sorry for the late reply, but this has recently been disabled by
the ASF. There's an open discussion thread at
https://lists.apache.org/thread/rkpvlnwj9gv1hvx1dyklx6k88qpnvk2t on
how to deal with this.

Best regards,

Martijn

On Tue, Jan 23, 2024 at 4:03 AM jufang he  wrote:
>
> Hi devs! I want to suggest a FLIP, but I couldn't find the Confluence
> registration portal.
>
> Can you help me create an account and add editing permission?


[DISCUSS] Alternative way of posting FLIPs

2024-02-07 Thread Martijn Visser
Hi all,

ASF Infra has confirmed to me that only ASF committers can access the
ASF Confluence site since a recent change. One of the results of this
decision is that users can't sign up and access Confluence, so only
committers+ can create FLIPs.

ASF Infra hopes to improve this situation when they move to the Cloud
shortly (as in: some months), but they haven't committed to an actual
date. The idea would be that we find a temporary solution until anyone
can request access to Confluence.

There are a couple of ways we could resolve this situation:
1. Contributors create a Google Doc and make that view-only, and post
that Google Doc to the mailing list for a discussion thread. When the
discussions have been resolved, the contributor ask on the Dev mailing
list to a committer/PMC to copy the contents from the Google Doc, and
create a FLIP number for them. The contributor can then use that FLIP
to actually have a VOTE thread.
2. We could consider moving FLIPs to "Discussions" on Github, like
Airflow does at https://github.com/apache/airflow/discussions
3. Perhaps someone else has another good idea.

Looking forward to your thoughts.

Best regards,

Martijn


[ANNOUNCE] Apache flink-connector-kafka v3.1.0 released

2024-02-07 Thread Martijn Visser
The Apache Flink community is very happy to announce the release of
Apache flink-connector-kafka v3.1.0. This release is compatible with
Apache Flink 1.17 and 1.18.

Apache Flink® is an open-source stream processing framework for
distributed, high-performing, always-available, and accurate data
streaming applications.

The release is available for download at:
https://flink.apache.org/downloads.html

The full release notes are available in Jira:
https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12353135

We would like to thank all contributors of the Apache Flink community
who made this release possible!

Regards,
Release Manager


[RESULT][VOTE] Release flink-connector-kafka v3.1.0, release candidate #1

2024-02-07 Thread Martijn Visser
I'm happy to announce that we have unanimously approved this release.

There are 5 approving votes, 3 of which are binding:
* Hang Ruan (non-binding)
* Mason Chen (non-binding)
* Qingsheng Ren (binding)
* Maximilian Michels (binding)
* Martijn Visser (binding)

There are no disapproving votes.

I already worked on completing the release yesterday, and I'll
announce it shortly after this email.

Best regards,

Martijn


Re: [VOTE] Release flink-connector-kafka v3.1.0, release candidate #1

2024-02-06 Thread Martijn Visser
Thanks all, this vote is now closed, I will announce the results on a
separate thread!

On Tue, Feb 6, 2024 at 4:48 PM Martijn Visser 
wrote:

> +1 (binding)
>
> - Validated hashes
> - Verified signature
> - Verified that no binaries exist in the source archive
> - Build the source with Maven
> - Verified licenses
> - Verified web PRs
>
> On Tue, Feb 6, 2024 at 12:09 PM Maximilian Michels  wrote:
>
>> +1 (binding)
>>
>> - Inspected source release (checked license, headers, no binaries)
>> - Verified checksums and signature
>>
>> Cheers,
>> Max
>>
>> On Sun, Feb 4, 2024 at 5:41 AM Qingsheng Ren  wrote:
>> >
>> > Thanks for driving this, Martijn!
>> >
>> > +1 (binding)
>> >
>> > - Verified checksum and signature
>> > - Verified no binaries in source
>> > - Built from source with Java 8
>> > - Reviewed web PRs
>> > - Run a Flink SQL job reading and writing Kafka on 1.18.1 cluster.
>> Results
>> > are as expected.
>> >
>> > Best,
>> > Qingsheng
>> >
>> > On Tue, Jan 30, 2024 at 3:50 PM Mason Chen 
>> wrote:
>> >
>> > > +1 (non-binding)
>> > >
>> > > * Verified LICENSE and NOTICE files (this RC has a NOTICE file that
>> points
>> > > to 2023 that has since been updated on the main branch by Hang)
>> > > * Verified hashes and signatures
>> > > * Verified no binaries
>> > > * Verified poms point to 3.1.0
>> > > * Reviewed web PR
>> > > * Built from source
>> > > * Verified git tag
>> > >
>> > > In the same vein as the web PR, do we want to prepare the PR to
>> update the
>> > > shortcode in the connector docs now [1]? Same for the Chinese
>> version. I
>> > > wonder if that should be included in the connector release
>> instructions.
>> > >
>> > > [1]
>> > >
>> > >
>> https://github.com/apache/flink-connector-kafka/blob/d89a082180232bb79e3c764228c4e7dbb9eb6b8b/docs/content/docs/connectors/datastream/kafka.md#L39
>> > >
>> > > Best,
>> > > Mason
>> > >
>> > > On Sun, Jan 28, 2024 at 11:41 PM Hang Ruan 
>> wrote:
>> > >
>> > > > +1 (non-binding)
>> > > >
>> > > > - Validated checksum hash
>> > > > - Verified signature
>> > > > - Verified that no binaries exist in the source archive
>> > > > - Build the source with Maven and jdk11
>> > > > - Verified web PR
>> > > > - Check that the jar is built by jdk8
>> > > >
>> > > > Best,
>> > > > Hang
>> > > >
>> > > > > Martijn Visser wrote on Fri, Jan 26, 2024 at 21:05:
>> > > >
>> > > > > Hi everyone,
>> > > > > Please review and vote on the release candidate #1 for the Flink
>> Kafka
>> > > > > connector version 3.1.0, as follows:
>> > > > > [ ] +1, Approve the release
>> > > > > [ ] -1, Do not approve the release (please provide specific
>> comments)
>> > > > >
>> > > > > This release is compatible with Flink 1.17.* and Flink 1.18.*
>> > > > >
>> > > > > The complete staging area is available for your review, which
>> includes:
>> > > > > * JIRA release notes [1],
>> > > > > * the official Apache source release to be deployed to
>> dist.apache.org
>> > > > > [2],
>> > > > > which are signed with the key with fingerprint
>> > > > > A5F3BCE4CBE993573EC5966A65321B8382B219AF [3],
>> > > > > * all artifacts to be deployed to the Maven Central Repository
>> [4],
>> > > > > * source code tag v3.1.0-rc1 [5],
>> > > > > * website pull request listing the new release [6].
>> > > > >
>> > > > > The vote will be open for at least 72 hours. It is adopted by
>> majority
>> > > > > approval, with at least 3 PMC affirmative votes.
>> > > > >
>> > > > > Thanks,
>> > > > > Release Manager
>> > > > >
>> > > > > [1]
>> > > > >
>> > > > >
>> > > >
>> > >
>> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12353135
>> > > > > [2]
>> > > > >
>> > > > >
>> > > >
>> > >
>> https://dist.apache.org/repos/dist/dev/flink/flink-connector-kafka-3.1.0-rc1
>> > > > > [3] https://dist.apache.org/repos/dist/release/flink/KEYS
>> > > > > [4]
>> > > >
>> https://repository.apache.org/content/repositories/orgapacheflink-1700
>> > > > > [5]
>> > > > >
>> > >
>> https://github.com/apache/flink-connector-kafka/releases/tag/v3.1.0-rc1
>> > > > > [6] https://github.com/apache/flink-web/pull/718
>> > > > >
>> > > >
>> > >
>>
>


Re: [VOTE] Release flink-connector-kafka v3.1.0, release candidate #1

2024-02-06 Thread Martijn Visser
+1 (binding)

- Validated hashes
- Verified signature
- Verified that no binaries exist in the source archive
- Build the source with Maven
- Verified licenses
- Verified web PRs

On Tue, Feb 6, 2024 at 12:09 PM Maximilian Michels  wrote:

> +1 (binding)
>
> - Inspected source release (checked license, headers, no binaries)
> - Verified checksums and signature
>
> Cheers,
> Max
>
> On Sun, Feb 4, 2024 at 5:41 AM Qingsheng Ren  wrote:
> >
> > Thanks for driving this, Martijn!
> >
> > +1 (binding)
> >
> > - Verified checksum and signature
> > - Verified no binaries in source
> > - Built from source with Java 8
> > - Reviewed web PRs
> > - Run a Flink SQL job reading and writing Kafka on 1.18.1 cluster.
> Results
> > are as expected.
> >
> > Best,
> > Qingsheng
> >
> > On Tue, Jan 30, 2024 at 3:50 PM Mason Chen 
> wrote:
> >
> > > +1 (non-binding)
> > >
> > > * Verified LICENSE and NOTICE files (this RC has a NOTICE file that
> points
> > > to 2023 that has since been updated on the main branch by Hang)
> > > * Verified hashes and signatures
> > > * Verified no binaries
> > > * Verified poms point to 3.1.0
> > > * Reviewed web PR
> > > * Built from source
> > > * Verified git tag
> > >
> > > In the same vein as the web PR, do we want to prepare the PR to update
> the
> > > shortcode in the connector docs now [1]? Same for the Chinese version.
> I
> > > wonder if that should be included in the connector release
> instructions.
> > >
> > > [1]
> > >
> > >
> https://github.com/apache/flink-connector-kafka/blob/d89a082180232bb79e3c764228c4e7dbb9eb6b8b/docs/content/docs/connectors/datastream/kafka.md#L39
> > >
> > > Best,
> > > Mason
> > >
> > > On Sun, Jan 28, 2024 at 11:41 PM Hang Ruan 
> wrote:
> > >
> > > > +1 (non-binding)
> > > >
> > > > - Validated checksum hash
> > > > - Verified signature
> > > > - Verified that no binaries exist in the source archive
> > > > - Build the source with Maven and jdk11
> > > > - Verified web PR
> > > > - Check that the jar is built by jdk8
> > > >
> > > > Best,
> > > > Hang
> > > >
> > > > Martijn Visser wrote on Fri, Jan 26, 2024 at 21:05:
> > > >
> > > > > Hi everyone,
> > > > > Please review and vote on the release candidate #1 for the Flink
> Kafka
> > > > > connector version 3.1.0, as follows:
> > > > > [ ] +1, Approve the release
> > > > > [ ] -1, Do not approve the release (please provide specific
> comments)
> > > > >
> > > > > This release is compatible with Flink 1.17.* and Flink 1.18.*
> > > > >
> > > > > The complete staging area is available for your review, which
> includes:
> > > > > * JIRA release notes [1],
> > > > > * the official Apache source release to be deployed to
> dist.apache.org
> > > > > [2],
> > > > > which are signed with the key with fingerprint
> > > > > A5F3BCE4CBE993573EC5966A65321B8382B219AF [3],
> > > > > * all artifacts to be deployed to the Maven Central Repository [4],
> > > > > * source code tag v3.1.0-rc1 [5],
> > > > > * website pull request listing the new release [6].
> > > > >
> > > > > The vote will be open for at least 72 hours. It is adopted by
> majority
> > > > > approval, with at least 3 PMC affirmative votes.
> > > > >
> > > > > Thanks,
> > > > > Release Manager
> > > > >
> > > > > [1]
> > > > >
> > > > >
> > > >
> > >
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12353135
> > > > > [2]
> > > > >
> > > > >
> > > >
> > >
> https://dist.apache.org/repos/dist/dev/flink/flink-connector-kafka-3.1.0-rc1
> > > > > [3] https://dist.apache.org/repos/dist/release/flink/KEYS
> > > > > [4]
> > > >
> https://repository.apache.org/content/repositories/orgapacheflink-1700
> > > > > [5]
> > > > >
> > >
> https://github.com/apache/flink-connector-kafka/releases/tag/v3.1.0-rc1
> > > > > [6] https://github.com/apache/flink-web/pull/718
> > > > >
> > > >
> > >
>


[jira] [Created] (FLINK-34368) Update GCS filesystems to latest available version v3.0

2024-02-05 Thread Martijn Visser (Jira)
Martijn Visser created FLINK-34368:
--

 Summary: Update GCS filesystems to latest available version v3.0
 Key: FLINK-34368
 URL: https://issues.apache.org/jira/browse/FLINK-34368
 Project: Flink
  Issue Type: Technical Debt
  Components: FileSystems
Reporter: Martijn Visser
Assignee: Martijn Visser


Update to 
https://github.com/GoogleCloudDataproc/hadoop-connectors/releases/tag/v3.0.0



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-34366) Add support to group rows by column ordinals

2024-02-05 Thread Martijn Visser (Jira)
Martijn Visser created FLINK-34366:
--

 Summary: Add support to group rows by column ordinals
 Key: FLINK-34366
 URL: https://issues.apache.org/jira/browse/FLINK-34366
 Project: Flink
  Issue Type: New Feature
  Components: Table SQL / API
Reporter: Martijn Visser


Reference: BigQuery 
https://cloud.google.com/bigquery/docs/reference/standard-sql/query-syntax#group_by_col_ordinals

The GROUP BY clause can refer to expression names in the SELECT list. The GROUP 
BY clause also allows ordinal references to expressions in the SELECT list, 
using integer values. 1 refers to the first value in the SELECT list, 2 the 
second, and so forth. The value list can combine ordinals and value names. The 
following queries are equivalent:

{code:sql}
WITH PlayerStats AS (
  SELECT 'Adams' as LastName, 'Noam' as FirstName, 3 as PointsScored UNION ALL
  SELECT 'Buchanan', 'Jie', 0 UNION ALL
  SELECT 'Coolidge', 'Kiran', 1 UNION ALL
  SELECT 'Adams', 'Noam', 4 UNION ALL
  SELECT 'Buchanan', 'Jie', 13)
SELECT SUM(PointsScored) AS total_points, LastName, FirstName
FROM PlayerStats
GROUP BY LastName, FirstName;

/*--------------+----------+-----------+
 | total_points | LastName | FirstName |
 +--------------+----------+-----------+
 | 7            | Adams    | Noam      |
 | 13           | Buchanan | Jie       |
 | 1            | Coolidge | Kiran     |
 +--------------+----------+-----------*/
{code}

{code:sql}
WITH PlayerStats AS (
  SELECT 'Adams' as LastName, 'Noam' as FirstName, 3 as PointsScored UNION ALL
  SELECT 'Buchanan', 'Jie', 0 UNION ALL
  SELECT 'Coolidge', 'Kiran', 1 UNION ALL
  SELECT 'Adams', 'Noam', 4 UNION ALL
  SELECT 'Buchanan', 'Jie', 13)
SELECT SUM(PointsScored) AS total_points, LastName, FirstName
FROM PlayerStats
GROUP BY 2, 3;

/*--------------+----------+-----------+
 | total_points | LastName | FirstName |
 +--------------+----------+-----------+
 | 7            | Adams    | Noam      |
 | 13           | Buchanan | Jie       |
 | 1            | Coolidge | Kiran     |
 +--------------+----------+-----------*/
{code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-34358) flink-connector-jdbc nightly fails with "Expecting code to raise a throwable"

2024-02-04 Thread Martijn Visser (Jira)
Martijn Visser created FLINK-34358:
--

 Summary: flink-connector-jdbc nightly fails with "Expecting code 
to raise a throwable"
 Key: FLINK-34358
 URL: https://issues.apache.org/jira/browse/FLINK-34358
 Project: Flink
  Issue Type: Bug
  Components: Connectors / JDBC
Reporter: Martijn Visser


https://github.com/apache/flink-connector-jdbc/actions/runs/7770283211/job/21190280602#step:14:346

{code:java}
[INFO] Running 
org.apache.flink.connector.jdbc.dialect.cratedb.CrateDBDialectTypeTest
Error:  Tests run: 19, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 0.554 
s <<< FAILURE! - in 
org.apache.flink.connector.jdbc.dialect.cratedb.CrateDBDialectTypeTest
Error:  
org.apache.flink.connector.jdbc.dialect.cratedb.CrateDBDialectTypeTest.testDataTypeValidate(TestItem)[19]
  Time elapsed: 0.018 s  <<< FAILURE!
java.lang.AssertionError: 

Expecting code to raise a throwable.
[INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.002 s 
- in org.apache.flink.connector.jdbc.catalog.JdbcCatalogUtilsTest
[INFO] Running org.apache.flink.architecture.ProductionCodeArchitectureTest
[INFO] Running org.apache.flink.architecture.ProductionCodeArchitectureBase
[INFO] Running org.apache.flink.architecture.rules.ApiAnnotationRules
[INFO] Tests run: 20, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.155 s 
- in org.apache.flink.connector.jdbc.dialect.JdbcDialectTypeTest
[INFO] Running org.apache.flink.architecture.TestCodeArchitectureTest
[INFO] Running org.apache.flink.architecture.TestCodeArchitectureTestBase
[INFO] Running org.apache.flink.architecture.rules.ITCaseRules
[INFO] Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.109 s 
- in org.apache.flink.architecture.rules.ApiAnnotationRules
[INFO] Running org.apache.flink.architecture.rules.TableApiRules
[INFO] Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.024 s 
- in org.apache.flink.architecture.rules.TableApiRules
[INFO] Running org.apache.flink.architecture.rules.ConnectorRules
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.31 s - 
in org.apache.flink.architecture.rules.ConnectorRules
[INFO] Tests run: 0, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.464 s 
- in org.apache.flink.architecture.ProductionCodeArchitectureBase
[INFO] Tests run: 0, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.468 s 
- in org.apache.flink.architecture.ProductionCodeArchitectureTest
[INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.758 s 
- in org.apache.flink.architecture.rules.ITCaseRules
[INFO] Tests run: 0, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.761 s 
- in org.apache.flink.architecture.TestCodeArchitectureTestBase
[INFO] Tests run: 0, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.775 s 
- in org.apache.flink.architecture.TestCodeArchitectureTest
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 110.38 s 
- in 
org.apache.flink.connector.jdbc.databases.oracle.xa.OracleExactlyOnceSinkE2eTest
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 172.591 
s - in 
org.apache.flink.connector.jdbc.databases.db2.xa.Db2ExactlyOnceSinkE2eTest
[INFO] 
[INFO] Results:
[INFO] 
Error:  Failures: 
Error:PostgresDialectTypeTest>JdbcDialectTypeTest.testDataTypeValidate:102 
Expecting code to raise a throwable.
Error:TrinoDialectTypeTest>JdbcDialectTypeTest.testDataTypeValidate:102 
Expecting code to raise a throwable.
Error:CrateDBDialectTypeTest>JdbcDialectTypeTest.testDataTypeValidate:102 
Expecting code to raise a throwable.
[INFO] 
Error:  Tests run: 394, Failures: 3, Errors: 0, Skipped: 1
{code}
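
For context, "Expecting code to raise a throwable." is AssertJ's failure
message when code wrapped in assertThatThrownBy (or a similar assertion)
completes without throwing. A minimal self-contained illustration follows;
the validation method is a hypothetical stand-in for the dialect check under
test, not the actual connector code:

{code:java}
import static org.assertj.core.api.Assertions.assertThatThrownBy;

public class ThrowableAssertionSketch {
    public static void main(String[] args) {
        // Passes because the lambda throws; if validateDataType() returned
        // normally, AssertJ would fail with exactly the message seen above.
        assertThatThrownBy(() -> validateDataType("UNSUPPORTED_TYPE"))
                .isInstanceOf(IllegalArgumentException.class);
    }

    // Hypothetical stand-in for the dialect type validation under test.
    static void validateDataType(String type) {
        throw new IllegalArgumentException("Unsupported type: " + type);
    }
}
{code}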



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: Security fixes for Flink 1.18 (flink-shaded)

2024-02-02 Thread Martijn Visser
To add to this: we can't upgrade to flink-shaded 18.0, since we've just
reverted that for Flink 1.19 because of the performance regression. We will
need a new flink-shaded version to deal with these performance regressions.

On Fri, Feb 2, 2024 at 9:39 AM Martijn Visser 
wrote:

> Hi Hong,
>
> I do have objections: upgrading Flink-Shaded in a patch version is
> something that we should not take lightly, since it involves components
> that are used in the core functionality of Flink. We've seen in the past
> that changes in Flink Shaded have an impact on stability and performance. I
> would like to see how Flink is affected by these CVEs, since in almost all
> cases these are false-positives for Flink.
>
> Best regards,
>
> Martijn
>
> On Thu, Feb 1, 2024 at 4:22 PM Hong Liang  wrote:
>
>> Hi all,
>>
>> Recently, we detected some active CVEs on the flink-shaded-guava and
>> flink-shaded-zookeeper package used in Flink 1.18. Since Flink 1.18 is
>> still in support for security fixes, we should consider fixing this.
>> However, since the vulnerable package is coming from flink-shaded, I
>> wanted
>> to check if there are thoughts from the community around releasing a patch
>> version of flink-shaded.
>>
>> Problem:
>> Flink 1.18 uses guava 31.1-jre from flink-shaded-guava 17.0, which is
>> affected by CVE-2023-2976 (HIGH) [1] and CVE-2020-8908 (LOW) [2]. Flink
>> 1.18 also uses zookeeper 3.7.1, which is affected by CVE-2023-44981
>> (CRITICAL) [3].
>>
>> To fix, I can think of two options:
>> Option 1:
>> Upgrade Flink 1.18 to use flink.shaded.version 18.0. This is easiest as we
>> can backport the change for Flink 1.19 directly (after the performance
>> regression is addressed) [4]. However, there are also upgrades to jackson,
>> asm and netty in flink.shaded.version 18.0.
>>
>> Option 2:
>> Release flink.shaded.version 17.1, with just a bump in zookeeper and guava
>> versions. Then, upgrade Flink 1.18 to use this new flink.shaded.version
>> 17.1. This is harder, but keeps the changes contained and minimal.
>>
>> Given the version bump is on flink-shaded, which is relocated to keep the
>> usage of libraries contained within the flink runtime itself, I am
>> inclined
>> to go with Option 1, even though the change is slightly larger than just
>> the security fixes.
>>
>> Do people have any objections?
>>
>>
>> Regards,
>> Hong
>>
>> [1] https://nvd.nist.gov/vuln/detail/CVE-2023-2976
>> [2] https://nvd.nist.gov/vuln/detail/CVE-2020-8908
>> [3] https://nvd.nist.gov/vuln/detail/CVE-2023-44981
>> [4] https://issues.apache.org/jira/browse/FLINK-33705
>>
>


Re: Security fixes for Flink 1.18 (flink-shaded)

2024-02-02 Thread Martijn Visser
Hi Hong,

I do have objections: upgrading Flink-Shaded in a patch version is
something that we should not take lightly, since it involves components
that are used in the core functionality of Flink. We've seen in the past
that changes in Flink Shaded have an impact on stability and performance. I
would like to see how Flink is affected by these CVEs, since in almost all
cases these are false-positives for Flink.

Best regards,

Martijn

On Thu, Feb 1, 2024 at 4:22 PM Hong Liang  wrote:

> Hi all,
>
> Recently, we detected some active CVEs on the flink-shaded-guava and
> flink-shaded-zookeeper package used in Flink 1.18. Since Flink 1.18 is
> still in support for security fixes, we should consider fixing this.
> However, since the vulnerable package is coming from flink-shaded, I wanted
> to check if there are thoughts from the community around releasing a patch
> version of flink-shaded.
>
> Problem:
> Flink 1.18 uses guava 31.1-jre from flink-shaded-guava 17.0, which is
> affected by CVE-2023-2976 (HIGH) [1] and CVE-2020-8908 (LOW) [2]. Flink
> 1.18 also uses zookeeper 3.7.1, which is affected by CVE-2023-44981
> (CRITICAL) [3].
>
> To fix, I can think of two options:
> Option 1:
> Upgrade Flink 1.18 to use flink.shaded.version 18.0. This is easiest as we
> can backport the change for Flink 1.19 directly (after the performance
> regression is addressed) [4]. However, there are also upgrades to jackson,
> asm and netty in flink.shaded.version 18.0.
>
> Option 2:
> Release flink.shaded.version 17.1, with just a bump in zookeeper and guava
> versions. Then, upgrade Flink 1.18 to use this new flink.shaded.version
> 17.1. This is harder, but keeps the changes contained and minimal.
>
> Given the version bump is on flink-shaded, which is relocated to keep the
> usage of libraries contained within the flink runtime itself, I am inclined
> to go with Option 1, even though the change is slightly larger than just
> the security fixes.
>
> Do people have any objections?
>
>
> Regards,
> Hong
>
> [1] https://nvd.nist.gov/vuln/detail/CVE-2023-2976
> [2] https://nvd.nist.gov/vuln/detail/CVE-2020-8908
> [3] https://nvd.nist.gov/vuln/detail/CVE-2023-44981
> [4] https://issues.apache.org/jira/browse/FLINK-33705
>


Re: [VOTE] Release flink-connector-jdbc, release candidate #3

2024-02-02 Thread Martijn Visser
+1 (binding)

- Validated hashes
- Verified signature
- Verified that no binaries exist in the source archive
- Build the source with Maven
- Verified licenses
- Verified web PRs

On Fri, Feb 2, 2024 at 9:31 AM Yanquan Lv  wrote:

> +1 (non-binding)
>
> - Validated checksum hash
> - Verified signature
> - Build the source with Maven and jdk8/11/17
> - Check that the jar is built by jdk8
> - Verified that no binaries exist in the source archive
>
> Sergey Nuyanzin wrote on Thu, Feb 1, 2024 at 19:50:
>
> > Hi everyone,
> > Please review and vote on the release candidate #3 for the version 3.1.2,
> > as follows:
> > [ ] +1, Approve the release
> > [ ] -1, Do not approve the release (please provide specific comments)
> >
> > This version is compatible with Flink 1.16.x, 1.17.x and 1.18.x.
> >
> > The complete staging area is available for your review, which includes:
> > * JIRA release notes [1],
> > * the official Apache source release to be deployed to dist.apache.org
> > [2],
> > which are signed with the key with fingerprint 1596BBF0726835D8 [3],
> > * all artifacts to be deployed to the Maven Central Repository [4],
> > * source code tag v3.1.2-rc3 [5],
> > * website pull request listing the new release [6].
> >
> > The vote will be open for at least 72 hours. It is adopted by majority
> > approval, with at least 3 PMC affirmative votes.
> >
> > Thanks,
> > Release Manager
> >
> > [1]
> >
> >
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12354088
> > [2]
> >
> https://dist.apache.org/repos/dist/dev/flink/flink-connector-jdbc-3.1.2-rc3
> > [3] https://dist.apache.org/repos/dist/release/flink/KEYS
> > [4]
> > https://repository.apache.org/content/repositories/orgapacheflink-1706/
> > [5]
> https://github.com/apache/flink-connector-jdbc/releases/tag/v3.1.2-rc3
> > [6] https://github.com/apache/flink-web/pull/707
> >
>


[jira] [Created] (FLINK-34320) Flink Kafka connector tests time out

2024-01-31 Thread Martijn Visser (Jira)
Martijn Visser created FLINK-34320:
--

 Summary: Flink Kafka connector tests time out
 Key: FLINK-34320
 URL: https://issues.apache.org/jira/browse/FLINK-34320
 Project: Flink
  Issue Type: Bug
  Components: Connectors / Kafka
Affects Versions: kafka-3.1.0
Reporter: Martijn Visser


https://github.com/apache/flink-connector-kafka/actions/runs/7700171105/job/20987805277?pr=83#step:14:61746

{code:java}
2024-01-29T19:45:07.4412975Z 19:45:07,094 [main] INFO  
org.apache.kafka.common.utils.AppInfoParser  [] - App info 
kafka.producer for producer-client-id unregistered
2024-01-29T19:45:07.4413978Z 19:45:07,097 [main] INFO  
org.apache.flink.runtime.io.disk.FileChannelManagerImpl  [] - 
FileChannelManager removed spill file directory 
/tmp/flink-io-3306202c-1639-4b7b-a54c-381826e3682e
2024-01-29T19:45:07.4414533Z 19:45:07,440 [main] INFO  
org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducerMigrationTest [] 
- 
2024-01-29T19:45:07.4414785Z 

2024-01-29T19:45:07.4415494Z Test testRestoreProducer[Migration Savepoint: 
1.16](org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducerMigrationTest)
 successfully run.
2024-01-29T19:45:07.4415646Z 

2024-01-29T19:45:07.4698277Z [WARNING] Tests run: 18, Failures: 0, Errors: 0, 
Skipped: 9, Time elapsed: 206.197 s - in 
org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducerMigrationTest
2024-01-29T20:30:32.8459835Z ##[error]The action has timed out.
{code}





--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-34314) Update CI Node Actions from NodeJS 16 to NodeJS 20

2024-01-30 Thread Martijn Visser (Jira)
Martijn Visser created FLINK-34314:
--

 Summary: Update CI Node Actions from NodeJS 16 to NodeJS 20
 Key: FLINK-34314
 URL: https://issues.apache.org/jira/browse/FLINK-34314
 Project: Flink
  Issue Type: Technical Debt
  Components: Build System / CI
Reporter: Martijn Visser
Assignee: Martijn Visser


{code:java}
Node.js 16 actions are deprecated. Please update the following actions to use 
Node.js 20: actions/checkout@v3, actions/setup-java@v3, 
stCarolas/setup-maven@v4.5, actions/cache/restore@v3, actions/cache/save@v3. 
{code}

For more information see: 
https://github.blog/changelog/2023-09-22-github-actions-transitioning-from-node-16-to-node-20/.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[DISCUSS] Drop support for HBase v1

2024-01-29 Thread Martijn Visser
Hi all,

While working on adding support for Flink 1.19 for HBase, we've run into a
dependency convergence issue because HBase v1 relies on a really old
version of Guava.

HBase v2 has been available since May 2018, and there have been no new
releases of HBase v1 since August 2022.

I would like to propose that the Flink HBase connector drops support for
HBase v1, and only supports HBase v2 going forward. I don't think this
requires a full FLIP and vote, but I do want to start a discussion thread
for this.

Best regards,

Martijn


[jira] [Created] (FLINK-34260) Make flink-connector-aws compatible with SinkV2 changes

2024-01-29 Thread Martijn Visser (Jira)
Martijn Visser created FLINK-34260:
--

 Summary: Make flink-connector-aws compatible with SinkV2 changes
 Key: FLINK-34260
 URL: https://issues.apache.org/jira/browse/FLINK-34260
 Project: Flink
  Issue Type: Bug
  Components: Connectors / AWS
Affects Versions: aws-connector-4.3.0
Reporter: Martijn Visser


https://github.com/apache/flink-connector-aws/actions/runs/7689300085/job/20951547366#step:9:798

{code:java}
Error:  Failed to execute goal 
org.apache.maven.plugins:maven-compiler-plugin:3.8.0:testCompile 
(default-testCompile) on project flink-connector-dynamodb: Compilation failure
Error:  
/home/runner/work/flink-connector-aws/flink-connector-aws/flink-connector-aws/flink-connector-dynamodb/src/test/java/org/apache/flink/connector/dynamodb/sink/DynamoDbSinkWriterTest.java:[357,40]
 incompatible types: 
org.apache.flink.connector.base.sink.writer.TestSinkInitContext cannot be 
converted to org.apache.flink.api.connector.sink2.Sink.InitContext
{code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-34259) flink-connector-jdbc fails to compile with NPE on hasGenericTypesDisabled

2024-01-29 Thread Martijn Visser (Jira)
Martijn Visser created FLINK-34259:
--

 Summary: flink-connector-jdbc fails to compile with NPE on 
hasGenericTypesDisabled
 Key: FLINK-34259
 URL: https://issues.apache.org/jira/browse/FLINK-34259
 Project: Flink
  Issue Type: Bug
  Components: Connectors / JDBC
Reporter: Martijn Visser


https://github.com/apache/flink-connector-jdbc/actions/runs/7682035724/job/20935884874#step:14:150

{code:java}
Error:  Tests run: 10, Failures: 5, Errors: 4, Skipped: 0, Time elapsed: 7.909 
s <<< FAILURE! - in org.apache.flink.connector.jdbc.JdbcRowOutputFormatTest
Error:  
org.apache.flink.connector.jdbc.JdbcRowOutputFormatTest.testInvalidConnectionInJdbcOutputFormat
  Time elapsed: 3.254 s  <<< ERROR!
java.lang.NullPointerException: Cannot invoke 
"org.apache.flink.api.common.serialization.SerializerConfig.hasGenericTypesDisabled()"
 because "config" is null
at 
org.apache.flink.api.java.typeutils.GenericTypeInfo.createSerializer(GenericTypeInfo.java:85)
at 
org.apache.flink.api.java.typeutils.GenericTypeInfo.createSerializer(GenericTypeInfo.java:99)
at 
org.apache.flink.connector.jdbc.JdbcTestBase.getSerializer(JdbcTestBase.java:70)
at 
org.apache.flink.connector.jdbc.JdbcRowOutputFormatTest.testInvalidConnectionInJdbcOutputFormat(JdbcRowOutputFormatTest.java:336)
at java.base/java.lang.reflect.Method.invoke(Method.java:568)
at java.base/java.util.ArrayList.forEach(ArrayList.java:1511)
at java.base/java.util.ArrayList.forEach(ArrayList.java:1511)
{code}

Seems to be caused by FLINK-34122 
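
A plausible fix on the connector side, sketched below, is to stop passing a
null config when building the test serializer. This is a hypothetical sketch,
not the actual patch, and it assumes the ExecutionConfig#getSerializerConfig
accessor introduced alongside FLINK-34122:

{code:java}
import org.apache.flink.api.common.ExecutionConfig;
import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.api.common.typeutils.TypeSerializer;
import org.apache.flink.types.Row;

public class SerializerFixSketch {
    static TypeSerializer<Row> getSerializer() {
        // Passing a real SerializerConfig instead of null avoids the NPE in
        // GenericTypeInfo#createSerializer shown in the stack trace above.
        return TypeInformation.of(Row.class)
                .createSerializer(new ExecutionConfig().getSerializerConfig());
    }
}
{code}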



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [VOTE] Release flink-connector-jdbc, release candidate #1

2024-01-27 Thread Martijn Visser
For official purposes, this RC is cancelled and a new one will be created :)

On Fri, Jan 26, 2024 at 3:42 PM Sergey Nuyanzin  wrote:

> The blocker issue [1] was fixed
>
> huge thanks to David Radley and Benchao Li for working on this
>
> [1] https://issues.apache.org/jira/browse/FLINK-33365
>
>
> On Sun, Jan 7, 2024 at 4:54 PM David Radley 
> wrote:
>
> > Hi,
> > I am working on FLINK-33365. I am making good progress; thanks Sergey
> for
> > your fabulous feedback. A lot of the query cases are now working with the
> > latest fix but not all. I think it is pragmatic to revert the lookup join
> > predicate pushdown support, so we can release a functional JDBC
> connector.
> > I can then work on fixing the remaining FLINK-33365 query cases, which
> > should not take too long, but I am out until Thursday this week so will
> be
> > looking at it then.
> > Kind regards, David.
> >
> >
> > From: Martijn Visser 
> > Date: Friday, 5 January 2024 at 14:24
> > To: dev@flink.apache.org 
> > Subject: [EXTERNAL] Re: [VOTE] Release flink-connector-jdbc, release
> > candidate #1
> > Hi,
> >
> > Hmmm, it would have been good to mark the Jira ticket as a Blocker
> > then for the JDBC connector. Since it's marked as Critical, it doesn't
> > appear. It has also been open for multiple months, so it doesn't
> > really feel like a Blocker. I'm +0 with including this fix, but then
> > we should either get that in quickly or revert FLINK-16024, especially
> > since this bug ticket has been open for multiple months. Right now, it
> > means that we don't have a working JDBC connector for Flink 1.17 and
> > Flink 1.18. That shouldn't be OK.
> >
> > Thanks,
> >
> > Martijn
> >
> > On Fri, Jan 5, 2024 at 2:31 PM Sergey Nuyanzin 
> > wrote:
> > >
> > > Thanks for driving this
> > >
> > > the thing which makes me think about -1 (not sure yet, and that's why I'm
> > > asking here) is that there is FLINK-33365 [1],
> > > mentioned as a blocker for the JDBC connector release at [2],
> > > since the reason for it is FLINK-16024 [3], as was also explained in
> > > the comments for [1].
> > >
> > > So should we wait for a fix of [1] or revert [3] for 3.1.x and continue
> > > releasing 3.1.2?
> > >
> > >
> > > [1] https://issues.apache.org/jira/browse/FLINK-33365
> > > [2] https://lists.apache.org/thread/sdkm5qshqozow9sljz6c0qjft6kg9cwc
> > >
> > > [3] https://issues.apache.org/jira/browse/FLINK-16024
> > >
> > > On Fri, Jan 5, 2024 at 2:19 PM Martijn Visser <
> martijnvis...@apache.org>
> > > wrote:
> > >
> > > > Hi everyone,
> > > > Please review and vote on the release candidate #1 for the version
> > > > 3.1.2, as follows:
> > > > [ ] +1, Approve the release
> > > > [ ] -1, Do not approve the release (please provide specific comments)
> > > >
> > > > This version is compatible with Flink 1.16.x, 1.17.x and 1.18.x.
> > > >
> > > > The complete staging area is available for your review, which
> includes:
> > > > * JIRA release notes [1],
> > > > * the official Apache source release to be deployed to
> dist.apache.org
> > > > [2], which are signed with the key with fingerprint
> > > > A5F3BCE4CBE993573EC5966A65321B8382B219AF [3],
> > > > * all artifacts to be deployed to the Maven Central Repository [4],
> > > > * source code tag v3.1.2-rc1 [5],
> > > > * website pull request listing the new release [6].
> > > >
> > > > The vote will be open for at least 72 hours. It is adopted by
> majority
> > > > approval, with at least 3 PMC affirmative votes.
> > > >
> > > > Thanks,
> > > > Release Manager
> > > >
> > > > [1]
> > > >
> >
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12354088
> > > > [2]
> > > >
> >
> https://dist.apache.org/repos/dist/dev/flink/flink-connector-jdbc-3.1.2-rc1
> > > > [3] https://dist.apache.org/repos/dist/release/flink/KEYS
> > > > [4]
> > > >
> > https://repository.apache.org/content/repositories/orgapacheflink-1691/
> > > > [5]
> > https://github.com/apache/flink-connector-jdbc/releases/tag/v3.1.2-rc1
> > > > [6] https://github.com/apache/flink-web/pull/707
> > > >
> > >
> > >
> > > --
> > > Best regards,
> > > Sergey
> >
> > Unless otherwise stated above:
> >
> > IBM United Kingdom Limited
> > Registered in England and Wales with number 741598
> > Registered office: PO Box 41, North Harbour, Portsmouth, Hants. PO6 3AU
> >
>
>
> --
> Best regards,
> Sergey
>


[ANNOUNCE] Community over Code EU 2024 Travel Assistance Applications now open!

2024-01-27 Thread Martijn Visser
Hi everyone,

The Apache Software Foundation is organizing another Community over Code
event, where a wide variety of speakers will be speaking. You can find all
the details at https://eu.communityovercode.org/

Within the ASF, there is a so-called Travel Assistance Committee (TAC).
This committee exists to help those that would like to attend Community
over Code events, but are unable to do so for financial reasons. I'm hoping
that we'll have a wide variety of Flink community members over there!

All the details and more information can be found in the message below.

Best regards,

Martijn Visser

-- Forwarded message -
From: Christofer Dutz 
Date: Sat, Jan 27, 2024 at 5:31 AM
Subject: Community over Code EU 2024 Travel Assistance Applications now
open!


Hi @,

The Travel Assistance Committee (TAC) are pleased to announce that
travel assistance applications for Community over Code EU 2024 are now
open!

We will be supporting Community over Code EU 2024, Bratislava,
Slovakia, June 3rd - 5th, 2024.

TAC exists to help those that would like to attend Community over Code
events, but are unable to do so for financial reasons. For more info
on this years applications and qualifying criteria, please visit the
TAC website at < https://tac.apache.org/ >. Applications are already
open on https://tac-apply.apache.org/, so don't delay!

The Apache Travel Assistance Committee will only be accepting
applications from those people that are able to attend the full event.

Important: Applications close on Friday, March 1st, 2024.

Applicants have until the closing date above to submit their
applications (which should contain as much supporting material as
required to efficiently and accurately process their request); this
will enable TAC to announce successful applications shortly
afterwards.

As usual, TAC expects to deal with a range of applications from a
diverse range of backgrounds; therefore, we encourage (as always)
anyone thinking about sending in an application to do so ASAP.

We look forward to greeting many of you in Bratislava, Slovakia in June,
2024!

Kind Regards,

Chris

(On behalf of the Travel Assistance Committee)

When replying, please reply to travel-assista...@apache.org


[jira] [Created] (FLINK-34244) Upgrade Confluent Platform to latest compatible version

2024-01-26 Thread Martijn Visser (Jira)
Martijn Visser created FLINK-34244:
--

 Summary: Upgrade Confluent Platform to latest compatible version
 Key: FLINK-34244
 URL: https://issues.apache.org/jira/browse/FLINK-34244
 Project: Flink
  Issue Type: Technical Debt
  Components: Connectors / Kafka, Formats (JSON, Avro, Parquet, ORC, 
SequenceFile)
Affects Versions: kafka-3.1.0, 1.19.0
Reporter: Martijn Visser
Assignee: Martijn Visser


Flink uses Confluent Platform for its Confluent Avro Schema Registry 
implementation, and we can update that to the latest version.

It's also used by the Flink Kafka connector, and we should upgrade it to the 
latest compatible version of the used Kafka Client (in this case, 7.4.x)



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[VOTE] Release flink-connector-kafka v3.1.0, release candidate #1

2024-01-26 Thread Martijn Visser
Hi everyone,
Please review and vote on the release candidate #1 for the Flink Kafka
connector version 3.1.0, as follows:
[ ] +1, Approve the release
[ ] -1, Do not approve the release (please provide specific comments)

This release is compatible with Flink 1.17.* and Flink 1.18.*

The complete staging area is available for your review, which includes:
* JIRA release notes [1],
* the official Apache source release to be deployed to dist.apache.org [2],
which are signed with the key with fingerprint
A5F3BCE4CBE993573EC5966A65321B8382B219AF [3],
* all artifacts to be deployed to the Maven Central Repository [4],
* source code tag v3.1.0-rc1 [5],
* website pull request listing the new release [6].

The vote will be open for at least 72 hours. It is adopted by majority
approval, with at least 3 PMC affirmative votes.

Thanks,
Release Manager

[1]
https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12353135
[2]
https://dist.apache.org/repos/dist/dev/flink/flink-connector-kafka-3.1.0-rc1
[3] https://dist.apache.org/repos/dist/release/flink/KEYS
[4] https://repository.apache.org/content/repositories/orgapacheflink-1700
[5] https://github.com/apache/flink-connector-kafka/releases/tag/v3.1.0-rc1
[6] https://github.com/apache/flink-web/pull/718


Re: [NOTICE] master branch cannot compile for now

2024-01-26 Thread Martijn Visser
Hi Benchao,

Thanks for sending out the notification and the plan to fix it, much
appreciated.

Best regards,

Martijn

On Fri, Jan 26, 2024 at 4:40 AM Benchao Li  wrote:

> Hi devs,
>
> I merged FLINK-33263 [1] this morning (10:16 +8:00), and it was based on an
> old commit which uses an older guava version, so currently the master
> branch cannot compile.
>
> Zhanghao discovered this in FLINK-33264 [2], and the hotfix commit
> has been proposed in the same PR, hopefully we can merge it after CI
> passes (it may take a few hours).
>
> Sorry for the inconvenience.
>
> [1] https://github.com/apache/flink/pull/24128
> [2] https://github.com/apache/flink/pull/24133
>
> --
>
> Best,
> Benchao Li
>


Re: [DISCUSS] Release new version of Flink's Kafka connector

2024-01-26 Thread Martijn Visser
Hi!

Thanks for chipping in, clarifying and correcting me. I'll kick off a
release for v3.1 today then!

Best regards,

Martijn

On Fri, Jan 26, 2024 at 8:46 AM Mason Chen  wrote:

> Hi Martijn,
>
> +1 no objections, thanks for volunteering. I'll definitely help verify the
> rc when it becomes available.
>
> I think FLIP-288 (I assume you meant this) doesn't introduce incompatible
> changes, since the implementation should be state compatible, and the
> default changes should be transparent to the user and actually correct
> possibly erroneous behavior.
>
> Also, the RecordEvaluator was released with Flink 1.18 (I assume you meant
> this). Given the above, I'm +1 for a v3.1 release that only supports 1.18
> while we support patches on v3.0 that supports 1.17. This logic is also
> inline with what was agreed upon for external connector versioning [1].
>
> [1]
>
> https://cwiki.apache.org/confluence/display/FLINK/Externalized+Connector+development
>
> Best,
> Mason
>
> On Thu, Jan 25, 2024 at 2:16 PM Martijn Visser 
> wrote:
>
> > Hi everyone,
> >
> > The latest version of the Flink Kafka connector that's available is
> > currently v3.0.2, which is compatible with both Flink 1.17 and Flink
> 1.18.
> >
> > I would like to propose to create a release which is either v3.1, or v4.0
> > (see below), with compatibility for Flink 1.17 and Flink 1.18. This newer
> > version would contain many improvements [1] [2] like:
> >
> > * FLIP-246 Dynamic Kafka Source
> > * FLIP-288 Dynamic Partition Discovery
> > * Rack Awareness support
> > * Kafka Record support for KafkaSink
> > * Misc bug fixes and CVE issues
> >
> > If there are no objections, I would like to volunteer as release manager.
> >
> > The only reason I'm not sure whether this should be a v3.1 or a v4.0 is
> > because I'm not 100% sure if FLIP-246 introduces incompatible API changes
> > (requiring a new major version), or if the functionality was added in a
> > backwards compatible matter (meaning a new minor version would be
> > sufficient). I'm looping in Hongshun Wang and Leonard Xu to help clarify
> > this.
> >
> > There's also a discussion happening in an open PR [3] on dropping support
> > for Flink 1.18 afterwards (since this PR would add support for
> > RecordEvaluator, which only exists in Flink 1.19). My proposal would be
> > that after either v3.1 or v4.0 is released, we would indeed drop support
> > for Flink 1.18 with that PR and the next Flink Kafka connector would be
> > either v4.0 (if v3.1 is the next release) or v5.0 (if v4.0 is the next
> > release).
> >
> > Best regards,
> >
> > Martijn
> >
> > [1]
> >
> >
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12353135
> > [2]
> >
> >
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12352917
> > [3]
> >
> >
> https://github.com/apache/flink-connector-kafka/pull/76#pullrequestreview-1844645464
> >
>


[DISCUSS] Release new version of Flink's Kafka connector

2024-01-25 Thread Martijn Visser
Hi everyone,

The latest version of the Flink Kafka connector that's available is
currently v3.0.2, which is compatible with both Flink 1.17 and Flink 1.18.

I would like to propose to create a release which is either v3.1, or v4.0
(see below), with compatibility for Flink 1.17 and Flink 1.18. This newer
version would contain many improvements [1] [2] like:

* FLIP-246 Dynamic Kafka Source
* FLIP-288 Dynamic Partition Discovery
* Rack Awareness support
* Kafka Record support for KafkaSink
* Misc bug fixes and CVE issues

If there are no objections, I would like to volunteer as release manager.

The only reason I'm not sure whether this should be a v3.1 or a v4.0 is
because I'm not 100% sure if FLIP-246 introduces incompatible API changes
(requiring a new major version), or if the functionality was added in a
backwards compatible matter (meaning a new minor version would be
sufficient). I'm looping in Hongshun Wang and Leonard Xu to help clarify
this.

There's also a discussion happening in an open PR [3] on dropping support
for Flink 1.18 afterwards (since this PR would add support for
RecordEvaluator, which only exists in Flink 1.19). My proposal would be
that after either v3.1 or v4.0 is released, we would indeed drop support
for Flink 1.18 with that PR and the next Flink Kafka connector would be
either v4.0 (if v3.1 is the next release) or v5.0 (if v4.0 is the next
release).

Best regards,

Martijn

[1]
https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12353135
[2]
https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12352917
[3]
https://github.com/apache/flink-connector-kafka/pull/76#pullrequestreview-1844645464


Re: [DISCUSS] Proposing an LTS Release for the 1.x Line

2024-01-22 Thread Martijn Visser
Hi Rui,

I don't think that we should allow backporting of new features from
the first minor version of 2.x to 1.x. If a user doesn't yet want to
upgrade to 2.0, I think that's fine since we'll have an LTS for 1.x. If
a newer feature becomes available in 2.x that's interesting for the
user, the user at that point can decide if they want to do the
migration. It's always a case-by-case tradeoff of effort vs benefits,
and I think with a LTS version that has bug fixes only we provide the
users with assurance that existing bugs can get fixed, and that they
can decide for themselves when they want to migrate to a newer version
with better/newer features.

Best regards,

Martijn

On Thu, Jan 11, 2024 at 3:50 AM Rui Fan <1996fan...@gmail.com> wrote:
>
> Thanks everyone for discussing this topic!
>
> My question is could we make a trade-off between Flink users
> and Flink maintainers?
>
> 1. From the perspective of a Flink maintainer
>
> I strongly agree with Martijn's point of view, such as:
>
> - Allowing backporting of new features to Flink 1.x will result in users
> delaying the upgrade.
> - New features will also introduce new bugs, meaning that maintainers will
> have to spend time on two release versions.
>
> Considering the simplicity of maintenance, not backporting
> new features to Flink 1.x is fine.
>
> 2. From the perspective of a flink user
>
> In the first version of Flink 2.x, Flink will remove a lot of
> deprecated APIs and introduce some features.
>
> It's a new major version, and major version changes are much
> greater than minor and patch version changes. Big changes
> may introduce more bugs, so I guess that a large number
> of Flink users will not use the first version of 2.x in the
> production environment. Maybe they will wait for the second
> minor version of 2.x.
>
> So, I was wondering whether we could allow backporting new features
> from the first minor version of 2.x to 1.x?
>
> It means we would allow backporting new features of 2.0.0 to 1.21.
> 1.21.x would then be similar to 2.0.x: their features are the same, but
> 2.0.x removes deprecated APIs. After 2.0.0 is released,
> all new features in 2.1.x and above are only available in 2.x.
>
> Looking forward to your opinions~
>
> Best,
> Rui
>
> On Wed, Jan 10, 2024 at 9:39 PM Martijn Visser 
> wrote:
>
> > Hi Alex,
> >
> > I saw that I missed replying to this topic. I do think that Xintong
> > touched on an important topic when he mentioned that we should define
> > what an LTS version means. From my point of view, I would state that
> > an LTS version for Apache Flink means that bug fixes only will be made
> > available for a longer period of time. I think that, combined with
> > what you called option 1 (a clear end-of-life date) is the best
> > option.
> >
> > Flink 2.0 will give us primarily the ability to remove a lot of
> > deprecated APIs, especially with Flink's deprecation strategy. I
> > expect that the majority of users will have an easy migration path
> > from a Flink 1.x to a Flink 2.0, if you're currently not using a
> > deprecated API and are a Java user.
> >
> > Allowing backporting of new features to Flink 1.x will result in users
> > delaying the upgrade, but it doesn't make the upgrade any easier when
> > they must upgrade. New features will also introduce new bugs, meaning
> > that maintainers will have to spend time on two release versions. As
> > the codebases diverge more and more, this will just become
> > increasingly more complex.
> >
> > With that being said, I do think that it makes sense to also formalize
> > the result of this discussion in a FLIP. That's just easier to point
> > users towards at a later stage.
> >
> > Best regards,
> >
> > Martijn
> >
> > On Mon, Dec 4, 2023 at 9:55 PM Alexander Fedulov
> >  wrote:
> > >
> > > Hi everyone,
> > >
> > > As we progress with the 1.19 release, which might potentially (although
> > not
> > > likely) be the last in the 1.x line, I'd like to revive our discussion on
> > > the
> > > LTS support matter. There is a general consensus that due to breaking API
> > > changes in 2.0, extending bug fixes support by designating an LTS release
> > > is
> > > something we want to do.
> > >
> > > To summarize, the approaches we've considered are:
> > >
> > > Time-based: The last release of the 1.x line gets a clear end-of-life
> > date
> > > (2 years).
> > > Release-based: The last release of the 1.x line gets support for 4 minor
> > > releases in the 2.x line. The exact time is unknown, but we assume it to
> 

Re: Confluence access

2024-01-22 Thread Martijn Visser
Hi,

What's your Confluence user name?

Best regards,

Martijn

On Fri, Jan 19, 2024 at 8:04 AM Сергей Парышев
 wrote:
>
>
> Hi devs! Can I get access to Confluence? I want to suggest a FLIP.


[jira] [Created] (FLINK-34193) Remove usage of Flink-Shaded Jackson and Snakeyaml in flink-connector-kafka

2024-01-22 Thread Martijn Visser (Jira)
Martijn Visser created FLINK-34193:
--

 Summary: Remove usage of Flink-Shaded Jackson and Snakeyaml in 
flink-connector-kafka
 Key: FLINK-34193
 URL: https://issues.apache.org/jira/browse/FLINK-34193
 Project: Flink
  Issue Type: Technical Debt
  Components: Connectors / Kafka
Reporter: Martijn Visser


The Flink Kafka connector doesn't have a direct dependency in the POM on 
flink-shaded, but it still uses the shaded versions of Jackson and SnakeYAML in 
{{YamlFileMetaDataService.java}} and {{KafkaRecordDeserializationSchemaTest}}.

Those cause problems when trying to compile the Flink Kafka connector for Flink 
1.19, since these dependencies have been updated in there. Since connectors 
shouldn't rely on Flink-Shaded, we should refactor these implementations.
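
The refactoring is essentially an import swap plus declaring vanilla Jackson
and SnakeYAML as direct dependencies in the connector's own POM. A
hypothetical before/after for the Jackson usage (the class name and payload
here are illustrative, not from the connector code):

{code:java}
// Before: relocated Jackson bundled via flink-shaded
// import org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.ObjectMapper;

// After: vanilla Jackson, declared directly in the connector's POM
import com.fasterxml.jackson.databind.ObjectMapper;

import java.util.Map;

public class UnshadedJacksonSketch {
    public static void main(String[] args) throws Exception {
        ObjectMapper mapper = new ObjectMapper();
        // Same API surface as before; only the package prefix changes.
        System.out.println(mapper.writeValueAsString(Map.of("topic", "test")));
    }
}
{code}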



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

