[jira] [Created] (FLINK-34479) Fix missed changelog configs in the documentation

2024-02-20 Thread Hangxiang Yu (Jira)
Hangxiang Yu created FLINK-34479:


 Summary: Fix missed changelog configs in the documentation
 Key: FLINK-34479
 URL: https://issues.apache.org/jira/browse/FLINK-34479
 Project: Flink
  Issue Type: Bug
  Components: Documentation
Affects Versions: 1.19.0, 1.20.0
Reporter: Hangxiang Yu
Assignee: Hangxiang Yu


The state_backend_changelog_section is missing from the documentation



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-34478) NoSuchMethod error for "flink cancel $jobId" via Command Line

2024-02-20 Thread Liu Yi (Jira)
Liu Yi created FLINK-34478:
--

 Summary: NoSuchMethod error for "flink cancel $jobId" via Command 
Line
 Key: FLINK-34478
 URL: https://issues.apache.org/jira/browse/FLINK-34478
 Project: Flink
  Issue Type: Bug
  Components: Command Line Client
Affects Versions: 1.18.1
Reporter: Liu Yi


On 1.18.1 in standalone mode (launched by "{flink}/bin/start-cluster.sh"), I hit
"java.lang.NoSuchMethodError: 'boolean
org.apache.commons.cli.CommandLine.hasOption(org.apache.commons.cli.Option)'"
when trying to cancel a job submitted via the UI by running the command line
"{flink}/bin/flink cancel $jobId". Clicking the "Cancel Job" link in the UI
cancels the job just fine, and the "flink run" command line also works fine.

Has anyone seen same/similar behavior?
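One way to narrow this down: if I recall correctly, CommandLine.hasOption(org.apache.commons.cli.Option) only exists in commons-cli 1.5.0 and later, so an older commons-cli jar on the client classpath could explain the error. A minimal diagnostic sketch (the class below is hypothetical and not part of Flink) prints which jar the class was loaded from:

{code:java}
// Hypothetical diagnostic helper, not part of Flink: print the jar that
// provides org.apache.commons.cli.CommandLine on the client classpath.
// An older commons-cli (before 1.5.0) only offers hasOption(String)/hasOption(char)
// and would explain the NoSuchMethodError above.
import org.apache.commons.cli.CommandLine;

public class CommonsCliProbe {
    public static void main(String[] args) {
        System.out.println(
                CommandLine.class.getProtectionDomain().getCodeSource().getLocation());
    }
}
{code}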



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [ANNOUNCE] New Apache Flink Committer - Jiabao Sun

2024-02-20 Thread Hongshun Wang
Congratulations, Jiabao :)
Congratulations Jiabao!

Best,
Hongshun
Best regards,

Weijie

On Tue, Feb 20, 2024 at 2:19 PM Runkang He  wrote:

> Congratulations Jiabao!
>
> Best,
> Runkang He
>
> On Tue, Feb 20, 2024 at 14:18, Jane Chan  wrote:
>
> > Congrats, Jiabao!
> >
> > Best,
> > Jane
> >
> > On Tue, Feb 20, 2024 at 10:32 AM Paul Lam  wrote:
> >
> > > Congrats, Jiabao!
> > >
> > > Best,
> > > Paul Lam
> > >
> > > > On Feb 20, 2024 at 10:29, Zakelly Lan  wrote:
> > > >
> > > >> Congrats! Jiabao!
> > >
> > >
> >
>


[RESULT][VOTE] Release flink-connector-jdbc 3.1.2, release candidate #3

2024-02-20 Thread Sergey Nuyanzin
I'm happy to announce that we have unanimously approved this release.

There are 8 approving votes, 3 of which are binding:

* Hang Ruan (non-binding)
* Jiabao Sun (non-binding)
* Yanquan Lv (non-binding)
* Martijn Visser (binding)
* David Radley (non-binding)
* Sergey Nuyanzin (non-binding)
* Leonard Xu (binding)
* Matthias Pohl (binding)

There are no disapproving votes.

Thanks all! I'll complete the release and announce it soon after this
email.
-- 
Best regards,
Sergey


Re: [VOTE] Release flink-connector-jdbc, release candidate #3

2024-02-20 Thread Sergey Nuyanzin
Thanks all, this vote is now closed. I will announce the results in a
separate thread!

On Tue, Feb 20, 2024 at 3:26 PM Matthias Pohl
 wrote:

> +1 (binding)
>
> * Downloaded artifacts
> * Extracted sources and compiled them
> * Checked diff of git tag checkout with downloaded sources
> * Verified SHA512 & GPG checksums
> * Checked that all POMs have the expected version
> * Generated diffs to compare pom file changes with NOTICE files
>
> On Tue, Feb 20, 2024 at 11:23 AM Leonard Xu  wrote:
>
> > Thanks Sergey for driving this release.
> >
> > +1 (binding)
> >
> > - verified signatures
> > - verified hashsums
> > - built from source code with Maven 3.8.1 and Scala 2.12 succeeded
> > - checked Github release tag
> > - checked release notes
> > - reviewed that all jira tickets have been resolved
> > - reviewed the web PR and left one minor comment about backporting bugfix
> > to main branch
> > **Note** The release date in jira[1] needs to be updated
> >
> > Best,
> > Leonard
> > [1] https://issues.apache.org/jira/projects/FLINK/versions/12354088
> >
> >
> > > On Feb 20, 2024 at 5:15 PM, Sergey Nuyanzin  wrote:
> > >
> > > +1 (non-binding)
> > >
> > > - Validated checksum hash
> > > - Verified signature from another machine
> > > - Checked that tag is present in Github
> > > - Built the source
> > >
> > > On Tue, Feb 20, 2024 at 10:13 AM Sergey Nuyanzin 
> > > wrote:
> > >
> > >> Hi David
> > >> thanks for checking and sorry for the late reply
> > >>
> > >> yep, that's ok this just means that you haven't signed my key which is
> > ok
> > >> (usually it could happen during virtual key signing parties)
> > >>
> > >> For release checking it is ok to check that the key which was used to
> > sign
> > >> the artifacts is included into Flink release KEYS file [1]
> > >>
> > >> [1] https://dist.apache.org/repos/dist/release/flink/KEYS
> > >>
> > >> On Thu, Feb 8, 2024 at 3:50 PM David Radley 
> > >> wrote:
> > >>
> > >>> Thanks Sergey,
> > >>>
> > >>> It looks better now.
> > >>>
> > >>> gpg --verify flink-connector-jdbc-3.1.2-1.18.jar.asc
> > >>>
> > >>> gpg: assuming signed data in 'flink-connector-jdbc-3.1.2-1.18.jar'
> > >>>
> > >>> gpg: Signature made Thu  1 Feb 10:54:45 2024 GMT
> > >>>
> > >>> gpg:using RSA key
> > F7529FAE24811A5C0DF3CA741596BBF0726835D8
> > >>>
> > >>> gpg: Good signature from "Sergey Nuyanzin (CODE SIGNING KEY)
> > >>> snuyan...@apache.org" [unknown]
> > >>>
> > >>> gpg: aka "Sergey Nuyanzin (CODE SIGNING KEY)
> > >>> snuyan...@gmail.com" [unknown]
> > >>>
> > >>> gpg: aka "Sergey Nuyanzin snuyan...@gmail.com
>  > >>> snuyan...@gmail.com>" [unknown]
> > >>>
> > >>> gpg: WARNING: This key is not certified with a trusted signature!
> > >>>
> > >>> gpg:  There is no indication that the signature belongs to
> the
> > >>> owner.
> > >>>
> > >>> I assume the warning is ok,
> > >>>  Kind regards, David.
> > >>>
> > >>> From: Sergey Nuyanzin 
> > >>> Date: Thursday, 8 February 2024 at 14:39
> > >>> To: dev@flink.apache.org 
> > >>> Subject: [EXTERNAL] Re: FW: RE: [VOTE] Release flink-connector-jdbc,
> > >>> release candidate #3
> > >>> Hi David
> > >>>
> > >>> it looks like in your case you don't specify the jar itself and
> > probably
> > >>> it
> > >>> is not in current dir
> > >>> so it should be something like that (assuming that both asc and jar
> > file
> > >>> are downloaded and are in current folder)
> > >>> gpg --verify flink-connector-jdbc-3.1.2-1.16.jar.asc
> > >>> flink-connector-jdbc-3.1.2-1.16.jar
> > >>>
> > >>> Here it is a more complete guide how to do it for Apache projects [1]
> > >>>
> > >>> [1] https://www.apache.org/info/verification.html#CheckingSignatures
> > >>>
> > >>> On Thu, Feb 8, 2024 at 12:38 PM David Radley <
> david_rad...@uk.ibm.com>
> > >>> wrote:
> > >>>
> >  Hi,
> >  I was looking more at the asc files. I imported the keys and tried.
> > 
> > 
> >  gpg --verify flink-connector-jdbc-3.1.2-1.16.jar.asc
> > 
> >  gpg: no signed data
> > 
> >  gpg: can't hash datafile: No data
> > 
> >  This seems to be the same for all the asc files. It does not look right;
> >  am I doing something incorrectly?
> >    Kind regards, David.
> > 
> > 
> >  From: David Radley 
> >  Date: Thursday, 8 February 2024 at 10:46
> >  To: dev@flink.apache.org 
> >  Subject: [EXTERNAL] RE: [VOTE] Release flink-connector-jdbc, release
> >  candidate #3
> >  +1 (non-binding)
> > 
>  I assume that https://github.com/apache/flink-web/pull/707 can be
>  completed after the release is out.
> > 
> >  From: Martijn Visser 
> >  Date: Friday, 2 February 2024 at 08:38
> >  To: dev@flink.apache.org 
> >  Subject: [EXTERNAL] Re: [VOTE] Release flink-connector-jdbc, release
> >  candidate #3
> >  +1 (binding)
> > 
> >  - Validated hashes
> >  - Verified signature
> 

Re: [DISCUSS] Kubernetes Operator 1.8.0 release planning

2024-02-20 Thread Maximilian Michels
Hey Rui, hey Ryan,

Good points. Non-committers can't directly release but they can assist
with the release. It would be great to get help from both of you in
the release process.

I'd be happy to be the release manager for the 1.8 release. As for the
timing, I think we need to reach consensus in which form to include
the new memory tuning. Also, considering that Gyula just merged a
pretty big improvement / refactor of the metric collection code, we
might want to give it another week. I would target the end of February
to begin with the release process.

Cheers,
Max

On Sun, Feb 18, 2024 at 4:48 AM Rui Fan <1996fan...@gmail.com> wrote:
>
> Thanks Max and Ryan for the volunteering.
>
> To Ryan:
>
> I'm not sure whether non-Flink-committers have permission to release.
> If I remember correctly, multiple steps of the release process[1] require
> an Apache account, such as the Apache GPG key and Apache Nexus steps.
>
> If the release process requires committer permissions, feel free to
> assist with this release, thanks~
>
> To all:
>
> Max is one of the most active contributors to the
> flink-kubernetes-operator project, and he hasn't managed an operator
> release before. So Max as the release manager makes sense to me.
>
> I can assist with this release if you don't mind. In particular,
> Autoscaler Standalone 1.8.0 is much improved compared to 1.7.0,
> and I can help write the related release notes. Besides, I can help
> check and test this release.
>
> [1]
> https://cwiki.apache.org/confluence/display/FLINK/Creating+a+Flink+Kubernetes+Operator+Release
>
> Best,
> Rui
>
> On Wed, Feb 7, 2024 at 11:01 PM Ryan van Huuksloot <
> ryan.vanhuuksl...@shopify.com> wrote:
>
> > I can volunteer to be a release manager. I haven't done it for
> > Apache/Flink or the operator before so I may be a good candidate.
> >
> > Ryan van Huuksloot
> > Sr. Production Engineer | Streaming Platform
> > [image: Shopify]
> > 
> >
> >
> > On Wed, Feb 7, 2024 at 6:06 AM Maximilian Michels  wrote:
> >
> >> It's very considerate that you want to volunteer to be the release
> >> manager, but given that you have already managed one release, I would
> >> ideally like somebody else to do it. Personally, I haven't managed an
> >> operator release, although I've done it for Flink itself in the past.
> >> Nevertheless, it would be nice to have somebody new to the process.
> >>
> >> Anyone reading this who wants to try being a release manager, please
> >> don't be afraid to volunteer. Of course we'll be able to assist. That
> >> would also be a good opportunity for us to update the docs regarding
> >> the release process.
> >>
> >> Cheers,
> >> Max
> >>
> >>
> >> On Wed, Feb 7, 2024 at 10:08 AM Rui Fan <1996fan...@gmail.com> wrote:
> >> >
> >> > If the release is postponed 1-2 more weeks, I could volunteer
> >> > as the one of the release managers.
> >> >
> >> > Best,
> >> > Rui
> >> >
> >> > On Wed, Feb 7, 2024 at 4:54 AM Gyula Fóra  wrote:
> >> >>
> >> >> Given the proposed timeline was a bit short / rushed I agree with Max
> >> that
> >> >> we could wait 1-2 more weeks to wrap up the current outstanding bigger
> >> >> features around memory tuning and the JDBC state store.
> >> >>
> >> >> In the meantime it would be great to involve 1-2 new committers (or
> >> other
> >> >> contributors) in the operator release process so that we have some
> >> fresh
> >> >> eyes on the process.
> >> >> Would anyone be interested in volunteering to help with the next
> >> release?
> >> >>
> >> >> Cheers,
> >> >> Gyula
> >> >>
> >> >> On Tue, Feb 6, 2024 at 4:35 PM Maximilian Michels 
> >> wrote:
> >> >>
> >> >> > Thanks for starting the discussion Gyula!
> >> >> >
> >> >> > It comes down to how important the outstanding changes are for the
> >> >> > release. Both the memory tuning as well as the JDBC changes probably
> >> >> > need 1-2 weeks realistically to complete the initial spec. For the
> >> >> > memory tuning, I would prefer merging it in the current state as an
> >> >> > experimental feature for the release which comes disabled out of the
> >> >> > box. The reason is that it can already be useful to users who want to
> >> >> > try it out; we have seen some interest in it. Then for the next
> >> >> > release we will offer a richer feature set and might enable it by
> >> >> > default.
> >> >> >
> >> >> > Cheers,
> >> >> > Max
> >> >> >
> >> >> > On Tue, Feb 6, 2024 at 10:53 AM Rui Fan <1996fan...@gmail.com>
> >> wrote:
> >> >> > >
> >> >> > > Thanks Gyula for driving this release!
> >> >> > >
> >> >> > > Releasing 1.8.0 makes sense to me.
> >> >> > >
> >> >> > > As you said, I'm developing the JDBC event handler.
> >> >> > > Since I'm going on vacation starting this Friday and have some
> >> >> > > other work to finish before then, after evaluating my time today
> >> >> > > I found that I cannot complete the development, testing, and merging
> >> >> > > of the JDBC event handler this week. So I tend

[jira] [Created] (FLINK-34477) support capture groups in REGEXP_REPLACE

2024-02-20 Thread David Anderson (Jira)
David Anderson created FLINK-34477:
--

 Summary: support capture groups in REGEXP_REPLACE
 Key: FLINK-34477
 URL: https://issues.apache.org/jira/browse/FLINK-34477
 Project: Flink
  Issue Type: Improvement
  Components: Table SQL / API
Reporter: David Anderson


For example, I would expect this query
{code:java}
select REGEXP_REPLACE('ERR1,ERR2', '([^,]+)', 'AA$1AA');
{code}
to produce
{code:java}
AAERR1AA,AAERR2AA{code}
but instead it produces
{code:java}
AA$1AA,AA$1AA{code}
FLINK-9990 added support for REGEXP_EXTRACT, which does provide access to the 
capture groups, but for many use cases, supporting them in REGEXP_REPLACE in the 
way users expect would be more natural and convenient.
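For reference, here is a minimal sketch of the requested back-reference behavior using plain java.util.regex (illustration only, not Flink's REGEXP_REPLACE implementation):

{code:java}
// Illustration only: java.util.regex interprets $1 in the replacement string
// as a back-reference to capture group 1, which is the behavior requested above.
import java.util.regex.Pattern;

public class CaptureGroupReplaceExample {
    public static void main(String[] args) {
        String result = Pattern.compile("([^,]+)")
                .matcher("ERR1,ERR2")
                .replaceAll("AA$1AA");
        System.out.println(result); // prints AAERR1AA,AAERR2AA
    }
}
{code}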



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[RESULT][VOTE] FLIP-417: Expose JobManagerOperatorMetrics via REST API

2024-02-20 Thread Mason Chen
Hi devs,

I'm happy to announce that FLIP-417: Expose JobManagerOperatorMetrics via
REST API [1] has been accepted with 6 approving votes (3 binding) [2]:

- Hang Ruan (non-binding)
- Xuyang (non-binding)
- Rui Fan (binding)
- Maximilian Michels (binding)
- Thomas Weise (binding)
- Alexander Fedulov

There are no disapproving votes. Thanks to everyone who participated in the
discussion and voting.

Best,
Mason

[1]
https://cwiki.apache.org/confluence/display/FLINK/FLIP-417%3A+Expose+JobManagerOperatorMetrics+via+REST+API
[2] https://lists.apache.org/thread/whrng29fnrpzmyjwr882jl9cqjz8ykh8


[jira] [Created] (FLINK-34476) Window TVFs with named parameters don't support column expansion

2024-02-20 Thread Timo Walther (Jira)
Timo Walther created FLINK-34476:


 Summary: Window TVFs with named parameters don't support column 
expansion
 Key: FLINK-34476
 URL: https://issues.apache.org/jira/browse/FLINK-34476
 Project: Flink
  Issue Type: Sub-task
  Components: Table SQL / Planner
Reporter: Timo Walther
Assignee: Timo Walther


It seems named parameters still have issues with column expansion of virtual
metadata columns:

{code}
SELECT * FROM TABLE(TUMBLE(DATA => TABLE gaming_player_activity_source,
TIMECOL => DESCRIPTOR(meta_col), SIZE => INTERVAL '10' MINUTES));
{code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[ANNOUNCE] Flink 1.19 Cross-team testing & sync summary on 02/20/2024

2024-02-20 Thread Lincoln Lee
Hi devs,

I'd like to share some highlights from the release sync on 02/20/2024


*- Cross-team testing*
Only one test instruction remains to be finalized[1]. More than half of the
identified test tasks are complete: 8 of the 15 test work items are finished,
3 are in progress, and volunteers have been identified for the remaining 4.


*- Release notes*
We've drafted the release notes[2] based on the content of the 'Release Notes'
field of the JIRA tickets. Please use the ‘Suggest edit’[3] mode to propose
modifications.
Revisions to the draft will continue until the next release sync on 02/27,
after which a formal PR will be submitted for review.


*- CI issues*
There are some CI test or instability issues that are under evaluation[4].


*- Sync meeting (https://meet.google.com/vcx-arzs-trv)*
We've already switched to weekly release sync, so the next release sync
will be on Feb 27th, 2024. Feel free to join!

[1] https://issues.apache.org/jira/browse/FLINK-34305
[2]
https://docs.google.com/document/d/1HLF4Nhvkln4zALKJdwRErCnPzufh7Z3BhhkWlk9Zh7w/edit
[3]
https://support.google.com/docs/answer/6033474?hl=en=GENIE.Platform%3DDesktop
[4] https://cwiki.apache.org/confluence/display/FLINK/1.19+Release

Best,
Yun, Jing, Martijn and Lincoln


[jira] [Created] (FLINK-34475) ZooKeeperLeaderElectionDriverTest failed with exit code 2

2024-02-20 Thread Matthias Pohl (Jira)
Matthias Pohl created FLINK-34475:
-

 Summary: ZooKeeperLeaderElectionDriverTest failed with exit code 2
 Key: FLINK-34475
 URL: https://issues.apache.org/jira/browse/FLINK-34475
 Project: Flink
  Issue Type: Bug
  Components: Runtime / Coordination
Affects Versions: 1.18.1
Reporter: Matthias Pohl


[https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=57649=logs=0e7be18f-84f2-53f0-a32d-4a5e4a174679=7c1d86e3-35bd-5fd5-3b7c-30c126a78702=8746]
{code:java}
Feb 20 01:20:02 01:20:02.369 [ERROR] Process Exit Code: 2
Feb 20 01:20:02 01:20:02.369 [ERROR] Crashed tests:
Feb 20 01:20:02 01:20:02.369 [ERROR] 
org.apache.flink.runtime.leaderelection.ZooKeeperLeaderElectionDriverTest
Feb 20 01:20:02 01:20:02.369 [ERROR]at 
org.apache.maven.plugin.surefire.booterclient.ForkStarter.fork(ForkStarter.java:748)
{code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [VOTE] Release flink-connector-jdbc, release candidate #3

2024-02-20 Thread Matthias Pohl
+1 (binding)

* Downloaded artifacts
* Extracted sources and compiled them
* Checked diff of git tag checkout with downloaded sources
* Verified SHA512 & GPG checksums
* Checked that all POMs have the expected version
* Generated diffs to compare pom file changes with NOTICE files

On Tue, Feb 20, 2024 at 11:23 AM Leonard Xu  wrote:

> Thanks Sergey for driving this release.
>
> +1 (binding)
>
> - verified signatures
> - verified hashsums
> - built from source code with Maven 3.8.1 and Scala 2.12 succeeded
> - checked Github release tag
> - checked release notes
> - reviewed that all jira tickets have been resolved
> - reviewed the web PR and left one minor comment about backporting bugfix
> to main branch
> **Note** The release date in jira[1] needs to be updated
>
> Best,
> Leonard
> [1] https://issues.apache.org/jira/projects/FLINK/versions/12354088
>
>
> > On Feb 20, 2024 at 5:15 PM, Sergey Nuyanzin  wrote:
> >
> > +1 (non-binding)
> >
> > - Validated checksum hash
> > - Verified signature from another machine
> > - Checked that tag is present in Github
> > - Built the source
> >
> > On Tue, Feb 20, 2024 at 10:13 AM Sergey Nuyanzin 
> > wrote:
> >
> >> Hi David
> >> thanks for checking and sorry for the late reply
> >>
> >> yep, that's ok this just means that you haven't signed my key which is
> ok
> >> (usually it could happen during virtual key signing parties)
> >>
> >> For release checking it is ok to check that the key which was used to
> sign
> >> the artifacts is included into Flink release KEYS file [1]
> >>
> >> [1] https://dist.apache.org/repos/dist/release/flink/KEYS
> >>
> >> On Thu, Feb 8, 2024 at 3:50 PM David Radley 
> >> wrote:
> >>
> >>> Thanks Sergey,
> >>>
> >>> It looks better now.
> >>>
> >>> gpg --verify flink-connector-jdbc-3.1.2-1.18.jar.asc
> >>>
> >>> gpg: assuming signed data in 'flink-connector-jdbc-3.1.2-1.18.jar'
> >>>
> >>> gpg: Signature made Thu  1 Feb 10:54:45 2024 GMT
> >>>
> >>> gpg:using RSA key
> F7529FAE24811A5C0DF3CA741596BBF0726835D8
> >>>
> >>> gpg: Good signature from "Sergey Nuyanzin (CODE SIGNING KEY)
> >>> snuyan...@apache.org" [unknown]
> >>>
> >>> gpg: aka "Sergey Nuyanzin (CODE SIGNING KEY)
> >>> snuyan...@gmail.com" [unknown]
> >>>
> >>> gpg: aka "Sergey Nuyanzin snuyan...@gmail.com >>> snuyan...@gmail.com>" [unknown]
> >>>
> >>> gpg: WARNING: This key is not certified with a trusted signature!
> >>>
> >>> gpg:  There is no indication that the signature belongs to the
> >>> owner.
> >>>
> >>> I assume the warning is ok,
> >>>  Kind regards, David.
> >>>
> >>> From: Sergey Nuyanzin 
> >>> Date: Thursday, 8 February 2024 at 14:39
> >>> To: dev@flink.apache.org 
> >>> Subject: [EXTERNAL] Re: FW: RE: [VOTE] Release flink-connector-jdbc,
> >>> release candidate #3
> >>> Hi David
> >>>
> >>> it looks like in your case you don't specify the jar itself and
> probably
> >>> it
> >>> is not in current dir
> >>> so it should be something like that (assuming that both asc and jar
> file
> >>> are downloaded and are in current folder)
> >>> gpg --verify flink-connector-jdbc-3.1.2-1.16.jar.asc
> >>> flink-connector-jdbc-3.1.2-1.16.jar
> >>>
> >>> Here it is a more complete guide how to do it for Apache projects [1]
> >>>
> >>> [1] https://www.apache.org/info/verification.html#CheckingSignatures
> >>>
> >>> On Thu, Feb 8, 2024 at 12:38 PM David Radley 
> >>> wrote:
> >>>
>  Hi,
>  I was looking more at the asc files. I imported the keys and tried.
> 
> 
>  gpg --verify flink-connector-jdbc-3.1.2-1.16.jar.asc
> 
>  gpg: no signed data
> 
>  gpg: can't hash datafile: No data
> 
>  This seems to be the same for all the asc files. It does not look right;
>  am I doing something incorrectly?
>    Kind regards, David.
> 
> 
>  From: David Radley 
>  Date: Thursday, 8 February 2024 at 10:46
>  To: dev@flink.apache.org 
>  Subject: [EXTERNAL] RE: [VOTE] Release flink-connector-jdbc, release
>  candidate #3
>  +1 (non-binding)
> 
>  I assume that https://github.com/apache/flink-web/pull/707 can be
>  completed after the release is out.
> 
>  From: Martijn Visser 
>  Date: Friday, 2 February 2024 at 08:38
>  To: dev@flink.apache.org 
>  Subject: [EXTERNAL] Re: [VOTE] Release flink-connector-jdbc, release
>  candidate #3
>  +1 (binding)
> 
>  - Validated hashes
>  - Verified signature
>  - Verified that no binaries exist in the source archive
>  - Build the source with Maven
>  - Verified licenses
>  - Verified web PRs
> 
>  On Fri, Feb 2, 2024 at 9:31 AM Yanquan Lv 
> wrote:
> 
> > +1 (non-binding)
> >
> > - Validated checksum hash
> > - Verified signature
> > - Build the source with Maven and jdk8/11/17
> > - Check that the jar is built by jdk8
> > - Verified that no binaries exist in the 

[jira] [Created] (FLINK-34474) After failed deserialization with ConfluentRegistryAvroDeserializationSchema all subsequent deserialization fails

2024-02-20 Thread Grzegorz Liter (Jira)
Grzegorz Liter created FLINK-34474:
--

 Summary: After failed deserialization with 
ConfluentRegistryAvroDeserializationSchema all subsequent deserialization fails
 Key: FLINK-34474
 URL: https://issues.apache.org/jira/browse/FLINK-34474
 Project: Flink
  Issue Type: Bug
  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
Affects Versions: 1.18.1
 Environment: * Locally executed, without Flink cluster.
 * Flink on Kubernetes
Reporter: Grzegorz Liter


Steps to reproduce:
 # Create a ConfluentRegistryAvroDeserializationSchema instance for a specific Avro record class
 # Parse an invalid byte representation of a serialized Avro event
 # Parse a valid byte representation of a serialized Avro event

Expected:
Deserialization in step 3 is successful

Actual:
Deserialization in step 3 fails

A short code example (I cannot attach the full example at this time):

```
public class DeserializationTest {
    public static void main(String[] args) throws Exception {
        byte[] valid = new byte[]{
            ...
        };
        byte[] invalid = new byte[]{
            ...
        };

        ConfluentRegistryAvroDeserializationSchema<RawEvent> deserializer =
                ConfluentRegistryAvroDeserializationSchema.forSpecific(
                        RawEvent.class, "<valid schema registry url>");

        System.out.println("deserialize valid");
        des(deserializer, valid);
        System.out.println("deserialize invalid");
        des(deserializer, invalid);
        System.out.println("deserialize valid");
        des(deserializer, valid);
        System.out.println("deserialize valid");
        des(deserializer, valid);
    }

    private static void des(
            ConfluentRegistryAvroDeserializationSchema<RawEvent> deserializer, byte[] bytes) {
        try {
            deserializer.deserialize(bytes);
            System.out.println("VALID");
        } catch (Exception e) {
            System.out.println("FAILED: " + e);
        }
    }
}
```

Console output:

```
deserialize valid
VALID
deserialize invalid
FAILED: java.lang.ArrayIndexOutOfBoundsException: Index -154 out of bounds for 
length 2
deserialize valid
FAILED: java.lang.ArrayIndexOutOfBoundsException: Index 24 out of bounds for 
length 2
deserialize valid
FAILED: java.lang.ArrayIndexOutOfBoundsException: Index 25 out of bounds for 
length 2
```



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-34473) Migrate FlinkPruneEmptyRules

2024-02-20 Thread Jacky Lau (Jira)
Jacky Lau created FLINK-34473:
-

 Summary: Migrate FlinkPruneEmptyRules
 Key: FLINK-34473
 URL: https://issues.apache.org/jira/browse/FLINK-34473
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / Planner
Affects Versions: 1.20.0
Reporter: Jacky Lau
 Fix For: 1.20.0






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [DISCUSS] FLIP-402: Extend ZooKeeper Curator configurations

2024-02-20 Thread Matthias Pohl
Thanks for your reply Zhu Zhu. I guess, if we don't see any value in
aligning the parameter names (I don't have a strong argument either aside
from "it looks nicer"), there wouldn't be a need to add it to the
guidelines as well.

Sorry for not responding right away. I did a bit of research on the
AuthInfo configuration parameter (server side authorization [1], SO thread
on utilizing curator's authorization API [2]). It looks like using
String#getBytes() is the valid approach to configure this. So, in this way,
I don't have anything to add to this FLIP proposal.

+1 LGTM

[1]
https://cwiki.apache.org/confluence/display/ZOOKEEPER/Client-Server+mutual+authentication
[2] https://stackoverflow.com/questions/40427700/using-acl-with-curator
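For illustration, a minimal sketch of the Map -> List<AuthInfo> conversion using Curator's AuthInfo(String, byte[]) constructor and String#getBytes() as described above (the class name is made up; this is not the FLIP's code):

import java.nio.charset.StandardCharsets;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;
import org.apache.curator.framework.AuthInfo;

public final class ZooKeeperAuthInfoConversion {
    /** Convert configured scheme -> auth-string entries into Curator AuthInfo objects. */
    static List<AuthInfo> toAuthInfos(Map<String, String> configured) {
        return configured.entrySet().stream()
                .map(e -> new AuthInfo(e.getKey(), e.getValue().getBytes(StandardCharsets.UTF_8)))
                .collect(Collectors.toList());
    }
}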

On Wed, Jan 24, 2024 at 12:35 PM Zhu Zhu  wrote:

> @Matthias
> Thanks for raising the question.
>
> AFAIK, there is no guide of this naming convention yet. But +1 to add this
> naming
> convention in Flink code style guide. So that new configuration names can
> follow
> the guide.
>
> However, I tend to not force the configuration name alignment in Flink 2.0.
> It does not bring obvious benefits to users but will increase the
> migration cost.
> And the feature freeze of 1.19 is coming soon. I think we can add aligned
> key
> names for those exceptional config options in 1.20, but remove the old
> keys in
> later major versions.
>
> Thanks,
> Zhu
>
> Matthias Pohl  于2024年1月23日周二 19:45写道:
>
>> - Regarding names: sure it totally makes sense to follow the kebab case
>>> and Flip has reflected the change.
>>> Regarding the convention, Flink has this widely used configuration
>>> storageDir, which doesn't follow the kebab rule and creates some confusion.
>>> IMHO it would be valuable to add a clear guide.
>>
>>
>> Ah true, I should have checked the HA-related parameters as well.
>> Initially, I just briefly skimmed over a few ConfigOptions names.
>>
>> @Zhu Zhu Is the alignment of the configuration parameter names also part
>> of the 2.0 efforts that touch the Flink configuration? Is there a guideline
>> we can follow here which is future-proof in terms of parameter naming?
>>
>> - I am considering calling the next method from the Curator framework:
>>> authorization(List) [2]. I have added necessary details regarding
>>> Map -> List(AuthInfo) conversion, taking into account that
>>> AuthInfo has a constructor with String, byte[] parameters.
>>>
>>
>> The update in the FLIP looks good to me.
>>
>> - Good point. Please let me know if I am missing something, but it seems
>>> that we already can influence ACLProvider for Curator in Flink with
>>> high-availability.zookeeper.client.acl [2] . The way it is done currently
>>> is translation of the predefined constant to some predefined ACL Provider
>>> [3]. I do not see if we can add something to the current FLIP. I suppose
>>> that eventual extension of the supported ACLProvider would be
>>> straightforward and could be done outside of the current Flip as soon
>>> as concrete use-case requirements arise.
>>
>>
>> Thanks for the pointer. My concern is just that, we might have to
>> consider certain formats for the AuthInfo to be aligned with ACLProviders.
>>
>> @Marton: I know that ZooKeeper is probably a bit unrelated to FLIP-211
>> [1] but since you worked on the Kerberos delegation token provider: Is
>> there something to consider for the ZK Kerberos integration? Maybe, you can
>> help us out.
>>
>> Matthias
>>
>> [1]
>> https://cwiki.apache.org/confluence/display/FLINK/FLIP-211%3A+Kerberos+delegation+token+framework
>>
>> On Mon, Jan 22, 2024 at 6:55 PM Alex Nitavsky 
>> wrote:
>>
>>> Hello Matthias,
>>>
>>> Thanks a lot for the feedback.
>>>
>>> - Regarding names: sure it totally makes sense to follow the kebab case
>>> and Flip has reflected the change.
>>> Regarding the convention, Flink has this widely used configuration
>>> storageDir, which doesn't follow the kebab rule and creates some confusion.
>>> IMHO it would be valuable to add a clear guide.
>>>
>>> - I am considering calling the next method from the Curator framework:
>>> authorization(List) [2]. I have added necessary details regarding
>>> Map -> List(AuthInfo) conversion, taking into account that
>>> AuthInfo has a constructor with String, byte[] parameters.
>>>
>>> - Good point. Please let me know if I am missing something, but it seems
>>> that we already can influence ACLProvider for Curator in Flink with
>>> high-availability.zookeeper.client.acl [2] . The way it is done currently
>>> is translation of the predefined constant to some predefined ACL Provider
>>> [3]. I do not see if we can add something to the current FLIP. I suppose
>>> that eventual extension of the supported ACLProvider would be
>>> straightforward and could be done outside of the current Flip as soon
>>> as concrete use-case requirements arise.
>>>
>>> Kind Regards
>>> Oleksandr
>>>
>>> [1]
>>> 

[jira] [Created] (FLINK-34472) loading class of protobuf format descriptor by Class.forName(className, true, Thread.currentThread().getContextClassLoader()) may can not find class because The current

2024-02-20 Thread jeremyMu (Jira)
jeremyMu created FLINK-34472:


 Summary: loading the class of a protobuf format descriptor via 
Class.forName(className, true, Thread.currentThread().getContextClassLoader()) 
may fail to find the class because the current thread's context class loader may 
not contain it
 Key: FLINK-34472
 URL: https://issues.apache.org/jira/browse/FLINK-34472
 Project: Flink
  Issue Type: Improvement
  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
Affects Versions: 1.18.1
Reporter: jeremyMu
 Fix For: 1.18.1
 Attachments: exception1.png
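
If it helps, here is a hypothetical sketch (not Flink's actual code; the class and parameter names are made up) of the kind of fallback the summary suggests: try the thread context class loader first, then fall back to a caller-provided class loader such as the one that loaded the protobuf format classes.

{code:java}
// Hypothetical sketch only: prefer the thread context class loader, but fall
// back to a caller-provided loader so descriptor classes shipped in the user
// jar can still be resolved when the context class loader does not see them.
public final class DescriptorClassLoading {

    static Class<?> loadDescriptorClass(String className, ClassLoader fallback)
            throws ClassNotFoundException {
        ClassLoader context = Thread.currentThread().getContextClassLoader();
        if (context == null) {
            return Class.forName(className, true, fallback);
        }
        try {
            return Class.forName(className, true, context);
        } catch (ClassNotFoundException e) {
            return Class.forName(className, true, fallback);
        }
    }
}
{code}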





--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [VOTE] Release flink-connector-jdbc, release candidate #3

2024-02-20 Thread Leonard Xu
Thanks Sergey for driving this release.

+1 (binding)

- verified signatures
- verified hashsums
- built from source code with Maven 3.8.1 and Scala 2.12 succeeded
- checked Github release tag 
- checked release notes
- reviewed that all jira tickets have been resolved
- reviewed the web PR and left one minor comment about backporting bugfix to 
main branch
**Note** The release date in jira[1] needs to be updated

Best,
Leonard
[1] https://issues.apache.org/jira/projects/FLINK/versions/12354088


> On Feb 20, 2024 at 5:15 PM, Sergey Nuyanzin  wrote:
> 
> +1 (non-binding)
> 
> - Validated checksum hash
> - Verified signature from another machine
> - Checked that tag is present in Github
> - Built the source
> 
> On Tue, Feb 20, 2024 at 10:13 AM Sergey Nuyanzin 
> wrote:
> 
>> Hi David
>> thanks for checking and sorry for the late reply
>> 
>> yep, that's ok this just means that you haven't signed my key which is ok
>> (usually it could happen during virtual key signing parties)
>> 
>> For release checking it is ok to check that the key which was used to sign
>> the artifacts is included into Flink release KEYS file [1]
>> 
>> [1] https://dist.apache.org/repos/dist/release/flink/KEYS
>> 
>> On Thu, Feb 8, 2024 at 3:50 PM David Radley 
>> wrote:
>> 
>>> Thanks Sergey,
>>> 
>>> It looks better now.
>>> 
>>> gpg --verify flink-connector-jdbc-3.1.2-1.18.jar.asc
>>> 
>>> gpg: assuming signed data in 'flink-connector-jdbc-3.1.2-1.18.jar'
>>> 
>>> gpg: Signature made Thu  1 Feb 10:54:45 2024 GMT
>>> 
>>> gpg:using RSA key F7529FAE24811A5C0DF3CA741596BBF0726835D8
>>> 
>>> gpg: Good signature from "Sergey Nuyanzin (CODE SIGNING KEY)
>>> snuyan...@apache.org" [unknown]
>>> 
>>> gpg: aka "Sergey Nuyanzin (CODE SIGNING KEY)
>>> snuyan...@gmail.com" [unknown]
>>> 
>>> gpg: aka "Sergey Nuyanzin snuyan...@gmail.com>> snuyan...@gmail.com>" [unknown]
>>> 
>>> gpg: WARNING: This key is not certified with a trusted signature!
>>> 
>>> gpg:  There is no indication that the signature belongs to the
>>> owner.
>>> 
>>> I assume the warning is ok,
>>>  Kind regards, David.
>>> 
>>> From: Sergey Nuyanzin 
>>> Date: Thursday, 8 February 2024 at 14:39
>>> To: dev@flink.apache.org 
>>> Subject: [EXTERNAL] Re: FW: RE: [VOTE] Release flink-connector-jdbc,
>>> release candidate #3
>>> Hi David
>>> 
>>> it looks like in your case you don't specify the jar itself and probably
>>> it
>>> is not in current dir
>>> so it should be something like that (assuming that both asc and jar file
>>> are downloaded and are in current folder)
>>> gpg --verify flink-connector-jdbc-3.1.2-1.16.jar.asc
>>> flink-connector-jdbc-3.1.2-1.16.jar
>>> 
>>> Here it is a more complete guide how to do it for Apache projects [1]
>>> 
>>> [1] https://www.apache.org/info/verification.html#CheckingSignatures
>>> 
>>> On Thu, Feb 8, 2024 at 12:38 PM David Radley 
>>> wrote:
>>> 
 Hi,
 I was looking more at the asc files. I imported the keys and tried.
 
 
 gpg --verify flink-connector-jdbc-3.1.2-1.16.jar.asc
 
 gpg: no signed data
 
 gpg: can't hash datafile: No data
 
 This seems to be the same for all the asc files. It does not look right;
 am I doing something incorrectly?
   Kind regards, David.
 
 
 From: David Radley 
 Date: Thursday, 8 February 2024 at 10:46
 To: dev@flink.apache.org 
 Subject: [EXTERNAL] RE: [VOTE] Release flink-connector-jdbc, release
 candidate #3
 +1 (non-binding)
 
 I assume that https://github.com/apache/flink-web/pull/707 can be
 completed after the release is out.
 
 From: Martijn Visser 
 Date: Friday, 2 February 2024 at 08:38
 To: dev@flink.apache.org 
 Subject: [EXTERNAL] Re: [VOTE] Release flink-connector-jdbc, release
 candidate #3
 +1 (binding)
 
 - Validated hashes
 - Verified signature
 - Verified that no binaries exist in the source archive
 - Build the source with Maven
 - Verified licenses
 - Verified web PRs
 
 On Fri, Feb 2, 2024 at 9:31 AM Yanquan Lv  wrote:
 
> +1 (non-binding)
> 
> - Validated checksum hash
> - Verified signature
> - Build the source with Maven and jdk8/11/17
> - Check that the jar is built by jdk8
> - Verified that no binaries exist in the source archive
> 
> On Thu, Feb 1, 2024 at 19:50, Sergey Nuyanzin  wrote:
> 
>> Hi everyone,
>> Please review and vote on the release candidate #3 for the version
 3.1.2,
>> as follows:
>> [ ] +1, Approve the release
>> [ ] -1, Do not approve the release (please provide specific
>>> comments)
>> 
>> This version is compatible with Flink 1.16.x, 1.17.x and 1.18.x.
>> 
>> The complete staging area is available for your review, which
>>> includes:
>> * JIRA release notes [1],
>> * the official Apache source release to be deployed to
>>> 

[jira] [Created] (FLINK-34471) Tune the network memory in Autoscaler

2024-02-20 Thread Rui Fan (Jira)
Rui Fan created FLINK-34471:
---

 Summary: Tune the network memory in Autoscaler
 Key: FLINK-34471
 URL: https://issues.apache.org/jira/browse/FLINK-34471
 Project: Flink
  Issue Type: Improvement
  Components: Autoscaler
Reporter: Rui Fan


Design doc: 
https://docs.google.com/document/d/19HYamwMaYYYOeH3NRbk6l9P-bBLBfgzMYjfGEPWEbeo/edit?usp=sharing



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: FW: RE: [VOTE] Release flink-connector-jdbc, release candidate #3

2024-02-20 Thread Sergey Nuyanzin
Hi David
thanks for checking and sorry for the late reply

Yep, that's ok. It just means that you haven't signed my key, which is fine
(that usually happens during virtual key signing parties).

For release checking it is sufficient to check that the key which was used to sign
the artifacts is included in the Flink release KEYS file [1]

[1] https://dist.apache.org/repos/dist/release/flink/KEYS

On Thu, Feb 8, 2024 at 3:50 PM David Radley  wrote:

> Thanks Sergey,
>
> It looks better now.
>
> gpg --verify flink-connector-jdbc-3.1.2-1.18.jar.asc
>
> gpg: assuming signed data in 'flink-connector-jdbc-3.1.2-1.18.jar'
>
> gpg: Signature made Thu  1 Feb 10:54:45 2024 GMT
>
> gpg:using RSA key F7529FAE24811A5C0DF3CA741596BBF0726835D8
>
> gpg: Good signature from "Sergey Nuyanzin (CODE SIGNING KEY)
> snuyan...@apache.org" [unknown]
>
> gpg: aka "Sergey Nuyanzin (CODE SIGNING KEY)
> snuyan...@gmail.com" [unknown]
>
> gpg: aka "Sergey Nuyanzin snuyan...@gmail.com snuyan...@gmail.com>" [unknown]
>
> gpg: WARNING: This key is not certified with a trusted signature!
>
> gpg:  There is no indication that the signature belongs to the
> owner.
>
> I assume the warning is ok,
>   Kind regards, David.
>
> From: Sergey Nuyanzin 
> Date: Thursday, 8 February 2024 at 14:39
> To: dev@flink.apache.org 
> Subject: [EXTERNAL] Re: FW: RE: [VOTE] Release flink-connector-jdbc,
> release candidate #3
> Hi David
>
> it looks like in your case you don't specify the jar itself and probably it
> is not in current dir
> so it should be something like that (assuming that both asc and jar file
> are downloaded and are in current folder)
> gpg --verify flink-connector-jdbc-3.1.2-1.16.jar.asc
> flink-connector-jdbc-3.1.2-1.16.jar
>
> Here it is a more complete guide how to do it for Apache projects [1]
>
> [1] https://www.apache.org/info/verification.html#CheckingSignatures
>
> On Thu, Feb 8, 2024 at 12:38 PM David Radley 
> wrote:
>
> > Hi,
> > I was looking more at the asc files. I imported the keys and tried.
> >
> >
> > gpg --verify flink-connector-jdbc-3.1.2-1.16.jar.asc
> >
> > gpg: no signed data
> >
> > gpg: can't hash datafile: No data
> >
> > This seems to be the same for all the asc files. It does not look right;
> > am I doing something incorrectly?
> >Kind regards, David.
> >
> >
> > From: David Radley 
> > Date: Thursday, 8 February 2024 at 10:46
> > To: dev@flink.apache.org 
> > Subject: [EXTERNAL] RE: [VOTE] Release flink-connector-jdbc, release
> > candidate #3
> > +1 (non-binding)
> >
> > I assume that https://github.com/apache/flink-web/pull/707 can be
> > completed after the release is out.
> >
> > From: Martijn Visser 
> > Date: Friday, 2 February 2024 at 08:38
> > To: dev@flink.apache.org 
> > Subject: [EXTERNAL] Re: [VOTE] Release flink-connector-jdbc, release
> > candidate #3
> > +1 (binding)
> >
> > - Validated hashes
> > - Verified signature
> > - Verified that no binaries exist in the source archive
> > - Build the source with Maven
> > - Verified licenses
> > - Verified web PRs
> >
> > On Fri, Feb 2, 2024 at 9:31 AM Yanquan Lv  wrote:
> >
> > > +1 (non-binding)
> > >
> > > - Validated checksum hash
> > > - Verified signature
> > > - Build the source with Maven and jdk8/11/17
> > > - Check that the jar is built by jdk8
> > > - Verified that no binaries exist in the source archive
> > >
> > > On Thu, Feb 1, 2024 at 19:50, Sergey Nuyanzin  wrote:
> > >
> > > > Hi everyone,
> > > > Please review and vote on the release candidate #3 for the version
> > 3.1.2,
> > > > as follows:
> > > > [ ] +1, Approve the release
> > > > [ ] -1, Do not approve the release (please provide specific comments)
> > > >
> > > > This version is compatible with Flink 1.16.x, 1.17.x and 1.18.x.
> > > >
> > > > The complete staging area is available for your review, which
> includes:
> > > > * JIRA release notes [1],
> > > > * the official Apache source release to be deployed to
> dist.apache.org
> > > > [2],
> > > > which are signed with the key with fingerprint 1596BBF0726835D8 [3],
> > > > * all artifacts to be deployed to the Maven Central Repository [4],
> > > > * source code tag v3.1.2-rc3 [5],
> > > > * website pull request listing the new release [6].
> > > >
> > > > The vote will be open for at least 72 hours. It is adopted by
> majority
> > > > approval, with at least 3 PMC affirmative votes.
> > > >
> > > > Thanks,
> > > > Release Manager
> > > >
> > > > [1]
> > > >
> > > >
> > >
> >
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522=12354088
> > > > [2]
> > > >
> > >
> >
> https://dist.apache.org/repos/dist/dev/flink/flink-connector-jdbc-3.1.2-rc3
> > > > [3] https://dist.apache.org/repos/dist/release/flink/KEYS
> > > > [4]
> > > >
> > https://repository.apache.org/content/repositories/orgapacheflink-1706/
> > > > [5]
> > > https://github.com/apache/flink-connector-jdbc/releases/tag/v3.1.2-rc3
> > > > [6] 

Re: [VOTE] Release flink-connector-parent 1.1.0 release candidate #2

2024-02-20 Thread gongzhongqiang
+1 (non-binding)

- checksum and signature are good
- github tag exists
- build success on Ubuntu 22.04 with jdk8
- no binaries in source
- reviewed web pr

Best,
Zhongqiang Gong

On Tue, Feb 20, 2024 at 01:34, Etienne Chauchot  wrote:

> Hi everyone,
> Please review and vote on the release candidate #2 for the version
> 1.1.0, as follows:
> [ ] +1, Approve the release
> [ ] -1, Do not approve the release (please provide specific comments)
>
>
> The complete staging area is available for your review, which includes:
> * JIRA release notes [1],
> * the official Apache source release to be deployed to dist.apache.org
> [2], which are signed with the key with fingerprint
> D1A76BA19D6294DD0033F6843A019F0B8DD163EA [3],
> * all artifacts to be deployed to the Maven Central Repository [4],
> * source code tag v1.1.0-rc2 [5],
> * website pull request listing the new release [6].
>
> * confluence wiki: connector parent upgrade to version 1.1.0 that will
> be validated after the artifact is released (there is no PR mechanism on
> the wiki) [7]
>
>
> The vote will be open for at least 72 hours. It is adopted by majority
> approval, with at least 3 PMC affirmative votes.
>
> Thanks,
> Etienne
>
> [1]
>
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522=12353442
> [2]
>
> https://dist.apache.org/repos/dist/dev/flink/flink-connector-parent-1.1.0-rc2
> [3] https://dist.apache.org/repos/dist/release/flink/KEYS
> [4] https://repository.apache.org/content/repositories/orgapacheflink-1707
> [5]
>
> https://github.com/apache/flink-connector-shared-utils/releases/tag/v1.1.0-rc2
>
> [6] https://github.com/apache/flink-web/pull/717
>
> [7]
>
> https://cwiki.apache.org/confluence/display/FLINK/Externalized+Connector+development
>


Re: FW: RE: [VOTE] Release flink-connector-jdbc, release candidate #3

2024-02-20 Thread Sergey Nuyanzin
+1 (non-binding)

- Validated checksum hash
- Verified signature from another machine
- Checked that tag is present in Github
- Built the source

On Tue, Feb 20, 2024 at 10:13 AM Sergey Nuyanzin 
wrote:

> Hi David
> thanks for checking and sorry for the late reply
>
> yep, that's ok this just means that you haven't signed my key which is ok
> (usually it could happen during virtual key signing parties)
>
> For release checking it is ok to check that the key which was used to sign
> the artifacts is included into Flink release KEYS file [1]
>
> [1] https://dist.apache.org/repos/dist/release/flink/KEYS
>
> On Thu, Feb 8, 2024 at 3:50 PM David Radley 
> wrote:
>
>> Thanks Sergey,
>>
>> It looks better now.
>>
>> gpg --verify flink-connector-jdbc-3.1.2-1.18.jar.asc
>>
>> gpg: assuming signed data in 'flink-connector-jdbc-3.1.2-1.18.jar'
>>
>> gpg: Signature made Thu  1 Feb 10:54:45 2024 GMT
>>
>> gpg:using RSA key F7529FAE24811A5C0DF3CA741596BBF0726835D8
>>
>> gpg: Good signature from "Sergey Nuyanzin (CODE SIGNING KEY)
>> snuyan...@apache.org" [unknown]
>>
>> gpg: aka "Sergey Nuyanzin (CODE SIGNING KEY)
>> snuyan...@gmail.com" [unknown]
>>
>> gpg: aka "Sergey Nuyanzin snuyan...@gmail.com> snuyan...@gmail.com>" [unknown]
>>
>> gpg: WARNING: This key is not certified with a trusted signature!
>>
>> gpg:  There is no indication that the signature belongs to the
>> owner.
>>
>> I assume the warning is ok,
>>   Kind regards, David.
>>
>> From: Sergey Nuyanzin 
>> Date: Thursday, 8 February 2024 at 14:39
>> To: dev@flink.apache.org 
>> Subject: [EXTERNAL] Re: FW: RE: [VOTE] Release flink-connector-jdbc,
>> release candidate #3
>> Hi David
>>
>> it looks like in your case you don't specify the jar itself and probably
>> it
>> is not in current dir
>> so it should be something like that (assuming that both asc and jar file
>> are downloaded and are in current folder)
>> gpg --verify flink-connector-jdbc-3.1.2-1.16.jar.asc
>> flink-connector-jdbc-3.1.2-1.16.jar
>>
>> Here it is a more complete guide how to do it for Apache projects [1]
>>
>> [1] https://www.apache.org/info/verification.html#CheckingSignatures
>>
>> On Thu, Feb 8, 2024 at 12:38 PM David Radley 
>> wrote:
>>
>> > Hi,
>> > I was looking more at the asc files. I imported the keys and tried.
>> >
>> >
>> > gpg --verify flink-connector-jdbc-3.1.2-1.16.jar.asc
>> >
>> > gpg: no signed data
>> >
>> > gpg: can't hash datafile: No data
>> >
>> > This seems to be the same for all the asc files. It does not look right;
>> > am I doing something incorrectly?
>> >Kind regards, David.
>> >
>> >
>> > From: David Radley 
>> > Date: Thursday, 8 February 2024 at 10:46
>> > To: dev@flink.apache.org 
>> > Subject: [EXTERNAL] RE: [VOTE] Release flink-connector-jdbc, release
>> > candidate #3
>> > +1 (non-binding)
>> >
>> > I assume that https://github.com/apache/flink-web/pull/707 can be
>> > completed after the release is out.
>> >
>> > From: Martijn Visser 
>> > Date: Friday, 2 February 2024 at 08:38
>> > To: dev@flink.apache.org 
>> > Subject: [EXTERNAL] Re: [VOTE] Release flink-connector-jdbc, release
>> > candidate #3
>> > +1 (binding)
>> >
>> > - Validated hashes
>> > - Verified signature
>> > - Verified that no binaries exist in the source archive
>> > - Build the source with Maven
>> > - Verified licenses
>> > - Verified web PRs
>> >
>> > On Fri, Feb 2, 2024 at 9:31 AM Yanquan Lv  wrote:
>> >
>> > > +1 (non-binding)
>> > >
>> > > - Validated checksum hash
>> > > - Verified signature
>> > > - Build the source with Maven and jdk8/11/17
>> > > - Check that the jar is built by jdk8
>> > > - Verified that no binaries exist in the source archive
>> > >
> >> > > On Thu, Feb 1, 2024 at 19:50, Sergey Nuyanzin  wrote:
>> > >
>> > > > Hi everyone,
>> > > > Please review and vote on the release candidate #3 for the version
>> > 3.1.2,
>> > > > as follows:
>> > > > [ ] +1, Approve the release
>> > > > [ ] -1, Do not approve the release (please provide specific
>> comments)
>> > > >
>> > > > This version is compatible with Flink 1.16.x, 1.17.x and 1.18.x.
>> > > >
>> > > > The complete staging area is available for your review, which
>> includes:
>> > > > * JIRA release notes [1],
>> > > > * the official Apache source release to be deployed to
>> dist.apache.org
>> > > > [2],
>> > > > which are signed with the key with fingerprint 1596BBF0726835D8 [3],
>> > > > * all artifacts to be deployed to the Maven Central Repository [4],
>> > > > * source code tag v3.1.2-rc3 [5],
>> > > > * website pull request listing the new release [6].
>> > > >
>> > > > The vote will be open for at least 72 hours. It is adopted by
>> majority
>> > > > approval, with at least 3 PMC affirmative votes.
>> > > >
>> > > > Thanks,
>> > > > Release Manager
>> > > >
>> > > > [1]
>> > > >
>> > > >
>> > >
>> >
>> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522=12354088
>> > > > [2]
>> 

Community Over Code Asia 2024 Travel Assistance Applications now open!

2024-02-20 Thread Gavin McDonald
Hello to all users, contributors and Committers!

The Travel Assistance Committee (TAC) are pleased to announce that
travel assistance applications for Community over Code Asia 2024 are now
open!

We will be supporting Community over Code Asia, Hangzhou, China
July 26th - 28th, 2024.

TAC exists to help those that would like to attend Community over Code
events, but are unable to do so for financial reasons. For more info
on this year's applications and qualifying criteria, please visit the
TAC website at < https://tac.apache.org/ >. Applications are already
open on https://tac-apply.apache.org/, so don't delay!

The Apache Travel Assistance Committee will only be accepting
applications from those people that are able to attend the full event.

Important: Applications close on Friday, May 10th, 2024.

Applicants have until the closing date above to submit their
applications (which should contain as much supporting material as
required to efficiently and accurately process their request); this
will enable TAC to announce successful applications shortly
afterwards.

As usual, TAC expects to deal with a range of applications from a
diverse range of backgrounds; therefore, we encourage (as always)
anyone thinking about sending in an application to do so ASAP.

For those who will need a visa to enter the country, we advise you to apply
now so that you have enough time in case of interview delays. Do not
wait until you know whether you have been accepted.

We look forward to greeting many of you in Hangzhou, China in July, 2024!

Kind Regards,

Gavin

(On behalf of the Travel Assistance Committee)


Re: FW: RE: [DISCUSS] FLIP-314: Support Customized Job Lineage Listener

2024-02-20 Thread Yong Fang
Hi Martijn,

Thank you for your attention. Let me first explain the current state
of FLIP-314. FLIP-314 has been accepted, but actual code
development has not yet begun, and the interface-related PR has not been merged
into master. So it may not be necessary for us to create a separate
FLIP. Currently, my idea is to update the interfaces on FLIP-314 directly,
but to initiate a separate thread with the context so we can vote there.

What do you think? Thanks

Best,
Fang Yong

On Mon, Feb 19, 2024 at 8:27 PM Martijn Visser 
wrote:

> I'm a bit confused: did we add new interfaces after FLIP-314 was
> accepted? If so, please move the new interfaces to a new FLIP and
> start a separate vote. We can't retrospectively change an accepted
> FLIP with new interfaces and a new vote.
>
> On Mon, Feb 19, 2024 at 3:22 AM Yong Fang  wrote:
> >
> > Hi all,
> >
> > If there are no more feedbacks, I will start a vote for the new
> interfaces
> > in the next day, thanks
> >
> > Best,
> > Fang Yong
> >
> > On Thu, Feb 8, 2024 at 1:30 PM Yong Fang  wrote:
> >
> > > Hi devs,
> > >
> > > According to the online-discussion in FLINK-3127 [1] and
> > > offline-discussion with Maciej Obuchowski and Zhenqiu Huang, we would
> like
> > > to update the lineage vertex relevant interfaces in FLIP-314 [2] as
> follows:
> > >
> > > 1. Introduce `LineageDataset` which represents source and sink in
> > > `LineageVertex`. The fields in `LineageDataset` are as follows:
> > > /* Name for this particular dataset. */
> > > String name;
> > > /* Unique name for this dataset's storage, for example, url for
> jdbc
> > > connector and location for lakehouse connector. */
> > > String namespace;
> > > /* Facets for the lineage vertex to describe the particular
> > > information of dataset, such as schema and config. */
> > > Map<String, LineageDatasetFacet> facets;
> > >
> > > 2. There may be multiple datasets in one `LineageVertex`, for example,
> > > kafka source or hybrid source. So users can get dataset list from
> > > `LineageVertex`:
> > > /** Get datasets from the lineage vertex. */
> > > List<LineageDataset> datasets();
> > >
> > > 3. There will be built in facets for config and schema. To describe
> > > columns in table/sql jobs and datastream jobs, we introduce
> > > `DatasetSchemaField`.
> > > /** Builtin config facet for dataset. */
> > > @PublicEvolving
> > > public interface DatasetConfigFacet extends LineageDatasetFacet {
> > > Map<String, String> config();
> > > }
> > >
> > > /** Field for schema in dataset. */
> > > public interface DatasetSchemaField<T> {
> > > /** The name of the field. */
> > > String name();
> > > /** The type of the field. */
> > > T type();
> > > }
> > >
> > > Thanks for valuable inputs from @Maciej and @Zhenqiu. And looking
> forward
> > > to your feedback, thanks
> > >
> > > Best,
> > > Fang Yong
> > >
> > > On Mon, Sep 25, 2023 at 1:18 PM Shammon FY  wrote:
> > >
> > >> Hi David,
> > >>
> > >> Do you want the detailed topology for Flink job? You can get
> > >> `JobDetailsInfo` in `RestCusterClient` with the submitted job id, it
> has
> > >> `String jsonPlan`. You can parse the json plan to get all steps and
> > >> relations between them in a Flink job. Hope this can help you, thanks!
> > >>
> > >> Best,
> > >> Shammon FY
> > >>
> > >> On Tue, Sep 19, 2023 at 11:46 PM David Radley <
> david_rad...@uk.ibm.com>
> > >> wrote:
> > >>
> > >>> Hi there,
> > >>> I am looking at the interfaces. If I am reading it correctly, there is
> > >>> one relationship between the source and sink and this relationship
> > >>> represents the operational lineage. Lineage is usually represented
> as asset
> > >>> -> process - > asset – see for example
> > >>>
> https://egeria-project.org/features/lineage-management/overview/#the-lineage-graph
> > >>>
> > >>> Maybe I am missing it, but it seems to be that it would be useful to
> > >>> store the process in the lineage graph.
> > >>>
> > >>> It is useful to have the top level lineage as source -> Flink job ->
> > >>> sink. Where the Flink job is the process, but also to have this
> asset ->
> > >>> process -> asset pattern for each of the steps in the job. If this is
> > >>> present, please could you point me to it,
> > >>>
> > >>>   Kind regards, David.
> > >>>
> > >>>
> > >>>
> > >>>
> > >>>
> > >>> From: David Radley 
> > >>> Date: Tuesday, 19 September 2023 at 16:11
> > >>> To: dev@flink.apache.org 
> > >>> Subject: [EXTERNAL] RE: [DISCUSS] FLIP-314: Support Customized Job
> > >>> Lineage Listener
> > >>> Hi,
> > >>> I notice that there is an experimental lineage integration for Flink
> > >>> with OpenLineage https://openlineage.io/docs/integrations/flink  . I
> > >>> think this feature would allow for a superior Flink OpenLineage
> integration,
> > >>> Kind regards, David.
> > >>>
> > >>> From: XTransfer 
> > >>> Date: Tuesday, 19 September 2023 at 15:47
> > >>> To: dev@flink.apache.org 
> > >>> Subject: 

Re: [VOTE] Release flink-connector-parent 1.1.0 release candidate #2

2024-02-20 Thread Sergey Nuyanzin
Thanks for driving this, Etienne!

+1 (non-binding)

- Verified checksum and signature
- Verified pom
- Built from source
- Verified no binaries
- Checked staging repo on Maven central
- Checked source code tag
- Reviewed web PR


One thing (probably minor) I noticed is that the artifacts (uploaded to nexus)
are built with jdk11, while usually they should be built with jdk8.
Since there are no jars, I think it should be ok.

On Tue, Feb 20, 2024 at 9:19 AM Hang Ruan  wrote:

> +1 (non-binding)
>
> - verified checksum and signature
> - checked Github release tag
> - checked release notes
> - verified no binaries in source
> - reviewed the web PR
>
> Best,
> Hang
>
> On Tue, Feb 20, 2024 at 14:26, Leonard Xu  wrote:
>
> > +1 (binding)
> >
> > - verified signatures
> > - verified hashsums
> > - built from source code succeeded
> > - checked Github release tag
> > - checked release notes
> > - reviewed all Jira tickets have been resolved
> > - reviewed the web PR
> >
> > Best,
> > Leonard
> >
> >
> > > On Feb 20, 2024 at 11:14 AM, Rui Fan <1996fan...@gmail.com> wrote:
> > >
> > > Thanks for driving this, Etienne!
> > >
> > > +1 (non-binding)
> > >
> > > - Verified checksum and signature
> > > - Verified pom content
> > > - Build source on my Mac with jdk8
> > > - Verified no binaries in source
> > > - Checked staging repo on Maven central
> > > - Checked source code tag
> > > - Reviewed web PR
> > >
> > > Best,
> > > Rui
> > >
> > > On Tue, Feb 20, 2024 at 10:33 AM Qingsheng Ren 
> wrote:
> > >
> > >> Thanks for driving this, Etienne!
> > >>
> > >> +1 (binding)
> > >>
> > >> - Checked release note
> > >> - Verified checksum and signature
> > >> - Verified pom content
> > >> - Verified no binaries in source
> > >> - Checked staging repo on Maven central
> > >> - Checked source code tag
> > >> - Reviewed web PR
> > >> - Built Kafka connector from source with parent pom in staging repo
> > >>
> > >> Best,
> > >> Qingsheng
> > >>
> > >> On Tue, Feb 20, 2024 at 1:34 AM Etienne Chauchot <
> echauc...@apache.org>
> > >> wrote:
> > >>
> > >>> Hi everyone,
> > >>> Please review and vote on the release candidate #2 for the version
> > >>> 1.1.0, as follows:
> > >>> [ ] +1, Approve the release
> > >>> [ ] -1, Do not approve the release (please provide specific comments)
> > >>>
> > >>>
> > >>> The complete staging area is available for your review, which
> includes:
> > >>> * JIRA release notes [1],
> > >>> * the official Apache source release to be deployed to
> dist.apache.org
> > >>> [2], which are signed with the key with fingerprint
> > >>> D1A76BA19D6294DD0033F6843A019F0B8DD163EA [3],
> > >>> * all artifacts to be deployed to the Maven Central Repository [4],
> > >>> * source code tag v1.1.0-rc2 [5],
> > >>> * website pull request listing the new release [6].
> > >>>
> > >>> * confluence wiki: connector parent upgrade to version 1.1.0 that
> will
> > >>> be validated after the artifact is released (there is no PR mechanism
> > on
> > >>> the wiki) [7]
> > >>>
> > >>>
> > >>> The vote will be open for at least 72 hours. It is adopted by
> majority
> > >>> approval, with at least 3 PMC affirmative votes.
> > >>>
> > >>> Thanks,
> > >>> Etienne
> > >>>
> > >>> [1]
> > >>>
> > >>>
> > >>
> >
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522=12353442
> > >>> [2]
> > >>>
> > >>>
> > >>
> >
> https://dist.apache.org/repos/dist/dev/flink/flink-connector-parent-1.1.0-rc2
> > >>> [3] https://dist.apache.org/repos/dist/release/flink/KEYS
> > >>> [4]
> > >>
> https://repository.apache.org/content/repositories/orgapacheflink-1707
> > >>> [5]
> > >>>
> > >>>
> > >>
> >
> https://github.com/apache/flink-connector-shared-utils/releases/tag/v1.1.0-rc2
> > >>>
> > >>> [6] https://github.com/apache/flink-web/pull/717
> > >>>
> > >>> [7]
> > >>>
> > >>>
> > >>
> >
> https://cwiki.apache.org/confluence/display/FLINK/Externalized+Connector+development
> > >>>
> > >>
> >
> >
>


-- 
Best regards,
Sergey


Re: [VOTE] Release flink-connector-parent 1.1.0 release candidate #2

2024-02-20 Thread Hang Ruan
+1 (non-binding)

- verified checksum and signature
- checked Github release tag
- checked release notes
- verified no binaries in source
- reviewed the web PR

Best,
Hang

On Tue, Feb 20, 2024 at 14:26, Leonard Xu  wrote:

> +1 (binding)
>
> - verified signatures
> - verified hashsums
> - built from source code succeeded
> - checked Github release tag
> - checked release notes
> - reviewed all Jira tickets have been resolved
> - reviewed the web PR
>
> Best,
> Leonard
>
>
> > On Feb 20, 2024 at 11:14 AM, Rui Fan <1996fan...@gmail.com> wrote:
> >
> > Thanks for driving this, Etienne!
> >
> > +1 (non-binding)
> >
> > - Verified checksum and signature
> > - Verified pom content
> > - Build source on my Mac with jdk8
> > - Verified no binaries in source
> > - Checked staging repo on Maven central
> > - Checked source code tag
> > - Reviewed web PR
> >
> > Best,
> > Rui
> >
> > On Tue, Feb 20, 2024 at 10:33 AM Qingsheng Ren  wrote:
> >
> >> Thanks for driving this, Etienne!
> >>
> >> +1 (binding)
> >>
> >> - Checked release note
> >> - Verified checksum and signature
> >> - Verified pom content
> >> - Verified no binaries in source
> >> - Checked staging repo on Maven central
> >> - Checked source code tag
> >> - Reviewed web PR
> >> - Built Kafka connector from source with parent pom in staging repo
> >>
> >> Best,
> >> Qingsheng
> >>
> >> On Tue, Feb 20, 2024 at 1:34 AM Etienne Chauchot 
> >> wrote:
> >>
> >>> Hi everyone,
> >>> Please review and vote on the release candidate #2 for the version
> >>> 1.1.0, as follows:
> >>> [ ] +1, Approve the release
> >>> [ ] -1, Do not approve the release (please provide specific comments)
> >>>
> >>>
> >>> The complete staging area is available for your review, which includes:
> >>> * JIRA release notes [1],
> >>> * the official Apache source release to be deployed to dist.apache.org
> >>> [2], which are signed with the key with fingerprint
> >>> D1A76BA19D6294DD0033F6843A019F0B8DD163EA [3],
> >>> * all artifacts to be deployed to the Maven Central Repository [4],
> >>> * source code tag v1.1.0-rc2 [5],
> >>> * website pull request listing the new release [6].
> >>>
> >>> * confluence wiki: connector parent upgrade to version 1.1.0 that will
> >>> be validated after the artifact is released (there is no PR mechanism
> on
> >>> the wiki) [7]
> >>>
> >>>
> >>> The vote will be open for at least 72 hours. It is adopted by majority
> >>> approval, with at least 3 PMC affirmative votes.
> >>>
> >>> Thanks,
> >>> Etienne
> >>>
> >>> [1]
> >>>
> >>>
> >>
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522=12353442
> >>> [2]
> >>>
> >>>
> >>
> https://dist.apache.org/repos/dist/dev/flink/flink-connector-parent-1.1.0-rc2
> >>> [3] https://dist.apache.org/repos/dist/release/flink/KEYS
> >>> [4]
> >> https://repository.apache.org/content/repositories/orgapacheflink-1707
> >>> [5]
> >>>
> >>>
> >>
> https://github.com/apache/flink-connector-shared-utils/releases/tag/v1.1.0-rc2
> >>>
> >>> [6] https://github.com/apache/flink-web/pull/717
> >>>
> >>> [7]
> >>>
> >>>
> >>
> https://cwiki.apache.org/confluence/display/FLINK/Externalized+Connector+development
> >>>
> >>
>
>