[jira] [Created] (FLINK-35357) Add "kubernetes.operator.plugins.listeners" parameter description to the Operator configuration document

2024-05-14 Thread Yang Zhou (Jira)
Yang Zhou created FLINK-35357:
-

 Summary: Add "kubernetes.operator.plugins.listeners" parameter 
description to the Operator configuration document
 Key: FLINK-35357
 URL: https://issues.apache.org/jira/browse/FLINK-35357
 Project: Flink
  Issue Type: Improvement
  Components: Kubernetes Operator
Reporter: Yang Zhou


While using the Flink Operator's "Custom Flink Resource Listeners" feature in practice (doc:
https://nightlies.apache.org/flink/flink-kubernetes-operator-docs-release-1.8/docs/operations/plugins/#custom-flink-resource-listeners)

it was found that the "Operator Configuration Reference" document does not
describe the "Custom Flink Resource Listeners" configuration parameters.

So I propose adding the
kubernetes.operator.plugins.listeners..class
parameter to the reference; after all, it is useful.
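A hedged sketch of what the documented entry might look like (the listener name `my-listener` and the class name are illustrative assumptions, not taken from the operator docs):

```yaml
# Illustrative sketch only: registers a custom resource listener plugin
# under a user-chosen name; "my-listener" and the class are assumptions.
kubernetes.operator.plugins.listeners.my-listener.class: com.example.MyResourceListener
```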



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [VOTE] FLIP-453: Promote Unified Sink API V2 to Public and Deprecate SinkFunction

2024-05-14 Thread Rodrigo Meneses
+1 (non-binding)
Thanks a lot for driving this effort!!
-Rodrigo


On Tue, May 14, 2024 at 11:00 AM Péter Váry 
wrote:

> +1 (non-binding)
> Thanks, Peter
>
> On Tue, May 14, 2024, 09:50 gongzhongqiang 
> wrote:
>
> > +1 (non-binding)
> >
> > Best.
> > Zhongqiang Gong
> >
> > Martijn Visser wrote on Tue, May 14, 2024 at 14:45:
> >
> > > Hi everyone,
> > >
> > > With no more discussions being open in the thread [1] I would like to
> > start
> > > a vote on FLIP-453: Promote Unified Sink API V2 to Public and Deprecate
> > > SinkFunction [2]
> > >
> > > The vote will be open for at least 72 hours unless there is an
> objection
> > or
> > > insufficient votes.
> > >
> > > Best regards,
> > >
> > > Martijn
> > >
> > > [1] https://lists.apache.org/thread/hod6bg421bzwhbfv60lwsck7r81dvo59
> > > [2]
> > >
> > >
> >
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-453%3A+Promote+Unified+Sink+API+V2+to+Public+and+Deprecate+SinkFunction
> > >
> >
>


Re: [VOTE] FLIP-453: Promote Unified Sink API V2 to Public and Deprecate SinkFunction

2024-05-14 Thread Péter Váry
+1 (non-binding)
Thanks, Peter

On Tue, May 14, 2024, 09:50 gongzhongqiang 
wrote:

> +1 (non-binding)
>
> Best.
> Zhongqiang Gong
>
> Martijn Visser wrote on Tue, May 14, 2024 at 14:45:
>
> > Hi everyone,
> >
> > With no more discussions being open in the thread [1] I would like to
> start
> > a vote on FLIP-453: Promote Unified Sink API V2 to Public and Deprecate
> > SinkFunction [2]
> >
> > The vote will be open for at least 72 hours unless there is an objection
> or
> > insufficient votes.
> >
> > Best regards,
> >
> > Martijn
> >
> > [1] https://lists.apache.org/thread/hod6bg421bzwhbfv60lwsck7r81dvo59
> > [2]
> >
> >
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-453%3A+Promote+Unified+Sink+API+V2+to+Public+and+Deprecate+SinkFunction
> >
>


RE: [VOTE] FLIP-454: New Apicurio Avro format

2024-05-14 Thread David Radley
Hi Danny,
Thank you so much for supporting this FLIP and for your feedback. I have copied 
your points and responded on the discussion thread here:
https://lists.apache.org/thread/rmcc67z35ysk0sv2jhz2wq0mwzjorhjb

kind regards, David.

From: Danny Cranmer 
Date: Tuesday, 14 May 2024 at 11:58
To: dev@flink.apache.org 
Subject: [EXTERNAL] Re: [VOTE] FLIP-454: New Apicurio Avro format
Hello all,

Thanks for driving this, David. I am +1 for adding support for the new
format, however I have some questions/suggestions on the details.

1. Passing around Map additionalInputProperties feels a bit
dirty. It looks like this is mainly for the Kafka connector. This connector
already has a de/serialization schema extension to access record
headers, KafkaRecordDeserializationSchema [1], can we use this instead?
2. Can you elaborate why we need to change the SchemaCoder interface? Again
I am not a fan of adding these Map parameters
3. I assume this integration will go into the core Flink repo under
flink-formats [2], and not be a separate repository like the connectors?

Thanks,
Danny

[1]
https://github.com/apache/flink-connector-kafka/blob/main/flink-connector-kafka/src/main/java/org/apache/flink/connector/kafka/source/reader/deserializer/KafkaRecordDeserializationSchema.java
[2] https://github.com/apache/flink/tree/master/flink-formats

On Sat, May 4, 2024 at 12:46 PM Ahmed Hamdy  wrote:

> +1 (non-binding)
>
> Best Regards
> Ahmed Hamdy
>
>
> On Fri, 3 May 2024 at 15:16, Jeyhun Karimov  wrote:
>
> > +1 (non binding)
> >
> > Thanks for driving this FLIP David.
> >
> > Regards,
> > Jeyhun
> >
> > On Fri, May 3, 2024 at 2:21 PM Mark Nuttall  wrote:
> >
> > > +1, I would also like to see first class support for Avro and Apicurio
> > >
> > > -- Mark Nuttall, mnutt...@apache.org
> > > Senior Software Engineer, IBM Event Automation
> > >
> > > On 2024/05/02 09:41:09 David Radley wrote:
> > > > Hi everyone,
> > > >
> > > > I'd like to start a vote on the FLIP-454: New Apicurio Avro format
> > > > [1]. The discussion thread is here [2].
> > > >
> > > > The vote will be open for at least 72 hours unless there is an
> > > > objection
> > > > or
> > > > insufficient votes.
> > > >
> > > > [1]
> > >
> >
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-454%3A+New+Apicurio+Avro+format
> > > > [2] https://lists.apache.org/thread/wtkl4yn847tdd0wrqm5xgv9wc0cb0kr8
> > > >
> > > >
> > > > Kind regards, David.
> > > >
> > > > Unless otherwise stated above:
> > > >
> > > > IBM United Kingdom Limited
> > > > Registered in England and Wales with number 741598
> > > > Registered office: PO Box 41, North Harbour, Portsmouth, Hants. PO6
> 3AU
> > > >
> > >
> >
>

Unless otherwise stated above:

IBM United Kingdom Limited
Registered in England and Wales with number 741598
Registered office: PO Box 41, North Harbour, Portsmouth, Hants. PO6 3AU


[DISCUSS] FLIP-XXX Apicurio-avro format

2024-05-14 Thread David Radley
Hi Danny,

Thank you very much for the feedback and your support. I have copied your 
feedback from the VOTE thread to this discussion thread, so we can continue our 
discussions off the VOTE thread.



Your feedback:

Thanks for driving this, David. I am +1 for adding support for the new

format, however I have some questions/suggestions on the details.



1. Passing around Map additionalInputProperties feels a bit

dirty. It looks like this is mainly for the Kafka connector. This connector

already has a de/serialization schema extension to access record

headers, KafkaRecordDeserializationSchema [1], can we use this instead?

2. Can you elaborate why we need to change the SchemaCoder interface? Again

I am not a fan of adding these Map parameters

3. I assume this integration will go into the core Flink repo under

flink-formats [2], and not be a separate repository like the connectors?



My response:

Addressing 1. and 2.

I agree that sending maps around is a bit dirty, and if we can see a better way 
that would be great. I was looking for a way to pass this Kafka header 
information in a non-Kafka way; the most obvious way I could think of was a 
map. Here are the main considerations I saw. If I have missed anything or could 
improve something, I would be grateful for any further feedback.



  *   I see KafkaRecordDeserializationSchema is a Kafka interface that works at 
the Kafka record level (so it includes the headers). We need a mechanism to 
send the headers from the Kafka record over to Flink.
  *   Flink core is not aware of Kafka headers, and I did not want to add a 
Kafka dependency to core Flink.
  *   The formats are stateless, so it did not seem in keeping with the Flink 
architecture to pass header information through and stash it as state in the 
format, waiting for deserialise to be called later to pick the header 
information up.
  *   We could have used thread-local storage to stash the header content, but 
this would be extra state to manage, and it would seem like an intrusive 
change.
  *   The SchemaCoder deserialise is where the Confluent Avro format gets the 
schema id from the payload so it can look up the schema. In line with this 
approach, it made sense to extend deserialise so it had the header contents, 
letting the Apicurio Avro format look up the schema.
  *   I did not want Apicurio-specific logic in the Kafka connector; if we had 
it, we could pull out the appropriate headers and only send over the schema 
ids.
  *   For deserialise, the schema id we are interested in is the one in the 
Kafka headers on the message; it identifies the writer schema (an Avro format 
concept) currently used by the confluent-avro format in deserialize.
  *   For serialise, the schema ids need to be obtained from Apicurio and then 
passed through to Kafka.
  *   For serialise there is existing logic around handling the metadata, which 
includes passing the headers. But the presence of the metadata would imply we 
have a metadata column. A change to the metadata mechanism might have allowed 
us to pass the headers without creating a metadata column; instead I pass the 
additional headers through in a map to be appended.
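To make the deserialise path above concrete, here is a minimal, self-contained sketch of extracting a schema id from a Kafka header value. It assumes the registry encodes the id as an 8-byte big-endian long under a header such as `apicurio.value.globalId`; both the header name and the encoding are assumptions for illustration, not details taken from the FLIP.

```java
import java.nio.ByteBuffer;

// Illustrative sketch: decode a schema id from a Kafka header value.
// Assumes (not from the FLIP) that the registry writes the id as an
// 8-byte big-endian long, e.g. under a header like "apicurio.value.globalId".
public class SchemaIdHeaderDecoder {
    public static long decodeGlobalId(byte[] headerValue) {
        if (headerValue == null || headerValue.length != Long.BYTES) {
            throw new IllegalArgumentException("Expected an 8-byte header value");
        }
        // ByteBuffer reads big-endian by default.
        return ByteBuffer.wrap(headerValue).getLong();
    }

    public static void main(String[] args) {
        byte[] header = ByteBuffer.allocate(Long.BYTES).putLong(42L).array();
        System.out.println(decodeGlobalId(header)); // prints 42
    }
}
```

A format could apply such a decoder to the header map it receives, which is the kind of plumbing the map-based proposal above would carry.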



3.

Yes, this integration will go into the core Flink repo under

flink-formats and sit next to the confluent-avro format. The Avro format has 
the concept of a Registry and drives the confluent-avro format. The Apicurio 
Avro format will use the same approach.

Unless otherwise stated above:

IBM United Kingdom Limited
Registered in England and Wales with number 741598
Registered office: PO Box 41, North Harbour, Portsmouth, Hants. PO6 3AU


Community over Code EU 2024: The countdown has started!

2024-05-14 Thread Ryan Skraba
[Note: You're receiving this email because you are subscribed to one
or more project dev@ mailing lists at the Apache Software Foundation.]

We are very close to Community Over Code EU -- check out the amazing
program and the special discounts that we have for you.

Special discounts

You still have the opportunity to secure your ticket for Community
Over Code EU. Explore the various options available, including the
regular pass, the committer and groups pass, and now introducing the
one-day pass tailored for locals in Bratislava.

We also have a special discount for you to attend both Community Over
Code and Berlin Buzzwords from June 9th to 11th. Visit our website to
find out more about this opportunity and contact te...@sg.com.mx to
get the discount code.

Take advantage of the discounts and register now!
https://eu.communityovercode.org/tickets/

Check out the full program!

This year Community Over Code Europe will bring to you three days of
keynotes and sessions that cover topics of interest for ASF projects
and the greater open source ecosystem including data engineering,
performance engineering, search, Internet of Things (IoT) as well as
sessions with tips and lessons learned on building a healthy open
source community.

Check out the program: https://eu.communityovercode.org/program/

Keynote speaker highlights for Community Over Code Europe include:

* Dirk-Willem Van Gulik, VP of Public Policy at the Apache Software
Foundation, will discuss the Cyber Resiliency Act and its impact on
open source (All your code belongs to Policy Makers, Politicians, and
the Law).

* Dr. Sherae Daniel will share the results of her study on the impact
of self-promotion for open source software developers (To Toot or not
to Toot, that is the question).

* Asim Hussain, Executive Director of the Green Software Foundation
will present a framework they have developed for quantifying the
environmental impact of software (Doing for Sustainability what Open
Source did for Software).

* Ruth Ikegah will discuss the growth of the open source movement in
Africa (From Local Roots to Global Impact: Building an Inclusive Open
Source Community in Africa)

* A discussion panel on EU policies and regulations affecting
specialists working in Open Source Program Offices

Additional activities

* Poster sessions: We invite you to stop by our poster area and see if
the ideas presented ignite a conversation within your team.

* BOF time: Don't miss the opportunity to discuss in person with your
open source colleagues on your shared interests.

* Participants reception: At the end of the first day, we will have a
reception at the event venue. All participants are welcome to attend!

* Spontaneous talks: There is a dedicated room and social space for
having spontaneous talks and sessions. Get ready to share with your
peers.

* Lightning talks: At the end of the event we will have the awaited
Lightning talks, where every participant is welcome to share and
enlighten us.

Please remember: If you haven't applied for the visa, we will provide
the necessary letter for the process. In the unfortunate case of a
visa rejection, your ticket will be reimbursed.

See you in Bratislava,

Community Over Code EU Team


[jira] [Created] (FLINK-35356) Async reducing state

2024-05-14 Thread Zakelly Lan (Jira)
Zakelly Lan created FLINK-35356:
---

 Summary: Async reducing state
 Key: FLINK-35356
 URL: https://issues.apache.org/jira/browse/FLINK-35356
 Project: Flink
  Issue Type: Sub-task
  Components: Runtime / State Backends
Reporter: Zakelly Lan






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-35355) Async aggregating state

2024-05-14 Thread Zakelly Lan (Jira)
Zakelly Lan created FLINK-35355:
---

 Summary: Async aggregating state
 Key: FLINK-35355
 URL: https://issues.apache.org/jira/browse/FLINK-35355
 Project: Flink
  Issue Type: Sub-task
  Components: Runtime / State Backends
Reporter: Zakelly Lan






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-35354) [discuss] Support host mapping in Flink tikv cdc

2024-05-14 Thread ouyangwulin (Jira)
ouyangwulin created FLINK-35354:
---

 Summary: [discuss] Support host mapping in Flink tikv cdc
 Key: FLINK-35354
 URL: https://issues.apache.org/jira/browse/FLINK-35354
 Project: Flink
  Issue Type: Improvement
  Components: Flink CDC
Affects Versions: cdc-3.1.0, cdc-3.2.0
Reporter: ouyangwulin
 Fix For: cdc-3.1.0, cdc-3.2.0


In TiDB production deployments, there are usually two kinds of networks: an 
internal network and a public network. When we use PD-mode KV, we need to do 
network mapping, such as `spark.tispark.host_mapping` in 
https://github.com/pingcap/tispark/blob/master/docs/userguide_3.0.md. So I 
think we should support `host_mapping` in our Flink TiKV CDC connector.
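The tispark-style mapping value is a semicolon-separated list of `internal:public` pairs (that syntax comes from the tispark user guide; whether the Flink TiKV CDC connector would adopt exactly this format is an assumption). A minimal sketch of parsing and applying such a mapping:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch: parse a tispark-style host_mapping string
// ("internal1:public1;internal2:public2") and apply it to a host name.
// The exact syntax for the Flink TiKV CDC connector is an assumption.
public class HostMapping {
    public static Map<String, String> parse(String mapping) {
        Map<String, String> result = new HashMap<>();
        for (String pair : mapping.split(";")) {
            String[] parts = pair.split(":", 2);
            result.put(parts[0].trim(), parts[1].trim());
        }
        return result;
    }

    // Return the mapped (public) host, or the original host if unmapped.
    public static String apply(Map<String, String> mapping, String host) {
        return mapping.getOrDefault(host, host);
    }

    public static void main(String[] args) {
        Map<String, String> m = parse("192.168.0.2:10.0.0.1;192.168.0.3:10.0.0.2");
        System.out.println(apply(m, "192.168.0.2")); // prints 10.0.0.1
        System.out.println(apply(m, "192.168.0.9")); // unmapped, prints 192.168.0.9
    }
}
```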



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-35353) Translate "Profiler" page into Chinese

2024-05-14 Thread Juan Zifeng (Jira)
Juan Zifeng created FLINK-35353:
---

 Summary: Translate "Profiler" page into Chinese
 Key: FLINK-35353
 URL: https://issues.apache.org/jira/browse/FLINK-35353
 Project: Flink
  Issue Type: Improvement
  Components: chinese-translation, Documentation
Affects Versions: 1.19.0
Reporter: Juan Zifeng
 Fix For: 1.19.0


The links are 
https://nightlies.apache.org/flink/flink-docs-release-1.19/zh/docs/ops/debugging/profiler/



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [VOTE] Apache Flink CDC Release 3.1.0, release candidate #3

2024-05-14 Thread Leonard Xu
+1 (binding)

- verified signatures
- verified hashsums
- checked GitHub release tag
- checked release notes
- ran a pipeline from MySQL to StarRocks with field projection; the result is 
as expected
- ran a pipeline from MySQL to StarRocks with a filter; the result is as expected
- reviewed Jira issues for cdc-3.1.0, and closed an invalid issue with an 
incorrect version 3.1.0
- reviewed the web PR and left two minor comments

Best,
Leonard
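The "validated checksum hash" step in the checklists above can be sketched as a small self-contained Java helper. In a real verification the digest would be computed over the downloaded release artifact and compared against the published `.sha512` file; here the bytes are in-memory purely for illustration.

```java
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

// Illustrative sketch of checksum validation: compute a SHA-512 digest and
// render it as lowercase hex, as `sha512sum` would for a release artifact.
public class ChecksumCheck {
    public static String sha512Hex(byte[] data) throws NoSuchAlgorithmException {
        byte[] digest = MessageDigest.getInstance("SHA-512").digest(data);
        StringBuilder hex = new StringBuilder();
        for (byte b : digest) {
            hex.append(String.format("%02x", b));
        }
        return hex.toString();
    }

    public static void main(String[] args) throws Exception {
        // SHA-512 digests are 64 bytes, i.e. 128 hex characters.
        System.out.println(sha512Hex("flink-cdc-3.1.0".getBytes()).length()); // prints 128
    }
}
```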

> On May 14, 2024 at 6:07 PM, Yanquan Lv wrote:
> 
> +1 (non-binding)
> - Validated checksum hash
> - Build the source with Maven and jdk8
> - Verified web PR
> - Check that the jar is built by jdk8
> - Check synchronizing from mysql to paimon
> - Check synchronizing from mysql to kafka
> 
> Hang Ruan wrote on Mon, May 13, 2024 at 13:55:
> 
>> +1 (non-binding)
>> 
>> - Validated checksum hash
>> - Verified signature
>> - Verified that no binaries exist in the source archive
>> - Build the source with Maven and jdk8
>> - Verified web PR
>> - Check that the jar is built by jdk8
>> - Check synchronizing schemas and data from mysql to starrocks following
>> the quickstart
>> 
>> Best,
>> Hang
>> 
>> Qingsheng Ren wrote on Sat, May 11, 2024 at 10:10:
>> 
>>> Hi everyone,
>>> 
>>> Please review and vote on the release candidate #3 for the version 3.1.0
>> of
>>> Apache Flink CDC, as follows:
>>> [ ] +1, Approve the release
>>> [ ] -1, Do not approve the release (please provide specific comments)
>>> 
>>> **Release Overview**
>>> 
>>> As an overview, the release consists of the following:
>>> a) Flink CDC source release to be deployed to dist.apache.org
>>> b) Maven artifacts to be deployed to the Maven Central Repository
>>> 
>>> **Staging Areas to Review**
>>> 
>>> The staging areas containing the above mentioned artifacts are as
>> follows,
>>> for your review:
>>> * All artifacts for a) can be found in the corresponding dev repository
>> at
>>> dist.apache.org [1], which are signed with the key with fingerprint
>>> A1BD477F79D036D2C30CA7DBCA8AEEC2F6EB040B [2]
>>> * All artifacts for b) can be found at the Apache Nexus Repository [3]
>>> 
>>> Other links for your review:
>>> * JIRA release notes [4]
>>> * Source code tag "release-3.1.0-rc3" with commit hash
>>> 5452f30b704942d0ede64ff3d4c8699d39c63863 [5]
>>> * PR for release announcement blog post of Flink CDC 3.1.0 in flink-web
>> [6]
>>> 
>>> **Vote Duration**
>>> 
>>> The voting time will run for at least 72 hours, adopted by majority
>>> approval with at least 3 PMC affirmative votes.
>>> 
>>> Thanks,
>>> Qingsheng Ren
>>> 
>>> [1] https://dist.apache.org/repos/dist/dev/flink/flink-cdc-3.1.0-rc3/
>>> [2] https://dist.apache.org/repos/dist/release/flink/KEYS
>>> [3]
>> https://repository.apache.org/content/repositories/orgapacheflink-1733
>>> [4]
>>> 
>>> 
>> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12354387
>>> [5] https://github.com/apache/flink-cdc/releases/tag/release-3.1.0-rc3
>>> [6] https://github.com/apache/flink-web/pull/739
>>> 
>> 



Re: [VOTE] FLIP-454: New Apicurio Avro format

2024-05-14 Thread Danny Cranmer
Hello all,

Thanks for driving this, David. I am +1 for adding support for the new
format, however I have some questions/suggestions on the details.

1. Passing around Map additionalInputProperties feels a bit
dirty. It looks like this is mainly for the Kafka connector. This connector
already has a de/serialization schema extension to access record
headers, KafkaRecordDeserializationSchema [1], can we use this instead?
2. Can you elaborate why we need to change the SchemaCoder interface? Again
I am not a fan of adding these Map parameters
3. I assume this integration will go into the core Flink repo under
flink-formats [2], and not be a separate repository like the connectors?

Thanks,
Danny

[1]
https://github.com/apache/flink-connector-kafka/blob/main/flink-connector-kafka/src/main/java/org/apache/flink/connector/kafka/source/reader/deserializer/KafkaRecordDeserializationSchema.java
[2] https://github.com/apache/flink/tree/master/flink-formats

On Sat, May 4, 2024 at 12:46 PM Ahmed Hamdy  wrote:

> +1 (non-binding)
>
> Best Regards
> Ahmed Hamdy
>
>
> On Fri, 3 May 2024 at 15:16, Jeyhun Karimov  wrote:
>
> > +1 (non binding)
> >
> > Thanks for driving this FLIP David.
> >
> > Regards,
> > Jeyhun
> >
> > On Fri, May 3, 2024 at 2:21 PM Mark Nuttall  wrote:
> >
> > > +1, I would also like to see first class support for Avro and Apicurio
> > >
> > > -- Mark Nuttall, mnutt...@apache.org
> > > Senior Software Engineer, IBM Event Automation
> > >
> > > On 2024/05/02 09:41:09 David Radley wrote:
> > > > Hi everyone,
> > > >
> > > > I'd like to start a vote on the FLIP-454: New Apicurio Avro format
> > > > [1]. The discussion thread is here [2].
> > > >
> > > > The vote will be open for at least 72 hours unless there is an
> > > > objection
> > > > or
> > > > insufficient votes.
> > > >
> > > > [1]
> > >
> >
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-454%3A+New+Apicurio+Avro+format
> > > > [2] https://lists.apache.org/thread/wtkl4yn847tdd0wrqm5xgv9wc0cb0kr8
> > > >
> > > >
> > > > Kind regards, David.
> > > >
> > > > Unless otherwise stated above:
> > > >
> > > > IBM United Kingdom Limited
> > > > Registered in England and Wales with number 741598
> > > > Registered office: PO Box 41, North Harbour, Portsmouth, Hants. PO6
> 3AU
> > > >
> > >
> >
>


RE: [VOTE] FLIP-451: Introduce timeout configuration to AsyncSink

2024-05-14 Thread David Radley
Thanks for the clarification Ahmed

+1 (non-binding)

From: Ahmed Hamdy 
Date: Monday, 13 May 2024 at 19:58
To: dev@flink.apache.org 
Subject: [EXTERNAL] Re: [VOTE] FLIP-451: Introduce timeout configuration to 
AsyncSink
Thanks David,
I have replied to your question in the discussion thread.
Best Regards
Ahmed Hamdy


On Mon, 13 May 2024 at 16:21, David Radley  wrote:

> Hi,
> I raised a question on the discussion thread, around retriable errors, as
> a possible alternative,
>   Kind regards, David.
>
>
> From: Aleksandr Pilipenko 
> Date: Monday, 13 May 2024 at 16:07
> To: dev@flink.apache.org 
> Subject: [EXTERNAL] Re: [VOTE] FLIP-451: Introduce timeout configuration
> to AsyncSink
> Thanks for driving this!
>
> +1 (non-binding)
>
> Thanks,
> Aleksandr
>
> On Mon, 13 May 2024 at 14:08, 
> wrote:
>
> > Thanks Ahmed!
> >
> > +1 non binding
> > On May 13, 2024 at 12:40 +0200, Jeyhun Karimov ,
> > wrote:
> > > Thanks for driving this Ahmed.
> > >
> > > +1 (non-binding)
> > >
> > > Regards,
> > > Jeyhun
> > >
> > > On Mon, May 13, 2024 at 12:37 PM Muhammet Orazov
> > >  wrote:
> > >
> > > > Thanks Ahmed, +1 (non-binding)
> > > >
> > > > Best,
> > > > Muhammet
> > > >
> > > > On 2024-05-13 09:50, Ahmed Hamdy wrote:
> > > > > > Hi all,
> > > > > >
> > > > > > Thanks for the feedback on the discussion thread[1], I would like
> > to
> > > > > > start
> > > > > > a vote on FLIP-451[2]: Introduce timeout configuration to
> AsyncSink
> > > > > >
> > > > > > The vote will be open for at least 72 hours unless there is an
> > > > > > objection or
> > > > > > insufficient votes.
> > > > > >
> > > > > > 1-
> https://lists.apache.org/thread/ft7wcw7kyftvww25n5fm4l925tlgdfg0
> > > > > > 2-
> > > > > >
> > > >
> >
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-451%3A+Introduce+timeout+configuration+to+AsyncSink+API
> > > > > > Best Regards
> > > > > > Ahmed Hamdy
> > > >
> >
>
> Unless otherwise stated above:
>
> IBM United Kingdom Limited
> Registered in England and Wales with number 741598
> Registered office: PO Box 41, North Harbour, Portsmouth, Hants. PO6 3AU
>

Unless otherwise stated above:

IBM United Kingdom Limited
Registered in England and Wales with number 741598
Registered office: PO Box 41, North Harbour, Portsmouth, Hants. PO6 3AU


Re: [VOTE] FLIP-451: Introduce timeout configuration to AsyncSink

2024-05-14 Thread Danny Cranmer
Thanks for driving this Ahmed. This is a nice improvement to the API:

+1 (binding)

Thanks,
Danny

On Mon, May 13, 2024 at 7:58 PM Ahmed Hamdy  wrote:

> Thanks David,
> I have replied to your question in the discussion thread.
> Best Regards
> Ahmed Hamdy
>
>
> On Mon, 13 May 2024 at 16:21, David Radley 
> wrote:
>
> > Hi,
> > I raised a question on the discussion thread, around retriable errors, as
> > a possible alternative,
> >   Kind regards, David.
> >
> >
> > From: Aleksandr Pilipenko 
> > Date: Monday, 13 May 2024 at 16:07
> > To: dev@flink.apache.org 
> > Subject: [EXTERNAL] Re: [VOTE] FLIP-451: Introduce timeout configuration
> > to AsyncSink
> > Thanks for driving this!
> >
> > +1 (non-binding)
> >
> > Thanks,
> > Aleksandr
> >
> > On Mon, 13 May 2024 at 14:08, 
> > wrote:
> >
> > > Thanks Ahmed!
> > >
> > > +1 non binding
> > > On May 13, 2024 at 12:40 +0200, Jeyhun Karimov ,
> > > wrote:
> > > > Thanks for driving this Ahmed.
> > > >
> > > > +1 (non-binding)
> > > >
> > > > Regards,
> > > > Jeyhun
> > > >
> > > > On Mon, May 13, 2024 at 12:37 PM Muhammet Orazov
> > > >  wrote:
> > > >
> > > > > Thanks Ahmed, +1 (non-binding)
> > > > >
> > > > > Best,
> > > > > Muhammet
> > > > >
> > > > > On 2024-05-13 09:50, Ahmed Hamdy wrote:
> > > > > > > Hi all,
> > > > > > >
> > > > > > > Thanks for the feedback on the discussion thread[1], I would
> like
> > > to
> > > > > > > start
> > > > > > > a vote on FLIP-451[2]: Introduce timeout configuration to
> > AsyncSink
> > > > > > >
> > > > > > > The vote will be open for at least 72 hours unless there is an
> > > > > > > objection or
> > > > > > > insufficient votes.
> > > > > > >
> > > > > > > 1-
> > https://lists.apache.org/thread/ft7wcw7kyftvww25n5fm4l925tlgdfg0
> > > > > > > 2-
> > > > > > >
> > > > >
> > >
> >
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-451%3A+Introduce+timeout+configuration+to+AsyncSink+API
> > > > > > > Best Regards
> > > > > > > Ahmed Hamdy
> > > > >
> > >
> >
> > Unless otherwise stated above:
> >
> > IBM United Kingdom Limited
> > Registered in England and Wales with number 741598
> > Registered office: PO Box 41, North Harbour, Portsmouth, Hants. PO6 3AU
> >
>


Re: [DISCUSS] Connector releases for Flink 1.19

2024-05-14 Thread Danny Cranmer
Hello,

@Sergey Nuyanzin, it looks like opensearch-2.0.0 has
been created now, all good.

@Hongshun Wang, thanks, since the CDC connectors are not yet released I had
omitted them from this task. But happy to include them, thanks for the
support.

Thanks,
Danny

On Mon, May 13, 2024 at 3:40 AM Hongshun Wang 
wrote:

> Hello Danny,
> Thanks for pushing this forward. I am available to assist with the CDC
> connector[1].
>
> [1] https://github.com/apache/flink-cdc
>
> Best
> Hongshun
>
> On Sun, May 12, 2024 at 8:48 PM Sergey Nuyanzin 
> wrote:
>
> > I'm in a process of preparation of RC for OpenSearch connector
> >
> > however it seems I need PMC help: someone needs to create an
> > opensearch-2.0.0 version on Jira since, as proposed in another ML thread
> > [1], we want 1.x for OpenSearch v1 and 2.x for OpenSearch v2
> >
> > would be great if someone from PMC could help here
> >
> > [1] https://lists.apache.org/thread/3w1rnjp5y612xy5k9yv44hy37zm9ph15
> >
> > On Wed, Apr 17, 2024 at 12:42 PM Ferenc Csaky
> >  wrote:
> > >
> > > Thank you Danny and Sergey for pushing this!
> > >
> > > I can help with the HBase connector if necessary, will comment the
> > > details to the relevant Jira ticket.
> > >
> > > Best,
> > > Ferenc
> > >
> > >
> > >
> > >
> > > On Wednesday, April 17th, 2024 at 11:17, Danny Cranmer <
> > dannycran...@apache.org> wrote:
> > >
> > > >
> > > >
> > > > Hello all,
> > > >
> > > > I have created a parent Jira to cover the releases [1]. I have
> > assigned AWS
> > > > and MongoDB to myself and OpenSearch to Sergey. Please assign the
> > > > relevant issue to yourself as you pick up the tasks.
> > > >
> > > > Thanks!
> > > >
> > > > [1] https://issues.apache.org/jira/browse/FLINK-35131
> > > >
> > > > On Tue, Apr 16, 2024 at 2:41 PM Muhammet Orazov
> > > > mor+fl...@morazow.com.invalid wrote:
> > > >
> > > > > Thanks Sergey and Danny for clarifying, indeed it
> > > > > requires a committer to go through the process.
> > > > >
> > > > > Anyway, please let me know if I can be any help.
> > > > >
> > > > > Best,
> > > > > Muhammet
> > > > >
> > > > > On 2024-04-16 11:19, Danny Cranmer wrote:
> > > > >
> > > > > > Hello,
> > > > > >
> > > > > > I have opened the VOTE thread for the AWS connectors release [1].
> > > > > >
> > > > > > > If I'm not mistaken (please correct me if I'm wrong) this
> > request is
> > > > > > > not
> > > > > > > about version update it is about new releases for connectors
> > > > > >
> > > > > > Yes, correct. If there are any other code changes required then
> > help
> > > > > > would be appreciated.
> > > > > >
> > > > > > > Are you going to create an umbrella issue for it?
> > > > > >
> > > > > > We do not usually create JIRA issues for releases. That being
> said
> > it
> > > > > > sounds like a good idea to have one place to track the status of
> > the
> > > > > > connector releases and pre-requisite code changes.
> > > > > >
> > > > > > > I would like to work on this task, thanks for initiating it!
> > > > > >
> > > > > > The actual release needs to be performed by a committer. However,
> > help
> > > > > > getting the connectors building against Flink 1.19 and testing
> the
> > RC
> > > > > > is
> > > > > > appreciated.
> > > > > >
> > > > > > Thanks,
> > > > > > Danny
> > > > > >
> > > > > > [1]
> > https://lists.apache.org/thread/0nw9smt23crx4gwkf6p1dd4jwvp1g5s0
> > > > > >
> > > > > > On Tue, Apr 16, 2024 at 6:34 AM Sergey Nuyanzin
> > snuyan...@gmail.com
> > > > > > wrote:
> > > > > >
> > > > > > > Thanks for volunteering Muhammet!
> > > > > > > And thanks Danny for starting the activity.
> > > > > > >
> > > > > > > If I'm not mistaken (please correct me if I'm wrong)
> > > > > > >
> > > > > > > this request is not about version update it is about new
> > releases for
> > > > > > > connectors
> > > > > > > btw for jdbc connector support of 1.19 and 1.20-SNAPSHOT is
> > already
> > > > > > > done
> > > > > > >
> > > > > > > I would volunteer for Opensearch connector since currently I'm
> > working
> > > > > > > on
> > > > > > > support of Opensearch v2
> > > > > > > and I think it would make sense to have a release after it is
> > done
> > > > > > >
> > > > > > > On Tue, Apr 16, 2024 at 4:29 AM Muhammet Orazov
> > > > > > > mor+fl...@morazow.com.invalid wrote:
> > > > > > >
> > > > > > > > Hello Danny,
> > > > > > > >
> > > > > > > > I would like to work on this task, thanks for initiating it!
> > > > > > > >
> > > > > > > > I could update the versions on JDBC and Pulsar connectors.
> > > > > > > >
> > > > > > > > Are you going to create an umbrella issue for it?
> > > > > > > >
> > > > > > > > Best,
> > > > > > > > Muhammet
> > > > > > > >
> > > > > > > > On 2024-04-15 13:44, Danny Cranmer wrote:
> > > > > > > >
> > > > > > > > > Hello all,
> > > > > > > > >
> > > > > > > > > Flink 1.19 was released on 2024-03-18 [1] and the
> connectors
> > have not
> > > > > > > > > yet
> > > > > > > > > caught up. I propose we start releasing the connectors with
> > support
> > > > > > > > > for

Re: [DISCUSS] FLIP-XXX: Improve JDBC connector extensibility for Table API

2024-05-14 Thread Jeyhun Karimov
Hi Lorenzo,

Thanks for driving this FLIP. +1 for it.

Could you please elaborate more on how the new approach will be backwards
compatible?

Regards,
Jeyhun

On Tue, May 14, 2024 at 10:00 AM Muhammet Orazov
 wrote:

> Hey Lorenzo,
>
> Thanks for driving this FLIP! +1
>
> It will improve the user experience of using JDBC based
> connectors and help developers to build with different drivers.
>
> Best,
> Muhammet
>
> On 2024-05-13 10:20, lorenzo.affe...@ververica.com.INVALID wrote:
> > Hello dev!
> >
> > I want to share a draft of my FLIP to refactor the JDBC connector to
> > improve its extensibility [1].
> > The goal is to allow implementers to write new connectors on top of the
> > JDBC one for Table API with clean and maintainable code.
> >
> > Any feedback from the community is more than welcome.
> >
> > [1]
> >
> https://docs.google.com/document/d/1kl_AikMlqPUI-LNiPBraAFVZDRg1LF4bn6uiNtR4dlY/edit?usp=sharing
>


Re: [VOTE] Apache Flink CDC Release 3.1.0, release candidate #3

2024-05-14 Thread Yanquan Lv
+1 (non-binding)
- Validated checksum hash
- Build the source with Maven and jdk8
- Verified web PR
- Check that the jar is built by jdk8
- Check synchronizing from mysql to paimon
- Check synchronizing from mysql to kafka

Hang Ruan  wrote on Mon, May 13, 2024, 13:55:

> +1 (non-binding)
>
> - Validated checksum hash
> - Verified signature
> - Verified that no binaries exist in the source archive
> - Build the source with Maven and jdk8
> - Verified web PR
> - Check that the jar is built by jdk8
> - Check synchronizing schemas and data from mysql to starrocks following
> the quickstart
>
> Best,
> Hang
>
> Qingsheng Ren  wrote on Sat, May 11, 2024, 10:10:
>
> > Hi everyone,
> >
> > Please review and vote on the release candidate #3 for the version 3.1.0
> of
> > Apache Flink CDC, as follows:
> > [ ] +1, Approve the release
> > [ ] -1, Do not approve the release (please provide specific comments)
> >
> > **Release Overview**
> >
> > As an overview, the release consists of the following:
> > a) Flink CDC source release to be deployed to dist.apache.org
> > b) Maven artifacts to be deployed to the Maven Central Repository
> >
> > **Staging Areas to Review**
> >
> > The staging areas containing the above mentioned artifacts are as
> follows,
> > for your review:
> > * All artifacts for a) can be found in the corresponding dev repository
> at
> > dist.apache.org [1], which are signed with the key with fingerprint
> > A1BD477F79D036D2C30CA7DBCA8AEEC2F6EB040B [2]
> > * All artifacts for b) can be found at the Apache Nexus Repository [3]
> >
> > Other links for your review:
> > * JIRA release notes [4]
> > * Source code tag "release-3.1.0-rc3" with commit hash
> > 5452f30b704942d0ede64ff3d4c8699d39c63863 [5]
> > * PR for release announcement blog post of Flink CDC 3.1.0 in flink-web
> [6]
> >
> > **Vote Duration**
> >
> > The voting time will run for at least 72 hours, adopted by majority
> > approval with at least 3 PMC affirmative votes.
> >
> > Thanks,
> > Qingsheng Ren
> >
> > [1] https://dist.apache.org/repos/dist/dev/flink/flink-cdc-3.1.0-rc3/
> > [2] https://dist.apache.org/repos/dist/release/flink/KEYS
> > [3]
> https://repository.apache.org/content/repositories/orgapacheflink-1733
> > [4]
> >
> >
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522=12354387
> > [5] https://github.com/apache/flink-cdc/releases/tag/release-3.1.0-rc3
> > [6] https://github.com/apache/flink-web/pull/739
> >
>


[jira] [Created] (FLINK-35352) Add -sae submit task to run for a period of time and automatically cancel

2024-05-14 Thread Jira
谷金城 created FLINK-35352:
---

 Summary: Add -sae submit task to run for a period of time and 
automatically cancel
 Key: FLINK-35352
 URL: https://issues.apache.org/jira/browse/FLINK-35352
 Project: Flink
  Issue Type: Bug
  Components: API / Core
Affects Versions: 1.18.1, 1.19.0, 1.17.2
 Environment: centos7.9
Reporter: 谷金城


Add -sae submit task to run for a period of time and automatically cancel

I know Flink has added a heartbeat mechanism in recent versions, which can be 
tuned through client.heartbeat.interval and client.heartbeat.timeout. 
However, I found that although the client produces heartbeat logs, the task is 
still automatically canceled when the timeout is reached.
While debugging the source code, I found a section of code that should update 
expiredTimestamp after a heartbeat is received, but the update never takes 
effect.

I have fixed this bug locally and would like to submit it to the community
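The intended semantics can be sketched in isolation as follows. This is a minimal, self-contained illustration only; the class and field names are hypothetical and do not mirror Flink's actual client heartbeat implementation. The point is the refresh step the report says is missing: each received heartbeat must push the expiration timestamp forward, so the job is canceled only when no heartbeat arrives within the timeout.

```java
// Minimal sketch of the intended heartbeat-expiry semantics.
// Names are hypothetical, not Flink's actual implementation.
public class HeartbeatTracker {
    private final long timeoutMillis;
    private volatile long expiredTimestamp;

    public HeartbeatTracker(long timeoutMillis, long nowMillis) {
        this.timeoutMillis = timeoutMillis;
        this.expiredTimestamp = nowMillis + timeoutMillis;
    }

    // The step the bug report says is not taking effect:
    // receiving a heartbeat must refresh the expiration timestamp.
    public void onHeartbeat(long nowMillis) {
        this.expiredTimestamp = nowMillis + timeoutMillis;
    }

    // The job should only be canceled once the refreshed timestamp has passed.
    public boolean isExpired(long nowMillis) {
        return nowMillis > expiredTimestamp;
    }
}
```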



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-35351) Restore from unaligned checkpoint with custom partitioner fails.

2024-05-14 Thread Dmitriy Linevich (Jira)
Dmitriy Linevich created FLINK-35351:


 Summary: Restore from unaligned checkpoint with custom partitioner 
fails.
 Key: FLINK-35351
 URL: https://issues.apache.org/jira/browse/FLINK-35351
 Project: Flink
  Issue Type: Bug
  Components: Runtime / Checkpointing
Reporter: Dmitriy Linevich


Restoring from an unaligned checkpoint of a job with a custom partitioner 
(specifically because it uses SubtaskStateMapper.FULL) fails when the 
parallelism of one vertex is changed:
 
{code:java}
[db13789c52b80aad852c53a0afa26247] Task [Sink: sink (3/3)#0] WARN  Sink: sink 
(3/3)#0 (be1d158c2e77fc9ed9e3e5d9a8431dc2_0a448493b4782967b150582570326227_2_0) 
switched from RUNNING to FAILED with failure cause:
java.io.IOException: Can't get next record for channel 
InputChannelInfo{gateIdx=0, inputChannelIdx=0}
    at 
org.apache.flink.streaming.runtime.io.AbstractStreamTaskNetworkInput.emitNext(AbstractStreamTaskNetworkInput.java:106)
 ~[classes/:?]
    at 
org.apache.flink.streaming.runtime.io.StreamOneInputProcessor.processInput(StreamOneInputProcessor.java:65)
 ~[classes/:?]
    at 
org.apache.flink.streaming.runtime.tasks.StreamTask.processInput(StreamTask.java:600)
 ~[classes/:?]
    at 
org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.runMailboxLoop(MailboxProcessor.java:231)
 ~[classes/:?]
    at 
org.apache.flink.streaming.runtime.tasks.StreamTask.runMailboxLoop(StreamTask.java:930)
 ~[classes/:?]
    at 
org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:879) 
~[classes/:?]
    at 
org.apache.flink.runtime.taskmanager.Task.runWithSystemExitMonitoring(Task.java:960)
 ~[classes/:?]
    at 
org.apache.flink.runtime.taskmanager.Task.restoreAndInvoke(Task.java:939) 
[classes/:?]
    at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:753) 
[classes/:?]
    at org.apache.flink.runtime.taskmanager.Task.run(Task.java:568) [classes/:?]
    at java.lang.Thread.run(Thread.java:745) [?:1.8.0_121]
Caused by: java.io.IOException: Corrupt stream, found tag: -1
    at 
org.apache.flink.streaming.runtime.streamrecord.StreamElementSerializer.deserialize(StreamElementSerializer.java:222)
 ~[classes/:?]
    at 
org.apache.flink.streaming.runtime.streamrecord.StreamElementSerializer.deserialize(StreamElementSerializer.java:44)
 ~[classes/:?]
    at 
org.apache.flink.runtime.plugable.NonReusingDeserializationDelegate.read(NonReusingDeserializationDelegate.java:53)
 ~[classes/:?]
    at 
org.apache.flink.runtime.io.network.api.serialization.NonSpanningWrapper.readInto(NonSpanningWrapper.java:337)
 ~[classes/:?]
    at 
org.apache.flink.runtime.io.network.api.serialization.SpillingAdaptiveSpanningRecordDeserializer.readNonSpanningRecord(SpillingAdaptiveSpanningRecordDeserializer.java:128)
 ~[classes/:?]
    at 
org.apache.flink.runtime.io.network.api.serialization.SpillingAdaptiveSpanningRecordDeserializer.readNextRecord(SpillingAdaptiveSpanningRecordDeserializer.java:103)
 ~[classes/:?]
    at 
org.apache.flink.runtime.io.network.api.serialization.SpillingAdaptiveSpanningRecordDeserializer.getNextRecord(SpillingAdaptiveSpanningRecordDeserializer.java:93)
 ~[classes/:?]
    at 
org.apache.flink.streaming.runtime.io.recovery.DemultiplexingRecordDeserializer$VirtualChannel.getNextRecord(DemultiplexingRecordDeserializer.java:79)
 ~[classes/:?]
    at 
org.apache.flink.streaming.runtime.io.recovery.DemultiplexingRecordDeserializer.getNextRecord(DemultiplexingRecordDeserializer.java:154)
 ~[classes/:?]
    at 
org.apache.flink.streaming.runtime.io.recovery.DemultiplexingRecordDeserializer.getNextRecord(DemultiplexingRecordDeserializer.java:54)
 ~[classes/:?]
    at 
org.apache.flink.streaming.runtime.io.AbstractStreamTaskNetworkInput.emitNext(AbstractStreamTaskNetworkInput.java:103)
 ~[classes/:?]
    ... 10 more {code}
 

Example: Source[2] -> Sink[3] restored from a checkpoint of Source[1] -> Sink[3]

This failure happens because, during the checkpoint, the source outputs and 
sink inputs can each hold only one part of a spanning stream record. After the 
restore, the two parts of one record can be delivered to different inputs of 
the sink. Because the parallelism of the sink has not changed, each sink input 
knows nothing about the other channels, only about its own.

I think the fix needs to rescale the sink inputs whenever the source outputs 
were rescaled.
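For context, the effect of the FULL mapper can be sketched standalone. This mimics the behavior of Flink's SubtaskStateMapper.FULL rather than calling it (the class below is illustrative, not Flink code): every new subtask is assigned the state of every old subtask, which is why channel state from any old channel can end up on any new input after rescaling.

```java
import java.util.ArrayList;
import java.util.List;

// Standalone illustration mimicking SubtaskStateMapper.FULL's mapping;
// this is NOT Flink's actual code.
public class FullMapperSketch {
    // For any given new subtask, FULL assigns the state of ALL old subtasks.
    public static List<Integer> oldSubtasksFor(int newSubtask, int oldParallelism) {
        List<Integer> all = new ArrayList<>();
        for (int i = 0; i < oldParallelism; i++) {
            all.add(i); // every old subtask index, regardless of newSubtask
        }
        return all;
    }
}
```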

To fix this, the following needs to be added 
[here|https://github.com/apache/flink/blob/4165bac27bda4457e5940a994d923242d4a271dc/flink-runtime/src/main/java/org/apache/flink/runtime/checkpoint/StateAssignmentOperation.java#L424]:
{code:java}
boolean noNeedRescale = stateAssignment.executionJobVertex
.getJobVertex()
.getInputs()
.stream()
.map(JobEdge::getDownstreamSubtaskStateMapper)
.anyMatch(m -> !m.equals(SubtaskStateMapper.FULL))
&& stateAssignment.executionJobVertex
.getInputs()
.stream()
.map(IntermediateResult::getProducer)

Re: [DISCUSS] FLIP-XXX: Improve JDBC connector extensibility for Table API

2024-05-14 Thread Muhammet Orazov

Hey Lorenzo,

Thanks for driving this FLIP! +1

It will improve the user experience of using JDBC based
connectors and help developers to build with different drivers.

Best,
Muhammet

On 2024-05-13 10:20, lorenzo.affe...@ververica.com.INVALID wrote:

Hello dev!

I want to share a draft of my FLIP to refactor the JDBC connector to 
improve its extensibility [1].
The goal is to allow implementers to write new connectors on top of the 
JDBC one for Table API with clean and maintainable code.


Any feedback from the community is more than welcome.

[1] 
https://docs.google.com/document/d/1kl_AikMlqPUI-LNiPBraAFVZDRg1LF4bn6uiNtR4dlY/edit?usp=sharing


Re: [VOTE] FLIP-453: Promote Unified Sink API V2 to Public and Deprecate SinkFunction

2024-05-14 Thread gongzhongqiang
+1 (non-binding)

Best.
Zhongqiang Gong

Martijn Visser  wrote on Tue, May 14, 2024, 14:45:

> Hi everyone,
>
> With no more discussions being open in the thread [1] I would like to start
> a vote on FLIP-453: Promote Unified Sink API V2 to Public and Deprecate
> SinkFunction [2]
>
> The vote will be open for at least 72 hours unless there is an objection or
> insufficient votes.
>
> Best regards,
>
> Martijn
>
> [1] https://lists.apache.org/thread/hod6bg421bzwhbfv60lwsck7r81dvo59
> [2]
>
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-453%3A+Promote+Unified+Sink+API+V2+to+Public+and+Deprecate+SinkFunction
>


Re: [VOTE] FLIP-453: Promote Unified Sink API V2 to Public and Deprecate SinkFunction

2024-05-14 Thread Rui Fan
Thanks Martijn for driving this proposal!

+1(binding)

Best,
Rui

On Tue, May 14, 2024 at 3:30 PM Ferenc Csaky 
wrote:

> +1 (non-binding)
>
> Thanks,
> Ferenc
>
>
>
>
> On Tuesday, 14 May 2024 at 08:51, weijie guo 
> wrote:
>
> >
> >
> > Thanks Martijn for the effort!
> >
> > +1(binding)
> >
> > Best regards,
> >
> > Weijie
> >
> >
> > Martijn Visser martijnvis...@apache.org wrote on Tue, May 14, 2024, 14:45:
> >
> > > Hi everyone,
> > >
> > > With no more discussions being open in the thread [1] I would like to
> start
> > > a vote on FLIP-453: Promote Unified Sink API V2 to Public and Deprecate
> > > SinkFunction [2]
> > >
> > > The vote will be open for at least 72 hours unless there is an
> objection or
> > > insufficient votes.
> > >
> > > Best regards,
> > >
> > > Martijn
> > >
> > > [1] https://lists.apache.org/thread/hod6bg421bzwhbfv60lwsck7r81dvo59
> > > [2]
> > >
> > >
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-453%3A+Promote+Unified+Sink+API+V2+to+Public+and+Deprecate+SinkFunction
>


Re: [VOTE] FLIP-453: Promote Unified Sink API V2 to Public and Deprecate SinkFunction

2024-05-14 Thread Ferenc Csaky
+1 (non-binding)

Thanks,
Ferenc




On Tuesday, 14 May 2024 at 08:51, weijie guo  wrote:

> 
> 
> Thanks Martijn for the effort!
> 
> +1(binding)
> 
> Best regards,
> 
> Weijie
> 
> 
> Martijn Visser martijnvis...@apache.org wrote on Tue, May 14, 2024, 14:45:
> 
> > Hi everyone,
> > 
> > With no more discussions being open in the thread [1] I would like to start
> > a vote on FLIP-453: Promote Unified Sink API V2 to Public and Deprecate
> > SinkFunction [2]
> > 
> > The vote will be open for at least 72 hours unless there is an objection or
> > insufficient votes.
> > 
> > Best regards,
> > 
> > Martijn
> > 
> > [1] https://lists.apache.org/thread/hod6bg421bzwhbfv60lwsck7r81dvo59
> > [2]
> > 
> > https://cwiki.apache.org/confluence/display/FLINK/FLIP-453%3A+Promote+Unified+Sink+API+V2+to+Public+and+Deprecate+SinkFunction


[jira] [Created] (FLINK-35350) Add documentation for Kudu

2024-05-14 Thread Martijn Visser (Jira)
Martijn Visser created FLINK-35350:
--

 Summary: Add documentation for Kudu
 Key: FLINK-35350
 URL: https://issues.apache.org/jira/browse/FLINK-35350
 Project: Flink
  Issue Type: Sub-task
  Components: Connectors / Kudu
Reporter: Martijn Visser
 Fix For: kudu-2.0.0


There's currently no documentation for Kudu; this should be added



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [VOTE] FLIP-453: Promote Unified Sink API V2 to Public and Deprecate SinkFunction

2024-05-14 Thread weijie guo
Thanks Martijn for the effort!

+1(binding)

Best regards,

Weijie


Martijn Visser  wrote on Tue, May 14, 2024, 14:45:

> Hi everyone,
>
> With no more discussions being open in the thread [1] I would like to start
> a vote on FLIP-453: Promote Unified Sink API V2 to Public and Deprecate
> SinkFunction [2]
>
> The vote will be open for at least 72 hours unless there is an objection or
> insufficient votes.
>
> Best regards,
>
> Martijn
>
> [1] https://lists.apache.org/thread/hod6bg421bzwhbfv60lwsck7r81dvo59
> [2]
>
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-453%3A+Promote+Unified+Sink+API+V2+to+Public+and+Deprecate+SinkFunction
>


[VOTE] FLIP-453: Promote Unified Sink API V2 to Public and Deprecate SinkFunction

2024-05-14 Thread Martijn Visser
Hi everyone,

With no more discussions being open in the thread [1] I would like to start
a vote on FLIP-453: Promote Unified Sink API V2 to Public and Deprecate
SinkFunction [2]

The vote will be open for at least 72 hours unless there is an objection or
insufficient votes.

Best regards,

Martijn

[1] https://lists.apache.org/thread/hod6bg421bzwhbfv60lwsck7r81dvo59
[2]
https://cwiki.apache.org/confluence/display/FLINK/FLIP-453%3A+Promote+Unified+Sink+API+V2+to+Public+and+Deprecate+SinkFunction


[jira] [Created] (FLINK-35349) Use connection in openJdbcConnection of SqlServerDialect/Db2Dialect/OracleDialect

2024-05-14 Thread Hongshun Wang (Jira)
Hongshun Wang created FLINK-35349:
-

 Summary: Use connection in openJdbcConnection of 
SqlServerDialect/Db2Dialect/OracleDialect
 Key: FLINK-35349
 URL: https://issues.apache.org/jira/browse/FLINK-35349
 Project: Flink
  Issue Type: Improvement
  Components: Flink CDC
Affects Versions: cdc-3.1.0
Reporter: Hongshun Wang
 Fix For: cdc-3.2.0


Currently, some dialects' `openJdbcConnection` creates a connection without a 
connection pool, which means a new connection is created each time.

However, `openJdbcConnection` is now used in `generateSplits`, which means the 
enumerator creates a new connection for every single split. A big table will 
create connections again and again.
{code:java}
public Collection<SnapshotSplit> generateSplits(TableId tableId) {
try (JdbcConnection jdbc = dialect.openJdbcConnection(sourceConfig)) {

LOG.info("Start splitting table {} into chunks...", tableId);
long start = System.currentTimeMillis();

Table table =
Objects.requireNonNull(dialect.queryTableSchema(jdbc, 
tableId)).getTable();
Column splitColumn = getSplitColumn(table, 
sourceConfig.getChunkKeyColumn());
final List<ChunkRange> chunks; {code}
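A reuse-oriented approach can be sketched as follows. All names here are hypothetical and simplified (this is not the actual Flink CDC dialect API): the connection is opened lazily once and cached, so generating splits repeatedly does not reconnect every time.

```java
// Sketch of caching one connection across repeated split generation.
// Hypothetical names; not the actual Flink CDC dialect/splitter API.
public class ReusingSplitter {
    private final java.util.function.Supplier<AutoCloseable> connectionFactory;
    private AutoCloseable cachedConnection;
    public int connectionsOpened = 0; // counter for illustration only

    public ReusingSplitter(java.util.function.Supplier<AutoCloseable> factory) {
        this.connectionFactory = factory;
    }

    // Open the JDBC connection lazily and cache it, so that splitting many
    // chunks (or many tables) reuses one connection instead of reconnecting.
    private AutoCloseable getOrOpenConnection() {
        if (cachedConnection == null) {
            cachedConnection = connectionFactory.get();
            connectionsOpened++;
        }
        return cachedConnection;
    }

    public void generateSplits(String tableId) {
        AutoCloseable jdbc = getOrOpenConnection();
        // ... query the table schema and split it into chunks using `jdbc` ...
    }
}
```

With this shape, calling `generateSplits` for many tables opens the underlying connection only once; the cached connection would still need to be closed when the enumerator shuts down.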



--
This message was sent by Atlassian Jira
(v8.20.10#820010)