Thanks for driving this, Weijie. Usually, the data distribution of the external
system is closely related to the keys, e.g. computing the bucket index by key
hashcode % bucket num, so I'm not sure how much difference there is
between partitioning by key and a custom partitioning strategy.
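To illustrate the point above, here is a minimal sketch (hypothetical, not Flink's or any particular external system's implementation) of the `hashcode % bucket num` layout the message refers to; `NUM_BUCKETS` and the CRC32 hash are assumptions for illustration:

```python
import zlib

NUM_BUCKETS = 16  # assumed bucket count, for illustration only

def bucket_for_key(key: str, num_buckets: int = NUM_BUCKETS) -> int:
    """Compute a bucket index the way many external systems do:
    bucket = hash(key) % num_buckets. A stable hash (CRC32) is used
    because Python's built-in hash() is salted per process."""
    return zlib.crc32(key.encode("utf-8")) % num_buckets

# The same key always maps to the same bucket, so a sink partitioned
# by key hash already aligns with such a system's data layout.
assert bucket_for_key("user-42") == bucket_for_key("user-42")
assert 0 <= bucket_for_key("user-42") < NUM_BUCKETS
```

Under this assumption, a custom partitioner only pays off when the external system's bucketing deviates from plain key hashing (e.g. range partitioning or a different hash function).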
Congratulations Rui
Best Regards,
Gang Wang
Jacky Lau wrote on Tue, Jun 11, 2024 at 13:04:
> Congratulations Rui, well deserved!
>
> Regards,
> Jacky Lau
>
> Jeyhun Karimov wrote on Tue, Jun 11, 2024 at 03:49:
>
> > Congratulations Rui, well deserved!
> >
> > Regards,
> > Jeyhun
> >
> > On Mon, Jun 10, 2024, 10:21 Ahmed
Thanks Zhanghao for the feedback.
Please feel free to change the state of this one to `won't make it`.
Best regards,
Weijie
Zhanghao Chen wrote on Wed, Jun 12, 2024 at 13:18:
> Hi Rui,
>
> Thanks for the summary! A quick update here: FLIP-398 was decided not to
> go into 1.20, as it was just found that
Hi Rui,
Thanks for the summary! A quick update here: FLIP-398 was decided not to go
into 1.20, as it was just found that the effort to add dedicated serialization
support for Maps, Sets, and Lists will break state compatibility. I will revert
the relevant changes soon.
Best,
Zhanghao Chen
Thanks Rui for the summary!
Best regards,
Weijie
Rui Fan <1996fan...@gmail.com> wrote on Wed, Jun 12, 2024 at 13:00:
> Dear devs,
>
> This is the sixth meeting for Flink 1.20 release[1] cycle.
>
> I'd like to share the information synced in the meeting.
>
> - Feature Freeze
>
> It is worth noting that
Dear devs,
This is the sixth meeting for Flink 1.20 release[1] cycle.
I'd like to share the information synced in the meeting.
- Feature Freeze
It is worth noting that there are only 3 days left until the
feature freeze time (June 15, 2024, 00:00 CEST (UTC+2)),
and developers need to pay
Zakelly Lan created FLINK-35570:
---
Summary: Consider PlaceholderStreamStateHandle in checkpoint file
merging
Key: FLINK-35570
URL: https://issues.apache.org/jira/browse/FLINK-35570
Project: Flink
+1 (binding)
Thanks,
Zhu
Yuepeng Pan wrote on Tue, Jun 11, 2024 at 17:04:
> +1 (non-binding)
>
> Best regards,
> Yuepeng Pan
>
> At 2024-06-11 16:34:12, "Rui Fan" <1996fan...@gmail.com> wrote:
> >+1(binding)
> >
> >Best,
> >Rui
> >
> >On Tue, Jun 11, 2024 at 4:14 PM Muhammet Orazov
> > wrote:
> >
> >> +1
Jane Chan created FLINK-35569:
-
Summary:
SnapshotFileMergingCompatibilityITCase#testSwitchFromEnablingToDisablingFileMerging
failed
Key: FLINK-35569
URL: https://issues.apache.org/jira/browse/FLINK-35569
Hi Sergio, thanks for driving it, +1 for this.
I have some comments:
1. If we have a source table with primary keys and partition keys defined,
what is the default behavior if PARTITIONED and DISTRIBUTED are not specified
in the CTAS statement? Should they not be inherited by default?
2. I suggest
Gang Huang created FLINK-35568:
--
Summary: Add imagePullSecrets for FlinkDeployment spec
Key: FLINK-35568
URL: https://issues.apache.org/jira/browse/FLINK-35568
Project: Flink
Issue Type:
Thanks a lot Danny!
On Tue, Jun 11, 2024 at 10:21 AM Danny Cranmer wrote:
>
> Hey Sergey,
>
> I have completed the 3 tasks. Let me know if you need anything else.
>
> Thanks,
> Danny
>
> On Tue, Jun 11, 2024 at 9:11 AM Danny Cranmer
> wrote:
>
> > Thanks for driving this Sergey, I will pick up
The Apache Flink community is very happy to announce the release of
Apache flink-connector-opensearch 2.0.0.
Apache Flink® is an open-source stream processing framework for
distributed, high-performing, always-available, and accurate data
streaming applications.
The release is available for
The Apache Flink community is very happy to announce the release of Apache
flink-connector-opensearch 1.2.0 for Flink 1.18 and Flink 1.19.
Apache Flink® is an open-source stream processing framework for
distributed, high-performing, always-available, and accurate data streaming
applications.
The
I just noticed the CREATE TABLE LIKE statement allows the definition of new
columns in the CREATE part. The difference
with this CTAS proposal is that TABLE LIKE appends the new columns at the
end of the schema instead of adding them
at the beginning, as this proposal and MySQL do.
> create
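The column-ordering difference described above can be sketched as follows (a hypothetical illustration; the table and column names are made up, not from the proposal):

```python
# Source table schema and the new columns declared in the CREATE part.
source_columns = ["id", "name"]   # hypothetical source table columns
new_columns = ["created_at"]      # hypothetical new column in the CREATE part

# CREATE TABLE LIKE: new columns are appended at the end of the schema.
like_schema = source_columns + new_columns
print(like_schema)   # ['id', 'name', 'created_at']

# This CTAS proposal (and MySQL): new columns go at the beginning.
ctas_schema = new_columns + source_columns
print(ctas_schema)   # ['created_at', 'id', 'name']
```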
+1 for cutting the release and Gyula as the release manager.
On Tue, Jun 11, 2024 at 10:41 AM David Radley
wrote:
> I agree – thanks for driving this Gyula.
>
> From: Rui Fan <1996fan...@gmail.com>
> Date: Tuesday, 11 June 2024 at 02:52
> To: dev@flink.apache.org
> Cc: Mate Czagany
> Subject:
Thanks for bringing this discussion back.
When we decided to decouple the connectors, we already discussed that we
will only realize the full benefit when the connectors actually become
independent from the Flink minor releases. Until that happens we have a ton
of extra work but limited gain.
Thanks Timo for answering Jeyhun's questions.
To add more info about your questions, Jeyhun: this proposal does not handle
NULL/NOT_NULL types. I noticed that
the current CTAS impl. (as Timo said) adds this constraint as part of the
resulting schema. And when defining
a primary key in the CREATE
Hi all,
+1 to this FLIP, very thanks all for your proposal.
isDeterministic looks good to me too.
We can consider stating the following points:
1. How to enable custom data distribution? Is it a dynamic hint? Can
you provide an SQL example?
2. What impact will it have when the mainstream is
On 10/06/2024 18:25, Danny Cranmer wrote:
This would
mean we would usually not need to release a new connector version per Flink
version, assuming there are no breaking changes.
We technically can't do this because we don't provide binary
compatibility across minor versions.
That's the entire
Hi Martijn,
Thanks for the confirmation,
Kind regards, David.
From: Martijn Visser
Date: Tuesday, 11 June 2024 at 15:00
To: dev@flink.apache.org
Subject: [EXTERNAL] Re: [VOTE] Release flink-connector-kafka v3.2.0, release
candidate #1
Hi David,
That's a blocker for a Flink Kafka connector
Hi David,
That's a blocker for a Flink Kafka connector 4.0, not for 3.2.0. It's not
related to this release.
Best regards,
Martijn
On Tue, Jun 11, 2024 at 3:54 PM David Radley
wrote:
> Hi,
> Sorry I am a bit late.
> I notice https://issues.apache.org/jira/browse/FLINK-35109 is open and a
>
Hi,
Sorry I am a bit late.
I notice https://issues.apache.org/jira/browse/FLINK-35109 is open and a
blocker. Can I confirm that we have mitigated the impacts of this issue in this
release?
Kind regards, David.
From: Danny Cranmer
Date: Friday, 7 June 2024 at 11:46
To:
Hi Matthias,
I think we can include this generic semantic in the writeup of the LTS
definition for the Flink website (last item in the Migration Plan).
Talking about 1.x and 2.x feels more natural than about N.x and N+1.x - I'd
prefer not to overcomplicate things here.
Should the gap before the
Hi Danny,
Thank you for bringing this up.
I agree with points made by Ahmed, the split into different repositories
for connectors/connector groups adds flexibility to evolve connectors
without affecting other connectors.
I am also in favor of dropping the Flink version component, although this
Hello Danny,
Thanks for starting the discussion.
-1 for mono-repo, and +/-1 for dropping the Flink version.
I have a mixed opinion on dropping the Flink version. Usually, large
production migrations happen on Flink versions, and users also naturally
want to update the connectors compatible with that
Hi Danny,
Thanks for bringing this up. I might not have driven a connector release
myself, but I echo the pain and delay in releases for adding Flink version
support.
I am not really in favor of the mono-repo approach for the following reasons:
1- We will lose the flexibility we currently have for
Hi Mingliang,
Yes sounds like a good solution, I am not very familiar with ElasticSearch
internals and APIs but will try to assist with the PR when ready.
Best Regards
Ahmed Hamdy
On Tue, 11 Jun 2024 at 07:07, Mingliang Liu wrote:
> Thank you Ahmed for the explanation.
>
> The current
Hi Lincoln,
Thanks for your reply. Weijie and I discussed these two issues offline,
and here are the results of our discussion:
1. When the user utilizes the hash lookup join hint introduced by FLIP-204[1],
the `SupportsLookupCustomShuffle` interface should be ignored. This is because
the hash
Thanks Sam for your investigation.
I revisited the logs and confirmed that the JDK has never changed.
'java -version' gives:
> openjdk version "11.0.19" 2023-04-18 LTS
> OpenJDK Runtime Environment (Red_Hat-11.0.19.0.7-2) (build 11.0.19+7-LTS)
> OpenJDK 64-Bit Server VM (Red_Hat-11.0.19.0.7-2)
Hongshun Wang created FLINK-35567:
-
Summary: CDC BinaryWriter cast NullableSerializerWrapper error
Key: FLINK-35567
URL: https://issues.apache.org/jira/browse/FLINK-35567
Project: Flink
Martijn Visser created FLINK-35566:
--
Summary: Consider promoting TypeSerializer from PublicEvolving to
Public
Key: FLINK-35566
URL: https://issues.apache.org/jira/browse/FLINK-35566
Project: Flink
Naci Simsek created FLINK-35565:
---
Summary: Flink KafkaSource Batch Job Gets Into Infinite Loop after
Resetting Offset
Key: FLINK-35565
URL: https://issues.apache.org/jira/browse/FLINK-35565
Project:
+1 (non-binding)
Best regards,
Yuepeng Pan
At 2024-06-11 16:34:12, "Rui Fan" <1996fan...@gmail.com> wrote:
>+1(binding)
>
>Best,
>Rui
>
>On Tue, Jun 11, 2024 at 4:14 PM Muhammet Orazov
> wrote:
>
>> +1 (non-binding)
>>
>> Thanks Yuxin for driving this!
>>
>> Best,
>> Muhammet
>>
>>
>> On
I agree – thanks for driving this Gyula.
From: Rui Fan <1996fan...@gmail.com>
Date: Tuesday, 11 June 2024 at 02:52
To: dev@flink.apache.org
Cc: Mate Czagany
Subject: [EXTERNAL] Re: Flink Kubernetes Operator 1.9.0 release planning
Thanks Gyula for driving this release!
> I suggest we cut the
+1(non-binding)
- Verified signatures
- Verified hashsums
- Checked Github release tag
- Source archives with no binary files
- Reviewed the flink-web PR
- Checked the jar build with jdk 1.8
Best,
Hang
gongzhongqiang wrote on Tue, Jun 11, 2024 at 15:53:
> +1(non-binding)
>
> - Verified signatures and
+1(binding)
Best,
Rui
On Tue, Jun 11, 2024 at 4:14 PM Muhammet Orazov
wrote:
> +1 (non-binding)
>
> Thanks Yuxin for driving this!
>
> Best,
> Muhammet
>
>
> On 2024-06-07 08:02, Yuxin Tan wrote:
> > Hi everyone,
> >
> > Thanks for all the feedback about the FLIP-459 Support Flink
> > hybrid
Hey Sergey,
I have completed the 3 tasks. Let me know if you need anything else.
Thanks,
Danny
On Tue, Jun 11, 2024 at 9:11 AM Danny Cranmer
wrote:
> Thanks for driving this Sergey, I will pick up the PMC tasks.
>
> Danny
>
> On Sun, Jun 9, 2024 at 11:09 PM Sergey Nuyanzin
> wrote:
>
>> Hi
+1 (non-binding)
Thanks Yuxin for driving this!
Best,
Muhammet
On 2024-06-07 08:02, Yuxin Tan wrote:
Hi everyone,
Thanks for all the feedback about the FLIP-459 Support Flink
hybrid shuffle integration with Apache Celeborn[1].
The discussion thread is here [2].
I'd like to start a vote for
Thanks for driving this Sergey, I will pick up the PMC tasks.
Danny
On Sun, Jun 9, 2024 at 11:09 PM Sergey Nuyanzin wrote:
> Hi everyone,
>
> as you might have noticed, the voting threads for the release of flink-opensearch
> connectors (v1, v2) received 3+ binding votes[1][2]
>
> Now I need PMC help to
Congratulations Rui, well deserved!
Best,
Muhammet
On 2024-06-05 10:01, Piotr Nowojski wrote:
Hi everyone,
On behalf of the PMC, I'm very happy to announce another new Apache
Flink
PMC Member - Fan Rui.
Rui has been active in the community since August 2019. During this
time he
has
+1(non-binding)
- Verified signatures and sha512
- Checked that the Github release tag exists
- Source archives with no binary files
- Built the source with jdk8 on Ubuntu 22.04 successfully
- Reviewed the flink-web PR
Best,
Zhongqiang Gong
Hong Liang wrote on Thu, Jun 6, 2024 at 23:39:
> Hi everyone,
> Please review and
Thanks for starting this discussion Danny
I will put my 5 cents here
From one side, yes, supporting a new Flink release takes time, as was
mentioned above.
However, from the other side, most of the connectors (main/master branches)
supported Flink 1.19
even before it was released, same for 1.20 since
+1 (binding)
- verified signatures
- verified hashsums
- checked Github release tag
- checked release notes
- reviewed all Jira issues for 1.19.1 have been resolved
- reviewed the web PR
Best,
Leonard
> On Jun 11, 2024 at 3:19 PM, Sergey Nuyanzin wrote:
>
> +1 (non-binding)
>
> - Downloaded all the
中国无锡周良 created FLINK-35564:
--
Summary: The topic cannot be distributed on subtask when
calculatePartitionOwner returns -1
Key: FLINK-35564
URL: https://issues.apache.org/jira/browse/FLINK-35564
Project:
+1 (non-binding)
- Downloaded all the artifacts
- Verified checksums and signatures
- Verified that source archives do not contain any binaries
- Built from source with jdk8
- Ran a simple wordcount job on local standalone cluster
On Tue, Jun 11, 2024 at 8:36 AM Matthias Pohl wrote:
> +1
+1 (binding)
* Downloaded all artifacts
* Extracted sources and ran compilation on sources
* Diff of git tag checkout with downloaded sources
* Verified SHA512 & GPG checksums
* Checked that all POMs have the right expected version
* Generated diffs to compare pom file changes with NOTICE files
*
Congratulations, Fan Rui!
Best,
Jiangang Liu
Jacky Lau wrote on Tue, Jun 11, 2024 at 13:04:
> Congratulations Rui, well deserved!
>
> Regards,
> Jacky Lau
>
> Jeyhun Karimov wrote on Tue, Jun 11, 2024 at 03:49:
>
> > Congratulations Rui, well deserved!
> >
> > Regards,
> > Jeyhun
> >
> > On Mon, Jun 10, 2024, 10:21
Thanks for bringing this up, Danny. This is indeed an important issue that
the community needs to improve on.
Personally, I think a mono-repo might not be a bad idea, if we apply
different rules for the connector releases. To be specific:
- flink-connectors 1.19.x contains all connectors that are
Hi Alexey,
thanks for proposing this FLIP. It is a nice continuation of the vision
we had for CompiledPlan when writing and implementing FLIP-190. The
whole stack is prepared for serializing BatchExecNodes as well so it
shouldn't be too hard to make this a reality.
> I think the FLIP should
+1 (non-binding)
Best,
Junrui
Venkatakrishnan Sowrirajan wrote on Mon, Jun 10, 2024 at 02:37:
> Thanks for adding this new support. +1 (non-binding)
>
> On Sat, Jun 8, 2024, 3:26 PM Ahmed Hamdy wrote:
>
> > +1 (non-binding)
> > Best Regards
> > Ahmed Hamdy
> >
> >
> > On Sat, 8 Jun 2024 at 22:26, Jeyhun
Thank you Ahmed for the explanation.
The current Elasticsearch 8 connector already uses the
FatalExceptionClassifier for fatal / non-retriable requests [1]. It's very
similar to what you linked in the AWS connectors. Currently this is only
used for fully failed requests. The main problem I was