Re: [DISCUSSION] FLIP-456: CompiledPlan support for Batch Execution Mode

2024-06-07 Thread Jim Hughes
Hi Alexey,

Responses inline below:

On Mon, May 13, 2024 at 7:18 PM Alexey Leonov-Vendrovskiy <
vendrov...@gmail.com> wrote:

> Thanks Jim.
>
> > 1. For the testing, I'd call the tests "execution" tests rather than
> > "restore" tests.  For streaming execution, restore tests have the
> compiled
> > plan and intermediate state; the tests verify that those can work
> together
> > and continue processing.
>
> Agree that we don't need to store and restore the intermediate state. So
> the most critical part is that the CompiledPlan for batch can be executed.
>

On the FLIP, can you be more specific about what we are checking during
execution?  I'd suggest that `executeSql(_)` and
`executePlan(compilePlanSql(_))` should be compared.
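To make that concrete, here is a rough sketch of such an execution
test (connector choices and names are made up, and compiling a plan in
batch mode is exactly what this FLIP would enable):

import org.apache.flink.table.api.CompiledPlan;
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class BatchPlanExecutionSketch {
    public static void main(String[] args) throws Exception {
        TableEnvironment env =
                TableEnvironment.create(EnvironmentSettings.inBatchMode());
        env.executeSql(
                "CREATE TABLE src (name STRING) WITH ("
                        + "'connector' = 'datagen', 'number-of-rows' = '10')");
        env.executeSql(
                "CREATE TABLE sink (name STRING, cnt BIGINT) WITH ("
                        + "'connector' = 'filesystem', "
                        + "'path' = '/tmp/out', 'format' = 'csv')");

        String insert =
                "INSERT INTO sink SELECT name, COUNT(*) FROM src GROUP BY name";

        // 1) execute the statement directly ...
        env.executeSql(insert).await();

        // 2) ... and via the compiled plan of the same statement.
        CompiledPlan plan = env.compilePlanSql(insert);
        env.executePlan(plan).await();

        // A real test would write each run to its own sink and assert
        // equal contents (comparison elided in this sketch).
    }
}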


> 2. The FLIP implicitly suggests "completeness tests" (to use FLIP-190's
> > words).  Do we need "change detection tests"?  I'm a little unsure if
> that
> > is presently happening in an automatic way for streaming operators.
>
>
>  We might need to elaborate more on this, but the idea is that we need to
> make sure that compiled plans created by an older version of the SQL planner
> are executable on newer runtimes.
>
> 3.  Can we remove old versions of batch operators eventually?  Or do we
> > need to keep them forever like we would for streaming operators?
> >
>
> We could have deprecation paths for old operator nodes in some cases. It is
> a matter of the time window: what would be a practical "time distance"
> between the query planner and the Flink runtime against which the query can
> be resubmitted.
> Note, here we don't have continuous queries, so there is always an option
> to "re-plan" the original SQL query text into a newer version of the
> CompiledPlan.
> With this in mind, a time window of 1yr+ would allow deprecation of older
> batch exec nodes, though I don't see this as a frequent event.
>

As I read the JavaDocs for `TableEnvironment.loadPlan`, it looks like the
compiled plan ought to be sufficient to run a job at a later time.
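For reference, the round trip those JavaDocs describe would look
roughly like this (a sketch, assuming FLIP-456 is implemented for
batch; the path is made up and src/sink are assumed to be registered):

import org.apache.flink.table.api.CompiledPlan;
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.PlanReference;
import org.apache.flink.table.api.TableEnvironment;

public class PlanRoundTripSketch {
    public static void main(String[] args) throws Exception {
        TableEnvironment env =
                TableEnvironment.create(EnvironmentSettings.inBatchMode());

        // Compile once and persist the plan ...
        CompiledPlan plan =
                env.compilePlanSql("INSERT INTO sink SELECT * FROM src");
        plan.writeToFile("/tmp/batch-plan.json");

        // ... then load and run it later, possibly on a newer runtime.
        CompiledPlan restored =
                env.loadPlan(PlanReference.fromFile("/tmp/batch-plan.json"));
        restored.execute().await();
    }
}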

I think the FLIP should be clear on the backwards support strategy here.
The strategy for streaming is "forever".  This may be the most interesting
part of the FLIP to discuss.

Can you let us know when you've updated the FLIP?

Cheers,

Jim


> -Alexey
>
>
>
> On Mon, May 13, 2024 at 1:52 PM Jim Hughes 
> wrote:
>
> > Hi Alexey,
> >
> > After some thought, I have a question about deprecations:
> >
> > 3.  Can we remove old versions of batch operators eventually?  Or do we
> > need to keep them forever like we would for streaming operators?
> >
> > Cheers,
> >
> > Jim
> >
> > On Thu, May 9, 2024 at 11:29 AM Jim Hughes  wrote:
> >
> > > Hi Alexey,
> > >
> > > Overall, the FLIP looks good and makes sense to me.
> > >
> > > 1. For the testing, I'd call the tests "execution" tests rather than
> > > "restore" tests.  For streaming execution, restore tests have the
> > compiled
> > > plan and intermediate state; the tests verify that those can work
> > together
> > > and continue processing.
> > >
> > > For batch execution, I think we just want that all existing compiled
> > plans
> > > can be executed in future versions.
> > >
> > > 2. The FLIP implicitly suggests "completeness tests" (to use FLIP-190's
> > > words).  Do we need "change detection tests"?  I'm a little unsure if
> > that
> > > is presently happening in an automatic way for streaming operators.
> > >
> > > In RestoreTestBase, generateTestSetupFiles is disabled and has to be
> run
> > > manually when tests are being written.
> > >
> > > Cheers,
> > >
> > > Jim
> > >
> > > On Tue, May 7, 2024 at 5:11 AM Paul Lam  wrote:
> > >
> > >> Hi Alexey,
> > >>
> > >> Thanks a lot for bringing up the discussion. I’m big +1 for the FLIP.
> > >>
> > >> I suppose the goal doesn’t involve the interchangeability of json
> plans
> > >> between batch mode and streaming mode, right?
> > >> In other words, a json plan compiled in a batch program can’t be run
> in
> > >> streaming mode without a migration (which is not yet supported).
> > >>
> > >> Best,
> > >> Paul Lam
> > >>
> > >> > On May 7, 2024 at 14:38, Alexey Leonov-Vendrovskiy wrote:
> > >> >
> > >> > Hi everyone,
> > >> >
> > >> > PTAL at the proposed FLIP-456: CompiledPlan support for Batch
> > Execution
> > >> > Mode. It is pretty self-describing.
> > >> >
> > >> > Any thoughts are welcome!
> > >> >
> > >> > Thanks,
> > >> > Alexey
> > >> >
> > >> > [1]
> > >> >
> > >>
> >
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-456%3A+CompiledPlan+support+for+Batch+Execution+Mode
> > >> > .
> > >>
> > >>
> >
>


Re: [DISCUSS] FLIP-461: Synchronize rescaling with checkpoint creation to minimize reprocessing

2024-06-07 Thread Matthias Pohl
Hi Zakelly,
good point. I updated the FLIP to use "scale-on-failed-checkpoints-count"
and "max-delay-for-scale-trigger".
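For readers following along, setting the renamed options would look
roughly like this; note that the fully-qualified keys are my
assumption (the FLIP scopes them under the adaptive scheduler's
namespace), so treat them as illustrative only:

import org.apache.flink.configuration.Configuration;

public class RescaleTriggerConfigSketch {
    public static void main(String[] args) {
        Configuration conf = new Configuration();
        // Hypothetical keys -- the exact names are whatever the FLIP
        // finally defines:
        conf.setString(
                "jobmanager.adaptive-scheduler.scale-on-failed-checkpoints-count",
                "2");
        conf.setString(
                "jobmanager.adaptive-scheduler.max-delay-for-scale-trigger",
                "1 min");
    }
}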

On Fri, Jun 7, 2024 at 12:18 PM Zakelly Lan  wrote:

> Hi Matthias,
>
> Thanks for your reply!
>
> That's something that could be considered as another optimization. I would
> > consider this as a possible follow-up. My concern here is that we'd make
> > the rescaling configuration even more complicated by introducing yet
> > another parameter.
>
>
> I'd be fine with considering this as a follow-up.
>
> It might be worth renaming the internal interface into something that
> > indicates its internal usage to avoid confusion.
> >
>
> Agree with this.
>
> And another question:
> I noticed the existing options under 'jobmanager.adaptive-scheduler' are
> using the word 'scaling', e.g.
> 'jobmanager.adaptive-scheduler.scaling-interval.min'. While in this FLIP
> you choose 'rescale'. Would you mind unifying them?
>
>
> Best,
> Zakelly
>
>
> On Thu, Jun 6, 2024 at 10:57 PM David Morávek 
> wrote:
>
> > Thanks for the FLIP Matthias, I think it looks pretty solid!
> >
> > I also don't see a relation to unaligned checkpoints. From the AS
> > perspective, the checkpoint time doesn't matter.
> >
> > Is it possible a change event observed right after a complete checkpoint
> > > (or within a specific short time after a checkpoint) that triggers a
> > > rescale immediately? Sometimes the checkpoint interval is huge and it
> is
> > > better to rescale immediately.
> > >
> >
> > I had considered this initially too, but it feels like a possible
> follow-up
> > optimization.
> >
> > The primary objective of the proposed solution is to enhance overall
> > predictability. With a longer checkpointing interval, the current
> situation
> > worsens as we might have to reprocess a substantial backlog.
> >
> > I think in the future we might actually want to enhance this by
> triggering
> > some kind of specialized "rescaling" checkpoint that prepares the cluster
> > for rescaling (eg. by replicating state to new slots / pre-splitting the
> > db, ...), to make things faster.
> >
> > Best,
> > D.
> >
> > On Wed, Jun 5, 2024 at 4:34 PM Matthias Pohl  wrote:
> >
> > > Hi Zakelly,
> > > thanks for your reply. See my inlined responses below:
> > >
> > > On Wed, Jun 5, 2024 at 10:26 AM Zakelly Lan 
> > wrote:
> > >
> > > > Hi Matthias,
> > > >
> > > > Thanks for your proposal! I have a few questions:
> > > >
> > > > 1. Is it possible a change event observed right after a complete
> > > checkpoint
> > > > (or within a specific short time after a checkpoint) that triggers a
> > > > rescale immediately? Sometimes the checkpoint interval is huge and it
> > is
> > > > better to rescale immediately.
> > > >
> > >
> > > That's something that could be considered as another optimization. I
> > would
> > > consider this as a possible follow-up. My concern here is that we'd
> make
> > > the rescaling configuration even more complicated by introducing yet
> > > another parameter.
> > >
> > >
> > > > 2. Should we introduce `CheckpointLifecycleListener` instead of
> reusing
> > > > `CheckpointListener`? Is `CheckpointListener` enough for this
> scenario?
> > > >
> > >
> > > Good point, they are serving similar purposes. But I'm hesitant to use
> > > CheckpointListener (which is a public interface) for this internal
> quite
> > > narrowly scoped runtime-specific use case of FLIP-461.
> > >
> > > It might be worth renaming the internal interface into something that
> > > indicates its internal usage to avoid confusion.
> > >
> > >
> > > > Best,
> > > > Zakelly
> > > >
> > > > On Wed, Jun 5, 2024 at 3:02 PM Matthias Pohl 
> > wrote:
> > > >
> > > > > Hi ConradJam,
> > > > > thanks for your response.
> > > > >
> > > > > The CheckpointStatsTracker gets notified about the checkpoint
> > > completion
> > > > > after the checkpoint is finalized, i.e. all its data is persisted
> and
> > > the
> > > > > metadata is written to the CompletedCheckpointStore. At this
> moment,
> > > the
> > > > > checkpoint is considered for restoring a job and, therefore,
> becomes
> > > > > available for restarts. This workflow also applies to unaligned
> > > > > checkpoints. But I see how this context might be helpful for
> > > > understanding
> > > > > the change. I will add it to the FLIP. So far, I don't see a reason
> > > > > to disable the feature for unaligned checkpoints. Do you see other
> > > issues
> > > > > that might make it necessary to disable this feature for this type
> of
> > > > > checkpoints?
> > > > >
> > > > > Can you elaborate a bit more what you mean by "checkpoints that do
> > not
> > > > > check it"? I do not fully understand what you are referring to with
> > > "it"
> > > > > here.
> > > > >
> > > > > Best,
> > > > > Matthias
> > > > >
> > > > > On Wed, Jun 5, 2024 at 4:46 AM ConradJam 
> > wrote:
> > > > >
> > > > > > I have a few questions:
> > > > > > Unaligned checkpoints Do we need to enable this feature? Whether
> > > > > > this feature should be disabled for checkpoints that do not check it

Re: [VOTE] FLIP-459: Support Flink hybrid shuffle integration with Apache Celeborn

2024-06-07 Thread Jim Hughes
HI all,

+1 (non-binding)

Cheers,

Jim

On Fri, Jun 7, 2024 at 4:03 AM Yuxin Tan  wrote:

> Hi everyone,
>
> Thanks for all the feedback about the FLIP-459 Support Flink
> hybrid shuffle integration with Apache Celeborn[1].
> The discussion thread is here [2].
>
> I'd like to start a vote for it. The vote will be open for at least
> 72 hours unless there is an objection or insufficient votes.
>
> [1]
>
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-459%3A+Support+Flink+hybrid+shuffle+integration+with+Apache+Celeborn
> [2] https://lists.apache.org/thread/gy7sm7qrf7yrv1rl5f4vtk5fo463ts33
>
> Best,
> Yuxin
>


Re: Savepoints not considered during failover

2024-06-07 Thread Mate Czagany
Hi,

I think this was last discussed in FLIP-193 [1] where the reasoning is
mostly the same as Matthias said, savepoints are owned by the user and
Flink cannot depend on them.

In older versions Flink also took savepoints into account when restoring a
job, with the possibility to skip savepoints by using the config
"execution.checkpointing.prefer-checkpoint-for-recovery" introduced in
FLINK-11159 [2]. This option was later removed in FLINK-20427 [3] because
it could lead to loss of data. Since the introduction of FLIP-193 only
checkpoints are considered during recovery.

Best regards,
Mate

[1]
https://cwiki.apache.org/confluence/display/FLINK/FLIP-193%3A+Snapshots+ownership
[2] https://issues.apache.org/jira/browse/FLINK-11159
[3] https://issues.apache.org/jira/browse/FLINK-20427

Matthias Pohl wrote (on Fri, Jun 7, 2024, 8:55):

> One reason could be that the savepoints are self-contained, owned by the
> user rather than Flink and, therefore, could be moved. Flink wouldn't have
> a proper reference in that case anymore.
>
> I don't have a link to a discussion, though.
>
> Best,
> Matthias
>
> On Fri, Jun 7, 2024 at 8:47 AM Gyula Fóra  wrote:
>
> > Hey Devs!
> >
> > What is the reason / rationale for savepoints being ignored during
> failover
> > scenarios?
> >
> > I see they are not even recorded as the last valid checkpoint in the HA
> > metadata (only the checkpoint id counter is bumped) so if the JM fails
> > after a manually triggered savepoint the job will still fall back to the
> > previous checkpoint instead.
> >
> > I am sure there must have been some discussion around it but I can't find
> > it.
> >
> > Thank you!
> > Gyula
> >
>


[DISCUSS] FLIP-463: Schema Definition in CREATE TABLE AS Statement

2024-06-07 Thread Sergio Pena
HI All,

I'd like to start a discussion on FLIP-463: Schema Definition in CREATE
TABLE AS Statement [1]

The proposal extends the CTAS statement to allow users to define their own
schema by adding columns, primary and partition keys, and table
distribution to the CREATE statement.
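To illustrate, this is the kind of statement the extension would allow
(my own rendering, not the final grammar from the FLIP; all names are
made up):

import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class CtasSchemaSketch {
    public static void main(String[] args) {
        TableEnvironment env =
                TableEnvironment.create(EnvironmentSettings.inStreamingMode());
        // An explicit primary key and partitioning on top of CTAS,
        // which plain CTAS does not allow today.
        env.executeSql(
                "CREATE TABLE target ("
                        + "  PRIMARY KEY (id) NOT ENFORCED"
                        + ") PARTITIONED BY (region) "
                        + "AS SELECT id, region, payload FROM source_table");
    }
}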

Any thoughts are welcome.

Thanks,
- Sergio Pena

[1]
https://cwiki.apache.org/confluence/display/FLINK/FLIP-463%3A+Schema+Definition+in+CREATE+TABLE+AS+Statement


[jira] [Created] (FLINK-35558) [docs] "Edit This Page" tool does not follow contribution guidelines

2024-06-07 Thread Matt Braymer-Hayes (Jira)
Matt Braymer-Hayes created FLINK-35558:
--

 Summary: [docs] "Edit This Page" tool does not follow contribution 
guidelines
 Key: FLINK-35558
 URL: https://issues.apache.org/jira/browse/FLINK-35558
 Project: Flink
  Issue Type: Improvement
  Components: Documentation
Reporter: Matt Braymer-Hayes


h1. Problem

The [documentation 
site|https://nightlies.apache.org/flink/flink-docs-release-1.19/] offers an 
"Edit This Page" button at the bottom of most pages 
([example|https://nightlies.apache.org/flink/flink-docs-release-1.19/docs/try-flink/local_installation/#summary]),
 allowing a user to quickly open a GitHub PR to resolve the issue.

 

Unfortunately this feature uses the version branch (e.g., {{release-1.19}}) 
as the base, whereas the [documentation contribution 
guide|https://flink.apache.org/how-to-contribute/contribute-documentation/#submit-your-contribution]
 expects {{master}} to be the base. Since these release branches are often 
incompatible with {{master}} (i.e., I can't do a simple rebase or merge), I end 
up not being able to use the "Edit This Page" feature and instead have to make 
the change myself on GitHub or locally.
h1. Solution

Edit the anchor 
([source|https://github.com/apache/flink/blob/57869c11687e0053a242c90623779c0c7336cd33/docs/layouts/partials/docs/footer.html#L30])
 to use {{master}} instead of {{.Site.Params.Branch}}. This would lower the 
barrier to entry significantly for docs changes and allow the "Edit This Page" 
feature to work as intended.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-35557) MemoryManager only reserves memory per consumer type once

2024-06-07 Thread Roman Khachatryan (Jira)
Roman Khachatryan created FLINK-35557:
-

 Summary: MemoryManager only reserves memory per consumer type once
 Key: FLINK-35557
 URL: https://issues.apache.org/jira/browse/FLINK-35557
 Project: Flink
  Issue Type: Bug
  Components: Runtime / State Backends, Runtime / Task
Affects Versions: 1.18.1, 1.19.0, 1.17.2, 1.16.3, 1.20.0
Reporter: Roman Khachatryan
Assignee: Roman Khachatryan
 Fix For: 1.20.0, 1.19.1


# In {{MemoryManager.getSharedMemoryResourceForManagedMemory}} we 
[create|https://github.com/apache/flink/blob/57869c11687e0053a242c90623779c0c7336cd33/flink-runtime/src/main/java/org/apache/flink/runtime/memory/MemoryManager.java#L526]
 a reserve function
 # The function 
[decrements|https://github.com/apache/flink/blob/57869c11687e0053a242c90623779c0c7336cd33/flink-runtime/src/main/java/org/apache/flink/runtime/memory/UnsafeMemoryBudget.java#L61]
 the available Slot memory and fails if there's not enough memory
 # We pass it to {{SharedResources.getOrAllocateSharedResource}}
 # In {{SharedResources.getOrAllocateSharedResource}} , we check if the 
resource (memory) was already reserved by some key (e.g. 
{{state-rocks-managed-memory}})
 # If not, we create a new one and call the reserve function
 # If the resource was already reserved (not null), we do NOT reserve the 
memory again: 
https://github.com/apache/flink/blob/57869c11687e0053a242c90623779c0c7336cd33/flink-runtime/src/main/java/org/apache/flink/runtime/memory/SharedResources.java#L71


So there will be only one (the first) memory reservation for RocksDB, for 
example, no matter how many state backends are created, meaning that managed 
memory limits are not enforced.
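In simplified Java, the problematic pattern looks like this (a
stand-in sketch, not the actual Flink classes):

{code:java}
import java.util.HashMap;
import java.util.Map;
import java.util.function.LongConsumer;

// Simplified stand-in for SharedResources.getOrAllocateSharedResource:
// the reserve function runs only when the resource is first created, so
// every later acquirer (e.g. another RocksDB backend in the same slot)
// skips the reservation and the budget is decremented just once.
class SharedResourcesSketch {
    private final Map<String, byte[]> resources = new HashMap<>();

    byte[] getOrAllocate(String key, int size, LongConsumer reserveMemory) {
        byte[] resource = resources.get(key);
        if (resource == null) {
            reserveMemory.accept(size); // budget checked/decremented only here
            resource = new byte[size];
            resources.put(key, resource);
        }
        return resource; // later calls: no reservation at all
    }
}
{code}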



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [DISCUSS] Merge "flink run" and "flink run-application" in Flink 2.0

2024-06-07 Thread Ferenc Csaky
Hi,

Thank you everyone for the valuable comments, if there are no new messages by 
then, I will start a vote on Monday.

Thanks,
Ferenc




On Monday, 3 June 2024 at 17:27, Jeyhun Karimov  wrote:

> 
> 
> Hi Ferenc,
> 
> Thanks for the proposal. +1 for it! This FLIP will improve the user
> experience.
> 
> Regards,
> Jeyhun
> 
> 
> 
> 
> 
> On Mon, Jun 3, 2024 at 1:50 PM Ferenc Csaky ferenc.cs...@pm.me.invalid
> 
> wrote:
> 
> > Hi Hang,
> > 
> > Thank you for your inputs, both points make sense, updated the
> > FLIP according to them.
> > 
> > Best,
> > Ferenc
> > 
> > On Friday, 31 May 2024 at 04:31, Hang Ruan ruanhang1...@gmail.com wrote:
> > 
> > > Hi, Ferenc.
> > > 
> > > +1 for this proposal. This FLIP will help to make the CLI clearer for
> > > users.
> > > 
> > > I think we should better add an example in the FLIP about how to use the
> > > application mode with the new CLI.
> > > Besides that, we need to add some new tests for this change instead of
> > > only
> > > using the existed tests.
> > > 
> > > Best,
> > > Hang
> > > 
> > > Mate Czagany czmat...@gmail.com wrote on Wed, May 29, 2024 at 19:57:
> > > 
> > > > Hi Ferenc,
> > > > 
> > > > Thanks for the FLIP, +1 from me for the proposal. I think these changes
> > > > would be a great solution to all the confusion that comes from these
> > > > two
> > > > action parameters.
> > > > 
> > > > Best regards,
> > > > Mate
> > > > 
> > > > Ferenc Csaky ferenc.cs...@pm.me.invalid wrote (on Tue, May 28, 2024, 16:13):
> > > > 
> > > > > Thank you Xintong for your input.
> > > > > 
> > > > > I prepared a FLIP for this change [1], looking forward for any
> > > > > other opinions.
> > > > > 
> > > > > Thanks,
> > > > > Ferenc
> > > > > 
> > > > > [1]
> > 
> > https://docs.google.com/document/d/1EX74rFp9bMKdfoGkz1ASOM6Ibw32rRxIadX72zs2zoY/edit?usp=sharing
> > 
> > > > > On Friday, 17 May 2024 at 07:04, Xintong Song tonysong...@gmail.com
> > > > > wrote:
> > > > > 
> > > > > > AFAIK, the main purpose of having `run-application` was to make
> > > > > > sure
> > > > > > the user is aware that application mode is used, which executes the
> > > > > > main
> > > > > > method of the user program in JM rather than in client. This was
> > > > > > important
> > > > > > at the time application mode was first introduced, but maybe not
> > > > > > that
> > > > > > important anymore, given that per-job mode is deprecated and likely
> > > > > > removed
> > > > > > in 2.0. Therefore, +1 for the proposal.
> > > > > > 
> > > > > > Best,
> > > > > > 
> > > > > > Xintong
> > > > > > 
> > > > > > On Thu, May 16, 2024 at 11:35 PM Ferenc Csaky
> > > > > > ferenc.cs...@pm.me.invalid
> > > > > > 
> > > > > > wrote:
> > > > > > 
> > > > > > > Hello devs,
> > > > > > > 
> > > > > > > I saw quite some examples when customers were confused about
> > > > > > > run, and
> > > > > > > run-
> > > > > > > application in the Flink CLI and I was wondering about the
> > > > > > > necessity
> > > > > > > of
> > > > > > > deploying
> > > > > > > Application Mode (AM) jobs with a different command, than
> > > > > > > Session and
> > > > > > > Per-Job mode jobs.
> > > > > > > 
> > > > > > > I can see a point that YarnDeploymentTarget [1] and
> > > > > > > KubernetesDeploymentTarget
> > > > > > > [2] are part of their own maven modules and not known in
> > > > > > > flink-clients,
> > > > > > > so the
> > > > > > > deployment mode validations are happening during cluster
> > > > > > > deployment
> > > > > > > in
> > > > > > > their specific
> > > > > > > ClusterDescriptor implementation [3]. Although these are
> > > > > > > implementation
> > > > > > > details that
> > > > > > > IMO should not define user-facing APIs.
> > > > > > > 
> > > > > > > The command line setup is the same for both run and
> > > > > > > run-application,
> > > > > > > so
> > > > > > > I think there
> > > > > > > is a quite simple way to achieve a unified flink run experience,
> > > > > > > but
> > > > > > > I
> > > > > > > might missed
> > > > > > > something so I would appreciate any inputs on this topic.
> > > > > > > 
> > > > > > > Based on my assumptions I think it would be possible to
> > > > > > > deprecate the
> > > > > > > run-
> > > > > > > application in Flink 1.20 and remove it completely in Flink 2.0.
> > > > > > > I
> > > > > > > already put together a
> > > > > > > PoC [4], and I was able to deploy AM jobs like this:
> > > > > > > 
> > > > > > > flink run --target kubernetes-application ...
> > > > > > > 
> > > > > > > If others also agree with this, I would be happy to open a FLIP.
> > > > > > > WDYT?
> > > > > > > 
> > > > > > > Thanks,
> > > > > > > Ferenc
> > > > > > > 
> > > > > > > [1]
> > 
> > https://github.com/apache/flink/blob/master/flink-yarn/src/main/java/org/apache/flink/yarn/configuration/YarnDeploymentTarget.java
> > 
> > > > > > > [2]
> > 
> > https://github.com/apache/flink/blob/master/flink-kubernetes/src/main/java/org/apache/flink/kubernetes/configuration/KubernetesDeploymentTarget.java

[jira] [Created] (FLINK-35556) Wrong constant in RocksDBSharedResourcesFactory.SLOT_SHARED_MANAGED

2024-06-07 Thread Roman Khachatryan (Jira)
Roman Khachatryan created FLINK-35556:
-

 Summary: Wrong constant in 
RocksDBSharedResourcesFactory.SLOT_SHARED_MANAGED
 Key: FLINK-35556
 URL: https://issues.apache.org/jira/browse/FLINK-35556
 Project: Flink
  Issue Type: Bug
Affects Versions: 1.19.0, 1.20.0
Reporter: Roman Khachatryan
Assignee: Roman Khachatryan
 Fix For: 1.20.0, 1.19.1


See 
https://github.com/apache/flink/blob/57869c11687e0053a242c90623779c0c7336cd33/flink-state-backends/flink-statebackend-rocksdb/src/main/java/org/apache/flink/contrib/streaming/state/RocksDBSharedResourcesFactory.java#L39



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-35555) Serializing List with null values throws NPE

2024-06-07 Thread Zhanghao Chen (Jira)
Zhanghao Chen created FLINK-35555:
-

 Summary: Serializing List with null values throws NPE
 Key: FLINK-35555
 URL: https://issues.apache.org/jira/browse/FLINK-35555
 Project: Flink
  Issue Type: Bug
  Components: API / Type Serialization System
Affects Versions: 1.20.0
Reporter: Zhanghao Chen
 Fix For: 1.20.0


FLINK-34123 introduced built-in serialization support for java.util.List, which 
relies on the existing {{ListSerializer}} impl. However, {{ListSerializer}} 
does not allow null values, as it is originally designed for serializing 
{{ListState}} only where null value is explicitly forbidden in the contract.

Directly adding null marker to allow null values will break backwards state 
compatibility, so we'll need to introduce a new List serializer and 
corresponding TypeInformation that allows null values for serializing user 
objects.
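A minimal sketch of the null-marker idea for the new serializer (class
name hypothetical; a real implementation would extend TypeSerializer
and handle snapshots and compatibility):

{code:java}
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import org.apache.flink.api.common.typeutils.TypeSerializer;
import org.apache.flink.core.memory.DataInputView;
import org.apache.flink.core.memory.DataOutputView;

// A boolean marker precedes every element so null entries round-trip;
// the existing ListSerializer stays unchanged for state compatibility.
class NullableListSerializerSketch<T> {
    private final TypeSerializer<T> elementSerializer;

    NullableListSerializerSketch(TypeSerializer<T> elementSerializer) {
        this.elementSerializer = elementSerializer;
    }

    void serialize(List<T> list, DataOutputView target) throws IOException {
        target.writeInt(list.size());
        for (T element : list) {
            target.writeBoolean(element != null); // null marker
            if (element != null) {
                elementSerializer.serialize(element, target);
            }
        }
    }

    List<T> deserialize(DataInputView source) throws IOException {
        int size = source.readInt();
        List<T> list = new ArrayList<>(size);
        for (int i = 0; i < size; i++) {
            list.add(source.readBoolean()
                    ? elementSerializer.deserialize(source)
                    : null);
        }
        return list;
    }
}
{code}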



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [VOTE] Release flink-connector-opensearch v1.2.0, release candidate #1

2024-06-07 Thread Danny Cranmer
Thanks for driving this Sergey.

+1 (binding)

- Release notes look good
- Source archive checksum and signature is correct
- Binary checksum and signature is correct
- Contents of Maven repo looks good
- Verified there are no binaries in the source archive
- Builds from source using Java 8 (-Popensearch1)
- CI run passed
- Tag exists in repo
- NOTICE and LICENSE files present and correct

Thanks,
Danny

On Fri, Jun 7, 2024 at 4:27 AM Rui Fan <1996fan...@gmail.com> wrote:

> Thanks Sergey for the hard work!
>
> +1(binding)
>
> - Verified signatures
> - Verified sha512 (hashsums)
> - The source archives do not contain any binaries
> - Build the source with Maven 3 and java8 (Checked the license as well)
> - Checked Github release tag
> - Reviewed the flink-web PR
>
> Best,
> Rui
>
> On Thu, May 30, 2024 at 8:22 PM weijie guo 
> wrote:
>
> > Thanks Sergey for driving this release!
> >
> > +1(non-binding)
> >
> > 1. Verified signatures and hash sums
> > 2. Build from source with 1.8.0_291 succeeded
> > 3. Checked RN.
> >
> > Best regards,
> >
> > Weijie
> >
> >
> > > Yuepeng Pan wrote on Thu, May 30, 2024 at 10:08:
> >
> > > +1 (non-binding)
> > >
> > > - Built from source code with JDK 1.8 on macOS
> > > - Ran examples locally
> > > - Checked release notes
> > >
> > > Best,
> > > Yuepeng Pan
> > >
> > >
> > > At 2024-05-28 22:53:10, "gongzhongqiang" 
> > > wrote:
> > > >+1(non-binding)
> > > >
> > > >- Verified signatures and hash sums
> > > >- Reviewed the web PR
> > > >- Built from source code with JDK 1.8 on Ubuntu 22.04
> > > >- Checked release notes
> > > >
> > > >Best,
> > > >Zhongqiang Gong
> > > >
> > > >
> > > >Sergey Nuyanzin wrote on Thu, May 16, 2024 at 06:03:
> > > >
> > > >> Hi everyone,
> > > >> Please review and vote on release candidate #1 for
> > > >> flink-connector-opensearch v1.2.0, as follows:
> > > >> [ ] +1, Approve the release
> > > >> [ ] -1, Do not approve the release (please provide specific
> comments)
> > > >>
> > > >>
> > > >> The complete staging area is available for your review, which
> > includes:
> > > >> * JIRA release notes [1],
> > > >> * the official Apache source release to be deployed to
> > dist.apache.org
> > > >> [2],
> > > >> which are signed with the key with fingerprint
> > > >> F7529FAE24811A5C0DF3CA741596BBF0726835D8 [3],
> > > >> * all artifacts to be deployed to the Maven Central Repository [4],
> > > >> * source code tag v1.2.0-rc1 [5],
> > > >> * website pull request listing the new release [6].
> > > >> * CI build of the tag [7].
> > > >>
> > > >> The vote will be open for at least 72 hours. It is adopted by
> majority
> > > >> approval, with at least 3 PMC affirmative votes.
> > > >>
> > > >> Note that this release is for Opensearch v1.x
> > > >>
> > > >> Thanks,
> > > >> Release Manager
> > > >>
> > > >> [1] https://issues.apache.org/jira/projects/FLINK/versions/12353812
> > > >> [2]
> > > >>
> > > >>
> > >
> >
> https://dist.apache.org/repos/dist/dev/flink/flink-connector-opensearch-1.2.0-rc1
> > > >> [3] https://dist.apache.org/repos/dist/release/flink/KEYS
> > > >> [4]
> > > https://repository.apache.org/content/repositories/orgapacheflink-1734
> > > >> [5]
> > > >>
> > > >>
> > >
> >
> https://github.com/apache/flink-connector-opensearch/releases/tag/v1.2.0-rc1
> > > >> [6] https://github.com/apache/flink-web/pull/740
> > > >> [7]
> > > >>
> > > >>
> > >
> >
> https://github.com/apache/flink-connector-opensearch/actions/runs/9102334125
> > > >>
> > >
> >
>


Re: [VOTE] Release flink-connector-opensearch v2.0.0, release candidate #1

2024-06-07 Thread Danny Cranmer
Thanks for driving this Sergey.

+1 (binding)

- Release notes look good
- Source archive checksum and signature is correct
- Binary checksum and signature is correct
- Contents of Maven repo looks good
- Verified there are no binaries in the source archive
- Builds from source using Java 11 (-Popensearch2)
- CI run passed
- Tag exists in repo
- NOTICE and LICENSE files present and correct

Thanks,
Danny

On Fri, Jun 7, 2024 at 4:17 AM Rui Fan <1996fan...@gmail.com> wrote:

> Thanks Sergey for the hard work!
>
> +1(binding)
>
> - Verified signatures
> - Verified sha512 (hashsums)
> - The source archives do not contain any binaries
> - Build the source with Maven 3 and java8 (Checked the license as well)
> - Checked Github release tag
> - Reviewed the flink-web PR
>
> Best,
> Rui
>
> On Mon, May 27, 2024 at 5:35 PM Hang Ruan  wrote:
>
> > +1 (non-binding)
> >
> > - verified signatures
> > - verified hashsums
> > - built from source code with JDK 11 succeed
> > - checked release notes
> > - reviewed the web PR
> >
> > Best,
> > Hang
> >
> > Leonard Xu wrote on Wed, May 22, 2024 at 21:02:
> >
> > >
> > > > +1 (binding)
> > > >
> > > > - verified signatures
> > > > - verified hashsums
> > > > - built from source code with JDK 1.8 succeeded
> > > > - checked Github release tag
> > > > - checked release notes
> > > > - reviewed the web PR
> > >
> > > Supply more information about build from source code with JDK 1.8
> > >
> > > > - built from source code with JDK 1.8 succeeded
> > > It’s correct as we don’t activate opensearch2 profile by default.
> > >
> > > - built from source code with JDK 1.8 and -Popensearch2 failed
> > > - built from source code with JDK 11 and -Popensearch2 succeeded
> > >
> > > Best,
> > > Leonard
> > >
> > >
> > > >
> > > >
> > > >> On May 16, 2024 at 6:58 AM, Andrey Redko wrote:
> > > >>
> > > >> +1 (non-binding), thanks Sergey!
> > > >>
> > > >> On Wed, May 15, 2024, 6:00 p.m. Sergey Nuyanzin <
> snuyan...@gmail.com>
> > > wrote:
> > > >>
> > > >>> Hi everyone,
> > > >>> Please review and vote on release candidate #1 for
> > > >>> flink-connector-opensearch v2.0.0, as follows:
> > > >>> [ ] +1, Approve the release
> > > >>> [ ] -1, Do not approve the release (please provide specific
> comments)
> > > >>>
> > > >>>
> > > >>> The complete staging area is available for your review, which
> > includes:
> > > >>> * JIRA release notes [1],
> > > >>> * the official Apache source release to be deployed to
> > dist.apache.org
> > > >>> [2],
> > > >>> which are signed with the key with fingerprint
> > > >>> F7529FAE24811A5C0DF3CA741596BBF0726835D8 [3],
> > > >>> * all artifacts to be deployed to the Maven Central Repository [4],
> > > >>> * source code tag v2.0.0-rc1 [5],
> > > >>> * website pull request listing the new release [6].
> > > >>> * CI build of the tag [7].
> > > >>>
> > > >>> The vote will be open for at least 72 hours. It is adopted by
> > majority
> > > >>> approval, with at least 3 PMC affirmative votes.
> > > >>>
> > > >>> Note that this release is for Opensearch v2.x
> > > >>>
> > > >>> Thanks,
> > > >>> Release Manager
> > > >>>
> > > >>> [1]
> https://issues.apache.org/jira/projects/FLINK/versions/12354674
> > > >>> [2]
> > > >>>
> > > >>>
> > >
> >
> https://dist.apache.org/repos/dist/dev/flink/flink-connector-opensearch-2.0.0-rc1
> > > >>> [3] https://dist.apache.org/repos/dist/release/flink/KEYS
> > > >>> [4]
> > > >>>
> > >
> https://repository.apache.org/content/repositories/orgapacheflink-1735/
> > > >>> [5]
> > > >>>
> > > >>>
> > >
> >
> https://github.com/apache/flink-connector-opensearch/releases/tag/v2.0.0-rc1
> > > >>> [6] https://github.com/apache/flink-web/pull/741
> > > >>> [7]
> > > >>>
> > > >>>
> > >
> >
> https://github.com/apache/flink-connector-opensearch/actions/runs/9102980808
> > > >>>
> > > >
> > >
> > >
> >
>


[ANNOUNCE] Apache flink-connector-kafka 3.2.0 released

2024-06-07 Thread Danny Cranmer
The Apache Flink community is very happy to announce the release of Apache
flink-connector-kafka 3.2.0 for Flink 1.18 and 1.19.

Apache Flink® is an open-source stream processing framework for
distributed, high-performing, always-available, and accurate data streaming
applications.

The release is available for download at:
https://flink.apache.org/downloads.html

The full release notes are available in Jira:
https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12354209

We would like to thank all contributors of the Apache Flink community who
made this release possible!

Regards,
Danny


[ANNOUNCE] Apache flink-connector-jdbc 3.2.0 released

2024-06-07 Thread Danny Cranmer
The Apache Flink community is very happy to announce the release of Apache
flink-connector-jdbc 3.2.0 for Flink 1.18 and 1.19.

Apache Flink® is an open-source stream processing framework for
distributed, high-performing, always-available, and accurate data streaming
applications.

The release is available for download at:
https://flink.apache.org/downloads.html

The full release notes are available in Jira:
https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12353143

We would like to thank all contributors of the Apache Flink community who
made this release possible!

Regards,
Danny


[RESULT][VOTE] flink-connector-kafka 3.2.0, release candidate #1

2024-06-07 Thread Danny Cranmer
I'm happy to announce that we have unanimously approved this release.

There are 9 approving votes, 3 of which are binding:
* Muhammet Orazov
* Ahmed Hamdy
* Hang Ruan
* Aleksandr Pilipenko
* Leonard Xu (binding)
* Fabian Paul
* Martijn Visser (binding)
* Yanquan Lv
* Danny Cranmer (binding)

There are no disapproving votes.

Thanks,
Danny


Re: [VOTE] Release flink-connector-kafka v3.2.0, release candidate #1

2024-06-07 Thread Danny Cranmer
Thanks all. This vote is now closed, I will announce the results in a
separate thread.

On Fri, Jun 7, 2024 at 11:45 AM Danny Cranmer 
wrote:

> +1 (binding)
>
> - Release notes look good
> - Source archive checksum and signature is correct
> - Binary checksum and signature is correct
> - Contents of Maven repo looks good
> - Verified there are no binaries in the source archive
> - Builds from source, tests pass using Java 8
> - CI run passed [1]
> - Tag exists in repo
> - NOTICE and LICENSE files present and correct
>
> Thanks,
> Danny
>
> [1]
> https://github.com/apache/flink-connector-kafka/actions/runs/8785158288
>
>
> On Fri, Jun 7, 2024 at 7:19 AM Yanquan Lv  wrote:
>
>> +1 (non-binding)
>>
>> - verified gpg signatures
>> - verified sha512 hash
>> - built from source code with java 8/11/17
>> - checked Github release tag
>> - checked the CI result
>> - checked release notes
>>
>> Danny Cranmer wrote on Mon, Apr 22, 2024 at 21:56:
>>
>> > Hi everyone,
>> >
>> > Please review and vote on release candidate #1 for flink-connector-kafka
>> > v3.2.0, as follows:
>> > [ ] +1, Approve the release
>> > [ ] -1, Do not approve the release (please provide specific comments)
>> >
>> > This release supports Flink 1.18 and 1.19.
>> >
>> > The complete staging area is available for your review, which includes:
>> > * JIRA release notes [1],
>> > * the official Apache source release to be deployed to dist.apache.org
>> > [2],
>> > which are signed with the key with fingerprint 125FD8DB [3],
>> > * all artifacts to be deployed to the Maven Central Repository [4],
>> > * source code tag v3.2.0-rc1 [5],
>> > * website pull request listing the new release [6].
>> > * CI build of the tag [7].
>> >
>> > The vote will be open for at least 72 hours. It is adopted by majority
>> > approval, with at least 3 PMC affirmative votes.
>> >
>> > Thanks,
>> > Danny
>> >
>> > [1]
>> >
>> >
>> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12354209
>> > [2]
>> >
>> >
>> https://dist.apache.org/repos/dist/dev/flink/flink-connector-kafka-3.2.0-rc1
>> > [3] https://dist.apache.org/repos/dist/release/flink/KEYS
>> > [4]
>> https://repository.apache.org/content/repositories/orgapacheflink-1723
>> > [5]
>> > https://github.com/apache/flink-connector-kafka/releases/tag/v3.2.0-rc1
>> > [6] https://github.com/apache/flink-web/pull/738
>> > [7] https://github.com/apache/flink-connector-kafka
>> >
>>
>


Re: [VOTE] Release flink-connector-kafka v3.2.0, release candidate #1

2024-06-07 Thread Danny Cranmer
+1 (binding)

- Release notes look good
- Source archive checksum and signature is correct
- Binary checksum and signature is correct
- Contents of Maven repo looks good
- Verified there are no binaries in the source archive
- Builds from source, tests pass using Java 8
- CI run passed [1]
- Tag exists in repo
- NOTICE and LICENSE files present and correct

Thanks,
Danny

[1] https://github.com/apache/flink-connector-kafka/actions/runs/8785158288


On Fri, Jun 7, 2024 at 7:19 AM Yanquan Lv  wrote:

> +1 (non-binding)
>
> - verified gpg signatures
> - verified sha512 hash
> - built from source code with java 8/11/17
> - checked Github release tag
> - checked the CI result
> - checked release notes
>
> Danny Cranmer wrote on Mon, Apr 22, 2024 at 21:56:
>
> > Hi everyone,
> >
> > Please review and vote on release candidate #1 for flink-connector-kafka
> > v3.2.0, as follows:
> > [ ] +1, Approve the release
> > [ ] -1, Do not approve the release (please provide specific comments)
> >
> > This release supports Flink 1.18 and 1.19.
> >
> > The complete staging area is available for your review, which includes:
> > * JIRA release notes [1],
> > * the official Apache source release to be deployed to dist.apache.org
> > [2],
> > which are signed with the key with fingerprint 125FD8DB [3],
> > * all artifacts to be deployed to the Maven Central Repository [4],
> > * source code tag v3.2.0-rc1 [5],
> > * website pull request listing the new release [6].
> > * CI build of the tag [7].
> >
> > The vote will be open for at least 72 hours. It is adopted by majority
> > approval, with at least 3 PMC affirmative votes.
> >
> > Thanks,
> > Danny
> >
> > [1]
> >
> >
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12354209
> > [2]
> >
> >
> https://dist.apache.org/repos/dist/dev/flink/flink-connector-kafka-3.2.0-rc1
> > [3] https://dist.apache.org/repos/dist/release/flink/KEYS
> > [4]
> https://repository.apache.org/content/repositories/orgapacheflink-1723
> > [5]
> > https://github.com/apache/flink-connector-kafka/releases/tag/v3.2.0-rc1
> > [6] https://github.com/apache/flink-web/pull/738
> > [7] https://github.com/apache/flink-connector-kafka
> >
>


Re: [Discuss] Non-retriable (partial) errors in Elasticsearch 8 connector

2024-06-07 Thread Ahmed Hamdy
Hi Mingliang,

We already have a mechanism for detecting and propagating
Fatal/Non-retryable exceptions[1]. We can use that in ElasticSearch similar
to what we do for AWS connectors[2]. Also, you can check AWS connectors for
how to add a fail-fast mechanism to disable retrying all along.

> FLIP-451 proposes timeout for retrying which helps with un-acknowledged
> requests, but not addressing the case when request gets processed and
> failed items keep failing no matter how many times we retry. Correct me if
> I'm wrong
>
Yes, you are correct; this is mainly to mitigate the issues arising from
incorrect handling of requests by sink implementations.
Failure handling itself has always been assumed to be the sink
implementation's responsibility. This is done on 3 levels:
- Classifying Fatal exceptions as mentioned above
- Adding configuration to disable retries as mentioned above as well.
- Adding mechanism to limit retries as in the proposed ticket for AWS
connectors[3]

In my opinion, at least 1 and 3 are useful in this case: adding
classifiers and retry mechanisms for Elasticsearch.
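As a rough illustration (a simplified stand-in, not the actual
FatalExceptionClassifier API linked in [1]; the status codes follow
this discussion):

import java.util.function.Consumer;
import java.util.function.IntPredicate;

public class ElasticsearchErrorClassifierSketch {
    // Treat these HTTP statuses as non-retryable, per this discussion.
    private static final IntPredicate NON_RETRYABLE =
            status -> status == 400 || status == 404 || status == 409;

    static void handleFailedItem(
            int status, Runnable retry, Consumer<Exception> fatalConsumer) {
        if (NON_RETRYABLE.test(status)) {
            // Fail fast instead of retrying forever.
            fatalConsumer.accept(new RuntimeException(
                    "Bulk item failed with non-retryable status " + status));
        } else {
            retry.run();
        }
    }
}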

Or we can allow users to configure
> "drop/fail" behavior for non-retriable errors
>

I am not sure I follow this proposal, but in general, while "dropping"
records seems to boost reliability, it breaks at-least-once semantics,
and without proper tracing and debugging mechanisms we would be
shooting ourselves in the foot.


1-
https://github.com/apache/flink/blob/master/flink-connectors/flink-connector-base/src/main/java/org/apache/flink/connector/base/sink/throwable/FatalExceptionClassifier.java
2-
https://github.com/apache/flink-connector-aws/blob/c688a8545ac1001c8450e8c9c5fe8bbafa13aeba/flink-connector-aws/flink-connector-dynamodb/src/main/java/org/apache/flink/connector/dynamodb/sink/DynamoDbSinkWriter.java#L227

3-https://issues.apache.org/jira/browse/FLINK-35541
Best Regards
Ahmed Hamdy


On Thu, 6 Jun 2024 at 06:53, Mingliang Liu  wrote:

> Hi all,
>
> Currently the Elasticsearch 8 connector retries all items if the request
> fails as a whole, and retries failed items if the request has partial
> failures [1]. I think this infinite retrying might be problematic in some
> cases where retrying can never eventually succeed. For example, if the
> request is 400 (bad request) or 404 (not found), retries do not help. If
> there are too many non-retriable failed items, new requests will get
> processed less effectively. In extreme cases, it may stall the pipeline if
> in-flight requests are occupied by those failed items.
>
> FLIP-451 proposes timeout for retrying which helps with un-acknowledged
> requests, but not addressing the case when request gets processed and
> failed items keep failing no matter how many times we retry. Correct me if
> I'm wrong.
>
> One opinionated option is to fail fast for non-retriable errors like 400 /
> 404 and to drop items for 409. Or we can allow users to configure
> "drop/fail" behavior for non-retriable errors. I prefer the latter. I
> checked how LogStash ingests data to Elasticsearch and it takes a similar
> approach for non-retriable errors [2]. In my day job, we have a
> dead-letter-queue in AsyncSinkWriter for failed entries that exhaust
> retries. I guess that is too specific to our setup and seems an overkill
> here for Elasticsearch connector.
>
> Any thoughts on this?
>
> [1]
>
> https://github.com/apache/flink-connector-elasticsearch/blob/5d1f8d03e3cff197ed7fe30b79951e44808b48fe/flink-connector-elasticsearch8/src/main/java/org/apache/flink/connector/elasticsearch/sink/Elasticsearch8AsyncWriter.java#L152-L170
> [2]
>
> https://github.com/logstash-plugins/logstash-output-elasticsearch/blob/main/lib/logstash/plugin_mixins/elasticsearch/common.rb#L283-L304
>


Re: [VOTE] Release 1.19.1, release candidate #1

2024-06-07 Thread Xiqian YU
+1 (non-binding)


  *   Checked download links & release tags
  *   Verified that package checksums matched
  *   Compiled Flink from source code with JDK 8 / 11
  *   Ran E2e data integration test jobs on local cluster

Regards,
yux

From: Rui Fan <1996fan...@gmail.com>
Date: Friday, June 7, 2024 at 17:14
To: dev@flink.apache.org 
Subject: Re: [VOTE] Release 1.19.1, release candidate #1
+1(binding)

- Reviewed the flink-web PR (Left some comments)
- Checked Github release tag
- Verified signatures
- Verified sha512 (hashsums)
- The source archives do not contain any binaries
- Build the source with Maven 3 and java8 (Checked the license as well)
- Start the cluster locally with jdk8, and run the StateMachineExample job,
it works fine.

Best,
Rui

On Thu, Jun 6, 2024 at 11:39 PM Hong Liang  wrote:

> Hi everyone,
> Please review and vote on the release candidate #1 for the flink v1.19.1,
> as follows:
> [ ] +1, Approve the release
> [ ] -1, Do not approve the release (please provide specific comments)
>
>
> The complete staging area is available for your review, which includes:
> * JIRA release notes [1],
> * the official Apache source release and binary convenience releases to be
> deployed to dist.apache.org [2], which are signed with the key with
> fingerprint B78A5EA1 [3],
> * all artifacts to be deployed to the Maven Central Repository [4],
> * source code tag "release-1.19.1-rc1" [5],
> * website pull request listing the new release and adding announcement blog
> post [6].
>
> The vote will be open for at least 72 hours. It is adopted by majority
> approval, with at least 3 PMC affirmative votes.
>
> Thanks,
> Hong
>
> [1]
>
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12354399
> [2] https://dist.apache.org/repos/dist/dev/flink/flink-1.19.1-rc1/
> [3] https://dist.apache.org/repos/dist/release/flink/KEYS
> [4]
> https://repository.apache.org/content/repositories/orgapacheflink-1736/
> [5] https://github.com/apache/flink/releases/tag/release-1.19.1-rc1
> [6] https://github.com/apache/flink-web/pull/745
>


Re: [DISCUSS] FLIP-461: Synchronize rescaling with checkpoint creation to minimize reprocessing

2024-06-07 Thread Zakelly Lan
Hi Matthias,

Thanks for your reply!

That's something that could be considered as another optimization. I would
> consider this as a possible follow-up. My concern here is that we'd make
> the rescaling configuration even more complicated by introducing yet
> another parameter.


I'd be fine with considering this as a follow-up.

It might be worth renaming the internal interface into something that
> indicates its internal usage to avoid confusion.
>

Agree with this.

And another question:
I noticed the existing options under 'jobmanager.adaptive-scheduler' are
using the word 'scaling', e.g.
'jobmanager.adaptive-scheduler.scaling-interval.min'. While in this FLIP
you choose 'rescale'. Would you mind unifying them?


Best,
Zakelly


On Thu, Jun 6, 2024 at 10:57 PM David Morávek 
wrote:

> Thanks for the FLIP Matthias, I think it looks pretty solid!
>
> I also don't see a relation to unaligned checkpoints. From the AS
> perspective, the checkpoint time doesn't matter.
>
> Is it possible a change event observed right after a complete checkpoint
> > (or within a specific short time after a checkpoint) that triggers a
> > rescale immediately? Sometimes the checkpoint interval is huge and it is
> > better to rescale immediately.
> >
>
> I had considered this initially too, but it feels like a possible follow-up
> optimization.
>
> The primary objective of the proposed solution is to enhance overall
> predictability. With a longer checkpointing interval, the current situation
> worsens as we might have to reprocess a substantial backlog.
>
> I think in the future we might actually want to enhance this by triggering
> some kind of specialized "rescaling" checkpoint that prepares the cluster
> for rescaling (eg. by replicating state to new slots / pre-splitting the
> db, ...), to make things faster.
>
> Best,
> D.
>
> On Wed, Jun 5, 2024 at 4:34 PM Matthias Pohl  wrote:
>
> > Hi Zakelly,
> > thanks for your reply. See my inlined responses below:
> >
> > On Wed, Jun 5, 2024 at 10:26 AM Zakelly Lan 
> wrote:
> >
> > > Hi Matthias,
> > >
> > > Thanks for your proposal! I have a few questions:
> > >
> > > 1. Is it possible a change event observed right after a complete
> > checkpoint
> > > (or within a specific short time after a checkpoint) that triggers a
> > > rescale immediately? Sometimes the checkpoint interval is huge and it
> is
> > > better to rescale immediately.
> > >
> >
> > That's something that could be considered as another optimization. I
> would
> > consider this as a possible follow-up. My concern here is that we'd make
> > the rescaling configuration even more complicated by introducing yet
> > another parameter.
> >
> >
> > > 2. Should we introduce `CheckpointLifecycleListener` instead of reusing
> > > `CheckpointListener`? Is `CheckpointListener` enough for this scenario?
> > >
> >
> > Good point, they are serving similar purposes. But I'm hesitant to use
> > CheckpointListener (which is a public interface) for this internal quite
> > narrowly scoped runtime-specific use case of FLIP-461.
> >
> > It might be worth renaming the internal interface into something that
> > indicates its internal usage to avoid confusion.
> >
> >
> > > Best,
> > > Zakelly
> > >
> > > On Wed, Jun 5, 2024 at 3:02 PM Matthias Pohl 
> wrote:
> > >
> > > > Hi ConradJam,
> > > > thanks for your response.
> > > >
> > > > The CheckpointStatsTracker gets notified about the checkpoint
> > completion
> > > > after the checkpoint is finalized, i.e. all its data is persisted and
> > the
> > > > metadata is written to the CompletedCheckpointStore. At this moment,
> > the
> > > > checkpoint is considered for restoring a job and, therefore, becomes
> > > > available for restarts. This workflow also applies to unaligned
> > > > checkpoints. But I see how this context might be helpful for
> > > understanding
> > > > the change. I will add it to the FLIP. So far, I don't see a reason
> > > > to disable the feature for unaligned checkpoints. Do you see other
> > issues
> > > > that might make it necessary to disable this feature for this type of
> > > > checkpoints?
> > > >
> > > > Can you elaborate a bit more what you mean by "checkpoints that do
> not
> > > > check it"? I do not fully understand what you are referring to with
> > "it"
> > > > here.
> > > >
> > > > Best,
> > > > Matthias
> > > >
> > > > On Wed, Jun 5, 2024 at 4:46 AM ConradJam 
> wrote:
> > > >
> > > > > I have a few questions:
> > > > > Unaligned checkpoints Do we need to enable this feature? Whether
> this
> > > > > feature should be disabled for checkpoints that do not check it
> > > > >
> > > > > > Matthias Pohl wrote on Tue, Jun 4, 2024 at 18:03:
> > > > >
> > > > > > Hi everyone,
> > > > > > I'd like to discuss FLIP-461 [1]. The FLIP proposes the
> > > synchronization
> > > > > of
> > > > > > rescaling and the completion of checkpoints. The idea is to
> reduce
> > > the
> > > > > > amount of data that needs to be processed after rescaling
> > happened. A
> > > > > more
> > > > 

Re: [RESULT][VOTE] flink-connector-jdbc 3.2.0, release candidate #1

2024-06-07 Thread Danny Cranmer
Apologies, this was RC2, not RC1.

On Fri, Jun 7, 2024 at 11:12 AM Danny Cranmer 
wrote:

> I'm happy to announce that we have unanimously approved this release.
>
> There are 7 approving votes, 3 of which are binding:
> * Ahmed Hamdy
> * Hang Ruan
> * Leonard Xu (binding)
> * Yuepeng Pan
> * Zhongqiang Gong
> * Rui Fan (binding)
> * Weijie Guo (binding)
>
> There was one -1 vote that was cancelled.
> * Yuepeng Pan (CANCELLED)
>
> Thanks everyone!
>


[RESULT][VOTE] flink-connector-jdbc 3.2.0, release candidate #1

2024-06-07 Thread Danny Cranmer
I'm happy to announce that we have unanimously approved this release.

There are 7 approving votes, 3 of which are binding:
* Ahmed Hamdy
* Hang Ruan
* Leonard Xu (binding)
* Yuepeng Pan
* Zhongqiang Gong
* Rui Fan (binding)
* Weijie Guo (binding)

There was one -1 vote that was cancelled.
* Yuepeng Pan (CANCELLED)

Thanks everyone!


Re: [VOTE] Release flink-connector-jdbc v3.2.0, release candidate #2

2024-06-07 Thread Danny Cranmer
Thanks all. This vote is now closed, I will announce the results in a
separate thread.


On Fri, Jun 7, 2024 at 4:59 AM weijie guo  wrote:

> Thanks Danny!
>
> +1(binding)
> - Verified signatures and hash sums
> - Checked the CI build
> - Checked the release note
> - Reviewed the flink-web PR
> - Build from source.
>
> Best regards,
>
> Weijie
>
>
> Rui Fan <1996fan...@gmail.com> wrote on Fri, Jun 7, 2024 at 11:08:
>
> > Thanks Danny for the hard work!
> >
> > +1(binding)
> >
> > - Verified signatures
> > - Verified sha512 (hashsums)
> > - The source archives do not contain any binaries
> > - Build the source with Maven 3 and java8 (Checked the license as well)
> > - Checked Github release tag
> > - Reviewed the flink-web PR
> >
> > Best,
> > Rui
> >
> > On Tue, Jun 4, 2024 at 1:31 PM gongzhongqiang  >
> > wrote:
> >
> > > +1 (non-binding)
> > >
> > > - Validated checksum hash and signature.
> > > - Confirmed that no binaries exist in the source archive.
> > > - Built the source with JDK 8.
> > > - Verified the web PR.
> > > - Ensured the JAR is built by JDK 8.
> > >
> > > Best,
> > > Zhongqiang Gong
> > >
> > > Danny Cranmer wrote on Thu, Apr 18, 2024 at 18:20:
> > >
> > > > Hi everyone,
> > > >
> > > > Please review and vote on the release candidate #1 for the version
> > 3.2.0,
> > > > as follows:
> > > > [ ] +1, Approve the release
> > > > [ ] -1, Do not approve the release (please provide specific comments)
> > > >
> > > > This release supports Flink 1.18 and 1.19.
> > > >
> > > > The complete staging area is available for your review, which
> includes:
> > > > * JIRA release notes [1],
> > > > * the official Apache source release to be deployed to
> dist.apache.org
> > > > [2],
> > > > which are signed with the key with fingerprint 125FD8DB [3],
> > > > * all artifacts to be deployed to the Maven Central Repository [4],
> > > > * source code tag v3.2.0-rc1 [5],
> > > > * website pull request listing the new release [6].
> > > > * CI run of tag [7].
> > > >
> > > > The vote will be open for at least 72 hours. It is adopted by
> majority
> > > > approval, with at least 3 PMC affirmative votes.
> > > >
> > > > Thanks,
> > > > Danny
> > > >
> > > > [1]
> > > >
> > > >
> > >
> >
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12353143
> > > > [2]
> > > >
> > >
> >
> https://dist.apache.org/repos/dist/dev/flink/flink-connector-jdbc-3.2.0-rc2
> > > > [3] https://dist.apache.org/repos/dist/release/flink/KEYS
> > > > [4]
> > > >
> > https://repository.apache.org/content/repositories/orgapacheflink-1718/
> > > > [5]
> > > https://github.com/apache/flink-connector-jdbc/releases/tag/v3.2.0-rc2
> > > > [6] https://github.com/apache/flink-web/pull/734
> > > > [7]
> > > https://github.com/apache/flink-connector-jdbc/actions/runs/8736019099
> > > >
> > >
> >
>


[ANNOUNCE] Apache flink-connector-cassandra 3.2.0 released

2024-06-07 Thread Danny Cranmer
The Apache Flink community is very happy to announce the release of Apache
flink-connector-cassandra 3.2.0 for Flink 1.18 and 1.19.

Apache Flink® is an open-source stream processing framework for
distributed, high-performing, always-available, and accurate data streaming
applications.

The release is available for download at:
https://flink.apache.org/downloads.html

The full release notes are available in Jira:
https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12353148

We would like to thank all contributors of the Apache Flink community who
made this release possible!

Regards,
Danny


Re: [VOTE] FLIP-459: Support Flink hybrid shuffle integration with Apache Celeborn

2024-06-07 Thread weijie guo
+1 (binding)

Best regards,

Weijie


Zhu Zhu wrote on Fri, Jun 7, 2024 at 16:48:

> +1 (binding)
>
> Thanks,
> Zhu
>
> Xintong Song wrote on Fri, Jun 7, 2024 at 16:08:
>
> > +1 (binding)
> >
> > Best,
> >
> > Xintong
> >
> >
> >
> > On Fri, Jun 7, 2024 at 4:03 PM Yuxin Tan  wrote:
> >
> > > Hi everyone,
> > >
> > > Thanks for all the feedback about the FLIP-459 Support Flink
> > > hybrid shuffle integration with Apache Celeborn[1].
> > > The discussion thread is here [2].
> > >
> > > I'd like to start a vote for it. The vote will be open for at least
> > > 72 hours unless there is an objection or insufficient votes.
> > >
> > > [1]
> > >
> > >
> >
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-459%3A+Support+Flink+hybrid+shuffle+integration+with+Apache+Celeborn
> > > [2] https://lists.apache.org/thread/gy7sm7qrf7yrv1rl5f4vtk5fo463ts33
> > >
> > > Best,
> > > Yuxin
> > >
> >
>


[RESULT][VOTE] flink-connector-cassandra 3.2.0, release candidate #1

2024-06-07 Thread Danny Cranmer
I'm happy to announce that we have unanimously approved this release.

There are 6 approving votes, 3 of which are binding:
* Muhammet Orazov
* Ahmed Hamdy
* Hang Ruan
* Weijie Guo (binding)
* Leonard Xu (binding)
* Rui Fan (binding)

There are no disapproving votes.

Thanks,
Danny


Re: [VOTE] Release flink-connector-cassandra v3.2.0, release candidate #1

2024-06-07 Thread Danny Cranmer
Thanks all. This vote is now closed, I will announce the results in a
separate thread.

On Fri, Jun 7, 2024 at 5:02 AM weijie guo  wrote:

> Thanks Danny!
>
> +1(binding)
>
> - Verified signatures and hash sum
> - Checked the CI build from tag
> - Build from source
> - Reviewed flink-web PR
>
> Best regards,
>
> Weijie
>
>
> Rui Fan <1996fan...@gmail.com> wrote on Fri, Jun 7, 2024 at 11:01:
>
>> Thanks Danny for the hard work!
>>
>> +1(binding)
>>
>> - Verified signatures
>> - Verified sha512 (hashsums)
>> - The source archives do not contain any binaries
>> - Build the source with Maven 3 and java8 (Checked the license as well)
>> - Checked Github release tag
>> - Reviewed the flink-web PR
>>
>> Best,
>> Rui
>>
>> On Wed, May 22, 2024 at 8:01 PM Leonard Xu  wrote:
>>
>> > +1 (binding)
>> >
>> > - verified signatures
>> > - verified hashsums
>> > - built from source code with java 1.8 succeeded
>> > - checked Github release tag
>> > - checked release notes status which only left one issue is used for
>> > release tracking
>> > - reviewed the web PR
>> >
>> > Best,
>> > Leonard
>> >
> >> > > On May 22, 2024 at 6:10 PM, weijie guo wrote:
>> > >
>> > > +1(non-binding)
>> > >
>> > > -Validated checksum hash
>> > > -Verified signature
>> > > -Build from source
>> > >
>> > > Best regards,
>> > >
>> > > Weijie
>> > >
>> > >
> >> > > Hang Ruan wrote on Wed, May 22, 2024 at 10:12:
>> > >
>> > >> +1 (non-binding)
>> > >>
>> > >> - Validated checksum hash
>> > >> - Verified signature
>> > >> - Verified that no binaries exist in the source archive
>> > >> - Build the source with Maven and jdk8
>> > >> - Verified web PR
>> > >> - Check that the jar is built by jdk8
>> > >>
>> > >> Best,
>> > >> Hang
>> > >>
> >> > >> Muhammet Orazov wrote on Wed, May 22, 2024 at 04:15:
>> > >>
>> > >>> Hey all,
>> > >>>
>> > >>> Could we please get some more votes to proceed with the release?
>> > >>>
>> > >>> Thanks and best,
>> > >>> Muhammet
>> > >>>
>> > >>> On 2024-04-22 13:04, Danny Cranmer wrote:
>> >  Hi everyone,
>> > 
>> >  Please review and vote on release candidate #1 for
>> >  flink-connector-cassandra v3.2.0, as follows:
>> >  [ ] +1, Approve the release
>> >  [ ] -1, Do not approve the release (please provide specific
>> comments)
>> > 
>> >  This release supports Flink 1.18 and 1.19.
>> > 
>> >  The complete staging area is available for your review, which
>> > includes:
>> >  * JIRA release notes [1],
>> >  * the official Apache source release to be deployed to
>> > dist.apache.org
>> >  [2],
>> >  which are signed with the key with fingerprint 125FD8DB [3],
>> >  * all artifacts to be deployed to the Maven Central Repository [4],
>> >  * source code tag v3.2.0-rc1 [5],
>> >  * website pull request listing the new release [6].
>> >  * CI build of the tag [7].
>> > 
>> >  The vote will be open for at least 72 hours. It is adopted by
>> majority
>> >  approval, with at least 3 PMC affirmative votes.
>> > 
>> >  Thanks,
>> >  Danny
>> > 
>> >  [1]
>> > 
>> > >>>
>> > >>
>> >
>> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12353148
>> >  [2]
>> > 
>> > >>>
>> > >>
>> >
>> https://dist.apache.org/repos/dist/dev/flink/flink-connector-cassandra-3.2.0-rc1
>> >  [3] https://dist.apache.org/repos/dist/release/flink/KEYS
>> >  [4]
>> > 
>> > https://repository.apache.org/content/repositories/orgapacheflink-1722
>> >  [5]
>> > 
>> > >>>
>> > >>
>> >
>> https://github.com/apache/flink-connector-cassandra/releases/tag/v3.2.0-rc1
>> >  [6] https://github.com/apache/flink-web/pull/737
>> >  [7]
>> > 
>> > >>>
>> > >>
>> >
>> https://github.com/apache/flink-connector-cassandra/actions/runs/8784310241
>> > >>>
>> > >>
>> >
>> >
>>
>


[ANNOUNCE] Apache flink-connector-aws 4.3.0 released

2024-06-07 Thread Danny Cranmer
The Apache Flink community is very happy to announce the release of Apache
flink-connector-aws 4.3.0 for Flink 1.18 and 1.19.

Apache Flink® is an open-source stream processing framework for
distributed, high-performing, always-available, and accurate data streaming
applications.

The release is available for download at:
https://flink.apache.org/downloads.html

The full release notes are available in Jira:
https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12353793

We would like to thank all contributors of the Apache Flink community who
made this release possible!

Regards,
Danny Cranmer


[RESULT][VOTE] flink-connector-aws 4.3.0, release candidate #2

2024-06-07 Thread Danny Cranmer
I'm happy to announce that we have unanimously approved this release.

There are 7 approving votes, 3 of which are binding:
* Hang Ruan
* Ahmed Hamdy
* Aleksandr Pilipenko
* Leonard Xu (binding)
* Zhongqiang Gong
* Rui Fan (binding)
* Weijie Guo (binding)

There are no disapproving votes.

Thanks,
Danny


[jira] [Created] (FLINK-35554) usrlib is not added to classpath when using containers

2024-06-07 Thread Josh England (Jira)
Josh England created FLINK-35554:


 Summary: usrlib is not added to classpath when using containers
 Key: FLINK-35554
 URL: https://issues.apache.org/jira/browse/FLINK-35554
 Project: Flink
  Issue Type: Bug
  Components: flink-docker
Affects Versions: 1.19.0
 Environment: Docker
Reporter: Josh England


We use flink-docker to create a "standalone" application, with a Dockerfile 
like...
 
{code:java}
FROM flink:1.18.1-java17
COPY application.jar /opt/flink/usrlib/artifacts/
{code}

However, after upgrading to 1.19.0 we found our application would not start. We 
saw errors like the following in the logs:


{noformat}
org.apache.flink.util.FlinkException: Could not load the provided entrypoint 
class.
   at 
org.apache.flink.client.program.DefaultPackagedProgramRetriever.getPackagedProgram(DefaultPackagedProgramRetriever.java:230)
   at 
org.apache.flink.container.entrypoint.StandaloneApplicationClusterEntryPoint.getPackagedProgram(StandaloneApplicationClusterEntryPoint.java:149)
   at 
org.apache.flink.container.entrypoint.StandaloneApplicationClusterEntryPoint.lambda$main$0(StandaloneApplicationClusterEntryPoint.java:90)
   at 
org.apache.flink.runtime.security.contexts.NoOpSecurityContext.runSecured(NoOpSecurityContext.java:28)
   at 
org.apache.flink.container.entrypoint.StandaloneApplicationClusterEntryPoint.main(StandaloneApplicationClusterEntryPoint.java:89)
   Caused by: org.apache.flink.client.program.ProgramInvocationException: The 
program's entry point class 'X' was not found in the jar file.
{noformat}

We were able to fix the issue by placing the application.jar in /opt/flink/lib 
instead. My guess is that the usrlib directory isn't being added to the 
classpath by the shell scripts that launch Flink from a container.
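
For reference, a minimal Dockerfile sketch of the workaround (the 1.19.0 image 
tag is assumed; /opt/flink/lib appears to always be on the classpath):

{code:java}
FROM flink:1.19.0-java17
# Workaround: ship the job jar via /opt/flink/lib (always on the classpath)
# instead of /opt/flink/usrlib, which is not picked up in 1.19.0.
COPY application.jar /opt/flink/lib/
{code}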




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [VOTE] Release flink-connector-aws v4.3.0, release candidate #2

2024-06-07 Thread Danny Cranmer
Thanks all, this vote is now closed, I will announce the results on a
separate thread.

On Fri, Jun 7, 2024 at 4:58 AM weijie guo  wrote:

> Thanks Danny!
>
> +1(binding)
>
> - Verified signatures and hashsums
> - Build from source
> - Checked release tag
> - Reviewed the flink-web PR
> - Checked the CI build.
>
> Best regards,
>
> Weijie
>
>
> On Fri, Jun 7, 2024 at 11:00 AM, Rui Fan <1996fan...@gmail.com> wrote:
>
> > Thanks Danny for the hard work!
> >
> > +1(binding)
> >
> > - Verified signatures
> > - Verified sha512 (hashsums)
> > - The source archives do not contain any binaries
> > - Build the source with Maven 3 and java8 (Checked the license as well)
> > - Checked Github release tag
> > - Reviewed the flink-web PR
> >
> > Best,
> > Rui
> >
> > On Fri, May 31, 2024 at 11:47 AM gongzhongqiang <
> gongzhongqi...@apache.org
> > >
> > wrote:
> >
> > > +1 (non-binding)
> > >
> > > - Validated the checksum hash and signature.
> > > - No binaries exist in the source archive.
> > > - Built the source with JDK 8 succeed.
> > > - Verified the flink-web PR.
> > > - Ensured the JAR is built by JDK 8.
> > >
> > > Best,
> > > Zhongqiang Gong
> > >
> > > On Fri, Apr 19, 2024 at 6:08 PM, Danny Cranmer wrote:
> > >
> > > > Hi everyone,
> > > >
> > > > Please review and vote on release candidate #2 for
> flink-connector-aws
> > > > v4.3.0, as follows:
> > > > [ ] +1, Approve the release
> > > > [ ] -1, Do not approve the release (please provide specific comments)
> > > >
> > > > This version supports Flink 1.18 and 1.19.
> > > >
> > > > The complete staging area is available for your review, which
> includes:
> > > > * JIRA release notes [1],
> > > > * the official Apache source release to be deployed to
> dist.apache.org
> > > > [2],
> > > > which are signed with the key with fingerprint 125FD8DB [3],
> > > > * all artifacts to be deployed to the Maven Central Repository [4],
> > > > * source code tag v4.3.0-rc2 [5],
> > > > * website pull request listing the new release [6].
> > > > * CI build of the tag [7].
> > > >
> > > > The vote will be open for at least 72 hours. It is adopted by
> majority
> > > > approval, with at least 3 PMC affirmative votes.
> > > >
> > > > Thanks,
> > > > Release Manager
> > > >
> > > > [1]
> > > >
> > > >
> > >
> >
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12353793
> > > > [2]
> > > >
> > >
> >
> https://dist.apache.org/repos/dist/dev/flink/flink-connector-aws-4.3.0-rc2
> > > > [3] https://dist.apache.org/repos/dist/release/flink/KEYS
> > > > [4]
> > > >
> > https://repository.apache.org/content/repositories/orgapacheflink-1721/
> > > > [5]
> > > https://github.com/apache/flink-connector-aws/releases/tag/v4.3.0-rc2
> > > > [6] https://github.com/apache/flink-web/pull/733
> > > > [7]
> > > https://github.com/apache/flink-connector-aws/actions/runs/8751694197
> > > >
> > >
> >
>


[ANNOUNCE] Apache flink-connector-gcp-pubsub 3.1.0 released

2024-06-07 Thread Danny Cranmer
The Apache Flink community is very happy to announce the release of Apache
flink-connector-gcp-pubsub 3.1.0 for Flink 1.18 and 1.19.

Apache Flink® is an open-source stream processing framework for
distributed, high-performing, always-available, and accurate data streaming
applications.

The release is available for download at:
https://flink.apache.org/downloads.html

The full release notes are available in Jira:
https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12353813

We would like to thank all contributors of the Apache Flink community who
made this release possible!

Regards,
Danny


[jira] [Created] (FLINK-35553) Integrate newly added trigger interface with checkpointing

2024-06-07 Thread Matthias Pohl (Jira)
Matthias Pohl created FLINK-35553:
-

 Summary: Integrate newly added trigger interface with checkpointing
 Key: FLINK-35553
 URL: https://issues.apache.org/jira/browse/FLINK-35553
 Project: Flink
  Issue Type: Sub-task
  Components: Runtime / Checkpointing, Runtime / Coordination
Reporter: Matthias Pohl


This connects the newly introduced trigger logic (FLINK-35551) with the newly 
added checkpoint lifecycle listening feature (FLINK-35552).



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-35552) Move CheckpointStatsTracker out of ExecutionGraph into Scheduler

2024-06-07 Thread Matthias Pohl (Jira)
Matthias Pohl created FLINK-35552:
-

 Summary: Move CheckpointStatsTracker out of ExecutionGraph into 
Scheduler
 Key: FLINK-35552
 URL: https://issues.apache.org/jira/browse/FLINK-35552
 Project: Flink
  Issue Type: Sub-task
  Components: Runtime / Checkpointing, Runtime / Coordination
Reporter: Matthias Pohl


The scheduler needs to know about the CheckpointStatsTracker to allow listening 
to checkpoint failures and completion.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-35551) Introduces RescaleManager#onTrigger endpoint

2024-06-07 Thread Matthias Pohl (Jira)
Matthias Pohl created FLINK-35551:
-

 Summary: Introduces RescaleManager#onTrigger endpoint
 Key: FLINK-35551
 URL: https://issues.apache.org/jira/browse/FLINK-35551
 Project: Flink
  Issue Type: Sub-task
Reporter: Matthias Pohl


The new endpoint would allow us to separate observing change events from 
actually triggering the rescale operation.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-35550) Introduce new component RescaleManager

2024-06-07 Thread Matthias Pohl (Jira)
Matthias Pohl created FLINK-35550:
-

 Summary: Introduce new component RescaleManager
 Key: FLINK-35550
 URL: https://issues.apache.org/jira/browse/FLINK-35550
 Project: Flink
  Issue Type: Sub-task
  Components: Runtime / Coordination
Reporter: Matthias Pohl


The goal here is to collect the rescaling logic in a single component to 
improve testability.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-35549) FLIP-461: Synchronize rescaling with checkpoint creation to minimize reprocessing for the AdaptiveScheduler

2024-06-07 Thread Matthias Pohl (Jira)
Matthias Pohl created FLINK-35549:
-

 Summary: FLIP-461: Synchronize rescaling with checkpoint creation 
to minimize reprocessing for the AdaptiveScheduler
 Key: FLINK-35549
 URL: https://issues.apache.org/jira/browse/FLINK-35549
 Project: Flink
  Issue Type: Improvement
  Components: Runtime / Checkpointing, Runtime / Coordination
Affects Versions: 1.20.0
Reporter: Matthias Pohl


This is the umbrella issue for implementing 
[FLIP-461|https://cwiki.apache.org/confluence/display/FLINK/FLIP-461%3A+Synchronize+rescaling+with+checkpoint+creation+to+minimize+reprocessing+for+the+AdaptiveScheduler]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[RESULT][VOTE] flink-connector-gcp-pubsub 3.1.0, release candidate #1

2024-06-07 Thread Danny Cranmer
I'm happy to announce that we have unanimously approved this release.

There are 5 approving votes, 3 of which are binding:
* Ahmed Hamdy
* Hang Ruan
* Leonard Xu (binding)
* Rui Fan (binding)
* Danny Cranmer (binding)

There are no disapproving votes.

Thanks everyone!


Re: [VOTE] Release 1.19.1, release candidate #1

2024-06-07 Thread Rui Fan
+1(binding)

- Reviewed the flink-web PR (Left some comments)
- Checked Github release tag
- Verified signatures
- Verified sha512 (hashsums)
- The source archives do not contain any binaries
- Build the source with Maven 3 and java8 (Checked the license as well)
- Start the cluster locally with jdk8, and run the StateMachineExample job,
it works fine.

Best,
Rui

On Thu, Jun 6, 2024 at 11:39 PM Hong Liang  wrote:

> Hi everyone,
> Please review and vote on the release candidate #1 for the flink v1.19.1,
> as follows:
> [ ] +1, Approve the release
> [ ] -1, Do not approve the release (please provide specific comments)
>
>
> The complete staging area is available for your review, which includes:
> * JIRA release notes [1],
> * the official Apache source release and binary convenience releases to be
> deployed to dist.apache.org [2], which are signed with the key with
> fingerprint B78A5EA1 [3],
> * all artifacts to be deployed to the Maven Central Repository [4],
> * source code tag "release-1.19.1-rc1" [5],
> * website pull request listing the new release and adding announcement blog
> post [6].
>
> The vote will be open for at least 72 hours. It is adopted by majority
> approval, with at least 3 PMC affirmative votes.
>
> Thanks,
> Hong
>
> [1]
>
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12354399
> [2] https://dist.apache.org/repos/dist/dev/flink/flink-1.19.1-rc1/
> [3] https://dist.apache.org/repos/dist/release/flink/KEYS
> [4]
> https://repository.apache.org/content/repositories/orgapacheflink-1736/
> [5] https://github.com/apache/flink/releases/tag/release-1.19.1-rc1
> [6] https://github.com/apache/flink-web/pull/745
>


Re: [VOTE] Release flink-connector-gcp-pubsub v3.1.0, release candidate #1

2024-06-07 Thread Danny Cranmer
This vote is now closed, I will announce the results on a separate thread.

Thanks all.

On Fri, Jun 7, 2024 at 10:10 AM Danny Cranmer 
wrote:

> +1 (binding)
>
> - Release notes look good
> - Source archive checksum and signature is correct
> - Binary checksum and signature is correct
> - Contents of Maven repo looks good
> - Verified there are no binaries in the source archive
> - Builds from source, tests pass using Java 8
> - CI run passed
> - Tag exists in repo
> - NOTICE and LICENSE files present and correct
>
> Thanks,
> Danny
>
> On Fri, Jun 7, 2024 at 4:43 AM Rui Fan <1996fan...@gmail.com> wrote:
>
>> Thanks Danny for the hard work!
>>
>> +1(binding)
>>
>> - Verified signatures
>> - Verified sha512 (hashsums)
>> - The source archives do not contain any binaries
>> - Build the source with Maven 3 and java8 (Checked the license as well)
>> - Checked Github release tag
>> - Reviewed the flink-web PR
>>
>> Best,
>> Rui
>>
>> On Wed, May 22, 2024 at 8:07 PM Leonard Xu  wrote:
>>
>> >
>> > +1 (binding)
>> >
>> > - verified signatures
>> > - verified hashsums
>> > - built from source code with java 1.8 succeeded
>> > - checked Github release tag
>> > - checked release notes
>> > - reviewed the web PR
>> >
>> > Best,
>> > Leonard
>> >
>> > > On Apr 21, 2024 at 9:52 PM, Hang Ruan wrote:
>> > >
>> > > +1 (non-binding)
>> > >
>> > > - Validated checksum hash
>> > > - Verified signature
>> > > - Verified that no binaries exist in the source archive
>> > > - Build the source with Maven and jdk8
>> > > - Verified web PR
>> > > - Check that the jar is built by jdk8
>> > >
>> > > Best,
>> > > Hang
>> > >
>> > >> On Thu, Apr 18, 2024 at 8:01 PM, Ahmed Hamdy wrote:
>> > >
>> > >> Hi Danny,
>> > >> +1 (non-binding)
>> > >>
>> > >> -  verified hashes and checksums
>> > >> - verified signature
>> > >> - verified source contains no binaries
>> > >> - tag exists in github
>> > >> - reviewed web PR
>> > >>
>> > >> Best Regards
>> > >> Ahmed Hamdy
>> > >>
>> > >>
>> > >>> On Thu, 18 Apr 2024 at 11:32, Danny Cranmer
>> > >> wrote:
>> > >>
>> > >>> Hi everyone,
>> > >>>
>> > >>> Please review and vote on release candidate #1 for
>> > >>> flink-connector-gcp-pubsub v3.1.0, as follows:
>> > >>> [ ] +1, Approve the release
>> > >>> [ ] -1, Do not approve the release (please provide specific
>> comments)
>> > >>>
>> > >>> This release supports Flink 1.18 and 1.19.
>> > >>>
>> > >>> The complete staging area is available for your review, which
>> includes:
>> > >>> * JIRA release notes [1],
>> > >>> * the official Apache source release to be deployed to
>> dist.apache.org
>> > >>> [2],
>> > >>> which are signed with the key with fingerprint 125FD8DB [3],
>> > >>> * all artifacts to be deployed to the Maven Central Repository [4],
>> > >>> * source code tag v3.1.0-rc1 [5],
>> > >>> * website pull request listing the new release [6].
>> > >>> * CI build of the tag [7].
>> > >>>
>> > >>> The vote will be open for at least 72 hours. It is adopted by
>> majority
>> > >>> approval, with at least 3 PMC affirmative votes.
>> > >>>
>> > >>> Thanks,
>> > >>> Danny
>> > >>>
>> > >>> [1]
>> > >>>
>> > >>>
>> > >>
>> >
>> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12353813
>> > >>> [2]
>> > >>>
>> > >>>
>> > >>
>> >
>> https://dist.apache.org/repos/dist/dev/flink/flink-connector-gcp-pubsub-3.1.0-rc1
>> > >>> [3] https://dist.apache.org/repos/dist/release/flink/KEYS
>> > >>> [4]
>> > >>
>> https://repository.apache.org/content/repositories/orgapacheflink-1720
>> > >>> [5]
>> > >>>
>> > >>>
>> > >>
>> >
>> https://github.com/apache/flink-connector-gcp-pubsub/releases/tag/v3.1.0-rc1
>> > >>> [6] https://github.com/apache/flink-web/pull/736/files
>> > >>> [7]
>> > >>>
>> > >>>
>> > >>
>> >
>> https://github.com/apache/flink-connector-gcp-pubsub/actions/runs/8735952883
>> > >>>
>> > >>
>> >
>> >
>>
>


Re: [VOTE] Release flink-connector-gcp-pubsub v3.1.0, release candidate #1

2024-06-07 Thread Danny Cranmer
+1 (binding)

- Release notes look good
- Source archive checksum and signature is correct
- Binary checksum and signature is correct
- Contents of Maven repo looks good
- Verified there are no binaries in the source archive
- Builds from source, tests pass using Java 8
- CI run passed
- Tag exists in repo
- NOTICE and LICENSE files present and correct

Thanks,
Danny

On Fri, Jun 7, 2024 at 4:43 AM Rui Fan <1996fan...@gmail.com> wrote:

> Thanks Danny for the hard work!
>
> +1(binding)
>
> - Verified signatures
> - Verified sha512 (hashsums)
> - The source archives do not contain any binaries
> - Build the source with Maven 3 and java8 (Checked the license as well)
> - Checked Github release tag
> - Reviewed the flink-web PR
>
> Best,
> Rui
>
> On Wed, May 22, 2024 at 8:07 PM Leonard Xu  wrote:
>
> >
> > +1 (binding)
> >
> > - verified signatures
> > - verified hashsums
> > - built from source code with java 1.8 succeeded
> > - checked Github release tag
> > - checked release notes
> > - reviewed the web PR
> >
> > Best,
> > Leonard
> >
> > > On Apr 21, 2024 at 9:52 PM, Hang Ruan wrote:
> > >
> > > +1 (non-binding)
> > >
> > > - Validated checksum hash
> > > - Verified signature
> > > - Verified that no binaries exist in the source archive
> > > - Build the source with Maven and jdk8
> > > - Verified web PR
> > > - Check that the jar is built by jdk8
> > >
> > > Best,
> > > Hang
> > >
> > > On Thu, Apr 18, 2024 at 8:01 PM, Ahmed Hamdy wrote:
> > >
> > >> Hi Danny,
> > >> +1 (non-binding)
> > >>
> > >> -  verified hashes and checksums
> > >> - verified signature
> > >> - verified source contains no binaries
> > >> - tag exists in github
> > >> - reviewed web PR
> > >>
> > >> Best Regards
> > >> Ahmed Hamdy
> > >>
> > >>
> > >> On Thu, 18 Apr 2024 at 11:32, Danny Cranmer 
> > >> wrote:
> > >>
> > >>> Hi everyone,
> > >>>
> > >>> Please review and vote on release candidate #1 for
> > >>> flink-connector-gcp-pubsub v3.1.0, as follows:
> > >>> [ ] +1, Approve the release
> > >>> [ ] -1, Do not approve the release (please provide specific comments)
> > >>>
> > >>> This release supports Flink 1.18 and 1.19.
> > >>>
> > >>> The complete staging area is available for your review, which
> includes:
> > >>> * JIRA release notes [1],
> > >>> * the official Apache source release to be deployed to
> dist.apache.org
> > >>> [2],
> > >>> which are signed with the key with fingerprint 125FD8DB [3],
> > >>> * all artifacts to be deployed to the Maven Central Repository [4],
> > >>> * source code tag v3.1.0-rc1 [5],
> > >>> * website pull request listing the new release [6].
> > >>> * CI build of the tag [7].
> > >>>
> > >>> The vote will be open for at least 72 hours. It is adopted by
> majority
> > >>> approval, with at least 3 PMC affirmative votes.
> > >>>
> > >>> Thanks,
> > >>> Danny
> > >>>
> > >>> [1]
> > >>>
> > >>>
> > >>
> >
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12353813
> > >>> [2]
> > >>>
> > >>>
> > >>
> >
> https://dist.apache.org/repos/dist/dev/flink/flink-connector-gcp-pubsub-3.1.0-rc1
> > >>> [3] https://dist.apache.org/repos/dist/release/flink/KEYS
> > >>> [4]
> > >>
> https://repository.apache.org/content/repositories/orgapacheflink-1720
> > >>> [5]
> > >>>
> > >>>
> > >>
> >
> https://github.com/apache/flink-connector-gcp-pubsub/releases/tag/v3.1.0-rc1
> > >>> [6] https://github.com/apache/flink-web/pull/736/files
> > >>> [7]
> > >>>
> > >>>
> > >>
> >
> https://github.com/apache/flink-connector-gcp-pubsub/actions/runs/8735952883
> > >>>
> > >>
> >
> >
>


Re: [VOTE] FLIP-459: Support Flink hybrid shuffle integration with Apache Celeborn

2024-06-07 Thread Zhu Zhu
+1 (binding)

Thanks,
Zhu

On Fri, Jun 7, 2024 at 4:08 PM, Xintong Song wrote:

> +1 (binding)
>
> Best,
>
> Xintong
>
>
>
> On Fri, Jun 7, 2024 at 4:03 PM Yuxin Tan  wrote:
>
> > Hi everyone,
> >
> > Thanks for all the feedback about the FLIP-459 Support Flink
> > hybrid shuffle integration with Apache Celeborn[1].
> > The discussion thread is here [2].
> >
> > I'd like to start a vote for it. The vote will be open for at least
> > 72 hours unless there is an objection or insufficient votes.
> >
> > [1]
> >
> >
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-459%3A+Support+Flink+hybrid+shuffle+integration+with+Apache+Celeborn
> > [2] https://lists.apache.org/thread/gy7sm7qrf7yrv1rl5f4vtk5fo463ts33
> >
> > Best,
> > Yuxin
> >
>


Re: [DISCUSSION] FLIP-456: CompiledPlan support for Batch Execution Mode

2024-06-07 Thread Fabian Paul
Thanks, Alexey, for the proposal. I think this is a nice addition that
finally fixes the gap in the CompiledPlan. +1

Best,
Fabian

On Tue, May 14, 2024 at 1:19 AM Alexey Leonov-Vendrovskiy <
vendrov...@gmail.com> wrote:

> Thanks Jim.
>
>
>
> > 1. For the testing, I'd call the tests "execution" tests rather than
> > "restore" tests.  For streaming execution, restore tests have the
> compiled
> > plan and intermediate state; the tests verify that those can work
> together
> > and continue processing.
>
>
> Agree that we don't need to store and restore the intermediate state. So
> the most critical part is that the CompiledPlan for batch can be executed.
>
> 2. The FLIP implicitly suggests "completeness tests" (to use FLIP-190's
> > words).  Do we need "change detection tests"?  I'm a little unsure if
> that
> > is presently happening in an automatic way for streaming operators.
>
>
>  We might need to elaborate more on this, but the idea is that  we need to
> make sure that compiled plans created by an older version of SQL Planner
> are executable on newer runtimes.
>
> 3.  Can we remove old versions of batch operators eventually?  Or do we
> > need to keep them forever like we would for streaming operators?
> >
>
> We could have deprecation paths for old operator nodes in some cases. It is
> a matter of the time window: what "time distance" between the query planner
> and the Flink runtime is practical for resubmitting the query.
> Note, here we don't have continuous queries, so there is always an option
> to "re-plan" the original SQL query text into a newer version of the
> CompiledPlan.
> With this in mind, a time window of 1yr+ would allow deprecation of older
> batch exec nodes, though I don't see this as a frequent event.
>
> -Alexey
>
>
>
> On Mon, May 13, 2024 at 1:52 PM Jim Hughes 
> wrote:
>
> > Hi Alexey,
> >
> > After some thought, I have a question about deprecations:
> >
> > 3.  Can we remove old versions of batch operators eventually?  Or do we
> > need to keep them forever like we would for streaming operators?
> >
> > Cheers,
> >
> > Jim
> >
> > On Thu, May 9, 2024 at 11:29 AM Jim Hughes  wrote:
> >
> > > Hi Alexey,
> > >
> > > Overall, the FLIP looks good and makes sense to me.
> > >
> > > 1. For the testing, I'd call the tests "execution" tests rather than
> > > "restore" tests.  For streaming execution, restore tests have the
> > compiled
> > > plan and intermediate state; the tests verify that those can work
> > together
> > > and continue processing.
> > >
> > > For batch execution, I think we just want that all existing compiled
> > plans
> > > can be executed in future versions.
> > >
> > > 2. The FLIP implicitly suggests "completeness tests" (to use FLIP-190's
> > > words).  Do we need "change detection tests"?  I'm a little unsure if
> > that
> > > is presently happening in an automatic way for streaming operators.
> > >
> > > In RestoreTestBase, generateTestSetupFiles is disabled and has to be
> run
> > > manually when tests are being written.
> > >
> > > Cheers,
> > >
> > > Jim
> > >
> > > On Tue, May 7, 2024 at 5:11 AM Paul Lam  wrote:
> > >
> > >> Hi Alexey,
> > >>
> > >> Thanks a lot for bringing up the discussion. I’m big +1 for the FLIP.
> > >>
> > >> I suppose the goal doesn’t involve the interchangeability of json
> plans
> > >> between batch mode and streaming mode, right?
> > >> In other words, a json plan compiled in a batch program can’t be run
> in
> > >> streaming mode without a migration (which is not yet supported).
> > >>
> > >> Best,
> > >> Paul Lam
> > >>
> > >> > On May 7, 2024 at 2:38 PM, Alexey Leonov-Vendrovskiy wrote:
> > >> >
> > >> > Hi everyone,
> > >> >
> > >> > PTAL at the proposed FLIP-456: CompiledPlan support for Batch
> > Execution
> > >> > Mode. It is pretty self-describing.
> > >> >
> > >> > Any thoughts are welcome!
> > >> >
> > >> > Thanks,
> > >> > Alexey
> > >> >
> > >> > [1]
> > >> >
> > >>
> >
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-456%3A+CompiledPlan+support+for+Batch+Execution+Mode
> > >> > .
> > >>
> > >>
> >
>
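
For context, the "re-plan" option described above corresponds roughly to the 
following Table API usage (a sketch using the existing CompiledPlan methods; 
FLIP-456 is what would make this path work in batch mode, and the table names 
and file path are placeholders):

{code:java}
import org.apache.flink.table.api.CompiledPlan;
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.PlanReference;
import org.apache.flink.table.api.TableEnvironment;

public class BatchCompiledPlanSketch {
    public static void main(String[] args) {
        TableEnvironment tEnv = TableEnvironment.create(EnvironmentSettings.inBatchMode());

        // Compile with the current planner and persist the plan.
        CompiledPlan plan = tEnv.compilePlanSql("INSERT INTO sink SELECT * FROM source");
        plan.writeToFile("/tmp/batch-plan.json");

        // Later, on a newer runtime: execute the stored plan, or re-plan the
        // original SQL text into a fresh CompiledPlan if the old exec nodes
        // were deprecated in the meantime.
        tEnv.loadPlan(PlanReference.fromFile("/tmp/batch-plan.json")).execute();
    }
}
{code}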


Request :: delta-flink sample Example/use case

2024-06-07 Thread SNEHASISH DUTTA
Hi,
I am evaluating delta-flink "3.2.0" , and trying to write a
Datastream[My_Custom_Class]
Flink version "1.16.3" ,  it would be of great help if someone can share
some examples either in Java or Scala APIs.

Thanks and Regards,
Sneh
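
A minimal Java sketch of one way to do this (assuming delta-flink's 
io.delta.flink.sink.DeltaSink and its forRowData builder; the custom class has 
to be mapped to Flink's RowData first, shown here with plain strings as a 
stand-in):

{code:java}
import java.util.Arrays;

import org.apache.flink.core.fs.Path;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.data.GenericRowData;
import org.apache.flink.table.data.RowData;
import org.apache.flink.table.data.StringData;
import org.apache.flink.table.types.logical.RowType;
import org.apache.flink.table.types.logical.VarCharType;

import org.apache.hadoop.conf.Configuration;

import io.delta.flink.sink.DeltaSink;

public class DeltaSinkSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Delta needs a RowType describing the target table schema.
        RowType rowType = new RowType(Arrays.asList(
                new RowType.RowField("name", new VarCharType(VarCharType.MAX_LENGTH))));

        // Stand-in for DataStream[My_Custom_Class]: map each element to RowData.
        DataStream<RowData> rows = env.fromElements("a", "b", "c")
                .map(v -> (RowData) GenericRowData.of(StringData.fromString(v)))
                .returns(RowData.class);

        rows.sinkTo(DeltaSink
                .forRowData(new Path("file:///tmp/delta-table"), new Configuration(), rowType)
                .build());

        env.execute("delta-flink sink sketch");
    }
}
{code}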


Re: [VOTE] FLIP-459: Support Flink hybrid shuffle integration with Apache Celeborn

2024-06-07 Thread Xintong Song
+1 (binding)

Best,

Xintong



On Fri, Jun 7, 2024 at 4:03 PM Yuxin Tan  wrote:

> Hi everyone,
>
> Thanks for all the feedback about the FLIP-459 Support Flink
> hybrid shuffle integration with Apache Celeborn[1].
> The discussion thread is here [2].
>
> I'd like to start a vote for it. The vote will be open for at least
> 72 hours unless there is an objection or insufficient votes.
>
> [1]
>
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-459%3A+Support+Flink+hybrid+shuffle+integration+with+Apache+Celeborn
> [2] https://lists.apache.org/thread/gy7sm7qrf7yrv1rl5f4vtk5fo463ts33
>
> Best,
> Yuxin
>


[VOTE] FLIP-459: Support Flink hybrid shuffle integration with Apache Celeborn

2024-06-07 Thread Yuxin Tan
Hi everyone,

Thanks for all the feedback about the FLIP-459 Support Flink
hybrid shuffle integration with Apache Celeborn[1].
The discussion thread is here [2].

I'd like to start a vote for it. The vote will be open for at least
72 hours unless there is an objection or insufficient votes.

[1]
https://cwiki.apache.org/confluence/display/FLINK/FLIP-459%3A+Support+Flink+hybrid+shuffle+integration+with+Apache+Celeborn
[2] https://lists.apache.org/thread/gy7sm7qrf7yrv1rl5f4vtk5fo463ts33

Best,
Yuxin


[jira] [Created] (FLINK-35548) Add E2E tests for PubSubSinkV2

2024-06-07 Thread Ahmed Hamdy (Jira)
Ahmed Hamdy created FLINK-35548:
---

 Summary: Add E2E tests for PubSubSinkV2
 Key: FLINK-35548
 URL: https://issues.apache.org/jira/browse/FLINK-35548
 Project: Flink
  Issue Type: Sub-task
Reporter: Ahmed Hamdy
Assignee: Ahmed Hamdy
 Fix For: gcp-pubsub-3.2.0


Refactor Google PubSub source to use Unified Sink API 
[FLIP-143|https://cwiki.apache.org/confluence/display/FLINK/FLIP-143%3A+Unified+Sink+API]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-35547) Sources support HybridSource implements

2024-06-07 Thread linqigeng (Jira)
linqigeng created FLINK-35547:
-

 Summary: Sources support HybridSource implements
 Key: FLINK-35547
 URL: https://issues.apache.org/jira/browse/FLINK-35547
 Project: Flink
  Issue Type: New Feature
  Components: Flink CDC
Reporter: linqigeng


Since slave instances are generally provisioned with smaller specifications than 
the primary instance, they are more likely to restart when encountering 
performance bottlenecks (e.g. CPU, memory, disk), and they lag behind the 
primary by a certain delay. In order not to affect running Flink CDC jobs and to 
improve effectiveness, here is an idea:

Sources could adopt HybridSource (see the sketch below), so that we can:
 * in the snapshot stage, have readers fetch data from the slave instances
 * after the snapshot phase ends, switch to reading the binlog from the primary 
instance
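
A minimal sketch of the wiring (the two sources are hypothetical placeholders; 
HybridSource itself is Flink's existing 
org.apache.flink.connector.base.source.hybrid.HybridSource):

{code:java}
import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.api.connector.source.Source;
import org.apache.flink.connector.base.source.hybrid.HybridSource;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class CdcHybridSourceSketch {
    // 'snapshotFromReplica' and 'binlogFromPrimary' are hypothetical sources;
    // HybridSource switches to the next source once the previous (bounded)
    // source finishes, which matches the snapshot-then-binlog idea above.
    public static void run(Source<String, ?, ?> snapshotFromReplica,
                           Source<String, ?, ?> binlogFromPrimary) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        HybridSource<String> source =
                HybridSource.builder(snapshotFromReplica)
                        .addSource(binlogFromPrimary)
                        .build();

        env.fromSource(source, WatermarkStrategy.noWatermarks(), "cdc-hybrid-source")
                .print();
        env.execute("cdc-hybrid-sketch");
    }
}
{code}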



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: Savepoints not considered during failover

2024-06-07 Thread Matthias Pohl
One reason could be that the savepoints are self-contained, owned by the
user rather than Flink and, therefore, could be moved. Flink wouldn't have
a proper reference in that case anymore.

I don't have a link to a discussion, though.

Best,
Matthias

On Fri, Jun 7, 2024 at 8:47 AM Gyula Fóra  wrote:

> Hey Devs!
>
> What is the reason / rationale for savepoints being ignored during failover
> scenarios?
>
> I see they are not even recorded as the last valid checkpoint in the HA
> metadata (only the checkpoint id counter is bumped) so if the JM fails
> after a manually triggered savepoint the job will still fall back to the
> previous checkpoint instead.
>
> I am sure there must have been some discussion around it but I can't find
> it.
>
> Thank you!
> Gyula
>


Savepoints not considered during failover

2024-06-07 Thread Gyula Fóra
Hey Devs!

What is the reason / rationale for savepoints being ignored during failover
scenarios?

I see they are not even recorded as the last valid checkpoint in the HA
metadata (only the checkpoint id counter is bumped) so if the JM fails
after a manually triggered savepoint the job will still fall back to the
previous checkpoint instead.

I am sure there must have been some discussion around it but I can't find it.

Thank you!
Gyula


[jira] [Created] (FLINK-35546) Elasticsearch 8 connector fails fast for non-retryable bulk request items

2024-06-07 Thread Mingliang Liu (Jira)
Mingliang Liu created FLINK-35546:
-

 Summary: Elasticsearch 8 connector fails fast for non-retryable 
bulk request items
 Key: FLINK-35546
 URL: https://issues.apache.org/jira/browse/FLINK-35546
 Project: Flink
  Issue Type: Improvement
  Components: Connectors / ElasticSearch
Reporter: Mingliang Liu


Discussion thread: 
[https://lists.apache.org/thread/yrf0mmbch0lhk3rgkz94fr0x5qz2417l]

{quote}
Currently the Elasticsearch 8 connector retries all items if the request fails 
as a whole, and retries failed items if the request has partial failures 
[[1|https://github.com/apache/flink-connector-elasticsearch/blob/5d1f8d03e3cff197ed7fe30b79951e44808b48fe/flink-connector-elasticsearch8/src/main/java/org/apache/flink/connector/elasticsearch/sink/Elasticsearch8AsyncWriter.java#L152-L170]\].
 I think these infinite retries might be problematic in some cases where 
retrying can never eventually succeed. For example, if the request is 400 (bad 
request) or 404 (not found), retries do not help. If there are too many 
non-retriable failed items, new requests will get processed less effectively. In 
extreme cases, it may stall the pipeline if in-flight requests are occupied by 
those failed items.

FLIP-451 proposes a timeout for retrying, which helps with un-acknowledged 
requests, but it does not address the case where the request gets processed and 
failed items keep failing no matter how many times we retry. Correct me if I'm 
wrong.

One opinionated option is to fail fast for non-retriable errors like 400 / 404 
and to drop items for 409. Or we can allow users to configure "drop/fail" 
behavior for non-retriable errors. I prefer the latter. I checked how Logstash 
ingests data to Elasticsearch and it takes a similar approach for non-retriable 
errors 
[[2|https://github.com/logstash-plugins/logstash-output-elasticsearch/blob/main/lib/logstash/plugin_mixins/elasticsearch/common.rb#L283-L304]\].
 In my day job, we have a dead-letter queue in AsyncSinkWriter for failed 
entries that exhaust retries. I guess that is too specific to our setup and 
seems overkill here for the Elasticsearch connector.
{quote}
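
To make the proposed policy concrete, here is a minimal classification sketch 
(illustrative names only, not the connector's actual API; the 400/404/409 
mappings follow the discussion above, while the 429/5xx mappings are 
assumptions):

{code:java}
public class BulkFailurePolicySketch {
    enum Handling { RETRY, DROP, FAIL }

    // Classify one bulk-response item's HTTP status into the proposed behavior.
    static Handling classify(int status) {
        switch (status) {
            case 400: // bad request: retrying can never succeed
            case 404: // not found: retrying can never succeed
                return Handling.FAIL;
            case 409: // conflict: proposed to drop the item
                return Handling.DROP;
            case 429: // too many requests: assumed transient, so retry
                return Handling.RETRY;
            default:  // assumption: server-side (5xx) errors are transient
                return status >= 500 ? Handling.RETRY : Handling.FAIL;
        }
    }
}
{code}

A user-facing "drop/fail" option could then simply override the FAIL branch 
with DROP for the non-retriable statuses.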



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [VOTE] Release flink-connector-kafka v3.2.0, release candidate #1

2024-06-07 Thread Yanquan Lv
+1 (non-binding)

- verified gpg signatures
- verified sha512 hash
- built from source code with java 8/11/17
- checked Github release tag
- checked the CI result
- checked release notes

On Mon, Apr 22, 2024 at 9:56 PM, Danny Cranmer wrote:

> Hi everyone,
>
> Please review and vote on release candidate #1 for flink-connector-kafka
> v3.2.0, as follows:
> [ ] +1, Approve the release
> [ ] -1, Do not approve the release (please provide specific comments)
>
> This release supports Flink 1.18 and 1.19.
>
> The complete staging area is available for your review, which includes:
> * JIRA release notes [1],
> * the official Apache source release to be deployed to dist.apache.org
> [2],
> which are signed with the key with fingerprint 125FD8DB [3],
> * all artifacts to be deployed to the Maven Central Repository [4],
> * source code tag v3.2.0-rc1 [5],
> * website pull request listing the new release [6].
> * CI build of the tag [7].
>
> The vote will be open for at least 72 hours. It is adopted by majority
> approval, with at least 3 PMC affirmative votes.
>
> Thanks,
> Danny
>
> [1]
>
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12354209
> [2]
>
> https://dist.apache.org/repos/dist/dev/flink/flink-connector-kafka-3.2.0-rc1
> [3] https://dist.apache.org/repos/dist/release/flink/KEYS
> [4] https://repository.apache.org/content/repositories/orgapacheflink-1723
> [5]
> https://github.com/apache/flink-connector-kafka/releases/tag/v3.2.0-rc1
> [6] https://github.com/apache/flink-web/pull/738
> [7] https://github.com/apache/flink-connector-kafka
>