Re: [VOTE] Release flink-connector-mongodb v1.1.0, release candidate #2

2024-01-30 Thread gongzhongqiang
+1(non-binding)

- Signatures and Checksums are good
- No binaries in the source archive
- Tag is present
- Build succeeded with JDK 8 on Ubuntu 22.04


Leonard Xu wrote on Tue, Jan 30, 2024 at 18:23:

> Hey all,
>
> Please help review and vote on the release candidate #2 for the version
> v1.1.0 of the
> Apache Flink MongoDB Connector as follows:
>
> [ ] +1, Approve the release
> [ ] -1, Do not approve the release (please provide specific comments)
>
> The complete staging area is available for your review, which includes:
> * JIRA release notes [1],
> * The official Apache source release to be deployed to dist.apache.org
> [2],
> which are signed with the key with fingerprint
> 5B2F6608732389AEB67331F5B197E1F1108998AD [3],
> * All artifacts to be deployed to the Maven Central Repository [4],
> * Source code tag v1.1.0-rc2 [5],
> * Website pull request listing the new release [6].
>
> The vote will be open for at least 72 hours. It is adopted by majority
> approval, with at least 3 PMC affirmative votes.
>
>
> Best,
> Leonard
> [1]
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12353483
> [2]
> https://dist.apache.org/repos/dist/dev/flink/flink-connector-mongodb-1.1.0-rc2/
> [3] https://dist.apache.org/repos/dist/release/flink/KEYS
> [4]
> https://repository.apache.org/content/repositories/orgapacheflink-1705/
> [5] https://github.com/apache/flink-connector-mongodb/tree/v1.1.0-rc2
> [6] https://github.com/apache/flink-web/pull/719


[jira] [Created] (FLINK-34316) Reduce instantiation of ScanRuntimeProvider in streaming mode

2024-01-30 Thread Timo Walther (Jira)
Timo Walther created FLINK-34316:


 Summary: Reduce instantiation of ScanRuntimeProvider in 
streaming mode
 Key: FLINK-34316
 URL: https://issues.apache.org/jira/browse/FLINK-34316
 Project: Flink
  Issue Type: Improvement
  Components: Table SQL / Planner
Reporter: Timo Walther
Assignee: Timo Walther


This is a pure performance optimization: avoid an additional call to 
{{ScanTableSource#getScanRuntimeProvider}} in 
{{org.apache.flink.table.planner.connectors.DynamicSourceUtils#validateScanSource}}.
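A minimal, hypothetical sketch of the idea (not the actual planner code; the wrapper and names are invented for illustration): memoizing the provider means validation and translation share one instance instead of triggering a second instantiation.

```java
import java.util.function.Supplier;

// Hypothetical sketch: cache the result of an expensive factory call so
// that validation and translation phases reuse one instance.
final class MemoizingSupplier<T> implements Supplier<T> {
    private final Supplier<T> delegate;
    private T cached;

    MemoizingSupplier(Supplier<T> delegate) { this.delegate = delegate; }

    @Override
    public synchronized T get() {
        if (cached == null) {
            cached = delegate.get(); // instantiate only on first access
        }
        return cached;
    }
}

public class ScanProviderCacheDemo {
    static int instantiations = 0;

    public static void main(String[] args) {
        Supplier<String> provider = new MemoizingSupplier<>(() -> {
            instantiations++;               // counts provider creations
            return "ScanRuntimeProvider";
        });
        provider.get(); // e.g. during validation
        provider.get(); // e.g. during translation
        System.out.println("instantiations=" + instantiations);
    }
}
```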



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-34315) Forcibly disable window join, window rank and window deduplicate optimization when using session window tvf

2024-01-30 Thread xuyang (Jira)
xuyang created FLINK-34315:
--

 Summary: Forcibly disable window join, window rank and window 
deduplicate optimization when using session window tvf
 Key: FLINK-34315
 URL: https://issues.apache.org/jira/browse/FLINK-34315
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / Planner
Affects Versions: 1.19.0
Reporter: xuyang
 Fix For: 1.19.0


The session window TVF was first introduced in 
https://issues.apache.org/jira/browse/FLINK-24024 . Since 
https://issues.apache.org/jira/browse/FLINK-34100 the session window tvf node 
can exist independently of window aggregation, but it is not yet ready for 
window join, window rank, or window deduplicate. So we need to temporarily 
disable the window join, window rank, and window deduplicate optimizations 
when using a session window tvf.
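For reference, a session window TVF query of the affected kind looks like the following (syntax as introduced by FLINK-24024; the Bid table and its columns are assumed for illustration). Window aggregation on top of the TVF remains supported; it is the window join, rank, and deduplicate plans that would be disabled:

```sql
-- Session windows with a 5-minute gap, partitioned by item
-- (Bid(bidtime, price, item) is an assumed example table).
SELECT window_start, window_end, item, SUM(price) AS total_price
FROM TABLE(
    SESSION(TABLE Bid PARTITION BY item, DESCRIPTOR(bidtime), INTERVAL '5' MINUTES))
GROUP BY window_start, window_end, item;
```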



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-34314) Update CI Node Actions from NodeJS 16 to NodeJS 20

2024-01-30 Thread Martijn Visser (Jira)
Martijn Visser created FLINK-34314:
--

 Summary: Update CI Node Actions from NodeJS 16 to NodeJS 20
 Key: FLINK-34314
 URL: https://issues.apache.org/jira/browse/FLINK-34314
 Project: Flink
  Issue Type: Technical Debt
  Components: Build System / CI
Reporter: Martijn Visser
Assignee: Martijn Visser


{code:java}
Node.js 16 actions are deprecated. Please update the following actions to use 
Node.js 20: actions/checkout@v3, actions/setup-java@v3, 
stCarolas/setup-maven@v4.5, actions/cache/restore@v3, actions/cache/save@v3. 
{code}

For more information see: 
https://github.blog/changelog/2023-09-22-github-actions-transitioning-from-node-16-to-node-20/.
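The fix amounts to bumping each listed action to a Node 20 release in the workflow files; an illustrative sketch (the target major versions are assumptions to be confirmed against each action's release notes):

```yaml
steps:
  - uses: actions/checkout@v4        # was @v3 (Node 16)
  - uses: actions/setup-java@v4      # was @v3
  - uses: stCarolas/setup-maven@v5   # was @v4.5
  - uses: actions/cache/restore@v4   # was @v3
  - uses: actions/cache/save@v4      # was @v3
```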



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-34313) Update doc about session window tvf

2024-01-30 Thread xuyang (Jira)
xuyang created FLINK-34313:
--

 Summary: Update doc about session window tvf
 Key: FLINK-34313
 URL: https://issues.apache.org/jira/browse/FLINK-34313
 Project: Flink
  Issue Type: Sub-task
  Components: Documentation
Affects Versions: 1.19.0
Reporter: xuyang
 Fix For: 1.19.0






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [DISCUSS] FLIP-410: Config, Context and Processing Timer Service of DataStream API V2

2024-01-30 Thread weijie guo
Hi Xintong,

Thanks for the quick reply.

> Why introduce a new `MetricManager` rather than just return `MetricGroup`
from `RuntimeContext`?

This is to facilitate possible future extensions. But having thought it
through, MetricGroup itself already plays the role of a manager.
So I think you are right; I will add a `getMetricGroup` method directly to
`RuntimeContext`.

Best regards,

Weijie


Xintong Song wrote on Wed, Jan 31, 2024 at 14:02:

> >
> > > How can users define custom metrics within the `ProcessFunction`?
> > Will there be a method like `getMetricGroup` available in the
> > `RuntimeContext`?
> >
> > I think this is a reasonable request. For extensibility, I have added the
> > getMetricManager instead of getMetricGroup to RuntimeContext, we can use
> it
> > to get the MetricGroup.
> >
>
> Why introduce a new `MetricManager` rather than just return `MetricGroup`
> from `RuntimeContext`?
>
> > Q2. The FLIP describes the interface for handling processing
> >  timers (ProcessingTimeManager), but it does not mention
> > how to delete or update an existing timer. V1 API provides TimeService
> > that could delete a timer. Does this mean that
> >  once a timer is registered, it cannot be changed?
> >
> > I think we do need to introduce a method to delete the timer, but I'm
> kind
> > of curious why we need to update the timer instead of registering a new
> > one. Anyway, I have updated the FLIP to support deleting the timer.
> >
>
> Registering a new timer does not mean the old timer should be removed.
> There could be multiple timers.
>
> If we don't support deleting timers, developers can still decide to do
> nothing when the timer is triggered. In that case, they will need
> additional logic to decide whether the timer should be skipped or not in
> `onProcessingTimer`. Besides, there could also be additional performance
> overhead from frequently calling and skipping the callback.
>
> Best,
>
> Xintong
>
>
>
> On Tue, Jan 30, 2024 at 3:26 PM weijie guo 
> wrote:
>
> > Hi Wencong,
> >
> > > Q1. In the "Configuration" section, it is mentioned that
> > configurations can be set continuously using the withXXX methods.
> > Are these configuration options the same as those provided by DataStream
> > V1,
> > or might there be different options compared to V1?
> >
> > I haven't considered options that don't exist in V1 yet, but we may have
> > some new options as we continue to develop.
> >
> > > Q2. The FLIP describes the interface for handling processing
> >  timers (ProcessingTimeManager), but it does not mention
> > how to delete or update an existing timer. V1 API provides TimeService
> > that could delete a timer. Does this mean that
> >  once a timer is registered, it cannot be changed?
> >
> > I think we do need to introduce a method to delete the timer, but I'm
> kind
> > of curious why we need to update the timer instead of registering a new
> > one. Anyway, I have updated the FLIP to support deleting the timer.
> >
> >
> >
> > Best regards,
> >
> > Weijie
> >
> >
> > weijie guo wrote on Tue, Jan 30, 2024 at 14:35:
> >
> > > Hi Xuannan,
> > >
> > > 1. +1 to only use XXXPartitionStream if users only need to use the
> > > configurable PartitionStream.  If there are use cases for both,
> > > perhaps we could use `ProcessConfigurableNonKeyedPartitionStream` or
> > > `ConfigurableNonKeyedPartitionStream` for simplicity.
> > >
> > > As for why we need both, you can refer to my reply to Yunfeng's first
> > > question. As for the name, I can accept
> > > ProcessConfigurableNonKeyedPartitionStream or keep the status quo. But
> I
> > > don't want to change it to ConfigurableNonKeyedPartitionStream, the
> > reason
> > > is the same, because the configuration is applied to the Process rather
> > > than the Stream.
> > >
> > > > Should we allow users to set custom configurations through the
> > > `ProcessConfigurable` interface and access these configurations in the
> > > `ProcessFunction` via `RuntimeContext`? I believe it would be useful
> > > for process function developers to be able to define custom
> > > configurations.
> > >
> > > If I understand you correctly, you want to set custom properties for
> > > processing. The current configurations are mostly for the runtime
> engine,
> > > such as determining the underlying operator's parallelism and SSG. But
> > I'm
> > > not aware of the need to pass in a custom value(independent of the
> > > framework itself) and then get it at runtime from RuntimeContext. Could
> > > you give some examples?
> > >
> > > > How can users define custom metrics within the `ProcessFunction`?
> > > Will there be a method like `getMetricGroup` available in the
> > > `RuntimeContext`?
> > >
> > > I think this is a reasonable request. For extensibility, I have added
> the
> > > getMetricManager instead of getMetricGroup to RuntimeContext, we can
> use
> > > it to get the MetricGroup.
> > >
> > >
> > > Best regards,
> > >
> > > Weijie
> > >
> > >
> > > weijie guo wrote on Tue, Jan 30, 2024 at 13:45:
> > >
> > >> Thanks Yunfeng,
> 

Re: [DISCUSS] FLIP-410: Config, Context and Processing Timer Service of DataStream API V2

2024-01-30 Thread Xintong Song
>
> > How can users define custom metrics within the `ProcessFunction`?
> Will there be a method like `getMetricGroup` available in the
> `RuntimeContext`?
>
> I think this is a reasonable request. For extensibility, I have added the
> getMetricManager instead of getMetricGroup to RuntimeContext, we can use it
> to get the MetricGroup.
>

Why introduce a new `MetricManager` rather than just return `MetricGroup`
from `RuntimeContext`?

> Q2. The FLIP describes the interface for handling processing
>  timers (ProcessingTimeManager), but it does not mention
> how to delete or update an existing timer. V1 API provides TimeService
> that could delete a timer. Does this mean that
>  once a timer is registered, it cannot be changed?
>
> I think we do need to introduce a method to delete the timer, but I'm kind
> of curious why we need to update the timer instead of registering a new
> one. Anyway, I have updated the FLIP to support deleting the timer.
>

Registering a new timer does not mean the old timer should be removed.
There could be multiple timers.

If we don't support deleting timers, developers can still decide to do
nothing when the timer is triggered. In that case, they will need
additional logic to decide whether the timer should be skipped or not in
`onProcessingTimer`. Besides, there could also be additional performance
overhead from frequently calling and skipping the callback.
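To illustrate the trade-off, here is a minimal invented timer registry (this is not the FLIP-410 API; all names are hypothetical) showing how an explicit delete lets a cancelled timer simply never fire, instead of firing and being skipped inside the callback:

```java
import java.util.TreeMap;

// Hypothetical sketch of a processing-timer registry. With an explicit
// delete, a cancelled timer's callback is never invoked; without it,
// every callback would need its own "should I skip?" check.
public class TimerRegistryDemo {
    private final TreeMap<Long, Runnable> timers = new TreeMap<>();

    void registerProcessingTimer(long ts, Runnable cb) { timers.put(ts, cb); }

    void deleteProcessingTimer(long ts) { timers.remove(ts); }

    // Fire all timers whose timestamp has passed, in order.
    void advanceTo(long now) {
        while (!timers.isEmpty() && timers.firstKey() <= now) {
            timers.pollFirstEntry().getValue().run();
        }
    }

    public static void main(String[] args) {
        TimerRegistryDemo mgr = new TimerRegistryDemo();
        StringBuilder fired = new StringBuilder();
        mgr.registerProcessingTimer(10, () -> fired.append("t10 "));
        mgr.registerProcessingTimer(20, () -> fired.append("t20 "));
        mgr.deleteProcessingTimer(20); // deleted timer never fires
        mgr.advanceTo(30);
        System.out.println("fired: " + fired.toString().trim());
    }
}
```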

Best,

Xintong



On Tue, Jan 30, 2024 at 3:26 PM weijie guo 
wrote:

> Hi Wencong,
>
> > Q1. In the "Configuration" section, it is mentioned that
> configurations can be set continuously using the withXXX methods.
> Are these configuration options the same as those provided by DataStream
> V1,
> or might there be different options compared to V1?
>
> I haven't considered options that don't exist in V1 yet, but we may have
> some new options as we continue to develop.
>
> > Q2. The FLIP describes the interface for handling processing
>  timers (ProcessingTimeManager), but it does not mention
> how to delete or update an existing timer. V1 API provides TimeService
> that could delete a timer. Does this mean that
>  once a timer is registered, it cannot be changed?
>
> I think we do need to introduce a method to delete the timer, but I'm kind
> of curious why we need to update the timer instead of registering a new
> one. Anyway, I have updated the FLIP to support deleting the timer.
>
>
>
> Best regards,
>
> Weijie
>
>
> weijie guo wrote on Tue, Jan 30, 2024 at 14:35:
>
> > Hi Xuannan,
> >
> > > 1. +1 to only use XXXPartitionStream if users only need to use the
> > configurable PartitionStream.  If there are use cases for both,
> > perhaps we could use `ProcessConfigurableNonKeyedPartitionStream` or
> > `ConfigurableNonKeyedPartitionStream` for simplicity.
> >
> > As for why we need both, you can refer to my reply to Yunfeng's first
> > question. As for the name, I can accept
> > ProcessConfigurableNonKeyedPartitionStream or keep the status quo. But I
> > don't want to change it to ConfigurableNonKeyedPartitionStream, the
> reason
> > is the same, because the configuration is applied to the Process rather
> > than the Stream.
> >
> > > Should we allow users to set custom configurations through the
> > `ProcessConfigurable` interface and access these configurations in the
> > `ProcessFunction` via `RuntimeContext`? I believe it would be useful
> > for process function developers to be able to define custom
> > configurations.
> >
> > If I understand you correctly, you want to set custom properties for
> > processing. The current configurations are mostly for the runtime engine,
> > such as determining the underlying operator's parallelism and SSG. But
> I'm
> > not aware of the need to pass in a custom value(independent of the
> > framework itself) and then get it at runtime from RuntimeContext. Could
> > you give some examples?
> >
> > > How can users define custom metrics within the `ProcessFunction`?
> > Will there be a method like `getMetricGroup` available in the
> > `RuntimeContext`?
> >
> > I think this is a reasonable request. For extensibility, I have added the
> > getMetricManager instead of getMetricGroup to RuntimeContext, we can use
> > it to get the MetricGroup.
> >
> >
> > Best regards,
> >
> > Weijie
> >
> >
> > weijie guo wrote on Tue, Jan 30, 2024 at 13:45:
> >
> >> Thanks Yunfeng,
> >>
> >> Let me try to answer your question :)
> >>
> >> > 1. Would it be better to have all XXXPartitionStream classes implement
> >> ProcessConfigurable, instead of defining both XXXPartitionStream and
> >> ProcessConfigurableAndXXXPartitionStream? I wonder whether users would
> >> need to operate on a non-configurable PartitionStream.
> >>
> >> I thought about this for a while and decided to separate DataStream from
> >> ProcessConfigurable. At the core of this is that streams and
> >> configurations are completely orthogonal concepts, and configuration is
> >> only responsible for the `Process`, not the `Stream`. This is why only
> >> the 

Re: [VOTE] FLIP-331: Support EndOfStreamTrigger and isOutputOnlyAfterEndOfStream operator attribute to optimize task deployment

2024-01-30 Thread weijie guo
+1(binding)


Best regards,

Weijie


Rui Fan <1996fan...@gmail.com> wrote on Wed, Jan 31, 2024 at 12:51:

> +1(binding)
>
> Best,
> Rui
>
> On Wed, Jan 31, 2024 at 12:46 PM Xintong Song 
> wrote:
>
> > +1
> >
> > Best,
> >
> > Xintong
> >
> >
> >
> > On Wed, Jan 31, 2024 at 11:41 AM Xuannan Su 
> wrote:
> >
> > > Hi everyone,
> > >
> > > Thanks for all the feedback about the FLIP-331: Support
> > > EndOfStreamTrigger and isOutputOnlyAfterEndOfStream operator attribute
> > > to optimize task deployment [1] [2].
> > >
> > > I'd like to start a vote for it. The vote will be open for at least 72
> > > hours (excluding weekends, until Feb 5, 12:00 AM GMT) unless there is an
> > > objection or an insufficient number of votes.
> > >
> > > [1]
> > >
> >
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-331%3A+Support+EndOfStreamTrigger+and+isOutputOnlyAfterEndOfStream+operator+attribute+to+optimize+task+deployment
> > > [2] https://lists.apache.org/thread/qq39rmg3f23ysx5m094s4c4cq0m4tdj5
> > >
> > >
> > > Best,
> > > Xuannan
> > >
> >
>


Re: [VOTE] FLIP-331: Support EndOfStreamTrigger and isOutputOnlyAfterEndOfStream operator attribute to optimize task deployment

2024-01-30 Thread Rui Fan
+1(binding)

Best,
Rui

On Wed, Jan 31, 2024 at 12:46 PM Xintong Song  wrote:

> +1
>
> Best,
>
> Xintong
>
>
>
> On Wed, Jan 31, 2024 at 11:41 AM Xuannan Su  wrote:
>
> > Hi everyone,
> >
> > Thanks for all the feedback about the FLIP-331: Support
> > EndOfStreamTrigger and isOutputOnlyAfterEndOfStream operator attribute
> > to optimize task deployment [1] [2].
> >
> > I'd like to start a vote for it. The vote will be open for at least 72
> > hours (excluding weekends, until Feb 5, 12:00 AM GMT) unless there is an
> > objection or an insufficient number of votes.
> >
> > [1]
> >
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-331%3A+Support+EndOfStreamTrigger+and+isOutputOnlyAfterEndOfStream+operator+attribute+to+optimize+task+deployment
> > [2] https://lists.apache.org/thread/qq39rmg3f23ysx5m094s4c4cq0m4tdj5
> >
> >
> > Best,
> > Xuannan
> >
>


Re: [VOTE] FLIP-331: Support EndOfStreamTrigger and isOutputOnlyAfterEndOfStream operator attribute to optimize task deployment

2024-01-30 Thread Xintong Song
+1

Best,

Xintong



On Wed, Jan 31, 2024 at 11:41 AM Xuannan Su  wrote:

> Hi everyone,
>
> Thanks for all the feedback about the FLIP-331: Support
> EndOfStreamTrigger and isOutputOnlyAfterEndOfStream operator attribute
> to optimize task deployment [1] [2].
>
> I'd like to start a vote for it. The vote will be open for at least 72
> hours (excluding weekends, until Feb 5, 12:00 AM GMT) unless there is an
> objection or an insufficient number of votes.
>
> [1]
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-331%3A+Support+EndOfStreamTrigger+and+isOutputOnlyAfterEndOfStream+operator+attribute+to+optimize+task+deployment
> [2] https://lists.apache.org/thread/qq39rmg3f23ysx5m094s4c4cq0m4tdj5
>
>
> Best,
> Xuannan
>


[VOTE] FLIP-331: Support EndOfStreamTrigger and isOutputOnlyAfterEndOfStream operator attribute to optimize task deployment

2024-01-30 Thread Xuannan Su
Hi everyone,

Thanks for all the feedback about the FLIP-331: Support
EndOfStreamTrigger and isOutputOnlyAfterEndOfStream operator attribute
to optimize task deployment [1] [2].

I'd like to start a vote for it. The vote will be open for at least 72
hours (excluding weekends, until Feb 5, 12:00 AM GMT) unless there is an
objection or an insufficient number of votes.

[1] 
https://cwiki.apache.org/confluence/display/FLINK/FLIP-331%3A+Support+EndOfStreamTrigger+and+isOutputOnlyAfterEndOfStream+operator+attribute+to+optimize+task+deployment
[2] https://lists.apache.org/thread/qq39rmg3f23ysx5m094s4c4cq0m4tdj5


Best,
Xuannan


Re: [VOTE] Release flink-connector-mongodb v1.1.0, release candidate #2

2024-01-30 Thread Hang Ruan
+1 (non-binding)

- Validated checksum hash
- Verified signature
- Verified that no binaries exist in the source archive
- Built the source with Maven and JDK 11
- Verified the web PR
- Checked that the jar is built by JDK 8
- Reviewed the release note

Best,
Hang

Jiabao Sun wrote on Tue, Jan 30, 2024 at 21:44:

> Thanks Leonard for driving this.
>
> +1(non-binding)
>
> - Release notes look good
> - Tag is present in Github
> - Validated checksum hash
> - Verified signature
> - Built the source with Maven with JDK 8, 11, 17, and 21
> - Checked that the dist jar was built by JDK 8
> - Reviewed the web PR
> - Ran a filter push down test via sql-client on Flink 1.18.1 and it works
> well
>
> Best,
> Jiabao
>
>
> On 2024/01/30 10:23:07 Leonard Xu wrote:
> > Hey all,
> >
> > Please help review and vote on the release candidate #2 for the version
> v1.1.0 of the
> > Apache Flink MongoDB Connector as follows:
> >
> > [ ] +1, Approve the release
> > [ ] -1, Do not approve the release (please provide specific comments)
> >
> > The complete staging area is available for your review, which includes:
> > * JIRA release notes [1],
> > * The official Apache source release to be deployed to dist.apache.org
> [2],
> > which are signed with the key with fingerprint
> > 5B2F6608732389AEB67331F5B197E1F1108998AD [3],
> > * All artifacts to be deployed to the Maven Central Repository [4],
> > * Source code tag v1.1.0-rc2 [5],
> > * Website pull request listing the new release [6].
> >
> > The vote will be open for at least 72 hours. It is adopted by majority
> > approval, with at least 3 PMC affirmative votes.
> >
> >
> > Best,
> > Leonard
> > [1]
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12353483
> > [2]
> https://dist.apache.org/repos/dist/dev/flink/flink-connector-mongodb-1.1.0-rc2/
> > [3] https://dist.apache.org/repos/dist/release/flink/KEYS
> > [4]
> https://repository.apache.org/content/repositories/orgapacheflink-1705/
> > [5] https://github.com/apache/flink-connector-mongodb/tree/v1.1.0-rc2
> > [6] https://github.com/apache/flink-web/pull/719


Re: [VOTE] FLIP-418: Show data skew score on Flink Dashboard

2024-01-30 Thread Aleksandr Pilipenko
Thanks for the FLIP!

+1 (non-binding)

Best,
Aleksandr

On Mon, 29 Jan 2024 at 10:11, Kartoglu, Emre 
wrote:

> Hello,
>
> I'd like to call votes on FLIP-418: Show data skew score on Flink
> Dashboard.
>
> FLIP:
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-418%3A+Show+data+skew+score+on+Flink+Dashboard
> Discussion:
> https://lists.apache.org/thread/m5ockoork0h2zr78h77dcrn71rbt35ql
>
> Kind regards,
> Emre
>
>


[jira] [Created] (FLINK-34312) Improve the handling of default node types when using named parameters.

2024-01-30 Thread Feng Jin (Jira)
Feng Jin created FLINK-34312:


 Summary: Improve the handling of default node types when using 
named parameters.
 Key: FLINK-34312
 URL: https://issues.apache.org/jira/browse/FLINK-34312
 Project: Flink
  Issue Type: Sub-task
Reporter: Feng Jin


Currently, we support the use of named parameters with optional 
arguments. 

By adapting Calcite's interfaces, we can fill in the DEFAULT operator when a 
parameter is missing. Both during the validation phase and when converting in 
the SqlToRel phase, we need to handle it specially by modifying the return 
type of the DEFAULT operator based on the argument type of the called 
operator. Multiple places need to handle the type of the DEFAULT operator, 
including SqlCallBinding, SqlOperatorBinding, and SqlToRelConverter.


The improved solution is as follows: 

Before SqlToRel, we can construct a DEFAULT node with a return type that 
matches the argument type. This way, during the SqlToRel phase, there is no 
need for special handling of the DEFAULT node's type.
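For illustration, it is the omitted optional arguments of a named-parameter call that get filled in with DEFAULT nodes (the function and parameter names below are hypothetical):

```sql
-- Both parameters given explicitly by name:
SELECT * FROM TABLE(my_lookup(query => 'flink', max_rows => 10));

-- max_rows omitted: the planner fills the gap with a DEFAULT node whose
-- return type should match the declared type of max_rows.
SELECT * FROM TABLE(my_lookup(query => 'flink'));
```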



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [ANNOUNCE] Flink 1.19 feature freeze & sync summary on 01/30/2024

2024-01-30 Thread Matthias Pohl
Thanks for the update, Lincoln.

FYI: I merged FLINK-32684 (deprecating AkkaOptions) [1] since we agreed in
today's meeting that this change is still ok to go in.

The beta version of the GitHub Actions workflows (FLIP-396 [2]) are also
finalized (see related PRs for basic CI [3], nightly master [4] and nightly
scheduling [5]). I'd like to merge the changes before creating the
release-1.19 branch. That would enable us to see whether we miss anything
in the GHA workflows setup when creating a new release branch.

The changes are limited to a few CI scripts that are also used for Azure
Pipelines (see [3]). The majority of the changes are GHA-specific and
shouldn't affect the Azure Pipelines CI setup.

Therefore, I'm requesting the approval from the 1.19 release managers to go
ahead with merging the mentioned PRs [3, 4, 5].

Matthias


[1] https://issues.apache.org/jira/browse/FLINK-32684
[2]
https://cwiki.apache.org/confluence/display/FLINK/FLIP-396%3A+Trial+to+test+GitHub+Actions+as+an+alternative+for+Flink%27s+current+Azure+CI+infrastructure
[3] https://github.com/apache/flink/pull/23970
[4] https://github.com/apache/flink/pull/23971
[5] https://github.com/apache/flink/pull/23972

On Tue, Jan 30, 2024 at 1:51 PM Lincoln Lee  wrote:

> Hi everyone,
>
> (Since feature freeze and release sync are on the same day, we merged the
> announcement and sync summary together)
>
>
> *- Feature freeze*
> The feature freeze of 1.19 has started now. That means that no new features
> or improvements should be merged into the master branch unless you ask the
> release managers first; exceptions have already been granted for PRs that
> were approved or are only pending on CI to pass. Bug fixes and
> documentation PRs can still be merged.
>
>
> *- Cutting release branch*
> Currently we have three blocker issues [1][2][3], and we will try to close
> them by this Friday.
> We are planning to cut the release branch next Monday (Feb 6th) if there
> are no new test instabilities,
> and we'll make another announcement in the dev mailing list then.
>
>
> *- Cross-team testing*
> Release testing is expected to start next week as soon as we cut the
> release branch.
> As a prerequisite, before we start testing, please make sure
> 1. Whether the feature needs cross-team testing
> 2. If yes, that its documentation is completed
> There's an umbrella ticket [4] for tracking the 1.19 testing. The release
> managers will create tickets for all completed features listed on the 1.19
> wiki page [5] and assign them to each feature's responsible contributor.
> Contributors are also encouraged to create tickets following the steps in
> the umbrella ticket if there are other features that need cross-team
> testing.
>
> *- Release notes*
>
> All new features and behavior changes require their authors to fill out the
> 'Release Note' field in the JIRA ticket (click the Edit button and scroll
> to the middle of the page).
> This is especially important since 1.19 involves a lot of deprecations,
> which matter to users and will be part of the release announcement.
>
> - *Sync meeting* (https://meet.google.com/vcx-arzs-trv)
>
> We've already switched to weekly release sync, so the next release sync
> will be on Feb 6th, 2024. Feel free to join us!
>
> [1] https://issues.apache.org/jira/browse/FLINK-34148
> [2] https://issues.apache.org/jira/browse/FLINK-34007
> [3] https://issues.apache.org/jira/browse/FLINK-34259
> [4] https://issues.apache.org/jira/browse/FLINK-34285
> [5] https://cwiki.apache.org/confluence/display/FLINK/1.19+Release
>
> Best,
> Yun, Jing, Martijn and Lincoln
>


Re: [DISCUSS] Drop support for HBase v1

2024-01-30 Thread Ferenc Csaky
Hi Martijn,

thanks for starting the discussion. Let me link the older discussion regarding 
the same topic [1]. My opinion did not change, so +1.

BR,
Ferenc

[1] https://lists.apache.org/thread/x7l2gj8g93r4v6x6953cyt6jrs8c4r1b




On Monday, January 29th, 2024 at 09:37, Martijn Visser 
 wrote:

> 
> 
> Hi all,
> 
> While working on adding support for Flink 1.19 for HBase, we've run into a
> dependency convergence issue because HBase v1 relies on a really old
> version of Guava.
> 
> HBase v2 has been made available since May 2018, and there have been no new
> releases of HBase v1 since August 2022.
> 
> I would like to propose that the Flink HBase connector drops support for
> HBase v1, and will only continue HBase v2 in the future. I don't think this
> requires a full FLIP and vote, but I do want to start a discussion thread
> for this.
> 
> Best regards,
> 
> Martijn
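For context, keeping HBase v1 alongside newer dependencies would require convergence workarounds of roughly this shape in the connector pom (illustrative only; the version property is an assumption):

```xml
<dependency>
  <groupId>org.apache.hbase</groupId>
  <artifactId>hbase-client</artifactId>
  <version>${hbase1.version}</version>
  <exclusions>
    <!-- Exclude HBase v1's very old Guava so the build converges on one version -->
    <exclusion>
      <groupId>com.google.guava</groupId>
      <artifactId>guava</artifactId>
    </exclusion>
  </exclusions>
</dependency>
```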


[jira] [Created] (FLINK-34311) Do not change min resource requirements when rescaling for adaptive scheduler

2024-01-30 Thread Gyula Fora (Jira)
Gyula Fora created FLINK-34311:
--

 Summary: Do not change min resource requirements when rescaling 
for adaptive scheduler
 Key: FLINK-34311
 URL: https://issues.apache.org/jira/browse/FLINK-34311
 Project: Flink
  Issue Type: Improvement
  Components: Kubernetes Operator
Reporter: Gyula Fora
Assignee: Gyula Fora
 Fix For: kubernetes-operator-1.8.0


When applying the rescale API to change parallelism, we should not change the 
min parallelism.
The problem currently is that if we cannot acquire the new resources within 
{{jobmanager.adaptive-scheduler.resource-wait-timeout}}, the job will fail 
completely.

The {{jobmanager.adaptive-scheduler.resource-stabilization-timeout}} still 
allows us to wait quite long, if necessary, to reach the target parallelism, 
but failing completely because of the wait timeout seems very unfortunate.

It's best to keep the min resources unchanged and let the adaptive scheduler 
take care of the parallelism changes together with the timeout settings.
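The two settings discussed above, as they would appear in the Flink configuration (the values are illustrative, not recommendations):

```yaml
# Fail the job if the desired resources cannot be acquired in time
# (the timeout this ticket wants rescaling not to trip over):
jobmanager.adaptive-scheduler.resource-wait-timeout: 5 min

# How long to wait for resources to stabilize before (re)scaling:
jobmanager.adaptive-scheduler.resource-stabilization-timeout: 1 min
```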



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-34309) Release Testing Instructions: Verify FLINK-33315 Optimize memory usage of large StreamOperator

2024-01-30 Thread lincoln lee (Jira)
lincoln lee created FLINK-34309:
---

 Summary: Release Testing Instructions: Verify FLINK-33315 Optimize 
memory usage of large StreamOperator
 Key: FLINK-34309
 URL: https://issues.apache.org/jira/browse/FLINK-34309
 Project: Flink
  Issue Type: Sub-task
  Components: Runtime / Configuration
Affects Versions: 1.19.0
Reporter: lincoln lee
Assignee: Rui Fan
 Fix For: 1.19.0






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-34310) Release Testing Instructions: Verify FLINK-33325 Built-in cross-platform powerful java profiler

2024-01-30 Thread lincoln lee (Jira)
lincoln lee created FLINK-34310:
---

 Summary: Release Testing Instructions: Verify FLINK-33325 Built-in 
cross-platform powerful java profiler
 Key: FLINK-34310
 URL: https://issues.apache.org/jira/browse/FLINK-34310
 Project: Flink
  Issue Type: Sub-task
  Components: Runtime / Configuration
Affects Versions: 1.19.0
Reporter: lincoln lee
Assignee: Rui Fan
 Fix For: 1.19.0






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-34308) Release Testing Instructions: Verify FLINK-33625 Support System out and err to be redirected to LOG or discarded

2024-01-30 Thread lincoln lee (Jira)
lincoln lee created FLINK-34308:
---

 Summary: Release Testing Instructions: Verify FLINK-33625 Support 
System out and err to be redirected to LOG or discarded
 Key: FLINK-34308
 URL: https://issues.apache.org/jira/browse/FLINK-34308
 Project: Flink
  Issue Type: Sub-task
  Components: API / DataStream
Affects Versions: 1.19.0
Reporter: lincoln lee
Assignee: Peter Vary
 Fix For: 1.19.0






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-34307) Release Testing Instructions: Verify FLINK-33972 Enhance and synchronize Sink API to match the Source API

2024-01-30 Thread lincoln lee (Jira)
lincoln lee created FLINK-34307:
---

 Summary: Release Testing Instructions: Verify FLINK-33972 Enhance 
and synchronize Sink API to match the Source API
 Key: FLINK-34307
 URL: https://issues.apache.org/jira/browse/FLINK-34307
 Project: Flink
  Issue Type: Sub-task
  Components: Connectors / Common
Affects Versions: 1.19.0
Reporter: lincoln lee
Assignee: Peter Vary
 Fix For: 1.19.0






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


RE: [VOTE] Release flink-connector-mongodb v1.1.0, release candidate #2

2024-01-30 Thread Jiabao Sun
Thanks Leonard for driving this.

+1(non-binding)

- Release notes look good
- Tag is present in Github
- Validated checksum hash
- Verified signature
- Built the source with Maven with JDK 8, 11, 17, and 21
- Checked that the dist jar was built by JDK 8
- Reviewed the web PR
- Ran a filter push down test via sql-client on Flink 1.18.1 and it works well

Best,
Jiabao


On 2024/01/30 10:23:07 Leonard Xu wrote:
> Hey all,
> 
> Please help review and vote on the release candidate #2 for the version 
> v1.1.0 of the
> Apache Flink MongoDB Connector as follows:
> 
> [ ] +1, Approve the release
> [ ] -1, Do not approve the release (please provide specific comments)
> 
> The complete staging area is available for your review, which includes:
> * JIRA release notes [1],
> * The official Apache source release to be deployed to dist.apache.org [2],
> which are signed with the key with fingerprint
> 5B2F6608732389AEB67331F5B197E1F1108998AD [3],
> * All artifacts to be deployed to the Maven Central Repository [4],
> * Source code tag v1.1.0-rc2 [5],
> * Website pull request listing the new release [6].
> 
> The vote will be open for at least 72 hours. It is adopted by majority
> approval, with at least 3 PMC affirmative votes.
> 
> 
> Best,
> Leonard
> [1] 
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522=12353483
> [2] 
> https://dist.apache.org/repos/dist/dev/flink/flink-connector-mongodb-1.1.0-rc2/
> [3] https://dist.apache.org/repos/dist/release/flink/KEYS
> [4] https://repository.apache.org/content/repositories/orgapacheflink-1705/
> [5] https://github.com/apache/flink-connector-mongodb/tree/v1.1.0-rc2
> [6] https://github.com/apache/flink-web/pull/719

[jira] [Created] (FLINK-34306) Release Testing Instructions: Verify FLINK-25857 Add committer metrics to track the status of committables

2024-01-30 Thread lincoln lee (Jira)
lincoln lee created FLINK-34306:
---

 Summary: Release Testing Instructions: Verify FLINK-25857 Add 
committer metrics to track the status of committables 
 Key: FLINK-34306
 URL: https://issues.apache.org/jira/browse/FLINK-34306
 Project: Flink
  Issue Type: Sub-task
  Components: Table SQL / API
Affects Versions: 1.19.0
Reporter: lincoln lee
Assignee: Zhanghao Chen
 Fix For: 1.19.0








[jira] [Created] (FLINK-34305) Release Testing Instructions: Verify FLINK-33261 Support Setting Parallelism for Table/SQL Sources

2024-01-30 Thread lincoln lee (Jira)
lincoln lee created FLINK-34305:
---

 Summary: Release Testing Instructions: Verify FLINK-33261 Support 
Setting Parallelism for Table/SQL Sources 
 Key: FLINK-34305
 URL: https://issues.apache.org/jira/browse/FLINK-34305
 Project: Flink
  Issue Type: Sub-task
  Components: Table SQL / Runtime
Affects Versions: 1.19.0
Reporter: lincoln lee
Assignee: Shuai Xu
 Fix For: 1.19.0








[jira] [Created] (FLINK-34304) Release Testing Instructions: Verify FLINK-34219 Introduce a new join operator to support minibatch

2024-01-30 Thread lincoln lee (Jira)
lincoln lee created FLINK-34304:
---

 Summary: Release Testing Instructions: Verify FLINK-34219 
Introduce a new join operator to support minibatch
 Key: FLINK-34304
 URL: https://issues.apache.org/jira/browse/FLINK-34304
 Project: Flink
  Issue Type: Sub-task
  Components: Table SQL / API
Affects Versions: 1.19.0
Reporter: lincoln lee
Assignee: Feng Jin
 Fix For: 1.19.0








[jira] [Created] (FLINK-34303) Release Testing Instructions: Verify FLINK-34054 Support named parameters for functions and procedures

2024-01-30 Thread lincoln lee (Jira)
lincoln lee created FLINK-34303:
---

 Summary: Release Testing Instructions: Verify FLINK-34054 Support 
named parameters for functions and procedures
 Key: FLINK-34303
 URL: https://issues.apache.org/jira/browse/FLINK-34303
 Project: Flink
  Issue Type: Sub-task
  Components: Table SQL / API
Affects Versions: 1.19.0
Reporter: lincoln lee
Assignee: Dawid Wysakowicz
 Fix For: 1.19.0








[jira] [Created] (FLINK-34302) Release Testing: Verify FLINK-33644 Make QueryOperations SQL serializable

2024-01-30 Thread lincoln lee (Jira)
lincoln lee created FLINK-34302:
---

 Summary: Release Testing: Verify FLINK-33644 Make QueryOperations 
SQL serializable
 Key: FLINK-34302
 URL: https://issues.apache.org/jira/browse/FLINK-34302
 Project: Flink
  Issue Type: Sub-task
  Components: Table SQL / API
Affects Versions: 1.19.0
Reporter: lincoln lee
Assignee: xuyang
 Fix For: 1.19.0








[jira] [Created] (FLINK-34301) Release Testing: Verify FLINK-20281 Window aggregation supports changelog stream input

2024-01-30 Thread lincoln lee (Jira)
lincoln lee created FLINK-34301:
---

 Summary: Release Testing: Verify FLINK-20281 Window aggregation 
supports changelog stream input
 Key: FLINK-34301
 URL: https://issues.apache.org/jira/browse/FLINK-34301
 Project: Flink
  Issue Type: Sub-task
  Components: Table SQL / API
Affects Versions: 1.19.0
Reporter: lincoln lee
Assignee: xuyang
 Fix For: 1.19.0








[jira] [Created] (FLINK-34300) Release Testing: Verify FLINK-24024 Support session Window TVF

2024-01-30 Thread lincoln lee (Jira)
lincoln lee created FLINK-34300:
---

 Summary: Release Testing: Verify FLINK-24024 Support session 
Window TVF
 Key: FLINK-34300
 URL: https://issues.apache.org/jira/browse/FLINK-34300
 Project: Flink
  Issue Type: Sub-task
  Components: Table SQL / API
Affects Versions: 1.19.0
Reporter: lincoln lee
Assignee: Jane Chan
 Fix For: 1.19.0








[jira] [Created] (FLINK-34299) Release Testing: Verify FLINK-33203 Adding a separate configuration for specifying Java Options of the SQL Gateway

2024-01-30 Thread lincoln lee (Jira)
lincoln lee created FLINK-34299:
---

 Summary: Release Testing: Verify FLINK-33203 Adding a separate 
configuration for specifying Java Options of the SQL Gateway 
 Key: FLINK-34299
 URL: https://issues.apache.org/jira/browse/FLINK-34299
 Project: Flink
  Issue Type: Sub-task
  Components: Table SQL / API
Affects Versions: 1.19.0
Reporter: lincoln lee
Assignee: Jane Chan
 Fix For: 1.19.0








[jira] [Created] (FLINK-34298) Release Testing: Verify FLINK-33397 Support Configuring Different State TTLs using SQL Hint

2024-01-30 Thread lincoln lee (Jira)
lincoln lee created FLINK-34298:
---

 Summary: Release Testing: Verify FLINK-33397 Support Configuring 
Different State TTLs using SQL Hint
 Key: FLINK-34298
 URL: https://issues.apache.org/jira/browse/FLINK-34298
 Project: Flink
  Issue Type: Sub-task
  Components: Runtime / Configuration
Affects Versions: 1.19.0
Reporter: lincoln lee
Assignee: Xuannan Su
 Fix For: 1.19.0








[jira] [Created] (FLINK-34297) Release Testing: Verify FLINK-34079 Migrate string configuration key to ConfigOption

2024-01-30 Thread lincoln lee (Jira)
lincoln lee created FLINK-34297:
---

 Summary: Release Testing: Verify FLINK-34079 Migrate string 
configuration key to ConfigOption
 Key: FLINK-34297
 URL: https://issues.apache.org/jira/browse/FLINK-34297
 Project: Flink
  Issue Type: Sub-task
  Components: Runtime / Configuration
Affects Versions: 1.19.0
Reporter: lincoln lee
Assignee: Junrui Li
 Fix For: 1.19.0








[jira] [Created] (FLINK-34296) Release Testing: Verify FLINK-33581 Deprecate configuration getters/setters that return/set complex Java objects

2024-01-30 Thread lincoln lee (Jira)
lincoln lee created FLINK-34296:
---

 Summary: Release Testing: Verify FLINK-33581 Deprecate 
configuration getters/setters that return/set complex Java objects
 Key: FLINK-34296
 URL: https://issues.apache.org/jira/browse/FLINK-34296
 Project: Flink
  Issue Type: Sub-task
  Components: Runtime / Configuration
Affects Versions: 1.19.0
Reporter: lincoln lee
Assignee: Junrui Li
 Fix For: 1.19.0








[jira] [Created] (FLINK-34295) Release Testing: Verify FLINK-33712 Deprecate RuntimeContext#getExecutionConfig

2024-01-30 Thread lincoln lee (Jira)
lincoln lee created FLINK-34295:
---

 Summary: Release Testing: Verify FLINK-33712 Deprecate 
RuntimeContext#getExecutionConfig
 Key: FLINK-34295
 URL: https://issues.apache.org/jira/browse/FLINK-34295
 Project: Flink
  Issue Type: Sub-task
  Components: Runtime / Configuration
Affects Versions: 1.19.0
Reporter: lincoln lee
Assignee: Junrui Li
 Fix For: 1.19.0








[jira] [Created] (FLINK-34294) Release Testing: Verify FLINK-33297 Support standard YAML for FLINK configuration

2024-01-30 Thread lincoln lee (Jira)
lincoln lee created FLINK-34294:
---

 Summary: Release Testing: Verify FLINK-33297 Support standard YAML 
for FLINK configuration
 Key: FLINK-34294
 URL: https://issues.apache.org/jira/browse/FLINK-34294
 Project: Flink
  Issue Type: Sub-task
  Components: Runtime / Checkpointing
Affects Versions: 1.19.0
Reporter: lincoln lee
Assignee: Zakelly Lan
 Fix For: 1.19.0








[jira] [Created] (FLINK-34293) Release Testing: Verify FLINK-34190 Deprecate RestoreMode#LEGACY

2024-01-30 Thread lincoln lee (Jira)
lincoln lee created FLINK-34293:
---

 Summary: Release Testing: Verify FLINK-34190 Deprecate 
RestoreMode#LEGACY
 Key: FLINK-34293
 URL: https://issues.apache.org/jira/browse/FLINK-34293
 Project: Flink
  Issue Type: Sub-task
  Components: Runtime / Coordination
Affects Versions: 1.19.0
Reporter: lincoln lee
Assignee: Hangxiang Yu
 Fix For: 1.19.0








[jira] [Created] (FLINK-34292) Release Testing: Verify FLINK-30613 Improve resolving schema compatibility

2024-01-30 Thread lincoln lee (Jira)
lincoln lee created FLINK-34292:
---

 Summary: Release Testing: Verify FLINK-30613 Improve resolving 
schema compatibility 
 Key: FLINK-34292
 URL: https://issues.apache.org/jira/browse/FLINK-34292
 Project: Flink
  Issue Type: Sub-task
  Components: Runtime / Coordination
Affects Versions: 1.19.0
Reporter: lincoln lee
Assignee: Piotr Nowojski
 Fix For: 1.19.0








[jira] [Created] (FLINK-34291) Release Testing: Verify FLINK-33697 Support adding custom metrics in Recovery Spans

2024-01-30 Thread lincoln lee (Jira)
lincoln lee created FLINK-34291:
---

 Summary: Release Testing: Verify FLINK-33697 Support adding custom 
metrics in Recovery Spans
 Key: FLINK-34291
 URL: https://issues.apache.org/jira/browse/FLINK-34291
 Project: Flink
  Issue Type: Sub-task
  Components: Runtime / Coordination
Affects Versions: 1.19.0
Reporter: lincoln lee
Assignee: Piotr Nowojski
 Fix For: 1.19.0








[jira] [Created] (FLINK-34290) Release Testing: Verify FLINK-33696 Add OpenTelemetryTraceReporter and OpenTelemetryMetricReporter

2024-01-30 Thread lincoln lee (Jira)
lincoln lee created FLINK-34290:
---

 Summary: Release Testing: Verify FLINK-33696 Add 
OpenTelemetryTraceReporter and OpenTelemetryMetricReporter
 Key: FLINK-34290
 URL: https://issues.apache.org/jira/browse/FLINK-34290
 Project: Flink
  Issue Type: Sub-task
  Components: Runtime / Coordination
Affects Versions: 1.19.0
Reporter: lincoln lee
Assignee: Piotr Nowojski
 Fix For: 1.19.0








[jira] [Created] (FLINK-34289) Release Testing: Verify FLINK-33695 Introduce TraceReporter and use it to create checkpointing and recovery traces

2024-01-30 Thread lincoln lee (Jira)
lincoln lee created FLINK-34289:
---

 Summary: Release Testing: Verify FLINK-33695 Introduce 
TraceReporter and use it to create checkpointing and recovery traces 
 Key: FLINK-34289
 URL: https://issues.apache.org/jira/browse/FLINK-34289
 Project: Flink
  Issue Type: Sub-task
  Components: Runtime / Coordination
Affects Versions: 1.19.0
Reporter: lincoln lee
Assignee: Rui Fan
 Fix For: 1.19.0








[jira] [Created] (FLINK-34288) Release Testing: Verify FLINK-33735 Improve the exponential-delay restart-strategy

2024-01-30 Thread lincoln lee (Jira)
lincoln lee created FLINK-34288:
---

 Summary: Release Testing: Verify FLINK-33735 Improve the 
exponential-delay restart-strategy 
 Key: FLINK-34288
 URL: https://issues.apache.org/jira/browse/FLINK-34288
 Project: Flink
  Issue Type: Sub-task
  Components: Runtime / Coordination
Affects Versions: 1.19.0
Reporter: lincoln lee
Assignee: xingbe
 Fix For: 1.19.0








[jira] [Created] (FLINK-34287) Release Testing: Verify FLINK-33768 Support dynamic source parallelism inference for batch jobs

2024-01-30 Thread lincoln lee (Jira)
lincoln lee created FLINK-34287:
---

 Summary: Release Testing: Verify FLINK-33768  Support dynamic 
source parallelism inference for batch jobs 
 Key: FLINK-34287
 URL: https://issues.apache.org/jira/browse/FLINK-34287
 Project: Flink
  Issue Type: Sub-task
  Components: Runtime / Coordination
Affects Versions: 1.19.0
Reporter: lincoln lee
Assignee: xingbe
 Fix For: 1.19.0








[ANNOUNCE] Flink 1.19 feature freeze & sync summary on 01/30/2024

2024-01-30 Thread Lincoln Lee
Hi everyone,

(Since feature freeze and release sync are on the same day, we merged the
announcement and sync summary together)


*- Feature freeze*
The feature freeze for 1.19 has now started. That means no new features or
improvements should be merged into the master branch unless you ask the
release managers first, the PR has already been approved for merging, or it is
only pending a passing CI run. Bug fixes and documentation PRs can still be
merged.


*- Cutting release branch*
Currently we have three blocker issues [1][2][3], and we will try to close
them this Friday.
We are planning to cut the release branch next Monday (Feb 6th) if no new test
instabilities appear, and we'll make another announcement on the dev mailing
list then.


*- Cross-team testing*
Release testing is expected to start next week as soon as we cut the
release branch.
As a prerequisite, before we start testing, please make sure:
1. Check whether the feature needs cross-team testing
2. If yes, make sure the documentation is completed
There's an umbrella ticket [4] for tracking the 1.19 testing. The release
managers will create tickets for all completed features listed on the 1.19
wiki page [5] and assign each to the feature's responsible contributor.
Contributors are also encouraged to create tickets following the steps in the
umbrella ticket if there are other features that need to be cross-team tested.

*- Release notes*

All new features and behavior changes require authors to fill out the
'Release Note' field in JIRA (click the Edit button and scroll to the middle
of the page).
This is especially important since 1.19 involves a lot of deprecations, which
matter to users and will be part of the release announcement.

- *Sync meeting* (https://meet.google.com/vcx-arzs-trv)

We've already switched to weekly release sync, so the next release sync
will be on Feb 6th, 2024. Feel free to join us!

[1] https://issues.apache.org/jira/browse/FLINK-34148
[2] https://issues.apache.org/jira/browse/FLINK-34007
[3] https://issues.apache.org/jira/browse/FLINK-34259
[4] https://issues.apache.org/jira/browse/FLINK-34285
[5] https://cwiki.apache.org/confluence/display/FLINK/1.19+Release

Best,
Yun, Jing, Martijn and Lincoln


Re: [DISCUSS] FLIP-419: Optimize multi-sink query plan generation

2024-01-30 Thread Jeyhun Karimov
Hi devs,

I just wanted to give an update on this FLIP.
I updated the doc based on the comments from Jim.
Also, I developed a prototype and did some testing.

In my small prototype, I ran the following tests:

- org.apache.flink.table.planner.plan.stream.sql.DagOptimizationTest#testMultiSinks1
- org.apache.flink.table.planner.plan.stream.sql.DagOptimizationTest#testMultiSinks2
- org.apache.flink.table.planner.plan.stream.sql.DagOptimizationTest#testMultiSinks3
- org.apache.flink.table.planner.plan.stream.sql.DagOptimizationTest#testMultiSinks4
- org.apache.flink.table.planner.plan.stream.sql.DagOptimizationTest#testMultiSinks5
- org.apache.flink.table.planner.plan.stream.sql.DagOptimizationTest#testMultiSinksWithUDTF
- org.apache.flink.table.planner.plan.stream.sql.DagOptimizationTest#testMultiSinksSplitOnUnion1
- org.apache.flink.table.planner.plan.stream.sql.DagOptimizationTest#testMultiSinksSplitOnUnion2
- org.apache.flink.table.planner.plan.stream.sql.DagOptimizationTest#testMultiSinksSplitOnUnion3
- org.apache.flink.table.planner.plan.stream.sql.DagOptimizationTest#testMultiSinksSplitOnUnion4


These tests are e2e DAG optimization tests, covering query parsing,
validation, optimization, and result checking.

In these e2e optimization tests, my prototype was 15-20% faster than the
existing Flink optimization structure (with the "cost" of simplifying the
codebase).


Any questions/comments are more than welcome.


Regards,

Jeyhun Karimov

On Wed, Jan 17, 2024 at 9:11 PM Jeyhun Karimov  wrote:

> Hi Jim,
>
> Thanks for your comments. Please find my answers below:
>
>1. StreamOptimizeContext may still be needed to pass the fact that we
>>are optimizing a streaming query.  I don't think this class will go
>> away
>>completely.  (I agree it may become more simple if the kind or
>>mini-batch configuration can be removed.)
>
>
> What I meant is that it might go away if we get rid of
> *isUpdateBeforeRequired* and *getMiniBatchInterval *fields.
> Of course if we can get rid of only one of them, then the
> *StreamOptimizeContext* class will not be removed but get simpler.
> Will update the doc accordingly.
>
>2. How are the mini-batch and changelog inference rules tightly coupled?
>>I looked a little bit and I haven't seen any connection between them.
>> It
>>seems like the changelog inference is what needs to run multiple times.
>
>
> Sorry for the misunderstanding. The mini-batch and changelog inference are
> not coupled among themselves but with the high-level optimization logic.
> The idea is to separate the query optimization into 1) optimize 2) enrich
> with changelog inference 3) enrich with mini-batch interval inference and
> 4) rewrite
>
>3. I think your point about code complexity is unnecessary.
>> StreamOptimizeContext
>>extends org.apache.calcite.plan.Context which is used an interface to
>> pass
>>information and objects through the Calcite stack.
>
>
> I partially agree. Please see my answer above for the question 1.
>
>4. Is an alternative where the complexity of the changelog optimization
>>can be moved into the `FlinkChangelogModeInferenceProgram`?  (If this
>> is
>>coupling between the mini-batch and changelog rules, then this would
>> not
>>make sense.)
>
>
> Good point. Yes, this is definitely an alternative.
>
>5. There are some other smaller refactorings.  I tried some of them
>>here: https://github.com/apache/flink/pull/24108 Mostly, it is syntax
>>and using lazy vals to avoid recomputing various things.  (Feel free to
>>take whatever actually works; I haven't run the tests.)
>
>
> I took a look at your PR. For sure, some of the refactorings I will reuse
> (probably rebase by the time I have this ready :))
>
>
> Separately, folks on the Calcite dev list are thinking about multi-query
>> optimization:
>> https://lists.apache.org/thread/mcdqwrtpx0os54t2nn9vtk17spkp5o5k
>> https://issues.apache.org/jira/browse/CALCITE-6188
>
>
> Seems interesting. But Calcite's MQO approach will probably require some
> drastic changes in our codebase once we adopt it.
> This approach is more incremental.
>
> Hope my comments answer your questions.
>
> Regards,
> Jeyhun Karimov
>
> On Wed, Jan 17, 2024 at 2:36 AM Jim Hughes 
> wrote:
>
>> Hi Jeyhun,
>>
>>
>> Generally, I like the idea of speeding up the optimizer in the case of
>> multiple queries!
>>
>>
>> I am new to the optimizer, but I have a few comments / questions.
>>
>>
>>
>>1. StreamOptimizeContext may still be needed to pass the fact that we
>>are optimizing a streaming query.  I don't think this class will go
>> away
>>completely.  (I agree it may become more simple if the kind or
>>mini-batch configuration can be removed.)
>>2. How are the mini-batch and changelog inference rules tightly
>> coupled?
>>I looked a little bit and I haven't seen any connection between them.
>> It
>>

[jira] [Created] (FLINK-34286) Attach cluster config map labels at creation time

2024-01-30 Thread Chesnay Schepler (Jira)
Chesnay Schepler created FLINK-34286:


 Summary: Attach cluster config map labels at creation time
 Key: FLINK-34286
 URL: https://issues.apache.org/jira/browse/FLINK-34286
 Project: Flink
  Issue Type: Improvement
  Components: Deployment / Kubernetes
Reporter: Chesnay Schepler
Assignee: Chesnay Schepler
 Fix For: 1.19.0


We attach a set of labels to config maps that we create to ease the manual 
cleanup by users in case Flink fails unrecoverably.

For cluster config maps (that are used for leader election), these labels are 
not set at creation time, but when leadership is acquired, in contrast to job 
config maps.

This means there's a gap where we create a CM without any labels attached;
should Flink fail before leadership can be acquired, the CM will continue to
lack labels indefinitely.

AFAICT it should be straightforward, at least API-wise, to set these labels at
creation time.
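A minimal sketch of the gap described above, using hypothetical stand-in types rather than Flink's actual Kubernetes client classes: if labels are only attached after leadership is acquired, a crash in between leaves a config map that a label-selector-based cleanup can no longer find.

```java
import java.util.Collections;
import java.util.Map;

// Hypothetical stand-ins (not Flink's or fabric8's real classes) contrasting
// label-at-creation with label-after-leadership for a leader-election CM.
class ConfigMapLabelDemo {
    static final class ConfigMap {
        final String name;
        final Map<String, String> labels;
        ConfigMap(String name, Map<String, String> labels) {
            this.name = name;
            this.labels = labels;
        }
    }

    // Current behavior for cluster config maps: created without labels;
    // labels would only be attached later, once leadership is acquired.
    static ConfigMap createThenLabelLater(String name) {
        return new ConfigMap(name, Collections.emptyMap());
    }

    // Proposed behavior (as already done for job CMs): labels at creation.
    static ConfigMap createWithLabels(String name, Map<String, String> labels) {
        return new ConfigMap(name, labels);
    }

    public static void main(String[] args) {
        Map<String, String> cleanupLabels =
            Map.of("app", "my-flink-cluster", "type", "flink-native-kubernetes");

        ConfigMap gap = createThenLabelLater("my-cluster-leader");
        ConfigMap fixed = createWithLabels("my-cluster-leader", cleanupLabels);

        // A crash before leadership acquisition leaves `gap` unlabeled
        // forever, while `fixed` stays discoverable by label selector.
        System.out.println(gap.labels.isEmpty());
        System.out.println(fixed.labels.containsKey("app"));
    }
}
```

The label keys above are illustrative only; the actual cleanup labels Flink attaches are defined by its Kubernetes integration.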





[jira] [Created] (FLINK-34285) [Umbrella] Test Flink Release 1.19

2024-01-30 Thread lincoln lee (Jira)
lincoln lee created FLINK-34285:
---

 Summary: [Umbrella] Test Flink Release 1.19
 Key: FLINK-34285
 URL: https://issues.apache.org/jira/browse/FLINK-34285
 Project: Flink
  Issue Type: Improvement
  Components: Tests
Affects Versions: 1.19.0
Reporter: lincoln lee
Assignee: lincoln lee
 Fix For: 1.19.0


This is an umbrella ticket for the Flink 1.19 testing efforts.
Please follow these steps:
1. Check whether the feature needs cross-team testing; if not, the author just
closes the ticket
2. If testing is required, the author should prepare the related user
documentation [must have] and any additional instructions thought necessary
for testers (if any exist, e.g., limitations that are outside the scope of the
design)
3. After No. 2 is done, the author can close the Jira and create corresponding
Jiras for tracking the testing results (keep them unassigned or assign them to
someone who has already volunteered to test the feature)

Note: The release managers will create tickets for all completed features
listed on the 1.19 wiki page, and contributors are encouraged to create
tickets following the steps above if there are other features that need to be
cross-team tested. All such tasks should be opened with:

Priority: Blocker
Fix Version: 1.19.0
Label: release-testing





Re: [VOTE] Release flink-connector-opensearch v1.1.0, release candidate #1

2024-01-30 Thread Leonard Xu
Sorry for late verification, +1(binding)

- built from source code succeeded
- verified signatures
- verified hashsums 
- checked the contents contains jar and pom files in apache repo 
- checked Github release tag 
- checked release notes
- reviewed the web PR


Best,
Leonard

> 2024年1月13日 下午4:41,Jiabao Sun  写道:
> 
> +1 (non-binding)
> 
> - Validated hashes
> - Verified signature
> - Verified tags
> - Verified no binaries in the source archive
> - Reviewed web pr and found that there're some conflicts need to be resolved
> 
> Best,
> Jiabao
> 
> 
>> 2024年1月12日 23:58,Danny Cranmer  写道:
>> 
>> Apologies I jumped the gun on this one. We only have 2 binding votes.
>> Reopening the thread.
>> 
>> On Fri, Jan 12, 2024 at 3:43 PM Danny Cranmer 
>> wrote:
>> 
>>> Thanks all, this vote is now closed, I will announce the results on a
>>> separate thread.
>>> 
>>> Thanks,
>>> Danny
>>> 
>>> On Fri, Jan 12, 2024 at 3:43 PM Danny Cranmer 
>>> wrote:
>>> 
 +1 (binding)
 
 - Verified signatures and checksums
 - Reviewed release notes
 - Verified no binaries in the source archive
 - Source builds using Maven
 - Reviewed NOTICE files (I suppose the copyright needs to be 2024 now!)
 
 Thanks,
 Danny
 
 On Fri, Jan 12, 2024 at 12:56 PM Martijn Visser 
 wrote:
 
> One non blocking nit: the version for flink.version in the main POM is
> set to 1.17.1. I think this should be 1.17.0 (since that's the lowest
> possible Flink version that's supported).
> 
> +1 (binding)
> 
> - Validated hashes
> - Verified signature
> - Verified that no binaries exist in the source archive
> - Build the source with Maven
> - Verified licenses
> - Verified web PRs
> 
> On Mon, Jan 1, 2024 at 11:57 AM Danny Cranmer 
> wrote:
>> 
>> Hey,
>> 
>> Gordon, apologies for the delay. Yes this is the correct
> understanding, all
>> connectors follow a similar pattern.
>> 
>> Would appreciate some PMC eyes on this release.
>> 
>> Thanks,
>> Danny
>> 
>> On Thu, 23 Nov 2023, 23:28 Tzu-Li (Gordon) Tai, 
> wrote:
>> 
>>> Hi Danny,
>>> 
>>> Thanks for starting a RC for this.
>>> 
>>> From the looks of the staged POMs for 1.1.0-1.18, the flink versions
> for
>>> Flink dependencies still point to 1.17.1.
>>> 
>>> My understanding is that this is fine, as those provided scope
>>> dependencies (e.g. flink-streaming-java) will have their versions
>>> overwritten by the user POM if they do intend to compile their jobs
> against
>>> Flink 1.18.x.
>>> Can you clarify if this is the correct understanding of how we
> intend the
>>> externalized connector artifacts to be published? Related discussion
> on
>>> [1].
>>> 
>>> Thanks,
>>> Gordon
>>> 
>>> [1] https://lists.apache.org/thread/x1pyrrrq7o1wv1lcdovhzpo4qhd4tvb4
>>> 
>>> On Thu, Nov 23, 2023 at 3:14 PM Sergey Nuyanzin > 
>>> wrote:
>>> 
 +1 (non-binding)
 
 - downloaded artifacts
 - built from source
 - verified checksums and signatures
 - reviewed web pr
 
 
 On Mon, Nov 6, 2023 at 5:31 PM Ryan Skraba
> >>> 
 wrote:
 
> Hello! +1 (non-binding) Thanks for the release!
> 
> I've validated the source for the RC1:
> * flink-connector-opensearch-1.1.0-src.tgz at r64995
> * The sha512 checksum is OK.
> * The source file is signed correctly.
> * The signature 0F79F2AFB2351BC29678544591F9C1EC125FD8DB is
> found in
>>> the
> KEYS file, and on https://keyserver.ubuntu.com/
> * The source file is consistent with the GitHub tag v1.1.0-rc1,
> which
> corresponds to commit 0f659cc65131c9ff7c8c35eb91f5189e80414ea1
> - The files explicitly excluded by create_pristine_sources (such
> as
> .gitignore and the submodule tools/releasing/shared) are not
> present.
> * Has a LICENSE file and a NOTICE file
> * Does not contain any compiled binaries.
> 
> * The sources can be compiled and unit tests pass with
> flink.version
 1.17.1
> and flink.version 1.18.0
> 
> * Nexus has three staged artifact ids for 1.1.0-1.17 and
> 1.1.0-1.18
> - flink-connector-opensearch (.jar, -javadoc.jar, -sources.jar,
> -tests.jar and .pom)
> - flink-sql-connector-opensearch (.jar, -sources.jar and .pom)
> - flink-connector-gcp-pubsub-parent (only .pom)
> 
> All my best, Ryan
> 
> On Fri, Nov 3, 2023 at 10:29 AM Danny Cranmer <
> dannycran...@apache.org
 
> wrote:
>> 
>> Hi everyone,
>> 
>> Please review and vote on the release candidate #1 for the
> version
 1.1.0

[VOTE] Release flink-connector-mongodb v1.1.0, release candidate #2

2024-01-30 Thread Leonard Xu
Hey all,

Please help review and vote on the release candidate #2 for the version v1.1.0 
of the
Apache Flink MongoDB Connector as follows:

[ ] +1, Approve the release
[ ] -1, Do not approve the release (please provide specific comments)

The complete staging area is available for your review, which includes:
* JIRA release notes [1],
* The official Apache source release to be deployed to dist.apache.org [2],
which are signed with the key with fingerprint
5B2F6608732389AEB67331F5B197E1F1108998AD [3],
* All artifacts to be deployed to the Maven Central Repository [4],
* Source code tag v1.1.0-rc2 [5],
* Website pull request listing the new release [6].

The vote will be open for at least 72 hours. It is adopted by majority
approval, with at least 3 PMC affirmative votes.


Best,
Leonard
[1] 
https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522=12353483
[2] 
https://dist.apache.org/repos/dist/dev/flink/flink-connector-mongodb-1.1.0-rc2/
[3] https://dist.apache.org/repos/dist/release/flink/KEYS
[4] https://repository.apache.org/content/repositories/orgapacheflink-1705/
[5] https://github.com/apache/flink-connector-mongodb/tree/v1.1.0-rc2
[6] https://github.com/apache/flink-web/pull/719

[jira] [Created] (FLINK-34284) Submit Software License Grant to ASF

2024-01-30 Thread Leonard Xu (Jira)
Leonard Xu created FLINK-34284:
--

 Summary: Submit Software License Grant to ASF
 Key: FLINK-34284
 URL: https://issues.apache.org/jira/browse/FLINK-34284
 Project: Flink
  Issue Type: Sub-task
  Components: Flink CDC
Reporter: Leonard Xu


As ASF software license grant[1] required, we need submit the Software Grant 
Agreement.

[1] https://www.apache.org/licenses/contributor-agreements.html#grants





[jira] [Created] (FLINK-34283) CLONE - Verify that no exclusions were erroneously added to the japicmp plugin

2024-01-30 Thread lincoln lee (Jira)
lincoln lee created FLINK-34283:
---

 Summary: CLONE - Verify that no exclusions were erroneously added 
to the japicmp plugin
 Key: FLINK-34283
 URL: https://issues.apache.org/jira/browse/FLINK-34283
 Project: Flink
  Issue Type: Sub-task
Reporter: lincoln lee
Assignee: Matthias Pohl


Verify that no exclusions were erroneously added to the japicmp plugin that
break compatibility guarantees. Check the exclusions for the
japicmp-maven-plugin in the root pom (see
[apache/flink:pom.xml:2175ff|https://github.com/apache/flink/blob/3856c49af77601cf7943a5072d8c932279ce46b4/pom.xml#L2175])
for exclusions that:
* For minor releases: break source compatibility for {{@Public}} APIs
* For patch releases: break source/binary compatibility for 
{{@Public}}/{{@PublicEvolving}}  APIs
Any such exclusion must be properly justified, in advance.





[jira] [Created] (FLINK-34279) CLONE - Cross team testing

2024-01-30 Thread lincoln lee (Jira)
lincoln lee created FLINK-34279:
---

 Summary: CLONE - Cross team testing
 Key: FLINK-34279
 URL: https://issues.apache.org/jira/browse/FLINK-34279
 Project: Flink
  Issue Type: Sub-task
Reporter: lincoln lee
Assignee: Qingsheng Ren


For user facing features that go into the release we'd like to ensure they can 
actually _be used_ by Flink users. To achieve this the release managers ensure 
that an issue for cross team testing is created in the Apache Flink Jira. This 
can and should be picked up by other community members to verify the 
functionality and usability of the feature.
The issue should contain some entry points which enables other community 
members to test it. It should not contain documentation on how to use the 
feature as this should be part of the actual documentation. The cross team 
tests are performed after the feature freeze. Documentation should be in place 
before that. Those tests are manual tests, so do not confuse them with 
automated tests.
To sum that up:
 * User facing features should be tested by other contributors
 * The scope is usability and sanity of the feature
 * The feature needs to be already documented
 * The contributor creates an issue containing some pointers on how to get 
started (e.g. link to the documentation, suggested targets of verification)
 * Other community members pick those issues up and provide feedback
 * Cross team testing happens right after the feature freeze

 

h3. Expectations
 * Jira issues for each expected release task according to the release plan is 
created and labeled as {{{}release-testing{}}}.
 * All the created release-testing-related Jira issues are resolved and the 
corresponding blocker issues are fixed.





[jira] [Created] (FLINK-34282) CLONE - Create a release branch

2024-01-30 Thread lincoln lee (Jira)
lincoln lee created FLINK-34282:
---

 Summary: CLONE - Create a release branch
 Key: FLINK-34282
 URL: https://issues.apache.org/jira/browse/FLINK-34282
 Project: Flink
  Issue Type: Sub-task
Affects Versions: 1.17.0
Reporter: lincoln lee
Assignee: Leonard Xu
 Fix For: 1.17.0


If you are doing a new minor release, you need to update Flink version in the 
following repositories and the [AzureCI project 
configuration|https://dev.azure.com/apache-flink/apache-flink/]:
 * [apache/flink|https://github.com/apache/flink]
 * [apache/flink-docker|https://github.com/apache/flink-docker]
 * [apache/flink-benchmarks|https://github.com/apache/flink-benchmarks]

Patch releases don't require these repositories to be touched. Simply 
check out the already existing branch for that version:
{code:java}
$ git checkout release-$SHORT_RELEASE_VERSION
{code}
h4. Flink repository

Create a branch for the new version that we want to release before updating the 
master branch to the next development version:
{code:bash}
$ cd ./tools
tools $ releasing/create_snapshot_branch.sh
tools $ git checkout master
tools $ OLD_VERSION=$CURRENT_SNAPSHOT_VERSION \
    NEW_VERSION=$NEXT_SNAPSHOT_VERSION releasing/update_branch_version.sh
{code}
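For orientation, a rough sketch of what a version-bump script like {{update_branch_version.sh}} boils down to. The find/sed mechanism below is an assumption for illustration, demonstrated on a throwaway pom fragment rather than a real checkout:

```shell
# Assumed sketch of a version bump: rewrite the project version in every
# pom.xml under the tree. Demonstrated on a throwaway file, not the real script.
OLD_VERSION="1.5-SNAPSHOT"
NEW_VERSION="1.6-SNAPSHOT"
workdir=$(mktemp -d)
printf '<version>%s</version>\n' "$OLD_VERSION" > "$workdir/pom.xml"
# rewrite the version string in-place in all pom.xml files under the directory
find "$workdir" -name pom.xml -exec sed -i "s/${OLD_VERSION}/${NEW_VERSION}/g" {} +
cat "$workdir/pom.xml"
```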
In the {{master}} branch, add a new value (e.g. {{v1_16("1.16")}}) to 
[apache-flink:flink-annotations/src/main/java/org/apache/flink/FlinkVersion|https://github.com/apache/flink/blob/master/flink-annotations/src/main/java/org/apache/flink/FlinkVersion.java]
 as the last entry:
{code:java}
// ...
v1_12("1.12"),
v1_13("1.13"),
v1_14("1.14"),
v1_15("1.15"),
v1_16("1.16");
{code}
The newly created branch and updated {{master}} branch need to be pushed to the 
official repository.
h4. Flink Docker Repository

Afterwards fork off from {{dev-master}} a {{dev-x.y}} branch in the 
[apache/flink-docker|https://github.com/apache/flink-docker] repository. Make 
sure that 
[apache/flink-docker:.github/workflows/ci.yml|https://github.com/apache/flink-docker/blob/dev-master/.github/workflows/ci.yml]
 points to the correct snapshot version; for {{dev-x.y}} it should point to 
{{{}x.y-SNAPSHOT{}}}, while for {{dev-master}} it should point to the most 
recent snapshot version ({{$NEXT_SNAPSHOT_VERSION}}).

After pushing the new minor release branch, as the last step you should also 
update the documentation workflow to also build the documentation for the new 
release branch. Check [Managing 
Documentation|https://cwiki.apache.org/confluence/display/FLINK/Managing+Documentation]
 for details on how to do that. You may also want to manually trigger a build to 
make the changes visible as soon as possible.

h4. Flink Benchmark Repository
First of all, fork off a {{dev-x.y}} branch from the {{master}} branch in 
[apache/flink-benchmarks|https://github.com/apache/flink-benchmarks], so that 
we have a branch named {{dev-x.y}} that builds on top of 
{{$CURRENT_SNAPSHOT_VERSION}}.

Then, inside the repository you need to manually update the {{flink.version}} 
property inside the parent *pom.xml* file. It should be pointing to the most 
recent snapshot version ($NEXT_SNAPSHOT_VERSION). For example:
{code:xml}
<flink.version>1.18-SNAPSHOT</flink.version>
{code}
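The property edit can also be scripted; a hedged sketch, shown here on a throwaway file (the real change targets the parent pom.xml of flink-benchmarks):

```shell
# Assumed sketch: bump the flink.version property in a pom file in-place.
NEXT_SNAPSHOT_VERSION="1.18-SNAPSHOT"
pom=$(mktemp)
echo '<flink.version>1.17-SNAPSHOT</flink.version>' > "$pom"
# replace whatever version is currently set with the next snapshot version
sed -i "s|<flink.version>[^<]*</flink.version>|<flink.version>${NEXT_SNAPSHOT_VERSION}</flink.version>|" "$pom"
cat "$pom"
```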

h4. AzureCI Project Configuration
The new release branch needs to be configured within AzureCI to make azure 
aware of the new release branch. This matter can only be handled by Ververica 
employees since they are owning the AzureCI setup.
 

h3. Expectations (Minor Version only if not stated otherwise)
 * Release branch has been created and pushed
 * Changes on the new release branch are picked up by [Azure 
CI|https://dev.azure.com/apache-flink/apache-flink/_build?definitionId=1&_a=summary]
 * {{master}} branch has the version information updated to the new version 
(check pom.xml files and the 
[apache-flink:flink-annotations/src/main/java/org/apache/flink/FlinkVersion|https://github.com/apache/flink/blob/master/flink-annotations/src/main/java/org/apache/flink/FlinkVersion.java]
 enum)
 * New version is added to the 
[apache-flink:flink-annotations/src/main/java/org/apache/flink/FlinkVersion|https://github.com/apache/flink/blob/master/flink-annotations/src/main/java/org/apache/flink/FlinkVersion.java]
 enum.
 * Make sure [flink-docker|https://github.com/apache/flink-docker/] has 
{{dev-x.y}} branch and docker e2e tests run against this branch in the 
corresponding Apache Flink release branch (see 
[apache/flink:flink-end-to-end-tests/test-scripts/common_docker.sh:51|https://github.com/apache/flink/blob/master/flink-end-to-end-tests/test-scripts/common_docker.sh#L51])
 * 
[apache-flink:docs/config.toml|https://github.com/apache/flink/blob/release-1.17/docs/config.toml]
 has been updated appropriately in the new Apache Flink release branch.
 * The {{flink.version}} property (see 

[jira] [Created] (FLINK-34277) CLONE - Triage release-blocking issues in JIRA

2024-01-30 Thread lincoln lee (Jira)
lincoln lee created FLINK-34277:
---

 Summary: CLONE - Triage release-blocking issues in JIRA
 Key: FLINK-34277
 URL: https://issues.apache.org/jira/browse/FLINK-34277
 Project: Flink
  Issue Type: Sub-task
Reporter: lincoln lee
Assignee: Qingsheng Ren


There could be outstanding release-blocking issues, which should be triaged 
before proceeding to build a release candidate. We track them by assigning a 
specific Fix Version even before the issue is resolved.

The list of release-blocking issues is available at the version status page. 
Triage each unresolved issue with one of the following resolutions:
 * If the issue has been resolved and JIRA was not updated, resolve it 
accordingly.
 * If the issue has not been resolved and it is acceptable to defer this until 
the next release, update the Fix Version field to the new version you just 
created. Please consider discussing this with stakeholders and the dev@ mailing 
list, as appropriate.
 ** When using "Bulk Change" functionality of Jira
 *** First, add the newly created version to Fix Version for all unresolved 
tickets that have the old version among their Fix Versions.
 *** Afterwards, remove the old version from the Fix Version.
 * If the issue has not been resolved and it is not acceptable to release until 
it is fixed, the release cannot proceed. Instead, work with the Flink community 
to resolve the issue.
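The list of open blockers can also be pulled via the Jira REST API. The sketch below only constructs and prints the query URL; the {{jql}} parameter and endpoint follow Jira's generic search API, and the actual request is left commented out:

```shell
# Build a Jira search URL for unresolved FLINK tickets carrying the
# release's fixVersion. RELEASE_VERSION is assumed to be set already.
RELEASE_VERSION="1.19.0"
JQL="project = FLINK AND resolution = Unresolved AND fixVersion = ${RELEASE_VERSION}"
# minimal URL-encoding for the characters appearing in this JQL string
ENCODED=$(printf '%s' "$JQL" | sed -e 's/ /%20/g' -e 's/=/%3D/g')
URL="https://issues.apache.org/jira/rest/api/2/search?jql=${ENCODED}&fields=key,summary"
echo "$URL"
# curl -s "$URL" | jq -r '.issues[].key'   # uncomment to run the query
```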

 

h3. Expectations
 * There are no release blocking JIRA issues



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-34276) CLONE - Create a new version in JIRA

2024-01-30 Thread lincoln lee (Jira)
lincoln lee created FLINK-34276:
---

 Summary: CLONE - Create a new version in JIRA
 Key: FLINK-34276
 URL: https://issues.apache.org/jira/browse/FLINK-34276
 Project: Flink
  Issue Type: Sub-task
Reporter: lincoln lee
Assignee: Martijn Visser


When contributors resolve an issue in JIRA, they are tagging it with a release 
that will contain their changes. With the release currently underway, new 
issues should be resolved against a subsequent future release. Therefore, you 
should create a release item for this subsequent release, as follows:
 # In JIRA, navigate to the [Flink > Administration > 
Versions|https://issues.apache.org/jira/plugins/servlet/project-config/FLINK/versions].
 # Add a new release: choose the next minor version number compared to the one 
currently underway, select today’s date as the Start Date, and choose Add.
(Note: Only PMC members have access to the project administration. If you do 
not have access, ask on the mailing list for assistance.)
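For illustration only, version creation can also be done through Jira's REST API. The endpoint and field names below follow Jira's generic version API; PMC credentials would be required, so the request itself is commented out and only the payload is built:

```shell
# Assumed payload shape for Jira's "create version" endpoint.
NEW_JIRA_VERSION="1.20.0"
START_DATE="2024-01-30"
payload=$(printf '{"name":"%s","startDate":"%s","project":"FLINK"}' \
  "$NEW_JIRA_VERSION" "$START_DATE")
echo "$payload"
# curl -u "$APACHE_USER" -H 'Content-Type: application/json' \
#   -X POST -d "$payload" https://issues.apache.org/jira/rest/api/2/version
```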

 

h3. Expectations
 * The new version should be listed in the dropdown menu of {{fixVersion}} or 
{{affectedVersion}} under "unreleased versions" when creating a new Jira issue.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-34281) CLONE - Select executing Release Manager

2024-01-30 Thread lincoln lee (Jira)
lincoln lee created FLINK-34281:
---

 Summary: CLONE - Select executing Release Manager
 Key: FLINK-34281
 URL: https://issues.apache.org/jira/browse/FLINK-34281
 Project: Flink
  Issue Type: Sub-task
  Components: Release System
Affects Versions: 1.17.0
Reporter: lincoln lee
Assignee: Qingsheng Ren
 Fix For: 1.17.0


h4. GPG Key

You need to have a GPG key to sign the release artifacts. Please be aware of 
the ASF-wide [release signing 
guidelines|https://www.apache.org/dev/release-signing.html]. If you don’t have 
a GPG key associated with your Apache account, please create one according to 
the guidelines.

Determine your Apache GPG Key and Key ID, as follows:
{code:java}
$ gpg --list-keys
{code}
This will list your GPG keys. One of these should reflect your Apache account, 
for example:
{code:java}
--
pub   2048R/845E6689 2016-02-23
uid  Nomen Nescio 
sub   2048R/BA4D50BE 2016-02-23
{code}
In the example above, the key ID is the 8-digit hex string in the {{pub}} line: 
{{{}845E6689{}}}.
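If you want the key ID in a variable for the later {{git config}} step, it can be parsed out of the {{pub}} line. This sketch works on sample text rather than a live keyring:

```shell
# Extract the 8-hex-digit key ID from a `gpg --list-keys` style pub line.
pub_line='pub   2048R/845E6689 2016-02-23'
# keep only the capture group after the last slash
key_id=$(printf '%s\n' "$pub_line" | sed -E 's|.*/([0-9A-F]{8}).*|\1|')
echo "$key_id"
```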

Now, add your Apache GPG key to Flink's {{KEYS}} file in the [Apache Flink 
release KEYS file|https://dist.apache.org/repos/dist/release/flink/KEYS] 
repository at [dist.apache.org|http://dist.apache.org/]. Follow the 
instructions listed at the top of these files. (Note: Only PMC members have 
write access to the release repository. If you end up getting 403 errors ask on 
the mailing list for assistance.)

Configure {{git}} to use this key when signing code by giving it your key ID, 
as follows:
{code:java}
$ git config --global user.signingkey 845E6689
{code}
You may drop the {{--global}} option if you’d prefer to use this key for the 
current repository only.

You may wish to start {{gpg-agent}} to unlock your GPG key only once using your 
passphrase. Otherwise, you may need to enter this passphrase hundreds of times. 
The setup for {{gpg-agent}} varies based on operating system, but may be 
something like this:
{code:bash}
$ eval $(gpg-agent --daemon --no-grab --write-env-file $HOME/.gpg-agent-info)
$ export GPG_TTY=$(tty)
$ export GPG_AGENT_INFO
{code}
h4. Access to Apache Nexus repository

Configure access to the [Apache Nexus 
repository|https://repository.apache.org/], which enables final deployment of 
releases to the Maven Central Repository.
 # Log in with your Apache account.
 # Confirm you have appropriate access by finding {{org.apache.flink}} under 
{{{}Staging Profiles{}}}.
 # Navigate to your {{Profile}} (top right drop-down menu of the page).
 # Choose {{User Token}} from the dropdown, then click {{{}Access User 
Token{}}}. Copy a snippet of the Maven XML configuration block.
 # Insert this snippet twice into your global Maven {{settings.xml}} file, 
typically {{{}${HOME}/.m2/settings.xml{}}}. The end result should look like 
this, where {{TOKEN_NAME}} and {{TOKEN_PASSWORD}} are your secret tokens:
{code:xml}
<settings>
   <servers>
     <server>
       <id>apache.releases.https</id>
       <username>TOKEN_NAME</username>
       <password>TOKEN_PASSWORD</password>
     </server>
     <server>
       <id>apache.snapshots.https</id>
       <username>TOKEN_NAME</username>
       <password>TOKEN_PASSWORD</password>
     </server>
   </servers>
</settings>
{code}

h4. Website development setup

Get ready for updating the Flink website by following the [website development 
instructions|https://flink.apache.org/contributing/improve-website.html].
h4. GNU Tar Setup for Mac (Skip this step if you are not using a Mac)

The default tar application on Mac does not support GNU archive format and 
defaults to Pax. This bloats the archive with unnecessary metadata that can 
result in additional files when decompressing (see [1.15.2-RC2 vote 
thread|https://lists.apache.org/thread/mzbgsb7y9vdp9bs00gsgscsjv2ygy58q]). 
Install gnu-tar and create a symbolic link so it is used in preference to the 
default tar program.
{code:bash}
$ brew install gnu-tar
$ ln -s /usr/local/bin/gtar /usr/local/bin/tar
$ which tar
{code}
 

h3. Expectations
 * Release Manager’s GPG key is published to 
[dist.apache.org|http://dist.apache.org/]
 * Release Manager’s GPG key is configured in git configuration
 * Release Manager's GPG key is configured as the default gpg key.
 * Release Manager has {{org.apache.flink}} listed under Staging Profiles in 
Nexus
 * Release Manager’s Nexus User Token is configured in settings.xml



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-34275) Prepare Flink 1.19 Release

2024-01-30 Thread lincoln lee (Jira)
lincoln lee created FLINK-34275:
---

 Summary: Prepare Flink 1.19 Release
 Key: FLINK-34275
 URL: https://issues.apache.org/jira/browse/FLINK-34275
 Project: Flink
  Issue Type: New Feature
  Components: Release System
Affects Versions: 1.17.0
Reporter: lincoln lee
Assignee: Leonard Xu
 Fix For: 1.17.0


This umbrella issue is meant as a test balloon for moving the [release 
documentation|https://cwiki.apache.org/confluence/display/FLINK/Creating+a+Flink+Release]
 into Jira.
h3. Prerequisites
h4. Environment Variables

Commands in the subtasks might expect some of the following environment 
variables to be set according to the version that is about to be released:
{code:bash}
RELEASE_VERSION="1.5.0"
SHORT_RELEASE_VERSION="1.5"
CURRENT_SNAPSHOT_VERSION="$SHORT_RELEASE_VERSION-SNAPSHOT"
NEXT_SNAPSHOT_VERSION="1.6-SNAPSHOT"
SHORT_NEXT_SNAPSHOT_VERSION="1.6"
{code}
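Most of these values can be derived from {{RELEASE_VERSION}} alone. A hedged sketch of that derivation using plain parameter expansion (how release managers actually set them is not prescribed here):

```shell
# Derive the snapshot/branch versions from the release version string.
RELEASE_VERSION="1.5.0"
SHORT_RELEASE_VERSION="${RELEASE_VERSION%.*}"              # drop the patch digit
CURRENT_SNAPSHOT_VERSION="${SHORT_RELEASE_VERSION}-SNAPSHOT"
MAJOR="${SHORT_RELEASE_VERSION%%.*}"
MINOR="${SHORT_RELEASE_VERSION#*.}"
NEXT_SNAPSHOT_VERSION="${MAJOR}.$((MINOR + 1))-SNAPSHOT"   # bump the minor digit
SHORT_NEXT_SNAPSHOT_VERSION="${MAJOR}.$((MINOR + 1))"
echo "$CURRENT_SNAPSHOT_VERSION $NEXT_SNAPSHOT_VERSION"
```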
h4. Build Tools

All of the following steps require Maven 3.2.5 and Java 8. Modify your 
PATH environment variable accordingly if needed.
h4. Flink Source
 * Create a new directory for this release and clone the Flink repository from 
Github to ensure you have a clean workspace (this step is optional).
 * Run {{mvn -Prelease clean install}} to ensure that the build processes that 
are specific to that profile are in good shape (this step is optional).

The rest of these instructions assumes that commands are run in the root (or 
{{./tools}} directory) of a repository on the branch of the release version 
with the above environment variables set.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-34280) CLONE - Review Release Notes in JIRA

2024-01-30 Thread lincoln lee (Jira)
lincoln lee created FLINK-34280:
---

 Summary: CLONE - Review Release Notes in JIRA
 Key: FLINK-34280
 URL: https://issues.apache.org/jira/browse/FLINK-34280
 Project: Flink
  Issue Type: Sub-task
Reporter: lincoln lee
Assignee: Qingsheng Ren


JIRA automatically generates Release Notes based on the {{Fix Version}} field 
applied to issues. Release Notes are intended for Flink users (not Flink 
committers/contributors). You should ensure that Release Notes are informative 
and useful.

Open the release notes from the version status page by choosing the release 
underway and clicking Release Notes.

You should verify that the issues listed automatically by JIRA are appropriate 
to appear in the Release Notes. Specifically, issues should:
 * Be appropriately classified as {{{}Bug{}}}, {{{}New Feature{}}}, 
{{{}Improvement{}}}, etc.
 * Represent noteworthy user-facing changes, such as new functionality, 
backward-incompatible API changes, or performance improvements.
 * Have occurred since the previous release; an issue that was introduced and 
fixed between releases should not appear in the Release Notes.
 * Have an issue title that makes sense when read on its own.

Adjust any of the above properties to improve the clarity and presentation of 
the Release Notes.

Ensure that the JIRA release notes are also included in the release notes of 
the documentation (see section "Review and update documentation").
h4. Content of Release Notes field from JIRA tickets 

To get the list of "release notes" fields from JIRA, you can run the following 
script using the JIRA REST API (note that maxResults limits the number of entries):
{code:bash}
curl -s "https://issues.apache.org/jira/rest/api/2/search?maxResults=200&jql=project%20%3D%20FLINK%20AND%20%22Release%20Note%22%20is%20not%20EMPTY%20and%20fixVersion%20%3D%20${RELEASE_VERSION}" \
 | jq '.issues[]|.key,.fields.summary,.fields.customfield_12310192' | paste - - -
{code}
{{jq}} is present in most Linux distributions, and on macOS it can be installed 
via brew.

 

h3. Expectations
 * Release Notes in JIRA have been audited and adjusted



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-34278) CLONE - Review and update documentation

2024-01-30 Thread lincoln lee (Jira)
lincoln lee created FLINK-34278:
---

 Summary: CLONE - Review and update documentation
 Key: FLINK-34278
 URL: https://issues.apache.org/jira/browse/FLINK-34278
 Project: Flink
  Issue Type: Sub-task
Affects Versions: 1.17.0
Reporter: lincoln lee
Assignee: Qingsheng Ren
 Fix For: 1.17.0


There are a few pages in the documentation that need to be reviewed and updated 
for each release.
 * Ensure that there exists a release notes page for each non-bugfix release 
(e.g., 1.5.0) in {{{}./docs/release-notes/{}}}, that it is up-to-date, and 
linked from the start page of the documentation.
 * Upgrading Applications and Flink Versions: 
[https://ci.apache.org/projects/flink/flink-docs-master/ops/upgrading.html]
 * ...

 

h3. Expectations
 * Update upgrade compatibility table 
([apache-flink:./docs/content/docs/ops/upgrading.md|https://github.com/apache/flink/blob/master/docs/content/docs/ops/upgrading.md#compatibility-table]
 and 
[apache-flink:./docs/content.zh/docs/ops/upgrading.md|https://github.com/apache/flink/blob/master/docs/content.zh/docs/ops/upgrading.md#compatibility-table]).
 * Update [Release Overview in 
Confluence|https://cwiki.apache.org/confluence/display/FLINK/Release+Management+and+Feature+Plan]
 * (minor only) The documentation for the new major release is visible under 
[https://nightlies.apache.org/flink/flink-docs-release-$SHORT_RELEASE_VERSION] 
(after at least one [doc 
build|https://github.com/apache/flink/actions/workflows/docs.yml] succeeded).
 * (minor only) The documentation for the new major release does not contain 
"-SNAPSHOT" in its version title, and all links refer to the corresponding 
version docs instead of {{{}master{}}}.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Confluence access request

2024-01-30 Thread tanjialiang
Hi, devs! I want to prepare a FLIP and start a discussion on the dev mailing 
list, but I find I don't have the access, can someone give me access to 
confluence?


My Confluence username: tanjialiang


Best regards,
tanjialiang

[jira] [Created] (FLINK-34273) git fetch fails

2024-01-30 Thread Matthias Pohl (Jira)
Matthias Pohl created FLINK-34273:
-

 Summary: git fetch fails
 Key: FLINK-34273
 URL: https://issues.apache.org/jira/browse/FLINK-34273
 Project: Flink
  Issue Type: Bug
  Components: Build System / CI, Test Infrastructure
Affects Versions: 1.18.1, 1.19.0
Reporter: Matthias Pohl


We've seen multiple {{git fetch}} failures. I assume this to be an 
infrastructure issue. This Jira issue is for documentation purposes.
{code:java}
error: RPC failed; curl 18 transfer closed with outstanding read data remaining
error: 5211 bytes of body are still expected
fetch-pack: unexpected disconnect while reading sideband packet
fatal: early EOF
fatal: fetch-pack: invalid index-pack output {code}
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=57080&view=logs&j=0e7be18f-84f2-53f0-a32d-4a5e4a174679&t=5d6dc3d3-393d-5111-3a40-c6a5a36202e6&l=667



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-34274) AdaptiveSchedulerTest.testRequirementLowerBoundDecreaseAfterResourceScarcityBelowAvailableSlots times out

2024-01-30 Thread Matthias Pohl (Jira)
Matthias Pohl created FLINK-34274:
-

 Summary: 
AdaptiveSchedulerTest.testRequirementLowerBoundDecreaseAfterResourceScarcityBelowAvailableSlots
 times out
 Key: FLINK-34274
 URL: https://issues.apache.org/jira/browse/FLINK-34274
 Project: Flink
  Issue Type: Bug
  Components: Runtime / Coordination
Affects Versions: 1.19.0
Reporter: Matthias Pohl


{code:java}
Jan 30 03:15:46 "ForkJoinPool-420-worker-25" #9746 daemon prio=5 os_prio=0 
tid=0x7fdfbb635800 nid=0x2dbd waiting on condition [0x7fdf39528000]
Jan 30 03:15:46java.lang.Thread.State: WAITING (parking)
Jan 30 03:15:46 at sun.misc.Unsafe.park(Native Method)
Jan 30 03:15:46 - parking to wait for  <0xfe642548> (a 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
Jan 30 03:15:46 at 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
Jan 30 03:15:46 at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
Jan 30 03:15:46 at 
java.util.concurrent.ArrayBlockingQueue.take(ArrayBlockingQueue.java:403)
Jan 30 03:15:46 at 
org.apache.flink.runtime.scheduler.adaptive.AdaptiveSchedulerTest$SubmissionBufferingTaskManagerGateway.waitForSubmissions(AdaptiveSchedulerTest.java:2225)
Jan 30 03:15:46 at 
org.apache.flink.runtime.scheduler.adaptive.AdaptiveSchedulerTest.awaitJobReachingParallelism(AdaptiveSchedulerTest.java:1333)
Jan 30 03:15:46 at 
org.apache.flink.runtime.scheduler.adaptive.AdaptiveSchedulerTest.testRequirementLowerBoundDecreaseAfterResourceScarcityBelowAvailableSlots(AdaptiveSchedulerTest.java:1273)
Jan 30 03:15:46 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
Method)
[...] {code}
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=57086&view=logs&j=0da23115-68bb-5dcd-192c-bd4c8adebde1&t=24c3384f-1bcb-57b3-224f-51bf973bbee8&l=9893



--
This message was sent by Atlassian Jira
(v8.20.10#820010)