Re: Re: [DISCUSS] FLIP-387: Support named parameters for functions and call procedures

2024-01-02 Thread Feng Jin
Hi all,

Thank you for the valuable input.
Since there are no new objections or suggestions, I will open a voting
thread in two days.


Best,
Feng


On Thu, Dec 21, 2023 at 6:58 PM Benchao Li  wrote:

> I'm glad to hear that this is in your plan. Sorry that I overlooked
> the PoC link in the FLIP previously; I'll go over the PoC code and
> post here if there are any more concerns.
>
> Xuyang  wrote on Thu, Dec 21, 2023 at 10:39:
>
> >
> > Hi, Benchao.
> >
> >
> > When Feng Jin and I tried the PoC together, we found that when using a
> > UDAF, Calcite directly uses the function's input parameters from
> > SqlCall#getOperandList. But in fact, these input parameters may use named
> > arguments, so the order of the parameters may be wrong, and they may not
> > include optional parameters that need default values. Instead, we should
> > use new SqlCallBinding(this, scope, call).operands() to let this method
> > correct the order and add the default values. (You can see the
> > modification to SqlToRelConverter in the PoC branch[1].)
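A minimal sketch of the operand normalization described above, using Calcite's
public SqlCallBinding API (illustrative only, not the actual PoC patch):

import java.util.List;

import org.apache.calcite.sql.SqlCall;
import org.apache.calcite.sql.SqlNode;
import org.apache.calcite.sql.validate.SqlCallBinding;
import org.apache.calcite.sql.validate.SqlValidator;
import org.apache.calcite.sql.validate.SqlValidatorScope;

public final class OperandNormalizationSketch {

    // Raw operands: they follow the user's named-argument order and omit
    // optional parameters entirely.
    static List<SqlNode> rawOperands(SqlCall call) {
        return call.getOperandList();
    }

    // Normalized operands: SqlCallBinding permutes named arguments into the
    // declared parameter order and inserts DEFAULT for omitted optional ones.
    static List<SqlNode> normalizedOperands(
            SqlValidator validator, SqlValidatorScope scope, SqlCall call) {
        return new SqlCallBinding(validator, scope, call).operands();
    }
}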
> >
> >
> > We have not reported this bug to the Calcite community yet. Our original
> > plan was to report it to the Calcite community during the process of
> > working on this FLIP, and to fix it separately in Flink's own copy of the
> > Calcite file, because the timing of Calcite releases is uncertain, and the
> > time for Flink to upgrade to the latest Calcite version is also unknown.
> >
> >
> > The link to the PoC code is at the bottom of the FLIP[2]. I'm posting it
> > here again[1].
> >
> >
> > [1]
> https://github.com/apache/flink/compare/master...hackergin:flink:poc_named_argument
> > [2]
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-387%3A+Support+named+parameters+for+functions+and+call+procedures
> >
> >
> >
> > --
> >
> > Best!
> > Xuyang
> >
> >
> >
> >
> >
> > On 2023-12-20 13:31:26, "Benchao Li"  wrote:
> > >I didn't see your POC code, so I assumed that you'll need to add
> > >SqlStdOperatorTable#DEFAULT and
> > >SqlStdOperatorTable#ARGUMENT_ASSIGNMENT to FlinkSqlOperatorTable, am I
> > >right?
> > >
> > >If yes, this would enable many builtin functions to allow default and
> > >optional arguments, for example `select md5(DEFAULT)`. I guess this
> > >is not what we want to support, right? If so, I would suggest throwing
> > >proper errors for these unexpected usages.
> > >
> > >Benchao Li  wrote on Wed, Dec 20, 2023 at 13:16:
> > >>
> > >> Thanks Feng for driving this, it's a very useful feature.
> > >>
> > >> In the FLIP, you mentioned that
> > >> > During POC verification, bugs were discovered in Calcite that
> caused issues during the validation phase. We need to modify the
> SqlValidatorImpl and SqlToRelConverter to address these problems.
> > >>
> > >> Could you log the bugs in Calcite and reference the corresponding Jira
> > >> numbers in your code? We want to upgrade the Calcite version to the
> > >> latest as much as possible, and maintaining many bugfixes in Flink adds
> > >> an additional burden for upgrading Calcite. By adding the corresponding
> > >> issue numbers, we can easily make sure that we remove these Flink-hosted
> > >> bugfixes when we upgrade to a version that already contains the fix.
> > >>
> > >> Feng Jin  wrote on Thu, Dec 14, 2023 at 19:30:
> > >> >
> > >> > Hi Timo
> > >> > Thanks for your reply.
> > >> >
> > >> > >  1) ArgumentNames annotation
> > >> >
> > >> > I'm sorry for the incorrect wording. argumentNames is a method of
> > >> > FunctionHint. We should introduce a new arguments method to replace
> > >> > this method and return Argument[].
> > >> > I updated the FLIP doc about this part.
> > >> >
> > >> > >  2) Evolution of FunctionHint
> > >> >
> > >> > +1 to defining DataTypeHint as part of ArgumentHint. I'll update the
> > >> > FLIP doc.
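A hypothetical sketch of how a UDF might declare named, optional arguments
under this proposal (annotation and attribute names follow the FLIP-387 draft
and may differ in the final API):

import org.apache.flink.table.annotation.ArgumentHint;
import org.apache.flink.table.annotation.DataTypeHint;
import org.apache.flink.table.annotation.FunctionHint;
import org.apache.flink.table.functions.ScalarFunction;

public class GreetFunction extends ScalarFunction {

    @FunctionHint(arguments = {
            @ArgumentHint(name = "name", type = @DataTypeHint("STRING")),
            @ArgumentHint(name = "greeting", type = @DataTypeHint("STRING"), isOptional = true)
    })
    public String eval(String name, String greeting) {
        // An omitted optional argument arrives as NULL.
        return (greeting == null ? "Hello" : greeting) + ", " + name;
    }
}

With named parameters, a call could then look like:
SELECT GreetFunction(greeting => 'Hi', name => user_name) FROM users;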
> > >> >
> > >> > > 3)  Semantical correctness
> > >> >
> > >> > I realized that I forgot to submit the latest modification of the
> FLIP
> > >> > document. Xuyang and I had a prior discussion before starting this
> > >> > thread.
> > >> > Let's restrict it to supporting only one eval() function, which will
> > >> > simplify the overall design.
> > >> >
> > >> > Therefore, I also concur with not permitting overloaded named
> parameters.
> > >> >
> > >> >
> > >> > Best,
> > >> > Feng
> > >> >
> > >> > On Thu, Dec 14, 2023 at 6:15 PM Timo Walther 
> wrote:
> > >> >
> > >> > > Hi Feng,
> > >> > >
> > >> > > thank you for proposing this FLIP. This nicely completes FLIP-65
> which
> > >> > > is great for usability.
> > >> > >
> > >> > > I read the FLIP and have some feedback:
> > >> > >
> > >> > >
> > >> > > 1) ArgumentNames annotation
> > >> > >
> > >> > >  > Deprecate the ArgumentNames annotation as it is not
> user-friendly for
> > >> > > specifying argument names with optional configuration.
> > >> > >
> > >> > > Which annotation does the FLIP reference here? I cannot find it
> in the
> > >> > > Flink code base.
> > >> > >
> > >> > > 2) Evolution of FunctionHint
> > >> > >
> > >> > > Introducing @ArgumentHint makes a lot of sense to me. However,
> using it
> > >> > > 

[jira] [Created] (FLINK-33971) Specifies whether to use HBase table that supports dynamic columns.

2024-01-02 Thread MOBIN (Jira)
MOBIN created FLINK-33971:
-

 Summary: Specifies whether to use HBase table that supports 
dynamic columns.
 Key: FLINK-33971
 URL: https://issues.apache.org/jira/browse/FLINK-33971
 Project: Flink
  Issue Type: New Feature
  Components: Connectors / HBase
Reporter: MOBIN


Specifies whether to use an HBase table that supports dynamic columns.

Refer to the dynamic.table parameter in this document:
https://www.alibabacloud.com/help/en/flink/developer-reference/apsaradb-for-hbase-connector#section-ltp-3fy-9qv

Sample code for a result table that supports dynamic columns

CREATE TEMPORARY TABLE datagen_source (
  id INT,
  f1hour STRING,
  f1deal BIGINT,
  f2day STRING,
  f2deal BIGINT
) WITH (
  'connector'='datagen'
);

CREATE TEMPORARY TABLE hbase_sink (
  rowkey INT,
  f1 ROW<`hour` STRING, deal BIGINT>,
  f2 ROW<`day` STRING, deal BIGINT>
) WITH (
  'connector'='hbase-2.2',
  'table-name'='',
  'zookeeper.quorum'='',
  'dynamic.table'='true'
);

INSERT INTO hbase_sink
SELECT id, ROW(f1hour, f1deal), ROW(f2day, f2deal) FROM datagen_source;
If dynamic.table is set to true, an HBase table that supports dynamic columns
is used.
Two fields must be declared in the row that corresponds to each column family:
the value of the first field indicates the name of the dynamic column, and the
value of the second field indicates the value of the dynamic column.

For example, suppose the datagen_source table contains a row of data indicating
that the ID of the commodity is 1, the transaction amount of the commodity
between 10:00 and 11:00 is 100, and the transaction amount of the commodity on
July 26, 2020 is 1. In this case, a row whose rowkey is 1 is inserted into the
ApsaraDB for HBase table, with f1:10 set to 100 and f2:2020-7-26 set to 1.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [ANNOUNCE] New Apache Flink Committer - Alexander Fedulov

2024-01-02 Thread Hang Ruan
Congratulations, Alex!

Best,
Hang

Samrat Deb  wrote on Wed, Jan 3, 2024 at 14:18:

> Congratulations Alex
>


RE: Re: [DISCUSS] FLIP-377: Support configuration to disable filter push down for Table/SQL Sources

2024-01-02 Thread Jiabao Sun
Thanks Leonard for your reminder,

The FLIP title has been changed to "FLIP-377: Support fine-grained
configuration to control filter push down for Table/SQL Sources".

Best,
Jiabao

On 2024/01/03 06:51:10 Leonard Xu wrote:
> Thanks Jiabao for driving this.
> 
> +1 to start a vote. A minor comment: should we change the FLIP title
> according to this thread context as well?
> 
> Best,
> Leonard
> 
> 
> 
> > On Jan 3, 2024 at 2:43 PM, Jiabao Sun  wrote:
> > 
> > Hi,
> > 
> > Thank you again for the discussion on this FLIP.
> > If there are no further comments, I plan to start a voting thread tomorrow.
> > 
> > Best,
> > Jiabao
> > 
> > On 2023/12/20 14:09:49 Jiabao Sun wrote:
> >> Hi,
> >> 
> >> Thank you to everyone for the discussion on this FLIP, 
> >> especially Becket for providing guidance that made it more reasonable. 
> >> 
> >> The FLIP document[1] has been updated with the recent discussed content. 
> >> Please take a look to double-check it when you have time.
> >> 
> >> If we can reach a consensus on this, I will open the voting thread in 
> >> recent days.
> >> 
> >> Best,
> >> Jiabao
> >> 
> >> [1] 
> >> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=276105768
> >> 
> >> 
> >>> On Dec 20, 2023 at 11:38, Jiabao Sun  wrote:
> >>> 
> >>> Thanks Becket,
> >>> 
> >>> The behavior description has been added to the Public Interfaces section.
> >>> 
> >>> Best,
> >>> Jiabao
> >>> 
> >>> 
>  On Dec 20, 2023 at 08:17, Becket Qin wrote:
>  
>  Hi Jiabao,
>  
>  Thanks for updating the FLIP.
>  Can you add the behavior of the policies that are only applicable to some
>  but not all of the databases? This is a part of the intended behavior of
>  the proposed configuration. So, we should include that in the FLIP.
>  Otherwise, the FLIP looks good to me.
>  
>  Cheers,
>  
>  Jiangjie (Becket) Qin
>  
>  On Tue, Dec 19, 2023 at 11:00 PM Jiabao Sun 
>  wrote:
>  
> > Hi Becket,
> > 
> > I share the same view as you regarding the prefix for this configuration
> > option.
> > 
> > For the JDBC connector, I prefer setting 'filter.handling.policy' = 
> > 'FOO'
> > and throwing an exception when the database does not support that specific
> > policy.
> > 
> > Not using a prefix can reduce the learning curve for users and avoid
> > introducing a new set of configuration options for every supported JDBC
> > database.
> > I think the policies we provide can be compatible with most databases 
> > that
> > follow the JDBC protocol.
> > However, there may be cases where certain databases cannot support some
> > policies.
> > Nevertheless, we can ensure fast failure and allow users to choose a
> > suitable policy in such situations.
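Purely as an illustration of the option discussed in this thread, a JDBC table
definition might then look like the following (the option name follows the
FLIP-377 proposal; the 'NEVER' value and the connection details are made-up
placeholders):

import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class FilterHandlingPolicySketch {
    public static void main(String[] args) {
        TableEnvironment tEnv =
                TableEnvironment.create(EnvironmentSettings.inStreamingMode());

        // The connector-level option is set without a database-specific prefix;
        // a database that cannot honor the chosen policy would fail fast.
        tEnv.executeSql(
                "CREATE TABLE orders (\n"
                        + "  id BIGINT,\n"
                        + "  amount DECIMAL(10, 2)\n"
                        + ") WITH (\n"
                        + "  'connector' = 'jdbc',\n"
                        + "  'url' = 'jdbc:mysql://localhost:3306/shop',\n"
                        + "  'table-name' = 'orders',\n"
                        + "  'filter.handling.policy' = 'NEVER'\n"
                        + ")");
    }
}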
> > 
> > I have removed the contents about the configuration prefix.
> > Please help review it again.
> > 
> > Thanks,
> > Jiabao
> > 
> > 
> >> On Dec 19, 2023 at 19:46, Becket Qin wrote:
> >> 
> >> Hi Jiabao,
> >> 
> >> Thanks for updating the FLIP.
> >> 
> >> One more question regarding the JDBC connector, since it is a connector
> >> shared by multiple databases, what if there is a filter handling policy
> >> that is only applicable to one of the databases, but not the others? In
> >> that case, how would the users specify that policy?
> >> Unlike the example of orc format with 2nd+ level config, JDBC connector
> >> only looks at the URL to decide which driver to use.
> >> 
> >> For example, MySql supports policy FOO while other databases do not. If
> >> users want to use FOO for MySql, what should they do? Will they set
> >> 'mysql.filter.handling.policy' = 'FOO',
> >> which will only be picked up when the MySql driver is used?
> >> Or should they just set 'filter.handling.policy' = 'FOO' and throw
> >> exceptions when other JDBC drivers are used? Personally, I prefer the
> >> latter. If we pick that, do we still need to mention the following?
> >> 
> >> "The prefix is needed when the option is for a 2nd+ level."
> >>> 'connector' = 'filesystem',
> >>> 'format' = 'orc',
> >>> 'orc.filter.handling.policy' = 'NUMERIC_ONLY'
> >>> 
> >>> "In this case, the values of this configuration may be different depending
> >>> on the format option. For example, orc format may have INDEXED_ONLY while
> >>> parquet format may have something else."
> >>> 
> >> 
> >> I found this is somewhat misleading, because the example here is not a
> > part
> >> of the proposal of this FLIP. It is just an example explaining when a
> >> prefix is needed, which seems orthogonal to the proposal in this FLIP.
> >> 
> >> Thanks,
> >> 
> >> Jiangjie (Becket) Qin
> >> 
> >> 
> >> On Tue, Dec 19, 2023 at 10:09 AM Jiabao Sun  >> 
> > .invalid>
> 

Re: [DISCUSS] FLIP-377: Support configuration to disable filter push down for Table/SQL Sources

2024-01-02 Thread Leonard Xu
Thanks Jiabao for driving this.

+1 to start a vote. A minor comment: should we change the FLIP title according
to this thread context as well?

Best,
Leonard



> On Jan 3, 2024 at 2:43 PM, Jiabao Sun  wrote:
> 
> Hi,
> 
> Thank you again for the discussion on this FLIP.
> If there are no further comments, I plan to start a voting thread tomorrow.
> 
> Best,
> Jiabao
> 
> On 2023/12/20 14:09:49 Jiabao Sun wrote:
>> Hi,
>> 
>> Thank you to everyone for the discussion on this FLIP, 
>> especially Becket for providing guidance that made it more reasonable. 
>> 
>> The FLIP document[1] has been updated with the recent discussed content. 
>> Please take a look to double-check it when you have time.
>> 
>> If we can reach a consensus on this, I will open the voting thread in recent 
>> days.
>> 
>> Best,
>> Jiabao
>> 
>> [1] 
>> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=276105768
>> 
>> 
>>> On Dec 20, 2023 at 11:38, Jiabao Sun  wrote:
>>> 
>>> Thanks Becket,
>>> 
>>> The behavior description has been added to the Public Interfaces section.
>>> 
>>> Best,
>>> Jiabao
>>> 
>>> 
 On Dec 20, 2023 at 08:17, Becket Qin wrote:
 
 Hi Jiabao,
 
 Thanks for updating the FLIP.
 Can you add the behavior of the policies that are only applicable to some
 but not all of the databases? This is a part of the intended behavior of
 the proposed configuration. So, we should include that in the FLIP.
 Otherwise, the FLIP looks good to me.
 
 Cheers,
 
 Jiangjie (Becket) Qin
 
 On Tue, Dec 19, 2023 at 11:00 PM Jiabao Sun 
 wrote:
 
> Hi Becket,
> 
> I share the same view as you regarding the prefix for this configuration
> option.
> 
> For the JDBC connector, I prefer setting 'filter.handling.policy' = 'FOO'
> and throwing an exception when the database does not support that specific
> policy.
> 
> Not using a prefix can reduce the learning curve for users and avoid
> introducing a new set of configuration options for every supported JDBC
> database.
> I think the policies we provide can be compatible with most databases that
> follow the JDBC protocol.
> However, there may be cases where certain databases cannot support some
> policies.
> Nevertheless, we can ensure fast failure and allow users to choose a
> suitable policy in such situations.
> 
> I have removed the contents about the configuration prefix.
> Please help review it again.
> 
> Thanks,
> Jiabao
> 
> 
>> On Dec 19, 2023 at 19:46, Becket Qin wrote:
>> 
>> Hi Jiabao,
>> 
>> Thanks for updating the FLIP.
>> 
>> One more question regarding the JDBC connector, since it is a connector
>> shared by multiple databases, what if there is a filter handling policy
>> that is only applicable to one of the databases, but not the others? In
>> that case, how would the users specify that policy?
>> Unlike the example of orc format with 2nd+ level config, JDBC connector
>> only looks at the URL to decide which driver to use.
>> 
>> For example, MySql supports policy FOO while other databases do not. If
>> users want to use FOO for MySql, what should they do? Will they set
>> 'mysql.filter.handling.policy' = 'FOO',
>> which will only be picked up when the MySql driver is used?
>> Or should they just set 'filter.handling.policy' = 'FOO' and throw
>> exceptions when other JDBC drivers are used? Personally, I prefer the
>> latter. If we pick that, do we still need to mention the following?
>> 
>> "The prefix is needed when the option is for a 2nd+ level."
>>> 'connector' = 'filesystem',
>>> 'format' = 'orc',
>>> 'orc.filter.handling.policy' = 'NUMERIC_ONLY'
>>> 
>>> "In this case, the values of this configuration may be different depending
>>> on the format option. For example, orc format may have INDEXED_ONLY while
>>> parquet format may have something else."
>>> 
>> 
>> I found this is somewhat misleading, because the example here is not a
> part
>> of the proposal of this FLIP. It is just an example explaining when a
>> prefix is needed, which seems orthogonal to the proposal in this FLIP.
>> 
>> Thanks,
>> 
>> Jiangjie (Becket) Qin
>> 
>> 
>> On Tue, Dec 19, 2023 at 10:09 AM Jiabao Sun > 
> .invalid>
>> wrote:
>> 
>>> Thanks Becket for the suggestions,
>>> 
>>> Updated.
>>> Please help review it again when you have time.
>>> 
>>> Best,
>>> Jiabao
>>> 
>>> 
 On Dec 19, 2023 at 09:06, Becket Qin wrote:
 
 Hi JIabao,
 
 Thanks for updating the FLIP. It looks better. Some suggestions /
>>> questions:
 
 1. In the motivation section:
 
> *Currently, Flink 

RE: Re: [DISCUSS] FLIP-377: Support configuration to disable filter push down for Table/SQL Sources

2024-01-02 Thread Jiabao Sun
Hi,

Thank you again for the discussion on this FLIP.
If there are no further comments, I plan to start a voting thread tomorrow.

Best,
Jiabao

On 2023/12/20 14:09:49 Jiabao Sun wrote:
> Hi,
> 
> Thank you to everyone for the discussion on this FLIP, 
> especially Becket for providing guidance that made it more reasonable. 
> 
> The FLIP document[1] has been updated with the recent discussed content. 
> Please take a look to double-check it when you have time.
> 
> If we can reach a consensus on this, I will open the voting thread in recent 
> days.
> 
> Best,
> Jiabao
> 
> [1] https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=276105768
> 
> 
> > On Dec 20, 2023 at 11:38, Jiabao Sun  wrote:
> > 
> > Thanks Becket,
> > 
> > The behavior description has been added to the Public Interfaces section.
> > 
> > Best,
> > Jiabao
> > 
> > 
> >> On Dec 20, 2023 at 08:17, Becket Qin  wrote:
> >> 
> >> Hi Jiabao,
> >> 
> >> Thanks for updating the FLIP.
> >> Can you add the behavior of the policies that are only applicable to some
> >> but not all of the databases? This is a part of the intended behavior of
> >> the proposed configuration. So, we should include that in the FLIP.
> >> Otherwise, the FLIP looks good to me.
> >> 
> >> Cheers,
> >> 
> >> Jiangjie (Becket) Qin
> >> 
> >> On Tue, Dec 19, 2023 at 11:00 PM Jiabao Sun 
> >> wrote:
> >> 
> >>> Hi Becket,
> >>> 
> >>> I share the same view as you regarding the prefix for this configuration
> >>> option.
> >>> 
> >>> For the JDBC connector, I prefer setting 'filter.handling.policy' = 'FOO'
> >>> and throwing an exception when the database does not support that specific
> >>> policy.
> >>> 
> >>> Not using a prefix can reduce the learning curve for users and avoid
> >>> introducing a new set of configuration options for every supported JDBC
> >>> database.
> >>> I think the policies we provide can be compatible with most databases that
> >>> follow the JDBC protocol.
> >>> However, there may be cases where certain databases cannot support some
> >>> policies.
> >>> Nevertheless, we can ensure fast failure and allow users to choose a
> >>> suitable policy in such situations.
> >>> 
> >>> I have removed the contents about the configuration prefix.
> >>> Please help review it again.
> >>> 
> >>> Thanks,
> >>> Jiabao
> >>> 
> >>> 
>  On Dec 19, 2023 at 19:46, Becket Qin  wrote:
>  
>  Hi Jiabao,
>  
>  Thanks for updating the FLIP.
>  
>  One more question regarding the JDBC connector, since it is a connector
>  shared by multiple databases, what if there is a filter handling policy
>  that is only applicable to one of the databases, but not the others? In
>  that case, how would the users specify that policy?
>  Unlike the example of orc format with 2nd+ level config, JDBC connector
>  only looks at the URL to decide which driver to use.
>  
>  For example, MySql supports policy FOO while other databases do not. If
>  users want to use FOO for MySql, what should they do? Will they set
>  'mysql.filter.handling.policy' = 'FOO',
>  which will only be picked up when the MySql driver is used?
>  Or should they just set 'filter.handling.policy' = 'FOO' and throw
>  exceptions when other JDBC drivers are used? Personally, I prefer the
>  latter. If we pick that, do we still need to mention the following?
>  
>  "The prefix is needed when the option is for a 2nd+ level."
> > 'connector' = 'filesystem',
> > 'format' = 'orc',
> > 'orc.filter.handling.policy' = 'NUMERIC_ONLY'
> > 
> > "In this case, the values of this configuration may be different depending
> > on the format option. For example, orc format may have INDEXED_ONLY while
> > parquet format may have something else."
> > 
>  
>  I found this is somewhat misleading, because the example here is not a
> >>> part
>  of the proposal of this FLIP. It is just an example explaining when a
>  prefix is needed, which seems orthogonal to the proposal in this FLIP.
>  
>  Thanks,
>  
>  Jiangjie (Becket) Qin
>  
>  
>  On Tue, Dec 19, 2023 at 10:09 AM Jiabao Sun  >>> .invalid>
>  wrote:
>  
> > Thanks Becket for the suggestions,
> > 
> > Updated.
> > Please help review it again when you have time.
> > 
> > Best,
> > Jiabao
> > 
> > 
> >> On Dec 19, 2023 at 09:06, Becket Qin  wrote:
> >> 
> >> Hi JIabao,
> >> 
> >> Thanks for updating the FLIP. It looks better. Some suggestions /
> > questions:
> >> 
> >> 1. In the motivation section:
> >> 
> >>> "Currently, Flink Table/SQL does not expose fine-grained control for users
> >>> to control filter pushdown. However, filter pushdown has some side
> >>> effects, such as additional computational pressure on external
> >>> systems. Moreover, improper queries can lead to issues such as full table
> >>> scans, which in 

[jira] [Created] (FLINK-33970) Add necessary checks for connector document

2024-01-02 Thread Leonard Xu (Jira)
Leonard Xu created FLINK-33970:
--

 Summary: Add necessary checks for connector document
 Key: FLINK-33970
 URL: https://issues.apache.org/jira/browse/FLINK-33970
 Project: Flink
  Issue Type: Improvement
  Components: Documentation
Reporter: Leonard Xu






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [ANNOUNCE] New Apache Flink Committer - Alexander Fedulov

2024-01-02 Thread Samrat Deb
Congratulations Alex


Re: [ANNOUNCE] New Apache Flink Committer - Alexander Fedulov

2024-01-02 Thread 周仁祥
Congratulations to Alex~

Lijie Wang  wrote on Wed, Jan 3, 2024 at 14:05:

> Congratulations Alex !
>
> Best,
> Lijie
>
> Romit Mahanta  wrote on Wed, Jan 3, 2024 at 13:41:
>
> > Happy New Year & congratulations Alex!
> >
> > Best,
> >
> > R
> >
> > On Tue, 2 Jan, 2024, 5:45 pm Maximilian Michels,  wrote:
> >
> > > Happy New Year everyone,
> > >
> > > I'd like to start the year off by announcing Alexander Fedulov as a
> > > new Flink committer.
> > >
> > > Alex has been active in the Flink community since 2019. He has
> > > contributed more than 100 commits to Flink, its Kubernetes operator,
> > > and various connectors [1][2].
> > >
> > > Especially noteworthy are his contributions on deprecating and
> > > migrating the old Source API functions and test harnesses, the
> > > enhancement to flame graphs, the dynamic rescale time computation in
> > > Flink Autoscaling, as well as all the small enhancements Alex has
> > > contributed which make a huge difference.
> > >
> > > Beyond code contributions, Alex has been an active community member
> > > with his activity on the mailing lists [3][4], as well as various
> > > talks and blog posts about Apache Flink [5][6].
> > >
> > > Congratulations Alex! The Flink community is proud to have you.
> > >
> > > Best,
> > > The Flink PMC
> > >
> > > [1]
> > >
> https://github.com/search?type=commits=author%3Aafedulov+org%3Aapache
> > > [2]
> > >
> >
> https://issues.apache.org/jira/browse/FLINK-28229?jql=status%20in%20(Resolved%2C%20Closed)%20AND%20assignee%20in%20(afedulov)%20ORDER%20BY%20resolved%20DESC%2C%20created%20DESC
> > > [3]
> https://lists.apache.org/list?dev@flink.apache.org:lte=100M:Fedulov
> > > [4]
> https://lists.apache.org/list?u...@flink.apache.org:lte=100M:Fedulov
> > > [5]
> > >
> >
> https://flink.apache.org/2020/01/15/advanced-flink-application-patterns-vol.1-case-study-of-a-fraud-detection-system/
> > > [6]
> > >
> >
> https://www.ververica.com/blog/presenting-our-streaming-concepts-introduction-to-flink-video-series
> > >
> >
>


-- 
Best,
renxiang


Re: [ANNOUNCE] New Apache Flink Committer - Alexander Fedulov

2024-01-02 Thread Yuxin Tan
Congratulations, Alex!

Best,
Yuxin


Lijie Wang  wrote on Wed, Jan 3, 2024 at 14:04:

> Congratulations Alex !
>
> Best,
> Lijie
>
> Romit Mahanta  wrote on Wed, Jan 3, 2024 at 13:41:
>
> > Happy New Year & congratulations Alex!
> >
> > Best,
> >
> > R
> >
> > On Tue, 2 Jan, 2024, 5:45 pm Maximilian Michels,  wrote:
> >
> > > Happy New Year everyone,
> > >
> > > I'd like to start the year off by announcing Alexander Fedulov as a
> > > new Flink committer.
> > >
> > > Alex has been active in the Flink community since 2019. He has
> > > contributed more than 100 commits to Flink, its Kubernetes operator,
> > > and various connectors [1][2].
> > >
> > > Especially noteworthy are his contributions on deprecating and
> > > migrating the old Source API functions and test harnesses, the
> > > enhancement to flame graphs, the dynamic rescale time computation in
> > > Flink Autoscaling, as well as all the small enhancements Alex has
> > > contributed which make a huge difference.
> > >
> > > Beyond code contributions, Alex has been an active community member
> > > with his activity on the mailing lists [3][4], as well as various
> > > talks and blog posts about Apache Flink [5][6].
> > >
> > > Congratulations Alex! The Flink community is proud to have you.
> > >
> > > Best,
> > > The Flink PMC
> > >
> > > [1]
> > >
> https://github.com/search?type=commits=author%3Aafedulov+org%3Aapache
> > > [2]
> > >
> >
> https://issues.apache.org/jira/browse/FLINK-28229?jql=status%20in%20(Resolved%2C%20Closed)%20AND%20assignee%20in%20(afedulov)%20ORDER%20BY%20resolved%20DESC%2C%20created%20DESC
> > > [3]
> https://lists.apache.org/list?dev@flink.apache.org:lte=100M:Fedulov
> > > [4]
> https://lists.apache.org/list?u...@flink.apache.org:lte=100M:Fedulov
> > > [5]
> > >
> >
> https://flink.apache.org/2020/01/15/advanced-flink-application-patterns-vol.1-case-study-of-a-fraud-detection-system/
> > > [6]
> > >
> >
> https://www.ververica.com/blog/presenting-our-streaming-concepts-introduction-to-flink-video-series
> > >
> >
>


[DISCUSS] FLIP 411: Chaining-agnostic Operator ID generation for improved state compatibility on parallelism change

2024-01-02 Thread Zhanghao Chen
Dear Flink devs,

I'd like to start a discussion on FLIP 411: Chaining-agnostic Operator ID 
generation for improved state compatibility on parallelism change [1].

Currently, when a user does not explicitly set operator UIDs, the chaining
behavior still affects state compatibility, as the generation of the
operator ID depends on its chained output nodes. For example, a simple
source->sink DAG with source and sink chained together is state incompatible 
with an otherwise identical DAG with source and sink unchained (either because 
the parallelisms of the two ops are changed to be unequal or chaining is 
disabled). This greatly limits the flexibility to perform 
chain-breaking/building for performance tuning.
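For context, a minimal sketch of the workaround available today: assigning
explicit UIDs so that operator IDs no longer depend on auto-generation (and
therefore on chaining). The pipeline itself is only an illustration:

import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class ExplicitUidSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();

        env.fromElements("a", "b", "c")
                .uid("my-source")                  // stable ID regardless of chaining
                .filter(value -> !value.isEmpty())
                .uid("my-filter")
                .print()
                .uid("my-sink");

        env.execute("explicit-uid-sketch");
    }
}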

The dependency on chained output nodes for operator ID generation can be traced
back to Flink 1.2. It is unclear at this point why chained output nodes are
involved in the algorithm, but the following historical background might be
related: prior to Flink 1.3, the Flink runtime took snapshots by the operator
ID of the first vertex in a chain, so it somewhat made sense to include
chained output nodes in the algorithm, as chain-breaking/building was expected
to break state compatibility anyway.

Given that operator-level state recovery within a chain has been supported
since Flink 1.3, I propose introducing a StreamGraphHasherV3 that is agnostic
of the chaining behavior of operators, so that users are free to tune the
parallelism of individual operators without worrying about state
incompatibility. We can make the V3 hasher an optional choice in Flink 1.19,
and make it the default hasher in 2.0 for backwards compatibility.

Looking forward to your suggestions on it, thanks~

[1] 
https://cwiki.apache.org/confluence/display/FLINK/FLIP-411%3A+Chaining-agnostic+Operator+ID+generation+for+improved+state+compatibility+on+parallelism+change

Best,
Zhanghao Chen


Re: [ANNOUNCE] New Apache Flink Committer - Alexander Fedulov

2024-01-02 Thread Lijie Wang
Congratulations Alex !

Best,
Lijie

Romit Mahanta  wrote on Wed, Jan 3, 2024 at 13:41:

> Happy New Year & congratulations Alex!
>
> Best,
>
> R
>
> On Tue, 2 Jan, 2024, 5:45 pm Maximilian Michels,  wrote:
>
> > Happy New Year everyone,
> >
> > I'd like to start the year off by announcing Alexander Fedulov as a
> > new Flink committer.
> >
> > Alex has been active in the Flink community since 2019. He has
> > contributed more than 100 commits to Flink, its Kubernetes operator,
> > and various connectors [1][2].
> >
> > Especially noteworthy are his contributions on deprecating and
> > migrating the old Source API functions and test harnesses, the
> > enhancement to flame graphs, the dynamic rescale time computation in
> > Flink Autoscaling, as well as all the small enhancements Alex has
> > contributed which make a huge difference.
> >
> > Beyond code contributions, Alex has been an active community member
> > with his activity on the mailing lists [3][4], as well as various
> > talks and blog posts about Apache Flink [5][6].
> >
> > Congratulations Alex! The Flink community is proud to have you.
> >
> > Best,
> > The Flink PMC
> >
> > [1]
> > https://github.com/search?type=commits=author%3Aafedulov+org%3Aapache
> > [2]
> >
> https://issues.apache.org/jira/browse/FLINK-28229?jql=status%20in%20(Resolved%2C%20Closed)%20AND%20assignee%20in%20(afedulov)%20ORDER%20BY%20resolved%20DESC%2C%20created%20DESC
> > [3] https://lists.apache.org/list?dev@flink.apache.org:lte=100M:Fedulov
> > [4] https://lists.apache.org/list?u...@flink.apache.org:lte=100M:Fedulov
> > [5]
> >
> https://flink.apache.org/2020/01/15/advanced-flink-application-patterns-vol.1-case-study-of-a-fraud-detection-system/
> > [6]
> >
> https://www.ververica.com/blog/presenting-our-streaming-concepts-introduction-to-flink-video-series
> >
>


Re: [ANNOUNCE] New Apache Flink Committer - Alexander Fedulov

2024-01-02 Thread Romit Mahanta
Happy New Year & congratulations Alex!

Best,

R

On Tue, 2 Jan, 2024, 5:45 pm Maximilian Michels,  wrote:

> Happy New Year everyone,
>
> I'd like to start the year off by announcing Alexander Fedulov as a
> new Flink committer.
>
> Alex has been active in the Flink community since 2019. He has
> contributed more than 100 commits to Flink, its Kubernetes operator,
> and various connectors [1][2].
>
> Especially noteworthy are his contributions on deprecating and
> migrating the old Source API functions and test harnesses, the
> enhancement to flame graphs, the dynamic rescale time computation in
> Flink Autoscaling, as well as all the small enhancements Alex has
> contributed which make a huge difference.
>
> Beyond code contributions, Alex has been an active community member
> with his activity on the mailing lists [3][4], as well as various
> talks and blog posts about Apache Flink [5][6].
>
> Congratulations Alex! The Flink community is proud to have you.
>
> Best,
> The Flink PMC
>
> [1]
> https://github.com/search?type=commits=author%3Aafedulov+org%3Aapache
> [2]
> https://issues.apache.org/jira/browse/FLINK-28229?jql=status%20in%20(Resolved%2C%20Closed)%20AND%20assignee%20in%20(afedulov)%20ORDER%20BY%20resolved%20DESC%2C%20created%20DESC
> [3] https://lists.apache.org/list?dev@flink.apache.org:lte=100M:Fedulov
> [4] https://lists.apache.org/list?u...@flink.apache.org:lte=100M:Fedulov
> [5]
> https://flink.apache.org/2020/01/15/advanced-flink-application-patterns-vol.1-case-study-of-a-fraud-detection-system/
> [6]
> https://www.ververica.com/blog/presenting-our-streaming-concepts-introduction-to-flink-video-series
>


Re: [ANNOUNCE] New Apache Flink Committer - Alexander Fedulov

2024-01-02 Thread Jingsong Li
Congratulations!

Best,
Jingsong

On Wed, Jan 3, 2024 at 10:28 AM Benchao Li  wrote:
>
> Congratulations, Alex!
>
> > Yuepeng Pan  wrote on Wed, Jan 3, 2024 at 10:10:
> >
> > Congrats, Alex!
> >
> > Best,
> > Yuepeng Pan
> > At 2024-01-02 20:15:08, "Maximilian Michels"  wrote:
> > >Happy New Year everyone,
> > >
> > >I'd like to start the year off by announcing Alexander Fedulov as a
> > >new Flink committer.
> > >
> > >Alex has been active in the Flink community since 2019. He has
> > >contributed more than 100 commits to Flink, its Kubernetes operator,
> > >and various connectors [1][2].
> > >
> > >Especially noteworthy are his contributions on deprecating and
> > >migrating the old Source API functions and test harnesses, the
> > >enhancement to flame graphs, the dynamic rescale time computation in
> > >Flink Autoscaling, as well as all the small enhancements Alex has
> > >contributed which make a huge difference.
> > >
> > >Beyond code contributions, Alex has been an active community member
> > >with his activity on the mailing lists [3][4], as well as various
> > >talks and blog posts about Apache Flink [5][6].
> > >
> > >Congratulations Alex! The Flink community is proud to have you.
> > >
> > >Best,
> > >The Flink PMC
> > >
> > >[1] https://github.com/search?type=commits=author%3Aafedulov+org%3Aapache
> > >[2] 
> > >https://issues.apache.org/jira/browse/FLINK-28229?jql=status%20in%20(Resolved%2C%20Closed)%20AND%20assignee%20in%20(afedulov)%20ORDER%20BY%20resolved%20DESC%2C%20created%20DESC
> > >[3] https://lists.apache.org/list?dev@flink.apache.org:lte=100M:Fedulov
> > >[4] https://lists.apache.org/list?u...@flink.apache.org:lte=100M:Fedulov
> > >[5] 
> > >https://flink.apache.org/2020/01/15/advanced-flink-application-patterns-vol.1-case-study-of-a-fraud-detection-system/
> > >[6] 
> > >https://www.ververica.com/blog/presenting-our-streaming-concepts-introduction-to-flink-video-series
>
>
>
> --
>
> Best,
> Benchao Li


[jira] [Created] (FLINK-33969) Implement restore tests for TableSourceScan node

2024-01-02 Thread Bonnie Varghese (Jira)
Bonnie Varghese created FLINK-33969:
---

 Summary: Implement restore tests for TableSourceScan node
 Key: FLINK-33969
 URL: https://issues.apache.org/jira/browse/FLINK-33969
 Project: Flink
  Issue Type: Sub-task
  Components: Table SQL / Planner
Reporter: Bonnie Varghese
Assignee: Bonnie Varghese






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [2.0] Help needed for release 2.0 work items

2024-01-02 Thread Xintong Song
>
> I would like to ask if there is any plan to review and refactor the CLI in
> Flink 2.0.
>

Not that I'm aware of.


I have the impression that there were discussions about not using CLI options
and using only "-Dconfig.key", but I cannot find them now. I personally think
that is a good direction to go, and it should also solve the problem mentioned
in your example. My biggest concern is whether there are contributors with
enough capacity to work on it.


Feel free to pick it up if you'd like to.


Best,

Xintong



On Wed, Jan 3, 2024 at 11:37 AM Zakelly Lan  wrote:

> Hi Xintong,
>
> Thanks for driving this.
>
> I would like to ask if there is any plan to review and refactor the CLI in
> Flink 2.0. I recently found that the CLI commands and parameters are
> confusing in some ways (e.g.
> https://github.com/apache/flink/pull/23253#discussion_r1405707256). It
> would be beneficial to offer a more intuitive and straightforward CLI
> command to enhance usability.
>
>
> Best,
> Zakelly
>
> On Wed, Jan 3, 2024 at 10:45 AM Xintong Song 
> wrote:
>
> > Thanks a lot for offering the help, Rui. The plan sounds good to me. I'll
> > put your name and the milestones into the 2.0 wiki page.
> >
> > Best,
> >
> > Xintong
> >
> >
> >
> > On Wed, Jan 3, 2024 at 10:38 AM Rui Fan <1996fan...@gmail.com> wrote:
> >
> > > Thanks Xintong for promoting the progress of Flink 2.0.
> > >
> > > If no one minds, I'd like to pick this one: Use Java’s Duration instead
> > of
> > > Flink’s Time.
> > > Could I assign FLINK-14068[1] to me?
> > >
> > > My expected progress is:
> > > - Mark org.apache.flink.api.common.time.Time and
> > >  org.apache.flink.streaming.api.windowing.time.Time
> > >  as @Deprecated in 1.19 (Must do in 1.19)
> > > - Refactor all usages of them to Java's Duration (nice to do in 1.19,
> > > must do in 1.20)
> > > - Remove them in 2.0
> > >
> > > Is this plan reasonable?
> > >
> > > [1] https://issues.apache.org/jira/browse/FLINK-14068
> > >
> > > Best,
> > > Rui
> > >
> > > On Wed, Jan 3, 2024 at 9:18 AM Xintong Song 
> > wrote:
> > >
> > >> Hi devs,
> > >>
> > >> The release managers have been tracking the progress of release 2.0
> work
> > >> items. Unfortunately, some of the items are not in good progress, and
> > >> either don't have a contributor or the original contributor no longer
> > has
> > >> capacity to work on them. We have already tried reaching out to some
> > >> developers, but unfortunately don't find many people with capacity.
> > >>
> > >> Therefore, we are looking for developers who want to pick them up.
> > >>
> > >> Helps are needed on:
> > >>
> > >>- Introduce dedicated MetricsScope
> > >>- Rework MetricGroup scope APIs
> > >>- Remove MetricGroup methods accepting an int as a name
> > >>- Remove brackets around variables
> > >>- Drop MetricReporter#open
> > >>- Gauge should only take subclasses of Number, rather than
> > >> everything
> > >>- Add MetricGroup#getLogicalScope
> > >>- Use Java’s Duration instead of Flink’s Time
> > >>- Review and refactor the REST API
> > >>- Properly handle NaN/Infinity in OpenAPI spec
> > >>- Enforce single maxExceptions query parameter
> > >>- Drop Yarn specific get rest endpoints
> > >>- Review and refactor the metrics implementation
> > >>- Attach semantics to Gauges; refactor Counter / Meter to be Gauges
> > >> with
> > >>syntactic sugar on top
> > >>- Restructure
> > >>
> > >>
> > >> Please note that:
> > >>
> > >>- For some of the items, the milestones are already given, and
> there
> > >>might be some actions that need to be performed by Flink 1.19.
> Please
> > >> be
> > >>aware that we are only 3.5 weeks from the 1.19 feature freeze.
> > >>- There are also items which don't have any plans / milestones yet.
> > For
> > >>such items, we may want to quickly look into them to find out if
> > >> there's
> > >>anything that needs to be done in 1.19.
> > >>- See more details on the 2.0 wiki page [1]
> > >>
> > >>
> > >> If these items do not make Flink 1.19, we can discuss later what to do
> > >> with
> > >> them, either postpone release 2.0 or exclude them from this major
> > release.
> > >> But for now, let's first see what we can do by 1.19.
> > >>
> > >> Best,
> > >>
> > >> Xintong
> > >>
> > >>
> > >> [1] https://cwiki.apache.org/confluence/display/FLINK/2.0+Release
> > >>
> > >
> >
>


Re: [2.0] Help needed for release 2.0 work items

2024-01-02 Thread Zakelly Lan
Hi Xintong,

Thanks for driving this.

I would like to ask if there is any plan to review and refactor the CLI in
Flink 2.0. I recently found that the CLI commands and parameters are
confusing in some ways (e.g.
https://github.com/apache/flink/pull/23253#discussion_r1405707256). It
would be beneficial to offer a more intuitive and straightforward CLI
command to enhance usability.


Best,
Zakelly

On Wed, Jan 3, 2024 at 10:45 AM Xintong Song  wrote:

> Thanks a lot for offering the help, Rui. The plan sounds good to me. I'll
> put your name and the milestones into the 2.0 wiki page.
>
> Best,
>
> Xintong
>
>
>
> On Wed, Jan 3, 2024 at 10:38 AM Rui Fan <1996fan...@gmail.com> wrote:
>
> > Thanks Xintong for promoting the progress of Flink 2.0.
> >
> > If no one minds, I'd like to pick this one: Use Java’s Duration instead
> of
> > Flink’s Time.
> > Could I assign FLINK-14068[1] to me?
> >
> > My expected progress is:
> > - Mark org.apache.flink.api.common.time.Time and
> >  org.apache.flink.streaming.api.windowing.time.Time
> >  as @Deprecated in 1.19 (Must do in 1.19)
> > - Refactor all usages of them to Java's Duration (nice to do in 1.19, must
> > do in 1.20)
> > - Remove them in 2.0
> >
> > Is this plan reasonable?
> >
> > [1] https://issues.apache.org/jira/browse/FLINK-14068
> >
> > Best,
> > Rui
> >
> > On Wed, Jan 3, 2024 at 9:18 AM Xintong Song 
> wrote:
> >
> >> Hi devs,
> >>
> >> The release managers have been tracking the progress of release 2.0 work
> >> items. Unfortunately, some of the items are not in good progress, and
> >> either don't have a contributor or the original contributor no longer
> has
> >> capacity to work on them. We have already tried reaching out to some
> >> developers, but unfortunately don't find many people with capacity.
> >>
> >> Therefore, we are looking for developers who want to pick them up.
> >>
> >> Helps are needed on:
> >>
> >>- Introduce dedicated MetricsScope
> >>- Rework MetricGroup scope APIs
> >>- Remove MetricGroup methods accepting an int as a name
> >>- Remove brackets around variables
> >>- Drop MetricReporter#open
> >>- Gauge should only take subclasses of Number, rather than
> >> everything
> >>- Add MetricGroup#getLogicalScope
> >>- Use Java’s Duration instead of Flink’s Time
> >>- Review and refactor the REST API
> >>- Properly handle NaN/Infinity in OpenAPI spec
> >>- Enforce single maxExceptions query parameter
> >>- Drop Yarn specific get rest endpoints
> >>- Review and refactor the metrics implementation
> >>- Attach semantics to Gauges; refactor Counter / Meter to be Gauges
> >> with
> >>syntactic sugar on top
> >>- Restructure
> >>
> >>
> >> Please note that:
> >>
> >>- For some of the items, the milestones are already given, and there
> >>might be some actions that need to be performed by Flink 1.19. Please
> >> be
> >>aware that we are only 3.5 weeks from the 1.19 feature freeze.
> >>- There are also items which don't have any plans / milestones yet.
> For
> >>such items, we may want to quickly look into them to find out if
> >> there's
> >>anything that needs to be done in 1.19.
> >>- See more details on the 2.0 wiki page [1]
> >>
> >>
> >> If these items do not make Flink 1.19, we can discuss later what to do
> >> with
> >> them, either postpone release 2.0 or exclude them from this major
> release.
> >> But for now, let's first see what we can do by 1.19.
> >>
> >> Best,
> >>
> >> Xintong
> >>
> >>
> >> [1] https://cwiki.apache.org/confluence/display/FLINK/2.0+Release
> >>
> >
>


Re: [DISCUSS] FLIP-406: Reorganize State & Checkpointing & Recovery Configuration

2024-01-02 Thread Rui Fan
Thanks for the feedback!

Using `execution.checkpointing.incremental.enabled`
and enabling it by default sounds good to me.

Best,
Rui

On Wed, Jan 3, 2024 at 11:10 AM Zakelly Lan  wrote:

> Hi Rui,
>
> Thanks for your comments!
>
> IMO, given that the state backend can be pluggably loaded (as you can
> specify a state backend factory), I prefer not providing state-backend-specific
> options in the framework.
> 
> Secondly, the incremental checkpoint is actually a file-sharing strategy
> across checkpoints, which means the state backend *could* reuse files from
> a previous checkpoint but is not *required* to do so. When the state backend
> cannot reuse the files, it is reasonable to fall back to a full checkpoint.
> 
> Thus, I suggest we make it `execution.checkpointing.incremental` and enable
> it by default. State backends that do not support this would perform full
> checkpoints and print a warning to inform users. Users do not need to
> pay special attention to different options to control this across different
> state backends. This is more user-friendly in my opinion. WDYT?
>
> On Tue, Jan 2, 2024 at 10:49 AM Rui Fan <1996fan...@gmail.com> wrote:
>
> > Hi Zakelly,
> >
> > I'm not sure whether we could add the state backend type in the
> > new key name of state.backend.incremental. It means we use
> > `execution.checkpointing.rocksdb-incremental` or
> > `execution.checkpointing.rocksdb-incremental.enabled`.
> >
> > So far, state.backend.incremental only works for rocksdb state backend.
> > And this feature or optimization is very valuable and huge for large
> > state flink jobs. I believe it's enabled for most production flink jobs
> > with large rocksdb state.
> >
> > If this option isn't generic for all state backend types, I guess we
> > can enable `execution.checkpointing.rocksdb-incremental.enabled`
> > by default in Flink 2.0.
> >
> > But if it works for all state backends, it's hard to enable it by
> default.
> > Enabling great and valuable features or improvements are useful
> > for users, especially a lot of new flink users. Out-of-the-box options
> > are good for users.
> >
> > WDYT?
> >
> > Best,
> > Rui
> >
> > On Fri, Dec 29, 2023 at 1:45 PM Zakelly Lan 
> wrote:
> >
> > > Hi everyone,
> > >
> > > Thanks all for your comments!
> > >
> > > As many of you have questions about the names for boolean options, I
> > > suggest we make a naming rule for them. For now I could think of three
> > > options:
> > >
> > > Option 1: Use enumeration options if possible. But this may cause some
> > name
> > > collisions or confusion as we discussed and we should unify the
> statement
> > > everywhere.
> > > Option 2: Use boolean options and add 'enabled' as the suffix.
> > > Option 3: Use boolean options and ONLY add 'enabled' when there are
> more
> > > detailed configurations under the same prefix, to prevent one name from
> > > serving as a prefix to another.
> > >
> > > I am slightly inclined to Option 3, since it is more in line with
> current
> > > practice and friendly for existing users. Also, it reduces the length of
> > > configuration names as much as possible. I really want to hear your
> > > opinions.
> > >
> > >
> > > @Xuannan
> > >
> > > I agree with your comments 1 and 3.
> > >
> > > For 2, If we decide to change the name, maybe
> > > `execution.checkpointing.parallel-cleaner` is better? And as for
> whether
> > to
> > > add 'enabled' I suggest we discuss the rule above. WDYT?
> > > Thanks!
> > >
> > >
> > > Best,
> > > Zakelly
> > >
> > > On Fri, Dec 29, 2023 at 12:02 PM Xuannan Su 
> > wrote:
> > >
> > > > Hi Zakelly,
> > > >
> > > > Thanks for driving this! The organization of the configuration option
> > > > in the FLIP looks much cleaner and easier to understand. +1 to the
> > > > FLIP.
> > > >
> > > > Just some questions from me.
> > > >
> > > > 1. I think the change to the ConfigOptions should be put in the
> > > > `Public Interface` section, instead of `Proposed Changed`, as those
> > > > configuration options are public interface.
> > > >
> > > > 2. The key `state.checkpoint.cleaner.parallel-mode` seems confusing.
> > > > It feels like it is used to choose different modes. In fact, it is a
> > > > boolean flag to indicate whether to enable parallel clean. How about
> > > > making it `state.checkpoint.cleaner.parallel-mode.enabled`?
> > > >
> > > > 3. The `execution.checkpointing.write-buffer` may better be
> > > > `execution.checkpointing.write-buffer-size` so that we know it is
> > > > configuring the size of the buffer.
> > > >
> > > > Best,
> > > > Xuannan
> > > >
> > > >
> > > > On Wed, Dec 27, 2023 at 7:17 PM Yanfei Lei 
> > wrote:
> > > > >
> > > > > Hi Zakelly,
> > > > >
> > > > > > Considering the name occupation, how about naming it as
> > > > `execution.checkpointing.type`?
> > > > >
> > > > > `Checkpoint Type`[1,2] is used to describe aligned/unaligned
> > > > > checkpoint, I am inclined to make a choice between
> > > > > `execution.checkpointing.incremental` and
> > > > > 

Re: [DISCUSS] FLIP-406: Reorganize State & Checkpointing & Recovery Configuration

2024-01-02 Thread Zakelly Lan
Hi Rui,

Thanks for your comments!

IMO, given that the state backend can be pluggably loaded (as you can
specify a state backend factory), I prefer not providing state-backend-specific
options in the framework.

Secondly, the incremental checkpoint is actually a file-sharing strategy
across checkpoints, which means the state backend *could* reuse files from
a previous checkpoint but is not *required* to do so. When the state backend
cannot reuse the files, it is reasonable to fall back to a full checkpoint.

Thus, I suggest we make it `execution.checkpointing.incremental` and enable
it by default. State backends that do not support this would perform full
checkpoints and print a warning to inform users. Users do not need to
pay special attention to different options to control this across different
state backends. This is more user-friendly in my opinion. WDYT?
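Purely as an illustration of the naming under discussion, setting the key
programmatically might look like this ('execution.checkpointing.incremental' is
the name proposed in this thread, not an existing option;
'state.backend.incremental' is today's RocksDB-backed key):

import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class IncrementalCheckpointConfigSketch {
    public static void main(String[] args) {
        Configuration conf = new Configuration();

        // Today: the incremental flag only takes effect for the RocksDB backend.
        conf.setString("state.backend.incremental", "true");

        // Proposed in this thread: a backend-agnostic key, enabled by default;
        // backends that cannot share files would fall back to full checkpoints.
        conf.setString("execution.checkpointing.incremental", "true");

        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment(conf);
    }
}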

On Tue, Jan 2, 2024 at 10:49 AM Rui Fan <1996fan...@gmail.com> wrote:

> Hi Zakelly,
>
> I'm not sure whether we could add the state backend type in the
> new key name of state.backend.incremental. It means we use
> `execution.checkpointing.rocksdb-incremental` or
> `execution.checkpointing.rocksdb-incremental.enabled`.
>
> So far, state.backend.incremental only works for rocksdb state backend.
> And this feature or optimization is very valuable and huge for large
> state flink jobs. I believe it's enabled for most production flink jobs
> with large rocksdb state.
>
> If this option isn't generic for all state backend types, I guess we
> can enable `execution.checkpointing.rocksdb-incremental.enabled`
> by default in Flink 2.0.
>
> But if it works for all state backends, it's hard to enable it by default.
> Enabling great and valuable features or improvements is useful
> for users, especially a lot of new flink users. Out-of-the-box options
> are good for users.
>
> WDYT?
>
> Best,
> Rui
>
> On Fri, Dec 29, 2023 at 1:45 PM Zakelly Lan  wrote:
>
> > Hi everyone,
> >
> > Thanks all for your comments!
> >
> > As many of you have questions about the names for boolean options, I
> > suggest we make a naming rule for them. For now I could think of three
> > options:
> >
> > Option 1: Use enumeration options if possible. But this may cause some
> name
> > collisions or confusion as we discussed and we should unify the statement
> > everywhere.
> > Option 2: Use boolean options and add 'enabled' as the suffix.
> > Option 3: Use boolean options and ONLY add 'enabled' when there are more
> > detailed configurations under the same prefix, to prevent one name from
> > serving as a prefix to another.
> >
> > I am slightly inclined to Option 3, since it is more in line with current
> > practice and friendly for existing users. Also, it reduces the length of
> > configuration names as much as possible. I really want to hear your
> > opinions.
> >
> >
> > @Xuannan
> >
> > I agree with your comments 1 and 3.
> >
> > For 2, If we decide to change the name, maybe
> > `execution.checkpointing.parallel-cleaner` is better? And as for whether
> to
> > add 'enabled' I suggest we discuss the rule above. WDYT?
> > Thanks!
> >
> >
> > Best,
> > Zakelly
> >
> > On Fri, Dec 29, 2023 at 12:02 PM Xuannan Su 
> wrote:
> >
> > > Hi Zakelly,
> > >
> > > Thanks for driving this! The organization of the configuration option
> > > in the FLIP looks much cleaner and easier to understand. +1 to the
> > > FLIP.
> > >
> > > Just some questions from me.
> > >
> > > 1. I think the change to the ConfigOptions should be put in the
> > > `Public Interface` section, instead of `Proposed Changed`, as those
> > > configuration options are public interface.
> > >
> > > 2. The key `state.checkpoint.cleaner.parallel-mode` seems confusing.
> > > It feels like it is used to choose different modes. In fact, it is a
> > > boolean flag to indicate whether to enable parallel clean. How about
> > > making it `state.checkpoint.cleaner.parallel-mode.enabled`?
> > >
> > > 3. The `execution.checkpointing.write-buffer` may better be
> > > `execution.checkpointing.write-buffer-size` so that we know it is
> > > configuring the size of the buffer.
> > >
> > > Best,
> > > Xuannan
> > >
> > >
> > > On Wed, Dec 27, 2023 at 7:17 PM Yanfei Lei 
> wrote:
> > > >
> > > > Hi Zakelly,
> > > >
> > > > > Considering the name occupation, how about naming it as
> > > `execution.checkpointing.type`?
> > > >
> > > > `Checkpoint Type`[1,2] is used to describe aligned/unaligned
> > > > checkpoint, I am inclined to make a choice between
> > > > `execution.checkpointing.incremental` and
> > > > `execution.checkpointing.incremental.enabled`.
> > > >
> > > >
> > > > [1]
> > >
> >
> https://nightlies.apache.org/flink/flink-docs-release-1.18/docs/ops/monitoring/checkpoint_monitoring/
> > > > [2]
> > >
> >
> https://github.com/apache/flink/blob/master/flink-runtime-web/web-dashboard/src/app/pages/job/checkpoints/detail/job-checkpoints-detail.component.html#L27
> > > >
> > > > --
> > > > Best,
> > > > Yanfei
> > > >
> > > > Zakelly Lan  

[jira] [Created] (FLINK-33968) Compute the number of subpartitions when initializing execution job vertices

2024-01-02 Thread Lijie Wang (Jira)
Lijie Wang created FLINK-33968:
--

 Summary: Compute the number of subpartitions when initializing 
execution job vertices
 Key: FLINK-33968
 URL: https://issues.apache.org/jira/browse/FLINK-33968
 Project: Flink
  Issue Type: Sub-task
  Components: Runtime / Coordination
Reporter: Lijie Wang
Assignee: Lijie Wang


Currently, when using dynamic graphs, the subpartition-num of a task is lazily
calculated at the task deployment moment, which may lead to some
uncertainties in job recovery scenarios.

Before the JM crashes, when deploying upstream tasks, the parallelism of the
downstream vertex may be unknown, so the subpartition-num will be the max
parallelism of the downstream job vertex. However, after the JM restarts, when
deploying upstream tasks, the parallelism of the downstream job vertex may be
known (it was calculated before the JM crashed and recovered after the JM
restarted), so the subpartition-num will be the actual parallelism of the
downstream job vertex.

The difference in the calculated subpartition-num means that the partitions
generated before the JM crashes cannot be reused after the JM restarts.

We will solve this problem by advancing the calculation of the subpartition-num
to the moment of initializing execution job vertices (in the constructor of
IntermediateResultPartition).



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [2.0] Help needed for release 2.0 work items

2024-01-02 Thread Xintong Song
Thanks a lot for offering the help, Rui. The plan sounds good to me. I'll
put your name and the milestones into the 2.0 wiki page.

Best,

Xintong



On Wed, Jan 3, 2024 at 10:38 AM Rui Fan <1996fan...@gmail.com> wrote:

> Thanks Xintong for promoting the progress of Flink 2.0.
>
> If no one minds, I'd like to pick this one: Use Java’s Duration instead of
> Flink’s Time.
> Could I assign FLINK-14068[1] to me?
>
> My expected progress is:
> - Mark org.apache.flink.api.common.time.Time and
>  org.apache.flink.streaming.api.windowing.time.Time
>  as @Deprecated in 1.19 (Must do in 1.19)
> - Refactor all usages of them to Java's Duration (nice to do in 1.19, must do
> in 1.20)
> - Remove them in 2.0
>
> Is this plan reasonable?
>
> [1] https://issues.apache.org/jira/browse/FLINK-14068
>
> Best,
> Rui
>
> On Wed, Jan 3, 2024 at 9:18 AM Xintong Song  wrote:
>
>> Hi devs,
>>
>> The release managers have been tracking the progress of release 2.0 work
>> items. Unfortunately, some of the items are not in good progress, and
>> either don't have a contributor or the original contributor no longer has
>> capacity to work on them. We have already tried reaching out to some
>> developers, but unfortunately don't find many people with capacity.
>>
>> Therefore, we are looking for developers who want to pick them up.
>>
>> Helps are needed on:
>>
>>- Introduce dedicated MetricsScope
>>- Rework MetricGroup scope APIs
>>- Remove MetricGroup methods accepting an int as a name
>>- Remove brackets around variables
>>- Drop MetricReporter#open
>>- Gauge should only take subclasses of Number, rather than
>> everything
>>- Add MetricGroup#getLogicalScope
>>- Use Java’s Duration instead of Flink’s Time
>>- Review and refactor the REST API
>>- Properly handle NaN/Infinity in OpenAPI spec
>>- Enforce single maxExceptions query parameter
>>- Drop Yarn specific get rest endpoints
>>- Review and refactor the metrics implementation
>>- Attach semantics to Gauges; refactor Counter / Meter to be Gauges
>> with
>>syntactic sugar on top
>>- Restructure
>>
>>
>> Please note that:
>>
>>- For some of the items, the milestones are already given, and there
>>might be some actions that need to be performed by Flink 1.19. Please
>> be
>>aware that we are only 3.5 weeks from the 1.19 feature freeze.
>>- There are also items which don't have any plans / milestones yet. For
>>such items, we may want to quickly look into them to find out if
>> there's
>>anything that needs to be done in 1.19.
>>- See more details on the 2.0 wiki page [1]
>>
>>
>> If these items do not make Flink 1.19, we can discuss later what to do
>> with
>> them, either postpone release 2.0 or exclude them from this major release.
>> But for now, let's first see what we can do by 1.19.
>>
>> Best,
>>
>> Xintong
>>
>>
>> [1] https://cwiki.apache.org/confluence/display/FLINK/2.0+Release
>>
>


Re: [2.0] Help needed for release 2.0 work items

2024-01-02 Thread Rui Fan
By the way, some of the stage-2 refactors need to be done in Flink 1.19,
because some Public or PublicEvolving classes are using the Time class.

For example, org.apache.flink.api.common.state.StateTtlConfig#newBuilder
is using org.apache.flink.api.common.time.Time,
and StateTtlConfig is a PublicEvolving class.

I can start sorting all usages out later.
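
As a rough illustration, a Duration-based overload could sit next to the deprecated
Time-based one during the migration window. The snippet below is a simplified,
self-contained sketch; the builder and the LegacyTime stand-in are not the real
Flink classes, they only mirror the shape of StateTtlConfig#newBuilder:

import java.time.Duration;

final class TtlBuilderSketch {

    static Builder newBuilder(Duration ttl) {      // new Duration-based entry point
        return new Builder(ttl);
    }

    @Deprecated // marked deprecated in 1.19, removed in 2.0
    static Builder newBuilder(LegacyTime ttl) {
        return newBuilder(Duration.ofMillis(ttl.toMilliseconds()));
    }

    /** Stand-in for org.apache.flink.api.common.time.Time. */
    static final class LegacyTime {
        private final long millis;
        LegacyTime(long millis) { this.millis = millis; }
        long toMilliseconds() { return millis; }
    }

    static final class Builder {
        private final Duration ttl;
        Builder(Duration ttl) { this.ttl = ttl; }
        Duration ttl() { return ttl; }
    }

    public static void main(String[] args) {
        System.out.println(newBuilder(Duration.ofHours(1)).ttl());        // PT1H
        System.out.println(newBuilder(new LegacyTime(3_600_000L)).ttl()); // PT1H
    }
}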

Best,
Rui

On Wed, Jan 3, 2024 at 10:37 AM Rui Fan <1996fan...@gmail.com> wrote:

> Thanks Xintong for promoting the progress of Flink 2.0.
>
> If no one minds, I'd like to pick this one: Use Java’s Duration instead of
> Flink’s Time.
> Could I assign FLINK-14068[1] to me?
>
> My expected progress is:
> - Mark org.apache.flink.api.common.time.Time and
>  org.apache.flink.streaming.api.windowing.time.Time
>  as @Deprecated in 1.19 (Must do in 1.19)
> - Refactor all usages of them to Java's Duration (nice to do in 1.19, must do
> in 1.20)
> - Remove them in 2.0
>
> Is this plan reasonable?
>
> [1] https://issues.apache.org/jira/browse/FLINK-14068
>
> Best,
> Rui
>
> On Wed, Jan 3, 2024 at 9:18 AM Xintong Song  wrote:
>
>> Hi devs,
>>
>> The release managers have been tracking the progress of release 2.0 work
>> items. Unfortunately, some of the items are not making good progress, and
>> either don't have a contributor or the original contributor no longer has
>> capacity to work on them. We have already tried reaching out to some
>> developers, but unfortunately haven't found many people with capacity.
>>
>> Therefore, we are looking for developers who want to pick them up.
>>
>> Help is needed on:
>>
>>- Introduce dedicated MetricsScope
>>- Rework MetricGroup scope APIs
>>- Remove MetricGroup methods accepting an int as a name
>>- Remove brackets around variables
>>- Drop MetricReporter#open
>>- Gauge should only take subclasses of Number, rather than
>> everything
>>- Add MetricGroup#getLogicalScope
>>- Use Java’s Duration instead of Flink’s Time
>>- Review and refactor the REST API
>>- Properly handle NaN/Infinity in OpenAPI spec
>>- Enforce single maxExceptions query parameter
>>- Drop Yarn specific get rest endpoints
>>- Review and refactor the metrics implementation
>>- Attach semantics to Gauges; refactor Counter / Meter to be Gauges
>> with
>>syntactic sugar on top
>>- Restructure
>>
>>
>> Please note that:
>>
>>- For some of the items, the milestones are already given, and there
>>might be some actions that need to be performed by Flink 1.19. Please
>> be
>>aware that we are only 3.5 weeks from the 1.19 feature freeze.
>>- There are also items which don't have any plans / milestones yet. For
>>such items, we may want to quickly look into them to find out if
>> there's
>>anything that needs to be done in 1.19.
>>- See more details on the 2.0 wiki page [1]
>>
>>
>> If these items do not make Flink 1.19, we can discuss later what to do
>> with
>> them, either postpone release 2.0 or exclude them from this major release.
>> But for now, let's first see what we can do by 1.19.
>>
>> Best,
>>
>> Xintong
>>
>>
>> [1] https://cwiki.apache.org/confluence/display/FLINK/2.0+Release
>>
>


Re: [2.0] Help needed for release 2.0 work items

2024-01-02 Thread Rui Fan
Thanks Xintong for promoting the progress of Flink 2.0.

If no one minds, I'd like to pick this one: Use Java’s Duration instead of
Flink’s Time.
Could I assign FLINK-14068[1] to me?

My expected progress is:
- Mark org.apache.flink.api.common.time.Time and
 org.apache.flink.streaming.api.windowing.time.Time
 as @Deprecated in 1.19 (Must do in 1.19)
- Refactor all usages of them to Java's Duration (nice to do in 1.19, must do
in 1.20)
- Remove them in 2.0

Is this plan reasonable?

[1] https://issues.apache.org/jira/browse/FLINK-14068

Best,
Rui

On Wed, Jan 3, 2024 at 9:18 AM Xintong Song  wrote:

> Hi devs,
>
> The release managers have been tracking the progress of release 2.0 work
> items. Unfortunately, some of the items are not making good progress, and
> either don't have a contributor or the original contributor no longer has
> capacity to work on them. We have already tried reaching out to some
> developers, but unfortunately haven't found many people with capacity.
>
> Therefore, we are looking for developers who want to pick them up.
>
> Help is needed on:
>
>- Introduce dedicated MetricsScope
>- Rework MetricGroup scope APIs
>- Remove MetricGroup methods accepting an int as a name
>- Remove brackets around variables
>- Drop MetricReporter#open
>- Gauge should only take subclasses of Number, rather than everything
>- Add MetricGroup#getLogicalScope
>- Use Java’s Duration instead of Flink’s Time
>- Review and refactor the REST API
>- Properly handle NaN/Infinity in OpenAPI spec
>- Enforce single maxExceptions query parameter
>- Drop Yarn specific get rest endpoints
>- Review and refactor the metrics implementation
>- Attach semantics to Gauges; refactor Counter / Meter to be Gauges with
>syntactic sugar on top
>- Restructure
>
>
> Please note that:
>
>- For some of the items, the milestones are already given, and there
>might be some actions that need to be performed by Flink 1.19. Please be
>aware that we are only 3.5 weeks from the 1.19 feature freeze.
>- There are also items which don't have any plans / milestones yet. For
>such items, we may want to quickly look into them to find out if there's
>anything that needs to be done in 1.19.
>- See more details on the 2.0 wiki page [1]
>
>
> If these items do not make Flink 1.19, we can discuss later what to do with
> them, either postpone release 2.0 or exclude them from this major release.
> But for now, let's first see what we can do by 1.19.
>
> Best,
>
> Xintong
>
>
> [1] https://cwiki.apache.org/confluence/display/FLINK/2.0+Release
>


Re: [ANNOUNCE] New Apache Flink Committer - Alexander Fedulov

2024-01-02 Thread Benchao Li
Congratulations, Alex!

Yuepeng Pan  wrote on Wed, Jan 3, 2024 at 10:10:
>
> Congrats, Alex!
>
> Best,
> Yuepeng Pan
> At 2024-01-02 20:15:08, "Maximilian Michels"  wrote:
> >Happy New Year everyone,
> >
> >I'd like to start the year off by announcing Alexander Fedulov as a
> >new Flink committer.
> >
> >Alex has been active in the Flink community since 2019. He has
> >contributed more than 100 commits to Flink, its Kubernetes operator,
> >and various connectors [1][2].
> >
> >Especially noteworthy are his contributions on deprecating and
> >migrating the old Source API functions and test harnesses, the
> >enhancement to flame graphs, the dynamic rescale time computation in
> >Flink Autoscaling, as well as all the small enhancements Alex has
> >contributed which make a huge difference.
> >
> >Beyond code contributions, Alex has been an active community member
> >with his activity on the mailing lists [3][4], as well as various
> >talks and blog posts about Apache Flink [5][6].
> >
> >Congratulations Alex! The Flink community is proud to have you.
> >
> >Best,
> >The Flink PMC
> >
> >[1] https://github.com/search?type=commits=author%3Aafedulov+org%3Aapache
> >[2] 
> >https://issues.apache.org/jira/browse/FLINK-28229?jql=status%20in%20(Resolved%2C%20Closed)%20AND%20assignee%20in%20(afedulov)%20ORDER%20BY%20resolved%20DESC%2C%20created%20DESC
> >[3] https://lists.apache.org/list?dev@flink.apache.org:lte=100M:Fedulov
> >[4] https://lists.apache.org/list?u...@flink.apache.org:lte=100M:Fedulov
> >[5] 
> >https://flink.apache.org/2020/01/15/advanced-flink-application-patterns-vol.1-case-study-of-a-fraud-detection-system/
> >[6] 
> >https://www.ververica.com/blog/presenting-our-streaming-concepts-introduction-to-flink-video-series



-- 

Best,
Benchao Li


Re:[ANNOUNCE] New Apache Flink Committer - Alexander Fedulov

2024-01-02 Thread Yuepeng Pan
Congrats, Alex!

Best,
Yuepeng Pan
At 2024-01-02 20:15:08, "Maximilian Michels"  wrote:
>Happy New Year everyone,
>
>I'd like to start the year off by announcing Alexander Fedulov as a
>new Flink committer.
>
>Alex has been active in the Flink community since 2019. He has
>contributed more than 100 commits to Flink, its Kubernetes operator,
>and various connectors [1][2].
>
>Especially noteworthy are his contributions on deprecating and
>migrating the old Source API functions and test harnesses, the
>enhancement to flame graphs, the dynamic rescale time computation in
>Flink Autoscaling, as well as all the small enhancements Alex has
>contributed which make a huge difference.
>
>Beyond code contributions, Alex has been an active community member
>with his activity on the mailing lists [3][4], as well as various
>talks and blog posts about Apache Flink [5][6].
>
>Congratulations Alex! The Flink community is proud to have you.
>
>Best,
>The Flink PMC
>
>[1] https://github.com/search?type=commits=author%3Aafedulov+org%3Aapache
>[2] 
>https://issues.apache.org/jira/browse/FLINK-28229?jql=status%20in%20(Resolved%2C%20Closed)%20AND%20assignee%20in%20(afedulov)%20ORDER%20BY%20resolved%20DESC%2C%20created%20DESC
>[3] https://lists.apache.org/list?dev@flink.apache.org:lte=100M:Fedulov
>[4] https://lists.apache.org/list?u...@flink.apache.org:lte=100M:Fedulov
>[5] 
>https://flink.apache.org/2020/01/15/advanced-flink-application-patterns-vol.1-case-study-of-a-fraud-detection-system/
>[6] 
>https://www.ververica.com/blog/presenting-our-streaming-concepts-introduction-to-flink-video-series


Re: [ANNOUNCE] New Apache Flink Committer - Alexander Fedulov

2024-01-02 Thread Xintong Song
Congratulations~!

Best,

Xintong



On Wed, Jan 3, 2024 at 9:36 AM Leonard Xu  wrote:

> Congrats, Alex!
>
> Best,
> Leonard
>
> > On Jan 3, 2024, at 4:11 AM, Tang, Zhiyan (udx2na)  wrote:
> >
> > big congrats to Alex and Happy New Year everyone!
> >
> > Best
> > Tony
> >
> > Get Outlook for iOS
> > 
> > From: Austin Cawley-Edwards 
> > Sent: Tuesday, January 2, 2024 2:07:49 PM
> > To: dev@flink.apache.org 
> > Cc: Alexander Fedulov 
> > Subject: Re: [ANNOUNCE] New Apache Flink Committer - Alexander Fedulov
> >
> > Congrats Alex!!
> >
> > On Tue, Jan 2, 2024 at 10:12 Feng Jin  wrote:
> >
> >> Congratulations, Alex!
> >>
> >> Best,
> >> Feng
> >>
> >> On Tue, Jan 2, 2024 at 11:04 PM Chen Yu  wrote:
> >>
> >>> Congratulations, Alex!
> >>>
> >>> Best,
> >>> Yu Chen
> >>>
> >>> 
> >>> From: Zhanghao Chen 
> >>> Sent: January 2, 2024 22:44
> >>> To: dev
> >>> Cc: Alexander Fedulov
> >>> Subject: Re: [ANNOUNCE] New Apache Flink Committer - Alexander Fedulov
> >>>
> >>> Congrats, Alex!
> >>>
> >>> Best,
> >>> Zhanghao Chen
> >>> 
> >>> From: Maximilian Michels 
> >>> Sent: Tuesday, January 2, 2024 20:15
> >>> To: dev 
> >>> Cc: Alexander Fedulov 
> >>> Subject: [ANNOUNCE] New Apache Flink Committer - Alexander Fedulov
> >>>
> >>> Happy New Year everyone,
> >>>
> >>> I'd like to start the year off by announcing Alexander Fedulov as a
> >>> new Flink committer.
> >>>
> >>> Alex has been active in the Flink community since 2019. He has
> >>> contributed more than 100 commits to Flink, its Kubernetes operator,
> >>> and various connectors [1][2].
> >>>
> >>> Especially noteworthy are his contributions on deprecating and
> >>> migrating the old Source API functions and test harnesses, the
> >>> enhancement to flame graphs, the dynamic rescale time computation in
> >>> Flink Autoscaling, as well as all the small enhancements Alex has
> >>> contributed which make a huge difference.
> >>>
> >>> Beyond code contributions, Alex has been an active community member
> >>> with his activity on the mailing lists [3][4], as well as various
> >>> talks and blog posts about Apache Flink [5][6].
> >>>
> >>> Congratulations Alex! The Flink community is proud to have you.
> >>>
> >>> Best,
> >>> The Flink PMC
> >>>
> >>> [1]
> >>>
> https://github.com/search?type=commits=author%3Aafedulov+org%3Aapache
> >>> [2]
> >>>
> >>
> https://issues.apache.org/jira/browse/FLINK-28229?jql=status%20in%20(Resolved%2C%20Closed)%20AND%20assignee%20in%20(afedulov)%20ORDER%20BY%20resolved%20DESC%2C%20created%20DESC
> >>> [3]
> https://lists.apache.org/list?dev@flink.apache.org:lte=100M:Fedulov
> >>> [4]
> https://lists.apache.org/list?u...@flink.apache.org:lte=100M:Fedulov
> >>> [5]
> >>>
> >>
> https://flink.apache.org/2020/01/15/advanced-flink-application-patterns-vol.1-case-study-of-a-fraud-detection-system/
> >>> [6]
> >>>
> >>
> https://www.ververica.com/blog/presenting-our-streaming-concepts-introduction-to-flink-video-series
> >>>
> >>
>
>


Re: [ANNOUNCE] New Apache Flink Committer - Alexander Fedulov

2024-01-02 Thread Leonard Xu
Congrats, Alex!

Best,
Leonard

> On Jan 3, 2024, at 4:11 AM, Tang, Zhiyan (udx2na)  wrote:
> 
> big congrats to Alex and Happy New Year everyone!
> 
> Best
> Tony
> 
> Get Outlook for iOS
> 
> From: Austin Cawley-Edwards 
> Sent: Tuesday, January 2, 2024 2:07:49 PM
> To: dev@flink.apache.org 
> Cc: Alexander Fedulov 
> Subject: Re: [ANNOUNCE] New Apache Flink Committer - Alexander Fedulov
> 
> Congrats Alex!!
> 
> On Tue, Jan 2, 2024 at 10:12 Feng Jin  wrote:
> 
>> Congratulations, Alex!
>> 
>> Best,
>> Feng
>> 
>> On Tue, Jan 2, 2024 at 11:04 PM Chen Yu  wrote:
>> 
>>> Congratulations, Alex!
>>> 
>>> Best,
>>> Yu Chen
>>> 
>>> 
>>> From: Zhanghao Chen 
>>> Sent: January 2, 2024 22:44
>>> To: dev
>>> Cc: Alexander Fedulov
>>> Subject: Re: [ANNOUNCE] New Apache Flink Committer - Alexander Fedulov
>>> 
>>> Congrats, Alex!
>>> 
>>> Best,
>>> Zhanghao Chen
>>> 
>>> From: Maximilian Michels 
>>> Sent: Tuesday, January 2, 2024 20:15
>>> To: dev 
>>> Cc: Alexander Fedulov 
>>> Subject: [ANNOUNCE] New Apache Flink Committer - Alexander Fedulov
>>> 
>>> Happy New Year everyone,
>>> 
>>> I'd like to start the year off by announcing Alexander Fedulov as a
>>> new Flink committer.
>>> 
>>> Alex has been active in the Flink community since 2019. He has
>>> contributed more than 100 commits to Flink, its Kubernetes operator,
>>> and various connectors [1][2].
>>> 
>>> Especially noteworthy are his contributions on deprecating and
>>> migrating the old Source API functions and test harnesses, the
>>> enhancement to flame graphs, the dynamic rescale time computation in
>>> Flink Autoscaling, as well as all the small enhancements Alex has
>>> contributed which make a huge difference.
>>> 
>>> Beyond code contributions, Alex has been an active community member
>>> with his activity on the mailing lists [3][4], as well as various
>>> talks and blog posts about Apache Flink [5][6].
>>> 
>>> Congratulations Alex! The Flink community is proud to have you.
>>> 
>>> Best,
>>> The Flink PMC
>>> 
>>> [1]
>>> https://github.com/search?type=commits=author%3Aafedulov+org%3Aapache
>>> [2]
>>> 
>> https://issues.apache.org/jira/browse/FLINK-28229?jql=status%20in%20(Resolved%2C%20Closed)%20AND%20assignee%20in%20(afedulov)%20ORDER%20BY%20resolved%20DESC%2C%20created%20DESC
>>> [3] https://lists.apache.org/list?dev@flink.apache.org:lte=100M:Fedulov
>>> [4] https://lists.apache.org/list?u...@flink.apache.org:lte=100M:Fedulov
>>> [5]
>>> 
>> https://flink.apache.org/2020/01/15/advanced-flink-application-patterns-vol.1-case-study-of-a-fraud-detection-system/
>>> [6]
>>> 
>> https://www.ververica.com/blog/presenting-our-streaming-concepts-introduction-to-flink-video-series
>>> 
>> 



[2.0] Help needed for release 2.0 work items

2024-01-02 Thread Xintong Song
Hi devs,

The release managers have been tracking the progress of release 2.0 work
items. Unfortunately, some of the items are not making good progress, and
either don't have a contributor or the original contributor no longer has
capacity to work on them. We have already tried reaching out to some
developers, but unfortunately haven't found many people with capacity.

Therefore, we are looking for developers who want to pick them up.

Help is needed on:

   - Introduce dedicated MetricsScope
   - Rework MetricGroup scope APIs
   - Remove MetricGroup methods accepting an int as a name
   - Remove brackets around variables
   - Drop MetricReporter#open
   - Gauge should only take subclasses of Number, rather than everything
   - Add MetricGroup#getLogicalScope
   - Use Java’s Duration instead of Flink’s Time
   - Review and refactor the REST API
   - Properly handle NaN/Infinity in OpenAPI spec
   - Enforce single maxExceptions query parameter
   - Drop Yarn specific get rest endpoints
   - Review and refactor the metrics implementation
   - Attach semantics to Gauges; refactor Counter / Meter to be Gauges with
   syntactic sugar on top
   - Restructure


Please note that:

   - For some of the items, the milestones are already given, and there
   might be some actions that need to be performed by Flink 1.19. Please be
   aware that we are only 3.5 weeks from the 1.19 feature freeze.
   - There are also items which don't have any plans / milestones yet. For
   such items, we may want to quickly look into them to find out if there's
   anything that needs to be done in 1.19.
   - See more details on the 2.0 wiki page [1]


If these items do not make Flink 1.19, we can discuss later what to do with
them, either postpone release 2.0 or exclude them from this major release.
But for now, let's first see what we can do by 1.19.

Best,

Xintong


[1] https://cwiki.apache.org/confluence/display/FLINK/2.0+Release


[jira] [Created] (FLINK-33967) Remove/Rename log4j2-test.properties in flink-streaming-java's test bundle

2024-01-02 Thread Koala Lam (Jira)
Koala Lam created FLINK-33967:
-

 Summary: Remove/Rename log4j2-test.properties in 
flink-streaming-java's test bundle
 Key: FLINK-33967
 URL: https://issues.apache.org/jira/browse/FLINK-33967
 Project: Flink
  Issue Type: Improvement
Reporter: Koala Lam


This file on the test classpath is picked up automatically by Log4j2. In order to
reliably use our own log4j2 test config, we have to specify the system property
"log4j2.configurationFile", which is not ideal as we have to set it manually in the
IDE run configuration.

https://logging.apache.org/log4j/2.x/manual/configuration.html
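
For illustration, the current workaround looks roughly like the following. This is not
a proposed API; it only sets the documented Log4j2 system property programmatically
before logging initializes, and the class name and file path are made up for the example:

{code:java}
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;

public final class ExplicitLoggingConfig {
    public static void main(String[] args) {
        // Must happen before the first Logger is created, otherwise Log4j2 may have
        // already picked up a log4j2-test.properties found on the test classpath.
        System.setProperty("log4j2.configurationFile", "file:./conf/my-log4j2-test.properties");

        Logger log = LogManager.getLogger(ExplicitLoggingConfig.class);
        log.info("Logging initialized from the explicitly configured file.");
    }
}
{code}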



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [Discuss][Flink-31326] Flink autoscaler code

2024-01-02 Thread Yang LI
Hello Rui,

Here is the jira ticket https://issues.apache.org/jira/browse/FLINK-33966,
I have pushed a tiny PR for this ticket.

Regards,
Yang

On Tue, 2 Jan 2024 at 16:15, Rui Fan <1996fan...@gmail.com> wrote:

> Thanks Yang for reporting this issue!
>
> You are right, these 2 conditions are indeed the same. It's unexpected
> IIUC.
> Would you like to fix it?
>
> Feel free to create a FLINK JIRA to fix it if you would like to, and I'm
> happy to
> review!
>
> And cc @Maximilian Michels 
>
> Best,
> Rui
>
> On Tue, Jan 2, 2024 at 11:03 PM Yang LI  wrote:
>
> > Hello,
> >
> > I see we have 2 times the same condition check in the
> > function getNumRecordsInPerSecond (L220
> > <
> >
> https://github.com/apache/flink-kubernetes-operator/blob/main/flink-autoscaler/src/main/java/org/apache/flink/autoscaler/metrics/ScalingMetrics.java#L220
> > >
> > and
> > L224
> > <
> >
> https://github.com/apache/flink-kubernetes-operator/blob/main/flink-autoscaler/src/main/java/org/apache/flink/autoscaler/metrics/ScalingMetrics.java#L224
> > >).
> > I imagine you want to use SOURCE_TASK_NUM_RECORDS_OUT_PER_SEC when the
> > operator is not the source. Can you confirm this and if we have a FIP
> > ticket to fix this?
> >
> > Regards,
> > Yang LI
> >
>


[jira] [Created] (FLINK-33966) Fix the getNumRecordsInPerSecond Utility Function

2024-01-02 Thread Yang LI (Jira)
Yang LI created FLINK-33966:
---

 Summary: Fix the getNumRecordsInPerSecond Utility Function
 Key: FLINK-33966
 URL: https://issues.apache.org/jira/browse/FLINK-33966
 Project: Flink
  Issue Type: Bug
  Components: Autoscaler
Affects Versions: kubernetes-operator-1.7.0
Reporter: Yang LI


We have 2 times the same condition check in the function 
getNumRecordsInPerSecond 
([L220|https://github.com/apache/flink-kubernetes-operator/blob/main/flink-autoscaler/src/main/java/org/apache/flink/autoscaler/metrics/ScalingMetrics.java#L220]
 and 
[L224|https://github.com/apache/flink-kubernetes-operator/blob/main/flink-autoscaler/src/main/java/org/apache/flink/autoscaler/metrics/ScalingMetrics.java#L224])

Action:
Update getNumRecordsInPerSecond's second {{if}} condition from
{{if (isSource && ...)}} to {{if (!isSource && ...)}}. This addresses the
redundant check and ensures correct metric fetching for non-source operators.
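
A simplified, self-contained sketch of the intended logic (the metric keys and the
method signature below are stand-ins; the real method lives in ScalingMetrics and
uses the FlinkMetric/AggregatedMetric types):

{code:java}
import java.util.Map;

final class NumRecordsInSketch {

    static double getNumRecordsInPerSecond(Map<String, Double> metrics, boolean isSource) {
        Double recordsIn = metrics.get("NUM_RECORDS_IN_PER_SEC");
        if (isSource && (recordsIn == null || recordsIn == 0)) {
            recordsIn = metrics.get("SOURCE_TASK_NUM_RECORDS_IN_PER_SEC");
        }
        // Before the fix this branch also checked isSource (the redundant, duplicate check);
        // flipping it to !isSource lets non-source vertices fall back as intended.
        if (!isSource && (recordsIn == null || recordsIn == 0)) {
            recordsIn = metrics.get("SOURCE_TASK_NUM_RECORDS_OUT_PER_SEC");
        }
        return recordsIn == null ? Double.NaN : recordsIn;
    }

    public static void main(String[] args) {
        // Non-source vertex with only the source-task out-rate available: now falls back to 42.0.
        System.out.println(getNumRecordsInPerSecond(
                Map.of("SOURCE_TASK_NUM_RECORDS_OUT_PER_SEC", 42.0), false));
    }
}
{code}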



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [ANNOUNCE] New Apache Flink Committer - Alexander Fedulov

2024-01-02 Thread Tang, Zhiyan (udx2na)
big congrats to Alex and Happy New Year everyone!

Best
Tony

Get Outlook for iOS

From: Austin Cawley-Edwards 
Sent: Tuesday, January 2, 2024 2:07:49 PM
To: dev@flink.apache.org 
Cc: Alexander Fedulov 
Subject: Re: [ANNOUNCE] New Apache Flink Committer - Alexander Fedulov

Congrats Alex!!

On Tue, Jan 2, 2024 at 10:12 Feng Jin  wrote:

> Congratulations, Alex!
>
> Best,
> Feng
>
> On Tue, Jan 2, 2024 at 11:04 PM Chen Yu  wrote:
>
> > Congratulations, Alex!
> >
> > Best,
> > Yu Chen
> >
> > 
> > From: Zhanghao Chen 
> > Sent: January 2, 2024 22:44
> > To: dev
> > Cc: Alexander Fedulov
> > Subject: Re: [ANNOUNCE] New Apache Flink Committer - Alexander Fedulov
> >
> > Congrats, Alex!
> >
> > Best,
> > Zhanghao Chen
> > 
> > From: Maximilian Michels 
> > Sent: Tuesday, January 2, 2024 20:15
> > To: dev 
> > Cc: Alexander Fedulov 
> > Subject: [ANNOUNCE] New Apache Flink Committer - Alexander Fedulov
> >
> > Happy New Year everyone,
> >
> > I'd like to start the year off by announcing Alexander Fedulov as a
> > new Flink committer.
> >
> > Alex has been active in the Flink community since 2019. He has
> > contributed more than 100 commits to Flink, its Kubernetes operator,
> > and various connectors [1][2].
> >
> > Especially noteworthy are his contributions on deprecating and
> > migrating the old Source API functions and test harnesses, the
> > enhancement to flame graphs, the dynamic rescale time computation in
> > Flink Autoscaling, as well as all the small enhancements Alex has
> > contributed which make a huge difference.
> >
> > Beyond code contributions, Alex has been an active community member
> > with his activity on the mailing lists [3][4], as well as various
> > talks and blog posts about Apache Flink [5][6].
> >
> > Congratulations Alex! The Flink community is proud to have you.
> >
> > Best,
> > The Flink PMC
> >
> > [1]
> > https://github.com/search?type=commits=author%3Aafedulov+org%3Aapache
> > [2]
> >
> https://issues.apache.org/jira/browse/FLINK-28229?jql=status%20in%20(Resolved%2C%20Closed)%20AND%20assignee%20in%20(afedulov)%20ORDER%20BY%20resolved%20DESC%2C%20created%20DESC
> > [3] https://lists.apache.org/list?dev@flink.apache.org:lte=100M:Fedulov
> > [4] https://lists.apache.org/list?u...@flink.apache.org:lte=100M:Fedulov
> > [5]
> >
> https://flink.apache.org/2020/01/15/advanced-flink-application-patterns-vol.1-case-study-of-a-fraud-detection-system/
> > [6]
> >
> https://www.ververica.com/blog/presenting-our-streaming-concepts-introduction-to-flink-video-series
> >
>


Re: [ANNOUNCE] New Apache Flink Committer - Alexander Fedulov

2024-01-02 Thread Austin Cawley-Edwards
Congrats Alex!!

On Tue, Jan 2, 2024 at 10:12 Feng Jin  wrote:

> Congratulations, Alex!
>
> Best,
> Feng
>
> On Tue, Jan 2, 2024 at 11:04 PM Chen Yu  wrote:
>
> > Congratulations, Alex!
> >
> > Best,
> > Yu Chen
> >
> > 
> > From: Zhanghao Chen 
> > Sent: January 2, 2024 22:44
> > To: dev
> > Cc: Alexander Fedulov
> > Subject: Re: [ANNOUNCE] New Apache Flink Committer - Alexander Fedulov
> >
> > Congrats, Alex!
> >
> > Best,
> > Zhanghao Chen
> > 
> > From: Maximilian Michels 
> > Sent: Tuesday, January 2, 2024 20:15
> > To: dev 
> > Cc: Alexander Fedulov 
> > Subject: [ANNOUNCE] New Apache Flink Committer - Alexander Fedulov
> >
> > Happy New Year everyone,
> >
> > I'd like to start the year off by announcing Alexander Fedulov as a
> > new Flink committer.
> >
> > Alex has been active in the Flink community since 2019. He has
> > contributed more than 100 commits to Flink, its Kubernetes operator,
> > and various connectors [1][2].
> >
> > Especially noteworthy are his contributions on deprecating and
> > migrating the old Source API functions and test harnesses, the
> > enhancement to flame graphs, the dynamic rescale time computation in
> > Flink Autoscaling, as well as all the small enhancements Alex has
> > contributed which make a huge difference.
> >
> > Beyond code contributions, Alex has been an active community member
> > with his activity on the mailing lists [3][4], as well as various
> > talks and blog posts about Apache Flink [5][6].
> >
> > Congratulations Alex! The Flink community is proud to have you.
> >
> > Best,
> > The Flink PMC
> >
> > [1]
> > https://github.com/search?type=commits=author%3Aafedulov+org%3Aapache
> > [2]
> >
> https://issues.apache.org/jira/browse/FLINK-28229?jql=status%20in%20(Resolved%2C%20Closed)%20AND%20assignee%20in%20(afedulov)%20ORDER%20BY%20resolved%20DESC%2C%20created%20DESC
> > [3] https://lists.apache.org/list?dev@flink.apache.org:lte=100M:Fedulov
> > [4] https://lists.apache.org/list?u...@flink.apache.org:lte=100M:Fedulov
> > [5]
> >
> https://flink.apache.org/2020/01/15/advanced-flink-application-patterns-vol.1-case-study-of-a-fraud-detection-system/
> > [6]
> >
> https://www.ververica.com/blog/presenting-our-streaming-concepts-introduction-to-flink-video-series
> >
>


Re: [VOTE] FLIP-372: Allow TwoPhaseCommittingSink WithPreCommitTopology to alter the type of the Committable

2024-01-02 Thread Tzu-Li (Gordon) Tai
+1 (binding) looks good to me overall

thank you for revising the FLIP and continuing to drive the decision, Peter!

On Wed, Dec 27, 2023 at 7:16 AM Martijn Visser 
wrote:

> Hi Peter,
>
> It would be good if Gordon can take a look, but overall this looks good to
> me +1
>
> Best regards,
>
> Martijn
>
> On Fri, Dec 22, 2023 at 8:25 AM Péter Váry 
> wrote:
> >
> > We have enough votes for the decision, but given that this is an
> important
> > change, and for many of us it is a holiday season, I plan to keep this
> vote
> > open until the 3rd of January. This way, if anyone else has comments and
> > suggestions then they have time to raise them.
> >
> > Thanks everyone for the votes, and Leonard for the useful suggestions!
> >
> > Happy holidays everyone!
> >
> > Peter
> >
> > On Thu, Dec 21, 2023, 11:23 Leonard Xu  wrote:
> >
> > > Thanks Peter for quick response and update.
> > >
> > > I’ve no more comments on the updated FLIP, +1.
> > >
> > > For the PR process, you could also use draft PR[1] to leverage the
> testing
> > > infra during POC phase,
> > > we usually create FLIP umbrella issue and subtask issues after the
> FLIP is
> > > accepted.
> > >
> > >
> > > Best,
> > > Leonard
> > > [1]https://github.com/apache/flink/pulls?q=is%3Apr+is%3Aopen+draft
> > >
> > >
> > >
> > >
> > > >>
> > > >>
> > > >> Best,
> > > >> Leonard
> > > >>
> > > >>
> > > >>
> > > >>> On Dec 21, 2023, at 11:47 AM, Jiabao Sun 
> wrote:
> > > >>>
> > > >>> Thanks Peter for driving this.
> > > >>>
> > > >>> +1 (non-binding)
> > > >>>
> > > >>> Best,
> > > >>> Jiabao
> > > >>>
> > > >>>
> > > >>> On 2023/12/18 12:06:05 Gyula Fóra wrote:
> > >  +1 (binding)
> > > 
> > >  Gyula
> > > 
> > >  On Mon, 18 Dec 2023 at 13:04, Márton Balassi 
> > >  wrote:
> > > 
> > > > +1 (binding)
> > > >
> > > > On Mon 18. Dec 2023 at 09:34, Péter Váry 
> > > > wrote:
> > > >
> > > >> Hi everyone,
> > > >>
> > > >> Since there were no further comments on the discussion thread
> [1], I
> > > > would
> > > >> like to start the vote for FLIP-372 [2].
> > > >>
> > > >> The FLIP started as a small new feature, but in the discussion
> > > thread
> > > >> and
> > > >> in a similar parallel thread [3] we opted for a somewhat bigger
> > > >> change in
> > > >> the Sink V2 API.
> > > >>
> > > >> Please read the FLIP and cast your vote.
> > > >>
> > > >> The vote will remain open for at least 72 hours and only
> concluded
> > > if
> > > > there
> > > >> are no objections and enough (i.e. at least 3) binding votes.
> > > >>
> > > >> Thanks,
> > > >> Peter
> > > >>
> > > >> [1] -
> > > >> https://lists.apache.org/thread/344pzbrqbbb4w0sfj67km25msp7hxlyd
> > > >> [2] -
> > > >>
> > > >>
> > > >
> > > >>
> > >
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-372%3A+Allow+TwoPhaseCommittingSink+WithPreCommitTopology+to+alter+the+type+of+the+Committable
> > > >> [3] -
> > > >> https://lists.apache.org/thread/h6nkgth838dlh5s90sd95zd6hlsxwx57
> > > >>
> > > >
> > > >>
> > > >>
> > >
> > >
>


Re: [VOTE] FLIP-400: AsyncScalarFunction for asynchronous scalar function support

2024-01-02 Thread Piotr Nowojski
+1 (binding)

Best,
Piotrek

On Thu, Dec 28, 2023 at 09:19 Timo Walther  wrote:

> +1 (binding)
>
> Cheers,
> Timo
>
> > Am 28.12.2023 um 03:13 schrieb Yuepeng Pan :
> >
> > +1 (non-binding).
> >
> > Best,
> > Yuepeng Pan.
> >
> >
> >
> >
> > At 2023-12-28 09:19:37, "Lincoln Lee"  wrote:
> >> +1 (binding)
> >>
> >> Best,
> >> Lincoln Lee
> >>
> >>
>> Martijn Visser  wrote on Wed, Dec 27, 2023 at 23:16:
> >>
> >>> +1 (binding)
> >>>
> >>> On Fri, Dec 22, 2023 at 1:44 AM Jim Hughes
> 
> >>> wrote:
> 
>  Hi Alan,
> 
>  +1 (non binding)
> 
>  Cheers,
> 
>  Jim
> 
>  On Wed, Dec 20, 2023 at 2:41 PM Alan Sheinberg
>   wrote:
> 
> > Hi everyone,
> >
> > I'd like to start a vote on FLIP-400 [1]. It covers introducing a new
> >>> UDF
> > type, AsyncScalarFunction for completing invocations asynchronously.
> >>> It
> > has been discussed in this thread [2].
> >
> > I would like to start a vote.  The vote will be open for at least 72
> >>> hours
> > (until December 28th 18:00 GMT) unless there is an objection or
> > insufficient votes.
> >
> > [1]
> >
> >
> >>>
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-400%3A+AsyncScalarFunction+for+asynchronous+scalar+function+support
> > [2] https://lists.apache.org/thread/q3st6t1w05grd7bthzfjtr4r54fv4tm2
> >
> > Thanks,
> > Alan
> >
> >>>
>
>


Re: [Discuss][Flink-31326] Flink autoscaler code

2024-01-02 Thread Rui Fan
Thanks Yang for reporting this issue!

You are right, these 2 conditions are indeed the same. It's unexpected IIUC.
Would you like to fix it?

Feel free to create a FLINK JIRA to fix it if you would like to, and I'm
happy to
review!

And cc @Maximilian Michels 

Best,
Rui

On Tue, Jan 2, 2024 at 11:03 PM Yang LI  wrote:

> Hello,
>
> I see we have 2 times the same condition check in the
> function getNumRecordsInPerSecond (L220
> <
> https://github.com/apache/flink-kubernetes-operator/blob/main/flink-autoscaler/src/main/java/org/apache/flink/autoscaler/metrics/ScalingMetrics.java#L220
> >
> and
> L224
> <
> https://github.com/apache/flink-kubernetes-operator/blob/main/flink-autoscaler/src/main/java/org/apache/flink/autoscaler/metrics/ScalingMetrics.java#L224
> >).
> I imagine you want to use SOURCE_TASK_NUM_RECORDS_OUT_PER_SEC when the
> operator is not the source. Can you confirm this and if we have a FIP
> ticket to fix this?
>
> Regards,
> Yang LI
>


Re: [ANNOUNCE] New Apache Flink Committer - Alexander Fedulov

2024-01-02 Thread Feng Jin
Congratulations, Alex!

Best,
Feng

On Tue, Jan 2, 2024 at 11:04 PM Chen Yu  wrote:

> Congratulations, Alex!
>
> Best,
> Yu Chen
>
> 
> From: Zhanghao Chen 
> Sent: January 2, 2024 22:44
> To: dev
> Cc: Alexander Fedulov
> Subject: Re: [ANNOUNCE] New Apache Flink Committer - Alexander Fedulov
>
> Congrats, Alex!
>
> Best,
> Zhanghao Chen
> 
> From: Maximilian Michels 
> Sent: Tuesday, January 2, 2024 20:15
> To: dev 
> Cc: Alexander Fedulov 
> Subject: [ANNOUNCE] New Apache Flink Committer - Alexander Fedulov
>
> Happy New Year everyone,
>
> I'd like to start the year off by announcing Alexander Fedulov as a
> new Flink committer.
>
> Alex has been active in the Flink community since 2019. He has
> contributed more than 100 commits to Flink, its Kubernetes operator,
> and various connectors [1][2].
>
> Especially noteworthy are his contributions on deprecating and
> migrating the old Source API functions and test harnesses, the
> enhancement to flame graphs, the dynamic rescale time computation in
> Flink Autoscaling, as well as all the small enhancements Alex has
> contributed which make a huge difference.
>
> Beyond code contributions, Alex has been an active community member
> with his activity on the mailing lists [3][4], as well as various
> talks and blog posts about Apache Flink [5][6].
>
> Congratulations Alex! The Flink community is proud to have you.
>
> Best,
> The Flink PMC
>
> [1]
> https://github.com/search?type=commits=author%3Aafedulov+org%3Aapache
> [2]
> https://issues.apache.org/jira/browse/FLINK-28229?jql=status%20in%20(Resolved%2C%20Closed)%20AND%20assignee%20in%20(afedulov)%20ORDER%20BY%20resolved%20DESC%2C%20created%20DESC
> [3] https://lists.apache.org/list?dev@flink.apache.org:lte=100M:Fedulov
> [4] https://lists.apache.org/list?u...@flink.apache.org:lte=100M:Fedulov
> [5]
> https://flink.apache.org/2020/01/15/advanced-flink-application-patterns-vol.1-case-study-of-a-fraud-detection-system/
> [6]
> https://www.ververica.com/blog/presenting-our-streaming-concepts-introduction-to-flink-video-series
>


Re: [ANNOUNCE] New Apache Flink Committer - Alexander Fedulov

2024-01-02 Thread Chen Yu
Congratulations, Alex!

Best,
Yu Chen


From: Zhanghao Chen 
Sent: January 2, 2024 22:44
To: dev
Cc: Alexander Fedulov
Subject: Re: [ANNOUNCE] New Apache Flink Committer - Alexander Fedulov

Congrats, Alex!

Best,
Zhanghao Chen

From: Maximilian Michels 
Sent: Tuesday, January 2, 2024 20:15
To: dev 
Cc: Alexander Fedulov 
Subject: [ANNOUNCE] New Apache Flink Committer - Alexander Fedulov

Happy New Year everyone,

I'd like to start the year off by announcing Alexander Fedulov as a
new Flink committer.

Alex has been active in the Flink community since 2019. He has
contributed more than 100 commits to Flink, its Kubernetes operator,
and various connectors [1][2].

Especially noteworthy are his contributions on deprecating and
migrating the old Source API functions and test harnesses, the
enhancement to flame graphs, the dynamic rescale time computation in
Flink Autoscaling, as well as all the small enhancements Alex has
contributed which make a huge difference.

Beyond code contributions, Alex has been an active community member
with his activity on the mailing lists [3][4], as well as various
talks and blog posts about Apache Flink [5][6].

Congratulations Alex! The Flink community is proud to have you.

Best,
The Flink PMC

[1] https://github.com/search?type=commits=author%3Aafedulov+org%3Aapache
[2] 
https://issues.apache.org/jira/browse/FLINK-28229?jql=status%20in%20(Resolved%2C%20Closed)%20AND%20assignee%20in%20(afedulov)%20ORDER%20BY%20resolved%20DESC%2C%20created%20DESC
[3] https://lists.apache.org/list?dev@flink.apache.org:lte=100M:Fedulov
[4] https://lists.apache.org/list?u...@flink.apache.org:lte=100M:Fedulov
[5] 
https://flink.apache.org/2020/01/15/advanced-flink-application-patterns-vol.1-case-study-of-a-fraud-detection-system/
[6] 
https://www.ververica.com/blog/presenting-our-streaming-concepts-introduction-to-flink-video-series


[Discuss][Flink-31326] Flink autoscaler code

2024-01-02 Thread Yang LI
Hello,

I see we have 2 times the same condition check in the
function getNumRecordsInPerSecond (L220 and L224 in ScalingMetrics.java).
I imagine you want to use SOURCE_TASK_NUM_RECORDS_OUT_PER_SEC when the
operator is not the source. Can you confirm this and if we have a FIP
ticket to fix this?

Regards,
Yang LI


Re: [ANNOUNCE] New Apache Flink Committer - Alexander Fedulov

2024-01-02 Thread Zhanghao Chen
Congrats, Alex!

Best,
Zhanghao Chen

From: Maximilian Michels 
Sent: Tuesday, January 2, 2024 20:15
To: dev 
Cc: Alexander Fedulov 
Subject: [ANNOUNCE] New Apache Flink Committer - Alexander Fedulov

Happy New Year everyone,

I'd like to start the year off by announcing Alexander Fedulov as a
new Flink committer.

Alex has been active in the Flink community since 2019. He has
contributed more than 100 commits to Flink, its Kubernetes operator,
and various connectors [1][2].

Especially noteworthy are his contributions on deprecating and
migrating the old Source API functions and test harnesses, the
enhancement to flame graphs, the dynamic rescale time computation in
Flink Autoscaling, as well as all the small enhancements Alex has
contributed which make a huge difference.

Beyond code contributions, Alex has been an active community member
with his activity on the mailing lists [3][4], as well as various
talks and blog posts about Apache Flink [5][6].

Congratulations Alex! The Flink community is proud to have you.

Best,
The Flink PMC

[1] https://github.com/search?type=commits=author%3Aafedulov+org%3Aapache
[2] 
https://issues.apache.org/jira/browse/FLINK-28229?jql=status%20in%20(Resolved%2C%20Closed)%20AND%20assignee%20in%20(afedulov)%20ORDER%20BY%20resolved%20DESC%2C%20created%20DESC
[3] https://lists.apache.org/list?dev@flink.apache.org:lte=100M:Fedulov
[4] https://lists.apache.org/list?u...@flink.apache.org:lte=100M:Fedulov
[5] 
https://flink.apache.org/2020/01/15/advanced-flink-application-patterns-vol.1-case-study-of-a-fraud-detection-system/
[6] 
https://www.ververica.com/blog/presenting-our-streaming-concepts-introduction-to-flink-video-series


Re: [ANNOUNCE] New Apache Flink Committer - Alexander Fedulov

2024-01-02 Thread David Anderson
That's great news. Congratulations, Alex!

David

On Tue, Jan 2, 2024 at 9:00 AM Ryan Skraba 
wrote:

> Awesome news for the community -- congratulations Alex (and Happy New
> Year everyone!)
>
> Ryan
>
> On Tue, Jan 2, 2024 at 2:55 PM Yun Tang  wrote:
> >
> > Congratulation to Alex and Happy New Year everyone!
> >
> > Best
> > Yun Tang
> > 
> > From: Rui Fan <1996fan...@gmail.com>
> > Sent: Tuesday, January 2, 2024 21:33
> > To: dev@flink.apache.org 
> > Cc: Alexander Fedulov 
> > Subject: Re: [ANNOUNCE] New Apache Flink Committer - Alexander Fedulov
> >
> > Happy new year!
> >
> > Hmm, sorry for the typo in the last email.
> > Congratulations Alex, well done!
> >
> > Best,
> > Rui
> >
> > On Tue, 2 Jan 2024 at 20:23, Rui Fan <1996fan...@gmail.com> wrote:
> >
> > > Configurations Alexander!
> > >
> > > Best,
> > > Rui
> > >
> > > On Tue, Jan 2, 2024 at 8:15 PM Maximilian Michels 
> wrote:
> > >
> > >> Happy New Year everyone,
> > >>
> > >> I'd like to start the year off by announcing Alexander Fedulov as a
> > >> new Flink committer.
> > >>
> > >> Alex has been active in the Flink community since 2019. He has
> > >> contributed more than 100 commits to Flink, its Kubernetes operator,
> > >> and various connectors [1][2].
> > >>
> > >> Especially noteworthy are his contributions on deprecating and
> > >> migrating the old Source API functions and test harnesses, the
> > >> enhancement to flame graphs, the dynamic rescale time computation in
> > >> Flink Autoscaling, as well as all the small enhancements Alex has
> > >> contributed which make a huge difference.
> > >>
> > >> Beyond code contributions, Alex has been an active community member
> > >> with his activity on the mailing lists [3][4], as well as various
> > >> talks and blog posts about Apache Flink [5][6].
> > >>
> > >> Congratulations Alex! The Flink community is proud to have you.
> > >>
> > >> Best,
> > >> The Flink PMC
> > >>
> > >> [1]
> > >>
> https://github.com/search?type=commits=author%3Aafedulov+org%3Aapache
> > >> [2]
> > >>
> https://issues.apache.org/jira/browse/FLINK-28229?jql=status%20in%20(Resolved%2C%20Closed)%20AND%20assignee%20in%20(afedulov)%20ORDER%20BY%20resolved%20DESC%2C%20created%20DESC
> > >> [3]
> https://lists.apache.org/list?dev@flink.apache.org:lte=100M:Fedulov
> > >> [4]
> https://lists.apache.org/list?u...@flink.apache.org:lte=100M:Fedulov
> > >> [5]
> > >>
> https://flink.apache.org/2020/01/15/advanced-flink-application-patterns-vol.1-case-study-of-a-fraud-detection-system/
> > >> [6]
> > >>
> https://www.ververica.com/blog/presenting-our-streaming-concepts-introduction-to-flink-video-series
> > >>
> > >
>


Re: [ANNOUNCE] New Apache Flink Committer - Alexander Fedulov

2024-01-02 Thread Ryan Skraba
Awesome news for the community -- congratulations Alex (and Happy New
Year everyone!)

Ryan

On Tue, Jan 2, 2024 at 2:55 PM Yun Tang  wrote:
>
> Congratulation to Alex and Happy New Year everyone!
>
> Best
> Yun Tang
> 
> From: Rui Fan <1996fan...@gmail.com>
> Sent: Tuesday, January 2, 2024 21:33
> To: dev@flink.apache.org 
> Cc: Alexander Fedulov 
> Subject: Re: [ANNOUNCE] New Apache Flink Committer - Alexander Fedulov
>
> Happy new year!
>
> Hmm, sorry for the typo in the last email.
> Congratulations Alex, well done!
>
> Best,
> Rui
>
> On Tue, 2 Jan 2024 at 20:23, Rui Fan <1996fan...@gmail.com> wrote:
>
> > Configurations Alexander!
> >
> > Best,
> > Rui
> >
> > On Tue, Jan 2, 2024 at 8:15 PM Maximilian Michels  wrote:
> >
> >> Happy New Year everyone,
> >>
> >> I'd like to start the year off by announcing Alexander Fedulov as a
> >> new Flink committer.
> >>
> >> Alex has been active in the Flink community since 2019. He has
> >> contributed more than 100 commits to Flink, its Kubernetes operator,
> >> and various connectors [1][2].
> >>
> >> Especially noteworthy are his contributions on deprecating and
> >> migrating the old Source API functions and test harnesses, the
> >> enhancement to flame graphs, the dynamic rescale time computation in
> >> Flink Autoscaling, as well as all the small enhancements Alex has
> >> contributed which make a huge difference.
> >>
> >> Beyond code contributions, Alex has been an active community member
> >> with his activity on the mailing lists [3][4], as well as various
> >> talks and blog posts about Apache Flink [5][6].
> >>
> >> Congratulations Alex! The Flink community is proud to have you.
> >>
> >> Best,
> >> The Flink PMC
> >>
> >> [1]
> >> https://github.com/search?type=commits=author%3Aafedulov+org%3Aapache
> >> [2]
> >> https://issues.apache.org/jira/browse/FLINK-28229?jql=status%20in%20(Resolved%2C%20Closed)%20AND%20assignee%20in%20(afedulov)%20ORDER%20BY%20resolved%20DESC%2C%20created%20DESC
> >> [3] https://lists.apache.org/list?dev@flink.apache.org:lte=100M:Fedulov
> >> [4] https://lists.apache.org/list?u...@flink.apache.org:lte=100M:Fedulov
> >> [5]
> >> https://flink.apache.org/2020/01/15/advanced-flink-application-patterns-vol.1-case-study-of-a-fraud-detection-system/
> >> [6]
> >> https://www.ververica.com/blog/presenting-our-streaming-concepts-introduction-to-flink-video-series
> >>
> >


Re: [ANNOUNCE] New Apache Flink Committer - Alexander Fedulov

2024-01-02 Thread Yun Tang
Congratulation to Alex and Happy New Year everyone!

Best
Yun Tang

From: Rui Fan <1996fan...@gmail.com>
Sent: Tuesday, January 2, 2024 21:33
To: dev@flink.apache.org 
Cc: Alexander Fedulov 
Subject: Re: [ANNOUNCE] New Apache Flink Committer - Alexander Fedulov

Happy new year!

Hmm, sorry for the typo in the last email.
Congratulations Alex, well done!

Best,
Rui

On Tue, 2 Jan 2024 at 20:23, Rui Fan <1996fan...@gmail.com> wrote:

> Configurations Alexander!
>
> Best,
> Rui
>
> On Tue, Jan 2, 2024 at 8:15 PM Maximilian Michels  wrote:
>
>> Happy New Year everyone,
>>
>> I'd like to start the year off by announcing Alexander Fedulov as a
>> new Flink committer.
>>
>> Alex has been active in the Flink community since 2019. He has
>> contributed more than 100 commits to Flink, its Kubernetes operator,
>> and various connectors [1][2].
>>
>> Especially noteworthy are his contributions on deprecating and
>> migrating the old Source API functions and test harnesses, the
>> enhancement to flame graphs, the dynamic rescale time computation in
>> Flink Autoscaling, as well as all the small enhancements Alex has
>> contributed which make a huge difference.
>>
>> Beyond code contributions, Alex has been an active community member
>> with his activity on the mailing lists [3][4], as well as various
>> talks and blog posts about Apache Flink [5][6].
>>
>> Congratulations Alex! The Flink community is proud to have you.
>>
>> Best,
>> The Flink PMC
>>
>> [1]
>> https://github.com/search?type=commits=author%3Aafedulov+org%3Aapache
>> [2]
>> https://issues.apache.org/jira/browse/FLINK-28229?jql=status%20in%20(Resolved%2C%20Closed)%20AND%20assignee%20in%20(afedulov)%20ORDER%20BY%20resolved%20DESC%2C%20created%20DESC
>> [3] https://lists.apache.org/list?dev@flink.apache.org:lte=100M:Fedulov
>> [4] https://lists.apache.org/list?u...@flink.apache.org:lte=100M:Fedulov
>> [5]
>> https://flink.apache.org/2020/01/15/advanced-flink-application-patterns-vol.1-case-study-of-a-fraud-detection-system/
>> [6]
>> https://www.ververica.com/blog/presenting-our-streaming-concepts-introduction-to-flink-video-series
>>
>


Re: [ANNOUNCE] New Apache Flink Committer - Alexander Fedulov

2024-01-02 Thread Rui Fan
Happy new year!

Hmm, sorry for the typo in the last email.
Congratulations Alex, well done!

Best,
Rui

On Tue, 2 Jan 2024 at 20:23, Rui Fan <1996fan...@gmail.com> wrote:

> Configurations Alexander!
>
> Best,
> Rui
>
> On Tue, Jan 2, 2024 at 8:15 PM Maximilian Michels  wrote:
>
>> Happy New Year everyone,
>>
>> I'd like to start the year off by announcing Alexander Fedulov as a
>> new Flink committer.
>>
>> Alex has been active in the Flink community since 2019. He has
>> contributed more than 100 commits to Flink, its Kubernetes operator,
>> and various connectors [1][2].
>>
>> Especially noteworthy are his contributions on deprecating and
>> migrating the old Source API functions and test harnesses, the
>> enhancement to flame graphs, the dynamic rescale time computation in
>> Flink Autoscaling, as well as all the small enhancements Alex has
>> contributed which make a huge difference.
>>
>> Beyond code contributions, Alex has been an active community member
>> with his activity on the mailing lists [3][4], as well as various
>> talks and blog posts about Apache Flink [5][6].
>>
>> Congratulations Alex! The Flink community is proud to have you.
>>
>> Best,
>> The Flink PMC
>>
>> [1]
>> https://github.com/search?type=commits=author%3Aafedulov+org%3Aapache
>> [2]
>> https://issues.apache.org/jira/browse/FLINK-28229?jql=status%20in%20(Resolved%2C%20Closed)%20AND%20assignee%20in%20(afedulov)%20ORDER%20BY%20resolved%20DESC%2C%20created%20DESC
>> [3] https://lists.apache.org/list?dev@flink.apache.org:lte=100M:Fedulov
>> [4] https://lists.apache.org/list?u...@flink.apache.org:lte=100M:Fedulov
>> [5]
>> https://flink.apache.org/2020/01/15/advanced-flink-application-patterns-vol.1-case-study-of-a-fraud-detection-system/
>> [6]
>> https://www.ververica.com/blog/presenting-our-streaming-concepts-introduction-to-flink-video-series
>>
>


Re: [ANNOUNCE] New Apache Flink Committer - Alexander Fedulov

2024-01-02 Thread Rui Fan
Configurations Alexander!

Best,
Rui

On Tue, Jan 2, 2024 at 8:15 PM Maximilian Michels  wrote:

> Happy New Year everyone,
>
> I'd like to start the year off by announcing Alexander Fedulov as a
> new Flink committer.
>
> Alex has been active in the Flink community since 2019. He has
> contributed more than 100 commits to Flink, its Kubernetes operator,
> and various connectors [1][2].
>
> Especially noteworthy are his contributions on deprecating and
> migrating the old Source API functions and test harnesses, the
> enhancement to flame graphs, the dynamic rescale time computation in
> Flink Autoscaling, as well as all the small enhancements Alex has
> contributed which make a huge difference.
>
> Beyond code contributions, Alex has been an active community member
> with his activity on the mailing lists [3][4], as well as various
> talks and blog posts about Apache Flink [5][6].
>
> Congratulations Alex! The Flink community is proud to have you.
>
> Best,
> The Flink PMC
>
> [1]
> https://github.com/search?type=commits=author%3Aafedulov+org%3Aapache
> [2]
> https://issues.apache.org/jira/browse/FLINK-28229?jql=status%20in%20(Resolved%2C%20Closed)%20AND%20assignee%20in%20(afedulov)%20ORDER%20BY%20resolved%20DESC%2C%20created%20DESC
> [3] https://lists.apache.org/list?dev@flink.apache.org:lte=100M:Fedulov
> [4] https://lists.apache.org/list?u...@flink.apache.org:lte=100M:Fedulov
> [5]
> https://flink.apache.org/2020/01/15/advanced-flink-application-patterns-vol.1-case-study-of-a-fraud-detection-system/
> [6]
> https://www.ververica.com/blog/presenting-our-streaming-concepts-introduction-to-flink-video-series
>


[ANNOUNCE] New Apache Flink Committer - Alexander Fedulov

2024-01-02 Thread Maximilian Michels
Happy New Year everyone,

I'd like to start the year off by announcing Alexander Fedulov as a
new Flink committer.

Alex has been active in the Flink community since 2019. He has
contributed more than 100 commits to Flink, its Kubernetes operator,
and various connectors [1][2].

Especially noteworthy are his contributions on deprecating and
migrating the old Source API functions and test harnesses, the
enhancement to flame graphs, the dynamic rescale time computation in
Flink Autoscaling, as well as all the small enhancements Alex has
contributed which make a huge difference.

Beyond code contributions, Alex has been an active community member
with his activity on the mailing lists [3][4], as well as various
talks and blog posts about Apache Flink [5][6].

Congratulations Alex! The Flink community is proud to have you.

Best,
The Flink PMC

[1] https://github.com/search?type=commits=author%3Aafedulov+org%3Aapache
[2] 
https://issues.apache.org/jira/browse/FLINK-28229?jql=status%20in%20(Resolved%2C%20Closed)%20AND%20assignee%20in%20(afedulov)%20ORDER%20BY%20resolved%20DESC%2C%20created%20DESC
[3] https://lists.apache.org/list?dev@flink.apache.org:lte=100M:Fedulov
[4] https://lists.apache.org/list?u...@flink.apache.org:lte=100M:Fedulov
[5] 
https://flink.apache.org/2020/01/15/advanced-flink-application-patterns-vol.1-case-study-of-a-fraud-detection-system/
[6] 
https://www.ververica.com/blog/presenting-our-streaming-concepts-introduction-to-flink-video-series


Re: [DISCUSS][FLINK-31830] Align the Nullability Handling of ROW between SQL and TableAPI

2024-01-02 Thread Timo Walther

Hi Jane,

thanks for the heavy investigation and extensive summaries. I'm sorry 
that I ignored this discussion for too long but would like to help in 
shaping a sustainable long-term solution.


I fear that changing:
- RowType#copy()
- RowType's constructor
- FieldsDataType#nullable()
will not solve all transitive issues.

We should approach the problem from a different perspective. In my point 
of view:

- DataType and LogicalType are just type declarations.
- RelDataType is similarly just a type declaration. Please correct me if 
I'm wrong but RelDataType itself also allows `ROW NOT 
NULL`. It's the factory or optimizer that performs necessary changes.
- It's up to the framework (i.e. planner or Table API) to decide what to 
do with these declarations.


Let's take a Java class:

class MyPojo {
  int i;
}

MyPojo can be nullable, but i cannot. This is the reason why we decided 
to introduce the current behavior. Complex structs are usually generated 
from Table API or from the catalog (e.g. when mapping to schema registry 
or some other external system). It could lead to other downstream 
inconsistencies if we change the method above.
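
As a small illustration of the status quo (assuming flink-table-api-java is on the
classpath; the printed output is indicative): Table API happily builds a nullable
row with a NOT NULL field, which is exactly the shape that, per the discussion in
this thread, SQL/Calcite would implicitly relax.

import static org.apache.flink.table.api.DataTypes.FIELD;
import static org.apache.flink.table.api.DataTypes.INT;
import static org.apache.flink.table.api.DataTypes.ROW;

import org.apache.flink.table.types.DataType;

public final class RowNullabilityExample {
    public static void main(String[] args) {
        DataType tableApiType =
                ROW(FIELD("i", INT().notNull())) // field declared NOT NULL...
                        .nullable();             // ...inside a nullable row
        // Prints something like ROW<`i` INT NOT NULL>; the equivalent SQL declaration
        // ends up with the field forced to nullable, per the discussion above.
        System.out.println(tableApiType);
    }
}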


I can't provide a better solution right now, I need more research on 
this topic. But we should definitely avoid another breaking change 
similar to [1] where the data type system was touched and other projects 
were affected.


How about we work together on this topic and create a FLIP for this? We 
need more examples in a unified document. Currently, the proposal is 
split across multiple Flink and Calcite JIRA issues and a ML discussion.


Regards,
Timo


[1] https://issues.apache.org/jira/browse/FLINK-33523


On 26.12.23 04:47, Jane Chan wrote:

Thanks Shengkai and Xuyang.

@Shengkai

I have one question: Is the influence only limited to the RowType? Does the

Map or Array type have the same problems?



I think the issue is exclusive to RowType. You may want to review
CALCITE-2464[1] for more details.

[1] https://issues.apache.org/jira/browse/CALCITE-2464

@Xuyang

Is it possible to consider introducing a deprecated option to allow users

to fall back to the previous version (default fallback), and then
officially deprecate it in Flink 2.0?



If I understand correctly, 2.0 allows breaking changes to remove historical
baggage in this release. Therefore, if we want to fix this issue before
2.0, we could introduce a fallback option in the two most recent versions
(1.19 and 1.20). However, from version 2.0 onwards, since we no longer
promise backward compatibility, introducing a fallback option might be
unnecessary. What do you think?

BTW, this jira FLINK-33217[1] is caused by that Flink SQL does not handle

the nullable attribute of the Row type in the way Calcite expected.
However, fixing them will also cause a relatively large impact. We may also
need to check the code part in SQL.



Yes, this is another issue caused by the row type nullability handling.
I've mentioned this JIRA ticket in the reference link to the previous
reply.

Best,
Jane

On Mon, Dec 25, 2023 at 1:42 PM Xuyang  wrote:


Hi, Jane, thanks for driving this.


IMO, it is important to keep same consistent semantics between table api
and sql, not only for maintenance, but also for user experience. But for
users, the impact of this modification is a bit large. Is it possible to
consider introducing a deprecated option to allow users to fall back to the
previous version (default fallback), and then officially deprecate it in
Flink 2.0?


BTW, this jira FLINK-33217[1] is caused by that Flink SQL does not handle
the nullable attribute of the Row type in the way Calcite expected.
However, fixing them will also cause a relatively large impact. We may also
need to check the code part in SQL.


[1] https://issues.apache.org/jira/browse/FLINK-33217




--

 Best!
 Xuyang





At 2023-12-25 10:16:28, "Shengkai Fang"  wrote:

Thanks for Jane and Sergey's proposal!

+1 to correct the Table API behavior.

I have one question: Is the influence only limited to the RowType? Does

the

Map or Array type have the same problems?

Best,
Shengkai
[DISCUSS][FLINK-31830] Align the Nullability Handling of ROW between SQL
and TableAPI

Jane Chan  wrote on Fri, Dec 22, 2023 at 17:40:


Dear devs,

Several issues [1][2][3] have been identified regarding the inconsistent
treatment of ROW type nullability between SQL and TableAPI. However,
addressing these discrepancies might necessitate updates to the public API.

Therefore, I'm initiating this discussion to engage the community in
forging a unified approach to resolve these challenges.

To summarize, SQL prohibits ROW types such as ROW, which is implicitly rewritten to ROW by
Calcite[4]. In contrast, TableAPI permits such types, resulting in
inconsistency.
For a comprehensive issue breakdown, please refer to the comment of [1].

According to CALCITE-2464[4], ROW is not a valid type.

As a result, the behavior of TableAPI is incorrect and needs to be consistent


[jira] [Created] (FLINK-33965) Refactor the configuration for autoscaler standalone

2024-01-02 Thread Rui Fan (Jira)
Rui Fan created FLINK-33965:
---

 Summary: Refactor the configuration for autoscaler standalone
 Key: FLINK-33965
 URL: https://issues.apache.org/jira/browse/FLINK-33965
 Project: Flink
  Issue Type: Sub-task
Reporter: Rui Fan
Assignee: Rui Fan


Currently, all configurations of autoscaler standalone are maintained as plain
string keys.

While autoscaler standalone has only a few options, this is easy to maintain.
However, it becomes hard to maintain as we add more options.

While developing the JDBC autoscaler state store and multi-threaded control loop
support, I will need to introduce more options.
h2. Solution:

Introduce an AutoscalerStandaloneOptions class to manage all options of autoscaler
standalone, and generate the documentation for it.
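
A hedged sketch of what such a class could look like (the option keys, defaults, and
descriptions below are made up for illustration, not the final names):

{code:java}
import java.time.Duration;

import org.apache.flink.configuration.ConfigOption;
import org.apache.flink.configuration.ConfigOptions;

/** Illustrative holder for typed options of the standalone autoscaler. */
public final class AutoscalerStandaloneOptions {

    private AutoscalerStandaloneOptions() {}

    public static final ConfigOption<Duration> CONTROL_LOOP_INTERVAL =
            ConfigOptions.key("autoscaler.standalone.control-loop.interval")
                    .durationType()
                    .defaultValue(Duration.ofSeconds(10))
                    .withDescription("Interval between two control loop runs.");

    public static final ConfigOption<Integer> CONTROL_LOOP_PARALLELISM =
            ConfigOptions.key("autoscaler.standalone.control-loop.parallelism")
                    .intType()
                    .defaultValue(1)
                    .withDescription("Number of threads used by the control loop.");
}
{code}

Typed ConfigOptions would also make it straightforward to generate the documentation
page from the class, similar to other Flink option classes.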



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-33964) Flink documentation can't be built due to error in Pulsar docs

2024-01-02 Thread Martijn Visser (Jira)
Martijn Visser created FLINK-33964:
--

 Summary: Flink documentation can't be built due to error in Pulsar
docs
 Key: FLINK-33964
 URL: https://issues.apache.org/jira/browse/FLINK-33964
 Project: Flink
  Issue Type: Bug
  Components: Documentation
Reporter: Martijn Visser
Assignee: Leonard Xu


https://github.com/apache/flink/actions/runs/7380766702/job/20078487743

{code:java}
Start building sites … 
hugo v0.110.0-e32a493b7826d02763c3b79623952e625402b168+extended linux/amd64 
BuildDate=2023-01-17T12:16:09Z VendorInfo=gohugoio
Error: Error building site: 
"/root/flink/docs/themes/connectors/content.zh/docs/connectors/datastream/pulsar.md:491:1":
 failed to extract shortcode: template for shortcode 
"generated/pulsar_admin_configuration" not found
{code}
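For reference, the failing line in pulsar.md presumably contains a Hugo
shortcode include along the lines of the snippet below (reconstructed from
the error message, so the exact form is an assumption); the referenced
template under the generated shortcodes directory is not found:

{code}
{{< generated/pulsar_admin_configuration >}}
{code}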



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [DISCUSS] FLIP-397: Add config options for administrator JVM options

2024-01-02 Thread xiangyu feng
Hi Zhanghao,

Thanks for the reply. LGTM now.

On Tue, Jan 2, 2024 at 10:29, Zhanghao Chen wrote:

> Hi Xiangyu,
>
> The proposed new options are targeted at experienced Flink platform
> administrators rather than normal end users, and a one-to-one mapping from
> each non-default option to its default-option variant might be easier for
> users to understand. Also, although the JM and TM tend to use the same set
> of JVM args most of the time, there are cases where different sets of JVM
> args are preferable. So I am leaning towards the current design, WDYT?
>
> Best,
> Zhanghao Chen
> 
> From: xiangyu feng 
> Sent: Friday, December 29, 2023 20:20
> To: dev@flink.apache.org 
> Subject: Re: [DISCUSS] FLIP-397: Add config options for administrator JVM
> options
>
> Hi Zhanghao,
>
> Thanks for driving this. +1 for the overall idea.
>
> One minor question: do we need separate administrator JVM options for both
> the JobManager and TaskManager, or just one administrator JVM option for all?
>
> I'm afraid that having 6 JVM options (env.java.opts.all,
> env.java.default-opts.all, env.java.opts.jobmanager,
> env.java.default-opts.jobmanager, env.java.opts.taskmanager,
> env.java.default-opts.taskmanager) may confuse users.
>
> Regards,
> Xiangyu
>
>
> On Wed, Dec 27, 2023 at 15:36, Yong Fang wrote:
>
> > +1 for this. We have met jobs that need GC policies different from the
> > default ones to improve performance. Separating the default and user-set
> > options can help us better manage them.
> >
> > Best,
> > Fang Yong
> >
> > On Fri, Dec 22, 2023 at 9:18 PM Benchao Li  wrote:
> >
> > > +1 from my side,
> > >
> > > I have also met scenarios where I wanted to set some JVM options by
> > > default for all Flink jobs, such as '-XX:-DontCompileHugeMethods';
> > > without it, some big generated methods won't be optimized by the JVM
> > > C2 compiler, leading to poor performance.
> > >
> > > On Mon, Nov 27, 2023 at 20:04, Zhanghao Chen wrote:
> > > >
> > > > Hi devs,
> > > >
> > > > I'd like to start a discussion on FLIP-397: Add config options for
> > > > administrator JVM options [1].
> > > >
> > > > In production environments, users typically develop and operate their
> > > > Flink jobs through a managed platform. Users may need to add JVM
> > > > options to their Flink applications (e.g. to tune GC options). They
> > > > typically use the env.java.opts.x series of options to do so.
> > > > Platform administrators also have a set of JVM options to apply by
> > > > default, e.g. to use JVM 17, enable GC logging, or apply pretuned GC
> > > > options, etc. Both use cases will need to set the same series of
> > > > options and will clobber one another. Similar issues have been
> > > > described in SPARK-23472 [2].
> > > >
> > > > Therefore, I propose adding a set of default JVM options for
> > > > administrator use that is prepended to the user-set extra JVM options.
> > > >
> > > > Looking forward to hearing from you.
> > > >
> > > > [1] https://cwiki.apache.org/confluence/display/FLINK/FLIP-397%3A+Add+config+options+for+administrator+JVM+options
> > > > [2] https://issues.apache.org/jira/browse/SPARK-23472
> > > >
> > > > Best,
> > > > Zhanghao Chen
> > >
> > >
> > >
> > > --
> > >
> > > Best,
> > > Benchao Li
> > >
> >
>
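For illustration, under this proposal an administrator's defaults and a
user's extra options could sit side by side in the Flink configuration
roughly as follows (the option names follow the draft discussed above; the
concrete flag values and the log path are assumptions):

# Set by the platform administrator; applied first (prepended).
env.java.default-opts.all: -XX:+UseG1GC -Xlog:gc*:file=/opt/flink/log/gc.log

# Set by the end user; appended after the administrator defaults.
env.java.opts.all: -XX:-DontCompileHugeMethods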