Re: [ANNOUNCE] New Apache Flink PMC Member - Fan Rui

2024-06-10 Thread Jiangang Liu
Congratulations, Fan Rui!


Best,
Jiangang Liu

Jacky Lau wrote on Tue, Jun 11, 2024 at 13:04:

> Congratulations Rui, well deserved!
>
> Regards,
> Jacky Lau
>
> Jeyhun Karimov wrote on Tue, Jun 11, 2024 at 03:49:
>
> > Congratulations Rui, well deserved!
> >
> > Regards,
> > Jeyhun
> >
> > On Mon, Jun 10, 2024, 10:21 Ahmed Hamdy  wrote:
> >
> > > Congratulations Rui!
> > > Best Regards
> > > Ahmed Hamdy
> > >
> > >
> > > On Mon, 10 Jun 2024 at 09:10, David Radley 
> > > wrote:
> > >
> > > > Congratulations, Rui!
> > > >
> > > > From: Sergey Nuyanzin 
> > > > Date: Sunday, 9 June 2024 at 20:33
> > > > To: dev@flink.apache.org 
> > > > Subject: [EXTERNAL] Re: [ANNOUNCE] New Apache Flink PMC Member - Fan
> > Rui
> > > > Congratulations, Rui!
> > > >
> > > > On Fri, Jun 7, 2024 at 5:36 AM Xia Sun  wrote:
> > > >
> > > > > Congratulations, Rui!
> > > > >
> > > > > Best,
> > > > > Xia
> > > > >
> > > > > Paul Lam wrote on Thu, Jun 6, 2024 at 11:59:
> > > > >
> > > > > > Congrats, Rui!
> > > > > >
> > > > > > Best,
> > > > > > Paul Lam
> > > > > >
> > > > > > > On Jun 6, 2024, at 11:02, Junrui Lee wrote:
> > > > > > >
> > > > > > > Congratulations, Rui.
> > > > > > >
> > > > > > > Best,
> > > > > > > Junrui
> > > > > > >
> > > > > > > Hang Ruan wrote on Thu, Jun 6, 2024 at 10:35:
> > > > > > >
> > > > > > >> Congratulations, Rui!
> > > > > > >>
> > > > > > >> Best,
> > > > > > >> Hang
> > > > > > >>
> > > > > > >> Samrat Deb wrote on Thu, Jun 6, 2024 at 10:28:
> > > > > > >>
> > > > > > >>> Congratulations Rui
> > > > > > >>>
> > > > > > >>> Bests,
> > > > > > >>> Samrat
> > > > > > >>>
> > > > > > >>> On Thu, 6 Jun 2024 at 7:45 AM, Yuxin Tan <
> > tanyuxinw...@gmail.com
> > > >
> > > > > > wrote:
> > > > > > >>>
> > > > > > >>>> Congratulations, Rui!
> > > > > > >>>>
> > > > > > >>>> Best,
> > > > > > >>>> Yuxin
> > > > > > >>>>
> > > > > > >>>>
> > > > > > >>>> Xuannan Su wrote on Thu, Jun 6, 2024 at 09:58:
> > > > > > >>>>
> > > > > > >>>>> Congratulations!
> > > > > > >>>>>
> > > > > > >>>>> Best regards,
> > > > > > >>>>> Xuannan
> > > > > > >>>>>
> > > > > > >>>>> On Thu, Jun 6, 2024 at 9:53 AM Hangxiang Yu <
> > > master...@gmail.com
> > > > >
> > > > > > >>> wrote:
> > > > > > >>>>>>
> > > > > > >>>>>> Congratulations, Rui !
> > > > > > >>>>>>
> > > > > > >>>>>> On Thu, Jun 6, 2024 at 9:18 AM Lincoln Lee <
> > > > > lincoln.8...@gmail.com
> > > > > > >>>
> > > > > > >>>>> wrote:
> > > > > > >>>>>>
> > > > > > >>>>>>> Congratulations, Rui!
> > > > > > >>>>>>>
> > > > > > >>>>>>> Best,
> > > > > > >>>>>>> Lincoln Lee
> > > > > > >>>>>>>
> > > > > > >>>>>>>
> > > > > > >>>>>>>> Lijie Wang wrote on Thu, Jun 6, 2024 at
> > 09:11:
> > > > > > >>>>>>>
> > > > > > >>>>>>>> Congratulations, Rui!
> > > > > > >>>>>>>>
> > > > > > >>>>>>>> Best,
> > > > > > >>>>>>>>

Re: [VOTE] FLIP-271: Autoscaling

2022-11-27 Thread Jiangang Liu
+1 (non-binding)

Best,
Jiangang Liu

Thomas Weise wrote on Mon, Nov 28, 2022 at 06:23:

> +1 (binding)
>
>
> On Sat, Nov 26, 2022 at 8:11 AM Zheng Yu Chen  wrote:
>
> > +1 (non-binding)
> >
> > Maximilian Michels wrote on Thu, Nov 24, 2022 at 12:25 AM:
> >
> > > Hi everyone,
> > >
> > > I'd like to start a vote for FLIP-271 [1] which we previously discussed
> > on
> > > the dev mailing list [2].
> > >
> > > I'm planning to keep the vote open for at least until Tuesday, Nov 29.
> > >
> > > -Max
> > >
> > > [1]
> > >
> >
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-271%3A+Autoscaling
> > > [2] https://lists.apache.org/thread/pvfb3fw99mj8r1x8zzyxgvk4dcppwssz
> > >
> >
>


Re: State of the Rescale API

2022-10-11 Thread Jiangang Liu
Thanks for the attention to the rescale API. Dynamic resource adjustment is
useful for streaming jobs since the throughput can change over time. The
rescale API is a lightweight way to change a job's parallelism. This is
important for some jobs, for example jobs serving promotional events or
money-related workloads that cannot be delayed.
In our production scenario, we have supported a simple rescale API, which
may not be perfect. By this chance, I suggest supporting the rescale API in
the adaptive scheduler for auto-scaling.

Chesnay Schepler wrote on Tue, Oct 11, 2022 at 20:36:

> The AdaptiveScheduler is not limited to reactive mode. There are no
> deployment limitations for the scheduler itself.
> In a nutshell, all that reactive mode does is crank the target
> parallelism to infinity, when usually it is the parallelism the user has
> set in the job/configuration.
>
> I think it would be fine if a new/revised rescale API were only
> available in the Adaptive Scheduler (without reactive mode!) for starters.
> We'd require way more stuff to make this useful for batch workloads.
>
> On 10/10/2022 16:47, Maximilian Michels wrote:
> > Hey Gyula,
> >
> > Is the Adaptive Scheduler limited to the Reactive mode? I agree that if
> we
> > move forward with the Adaptive Scheduler solution it should support all
> > deployment scenarios.
> >
> > Thanks,
> > -Max
> >
> > On Sun, Oct 9, 2022 at 6:10 AM Gyula Fóra  wrote:
> >
> >> Hi!
> >>
> >> I think we have to make sure that the Rescale API will work also without
> >> the adaptive scheduler (for instance when we are running Flink with the
> >> Kubernetes Native Integration or in other cases where the adaptive
> >> scheduler is not supported).
> >>
> >> What do you think?
> >>
> >> Cheers
> >> Gyula
> >>
> >>
> >>
> >> On Fri, Oct 7, 2022 at 3:50 PM Maximilian Michels 
> wrote:
> >>
> >>> We've been looking into ways to support programmatic rescaling of job
> >>> vertices. This feature is typically required for any type of Flink
> >>> autoscaler which does not merely set the default parallelism but
> adjusts
> >>> the parallelisms on a JobVertex level.
> >>>
> >>> We've had an initial discussion here:
> >>> https://issues.apache.org/jira/browse/FLINK-29501 where Chesnay
> suggested
> >>> to use the infamous "rescaling" API:
> >>>
> >>>
> https://nightlies.apache.org/flink/flink-docs-master/docs/ops/rest_api/#jobs-jobid-rescaling
> >>> This API is disabled as of
> >>> https://issues.apache.org/jira/browse/FLINK-12312
> >>> .
> >>>
> >>> Since there is the Adaptive Scheduler in Flink now, it may be feasible
> to
> >>> re-enable the API (at least for streaming jobs) and allow overriding
> the
> >>> parallelism of job vertices in addition to the default parallelism.
> >>>
> >>> Any thoughts?
> >>>
> >>> -Max
> >>>
>
>


Re: [VOTE] FLIP-218: Support SELECT clause in CREATE TABLE(CTAS)

2022-07-06 Thread Jiangang Liu
+1 for the design

Jark Wu wrote on Tue, Jul 5, 2022 at 16:04:

> +1 (binding)
>
> Best,
> Jark
>
>
> On Tue, 5 Jul 2022 at 14:18, Mang Zhang  wrote:
>
> > Hi everyone,
> >
> >
> >
> >
> > Thanks for all the feedback so far. Based on the discussion [1], we seem
> > to have consensus. So, I would like to re-start a vote on FLIP-218 [2].
> >
> >
> >
> >
> > The vote will last for at least 72 hours unless there is an objection or
> > insufficient votes.
> >
> >
> >
> >
> > [1] https://lists.apache.org/thread/mc0lv4gptm7som02hpob1hdp3hb1ps1v
> >
> > [2]
> >
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=199541185
> >
> >
> >
> >
> >
> >
> >
> >
> >
> >
> >
> >
> >
> > --
> >
> > Best regards,
> > Mang Zhang
>


Re: [VOTE] FLIP-245: Source Supports Speculative Execution For Batch Job

2022-07-04 Thread Jiangang Liu
+1 for the feature.

Jing Zhang wrote on Tue, Jul 5, 2022 at 11:43:

> Hi all,
>
> I'd like to start a vote for FLIP-245: Source Supports Speculative
> Execution For Batch Job[1] on the discussion thread [2].
>
> The vote will last for at least 72 hours unless there is an objection or
> insufficient votes.
>
> Best,
> Jing Zhang
>
> [1]
>
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-245%3A+Source+Supports+Speculative+Execution+For+Batch+Job
> [2] https://lists.apache.org/thread/zvc5no4yxvwkto7xxpw1vo7j1p6h0lso
>


Re: [VOTE] FLIP-218: Support SELECT clause in CREATE TABLE(CTAS)

2022-07-04 Thread Jiangang Liu
+1 for the feature.

Jark Wu wrote on Mon, Jul 4, 2022 at 17:33:

> Hi Mang,
>
> I left a comment in the DISCUSS thread.
>
> Best,
> Jark
>
> On Mon, 4 Jul 2022 at 15:24, Rui Fan <1996fan...@gmail.com> wrote:
>
> > Hi.
> >
> > Thanks Mang for this FLIP. I think it will be useful for users.
> >
> > +1(non-binding)
> >
> > Best wishes
> > Rui Fan
> >
> > On Mon, Jul 4, 2022 at 3:01 PM Mang Zhang  wrote:
> >
> > > Hi everyone,
> > >
> > >
> > >
> > >
> > > Thanks for all the feedback so far. Based on the discussion [1], we
> seem
> > > to have consensus. So, I would like to start a vote on FLIP-218 [2].
> > >
> > >
> > >
> > >
> > > The vote will last for at least 72 hours unless there is an objection
> or
> > > insufficient votes.
> > >
> > >
> > >
> > >
> > > [1] https://lists.apache.org/thread/mc0lv4gptm7som02hpob1hdp3hb1ps1v
> > >
> > > [2]
> > >
> >
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=199541185
> > >
> > >
> > >
> > >
> > >
> > >
> > >
> > > --
> > >
> > > Best regards,
> > > Mang Zhang
> >
>


Re: Re: [ANNOUNCE] New Apache Flink Committers: Qingsheng Ren, Shengkai Fang

2022-06-22 Thread Jiangang Liu
Congratulations!

Best,
Jiangang Liu

Mason Chen wrote on Wed, Jun 22, 2022 at 00:37:

> Awesome work Qingsheng and Shengkai!
>
> Best,
> Mason
>
> On Tue, Jun 21, 2022 at 4:53 AM Zhipeng Zhang 
> wrote:
>
> > Congratulations, Qingsheng and ShengKai.
> >
> > Yang Wang wrote on Tue, Jun 21, 2022 at 19:43:
> >
> > > Congratulations, Qingsheng and ShengKai.
> > >
> > >
> > > Best,
> > > Yang
> > >
> > > Benchao Li wrote on Tue, Jun 21, 2022 at 19:33:
> > >
> > > > Congratulations!
> > > >
> > > > weijie guo wrote on Tue, Jun 21, 2022 at 13:44:
> > > >
> > > > > Congratulations, Qingsheng and ShengKai!
> > > > >
> > > > > Best regards,
> > > > >
> > > > > Weijie
> > > > >
> > > > >
> > > > > Yuan Mei wrote on Tue, Jun 21, 2022 at 13:07:
> > > > >
> > > > > > Congrats Qingsheng and ShengKai!
> > > > > >
> > > > > > Best,
> > > > > >
> > > > > > Yuan
> > > > > >
> > > > > > On Tue, Jun 21, 2022 at 11:27 AM Terry Wang 
> > > > wrote:
> > > > > >
> > > > > > > Congratulations, Qingsheng and ShengKai!
> > > > > > >
> > > > > >
> > > > >
> > > >
> > > >
> > > > --
> > > >
> > > > Best,
> > > > Benchao Li
> > > >
> > >
> >
> >
> > --
> > best,
> > Zhipeng
> >
>


Re: [DISCUSS] FLIP-241: Completed Jobs Information Enhancement

2022-06-16 Thread Jiangang Liu
Thanks for the FLIP. It is helpful to track detailed information for completed jobs.

I want to ask another question. In our environment, it is sometimes hard to
distinguish jobs since the same job name may appear multiple times among the
completed jobs, either because a job ran multiple times or because different
jobs share the same name. I wonder whether we can enhance the completed-jobs
display with more information, such as the application ID and application name
on YARN. Identifying a job may work differently on Kubernetes.

Best
Jiangang Liu

Yangze Guo wrote on Fri, Jun 17, 2022 at 11:40:

> Thanks for the feedback, Aitozi and Jing.
>
> > Are each attempts of the TaskManager or JobManager pods (if failure
> occurs)
> all be shown in the ui?
>
> The info of the prior execution attempts will be archived, you could
> refer to `ArchivedExecutionVertex$priorExecutions`.
>
> > It seems that most of these metrics are more interesting to batch jobs.
> Does it make sense to calculate them for pure streaming jobs too?
>
> All the proposed metrics will be calculated no matter what the job type is.
>
> > Why "duration is less interesting" which is mentioned in the FLIP?
>
> As a first step, we mainly focus on the most interesting status during
> the job lifecycle. The duration of final states like FINISHED and
> CANCELED is meaningless, while abnormal conditions like CANCELING will
> not be included at the moment.
>
> > Could you share your thoughts on "accumulated-busy-time"? It should
> describe the time while the task is working as expected, i.e. the happy
> path. When do we need it for analytics or diagnosis?
>
> A task could be busy or idle while it is working. Users may adjust the
> parallelism or the partition key according to the ratio between them.
>
> Best,
> Yangze Guo
>
> On Fri, Jun 17, 2022 at 5:08 AM Jing Ge  wrote:
> >
> > Hi Junhan
> >
> > These are must-to-have information for batch processing. Thanks for
> > bringing it up.
> >
> > I have some comments:
> >
> > 1. It seems that most of these metrics are more interesting to batch
> jobs.
> > Does it make sense to calculate them for pure streaming jobs too?
> > 2. Why "duration is less interesting" which is mentioned in the FLIP?
> > 3. Could you share your thoughts on "accumulated-busy-time"? It should
> > describe the time while the task is working as expected, i.e. the happy
> > path. When do we need it for analytics or diagnosis?
> >
> > BTW, you might want to optimize the format of the FLIP. Some text is
> > running out of the right border of the wiki page.
> >
> > Best regards,
> > Jing
> >
> > On Thu, Jun 16, 2022 at 4:40 PM Aitozi  wrote:
> >
> > > Thanks Junhan for driving this. It a great improvement for the batch
> jobs.
> > > I'm looking forward to this feature in our internal use case. +1 for
> it.
> > >
> > > One more question:
> > >
> > > Are each attempts of the TaskManager or JobManager pods (if failure
> occurs)
> > > all be shown in the ui ?
> > >
> > > Best,
> > > Aitozi.
> > >
> > > Yang Wang wrote on Thu, Jun 16, 2022 at 19:10:
> > >
> > > > Thanks Xintong for the explanation.
> > > >
> > > > It makes sense to leave the discussion about job result store in a
> > > > dedicated thread.
> > > >
> > > >
> > > > Best,
> > > > Yang
> > > >
> > > > Xintong Song wrote on Thu, Jun 16, 2022 at 13:40:
> > > >
> > > > > My impression of JobResultStore is more about fault tolerance and
> high
> > > > > availability. Using it for providing information to users sounds
> worth
> > > > > exploring. We probably need more time to think it through.
> > > > >
> > > > > Given that it doesn't conflict with what we have proposed in this
> FLIP,
> > > > I'd
> > > > > suggest considering it as a separate thread and exclude it from the
> > > scope
> > > > > of this one.
> > > > >
> > > > > Best,
> > > > >
> > > > > Xintong
> > > > >
> > > > >
> > > > >
> > > > > On Thu, Jun 16, 2022 at 11:43 AM Yang Wang 
> > > > wrote:
> > > > >
> > > > > > This is a very useful feature both for finished streaming and
> batch
> > > > jobs.
> > > > > >
> > > > > > Except for the WebUI & REST API improvements, I

Re: [VOTE] FLIP-224: Blocklist Mechanism

2022-06-15 Thread Jiangang Liu
+1

Chesnay Schepler wrote on Wed, Jun 15, 2022 at 17:15:

> +1
>
> On 15/06/2022 10:49, Lijie Wang wrote:
> > Hi everyone,
> >
> > We've received some additional concerns since the last vote [1], and
> > therefore made a lot of changes to design.  You can find the details in
> [2]
> > and the discussions in [3].
> >
> > Now I'd like to start a new vote thread for FLIP-224. The vote will last
> > for at least 72 hours unless there is an objection or insufficient votes.
> >
> > [1] https://lists.apache.org/thread/3416vks1j35co9608gkmsoplvcjjz7bg
> > [2] https://cwiki.apache.org/confluence/display/FLINK/FLIP-224
> > %3A+Blocklist+Mechanism
> > [3] https://lists.apache.org/thread/fngkk52kjbc6b6v9nn0lkfq6hhsbgb1h
> > Best,
> > Lijie
> >
>
>


Re: [DISCUSS ] Make state.backend.incremental as true by default

2022-06-14 Thread Jiangang Liu
+1 for the suggestion. We have used incremental checkpoints in our
production for a long time.
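To illustrate why incremental checkpoints are lighter than full ones, here is a toy sketch (file names are made up, and this is not Flink's actual RocksDB integration): each new checkpoint only uploads the state files that the previous checkpoint has not already persisted.

```python
def incremental_upload(previous: set, current: set) -> set:
    """Files a new incremental checkpoint must upload: only those not
    already persisted by the previous checkpoint."""
    return current - previous

cp1 = {"sst-1", "sst-2", "sst-3"}           # first checkpoint uploads all files
cp2 = {"sst-2", "sst-3", "sst-4"}           # sst-1 compacted away, sst-4 is new

full_upload = cp2                           # a full checkpoint re-uploads everything
delta_upload = incremental_upload(cp1, cp2)  # only sst-4 goes to durable storage
```

The shorter checkpoint duration and lower upload volume mentioned later in this thread fall out of exactly this file reuse.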

Hangxiang Yu wrote on Tue, Jun 14, 2022 at 15:41:

> +1
> It's basically enabled in most scenarios in production environments.
> For HashMapStateBackend, it will adopt a full checkpoint even if we enable
> incremental checkpoint. It will also support incremental checkpoint after
> [1]. It's compatible.
> BTW, I think we may also need to improve the documentation of incremental
> checkpoints which users usually ask. There are some tickets like [2][3].
>
> Best,
> Hangxiang.
>
> [1] https://issues.apache.org/jira/browse/FLINK-21648
> [2] https://issues.apache.org/jira/browse/FLINK-22797
> [3] https://issues.apache.org/jira/browse/FLINK-7449
>
> On Mon, Jun 13, 2022 at 7:48 PM Rui Fan <1996fan...@gmail.com> wrote:
>
> > Strongly +1
> >
> > Best,
> > Rui Fan
> >
> > On Mon, Jun 13, 2022 at 7:35 PM Martijn Visser  >
> > wrote:
> >
> > > > BTW, from my knowledge, nothing would happen for HashMapStateBackend,
> > > which does not support incremental checkpoint yet, when enabling
> > > incremental checkpoints.
> > >
> > > Thanks Yun, if no errors would occur then definitely +1 to enable it by
> > > default
> > >
> > > On Mon, Jun 13, 2022 at 12:42, Alexander Fedulov <
> > > alexan...@ververica.com> wrote:
> > >
> > > > +1
> > > >
> > > > From my experience, it is actually hard to come up with use cases
> where
> > > > incremental checkpoints should explicitly not be enabled with the
> > RocksDB
> > > > state backend. If the state is so small that the full snapshots do
> not
> > > > have any negative impact, one should consider using
> HashMapStateBackend
> > > > anyway.
> > > >
> > > > Best,
> > > > Alexander Fedulov
> > > >
> > > >
> > > > On Mon, Jun 13, 2022 at 12:26 PM Jing Ge  wrote:
> > > >
> > > > > +1
> > > > >
> > > > > Glad to see the kickoff of this discussion. Thanks Lihe for driving
> > > this!
> > > > >
> > > > > We have actually already discussed it internally a few months ago.
> > > After
> > > > > considering some corner cases, all agreed on enabling the
> incremental
> > > > > checkpoint as default.
> > > > >
> > > > > Best regards,
> > > > > Jing
> > > > >
> > > > > On Mon, Jun 13, 2022 at 12:17 PM Yun Tang 
> wrote:
> > > > >
> > > > > > Strongly +1 for making incremental checkpoints as default. Many
> > users
> > > > > have
> > > > > > ever been asking why this configuration is not enabled by
> default.
> > > > > >
> > > > > > BTW, from my knowledge, nothing would happen for
> > HashMapStateBackend,
> > > > > > which does not support incremental checkpoint yet, when enabling
> > > > > > incremental checkpoints.
> > > > > >
> > > > > >
> > > > > > Best
> > > > > > Yun Tang
> > > > > > 
> > > > > > From: Martijn Visser 
> > > > > > Sent: Monday, June 13, 2022 18:05
> > > > > > To: dev@flink.apache.org 
> > > > > > Subject: Re: [DISCUSS ] Make state.backend.incremental as true by
> > > > default
> > > > > >
> > > > > > Hi Lihe,
> > > > > >
> > > > > > What happens if we enable incremental checkpoints by default
> while
> > > the
> > > > > used
> > > > > > memory backend is HashMapStateBackend, which doesn't support
> > > > incremental
> > > > > > checkpoints?
> > > > > >
> > > > > > Best regards,
> > > > > >
> > > > > > Martijn
> > > > > >
> > > > > > On Mon, Jun 13, 2022 at 11:59, Lihe Ma wrote:
> > > > > >
> > > > > > > Hi, Everyone,
> > > > > > >
> > > > > > > I would like to open a discussion on setting incremental
> > checkpoint
> > > > as
> > > > > > > default behavior.
> > > > > > >
> > > > > > > Currently, the configuration of state.backend.incremental is
> set
> > as
> > > > > false
> > > > > > > by default. Incremental checkpoint has been adopted widely in
> > > > industry
> > > > > > > community for many years , and it is also well-tested from the
> > > > feedback
> > > > > > in
> > > > > > > the community discussion. Incremental checkpointing is more
> > > > > > light-weighted:
> > > > > > > shorter checkpoint duration, less uploaded data and less
> resource
> > > > > > > consumption.
> > > > > > >
> > > > > > > In terms of backward compatibility, enable incremental
> > > checkpointing
> > > > > > would
> > > > > > > not make any data loss no matter restoring from a full
> > > > > > checkpoint/savepoint
> > > > > > > or an incremental checkpoint.
> > > > > > >
> > > > > > > FLIP-193 (Snapshot ownership)[1] has been released in 1.15,
> > > > incremental
> > > > > > > checkpoint no longer depends on a previous restored checkpoint
> in
> > > > > default
> > > > > > > NO_CLAIM mode, which makes the checkpoint lineage much cleaner,
> > it
> > > > is a
> > > > > > > good chance to change the configuration
> state.backend.incremental
> > > to
> > > > > true
> > > > > > > as default.
> > > > > > >
> > > > > > > Thus, based on the above discussion, I suggest to make
> > > > > > > state.backend.incremental as true by default. What do you think
> > of
> > > > this
> > > > > > > pro

Re: [ANNOUNCE] New Apache Flink PMC Member - Jingsong Lee

2022-06-13 Thread Jiangang Liu
Congratulations, Jingsong!

Best,
Jiangang Liu

Martijn Visser wrote on Mon, Jun 13, 2022 at 16:06:

> Like everyone has mentioned, this is very well deserved. Congratulations!
>
> On Mon, Jun 13, 2022 at 09:57, Benchao Li wrote:
>
> > Congratulations, Jingsong!  Well deserved.
> >
> > Rui Fan <1996fan...@gmail.com> wrote on Mon, Jun 13, 2022 at 15:53:
> >
> > > Congratulations, Jingsong!
> > >
> > > Best,
> > > Rui Fan
> > >
> > > On Mon, Jun 13, 2022 at 3:40 PM LuNing Wang 
> > wrote:
> > >
> > > > Congratulations, Jingsong!
> > > >
> > > > Best,
> > > > LuNing Wang
> > > >
> > > > > Ingo Bürk wrote on Mon, Jun 13, 2022 at 15:36:
> > > >
> > > > > Congrats, Jingsong!
> > > > >
> > > > > On 13.06.22 08:58, Becket Qin wrote:
> > > > > > Hi all,
> > > > > >
> > > > > > I'm very happy to announce that Jingsong Lee has joined the Flink
> > > PMC!
> > > > > >
> > > > > > Jingsong became a Flink committer in Feb 2020 and has been
> > > continuously
> > > > > > contributing to the project since then, mainly in Flink SQL. He
> has
> > > > been
> > > > > > quite active in the mailing list, fixing bugs, helping verifying
> > > > > releases,
> > > > > > reviewing patches and FLIPs. Jingsong is also devoted to pushing
> > > Flink
> > > > > SQL
> > > > > > to new use cases. He spent a lot of time in implementing the
> Flink
> > > > > > connectors for Apache Iceberg. Jingsong is also the primary
> driver
> > > > behind
> > > > > > the effort of flink-table-store, which aims to provide a
> > stream-batch
> > > > > > unified storage for Flink dynamic tables.
> > > > > >
> > > > > > Congratulations and welcome, Jingsong!
> > > > > >
> > > > > > Cheers,
> > > > > >
> > > > > > Jiangjie (Becket) Qin
> > > > > > (On behalf of the Apache Flink PMC)
> > > > > >
> > > > >
> > > >
> > >
> >
> >
> > --
> >
> > Best,
> > Benchao Li
> >
>


Re: [DISCUSS] FLIP-224: Blacklist Mechanism

2022-05-06 Thread Jiangang Liu
Thanks for the valuable design. Auto-detection can save us a great deal of
work. We have implemented a similar feature in our internal Flink version.
Below are the things I care about:

   1. For auto-detection, I wonder how the strategy decides to mark a node
   blocked? Sometimes a bad node is hard to detect; for example, when the
   network is unreachable, the upstream or downstream node may be blocked
   instead of the faulty one.
   2. I see that the strategy is applied on the JobMaster side. How about
   implementing similar logic in the resource manager? In session mode,
   multiple jobs can fail on the same bad node and the node should be marked
   blocked. If each job applies the strategy on its own, the node may never be
   marked blocked because a single job's failure count does not exceed the
   threshold.

Zhu Zhu wrote on Thu, May 5, 2022 at 23:35:

> Thank you for all your feedback!
>
> Besides the answers from Lijie, I'd like to share some of my thoughts:
> 1. Whether to enable automatical blocklist
> Generally speaking, it is not a goal of FLIP-224.
> The automatical way should be something built upon the blocklist
> mechanism and well decoupled. It was designed to be a configurable
> blocklist strategy, but I think we can further decouple it by
> introducing a abnormal node detector, as Becket suggested, which just
> uses the blocklist mechanism once bad nodes are detected. However, it
> should be a separate FLIP with further dev discussions and feedback
> from users. I also agree with Becket that different users have different
> requirements, and we should listen to them.
>
> 2. Is it enough to just take away abnormal nodes externally
> My answer is no. As Lijie has mentioned, we need a way to avoid
> deploying tasks to temporary hot nodes. In this case, users may just
> want to limit the load of the node and do not want to kill all the
> processes on it. Another case is the speculative execution[1] which
> may also leverage this feature to avoid starting mirror tasks on slow
> nodes.
>
> Thanks,
> Zhu
>
> [1]
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-168%3A+Speculative+execution+for+Batch+Job
>
> Lijie Wang wrote on Thu, May 5, 2022 at 15:56:
>
> >
> > Hi everyone,
> >
> >
> > Thanks for your feedback.
> >
> >
> > There's one detail that I'd like to re-emphasize here because it can
> affect the value and design of the blocklist mechanism (perhaps I should
> highlight it in the FLIP). We propose two actions in FLIP:
> >
> > 1) MARK_BLOCKLISTED: Just mark the task manager or node as blocked.
> Future slots should not be allocated from the blocked task manager or node.
> But slots that are already allocated will not be affected. A typical
> application scenario is to mitigate machine hotspots. In this case, we hope
> that subsequent resource allocations will not be on the hot machine, but
> tasks currently running on it should not be affected.
> >
> > 2) MARK_BLOCKLISTED_AND_EVACUATE_TASKS: Mark the task manager or node as
> blocked, and evacuate all tasks on it. Evacuated tasks will be restarted on
> non-blocked task managers.
> >
> > For the above 2 actions, the former may more highlight the meaning of
> this FLIP, because the external system cannot do that.
> >
> >
> > Regarding *Manually* and *Automatically*, I basically agree with @Becket
> Qin: different users have different answers. Not all users’ deployment
> environments have a special external system that can perform the anomaly
> detection. In addition, adding pluggable/optional auto-detection doesn't
> require much extra work on top of manual specification.
> >
> >
> > I will answer your other questions one by one.
> >
> >
> > @Yangze
> >
> > a) I think you are right, we do not need to expose the
> `cluster.resource-blocklist.item.timeout-check-interval` to users.
> >
> > b) We can abstract the `notifyException` to a separate interface (maybe
> BlocklistExceptionListener), and the ResourceManagerBlocklistHandler can
> implement it in the future.
> >
> >
> > @Martijn
> >
> > a) I also think the manual blocking should be done by cluster operators.
> >
> > b) I think manual blocking makes sense, because according to my
> experience, users are often the first to perceive the machine problems
> (because of job failover or delay), and they will contact cluster operators
> to solve it, or even tell the cluster operators which machine is
> problematic. From this point of view, I think the people who really need
> the manual blocking are the users, and it’s just performed by the cluster
> operator, so I think the manual blocking makes sense.
> >
> >
> > @Chesnay
> >
> > We need to touch the logic of JM/SlotPool, because for MARK_BLOCKLISTED
> , we need to know whether the slot is blocklisted when the task is
> FINISHED/CANCELLED/FAILED. If so,  SlotPool should release the slot
> directly to avoid assigning other tasks (of this job) on it. If we only
> maintain the blocklist information on the RM, JM needs to retrieve it by
> RPC. I think the performance overhead of that is relatively large, so I
> think it's worth m

Re: [ANNOUNCE] Apache Flink 1.15.0 released

2022-05-05 Thread Jiangang Liu
Congratulations! This version is really helpful for us. We will explore it
and help to improve it.

Best
Jiangang Liu

Yu Li wrote on Thu, May 5, 2022 at 18:53:

> Hurray!
>
> Thanks Yun Gao, Till and Joe for all the efforts as our release managers.
> And thanks all contributors for making this happen!
>
> Best Regards,
> Yu
>
>
> On Thu, 5 May 2022 at 18:01, Sergey Nuyanzin  wrote:
>
> > Great news!
> > Congratulations!
> > Thanks to the release managers, and everyone involved.
> >
> > On Thu, May 5, 2022 at 11:57 AM godfrey he  wrote:
> >
> > > Congratulations~
> > >
> > > Thanks Yun, Till and Joe for driving this release
> > > and everyone who made this release happen.
> > >
> > > Best,
> > > Godfrey
> > >
> > > Becket Qin wrote on Thu, May 5, 2022 at 17:39:
> > > >
> > > > Hooray! Thanks Yun, Till and Joe for driving the release!
> > > >
> > > > Cheers,
> > > >
> > > > JIangjie (Becket) Qin
> > > >
> > > > On Thu, May 5, 2022 at 5:20 PM Timo Walther 
> > wrote:
> > > >
> > > > > It took a bit longer than usual. But I'm sure the users will love
> > this
> > > > > release.
> > > > >
> > > > > Big thanks to the release managers!
> > > > >
> > > > > Timo
> > > > >
> > > > > On 05.05.22 at 10:45, Yuan Mei wrote:
> > > > > > Great!
> > > > > >
> > > > > > Thanks, Yun Gao, Till, and Joe for driving the release, and
> thanks
> > to
> > > > > > everyone for making this release happen!
> > > > > >
> > > > > > Best
> > > > > > Yuan
> > > > > >
> > > > > > On Thu, May 5, 2022 at 4:40 PM Leonard Xu 
> > wrote:
> > > > > >
> > > > > >> Congratulations!
> > > > > >>
> > > > > >> Thanks Yun Gao, Till and Joe for the great work as our release
> > > manager
> > > > > and
> > > > > >> everyone who involved.
> > > > > >>
> > > > > >> Best,
> > > > > >> Leonard
> > > > > >>
> > > > > >>
> > > > > >>
> > > > > >>> On May 5, 2022, at 4:30 PM, Yang Wang wrote:
> > > > > >>>
> > > > > >>> Congratulations!
> > > > > >>>
> > > > > >>> Thanks Yun Gao, Till and Joe for driving this release and
> > everyone
> > > who
> > > > > >> made
> > > > > >>> this release happen.
> > > > > >>
> > > > >
> > > > >
> > >
> >
> >
> > --
> > Best regards,
> > Sergey
> >
>


Re: [ANNOUNCE] New Flink PMC member: Yang Wang

2022-05-05 Thread Jiangang Liu
Congratulations!

Best
Liu Jiangang

Marios Trivyzas wrote on Thu, May 5, 2022 at 20:47:

> Congrats Yang!
>
> On Thu, May 5, 2022, 15:29 Yuan Mei  wrote:
>
> > Congrats and well Deserved, Yang!
> >
> > Best,
> > Yuan
> >
> > On Thu, May 5, 2022 at 8:21 PM Nicholas Jiang 
> > wrote:
> >
> > > Congrats Yang!
> > >
> > > Best regards,
> > > Nicholas Jiang
> > >
> > > On 2022/05/05 11:18:10 Xintong Song wrote:
> > > > Hi all,
> > > >
> > > > I'm very happy to announce that Yang Wang has joined the Flink PMC!
> > > >
> > > > Yang has been consistently contributing to our community, by
> > contributing
> > > > codes, participating in discussions, mentoring new contributors,
> > > answering
> > > > questions on mailing lists, and giving talks on Flink at
> > > > various conferences and events. He is one of the main contributors
> and
> > > > maintainers in Flink's Native Kubernetes / Yarn integrations and the
> > > Flink
> > > > Kubernetes Operator.
> > > >
> > > > Congratulations and welcome, Yang!
> > > >
> > > > Thank you~
> > > >
> > > > Xintong Song (On behalf of the Apache Flink PMC)
> > > >
> > >
> >
>


Re: Re: [DISCUSS] FLIP-168: Speculative execution for Batch Job

2022-04-28 Thread Jiangang Liu
+1 for the feature.

Mang Zhang wrote on Thu, Apr 28, 2022 at 11:36:

> Hi zhu:
>
>
> This sounds like a great job! Thanks for your great job.
> In our company, there are already some jobs using Flink Batch,
> but everyone knows that the offline cluster has a lot more load than
> the online cluster, and the failure rate of the machine is also much higher.
> If this work is done, we'd love to use it, it's simply awesome for our
> flink users.
> thanks again!
>
>
>
>
>
>
>
> --
>
> Best regards,
> Mang Zhang
>
>
>
>
>
> At 2022-04-27 10:46:06, "Zhu Zhu"  wrote:
> >Hi everyone,
> >
> >More and more users are running their batch jobs on Flink nowadays.
> >One major problem they encounter is slow tasks running on hot/bad
> >nodes, resulting in very long and uncontrollable execution time of
> >batch jobs. This problem is a pain or even unacceptable in
> >production. Many users have been asking for a solution for it.
> >
> >Therefore, I'd like to revive the discussion of speculative
> >execution to solve this problem.
> >
> >Weijun Wang, Jing Zhang, Lijie Wang and I had some offline
> >discussions to refine the design[1]. We also implemented a PoC[2]
> >and verified it using TPC-DS benchmarks and production jobs.
> >
> >Looking forward to your feedback!
> >
> >Thanks,
> >Zhu
> >
> >[1]
> >
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-168%3A+Speculative+execution+for+Batch+Job
> >[2]
> https://github.com/zhuzhurk/flink/commits/1.14-speculative-execution-poc
> >
> >
> >刘建刚  wrote on Mon, Dec 13, 2021 at 11:38:
> >
> >> Any progress on the feature? We have the same requirement in our
> >> company. Since the software and hardware environment can be complex, it
> >> is common to see a slow task that determines the execution time of the
> >> Flink job.
> >>
> >>  wrote on Sun, Jun 20, 2021 at 22:35:
> >>
> >> > Hi everyone,
> >> >
> >> > I would like to kick off a discussion on speculative execution for
> batch
> >> > job.
> >> > I have created FLIP-168 [1] that clarifies our motivation to do this
> and
> >> > some improvement proposals for the new design.
> >> > It would be great to resolve the problem of long-tail tasks in batch
> jobs.
> >> > Please let me know your thoughts. Thanks.
> >> >   Regards,
> >> > wangwj
> >> > [1]
> >> >
> >>
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-168%3A+Speculative+execution+for+Batch+Job
> >> >
> >>
>


Re: [DISCUSS] FLIP-220: Temporal State

2022-04-24 Thread Jiangang Liu
Thanks for the discussion. I think this is a very valuable topic for
Flink, and we can benefit a lot from it. The default IntervalJoinOperator is
really inefficient. We have mitigated the problem by using RocksDB's upper
and lower bounds; the details can be found in
https://issues.apache.org/jira/browse/FLINK-10949. A more generic approach
would be useful.
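The bound-based optimization can be sketched in plain Java, with a sorted map standing in for RocksDB's key-ordered iteration. This is only a conceptual sketch with made-up names (`eventsByTime`, `joinWindow`), not the FLINK-10949 implementation: instead of scanning all entries and filtering by time, the scan is restricted to the relevant range up front.

```java
import java.util.NavigableMap;
import java.util.TreeMap;

public class BoundedScan {
    public static void main(String[] args) {
        // Timestamp -> event payload, kept in sorted order (stand-in for
        // RocksDB's key-ordered storage).
        NavigableMap<Long, String> eventsByTime = new TreeMap<>();
        eventsByTime.put(100L, "a");
        eventsByTime.put(200L, "b");
        eventsByTime.put(300L, "c");
        eventsByTime.put(400L, "d");

        // Interval-join style window: only events in [lowerBound, upperBound)
        // are relevant. subMap seeks directly to the range instead of
        // iterating over every entry, analogous to setting an iterator's
        // lower/upper bounds in RocksDB.
        long lowerBound = 150L;
        long upperBound = 350L;
        NavigableMap<Long, String> joinWindow =
                eventsByTime.subMap(lowerBound, true, upperBound, false);

        System.out.println(joinWindow); // prints {200=b, 300=c}
    }
}
```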

David Morávek  wrote on Fri, Apr 22, 2022 at 20:28:

> >
> > With that in mind, we could only offer a couple of selected
> > temporal/sorted
> > state implementations that are handled internally, but not really a
> > generic
> > one - even if you let the user explicitly handle binary keys...
>
>
> If we want to have a generic interface that is portable between different
> state backends and allows for all the use-cases described above,
> lexicographical binary sort sounds reasonable, because you need to be able
> to push sorting out of the JVM boundary.
>
> Only trade off I can think of is that as long as you stay within the JVM
> (heap state backend), you need to pay a slight key serialization cost,
> which is IMO ok-ish.
>
> Do you have any future state backend ideas in mind, that might not work
> with this assumption?
>
> -
>
> I'm really starting to like the idea of having a BinarySortedMapState +
> higher level / composite states.
>
> D.
>
>
> On Fri, Apr 22, 2022 at 1:58 PM David Morávek 
> wrote:
>
> > Isn't allowing a TemporalValueState just a special case of b.III? So if a
> >> user
> >> of the state wants that, then they can leverage a simple API vs. if you
> >> want
> >> fancier duplicate handling, you'd just go with TemporalListState and
> >> implement
> >> the logic you want?
> >
> >
> > Yes it is. But it IMO doesn't justify adding a new state primitive. My
> > take would be that as long as we can build TVS using other existing state
> > primitives (TLS) we should treat it as a "composite state". We currently
> > don't have a good user facing API to do that, but it could be added in
> > separate FLIP.
> >
> > eg. something along the lines of
> >
> > TemporalValueState state = getRuntimeContext().getCompositeState(
> > new CompositeStateDescriptor<>(
> > "composite", new TemporalValueState(type)));
> >
> > On Fri, Apr 22, 2022 at 1:44 PM Nico Kruber  wrote:
> >
> >> David,
> >>
> >> 1) Good points on the possibility to make the TemporalListState generic
> >> -> actually, if you think about it more, we are currently assuming that
> >> all
> >> state backends use the same comparison on the binary level because we
> add
> >> an
> >> appropriate serializer at an earlier abstraction level. This may not
> >> actually
> >> hold for all (future) state backends and can limit further
> >> implementations (if
> >> you think this is something to keep in mind!).
> >>
> >> So we may have to push this serializer implementation further down the
> >> stack,
> >> i.e. our current implementation is one that fits RocksDB and that
> alone...
> >>
> >> With that in mind, we could only offer a couple of selected
> >> temporal/sorted
> >> state implementations that are handled internally, but not really a
> >> generic
> >> one - even if you let the user explicitly handle binary keys...
> >>
> >>
> >> 2) Duplicates
> >>
> >> Isn't allowing a TemporalValueState just a special case of b.III? So if
> a
> >> user
> >> of the state wants that, then they can leverage a simple API vs. if you
> >> want
> >> fancier duplicate handling, you'd just go with TemporalListState and
> >> implement
> >> the logic you want?
> >>
> >>
> >>
> >> Nico
> >>
> >> On Friday, 22 April 2022 10:43:48 CEST David Morávek wrote:
> >> >  Hi Yun & Nico,
> >> >
> >> > few thoughts on the discussion
> >> >
> >> > 1) Making the TemporalListState generic
> >> >
> >> > This is just not possible with the current infrastructure w.r.t type
> >> > serializers as the sorting key *needs to be comparable on the binary
> >> level*
> >> > (serialized form).
> >> >
> >> > What I could imagine, is introducing some kind of
> >> `Sorted(List|Map)State`
> >> > with explicit binary keys. User would either have to work directly
> with
> >> > `byte[]` keys or provide a function for transforming keys into the
> >> binary
> >> > representation that could be sorted (this would have to be different
> >> from
> >> > `TypeSerializer` which could get more fancy with the binary
> >> representation,
> >> > eg. to save up space -> varints).
> >> >
> >> > This kind of interface might be really hard to grasp by the pipeline
> >> > authors. There needs to be a deeper understanding how the byte
> >> comparison
> >> > works (eg. it needs to be different from the java byte comparison
> which
> >> > compares bytes as `signed`). This could be maybe partially mitigated
> by
> >> > providing predefined `to binary sorting key` functions for the common
> >> > primitives / types.
> >> >
> >> > 2) Duplicates
> >> >
> >> > I guess, this all boils down to dealing with duplicates / values for
> the
> >> >
> >> > > same timestamp.
> >> >
> >> 
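The signed-vs-unsigned byte comparison pitfall raised above can be illustrated with a small sketch (plain Java, not Flink API; all names are made up). It shows a lexicographic unsigned comparison, as a bytewise comparator such as RocksDB's would use, together with a "flip the sign bit" long encoding so that numeric order matches byte order:

```java
public class SortingKeyDemo {
    // Lexicographic comparison of byte arrays as *unsigned* values, the
    // order a bytewise comparator (e.g. RocksDB's default) uses. Java's
    // byte is signed, so a naive byte-by-byte comparison would order
    // (byte) 0x80 before (byte) 0x01.
    static int compareUnsigned(byte[] a, byte[] b) {
        int n = Math.min(a.length, b.length);
        for (int i = 0; i < n; i++) {
            int cmp = Integer.compare(a[i] & 0xFF, b[i] & 0xFF);
            if (cmp != 0) {
                return cmp;
            }
        }
        return Integer.compare(a.length, b.length);
    }

    // Big-endian encoding with the sign bit flipped: after the flip,
    // numeric order of longs equals unsigned lexicographic byte order.
    static byte[] encodeLong(long v) {
        long biased = v ^ Long.MIN_VALUE;
        byte[] out = new byte[8];
        for (int i = 7; i >= 0; i--) {
            out[i] = (byte) biased;
            biased >>>= 8;
        }
        return out;
    }

    public static void main(String[] args) {
        // -1 must sort before 1, and Long.MIN_VALUE before 0.
        System.out.println(compareUnsigned(encodeLong(-1L), encodeLong(1L)) < 0);
        System.out.println(compareUnsigned(encodeLong(Long.MIN_VALUE), encodeLong(0L)) < 0);
    }
}
```

This is the kind of "to binary sorting key" function mentioned above that would need to be provided for common primitives.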

Re: [ANNOUNCE] New Apache Flink Committer - David Morávek

2022-03-08 Thread Jiangang Liu
Congratulations, David.

Best
Liu Jiangang

Yu Li  wrote on Wed, Mar 9, 2022 at 14:23:

> Congrats, David!
>
> Best Regards,
> Yu
>
>
> On Tue, 8 Mar 2022 at 16:11, Johannes Moser  wrote:
>
> > Very well deserved.
> >
> > > On 08.03.2022, at 05:43, Lincoln Lee  wrote:
> > >
> > > Congratulations David!
> > >
> > > Best,
> > > Lincoln Lee
> > >
> > >
> > > Yun Gao  wrote on Tue, Mar 8, 2022 at 11:24:
> > >
> > >> Congratulations David!
> > >>
> > >> Best,
> > >> Yun Gao
> > >>
> > >>
> > >> --
> > >> From:Jing Zhang 
> > >> Send Time:2022 Mar. 8 (Tue.) 11:10
> > >> To:dev 
> > >> Subject:Re: [ANNOUNCE] New Apache Flink Committer - David Morávek
> > >>
> > >> Congratulations David!
> > >>
> > >> Ryan Skraba  wrote on Mon, Mar 7, 2022 at 22:18:
> > >>
> > >>> Congratulations David!
> > >>>
> > >>> On Mon, Mar 7, 2022 at 9:54 AM Jan Lukavský  wrote:
> > >>>
> >  Congratulations David!
> > 
> >   Jan
> > 
> >  On 3/7/22 09:44, Etienne Chauchot wrote:
> > > Congrats David !
> > >
> > > Well deserved !
> > >
> > > Etienne
> > >
> > > On 07/03/2022 at 08:47, David Morávek wrote:
> > >> Thanks everyone!
> > >>
> > >> Best,
> > >> D.
> > >>
> > >> On Sun 6. 3. 2022 at 9:07, Yuan Mei 
> wrote:
> > >>
> > >>> Congratulations, David!
> > >>>
> > >>> Best Regards,
> > >>> Yuan
> > >>>
> > >>> On Sat, Mar 5, 2022 at 8:13 PM Roman Khachatryan <
> ro...@apache.org
> > >>>
> > >>> wrote:
> > >>>
> >  Congratulations, David!
> > 
> >  Regards,
> >  Roman
> > 
> >  On Fri, Mar 4, 2022 at 7:54 PM Austin Cawley-Edwards
> >   wrote:
> > > Congrats David!
> > >
> > > On Fri, Mar 4, 2022 at 12:18 PM Zhilong Hong <
> > >> zhlongh...@gmail.com
> > 
> >  wrote:
> > >> Congratulations, David!
> > >>
> > >> Best,
> > >> Zhilong
> > >>
> > >> On Sat, Mar 5, 2022 at 1:09 AM Piotr Nowojski <
> > >>> pnowoj...@apache.org
> > >
> > >> wrote:
> > >>
> > >>> Congratulations :)
> > >>>
> > >>> On Fri, Mar 4, 2022 at 16:04, Aitozi 
> > >>> wrote:
> > >>>
> >  Congratulations David!
> > 
> >  Ingo Bürk  于2022年3月4日周五 22:56写道:
> > 
> > > Congrats, David!
> > >
> > > On 04.03.22 12:34, Robert Metzger wrote:
> > >> Hi everyone,
> > >>
> > >> On behalf of the PMC, I'm very happy to announce David
> > >>> Morávek
> >  as a
> > >>> new
> > >> Flink committer.
> > >>
> > >> His first contributions to Flink date back to 2019. He has
> > >>> been
> > >> increasingly active with reviews and driving major
> > >>> initiatives
> >  in
> > >> the
> > >> community. David brings valuable experience from being a
> >  committer
> > >> in
> >  the
> > >> Apache Beam project to Flink.
> > >>
> > >>
> > >> Please join me in congratulating David for becoming a
> Flink
> > >>> committer!
> > >> Cheers,
> > >> Robert
> > >>
> > 
> > >>>
> > >>
> >
> >
>


Re: [ANNOUNCE] New Apache Flink Committer - Martijn Visser

2022-03-03 Thread Jiangang Liu
Congratulations Martijn!

Best
Liu Jiangang

Lijie Wang  wrote on Fri, Mar 4, 2022 at 14:00:

> Congratulations Martijn!
>
> Best,
> Lijie
>
> Jingsong Li  wrote on Fri, Mar 4, 2022 at 13:42:
>
> > Congratulations Martijn!
> >
> > Best,
> > Jingsong
> >
> > On Fri, Mar 4, 2022 at 1:09 PM Yang Wang  wrote:
> > >
> > > Congratulations Martijn!
> > >
> > > Best,
> > > Yang
> > >
> > > Yangze Guo  wrote on Fri, Mar 4, 2022 at 11:33:
> > >
> > > > Congratulations!
> > > >
> > > > Best,
> > > > Yangze Guo
> > > >
> > > > On Fri, Mar 4, 2022 at 11:23 AM Lincoln Lee 
> > > > wrote:
> > > > >
> > > > > Congratulations Martijn!
> > > > >
> > > > > Best,
> > > > > Lincoln Lee
> > > > >
> > > > >
> > > > > Yu Li  wrote on Fri, Mar 4, 2022 at 11:09:
> > > > >
> > > > > > Congratulations!
> > > > > >
> > > > > > Best Regards,
> > > > > > Yu
> > > > > >
> > > > > >
> > > > > > On Fri, 4 Mar 2022 at 10:31, Zhipeng Zhang <
> > zhangzhipe...@gmail.com>
> > > > > > wrote:
> > > > > >
> > > > > > > Congratulations Martijn!
> > > > > > >
> > > > > > > > Qingsheng Ren  wrote on Fri, Mar 4, 2022 at 10:14:
> > > > > > >
> > > > > > > > Congratulations Martijn!
> > > > > > > >
> > > > > > > > Best regards,
> > > > > > > >
> > > > > > > > Qingsheng Ren
> > > > > > > >
> > > > > > > > > On Mar 4, 2022, at 9:56 AM, Leonard Xu 
> > > > wrote:
> > > > > > > > >
> > > > > > > > > Congratulations and well deserved Martjin !
> > > > > > > > >
> > > > > > > > > Best,
> > > > > > > > > Leonard
> > > > > > > > >
> > > > > > > > >> On Mar 4, 2022, at 7:55 AM, Austin Cawley-Edwards <
> > austin.caw...@gmail.com
> > > > >
> > > > > > wrote:
> > > > > > > > >>
> > > > > > > > >> Congrats Martijn!
> > > > > > > > >>
> > > > > > > > >> On Thu, Mar 3, 2022 at 10:50 AM Robert Metzger <
> > > > rmetz...@apache.org
> > > > > > >
> > > > > > > > wrote:
> > > > > > > > >>
> > > > > > > > >>> Hi everyone,
> > > > > > > > >>>
> > > > > > > > >>> On behalf of the PMC, I'm very happy to announce Martijn
> > > > Visser as
> > > > > > a
> > > > > > > > new
> > > > > > > > >>> Flink committer.
> > > > > > > > >>>
> > > > > > > > >>> Martijn is a very active Flink community member, driving
> a
> > lot
> > > > of
> > > > > > > > efforts
> > > > > > > > >>> on the dev@flink mailing list. He also pushes projects
> > such as
> > > > > > > > replacing
> > > > > > > > >>> Google Analytics with Matomo, so that we can generate our
> > web
> > > > > > > analytics
> > > > > > > > >>> within the Apache Software Foundation.
> > > > > > > > >>>
> > > > > > > > >>> Please join me in congratulating Martijn for becoming a
> > Flink
> > > > > > > > committer!
> > > > > > > > >>>
> > > > > > > > >>> Cheers,
> > > > > > > > >>> Robert
> > > > > > > > >>>
> > > > > > > > >
> > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > > > --
> > > > > > > best,
> > > > > > > Zhipeng
> > > > > > >
> > > > > >
> > > >
> >
>


Re: [DISCUSS] Enable scala formatting check

2022-03-02 Thread Jiangang Liu
+1 for the feature. Good style management helps developers a lot.

Marios Trivyzas  wrote on Wed, Mar 2, 2022 at 18:19:

> +1 from me as well. Having a unified auto-formatter for Scala would be
> great. Currently we don't have consistency in our code base, which makes
> the Scala code more difficult to read and work on.
>
> Best,
> Marios
>
> On Wed, Mar 2, 2022 at 11:41 AM wenlong.lwl 
> wrote:
>
> > +1. Currently scalastyle does not work well in practice; there are a lot
> > of style differences across files. It would be great if the code could
> > be auto-formatted.
> >
> > Best,
> > Wenlong
> >
> > On Wed, 2 Mar 2022 at 16:34, Jingsong Li  wrote:
> >
> > > +1.
> > >
> > > Thanks for driving.
> > >
> > > I have written some Scala code; the style of Flink's Scala is messy.
> > > We can do better.
> > >
> > > Best,
> > > Jingsong
> > >
> > > On Wed, Mar 2, 2022 at 4:19 PM Yun Tang  wrote:
> > > >
> > > > +1
> > > >
> > > > I also noticed that the scalafmt project [1] is much more active than
> > > > scalastyle [2], which has had no release in the past 4 years.
> > > >
> > > >
> > > > [1] https://github.com/scalameta/scalafmt/releases
> > > > [2] https://github.com/scalastyle/scalastyle/tags
> > > >
> > > > Best
> > > > Yun Tang
> > > >
> > > > 
> > > > From: Konstantin Knauf 
> > > > Sent: Wednesday, March 2, 2022 15:01
> > > > To: dev 
> > > > Subject: Re: [DISCUSS] Enable scala formatting check
> > > >
> > > > +1 I've never written any Scala in Flink, but this makes a lot of
> sense
> > > to
> > > > me. Converging on a smaller set of tools and simplifying the build is
> > > > always a good idea and the Community already concluded before that
> > > spotless
> > > > is generally a good approach.
> > > >
> > > > On Tue, Mar 1, 2022 at 5:52 PM Francesco Guardiani <
> > > france...@ververica.com>
> > > > wrote:
> > > >
> > > > > Hi all,
> > > > >
> > > > > I want to propose to enable the spotless scalafmt integration and
> > > remove
> > > > > the scalastyle plugin.
> > > > >
> > > > > From an initial analysis, scalafmt can do everything scalastyle can
> > > do, and
> > > > > the integration with spotless looks easy to enable:
> > > > > https://github.com/diffplug/spotless/tree/main/plugin-maven#scala.
> > The
> > > > > scalafmt conf file gets picked up automatically from every IDE, and
> > it
> > > can
> > > > > be heavily tuned.
> > > > >
> > > > > This way we can unify the formatting and integrate with our CI
> > without
> > > any
> > > > > additional configurations. And we won't need scalastyle anymore, as
> > > > > scalafmt will take care of the checks:
> > > > >
> > > > > * mvn spotless:check will check both java and scala
> > > > > * mvn spotless:apply will format both java and scala
> > > > >
> > > > > WDYT?
> > > > >
> > > > > FG
> > > > >
> > > > >
> > > > >
> > > > > --
> > > > >
> > > > > Francesco Guardiani | Software Engineer
> > > > >
> > > > > france...@ververica.com
> > > > >
> > > > >
> > > > > 
> > > > >
> > > > > Follow us @VervericaData
> > > > >
> > > > > --
> > > > >
> > > > > Join Flink Forward  - The Apache Flink
> > > > > Conference
> > > > >
> > > > > Stream Processing | Event Driven | Real Time
> > > > >
> > > > > --
> > > > >
> > > > > Ververica GmbH | Invalidenstrasse 115, 10115 Berlin, Germany
> > > > >
> > > > > --
> > > > >
> > > > > Ververica GmbH
> > > > >
> > > > > Registered at Amtsgericht Charlottenburg: HRB 158244 B
> > > > >
> > > > > Managing Directors: Karl Anton Wehner, Holger Temme, Yip Park Tung
> > > Jason,
> > > > > Jinwei (Kevin) Zhang
> > > > >
> > > >
> > > >
> > > > --
> > > >
> > > > Konstantin Knauf
> > > >
> > > > https://twitter.com/snntrable
> > > >
> > > > https://github.com/knaufk
> > >
> >
>
>
> --
> Marios
>
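For reference, wiring the proposed spotless scalafmt integration into the build might look roughly like the snippet below in the parent pom. This is only a sketch based on the spotless plugin documentation linked above; the config-file location is an assumption:

```xml
<plugin>
  <groupId>com.diffplug.spotless</groupId>
  <artifactId>spotless-maven-plugin</artifactId>
  <configuration>
    <scala>
      <scalafmt>
        <!-- the same file IDEs pick up automatically -->
        <file>${project.basedir}/.scalafmt.conf</file>
      </scalafmt>
    </scala>
  </configuration>
</plugin>
```

With this in place, `mvn spotless:check` and `mvn spotless:apply` would cover Scala alongside Java, as described in the proposal.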


Re: Re: Change of focus

2022-02-28 Thread Jiangang Liu
Thanks for all your efforts and help in Flink, Till. Good luck!

Best
Liu Jiangang

> Lijie Wang  wrote on Tue, Mar 1, 2022 at 09:53:

> Thanks for all your efforts Till. Good luck !
>
> Best,
> Lijie
>
> Yun Gao  wrote on Tue, Mar 1, 2022 at 01:15:
>
> > Many thanks, Till, for all the efforts! Good luck with the next chapter~
> >
> > Best,
> > Yun
> >
> > --
> > Sender:Piotr Nowojski
> > Date:2022/02/28 22:10:46
> > Recipient:dev
> > Theme:Re: Change of focus
> >
> > Good luck Till and thanks for all of your efforts.
> >
> > Best,
> > Piotrek
> >
> > > On Mon, Feb 28, 2022 at 15:06, Aitozi  wrote:
> >
> > > Good luck with the next chapter, will miss you :)
> > >
> > > Best,
> > > Aitozi
> > >
> > > > Jark Wu  wrote on Mon, Feb 28, 2022 at 21:28:
> > >
> > > > Thank you, Till, for everything. It's been great working with you. Good
> > luck!
> > > >
> > > > Best,
> > > > Jark
> > > >
> > > > On Mon, 28 Feb 2022 at 21:26, Márton Balassi <
> balassi.mar...@gmail.com
> > >
> > > > wrote:
> > > >
> > > > > Thank you, Till. Good luck with the next chapter. :-)
> > > > >
> > > > > On Mon, Feb 28, 2022 at 1:49 PM Flavio Pompermaier <
> > > pomperma...@okkam.it
> > > > >
> > > > > wrote:
> > > > >
> > > > > > Good luck for your new adventure Till!
> > > > > >
> > > > > > On Mon, Feb 28, 2022 at 12:00 PM Till Rohrmann <
> > trohrm...@apache.org
> > > >
> > > > > > wrote:
> > > > > >
> > > > > > > Hi everyone,
> > > > > > >
> > > > > > > I wanted to let you know that I will be less active in the
> > > community
> > > > > > > because I’ve decided to start a new chapter in my life. Hence,
> > > please
> > > > > > don’t
> > > > > > > wonder if I might no longer be very responsive on mails and
> JIRA
> > > > > issues.
> > > > > > >
> > > > > > > It is great being part of such a great community with so many
> > > amazing
> > > > > > > people. Over the past 7,5 years, I’ve learned a lot thanks to
> you
> > > and
> > > > > > > together we have shaped how people think about stream
> processing
> > > > > > nowadays.
> > > > > > > This is something we can be very proud of. I am sure that the
> > > > community
> > > > > > > will continue innovating and setting the pace for what is
> > possible
> > > > with
> > > > > > > real time processing. I wish you all godspeed!
> > > > > > >
> > > > > > > Cheers,
> > > > > > > Till
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> >
>