> Although YARN serves as the platform for Flink, does YARN also operate on
> K8s?

YARN is an alternative to k8s, and Flink should make no assumptions about
how it's deployed, even though some companies might deploy it as an overlay
RM on top of k8s (I doubt that, but I guess they might do it for legacy
reasons).
Hi Ramkrishna,

I hope this email finds you well. Please accept my apologies for the delay
in responding to your previous message.

I would like to discuss the following matter with you: although YARN serves
as the platform for Flink, does YARN also operate on K8s? I am curious to
know if this is the case.
Most of our internal Flink jobs are still running on YARN, so the support
of YARN is also more important. We have now started to promote autoscaling
in our internal business. The model we use is the DS2 model, similar to
FLIP-271. In the near future, we will also communicate with you about the
problems we encounter online.

--

Best,
Matt Wang

Replied Message
| From | Rui Fan<19...@gmail.com> |
| Date | 02/20/2023 10:35 |
| To | |
| Subject | Re: [DISCUSS] Extract core autoscaling algorithm as new SubModule
in flink-kubernetes-operator |
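For context on the DS2 reference above: DS2 estimates each operator's
"true" processing rate (the observed rate scaled up to 100% busyness) and
sets parallelism proportionally to the target rate. A minimal, hypothetical
Java sketch of that rule follows; all names are illustrative and this is
not the FLIP-271 implementation.

    // Sketch of the DS2-style scaling rule; illustrative only.
    public final class Ds2ScalingSketch {

        /**
         * @param currentParallelism parallelism the vertex runs with now
         * @param observedRatePerSec records/s actually processed
         * @param busyRatio          fraction of time spent processing (0..1]
         * @param targetRatePerSec   records/s the vertex must keep up with
         */
        static int suggestParallelism(
                int currentParallelism,
                double observedRatePerSec,
                double busyRatio,
                double targetRatePerSec) {
            // True processing rate: what the vertex could do at full busyness.
            double trueRate = observedRatePerSec / busyRatio;
            // Scale parallelism proportionally to the rate deficit/surplus.
            double suggested = currentParallelism * (targetRatePerSec / trueRate);
            return Math.max(1, (int) Math.ceil(suggested));
        }

        public static void main(String[] args) {
            // 4 subtasks, 1000 rec/s observed at 50% busy, need 3000 rec/s:
            // true rate = 2000 rec/s, so 4 * 3000/2000 = 6 subtasks.
            System.out.println(suggestParallelism(4, 1000, 0.5, 3000)); // 6
        }
    }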
Hi Gyula, Samrat and Shammon,

My team is also looking forward to the autoscaler being compatible with
YARN. Currently, all of our Flink jobs are running on YARN, and the
autoscaler is a great feature for Flink users: it can greatly simplify the
process of tuning parallelism. If the autoscaler supports YARN, it will
benefit those users as well.
Hi Samrat,

My team is also looking at this piece. Once you share your proposal, we
hope to join you on it if possible. I hope we can improve this together for
use in our production too, thanks :)

Best,
Shammon

On Fri, Feb 17, 2023 at 9:27 PM Samrat Deb wrote:
> @Gyula
> Thank you
> We will work on this and try to come up with an approach.
@Gyula
Thank you. We will work on this and try to come up with an approach.

On Fri, Feb 17, 2023 at 6:12 PM Gyula Fóra wrote:
> In case you guys feel strongly about this, I suggest you try to fork the
> autoscaler implementation and make a version that works with both the
> Kubernetes operator and YARN.
In case you guys feel strongly about this, I suggest you try to fork the
autoscaler implementation and make a version that works with both the
Kubernetes operator and YARN.

If your solution is generic and works well, we can discuss the way forward.
Unfortunately, neither I nor my team really have the capacity to drive this
at the moment.
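A fork along the lines Gyula suggests would largely be a matter of putting
a seam between the decision logic and the cluster manager. Below is a
hypothetical Java sketch of such a seam; none of these types exist in
flink-kubernetes-operator, they only illustrate the shape.

    // Keep the decision logic generic; k8s and YARN differ only in the
    // two small adapters they supply. Illustrative only.
    import java.util.HashMap;
    import java.util.Map;

    interface ScalingMetricsCollector<JOB> {
        /** records/s each vertex must sustain, keyed by vertex id. */
        Map<String, Double> targetRates(JOB job);
        /** records/s each vertex could sustain at 100% busyness. */
        Map<String, Double> trueProcessingRates(JOB job);
    }

    interface ScalingExecutor<JOB> {
        /** Apply the new per-vertex parallelism (e.g. redeploy the job). */
        void apply(JOB job, Map<String, Integer> parallelismOverrides);
    }

    final class GenericAutoscaler<JOB> {
        private final ScalingMetricsCollector<JOB> metrics;
        private final ScalingExecutor<JOB> executor;

        GenericAutoscaler(ScalingMetricsCollector<JOB> metrics,
                          ScalingExecutor<JOB> executor) {
            this.metrics = metrics;
            this.executor = executor;
        }

        /** One evaluation round: pure decision logic, no cluster-manager code. */
        void evaluate(JOB job, Map<String, Integer> currentParallelism) {
            Map<String, Double> target = metrics.targetRates(job);
            Map<String, Double> trueRate = metrics.trueProcessingRates(job);
            Map<String, Integer> overrides = new HashMap<>();
            for (Map.Entry<String, Integer> e : currentParallelism.entrySet()) {
                String vertex = e.getKey();
                // DS2-style proportional rule, as in the sketch further up.
                double ratio = target.get(vertex) / trueRate.get(vertex);
                overrides.put(vertex,
                        Math.max(1, (int) Math.ceil(e.getValue() * ratio)));
            }
            executor.apply(job, overrides);
        }
    }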
@Gyula
>> It is easier to make the operator work with jobs running in different
>> types of clusters than to take the autoscaler module itself and plug
>> that in somewhere else.

Our (part of Samrat's team) main problem is how to leverage the autoscaler
recommendation engine part of flink-kubernetes-operator.
Hi Gyula, Samrat,

Thanks for your input, and I totally agree with you that it's really big
work. As @Samrat mentioned above, I don't think it's a short path to make
the autoscaler completely independent either. But I still find some
valuable points for the `completely independent autoscaler`, and I think it
is worth discussing further.
@Shammon, Samrat:

I appreciate the enthusiasm, and I wish this was only a matter of
intention, but making the autoscaler work without the operator may be a
pretty big task.

You must not forget 2 core requirements here:
1. The autoscaler logic itself has to run somewhere (in this case on k8s,
within the operator process).
2. The autoscaler has to be able to interact with the jobs it manages, to
collect metrics and to execute scaling decisions.
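To make requirement 1 concrete: outside the operator, the control loop
needs its own host process. A hypothetical standalone driver could be as
small as the sketch below; the hard part is everything the loop body would
have to do (state, metric windows, fencing, HA), not the loop itself.

    // Hypothetical standalone host for the autoscaler control loop.
    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    public final class StandaloneAutoscalerDriver {
        public static void main(String[] args) {
            ScheduledExecutorService loop =
                    Executors.newSingleThreadScheduledExecutor();
            // Placeholder body; a real driver would enumerate tracked jobs
            // and run one evaluation round per job (requirement 2).
            loop.scheduleAtFixedRate(
                    () -> System.out.println("evaluate tracked jobs here"),
                    0, 30, TimeUnit.SECONDS);
        }
    }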
Hi Shammon,

Thank you for your input; completely aligned with you. We are fine with
either of the options, but IMO, to start with, it will be easier to have it
in flink-kubernetes-operator as a module instead of a separate repo, which
requires additional effort. Given that, we would be incrementally building
on what is already there.

Hi Max,

If you are fine and aligned with the same thought, since this is going to
be very useful to us, we are ready to help / contribute the additional work
required.

Bests,
Samrat

On Thu, 16 Feb 2023 at 5:28 PM, Shammon FY wrote:
> Hi Samrat
>
> Do you mean to create an independent module for flink scaling in
> flink-k8s-operator?
Hi Samrat,

Do you mean to create an independent module for flink scaling in
flink-k8s-operator? How about creating a project such as
`flink-auto-scaling` which is completely independent? Besides resource
managers such as k8s and yarn, we can do more things in the project, for
example, updating configurations.
Hi Samrat,

The autoscaling module is now pluggable, but it is still tightly coupled
with Kubernetes. It will take additional work for the logic to work
independently of the cluster manager.

-Max

On Thu, Feb 16, 2023 at 11:14 AM Samrat Deb wrote:
>
> Oh! Yesterday it got merged.
> Apologies, I missed the recent commit @Gyula.
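The coupling Max describes can be pictured as a signature problem: the
scaler's entry point takes operator- and Kubernetes-specific types, so the
logic cannot be invoked from a YARN deployment. A compilable caricature
with stub types (the real signatures in the module differ):

    // Stub types standing in for the operator's CR and the k8s client.
    final class FlinkDeploymentStub {}
    final class KubernetesClientStub {}

    interface KubernetesBoundScaler {
        // Tied to Kubernetes: unusable from a YARN deployment.
        void scale(FlinkDeploymentStub resource, KubernetesClientStub client);
    }

    interface ClusterAgnosticScaler<JOB> {
        // Only a generic job handle: YARN, k8s, or standalone can provide one.
        void scale(JOB job);
    }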
Oh! Yesterday it got merged. Apologies, I missed the recent commit @Gyula.
Thanks for the update.

On Thu, Feb 16, 2023 at 3:17 PM Gyula Fóra wrote:
> Max recently moved the autoscaler logic into a separate submodule, did
> you see that?
>
> https://github.com/apache/flink-kubernetes-operator/commit/5bb8e9dc4dd29e10f3ba7c8ce7cefcdffbf92da4
Max recently moved the autoscaler logic into a separate submodule, did you
see that?

https://github.com/apache/flink-kubernetes-operator/commit/5bb8e9dc4dd29e10f3ba7c8ce7cefcdffbf92da4

Gyula

On Thu, Feb 16, 2023 at 10:27 AM Samrat Deb wrote:
> Hi,
>
> *Context:*
> Auto Scaling was introduced in Flink as part of FLIP-271[1].
Hi,

*Context:*
Auto Scaling was introduced in Flink as part of FLIP-271[1]. It discusses
one of the important aspects: providing a robust default scaling algorithm.
a. Ensure scaling yields effective usage of assigned task slots.
b. Ramp up in case of any backlog to ensure it gets processed.
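Goal (b) can be made concrete with the catch-up idea behind FLIP-271: when
backlog exists, the target rate is raised so the job drains it within a
chosen window. A simplified, illustrative Java sketch (the real
implementation and configuration names differ):

    // Sketch of backlog-aware target-rate computation; illustrative only.
    import java.time.Duration;

    public final class BacklogRampUpSketch {

        /**
         * @param steadyRatePerSec incoming records/s at the sources
         * @param backlogRecords   records currently buffered upstream
         * @param catchUp          window within which backlog should drain
         * @return records/s the pipeline must sustain to catch up in time
         */
        static double targetRate(double steadyRatePerSec,
                                 long backlogRecords,
                                 Duration catchUp) {
            // Keep up with new input AND drain the backlog within the window.
            return steadyRatePerSec
                    + (double) backlogRecords / catchUp.getSeconds();
        }

        public static void main(String[] args) {
            // 1000 rec/s steady input, 600k records behind, 10 min to catch
            // up: target = 1000 + 600000/600 = 2000 rec/s.
            System.out.println(targetRate(1000, 600_000, Duration.ofMinutes(10)));
        }
    }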