Hi Weiwei

Thanks for sharing. I watched the video: in Alibaba's use case, they run
both streaming and batch Apache Flink applications on a mixed cluster. Our
use case is different. We only use Apache Flink for stream processing, in
physical clusters separate from the Spark clusters we use for batch
processing.

As we know, streaming applications are long-running and need to secure all
requested resources before they start. In most cases, they do not have a
strong need to be queued, ordered, or preempted in order to wait for
resources or to give them back.
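
The one piece that does matter for us is securing all of those resources up
front, which looks like gang scheduling. Below is a rough, untested sketch
(based on the task-group annotations in the YuniKorn docs; the queue name,
group name, and sizes are made up) of how a Flink TaskManager pod might
request that:

    # Illustrative only: pod metadata asking YuniKorn to gang-schedule all
    # TaskManagers before the job starts. Names and numbers are placeholders.
    import json

    task_groups = [{
        "name": "flink-taskmanagers",
        "minMember": 10,                        # every TM placed up front
        "minResource": {"cpu": "2", "memory": "4Gi"},
    }]

    pod_metadata = {
        "labels": {
            "applicationId": "flink-job-0001",  # hypothetical job id
            "queue": "root.streaming",          # hypothetical queue
        },
        "annotations": {
            "yunikorn.apache.org/task-group-name": "flink-taskmanagers",
            "yunikorn.apache.org/task-groups": json.dumps(task_groups),
        },
    }
    # The pod spec itself would also set schedulerName: yunikorn.

As I understand it, YuniKorn then reserves capacity for the whole group
before the real pods start, which is the "all resources before starting"
behavior described above.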

I'm gathering more streaming use case requirements that cannot be satisfied
by K8s namespaces for resource quota management, along with other advanced
scheduling needs. I will keep this thread updated.

Meanwhile, happy to hear more thoughts from you!

Best,
Chenya

On Tue, Jan 4, 2022 at 9:20 PM Weiwei Yang <w...@apache.org> wrote:

> Hi Chenya
>
> The use case is similar, and YK will play a big role there. Lots of
> features are relevant, such as queues, job ordering, user/group ACLs,
> preemption, over-subscription, performance, etc.
> Some of the basic functionalities are available in YK; some more need to
> be built.
> Please take a look at the slides from the Alibaba Flink team; they have
> shared how they use YK to address their use cases.
> This was presented at ApacheCon:
> https://www.youtube.com/watch?v=4hghJCuZk5M
>
> On Tue, Jan 4, 2022 at 6:35 PM Chenya Zhang <chenyazhangche...@gmail.com>
> wrote:
>
> > Hey folks,
> >
> > We have some new streaming use cases with Apache Flink that could
> > potentially leverage YuniKorn for resource scheduling.
> >
> > The initial implementation is to use K8s namespaces for resource quota
> > management. We are investigating what strong benefits switching to
> > YuniKorn could bring for streaming cases with long-running services. For
> > example, job queueing, job ordering, resource reservation, user groups,
> > etc. all seem more desirable for batch use cases.
> >
> > Any thoughts or suggestions?
> >
> > Thanks,
> > Chenya
> >
>
