So Spark can scale dynamically on YARN, but standalone mode is a bit more
complicated: where do you envision Spark getting the extra resources from?
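
For context, executor-level scaling on YARN is just configuration; here is a
rough PySpark sketch, where the min/max limits are illustrative and shuffle
tracking (which avoids needing the external shuffle service) requires Spark
3.0+:

from pyspark.sql import SparkSession

# Dynamic allocation: executors are requested and released based on load.
# The limits below are illustrative, not recommendations.
spark = (
    SparkSession.builder
    .appName("dynamic-allocation-example")
    .config("spark.dynamicAllocation.enabled", "true")
    .config("spark.dynamicAllocation.shuffleTracking.enabled", "true")
    .config("spark.dynamicAllocation.minExecutors", "1")
    .config("spark.dynamicAllocation.maxExecutors", "20")
    .getOrCreate()
)

The difference is that YARN has a pool of NodeManagers to place those
executors on, while a standalone master only knows about workers that have
already registered with it.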

On Wed, Oct 26, 2022 at 12:18 PM Artemis User <arte...@dtechspace.com>
wrote:

> Has anyone tried to make a Spark cluster dynamically scalable, i.e.,
> automatically adding a new worker node to the cluster when no more
> executors are available for a newly submitted job? We need to keep the
> whole cluster on-prem and really lightweight, so standalone mode is
> preferred, and no k8s if possible. Any suggestions? Thanks in advance!
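
To sketch one possible answer for the standalone case: the master exposes its
state as JSON on the web UI port, so a small watcher could poll it and start a
worker when cores run out. A rough Python sketch, assuming the default
endpoint at http://<master>:8080/json and the stock sbin/start-worker.sh; the
hostnames, install path, and polling interval are placeholders:

import json
import subprocess
import time
from urllib.request import urlopen

MASTER_JSON = "http://spark-master:8080/json"     # master's JSON status page (placeholder host)
MASTER_URL = "spark://spark-master:7077"
START_WORKER = "/opt/spark/sbin/start-worker.sh"  # placeholder install path

while True:
    state = json.load(urlopen(MASTER_JSON))
    idle_cores = state["cores"] - state["coresused"]
    if state["activeapps"] and idle_cores == 0:
        # Every core is busy while apps are running or queued: add a worker.
        # This starts one locally; in practice you'd provision a machine and
        # run start-worker.sh there (e.g. over ssh).
        subprocess.run([START_WORKER, MASTER_URL], check=True)
    time.sleep(30)

The hard part is still where the new machine comes from: the script only
covers registering a worker with the master, not provisioning the host.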
--
Twitter: https://twitter.com/holdenkarau
Books (Learning Spark, High Performance Spark, etc.): https://amzn.to/2MaRAG9
YouTube Live Streams: https://www.youtube.com/user/holdenkarau
