> The way I understand it is that the Spark job will not run if the CPU/Mem
> requirement is not met.

Spark jobs will still run if they only get a subset of the requested
resources; Spark starts scheduling tasks as soon as the first executor comes
up.  Dynamic allocation increases utilization by allocating only as many
executors as a job actually needs, rather than a single static amount set up
front.
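
For illustration, here is a rough sketch of the static setup being described
(the master URL and numbers are just placeholders): spark.cores.max is an
upper bound on the cores the job will acquire, not a hard requirement, so the
job proceeds with whatever executors it actually gets.

    import org.apache.spark.sql.SparkSession

    // Static allocation: the job asks Mesos for up to spark.cores.max cores,
    // but tasks start running as soon as the first executor registers, even
    // if fewer cores than the cap are available.
    val spark = SparkSession.builder()
      .appName("static-allocation-example")
      .master("mesos://zk://host:2181/mesos")  // placeholder Mesos master URL
      .config("spark.cores.max", "32")         // upper bound, not a hard requirement
      .config("spark.executor.memory", "4g")
      .getOrCreate()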

Dynamic allocation is supported in Spark on Mesos, but we here at Mesosphere
haven't tested it much, and I'm not sure what community adoption looks like,
so I can't yet speak to its robustness.  Many users want it, though, and we
will be investing in it soon.
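
If you want to experiment with it, the configuration looks roughly like this
(the limits and master URL are placeholders); on Mesos you also need the
external shuffle service running on each agent so executors can be removed
safely:

    import org.apache.spark.sql.SparkSession

    // Dynamic allocation sketch: the executor count scales with the job's
    // task backlog instead of being fixed up front.
    // Prerequisite on Mesos: run the external shuffle service on each agent,
    // e.g. $SPARK_HOME/sbin/start-mesos-shuffle-service.sh
    val spark = SparkSession.builder()
      .appName("dynamic-allocation-example")
      .master("mesos://zk://host:2181/mesos")            // placeholder master URL
      .config("spark.dynamicAllocation.enabled", "true")
      .config("spark.shuffle.service.enabled", "true")   // required for dynamic allocation
      .config("spark.dynamicAllocation.minExecutors", "1")
      .config("spark.dynamicAllocation.maxExecutors", "20")
      .getOrCreate()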

On Fri, Jan 27, 2017 at 9:35 AM, Ji Yan <ji...@drive.ai> wrote:

> Dear Spark Users,
>
> Is there currently a way to dynamically allocate resources to Spark on
> Mesos? Within Spark we can specify the CPU cores and memory before running a
> job. The way I understand it is that the Spark job will not run if the CPU/Mem
> requirement is not met. This may lead to a decrease in overall utilization of
> the cluster. An alternative behavior would be to launch the job with the best
> resource offer Mesos is able to give. Is this possible with the current
> implementation?
>
> Thanks
> Ji
>



-- 
Michael Gummelt
Software Engineer
Mesosphere
