>
> An alternative behavior is to launch the job with the best resource offer
> Mesos is able to give


Michael has just given an excellent explanation of dynamic allocation
support in Mesos. But IIUC, what you want to achieve is something like
(using RAM as an example): "Launch each executor with at least 1GB of RAM,
but if Mesos offers 2GB at some moment, then launch an executor with 2GB of
RAM".

I wonder what the benefit of that would be? To reduce resource
fragmentation?

Anyway, that is not supported at the moment. In all the cluster managers
Spark supports (Mesos, YARN, standalone, and the upcoming Spark on
Kubernetes), you have to specify the cores and memory of each executor up
front.
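To make that concrete, here is a sketch of what submitting to Mesos looks like today; the master URL, resource values, and jar name are placeholders, not taken from the original thread:

```shell
# Per-executor resources are fixed at submit time; Spark only accepts
# Mesos offers large enough to satisfy these exact amounts.
spark-submit \
  --master mesos://zk://master:2181/mesos \
  --conf spark.executor.memory=2g \
  --conf spark.executor.cores=2 \
  --conf spark.cores.max=16 \
  my-app.jar
```

If an offer is smaller than 2g/2 cores, Spark simply declines it rather than launching a smaller executor.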

It may never be supported, because only Mesos has the concept of offers,
which comes from its two-level scheduling model.
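What dynamic allocation does give you on Mesos is scaling the *number* of executors up and down with load, each still at the fixed size; a minimal sketch of the relevant settings (the master URL, bounds, and jar name are illustrative, and the external shuffle service must be running on each agent):

```shell
# Executor count varies with demand, but each executor keeps the
# same fixed memory/core footprint.
spark-submit \
  --master mesos://zk://master:2181/mesos \
  --conf spark.dynamicAllocation.enabled=true \
  --conf spark.shuffle.service.enabled=true \
  --conf spark.dynamicAllocation.minExecutors=1 \
  --conf spark.dynamicAllocation.maxExecutors=10 \
  my-app.jar
```

So the knob is "how many executors", never "how big is each executor given the current offer".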


On Sat, Jan 28, 2017 at 1:35 AM, Ji Yan <ji...@drive.ai> wrote:

> Dear Spark Users,
>
> Currently is there a way to dynamically allocate resources to Spark on
> Mesos? Within Spark we can specify the CPU cores, memory before running
> job. The way I understand is that the Spark job will not run if the CPU/Mem
> requirement is not met. This may lead to decrease in overall utilization of
> the cluster. An alternative behavior is to launch the job with the best
> resource offer Mesos is able to give. Is this possible with the current
> implementation?
>
> Thanks
> Ji
>
>
