On Sun, May 15, 2016 at 8:19 AM, Mail.com <pradeep.mi...@mail.com> wrote:

> In all that I have seen, it seems each job has to be given the max resources 
> allowed in the cluster.

Hi,

I'm fairly sure that's because the FIFO scheduling mode is used (it's
the default). You could switch it to FAIR and make some adjustments.

https://spark.apache.org/docs/latest/job-scheduling.html#scheduling-within-an-application

It may also depend a little on your resource manager (aka cluster
manager), but only a little: once resources are assigned and
executors are spawned, scheduling is handled by Spark itself.

Jacek
