YARN's capacity scheduler supports hierarchical queues, to which you can
assign cluster resources as percentages. Your Spark applications/shells can
be submitted to different queues. Mesos supports fine-grained mode, which
allows the number of machines/cores used by each executor to ramp up and down.
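As a sketch of the first point, a minimal capacity-scheduler.xml could split
the cluster between two queues (the queue names and percentages here are
illustrative assumptions, not from this thread):

```xml
<!-- capacity-scheduler.xml: divide cluster capacity across two queues.
     Queue names "prod" and "adhoc" and the 70/30 split are illustrative. -->
<configuration>
  <property>
    <name>yarn.scheduler.capacity.root.queues</name>
    <value>prod,adhoc</value>
  </property>
  <property>
    <name>yarn.scheduler.capacity.root.prod.capacity</name>
    <value>70</value>
  </property>
  <property>
    <name>yarn.scheduler.capacity.root.adhoc.capacity</name>
    <value>30</value>
  </property>
</configuration>
```

Each user's shell or application can then target a queue at submit time with
`spark-submit --master yarn --queue adhoc ...` (or by setting
`spark.yarn.queue`), so YARN enforces the per-queue shares across users.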

Lan

On Wed, Apr 22, 2015 at 2:32 PM, yana <yana.kadiy...@gmail.com> wrote:

> Yes. Fair scheduler only helps concurrency within an application.  With
> multiple shells you'd either need something like Yarn/Mesos or careful math
> on resources, as you said
>
>
>
>
> -------- Original message --------
> From: Arun Patel
> Date:04/22/2015 6:28 AM (GMT-05:00)
> To: user
> Subject: Scheduling across applications - Need suggestion
>
> I believe we can use properties like --executor-memory and
> --total-executor-cores to configure the resources allocated to each
> application.  But in a multi-user environment, shells and applications are
> being submitted by multiple users at the same time, all requesting
> resources with different properties.  At times, some users cannot get any
> resources from the cluster.
>
>
> How to control resource usage in this case?  Please share any best
> practices followed.
>
>
> As per my understanding, the Fair scheduler can be used for scheduling
> tasks within an application but not across multiple applications.  Is this
> correct?
>
>
> Regards,
>
> Arun
>
