[ https://issues.apache.org/jira/browse/SPARK-23499?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16398999#comment-16398999 ]

Pascal GILLET commented on SPARK-23499:
---------------------------------------

Yes, I have thought of that design, and it would be more consistent that way. 
On the other hand, the user may want a simple policy that applies at the 
dispatcher level only: if no (weighted) role is declared on the Mesos side, 
the drivers can still be prioritized in the dispatcher's queue; once they are 
running on Mesos, they are all given equal resources.
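
For example, such a dispatcher-only policy might look like the sketch below 
(the queue names are just examples; only the dispatcher queue properties 
proposed in this ticket are used, and no _spark.mesos.role_ is set):

{code:java}
# Conf on the dispatcher side: two priority queues, no Mesos roles involved
spark.mesos.dispatcher.queue.URGENT=1.0
spark.mesos.dispatcher.queue.BATCH=0.1

# Conf on the driver side: the driver is only prioritized in the
# dispatcher's queue; once running on Mesos, it gets no special treatment
spark.mesos.dispatcher.queue=URGENT
{code}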

Another possibility is to add a new boolean property, 
_spark.mesos.dispatcher.queue.mapToMesosWeights_ (false by default), to map 
the drivers' priorities to the weights of the Mesos roles. If set to true, the 
_spark.mesos.dispatcher.queue.[QueueName]_ property can no longer be used.
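
Assuming that property and those semantics, the configuration might then 
reduce to something like this sketch, with the priorities taken from the role 
weights registered in Mesos:

{code:java}
# Conf on the dispatcher side: no spark.mesos.dispatcher.queue.[QueueName]
# entries; queue priorities are derived from the Mesos role weights
spark.mesos.dispatcher.queue.mapToMesosWeights=true

# Conf on the driver side: the driver is queued according to the weight
# of its (weighted) Mesos role, e.g. an URGENT role registered in Mesos
spark.mesos.role=URGENT
{code}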

In this way, the user can decide which policy best fits their needs. What do 
you think?

> Mesos Cluster Dispatcher should support priority queues to submit drivers
> -------------------------------------------------------------------------
>
>                 Key: SPARK-23499
>                 URL: https://issues.apache.org/jira/browse/SPARK-23499
>             Project: Spark
>          Issue Type: Improvement
>          Components: Mesos
>    Affects Versions: 2.2.1, 2.2.2, 2.3.0, 2.3.1
>            Reporter: Pascal GILLET
>            Priority: Major
>         Attachments: Screenshot from 2018-02-28 17-22-47.png
>
>
> As with YARN, Mesos users should be able to specify priority queues to define 
> a workload management policy for queued drivers in the Mesos Cluster 
> Dispatcher.
> Submitted drivers are *currently* kept in order of their submission: the 
> first driver added to the queue will be the first one to be executed (FIFO).
> Each driver could have a "priority" associated with it. A driver with high 
> priority is served (Mesos resources) before a driver with low priority. If 
> two drivers have the same priority, they are served according to their submit 
> date in the queue.
> To set up such priority queues, the following changes are proposed:
>  * The Mesos Cluster Dispatcher can optionally be configured with the 
> _spark.mesos.dispatcher.queue.[QueueName]_ property. This property takes a 
> float value and adds a new queue named _QueueName_ for submitted drivers 
> with the specified priority.
>  Higher numbers indicate higher priority.
>  The user can declare multiple queues this way.
>  * A driver can be submitted to a specific queue with 
> _spark.mesos.dispatcher.queue_. This property takes as its value the name of 
> a queue previously declared in the dispatcher.
> By default, the dispatcher has a single "default" queue with 0.0 priority 
> (cannot be overridden). If none of the properties above are specified, the 
> behavior is the same as the current one (i.e. simple FIFO).
> Additionally, a consistent, end-to-end workload management policy can be 
> implemented throughout the lifecycle of drivers (i.e. from the QUEUED state 
> in the dispatcher to the final states in the Mesos cluster) by mapping these 
> priority queues to weighted Mesos roles, if any, and by specifying a 
> _spark.mesos.role_ along with a _spark.mesos.dispatcher.queue_ when 
> submitting an application.
> For example, with the URGENT Mesos role:
> {code:java}
> # Conf on the dispatcher side
> spark.mesos.dispatcher.queue.URGENT=1.0
> # Conf on the driver side
> spark.mesos.dispatcher.queue=URGENT
> spark.mesos.role=URGENT
> {code}
>  


