[ https://issues.apache.org/jira/browse/SPARK-4940?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14730401#comment-14730401 ]

Iulian Dragos commented on SPARK-4940:
--------------------------------------

[~doctapp] I guess there is no perfect scheduling for every application, so 
some level of configuration may help. We could implement some form of queue 
based scheduling, with applications assigned to queues, and queues having 
different algorithms. But I fear the additional complexity may not pay off.

What is the case you had in mind? I'm wondering whether letting different 
applications specify their preferred allocation strategy would really help. 
Assuming 2 apps on the cluster: one is streaming (prefers round-robin), and the 
other is... something else that prefers "fill-up-slave". Would this really buy 
the second one anything? And what type of application would prefer to have 
multiple executors on the same slave, given the choice? I'd imagine that in 
most cases having more slaves is better (for instance, IO interference is lower).
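To make the contrast concrete, here is a minimal sketch of the two strategies 
being discussed. This is not Spark's actual scheduler code; the function names, 
the `offers` map of free cores per slave, and the fixed chunk size per executor 
are all illustrative assumptions.

```python
def fill_up(offers, total, chunk):
    """Fill-up-slave: take as many chunks as possible from each slave in turn."""
    alloc = {slave: 0 for slave in offers}
    remaining = total
    for slave, free in offers.items():
        # Keep granting executors on this slave while it has capacity.
        while free - alloc[slave] >= chunk and remaining >= chunk:
            alloc[slave] += chunk
            remaining -= chunk
    return alloc

def round_robin(offers, total, chunk):
    """Round-robin: take one chunk from each slave per pass until demand is met."""
    alloc = {slave: 0 for slave in offers}
    remaining = total
    progress = True
    while remaining >= chunk and progress:
        progress = False
        for slave, free in offers.items():
            if free - alloc[slave] >= chunk and remaining >= chunk:
                alloc[slave] += chunk
                remaining -= chunk
                progress = True
    return alloc

offers = {"slave1": 8, "slave2": 8, "slave3": 8}
print(fill_up(offers, 12, 4))      # {'slave1': 8, 'slave2': 4, 'slave3': 0}
print(round_robin(offers, 12, 4))  # {'slave1': 4, 'slave2': 4, 'slave3': 4}
```

With 12 cores requested in 4-core executors over three 8-core slaves, fill-up 
concentrates the work on two slaves while round-robin spreads it across all 
three, which is the uneven distribution the issue describes.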

> Support more evenly distributing cores for Mesos mode
> -----------------------------------------------------
>
>                 Key: SPARK-4940
>                 URL: https://issues.apache.org/jira/browse/SPARK-4940
>             Project: Spark
>          Issue Type: Improvement
>          Components: Mesos
>            Reporter: Timothy Chen
>         Attachments: mesos-config-difference-3nodes-vs-2nodes.png
>
>
> Currently in coarse-grained mode the Spark scheduler simply takes all the 
> resources it can on each node, which can cause uneven distribution depending 
> on the resources available on each slave.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
