[ 
https://issues.apache.org/jira/browse/SPARK-6707?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankur Chauhan updated SPARK-6707:
---------------------------------
    Description: 
Currently, the Mesos scheduler only looks at the `cpu` and `mem` resources when 
trying to determine the usability of a resource offer from a Mesos slave node. 
It may be preferable for the user to be able to ensure that Spark jobs are 
only started on a certain set of nodes, based on slave attributes. 

For example, if the user sets a property, say `spark.mesos.constraints`, 
to `tachyon=true;us-east-1=false`, then resource offers will be checked 
against both constraints, and only offers satisfying both will be accepted 
to start new executors.
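
A minimal sketch of how such constraint matching might work. The property name and the semicolon-separated `key=value` format are taken from the proposal above; the object and method names here are hypothetical, not part of any existing Spark API:

```scala
object ConstraintMatcher {
  // Parse a spec like "tachyon=true;us-east-1=false" into a map of
  // attribute name -> required value. The exact parsing rules are an
  // assumption based on the format shown in the issue description.
  def parse(spec: String): Map[String, String] =
    spec.split(';').filter(_.nonEmpty).map { pair =>
      val Array(key, value) = pair.split('=')
      key.trim -> value.trim
    }.toMap

  // An offer is usable only if every constrained attribute is present
  // on the slave and carries the required value.
  def matches(constraints: Map[String, String],
              offerAttributes: Map[String, String]): Boolean =
    constraints.forall { case (key, value) =>
      offerAttributes.get(key).contains(value)
    }
}
```

With this sketch, an offer from a slave advertising `tachyon=true` and `us-east-1=false` would pass, while an offer missing either attribute (or carrying a different value) would be declined.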

> Mesos Scheduler should allow the user to specify constraints based on slave 
> attributes
> --------------------------------------------------------------------------------------
>
>                 Key: SPARK-6707
>                 URL: https://issues.apache.org/jira/browse/SPARK-6707
>             Project: Spark
>          Issue Type: Improvement
>          Components: Mesos, Scheduler
>    Affects Versions: 1.3.0
>            Reporter: Ankur Chauhan
>              Labels: mesos, scheduler
>
> Currently, the Mesos scheduler only looks at the `cpu` and `mem` resources 
> when trying to determine the usability of a resource offer from a Mesos 
> slave node. It may be preferable for the user to be able to ensure that 
> Spark jobs are only started on a certain set of nodes, based on slave 
> attributes. For example, if the user sets a property, say 
> `spark.mesos.constraints`, to `tachyon=true;us-east-1=false`, then resource 
> offers will be checked against both constraints, and only offers satisfying 
> both will be accepted to start new executors.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
