[ https://issues.apache.org/jira/browse/SPARK-19606?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16187058#comment-16187058 ]

paul mackles commented on SPARK-19606:
--------------------------------------

+1 to being able to constrain drivers, and another +1 to [~pgillet]'s suggestion 
of allowing drivers to be constrained to different resources than the executors.

However, as I understand it, using 
"spark.mesos.dispatcher.driverDefault.spark.mesos.constraints" will not work. If 
"spark.mesos.constraints" is passed with the job, it will override the value 
specified in the "driverDefault" property. If "spark.mesos.constraints" is not 
passed with the job, the value specified in the "driverDefault" property will be 
forwarded to the executors - which we definitely don't want.
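
For illustration (the attribute values below are made up, and the exact place 
you set a dispatcher default will vary by deployment), the two failure cases 
look roughly like this:

    # dispatcher-wide default (e.g. in the dispatcher's spark-defaults.conf):
    spark.mesos.dispatcher.driverDefault.spark.mesos.constraints  rack:driver-rack

    # Case 1: the job also passes spark.mesos.constraints=rack:worker-rack,
    #         which overrides the default above, so the driver is never
    #         constrained to rack:driver-rack.
    # Case 2: the job passes nothing, so rack:driver-rack lands in the driver's
    #         SparkConf and is then applied to that job's executors as well.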

To maintain backwards compatibility while allowing drivers/executors to be 
constrained to either the same or different resources, I propose an additional 
property:

spark.mesos.constraints.driver

The new property could be set per job or for all jobs using 
"spark.mesos.dispatcher.driverDefault.*". The existing property 
"spark.mesos.constraints" would continue to apply to executors only.

If we can come to a consensus on this, I am happy to work on the PR.

> Support constraints in spark-dispatcher
> ---------------------------------------
>
>                 Key: SPARK-19606
>                 URL: https://issues.apache.org/jira/browse/SPARK-19606
>             Project: Spark
>          Issue Type: New Feature
>          Components: Mesos
>    Affects Versions: 2.1.0
>            Reporter: Philipp Hoffmann
>
> The `spark.mesos.constraints` configuration is ignored by the 
> spark-dispatcher. The constraints need to be passed in the Framework 
> information when registering with Mesos.


