Github user tnachen commented on the pull request:

    https://github.com/apache/spark/pull/4027#issuecomment-158668110
  
    @andrewor14 I've updated the patch now. Originally you suggested that I look 
at deploy/master.scala and reuse existing configurations such as 
spark.executor.cores. However, spark.executor.cores refers to a fixed number of 
cores used to launch each Spark executor, whereas in this case we want to 
specify a maximum number of cores that a coarse-grained executor/worker can 
use. The Mesos scheduler will launch each executor with anywhere between 1 and 
that maximum number of cores, and will launch at most the "max executors per 
slave" amount on each slave.
    
    So I think having a spark.mesos.coarse.executor.cores.max or something 
similar still makes sense. What do you think?
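
    For illustration only, here is a minimal sketch of the allocation rule 
described above. It is not the actual patch; the setting names mirror 
spark.mesos.coarse.executor.cores.max and the "max executors per slave" limit 
from this discussion, and the helper and its signature are purely hypothetical.

```scala
// Hypothetical sketch of the core-allocation rule discussed above; the names
// are illustrative placeholders, not the actual Spark/Mesos scheduler code.
object CoarseGrainedAllocationSketch {

  // Assumed settings (placeholders based on this discussion):
  //   maxCoresPerExecutor  ~ spark.mesos.coarse.executor.cores.max
  //   maxExecutorsPerSlave ~ "max executors per slave"
  final case class Settings(maxCoresPerExecutor: Int, maxExecutorsPerSlave: Int)

  /**
   * Given the cores offered on a slave and how many executors are already
   * running there, return the cores to use for the next executor, or None
   * if no further executor should be launched on this slave.
   */
  def coresForNextExecutor(offeredCores: Int,
                           executorsOnSlave: Int,
                           settings: Settings): Option[Int] = {
    if (executorsOnSlave >= settings.maxExecutorsPerSlave || offeredCores < 1) {
      None
    } else {
      // Use anywhere between 1 and the configured maximum, capped by the offer.
      Some(math.min(offeredCores, settings.maxCoresPerExecutor))
    }
  }

  def main(args: Array[String]): Unit = {
    val settings = Settings(maxCoresPerExecutor = 4, maxExecutorsPerSlave = 2)
    println(coresForNextExecutor(10, 0, settings)) // Some(4): capped at the max
    println(coresForNextExecutor(3, 1, settings))  // Some(3): between 1 and the max
    println(coresForNextExecutor(10, 2, settings)) // None: per-slave executor limit hit
  }
}
```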


