GitHub user sryza commented on the pull request:

    https://github.com/apache/spark/pull/5063#issuecomment-93058391
  
    My understanding based on the discussion here is that 
`spark.mesos.executor.cores` is the number of cores reserved by the executor 
itself, *not* for use in running tasks.  So if `spark.mesos.executor.cores` is 
1, `spark.task.cpus` is 2, and 3 tasks are running, then a total of 7 cores 
(1 + 2 × 3) are occupied.  The primary use case for setting it to something 
other than 1 is that Mesos allows fractional values smaller than 1.  So, when 
running multiple executors per node, someone might set it to 0.1 to avoid 
tying up a bunch of the node's cores.
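    
    To make the accounting concrete, here is a minimal sketch of the 
arithmetic as I understand it (the helper name and signature are illustrative, 
not Spark internals):
    
    ```scala
    // Total cores an executor occupies on a node, per the accounting above:
    // the executor's own reservation plus spark.task.cpus per running task.
    // Illustrative helper only, not a Spark API.
    def occupiedCores(executorCores: Double, taskCpus: Int, runningTasks: Int): Double =
      executorCores + taskCpus * runningTasks
    
    occupiedCores(1.0, 2, 3)  // 7.0 -- the example above
    occupiedCores(0.1, 2, 3)  // 6.1 -- with a fractional reservation
    ```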
    
    Did you look at the documentation for the new property?  If it wasn't 
clear, then we should probably update the doc with a better explanation or a 
link to the relevant Mesos docs.

