Github user tnachen commented on the pull request:

    https://github.com/apache/spark/pull/5063#issuecomment-85096699
  
    @sryza When creating a Mesos task, one usually defines the resources 
required to execute the task and, separately, the resources required to run 
the Mesos executor. The executor's role is to initiate task execution and 
report task statuses, but a custom executor provided by the user can do 
anything else as well. (You can skip defining an executor, in which case 
Mesos provides a default one and adds a default resource padding for it.)
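    The task/executor resource split can be illustrated with a small Python model. This is an illustrative sketch, not the real Mesos API; the field names only loosely mirror the TaskInfo/ExecutorInfo protobufs:

```python
# Illustrative model of a Mesos launch: a task declares its own
# resources plus, optionally, the resources of the executor that
# will run it. Field names loosely mirror the real protobufs.

def total_offer_usage(task):
    """CPUs an offer must cover to launch this task."""
    used = task["resources"]["cpus"]
    if task.get("executor"):  # omitted => Mesos uses its default executor
        used += task["executor"]["resources"]["cpus"]
    return used

# Hypothetical task: 1 CPU for the work itself, 0.5 for a custom executor.
task = {
    "name": "example-task-0",
    "resources": {"cpus": 1.0},
    "executor": {
        "name": "custom-executor",
        "resources": {"cpus": 0.5},
    },
}

print(total_offer_usage(task))  # 1.5
```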
    
    In Spark fine-grained mode we do have a custom executor, 
org.apache.spark.executor.MesosExecutorBackend, and the cores assigned here 
are just for running this executor alone; it runs one per slave per app and 
can run multiple Spark tasks.
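    The per-slave accounting in fine-grained mode then looks roughly like this sketch (the numbers are hypothetical; the point is that the executor's cores are paid once per slave per app, while each concurrent Spark task adds its own):

```python
# Sketch of per-slave CPU accounting in fine-grained mode: one
# MesosExecutorBackend per slave per app (a fixed overhead), plus
# per-task cores for each Spark task it runs concurrently.

def slave_cpu_usage(executor_cpus, task_cpus_list):
    """Total CPUs consumed on one slave for one Spark app."""
    return executor_cpus + sum(task_cpus_list)

# Hypothetical numbers: the executor itself holds 1 core, and it is
# currently running three Spark tasks of 1 core each.
print(slave_cpu_usage(1.0, [1.0, 1.0, 1.0]))  # 4.0
```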


