Github user andrewor14 commented on the pull request:

    https://github.com/apache/spark/pull/731#issuecomment-71403791
  
    @CodingCat Thanks for doing this and sorry for the delay. I haven't dug 
into the details of `startMultiExecutorPerWorker` yet. Also, I don't see why we 
need a separate config to enable this. If we don't set the max cores per 
executor, then I think it's OK to assume that the executor will take all the 
cores (i.e. the max cores per executor defaults to Int.MaxValue or something, 
which would be the same as the old behavior). If we do this, do we still need 
to differentiate `startMultiExecutorPerWorker` and `startSingleExecutorPerWorker`?
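
    To illustrate the suggestion, here is a minimal Scala sketch (the names 
`scheduleOnWorker` and `maxCoresPerExecutor` are illustrative, not Spark's 
actual internals): with the cap defaulting to Int.MaxValue, the old 
single-executor-per-worker behavior falls out of the same scheduling path, 
so no separate code path or config is needed.

    ```scala
    // Hypothetical sketch, not Spark's real scheduler code.
    object ExecutorScheduling {
      // Split a worker's free cores into executor-sized chunks.
      // With the default cap of Int.MaxValue, this yields a single
      // executor taking all the cores -- i.e. the old behavior.
      def scheduleOnWorker(freeCores: Int,
                           maxCoresPerExecutor: Int = Int.MaxValue): Seq[Int] = {
        require(freeCores >= 0 && maxCoresPerExecutor > 0)
        val full = freeCores / maxCoresPerExecutor   // executors at the cap
        val rem  = freeCores % maxCoresPerExecutor   // one smaller executor
        Seq.fill(full)(maxCoresPerExecutor) ++ (if (rem > 0) Seq(rem) else Nil)
      }
    }
    ```

    E.g. `scheduleOnWorker(8)` gives one 8-core executor (old behavior), 
while `scheduleOnWorker(8, 2)` gives four 2-core executors.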


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---

---------------------------------------------------------------------
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e-mail: reviews-h...@spark.apache.org