Github user tgravescs commented on the pull request:

    https://github.com/apache/spark/pull/9095#issuecomment-147721566
  
    So I'm actually against this change. It breaks backwards compatibility, and I 
think the current behavior is what we want.
    
    @jerryshao  why do you think this is a problem?
    
    If YARN doesn't schedule for cores, then the options are either to limit Spark 
to what YARN reports (which is simply 1 as a default, since YARN isn't managing 
cores) or to let Spark go ahead and use what the user asked for.  The way it is now 
(without this patch), Spark is allowed to use more than 1 core since the scheduler 
can't schedule them anyway.  It's up to the user to do something reasonable.  
Otherwise there is no way for Spark to use more than 1 core with the 
DefaultResourceCalculator, which I think would be a limitation.
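
    For reference, a rough sketch of the two knobs in play here (the spark-submit 
values are just illustrative):

    ```xml
    <!-- capacity-scheduler.xml: the default calculator only accounts for memory,
         so vcore requests are not enforced when placing containers. Switch to
         DominantResourceCalculator if you want YARN to schedule on cores too. -->
    <property>
      <name>yarn.scheduler.capacity.resource-calculator</name>
      <value>org.apache.hadoop.yarn.util.resource.DominantResourceCalculator</value>
    </property>
    ```

    ```sh
    # With the DefaultResourceCalculator, YARN hands back containers reporting 1 vcore,
    # but Spark still runs 4 concurrent task slots per executor because it honors
    # what the user asked for:
    spark-submit --master yarn --executor-cores 4 --executor-memory 4g ...
    ```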

