[ https://issues.apache.org/jira/browse/SPARK-9353?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14641856#comment-14641856 ]
Apache Spark commented on SPARK-9353:
-------------------------------------

User 'andrewor14' has created a pull request for this issue:
https://github.com/apache/spark/pull/7668

> Standalone scheduling memory requirement incorrect if cores per executor is
> not set
> -----------------------------------------------------------------------------
>
>                 Key: SPARK-9353
>                 URL: https://issues.apache.org/jira/browse/SPARK-9353
>             Project: Spark
>          Issue Type: Bug
>          Components: Deploy
>    Affects Versions: 1.5.0
>            Reporter: Andrew Or
>            Assignee: Andrew Or
>
> I tried to come up with a more succinct title.
> The issue only happens if `spark.executor.cores` is not set. Right now, if
> we have a worker with 8G of memory and we set `spark.executor.memory` to 1G,
> the executor launched on that worker can be given at most 8 cores, even if
> the worker has more cores available.
> This is caused by the fix in SPARK-8881.
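A minimal sketch of the behavior described above, assuming the per-core memory
check introduced by the SPARK-8881 fix. This is not the actual Master.scala
code; `Worker`, `canAssignCore`, and the field names are invented for
illustration. It shows how charging executor memory once per assigned core,
when `spark.executor.cores` is unset and only one executor is launched per
worker, caps that executor at freeMemory / executorMemory cores:

object SchedulingSketch {
  // Hypothetical stand-in for a standalone worker's free resources.
  case class Worker(freeCores: Int, freeMemoryMb: Int)

  // spark.executor.memory = 1G; spark.executor.cores is unset.
  val executorMemoryMb = 1024
  val coresPerExecutor: Option[Int] = None

  def assignCores(worker: Worker): Int = {
    var assignedCores = 0
    var usedMemoryMb = 0

    def canAssignCore: Boolean = {
      val enoughCores = worker.freeCores - assignedCores > 0
      // Bug being illustrated: with coresPerExecutor unset, only ONE
      // executor is launched on this worker, so its memory requirement
      // should be checked once. Checking it for every core assignment
      // caps the core count at freeMemoryMb / executorMemoryMb.
      val enoughMemory = worker.freeMemoryMb - usedMemoryMb >= executorMemoryMb
      enoughCores && enoughMemory
    }

    while (canAssignCore) {
      assignedCores += 1
      usedMemoryMb += executorMemoryMb // memory charged per core: the bug
    }
    assignedCores
  }

  def main(args: Array[String]): Unit = {
    // A 16-core worker with 8G of memory: the single 1G executor should be
    // able to take all 16 cores, but the per-core memory check stops at 8.
    println(assignCores(Worker(freeCores = 16, freeMemoryMb = 8 * 1024))) // 8
  }
}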