[ https://issues.apache.org/jira/browse/SPARK-11154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15000441#comment-15000441 ]
Sean Owen commented on SPARK-11154:
-----------------------------------

I think we'd have to make new properties to maintain compatibility. However, I agree it's confusing. I therefore think it's not worth fixing in 1.x; at best, target this for 2.x.

> make specification of spark.yarn.executor.memoryOverhead consistent with
> typical JVM options
> ------------------------------------------------------------------------------------------
>
>                 Key: SPARK-11154
>                 URL: https://issues.apache.org/jira/browse/SPARK-11154
>             Project: Spark
>          Issue Type: Improvement
>          Components: Documentation, Spark Submit
>            Reporter: Dustin Cote
>            Priority: Minor
>
> spark.yarn.executor.memoryOverhead is currently specified in megabytes by
> default, but it would be nice to allow users to specify the size as though it
> were a typical -Xmx option to a JVM, where 'm' or 'g' can be appended
> to the end to explicitly specify megabytes or gigabytes.
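The "-Xmx-style" units the issue asks for could be parsed roughly as below. This is only a sketch of the proposed behavior, not Spark's actual parser; `parse_memory_mb` is a hypothetical helper, and the backward-compatible default (bare numbers meaning megabytes) matches the current semantics described in the issue.

```python
def parse_memory_mb(value: str) -> int:
    """Parse a memory setting the way a JVM -Xmx value is written.

    Hypothetical helper: an 'm' or 'g' suffix makes the unit explicit,
    while a bare number keeps today's behavior (interpreted as MB).
    """
    v = value.strip().lower()
    if v.endswith("g"):
        return int(v[:-1]) * 1024  # gigabytes -> megabytes
    if v.endswith("m"):
        return int(v[:-1])         # already megabytes
    return int(v)                  # backward compatible: plain number = MB
```

For example, `parse_memory_mb("2g")` and `parse_memory_mb("2048m")` would both yield 2048, while `parse_memory_mb("384")` keeps the existing meaning of 384 MB.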