[ https://issues.apache.org/jira/browse/SPARK-11154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15000426#comment-15000426 ]
Thomas Graves commented on SPARK-11154:
---------------------------------------

It seems unnecessary to me to add new configs just to support this; I can see it causing confusion for users. Personally, I would rather see one of two options: either add support for k/m/g suffixes and default to megabytes when no suffix is specified (although that doesn't match other Spark settings), or, since there is discussion going on about Spark 2.0, simply change the existing configs there to support k/m/g, etc.

> make specification of spark.yarn.executor.memoryOverhead consistent with
> typical JVM options
> ------------------------------------------------------------------------------------------
>
>                 Key: SPARK-11154
>                 URL: https://issues.apache.org/jira/browse/SPARK-11154
>             Project: Spark
>          Issue Type: Improvement
>          Components: Documentation, Spark Submit
>            Reporter: Dustin Cote
>            Priority: Minor
>
> spark.yarn.executor.memoryOverhead is currently specified in megabytes by
> default, but it would be nice to allow users to specify the size as though it
> were a typical -Xmx option to a JVM where you can have 'm' and 'g' appended
> to the end to explicitly specify megabytes or gigabytes.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org
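
The suffix behavior discussed above (accept k/m/g like -Xmx, defaulting to megabytes when no suffix is given) could be sketched roughly as follows. This is an illustrative sketch only, not Spark's actual implementation; the class and method names are hypothetical.

```java
// Hypothetical sketch of parsing a memory size string with an optional
// k/m/g suffix, defaulting to megabytes when no suffix is present,
// as proposed for spark.yarn.executor.memoryOverhead.
public class MemoryOverheadParser {

    /** Parses e.g. "384", "2048k", "512m", or "2g" and returns megabytes. */
    public static long parseMemoryMb(String s) {
        String v = s.trim().toLowerCase();
        char last = v.charAt(v.length() - 1);
        if (Character.isDigit(last)) {
            // No suffix: interpret as megabytes (the current default unit).
            return Long.parseLong(v);
        }
        long n = Long.parseLong(v.substring(0, v.length() - 1));
        switch (last) {
            case 'k': return n / 1024;   // kilobytes -> megabytes
            case 'm': return n;          // already megabytes
            case 'g': return n * 1024;   // gigabytes -> megabytes
            default:
                throw new IllegalArgumentException("Unknown size suffix: " + last);
        }
    }

    public static void main(String[] args) {
        System.out.println(parseMemoryMb("384"));   // 384
        System.out.println(parseMemoryMb("2g"));    // 2048
        System.out.println(parseMemoryMb("2048k")); // 2
    }
}
```

One design question this sketch leaves open, and which the comment above raises, is whether a bare number defaulting to megabytes is acceptable given that other Spark settings behave differently.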