[ https://issues.apache.org/jira/browse/SPARK-11154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14961184#comment-14961184 ]

Sean Owen commented on SPARK-11154:
-----------------------------------

This should apply to all similar properties, not just this one. The twist is 
that the current syntax has to keep working: a bare "1000" must still mean 
"1000 megabytes". But then someone writing "1000000" (expecting a bare number 
to mean bytes, as it does for -Xmx) would be surprised to find that it means 
"1000000 megabytes". (CM might do just this, note.) Hence I'm actually not 
sure this is feasible.
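
For illustration only, a minimal sketch of the backward-compatible parsing 
this would require; the object and method names and the suffix set are 
assumptions, not Spark's actual implementation:

    object MemorySize {
      // A bare number keeps today's meaning (megabytes); an explicit
      // suffix overrides the default, JVM-style.
      def parseAsMb(value: String): Long = {
        val v = value.trim.toLowerCase
        if (v.endsWith("g")) v.dropRight(1).toLong * 1024L       // gigabytes -> MB
        else if (v.endsWith("m")) v.dropRight(1).toLong          // megabytes
        else if (v.endsWith("k")) v.dropRight(1).toLong / 1024L  // kilobytes -> MB, rounded down
        else v.toLong                                            // no suffix: megabytes, as today
      }
    }

    // MemorySize.parseAsMb("1000")    == 1000   (unchanged meaning)
    // MemorySize.parseAsMb("2g")      == 2048
    // MemorySize.parseAsMb("1000000") == 1000000, still surprising if the user meant bytes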

> make specification of spark.yarn.executor.memoryOverhead consistent with 
> typical JVM options
> ------------------------------------------------------------------------------------------
>
>                 Key: SPARK-11154
>                 URL: https://issues.apache.org/jira/browse/SPARK-11154
>             Project: Spark
>          Issue Type: Improvement
>          Components: Documentation, Spark Submit
>            Reporter: Dustin Cote
>            Priority: Minor
>
> spark.yarn.executor.memoryOverhead is currently specified in megabytes by 
> default, but it would be nice to allow users to specify the size as they 
> would a typical JVM -Xmx option, where 'm' or 'g' can be appended to the 
> end to explicitly indicate megabytes or gigabytes.
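
To illustrate the quoted request, a hedged sketch of how the property might be 
written; the suffix forms are the proposal, not current behavior:

    import org.apache.spark.SparkConf

    val conf = new SparkConf()
    // Today: a bare number is interpreted as megabytes.
    conf.set("spark.yarn.executor.memoryOverhead", "1024")
    // Under the proposal, either of these would be equivalent and self-describing:
    // conf.set("spark.yarn.executor.memoryOverhead", "1024m")
    // conf.set("spark.yarn.executor.memoryOverhead", "1g")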


