[ https://issues.apache.org/jira/browse/SPARK-2444?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14058943#comment-14058943 ]

Sean Owen commented on SPARK-2444:
----------------------------------

That's all right. I'm mostly saying I expected a small multiplier here rather 
than an additive factor, since the overhead tends to scale up with heap size 
and workload rather than stay fairly constant. 384MB isn't big enough for 
large-ish executors (on a 20GB executor that's under 2% of headroom). It's 
configurable of course, and I think it's going to come up a lot.
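
For reference, a minimal sketch of how one could bump it programmatically 
(assuming Spark 1.x's SparkConf API; the 1024MB figure is illustrative, not a 
recommendation):

    import org.apache.spark.SparkConf

    // Raise the off-heap headroom YARN reserves on top of the executor heap.
    // The value is interpreted in MB; 1024 here is purely illustrative.
    val conf = new SparkConf()
      .set("spark.yarn.executor.memoryOverhead", "1024")

The same key can typically also be supplied at submit time (e.g. via 
spark-submit's --conf flag in later releases).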

I may be imagining this and mixing it up with something else, but I thought 
there was a multiplier built in somewhere here _as well_, and that this was a 
small fudge factor for non-heap JVM memory only. If not, I agree that it can't 
hurt to make this one clear, though I'm not sure where else in the docs it 
belongs. (And I suppose that's then the answer to SPARK-2398.)


> Make spark.yarn.executor.memoryOverhead a first class citizen
> -------------------------------------------------------------
>
>                 Key: SPARK-2444
>                 URL: https://issues.apache.org/jira/browse/SPARK-2444
>             Project: Spark
>          Issue Type: Improvement
>          Components: Documentation
>    Affects Versions: 1.0.0
>            Reporter: Nishkam Ravi
>
> A higher value of spark.yarn.executor.memoryOverhead is critical to running 
> Spark applications on YARN (https://issues.apache.org/jira/browse/SPARK-2398), 
> at least for 1.0. It would be great to have this parameter highlighted in the 
> docs/usage. 



--
This message was sent by Atlassian JIRA
(v6.2#6252)
