[ https://issues.apache.org/jira/browse/SPARK-3535?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14139845#comment-14139845 ]

Vinod Kone commented on SPARK-3535:
-----------------------------------

This can happen if the Spark executor doesn't use any cpus (or memory) and 
there are no tasks running on it. Note that in the next release of Mesos, 
such an executor will not be allowed to launch. 
https://issues.apache.org/jira/browse/MESOS-1807

> Spark on Mesos not correctly setting heap overhead
> --------------------------------------------------
>
>                 Key: SPARK-3535
>                 URL: https://issues.apache.org/jira/browse/SPARK-3535
>             Project: Spark
>          Issue Type: Bug
>          Components: Mesos
>    Affects Versions: 1.1.0
>            Reporter: Brenden Matthews
>
> Spark on Mesos does not account for any memory overhead.  The result is 
> that tasks are OOM killed nearly 95% of the time.
> Like with the Hadoop on Mesos project, Spark should set aside 15-25% of the 
> executor memory for JVM overhead.
> For example, see: 
> https://github.com/mesos/hadoop/blob/master/src/main/java/org/apache/hadoop/mapred/ResourcePolicy.java#L55-L63
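
For illustration, here is a minimal sketch of the kind of padding the 
reporter proposes: request heap plus max(floor, fraction * heap) from 
Mesos, so the JVM's off-heap usage (metaspace, thread stacks, direct 
buffers) does not push the container past its limit. The names and 
constants below are hypothetical, not Spark's actual API; the floor in 
particular is an assumption, since a pure percentage would leave small 
executors with very little headroom.

{code:scala}
// Hypothetical sketch of an overhead policy for Spark-on-Mesos executors.
// Constants are illustrative: the reporter suggests reserving 15-25% of
// executor memory for JVM overhead, like the mesos/hadoop ResourcePolicy.
object ExecutorMemorySketch {
  val OverheadFraction = 0.15   // fraction of heap reserved for overhead
  val OverheadMinimumMb = 384   // floor for small executors (assumption)

  /** Memory (MB) to request from Mesos for an executor heap of `heapMb`. */
  def totalMemoryMb(heapMb: Int): Int =
    heapMb + math.max(OverheadMinimumMb, (OverheadFraction * heapMb).toInt)

  def main(args: Array[String]): Unit = {
    // e.g. a 2048 MB heap -> request 2432 MB from Mesos
    println(totalMemoryMb(2048))
  }
}
{code}

With this kind of policy, the -Xmx passed to the executor JVM stays at the 
configured heap size, while the Mesos resource offer consumed is the padded 
total, so off-heap allocations no longer trip the container's OOM killer.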


