Andrew Or created SPARK-12081:
---------------------------------

             Summary: Make unified memory management work with small heaps
                 Key: SPARK-12081
                 URL: https://issues.apache.org/jira/browse/SPARK-12081
             Project: Spark
          Issue Type: Bug
          Components: Spark Core
    Affects Versions: 1.6.0
            Reporter: Andrew Or
            Assignee: Andrew Or
            Priority: Critical
By default, Spark driver and executor heaps are 1GB. Under the new unified memory management, only ~250MB is set aside for purposes other than storage and execution, because spark.memory.fraction defaults to 75%. However, the driver needs at least ~300MB of such memory, especially in local mode, so some local jobs have started to OOM. Two mutually exclusive proposals:

(1) First carve out 300MB, then take 75% of what remains
(2) Use min(75% of JVM heap size, JVM heap size - 300MB)
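To make the two proposals concrete, here is a minimal arithmetic sketch comparing them against the current behavior for a 1GB heap. The function names, the 300MB reserve, and the 75% fraction follow the description above; none of this is Spark's actual implementation.

```python
# Illustrative comparison of the two proposals (sizes in MB).
# RESERVED_MB and MEMORY_FRACTION mirror the values discussed in the issue;
# the function names are hypothetical, not Spark internals.

RESERVED_MB = 300       # minimum the driver needs for non-storage/non-execution
MEMORY_FRACTION = 0.75  # default spark.memory.fraction

def unified_current(heap_mb):
    # Current behavior: 75% of the whole heap goes to execution + storage,
    # leaving only 25% (~250MB on a 1GB heap) for everything else.
    return heap_mb * MEMORY_FRACTION

def proposal_1(heap_mb):
    # (1) Carve out the 300MB first, then take 75% of the remainder.
    return (heap_mb - RESERVED_MB) * MEMORY_FRACTION

def proposal_2(heap_mb):
    # (2) min(75% of the heap, heap - 300MB): guarantees at least
    # 300MB is left over, without shrinking large heaps.
    return min(heap_mb * MEMORY_FRACTION, heap_mb - RESERVED_MB)

for heap in (1024, 512):
    print(heap, unified_current(heap), proposal_1(heap), proposal_2(heap))
```

On a 1024MB heap, the current scheme grants 768MB to execution + storage, leaving only 256MB, below the ~300MB the driver needs. Proposal (1) grants 543MB and proposal (2) grants 724MB, both leaving at least 300MB free; (2) is the less conservative of the two.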