[ https://issues.apache.org/jira/browse/SPARK-9244?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Matei Zaharia resolved SPARK-9244.
----------------------------------
    Resolution: Fixed
 Fix Version/s: 1.5.0

> Increase some default memory limits
> -----------------------------------
>
>                 Key: SPARK-9244
>                 URL: https://issues.apache.org/jira/browse/SPARK-9244
>             Project: Spark
>          Issue Type: Improvement
>          Components: Spark Core
>            Reporter: Matei Zaharia
>            Assignee: Matei Zaharia
>            Priority: Minor
>             Fix For: 1.5.0
>
>
> There are a few memory limits that people hit often and that we could make
> higher, especially now that memory sizes have grown.
> - spark.akka.frameSize: This defaults to 10 but is often hit for map output
> statuses in large shuffles. AFAIK the memory is not fully allocated up-front,
> so we can just make this larger and still not affect jobs that never send a
> status that large.
> - spark.executor.memory: Defaults to 512m, which is really small. We can at
> least increase it to 1g, though this is something users do need to set on
> their own.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
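A sketch of how users were raising these two limits by hand before the defaults changed, using standard spark-submit options (the values 128 and 1g here are illustrative choices, not values from the issue):

```shell
# Raise the Akka frame size (in MB) so large map-output status messages
# in big shuffles fit in a single frame, and give executors more heap
# than the old 512m default. Both settings are per-application.
spark-submit \
  --conf spark.akka.frameSize=128 \
  --executor-memory 1g \
  --class com.example.MyApp \
  my-app.jar
```

The same settings can equivalently go in spark-defaults.conf (`spark.akka.frameSize 128`, `spark.executor.memory 1g`); this fix simply moves the shipped defaults higher so fewer users need to override them.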