[ https://issues.apache.org/jira/browse/SPARK-15340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15337421#comment-15337421 ]
Zhongshuai Pei commented on SPARK-15340:
----------------------------------------

[~clockfly]
1. I run in cluster mode on YARN and use spark-sql.
2. I run TPC-DS (500 GB, and it must be ORC) with driver.memory set to 30g.
3. It is a heap-space OOM. You can run "jstat -gc <pid>" and will see that the old-generation memory grows fast and is not released.
4. I ran TPC-DS for about 5 hours and then the OOM happened.

> Limit the size of the map used to cache JobConfs to avoid OOM
> -------------------------------------------------------------
>
>                 Key: SPARK-15340
>                 URL: https://issues.apache.org/jira/browse/SPARK-15340
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 1.5.0, 1.6.0
>            Reporter: Zhongshuai Pei
>            Priority: Critical
>
> When I run TPC-DS (ORC) through the JDBC server, the driver always OOMs.
> I found tens of thousands of JobConf instances in the heap dump, and these
> JobConfs cannot be recycled, so we should limit the size of the map used to
> cache JobConfs.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
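The remedy the issue proposes — bounding the JobConf cache so entries can be evicted instead of accumulating — can be sketched with a size-capped LRU map. This is a hypothetical illustration, not Spark's actual patch: the class name `BoundedCache` and the cap of 2 are made up for the example, and Spark's real JobConf cache lives elsewhere in its codebase.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of a size-bounded LRU cache built on java.util.LinkedHashMap.
// Once the entry count exceeds maxEntries, the least-recently-used entry
// is evicted, so the map can never grow without bound and exhaust the heap.
public class BoundedCache<K, V> extends LinkedHashMap<K, V> {
    private final int maxEntries;

    public BoundedCache(int maxEntries) {
        super(16, 0.75f, true); // accessOrder = true enables LRU ordering
        this.maxEntries = maxEntries;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        // Called after each put(); returning true drops the eldest entry.
        return size() > maxEntries;
    }

    public static void main(String[] args) {
        BoundedCache<String, String> cache = new BoundedCache<>(2);
        cache.put("conf1", "a");
        cache.put("conf2", "b");
        cache.put("conf3", "c"); // exceeds the cap, evicts "conf1"
        System.out.println(cache.containsKey("conf1")); // false
        System.out.println(cache.size());               // 2
    }
}
```

With an unbounded map (the reported behavior), every cached JobConf stays strongly reachable and the old generation fills up; with a cap like this, eviction keeps the cache's footprint constant regardless of how many queries the long-running JDBC server handles.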