[ https://issues.apache.org/jira/browse/SPARK-22471?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Felix Cheung updated SPARK-22471:
---------------------------------
    Target Version/s: 2.2.2

> SQLListener consumes much memory causing OutOfMemoryError
> ----------------------------------------------------------
>
>                  Key: SPARK-22471
>                  URL: https://issues.apache.org/jira/browse/SPARK-22471
>              Project: Spark
>           Issue Type: Bug
>           Components: SQL, Web UI
>     Affects Versions: 2.2.0
>          Environment: Spark 2.2.0, Linux
>             Reporter: Arseniy Tashoyan
>             Assignee: Arseniy Tashoyan
>               Labels: memory-leak, sql
>              Fix For: 2.2.2
>
>          Attachments: SQLListener_retained_size.png, SQLListener_stageIdToStageMetrics_retained_size.png
>
>    Original Estimate: 72h
>   Remaining Estimate: 72h
>
> _SQLListener_ may grow very large when Spark runs complex multi-stage requests. The listener tracks metrics for all stages in the __stageIdToStageMetrics_ hash map. _SQLListener_ has some means to clean up this hash map regularly, but they are not enough. Specifically, the method _trimExecutionsIfNecessary_ ensures that __stageIdToStageMetrics_ does not hold metrics for very old data; this method runs each time an execution completes. However, while an execution with many stages is still running, _SQLListener_ keeps adding new entries to __stageIdToStageMetrics_ without ever calling _trimExecutionsIfNecessary_, so the hash map can grow to an enormous size. Strictly speaking, this is not a memory leak, because _trimExecutionsIfNecessary_ eventually cleans the hash map. However, the driver program is very likely to crash with OutOfMemoryError before that happens (and in practice it does).
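
> The sketch below is a minimal, self-contained model of the accumulation pattern described above, not the actual Spark source: per-stage metrics are added on every stage submission, while trimming only runs when an execution completes. _SimplifiedListener_, _StageMetrics_, the payload size, and the _retainedExecutions_ default are illustrative assumptions; the real logic lives in org.apache.spark.sql.execution.ui.SQLListener.

{code:scala}
import scala.collection.mutable

object SQLListenerGrowthSketch {

  // Hypothetical stand-in for the real per-stage metrics objects.
  case class StageMetrics(stageId: Int, payload: Array[Long])

  class SimplifiedListener(retainedExecutions: Int = 1000) {
    // Grows with every submitted stage of every live execution.
    val stageIdToStageMetrics = mutable.HashMap[Int, StageMetrics]()
    private val completedExecutions = mutable.ArrayBuffer[Long]()

    def onStageSubmitted(stageId: Int): Unit = {
      // Each submitted stage adds an entry; nothing is removed here.
      stageIdToStageMetrics(stageId) = StageMetrics(stageId, new Array[Long](1024))
    }

    def onExecutionEnd(executionId: Long, stageIds: Seq[Int]): Unit = {
      completedExecutions += executionId
      // Cleanup is reached only here, i.e. only after an execution has finished.
      trimExecutionsIfNecessary(stageIds)
    }

    private def trimExecutionsIfNecessary(stageIds: Seq[Int]): Unit = {
      if (completedExecutions.size > retainedExecutions) {
        stageIds.foreach(stageIdToStageMetrics.remove)
      }
    }
  }

  def main(args: Array[String]): Unit = {
    val listener = new SimplifiedListener()
    // One long-running execution with many stages: the map keeps growing,
    // because trimExecutionsIfNecessary is never reached until the execution ends.
    (1 to 100000).foreach(listener.onStageSubmitted)
    println(s"Entries held while the execution is still running: " +
      s"${listener.stageIdToStageMetrics.size}")
  }
}
{code}

> Running the sketch shows the map holding one entry per submitted stage for the whole lifetime of the execution, which mirrors the retained-size growth visible in the attached heap-dump screenshots.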