[ https://issues.apache.org/jira/browse/SPARK-1103?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14010099#comment-14010099 ]

Andrew Ash commented on SPARK-1103:
-----------------------------------

https://github.com/apache/spark/pull/126

> Garbage collect RDD information inside of Spark
> -----------------------------------------------
>
>                 Key: SPARK-1103
>                 URL: https://issues.apache.org/jira/browse/SPARK-1103
>             Project: Spark
>          Issue Type: Improvement
>          Components: Spark Core
>            Reporter: Patrick Wendell
>            Assignee: Tathagata Das
>            Priority: Blocker
>             Fix For: 1.0.0
>
>
> When Spark jobs run for a long period of time, state accumulates. This is
> dealt with now using TTL-based cleaning. Instead we should do proper garbage
> collection using weak references.

--
This message was sent by Atlassian JIRA
(v6.2#6252)
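The weak-reference approach the issue proposes can be sketched on the JVM as follows. This is a minimal illustration, not Spark's actual cleaner: the `FakeRdd`, `CleanupRef`, and `RddCleanupSketch` names are hypothetical. The idea is that each tracked object is registered with a `WeakReference` attached to a `ReferenceQueue`; once the object becomes unreachable and is collected, its reference appears on the queue and the associated driver-side state can be dropped eagerly, with no TTL involved.

```java
import java.lang.ref.ReferenceQueue;
import java.lang.ref.WeakReference;
import java.util.concurrent.ConcurrentHashMap;

public class RddCleanupSketch {
    // Hypothetical stand-in for an RDD handle whose driver-side state we track.
    static class FakeRdd {
        final int id;
        FakeRdd(int id) { this.id = id; }
    }

    // Keeps the id alive after the referent itself has been collected.
    static class CleanupRef extends WeakReference<FakeRdd> {
        final int rddId;
        CleanupRef(FakeRdd rdd, ReferenceQueue<FakeRdd> q) {
            super(rdd, q);
            this.rddId = rdd.id;
        }
    }

    final ReferenceQueue<FakeRdd> queue = new ReferenceQueue<>();
    final ConcurrentHashMap<Integer, String> rddState = new ConcurrentHashMap<>();
    // Strong refs to the CleanupRef objects so they outlive their referents.
    final ConcurrentHashMap<Integer, CleanupRef> refs = new ConcurrentHashMap<>();

    void register(FakeRdd rdd) {
        rddState.put(rdd.id, "cached-metadata-for-" + rdd.id);
        refs.put(rdd.id, new CleanupRef(rdd, queue));
    }

    // Drain the queue: an enqueued reference means its RDD is unreachable,
    // so its state can be cleaned up immediately.
    void cleanUp() {
        CleanupRef ref;
        while ((ref = (CleanupRef) queue.poll()) != null) {
            rddState.remove(ref.rddId);
            refs.remove(ref.rddId);
        }
    }

    public static void main(String[] args) throws InterruptedException {
        RddCleanupSketch cleaner = new RddCleanupSketch();
        FakeRdd rdd = new FakeRdd(1);
        cleaner.register(rdd);
        System.out.println("before: " + cleaner.rddState.keySet());

        rdd = null;  // drop the last strong reference
        // GC timing is not guaranteed, so nudge the collector and retry briefly.
        for (int i = 0; i < 50 && cleaner.rddState.containsKey(1); i++) {
            System.gc();
            Thread.sleep(10);
            cleaner.cleanUp();
        }
        System.out.println("after: " + cleaner.rddState.keySet());
    }
}
```

Unlike TTL-based cleaning, nothing is dropped while still reachable, and nothing reachable is ever evicted; in a real driver the drain loop would run on a dedicated daemon thread rather than in `main`.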