[ https://issues.apache.org/jira/browse/SPARK-24437?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16499733#comment-16499733 ]
gagan taneja commented on SPARK-24437:
--------------------------------------

I have already set spark.cleaner.periodicGC.interval to 5 min. Which branch did you try to reproduce this issue on? There have been many changes made in this area to address some other leaks. Can you try it out with the 2.2 branch? If it is reproducible in 2.2, then we know for sure that we have a way to reproduce it and that the issue is resolved by post-2.2 fixes.

> Memory leak in UnsafeHashedRelation
> -----------------------------------
>
>                 Key: SPARK-24437
>                 URL: https://issues.apache.org/jira/browse/SPARK-24437
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 2.2.0
>            Reporter: gagan taneja
>            Priority: Critical
>         Attachments: Screen Shot 2018-05-30 at 2.05.40 PM.png, Screen Shot 2018-05-30 at 2.07.22 PM.png
>
> There seems to be a memory leak with org.apache.spark.sql.execution.joins.UnsafeHashedRelation.
> We have a long-running instance of STS.
> With each query execution requiring a Broadcast Join, an UnsafeHashedRelation gets added for cleanup in ContextCleaner. This reference to the UnsafeHashedRelation is being held in some other collection and does not become eligible for GC, so ContextCleaner is unable to clean it.

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
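For anyone trying to reproduce the setup described above, spark.cleaner.periodicGC.interval (which defaults to 30min) can be lowered at submit time. A minimal sketch, assuming a spark-submit deployment; the master URL, class name, and jar path are placeholders, not taken from this issue:

```shell
# Force the ContextCleaner's periodic System.gc() to run every 5 minutes
# instead of the default 30, so weakly-held broadcast references are
# collected (and thus cleaned) more often.
spark-submit \
  --master yarn \
  --conf spark.cleaner.periodicGC.interval=5min \
  --class com.example.MyApp \
  my-app.jar
```

Note that, per the report above, lowering this interval does not help here: the leaked UnsafeHashedRelation is still strongly referenced elsewhere, so no amount of GC makes it eligible for collection by ContextCleaner.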