[ https://issues.apache.org/jira/browse/SPARK-5841?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14324691#comment-14324691 ]

Yin Huai commented on SPARK-5841:
---------------------------------

[~matt.whelan] I noticed that after SQL unit tests, the following will be logged
{code}
10:59:28.721 ERROR org.apache.spark.util.Utils: Uncaught exception in thread delete Spark local dirs
java.lang.IllegalStateException: Shutdown in progress
        at java.lang.ApplicationShutdownHooks.remove(ApplicationShutdownHooks.java:82)
        at java.lang.Runtime.removeShutdownHook(Runtime.java:239)
        at org.apache.spark.storage.DiskBlockManager.stop(DiskBlockManager.scala:151)
        at org.apache.spark.storage.DiskBlockManager$$anon$1$$anonfun$run$1.apply$mcV$sp(DiskBlockManager.scala:141)
        at org.apache.spark.storage.DiskBlockManager$$anon$1$$anonfun$run$1.apply(DiskBlockManager.scala:139)
        at org.apache.spark.storage.DiskBlockManager$$anon$1$$anonfun$run$1.apply(DiskBlockManager.scala:139)
        at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1613)
        at org.apache.spark.storage.DiskBlockManager$$anon$1.run(DiskBlockManager.scala:139)
Exception in thread "delete Spark local dirs" java.lang.IllegalStateException: Shutdown in progress
        at java.lang.ApplicationShutdownHooks.remove(ApplicationShutdownHooks.java:82)
        at java.lang.Runtime.removeShutdownHook(Runtime.java:239)
        at org.apache.spark.storage.DiskBlockManager.stop(DiskBlockManager.scala:151)
        at org.apache.spark.storage.DiskBlockManager$$anon$1$$anonfun$run$1.apply$mcV$sp(DiskBlockManager.scala:141)
        at org.apache.spark.storage.DiskBlockManager$$anon$1$$anonfun$run$1.apply(DiskBlockManager.scala:139)
        at org.apache.spark.storage.DiskBlockManager$$anon$1$$anonfun$run$1.apply(DiskBlockManager.scala:139)
        at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1613)
        at org.apache.spark.storage.DiskBlockManager$$anon$1.run(DiskBlockManager.scala:139)
{code}
It seems this log output is related to this commit. Is it expected? You can run "test-only org.apache.spark.sql.sources.InsertSuite" to reproduce it.
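
For reference, Runtime.removeShutdownHook throws IllegalStateException once JVM shutdown has begun, and here DiskBlockManager.stop is being invoked from inside the shutdown hook itself, so the removal is bound to fail. Below is a minimal standalone Scala sketch (not the actual Spark code; names are illustrative) that reproduces the same "Shutdown in progress" exception:
{code}
// Hypothetical sketch: a shutdown hook that tries to deregister itself,
// mirroring DiskBlockManager.stop() being invoked from its own hook.
object ShutdownHookDemo {
  // The hook tries to remove itself while the JVM is already shutting down.
  private val hook: Thread = new Thread("delete local dirs (demo)") {
    override def run(): Unit = {
      try {
        // Throws java.lang.IllegalStateException: Shutdown in progress,
        // because hooks cannot be removed once shutdown has begun.
        Runtime.getRuntime.removeShutdownHook(hook)
      } catch {
        case e: IllegalStateException =>
          println(s"removeShutdownHook failed: ${e.getMessage}")
      }
    }
  }

  def main(args: Array[String]): Unit = {
    Runtime.getRuntime.addShutdownHook(hook)
    // Normal JVM exit triggers the hook, which then hits the exception above.
  }
}
{code}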

> Memory leak in DiskBlockManager
> -------------------------------
>
>                 Key: SPARK-5841
>                 URL: https://issues.apache.org/jira/browse/SPARK-5841
>             Project: Spark
>          Issue Type: Bug
>          Components: Block Manager
>    Affects Versions: 1.2.1
>            Reporter: Matt Whelan
>            Assignee: Matt Whelan
>             Fix For: 1.3.0
>
>
> DiskBlockManager registers a Runtime shutdown hook, which creates a hard 
> reference to the entire Driver ActorSystem. If a long-running JVM repeatedly 
> creates and destroys SparkContext instances, it leaks memory.
> I suggest we deregister the shutdown hook when DiskBlockManager.stop is 
> called; it is redundant at that point.
> PR coming.
> See also 
> http://mail-archives.apache.org/mod_mbox/spark-user/201501.mbox/%3CCA+kjH+w_DDTEBE9XB6NrPxLTUXD=nc_d-3ogxtumk_5v-e0...@mail.gmail.com%3E
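
Below is a rough Scala sketch of the pattern the description suggests: keep a reference to the registered hook and deregister it in stop(), tolerating the case where stop() is itself running from the hook during JVM shutdown. Class and method names are illustrative only, not the actual DiskBlockManager code.
{code}
// Illustrative sketch of the proposed fix; names are hypothetical.
class LocalDirManager {
  // Keep a handle on the hook so stop() can remove it later. While it is
  // registered, the Runtime's hook table holds a hard reference to this
  // object (and everything it reaches), which is the leak described above.
  private val shutdownHook = new Thread("delete local dirs (sketch)") {
    override def run(): Unit = doStop(fromShutdownHook = true)
  }
  Runtime.getRuntime.addShutdownHook(shutdownHook)

  /** Called on normal shutdown, e.g. when the SparkContext is stopped. */
  def stop(): Unit = doStop(fromShutdownHook = false)

  private def doStop(fromShutdownHook: Boolean): Unit = {
    if (!fromShutdownHook) {
      try {
        // Deregister so the hook no longer pins this instance in memory.
        Runtime.getRuntime.removeShutdownHook(shutdownHook)
      } catch {
        // Thrown if the JVM is already shutting down; safe to ignore here.
        case _: IllegalStateException =>
      }
    }
    deleteLocalDirs()
  }

  private def deleteLocalDirs(): Unit = {
    // Clean up the temporary block directories here.
  }
}
{code}
With this pattern, repeatedly creating and stopping such managers in one long-running JVM no longer accumulates shutdown hooks.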



