[ https://issues.apache.org/jira/browse/SPARK-2268?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14043088#comment-14043088 ]

Mridul Muralidharan commented on SPARK-2268:
--------------------------------------------

That is not because of this hook.
There are a bunch of places in Spark where FileSystem objects are getting 
closed (incorrectly, I should add): some within shutdown hooks (see the 
stop() methods of various Spark services) and others elsewhere (e.g. the 
checkpointing code).

I have fixed a bunch of these as part of some other work ... a PR should 
come soon.
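
For context, a minimal illustration (not Spark code) of why closing these 
objects is a problem, assuming fs.defaultFS points at an HDFS cluster: 
FileSystem.get() returns a cached instance shared per URI and configuration, 
so one caller's close() breaks every other holder of the same object.

{code}
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.{FileSystem, Path}

val conf = new Configuration()
val fs1 = FileSystem.get(conf)  // cached instance for this URI/conf
val fs2 = FileSystem.get(conf)  // same object as fs1

fs1.close()  // e.g. called from some service's stop() method

// Every other holder of the cached instance is now broken: with HDFS,
// this fails with "java.io.IOException: Filesystem closed".
fs2.exists(new Path("/tmp"))
{code}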

> Utils.createTempDir() creates race with HDFS at shutdown
> --------------------------------------------------------
>
>                 Key: SPARK-2268
>                 URL: https://issues.apache.org/jira/browse/SPARK-2268
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Core
>    Affects Versions: 1.0.0
>            Reporter: Marcelo Vanzin
>
> Utils.createTempDir() has this code:
> {code}
>     // Add a shutdown hook to delete the temp dir when the JVM exits
>     Runtime.getRuntime.addShutdownHook(new Thread("delete Spark temp dir " + dir) {
>       override def run() {
>         // Attempt to delete only if no parent path of this dir is already registered.
>         if (!hasRootAsShutdownDeleteDir(dir)) Utils.deleteRecursively(dir)
>       }
>     })
> {code}
> This creates a race with the shutdown hooks registered by HDFS, since the 
> order of execution is undefined; if the HDFS hooks run first, you'll get 
> exceptions about the file system being closed.
> Instead, this should use Hadoop's ShutdownHookManager with a proper priority, 
> so that it runs before the HDFS hooks.
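> A rough sketch of what that could look like (untested; assumes Hadoop 2.x, 
> where FileSystem.SHUTDOWN_HOOK_PRIORITY is defined and higher-priority 
> hooks run earlier):
> {code}
> import org.apache.hadoop.fs.FileSystem
> import org.apache.hadoop.util.ShutdownHookManager
>
> // Registering with a priority above FileSystem.SHUTDOWN_HOOK_PRIORITY
> // ensures this hook runs before HDFS's hook closes the FileSystem cache.
> ShutdownHookManager.get().addShutdownHook(new Runnable {
>   override def run(): Unit = {
>     if (!hasRootAsShutdownDeleteDir(dir)) Utils.deleteRecursively(dir)
>   }
> }, FileSystem.SHUTDOWN_HOOK_PRIORITY + 1)
> {code}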



--
This message was sent by Atlassian JIRA
(v6.2#6252)