[ https://issues.apache.org/jira/browse/SPARK-2268?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Marcelo Vanzin resolved SPARK-2268.
-----------------------------------
    Resolution: Invalid

Looking at the logs from the user, there are multiple exceptions being logged at the same time during shutdown, and this one wasn't really about HDFS. Sorry about that.

> Utils.createTempDir() creates race with HDFS at shutdown
> --------------------------------------------------------
>
>                 Key: SPARK-2268
>                 URL: https://issues.apache.org/jira/browse/SPARK-2268
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Core
>    Affects Versions: 1.0.0
>            Reporter: Marcelo Vanzin
>
> Utils.createTempDir() has this code:
> {code}
> // Add a shutdown hook to delete the temp dir when the JVM exits
> Runtime.getRuntime.addShutdownHook(new Thread("delete Spark temp dir " + dir) {
>   override def run() {
>     // Attempt to delete if some path which is a parent of this one is not already registered.
>     if (!hasRootAsShutdownDeleteDir(dir)) Utils.deleteRecursively(dir)
>   }
> })
> {code}
> This creates a race with the shutdown hooks registered by HDFS, since the order of execution is undefined; if the HDFS hooks run first, you'll get exceptions about the file system being closed.
> Instead, this should use Hadoop's ShutdownHookManager with a proper priority, so that it runs before the HDFS hooks.

--
This message was sent by Atlassian JIRA
(v6.2#6252)
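The ordering idea behind the proposed fix, running hooks in descending priority so a cleanup hook registered above the filesystem's priority executes before HDFS closes, can be sketched as follows. This is a minimal self-contained illustration, not Hadoop's actual `ShutdownHookManager` (whose real API is `ShutdownHookManager.get().addShutdownHook(runnable, priority)`); the class name and the priority values here are hypothetical, though Hadoop's `FileSystem` hook does use priority 10.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Sketch of a priority-aware shutdown hook manager: higher-priority hooks
// run first, so a temp-dir cleanup hook registered with a priority above
// the filesystem's hook runs before the filesystem is closed.
public class PriorityHooks {
    private static class Hook {
        final Runnable body;
        final int priority;
        Hook(Runnable body, int priority) {
            this.body = body;
            this.priority = priority;
        }
    }

    private final List<Hook> hooks = new ArrayList<>();

    public void addShutdownHook(Runnable body, int priority) {
        hooks.add(new Hook(body, priority));
    }

    // Run all registered hooks, highest priority first.
    public void runHooks() {
        hooks.sort(Comparator.comparingInt((Hook h) -> h.priority).reversed());
        for (Hook h : hooks) {
            h.body.run();
        }
    }

    public static void main(String[] args) {
        PriorityHooks mgr = new PriorityHooks();
        List<String> order = new ArrayList<>();
        // Hadoop's FileSystem shutdown hook uses priority 10; registering the
        // temp-dir hook with a higher (hypothetical) priority makes it run first.
        mgr.addShutdownHook(() -> order.add("close HDFS"), 10);
        mgr.addShutdownHook(() -> order.add("delete temp dir"), 30);
        mgr.runHooks();
        System.out.println(order);  // [delete temp dir, close HDFS]
    }
}
```

With the single JVM-level `Runtime.addShutdownHook` approach quoted in the issue, hook threads run in an unspecified order, which is exactly the race described above; a single manager hook that drains a priority-ordered list removes that nondeterminism.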