[ https://issues.apache.org/jira/browse/SPARK-6313?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14362605#comment-14362605 ]

Nathan McCarthy commented on SPARK-6313:
----------------------------------------

Stack trace:

14/12/12 18:18:24 WARN scheduler.TaskSetManager: Lost task 7.0 in stage 0.0 (TID 8, hadoop-016): java.io.IOException: Permission denied
    at sun.nio.ch.FileDispatcherImpl.lock0(Native Method)
    at sun.nio.ch.FileDispatcherImpl.lock(FileDispatcherImpl.java:91)
    at sun.nio.ch.FileChannelImpl.lock(FileChannelImpl.java:1022)
    at java.nio.channels.FileChannel.lock(FileChannel.java:1052)
    at org.apache.spark.util.Utils$.fetchFile(Utils.scala:379)
    at org.apache.spark.executor.Executor$$anonfun$org$apache$spark$executor$Executor$$updateDependencies$6.apply(Executor.scala:350)
    at org.apache.spark.executor.Executor$$anonfun$org$apache$spark$executor$Executor$$updateDependencies$6.apply(Executor.scala:347)
    at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:772)
    at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:98)
    at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:98)
    at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:226)
    at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:39)
    at scala.collection.mutable.HashMap.foreach(HashMap.scala:98)
    at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:771)
    at org.apache.spark.executor.Executor.org$apache$spark$executor$Executor$$updateDependencies(Executor.scala:347)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:177)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
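
For reference, a minimal self-contained sketch of the failing call pattern. This is not the exact Spark code and the path is hypothetical; it just reproduces the FileChannel.lock call that the stack trace bottoms out in:

import java.io.{File, RandomAccessFile}

object FetchLockRepro {
  def main(args: Array[String]): Unit = {
    // Hypothetical work dir on an NFS/MapR loopback mount; substitute your own.
    val lockFile = new File("/mapr/mycluster/spark-work/fetch_lock")
    val raf = new RandomAccessFile(lockFile, "rw")
    // FileChannel.lock() ends in the native FileDispatcherImpl.lock0 call at
    // the top of the stack trace; on a mount without byte-range locking it
    // throws java.io.IOException: Permission denied instead of blocking.
    val lock = raf.getChannel.lock()
    try println("lock acquired") // not reached on an affected mount
    finally { lock.release(); raf.close() }
  }
}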

> Fetch file lock creation doesn't work when the Spark working dir is on an 
> NFS mount
> ----------------------------------------------------------------------------------
>
>                 Key: SPARK-6313
>                 URL: https://issues.apache.org/jira/browse/SPARK-6313
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Core
>    Affects Versions: 1.2.0, 1.3.0, 1.2.1
>            Reporter: Nathan McCarthy
>            Priority: Critical
>
> When running in cluster mode with the Spark work dir mounted on an NFS 
> volume (or any volume that doesn't support file locking), the fetchFile 
> method in Spark's Utils class (used for downloading JARs etc. on the 
> executors) will fail. This file locking was introduced as an improvement 
> with SPARK-2713. 
> See 
> https://github.com/apache/spark/blob/master/core/src/main/scala/org/apache/spark/util/Utils.scala#L415
>  
> Introduced in 1.2 in commit: 
> https://github.com/apache/spark/commit/7aacb7bfad4ec73fd8f18555c72ef696 
> As this locking is only an optimisation for fetching files, could we take a 
> different approach here and create a temp/advisory lock file? (A sketch of 
> one such approach follows below.) 
> Typically you would just mount local disks (in, say, ext4 format) and 
> provide them as a comma-separated list; however, we are trying to run Spark 
> on MapR. With MapR we can do a loopback mount to a volume on the local node 
> and take advantage of MapR's disk pools. This also means we don't need 
> Spark-specific mounts, which keeps the cluster setup generic. 
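
A minimal sketch of the advisory-lock idea suggested above, assuming a create-if-absent marker file stands in for FileChannel.lock. This is illustrative only, not Spark's implementation, and all names are hypothetical; note that the java.io.File Javadoc cautions that createNewFile-based locking cannot be made fully reliable, so a real fix would also need stale-lock handling:

import java.io.File

object AdvisoryLock {
  // Guard a block with a marker file created via File.createNewFile, an
  // atomic create-if-absent on most filesystems, which avoids the native
  // byte-range locking that fails on NFS in the stack trace above.
  def withLock[T](lockFile: File)(body: => T): T = {
    while (!lockFile.createNewFile()) {
      Thread.sleep(100) // another executor holds the lock; wait and retry
    }
    try body
    finally lockFile.delete() // release so the next fetcher can proceed
  }
}

Usage would look something like AdvisoryLock.withLock(new File(workDir, url.hashCode + "_lock")) { /* fetch the file */ }. One caveat: an executor that crashes between create and delete leaves a stale marker, so a timeout or cleanup pass would be needed in practice.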



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
