[ https://issues.apache.org/jira/browse/SPARK-6313?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14365452#comment-14365452 ]
Josh Rosen commented on SPARK-6313:
-----------------------------------
I've merged Nathan's patch into 1.4.0, 1.3.1, and 1.2.2. After this patch,
users can work around this bug by setting {{spark.files.useFetchCache=false}}
in their SparkConf.
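For example, applying the workaround programmatically (a minimal sketch; the
app name is illustrative):
{code:scala}
import org.apache.spark.{SparkConf, SparkContext}

// Disable the shared fetch-file cache so executors skip the
// lock-file code path that misbehaves on NFS-backed work dirs.
val conf = new SparkConf()
  .setAppName("ExampleApp") // illustrative
  .set("spark.files.useFetchCache", "false")
val sc = new SparkContext(conf)
{code}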
> Fetch file lock file creation doesn't work when Spark working dir is on an
> NFS mount
> ----------------------------------------------------------------------------------
>
> Key: SPARK-6313
> URL: https://issues.apache.org/jira/browse/SPARK-6313
> Project: Spark
> Issue Type: Bug
> Components: Spark Core
> Affects Versions: 1.2.0, 1.3.0, 1.2.1
> Reporter: Nathan McCarthy
> Assignee: Nathan McCarthy
> Priority: Critical
> Fix For: 1.2.2, 1.4.0, 1.3.1
>
>
> When running in cluster mode with the Spark work dir mounted on an NFS volume
> (or any volume which doesn't support file locking), the fetchFile method (used
> for downloading JARs etc. on the executors) in Spark's Utils class will fail.
> This file locking was introduced as an improvement with SPARK-2713.
> See
> https://github.com/apache/spark/blob/master/core/src/main/scala/org/apache/spark/util/Utils.scala#L415
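>
> A minimal sketch of the JVM file-locking pattern involved (paths and file
> names are illustrative, not the exact Utils.scala code):
> {code:scala}
> import java.io.{File, RandomAccessFile}
>
> // FileChannel.lock() delegates to the OS; on an NFS mount without
> // working lock support it can throw IOException, failing the fetch.
> val lockFile = new File("/path/to/spark/work", "fetchFileTemp.lock")
> val raf = new RandomAccessFile(lockFile, "rw")
> val lock = raf.getChannel.lock() // may fail on NFS
> try {
>   // ... copy the cached file while holding the lock ...
> } finally {
>   lock.release()
>   raf.close()
> }
> {code}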
>
> Introduced in 1.2 in commit:
> https://github.com/apache/spark/commit/7aacb7bfad4ec73fd8f18555c72ef696
> As this locking is an optimisation for fetching files, could we take a
> different approach here and create a temp/advisory lock file?
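>
> As a sketch of what an advisory-lock alternative might look like (an
> assumption on my part, not a concrete patch; paths are illustrative), atomic
> file creation avoids OS-level locks entirely:
> {code:scala}
> import java.nio.file.{FileAlreadyExistsException, Files, Paths}
>
> // Atomic create acts as an advisory lock: open(O_CREAT|O_EXCL) is
> // atomic on local disks and on most modern network filesystems.
> val lockPath = Paths.get("/path/to/spark/work", "fetch.lock")
> try {
>   Files.createFile(lockPath) // fails fast if another executor holds it
>   try {
>     // ... fetch/copy the file ...
>   } finally {
>     Files.delete(lockPath)
>   }
> } catch {
>   case _: FileAlreadyExistsException =>
>     // lock already held; fall back to fetching the file directly
> }
> {code}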
> Typically you would just mount local disks (in, say, ext4 format) and provide
> these as a comma-separated list; however, we are trying to run Spark on MapR.
> With MapR we can do a loopback mount to a volume on the local node and take
> advantage of MapR's disk pools. This also means we don't need specific mounts
> for Spark, which improves the generic nature of the cluster.