scala> org.apache.hadoop.fs.FileSystem.getLocal(sc.hadoopConfiguration)
res0: org.apache.hadoop.fs.LocalFileSystem =
org.apache.hadoop.fs.LocalFileSystem@3f84970b

scala> res0.delete(new org.apache.hadoop.fs.Path("/tmp/does-not-exist"), true)
res3: Boolean = false

Does that clear up your confusion?
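
In other words, FileSystem.delete() just returns false when the path is
already gone, rather than throwing. As a minimal sketch of what that means
for the cleanup path (a hypothetical helper, not the actual code at either
of those links):

import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.{FileSystem, Path}

// Hypothetical helper: safe for both the AM and the Client to call, in
// either order, because delete() on a missing path returns false instead
// of throwing.
def cleanupStagingDir(conf: Configuration, stagingDir: String): Unit = {
  val path = new Path(stagingDir)
  val fs = path.getFileSystem(conf)
  // Returns true if the path existed and was removed, false if there was
  // nothing to delete; recursive = true removes the contents as well.
  if (!fs.delete(path, true)) {
    println(s"Staging dir $stagingDir already gone, nothing to do")
  }
}

So even if the AM gets there first, the Client's delete is a no-op rather
than an error.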


On Sat, Jan 14, 2017 at 11:37 AM, Marcelo Vanzin <van...@cloudera.com> wrote:
> Are you actually seeing a problem or just questioning the code?
>
> I have never seen a situation where there's a failure because of that
> part of the current code.
>
> On Fri, Jan 13, 2017 at 3:24 AM, Rostyslav Sotnychenko
> <r.sotnyche...@gmail.com> wrote:
>> Hi all!
>>
>> I am a bit confused about why the Spark AM and the Client both try to
>> delete the staging directory.
>>
>> https://github.com/apache/spark/blob/branch-2.1/yarn/src/main/scala/org/apache/spark/deploy/yarn/Client.scala#L1110
>> https://github.com/apache/spark/blob/branch-2.1/yarn/src/main/scala/org/apache/spark/deploy/yarn/ApplicationMaster.scala#L233
>>
>> As you can see, when a job runs on YARN in cluster deploy mode, both the
>> AM and the Client will try to delete the staging directory once the job
>> succeeds, and one of them is bound to fail the delete because the other
>> one has already removed the directory.
>>
>> Shouldn't we add some check to the Client?
>>
>>
>> Thanks,
>> Rostyslav
>
>
>
> --
> Marcelo



-- 
Marcelo
