Hi Guys

Quick one: how does Spark deal with (i.e. create partitions for) large files sitting
on NFS, assuming all executors can see the file in exactly the same way?

ie, when I run

r = sc.textFile("file:///my/file")

what happens if the file is on NFS?

is there any difference from

r = sc.textFile("hdfs://my/file")

Is the input format used the same in both cases?
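For context, my rough understanding so far: textFile goes through Hadoop's TextInputFormat either way, and the number of partitions falls out of FileInputFormat's split-size calculation. A sketch of that calculation as I understand it (the function names here are mine for illustration, not Spark/Hadoop API, so corrections welcome):

```python
# Sketch of FileInputFormat's split sizing, which I believe sc.textFile
# relies on for both file:// and hdfs:// paths. Illustrative only.

def compute_split_size(total_size, min_partitions, block_size, min_split_size=1):
    """Mirrors computeSplitSize(): max(minSize, min(goalSize, blockSize)),
    where goalSize = totalSize / requested number of splits."""
    goal_size = total_size // max(min_partitions, 1)
    return max(min_split_size, min(goal_size, block_size))

def num_splits(total_size, split_size):
    # One split per full split_size chunk, plus one for any remainder.
    return total_size // split_size + (1 if total_size % split_size else 0)

# A 1 GiB file with HDFS-style 128 MiB blocks and minPartitions=2:
GIB = 1 << 30
MIB = 1 << 20
size = compute_split_size(GIB, 2, 128 * MIB)
print(num_splits(GIB, size))  # 8 splits of 128 MiB
```

If that's right, the difference for NFS would mostly be whatever block size the local filesystem reports, since there are no real HDFS blocks to split on. Is that the case?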

-- 
Best Regards,
Ayan Guha
