I logged into the master node of my cluster and referenced a file on the master 
node's local filesystem.  And yes, that file only resides on the master node, 
not on any of the remote workers.
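
Since the file only exists on the master, one common fix (a sketch; the HDFS 
paths below are assumptions, not from this thread) is to push it into HDFS so 
every executor can reach it, then read it with an hdfs:// URI instead of 
file://:

```shell
# Copy the master-local file into HDFS (destination path is an assumption)
hdfs dfs -put /root/2008.csv /user/root/2008.csv

# Verify it landed
hdfs dfs -ls /user/root/2008.csv
```

After that, `sc.textFile("hdfs:///user/root/2008.csv").count()` should work 
from any executor.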

-----Original Message-----
From: Sean Owen [mailto:so...@cloudera.com] 
Sent: Friday, December 11, 2015 1:00 PM
To: Lin, Hao
Cc: user@spark.apache.org
Subject: Re: how to access local file from Spark sc.textFile("file:///path to/myfile")

Hm, are you referencing a local file from your remote workers? That won't work 
as the file only exists in one machine (I presume).
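
For reference, a minimal sketch of the usual workaround when the file cannot go 
on a shared filesystem (assuming a running spark-shell; this code is not from 
the original thread): ship the master-local file to every executor with 
SparkContext.addFile, and resolve each node's local copy via SparkFiles.get 
inside the task:

```scala
import org.apache.spark.SparkFiles

// Ship the file that exists only on the driver/master to every executor's
// working directory.
sc.addFile("file:///root/2008.csv")

// Resolve the per-node copy inside a task (mapPartitions runs on the worker,
// so SparkFiles.get returns that worker's local path, not the driver's).
val lineCount = sc
  .parallelize(Seq(1), 1)
  .mapPartitions { _ =>
    scala.io.Source.fromFile(SparkFiles.get("2008.csv")).getLines()
  }
  .count()
```

This trades cluster-wide file distribution at job submission for not needing 
HDFS; for large inputs a shared filesystem is usually the better choice.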

On Fri, Dec 11, 2015 at 5:19 PM, Lin, Hao <hao....@finra.org> wrote:
> Hi,
>
>
>
> I have a problem accessing a local file, with this example:
>
> sc.textFile("file:///root/2008.csv").count()
>
> with error: File file:/root/2008.csv does not exist.
>
> The file clearly exists, since if I mistype the file name to a 
> non-existing one, it shows:
>
> Error: Input path does not exist
>
> Please help!
>
> The following is the error message:
>
> scala> sc.textFile("file:///root/2008.csv").count()
>
> 15/12/11 17:12:08 WARN TaskSetManager: Lost task 15.0 in stage 8.0 (TID 498, 10.162.167.24): java.io.FileNotFoundException: File file:/root/2008.csv does not exist
>         at org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:511)
>         at org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:724)
>         at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:501)
>         at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:397)
>         at org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSInputChecker.<init>(ChecksumFileSystem.java:137)
>         at org.apache.hadoop.fs.ChecksumFileSystem.open(ChecksumFileSystem.java:339)
>         at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:764)
>         at org.apache.hadoop.mapred.LineRecordReader.<init>(LineRecordReader.java:108)
>         at org.apache.hadoop.mapred.TextInputFormat.getRecordReader(TextInputFormat.java:67)
>         at org.apache.spark.rdd.HadoopRDD$$anon$1.<init>(HadoopRDD.scala:239)
>         at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:216)
>         at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:101)
>         at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:300)
>         at org.apache.spark.rdd.RDD.iterator(RDD.scala:264)
>         at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
>         at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:300)
>         at org.apache.spark.rdd.RDD.iterator(RDD.scala:264)
>         at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
>         at org.apache.spark.scheduler.Task.run(Task.scala:88)
>         at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>         at java.lang.Thread.run(Thread.java:745)
>
>
>
> 15/12/11 17:12:08 ERROR TaskSetManager: Task 9 in stage 8.0 failed 4 times; aborting job
>
> org.apache.spark.SparkException: Job aborted due to stage failure: Task 9 in stage 8.0 failed 4 times, most recent failure: Lost task 9.3 in stage 8.0 (TID 547, 10.162.167.23): java.io.FileNotFoundException: File file:/root/2008.csv does not exist
>
>         [same stack trace as above]
>
>
>
> Driver stacktrace:
>         at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1283)
>         at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1271)
>         at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1270)
>         at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
>         at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
>         at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1270)
>         at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:697)
>         at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:697)
>         at scala.Option.foreach(Option.scala:236)
>         at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:697)
>         at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1496)
>         at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1458)
>         at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1447)
>         at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
>         at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:567)
>         at org.apache.spark.SparkContext.runJob(SparkContext.scala:1824)
>         at org.apache.spark.SparkContext.runJob(SparkContext.scala:1837)
>         at org.apache.spark.SparkContext.runJob(SparkContext.scala:1850)
>         at org.apache.spark.SparkContext.runJob(SparkContext.scala:1921)
>         at org.apache.spark.rdd.RDD.count(RDD.scala:1125)
>         at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:25)
>         at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:30)
>         at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:32)
>         at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:34)
>         at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:36)
>         at $iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:38)
>         at $iwC$$iwC$$iwC$$iwC.<init>(<console>:40)
>         at $iwC$$iwC$$iwC.<init>(<console>:42)
>         at $iwC$$iwC.<init>(<console>:44)
>         at $iwC.<init>(<console>:46)
>         at <init>(<console>:48)
>         at .<init>(<console>:52)
>         at .<clinit>(<console>)
>         at .<init>(<console>:7)
>         at .<clinit>(<console>)
>         at $print(<console>)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:606)
>         at org.apache.spark.repl.SparkIMain$ReadEvalPrint.call(SparkIMain.scala:1065)
>         at org.apache.spark.repl.SparkIMain$Request.loadAndRun(SparkIMain.scala:1340)
>         at org.apache.spark.repl.SparkIMain.loadAndRunReq$1(SparkIMain.scala:840)
>         at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:871)
>         at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:819)
>         at org.apache.spark.repl.SparkILoop.reallyInterpret$1(SparkILoop.scala:857)
>         at org.apache.spark.repl.SparkILoop.interpretStartingWith(SparkILoop.scala:902)
>         at org.apache.spark.repl.SparkILoop.command(SparkILoop.scala:814)
>         at org.apache.spark.repl.SparkILoop.processLine$1(SparkILoop.scala:657)
>         at org.apache.spark.repl.SparkILoop.innerLoop$1(SparkILoop.scala:665)
>         at org.apache.spark.repl.SparkILoop.org$apache$spark$repl$SparkILoop$$loop(SparkILoop.scala:670)
>         at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply$mcZ$sp(SparkILoop.scala:997)
>         at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:945)
>         at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:945)
>         at scala.tools.nsc.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:135)
>         at org.apache.spark.repl.SparkILoop.org$apache$spark$repl$SparkILoop$$process(SparkILoop.scala:945)
>         at org.apache.spark.repl.SparkILoop.process(SparkILoop.scala:1059)
>         at org.apache.spark.repl.Main$.main(Main.scala:31)
>         at org.apache.spark.repl.Main.main(Main.scala)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:606)
>         at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:674)
>         at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:180)
>         at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:205)
>         at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:120)
>         at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
>
> Caused by: java.io.FileNotFoundException: File file:/root/2008.csv does not exist
>
>         [same stack trace as above]
>
>
>
> Confidentiality Notice:: This email, including attachments, may 
> include non-public, proprietary, confidential or legally privileged 
> information. If you are not an intended recipient or an authorized 
> agent of an intended recipient, you are hereby notified that any 
> dissemination, distribution or copying of the information contained in 
> or transmitted with this e-mail is unauthorized and strictly 
> prohibited. If you have received this email in error, please notify 
> the sender by replying to this message and permanently delete this 
> e-mail, its attachments, and any copies of it immediately. You should 
> not retain, copy or use this e-mail or any attachment for any purpose, nor 
> disclose all or any part of the contents to any other person.
> Thank you.

