I am using the standard readers and writers, I believe. When I run the app locally, Spark is able to write to HDFS, so I assume that accessing and reading MapR-FS should be doable as well.

Here is the piece of code I use for testing:
val list = List("dad", "mum", "brother", "sister")
val mlist = sc.parallelize(list)
mlist.saveAsTextFile("maprfs:///user/nelson/test")

And here is the stack trace:
14/05/26 16:02:54 WARN scheduler.TaskSetManager: Loss was due to java.lang.NullPointerException
java.lang.NullPointerException
        at org.apache.hadoop.fs.FileSystem.fixName(FileSystem.java:187)
        at org.apache.hadoop.fs.FileSystem.getDefaultUri(FileSystem.java:123)
        at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:115)
        at org.apache.hadoop.mapred.JobConf.getWorkingDirectory(JobConf.java:617)
        at org.apache.hadoop.mapred.FileInputFormat.setInputPaths(FileInputFormat.java:439)
        at org.apache.hadoop.mapred.FileInputFormat.setInputPaths(FileInputFormat.java:412)
        at org.apache.spark.SparkContext$$anonfun$15.apply(SparkContext.scala:391)
        at org.apache.spark.SparkContext$$anonfun$15.apply(SparkContext.scala:391)
        at org.apache.spark.rdd.HadoopRDD$$anonfun$getJobConf$1.apply(HadoopRDD.scala:111)
        at org.apache.spark.rdd.HadoopRDD$$anonfun$getJobConf$1.apply(HadoopRDD.scala:111)
        at scala.Option.map(Option.scala:145)
        at org.apache.spark.rdd.HadoopRDD.getJobConf(HadoopRDD.scala:111)
        at org.apache.spark.rdd.HadoopRDD$$anon$1.<init>(HadoopRDD.scala:154)
        at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:149)
        at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:64)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:241)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:232)
        at org.apache.spark.rdd.MappedRDD.compute(MappedRDD.scala:31)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:241)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:232)
        at org.apache.spark.rdd.FlatMappedRDD.compute(FlatMappedRDD.scala:33)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:241)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:232)
        at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:109)
        at org.apache.spark.scheduler.Task.run(Task.scala:53)
        at org.apache.spark.executor.Executor$TaskRunner$$anonfun$run$1.apply$mcV$sp(Executor.scala:211)
        at org.apache.spark.deploy.SparkHadoopUtil$$anon$1.run(SparkHadoopUtil.scala:42)
        at org.apache.spark.deploy.SparkHadoopUtil$$anon$1.run(SparkHadoopUtil.scala:41)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
        at org.apache.spark.deploy.SparkHadoopUtil.runAsUser(SparkHadoopUtil.scala:41)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:176)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:744)

The URI name seems to be the issue now, as I have managed to get rid of the serialization issue.
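
In case it helps frame the question, here is a minimal sketch of what I was planning to try next. It assumes the NullPointerException in FileSystem.getDefaultUri comes from the executors not seeing a default filesystem URI, and it simply sets the standard Hadoop property fs.default.name on the SparkContext's Hadoop configuration; the maprfs URI below is just my cluster's default, not something the error message confirms.

// Sketch only: explicitly set the default filesystem URI before touching
// maprfs paths, on the assumption that the NPE in FileSystem.getDefaultUri
// means this property is missing in the configuration the executors see.
sc.hadoopConfiguration.set("fs.default.name", "maprfs:///")

val list = List("dad", "mum", "brother", "sister")
sc.parallelize(list).saveAsTextFile("maprfs:///user/nelson/test")

I have not verified that this is sufficient; if the MapR Hadoop jars and core-site.xml are not on the executors' classpath, setting the property alone probably will not help.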

Regards,
Nelson


