Go to your Hadoop configuration directory, open core-site.xml, and check
the IP and port of HDFS given in the value of "fs.default.name".
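
For example, the entry might look like this (the address below just
mirrors the one in your stack trace; use whatever value your cluster
actually has):

    <property>
      <name>fs.default.name</name>
      <value>hdfs://10.85.85.17:9000</value>
    </property>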
Then specify the same IP and port in your code, in the format
hdfs://<ip>:<port>/
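
On the Spark side, something like this sketch (the master URL and the
input path are placeholders for illustration; the scheme://host:port
part is what has to match fs.default.name):

    import org.apache.spark.SparkContext

    object HdfsCheck {
      def main(args: Array[String]) {
        // Master URL and app name are placeholders; keep whatever you use now.
        val sc = new SparkContext("local[2]", "HdfsCheck")
        // hdfs://10.85.85.17:9000 must match fs.default.name exactly;
        // the file path itself is hypothetical.
        val data = sc.textFile("hdfs://10.85.85.17:9000/user/hadoop/ratings.dat")
        println(data.count())
        sc.stop()
      }
    }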

I hope it will work.


On Mon, Feb 10, 2014 at 2:14 PM, mohankreddy <mre...@beanatomics.com> wrote:

> I am getting the following error when trying to access my data using
> hdfs://....... Not sure how to fix this one.
>
> " java.io.IOException: Call to server1/10.85.85.17:9000 failed on local
> exception: java.io.EOFException
>         at org.apache.hadoop.ipc.Client.wrapException(Client.java:1107)
>         at org.apache.hadoop.ipc.Client.call(Client.java:1075)
>         at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:225)
>         at $Proxy8.getProtocolVersion(Unknown Source)
>         at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:396)
>         at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:379)
>         at
> org.apache.hadoop.hdfs.DFSClient.createRPCNamenode(DFSClient.java:119)
>         at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:238)
>         at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:203)
>         at
>
> org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:89)
>         at
> org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1386)
>         at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:66)
>         at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1404)
>         at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:254)
>         at org.apache.hadoop.fs.Path.getFileSystem(Path.java:187)
>         at
>
> org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:176)
>         at
>
> org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:208)
>         at
> org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:140)
>         at
> org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:207)
>         at
> org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:205)
>         at scala.Option.getOrElse(Option.scala:120)
>         at org.apache.spark.rdd.RDD.partitions(RDD.scala:205)
>         at org.apache.spark.rdd.MappedRDD.getPartitions(MappedRDD.scala:28)
>         at
> org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:207)
>         at
> org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:205)
>         at scala.Option.getOrElse(Option.scala:120)
>         at org.apache.spark.rdd.RDD.partitions(RDD.scala:205)
>         at org.apache.spark.rdd.MappedRDD.getPartitions(MappedRDD.scala:28)
>         at
> org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:207)
>         at
> org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:205)
>         at scala.Option.getOrElse(Option.scala:120)
>         at org.apache.spark.rdd.RDD.partitions(RDD.scala:205)
>         at org.apache.spark.mllib.recommendation.ALS.run(ALS.scala:139)
>         at org.apache.spark.mllib.recommendation.ALS$.main(ALS.scala:594)
>         at org.apache.spark.mllib.recommendation.ALS.main(ALS.scala)
> Caused by: java.io.EOFException
>         at java.io.DataInputStream.readInt(DataInputStream.java:375)
>         at
> org.apache.hadoop.ipc.Client$Connection.receiveResponse(Client.ja
