I’m using Spark 1.1.0 built for HDFS 2.4.

My application enables checkpointing (to HDFS 2.5.1) and it builds fine, but when I
run it I get the error below:

Exception in thread "main" org.apache.hadoop.ipc.RemoteException: Server IPC version 9 cannot communicate with client version 4
    at org.apache.hadoop.ipc.Client.call(Client.java:1070)
    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:225)
    at com.sun.proxy.$Proxy6.getProtocolVersion(Unknown Source)
    at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:396)
    at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:379)
    at org.apache.hadoop.hdfs.DFSClient.createRPCNamenode(DFSClient.java:119)
    at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:238)
    at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:203)
    at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:89)
    at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1386)
    at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:66)
    at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1404)
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:254)
    at org.apache.hadoop.fs.Path.getFileSystem(Path.java:187)
    at org.apache.spark.streaming.StreamingContext.checkpoint(StreamingContext.scala:201)

Does that mean I have to use HDFS 2.4 to save checkpoints? Thank you!
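For context, the checkpoint is set up along the usual Spark 1.x Streaming lines sketched below; the app name and the HDFS URL are illustrative placeholders, not the actual values from my application:

```scala
// Minimal sketch of a Spark 1.x Streaming app with checkpointing enabled.
// The checkpoint directory URL and app name are made up for illustration.
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

object CheckpointExample {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("checkpoint-example")
    val ssc = new StreamingContext(conf, Seconds(10))

    // This is the call that produces the stack trace above: Spark resolves
    // the checkpoint path through Hadoop's FileSystem API, so the HDFS
    // client bundled with Spark must speak the NameNode's IPC version.
    ssc.checkpoint("hdfs://namenode:8020/user/spark/checkpoints")

    // ... define input DStreams and transformations here, then:
    ssc.start()
    ssc.awaitTermination()
  }
}
```

The failure happens at `ssc.checkpoint(...)` before any streaming work starts, since that is the first point where the HDFS client contacts the NameNode.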
