"Server IPC version 7 cannot communicate with client version 4" means
your client is speaking the Hadoop 1.x IPC protocol while your cluster
is running Hadoop 2.x. The default Spark distribution is built against
Hadoop 1.x, so you would have to make your own build (or perhaps use
the artifacts distributed for CDH4.6? Those are certainly built
against Hadoop 2.)
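If you do build it yourself, the Spark docs for building against other
Hadoop versions describe passing the CDH version string to the build.
Something like the following should work (the exact version string is
illustrative; match it to your cluster):

```
# sbt build, overriding the Hadoop client version:
SPARK_HADOOP_VERSION=2.0.0-mr1-cdh4.4.0 sbt/sbt assembly

# or the Maven equivalent:
mvn -Dhadoop.version=2.0.0-mr1-cdh4.4.0 -DskipTests clean package
```

Then run your streaming job against the resulting assembly instead of
the stock Hadoop 1.x one.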

On Wed, Jul 16, 2014 at 10:32 AM, Juan Rodríguez Hortalá
<juan.rodriguez.hort...@gmail.com> wrote:
> Hi,
>
> I'm running a Java program using Spark Streaming 1.0.0 on the Cloudera 4.4.0
> quickstart virtual machine, with hadoop-client 2.0.0-mr1-cdh4.4.0 (the version
> corresponding to my Hadoop distribution, which works fine with other MapReduce
> programs), and with the Maven property
> <hadoop.version>2.0.0-mr1-cdh4.4.0</hadoop.version> configured according to
> http://spark.apache.org/docs/latest/hadoop-third-party-distributions.html.
> When I set
>
> jssc.checkpoint("hdfs://localhost:8020/user/cloudera/bicing/streaming_checkpoints");
>
>
> I get a "Server IPC version 7 cannot communicate with client version 4"
> error when running the program in local mode with "local[4]" as master. I have
> seen this problem before in other forums like
> this problem before in other forums like
> http://qnalist.com/questions/4957822/hdfs-server-client-ipc-version-mismatch-while-trying-to-access-hdfs-files-using-spark-0-9-1
> or http://comments.gmane.org/gmane.comp.lang.scala.spark.user/106, but the
> solution there is basically setting the property I have already set. I have
> also tried <hadoop-version>2.0.0-cdh4.4.0</hadoop-version> and
> <hadoop.major.version>2.0</hadoop.major.version>, with no luck.
>
> Could someone help me with this?
>
> Thanks a lot in advance
>
> Greetings,
>
> Juan
