Hello there,

Just set up an EC2 cluster with no HDFS, Hadoop, or HBase installed at all. I
only installed Spark, to read and process data from an HBase instance in a
different cluster. Spark was built against the HBase and Hadoop versions of the
remote (EC2) HBase cluster, which are 0.98.1 and 2.3.0 respectively.

However, I got the following error when running a simple test Python script.
The command line was:

./spark-submit --master spark://master:7077 \
  --driver-class-path ./spark-examples-1.1.0-hadoop2.3.0.jar \
  ~/workspace/test/sparkhbase.py
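For context, the test script follows the pattern of Spark's bundled
examples/src/main/python/hbase_inputformat.py. The sketch below shows roughly
the arguments it builds for SparkContext.newAPIHadoopRDD; the ZooKeeper quorum
host and table name are placeholders, not my real values:

```python
# Sketch of the newAPIHadoopRDD arguments used to scan an HBase table from
# PySpark, modeled on Spark 1.1's hbase_inputformat.py example. Returned as a
# plain dict so it can be inspected without a running SparkContext.

def hbase_read_args(zk_quorum, table):
    """Build keyword arguments for SparkContext.newAPIHadoopRDD to read
    the given HBase table via the remote cluster's ZooKeeper quorum."""
    return {
        "inputFormatClass": "org.apache.hadoop.hbase.mapreduce.TableInputFormat",
        "keyClass": "org.apache.hadoop.hbase.io.ImmutableBytesWritable",
        "valueClass": "org.apache.hadoop.hbase.client.Result",
        # Converters shipped in the spark-examples jar on the driver classpath:
        "keyConverter": ("org.apache.spark.examples.pythonconverters."
                         "ImmutableBytesWritableToStringConverter"),
        "valueConverter": ("org.apache.spark.examples.pythonconverters."
                           "HBaseResultToStringConverter"),
        "conf": {
            "hbase.zookeeper.quorum": zk_quorum,      # placeholder host
            "hbase.mapreduce.inputtable": table,      # placeholder table name
        },
    }
```

With a live SparkContext, the script then does roughly
`sc.newAPIHadoopRDD(**hbase_read_args("hbase-master.example.com", "test_table"))`.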

From the worker log, I can see the worker node got the request from the
master.

Can anyone help with this problem? Tons of thanks!


java.lang.IllegalStateException: unread block data
        at java.io.ObjectInputStream$BlockDataInputStream.setBlockDataMode(ObjectInputStream.java:2399)
        at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1378)
        at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1969)
        at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1893)
        at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1776)
        at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1346)
        at java.io.ObjectInputStream.readObject(ObjectInputStream.java:368)
        at org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:62)
        at org.apache.spark.serializer.JavaSerializerInstance.deserialize(JavaSerializer.scala:87)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:162)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:679)
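One thing I am not sure about: since the exception happens during task
deserialization on an executor, maybe the jar also needs to reach the
executors, not just the driver. If so, the submit command might need --jars as
well, something like this (untested sketch; the classpath-mismatch guess is my
assumption, not confirmed by the logs):

```shell
# Sketch only: same submit command, but also shipping the examples jar to the
# executors with --jars, in case the workers are missing those classes.
./spark-submit --master spark://master:7077 \
  --driver-class-path ./spark-examples-1.1.0-hadoop2.3.0.jar \
  --jars ./spark-examples-1.1.0-hadoop2.3.0.jar \
  ~/workspace/test/sparkhbase.py
```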



--
Sent from the Apache Spark User List mailing list archive at Nabble.com.

