I tried building with Java 6 and also tried the pre-built packages. I am still getting the same error.

It works fine when I run it on a Solaris machine with x86 architecture.

But it does not work on Solaris with SPARC architecture.
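
For reference, here is a quick way to confirm exactly which JVM and architecture each machine actually reports (just a diagnostic sketch, run in the same spark-shell or JVM that executes the job):

    // Diagnostic sketch: print the properties of the JVM running the job.
    // os.arch typically reports "x86"/"amd64" on the working machine and
    // "sparc"/"sparcv9" on the failing one.
    Seq("java.version", "java.vm.name", "os.name", "os.arch")
      .foreach(k => println(s"$k = ${sys.props(k)}"))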

Any ideas why this would happen?

Thanks,
Suman.

On 6/4/2014 10:48 AM, Suman Somasundar wrote:
I am building Spark myself, and I am using Java 7 both to build and to run it.

I will try with Java 6.

Thanks,
Suman.

On 6/3/2014 7:18 PM, Matei Zaharia wrote:
What Java version do you have, and how did you get Spark? Did you build it yourself by any chance, or download a pre-built package? If you build Spark yourself, you need to do it with Java 6; it's a known issue caused by the way Java 6 and 7 package JAR files. But I haven't seen it result in this particular error.
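
One quick way to check is to open the assembly JAR under the same JVM you run with and look at the class-file major versions inside it (50 = Java 6, 51 = Java 7). This is just a diagnostic sketch; pass the path to your own assembly JAR:

    import java.io.DataInputStream
    import java.util.jar.JarFile
    import scala.collection.JavaConverters._

    // Diagnostic sketch: enumerate a JAR's entries and report the
    // class-file major versions found (50 = Java 6, 51 = Java 7).
    object JarClassVersions {
      def main(args: Array[String]): Unit = {
        val jar = new JarFile(args(0))       // path to the assembly JAR
        val majors = jar.entries.asScala
          .filter(_.getName.endsWith(".class"))
          .take(50)                          // a sample of entries is enough
          .map { entry =>
            val in = new DataInputStream(jar.getInputStream(entry))
            in.readInt()                     // magic number 0xCAFEBABE
            in.readUnsignedShort()           // minor version
            val major = in.readUnsignedShort()
            in.close()
            major
          }.toSet
        println(s"class-file major versions: $majors")
        jar.close()
      }
    }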

Matei

On Jun 3, 2014, at 5:18 PM, Suman Somasundar <suman.somasun...@oracle.com> wrote:

Hi all,

I get the following exception when using Spark to run the example k-means program. I am using Spark 1.0.0 and running the program locally.

java.io.InvalidClassException: scala.Tuple2; invalid descriptor for field _1
        at java.io.ObjectStreamClass.readNonProxy(ObjectStreamClass.java:697)
        at java.io.ObjectInputStream.readClassDescriptor(ObjectInputStream.java:827)
        at java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1583)
        at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1514)
        at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1750)
        at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1347)
        at java.io.ObjectInputStream.readObject(ObjectInputStream.java:369)
        at org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:63)
        at org.apache.spark.serializer.DeserializationStream$$anon$1.getNext(Serializer.scala:125)
        at org.apache.spark.util.NextIterator.hasNext(NextIterator.scala:71)
        at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371)
        at org.apache.spark.util.CompletionIterator.hasNext(CompletionIterator.scala:30)
        at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:39)
        at org.apache.spark.Aggregator.combineCombinersByKey(Aggregator.scala:87)
        at org.apache.spark.rdd.PairRDDFunctions$$anonfun$combineByKey$3.apply(PairRDDFunctions.scala:101)
        at org.apache.spark.rdd.PairRDDFunctions$$anonfun$combineByKey$3.apply(PairRDDFunctions.scala:100)
        at org.apache.spark.rdd.RDD$$anonfun$14.apply(RDD.scala:582)
        at org.apache.spark.rdd.RDD$$anonfun$14.apply(RDD.scala:582)
        at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:229)
        at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:111)
        at org.apache.spark.scheduler.Task.run(Task.scala:51)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:187)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
        at java.lang.Thread.run(Thread.java:722)
Caused by: java.lang.IllegalArgumentException: illegal signature
        at java.io.ObjectStreamField.<init>(ObjectStreamField.java:119)
        at java.io.ObjectStreamClass.readNonProxy(ObjectStreamClass.java:695)
        ... 26 more
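
The failure is in plain Java deserialization of a scala.Tuple2, so one way to narrow it down is to round-trip a Tuple2 through Java serialization outside Spark. This is a minimal diagnostic sketch, not part of the k-means example:

    import java.io._

    // Diagnostic sketch: serialize and deserialize a scala.Tuple2 with
    // plain Java serialization, outside Spark, to see whether the
    // InvalidClassException reproduces on this JVM.
    object Tuple2RoundTrip {
      def main(args: Array[String]): Unit = {
        val buf = new ByteArrayOutputStream()
        val out = new ObjectOutputStream(buf)
        out.writeObject(("key", 42))   // a scala.Tuple2[String, Int]
        out.close()

        val in = new ObjectInputStream(new ByteArrayInputStream(buf.toByteArray))
        println(s"round-trip ok: ${in.readObject()}")
        in.close()
      }
    }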

Does anyone know why this is happening?

Thanks,
Suman.

