We hit a similar issue in our environment; the full stack trace is below. The
job works fine in local mode, but in cluster mode (even with the master and one
worker on the same node) we get this serialVersionUID mismatch. We are on
Spark 1.0.0, compiled with JDK 6.
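
In case it helps anyone debugging the same thing, here is a minimal diagnostic
sketch (not taken from our actual job, just written for illustration; it
assumes a SparkContext named sc is already available). It prints the
serialVersionUID that the driver JVM and an executor JVM each compute for the
class named in the error; differing values would confirm that the two JVMs are
loading differently compiled versions of the Spark classes:

    import java.io.ObjectStreamClass

    // Class named in the InvalidClassException below.
    val className = "org.apache.spark.SerializableWritable"

    // UID computed by the driver JVM from the classes on its own classpath.
    val driverUid =
      ObjectStreamClass.lookup(Class.forName(className)).getSerialVersionUID

    // UID computed inside an executor JVM, using the worker's classpath.
    val executorUid = sc.parallelize(Seq(className), 1)
      .map(name => ObjectStreamClass.lookup(Class.forName(name)).getSerialVersionUID)
      .first()

    println("driver UID = " + driverUid + ", executor UID = " + executorUid)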

Here is a link about serialVersionUID and why it should be used for
Serializable classes; it recommends explicitly defining a serialVersionUID in
every serializable class:
http://stackoverflow.com/questions/285793/what-is-a-serialversionuid-and-why-should-i-use-it
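
For example (just a sketch with a made-up class, not anything from Spark
itself), in Scala an explicit ID can be pinned with the @SerialVersionUID
annotation, so every build of the class agrees on the version no matter which
compiler produced it:

    // Hypothetical example: pin the serialVersionUID explicitly so that classes
    // compiled by different JDK/scalac versions can still deserialize each other's data.
    @SerialVersionUID(1L)
    class MyRecord(val id: Int, val name: String) extends Serializable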


14/06/05 09:52:18 WARN scheduler.TaskSetManager: Lost TID 9 (task 1.0:9)
14/06/05 09:52:18 WARN scheduler.TaskSetManager: Loss was due to java.io.InvalidClassException
java.io.InvalidClassException: org.apache.spark.SerializableWritable; local class incompatible: stream classdesc serialVersionUID = 6301214776158303468, local class serialVersionUID = -7785455416944904980
    at java.io.ObjectStreamClass.initNonProxy(ObjectStreamClass.java:630)
    at java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1600)
    at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1513)
    at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1749)
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1346)
    at java.io.ObjectInputStream.readObject(ObjectInputStream.java:365)
    at org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:40)
    at org.apache.spark.broadcast.HttpBroadcast$.read(HttpBroadcast.scala:165)
    at org.apache.spark.broadcast.HttpBroadcast.readObject(HttpBroadcast.scala:56)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:60)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:37)
    at java.lang.reflect.Method.invoke(Method.java:611)
    at java.io.ObjectStreamClass.invokeReadObject(ObjectStreamClass.java:1039)
    at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1866)
    at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1770)
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1346)
    at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1964)
    at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1888)
    at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1770)
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1346)
    at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1964)
    at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1888)
    at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1770)
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1346)
    at java.io.ObjectInputStream.readObject(ObjectInputStream.java:365)
    at scala.collection.immutable.$colon$colon.readObject(List.scala:362)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:60)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:37)
    at java.lang.reflect.Method.invoke(Method.java:611)
    at java.io.ObjectStreamClass.invokeReadObject(ObjectStreamClass.java:1039)
    at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1866)
    at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1770)
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1346)
    at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1964)
    at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1888)
    at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1770)
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1346)
    at java.io.ObjectInputStream.readObject(ObjectInputStream.java:365)
    at org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:40)
    at org.apache.spark.scheduler.ResultTask$.deserializeInfo(ResultTask.scala:63)
    at org.apache.spark.scheduler.ResultTask.readExternal(ResultTask.scala:139)
    at java.io.ObjectInputStream.readExternalData(ObjectInputStream.java:1809)
    at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1768)
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1346)
    at java.io.ObjectInputStream.readObject(ObjectInputStream.java:365)
    at org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:40)
    at org.apache.spark.serializer.JavaSerializerInstance.deserialize(JavaSerializer.scala:62)
    at org.apache.spark.executor.Executor$TaskRunner$$anonfun$run$1.apply$mcV$sp(Executor.scala:195)
    at org.apache.spark.deploy.SparkHadoopUtil.runAsUser(SparkHadoopUtil.scala:49)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:178)
    at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:897)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:919)
    at java.lang.Thread.run(Thread.java:738)




On Wed, Jun 4, 2014 at 12:30 PM, Suman Somasundar <
suman.somasun...@oracle.com> wrote:

>
> I tried building with Java 6 and also tried the pre-built packages. I am
> still getting the same error.
>
> It works fine when I run it on a machine with Solaris OS and x86
> architecture.
>
> But it does not work with Solaris OS and SPARC architecture.
>
> Any ideas, why this would happen?
>
> Thanks,
> Suman.
>
>
> On 6/4/2014 10:48 AM, Suman Somasundar wrote:
>
>> I am building Spark by myself and I am using Java 7 to both build and run.
>>
>> I will try with Java 6.
>>
>> Thanks,
>> Suman.
>>
>> On 6/3/2014 7:18 PM, Matei Zaharia wrote:
>>
>>> What Java version do you have, and how did you get Spark (did you build
>>> it yourself by any chance or download a pre-built one)? If you build Spark
>>> yourself you need to do it with Java 6 — it’s a known issue because of the
>>> way Java 6 and 7 package JAR files. But I haven’t seen it result in this
>>> particular error.
>>>
>>> Matei
>>>
>>> On Jun 3, 2014, at 5:18 PM, Suman Somasundar <
>>> suman.somasun...@oracle.com> wrote:
>>>
>>>> Hi all,
>>>>
>>>> I get the following exception when using Spark to run the example k-means
>>>> program. I am using Spark 1.0.0 and running the program locally.
>>>>
>>>> java.io.InvalidClassException: scala.Tuple2; invalid descriptor for field _1
>>>>         at java.io.ObjectStreamClass.readNonProxy(ObjectStreamClass.java:697)
>>>>         at java.io.ObjectInputStream.readClassDescriptor(ObjectInputStream.java:827)
>>>>         at java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1583)
>>>>         at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1514)
>>>>         at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1750)
>>>>         at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1347)
>>>>         at java.io.ObjectInputStream.readObject(ObjectInputStream.java:369)
>>>>         at org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:63)
>>>>         at org.apache.spark.serializer.DeserializationStream$$anon$1.getNext(Serializer.scala:125)
>>>>         at org.apache.spark.util.NextIterator.hasNext(NextIterator.scala:71)
>>>>         at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371)
>>>>         at org.apache.spark.util.CompletionIterator.hasNext(CompletionIterator.scala:30)
>>>>         at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:39)
>>>>         at org.apache.spark.Aggregator.combineCombinersByKey(Aggregator.scala:87)
>>>>         at org.apache.spark.rdd.PairRDDFunctions$$anonfun$combineByKey$3.apply(PairRDDFunctions.scala:101)
>>>>         at org.apache.spark.rdd.PairRDDFunctions$$anonfun$combineByKey$3.apply(PairRDDFunctions.scala:100)
>>>>         at org.apache.spark.rdd.RDD$$anonfun$14.apply(RDD.scala:582)
>>>>         at org.apache.spark.rdd.RDD$$anonfun$14.apply(RDD.scala:582)
>>>>         at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
>>>>         at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)
>>>>         at org.apache.spark.rdd.RDD.iterator(RDD.scala:229)
>>>>         at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:111)
>>>>         at org.apache.spark.scheduler.Task.run(Task.scala:51)
>>>>         at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:187)
>>>>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
>>>>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
>>>>         at java.lang.Thread.run(Thread.java:722)
>>>> Caused by: java.lang.IllegalArgumentException: illegal signature
>>>>         at java.io.ObjectStreamField.<init>(ObjectStreamField.java:119)
>>>>         at java.io.ObjectStreamClass.readNonProxy(ObjectStreamClass.java:695)
>>>>         ... 26 more
>>>>
>>>> Anyone know why this is happening?
>>>>
>>>> Thanks,
>>>> Suman.
>>>>
>>>
>>
>
