It's clearly saying:

java.io.InvalidClassException: org.apache.spark.storage.BlockManagerId;
local class incompatible: stream classdesc serialVersionUID =
2439208141545036836, local class serialVersionUID = -7366074099953117729

That's a version incompatibility. Can you double-check that your application and the cluster are running the same Spark version?
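
One quick way to confirm it (a rough sketch, assuming sc.version is available in your release; the "version-check" app name and the 1.2.0 shown below are only placeholders for whatever your cluster actually runs): print the version on both sides, and pin the spark-core dependency in your build to the same release as the cluster.

// 1) On the cluster, in spark-shell, check what the cluster runs:
//      scala> sc.version        // e.g. "1.2.0"

// 2) In the driver application, check what the app is linked against:
import org.apache.spark.{SparkConf, SparkContext}

val conf = new SparkConf().setAppName("version-check")   // placeholder app name
val sc   = new SparkContext(conf)
println("Driver is linked against Spark " + sc.version)

// 3) In build.sbt, keep spark-core in sync with the cluster (1.2.0 is a placeholder):
//      libraryDependencies += "org.apache.spark" %% "spark-core" % "1.2.0" % "provided"

If the two versions printed above differ, a BlockManagerId serialVersionUID mismatch like the one in your stack trace is exactly the kind of error you'd expect.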
On 18 Mar 2015 06:08, "Eason Hu" <eas...@gmail.com> wrote:

> Hi Akhil,
>
> sc.parallelize(1 to 10000).collect() in the Spark shell on Spark v1.2.0
> runs fine.  However, if I run the following remotely, it throws an
> exception:
>
> val sc : SparkContext = new SparkContext(conf)
>
>   val NUM_SAMPLES = 10
>   val count = sc.parallelize(1 to NUM_SAMPLES).map{i =>
>     val x = Math.random()
>     val y = Math.random()
>     if (x*x + y*y < 1) 1 else 0
>   }.reduce(_ + _)
>   println("Pi is roughly " + 4.0 * count / NUM_SAMPLES)
>
> Exception:
> 15/03/17 17:33:52 ERROR scheduler.TaskSchedulerImpl: Lost executor 1 on
> hcompute32228.sjc9.service-now.com: remote Akka client disassociated
> 15/03/17 17:33:52 INFO scheduler.TaskSetManager: Re-queueing tasks for 1
> from TaskSet 0.0
> 15/03/17 17:33:52 WARN scheduler.TaskSetManager: Lost task 1.1 in stage
> 0.0 (TID 3, hcompute32228): ExecutorLostFailure (executor lost)
> 15/03/17 17:33:52 INFO scheduler.DAGScheduler: Executor lost: 1 (epoch 3)
> 15/03/17 17:33:52 INFO storage.BlockManagerMasterActor: Trying to remove
> executor 1 from BlockManagerMaster.
> 15/03/17 17:33:52 INFO storage.BlockManagerMaster: Removed 1 successfully
> in removeExecutor
> 15/03/17 17:34:39 ERROR Remoting: org.apache.spark.storage.BlockManagerId;
> local class incompatible: stream classdesc serialVersionUID =
> 2439208141545036836, local class serialVersionUID = -7366074099953117729
> java.io.InvalidClassException: org.apache.spark.storage.BlockManagerId;
> local class incompatible: stream classdesc serialVersionUID =
> 2439208141545036836, local class serialVersionUID = -7366074099953117729
>     at java.io.ObjectStreamClass.initNonProxy(ObjectStreamClass.java:604)
>     at
> java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1620)
>     at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1515)
>
> v1.1.0 is totally fine, but v1.1.1 and v1.2.0+ are not.  Are there any
> special instructions for setting up a Spark cluster for later versions?  Do
> you know if there is anything I'm missing?
>
>
> Thank you for your help,
> Eason
>
>
>
>
>
> On Mon, Mar 16, 2015 at 11:51 PM, Akhil Das <ak...@sigmoidanalytics.com>
> wrote:
>
>> Could you tell me everything you did to change the version of Spark?
>>
>> Can you fire up a spark-shell, run this line, and see what happens:
>>
>> sc.parallelize(1 to 10000).collect()
>>
>>
>> Thanks
>> Best Regards
>>
>> On Mon, Mar 16, 2015 at 11:13 PM, Eason Hu <eas...@gmail.com> wrote:
>>
>>> Hi Akhil,
>>>
>>> Yes, I did change the version both in the project and on the cluster.  Any
>>> clues?
>>>
>>> Even the sample code from Spark website failed to work.
>>>
>>> Thanks,
>>> Eason
>>>
>>> On Sun, Mar 15, 2015 at 11:56 PM, Akhil Das <ak...@sigmoidanalytics.com>
>>> wrote:
>>>
>>>> Did you change both versions: the one in your project's build file and
>>>> the Spark version of your cluster?
>>>>
>>>> Thanks
>>>> Best Regards
>>>>
>>>> On Sat, Mar 14, 2015 at 6:47 AM, EH <eas...@gmail.com> wrote:
>>>>
>>>>> Hi all,
>>>>>
>>>>> I've been using Spark 1.1.0 for a while, and now would like to upgrade
>>>>> to
>>>>> Spark 1.1.1 or above.  However, it throws the following errors:
>>>>>
>>>>> 18:05:31.522 [sparkDriver-akka.actor.default-dispatcher-3hread] ERROR
>>>>> TaskSchedulerImpl - Lost executor 37 on hcompute001: remote Akka client
>>>>> disassociated
>>>>> 18:05:31.530 [sparkDriver-akka.actor.default-dispatcher-3hread] WARN
>>>>> TaskSetManager - Lost task 0.0 in stage 1.0 (TID 0, hcompute001):
>>>>> ExecutorLostFailure (executor lost)
>>>>> 18:05:31.567 [sparkDriver-akka.actor.default-dispatcher-2hread] ERROR
>>>>> TaskSchedulerImpl - Lost executor 3 on hcompute001: remote Akka client
>>>>> disassociated
>>>>> 18:05:31.568 [sparkDriver-akka.actor.default-dispatcher-2hread] WARN
>>>>> TaskSetManager - Lost task 1.0 in stage 1.0 (TID 1, hcompute001):
>>>>> ExecutorLostFailure (executor lost)
>>>>> 18:05:31.988 [sparkDriver-akka.actor.default-dispatcher-23hread] ERROR
>>>>> TaskSchedulerImpl - Lost executor 24 on hcompute001: remote Akka client
>>>>> disassociated
>>>>>
>>>>> Do you know what may have gone wrong?  I didn't change any code, just
>>>>> changed the version of Spark.
>>>>>
>>>>> Thank you all,
>>>>> Eason
>>>>>
>>>>>
>>>>>
>>>>>
>>>>
>>>
>>
>
