It works fine on my *Spark 1.1.0*
Thanks
Best Regards
On Mon, Oct 27, 2014 at 12:22 AM, octavian.ganea wrote:
> Hi Akhil,
>
> Please see this related message.
>
> http://apache-spark-user-list.1001560.n3.nabble.com/Bug-in-Accumulators-td17263.html
>
> I am curious if this works for you also.
>
> ...{ iter => { accum += 1; foo(iter) }}
>   .reduce(_ + _)
> println(accum.value)
>
> Now, if I remove the 'accum += 1', everything works fine. If I keep it, I
> get this weird error:
>
> Exception in thread "main" 14/10/25 21:58:56 INFO TaskSchedulerImpl:
> Cancelling stage 0
> org.apache.spark.SparkException: Job aborted due to stage failure: Task not
> serializable: java.io.NotSerializableException:
> org.apache.spark.SparkContext
>     at
> org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGSc
>> ...arrays ]
>>
>> it can succeed in the reduceByKey operation, but fails at the
>> collect operation, which confuses me.
>>
>> INFO DAGScheduler: Failed to run collect at KMeans.scala:235
>> [error] (run-main-0) org.apache.spark.SparkException: Job aborted: Task
>> not serializable: java.io.NotSerializableException:
>> org.apache.spark.SparkContext
>> org.apache.spark.SparkException: Job aborted: Task not serializable:
>> java.io.NotSerializableException: org.apache.spark.SparkContext
>>     at
>> org.apache.spark.scheduler.DAGScheduler$$anonfun$org$apache$spark$scheduler$DAGScheduler$$abortStage$1.apply(DAGScheduler.scala:1028)
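
For anyone hitting the same trace: the usual cause is that the task closure refers to a field of an enclosing object, so the whole object, SparkContext included, gets pulled into serialization. Below is a minimal sketch in plain Scala with no Spark dependency; `FakeContext`, `Driver`, `badClosure`, and `goodClosure` are hypothetical names invented for illustration, with `FakeContext` standing in for the non-serializable SparkContext.

```scala
import java.io.{ByteArrayOutputStream, NotSerializableException, ObjectOutputStream}
import java.util.concurrent.atomic.AtomicLong

// Hypothetical stand-in for SparkContext: a non-serializable object.
class FakeContext

class Driver extends Serializable {
  val ctx = new FakeContext // non-serializable field, like a SparkContext
  var count = 0L            // stand-in for a counter updated inside the closure

  // Bad: `count` is a field, so the lambda captures `this`, and
  // serializing the lambda drags `ctx` along with it.
  def badClosure: Int => Int =
    x => { count += 1; x + 1 }

  // Good: copy what the closure needs into a local val first;
  // the lambda then captures only the (serializable) local.
  def goodClosure: Int => Int = {
    val local = new AtomicLong(0L) // AtomicLong is java.io.Serializable
    x => { local.incrementAndGet(); x + 1 }
  }
}

// Round-trip through Java serialization, the same check Spark performs
// before shipping a task to executors.
def isSerializable(obj: AnyRef): Boolean =
  try {
    new ObjectOutputStream(new ByteArrayOutputStream).writeObject(obj)
    true
  } catch {
    case _: NotSerializableException => false
  }
```

With real Spark code the analogous fix is to reference the accumulator through a local val inside the method, rather than through a field of an object that also holds the SparkContext.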