Which Spark release are you using?
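
If it is handy, sc.version from spark-shell (or spark-submit --version) reports the
exact release running on the cluster. A minimal check, assuming shell access to a
driver node:

    // From spark-shell on the affected cluster; prints the running Spark release.
    println(sc.version)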

I wonder if the following may have fixed the problem:
SPARK-8029 Robust shuffle writer

JIRA is down at the moment, so I cannot check right now.
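
Also, if the NPE turns out to come from null keys or values in the records being
shuffled (one possible trigger for a NullPointerException in
ExternalSorter.insertAll, not something the trace alone confirms), a defensive
filter ahead of the shuffle may work around it until the root cause is found. A
minimal sketch, assuming a hypothetical pair RDD named events:

    import org.apache.spark.{SparkConf, SparkContext}

    object NullFilterBeforeShuffle {
      def main(args: Array[String]): Unit = {
        val sc = new SparkContext(new SparkConf().setAppName("null-filter-sketch"))

        // Hypothetical input: a pair RDD that may contain null keys or values.
        val events = sc.parallelize(
          Seq[(String, String)](("a", "1"), (null, "2"), ("b", null)))

        // Drop records with null keys or values before the shuffle,
        // so the sort/spill path in the shuffle writer never sees them.
        val cleaned = events.filter { case (k, v) => k != null && v != null }

        // Any shuffle-producing operation; reduceByKey here is just an example.
        val counts = cleaned.reduceByKey(_ + _)
        counts.collect().foreach(println)

        sc.stop()
      }
    }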

On Fri, Mar 11, 2016 at 11:01 PM, Saurabh Guru <saurabh.g...@gmail.com>
wrote:

> I am seeing the following exception in my Spark Cluster every few days in
> production.
>
> 2016-03-12 05:30:00,541 - WARN  TaskSetManager - Lost task 0.0 in stage
> 12528.0 (TID 18792, ip-1X-1XX-1-1XX.us-west-1.compute.internal):
> java.lang.NullPointerException
>        at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:192)
>        at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:64)
>        at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73)
>        at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
>        at org.apache.spark.scheduler.Task.run(Task.scala:89)
>        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:213)
>        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>        at java.lang.Thread.run(Thread.java:745)
>
>
> I have debugged this on a local machine but haven't been able to pinpoint the
> cause of the error. Does anyone know why this might occur? Any suggestions?
>
>
> Thanks,
> Saurabh
