For output this large, I would suggest doing the processing on the cluster
rather than on the driver (use the RDD API for that).
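
For example (just a rough sketch, assuming the collectAsMap() result is
used for key lookups; the RDD contents and output path below are made up),
a cluster-side join keeps the data on the executors:

    import org.apache.spark.{SparkConf, SparkContext}

    val sc = new SparkContext(new SparkConf().setAppName("avoid-collect"))

    // ~10M (key, value) pairs that would otherwise be collectAsMap()'ed
    val pairs = sc.parallelize(1 to 10000000).map(i => (i, i * 2L))
    // the dataset that needs the lookup
    val other = sc.parallelize(1 to 1000).map(i => (i, "row-" + i))

    // Instead of:
    //   val m = pairs.collectAsMap()   // ships all 10M entries to the driver
    //   other.map { case (k, s) => (s, m(k)) }
    // keep the lookup distributed:
    val joined = other.join(pairs)               // shuffle runs on the executors
    joined.saveAsTextFile("hdfs:///tmp/joined")  // made-up path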
If you really need the result on the driver, you can first save it to HDFS
and then read it back with the HDFS API, which avoids the Akka issue.
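
Roughly like this (again only a sketch, continuing the example above; the
path and the tab-separated format are assumptions). Reading through the
Hadoop FileSystem client streams the files instead of shipping the whole
result through the Akka channel that is timing out:

    import org.apache.hadoop.conf.Configuration
    import org.apache.hadoop.fs.{FileSystem, Path}
    import scala.io.Source

    // 1) Write the pairs from the executors, no driver round-trip:
    pairs.map { case (k, v) => k + "\t" + v }
      .saveAsTextFile("hdfs:///tmp/big-map")

    // 2) Stream the part files back on the driver via the HDFS API
    //    (streams left unclosed here for brevity):
    val fs  = FileSystem.get(new Configuration())
    val dir = new Path("hdfs:///tmp/big-map")
    val bigMap = fs.listStatus(dir).iterator
      .filter(_.getPath.getName.startsWith("part-"))
      .flatMap(st => Source.fromInputStream(fs.open(st.getPath)).getLines())
      .map { line =>
        val Array(k, v) = line.split('\t')
        (k.toInt, v.toLong)
      }
      .toMap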

On Fri, Nov 27, 2015 at 2:41 PM, Gylfi <gy...@berkeley.edu> wrote:

> Hi.
>
> I am doing very large collectAsMap() operations, about 10,000,000 records,
> and I am getting
> "org.apache.spark.SparkException: Error communicating with
> MapOutputTracker"
> errors..
>
> details:
> "org.apache.spark.SparkException: Error communicating with MapOutputTracker
>         at
> org.apache.spark.MapOutputTracker.askTracker(MapOutputTracker.scala:117)
>         at
>
> org.apache.spark.MapOutputTracker.getServerStatuses(MapOutputTracker.scala:164)
>         at
>
> org.apache.spark.shuffle.hash.BlockStoreShuffleFetcher$.fetch(BlockStoreShuffleFetcher.scala:42)
>         at
>
> org.apache.spark.shuffle.hash.HashShuffleReader.read(HashShuffleReader.scala:40)
>         at org.apache.spark.rdd.ShuffledRDD.compute(ShuffledRDD.scala:92)
>         at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
>         at org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
>         at
> org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:61)
>         at org.apache.spark.scheduler.Task.run(Task.scala:64)
>         at
> org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:203)
>         at
>
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>         at
>
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>         at java.lang.Thread.run(Thread.java:745)
> Caused by: org.apache.spark.SparkException: Error sending message [message
> =
> GetMapOutputStatuses(1)]
>         at
> org.apache.spark.util.AkkaUtils$.askWithReply(AkkaUtils.scala:209)
>         at
> org.apache.spark.MapOutputTracker.askTracker(MapOutputTracker.scala:113)
>         ... 12 more
> Caused by: java.util.concurrent.TimeoutException: Futures timed out after
> [300 seconds]
>         at
> scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:219)
>         at
> scala.concurrent.impl.Promise$DefaultPromise.result(Promise.scala:223)
>         at
> scala.concurrent.Await$$anonfun$result$1.apply(package.scala:107)
>         at
>
> scala.concurrent.BlockContext$DefaultBlockContext$.blockOn(BlockContext.scala:53)
>         at scala.concurrent.Await$.result(package.scala:107)
>         at
> org.apache.spark.util.AkkaUtils$.askWithReply(AkkaUtils.scala:195)
>         ... 13 more"
>
> I have already set the akka.timeout to 300, etc.
> Does anyone have any ideas on what the problem could be?
>
> Regards,
>     Gylfi.
>
> --
> View this message in context:
> http://apache-spark-user-list.1001560.n3.nabble.com/Optimizing-large-collect-operations-tp25498.html
> Sent from the Apache Spark User List mailing list archive at Nabble.com.
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
> For additional commands, e-mail: user-h...@spark.apache.org
>
>


-- 
Best Regards

Jeff Zhang
