Thanks, I’ll go ahead and disable that setting for now.

From: Aaron Davidson <ilike...@gmail.com>
Date: Wednesday, August 20, 2014 at 3:20 PM
To: Silvio Fiorito <silvio.fior...@granturing.com>
Cc: "user@spark.apache.org" <user@spark.apache.org>
Subject: Re: Stage failure in BlockManager due to FileNotFoundException on
long-running streaming job

This is likely due to a bug in shuffle file consolidation (which you have
enabled) that was hopefully fixed in 1.1 with this patch:
https://github.com/apache/spark/commit/78f2af582286b81e6dc9fa9d455ed2b369d933bd

Until 1.0.3 or 1.1 is released, the simplest workaround is to disable
spark.shuffle.consolidateFiles. (You could also try backporting the fixes
yourself if you really need consolidated files.)
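
For reference, here is a minimal sketch of turning the setting off when
building the SparkConf; the app name and the 10-second batch interval are
just placeholders for whatever your job uses:

    import org.apache.spark.SparkConf
    import org.apache.spark.streaming.{Seconds, StreamingContext}

    // Workaround sketch: explicitly disable shuffle file consolidation.
    // "ExampleStreamingJob" and Seconds(10) are placeholder values.
    val conf = new SparkConf()
      .setAppName("ExampleStreamingJob")
      .set("spark.shuffle.consolidateFiles", "false")

    val ssc = new StreamingContext(conf, Seconds(10))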


On Wed, Aug 20, 2014 at 9:28 AM, Silvio Fiorito
<silvio.fior...@granturing.com> wrote:
This is a long-running Spark Streaming job on YARN, Spark v1.0.2 on CDH5.
The job runs for about 34-37 hours and then dies with this
FileNotFoundException. There is very little CPU or RAM usage; I'm running
2 cores, 2 executors, 4g memory, in YARN cluster mode.


Here’s the stack trace that I pulled from the History server:

Job aborted due to stage failure: Task 34331.0:1 failed 4 times, most recent
failure: Exception failure in TID 902905 on host host05:

java.io.FileNotFoundException: /data02/yarn/nm/usercache/sfiorito/appcache/application_1402494159106_0524/spark-local-20140818181035-079a/29/merged_shuffle_9809_1_0 (No such file or directory)
    java.io.RandomAccessFile.open(Native Method)
    java.io.RandomAccessFile.<init>(RandomAccessFile.java:241)
    org.apache.spark.storage.DiskStore.getBytes(DiskStore.scala:98)
    org.apache.spark.storage.DiskStore.getValues(DiskStore.scala:124)
    org.apache.spark.storage.BlockManager.getLocalFromDisk(BlockManager.scala:332)
    org.apache.spark.storage.BlockFetcherIterator$BasicBlockFetcherIterator$$anonfun$getLocalBlocks$1.apply(BlockFetcherIterator.scala:204)
    org.apache.spark.storage.BlockFetcherIterator$BasicBlockFetcherIterator$$anonfun$getLocalBlocks$1.apply(BlockFetcherIterator.scala:203)
    scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
    scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
    org.apache.spark.storage.BlockFetcherIterator$BasicBlockFetcherIterator.getLocalBlocks(BlockFetcherIterator.scala:203)
    org.apache.spark.storage.BlockFetcherIterator$BasicBlockFetcherIterator.initialize(BlockFetcherIterator.scala:234)
    org.apache.spark.storage.BlockManager.getMultiple(BlockManager.scala:537)
    org.apache.spark.BlockStoreShuffleFetcher.fetch(BlockStoreShuffleFetcher.scala:76)
    org.apache.spark.rdd.CoGroupedRDD$$anonfun$compute$2.apply(CoGroupedRDD.scala:133)
    org.apache.spark.rdd.CoGroupedRDD$$anonfun$compute$2.apply(CoGroupedRDD.scala:123)
    scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:772)
    scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
    scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:108)
    scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:771)
    org.apache.spark.rdd.CoGroupedRDD.compute(CoGroupedRDD.scala:123)
    org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)
    org.apache.spark.rdd.RDD.iterator(RDD.scala:229)
    org.apache.spark.rdd.MappedValuesRDD.compute(MappedValuesRDD.scala:31)
    org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)
    org.apache.spark.rdd.RDD.iterator(RDD.scala:229)
    org.apache.spark.rdd.FlatMappedValuesRDD.compute(FlatMappedValuesRDD.scala:31)
    org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)
    org.apache.spark.rdd.RDD.iterator(RDD.scala:229)
    org.apache.spark.rdd.MappedRDD.compute(MappedRDD.scala:31)
    org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)
    org.apache.spark.rdd.RDD.iterator(RDD.scala:229)
    org.apache.spark.rdd.MappedRDD.compute(MappedRDD.scala:31)
    org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)
    org.apache.spark.rdd.RDD.iterator(RDD.scala:229)
    org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
    org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)
    org.apache.spark.rdd.RDD.iterator(RDD.scala:229)
    org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
    org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)
    org.apache.spark.rdd.RDD.iterator(RDD.scala:229)
    org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:111)
    org.apache.spark.scheduler.Task.run(Task.scala:51)
    org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:183)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:744)

Driver stacktrace:

