Check your worker logs for the exact reason. Since the job ran fine for two
days before failing, the most likely cause is that a worker ran out of disk
space or memory, in which case the receiver's in-memory input blocks can be
dropped before the batch that needs them is processed (hence "block
input-0-... not found"). Looking at the worker logs will give you a clear
picture.
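
If it does turn out that the receiver is dropping blocks, one common knob is
the storage level of the input stream. A minimal sketch along these lines may
help (the host, port, and output path are placeholders, not from your code):
passing an explicit disk-backed storage level means a block evicted from
executor memory can still be read back from disk instead of failing the task.

    import org.apache.spark.SparkConf
    import org.apache.spark.storage.StorageLevel
    import org.apache.spark.streaming.{Seconds, StreamingContext}

    val conf = new SparkConf().setAppName("SaveStream")
    val ssc = new StreamingContext(conf, Seconds(1))

    // Placeholder host/port. The point is the storage level:
    // MEMORY_AND_DISK_SER_2 spills received blocks to disk (with one
    // replica) instead of keeping them only in memory, so they survive
    // memory pressure on the executor.
    val lines = ssc.socketTextStream("host", 9999,
      StorageLevel.MEMORY_AND_DISK_SER_2)

    // Placeholder HDFS prefix; each batch is written out as a
    // prefix-<batch time> directory.
    lines.saveAsTextFiles("hdfs:///tmp/streamed")

    ssc.start()
    ssc.awaitTermination()

Note this only mitigates block loss; if the processing time per batch is
consistently longer than the batch interval, blocks will keep piling up and
you would need to scale out or reduce the batch workload as well.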

Thanks
Best Regards

On Mon, Dec 8, 2014 at 12:49 PM, Hafiz Mujadid <hafizmujadi...@gmail.com>
wrote:

> I am facing the following exception while saving a DStream to HDFS:
>
>
> 14/12/08 12:14:26 INFO DAGScheduler: Failed to run saveAsTextFile at
> DStream.scala:788
> 14/12/08 12:14:26 ERROR JobScheduler: Error running job streaming job
> 1418022865000 ms.0
> org.apache.spark.SparkException: Job aborted due to stage failure: Task 10
> in stage 4.0 failed 4 times, most recent failure: Lost task 10.3 in stage
> 4.0 (TID 117, server2): java.lang.Exception: Could not compute split, block
> input-0-1418022857000 not found
>         org.apache.spark.rdd.BlockRDD.compute(BlockRDD.scala:51)
>         org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)
>         org.apache.spark.rdd.RDD.iterator(RDD.scala:229)
>         org.apache.spark.rdd.MappedRDD.compute(MappedRDD.scala:31)
>         org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)
>         org.apache.spark.CacheManager.getOrCompute(CacheManager.scala:61)
>         org.apache.spark.rdd.RDD.iterator(RDD.scala:227)
>         org.apache.spark.rdd.UnionRDD.compute(UnionRDD.scala:87)
>         org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)
>         org.apache.spark.rdd.RDD.iterator(RDD.scala:229)
>         org.apache.spark.rdd.MappedRDD.compute(MappedRDD.scala:31)
>         org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)
>         org.apache.spark.rdd.RDD.iterator(RDD.scala:229)
>         org.apache.spark.rdd.MappedRDD.compute(MappedRDD.scala:31)
>         org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)
>         org.apache.spark.rdd.RDD.iterator(RDD.scala:229)
>         org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:62)
>         org.apache.spark.scheduler.Task.run(Task.scala:54)
>         org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:180)
>         java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>         java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>         java.lang.Thread.run(Thread.java:745)
>
>
>
> The code was running fine two days ago, but now I am facing this error.
> What could be the reason?
>
> --
> View this message in context:
> http://apache-spark-user-list.1001560.n3.nabble.com/spark-Exception-while-performing-saveAsTextFiles-tp20573.html
> Sent from the Apache Spark User List mailing list archive at Nabble.com.
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
> For additional commands, e-mail: user-h...@spark.apache.org
>
>
