[ https://issues.apache.org/jira/browse/SPARK-5001?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14263694#comment-14263694 ]
Saisai Shao commented on SPARK-5001:
------------------------------------

Hi [~hanhg], I don't think this is a problem in Spark Streaming. From my understanding, this exception is usually caused by an unstable running state, where the processing delay grows larger than the batch interval the application can tolerate. IMO you should tune your application rather than modify Spark Streaming in this way. The patch you submitted is not a proper fix from my understanding; it would break the internal logic of Spark Streaming.

> BlockRDD removed unreasonablly in streaming
> -------------------------------------------
>
>                  Key: SPARK-5001
>                  URL: https://issues.apache.org/jira/browse/SPARK-5001
>              Project: Spark
>           Issue Type: Bug
>     Affects Versions: 1.0.2, 1.1.1, 1.2.0
>             Reporter: hanhonggen
>          Attachments: fix_bug_BlockRDD_removed_not_reasonablly_in_streaming.patch
>
> I counted messages using the Kafka input stream of Spark 1.1.1. The test app failed when a later batch job completed sooner than the previous one. In the source code, BlockRDDs older than (time - rememberDuration) are removed in clearMetadata after a job completes.
> The previous job then aborts because its block is not found. The relevant logs are:
>
> 2014-12-25 14:07:12 (Logging.scala:59) [sparkDriver-akka.actor.default-dispatcher-14] INFO: Starting job streaming job 1419487632000 ms.0 from job set of time 1419487632000 ms
> 2014-12-25 14:07:15 (Logging.scala:59) [sparkDriver-akka.actor.default-dispatcher-14] INFO: Starting job streaming job 1419487635000 ms.0 from job set of time 1419487635000 ms
> 2014-12-25 14:07:15 (Logging.scala:59) [sparkDriver-akka.actor.default-dispatcher-15] INFO: Finished job streaming job 1419487635000 ms.0 from job set of time 1419487635000 ms
> 2014-12-25 14:07:15 (Logging.scala:59) [sparkDriver-akka.actor.default-dispatcher-16] INFO: Removing blocks of RDD BlockRDD[3028] at createStream at TestKafka.java:144 of time 1419487635000 ms from DStream clearMetadata
> java.lang.Exception: Could not compute split, block input-0-1419487631400 not found for 3028

--
This message was sent by Atlassian JIRA (v6.3.4#6332)
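The race the reporter describes (a later batch finishing first, triggering cleanup that drops blocks a still-running earlier batch needs) can be sketched with a small simulation. This is a hypothetical model, not Spark's actual internals: `BlockStore`, `clear_metadata`, and the second block id are illustrative names; only the batch times, the first block id, and the "remove everything at or before (completed time - rememberDuration)" rule come from the issue.

```python
# Hypothetical sketch of the cleanup race in SPARK-5001.
# Assumption: rememberDuration equals the 3000 ms batch interval seen in the logs.
REMEMBER_DURATION = 3000  # ms

class BlockStore:
    """Illustrative stand-in for the per-batch block metadata."""

    def __init__(self):
        self.blocks = {}  # batch_time (ms) -> block id

    def add(self, batch_time, block_id):
        self.blocks[batch_time] = block_id

    def clear_metadata(self, completed_time):
        # Mirror the behaviour described in the issue: after a job for
        # `completed_time` finishes, drop blocks older than
        # (completed_time - rememberDuration).
        cutoff = completed_time - REMEMBER_DURATION
        for t in [t for t in self.blocks if t <= cutoff]:
            del self.blocks[t]

store = BlockStore()
store.add(1419487632000, "input-0-1419487631400")  # 14:07:12 batch (from logs)
store.add(1419487635000, "input-0-example")        # 14:07:15 batch (id illustrative)

# The 14:07:15 batch finishes first and triggers cleanup...
store.clear_metadata(1419487635000)

# ...so the still-running 14:07:12 batch can no longer find its block,
# matching "Could not compute split, block input-0-1419487631400 not found".
assert 1419487632000 not in store.blocks
assert 1419487635000 in store.blocks
```

This also shows why Saisai's advice targets the application: if processing keeps up with the batch interval, batches complete in order and the cutoff never outruns a running job's blocks.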