Hi TD,

I am using Spark Streaming to consume data from Kafka, do some
aggregation, and ingest the results into RDS. I use foreachRDD in the
program. I am planning to use Spark Streaming in our production pipeline,
and it performs well in generating the results. Unfortunately, the
production pipeline has to run 24/7, and the Spark Streaming job usually
fails after 8-20 hours with the exception "cannot compute split". In other
cases, the Kafka receiver fails and the program runs without producing any
results.
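
For reference, here is a simplified sketch of the pattern I use (the Kafka
parameters, the JDBC connection details, and the parseKey step are
placeholders):

import java.sql.DriverManager
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka.KafkaUtils

val conf = new SparkConf().setAppName("kafka-aggregation")
val ssc = new StreamingContext(conf, Seconds(60))  // 1-minute batches

// One Kafka receiver; the stream yields (key, message) pairs.
val lines = KafkaUtils.createStream(
  ssc, "zk-host:2181", "my-group", Map("my-topic" -> 1)).map(_._2)

// Per-batch aggregation, then one JDBC connection per partition into RDS.
lines.map(line => (parseKey(line), 1L)).reduceByKey(_ + _).foreachRDD { rdd =>
  rdd.foreachPartition { part =>
    val conn = DriverManager.getConnection(jdbcUrl, dbUser, dbPass)
    val stmt = conn.prepareStatement("INSERT INTO counts (k, v) VALUES (?, ?)")
    part.foreach { case (k, v) =>
      stmt.setString(1, k); stmt.setLong(2, v); stmt.executeUpdate()
    }
    stmt.close(); conn.close()
  }
}

ssc.start()
ssc.awaitTermination()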

In my pipeline, the batch interval is 1 minute and the data volume from
Kafka is 3 GB per minute. I have been struggling with this issue for more
than a month. It would be great if you could suggest some solutions. Thanks!

Bill


On Wed, Nov 26, 2014 at 5:35 PM, Tathagata Das <tathagata.das1...@gmail.com>
wrote:

> Can you elaborate on the usage pattern that leads to "cannot compute
> split"? Are you using the RDDs generated by a DStream outside the
> DStream logic, e.g. running interactive Spark jobs (independent of the
> Spark Streaming ones) on RDDs generated by DStreams? If that is the
> case, what is happening is that Spark Streaming is not aware that some
> of the RDDs (and the raw input data they need) will be used by Spark
> jobs unrelated to Spark Streaming. Hence Spark Streaming actively
> clears off the raw data, leading to failures in the unrelated Spark
> jobs that use it.
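>
> For example, a (hypothetical) pattern like this would hit the problem,
> assuming counts is a DStream[(String, Long)] and ssc is the
> StreamingContext:
>
> import org.apache.spark.rdd.RDD
>
> // Stash a generated RDD outside the DStream logic ...
> var saved: RDD[(String, Long)] = null
> counts.foreachRDD { rdd => saved = rdd }
> ssc.start()
>
> // ... and use it later from an unrelated interactive job. By then,
> // Spark Streaming may already have cleared the raw input it depends on.
> saved.count()  // can throw "cannot compute split"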
>
> If this is your use case, the cleanest way to solve it is to ask Spark
> Streaming to "remember" stuff for longer, using
> streamingContext.remember(<duration>). This ensures that Spark
> Streaming keeps everything around for at least that duration.
> Hope this helps.
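>
> A minimal sketch (the 5-minute value is just an example):
>
> import org.apache.spark.streaming.Minutes
>
> // Keep generated RDDs (and the input they depend on) for at least
> // 5 minutes, so jobs outside the DStream logic can still use them.
> ssc.remember(Minutes(5))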
>
> TD
>
> On Wed, Nov 26, 2014 at 5:07 PM, Bill Jay <bill.jaypeter...@gmail.com>
> wrote:
> > Just to add one more point: if Spark Streaming knows when an RDD will
> > not be used any more, I believe Spark will not try to retrieve data it
> > no longer needs. However, in practice I often encounter the "cannot
> > compute split" error. Based on my understanding, this is because Spark
> > cleared out data that would be used again. In my case, the data volume
> > (30 MB/s, with a 60-second batch interval) is much smaller than the
> > memory (20 GB per executor). If Spark only dropped RDDs that are no
> > longer in use, I would not expect this error to happen.
> >
> > Bill
> >
> > On Wed, Nov 26, 2014 at 4:02 PM, Tathagata Das
> > <tathagata.das1...@gmail.com> wrote:
> >>
> >> Let me further clarify Lalit's point on when RDDs generated by
> >> DStreams are destroyed, and hopefully that will answer your original
> >> questions.
> >>
> >> 1. How does Spark (Streaming) guarantee that all the actions are
> >> taken on each input RDD/batch?
> >> This isn't hard! By the time you call streamingContext.start(), you
> >> have already set up the output operations (foreachRDD, saveAs***Files,
> >> etc.) that you want to perform on the DStream. There are RDD actions
> >> inside the DStream output operations that need to be run every batch
> >> interval. So all the system does is this: after every batch interval,
> >> it puts all the output operations (which will invoke RDD actions) in a
> >> job queue, and then keeps executing jobs from the queue. If there is
> >> any failure in running the jobs, the streaming context stops.
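> >>
> >> A sketch of that ordering (the DStream names are hypothetical, with
> >> lines an input DStream[String]):
> >>
> >> val counts = lines.map(w => (w, 1L)).reduceByKey(_ + _)
> >> counts.foreachRDD { rdd => println(rdd.count()) }  // output op, registered
> >>
> >> ssc.start()  // from here on, every batch interval the registered
> >>              // output operations are queued and run as Spark jobs
> >> ssc.awaitTermination()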
> >>
> >> 2. How does Spark determine that the life-cycle of an RDD is
> >> complete? Is there any chance that an RDD will be cleaned out of RAM
> >> before all actions are taken on it?
> >> Spark Streaming knows when all the processing related to batch T has
> >> been completed. It also keeps track of how much of the previous data
> >> it needs to remember and keep around in the cache, based on which
> >> DStream operations have been set up. For example, if you are using a
> >> 1-minute window, the system knows that it needs to keep around at
> >> least the last 1 minute of data in memory. Accordingly, it cleans up
> >> the input data (actively unpersisted) and the cached RDDs (simply
> >> dereferenced from the DStream metadata, and then unpersisted by Spark
> >> as the RDD objects get garbage-collected by the JVM).
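> >>
> >> A sketch of the window case, assuming pairs is a DStream[(String,
> >> Long)] on a 10-second batch interval:
> >>
> >> import org.apache.spark.streaming.Minutes
> >>
> >> // A 1-minute window over 10-second batches: the system must keep at
> >> // least the last 6 batches of data around until the window moves on.
> >> val windowed = pairs.reduceByKeyAndWindow(_ + _, Minutes(1))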
> >>
> >> TD
> >>
> >>
> >>
> >> On Wed, Nov 26, 2014 at 10:10 AM, tian zhang
> >> <tzhang...@yahoo.com.invalid> wrote:
> >> > I have found this paper, which seems to answer most of the
> >> > questions about RDD lifetimes:
> >> >
> >> > https://www.cs.berkeley.edu/~matei/papers/2012/hotcloud_spark_streaming.pdf
> >> >
> >> > Tian
> >> >
> >> >
> >> > On Tuesday, November 25, 2014 4:02 AM, Mukesh Jha
> >> > <me.mukesh....@gmail.com> wrote:
> >> >
> >> >
> >> > Hey Experts,
> >> >
> >> > I wanted to understand in detail the lifecycle of RDD(s) in a
> >> > streaming app.
> >> >
> >> > From my current understanding:
> >> > - An RDD gets created out of the realtime input stream.
> >> > - Transformations are applied lazily on that RDD to transform it
> >> > into other RDD(s).
> >> > - Actions are taken on the final transformed RDDs to get the data
> >> > out of the system.
> >> >
> >> > Also, RDD(s) are stored in the cluster's RAM (disk if configured so)
> >> > and are cleaned up in LRU fashion.
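> >> >
> >> > In code, my mental model is roughly this (a made-up example;
> >> > outputPath is a placeholder):
> >> >
> >> > val input = ssc.socketTextStream("host", 9999)  // one RDD per batch
> >> > val counts = input.flatMap(_.split(" "))        // lazy transformations
> >> >   .map((_, 1L))
> >> >   .reduceByKey(_ + _)
> >> > counts.foreachRDD(rdd => rdd.saveAsTextFile(outputPath))  // action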
> >> >
> >> > So I have the following questions on the same:
> >> > - How does Spark (Streaming) guarantee that all the actions are
> >> > taken on each input RDD/batch?
> >> > - How does Spark determine that the life-cycle of an RDD is
> >> > complete? Is there any chance that an RDD will be cleaned out of RAM
> >> > before all actions are taken on it?
> >> >
> >> > Thanks in advance for all your help. Also, I'm relatively new to
> >> > Scala & Spark, so pardon me if these are naive questions/assumptions.
> >> >
> >> > --
> >> > Thanks & Regards,
> >> > Mukesh Jha
> >> >
> >> >
> >>
> >> ---------------------------------------------------------------------
> >> To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
> >> For additional commands, e-mail: user-h...@spark.apache.org
> >>
> >
>
