Hi,

I have an application in which an RDD is updated with tuples coming
from the RDDs in a DStream, using the following pattern.

dstream.foreachRDD(rdd => {
  myRDD = myRDD.union(rdd.filter(myfilter)).reduceByKey(_+_)
})

I'm using cache() and checkpointing to cache the results. Over time, the
lineage of myRDD keeps growing, and the number of stages in each batch of
the DStream keeps increasing, even though all the earlier stages are
skipped. Once the number of stages grows large enough, the overall delay
starts increasing due to scheduling delay, while the processing time for
each batch stays roughly constant.
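
For reference, here is a simplified, self-contained sketch of the setup.
The socket source, the myfilter predicate, the checkpoint directory, and
the checkpoint interval below are just placeholders, not my real code:

import org.apache.spark.SparkConf
import org.apache.spark.rdd.RDD
import org.apache.spark.streaming.{Seconds, StreamingContext}

object RunningAggregate {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("running-aggregate").setMaster("local[2]")
    val ssc  = new StreamingContext(conf, Seconds(10))
    val sc   = ssc.sparkContext
    sc.setCheckpointDir("/tmp/spark-checkpoints")        // illustrative path

    val myfilter = (kv: (String, Long)) => kv._2 > 0     // placeholder predicate

    var myRDD: RDD[(String, Long)] = sc.emptyRDD[(String, Long)]
    var batchCount = 0L

    val dstream = ssc.socketTextStream("localhost", 9999)  // placeholder source
      .map(line => (line, 1L))

    dstream.foreachRDD { rdd =>
      myRDD = myRDD.union(rdd.filter(myfilter)).reduceByKey(_ + _)
      myRDD.cache()                        // keep the running result in memory
      batchCount += 1
      if (batchCount % 10 == 0) {          // checkpoint every N batches (illustrative)
        myRDD.checkpoint()
        myRDD.count()                      // force evaluation so the checkpoint is written
      }
    }

    ssc.start()
    ssc.awaitTermination()
  }
}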

The following figures illustrate the problem:

Job execution: https://i.imgur.com/GVHeXH3.png?1


Delays: https://i.imgur.com/1DZHydw.png?1


Is there some pattern that I can use to avoid this?

Regards,
Anand