bq. streamingContext.remember("duration") did not help
Can you give a bit more detail on the above?
Did you mean the job encountered OOME later on?
Which Spark release are you using?
I tried these 2 global settings (and restarted the app) after enabling
cache for stream1:
conf.set("spark.streaming.unpersist", "true")
streamingContext.remember(Seconds(batchDuration * 4))
The batch duration is 4 sec.
Using spark-1.4.1. The application runs for about 4-5 hrs, then we see
out-of-memory errors.
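For context, here is a minimal sketch of where those two settings would sit in the app setup. This is an assumed skeleton (app name and variable names are illustrative), not the original code:

```scala
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

val batchDuration = 4  // seconds, matching the 4 sec batches above

val conf = new SparkConf()
  .setAppName("streaming-app")  // hypothetical app name
  // Eagerly unpersist generated RDDs once the DStream graph no longer needs them
  .set("spark.streaming.unpersist", "true")

val ssc = new StreamingContext(conf, Seconds(batchDuration))
// Keep each batch's RDDs around for at most 4 batch intervals
// before they become eligible for cleanup
ssc.remember(Seconds(batchDuration * 4))
```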
Cheers
On Wed, Feb 17, 2016 at 6:03 PM, ramach1776 wrote:
We have a streaming application containing approximately 12 jobs every batch,
running in streaming mode (4 sec batches). Each job has several
transformations and 1 action (output to Cassandra), which triggers
execution of the job (DAG).
For example, the first job:
/job 1
---> receive Stream A
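A job of this shape might look roughly like the sketch below: a few transformations on a received stream, ending in the single Cassandra output action that triggers the DAG. The stream element type, the specific transformations, the keyspace/table names, and the use of spark-cassandra-connector's saveToCassandra are all assumptions, not the original code:

```scala
import com.datastax.spark.connector.streaming._
import org.apache.spark.streaming.dstream.DStream

// Hypothetical version of "job 1": transform Stream A, write to Cassandra.
def job1(streamA: DStream[String]): Unit = {
  streamA
    .map(line => line.split(","))           // transformation 1: parse
    .filter(fields => fields.length >= 2)   // transformation 2: drop bad rows
    .map(fields => (fields(0), fields(1)))  // transformation 3: key/value pairs
    .saveToCassandra("my_keyspace", "my_table")  // the single action per job
}
```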