: Tuesday, June 9, 2015 5:28 PM
To: Shao, Saisai; user
Subject: RE: [SparkStreaming 1.3.0] Broadcast failure after setting
spark.cleaner.ttl
Jerry, I agree with you.
However, in my case, I kept monitoring the blockmanager folder. I do see the number of files decrease sometimes, but the folder's total size kept increasing.
-----Original Message-----
From: Shao, Saisai [mailto:saisai.s...@intel.com]
Sent: Tuesday, June 09, 2015 4:33 PM
To: Haopu Wang; user
Subject: RE: [SparkStreaming 1.3.0] Broadcast failure after setting
spark.cleaner.ttl
Hi,
Are you restarting your Spark streaming context through getOrCreate?
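For context, StreamingContext.getOrCreate recovers a context from checkpoint data when a checkpoint exists, and only runs the user's setup function otherwise. A minimal stand-in for that recover-or-create pattern, sketched without Spark (checkpoint state stored as JSON; all names here are illustrative, not Spark APIs):

```python
import json
import os

def get_or_create(checkpoint_dir, create_fn):
    """Recover state from checkpoint_dir if present, else create it fresh.

    Mimics the shape of StreamingContext.getOrCreate: the create function
    runs only when no checkpoint exists on disk.
    """
    path = os.path.join(checkpoint_dir, "state.json")
    if os.path.exists(path):
        with open(path) as f:
            return json.load(f)  # recovery path: create_fn is NOT called
    state = create_fn()          # first run: build the state, then checkpoint it
    os.makedirs(checkpoint_dir, exist_ok=True)
    with open(path, "w") as f:
        json.dump(state, f)
    return state
```

On a restart the second call finds the checkpoint and skips the create function, which is why a restarted streaming job can end up referencing objects that were created before the restart.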
On 9 Jun 2015 09:30, Haopu Wang <hw...@qilinsoft.com> wrote:
When I ran a Spark Streaming application for a long time, I noticed the local directory's size kept increasing.
I set spark.cleaner.ttl to 1800 seconds in order
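For reference, the TTL described above would be set in spark-defaults.conf (or via SparkConf.set before creating the context); the 1800-second value is the one from the message:

```
# spark-defaults.conf
# Periodic-cleaner TTL discussed in this thread; entries older than this
# (in seconds) are eligible for deletion. Deprecated and removed in Spark 2.x.
spark.cleaner.ttl   1800
```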
From the stack trace I think this problem may be due to the deletion of the broadcast variable: since you set spark.cleaner.ttl, the old broadcast variable is deleted once that timeout passes, and you will hit this exception when you try to use it again after the time limit.
Basically I think