If memory cannot hold all the cached RDD blocks, the BlockManager
will evict older blocks (in LRU order) to make room for new RDD blocks.
Blocks cached with MEMORY_ONLY that get evicted are simply recomputed
from lineage the next time they are needed. Separately, the
ContextCleaner will automatically unpersist an RDD once it is
garbage-collected on the driver (enabled by default via
spark.cleaner.referenceTracking), so you don't strictly have to call
unpersist yourself, though doing so releases memory sooner.
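For reference, a minimal sketch of explicit caching and unpersisting
(this assumes an existing SparkContext named `sc`; it needs a Spark
runtime to execute):

```scala
import org.apache.spark.storage.StorageLevel

// Assumes an existing SparkContext `sc`.
val rdd = sc.parallelize(1 to 1000000)

// Cache in memory only. If storage memory fills up, the BlockManager
// evicts older cached blocks (LRU); evicted MEMORY_ONLY partitions are
// recomputed from lineage when accessed again.
rdd.persist(StorageLevel.MEMORY_ONLY)
rdd.count() // first action materializes the cache

// Explicitly and immediately release the cached blocks;
// blocking = true waits until all blocks are removed.
rdd.unpersist(blocking = true)
```

If you never call unpersist, the cached blocks stay until they are
evicted under memory pressure or the RDD is cleaned up automatically.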


Hope that helps.

2015-06-24 13:22 GMT+08:00 bit1...@163.com <bit1...@163.com>:

> I am kind of confused about when a cached RDD will unpersist its data. I
> know we can explicitly unpersist it with RDD.unpersist, but can it be
> unpersisted automatically by the Spark framework?
> Thanks.
>
> ------------------------------
> bit1...@163.com
>



-- 
王海华
