Hi all,

On https://spark.apache.org/docs/latest/programming-guide.html
under the "RDD Persistence > Removing Data" section, it states:

"Spark automatically monitors cache usage on each node and drops out old
> data partitions in a least-recently-used (LRU) fashion."


Can this be understood to mean that the cache is automatically refreshed with
new data? If yes, when and how does that happen? And how does Spark determine
which data is "old"?
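For concreteness, here is a minimal sketch (Scala, with a placeholder input
path) of what I mean by "refreshing" the cache. My current assumption is that
cached partitions are not re-read from the source automatically, so the only
way to pick up new data is to unpersist and re-cache manually; please correct
me if the LRU mechanism handles this on its own.

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.storage.StorageLevel

object CacheRefreshSketch {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(
      new SparkConf().setAppName("cache-refresh-sketch").setMaster("local[*]"))

    // Cache an RDD built from some source path (placeholder path).
    var data = sc.textFile("/data/input").persist(StorageLevel.MEMORY_ONLY)
    println(data.count()) // first action materializes the cached partitions

    // Assumption: to see newly arrived source data, drop the old cache
    // explicitly and rebuild it, rather than relying on LRU eviction.
    data.unpersist()
    data = sc.textFile("/data/input").persist(StorageLevel.MEMORY_ONLY)
    println(data.count())

    sc.stop()
  }
}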

Regards.
