Hi, I'm using local mode. I read a text file as an RDD using the
JavaSparkContext.textFile() API and then called cache() on the resulting RDD.
Looking at the Storage page, I see that the RDD has 3 partitions, but only
2 of them have been cached.
Is this normal behavior? I assumed all of the partitions would be cached.
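For reference, the setup described above can be sketched roughly as follows (a minimal example; the file path and app name are placeholders, not from the original message):

```java
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;

public class CacheExample {
    public static void main(String[] args) {
        // Local mode, using all available cores
        SparkConf conf = new SparkConf()
                .setMaster("local[*]")
                .setAppName("CacheExample");
        JavaSparkContext sc = new JavaSparkContext(conf);

        // Read a text file; Spark splits it into partitions
        JavaRDD<String> lines = sc.textFile("input.txt");

        // cache() uses StorageLevel.MEMORY_ONLY: partitions that do not
        // fit in memory are not stored at all and are recomputed on demand
        lines.cache();

        // Trigger an action so the partitions are actually materialized
        System.out.println(lines.count());

        sc.close();
    }
}
```

After the `count()` action runs, the Storage page of the web UI shows how many of the RDD's partitions were actually cached.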
Yes, it's normal. It happens when there is not enough memory to hold the
third partition, as your attached picture shows. With the default storage
level used by cache() (MEMORY_ONLY), partitions that don't fit in memory are
simply not cached and will be recomputed from the source when needed.
Thanks
Jerry
From: Haopu Wang [mailto:hw...@qilinsoft.com]
Sent: Tuesday, July 22, 2014 3:09 PM
To: user@spark.apache.org
Subject: number of Cached Partitions v.s. Total Partitions