I am new to Spark, and I keep hearing that RDDs can be persisted to memory or
disk after each checkpoint. I wonder why RDDs are persisted in memory at all:
in case of a node failure, how would you access that node's memory to
reconstruct the RDD? Persisting to disk makes sense, because it is like
persisting to a network file system (HDFS, for example), where each block has
multiple replicas across nodes; if a node goes down, the RDD can still be
reconstructed by reading the required blocks from the other nodes and
recomputing the lost partitions. So my biggest question is: are RDDs ever
persisted to disk?
