I don't think Spark Streaming currently supports window operations that exceed
its available memory. Internally, Spark Streaming keeps all the data belonging
to the effective window in memory; if memory is insufficient, the BlockManager
will evict blocks using an LRU policy, so some of the window's data may be lost.
The default persistence level is MEMORY_AND_DISK, so under the LRU policy
evicted blocks are spilled to disk rather than dropped, and the streaming app
will not fail. However, since data will constantly be read in and out of disk
as windows are processed, the performance won't be great. So it is best to have
sufficient memory to hold the full window.
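To make the behavior above explicit, here is a minimal sketch of a windowed DStream with its persistence level set by hand. The host, port, and window/slide durations are illustrative assumptions, not from the original discussion; `window` and `persist(StorageLevel.MEMORY_AND_DISK_SER)` are standard Spark Streaming APIs.

```scala
import org.apache.spark.SparkConf
import org.apache.spark.storage.StorageLevel
import org.apache.spark.streaming.{Seconds, StreamingContext}

object WindowPersistenceSketch {
  def main(args: Array[String]): Unit = {
    // local[2]: one thread for the receiver, one for processing (assumption for a local run)
    val conf = new SparkConf().setAppName("WindowPersistenceSketch").setMaster("local[2]")
    val ssc  = new StreamingContext(conf, Seconds(10))

    // Hypothetical source: lines from a local socket
    val lines = ssc.socketTextStream("localhost", 9999)

    // 5-minute window, sliding every minute (illustrative values)
    val windowed = lines.window(Seconds(300), Seconds(60))

    // Spill evicted window blocks to disk instead of recomputing or losing them;
    // serialized storage trades CPU for a smaller memory footprint
    windowed.persist(StorageLevel.MEMORY_AND_DISK_SER)

    windowed.count().print()

    ssc.start()
    ssc.awaitTermination()
  }
}
```

Serialized storage (`MEMORY_AND_DISK_SER`) keeps more of the window resident in memory than deserialized objects would, which reduces how often blocks cycle through disk, but the safest option remains sizing executor memory to the full window.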