Swapping is pretty bad here, especially because a JVM-based process won't even feel
the memory pressure and try to GC or shrink the heap when the OS is the one
under memory pressure. It's probably relatively worse than in M/R because Spark
uses memory more heavily. Enough grinding in swap will cause tasks to fail,
typically due to timeouts.
This may seem like a silly question, but it really isn’t.
In terms of Map/Reduce, it's possible to oversubscribe the cluster because
jobs are relatively insensitive to the servers swapping memory to disk.
In terms of HBase, which is very sensitive, swap doesn't just kill performance:
a RegionServer that grinds in swap can miss its ZooKeeper heartbeats, lose its
session, and be taken down entirely.
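To make the discussion above concrete, here is a minimal sketch of how you might check whether a Linux worker node is actively swapping and discourage the kernel from doing so. It assumes standard Linux tooling (`vmstat`, `/proc`, `sysctl`); the `pgrep` pattern for finding the Spark JVM is just an illustrative guess and would need adjusting for your deployment.

```shell
# Watch swap activity for a few seconds: nonzero si/so columns
# (pages swapped in/out per second) mean processes are grinding in swap.
vmstat 1 5

# Check how much of a particular JVM has been paged out.
# The 'java.*spark' pattern is a placeholder for your executor process.
pid=$(pgrep -f 'java.*spark' | head -1)
[ -n "$pid" ] && grep VmSwap "/proc/$pid/status"

# Discourage the kernel from swapping anonymous pages. HBase and Hadoop
# tuning guides commonly recommend a low value (often 0 or 1).
sudo sysctl vm.swappiness=1
```

Setting `vm.swappiness` low doesn't eliminate swap, but it tells the kernel to prefer dropping page cache over paging out process memory, which is usually the right trade-off for latency-sensitive JVMs like HBase RegionServers.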