Obviously, you can't hold 900GB of data in 80GB of memory.
Spark handles this with a mechanism called disk spill: when the data being
processed no longer fits in memory, the excess is written (spilled) out to local disk.
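For example, when you cache a dataset with a storage level that allows disk,
partitions that don't fit in memory are written to local disk instead of the job
failing (spill during shuffles and aggregations works on the same principle and
happens automatically). A minimal Scala sketch, where "spark" is an existing
SparkSession and the input path is just a placeholder:

    import org.apache.spark.storage.StorageLevel

    // Hypothetical 900GB input; "spark" is an existing SparkSession.
    val df = spark.read.parquet("/data/events")

    // MEMORY_AND_DISK keeps partitions in memory while they fit and
    // spills the rest to local disk instead of failing the job.
    df.persist(StorageLevel.MEMORY_AND_DISK)
    println(df.count())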

Also, Spark doesn't use the whole executor memory for storing data; a fraction
of it is reserved for execution (processing and shuffling) and for Spark's
internal data structures.
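That split is controlled by the unified memory settings. A rough sketch of
where they are set; the values below are only the usual defaults written out
explicitly, not tuning advice:

    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder()
      .appName("memory-demo")
      // Fraction of usable heap shared by execution and storage (default 0.6).
      .config("spark.memory.fraction", "0.6")
      // Portion of that region set aside for cached (storage) data (default 0.5).
      .config("spark.memory.storageFraction", "0.5")
      .getOrCreate()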
For more detail, have a look at
https://0x0fff.com/spark-memory-management/

Hope this helps.

-----

Vikash Pareek
