Hi all,

I am planning to use Spark with HBase, generating an RDD by reading data
from an HBase table.
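
For context, this is roughly the read path I have in mind, using the standard `newAPIHadoopRDD` + `TableInputFormat` pattern (the table name "my_table" is just a placeholder):

```scala
import org.apache.hadoop.hbase.HBaseConfiguration
import org.apache.hadoop.hbase.client.Result
import org.apache.hadoop.hbase.io.ImmutableBytesWritable
import org.apache.hadoop.hbase.mapreduce.TableInputFormat
import org.apache.spark.{SparkConf, SparkContext}

object HBaseRead {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("hbase-read"))

    // Point TableInputFormat at the table ("my_table" is a placeholder).
    val hbaseConf = HBaseConfiguration.create()
    hbaseConf.set(TableInputFormat.INPUT_TABLE, "my_table")

    // One RDD partition per HBase region; rows are streamed through the
    // region scanners rather than loaded into memory up front.
    val rdd = sc.newAPIHadoopRDD(
      hbaseConf,
      classOf[TableInputFormat],
      classOf[ImmutableBytesWritable],
      classOf[Result])

    println(rdd.count())
    sc.stop()
  }
}
```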

I want to know: if the HBase table grows larger than the total RAM
available in the cluster, will the application fail, or will there only
be an impact on performance?

Any thoughts in this direction would be helpful and are welcome.

Thanks,
-Vibhor
