Hi,

I had a few questions regarding the way *newAPIHadoopRDD* accesses data
from HBase.

1. Does it load all the data from a scan operation directly into memory?
2. My understanding is that data from different regions is loaded into
different executors. Is that assumption/understanding correct?
3. If it does load all the data from the scan operation, what happens when
the data size exceeds the executor memory?
4. What happens when we have a huge number of column qualifiers for a given
row?
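
For context, this is roughly the invocation pattern I am asking about. This
is only a sketch under assumptions: the table name "my_table", the scanner
caching value, and the cluster setup are all placeholders, and it needs
hbase-client/hbase-mapreduce on the classpath plus a running HBase cluster:

```scala
import org.apache.hadoop.hbase.HBaseConfiguration
import org.apache.hadoop.hbase.client.Result
import org.apache.hadoop.hbase.io.ImmutableBytesWritable
import org.apache.hadoop.hbase.mapreduce.TableInputFormat
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("hbase-scan").getOrCreate()

val conf = HBaseConfiguration.create()
conf.set(TableInputFormat.INPUT_TABLE, "my_table") // hypothetical table
// Scanner caching bounds how many rows a single RPC pulls into memory,
// which is part of what question 1 is about.
conf.set("hbase.client.scanner.caching", "100")

// TableInputFormat produces one input split per table region, so Spark
// creates one partition per region; different regions can therefore be
// read by different executors (question 2).
val rdd = spark.sparkContext.newAPIHadoopRDD(
  conf,
  classOf[TableInputFormat],
  classOf[ImmutableBytesWritable],
  classOf[Result])

println(rdd.getNumPartitions) // roughly the number of regions in the table
```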


Thanks & Regards
Biplob Biswas
