The 4.1 GB table has only 3 regions, so at least 2 of your 5 nodes
don't host any region of this table.
Can you split this table into 12 (or more) regions?
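For reference, a sketch of how that could be done from the HBase shell (the table name 'mytable' and the split keys are placeholders, not from your setup):

```
# In the HBase shell: split each region of an existing table at its midpoint
split 'mytable'

# Or pre-split when (re)creating the table; split keys are illustrative
create 'mytable', 'cf', {SPLITS => ['row1000', 'row2000', 'row3000']}
```

Pre-splitting with keys matched to your row-key distribution usually gives a more even spread than repeated midpoint splits.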
BTW what's the value for spark.yarn.executor.memoryOverhead ?
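If it's unset, it can be passed on the command line; a minimal sketch (the 1024 MB value is just an example, tune it to your executor size):

```
# Off-heap overhead per executor, in MB (example value)
spark-submit \
  --conf spark.yarn.executor.memoryOverhead=1024 \
  ...
```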
Cheers
On Sat, Mar 14, 2015 at 10:52 AM, francexo83 wrote:
> Hi all,
I have the following cluster configurations:
- 5 nodes on a cloud environment.
- Hadoop 2.5.0.
- HBase 0.98.6.
- Spark 1.2.0.
- 8 cores and 16 GB of ram on each host.
- 1 NFS disk with 300 IOPS mounted on hosts 1 and 2.
- 1 NFS disk with 300 IOPS mounted on host