>
>
> If this is your setup, your HDFS' namenode is bound to OOM soon.
> (Namenode's
> memory consumption is proportional to the number of blocks on HDFS)
>
>
NN runs on the master and has 4GB, which should be good for a long time
given the number of blocks we have. The DN has 1GB, the TT 512MB, and the JT 1GB.
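For a rough sanity check on that 4GB figure, here is a back-of-envelope sketch. The ~150 bytes per namespace object is a commonly cited rule of thumb, not a number from this thread, and the two-objects-per-block assumption (one file entry plus one block entry) is likewise an approximation:

```python
# Back-of-envelope: how many blocks a given NameNode heap can track.
# Assumption (not from this thread): each namespace object (file or block)
# costs on the order of 150 bytes of NameNode heap.
BYTES_PER_OBJECT = 150  # rule-of-thumb estimate, not an exact figure

def max_blocks(heap_bytes, objects_per_block=2):
    """Approximate block capacity for a given NN heap.

    objects_per_block=2 assumes roughly one file entry per block.
    """
    return heap_bytes // (BYTES_PER_OBJECT * objects_per_block)

heap = 4 * 1024**3  # the 4GB NN heap mentioned above
print(f"~{max_blocks(heap):,} blocks")  # on the order of ten million
```

With tiny 64KB HDFS blocks (as tested below), that ceiling is reached with only a few hundred GB of data, which is why the quoted warning about NN memory applies.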



> I guess you meant "hfile.min.blocksize.size" in ? That is a different
> parameter from HDFS' block size, IMO. (need someone to confirm)
>
>
yes, the HBase and HDFS block sizes are two different params. We are testing
with 8KB HBase blocks (default 64KB) and 64KB HDFS blocks (default 64MB). Both
are much smaller than the defaults, but we have a random-read-heavy workload
and smaller blocks should help, provided the smaller sizes don't expose some
other bottleneck.
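For reference, the two sizes live in different config files. A sketch of the settings we are testing, assuming the parameter name quoted above (hfile.min.blocksize.size) for HBase and the classic dfs.block.size name for HDFS:

```xml
<!-- hbase-site.xml: HBase HFile block size (the parameter quoted above) -->
<property>
  <name>hfile.min.blocksize.size</name>
  <value>8192</value>      <!-- 8KB; default 65536 (64KB) -->
</property>

<!-- hdfs-site.xml: HDFS block size (a separate parameter) -->
<property>
  <name>dfs.block.size</name>
  <value>65536</value>     <!-- 64KB; default 67108864 (64MB) -->
</property>
```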

Smaller HBase blocks mean larger block indices and better random-read
performance, so it makes sense to trade some RAM for the block index since we
have plenty of RAM on our machines.
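The RAM-for-index tradeoff can be sketched with illustrative numbers. The 10GB store size and 64-byte average index entry (key plus offset) below are assumptions for the sake of the example, not measurements from our cluster:

```python
# Sketch of the RAM-for-index tradeoff: smaller HBase blocks mean more
# block-index entries held in memory. All figures are illustrative.
ENTRY_BYTES = 64  # assumed average index entry (key + offset), not measured

def index_bytes(store_bytes, block_bytes):
    """Approximate in-memory block-index size for a store of given size."""
    return (store_bytes // block_bytes) * ENTRY_BYTES

store = 10 * 1024**3  # a hypothetical 10GB of store files
for bs in (8 * 1024, 64 * 1024):  # 8KB (tested) vs 64KB (default)
    print(f"{bs // 1024}KB blocks -> ~{index_bytes(store, bs) // 2**20}MB index")
```

Going from 64KB to 8KB blocks grows the index eightfold, which is the RAM cost being traded for finer-grained random reads.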
