Hi,
I'm having problems doing client operations when my table is large.  I did
an initial test like this:
6 servers
6 GB heap size per server
20 million 1K recs (so ~3 GB per server)

I was able to do at least 5,000 random read/write operations per second.

I then increased my table size to
120 million 1K recs (so ~20 GB per server)

I then put a very light load of random reads on the table: 20 reads per
second.  I'm able to do a few, but within 10-20 seconds, they all fail.  I
found many errors of the following type in the hbase master log:

java.io.IOException: java.io.IOException: Could not obtain block:
blk_-7409743019137510182_39869
file=/hbase/.META./1028785192/info/2540865741541403627

If I wait about 5 minutes, I can repeat this sequence (do a few operations,
then get errors).
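In case it helps, the light read load is generated by a simple throttled loop roughly like the one below.  This is just a sketch: the actual HBase get() is stubbed out as a placeholder (doRandomRead), and the class/variable names are made up for illustration.

```java
import java.util.Random;

// Sketch of the throttled random-read driver used for the ~20 reads/sec test.
// The real HBase client call is stubbed out as doRandomRead(); in the actual
// test it would be a get() against the 120-million-row table.
public class ReadLoadSketch {
    static final int READS_PER_SECOND = 20;                    // target load
    static final long INTERVAL_MS = 1000L / READS_PER_SECOND;  // 50 ms between reads

    // Placeholder for the real HBase read: just picks a random row key
    // in [0, numRows).  floorMod keeps the result non-negative.
    static long doRandomRead(Random rng, long numRows) {
        return Math.floorMod(rng.nextLong(), numRows);
    }

    public static void main(String[] args) throws InterruptedException {
        Random rng = new Random(42);
        long numRows = 120_000_000L;           // table size from the large test
        for (int i = 0; i < 5; i++) {          // a few iterations for illustration
            long key = doRandomRead(rng, numRows);
            System.out.println("read row " + key);
            Thread.sleep(INTERVAL_MS);         // throttle to ~20 reads/sec
        }
    }
}
```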

If anyone has any suggestions or needs me to list particular settings, let
me know.  The odd thing is that I observe no problems and great performance
with a smaller table.

Thanks,
Adam

