See http://wiki.apache.org/hadoop/Hbase/Troubleshooting#A6.  That page
describes a different hdfs complaint, but make sure your ulimit is > 1024
(check the first or second line in the master log -- it prints what hbase is
seeing for ulimit).  Check that HDFS-127 is applied to the first hadoop that
hbase sees on the CLASSPATH (this is particularly important if your loading
script is a mapreduce task; clients might not be seeing the patched hadoop
that hbase ships with).  Also up the handler count for hdfs (the timeout
referred to there is no longer pertinent, I believe) and, while you are at
it, the handler counts for hbase too if you haven't changed them from the
defaults.  Also make sure you don't suffer from
http://wiki.apache.org/hadoop/Hbase/Troubleshooting#A5.
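
A quick way to check the ulimit items above (a minimal sketch; the master
log path is a placeholder, not your actual install path):

```shell
# File-descriptor limit the HBase process will inherit from this shell;
# the common default of 1024 is too low for a loaded regionserver.
ulimit -n

# The master log prints the ulimit HBase actually saw near the top, e.g.:
#   grep -i ulimit /path/to/hbase-master.log | head -2
```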

How many regions per regionserver?

Can you put a regionserver log somewhere I can pull it to take a look?

For a "Could not obtain block" message, what happens if you take the
filename -- 2540865741541403627 in the below -- and grep the NameNode log?
Does it tell you anything?
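
Something like this (a sketch -- the log path is a placeholder; point it at
your actual NameNode log, and demonstrated here against a sample line):

```shell
# Search the NameNode log for the store file named in the error:
#   grep 2540865741541403627 /path/to/hadoop-namenode.log
# Demo against a sample log line:
printf 'WARN NameNode: file=/hbase/.META./1028785192/info/2540865741541403627\n' \
  | grep 2540865741541403627
```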

St.Ack

On Sat, Dec 5, 2009 at 3:32 PM, Adam Silberstein <[email protected]>wrote:

> Hi,
> I'm having problems doing client operations when my table is large.  I did
> an initial test like this:
> 6 servers
> 6 GB heap size per server
> 20 million 1K recs (so ~3 GB per server)
>
> I was able to do at least 5,000 random read/write operations per second.
>
> I then increased my table size to
> 120 million 1K recs (so ~20 GB per server)
>
> I then put a very light load of random reads on the table: 20 reads per
> second.  I'm able to do a few, but within 10-20 seconds, they all fail.  I
> found many errors of the following type in the hbase master log:
>
> java.io.IOException: java.io.IOException: Could not obtain block:
> blk_-7409743019137510182_39869
> file=/hbase/.META./1028785192/info/2540865741541403627
>
> If I wait about 5 minutes, I can repeat this sequence (do a few operations,
> then get errors).
>
> If anyone has any suggestions or needs me to list particular settings, let
> me know.  The odd thing is that I observe no problems and great performance
> with a smaller table.
>
> Thanks,
> Adam
>
>
>
