Hi,

I am loading data into my HBase cluster and running into two issues -

During my import, I received the following exception ->

org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed
53484 actions: servers with issues: spock7001:60020,
        at
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatch(HConnectionManager.java:1220)
        at
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchOfPuts(HConnectionManager.java:1234)
        at
org.apache.hadoop.hbase.client.HTable.flushCommits(HTable.java:819)
        at org.apache.hadoop.hbase.client.HTable.doPut(HTable.java:675)
        at org.apache.hadoop.hbase.client.HTable.put(HTable.java:660)

May have cluster issues => true
Cause 0

When I check the logs on the regions server, the last thrown exception is
the following =>

Thu Jun 23 05:16:18 2011 GMT regionserver 10460-0@spock7001:0 [DEBUG] (IPC
Server handler 7 on 60020)
{ org.apache.hadoop.hbase.NotServingRegionException:
hbaseTable,,1308805558566.5aefc6c2b9599f55f8b40351a61db03c. is closing
Thu Jun 23 05:22:18 2011 GMT regionserver 10460-0@spock7001:0 [DEBUG]
(regionserver60020.logRoller) org.apache.hadoop.conf.Configuration:
java.io.IOException: config()

On running status 'detailed' in the shell, I get =>

0 regionsInTransition
3 live servers
   spock7001:60020 1308805454136
        requests=0, regions=0, usedHeap=470, maxHeap=910
    spock6002:60020 1308805434201
        requests=0, regions=1, usedHeap=550, maxHeap=910
        hbaseTable,,1308805558566.5aefc6c2b9599f55f8b40351a61db03c.
            stores=1, storefiles=2, storefileSizeMB=383, memstoreSizeMB=0,
storefileIndexSizeMB=1
    spock6001:60020 1308805268507
        requests=0, regions=2, usedHeap=90, maxHeap=910
        -ROOT-,,0
            stores=1, storefiles=1, storefileSizeMB=0, memstoreSizeMB=0,
storefileIndexSizeMB=0
        .META.,,1
            stores=1, storefiles=0, storefileSizeMB=0, memstoreSizeMB=0,
storefileIndexSizeMB=0
0 dead servers


I am issuing checkAndPut() calls to insert records into HBase. Is this a bug?
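
For reference, the insert path looks roughly like this (the column family
and qualifier names, and the null expected value, are placeholders rather
than my real schema):

import java.io.IOException;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

// Sketch of a single insert; "cf" and "q" stand in for the real column names.
void insertRecord(HTable table, byte[] rowKey, byte[] value) throws IOException {
    Put put = new Put(rowKey);
    put.add(Bytes.toBytes("cf"), Bytes.toBytes("q"), value);
    // Apply the Put only if the cell is currently absent (expected value == null).
    table.checkAndPut(rowKey, Bytes.toBytes("cf"), Bytes.toBytes("q"), null, put);
}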

Secondly, I have followed the instructions in the HBase book for increasing
write throughput. I have the following settings for my HBase table:

Configuration config = HBaseConfiguration.create();
HTable table = new HTable(config, "hbaseTable");
table.setAutoFlush(false);            // buffer puts on the client side
table.setWriteBufferSize(104857600);  // 100 MB client write buffer
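
At the end of the load I flush whatever is still buffered and close the
table, roughly like this (the real code wraps it in a finally block):

table.flushCommits();  // push any buffered puts to the region servers
table.close();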

However, according to my logs, each checkAndPut() call takes an average of
5 milliseconds. Is this unavoidable overhead due to locking?
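
The 5 ms figure comes from timing each call roughly like this (rowKey and
put are the same placeholders as in the sketch above):

long start = System.nanoTime();
table.checkAndPut(rowKey, Bytes.toBytes("cf"), Bytes.toBytes("q"), null, put);
long elapsedMs = (System.nanoTime() - start) / 1000000L;  // value written to my logs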

All of my HBase daemons are running with a -Xmx1g heap.

Any help is appreciated.

Thank you,

Sam
