Hi, all,
    I have two region servers set up, and each machine has around 32G of
memory. Each region server is started with a 12G JVM heap limit. Recently I
have been running a map-reduce job that writes a big chunk of data into an
HBase table. The job runs for around 10 hours, and the final HBase table
will be big.
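    In case it matters: the mappers write through the normal client API
(TableOutputFormat-style Puts). Below is a simplified sketch of the write
path; the class name, input parsing, and column names are placeholders, not
our real code:

  import java.io.IOException;
  import org.apache.hadoop.hbase.client.Put;
  import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
  import org.apache.hadoop.hbase.util.Bytes;
  import org.apache.hadoop.io.LongWritable;
  import org.apache.hadoop.io.Text;
  import org.apache.hadoop.mapreduce.Mapper;

  // Simplified stand-in for our real mapper: one Put per input line.
  public class WriteMapper
      extends Mapper<LongWritable, Text, ImmutableBytesWritable, Put> {
    @Override
    protected void map(LongWritable key, Text value, Context context)
        throws IOException, InterruptedException {
      String[] fields = value.toString().split("\t");
      Put put = new Put(Bytes.toBytes(fields[0]));        // row key
      put.add(Bytes.toBytes("cf"), Bytes.toBytes("col"),  // family/qualifier
              Bytes.toBytes(fields[1]));
      context.write(new ImmutableBytesWritable(put.getRow()), put);
    }
  }

The driver wires this to the table with
TableMapReduceUtil.initTableReducerJob("test_table", null, job), so every
Put goes straight to the region servers.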
    Now I have found that the region server's memory usage keeps increasing
while the map-reduce job is running. When the memory reaches the 12G limit,
the region server dies. From the log, it seems that the compaction/split
fails due to the memory problem:
  2012-02-15 21:39:58,013 ERROR org.apache.hadoop.hbase.regionserver.CompactSplitThread: Compaction/Split failed for region test_table,aliramos,1329364709394.5e1b41c1ea5e87d75fbac2e5fb26e68b.

  I know very little about the internal implementation of HBase, so could
you give me some suggestions on the following questions:
   1. Why does the memory usage of the region server keep increasing? Is it
simply because I am writing a large amount of data into the HBase table?
Which parts of HBase use more memory as the table grows? Are there any
configuration options that could alleviate this problem?
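   For example, would lowering the global memstore and block-cache
fractions give the server more headroom? Something like the following in
hbase-site.xml (the values here are just my guess; the defaults I quote are
from the 0.90 docs):

  <property>
    <name>hbase.regionserver.global.memstore.upperLimit</name>
    <value>0.35</value> <!-- default 0.4: max fraction of heap for all memstores -->
  </property>
  <property>
    <name>hbase.regionserver.global.memstore.lowerLimit</name>
    <value>0.30</value> <!-- default 0.35: flush target under global pressure -->
  </property>
  <property>
    <name>hfile.block.cache.size</name>
    <value>0.15</value> <!-- default 0.2: fraction of heap for the block cache -->
  </property>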

   2. Why does the region server die? Is it because GC is not quick enough
to free memory for HBase? I assume that writing data and
compacting/splitting all need to allocate new memory, and if GC cannot keep
up, these operations will simply hit an exception and cause the region
server to die. Is that right?
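   Related to that: if allocation is outrunning the collector, is tuning GC
the recommended fix? For instance, something like this in hbase-env.sh
(flags based on general CMS tuning advice I have read, not verified by us):

  export HBASE_OPTS="$HBASE_OPTS -XX:+UseConcMarkSweepGC \
      -XX:CMSInitiatingOccupancyFraction=70 \
      -XX:+CMSParallelRemarkEnabled"

The idea would be to start concurrent collections earlier (at 70% old-gen
occupancy) so the collector keeps pace with the write load.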


We are using HBase 0.90.1, and this problem really bothers us a lot. I hope
you can give us some suggestions.

Thank you very much

Tianwei
