>> You wrote you have 24GB available, why only give 3 to HBase? You don't like 
>> him?
Hehe..
So, are you running 64-bit Java then? I thought 32-bit Java doesn't allow more
than around 3-4 GB of heap..
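In case it helps, a quick way to check which JVM is in use; a 64-bit build
announces itself in the version banner:

  $ java -version
  # a 64-bit JVM mentions "64-Bit Server VM" in its banner; if that string
  # is missing, the JVM is 32-bit and the heap tops out around 3-4GB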

On 6/9/10 11:26 AM, "Jean-Daniel Cryans" <jdcry...@apache.org> wrote:

On Wed, Jun 9, 2010 at 11:16 AM, Vidhyashankar Venkataraman
<vidhy...@yahoo-inc.com> wrote:
> What do you mean by pastebinning it? I will try hosting it on a webserver..

pastebin.com

>
> I know that OOME is Java running out of heap space: can you let me know
> what the usual causes are for OOME happening in HBase? Was I pounding the
> servers a bit too hard with updates?

Well you know why it happens then ;)

So, since you sent me the log in private until you get it somewhere
public, I see this very important line:

2010-06-09 02:31:23,874 INFO
org.apache.hadoop.hbase.regionserver.HRegionServer: Dump of metrics:
request=0.0, regions=467, stores=933, storefiles=3272,
storefileIndexSize=0, memstoreSize=1181, compactionQueueSize=59,
usedHeap=2796, maxHeap=2999, blockCacheSize=681651688,
blockCacheFree=261889816, blockCacheCount=7, blockCacheHitRatio=61,
fsReadLatency=0, fsWriteLatency=0, fsSyncLatency=0

This is output by the region servers when they shut down. As you can
see, the used heap is 2796MB out of a 2999MB maximum, and the OOME was
triggered during:

2010-06-09 02:31:23,874 FATAL
org.apache.hadoop.hbase.regionserver.HRegionServer: Set stop flag in
regionserver/74.6.71.59:60020.compactor
java.lang.OutOfMemoryError: Java heap space
        at java.nio.HeapByteBuffer.<init>(HeapByteBuffer.java:39)
        at java.nio.ByteBuffer.allocate(ByteBuffer.java:312)
        at org.apache.hadoop.hbase.io.hfile.HFile$Reader.decompress(HFile.java:1019)
        at org.apache.hadoop.hbase.io.hfile.HFile$Reader.readBlock(HFile.java:971)
        at org.apache.hadoop.hbase.io.hfile.HFile$Reader$Scanner.next(HFile.java:1163)
        at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:58)
        at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:79)
        at org.apache.hadoop.hbase.regionserver.MinorCompactingStoreScanner.next(MinorCompactingStoreScanner.java:123)

Since your max file size is 2GB, which btw is fairly big compared to
your actual available heap, I'd say your settings are way too high for
what you gave HBase to play with. You wrote you have 24GB available,
why only give 3 to HBase? You don't like him?
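For reference, the max file size lives in hbase-site.xml as
hbase.hregion.max.filesize; a sketch of what a 2GB setting looks like,
assuming it was set explicitly rather than left at the default:

  <property>
    <name>hbase.hregion.max.filesize</name>
    <!-- value is in bytes; 2147483648 = 2GB -->
    <value>2147483648</value>
  </property>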

Jokes aside, HBase is a database, and databases need memory. We have
the exact same hardware here, and we give 8GB to HBase. Also try
enabling compressed object pointers on the JVM, passing the flag in
through hbase-env.sh (a quick sketch below).
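A minimal hbase-env.sh sketch along those lines; the 8000MB figure just
mirrors the 8GB we use here, and putting the flag in HBASE_OPTS is one
way to pass it, adjust to your setup:

  # in conf/hbase-env.sh
  # heap for the HBase daemons, in MB
  export HBASE_HEAPSIZE=8000
  # compressed object pointers, only meaningful on a 64-bit JVM
  export HBASE_OPTS="$HBASE_OPTS -XX:+UseCompressedOops"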

J-D
