Re: out of memory error

2008-10-16 Thread Rui Xing
Ah, I should have sent this to the hbase mailing list. Please ignore it. Sorry
for the spam.

On Fri, Oct 17, 2008 at 12:30 AM, Jim Kellerman (POWERSET) <[EMAIL PROTECTED]> wrote:

> In the future, you will get a more timely response for hbase
> questions if you post them on the [EMAIL PROTECTED]
> mailing list.
>
> In order to address your question, it would be helpful to
> know your hardware configuration (memory, # of cores),
> any changes you have made to hbase-site.xml, how many
> file handles are allocated per process, what else is
> running on the same machine as the region server, and
> what versions of hadoop and hbase you are running.
>
> ---
> Jim Kellerman, Powerset (Live Search, Microsoft Corporation)
>
> > -----Original Message-----
> > From: Rui Xing [mailto:[EMAIL PROTECTED]
> > Sent: Thursday, October 16, 2008 4:52 AM
> > To: core-user@hadoop.apache.org
> > Subject: out of memory error
> >
> > Hello List,
> >
> > We encountered an out-of-memory error during data loading. We have 5 data
> > nodes and 1 name node distributed across 6 machines. Block-level
> > compression was used. Following is the log output; it seems the problem
> > was caused by compression. Has anybody experienced this error before?
> > Any help or clues would be appreciated.
> >
> > 2008-10-15 21:44:33,069 FATAL [regionserver/0:0:0:0:0:0:0:0:60020.compactor]
> > regionserver.HRegionServer$1(579): Set stop flag in
> > regionserver/0:0:0:0:0:0:0:0:60020.compactor
> > java.lang.OutOfMemoryError
> > at sun.misc.Unsafe.allocateMemory(Native Method)
> > at java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:99)
> > at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:288)
> > at org.apache.hadoop.io.compress.zlib.ZlibDecompressor.<init>(ZlibDecompressor.java:108)
> > at org.apache.hadoop.io.compress.zlib.ZlibDecompressor.<init>(ZlibDecompressor.java:115)
> > at org.apache.hadoop.io.compress.zlib.ZlibFactory.getZlibDecompressor(ZlibFactory.java:104)
> > at org.apache.hadoop.io.compress.DefaultCodec.createDecompressor(DefaultCodec.java:80)
> > at org.apache.hadoop.io.SequenceFile$Reader.getPooledOrNewDecompressor(SequenceFile.java:1458)
> > at org.apache.hadoop.io.SequenceFile$Reader.init(SequenceFile.java:1543)
> > at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1442)
> > at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1431)
> > at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1426)
> > at org.apache.hadoop.io.MapFile$Reader.open(MapFile.java:292)
> > at org.apache.hadoop.hbase.regionserver.HStoreFile$HbaseMapFile$HbaseReader.<init>(HStoreFile.java:635)
> > at org.apache.hadoop.hbase.regionserver.HStoreFile$BloomFilterMapFile$Reader.<init>(HStoreFile.java:717)
> > at org.apache.hadoop.hbase.regionserver.HStoreFile$HalfMapFileReader.<init>(HStoreFile.java:915)
> > at org.apache.hadoop.hbase.regionserver.HStoreFile.getReader(HStoreFile.java:408)
> > at org.apache.hadoop.hbase.regionserver.HStore.<init>(HStore.java:263)
> > at org.apache.hadoop.hbase.regionserver.HRegion.instantiateHStore(HRegion.java:1698)
> > at org.apache.hadoop.hbase.regionserver.HRegion.<init>(HRegion.java:481)
> > at org.apache.hadoop.hbase.regionserver.HRegion.<init>(HRegion.java:421)
> > at org.apache.hadoop.hbase.regionserver.HRegion.splitRegion(HRegion.java:815)
> > at org.apache.hadoop.hbase.regionserver.CompactSplitThread.split(CompactSplitThread.java:133)
> > at org.apache.hadoop.hbase.regionserver.CompactSplitThread.run(CompactSplitThread.java:86)
> > 2008-10-15 21:44:33,661 FATAL [regionserver/0:0:0:0:0:0:0:0:60020.cacheFlusher]
> > regionserver.Flusher(183): Replay of hlog required. Forcing server restart
> > org.apache.hadoop.hbase.DroppedSnapshotException: region: p4p_test,,1224072139042
> > at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:1087)
> > at org.apache.hadoop.hbase.regionserver.HRegion.flushcache(HRegion.java:985)
> > at org.apache.hadoop.hbase.regionserver.Flusher.flushRegion(Flusher.java:174)
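
To pull together the numbers Jim asks for (memory and core count as the region
server JVM actually sees them), a minimal sketch along these lines can be run
with the same JVM options as the region server, so that maxMemory() reflects
the real heap ceiling. The class name is illustrative, not part of hadoop or
hbase:

    public class JvmLimits {
        public static void main(String[] args) {
            Runtime rt = Runtime.getRuntime();
            // Number of cores visible to the JVM.
            System.out.println("cores: " + rt.availableProcessors());
            // Upper bound on the heap (-Xmx), reported in megabytes.
            System.out.println("max heap (MB): " + rt.maxMemory() / (1024 * 1024));
        }
    }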

out of memory error

2008-10-16 Thread Rui Xing
Hello List,

We encountered an out-of-memory error during data loading. We have 5 data nodes
and 1 name node distributed across 6 machines. Block-level compression was used.
Following is the log output; it seems the problem was caused by compression. Has
anybody experienced this error before? Any help or clues would be appreciated.

2008-10-15 21:44:33,069 FATAL [regionserver/0:0:0:0:0:0:0:0:60020.compactor]
regionserver.HRegionServer$1(579): Set stop flag in
regionserver/0:0:0:0:0:0:0:0:60020.compactor
java.lang.OutOfMemoryError
at sun.misc.Unsafe.allocateMemory(Native Method)
at java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:99)
at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:288)
at org.apache.hadoop.io.compress.zlib.ZlibDecompressor.<init>(ZlibDecompressor.java:108)
at org.apache.hadoop.io.compress.zlib.ZlibDecompressor.<init>(ZlibDecompressor.java:115)
at org.apache.hadoop.io.compress.zlib.ZlibFactory.getZlibDecompressor(ZlibFactory.java:104)
at org.apache.hadoop.io.compress.DefaultCodec.createDecompressor(DefaultCodec.java:80)
at org.apache.hadoop.io.SequenceFile$Reader.getPooledOrNewDecompressor(SequenceFile.java:1458)
at org.apache.hadoop.io.SequenceFile$Reader.init(SequenceFile.java:1543)
at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1442)
at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1431)
at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1426)
at org.apache.hadoop.io.MapFile$Reader.open(MapFile.java:292)
at org.apache.hadoop.hbase.regionserver.HStoreFile$HbaseMapFile$HbaseReader.<init>(HStoreFile.java:635)
at org.apache.hadoop.hbase.regionserver.HStoreFile$BloomFilterMapFile$Reader.<init>(HStoreFile.java:717)
at org.apache.hadoop.hbase.regionserver.HStoreFile$HalfMapFileReader.<init>(HStoreFile.java:915)
at org.apache.hadoop.hbase.regionserver.HStoreFile.getReader(HStoreFile.java:408)
at org.apache.hadoop.hbase.regionserver.HStore.<init>(HStore.java:263)
at org.apache.hadoop.hbase.regionserver.HRegion.instantiateHStore(HRegion.java:1698)
at org.apache.hadoop.hbase.regionserver.HRegion.<init>(HRegion.java:481)
at org.apache.hadoop.hbase.regionserver.HRegion.<init>(HRegion.java:421)
at org.apache.hadoop.hbase.regionserver.HRegion.splitRegion(HRegion.java:815)
at org.apache.hadoop.hbase.regionserver.CompactSplitThread.split(CompactSplitThread.java:133)
at org.apache.hadoop.hbase.regionserver.CompactSplitThread.run(CompactSplitThread.java:86)
2008-10-15 21:44:33,661 FATAL
[regionserver/0:0:0:0:0:0:0:0:60020.cacheFlusher] regionserver.Flusher(183):
Replay of hlog required. Forcing server restart
org.apache.hadoop.hbase.DroppedSnapshotException: region: p4p_test,,1224072139042
at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:1087)
at org.apache.hadoop.hbase.regionserver.HRegion.flushcache(HRegion.java:985)
at org.apache.hadoop.hbase.regionserver.Flusher.flushRegion(Flusher.java:174)
at org.apache.hadoop.hbase.regionserver.Flusher.run(Flusher.java:91)
Caused by: java.lang.OutOfMemoryError
at sun.misc.Unsafe.allocateMemory(Native Method)
at java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:99)
at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:288)
at org.apache.hadoop.io.compress.zlib.ZlibDecompressor.<init>(ZlibDecompressor.java:107)
at org.apache.hadoop.io.compress.zlib.ZlibDecompressor.<init>(ZlibDecompressor.java:115)
at org.apache.hadoop.io.compress.zlib.ZlibFactory.getZlibDecompressor(ZlibFactory.java:104)
at org.apache.hadoop.io.compress.DefaultCodec.createDecompressor(DefaultCodec.java:80)
at org.apache.hadoop.io.SequenceFile$Reader.getPooledOrNewDecompressor(SequenceFile.java:1458)
at org.apache.hadoop.io.SequenceFile$Reader.init(SequenceFile.java:1555)
at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1442)
at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1431)
at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1426)
at org.apache.hadoop.io.MapFile$Reader.open(MapFile.java:292)
at org.apache.hadoop.hbase.regionserver.HStoreFile$HbaseMapFile$HbaseReader.<init>(HStoreFile.java:635)
at org.apache.hadoop.hbase.regionserver.HStoreFile$BloomFilterMapFile$Reader.<init>(HStoreFile.java:717)
at org.apache.hadoop.hbase.regionserver.HStoreFile.getReader(HStoreFile.java:413)
at org.apache.hadoop.hbase.regionserver.HStore.updateReaders(HStore.java:665)
at org.apache.hadoop.hbase.regionserver.HStore.internalFlushCache(HStore.java:640)
at org.apache.hadoop.hbase.regionserver.HStore.flushCache(HStore.java:577)
at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:1074)
... 3 more
2008-10-15 21:44:33,661 INFO [regionserver/0:0:0:0:0:0:0:0:60020.cacheFlu
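
A note on reading the traces above: a bare java.lang.OutOfMemoryError thrown
from ByteBuffer.allocateDirect points at direct (off-heap) memory rather than
the Java heap, and both traces show ZlibDecompressor allocating direct buffers
in its constructor. Direct memory is capped by the JVM flag
-XX:MaxDirectMemorySize, so many decompressors created at once (here the
compactor and the cache flusher were both opening readers) can exhaust it even
while heap usage looks healthy. A minimal sketch of that failure mode, assuming
a deliberately small cap; the class name is illustrative:

    import java.nio.ByteBuffer;
    import java.util.ArrayList;
    import java.util.List;

    public class DirectOomDemo {
        public static void main(String[] args) {
            // Hold references so the buffers cannot be garbage collected.
            List<ByteBuffer> pinned = new ArrayList<ByteBuffer>();
            try {
                while (true) {
                    // 1 MB per buffer, allocated outside the Java heap.
                    pinned.add(ByteBuffer.allocateDirect(1024 * 1024));
                }
            } catch (OutOfMemoryError e) {
                // Run with e.g. -XX:MaxDirectMemorySize=16m to hit this quickly.
                System.out.println("direct memory exhausted after "
                        + pinned.size() + " MB: " + e);
            }
        }
    }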