out of memory error

2008-10-16 Thread Rui Xing
Hello List,

We encountered an out-of-memory error during data loading. We have 5 data nodes
and 1 name node distributed across 6 machines, and block-level compression was used.
The log output follows. It seems the problem was caused by compression. Has
anybody seen this error before? Any help or clues would be appreciated.

2008-10-15 21:44:33,069 FATAL [regionserver/0:0:0:0:0:0:0:0:60020.compactor] regionserver.HRegionServer$1(579): Set stop flag in regionserver/0:0:0:0:0:0:0:0:60020.compactor
java.lang.OutOfMemoryError
at sun.misc.Unsafe.allocateMemory(Native Method)
at java.nio.DirectByteBuffer.init(DirectByteBuffer.java:99)
at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:288)
at org.apache.hadoop.io.compress.zlib.ZlibDecompressor.init(ZlibDecompressor.java:108)
at org.apache.hadoop.io.compress.zlib.ZlibDecompressor.init(ZlibDecompressor.java:115)
at org.apache.hadoop.io.compress.zlib.ZlibFactory.getZlibDecompressor(ZlibFactory.java:104)
at org.apache.hadoop.io.compress.DefaultCodec.createDecompressor(DefaultCodec.java:80)
at org.apache.hadoop.io.SequenceFile$Reader.getPooledOrNewDecompressor(SequenceFile.java:1458)
at org.apache.hadoop.io.SequenceFile$Reader.init(SequenceFile.java:1543)
at org.apache.hadoop.io.SequenceFile$Reader.init(SequenceFile.java:1442)
at org.apache.hadoop.io.SequenceFile$Reader.init(SequenceFile.java:1431)
at org.apache.hadoop.io.SequenceFile$Reader.init(SequenceFile.java:1426)
at org.apache.hadoop.io.MapFile$Reader.open(MapFile.java:292)
at org.apache.hadoop.hbase.regionserver.HStoreFile$HbaseMapFile$HbaseReader.init(HStoreFile.java:635)
at org.apache.hadoop.hbase.regionserver.HStoreFile$BloomFilterMapFile$Reader.init(HStoreFile.java:717)
at org.apache.hadoop.hbase.regionserver.HStoreFile$HalfMapFileReader.init(HStoreFile.java:915)
at org.apache.hadoop.hbase.regionserver.HStoreFile.getReader(HStoreFile.java:408)
at org.apache.hadoop.hbase.regionserver.HStore.init(HStore.java:263)
at org.apache.hadoop.hbase.regionserver.HRegion.instantiateHStore(HRegion.java:1698)
at org.apache.hadoop.hbase.regionserver.HRegion.init(HRegion.java:481)
at org.apache.hadoop.hbase.regionserver.HRegion.init(HRegion.java:421)
at org.apache.hadoop.hbase.regionserver.HRegion.splitRegion(HRegion.java:815)
at org.apache.hadoop.hbase.regionserver.CompactSplitThread.split(CompactSplitThread.java:133)
at org.apache.hadoop.hbase.regionserver.CompactSplitThread.run(CompactSplitThread.java:86)
2008-10-15 21:44:33,661 FATAL [regionserver/0:0:0:0:0:0:0:0:60020.cacheFlusher] regionserver.Flusher(183): Replay of hlog required. Forcing server restart
org.apache.hadoop.hbase.DroppedSnapshotException: region: p4p_test,,1224072139042
at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:1087)
at org.apache.hadoop.hbase.regionserver.HRegion.flushcache(HRegion.java:985)
at org.apache.hadoop.hbase.regionserver.Flusher.flushRegion(Flusher.java:174)
at org.apache.hadoop.hbase.regionserver.Flusher.run(Flusher.java:91)
Caused by: java.lang.OutOfMemoryError
at sun.misc.Unsafe.allocateMemory(Native Method)
at java.nio.DirectByteBuffer.init(DirectByteBuffer.java:99)
at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:288)
at org.apache.hadoop.io.compress.zlib.ZlibDecompressor.init(ZlibDecompressor.java:107)
at org.apache.hadoop.io.compress.zlib.ZlibDecompressor.init(ZlibDecompressor.java:115)
at org.apache.hadoop.io.compress.zlib.ZlibFactory.getZlibDecompressor(ZlibFactory.java:104)
at org.apache.hadoop.io.compress.DefaultCodec.createDecompressor(DefaultCodec.java:80)
at org.apache.hadoop.io.SequenceFile$Reader.getPooledOrNewDecompressor(SequenceFile.java:1458)
at org.apache.hadoop.io.SequenceFile$Reader.init(SequenceFile.java:1555)
at org.apache.hadoop.io.SequenceFile$Reader.init(SequenceFile.java:1442)
at org.apache.hadoop.io.SequenceFile$Reader.init(SequenceFile.java:1431)
at org.apache.hadoop.io.SequenceFile$Reader.init(SequenceFile.java:1426)
at org.apache.hadoop.io.MapFile$Reader.open(MapFile.java:292)
at org.apache.hadoop.hbase.regionserver.HStoreFile$HbaseMapFile$HbaseReader.init(HStoreFile.java:635)
at org.apache.hadoop.hbase.regionserver.HStoreFile$BloomFilterMapFile$Reader.init(HStoreFile.java:717)
at org.apache.hadoop.hbase.regionserver.HStoreFile.getReader(HStoreFile.java:413)
at org.apache.hadoop.hbase.regionserver.HStore.updateReaders(HStore.java:665)
at org.apache.hadoop.hbase.regionserver.HStore.internalFlushCache(HStore.java:640)
at org.apache.hadoop.hbase.regionserver.HStore.flushCache(HStore.java:577)
at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:1074)
...
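
The allocation that fails here is ByteBuffer.allocateDirect() inside the ZlibDecompressor constructor, so the memory being exhausted is the JVM's direct (off-heap) buffer pool rather than the ordinary Java heap; each decompressor instance that gets created holds its own direct buffers. A minimal standalone sketch, not from this thread, that reproduces the same kind of failure (run it with a small -XX:MaxDirectMemorySize, e.g. 64m, to hit the limit quickly):

import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.List;

public class DirectBufferOom {
    public static void main(String[] args) {
        List<ByteBuffer> held = new ArrayList<ByteBuffer>();
        long allocated = 0;
        try {
            while (true) {
                // Direct buffers live outside the Java heap, like the
                // decompressor buffers in the stack trace above.
                held.add(ByteBuffer.allocateDirect(64 * 1024));
                allocated += 64 * 1024;
            }
        } catch (OutOfMemoryError e) {
            System.out.println("Direct memory exhausted after " + allocated + " bytes: " + e);
        }
    }
}

So raising -Xmx alone may not help; what matters here is the direct-memory limit and the number of decompressors alive at once, each holding its own direct buffers.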

Re: out of memory error

2008-10-16 Thread Rui Xing
Ah, I should have sent this to the hbase mailing list. Please ignore it. Sorry
for the spam.

On Fri, Oct 17, 2008 at 12:30 AM, Jim Kellerman (POWERSET) [EMAIL PROTECTED] wrote:

 In the future, you will get a more timely response for hbase
 questions if you post them on the [EMAIL PROTECTED]
 mailing list.

 In order to address your question, it would be helpful to
 know your hardware configuration (memory, # of cores),
 any changes you have made to hbase-site.xml, how many
 file handles are allocated per process, what else is
 running on the same machine as the region server and
 what versions of hadoop and hbase you are running.

 ---
 Jim Kellerman, Powerset (Live Search, Microsoft Corporation)

  -Original Message-
  From: Rui Xing [mailto:[EMAIL PROTECTED]
  Sent: Thursday, October 16, 2008 4:52 AM
  To: core-user@hadoop.apache.org
  Subject: out of memory error
 

Amazon Node Dead! Help! Urgent!

2008-09-11 Thread Xing

Hi All,

I made a big mistake, and now some of the nodes are already dead...
I have already terminated all the programs, but those nodes are still dead...

Has anyone experienced the same problem?
What should I do now to recover those nodes at minimum cost?
Thanks a lot for your help!!

Xing


How to order all the output files if I use more than one reduce node?

2008-08-06 Thread Xing

If I use one node for reduce, Hadoop sorts the result.
If I use 30 nodes for reduce, the result is split into part-00000 ~ part-00029.
How can I make all 30 parts sorted globally, so that every key in part-00001
is greater than every key in part-00000?
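
With more than one reducer, the global order is determined by the partitioner: each reducer sorts only its own keys, so the parts are globally ordered only if reducer i receives keys smaller than everything sent to reducer i+1 (later Hadoop releases ship a sampling-based TotalOrderPartitioner for exactly this). Below is a minimal hand-rolled sketch for the old org.apache.hadoop.mapred API; the IntWritable key type and the hard-coded key bound are assumptions purely for illustration, not something from this thread.

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.Partitioner;

public class RangePartitioner implements Partitioner<IntWritable, Text> {

    private int maxKey = 1000; // assumed upper bound on the key space

    public void configure(JobConf job) {
        // The bound could come from the job configuration instead of being hard-coded.
        maxKey = job.getInt("example.range.partitioner.max.key", 1000);
    }

    public int getPartition(IntWritable key, Text value, int numPartitions) {
        // Map the key space [0, maxKey) onto partitions 0..numPartitions-1 in order,
        // so every key in part-00001 is greater than every key in part-00000, etc.
        int bucket = (int) ((long) key.get() * numPartitions / maxKey);
        if (bucket < 0) bucket = 0;
        if (bucket >= numPartitions) bucket = numPartitions - 1;
        return bucket;
    }
}

Concatenating part-00000 through part-00029 in order would then give one globally sorted sequence.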

Thanks a lot

Xing


Any way to order all the output folders?

2008-07-24 Thread Xing

Hi All,

There are 30 output folders from Hadoop. Each folder is in ascending order,
but the order is not ascending across folders; for example, the values are
1, 5, 10 in folder A and 6, 8, 9 in folder B.
My question is how to enforce the order across all the folders as well, so
that the output is, say, 1, 5, 6 in folder A and 8, 9, 10 in folder B.
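
This is the same issue as the multi-reducer sorting question above: each output folder/part comes from one reducer, so a global order needs a partitioner that assigns ordered key ranges to reducers. A sketch of wiring such a partitioner (the hypothetical RangePartitioner above) into a job with the old org.apache.hadoop.mapred API; the input is assumed to be SequenceFiles of IntWritable keys and Text values, so the default identity mapper and reducer can simply pass records through.

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.SequenceFileInputFormat;

public class GlobalSortJob {
    public static void main(String[] args) throws Exception {
        JobConf conf = new JobConf(GlobalSortJob.class);
        conf.setJobName("global-sort");

        // Input assumed to be SequenceFiles of <IntWritable, Text>; the default
        // identity mapper and reducer then just pass records through.
        conf.setInputFormat(SequenceFileInputFormat.class);
        conf.setOutputKeyClass(IntWritable.class);
        conf.setOutputValueClass(Text.class);

        // The partitioner is the key line: it decides which reducer (and hence
        // which output part) every key goes to, so ordered key ranges give
        // globally ordered parts.
        conf.setPartitionerClass(RangePartitioner.class);
        conf.setNumReduceTasks(30);

        FileInputFormat.setInputPaths(conf, new Path(args[0]));
        FileOutputFormat.setOutputPath(conf, new Path(args[1]));

        JobClient.runJob(conf);
    }
}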

I am just starting to learn Hadoop and hope you can help me. :)
Thanks

Shane