Would it be better to set the limit at the Region/Store level, so that
the region gets a major compaction ASAP? The side effect is that the
insert process will be blocked during the major compaction, but a
carefully designed application will not fail in this situation.
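
For example, something like this in hbase-site.xml is what I have in
mind (property names as in the 1058-era code, worth double-checking;
the values are only a guess for our workload, not tested):

    <property>
      <name>hbase.hstore.compactionThreshold</name>
      <value>3</value>
      <!-- queue a compaction once a store reaches this many files -->
    </property>
    <property>
      <name>hbase.hstore.blockingStoreFiles</name>
      <value>16</value>
      <!-- block updates to the region until compaction brings the
           store back under this count -->
    </property>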

Is there any method to save the 4450 HStoreFiles? (about 53GB on DFS)
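
One workaround I am considering, assuming the heap is simply too small
to hold the indexes of 4450 store files at once (the dump below shows
usedHeap=3941 / maxHeap=5333): raise the regionserver heap in
conf/hbase-env.sh before restarting, for example

    # more headroom for the restart compactions; 8000 MB is just an
    # assumption based on our 53GB of store files, not a tested value
    export HBASE_HEAPSIZE=8000

and then let the compactor grind through the files in place.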

On Tue, May 12, 2009 at 5:01 PM, Andrew Purtell <[email protected]> wrote:

>
> Is there already a JIRA up for this issue? If not, one
> should be put up, I think.
>
> The first patch on 1058 helped some, but it did not block
> indefinitely and apparently not in all cases. There is another
> patch up for that issue which may block better, but it can tie
> up all the IPC handlers, so I wonder if the better strategy
> here is to let the data pass through, let the flushes pile up,
> and then be smart enough in the compactor to recognize that
> 4450 store files will take a few passes to crunch...
>
>    - Andy
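
[Re Andy's "few passes" point: here is a rough sketch of how I read
it, in Java with made-up names; this is not the actual HStore
compactor code, just the shape of the idea:

    import java.util.ArrayList;
    import java.util.List;

    class MultiPassCompactor {
        // assumed cap on files merged per pass, to bound memory
        static final int MAX_FILES_PER_PASS = 10;

        // 'files' stands in for one store's file list, oldest first
        static void compactAll(List<String> files) {
            while (files.size() > 1) {
                int n = Math.min(MAX_FILES_PER_PASS, files.size());
                List<String> batch = new ArrayList<String>(files.subList(0, n));
                String merged = mergeBatch(batch); // one bounded pass
                files.subList(0, n).clear();
                files.add(0, merged);              // feeds the next pass
            }
        }

        static String mergeBatch(List<String> batch) {
            // placeholder for the real merge of a handful of HStoreFiles
            return "compacted-" + batch.size();
        }

        public static void main(String[] args) {
            List<String> files = new ArrayList<String>();
            for (int i = 0; i < 4450; i++) files.add("storefile-" + i);
            compactAll(files);
            System.out.println(files); // one merged file left
        }
    }

With a cap of 10, the 4450 files would be crunched roughly ten at a
time instead of all at once, so each pass only has to keep a handful
of MapFile indexes in memory.]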
>
> > From: 11 Nov.
> > Subject: OOME when restarting hbase
> > Date: Tuesday, May 12, 2009, 1:46 AM
> > hi colleagues,
> >
> > I am doing a batch insert on a 20-node cluster.
> > Unfortunately, the data are all inserted into the
> > last region (sequential insert).
> >
> > I had already applied the HBASE-1058 patch before the
> > batch insert, but I still got a "big" region with 4450
> > HStoreFiles after a long run. Now I want to start
> > HBase, but I always get an OOME on compaction, and the
> > OOME spreads to the other regionservers once one
> > server goes down with it.
> [...]
> >
> > 2009-05-12 13:55:12,520 INFO
> > org.apache.hadoop.hbase.regionserver.HRegionServer: Dump of
> > metrics: request=0.0, regions=316, stores=316,
> > storefiles=4450, storefileIndexSize=251, memcacheSize=0,
> > usedHeap=3941, maxHeap=5333
> >
> > 2009-05-12 13:55:12,520 FATAL
> > org.apache.hadoop.hbase.regionserver.HRegionServer: Set
> > stop flag in regionserver/0:0:0:0:0:0:0:0:62020.compactor
> >
> > java.lang.OutOfMemoryError: Java heap space
> >         at java.util.Arrays.copyOf(Arrays.java:2760)
> >         at java.util.Arrays.copyOf(Arrays.java:2734)
> >         at java.util.ArrayList.ensureCapacity(ArrayList.java:167)
> >         at java.util.ArrayList.add(ArrayList.java:351)
> >         at org.apache.hadoop.hbase.io.MapFile$Reader.readIndex(MapFile.java:370)
> >         at org.apache.hadoop.hbase.io.MapFile$Reader.seekInternal(MapFile.java:462)
> >         at org.apache.hadoop.hbase.io.MapFile$Reader.getClosest(MapFile.java:586)
> >         at org.apache.hadoop.hbase.io.MapFile$Reader.getClosest(MapFile.java:569)
> >         at org.apache.hadoop.hbase.io.BloomFilterMapFile$Reader.getClosest(BloomFilterMapFile.java:115)
> >         at org.apache.hadoop.hbase.io.HalfMapFileReader.getClosest(HalfMapFileReader.java:152)
> >         at org.apache.hadoop.hbase.io.HalfMapFileReader.next(HalfMapFileReader.java:190)
> >         at org.apache.hadoop.hbase.regionserver.HStore.compact(HStore.java:1133)
> >         at org.apache.hadoop.hbase.regionserver.HStore.compact(HStore.java:936)
> >         at org.apache.hadoop.hbase.regionserver.HRegion.compactStores(HRegion.java:727)
> >         at org.apache.hadoop.hbase.regionserver.HRegion.compactStores(HRegion.java:684)
> >         at org.apache.hadoop.hbase.regionserver.CompactSplitThread.run(CompactSplitThread.java:105)
>
