It's off by default. I'd say we just call it an experimental feature in the 
release notes.
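
For context, WAL compression only kicks in if it is explicitly enabled in
hbase-site.xml (key quoted from memory, so double-check the exact name):

  <property>
    <name>hbase.regionserver.wal.enablecompression</name>
    <!-- false by default in 0.94 -->
    <value>true</value>
  </property>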


Are you saying we should have another RC?
There was other stuff that went into 0.94 after I cut the RC, so that would 
potentially need to stabilize if I cut a new RC now.

-- Lars

________________________________
From: Ted Yu <[email protected]>
To: [email protected] 
Sent: Monday, May 14, 2012 7:17 AM
Subject: Re: ANN: The third hbase 0.94.0 release candidate is available for 
download

Thanks for sharing this information, Ramkrishna.

Dictionary WAL compression renders replication non-functional - see details
in https://issues.apache.org/jira/browse/HBASE-5778

I would vote for removing dictionary WAL compression until we make it more
robust and much less memory-hungry.

On Mon, May 14, 2012 at 6:59 AM, Ramkrishna.S.Vasudevan <
[email protected]> wrote:

> Hi
>
> One small observation after giving +1 on the RC.
> The WAL compression feature causes OOMEs and full GCs.
>
> The problem is that if we have 1500 regions, I need to create
> recovered.edits for each region (I don't have much data in the regions,
> ~300MB).
> Now when I try to build the dictionary, Node objects get created.
> Each Node object occupies 32 bytes.
> We have 5 such dictionaries.
>
> Initially we create the indexToNodes array, and its size is 32767.
>
> So now we have 32 * 5 * 32767 = ~5MB.
>
> Now I have 1500 regions.
>
> So 5MB * 1500 = ~7GB (excluding actual data).  This seems to be a very
> high initial memory footprint; it never lets me split the logs, and I am
> not able to bring the cluster up at all.
>
> Our configured heap size was 8GB, tested on a 3-node cluster with 5000
> regions and very little data (~1GB in the HDFS cluster including
> replication), with some small data spread evenly across all regions.
>
> The formula is 32 (Node object size) * 5 (number of dictionaries) *
> 32767 (number of Node objects) * number of regions.
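>
> In Java terms the back-of-envelope math is just the following (only the
> arithmetic above, not the actual dictionary code):
>
>   public class WalDictFootprint {
>     public static void main(String[] args) {
>       long nodeSize = 32L;          // approx size of one Node object
>       long dictionaries = 5L;       // we have 5 such dictionaries
>       long indexToNodes = 32767L;   // initial indexToNodes array length
>       long regions = 1500L;         // regions needing recovered.edits
>       long perRegion = nodeSize * dictionaries * indexToNodes;
>       long total = perRegion * regions;
>       System.out.println("per region: " + perRegion + " bytes (~5MB)");
>       System.out.println("total:      " + total + " bytes (~7.3GB)");
>     }
>   }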
>
> I think this initial memory requirement needs to be documented
> (documentation should do for now) or has to be fixed with some workaround.
>
> So pls give your thoughts on this.
>
> Regards
> Ram
>
>
>
>
> > -----Original Message-----
> > From: Ramkrishna.S.Vasudevan [mailto:[email protected]]
> > Sent: Monday, May 14, 2012 11:48 AM
> > To: [email protected]; 'lars hofhansl'
> > Subject: RE: ANN: The third hbase 0.94.0 release candidate is available
> > for download
> >
> > Hi
> > We (including the test team here) tested the 0.94 RC and carried out
> > various operations on it: puts, scans, and all the restart scenarios
> > (including kill -9).  Even the encoding features were tested, and we
> > carried out our basic test scenarios.  It all seems to work fine.
> >
> > We did not test a rolling restart with 0.92.  Later this week we may try
> > to do some performance comparison with 0.92.
> > Also, Lars and I agreed on a point release.
> > So I am +1 on the RC.
> >
> > Regards
> > Ram
> >
> > > -----Original Message-----
> > > From: lars hofhansl [mailto:[email protected]]
> > > Sent: Sunday, May 13, 2012 10:53 PM
> > > To: [email protected]
> > > Subject: Re: ANN: The third hbase 0.94.0 release candidate is
> > > available for download
> > >
> > > OK, I'll change my tactic :)
> > >
> > > If there are no -1's by Wed, May 16th, I'll release RC4 as 0.94.0.
> > >
> > > -- Lars
> > >
> > >
> > >
> > > ----- Original Message -----
> > > From: Stack <[email protected]>
> > > To: lars hofhansl <[email protected]>
> > > Cc: "[email protected]" <[email protected]>
> > > Sent: Friday, May 11, 2012 10:39 PM
> > > Subject: Re: ANN: The third hbase 0.94.0 release candidate is
> > > available for download
> > >
> > > On Fri, May 11, 2012 at 10:26 PM, lars hofhansl <[email protected]>
> > > wrote:
> > > > Thanks Stack.
> > > >
> > > > So that's two +1 (mine doesn't count I guess). And no -1.
> > >
> > > Why doesn't yours count?  Usually the RM's does, if they +1 it.  So,
> > > that'd be 3 x +1 plus a non-binding +1.
> > >
> > > > I talked to Ram offline, and we'll fix HBase to work with Hadoop 2.0.0
> > > > in a 0.94 point release.
> > > >
> > > > I would like to see a few more +1's before I declare this the
> > > > official 0.94.0 release.
> > > >
> > >
> > > You might be waiting a while (smile).  Fellas seem to be busy...
> > >
> > > Good on you Lars,
> > > St.Ack
>
>
