+1

On Tue, Dec 15, 2009 at 2:08 PM, Cristian Ivascu <[email protected]> wrote:

> +1
>
> Cristian
>
> On Dec 15, 2009, at 6:59 PM, Cosmin Lehene wrote:
>
> > +1
> >
> > Cosmin
> >
> >
> > On 12/15/09 10:44 AM, "Lars George" <[email protected]> wrote:
> >
> >> +1
> >>
> >> Lars
> >>
> >> On Tue, Dec 15, 2009 at 8:53 AM, Jean-Daniel Cryans
> >> <[email protected]> wrote:
> >>
> >>> +1 for 0.21.0
> >>>
> >>> J-D
> >>>
> >>> On Mon, Dec 14, 2009 at 11:30 PM, Andrew Purtell <[email protected]>
> >>> wrote:
> >>>> +1
> >>>>
> >>>>
> >>>> On Sat, Dec 12, 2009 at 3:54 PM, stack <[email protected]> wrote:
> >>>>
> >>>>> HDFS-630 is kinda critical to us over in HBase.  We'd like to get it
> >>>>> into 0.21 (it's been committed to TRUNK).  It's probably hard to
> >>>>> argue it's a blocker for 0.21.  We could run a vote.  Or should we
> >>>>> just file it against 0.21.1 hdfs and commit it after 0.21 goes out?
> >>>>> What would folks suggest?
> >>>>>
> >>>>> Without it, a node crash (datanode+regionserver) will bring down a
> >>>>> second regionserver, particularly if the cluster is small (see
> >>>>> HBASE-1876 for a play-by-play description of how the NN keeps giving
> >>>>> out the dead DN as a place to locate new blocks).  Since the bulk of
> >>>>> HBase clusters are small -- whether evaluations, tests, or just small
> >>>>> production setups -- this issue is an important fix for us.  If the
> >>>>> cluster has 5 or fewer nodes, we'll probably recover, but there'll
> >>>>> be a period of churn.  At a minimum, mapreduce jobs running against
> >>>>> the cluster will fail (usually some kind of bulk upload).
> >>>>>
> >>>>> St.Ack
> >>>>>
> >>>>
> >>>>
> >>>>
> >>>
> >
>
>
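[Editor's note: the failure mode stack describes is the namenode repeatedly handing the client a dead datanode when allocating new blocks. The fix direction in HDFS-630 is for the client to remember datanodes that failed and ask the namenode to exclude them on the next allocation. The following is a minimal toy sketch of that idea only, not the actual HDFS patch; all names (`NameNode`, `allocate_block`, `write_block`, `is_alive`) are illustrative.]

```python
class NameNode:
    """Toy namenode that hands out a datanode for a new block."""
    def __init__(self, datanodes):
        self.datanodes = list(datanodes)

    def allocate_block(self, excluded=()):
        # Return the first datanode not on the client's exclude list.
        for dn in self.datanodes:
            if dn not in excluded:
                return dn
        raise RuntimeError("no datanodes available")

def write_block(namenode, is_alive, max_retries=3):
    """Toy client: retry allocation, excluding nodes that did not respond."""
    excluded = set()
    for _ in range(max_retries):
        dn = namenode.allocate_block(excluded)
        if is_alive(dn):
            return dn          # pipeline established on a live node
        excluded.add(dn)       # dead node: exclude it on the next request
    raise RuntimeError("could not establish pipeline")
```

Without the `excluded` set, a client on a small cluster keeps being pointed back at the same dead datanode; with it, the retry falls over to a live node instead of churning.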


-- 
Guilherme

msn: [email protected]
homepage: http://sites.google.com/site/germoglio/
