I thought that both the capacity and fair share schedulers are available in
0.19.  Are there new features added in 0.20?  Is that documented anywhere?
How do I learn more?
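For context, in both 0.19 and 0.20 the scheduler is selected via a JobTracker property in mapred-site.xml. A minimal sketch, assuming the contrib scheduler jar is on the JobTracker's classpath and using the property name as I recall it from the contrib scheduler docs of that era:

```xml
<!-- mapred-site.xml: replace the default FIFO scheduler with the Fair Scheduler.
     Assumes the fair scheduler contrib jar is on the JobTracker classpath. -->
<property>
  <name>mapred.jobtracker.taskScheduler</name>
  <value>org.apache.hadoop.mapred.FairScheduler</value>
</property>

<!-- Or, to try the Capacity Scheduler instead: -->
<!--
<property>
  <name>mapred.jobtracker.taskScheduler</name>
  <value>org.apache.hadoop.mapred.CapacityTaskScheduler</value>
</property>
-->
```

Each scheduler also has its own allocation/queue configuration file; the contrib README shipped with the release is the place to look for details.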

Bill

On Fri, Mar 20, 2009 at 7:28 AM, David B. Ritch <david.ri...@gmail.com> wrote:

> Looks like there's been quite a bit of work on both the Capacity
> Scheduler and the Fair Scheduler.  Are there others we should look at in
> 0.20?
>
> Again, thank you for your informative responses.
>
> David
>
> Raghu Angadi wrote:
> >
> > I think 0.20.0 is preferred because of new job scheduler(s) there.
> > Stability wise, 0.20 is expected to be as good as 0.19.2.
> >
> > I don't know the schedule for 0.19.2.
> >
> > Raghu.
> >
> > David Ritch wrote:
> >> Raghu,
> >>
> >> Thank you for your prompt and informative response.  Moving to
> >> anything that
> >> ends in .0 is a bit scary - what are the reasons to go with 0.20.0
> >> instead
> >> of 0.19.2?  Yahoo is jumping from 0.18.x directly to 0.20.0?  Why is
> >> Yahoo
> >> skipping the 0.19.x release?
> >>
> >> Is the expectation that 0.19.2 will be released at the same time as
> >> 0.20.0?
> >>
> >> Thanks,
> >>
> >> David
> >>
> >> On Wed, Mar 18, 2009 at 1:31 PM, Raghu Angadi <rang...@yahoo-inc.com>
> >> wrote:
> >>
> >>> The short answer, I am afraid, is no.
> >>>
> >>> As an alternative, I recommend upgrading to the latest 0.19.x or to
> >>> 0.20.0 (to be released in a couple of days). 0.19.2 is certainly a lot
> >>> better than 0.19.0.
> >>> Yahoo is rolling out 0.20.x, if that helps your confidence.
> >>>
> >>> Raghu.
> >>>
> >>>
> >>> David Ritch wrote:
> >>>
> >>>> There is an established procedure for upgrading from one release of
> >>>> Hadoop to a newer release.  Is there something similar for moving back
> >>>> to a lower-numbered release?
> >>>>
> >>>> Specifically, we have data in a cloud running Hadoop 0.19.0.  Because
> >>>> of stability issues, we are wondering whether we should move back to
> >>>> 0.18, but we don't want to lose our data.  Is there a downward
> >>>> migration path?
> >>>>
> >>>> Thanks,
> >>>>
> >>>> David
> >>>>
> >>>>
> >>
> >
> >
>
>
