Hey Tom,

That sounds like a great idea. +1.

Thanks,
Jeff

On Wed, Mar 24, 2010 at 4:25 PM, Tom White <t...@cloudera.com> wrote:

> I agree that getting the release process restarted is of utmost
> importance to the project. To help make that happen I'm happy to
> volunteer to be a release manager for the next release. This will be
> the first release post-split, so there will undoubtedly be some issues
> to work out. I think the focus should be on getting an alpha release
> out, so I suggest we create a new 0.21 branch from trunk, then spend
> time fixing blockers (which will be a superset of the existing 0.21
> blockers).
>
> Cheers,
> Tom
>
> On Wed, Mar 24, 2010 at 1:27 PM, Brian Bockelman <bbock...@cse.unl.edu>
> wrote:
> > Hey Allen,
> >
> > Your post provoked a few thoughts:
> > 1) Hadoop is a large, but relatively immature project (as in, there's
> > still a lot of major features coming down the pipe).  If we wait to
> > release on features, especially when there are critical bugs, we end up
> > with a large number of patches between releases.  This ends up
> > encouraging custom patch sets and custom distributions.
> > 2) The barrier for patch acceptance is high, especially for
> > opportunistic developers.  This is a good thing for code quality, but
> > not for getting patches accepted in a timely manner.  This means that
> > there are a lot of 'mostly good' patches out there in JIRA which have
> > not landed.  This again encourages folks to develop their own custom
> > patch sets.
> > 3) We make only bugfixes for past minor releases, meaning the stable
> > Apache release is perpetually behind in features, even features that
> > are not core.
> >
> > Not sure how to best fix these things.  One possibility:
> > a) Have a stable/unstable series (0.19.x is unstable, 0.20.x is stable,
> > 0.21.x is unstable).  For the unstable releases, lower the bar for code
> > acceptance for less-risky patches.
> > b) Combine that with a time-based release for bugfixes (and
> > non-dangerous features?) in order to keep the feature releases "fresh".
> >
> > (a) aims to tackle problems (1) and (2).  (b) aims to tackle (3).
> >
> > This might not work for everything.  If I had a goal, it would be to
> > decrease the number of active distributions from 3 to 2 - otherwise you
> > end up spending far too much time consensus building.
> >
> > Just a thought from an outside, relatively content observer,
> >
> > Brian
> >
> > On Mar 24, 2010, at 1:38 PM, Allen Wittenauer wrote:
> >
> >> On 3/15/10 9:06 AM, "Owen O'Malley" <o...@yahoo-inc.com> wrote:
> >>> From our 21 experience, it looks like our old release strategy is
> >>> failing.
> >>
> >>    Maybe this is a dumb question but... Are we sure it isn't the
> >> community failing?
> >>
> >>    From where I stand, the major committers (PMC?) have essentially
> >> forked Hadoop into three competing source trees.  No one appears to be
> >> dedicated to helping the community release because the focus is on
> >> their own tree.  Worse yet, two of these trees are publicly available
> >> with both sides pushing their own tree as vastly superior (against
> >> each other and against the official Apache branded one).
> >>
> >>    What are the next steps in getting this resolved?  Is
> >> Hadoop-as-we-know-it essentially dead?  What is going to prevent the
> >> fiasco that is 0.21 from impacting 0.22?
> >>
> >>    For me personally, I'm more amused than upset that 0.21 hasn't been
> >> released.  But I'm less happy that there appears to be a focus on
> >> feature additions rather than getting some of the 0.21 blockers
> >> settled (I'm assuming here that most of the 0.21 blockers apply to
> >> 0.22 as well).
> >>
> >>    I don't think retroactively declaring 0.20 as 1.0 is going to make
> >> the situation any better.  [In fact, I believe it will make it worse,
> >> since it gives an external impression that 0.20 is somehow stable at
> >> all levels.  We all know this isn't true.]
> >
> >
>
