OK, so it sounds like a three-month dev cycle for a four-month release
was intentional.

Just curious...thanks :)


On Thu, Mar 13, 2014 at 11:31 AM, David Nalley <da...@gnsa.us> wrote:

> This was (IIRC) part of the explicit decision in how to do things. The
> thinking was that if you restrict what people can do on a release
> branch, they still need a place to base their ongoing work, and master
> should be that place. Some features will take more than a cycle to get
> integrated.
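>
> For illustration, a rough sketch of that flow in git (the branch name
> here is hypothetical):
>
>     git checkout -b 4.4 origin/master   # cut the release branch
>     git push origin 4.4                 # from here on, only fixes land on 4.4
>     git checkout master                 # master immediately reopens for
>                                         # feature work toward the next cycle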
>
> --David
>
> On Thu, Mar 13, 2014 at 1:11 PM, Mike Tutkowski
> <mike.tutkow...@solidfire.com> wrote:
> > Yeah, if you "abandon" the "old" release as soon as a release branch
> > is cut for it, then you essentially have three months on the new
> > release before its release branch is cut and you move on to the newer
> > release. I'm not sure that was the intent when such a schedule was
> > created. It means we're releasing every four months, but developing
> > for only three.
> >
> >
> > On Thu, Mar 13, 2014 at 11:03 AM, Marcus <shadow...@gmail.com> wrote:
> >
> >> The overlap is simply a byproduct of cutting the branch; I'm not sure
> >> there's a way around it. It's a good point, though, that the window is
> >> essentially one month shorter than I think was intended. Better
> >> testing will help with that - the point being that we shouldn't be
> >> doing a ton of work to stabilize the release branch; the majority of
> >> the work should be pushed back into the pre-branch stage.
> >>
> >> On Thu, Mar 13, 2014 at 10:50 AM, Mike Tutkowski
> >> <mike.tutkow...@solidfire.com> wrote:
> >> > I wanted to add a little comment/question in general about our release
> >> > process:
> >> >
> >> > Right now we typically have a one-month overlap between releases.
> >> > That being the case, if you are focusing on the current release
> >> > until it is out the door, you effectively lose a month of
> >> > development for the future release. It might be tempting during
> >> > this one-month time period to focus instead on the future release
> >> > and leave the current release alone.
> >> >
> >> > Would it make sense to keep the four-month release cycle, but
> >> > without the one-month overlap between two releases?
> >> >
> >> > Just a thought
> >> >
> >> >
> >> > On Thu, Mar 13, 2014 at 10:42 AM, David Nalley <da...@gnsa.us> wrote:
> >> >
> >> >> The RC7 vote thread contained a lot of discussion around release
> >> >> cadence, and I figured I'd move that to a thread that has a better
> >> >> subject so there is better visibility to list participants who don't
> >> >> read every thread.
> >> >>
> >> >> When I look at things schedule-wise, I see our aims and our
> >> >> reality. We have a relatively short development window in the
> >> >> schedule, and almost 50% of our time (over two months) is
> >> >> allocated to testing. However, it seems that a lot of testing - or
> >> >> at least the testing for what became release blockers - didn't
> >> >> happen until RCs were kicked out, and that's where our schedule
> >> >> has fallen apart for multiple releases. The automated tests we
> >> >> have were clean when we issued RCs, so we clearly don't have the
> >> >> depth needed on the automated side.
> >> >>
> >> >> Two problems, one cultural and one technical. The technical
> >> >> problem is that our automated test suite isn't deep enough to give
> >> >> us a high level of confidence that we should release. The cultural
> >> >> problem is that many of us wait until the release period of the
> >> >> schedule to test.
> >> >>
> >> >> What does that have to do with release cadence? Well, inherently
> >> >> not much, but let me describe my concerns. As a project, the
> >> >> schedule is meaningless if we don't follow it, and the release
> >> >> date is effectively held hostage. Personally, I do want as few
> >> >> bugs as possible, but it's a balancing act: people doubt our
> >> >> ability if we aren't able to ship. I don't think it matters if we
> >> >> move to six-month cycles; if this behavior continues, we'd miss
> >> >> the six-month date as well and push to eight or nine months. See
> >> >> my radical proposition at the bottom for an idea on dealing with
> >> >> this.
> >> >>
> >> >> I also find myself agreeing with Daan on the additional
> >> >> complexity. Increasing the window for release inherently increases
> >> >> the window for feature development. As soon as we branch a
> >> >> release, master is open for feature development again. This means
> >> >> a potential for greater change at each release. Change is a risk
> >> >> to quality, or at least an unknown that we again have to test. The
> >> >> greater the quantity of change, the greater the potential threat
> >> >> to quality.
> >> >>
> >> >> Radical proposition:
> >> >>
> >> >> Because we have two problems of a different nature, we are in a
> >> >> difficult situation. This is a possible solution, and I'd
> >> >> appreciate you reading and considering it. Feedback is welcome. I
> >> >> propose that after we enter the RC stage, we not entertain as
> >> >> blockers any bugs that don't have automated test cases associated
> >> >> with them. You are still welcome to do manual testing of your pet
> >> >> feature and the things that are important to you, during the
> >> >> testing window (or anytime, really). However, if the automated
> >> >> suite isn't also failing, then we consider the release high enough
> >> >> quality to ship. This isn't something we can codify, but the PMC
> >> >> can certainly adopt this attitude as a group when voting - which
> >> >> also means that we can deviate from it. If you bring up a blocker
> >> >> for a release, we should immediately be looking at how we can
> >> >> write a test for that behavior.
> >> >>
> >> >> This also means several other behaviors need to become a valid
> >> >> part of our process. We need to ensure that things are well tested
> >> >> before allowing a merge. This means we need a known state of
> >> >> master, and we need to perform testing that lets us confirm that a
> >> >> patch does no harm. We also need to insist on the implementation
> >> >> of comprehensive tests for every inbound feature.
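> >> >>
> >> >> To make the proposition concrete, here is a minimal sketch of what
> >> >> a blocker's regression test might look like as a plain JUnit case.
> >> >> The Volume class and the scenario are made up for illustration; a
> >> >> real test would exercise the actual code the blocker report is
> >> >> filed against:
> >> >>
> >> >>     import static org.junit.Assert.assertEquals;
> >> >>
> >> >>     import org.junit.Test;
> >> >>
> >> >>     public class VolumeResizeRegressionTest {
> >> >>
> >> >>         // Minimal stand-in for the real domain object, only so
> >> >>         // this sketch is self-contained.
> >> >>         static class Volume {
> >> >>             private long sizeInGb;
> >> >>             Volume(long sizeInGb) { this.sizeInGb = sizeInGb; }
> >> >>             void resize(long newSizeInGb) { this.sizeInGb = newSizeInGb; }
> >> >>             long getSizeInGb() { return sizeInGb; }
> >> >>         }
> >> >>
> >> >>         // Hypothetical reproduction of a reported blocker:
> >> >>         // resizing a volume must leave it at the requested size.
> >> >>         @Test
> >> >>         public void resizeLeavesVolumeAtRequestedSize() {
> >> >>             Volume volume = new Volume(10);
> >> >>             volume.resize(20);
> >> >>             assertEquals(20, volume.getSizeInGb());
> >> >>         }
> >> >>     }
> >> >>
> >> >> The specific assertion doesn't matter; the point is that the
> >> >> reproduction lives in the automated suite, so the same bug can't
> >> >> silently return in a later RC.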
> >> >>
> >> >> Thoughts, comments, flames, death threats? :)
> >> >>
> >> >> --David
> >> >>
> >
>



-- 
*Mike Tutkowski*
*Senior CloudStack Developer, SolidFire Inc.*
e: mike.tutkow...@solidfire.com
o: 303.746.7302
Advancing the way the world uses the
cloud<http://solidfire.com/solution/overview/?video=play>
*(tm)*
