I think it's a very difficult subject, because we're relying on the developers
to write tests which will show whether the code in their feature is good, and
to think of every possible failure mode in order to validate their code.

Obviously, if they can think of a failure, they'll probably have coded for it;
it's the ones they didn't think of that are the problem.

Without an automated suite of hypervisors and management VMs building the code
and testing each feature 'in the real world', I can't see how we could
otherwise test in an automated, meaningful way.

I believe that's what the likes of myself, Geoff, the guys at Schuberg and the
others who test outside of DevCloud currently add to this process: testing
features against real hypervisors with real storage etc.

I would love it if these tests were already automated, part of the development
process, and continuously repeated to check for regressions.
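To make that concrete, a minimal automated 'real world' check might look
something like the sketch below. It uses CloudStack's standard HMAC-SHA1 API
request signing to ask a live management server whether every hypervisor host
reports Up; the endpoint and keys are placeholders, and a real suite would of
course go much further (deploy VMs, exercise storage, and so on).

# Minimal illustrative smoke check against a real CloudStack install:
# sign a listHosts API call and verify every hypervisor host is 'Up'.
# The endpoint and keys below are placeholders for a real lab.
import base64
import hashlib
import hmac
import json
import urllib.parse
import urllib.request

ENDPOINT = "http://mgmt.example.com:8080/client/api"  # hypothetical mgmt server
API_KEY = "PUT-API-KEY-HERE"
SECRET_KEY = "PUT-SECRET-KEY-HERE"

def signed_url(command, **params):
    """Build a signed CloudStack API URL (standard HMAC-SHA1 signing:
    sort the params, lowercase the query, sign with the secret key)."""
    params.update({"command": command, "apikey": API_KEY, "response": "json"})
    query = "&".join(
        f"{k}={urllib.parse.quote(str(v), safe='*')}"
        for k, v in sorted(params.items())
    )
    digest = hmac.new(
        SECRET_KEY.encode(), query.lower().encode(), hashlib.sha1
    ).digest()
    signature = urllib.parse.quote(base64.b64encode(digest).decode(), safe="")
    return f"{ENDPOINT}?{query}&signature={signature}"

def main():
    # type=Routing restricts the listing to hypervisor hosts.
    with urllib.request.urlopen(signed_url("listHosts", type="Routing")) as resp:
        hosts = json.load(resp)["listhostsresponse"].get("host", [])
    down = [h["name"] for h in hosts if h.get("state") != "Up"]
    assert hosts, "no hypervisor hosts found"
    assert not down, f"hosts not Up: {down}"
    print(f"OK: {len(hosts)} hypervisor hosts Up")

if __name__ == "__main__":
    main()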

But until such tests exist and run continuously, I believe that the RC window
(72 hrs) is too short, that every -1 should have a Jira ticket which can then
be tracked as a blocker, and that the next RC shouldn't start until those bugs
are cleared, so everyone can see where we stand during the RC process.
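On the visibility point, even a trivial script against the ASF Jira's REST API
could report the open blocker count alongside every RC; the JQL and fixVersion
below are just examples, to be adjusted to the release actually being voted on.

# Illustrative sketch: list open blockers for a release from the ASF Jira,
# so everyone can see where we stand during an RC.
import json
import urllib.parse
import urllib.request

JIRA = "https://issues.apache.org/jira/rest/api/2/search"
JQL = ('project = CLOUDSTACK AND priority = Blocker '
       'AND fixVersion = "4.3.0" AND resolution = Unresolved')

url = JIRA + "?" + urllib.parse.urlencode({"jql": JQL,
                                           "fields": "summary,status"})
with urllib.request.urlopen(url) as resp:
    result = json.load(resp)

print(f"{result['total']} open blockers:")
for issue in result["issues"]:
    fields = issue["fields"]
    print(f"  {issue['key']}: {fields['summary']} [{fields['status']['name']}]")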



Regards,

Paul Angus
Cloud Architect
S: +44 20 3603 0540 | M: +447711418784 | T: @CloudyAngus
paul.an...@shapeblue.com

-----Original Message-----
From: David Nalley [mailto:da...@gnsa.us]
Sent: 13 March 2014 16:42
To: dev@cloudstack.apache.org
Subject: Release cadence

The RC7 vote thread contained a lot of discussion around release cadence, and I 
figured I'd move that to a thread that has a better subject so there is better 
visibility to list participants who don't read every thread.

When I look at things schedule-wise, I see our aims and our reality.
We have a relatively short development window (in the schedule) and almost 50%
of our time in the schedule allocated to testing (over two months). However, it
seems that a lot of testing, or at least the testing that surfaced what became
blockers to the release, didn't happen until RCs were kicked out, and that's
where our schedule has fallen apart for multiple releases. The automated tests
we have were clean when we issued RCs, so we clearly don't have the depth
needed from an automated standpoint.

Two problems, one cultural and one technical. The technical problem is that our 
automated test suite isn't deep enough to give us a high level of confidence 
that we should release. The cultural problem is that many of us wait until the 
release period of the schedule to test.

What does that have to do with release cadence? Inherently not much, but let me
describe my concerns. As a project, the schedule is meaningless if we don't
follow it, and effectively the release date is held hostage. Personally, I do
want as few bugs as possible, but it's a balancing act: people will doubt our
ability if we aren't able to ship. I don't think it matters if we move to
6-month cycles; if this behavior continues, we'd miss the 6-month date as well
and push to 8 or 9 months. See my radical proposition at the bottom for an idea
on dealing with this.

I also find myself agreeing with Daan on the additional complexity.
Increasing the window for release inherently increases the window for feature 
development. As soon as we branch a release, master is open for feature 
development again. This means a potential for greater change at each release. 
Change is a risk to quality, or at least an unknown that we again have to test.
The greater the quantity of change, the greater the potential threat to
quality.

Radical proposition:

Because we have two problems of a different nature, we are in a difficult
situation. This is a possible solution, and I'd appreciate you reading and
considering it. Feedback is welcome. I propose that, after we enter the RC
stage, we not entertain any bug as a blocker unless it has an automated test
case associated with it. This means that you are still welcome to do manual
testing of your pet feature and the things that are important to you, during
the testing window (or anytime, really). However, if the automation suite isn't
also failing, then we consider the release to be of high enough quality to
ship.
This isn't something we can codify, but the PMC can certainly adopt this
attitude as a group when voting, which also means that we can deviate from it.
If you bring up a blocker for a release, we should immediately be looking at
how we can write a test for that behavior; a sketch of what I mean follows.
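For illustration only (the issue ID and reproduction steps here are invented),
the kind of test I'd want attached to a blocker report is a skeleton like the
one below: the bug's reproduction steps, encoded so the suite fails on the
broken build and guards against regression afterwards. A real version would
live in our integration suite rather than plain unittest.

# Skeleton of a regression test accompanying a hypothetical blocker report.
import unittest

@unittest.skip("skeleton only: fill in real API calls for the reported bug")
class TestBlockerCLOUDSTACK0000(unittest.TestCase):
    """Hypothetical blocker: re-attaching a detached data volume fails."""

    def test_reattach_detached_volume(self):
        # 1. Deploy a VM and create a data volume
        #    (deployVirtualMachine, createVolume).
        # 2. attachVolume, then detachVolume.
        # 3. attachVolume again: the reported bug returned an error here.
        # 4. Assert the volume ends up attached and in the Ready state.
        pass

if __name__ == "__main__":
    unittest.main()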
The proposal should also mean several other behaviors become a standard part of
our process. We need to ensure that things are well tested before allowing a
merge. This means we need a known state of master, and we need to perform
testing that allows us to confirm that a patch does no harm; a rough sketch of
such a gate follows. We also need to insist on the implementation of
comprehensive tests for every inbound feature.
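As a very rough sketch of a pre-merge "does no harm" gate (the Maven goals and
merge strategy are placeholders for whatever CI would actually run, and it
assumes the patch branch is checked out):

# Sketch: build the tree and run the existing automated suites before a
# merge to master is allowed.
import subprocess
import sys

STEPS = [
    ["git", "fetch", "origin", "master"],
    # Does the patch still merge cleanly against current master?
    ["git", "merge", "--no-commit", "--no-ff", "origin/master"],
    # Unit tests run as part of the build; integration/smoke tests
    # against a test zone would go here as well.
    ["mvn", "-q", "clean", "install"],
]

for step in STEPS:
    print("+", " ".join(step))
    if subprocess.call(step) != 0:
        sys.exit(f"pre-merge gate failed at: {' '.join(step)}")
print("pre-merge gate passed; patch does no (detected) harm")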

Thoughts, comments, flames, death threats? :)

--David