On Fri, 18 Apr 2008, Kim Quirk wrote:

In my experience running QA teams and releases for commercial projects,
small fast releases require (or imply) quite a bit of focused process and
really good automation on the testing side.

Also, after other discussions on this list, it seems like there are two
other items that drive 'major release' twice a year and a few bug fix
releases in between:

1 - Our target users (mostly schools) will not be upgrading often, and many
will require weeks or months of their own testing before they do upgrade
thousands of computers.

However, if you only make two releases a year, the schools have very little choice about what they upgrade to.

If you have more frequent releases, they can more easily choose between one that just came out (and hasn't been tested much, but has more updates) and one that has been out a little longer, which they can be more confident doesn't have any major landmines still undiscovered.

2 - From a support perspective, this audience will probably require that we
support a major release for an entire school year. If we offer too many
releases during that time, we will not be able to keep up with the backward
compatibility matrix of releases that have to work with other releases. If
kids upgrade on their own, will their laptops still work with the older
version installed on 90% of the other laptops, etc.?

The compatibility of software is an important issue in any case. Whatever your release cycle, you are going to have times when you have mixed releases. And if OLPC achieves the deployment scales it is aiming for, you will start to have XO machines near each other that are controlled by different schools (think of the mix you could get at a vacation spot during school breaks).

I think if our product were aimed at developers, or if it were a server-based
product where we could control the releases and there were no backward
compatibility problems, then it would be great to have many small, fast
releases.

I think more, faster releases are a better approach. The testing effort grows much faster than the count of changes (given the need to test combinations of things), so frequent, small releases are easier to test.

I don't view the backwards-compatibility issue as a showstopper, because I see it as being necessary in either case; it's just more obvious with frequent releases (which can be a good thing if it makes people do a better job).

David Lang

Kim


On Thu, Apr 17, 2008 at 7:54 AM, Marco Pesenti Gritti <[EMAIL PROTECTED]>
wrote:

On Thu, Apr 17, 2008 at 1:23 PM, Tomeu Vizoso <[EMAIL PROTECTED]>
wrote:
 I see this too as a hard problem and don't really have experience
 either. What I would expect is that working on frequent time-based
 releases, with features slipping as needed, works best for projects like
 Linux distros, where slipping a feature roughly means not updating a
 set of packages to the latest stable version.

Even Linux distros (Fedora at least) don't actually do focused
releases. Roughly, they set a timeframe and take in everything
that is ready by that date. This is very easy for a Linux
distribution. It would be harder on the Sugar codebase, but still very
much feasible; it's the same approach as the GNOME releases.

Though Michael's proposal goes a step further: we would be focusing on
only one goal (or a very limited number of goals) per release.

Marco
_______________________________________________
Devel mailing list
Devel@lists.laptop.org
http://lists.laptop.org/listinfo/devel

