On Monday 11 May 2009 15:13:05 Quim Gil wrote:
> 1. Developer pushes packages from extras-devel to extras-testing.
>
> 2. Automated testing does the checks.
>
> 2a. If the test fails, the packages don't get to extras-testing.
>
> 2b. If the test is successful, the packages go to extras-testing.
>
> 3. In extras-testing the betatesters put the software into stress,
> equipped with Nitro (crash reporter installed in their devices) plus
> whatever tools they can use voluntarily.

Not sure about the requirement to have Nitro installed.  In particular, what 
happens to the (potentially large) number of people using extras-testing who 
do not have Nitro installed?  And what happens if none of the people who want 
to beta test this package have Nitro installed?

Also, what if this is a brand new application (e.g. liqbase, last year) -- how 
does the developer recruit beta testers?  And what if this is a commercial 
company that has created an app and wants it available as soon as possible?

> 3a. If they find severe bugs the packages go back to extras-devel.

This is at the judgement of the developer, presumably (one man's "severe" may 
be different from another's).  Downgrading packages in repositories does not 
work well, so this would have to be a rare occurrence.

> 3b. If nobody finds a showstopper the app goes to extras after N weeks
> and N votes.

Certainly not if the developer has not requested it.

For GPE, I release one or two beta releases over a period of several weeks or 
months in the run-up to a production release.  These need to go into 
extras-testing because that is where the people who like to run and test new 
stuff live.  They are not candidates for promotion to extras because they 
are beta releases -- even if they were perfect, their descriptions would need 
changing.  And, more realistically, there will be bugs reported, some of 
which I will want to fix before release.

Eventually I create a production release.  This needs a brief sanity check 
(for the last production release I gave this 2 days and about 3-4 people 
tried it out) and then should go into extras.  I would prefer that sanity 
check to be faster than 2 weeks (if previous, similar but not identical, 
versions have already been in extras-testing for several months).  But for 
GPE I could live with the "N weeks, M testers" rule for this sanity check.

> I see the point of asking for qualified humans giving the green light as
> something more trustworthy than community testers rating. The problem I
> see is that these people are usually busy and being the filter can put a
> lot of stress in them (holidays, exams, problems at home... and a dozen
> of developers waiting for you to press a green button).

I don't see the humans as "more trustworthy", just more flexible.  Maemo is a 
small community, with a very small number of packages compared with (say) 
Debian.  And as a "mobile device" oriented distribution we need to be 
encouraging developers to make available neat applications, of good quality, 
as quickly as possible.  And I don't see a fixed rule as being the best way 
to achieve that.

I would rather that any one of a team of (say) Jeremiah, Jaffa, Qole and a few 
others reviewed a submission request form in which the developer explains 
what testing has happened, by whom, and over what period of time, and then 
made the promotion if they are convinced.  The "N weeks, M testers" rule 
would be the standard guideline, but the reviewer could change the criteria 
if they feel it is appropriate to the circumstances.
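
To make that concrete, here is a rough sketch (Python, purely illustrative -- 
the names, fields and thresholds below are my own invention, not an actual 
maemo.org interface) of what such a review gate could look like:

    from dataclasses import dataclass

    DEFAULT_WEEKS = 2    # assumed default "N"
    DEFAULT_TESTERS = 4  # assumed default "M"

    @dataclass
    class SubmissionForm:
        package: str
        weeks_in_testing: float
        testers: int                     # distinct people who tested it
        testing_summary: str             # what testing, by whom, how long
        reviewer_approval: bool = False  # explicit human green light

    def may_promote(form: SubmissionForm) -> bool:
        """Promote if the default guideline is met, or if a reviewer
        has judged that the circumstances warrant an exception."""
        meets_guideline = (form.weeks_in_testing >= DEFAULT_WEEKS
                           and form.testers >= DEFAULT_TESTERS)
        return meets_guideline or form.reviewer_approval

    # e.g. a production release whose betas already had months of testing
    # could be waved through after a short sanity check:
    gpe = SubmissionForm("gpe-calendar", weeks_in_testing=0.3, testers=4,
                         testing_summary="2-day sanity check; betas lived "
                                         "in extras-testing for months",
                         reviewer_approval=True)
    assert may_promote(gpe)

The point is that the fixed rule is only the default path; the human review 
is what keeps the process flexible.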

> In fact, what we probably should and will do is test more those apps
> that are most downloaded, for a simple logical reason: they are more
> used. Also, more automated testing and torture could be put on the most
> downloaded apps. But doing intensive and reliable testing on every
> package uploaded is not scalable, probably not even when automated since
> there are all kinds of apps, languages, dependencies...
>
> If a major bug could make it inside a final release of an app downloaded
> 50 times and the chance for the bug to explode is 1/1000... tough luck.
> It happens all the time.

No single "N weeks, M testers" criterion will work well both for apps 
downloaded 50 times and apps downloaded 50,000 times.
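
To put numbers on that, purely for illustration (these tiers are invented, 
not a concrete proposal): if a fixed rule is wanted at all, it would at 
least have to scale with the package's audience, e.g.:

    def required_criteria(downloads: int) -> tuple[int, int]:
        """Return an assumed (weeks, testers) requirement, scaled by
        how widely the package is downloaded."""
        if downloads < 100:    # niche app: a brief sanity check
            return (1, 2)
        if downloads < 5000:   # moderately popular
            return (2, 4)
        return (4, 10)         # heavily used: worth far more soak time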

Graham