On Wednesday 22 September 2010 18:15:57 you wrote:
> 1.    Is the application useful? I think the "average user" is better
> equipped to answer this question, so getting as many votes/comments from
> the "average community" as possible definitely helps.

And herein lies the rub - extras-testing has nothing to do with usefulness. 
For people without the proper background, thumbing up or down means little - 
and it would be a shame for an app to get thumbed down (especially if it's an 
upgrade for something already in Extras) because the icons aren't pretty 
enough, because the UI is not snappy, or simply because they think it is not 
'fun enough'.

> 2.    Is the application safe for the system? Here the average user will
> not have much to contribute, but I still think that having a bunch of
> "power users" report "it has bugs" or "it crashes my system" will be a good
> starting point.

> 3.    What application do I want to install? What new
> applications are out there that I don't know about? These are questions
> that other people's reviews would help answer. Why not have it all in one
> place and make it really easy for the user to give feedback?

No complaints here - though it might be worth pointing out that new app != 
testable app. This is especially a problem when, say, someone uploads to 
extras-devel and goes to sleep, and then the following morning promotes to 
extras-testing. The people who installed the app from extras-devel in the 
meantime (even if they did check the repo) have no clue that they can/should 
leave feedback for that app. KISStester tries to work around this by matching 
installed apps against the apps listed in the QA queue.
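For the curious, the matching itself is conceptually simple - roughly the 
Python sketch below. The QA queue URL and the page scraping are my assumptions 
for illustration, not KISStester's actual code:

    import subprocess
    import urllib.request

    # Assumed location/format of the QA queue listing - illustrative only.
    QA_QUEUE_URL = "http://maemo.org/packages/repository/qa/fremantle_extras-testing/"

    def installed_packages():
        # Package names dpkg knows about on the device.
        out = subprocess.run(["dpkg-query", "-W", "-f", "${Package}\n"],
                             capture_output=True, text=True, check=True).stdout
        return {line.strip() for line in out.splitlines() if line.strip()}

    def qa_queue_packages(url=QA_QUEUE_URL):
        # Naive scrape: pull package names out of the links on the QA queue page.
        html = urllib.request.urlopen(url).read().decode("utf-8", "replace")
        return {chunk.split("/")[0]
                for chunk in html.split('href="/packages/package/')[1:]}

    if __name__ == "__main__":
        for name in sorted(installed_packages() & qa_queue_packages()):
            print("%s is installed here and is waiting for QA feedback" % name)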

> I was about to ask what "kisstester" was. I didn't know there was another

http://talk.maemo.org/showthread.php?t=60158

> application in the works. But as much as I value the testers' job and
> opinions, I think one of the big issues seems to be that "the demand is
> higher than the resources". Also, I can't see why bringing more information
> to the system can hurt. It is up to us to decide how that information
> affects the promotion system, but I think as a user I would like to know as
> much as I can about other people's experiences with the applications.

Information - yes - that's why we even added a bugtracker link to the 
testing/package pages by default. We WANT comments; the careful part starts 
with the voting.

> 1.    Right now there is no distinction between a new version of the
> package and the first one. I understand a previous version being approved
> should not be a "guarantee" for a new release, but it ought to help some.
> The worst case scenario for me is: an application gets approved in testing
> but has some big problem. The developer fixes the problem in a matter of
> minutes, but it will take another 10-20 days to get it out there. A simple
> way to improve this would be to notify the "testers" of the previous
> version, so they could prioritize reviewing the new one.
> 2.    After you promote a package to extras-testing and before it is
> approved or rejected, you can't promote a new version of the package. When
> the "testing" process can take a long time, there is a good chance a new
> version will be available before the last one was "voted". I don't pretend
> to know the answer here, but I see a problem: developers will just promote
> to testing right after uploading to devel to save a place in the queue
> (defeating the purpose of extras-devel), or they will not promote a new
> version of their application that is supposedly better than the previous
> one in "testing" because this would reset the clock. I understand the focus
> should not be the developer, but I think the users will suffer in both
> cases.

I'm with you - this *has* been talked about, but we were far from consensus; 
maybe it will be something worth revisiting.

> 3.    Is there anything we can do to check for users' privacy
> vulnerabilities in the QA? Again, I don't have an answer, but I think it is
> potentially more important than things like having 20% of the files here or
> there. Of course, it is much more difficult to check for, too.

Currently we have a generic privacy/security checkpoint, but we're not sure 
how/what to check... Suggestions welcome.

> By no means do I want to reduce the merit of the "testers", but even they
> will miss what a "larger user base" would see. I think the testers should
> be the "rejecters" and not the "approvers". They would be in charge of
> checking for the minimum set of requirements and whether the application is
> a danger to the user. The community would be the positive stimulus: enough
> good and no bad, and the application is ready!

I would agree in general, but I've had bad experiences with that - as said, 
I've seen several (valid) complaints made on Talk even though the testing page 
was mostly empty - that's why I'm hesitant to say 'no/minimal feedback means 
good feedback'.

Best regards,
Attila Csipa
_______________________________________________
maemo-developers mailing list
maemo-developers@maemo.org
https://lists.maemo.org/mailman/listinfo/maemo-developers
