The results of the meeting all sound good.

As part of the UX work, should we include the application ratings in
the testing interface?  We just had someone step through random areas
of an app; it seems we should leverage that and remind them to rate
it.
Which reminds me: is there any reason Maemo doesn't use Debian's
popularity contest?

To help remind people that there is a checklist and what's on it,
should the rating page link to or include the criteria?

I see there were no notes on the algorithm.  A threshold of 10 was
annoying as a developer.  As a tester, though, a threshold of 10 made
me more comfortable skipping a full-blown /opt check or power
management check: with 10 people voting, I could hope someone else
would cover those while I worried about other issues, like
application stability.  With a smaller threshold I would feel more of
a burden to do all of the steps myself, which would discourage me.

So I guess I'll share my idea.  To me, it seems one tester would
probably be enough for /opt, power management, etc.  If the
categories were broken out, each could just require a net of +1
karma, with a comment describing steps and results required
regardless of whether the vote was up or down.  Net +1 is so that, if
others disagree, they can vote it down.  Requiring comments either
way makes people comfortable that it was tested properly, rather than
someone just saying "it works for me" and voting it up.

Also, for most apps, /opt and power management are less likely to
change from release to release.  A package's net karma in those
categories could carry over, with an attached comment noting that it
carried over from release X.Y.  If a tester feels highly motivated,
or feels it's been too long since these were tested, and they find an
issue, their single -1 would block promotion.
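
Again just to illustrate, reusing the made-up Vote/Category types
from the sketch above, the carry-over and blocking rules might look
something like:

    def carry_over(old, previous_release):
        # start the new release's category with the old net karma,
        # tagging it so readers know where it came from
        new = Category(old.name)
        net = old.net_karma()
        if net > 0:
            new.votes.append(
                Vote(net, "carried over from release %s" % previous_release))
        return new

    def blocked(category):
        # a single fresh -1 blocks promotion no matter how much
        # karma was carried over
        return any(v.value < 0
                   and not v.comment.startswith("carried over")
                   for v in category.votes)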

Ed Page
(epage)
