There are a few things I think need to be clarified:
1. What are we, as a community, looking to get out of this Fedora
Test Day event?
2. What is the purpose of a "test case" and/or "test plan"? How
detailed should they be? Presuming we want them, how should
they be stored in general, and for this particular scenario?
For #1, I do not think it is appropriate for this occasion to have users
start out by learning how to use Sugar on their own, for a few reasons:
* The amount of time any given tester will be available to
help us out is likely limited. Time spent on discovery will
not be spent on other tasks.
* Not everyone will start at the same time. Perhaps calling this
a "Test Day" is a misnomer, because there is no guarantee that
testers will be in the same time zone or country.
* Sugar is a relatively stable platform with a few known, recurring
UI disputes. It is not Nell, the Helicopter experiment, or
another project where the user interface could potentially require
major changes.
With this test day, my personal view is that we need to get feedback
verifying basic Sugar and activity functionality in Fedora. When
Peter Robinson, Kalpa Welivitigoda, or someone else updates a Sugar
software package in Fedora, it often goes through the verification
process without a single person commenting on whether the proposed
update worked or not.
Mind you, usage feedback is appreciated, but it is more of a secondary
concern to me. Fedora held "Fit and Finish" test days during the
Fedora 12 cycle where they asked for general usage feedback; perhaps
we can propose that they do another round of those, aimed at the
various desktop environments, for Fedora 18.
You might look at these 2 pages:
http://wiki.sugarlabs.org/go/Sugar_Creation_Kit#Activity_Testing
http://wiki.sugarlabs.org/go/Community/Distributions/Fedora-SoaS#Testing_Results
Testers are welcome to add info: "It is a wiki"
Tom Gilliard
satellit on #sugar
For #2, I have used test templates similar to New Zealand's to
verify activities in the past, and was thinking of making one
available in this case. Translating one into a wiki template would
make it straightforward to clarify which activities support sharing,
webcam usage, etc.
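Purely as an illustration (not an agreed-upon format), a results table
in MediaWiki markup might look something like this; the activities,
columns, and placeholder values are only examples:

  {| class="wikitable"
  ! Activity !! Version !! Installs? !! Starts? !! Sharing !! Webcam !! Tester !! Notes
  |-
  | Browse || (version) || yes || yes || n/a || n/a || (tester) ||
  |-
  | Record || (version) || yes || yes || untested || yes || (tester) ||
  |}

A proper wiki template could generate the header row so every page
stays consistent, but even a plain table like this would let a tester
record sharing and webcam support at a glance.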
The reason I am interested in maintaining test cases in a system that
keeps a historical log of who did what when is that I want to be
able to parallelize tasks. Although I recognize I could be more
efficient, there simply is too much material in Sugar and the XO
platform for one person to focus on. And yet I regularly get asked
"do you remember bug #123?" or "when was the last time someone looked
at Q?"
I am not looking for test cases so detailed that they list which
buttons to click and when, but rather simple ones like "Does it
install?" and "Can it open a saved document?".
Fedora takes a curious approach to this: they write a series of
test cases which could be parallelized, but then invite everyone to
run the same set of test cases. And usually, pretty much
everyone runs most of the available items.
For comparison, look at the last GNOME 3 test day
(http://fedoraproject.org/wiki/Test_Day:2011-04-21_GNOME3_Final)
versus the last Sugar one
(http://fedoraproject.org/wiki/Test_Day:2010-08-19_Sugar).
I'm open to taking suggestions back to the Fedora Testing mailing list
if someone has an idea on how we could do things better, but I'm
trying to avoid cross-posting too much. We could also inquire on the
Fedora QA mailing list as to who might participate and what their
skill levels are so we can better tune our approach.
On Sat, Feb 18, 2012 at 5:04 AM, Tabitha Roder <[email protected]> wrote:
On 17 February 2012 08:36, Samuel Greenfeld <[email protected]> wrote:
On March 22 there will be a Sugar test day for Fedora 17.
This means that the Fedora community in general will be
gathering to look at Sugar and see what issues we have close
to the end of the Sugar 0.96 cycle.
While test cases can be useful, I always try to start with some
discovery time as this is when you can get some feedback on design
and intuitive behaviour (though for many users this is influenced by
their prior use of other systems). Something like:
"Find a friend. Work together to discover how to open the laptop
if you have an XO, or start Sugar. Together try clicking on things
and see if you can learn how to play any games or complete any
activities. Can you find ways to take photos, write stories, make
music."
After that, get their feedback on how that went before giving them
a test case. First time users of Sugar can also give you feedback
on their experience of first use of an activity while following
testing instructions. There have been a number of occasions when I
have said "oh, you have to click on that first and then click on
that other thing" and they have said "why is it designed like
that?" which really makes us rethink about the design of activities.
Our basic activity testing template (written a long time ago) is
here -
http://wiki.laptop.org/go/Activity_testing_template#The_NZ_activity_test
On the topic of tracking testing, we have looked at a number of
options here in NZ and I think Australia also looked at a number
of options. In NZ we tried writing them on wiki.laptop.org,
but that didn't really work. My personal
method of managing test requests is to try to tag the requests (or
potential requests) in my email inbox when they arrive and then
test them on Saturday, archiving off emails as things are tested.
This only works for us because we meet in one place; it is not a
suitable solution for multiple testing locations. I personally
don't think we should add any more systems, but look at ways to use
existing systems to manage testing - such as the two bug trackers
we already use or the activities.sugarlabs.org site.
Hope this helps
Tabitha
_______________________________________________
Testing mailing list
[email protected]
http://lists.laptop.org/listinfo/testing