On 20/03/14 23:38, Pasi Lallinaho wrote:
Hello,
this is a reply to the QA recap/feedback thread. As the original
thread went off track, I decided to start a new one to discuss the
original question at hand.
PACKAGE TESTING
First of all, I think it was a good move to run the package testing in
groups and in cadence before we hit the beta milestones. Running all
those tests and gathering a (big) list of bugs was and is important,
especially now that we have entered the "bug fixes only" stage of
release preparation. Without it, I am sure we would be able to fix far
fewer of the bugs that are annoying and affect numerous people.
That being said, I think the number of calls was just about perfect
for an LTS cycle. I personally think we should go through all the
groups during regular releases as well, but possibly combine several
groups into one call and relax the amount of testing "required".
Optional tests could be literally that: run them if you are
comfortable doing so, but if they are left untested, that's fine as well.
As to what (else) to test, I think we should try to focus on new
features, as we did this cycle. This can and probably should be
extended to running tests on applications that have had a major update
during the cycle. All of this in a flexible manner: the more new
things we have to test, the looser the requirements for the other
tests should be. Except on the LTS releases...
I've yet to decide whether some of the testcases are a bit too
thorough or just about right. I guess we can agree and assume that the
number of bugs found correlates somewhat with how deep the tests go.
As I see it though, the deeper and more specific the tests are, the
more mechanical running them becomes. Which leads us to exploratory
testing...
I have a few doubts about exploratory testing. How do we motivate
people to run exploratory testing on the development version while it
is not ready for production or day-to-day environments? If the tests
aren't run on/as your main system, how can the testing be natural
enough to be truly exploratory? How do we strike a good balance
between feature and exploratory testing?
MILESTONE (ISO) TESTING
It is hard to evaluate how well the milestone ISO testing succeeded
because we still have one beta to go, which is also the most important
milestone. It is something we can improve on, though.
The alpha releases could have been focused more on specific issues. As
it was, we more or less just ran through them without a clear focus.
Of course this means that developers need to have their stuff together
earlier in the cycle, but that is a desirable direction generally.
I would rethink the number of alpha releases we want to participate
in, especially for non-LTS releases. We can opt in to as many as we
did now if we set a clear point of focus for each of them. This looks
unrealistic for T+1 though, as this cycle has been really busy for
everybody and we have landed a lot of work that was prepared over the
last two years.
For the beta releases, we should seek more publicity. We still have
the beta 2 release to come, so let's try to fix at least some of that
for Trusty.
CONCLUSION
To end the feedback on a positive note (though there weren't many
negative points in total anyway), I think we have held QA to the
highest possible standard considering the size of our team and the
amount of new things landing this cycle.
Finally, a big THANK YOU to Elfy for running the QA team, doing all
the calls, reporting back to us, making sure bugs get noticed and
features land in time, et cetera... Last but not least, thanks for
putting up with all of us who have sometimes more or less neglected
our QA duties and been unresponsive to questions and calls. It is very
much appreciated, and I truly think that 14.04 would be a lesser
release without your work and persistence!
Cheers,
Pasi
Rather than post to the last mail, I'll reply to this one.
Thanks for the feedback from everyone - much appreciated :)
This is what I've taken from the comments.
*Testcase grouping* - we'll call for more than one group at a time.
I'll likely be re-organising some of the testcases post 14.04 as well.
*Optional testcases* - these can be treated as truly optional for
non-LTS testing.
*New feature testing* - much as we did this cycle, we'll fit these in
when we can; existing testcases will take a back seat if new features
need testing.
*Exploratory testing* - I'm not looking at this any longer - or at
least, it needs to work in conjunction with autopilot testing; there
will be a mail to the list about this in the near future from one of
the other members of the QA team. A rough sketch of what an autopilot
test looks like follows after this list.
*Specific testing during milestones* - we'll work specific package
testing into various milestones when it's appropriate for us.
Necessarily, this will need to be led by the developers - they know
best what needs to be tested. We'll only take part in milestones when
there is a need.
*Testcase feedback* - I'll send a mail to the list regarding this
separately. Those of you who have actually taken part in package
testing - your input on this will be invaluable, so please join in
with that discussion.
*Feedback* to the list does help us - but it is a whole lot easier for
us to follow the trackers: bugs reported there end up on our
blueprints during the cycle, so we can track them. Mailing list
threads are not trackable. In addition, when you report to a tracker,
it will show you the bugs others have already reported against that
test, be it a package or an image.
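For anyone who hasn't come across autopilot yet, here is a rough
sketch of what such a scripted test might look like. This is an
illustration only, assuming the stock autopilot Python API and the
autopilot-gtk introspection backend; the choice of mousepad and the
GtkTextView widget name are hypothetical examples, not an agreed
Xubuntu testcase.

    # Illustrative sketch only: assumes autopilot 1.x and the
    # autopilot-gtk backend; 'mousepad' and 'GtkTextView' are guesses,
    # not part of any agreed Xubuntu test suite.
    from autopilot.testcase import AutopilotTestCase
    from testtools.matchers import NotEquals

    class MousepadSmokeTest(AutopilotTestCase):
        """Launch the editor and check its main text area comes up."""

        def setUp(self):
            super(MousepadSmokeTest, self).setUp()
            # launch_test_application starts the application under test
            # and returns a proxy object for querying its widget tree.
            self.app = self.launch_test_application('mousepad',
                                                    app_type='gtk')

        def test_text_view_appears(self):
            # select_single searches the introspected widget tree; the
            # class name would need checking against the real tree.
            text_view = self.app.select_single('GtkTextView')
            self.assertThat(text_view, NotEquals(None))

Tests like this are driven by autopilot's own runner (autopilot run),
so they stay repeatable in a way that ad-hoc exploratory sessions
don't.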
Elfy
--
Ubuntu Forum Council Member
Xubuntu QA Lead