On 20/03/14 23:38, Pasi Lallinaho wrote:
Hello,
this is a reply to the QA recap/feedback thread. As the original
thread went off track, I decided to start a new one to discuss the
original question at hand.
PACKAGE TESTING
First of all, I think it was a good move to run the package testing in
groups and in cadence before we hit the beta milestones. Running all
those tests and gathering a (big) list of bugs was and is important,
especially now that we have entered the "bug fixes only" stage of
release preparation. Without that testing, I am sure we would have been
able to fix far fewer of the bugs that are annoying and affect numerous people.
That being said, I think the number of calls was just about right
for an LTS cycle. I personally think we should go through all the
groups during regular releases as well, but possibly combine more groups
into one call and relax the amount of testing "required". Optional
tests could be literally that: run them if you are comfortable, but if they are
left untested, that's fine as well.
That makes some sense. It's easy enough to call for more than one group
at a time, though I will be looking at the current groups we have; if
nothing else, the new trusty group will be amalgamated.
Your thoughts on optional testing are close to my current thoughts ;)
As to what (else) to test, I think we should try to focus on new
features, as we did this cycle. This can and probably should be
extended to running tests on applications that have had a major update
during the cycle. All of this in a flexible manner: the more new
things we have to test, the more relaxed we can be about running the
other tests. Except on the LTS releases...
This is ok in itself, as long as new features aren't landing at the end.
Which is just me banging the same drum as last cycle :)
I've yet to decide if some of the testcases are a bit too thorough or
if they are just about right. I guess we can agree that the number of
bugs found correlates somewhat with how deep the tests go. As
I see it though, the deeper and more specific the tests are, the more
mechanical running them becomes. Which leads us to exploratory testing...
I've been rethinking this; there is no reason why a testcase couldn't
have a less thorough test incorporated, or even separate tests.
At the end of the day it is about getting eyes on these - at the current
count we have ~30-40 people actually reporting.
I have a few doubts about exploratory testing. How do we
motivate people to run exploratory testing with the development
version while it is not ready for production or day-to-day
environments? If the tests aren't run on/as your main system, how can
the testing be natural enough to be genuinely exploratory? How do we
strike a good balance between feature and exploratory testing?
This would be a completely new thing for us, but then 2 cycles ago the
packages tracker was as well.
I need to talk to Nick Skaggs about how well this went for Ubuntu.
MILESTONE (ISO) TESTING
It is hard to evaluate how the milestone ISO testing succeeded because
we still have one beta to go, which is also the most important
milestone. That said, there is room for improvement: the alpha releases
could have been focused more on specific issues, whereas this time we
more or less just ran through them without a clear focus. Of course,
this means that developers need to have their work together earlier
in the cycle, but that is generally a desirable direction.
This too makes sense to me. It does lead to rethinking the way calls are
put out - it could be, for example, a call to test AlphaX and also to test
Parole with its testcase.
I would rethink the number of alpha releases we want to participate in,
especially for non-LTS releases. We can opt in to as many as we did
now if we set a clear point of focus for each. This looks
unrealistic for T+1 though, as this cycle has been really busy for
everybody and a lot of work prepared over the last two years has been
included.
I agree here - this has to be led by those landing features.
For the beta releases, we should get more publicity. We still have the
beta 2 release to come, so let's try to fix at least some of that for
Trusty.
CONCLUSION
To end the feedback on a positive note (though there weren't that many
negative points in total anyway), I think we have held QA to the
highest possible standard considering the size of our team and
the amount of new things landing this cycle.
Finally, a big THANK YOU to Elfy for running the QA team, doing all the
calls, reporting back to us, making sure bugs get noticed and
features land in time, et cetera... Last but not least, thanks for
putting up with all of us who have sometimes more or less neglected our
QA duties and been unresponsive to questions and calls. It is very
much appreciated, and I totally think that 14.04 would be a lesser
release without your work and persistence!
Cheers,
Pasi
Elfy
--
Ubuntu Forum Council Member
Xubuntu QA Lead