On Fri, Dec 9, 2011 at 12:50 AM, Gema Gomez <gema.gomez-sol...@canonical.com> wrote:

> On 08/12/11 21:09, Alex Lourie wrote:
> > On Thu, Dec 8, 2011 at 7:57 PM, Gema Gomez
> > <gema.gomez-sol...@canonical.com> wrote:
> >
> >     On 08/12/11 15:06, Alex Lourie wrote:
> >     > Hi all
> >     >
> >     > Proceeding with the work we started for test case rewriting,
> >     > there's an issue I'd like to discuss here - categorising the
> >     > test cases. How would we like it to be? What categories do you
> >     > think should be created? How do we decide which category a test
> >     > case belongs to? Can any given test be part of more than one
> >     > category?
> >     >
> >     > Please share your thoughts,
> >     > Thanks.
> >     >
> >     > --
> >     > Alex Lourie
> >     >
> >     >
> >
> >     The categorization we have at the moment is:
> >
> >     - Applications
> >     - System
> >     - Hardware
> >     - Install
> >     - Upgrade
> >     - CasesMods (not sure what this even means)
> >
> >     There are many ways to categorize test cases:
> >
> >     - by functionality under test (like we are sort of doing, but not
> >       quite)
> >
> >     - by test type
> >            * positive/negative
> >            * smoke (target the system horizontally and superficially) /
> >              regression (target vertical slices of the system, in depth)
> >            * unit testing (target an API method or a very small piece of
> >              functionality) / integration testing (target the
> >              integration of two or more subsystems) / system testing
> >              (target the system as a whole)
> >            * functional (target functionality: the system behaves as it
> >              should and fails gracefully in error situations) /
> >              non-functional (performance or benchmarking, security
> >              testing, fuzz testing, load or stress testing,
> >              compatibility testing, MTBF testing, etc.)
> >
> >     - by test running frequency: this test case should run
> >     daily/weekly/fortnightly/once per milestone
> >
> >
> >     And many other ways. I am deliberately introducing a lot of jargon
> >     here; those less familiar with QA terminology, please have a look
> >     at the glossary or ask when in doubt. If we want to truly improve
> >     the test cases we are writing, we need to start thinking about all
> >     these things:
> >     https://wiki.ubuntu.com/QATeam/Glossary
> >
> >     Thanks,
> >     Gema
> >
> >
> > Hi Gema
>
> Hi Alex,
>
> > That's OK, we can handle the jargon.
>
> I think this is not true for everyone on the list, so I'd like to give
> everyone the opportunity not just to collaborate and add test cases but
> also to learn in the process. You may be comfortable with those terms,
> but some other people may not be.
>
>
Sure, I didn't mean to patronise in any way.


> >
> > I think that in our case, categories should represent the way we work.
>
> Categories should be related to the test cases and what they are trying
> to test, and they are something different from runlists or sets of test
> cases that we run together.
>
> > So for the community team, current categories are probably fine, but
> > for QA engineering they may not be well suited (you may want an
> > additional manual/automatic note).
>
> I think we should all be doing QA engineering. I don't see why the
> community work should be less accurate or valuable than the work done by
> QA engineering (who are these people anyway? Do you mean the Canonical
> QA team?)... I think we are prepared to help whoever wants to grow as QA
> engineers and learn, while also respecting those who just want to spend
> some time running test cases and contributing in that way. Both are
> valuable engineering tasks; they are just different.
>
> I don't believe community work has to mean a lack of accuracy/quality or
> a lack of engineering... apologies if I have misinterpreted your words,
> but I feel quite strongly about this.
>
>
I completely agree. I don't think that there are more or less important
contributors, or that someone's work is more professional than someone
else's. I just meant that the selection of specific categories would be a
result of the specifics of our work. In some cases the importance of the
task would suggest a category (such as Urgent, Very Important, Important,
Regular, Optional, etc.), and in the case of our QA, at least currently, I
feel that the proposed categorisation would do. Of course we all need to
strive for a better quality product, and any type of contribution is
welcome. I didn't mean to imply that there are different flows in Ubuntu
QA. I understand that we should all do the engineering part, and I think
that's what we are trying to do here.
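
To make the multi-category question above concrete, here is a minimal
Python sketch of what tagging a test case with more than one category
could look like. This is purely illustrative; TestCase and by_category
are made-up names for the example, not an existing tool:

    # Illustrative only: a test case carrying several category tags.
    from dataclasses import dataclass, field

    @dataclass
    class TestCase:
        title: str
        categories: set = field(default_factory=set)

    def by_category(cases, category):
        """Return the test cases tagged with the given category."""
        return [c for c in cases if category in c.categories]

    cases = [
        TestCase("Browser renders a page", {"Applications", "Smoke"}),
        TestCase("Upgrade keeps user settings", {"Upgrade", "Regression"}),
    ]

    # The same test shows up under any of its categories.
    print([c.title for c in by_category(cases, "Smoke")])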


> > I don't think we should stumble on this issue
> > for too long, so I'd recommend going with the following scheme, and
> > updating it if we feel it necessary.
>
> Agreed, we shouldn't get stuck on this, just keep it in the back of our
> minds and categorize test cases as we go along in a sensible way. We may
> find that in Precise our old categories are fine, but as the test
> suite grows we'll need to change that.
>
> And anybody writing test cases should feel empowered to add or remove
> categories if, for whatever reason, they stop making sense or new ones
> start making more sense.
>
>
For that matter, I might write up a short guide for rewriting the test
cases. We have created resources for rewriting, but have had no real
guidance about using them, so I think creating something short would
serve us well in this regard. And considering the nature of this, if
someone has an idea for improving the flows/tools, we would be happy to
update anything required.


> Also, we need to get people writing test cases to think about why they
> are doing it. What is the aim of a test case? Is it to find problems? Is
> it to demonstrate that a piece of functionality works? What are the pass
> criteria? Otherwise we won't be writing good test cases that people can
> follow and report a result from.
>
>
And that would be another good subject for a guide.
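
For what it's worth, here is a rough sketch of the skeleton such a guide
could ask for, in line with the questions above (aim, steps, pass
criteria). The field names are hypothetical, not an existing Ubuntu QA
format:

    # Hypothetical sketch of the fields a written test case could capture.
    test_case = {
        "title": "Upgrade preserves user settings",
        "aim": "Show that an upgrade does not lose user configuration",
        "steps": [
            "Install the previous release and change a user setting",
            "Upgrade to the current development release",
            "Log in and inspect the changed setting",
        ],
        "expected_result": "The changed setting survives the upgrade",
        "pass_criteria": "The expected result is observed on every run",
    }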


> > So it would go as this:
> >
> > * *Applications* (for application-related tests, such as testing
> > editors, browsers, etc.)
> > * *System* (for testing system built-ins, such as, maybe, service
> > scripts, global/local settings, default system configurations, etc.)
> > * *Hardware* (for testing hardware components)
> > * *Install* (for test cases performed during the installation process)
> > * *Upgrade* (for test cases performed during the upgrade process)
> > * *CasesMods* (I have no idea what this is right now, so if anyone
> > does, please let us know)
>
> Agreed, this is as good a starting point as any other, as long as we
> realize it is the beginning :)
>
> > I am going to use this selection on the Test Cases Rewriting document,
> > and if anything changes we'll update accordingly.
>
> Sounds good.
>
> Thanks for all the ideas that are bouncing about, I think this is a very
> healthy discussion to have.
>
> Have a nice weekend folks!
> Gema
>


Great, thanks for the feedback, Gema!
Have a nice weekend, all!

-- 
Alex Lourie