Hi, all.  I wanted to speak a bit about test case priorities.  I have used
a system like the one below in the past (I wasn't always a PM ;) ) and
found it quite effective for managing large sets of test cases,
especially in short release cycles with limited resources.  It's also
helpful when managing larger test teams, or teams with turnover, where
it can be difficult to explain what is and is not important.

Note that this system applies to additional automated/manual
tests--developer unit tests should pass, or have their failures
documented, BEFORE the build is released.

Prioritizing the test cases should take into account the prominence of
the feature with respect to the BUSINESS IMPACT and the LIKELIHOOD A
FEATURE IS USED.  The idea is to get away from focusing on functional
areas that are easy to test (for example, "Can this note field really
take 255 characters, or does it just take 254?") and direct efforts
toward items that matter to the MFIs ("What happens to these batch
files if they are not run for one day?").  The Mifos operations team
will have a lot of input from the MFIs and consultants in the field as
to which areas are most critical.


Note that test case management--whether manual or automated, waterfall
or agile--needs to follow a "constant gardener" model.  You should be
evaluating, pruning, and updating test cases regularly (and don't get
tricked by any test tool or consultant that tells you differently!).
RPNs and other factors should be periodically reevaluated.


**The Risk Priority Numbers (RPN)**

1 = smoke test  

These tests should be run on each build before it's released to the test
team, to make sure the build is viable and time isn't wasted trying to
test a build that's known to be poor.  Examples from Mifos might be
things like entering a collection sheet, disbursing a loan, running
reports, entering a single payment, et cetera.  Usually, one person is
assigned the smoke test for each build as it is released, and her or his
priority is to complete the build evaluation, identify any defects, and
report results to the whole team (or in our case, the community) as
quickly as possible.  This test set will likely be around 15-25 cases.
Bugs found directly by these test cases are always P1s and often marked
"BLOCKING."

2 = beta/Release Candidate tests

All of these tests must pass before a release candidate can be declared.
This test set will be significantly larger.  Newer areas of code and/or
particularly complex feature sets may get priority 2 for a couple of
releases until they are more stable or better documented.  Usually, bugs
resulting directly from these test cases end up as P1s or P2s.

3 = Alpha release candidate tests

Run all of these tests to assess the overall quality.  Ideally they
would all pass, but if they fail, the resulting bugs are probably P3s.

99 = Tests that *should* be run but would almost never hold up a release
(I have seen exceptions--e.g., on-screen text that is inadvertently
offensive!).

Why "99" and not 4?  You may end up with a number of levels of testing
(see below), but the 99 tests are ALWAYS going to be the lowest priority
(like "Does this field accept only 254 characters instead of 255?").
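To make the levels concrete, here is a minimal sketch of how cumulative
RPN levels might drive suite selection, and why 99 always sorts last no
matter how many levels you insert in between.  (This is just an
illustration in Python--the TestCase class, the sample cases, and
select_suite are invented for this email, not part of any Mifos
tooling.)

```python
from dataclasses import dataclass

@dataclass
class TestCase:
    name: str
    rpn: int  # 1 = smoke, 2 = RC, 3 = alpha, 99 = lowest priority

# A few made-up cases in the spirit of the examples above.
CASES = [
    TestCase("disburse a loan", 1),
    TestCase("enter a collection sheet", 1),
    TestCase("complex fee configuration", 2),
    TestCase("overall report layout", 3),
    TestCase("note field accepts 255 characters", 99),
]

def select_suite(max_rpn: int) -> list:
    """Return the names of all cases at or below the given RPN level.

    Because 99 is reserved for the lowest-priority tests, new levels
    (4, 5, ...) can be added later without renumbering anything.
    """
    return [c.name for c in sorted(CASES, key=lambda c: c.rpn)
            if c.rpn <= max_rpn]

smoke_suite = select_suite(1)       # run on every build
rc_suite = select_suite(2)          # must pass to declare an RC
everything = select_suite(99)       # the 99s still come last
```

The point of the cumulative comparison (`rpn <= max_rpn`) is that each
stage automatically includes everything more important than itself.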

**Other uses of RPN**
You can also use the numbering system to flag certain categories of
tests that only need to be run if certain areas were changed.  For
example, RPN = 50 might be "all reporting UI tests, which only need to
be run if config setting X changes," et cetera.
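As a sketch of that idea (again hypothetical--the band value 50 and the
trigger name "config_x" are invented for this email, not Mifos
conventions), a conditional band might be wired up like this:

```python
# Map each conditional RPN band to the changed area that triggers it.
CONDITIONAL_BANDS = {
    50: "config_x",  # reporting UI tests; run only when config X changed
}

def bands_to_run(changed_areas: set) -> set:
    """Always run release levels 1-3; add a conditional band only when
    the area that triggers it was changed in this build."""
    rpns = {1, 2, 3}
    for band, trigger in CONDITIONAL_BANDS.items():
        if trigger in changed_areas:
            rpns.add(band)
    return rpns
```

This keeps the conditional suites out of the normal release gates while
still making them easy to pull in when their trigger area changes.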


-----Original Message-----
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of
Aliya Walji
Sent: Monday, February 11, 2008 1:04 PM
To: Developer
Subject: [Mifos-developer] General test case template feedback - test
case prioritization

Hello Arpita,

I wanted to start a thread around general test case template feedback
(independent of the feedback for the test cases for specific features
that you have provided).

For now, I only have one general piece of feedback, but as others on the
team start to look at your test cases and formats, there will probably
be others with some ideas for improvement.  Hopefully you don't mind
going about the process of finalizing the format a little bit
iteratively.

A while back, we discussed the need to prioritize test cases versus just
the general test scenarios you have been creating.  In looking at your
test cases, I think it will be necessary to prioritize not only the
scenarios, but also the individual test cases themselves, to make sure
we know which cases to cut if time runs short, and also which bugs to
fix if certain test cases do not pass (e.g., if a bug is found in a
low-priority test case, it is also a lower-priority bug to fix).

I haven't provided guidance about overall quality risks and priorities
around quality risks thus far, because these have not been formally
defined.  That being said, I still think you should take a stab at
providing test case prioritization, even without this guidance from our
team, if possible.

Amy has some good suggestions for how to go about assigning priorities
per test case.  I will let her respond to this email with her ideas, and
then we can discuss over email and in our meetings to figure out how to
take her suggestions and make them work for the v1.1 Mifos testing
effort.

Thanks,

Aliya


