We had a discussion about this earlier, and as a summary the 
definitions were written up at:
http://wiki.meego.com/Quality/Glossary#Test_case_verdict 

So you want to change these?

Br, Jari

-----Original Message-----
From: meego-qa-boun...@lists.meego.com 
[mailto:meego-qa-boun...@lists.meego.com] On Behalf Of ext 
jake.kunn...@nokia.com
Sent: 05 January 2011 14:36
To: Takku Anssi (Nokia-MS/Tampere); Halmagiu Mark (EXT-Digia/Finland); 
Rantalainen Niko.M (Nokia-MS/Tampere); meego...@meego.com
Subject: Re: [Meego-qa] QA verdict rules.

Hi,

My "understanding" on TC verdict is this:

'PASS' - When the test case passes.
'FAIL' - When the test case fails, e.g. the feature is not working or not 
implemented.
'N/A' - When QA cannot give a verdict.
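
In code, this could read roughly as follows (a hypothetical sketch; the 
enum and names are invented, not taken from any MeeGo tool):

    from enum import Enum

    # Hypothetical encoding of the three verdicts above.
    class Verdict(Enum):
        PASS = "PASS"   # the test case passes
        FAIL = "FAIL"   # feature not working, or feature not implemented
        NA = "N/A"      # QA cannot give a verdict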

Do we agree on these?

-jake-



-----Original Message-----
From: meego-qa-boun...@lists.meego.com 
[mailto:meego-qa-boun...@lists.meego.com] On Behalf Of Takku Anssi 
(Nokia-MS/Tampere)
Sent: 05 January, 2011 13:32
To: Halmagiu Mark (EXT-Digia/Finland); Rantalainen Niko.M (Nokia-MS/Tampere); 
meego...@meego.com
Subject: Re: [Meego-qa] QA verdict rules.

Good, strict comments.

It is starting to seem that this has two different perspectives: Core and UX. 
Core is developed based on detailed features (or is it?) and is mostly tested 
through automation. UX, on the other hand, changes constantly, its features are 
described at a high level, and its testing is mostly done manually. 

These particularities also affect how the two QA sides see these test case verdicts. 

~Anssi

-----Original Message-----
From: meego-qa-boun...@lists.meego.com 
[mailto:meego-qa-boun...@lists.meego.com] On Behalf Of Halmagiu Mark 
(EXT-Digia/Finland)
Sent: 05 January, 2011 13:10
To: Rantalainen Niko.M (Nokia-MS/Tampere); meego...@meego.com
Subject: Re: [Meego-qa] QA verdict rules.

Hi

I'd like to make some comments about this, both from a theoretical and
from a practical point of view.

To us, a specific version of MeeGo represents a set of features.
Specifically, MeeGo 1.2 represents the features marked for 1.2 (and
earlier) in bugs.meego.com. If a feature is supposed to be implemented
and it is not, this counts as a fail against MeeGo 1.2. The feature does
not work, the test case fails, and it needs to be marked as such! I don't
understand why there is any argument about this.

The reason it is not implemented and the development methods used are
irrelevant. The fact of the matter is that MeeGo 1.2 should contain xyz
features. If the features are not there, the testing report needs to
reflect that. Marking a test as N/A is inadequate, because it could
mean many things (no test case, unable to run, etc.).

If we can run the test case for a feature that should be implemented and
it fails (for ANY reason not related to the test case itself), the test
result should be fail. If a fourth category is added for 'Not
Implemented', that is fine. But for the moment, 'N/A' means 'Not
Applicable', and the test case (and result) ARE applicable: they show that
the feature does not work. This is true even if we take N/A to mean 'Not
Available'; the test case and verdict are available, and that is what is
being reported.

From a practical point of view:

We have test plans that are run against every image; for the most part,
these should be run automatically. If we were to take into account what
is and isn't implemented, every test plan (or result) would have to be
modified manually for every image. This is not practical.

The progression of test results will show more passes as features are
implemented. That way, running the same test set, we can show the
maturity of MeeGo.

As a summary:

'PASS' - When the test case passes.
'FAIL' - When the test case fails and the feature *should* be
implemented.
'N/A' - When we cannot give a verdict (e.g. no method to test).
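
As a rough sketch of this decision rule (hypothetical Python; the
function and its parameters are invented for illustration):

    from enum import Enum
    from typing import Optional

    class Verdict(Enum):
        PASS = "PASS"
        FAIL = "FAIL"
        NA = "N/A"

    def verdict(test_passed: Optional[bool],
                should_be_implemented: bool) -> Verdict:
        if test_passed is None:
            return Verdict.NA    # no method to test -> no verdict
        if test_passed:
            return Verdict.PASS
        if should_be_implemented:
            return Verdict.FAIL  # planned feature fails, for ANY reason
        return Verdict.NA        # assumption: not planned for this release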


Regards
Mark Halmagiu
Software Engineer - MeeGo Reference Testing



On Wed, 2011-01-05 at 09:11 +0000, ext anssi.ta...@nokia.com wrote:
> I personally don’t like the idea of marking not implemented features
> as failed. Why:
> 
>  
> 
> -       In agile development we actually don’t know which features
> will be implemented (OK, we have a wishlist of features for 1.2, but
> not all of them will be implemented, that's for sure)
> 
> -       Agile SW development is run based on time and quality, meaning
> that tiny pieces of features are done with high quality, and then more
> high-quality features are added on top of that. If we marked not-done
> features as failed, we could easily give a wrong picture of the
> current SW maturity (= pass rates and run rates; test reports are
> often not read in detail, and possible explanations for failed cases
> go unnoticed)
> 
>  
> 
> So, I propose that we continue using the old definitions.
> 
>  
> 
> ~Anssi 
> 
>  
> 
> From: meego-qa-boun...@lists.meego.com
> [mailto:meego-qa-boun...@lists.meego.com] On Behalf Of ext
> jake.kunn...@nokia.com
> Sent: 05 January, 2011 10:24
> To: Pronin Jakov (EXT-Ixonos/Estonia); Rantalainen Niko.M
> (Nokia-MS/Tampere); meego...@meego.com
> Subject: Re: [Meego-qa] QA verdict rules.
> 
> 
>  
> 
> Hi,
> 
>  
> 
> From a testing point of view: when a feature is not implemented, the
> test fails and the verdict should be Fail.
> 
>  
> 
> Br
> 
> Jake
> 
>  
> 
> From: meego-qa-boun...@lists.meego.com
> [mailto:meego-qa-boun...@lists.meego.com] On Behalf Of Pronin Jakov
> (EXT-Ixonos/Estonia)
> Sent: 05 January, 2011 10:02
> To: Rantalainen Niko.M (Nokia-MS/Tampere); meego...@meego.com
> Subject: Re: [Meego-qa] QA verdict rules.
> 
> 
>  
> 
> Hi, 
> 
> We think it would be better to leave the verdicts as they are:
> 
>  
> 
> Pass = When QA can verify that the tested feature works as expected. 
> 
> This is OK with us.
> 
> Fail = When QA can verify that the tested feature does not work as
> expected. (e.g. the feature is not implemented.)
> 
> We propose: Fail = When QA can verify that the tested feature does not
> work as expected. (e.g. the program crashes or the results don't match
> the pass criteria)
> 
>  
> 
> N/A = The feature is implemented but QA can't give a pass or fail verdict.
> 
> We propose: N/A = When the feature is not implemented. 
> 
>  
> 
> The main thing is to set TCs which we cannot verify for some reason
> (missing package or not implemented) to N/A, because how can we Fail
> these test cases if we cannot execute them (the problem might be with
> the hardware or accessories, etc.)?
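> 
> As a rough sketch of the difference between the two readings
> (hypothetical Python; function names are invented for illustration):
> 
>     from typing import Optional
> 
>     # Proposed on the list: "not implemented" counts as a fail.
>     def verdict_proposed(implemented: bool, works: Optional[bool]) -> str:
>         if not implemented:
>             return "FAIL"
>         if works is None:
>             return "N/A"   # implemented, but no verdict possible
>         return "PASS" if works else "FAIL"
> 
>     # Our reading: what we cannot execute or verify gets N/A.
>     def verdict_ours(implemented: bool, works: Optional[bool]) -> str:
>         if not implemented or works is None:
>             return "N/A"
>         return "PASS" if works else "FAIL"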
> 
>  
> 
> Regards,
> 
> Jakov,
> 
> MHUX QA Team.
> 
>  
> 
>  
> 
>  
> 
> -----Original Message-----
> From: meego-qa-boun...@lists.meego.com
> [mailto:meego-qa-boun...@lists.meego.com] On Behalf Of ext
> niko.m.rantalai...@nokia.com
> Sent: Tuesday, January 04, 2011 3:12 PM
> To: meego...@meego.com
> Subject: [Meego-qa] QA verdict rules.
> 
>  
> 
> Hi, 
> 
>  
> 
> There have been separate discussions about test case result verdicts
> and how QA gives them: when a case is a pass, when a fail, when N/A, etc.
> These verdict rules should be aligned throughout QA, and the rules should
> be simple for everyone to understand (both for test execution and for the
> audience reading the reports). So, I wanted to open a discussion about this.
> 
>  
> 
> To unify and simplify the verdict rules, I propose the following:  
> 
>  
> 
> Pass = When QA can verify that the tested feature works as expected.
> 
> Fail = When QA can verify that the tested feature does not work as
> expected. (e.g. the feature is not implemented.)
>
> N/A = The feature is implemented but QA can't give a pass or fail
> verdict.
> 
>  
> 
>  
> 
> NOTES:
> 
> QA-Reports gives us a good way to add more information to test reports
> and individual test cases: updating feature info, updating bug info,
> adding comments, etc., to describe the cause of something.
> 
>  
> 
> - PASS is simple. When something works as described, it works. The
> exception is Dataflow testing: there it is enough that the tested data
> flow works, e.g. on a handset the icon color or localization does not
> need to be correct, because the "data flows" work properly through the
> SW stacks. But this applies in the Dataflow testing context only.
> 
>  
> 
> - FAIL, this should also be simple. When something does not work as
> described, it fails. There shall always be an explanation for this. If
> there is a bug that causes a test case to fail, the bug ID shall be
> linked. If a feature is not implemented, it naturally does not work,
> which causes the test case to fail, and the feature ID shall be linked.
> If there is a fail for an unknown reason, the root cause shall be
> analyzed and a bug filed, with the case temporarily marked "under
> investigation" etc. So from QA-Reports it should be really easy to see
> the root cause of a fail.
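> 
> As a sketch only, a FAIL entry along these lines could carry its
> explanation like this (hypothetical Python; the field names are
> invented, not the QA-Reports schema):
> 
>     from dataclasses import dataclass
>     from typing import Optional
> 
>     @dataclass
>     class FailEntry:
>         case_name: str
>         bug_id: Optional[int] = None      # a known bug causes the fail
>         feature_id: Optional[int] = None  # feature is not implemented
>         note: Optional[str] = None        # e.g. "under investigation"
> 
>         def explained(self) -> bool:
>             # every FAIL shall carry an explanation of its root cause
>             return any(v is not None
>                        for v in (self.bug_id, self.feature_id, self.note))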
> 
>  
> 
> - N/A points mainly to QA: whenever QA can't give a verdict, the test
> case is not available = N/A. There needs to be an explanation of why
> this has happened, and if the root cause is a test asset, a bug is to
> be filed.
> 
>  
> 
>  
> 
> Reference:
> 
> QA-Reports: http://qa-reports.meego.com/1.2
> Core OS N900 report with these rules:
> http://qa-reports.meego.com/1.2/Core/Basic%20feature%20testing/N900/478
> 
> Bug about missing "blocked" status:
> http://bugs.meego.com/show_bug.cgi?id=10388
> 
>  
> 
>  
> 
> BR,
> 
> .niko
> 
>  
> 
> ---
> 
> Niko Rantalainen
> 
> MeeGo Core OS QA Owner
> 
> +358 40 838 1891
> 
> http://wiki.meego.com/Quality
> 
> 


_______________________________________________
MeeGo-qa mailing list
MeeGo-qa@lists.meego.com
http://lists.meego.com/listinfo/meego-qa
