Nick Coghlan added the comment:

I think we're going to have to separate out two counts in the metrics: the 
total number of tests (the current count), and the total number of subtests 
(the executed subtest blocks). Other parameterisation solutions can then 
choose whether to treat each set of parameters as a distinct test case or as a 
subtest - historical solutions would appear as distinct test cases, while new 
approaches might choose to use the subtest machinery.

Subtest results would then aggregate into test case results as follows: the 
test case fails if either:
- an assertion directly in the test case body fails
- an assertion fails in at least one subtest of that test case

The interpretation of "expected failure" in a world with subtests is then 
clear: as long as at least one subtest or assertion fails, the decorator is 
satisfied that the expected test case failure has occurred.
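A sketch of that interpretation, assuming a recent Python 3 where subtests
and expectedFailure interact as described (the names below are illustrative):
a failure inside a single subtest satisfies the expectedFailure decorator, so
the run is reported as an expected failure rather than a plain failure.

```python
import unittest

class KnownBugTests(unittest.TestCase):
    @unittest.expectedFailure
    def test_known_bug(self):
        # The i=3 subtest fails, which is the failure the decorator expects.
        for i in (1, 2, 3):
            with self.subTest(i=i):
                self.assertLess(i, 3)

result = unittest.TestResult()
unittest.defaultTestLoader.loadTestsFromTestCase(KnownBugTests).run(result)

# The failing subtest satisfies the decorator: one expected failure,
# and the overall run still counts as successful.
print(len(result.expectedFailures))  # 1
print(result.wasSuccessful())        # True
```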

----------

_______________________________________
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue16997>
_______________________________________