Nick Coghlan <ncogh...@gmail.com> added the comment:

I agree with Michael - one test that covers multiple settings can easily be 
handled by collecting results within the test itself and then checking at the 
end that no failures were detected. For example, I've done this myself with a 
test that needed to be run against multiple input files: the test knew the 
expected results and maintained lists of the filenames where the result was 
incorrect. At the end of the test, if any of those lists contained entries, the 
test was failed, with the error message giving details of which files had 
failed and why.
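
As a rough illustration, here's a minimal sketch of that pattern (the file 
names, expected values and process() helper are all made up for the example; 
the real test had its own data and checks):

    import unittest

    # All names and data here are hypothetical - the real test had its own
    # input files, expected results and processing logic.
    INPUT_FILES = ["input_a.txt", "input_b.txt", "input_c.txt"]
    EXPECTED = {"input_a.txt": "A", "input_b.txt": "B", "input_c.txt": "C"}

    def process(path):
        # Stand-in for whatever operation the real test exercised.
        with open(path) as f:
            return f.read().strip()

    class MultiFileTest(unittest.TestCase):
        def test_all_input_files(self):
            failures = []
            for path in INPUT_FILES:
                result = process(path)
                if result != EXPECTED[path]:
                    failures.append("%s: expected %r, got %r"
                                    % (path, EXPECTED[path], result))
            # One pass/fail for the whole batch; the failure message carries
            # the details of which files were wrong and why.
            if failures:
                self.fail("Incorrect results for %d file(s):\n%s"
                          % (len(failures), "\n".join(failures)))

    if __name__ == "__main__":
        unittest.main()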

What parameterised tests could add that is truly unique is having each of those 
files counted and tracked as a separate test. Sometimes the 
single-test-with-internal-failure-recording approach will still make more 
sense, but tracking the tests individually will typically give a better 
indication of software health (e.g. in my example above, the test ran against a 
few dozen files, but the only way to tell whether one specific file was failing 
or all of them were was to go and look at the error message).
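
For comparison, here's a sketch of what per-file tracking can look like. It 
uses unittest's subTest() context manager, which only arrived later (Python 
3.4), purely to illustrate the reporting style being discussed - it isn't a 
proposal for this issue, and it reuses the same made-up data as the sketch 
above:

    import unittest

    # Same hypothetical file names and data as the sketch above.
    INPUT_FILES = ["input_a.txt", "input_b.txt", "input_c.txt"]
    EXPECTED = {"input_a.txt": "A", "input_b.txt": "B", "input_c.txt": "C"}

    def process(path):
        with open(path) as f:
            return f.read().strip()

    class PerFileTest(unittest.TestCase):
        def test_each_input_file(self):
            for path in INPUT_FILES:
                # Each file gets its own pass/fail entry in the test report,
                # so a single broken file stands out immediately.
                with self.subTest(path=path):
                    self.assertEqual(process(path), EXPECTED[path])

    if __name__ == "__main__":
        unittest.main()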

----------
nosy: +ncoghlan

_______________________________________
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue7897>
_______________________________________