Michael Foord <mich...@voidspace.org.uk> added the comment:

Test selection would require load-time parameterisation, although the current test selection mechanism works by importing names, which would probably *not* work for generated tests without a specific fix. The same is true for run-time parameterisation.
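To make the load-time option concrete, a minimal sketch (the parameterise helper and the generated names are invented for illustration, not a proposed API): the test methods are created on the class before the loader ever looks at it, so each parameter set gets its own name and ordinary name-based selection still works.

import unittest

def parameterise(cls, base_name, params, func):
    # Invented helper for illustration: attach one test method per
    # parameter set at load time, each with its own generated name.
    for i, args in enumerate(params):
        def test(self, args=args):
            func(self, *args)
        test.__name__ = '%s_%d' % (base_name, i)
        setattr(cls, test.__name__, test)

class TestAddition(unittest.TestCase):
    pass

def check_add(self, a, b, expected):
    self.assertEqual(a + b, expected)

# Generated at import (load) time, so the loader sees test_add_0 and
# test_add_1 as ordinary test methods.
parameterise(TestAddition, 'test_add', [(1, 2, 3), (2, 2, 4)], check_add)

if __name__ == '__main__':
    unittest.main()

Because the methods exist up front, each generated test runs on its own test case instance, with setUp and tearDown run individually.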
Well, how *exactly* you generate the names is an open question, and once you've solved that problem it should be no harder to show them clearly in the failure message with a "single test report" than with multiple test reports. The way to generate the names is to number each test *and* include the parameterised data as part of the name (which can lead to *huge* names if you're not careful, or to object reprs in the names, which aren't necessarily useful).

I have a decorator example that does run-time parameterisation, concatenating the failures into a single report but still keeping the generated name for each failure.

Another issue is whether parameterised tests share a TestCase instance or have separate ones. If you're doing load-time generation it makes sense to have a separate test case instance for each set of parameters, with setUp and tearDown run individually. This needs to be clearly documented, as the parameter generation would run against an uninitialised (not set up) test case.

Obviously reporting multiple test failures separately (run-time parameterisation) is a bit nicer, but run-time test generation doesn't play well with anything that works with test suites, because all of the generated tests are represented by a single test case instance in the suite. I'm not sure that's a solvable problem.
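For what it's worth, the run-time approach with a single concatenated report could look something like this. This is only a rough sketch, not the decorator example mentioned above; the parameterised name and the failure formatting are invented.

import unittest
from functools import wraps

def parameterised(params):
    # Invented decorator for illustration: run the test body once per
    # parameter set inside a single test method, keep a generated name
    # for each failure, and report them concatenated into one failure.
    def decorator(func):
        @wraps(func)
        def wrapper(self):
            failures = []
            for i, args in enumerate(params):
                name = '%s_%d_%r' % (func.__name__, i, args)
                try:
                    func(self, *args)
                except self.failureException as e:
                    failures.append('%s: %s' % (name, e))
            if failures:
                self.fail('\n'.join(failures))
        return wrapper
    return decorator

class TestAddition(unittest.TestCase):

    @parameterised([(1, 2, 3), (2, 2, 5)])   # the second parameter set fails
    def test_add(self, a, b, expected):
        self.assertEqual(a + b, expected)

if __name__ == '__main__':
    unittest.main()

Note that all the parameter sets share one test case instance here (setUp and tearDown run once for the lot), which is exactly the suite-representation problem described above.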