There has been a bit of discussion about how to include test cases in a test suite for behavior that _should_ work but doesn't yet. One of the rules has been that the test suite should be runnable and silent at every checkin. Recently there was a checkin of a test that _should_ pass but doesn't. The discussion turned to ways of marking such tests (because the effort of creating a test should be captured) without disturbing the development flow.
The following code demonstrates a decorator that might be used to aid this process. Any comments, additions, deletions?

    from unittest import TestCase

    class BrokenTest(TestCase.failureException):
        def __repr__(self):
            return '%s: %s: %s works now' % (
                (self.__class__.__name__,) + self.args)

    def broken_test_XXX(reason, *exceptions):
        '''Indicates unsuccessful test cases that should succeed.

        If an exception kills the test, add exception type(s) in args.
        '''
        def wrapper(test_method):
            def replacement(*args, **kwargs):
                try:
                    test_method(*args, **kwargs)
                except exceptions + (TestCase.failureException,):
                    pass
                else:
                    raise BrokenTest(test_method.__name__, reason)
            replacement.__doc__ = test_method.__doc__
            replacement.__name__ = 'XXX_' + test_method.__name__
            replacement.todo = reason
            return replacement
        return wrapper

You'd use it like:

    class MyTestCase(unittest.TestCase):
        def test_one(self):
            ...
        def test_two(self):
            ...
        @broken_test_XXX("The thrumble doesn't yet gsnort")
        def test_three(self):
            ...
        @broken_test_XXX("Using list as dictionary", TypeError)
        def test_four(self):
            ...

It would also point out when the test started succeeding.

--Scott David Daniels
[EMAIL PROTECTED]
--
http://mail.python.org/mailman/listinfo/python-list