On 11/22/2010 4:38 AM, Ulrich Eckhardt wrote:
Let's say I have two flags, invert X and invert Y. Now, for testing these, I
would write one test for each combination. What I have in the test case is
something like this:

   def test_invert_flags(self):
       """test flags to invert coordinates"""
       tests = [((10, 20), INVERT_NONE, (10, 20)),
                ((10, 20), INVERT_X, (-10, 20)),
                ((10, 20), INVERT_Y, (10, -20))]
       for input, flags, expected in tests:
           res = do_invert(input, flags)
           self.assertEqual(res, expected,
                            "%s caused wrong results" % (flags,))

So, what I do is test the function 'do_invert' for different input
combinations and verify the result. The ugly thing is that this will abort
the whole test if one of the checks in the loop fails. So, my question is:
how do I avoid this?

I know that I could write a common test function instead:

   def _test_invert_flags(self, input, flags, expected):
       res = do_invert(input, flags)
       self.assertEqual(res, expected)

   def test_invert_flags_none(self):
       """test not inverting coordinates"""
       self._test_invert_flags((10, 20), INVERT_NONE, (10, 20))

   def test_invert_flags_x(self):
       """test inverting X coordinates"""
       self._test_invert_flags((10, 20), INVERT_X, (-10, 20))

   def test_invert_flags_y(self):
       """test inverting Y coordinates"""
       self._test_invert_flags((10, 20), INVERT_Y, (10, -20))

What I don't like here is that this is unnecessarily verbose and that it
basically repeats information.

The above code looks perfectly fine to me for testing. I think the question you should ask yourself is whether the different combinations you are testing represent tests of distinct behaviors, or tests of the same behavior on a variety of data. In the former case, as in the sample code you posted, these should probably be separate tests anyway, so that you can easily see that, say, INVERT_X and INVERT_BOTH are failing but INVERT_Y is not, which may be valuable diagnostic data.
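
If you do want one visible test per combination without hand-writing each
method, one option is to generate the test methods from the data table.
This is only a sketch, reusing the do_invert and INVERT_* names from your
post (INVERT_BOTH omitted for brevity); the setattr loop is just one common
way to do it:

   import unittest

   def _make_test(input, flags, expected):
       # Bind one data row into a fresh test method via a closure.
       def test(self):
           res = do_invert(input, flags)
           self.assertEqual(res, expected,
                            "%s caused wrong results" % (flags,))
       return test

   class InvertFlagsTest(unittest.TestCase):
       pass

   # One generated method per row; each one passes or fails on its own.
   for name, input, flags, expected in [
           ("none", (10, 20), INVERT_NONE, (10, 20)),
           ("x",    (10, 20), INVERT_X,    (-10, 20)),
           ("y",    (10, 20), INVERT_Y,    (10, -20))]:
       setattr(InvertFlagsTest, "test_invert_flags_%s" % name,
               _make_test(input, flags, expected))

The factory function matters: it captures each row's values at call time,
so all the generated methods don't end up sharing the last row.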

On the other hand, if your test is trying the INVERT_X behavior on nine different points, you probably don't need or want to see every individual point that fails. It's enough to know that INVERT_X is failing and to have a sample point where it fails. In that case I would say just run them in a loop and don't worry that it might exit early.
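
If you nevertheless want the loop to report every failing combination
instead of stopping at the first, you can collect the mismatches and assert
once at the end. Again just a sketch, assuming the names from your post:

   def test_invert_flags(self):
       """test flags to invert coordinates"""
       tests = [((10, 20), INVERT_NONE, (10, 20)),
                ((10, 20), INVERT_X, (-10, 20)),
                ((10, 20), INVERT_Y, (10, -20))]
       failures = []
       for input, flags, expected in tests:
           res = do_invert(input, flags)
           if res != expected:
               # Record the mismatch but keep checking the rest.
               failures.append("%s: got %s, expected %s"
                               % (flags, res, expected))
       self.assertFalse(failures, "\n".join(failures))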

Also, I'd rather construct the error message from the data instead of
maintaining it in different places, because manually keeping those in sync
is another error-prone burden.

I'm not sure I follow the problem you're describing. If the factored-out workhorse function receives the data to test, what prevents it from constructing an error message from that data?
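
For instance, a sketch of such a workhorse that builds its message from the
arguments it was given, so there is nothing extra to keep in sync by hand:

   def _check_invert(self, input, flags, expected):
       res = do_invert(input, flags)
       # The message is derived from the data itself.
       self.assertEqual(res, expected,
                        "do_invert(%r, %r) returned %r, expected %r"
                        % (input, flags, res, expected))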

Cheers,
Ian
