Re: unittests with different parameters

2010-11-24 Thread Ulrich Eckhardt
A short update on what I've settled on for generating test functions for
various input data:

# test case with a common test helper
class MyTest(unittest.TestCase):
    def _test_invert_flags(self, input, flags, expected):
        res = do_invert(input, flags)
        self.assertEqual(res, expected)

# test definitions for the various invert flags
tests = [((10, 20), INVERT_NONE, (10, 20)),
         ((10, 20), INVERT_X, (-10, 20)),
         ((10, 20), INVERT_Y, (10, -20))]

# add one test per data row to the test case class; the default
# arguments bind the current loop values, otherwise every
# generated test would see only the last iteration's data
for input, flags, expected in tests:
    def test(self, input=input, flags=flags, expected=expected):
        self._test_invert_flags(input, flags, expected)
    test.__doc__ = "testing invert flags %s" % flags
    setattr(MyTest, "test_invert_flags_%s" % flags, test)


Yes, the names of the test functions would clash if I tested the same flags
twice; in the real code that doesn't happen (enumerate is my friend!).

Thanks all!

Uli

-- 
Domino Laser GmbH
Geschäftsführer: Thorsten Föcking, Amtsgericht Hamburg HR B62 932

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: unittests with different parameters

2010-11-24 Thread Jack Keegan
Apologies if this is a bit off the wall but I've only just started getting
into unit testing (in Python) this morning. Would generators help you in any
way? You might be able to have a generator which would yield an attribute
set combination each time it is called.
I'm not sure whether it would still stop at the first failure, but I was
reading this morning that the py.test framework utilises generators and is
apparently compatible with the Python unittest module.
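
The generator idea, roughly sketched: py.test-style generative tests, here
driven by a plain loop so the sketch stays stdlib-only. The do_invert stub
and the flag values are stand-ins, not from the original posts:

```python
INVERT_NONE, INVERT_X, INVERT_Y = 0, 1, 2  # stand-in flag values

def do_invert(point, flags):
    # stub for the real function under test
    x, y = point
    return (-x if flags == INVERT_X else x,
            -y if flags == INVERT_Y else y)

def invert_cases():
    # each yield is one independent check
    yield (10, 20), INVERT_NONE, (10, 20)
    yield (10, 20), INVERT_X, (-10, 20)
    yield (10, 20), INVERT_Y, (10, -20)

failures = []
for point, flags, expected in invert_cases():
    got = do_invert(point, flags)
    if got != expected:
        failures.append((flags, got, expected))

# unlike a single assert inside the loop, this reports every
# failing case, not just the first
assert not failures, failures
```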

I could be wrong though...


Re: unittests with different parameters

2010-11-23 Thread Jonathan Hartley
On Nov 22, 11:38 am, Ulrich Eckhardt ulrich.eckha...@dominolaser.com
wrote:
 Hi!

 I'm writing tests and I'm wondering how to achieve a few things most
 elegantly with Python's unittest module.

 Let's say I have two flags invert X and invert Y. Now, for testing these, I
 would write one test for each combination. What I have in the test case is
 something like this:

   def test_invert_flags(self):
       """test flags to invert coordinates"""
       tests = [((10, 20), INVERT_NONE, (10, 20)),
                ((10, 20), INVERT_X, (-10, 20)),
                ((10, 20), INVERT_Y, (10, -20))]
       for input, flags, expected in tests:
           res = do_invert(input, flags)
           self.assertEqual(res, expected,
                            "%s caused wrong results" % (flags,))

 So, what I do is test the function 'do_invert' for different input
 combinations and verify the result. The ugly thing is that this will abort
 the whole test if one of the tests in the loop fails. So, my question is
 how do I avoid this?

 I know that I could write a common test function instead:

   def _test_invert_flags(self, input, flags, expected):
       res = do_invert(input, flags)
       self.assertEqual(res, expected)

   def test_invert_flags_non(self):
       """test not inverting coordinates"""
       self._test_invert_flags((10, 20), INVERT_NONE, (10, 20))

   def test_invert_flags_x(self):
       """test inverting X coordinates"""
       self._test_invert_flags((10, 20), INVERT_X, (-10, 20))

   def test_invert_flags_y(self):
       """test inverting Y coordinates"""
       self._test_invert_flags((10, 20), INVERT_Y, (10, -20))

 What I don't like here is that this is unnecessarily verbose and that it
 basically repeats information. Also, I'd rather construct the error message
 from the data instead of maintaining it in different places, because
 manually keeping those in sync is another, error-prone burden.

 Any suggestions?

 Uli



The following is a bit ghastly, I'm not sure I'd recommend it, but if
you are determined, you could try dynamically adding test methods to
the test class. The following is untested - I suspect I have made a
schoolboy error in attempting to make methods out of functions - but
something like it might work:


class MyTestClass(unittest.TestCase):
    pass

testdata = [
    (INPUTS, EXPECTED),
    (INPUTS, EXPECTED),
    (INPUTS, EXPECTED),
]

for index, (input, expected) in enumerate(testdata):
    # the following sets an attribute on MyTestClass;
    # the attributes are named 'test_0', 'test_1', etc., and
    # each value is a test method that performs the assert.
    # The default arguments bind the current loop values;
    # otherwise every lambda would see only the last row.
    setattr(
        MyTestClass,
        'test_%d' % (index,),
        lambda s, input=input, expected=expected:
            s.assertEqual(METHOD_UNDER_TEST(*input), expected)
    )


unittests with different parameters

2010-11-22 Thread Ulrich Eckhardt
Hi!

I'm writing tests and I'm wondering how to achieve a few things most
elegantly with Python's unittest module.

Let's say I have two flags invert X and invert Y. Now, for testing these, I
would write one test for each combination. What I have in the test case is
something like this:

  def test_invert_flags(self):
      """test flags to invert coordinates"""
      tests = [((10, 20), INVERT_NONE, (10, 20)),
               ((10, 20), INVERT_X, (-10, 20)),
               ((10, 20), INVERT_Y, (10, -20))]
      for input, flags, expected in tests:
          res = do_invert(input, flags)
          self.assertEqual(res, expected,
                           "%s caused wrong results" % (flags,))

So, what I do is test the function 'do_invert' for different input
combinations and verify the result. The ugly thing is that this will abort
the whole test if one of the tests in the loop fails. So, my question is
how do I avoid this?

I know that I could write a common test function instead:

  def _test_invert_flags(self, input, flags, expected):
      res = do_invert(input, flags)
      self.assertEqual(res, expected)

  def test_invert_flags_non(self):
      """test not inverting coordinates"""
      self._test_invert_flags((10, 20), INVERT_NONE, (10, 20))

  def test_invert_flags_x(self):
      """test inverting X coordinates"""
      self._test_invert_flags((10, 20), INVERT_X, (-10, 20))

  def test_invert_flags_y(self):
      """test inverting Y coordinates"""
      self._test_invert_flags((10, 20), INVERT_Y, (10, -20))

What I don't like here is that this is unnecessarily verbose and that it
basically repeats information. Also, I'd rather construct the error message
from the data instead of maintaining it in different places, because
manually keeping those in sync is another, error-prone burden.


Any suggestions?

Uli



Re: unittests with different parameters

2010-11-22 Thread Richard Thomas
On Nov 22, 11:38 am, Ulrich Eckhardt ulrich.eckha...@dominolaser.com
wrote:
 Hi!

 I'm writing tests and I'm wondering how to achieve a few things most
 elegantly with Python's unittest module.

 Let's say I have two flags invert X and invert Y. Now, for testing these, I
 would write one test for each combination. What I have in the test case is
 something like this:

   def test_invert_flags(self):
       """test flags to invert coordinates"""
       tests = [((10, 20), INVERT_NONE, (10, 20)),
                ((10, 20), INVERT_X, (-10, 20)),
                ((10, 20), INVERT_Y, (10, -20))]
       for input, flags, expected in tests:
           res = do_invert(input, flags)
           self.assertEqual(res, expected,
                            "%s caused wrong results" % (flags,))

 So, what I do is test the function 'do_invert' for different input
 combinations and verify the result. The ugly thing is that this will abort
 the whole test if one of the tests in the loop fails. So, my question is
 how do I avoid this?

 I know that I could write a common test function instead:

   def _test_invert_flags(self, input, flags, expected):
       res = do_invert(input, flags)
       self.assertEqual(res, expected)

   def test_invert_flags_non(self):
       """test not inverting coordinates"""
       self._test_invert_flags((10, 20), INVERT_NONE, (10, 20))

   def test_invert_flags_x(self):
       """test inverting X coordinates"""
       self._test_invert_flags((10, 20), INVERT_X, (-10, 20))

   def test_invert_flags_y(self):
       """test inverting Y coordinates"""
       self._test_invert_flags((10, 20), INVERT_Y, (10, -20))

 What I don't like here is that this is unnecessarily verbose and that it
 basically repeats information. Also, I'd rather construct the error message
 from the data instead of maintaining it in different places, because
 manually keeping those in sync is another, error-prone burden.

 Any suggestions?

 Uli


You could have a parameter to the test method and some custom
TestLoader that knows what to do with it. See 
http://docs.python.org/library/unittest.html.
I would venture that unit tests are verbose by their very nature; they
are 100% redundant. The usual argument against unnecessary redundancy,
that of ease of maintenance, really doesn't apply to unit tests.
Anyway, good luck with your efforts.

Chard.


Re: unittests with different parameters

2010-11-22 Thread Roy Smith
In article q91qr7-i9j@satorlaser.homedns.org,
 Ulrich Eckhardt ulrich.eckha...@dominolaser.com wrote:

   def test_invert_flags(self):
       """test flags to invert coordinates"""
       tests = [((10, 20), INVERT_NONE, (10, 20)),
                ((10, 20), INVERT_X, (-10, 20)),
                ((10, 20), INVERT_Y, (10, -20))]
       for input, flags, expected in tests:
           res = do_invert(input, flags)
           self.assertEqual(res, expected,
                            "%s caused wrong results" % (flags,))
 
 So, what I do is test the function 'do_invert' for different input
 combinations and verify the result. The ugly thing is that this will abort
 the whole test if one of the tests in the loop fails. So, my question is
 how do I avoid this?

Writing one test method per parameter combination, as you suggested, is 
a reasonable approach, especially if the number of combinations is 
reasonably small.  Another might be to make your loop:

   failCount = 0
   for input, flags, expected in tests:
       res = do_invert(input, flags)
       if res != expected:
           print "%s caused wrong results" % (flags,)
           failCount += 1
   self.assertEqual(failCount, 0, "%d of them failed" % failCount)

Yet another possibility is to leave it the way you originally wrote it 
and not worry about the fact that the loop aborts on the first failure.  
Let it fail, fix it, then re-run the test to find the next failure.  
Perhaps not as efficient as finding them all at once, but you're going 
to fix them one at a time anyway, so what does it matter?  It may also 
turn out that all the failures are due to a single bug, so fixing one 
fixes them all.


Re: unittests with different parameters

2010-11-22 Thread Ulrich Eckhardt
Roy Smith wrote:
 Writing one test method per parameter combination, as you suggested, is
 a reasonable approach, especially if the number of combinations is
 reasonably small.

The number of parameters and thus combinations is unfortunately rather
large. Also, sometimes the data is not static but computed in a loop. There
are a few optimised computations where I compute the expected result with
the slow but simple version; in those cases I want to check a whole range
of inputs using a loop.

I'm wondering: classes aren't as static as I'm used to from C++, so
creating the test functions dynamically with a loop outside the class
declaration should be another possibility...

 Yet another possibility is to leave it the way you originally wrote it
 and not worry about the fact that the loop aborts on the first failure.
 Let it fail, fix it, then re-run the test to find the next failure.
 Perhaps not as efficient as finding them all at once, but you're going
 to fix them one at a time anyway, so what does it matter?

Imagine all tests that use INVERT_X fail, all others pass. What would your
educated guess be where the code is wrong? ;)

Thanks Roy!

Uli



Re: unittests with different parameters

2010-11-22 Thread Ulrich Eckhardt
Richard Thomas wrote:
[batch-programming different unit tests] 
 You could have a parameter to the test method and some custom
 TestLoader that knows what to do with it.

Interesting, thanks for this suggestion, I'll look into it!

Uli



Re: unittests with different parameters

2010-11-22 Thread Roy Smith
In article ddbqr7-5rj@satorlaser.homedns.org,
 Ulrich Eckhardt ulrich.eckha...@dominolaser.com wrote:

  Yet another possibility is to leave it the way you originally wrote it
  and not worry about the fact that the loop aborts on the first failure.
  Let it fail, fix it, then re-run the test to find the next failure.
  Perhaps not as efficient as finding them all at once, but you're going
  to fix them one at a time anyway, so what does it matter?
 
 Imagine all tests that use INVERT_X fail, all others pass. What would your
 educated guess be where the code is wrong? ;)

Well, let me leave you with one last thought.  There's really two kinds 
of tests -- acceptance tests, and diagnostic tests.

I tend to write acceptance tests first.  The idea is that if all the 
tests pass, I know my code works.  When some test fails, that's when I 
start digging deeper and writing diagnostic tests, to help me figure out 
what went wrong.

The worst test is a test which is never written because it's too hard to 
write.  If it's easy to write a bunch of tests which verify correct 
operation but don't give a lot of clues about what went wrong, it might 
be worth doing that first and seeing what happens.  If some of the tests 
fail, then invest the time to write more detailed tests which give you 
more information about each failure.


Re: unittests with different parameters

2010-11-22 Thread Ian Kelly

On 11/22/2010 4:38 AM, Ulrich Eckhardt wrote:

Let's say I have two flags invert X and invert Y. Now, for testing these, I
would write one test for each combination. What I have in the test case is
something like this:

   def test_invert_flags(self):
       """test flags to invert coordinates"""
       tests = [((10, 20), INVERT_NONE, (10, 20)),
                ((10, 20), INVERT_X, (-10, 20)),
                ((10, 20), INVERT_Y, (10, -20))]
       for input, flags, expected in tests:
           res = do_invert(input, flags)
           self.assertEqual(res, expected,
                            "%s caused wrong results" % (flags,))

So, what I do is test the function 'do_invert' for different input
combinations and verify the result. The ugly thing is that this will abort
the whole test if one of the tests in the loop fails. So, my question is
how do I avoid this?

I know that I could write a common test function instead:

   def _test_invert_flags(self, input, flags, expected):
       res = do_invert(input, flags)
       self.assertEqual(res, expected)

   def test_invert_flags_non(self):
       """test not inverting coordinates"""
       self._test_invert_flags((10, 20), INVERT_NONE, (10, 20))

   def test_invert_flags_x(self):
       """test inverting X coordinates"""
       self._test_invert_flags((10, 20), INVERT_X, (-10, 20))

   def test_invert_flags_y(self):
       """test inverting Y coordinates"""
       self._test_invert_flags((10, 20), INVERT_Y, (10, -20))

What I don't like here is that this is unnecessarily verbose and that it
basically repeats information.


The above code looks perfectly fine to me for testing.  I think the 
question you should ask yourself is whether the different combinations 
you are testing represent tests of distinct behaviors, or tests of the 
same behavior on a variety of data.  If the former case, as in the 
sample code you posted, then these should probably have separate tests 
anyway, so that you can easily see that both INVERT_X and INVERT_BOTH 
are failing, but INVERT_Y is not, which may be valuable diagnostic data.


On the other hand, if your test is trying the INVERT_X behavior on nine 
different points, you probably don't need or want to see every 
individual point that fails.  It's enough to know that INVERT_X is 
failing and to have a sample point where it fails.  In that case I would 
say just run them in a loop and don't worry that it might exit early.



Also, I'd rather construct the error message
from the data instead of maintaining it in different places, because
manually keeping those in sync is another, error-prone burden.


I'm not sure I follow the problem you're describing.  If the factored 
out workhorse function receives the data to test, what prevents it from 
constructing an error message from that data?


Cheers,
Ian



Re: unittests with different parameters

2010-11-22 Thread Ulrich Eckhardt
Ian Kelly wrote:
 On 11/22/2010 4:38 AM, Ulrich Eckhardt wrote:
 Also, I'd rather construct the error message from the data
 instead of maintaining it in different places, because 
 manually keeping those in sync is another, error-prone burden.
 
 I'm not sure I follow the problem you're describing.  If the factored
 out workhorse function receives the data to test, what prevents it from
 constructing an error message from that data?

Sorry, that was an imprecise description of what I want. If you define a test function
and run the tests with -v, the framework prints the first line of the
docstring of that function followed by okay/fail/error, which is much
friendlier to the reader than the exception dump afterwards. Using multiple
very similar functions requires equally similar docstrings that repeat
themselves. I'd prefer creating these from the input data.

Thanks for your suggestion, Ian!

Uli



Re: unittests with different parameters

2010-11-22 Thread Ben Finney
Ulrich Eckhardt ulrich.eckha...@dominolaser.com writes:

 Let's say I have two flags invert X and invert Y. Now, for testing these, I
 would write one test for each combination. What I have in the test case is
 something like this:

   def test_invert_flags(self):
   test flags to invert coordinates
   tests = [((10, 20), INVERT_NONE, (10, 20)),
((10, 20), INVERT_X, (-10, 20)),
((10, 20), INVERT_Y, (10, -20))]
   for input, flags, expected in tests:
   res = do_invert(input, flags)
   self.assertEqual(res, expected,
%s caused wrong results % (flags,))

The ‘testscenarios’ library is designed for just this purpose
URL:http://pypi.python.org/pypi/testscenarios/. It takes a sequence of
scenarios, each of which is a tuple just like in your example, and
causes a separate test run and report for each one.

-- 
 \   “If we listen only to those who are like us, we will squander |
  `\   the great opportunity before us: To live together peacefully in |
_o__)a world of unresolved differences.” —David Weinberger |
Ben Finney