On 28/06/13 11:32, Jim Mooney wrote:
> On 27 June 2013 17:05, Dave Angel <da...@davea.name> wrote:
>
>> Nope. It is limited to the tests you write. And those tests are
>> necessarily fairly simple.
>
> Hmm, so it seems a lot of trouble for a few hardcoded tests I could
> run myself from the IDE interpreter window. Or better yet, I could
> code a loop with some random input, and some extreme cases, and work
> the function out myself. I guess there is no easy substitute for
> simply beating up your functions with a slew of garbage, since you're
> the one who understands them ;')


I'm afraid that you've missed the point, sorry :-)

Or actually, multiple points.


Firstly, doctests are *documentation first* and tests second. They show by example 
what the function does. It is a real pain to read five pages of documentation and 
still be left asking at the end, "yes, but what does this function actually *do*???" 
Examples can help make it clear.

The only thing worse than no examples is examples that are wrong.

doubler("3.1415")
6.283

Do you see the subtle bug? If you write a wrong example, you may never realise 
it is wrong, and your users will be confused and distressed. Sometimes those 
users are *you*: you wrote the software a long time ago, you don't remember 
what it is supposed to do, and you can't get it to work the way the examples show...

But if you do it as a doctest, you will find out that the example is wrong 
because the test will fail the first time you run it. Then you can fix the 
example while it is still fresh in your mind.
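For instance, here is how the corrected example might live in the function's 
docstring (a minimal sketch, assuming a hypothetical doubler that simply 
multiplies its argument by two):

def doubler(x):
    """Return x multiplied by two.

    >>> doubler(3.1415)
    6.283
    """
    return x * 2

if __name__ == "__main__":
    # Run the examples in this module's docstrings as tests.
    import doctest
    doctest.testmod()

Note that the argument is now the float 3.1415 rather than the string "3.1415", 
so the expected output really is what the function returns, and doctest will 
confirm it every time the tests run.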

Another point that you missed is that doctests are *automated*, which is much 
better than manual testing. Sure, you can always do your own testing at the 
interactive interpreter. For instance, suppose I want to test my statistics 
module. I can call up the interactive interpreter and spend an hour and a half 
running tests like this:

>>> import statistics
>>> statistics.mean([1, 2, 5, 4, 7, 1, 9, 2])
>>> statistics.stdev([4, 7, 0, 1, 2, 3, 3, 3, 7, 3])
>>> statistics.stdev([])

and so on. And then, tomorrow, when I've made some changes to the code, I have 
to do the whole thing again. Over and over again. Who the hell can be bothered? 
Certainly not me. That's why testing doesn't happen, or if it does happen, it 
consists only of the most obvious, simple and ineffective tests, the ones least 
likely to pick up bugs.

And of course I can't use my own module to test that my module is getting the 
right answers! Having to calculate the expected answers by hand (or at least 
using a different program) is a lot of work, and I don't want to have to do 
that work more than once. If I'm sensible, I'll write the answers down 
somewhere, together with the question of course. But then I'm likely to lose 
the paper, and even if I don't, I still have to re-type it into the interpreter.

But with *automated tests*, I only need to pre-calculate the answers once, put 
them into a suite of tests, and the computer can run them over and over again. 
I can start off with one test, and then over time add more tests, until I have 
five hundred. And the computer can run all five hundred of them in the time I 
could manually do one or two.

Instead of needing the discipline to spend an hour or three manually 
calculating results and comparing them to the module's results in an ad hoc 
manner, I only need the discipline to run a couple of simple commands such as:

python -m doctest statistics.py
python -m unittest statistics_tests


or equivalent.
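
For example, the whole of statistics_tests.py might start out as small as this 
(a sketch only: I'm assuming the statistics module above provides a mean 
function, and the expected answer has been pre-calculated by hand):

# statistics_tests.py -- a minimal, hypothetical starting point
import unittest
import statistics

class TestMean(unittest.TestCase):
    def test_mean(self):
        # Pre-calculated by hand: (1+2+5+4+7+1+9+2)/8 = 31/8 = 3.875
        self.assertEqual(statistics.mean([1, 2, 5, 4, 7, 1, 9, 2]), 3.875)

if __name__ == "__main__":
    unittest.main()

Each new pre-calculated answer becomes one more test method, and the computer 
re-checks all of them every time the suite runs.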

Any tests are better than no tests, and doctest is a good way to get started 
with a few low-impact, easy-to-use tests without the learning curve of 
unittest.




--
Steven