Hi again list.

>
> As you come up with the corner case, write a test for it and leave the 
> implementation
> for later. The hard part of coding is always defining your problem. Once it is
> defined (by a test) the solution is just a matter of tidy work.
>

Is it considered to be cheating if you make a test case which always
fails with a "TODO: Make a proper test case" message?
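
In unittest terms I mean something like this (the test name is made
up, of course):

    import unittest

    class TestCornerCases(unittest.TestCase):
        def test_handles_weird_filenames(self):
            # Placeholder: records the corner case without implementing it.
            self.fail("TODO: Make a proper test case")

    if __name__ == "__main__":
        unittest.main()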

While it is possible to describe every problem in documentation, it
can be very hard to write actual test code for it.

For example: sanity checks. Functions can contain checks for
situations that can never occur, or are very hard to reproduce. How do
you unit test those?

A few examples off the top of my head:

* Code which checks for hardware defects (the Pentium floating-point
bug, memory or disk errors, etc).

* Code that checks that a file is less than 1 TB in size (but you only
have 320 GB hard drives in your testing environment).

* Code which checks if the machine was rebooted over a year ago.

And so on. These I would test manually by temporarily changing
variables in the code, then changing them back. To unit test them you
would need to write mock functions and arrange for the tested code to
call them instead of the Python built-ins.
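
Something roughly like this for the 1 TB check, I suppose
(file_is_sane is made up; the point is monkeypatching os.path.getsize
and putting the real one back afterwards):

    import os
    import unittest

    def file_is_sane(path):
        # Hypothetical function under test: rejects files of 1 TB or more.
        return os.path.getsize(path) < 2**40

    class TestFileIsSane(unittest.TestCase):
        def test_rejects_huge_file(self):
            real_getsize = os.path.getsize
            os.path.getsize = lambda p: 2**41   # pretend the file is 2 TB
            try:
                self.assertFalse(file_is_sane("whatever.dat"))
            finally:
                os.path.getsize = real_getsize  # always restore the real one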

Also, there are places where mock objects can't be used that easily.

eg: A complicated function which needs to check the consistency of
its local variables at various points.

It *is* possible to unit test those consistency checks, but you may
have to do a lot of re-organization to enable unit testing.
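
e.g. you could pull the check out into its own little function and
test that directly; something like this made-up example, which is
exactly the kind of re-organization I mean:

    import unittest

    def _totals_are_consistent(subtotals, grand_total):
        # The consistency check pulled out of the big function purely so
        # it can be unit tested on its own.
        return abs(sum(subtotals) - grand_total) < 1e-9

    def process_order(prices, quantities):
        # Stand-in for the "complicated function": it checks its own
        # intermediate results before carrying on.
        subtotals = [p * q for p, q in zip(prices, quantities)]
        grand_total = sum(subtotals)
        assert _totals_are_consistent(subtotals, grand_total)
        return grand_total

    class TestConsistencyCheck(unittest.TestCase):
        def test_detects_mismatch(self):
            self.assertFalse(_totals_are_consistent([1.0, 2.0], 4.0))

        def test_accepts_match(self):
            self.assertTrue(_totals_are_consistent([1.0, 2.0], 3.0))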

In other cases it might not be appropriate to unit test, because it
makes your tests brittle (as mentioned by another poster).

eg: You call function MyFunc with argument X, and expect to get result Y.

MyFunc calls __private_func1 and __private_func2.

You can check in your unit test that MyFunc returns result Y, but you
shouldn't check __private_func1 and __private_func2 directly, even if
they really should be tested (maybe they sometimes have unwanted side
effects unrelated to MyFunc's return value).
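
So the only test I'd actually write is something like this (toy
implementations, obviously):

    import unittest

    def __private_helper1(n):
        return n * 2

    def __private_helper2(n):
        return n + 1

    def MyFunc(x):
        return __private_helper2(__private_helper1(x))

    class TestMyFunc(unittest.TestCase):
        def test_result_only(self):
            # Pin down only the public result (Y) for a given argument (X).
            # The private helpers are deliberately not tested, so they stay
            # free to change without breaking this test.
            self.assertEqual(MyFunc(3), 7)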

eg: Resource usage.

How do you unit test how much memory, CPU, temporary disk space, etc a
function uses?
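
The best I can come up with is something crude like this (build_table
is made up), and it's exactly the kind of brittle, machine-dependent
test I'd rather not have:

    import time
    import unittest

    def build_table(n):
        # Stand-in for the function whose resource usage worries me.
        return [i * i for i in range(n)]

    class TestResourceUsage(unittest.TestCase):
        def test_runs_fast_enough(self):
            # Crude: fails if the call takes more than a second of
            # wall-clock time on whatever machine runs the tests.
            start = time.time()
            build_table(100000)
            self.assertTrue(time.time() - start < 1.0)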

eg: Platforms for which unit tests are hard to setup/run.

 - embedded programming. You would need to load your test harness into
the device, and watch LED patterns or feedback over serial. Assuming
it has enough memory and resources :-)
 - mobile devices (probably the same issues as above)

eg: race conditions in multithreaded code. You can't unit test
effectively for these.
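
About the best you can do is a stress test along these lines (the
Counter class is made up and deliberately broken), and it can still
happily pass with the bug in place:

    import threading
    import unittest

    class Counter(object):
        # Deliberately broken: the read-modify-write is not locked.
        def __init__(self):
            self.value = 0

        def increment(self):
            current = self.value
            self.value = current + 1

    def hammer(counter, n):
        for _ in range(n):
            counter.increment()

    class TestCounterRace(unittest.TestCase):
        def test_parallel_increments(self):
            counter = Counter()
            threads = [threading.Thread(target=hammer, args=(counter, 10000))
                       for _ in range(4)]
            for t in threads:
                t.start()
            for t in threads:
                t.join()
            # This can pass even though the code is broken: the interleaving
            # that loses updates is not guaranteed to show up in a given run.
            self.assertEqual(counter.value, 40000)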

And so on.

>
> Agreed. There is no good way of reusing your prototype code in TDD. You end
> up having to throw your prototype away in order to have a proper tested
> implementation in the end. Takes more time up front, but less time over the
> lifecycle of the program you are building.
>

Sounds like you are talking about cases where you have to throw away
the prototype *because* you couldn't unit test it properly? (but it
was otherwise functioning perfectly well).

>>Problem 3: Slows down development in general
>>
>>Having to write tests for all code takes time. Instead of eg: 10 hours
>>coding and say 1/2 an hour manual testing, you spend eg: 2-3 hours
>>writing all the tests, and 10 on the code.
>
> This is incorrect. It speeds up development in general. The debugging phase of
> development becomes much shorter because the bugs are fewer and the ones you
> have are much shallower. There are so many tests that reduce the scope in which
> you have to search for the bug that it usually becomes trivial to find.

Depends on the type of bug. If it's a bug which breaks the unit tests,
then it can be found quickly. Unit tests won't help with bugs they
don't explicitly cover, eg: off-by-one errors, memory leaks, CPU load,
side effects (outside what the unit tests check), and so on.

That's another reason why I don't think that unit tests are a silver
bullet. You can have code that's totally wrong but still passes the
tests (even if they're very detailed), eg: hardcoding the return
values expected by the tests and returning garbage the rest of the
time.

But once you track down problems like the above you can write more
unit tests to catch those exact bugs in the future. This is one case
where I do favour unit tests.
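
i.e. regression tests that pin down the exact input that broke,
something like this (parse_port is a made-up example):

    import unittest

    def parse_port(text):
        # Hypothetical: imagine this used to blow up on surrounding
        # whitespace until a user reported it.
        return int(text.strip())

    class TestParsePortRegressions(unittest.TestCase):
        def test_whitespace_bug_stays_fixed(self):
            # Pins the exact input that broke in the field, so the bug
            # can't quietly come back.
            self.assertEqual(parse_port(" 8080\n"), 8080)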

I guess you could compare unit tests to blacklists or antivirus
software. All of them only catch cases that have been explicitly
coded into them.

>
> I have direct experience from this, getting my company to change to TDD about
> 10 months ago. Productivity has improved enormously. I'd say that we have cut
> between 25 and 50% in development time.
>

Going by your figures and other cases I've read on the web, there are
definitely cases where TDD is beneficial and can save a lot of time.
What I'm not sure of (probably inexperience on my part) is when you
should and shouldn't use TDD, and to what extent.

I'm sure that factors like these have to come into play when deciding
if and how to use TDD in a given project:

- size and age of the project (new, small code is easier to understand
than large, old)
- complexity & modularity of project
- programming language used (dynamic languages need unit tests more
than compiled ones)
- importance of project (consequences of bugs)
- who the project is for (yourself, inhouse, or for client)
- how easy it is to fix problems in deployed software
- number of developers
- skill & discipline of developers
- development style (waterfall/incremental, etc)
- consistency checks already built into the software

That last one (internal consistency checks) is a big one for me. If
software has a lot of internal consistency checks (contracts), then I
feel that the need for unit tests is a lot less.
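
To be concrete about what I mean by built-in checks (a made-up
example):

    def transfer(accounts, src, dst, amount):
        # Made-up example of the built-in checks (contracts) I mean: the
        # function refuses to start, or to finish, in an inconsistent
        # state, whether or not anyone ever writes a unit test for it.
        assert amount > 0, "transfer amount must be positive"
        assert accounts[src] >= amount, "insufficient funds"
        total_before = sum(accounts.values())

        accounts[src] -= amount
        accounts[dst] += amount

        assert sum(accounts.values()) == total_before, "money created or destroyed"
        return accounts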

>>Problem 4: Can make refactoring difficult.
>>
>>If you have very complete & detailed tests for your project, but one
>>day you need to change the logic fundamentally (maybe change it from
>>single-threaded to multi-threaded, or from running on 1 server to
>>distributed), then you need to do a large amount of test refactoring
>>also. The more tests you have (usually a good thing), the longer it
>>will take to update all the tests, write new ones, etc. It's worse if
>>you have to do all this first before you can start updating the code.
>
> No, this is a total misunderstanding. It makes refactoring much easier.
> It takes a bit of time to refactor the affected tests for the module, but you
> gain so much by having tests that show that your refactoring is not breaking
> code that should be unaffected that it saves the extra time spent many times
> over.
>

Makes refactoring easier how?

I assume you mean the unchanged tests, which check functionality that
should be the same before and after your refactoring. All the other
tests need to be replaced. I agree that the unchanged tests can be
helpful. It's the extra work of unit test maintenance I have a problem
with.

In an extreme case you might refactor the same code multiple times,
requiring the test cases to be refactored each time as well (more
test cases = more work each time). To me it feels like the test cases
can be a lot of 'dead weight', slowing down development.

Even if you want unit tests for the final version which goes to the
customer, you still have to spend time rewriting unit tests for all
the refactored versions in between. To me those intermediate unit
test versions sound like a complete waste of time. Please correct me
if this is also mistaken :-)

> Introducing bugs because you missed some aspect in a refactoring is one of the
> most common problems in non-TDD code and it is a really nasty quality
> concern.

I agree here. But I feel that you can get the same quality through
good QA and human testing (ie: the developer does a lot of testing,
then hands it over to testers, then it goes to the customer), which
should be done whether or not you have unit tests. I feel that unit
tests are mainly good for catching rare cases which might not be
tested by a human.

In other words, full-on TDD only becomes really useful when projects
grow large (and complicated) and can't be fully tested by humans?
Until that point, only using unit tests to catch regressions seems to
be more than enough to ensure good quality. (Again, correct me if this
is mistaken).

>>
>>Problem 5: Tests are more important than code
>>
>>You need to justify the extra time spent on writing test code. Tests
>>are nice, and good to have for code maintainability, but they aren't
>>an essential feature (unless you're writing critical software for life
>>support, etc). Clients, deadlines, etc require actual software, not
>>tests for software (that couldn't be completed on time because you
>>spent too much time writing tests first ;-)).
>>
> The tests are as important as the code. As a customer, I don't think I'd buy
> software today unless I know that it has been built using TDD. Certainly I am
> ahead of the curve in this, but it won't be long before this will be required
> by skilled organisations buying software and sooner or later the rest of the
> world will follow.

How much software is written with TDD? Do companies generally
advertise this? I get the impression that most high-quality open
source software (Apache, the Linux kernel, GNU, Python, etc) is
developed in a non-TDD way. What they do have is intelligent
developers, coding conventions, and a very good testing & QA process.
Where they do have unit tests it's usually for regressions. Why don't
they use TDD if it would make such a big difference? Are you going to
stop using open source software (like Python) which isn't written
with TDD?

>
> Getting into a TDD mindset is hard work, but those who succeed produce better
> software with less effort.
>

Thanks for your informative reply. I've learned a bit from this thread
and will definitely look more into TDD :-)

David.