----- Original Message -----
> From:Andrei Alexandrescu <[email protected]>

> Not taking one femtosecond to believe that. The hard part is to get the unittest
> to fail. Once it fails, it is all trivial. Insert a writeln or use a debugger.

Please, please, let's *NOT* make this a standard practice.  If a test fails, I 
don't want to get a debugger out or start printf debugging *just to find the 
unit test*.  I want it to tell me where it failed, so I can focus on fixing the 
problem.

> 
> > 
> > Normal code can afford to be more complex - _especially_ if it's well unit
> > tested. But if you make complicated unit tests, then pretty soon you have a
> > major burden in making sure that your tests are correct rather than your code.
> 
> I am now having a major burden finding the code that does work in a sea of 
> chaff.

I don't sympathize with you; we have tools that make this easy.  If you want to 
find a function to read, use your editor's find feature.  Some editors even let 
you click on a function and jump to its definition.  This argument is a 
complete red herring.

> 
> > 
> > In the case above, it's testing 5 things, so it's 5 lines. It's simple and
> > therefore less error prone. Unit tests really should favor simplicity and
> > correctness over reduced line count or increased cleverness.
> 
> All code should do that. This is a false choice. Good code must go inside 
> unittest and outside unittest.

Good unit tests are independent and do not affect one another.  Jonathan is 
right; unit tests simply have a different goal.  You want easily isolated 
blocks of code that are simple to understand, so that when something goes wrong 
you can work through it in your head instead of first having to figure out how 
the unit test works.

This could still be true of loops, as long as the loop is simple and provable.  
Overly clever code does not belong in unit tests, and performance is not an 
issue (unless performance is what you are testing).  However, a unit test 
failure should immediately and unambiguously tell you which test failed.  The 
way D's unit tests are set up, looping does not do that: it tells you which 
loop failed, but not the exact test.  This can possibly be alleviated using 
assert's message feature.
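To illustrate what I mean, here is a rough sketch (daysInMonth and the test 
data are made up for the example): with a message attached to the assert, a 
failure inside the loop names the exact case rather than just the loop's line.

```d
unittest
{
    import std.format : format;

    // Hypothetical function, used only for illustration.
    static int daysInMonth(int month)
    {
        return [31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31][month - 1];
    }

    // Without the message, a failure here reports only the assert's line,
    // which is the same for every iteration.  The message pinpoints the case.
    foreach (month, expected; [1: 31, 4: 30, 9: 30])
        assert(daysInMonth(month) == expected,
               format("daysInMonth(%s) != %s", month, expected));
}
```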

> > The goal of unit
> > testing code is inherently different from normal code. _That_ is why unit testing
> > code is written differently from normal code.
> 
> Not buying it. Unittest code is not exempt from simple good coding principles 
> such as avoiding copy and paste.

With unit tests, you care about one thing -- what happened to make this unit 
test fail.  A one-line independent unit test is ideal: you need no context 
reading to figure out how the test is constructed.  It makes it easy to prove 
that the test itself is not flawed, and points you quickly to the actual 
problem.

That being said, if you have repetitive setup or teardown code, that can and 
should be abstracted.
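For example (a sketch only -- makeTestDate is a made-up helper name, and Date 
is std.datetime's Date): the shared setup lives in one place, and each 
unittest block stays a short, independent assertion.

```d
version (unittest)
{
    import std.datetime : Date;

    // Hypothetical setup helper shared by several tests.
    Date makeTestDate()
    {
        return Date(1999, 7, 6);
    }
}

unittest
{
    assert(makeTestDate().year == 1999);
}

unittest
{
    assert(makeTestDate().month == 7);
}
```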

*That* being said, I have not read through all the datetime unit tests, and I 
don't know how many could be omitted.  I typically write one unit test per 
functional situation.  What I mean is, if a function takes one code path, I 
write one test for that code path.  If a function has different code paths 
depending on parameters, I try to cover them all.  Random-data unit tests are 
not helpful.  It might be useful, as justification for the unit tests, to add 
comments on what particular aspect each line or block of lines is testing.  
This is a daunting task and will add even more to the file's size, but maybe we 
will find a slew of tests that are unnecessary.
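A minimal sketch of what I mean by one test per code path (clamp0 is a made-up 
function with exactly two branches, so two deterministic assertions cover it):

```d
// Hypothetical function with two code paths: negative and non-negative.
int clamp0(int x)
{
    return x < 0 ? 0 : x;
}

unittest
{
    assert(clamp0(-5) == 0); // covers the negative branch
    assert(clamp0(7) == 7);  // covers the non-negative branch
}
```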

-Steve



_______________________________________________
phobos mailing list
[email protected]
http://lists.puremagic.com/mailman/listinfo/phobos
