On Mon, Jul 11, 2005 at 13:17:48 -0700, Michael G Schwern wrote:
> On Mon, Jul 11, 2005 at 07:38:57PM +0300, Yuval Kogman wrote:
> 
> I'll make the same "no broken windows" argument here that I do about
> warnings and tests: eliminate all warnings, even the dubious ones.
> Ensure all tests pass, eliminating all false negatives.  Do not leave any
> "expected warnings" or "expected failures", because this erodes
> confidence in the test suite.  Expected warnings and test failures stop
> ringing alarm bells.  One "expected" warning leads to two.  Then four.
> Then finally there are too many to remember which are expected and which
> are not, and you ignore them altogether.

I think that's the main difference between Devel::Cover and test
runs for me - I run Devel::Cover carefully, just a few times before
each release, repeating until I decide it's enough (the number of
runs is roughly $lines_of_code/50, I would guess).
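For context, a typical way to run Devel::Cover over a test suite looks
something like this (a sketch, not necessarily how I invoke it; the
`cover` script ships with Devel::Cover, and the exact prove flags depend
on your layout):

```shell
# Clear stale coverage data from previous runs
cover -delete

# Run the test suite with Devel::Cover loaded into each test process
HARNESS_PERL_SWITCHES=-MDevel::Cover prove -lr t/

# Summarize the collected data; "cover -report html" gives a browsable report
cover
```

Devel::Cover also supports `cover -test`, which runs the distribution's
test suite and reports in one step.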

During normal development I don't bother with coverage runs. They
take a long time and I never get to 100% anyway, so I don't look for
constant feedback.

Devel::Cover is, for me, a think-hard process: I try to figure out
whether there is duplicate logic, or completely untouched portions,
and I normally resolve that not by adding tests, but by removing
code. Only the trivial cases are resolved quickly.

Perfecting coverage reports is, IMHO, too much pushing a project to
where it could be, rather than pulling it to where it ought to be:
deciding on features, and writing code and tests based on those
decisions.

In short, keeping a coverage report free of red is too much effort
for me to maintain continually. It doesn't help me as much as simply
using the code does, and if I have no use for the code, then why am
I working on it in the first place?

-- 
 ()  Yuval Kogman <[EMAIL PROTECTED]> 0xEBD27418  perl hacker &
 /\  kung foo master: /me climbs a brick wall with his fingers: neeyah!
