In article <[EMAIL PROTECTED]>, Chris
Dolan <[EMAIL PROTECTED]> wrote:

> On Sep 12, 2006, at 9:24 AM, Salve J Nilsen wrote:
> 
> >> Any metric that catches bad things, particularly bad technical  
> >> things, is going to be just fine.
> >> Metrics that try to push "good" behavior are fraught with trouble,  
> >> because they start pushing people in odd directions.

> > Do you have an example of this? (Any pointer would be wonderful.)

> I have two: pod.t and pod_coverage.t.  These are pointless to run on  
> an end-user's machine.  At best they are redundant to immutable tests  
> already run on the author's machine and just waste processor cycles.   

I've actually discovered POD problems when users run these tests. They
aren't immutable, because people run different versions of the tools and
of the various POD modules. With simple fixes, I can keep the
documentation readable even for people with old Perl distributions.
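
For anyone who hasn't seen one, a typical pod.t is only a few lines.
This is the usual pattern from the Test::Pod documentation; the version
number is just an example:

    use Test::More;

    # Skip gracefully when the installer doesn't have Test::Pod,
    # rather than failing the whole test run.
    eval "use Test::Pod 1.00";
    plan skip_all => "Test::Pod 1.00 required for testing POD" if $@;

    # Check every POD file under blib/ (or lib/) for syntax errors.
    all_pod_files_ok();

Since Test::Pod itself changes between releases, the same POD can pass
on my machine and fail on someone else's, which is exactly how I hear
about these problems.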

Beyond catching actual problems, I still find value in them. If the
installer watches the tests go by, they see that the documentation is
being tested. I hope that gives them a little more confidence in the
module.

And, since this is open source, I distribute all the source I use to
develop the module. That's the idea, isn't it? If the user changes
something, they still have all the tests, including one to remind them
to document their new function using the proper format. :)
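
The coverage test is the same idea. Again, this is the stock pattern
from the Test::Pod::Coverage documentation, with an illustrative
version number:

    use Test::More;

    # Skip rather than fail if the installer doesn't have the module.
    eval "use Test::Pod::Coverage 1.00";
    plan skip_all =>
        "Test::Pod::Coverage 1.00 required for testing POD coverage"
        if $@;

    # Every public subroutine in every module should be documented.
    all_pod_coverage_ok();

Add a new function without documenting it and this test complains,
which is just the nudge I want the next person to get.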
