On Tuesday, September 12, 2006 at 11:15, Chris Dolan wrote:
> On Sep 12, 2006, at 9:24 AM, Salve J Nilsen wrote:
> 
> >>Any metric that catches bad things, particularly bad technical  
> >>things, is going to be just fine.
> >>Metrics that try to push "good" behavior are fraught with trouble,  
> >>because they start pushing people in odd directions.
> >
> >Do you have an example on this? (Any pointer would be wonderful.)
> 
> I have two: pod.t and pod_coverage.t.  These are pointless to run on  
> an end-user's machine.  At best they are redundant to immutable tests  
> already run on the author's machine and just waste processor cycles.   
> At worst they fail and cause false negative test reports.  The  
> prevalence of those two tests in CPAN modules is almost entirely due  
> to the influence of CPANTS.

At least has_test_pod could be rewritten as no_pod_errors and achieve the
same goal (checking that the documentation is syntactically correct POD).

Since running Test::Pod on all the .pm and .pod files doesn't require
actually running the code itself, it sounds perfectly acceptable to both
parties (those who want to check that the POD is correct, and those who
think that multiple copies of t/pod.t shouldn't clutter CPAN).

...

I had just motivated myself enough to write such a metric, when I
discovered that Module::CPANTS::Analyse already has a no_pod_errors
metric!
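To illustrate what such a metric has to do (this is a sketch using the
core Pod::Checker module, not the actual Module::CPANTS::Analyse code;
the file path is hypothetical):

```perl
use Pod::Checker;

# podchecker() parses a file's POD and returns the number of syntax
# errors found, or -1 if the file contains no POD at all.
# Diagnostics go to STDERR by default.
my $file   = 'lib/Foo.pm';      # hypothetical path
my $errors = podchecker($file);

if ( $errors <= 0 ) {
    print "$file: no POD errors\n";
}
else {
    print "$file: $errors POD error(s)\n";
}
```

A no_pod_errors metric only needs to run a check like this over every
.pm and .pod file in the distribution and award the point when no file
reports errors.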

That makes me wonder about the utility of has_test_pod, since
no_pod_errors is even more interesting: we want to give points to people
who *have* correctly written POD, rather than to those who merely try,
don't we?

-- 
 Philippe "BooK" Bruhat

 What everyone wants, nobody gets, What nobody gets, everybody wants.
                                    (Moral from Groo The Wanderer #47 (Epic))
