Adam Kennedy writes:

> Christopher H. Laco wrote:
> 
> > Tony Bowden wrote:
> > 
> > > What's the difference to you between me shipping a .t file that
> > > uses Pod::Coverage, or by having an internal system that uses
> > > Devel::Cover in a mode that makes sure I have 100% coverage on
> > > everything, including POD, or even if I hire a team of Benedictine
> > > Monks to peruse my code and look for problems?
> > >
> > > The only thing that should matter to you is whether the Pod
> > > coverage is adequate, not how that happens.
> > 
> > How, as a module consumer, would I find out that the Pod coverage is
> > adequate again? Why, the [unshipped] .t file in this case.
> > 
> > The only other way to tell is to a) write my own pod_coverage.t test
> > for someone else's module at install time, or b) hand review all of
> > the pod vs. code.  Or CPANTS.
> 
> The main point is not so much that you define a measure of quality,
> but that you dictate to everyone the one true way in which they must
> determine it.

I'm completely with Tony and Adam on this particular point: that
TIMTOWTDI applies to checking pod coverage, and it doesn't make sense to
dismiss a module as being of lower quality because it doesn't perform one
particular check in one particular way.
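
(For concreteness, the test being argued about is typically only a few
lines.  A minimal sketch, assuming the author uses Test::Pod::Coverage --
the usual test wrapper around Pod::Coverage -- and wants the test to skip
rather than fail on machines where it isn't installed:

    use Test::More;

    # Skip the whole test file if Test::Pod::Coverage isn't available,
    # rather than failing for users who don't have it installed.
    eval "use Test::Pod::Coverage 1.00";
    plan skip_all => "Test::Pod::Coverage 1.00 required" if $@;

    # Check that every public subroutine in every module under blib/
    # is documented in its POD.
    all_pod_coverage_ok();

Whether that file is shipped in the distribution or only run privately,
it's the same handful of lines.)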

But ...

Remember that we aren't measuring quality, but kwalitee.  Kwalitee is
supposed to provide a reasonable indication of quality, so far as that's
possible.  So what matters in determining whether a kwalitee heuristic
is appropriate is whether there is a correlation between modules that
pass the heuristic and those that humans would consider to be of high
quality.

(Theoretically) it doesn't actually matter whether the heuristic _a
priori_ makes sense.  If it happens to turn out that the particular
string "grrr', @{$_->{$" occurs in many modules that are of high quality
and few that are of low quality, then it happens that looking for the
existence of that string in a distribution's source will provide a
useful indication when assessing the module.  It doesn't have to make
sense in order for that to be the case.  (Think of the rules that neural
net or Bayesian spam detectors come up with for guessing the quality of
e-mail messages.)
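
(To make the thought experiment concrete: such a check would amount to
nothing more than a grep over the distribution's files.  This is a toy
sketch of that hypothetical heuristic -- not anything CPANTS actually
does -- with the directory argument and output entirely made up:

    use strict;
    use warnings;
    use File::Find;

    # The magic substring from the thought experiment above.
    my $magic = q(grrr', @{$_->{$);

    my $dist_dir = shift @ARGV or die "usage: $0 <unpacked-dist-dir>\n";

    # Award the "kwalitee point" if the substring appears anywhere in
    # the distribution's files.
    my $found = 0;
    find(sub {
        return unless -f $_;
        open my $fh, '<', $_ or return;
        my $content = do { local $/; <$fh> };   # slurp the file
        $found = 1 if index($content, $magic) >= 0;
    }, $dist_dir);

    print $found ? "kwalitee point awarded\n" : "no kwalitee point\n";

The point is only that a check this arbitrary could still be useful if it
correlated with quality.)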

I say "theoretically" cos in the case of CPANTS kwalitee the rules are
publicly available -- so even if a neural net (or whatever) did come up
with the above substring heuristic, once it's known, authors can game
the system by artificially crowbarring it into their modules' sources, at
which point the heuristic loses value.

So while I agree the pod coverage test criterion makes no sense, and
that it's perfectly valid not to distribute such a test, what I think is
more important is whether that criterion works.

In other words, which are the modules of poor quality but with high
kwalitee (and vice versa)?  And what can be done to distinguish those
modules from modules of high (low) quality?  It may be that removing the
pod coverage test criterion is an answer to that question (or it may
not).

> Why not give a kwalitee point for modules that bundle a test that
> checks for kwalitee?

If it produces a good correlation, then yes, have such a criterion.

Smylers
-- 
May God bless us with enough foolishness to believe that we can make a
difference in this world, so that we can do what others claim cannot be done.