On 07 April 2005 19:34 David Golden wrote:

> Let's step back a moment.
> 
> Does anyone object that CPANTS Kwalitee looks for tests?  

I think you're missing the point of Tony's argument. I don't think anyone would
dispute that shipping tests with a distribution is a Good Thing (tm). What is at
issue is tests that have no real benefit from being run on the author's or
user's platform, apart from a feel-good factor. If Test::Pod and
Test::Pod::Coverage don't produce errors on the author's platform, it is
extremely doubtful they'll produce errors on the user's platform. I've been
putting these tests into my distributions for some time, but others haven't,
and some, like Tony, prefer to keep those tests as part of their local test
suite.

I do think that some kwalitee mark for them is worthwhile, but not at the
expense of checking for a specific filename (as the original check did) or for
whether the author includes a test using those modules. I believe Test::Pod
could be run without executing the modules, but Pod::Coverage requires the
module to at least load, so it can see all the symbol table information, as
some functions/methods may be created dynamically.
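For reference, the test script most authors ship is a small piece of boilerplate
along these lines (a sketch of the common idiom, not anything CPANTS mandates);
note that all_pod_coverage_ok() has to load each module in order to walk its
symbol table:

```perl
# t/pod-coverage.t -- skips politely if the prerequisite isn't installed.
use Test::More;
eval "use Test::Pod::Coverage 1.00";
plan skip_all => "Test::Pod::Coverage 1.00 required for testing POD coverage"
    if $@;

# Unlike a plain POD syntax check, this must *load* every module under
# blib/lib to inspect its symbol table, which is how dynamically created
# functions/methods become visible to the coverage check.
all_pod_coverage_ok();
```

The equivalent t/pod.t using Test::Pod follows the same skip-if-absent pattern,
but only parses the POD and never executes the modules.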

> * Shipping tests is a hint that a developer at least thought about
> testing.  Counter: It's no guarantee of the quality of testing and can
> be easily spoofed to raise quality.

This is not in question.

> * Tests evaluate the success of the distribution against its design
> goals given a user's unique system and Perl configuration.  Counter:
> developers should take responsibility for ensuring portability instead
> of hoping it works until some user breaks it.

Again, not in question.

> The first point extends very nicely to both has_test_* and coverage
> testing.

Daffodils are flowers, therefore all flowers are daffodils!

Including pod/coverage tests shows the author felt comfortable releasing those
tests. Not including them tells you nothing about the author's thought process
or test suite. Please don't second-guess them.

> The presence of a test is just a sign -- and one that doesn't require
> code to be run to determine Kwalitee.

True of Test::Pod, maybe; it's not true of Test::Pod::Coverage.

> The flip side, of course, is that by including tests
> that are necessary for CPANTS, a developer inflicts them on
> everyone who uses the code.  That isn't so terrible for pod
> and pod coverage testing, but it's a much bigger hit for
> Devel::Cover.

Devel::Cover test coverage can be misleading. One of my modules contains a set
of debug statements that, if a value is undef, print the string 'undef' to
avoid warnings. Devel::Cover notes that I don't test for that, and as such I
don't have 100% test coverage in that module. I'm happy with that, and it
doesn't affect the working of the module. Would I be marked down on kwalitee
for not having 100% test coverage? Plenty of other modules are in a similar
situation. In another module, should I check that it can handle a broken DBI
connection in the middle of a fetch?
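As a hypothetical sketch of the kind of branch I mean (the helper name here is
made up for illustration, not from any real module):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical debug helper: substitutes the string 'undef' for an
# undefined value, so debug output doesn't trigger "uninitialized value"
# warnings. Unless the test suite deliberately passes undef, Devel::Cover
# reports the defined() branch as only partially covered, even though the
# branch has no bearing on the module's real behaviour.
sub debug_value {
    my ($value) = @_;
    return defined $value ? $value : 'undef';
}

print "value: ", debug_value(42),    "\n";   # value: 42
print "value: ", debug_value(undef), "\n";   # value: undef
```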

> Why not find a way to include them in the META.yml file and have the
> build tools keep track of whether pod/pod-coverage/code-coverage was
> run?  Self reported statistics are easy to fake, but so are the
> has_test_* Kwalitee checks as many people have pointed out.

Not quite sure whether you're arguing for or against here. 

> Anyone who is obsessed about Kwalitee scores is going to fake other
> checks, too.

Daffodils are flowers, therefore all flowers are daffodils!

You're second-guessing again here. Someone who is obsessed with their kwalitee
scores may actually be quite passionate about shipping the best-quality
packaged distribution they possibly can. Many authors take great pride in the
work they produce, so why would they want to release it in a packaged
distribution with a low quality rating?

> As to the benefits of having Devel::Cover run on many
> environments and recording the output

While this may be of interest to some authors, not all users would be that
interested. Maybe they should be, but that's another discussion entirely.
Devel::Cover reports, as I've mentioned above, could end up being very
misleading. How many modules actually have 100% test coverage? For those that
don't attain 100%, does that mean they are bad modules? Kwalitee currently
measures in 0s and 1s; there is no decimal point.

> Ironically, for all the skeptical comments about "why a scoreboard" --
> the fact that many people care about the Kwalitee metric suggests that
> it does serve some inspirational purpose.

That's all it's there for. There is no prize, other than self satisfaction that
you've packaged your distributions as reliably as you possibly can.

Personally, I'm not fussed whether the pod testing is a kwalitee item, as I was
including those tests in my distributions before it was introduced. I look to
kwalitee items simply as a checklist of things I may have missed from my
distributions. A good score does not represent good quality code, as there are
many, many far better modules to learn from than mine.

I also try to include a consistent set of POD headings, but this is a purely
personal thing. Every author has their own interpretation of which headings
they should include. That's something else that could be considered a kwalitee
item, but shouldn't be.

Anything that productively improves the kwalitee of CPAN and the distributions
on it is good. Anything that has a bad impact on some good-quality work is
likely to mean CPANTS and the kwalitee system will get ignored.

Barbie.

-- 
Barbie (@missbarbell.co.uk) | Birmingham Perl Mongers user group |
http://birmingham.pm.org/
