Re: Kwalitee and has_test_*

2005-04-18 Thread David Cantrell
Michael Graham wrote:
> If someone were to take over maintenance of your module, or they were to
> fork it, or they were submitting patches to you, then they would want
> these tools and tests, right?  How would they get them?

By asking for them?

It is my experience that when someone takes over maintenance of a module 
it is usually with the blessing of the previous maintainer, so that 
shouldn't be difficult most of the time.

--
David Cantrell


Re: Kwalitee and has_test_*

2005-04-18 Thread Adrian Howard
On 17 Apr 2005, at 11:09, Tony Bowden wrote:
> On Sun, Apr 17, 2005 at 08:24:01AM +0000, Smylers wrote:
>> Negative quality for anybody who includes a literal tab character
>> anywhere in the distro's source!
> Negative quality for anyone whose files appear to have been edited in
> emacs!

Ow! Coffee snorted down nose. Ouch.
Adrian


Re: Kwalitee and has_test_*

2005-04-18 Thread Adrian Howard
On 17 Apr 2005, at 13:47, David A. Golden wrote:
[snip]
> 2) A metric to estimate the quality of a distribution for authors to
> compare their work against a subjective standard in the hopes that
> authors strive to improve their Kwalitee scores.  In this model,
> faking Kwalitee is irrelevant, because even if some authors fake it,
> others will improve quality (as measured by Kwalitee) for real, thus
> making Kwalitee useful as a quality improvement tool.
>
> Actually, in #2, fakers can provide extra competitive pressure, as
> module authors who take Kwalitee seriously perceive a higher standard
> that they should be striving for.
>
> I think most of the Kwalitee debate has been around confusion between
> whether #1 or #2 is the goal, plus what the subjective standard
> should be.

If #2 is the primary goal, then one option might be to have a standard 
way of popping the information into the META.yml file?  If we're 
assuming honesty on the module author's part...

Adrian


Re: Kwalitee and has_test_*

2005-04-17 Thread Smylers
Adam Kennedy writes:

 Christopher H. Laco wrote:
 
  Tony Bowden wrote:
  
    What's the difference to you between me shipping a .t file that
   uses Pod::Coverage, or by having an internal system that uses
   Devel::Cover in a mode that makes sure I have 100% coverage on
   everything, including POD, or even if I hire a team of Benedictine
   Monks to peruse my code and look for problems?
  
   The only thing that should matter to you is whether the Pod
   coverage is adequate, not how that happens.
  
  How as a module consumer would I find out that the Pod coverage is
   adequate again?  Why, the [unshipped] .t file in this case.
  
  The only other way to tell is to a) write my own pod_coverage.t test
  for someone elses module at install time, or b) hand review all of
  the pod vs. code.  Or CPANTS.
 
 The main point is not so much that you define a measure of quality,
 but that you dictate to everyone the one true way in which they must
 determine it.

I'm completely with Tony and Adam on this particular point: that
TIMTOWTDI applies to checking pod coverage, and it doesn't make sense to
label a distribution as being of lower quality because it doesn't perform
one particular check in one particular way.

But ...

Remember that we aren't measuring quality, but kwalitee.  Kwalitee is
supposed to provide a reasonable indication of quality, so far as that's
possible.  So what matters in determining whether a kwalitee heuristic
is appropriate is whether there is a correlation between modules that
pass the heuristic and those that humans would consider to be of high
quality.

(Theoretically) it doesn't actually matter whether the heuristic _a
priori_ makes sense.  If it happens to turn out that the particular
string grrr', @{$_-{$ occurs in many modules that are of high quality
and few that are of low quality, then it happens that looking for the
existence of that string in a distribution's source will provide a
useful indication when assessing the module.  It doesn't have to make
sense in order for that to be the case.  (Think of the rules that neural
net or Bayesian spam detectors come up with for guessing the quality of
e-mail messages.)

I say "theoretically" cos in the case of Cpants kwalitee the rules are
publicly available -- so even if a neural net (or whatever) did come up
with the above substring heuristic, once it's known then authors can game
the system by artificially crowbarring that string into their modules'
sources, at which point the heuristic loses value.

So while I agree the pod coverage test criterion makes no sense, and
that it's perfectly valid not to distribute such a test, what I think is
more important is whether that criterion works.

In other words, which are the modules of poor quality but with high
kwalitee (and vice versa)?  And what can be done to distinguish those
modules from modules of high (low) quality?  It may be that removing the
pod coverage test criterion is an answer to that question (or it may
not).

 Why not give a kwalitee point for modules that bundle a test that
 checks for kwalitee?

If it produces a good correlation, then yes, have such a criterion.

Smylers
-- 
May God bless us with enough foolishness to believe that we can make a
difference in this world, so that we can do what others claim cannot be done.



Re: Kwalitee and has_test_*

2005-04-17 Thread Smylers
chromatic writes:

 On Sat, 2005-04-16 at 20:59 -0500, Andy Lester wrote:
 
  And the more the better!
 
 Well sure.  Two-space indent is clearly better than one-space indent,
 and four-space is at least twice as good as that.

Negative quality for anybody who includes a literal tab character
anywhere in the distro's source!

Smylers
-- 
May God bless us with enough foolishness to believe that we can make a
difference in this world, so that we can do what others claim cannot be done.



Re: Kwalitee and has_test_*

2005-04-17 Thread Smylers
Adam Kennedy writes:

 Michael Graham wrote:
 
  Another good reason to ship all of your development tests with code
  is that it makes it easier for users to submit patches with tests.
  Or to fork your code and retain all your development tools and
  methods.
 
 Perl::MinimumVersion, which doesn't exist yet, could check that the 
 version a module says it needs is at least as high as what 
 Perl::MinimumVersion can work out based on syntax alone.
 
 And it uses PPI... all 55 classes of it... which uses Class::Autouse, 
 which uses Class::Inspector, and prefork.pm, and Scalar::Util and 
 List::Util, oh and List::MoreUtils and a few other bits and pieces.
 
 I'm not going to push that giant nest of dependencies on people just so 
 they can install Chart::Math::Axis...

You don't have to -- have a test which depends on Perl::MinimumVersion,
but which skips itself entirely if Perl::MinimumVersion isn't installed.
Then anybody without all those dependencies isn't inconvenienced in the
slightest -- but anybody who wants to patch the module can see the
test's existence and choose to go to the effort of installing it if she
so desires.
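
For instance, a sketch of such a self-skipping test (the file names are
illustrative, and Perl::MinimumVersion's API is assumed here, since the
module doesn't exist yet):

    # t/99_minimum_version.t
    use strict;
    use Test::More;

    eval "use Perl::MinimumVersion";
    plan skip_all => "Perl::MinimumVersion required for this developer test"
        if $@;

    plan tests => 1;

    my $needed = Perl::MinimumVersion->new("lib/Chart/Math/Axis.pm")
                                     ->minimum_version;
    ok( $needed->numify <= 5.006,
        "syntax needs no more than the perl we declare" );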

(Note I'm not specifically agreeing with the point that shipping dev
tests makes sense -- I think it's fine for some authors to choose to do
so and some to choose not to; I'm merely disagreeing with the suggestion
that shipping a dev test _necessarily_ imposes a burden on mere users of
the module.)

Smylers
-- 
May God bless us with enough foolishness to believe that we can make a
difference in this world, so that we can do what others claim cannot be done.



Re: Kwalitee and has_test_*

2005-04-17 Thread Tony Bowden
On Sun, Apr 17, 2005 at 08:24:01AM +0000, Smylers wrote:
 Negative quality for anybody who includes a literal tab character
 anywhere in the distro's source!

Negative quality for anyone whose files appear to have been edited in
emacs!

Tony




Re: Kwalitee and has_test_*

2005-04-17 Thread Tony Bowden
On Sun, Apr 17, 2005 at 12:17:17AM +0000, Smylers wrote:
 Remember that we aren't measuring quality, but kwalitee.  Kwalitee is
 supposed to provide a reasonable indication of quality, so far as that's
 possible.  So what matters in determining whether a kwalitee heuristic
 is appropriate is whether there is a correlation between modules that
 pass the heuristic and those that humans would consider to be of high
 quality.

I don't think this is what kwalitee is. (Of course, I may be wrong.)

If we consider your example of Bayesian spam detectors, they only work
once you've trained them. And that's not what's going on here. We have
no indications of which are Quality modules in the first place, such
that we can run tests to see what the, perhaps surprising, common
features are that we could then set up as Kwalitee checks.

Rather, Kwalitee is that subset of Quality which can be measured in an
automated manner. You should be able to look at a Kwalitee check and
agree that it does indeed constitute Quality. 

 so even if a neural net (or whatever) did come up
 with the above substring heuristic, once it's known then authors can game
 the system by artificially crowbarring that string into their modules'
 sources, at which point the heuristic loses value.

I thought the idea was that we /wanted/ people to increase their
Kwalitee, and thus their Quality. The things we designate as Kwalitee
indicators should be things that module authors are encouraged to do.
Gaming the system in this environment is to be welcomed.

Tony



Re: Kwalitee and has_test_*

2005-04-17 Thread David A. Golden
Tony Bowden wrote:
>> so even if a neural net (or whatever) did come up
>> with the above substring heuristic, once it's known then authors can
>> game the system by artificially crowbarring that string into their
>> modules' sources, at which point the heuristic loses value.
>
> I thought the idea was that we /wanted/ people to increase their
> Kwalitee, and thus their Quality. The things we designate as Kwalitee
> indicators should be things that module authors are encouraged to do.
> Gaming the system in this environment is to be welcomed.
Great point.   It leads me to suggest that people are thinking about 
Kwalitee in two ways:

1) A metric to estimate the quality of a distribution for objective 
comparison against other distributions.  In this model, faking Kwalitee 
is bad because it obscures the comparison.

2) A metric to estimate the quality of a distribution for authors to 
compare their work against a subjective standard in the hopes that 
authors strive to improve their Kwalitee scores.  In this model, faking 
Kwalitee is irrelevant, because even if some authors fake it, others 
will improve quality (as measured by Kwalitee) for real, thus 
making Kwalitee useful as a quality improvement tool.

Actually, in #2, fakers can provide extra competitive pressure, as 
module authors who take Kwalitee seriously perceive a higher standard 
that they should be striving for.

I think most of the Kwalitee debate has been around confusion between 
whether #1 or #2 is the goal, plus what the subjective standard should be.

Regards,
David


Re: Kwalitee and has_test_*

2005-04-17 Thread Randal L. Schwartz
>>>>> "Tony" == Tony Bowden <[EMAIL PROTECTED]> writes:

Tony> Negative quality for anyone whose files appear to have been edited in
Tony> emacs!

Now, them's fightin' words!

-- 
Randal L. Schwartz - Stonehenge Consulting Services, Inc. - +1 503 777 0095
merlyn@stonehenge.com URL:http://www.stonehenge.com/merlyn/
Perl/Unix/security consulting, Technical writing, Comedy, etc. etc.
See PerlTraining.Stonehenge.com for onsite and open-enrollment Perl training!


Re: Kwalitee and has_test_*

2005-04-16 Thread Adam Kennedy
Christopher H. Laco wrote:
> Tony Bowden wrote:
>> On Thu, Apr 07, 2005 at 12:32:31PM -0400, Christopher H. Laco wrote:
>>>> CPANTS can't check that for me, as I don't ship those tests.
>>>> They're part of my development environment, not part of my release
>>>> tree.
>>>
>>> That is true. But if you don't ship them, how do I know you bothered
>>> to check those things in the first place?
>>
>> Why do you care? What's the difference to you between me shipping a .t
>> file that uses Pod::Coverage, or by having an internal system that uses
>> Devel::Cover in a mode that makes sure I have 100% coverage on
>> everything, including POD, or even if I hire a team of Benedictine
>> Monks to peruse my code and look for problems?
>>
>> The only thing that should matter to you is whether the Pod coverage is
>> adequate, not how that happens.
>
> I think you just answered your own question, assuming you just agreed
> that I should care about whether your pod coverage is adequate.
>
> How as a module consumer would I find out that the Pod coverage is
> adequate again?  Why, the [unshipped] .t file in this case.
>
> The only other way to tell is to a) write my own pod_coverage.t test
> for someone else's module at install time, or b) hand review all of the
> pod vs. code.  Or CPANTS.

The main point is not so much that you define a measure of quality, but 
that you dictate to everyone the one true way in which they must 
determine it.

The POD is the worst example of this. Why on earth should you care? You 
say because you should care if the author does consistent POD checking. 
Fine, but you dictate the One True Path to POD checking, which is to 
bundle a test for it with the package.

I've got all sorts of crap in my package delivery pipeline, but it 
doesn't mean I want to go bundling all of it in. Imagine a PPI-based 
tool that verifies that the required version matches the syntax. Should 
we make every single module bundle it, and make every single user 
download a 55-class, 5000-SLOC, 3-meg package? Just to install Thingy.pm.

Why not give a kwalitee point for modules that bundle a test that checks 
for kwalitee? The required kwalitee.t checks quality and requires your 
kwalitee be over 15 before passing.

Where does it end?
Adam K


Re: Kwalitee and has_test_*

2005-04-16 Thread Adam Kennedy
Michael Graham wrote:
> Another good reason to ship all of your development tests with code is
> that it makes it easier for users to submit patches with tests.  Or to
> fork your code and retain all your development tools and methods.

Perl::MinimumVersion, which doesn't exist yet, could check that the 
version a module says it needs is at least as high as what 
Perl::MinimumVersion can work out based on syntax alone.

And it uses PPI... all 55 classes of it... which uses Class::Autouse, 
which uses Class::Inspector, and prefork.pm, and Scalar::Util and 
List::Util, oh and List::MoreUtils and a few other bits and pieces.

I'm not going to push that giant nest of dependencies on people just so 
they can install Chart::Math::Axis...

So I run it in my packaging pipeline. It's a low percentage test that 
catches some annoying cases that bite me once a year.

And I should probably not talk about the RecDescent parser for the 
bundled .sql files, or the test that ensures that any bundled .gif files 
are at something close to their best possible compression level.

Adam K


Re: Kwalitee and has_test_*

2005-04-16 Thread Michael Graham

 Michael Graham wrote:
  Another good reason to ship all of your development tests with code is
  that it makes it easier for users to submit patches with tests.  Or to
  fork your code and retain all your development tools and methods.

 Perl::MinimumVersion, which doesn't exist yet, could check that the
 version a module says it needs is at least as high as what
 Perl::MinimumVersion can work out based on syntax alone.

 And it uses PPI... all 55 classes of it... which uses Class::Autouse,
 which uses Class::Inspector, and prefork.pm, and Scalar::Util and
 List::Util, oh and List::MoreUtils and a few other bits and pieces.

 I'm not going to push that giant nest of dependencies on people just so
 they can install Chart::Math::Axis...

I'm not suggesting that end users be forced to *run* your development
tests.  Just that the tests be included in your CPAN package.  Ideally,
the install process can be made smart enough to skip this kind of test.
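
One way to arrange that skip (a sketch; the TEST_AUTHOR environment
variable is just one possible opt-in convention):

    # t/pod.t -- runs fully only when the developer asks for it
    use strict;
    use Test::More;

    plan skip_all => "Set TEST_AUTHOR to run POD tests"
        unless $ENV{TEST_AUTHOR};

    eval "use Test::Pod 1.00";
    plan skip_all => "Test::Pod 1.00 required for testing POD" if $@;

    all_pod_files_ok();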

 So I run it in my packaging pipeline. It's a low percentage test that
 catches some annoying cases that bite me once a year.

 And I should probably not talk about the RecDescent parser for the
 bundled .sql files, or the test that ensures that any bundled .gif files
 are at something close to their best possible compression level.

If someone were to take over maintenance of your module, or they were to
fork it, or they were submitting patches to you, then they would want
these tools and tests, right?  How would they get them?


Michael


---
Michael Graham [EMAIL PROTECTED]

YAPC::NA 2005 Toronto - http://www.yapc.org/America/ - [EMAIL PROTECTED]
---



Re: Kwalitee and has_test_*

2005-04-16 Thread chromatic
On Sat, 2005-04-16 at 19:31 -0400, Michael Graham wrote:

 I'm not suggesting that end users be forced to *run* your development
 tests.  Just that the tests be included in your CPAN package.  Ideally,
 the install process can be made smart enough to skip this kind of test.

Shipping tests but not running them is a sign of kwalitee?  "Some file
somewhere mentions Pod::Coverage" is a sign of kwalitee?  Shipping
useless code is a sign of kwalitee?

I'm about halfway ready to propose 'has_indentation' as a kwalitee
metric.

 If someone were to take over maintenance of your module, or they were
 to fork it, or they were submitting patches to you, then they would 
 want these tools and tests, right?  How would they get them?

I think this is a discussion completely separate from CPANTS' kwalitee
metrics.

Besides, if someone wants to run test or documentation coverage checks
against any of my modules, he or she ought to be able to copy and paste
the SYNOPSIS output to the appropriate test files, if not ask me for
them outright.

-- c



Re: Kwalitee and has_test_*

2005-04-16 Thread Andy Lester
> I'm about halfway ready to propose 'has_indentation' as a kwalitee
> metric.

And the more the better!
--
Andy Lester = [EMAIL PROTECTED] = www.petdance.com = AIM:petdance


Re: Kwalitee and has_test_*

2005-04-16 Thread chromatic
On Sat, 2005-04-16 at 20:59 -0500, Andy Lester wrote:

  I'm about halfway ready to propose 'has_indentation' as a kwalitee
  metric.

 And the more the better!

Well sure.  Two-space indent is clearly better than one-space indent,
and four-space is at least twice as good as that.

It falls down a bit as the numbers increase from there, though.

-- c



Re: Kwalitee and has_test_*

2005-04-08 Thread Tony Bowden
On Thu, Apr 07, 2005 at 02:34:21PM -0400, David Golden wrote:
 * Shipping tests is a hint that a developer at least thought about 
 testing.  Counter: It's no guarantee of the quality of testing and can 
 be easily spoofed to raise apparent quality.

This is certainly not why I ship tests, and I've never heard anyone
claim that it's a sensible reason for shipping tests, outside of this
thread.

 * Tests evaluate the success of the distribution against its design 
 goals given a user's unique system and Perl configuration.  Counter: 
 developers should take responsibility for ensuring portability instead 
 of hoping it works until some user breaks it.

This is exactly why I believe shipping tests with code is a good thing.
I cannot possibly test every perl configuration on every OS. Even if the
team of Benedictine Monks that I employ to do this for me could somehow
manage this, unfortunately they are not yet capable of testing it
against /future/ configurations.

Shipping tests with my code protects users from incompatibility by
warning them /before/ they install the module, rather than when a user
finally triggers the problem in real code.

The more exhaustive my test suite is, the better this is.

This also makes my life as a developer much better. I don't have to field
lots of bug reports that say nothing more than "I can't get Foo::Bar
version 2.09 to work on Windows". Instead they can tell me "t/wibble.t
fails test 13 on Windows saying: 'no such file or directory'". Then,
even without access to that platform, I can probably track down what
the problem is.

Shipping tests is a good thing. The better they are, the better a thing
it is.

Shipping tests that don't actually exercise any functionality of the
code, OTOH, does not share in this mojo. If I have POD for method
frobnitz(), then the POD is there for everyone to enjoy. Shipping a test
that confirms that tells no-one anything, and achieves no purpose other
than, currently, to raise my Kwalitee. Rather, it wastes time for the
people installing it, especially as it will recommend that they install
another non-core module (with several other non-core dependencies in
turn).

Kwalitee is only useful if it measures useful things. The presence of a
test file that somewhere mentions the words "Pod::Coverage" is not a
useful thing.

Tony




RE: Kwalitee and has_test_*

2005-04-08 Thread Barbie
On 07 April 2005 19:34 David Golden wrote:

 Let's step back a moment.
 
 Does anyone object that CPANTS Kwalitee looks for tests?  

I think you're missing the point of Tony's argument. I don't think anyone would
dispute that shipping tests with a distribution is a Good Thing (tm). What is at
issue are tests that have no real benefit from being run on the author's or
user's platform, apart from a feel-good factor. If Test::Pod and
Test::Pod::Coverage don't produce errors on the author's platform, it is
extremely doubtful they'll produce errors on the user's platform. I've been
putting these tests into my distributions for some time, but others haven't,
and some, like Tony, prefer to keep those tests as part of their local test
suite.

I do think that some kwalitee mark for them is worthwhile, but not at the
expense of checking a specific filename (as it was in the original check) or
whether the author includes a test using those modules. I believe Test::Pod
could be run without executing the modules, but Pod::Coverage requires the
module to at least load to see all the symbol table information, as some
functions/methods may be created dynamically.

 * Shipping tests is a hint that a developer at least thought about
 testing.  Counter: It's no guarantee of the quality of testing and can
 be easily spoofed to raise apparent quality.

This is not in question.

 * Tests evaluate the success of the distribution against its design
 goals given a user's unique system and Perl configuration.  Counter:
 developers should take responsibility for ensuring portability instead
 of hoping it works until some user breaks it.

Again, not in question.

 The first point extends very nicely to both has_test_* and coverage
 testing.

Daffodils are flowers, therefore all flowers are daffodils!

Including pod/coverage tests shows the author felt comfortable releasing those
tests. Not including them tells you nothing about the author's thought process or
test suite. Please don't second-guess them.

 The presence of a
 test is just a sign -- and one that doesn't require code to be run to
 determine Kwalitee.  

Maybe true of Test::Pod, but not of Test::Pod::Coverage.

 The flip side, of course, is that by including tests
 that are necessary for CPANTS, a developer inflicts them on
 everyone who uses the code.  That isn't so terrible for pod
 and pod coverage testing, but it's a much bigger hit for 
 Devel::Cover.

Devel::Cover test coverage can be misleading. In one of my modules there is a
set of debug statements that, if the value is undef, print the string 'undef'
to avoid warnings. Devel::Cover notes that I don't test for that, and as such I
don't have 100% test coverage in my module. I'm happy with that, and it doesn't
affect the working of the module. Would I be marked down on kwalitee for not
having 100% test coverage? There are plenty of other modules that are in a
similar situation. In another module, should I check that my module can handle a
broken DBI connection in the middle of a fetch?
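
Something along those lines, for illustration (a reconstruction, not the
actual module's code):

    sub debug {
        my ($value) = @_;
        # Devel::Cover counts the 'undef' side of this ternary as an
        # uncovered branch unless a test deliberately passes undef.
        print STDERR defined $value ? $value : 'undef';
    }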

 Why not find a way to include them in the META.yml file and have the
 build tools keep track of whether pod/pod-coverage/code-coverage was
 run?  Self reported statistics are easy to fake, but so are the
 has_test_* Kwalitee checks as many people have pointed out.

Not quite sure whether you're arguing for or against here. 

 Anyone who is obsessed about Kwality scores is going to fake other 
 checks, too.

Daffodils are flowers, therefore all flowers are daffodils!

You're second-guessing again here. Someone who is obsessed with their kwalitee
scores may actually be quite passionate about having the best quality packaged
distribution they possibly can. Many authors do actually take great pride in the
work they produce, so why would they want to release it in a packaged
distribution with a low quality rating?

 As to the benefits of having Devel::Cover run on many
 environments and recording the output

While this may be of interest to some authors, not all users would be that
interested. Maybe they should be, but that's another discussion entirely.
Devel::Cover reports, as I've mentioned above, could end up being very
misleading. How many modules actually have 100% test coverage? For those that
don't attain 100%, does that mean they are bad modules? Kwalitee currently
measures 0 and 1; there is no decimal point.

 Ironically, for all the skeptical comments about "why a
 scoreboard" --
 the fact that many people care about the Kwalitee metric
 suggests that
 it does serve some inspirational purpose.

That's all it's there for. There is no prize, other than the self-satisfaction
you've packaged your distributions as reliably as you possibly can.

Personally, I'm not fussed whether the pod testing is a kwalitee item, as I was
including those tests in my distributions before it was introduced. I look to
kwalitee items simply as a checklist of things I may have missed from my
distributions. It does not represent good quality code, as there are many

Re: Kwalitee and has_test_*

2005-04-08 Thread Michael Graham

Another good reason to ship all of your development tests with code is
that it makes it easer for users to submit patches with tests.  Or to
fork your code and retain all your development tools and methods.

Since most Perl modules go up on CPAN and nowhere else, I think that the
CPAN distribution should contain as much of the useful development
environment as possible.  That includes POD tests, coverage tests,
benchmark tests, developer documentation, nonsense poetry, scraps of
paper, bits of string.

However, personally I think all POD tests and coverage tests should be
skipped at install time unless the user specifically asks for those
tests to be run.

For one thing, they slow down the install and rarely provide useful
information to the user.

For another, they may generate errors that prevent install, even though
the module works correctly.

For instance, if my POD extraction tools are slightly different than the
developer's POD extraction tools, and this causes a POD coverage test to
fail, then the module will fail to install even though it would have
worked correctly.

Maybe I get all my documentation from search.cpan.org and I don't care
whether the module's docs are considered 'kwalitudinous' on my system.
Maybe I don't know enough about testing and module development to know
that the module still works fine in spite of the failed POD test.


Michael




---
Michael Graham [EMAIL PROTECTED]

YAPC::NA 2005 Toronto - http://www.yapc.org/America/ - [EMAIL PROTECTED]
---




Re: Kwalitee and has_test_*

2005-04-07 Thread Adam Kennedy
David Cantrell wrote:
> Thomas Klausner wrote:
>> I cannot check POD coverage because Pod::Coverage executes the code.
>
> No it doesn't.  That said, if you don't want to run the code you're
> testing, you are, errm, limiting yourself rather badly.

Do YOU want to run all of CPAN?
I certainly don't.
Bulk testing requires that you don't have to run it.
Adam K


Re: Kwalitee and has_test_*

2005-04-07 Thread Adam Kennedy
> Adding a kwalitee check for a test that runs Devel::Cover by default
> might on the surface appear to meet this goal, but I hope people
> recognize it as a bad idea.
>
> Why, then, is suggesting that people ship tests for POD errors and
> coverage a good idea?

Although I've now added the automated inclusion of a 99_pod.t to my 
packaging system (less for kwalitee than that I've noticed the odd bug 
get through myself) why doesn't kwalitee just check the POD itself, 
rather than make a check for a check?

Adam K


Re: Kwalitee and has_test_*

2005-04-07 Thread Thomas Klausner
Hi!

On Thu, Apr 07, 2005 at 01:17:40PM +1000, Adam Kennedy wrote:
 Adding a kwalitee check for a test that runs Devel::Cover by default
 might on the surface appear to meet this goal, but I hope people
 recognize it as a bad idea.
 
 Why, then, is suggesting that people ship tests for POD errors and
 coverage a good idea?
 
 Although I've now added the automated inclusion of a 99_pod.t to my 
 packaging system (less for kwalitee than that I've noticed the odd bug 
 get through myself) why doesn't kwalitee just check the POD itself, 
 rather than make a check for a check?

It does:

no_pod_errors
Shortcoming: The documentation for this distribution contains syntactic
 errors in its POD.
Defined in: Module::CPANTS::Generator::Pod
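
A pod-syntax check like that needs no code execution.  For instance, with
the core Pod::Checker (a sketch; not necessarily what
Module::CPANTS::Generator::Pod actually uses, and the file name is
illustrative):

    use Pod::Checker;

    # Parse the file's POD, sending diagnostics to STDERR.
    my $checker = Pod::Checker->new( -warnings => 1 );
    $checker->parse_from_file( "lib/Foo.pm", \*STDERR );
    print "POD errors found\n" if $checker->num_errors > 0;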

I added the check for Test::Pod because somebody requested it (together with
Test::Pod::Coverage).

While I can see the point why people object to these metrics, I currently
leave them in, mostly because I've got no time for CPANTS right now (mostly
because of the Austrian Perl Workshop organisation (shameless plug:
http://conferences.yapceurope.org/apw2005/)).


-- 
#!/usr/bin/perl   http://domm.zsi.at
for(ref bless{},just'another'perl'hacker){s-:+-$"-g&&print$_.$/}


Re: Kwalitee and has_test_*

2005-04-07 Thread Christopher H. Laco
Adam Kennedy wrote:
>> Adding a kwalitee check for a test that runs Devel::Cover by default
>> might on the surface appear to meet this goal, but I hope people
>> recognize it as a bad idea.
>>
>> Why, then, is suggesting that people ship tests for POD errors and
>> coverage a good idea?
>
> Although I've now added the automated inclusion of a 99_pod.t to my
> packaging system (less for kwalitee than that I've noticed the odd bug
> get through myself) why doesn't kwalitee just check the POD itself,
> rather than make a check for a check?
>
> Adam K

Because they're two separate issues.
First, checking the pod syntax is ok for the obvious reasons. Broken pod 
leads to doc problems.

Second, we're checking that the AUTHOR is also checking his/her pod 
syntax and coverage. That's an important distinction.

I would go as far as to say that checking the author's development 
intentions via checks like Test::Pod::Coverage, Test::Strict, 
Test::Distribution, etc is just as important, if not more, than just 
checking syntax and that all tests pass.

Given two modules with a passing basic.t, I'd go for the one with all of 
the development side tests over the other. Those tests listed above 
signal [to me] that the author [probably] pays more loving concern to 
all facets of their module than the one with just the passing basic.t

-=Chris




Re: Kwalitee and has_test_*

2005-04-07 Thread David Golden
This is an interesting point and triggered the thought in my mind that 
CPANTS Kwalitee is really testing *distributions* not modules -- i.e. 
the quality of the packaging, not the underlying code.  That's 
important, too, but quite arbitrary -- insisting that distributions test 
pod and pod coverage is arbitrary.  If CPANTS insisted that all modules 
in a distribution be in a lib directory, that would be arbitrary, too, 
but not consistent with general practice (fortunately, it's written to 
allow a single .pm in the base directory, otherwise there has to be a 
lib directory).

The point I'm making is that CPANTS -- if it is to stay true to purpose 
-- should stick to distribution tests and try to ensure that those 
reflect widespread quality practices, not evangelization (however well 
meaning) to push an arbitrary definition of quality on an unruly 
community.  Devel::Cover is a useful tool -- but it pushes further and 
further away from a widespread distribution-level measure of quality.  
(Whereas I see pod testing as analogous to a compilation test and pod 
coverage testing being a documentation test -- both of which are 
reasonable things to include in a high quality test suite.)

David Golden
Christopher H. Laco wrote:
> Because they're two separate issues.
> First, checking the pod syntax is ok for the obvious reasons. Broken
> pod leads to doc problems.
>
> Second, we're checking that the AUTHOR is also checking his/her pod
> syntax and coverage. That's an important distinction.
>
> I would go as far as to say that checking the author's development
> intentions via checks like Test::Pod::Coverage, Test::Strict,
> Test::Distribution, etc is just as important, if not more, than just
> checking syntax and that all tests pass.
>
> Given two modules with a passing basic.t, I'd go for the one with all
> of the development side tests over the other. Those tests listed above
> signal [to me] that the author [probably] pays more loving concern to
> all facets of their module than the one with just the passing basic.t
>
> -=Chris



Re: Kwalitee and has_test_*

2005-04-07 Thread Tony Bowden
On Thu, Apr 07, 2005 at 08:56:26AM -0400, Christopher H. Laco wrote:
 I would go as far as to say that checking the author's development 
 intentions via checks like Test::Pod::Coverage, Test::Strict, 
 Test::Distribution, etc is just as important, if not more, than just 
 checking syntax and that all tests pass.

CPANTS can't check that for me, as I don't ship those tests.

They're part of my development environment, not part of my release tree.

Tony


Re: Kwalitee and has_test_*

2005-04-07 Thread Christopher H. Laco
Tony Bowden wrote:
> On Thu, Apr 07, 2005 at 08:56:26AM -0400, Christopher H. Laco wrote:
>> I would go as far as to say that checking the author's development
>> intentions via checks like Test::Pod::Coverage, Test::Strict,
>> Test::Distribution, etc is just as important, if not more, than just
>> checking syntax and that all tests pass.
>
> CPANTS can't check that for me, as I don't ship those tests.
> They're part of my development environment, not part of my release tree.
> Tony

That is true. But if you don't ship them, how do I know you bothered to 
check those things in the first place?

[I don't think there is a right answer to that question by the way.]
I'm just saying that the presence of those types of tests bumps up some 
level of kwalitee, and they should be left alone within CPANTS.

-=Chris




Re: Kwalitee and has_test_*

2005-04-07 Thread Tony Bowden
On Thu, Apr 07, 2005 at 12:32:31PM -0400, Christopher H. Laco wrote:
>> CPANTS can't check that for me, as I don't ship those tests.
>> They're part of my development environment, not part of my release tree.
> That is true. But if you don't ship them, how do I know you bothered to
> check those things in the first place?

Why do you care? What's the difference to you between me shipping a .t
file that uses Pod::Coverage, or by having an internal system that uses
Devel::Cover in a mode that makes sure I have 100% coverage on everything,
including POD, or even if I hire a team of Benedictine Monks to peruse
my code and look for problems?

The only thing that should matter to you is whether the Pod coverage is
adequate, not how that happens.

Tony



Re: Kwalitee and has_test_*

2005-04-07 Thread Christopher H. Laco
Tony Bowden wrote:
> On Thu, Apr 07, 2005 at 12:32:31PM -0400, Christopher H. Laco wrote:
>>> CPANTS can't check that for me, as I don't ship those tests.
>>> They're part of my development environment, not part of my release tree.
>> That is true. But if you don't ship them, how do I know you bothered to
>> check those things in the first place?
>
> Why do you care? What's the difference to you between me shipping a .t
> file that uses Pod::Coverage, or by having an internal system that uses
> Devel::Cover in a mode that makes sure I have 100% coverage on everything,
> including POD, or even if I hire a team of Benedictine Monks to peruse
> my code and look for problems?
>
> The only thing that should matter to you is whether the Pod coverage is
> adequate, not how that happens.

I think you just answered your own question, assuming you just agreed 
that I should care about whether your pod coverage is adequate.

How as a module consumer would I find out that the Pod coverage is 
adequate again?  Why, the [unshipped] .t file in this case.

The only other way to tell is to a) write my own pod_coverage.t test for 
someone else's module at install time, or b) hand review all of the pod 
vs. code.  Or CPANTS.

-=Chris




Re: Kwalitee and has_test_*

2005-04-07 Thread chromatic
On Thu, 2005-04-07 at 13:22 -0400, Christopher H. Laco wrote:

 How as a module consumer would I find out that the Pod coverage is 
 adequate again?  Why, the [unshipped] .t file in this case.

How as a module consumer would you find out that the test coverage is
adequate?

Furthermore, what if I as a developer refuse to install POD testing
modules yet ship their tests anyway?  The kwalitee metric assumes that
*I* have run the tests, but I haven't.

For modules with platform-specific behavior, it's *more* useful to make
module users run coverage tests than POD coverage and checking tests.
Which is more likely to vary?  Yet I don't hear a lot of people arguing
that the author making the users do his work for him is a sign of
kwalitee.

Do I have to write Test::Coverage to show what a bad idea this is?

-- c



Re: Kwalitee and has_test_*

2005-04-07 Thread David Golden
Let's step back a moment.
Does anyone object that CPANTS Kwalitee looks for tests?  Why not apply 
the same arguments against has_test_* to tests themselves?  What if I, as 
a developer, choose to run tests as part of my development but don't ship 
them?  Why should I make users have to spend time waiting for my test 
suite to run?

Keeping in mind that this is a thought exercise, not a real argument, 
here are some possible reasons (and counter arguments) for including 
test files in a distribution and for Kwalitee to include the existence 
of tests:

* Shipping tests is a hint that a developer at least thought about 
testing.  Counter: It's no guarantee of the quality of testing and can 
be easily spoofed to raise apparent quality.

* Tests evaluate the success of the distribution against its design 
goals given a user's unique system and Perl configuration.  Counter: 
developers should take responsibility for ensuring portability instead 
of hoping it works until some user breaks it.

The first point extends very nicely to both has_test_* and coverage 
testing.  Including a test for pod/pod-coverage shows that the developer 
thought about it.  It doesn't mean that a developer couldn't do those 
things and just not create a *.t file for them, of course, or create a 
*.t file for them and not do those things, either.  The presence of a 
test is just a sign -- and one that doesn't require code to be run to 
determine Kwalitee.  The flip side, of course, is that by including tests 
that are necessary for CPANTS, a developer inflicts them on everyone who 
uses the code.  That isn't so terrible for pod and pod coverage testing, 
but it's a much bigger hit for Devel::Cover.

Why not find a way to include them in the META.yml file and have the 
build tools keep track of whether pod/pod-coverage/code-coverage was 
run?  Self-reported statistics are easy to fake, but so are the 
has_test_* Kwalitee checks, as many people have pointed out.  Anyone who 
is obsessed about Kwality scores is going to fake other checks, 
too.  And that way, people who have customized their environments can 
report that they are doing it.
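
Something like this, say (the field names are purely hypothetical, not
part of any META.yml spec):

    # META.yml fragment
    release_checks:
      pod_syntax:    1
      pod_coverage:  1
      code_coverage: 87.5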

As to the benefits of having Devel::Cover run on many environments and 
recording the output: rather than suggest developers put it in a *.t 
file -- which forces all users to cope with it -- why not build 
it into CPANPLUS as an option, along the lines of how test reporting is 
done?  Make it a user choice, not a mandated action.

Ironically, for all the skeptical comments about "why a scoreboard" -- 
the fact that many people care about the Kwalitee metric suggests that 
it does serve some inspirational purpose.

Regards,
David Golden



Re: Kwalitee and has_test_*

2005-04-07 Thread Pete Krawczyk
Subject: Re: Kwalitee and has_test_*
From: David Golden [EMAIL PROTECTED]
Date: Thu, 07 Apr 2005 14:34:21 -0400

}What if I, as a developer, choose to run tests as part of my development
}but don't ship them?  Why should I make users have to spend time waiting
}for my test suite to run?

Let's extend that argument a bit - I have two platforms I can test my 
module on - a single-processor Linux system and an OSX Panther system.  Both 
are only running one version of Perl (although I could do more, I just 
don't), and both are set up sane (to me).  I run my test suite on both 
before I ship my module off to CPAN.  No problems, right?

Except that my module might run on a Solaris box, or a Windows box, or any
number of alternate platforms, perls and environments that I cannot
envision right now.  The only reliable method a sysadmin has to find out
if the program is doing what the author intended is through some form of test 
suite or through other people reporting their successful builds (e.g. how 
djb asks people to mail him a SYSDEPS file on successful install).

If I write something, I also want to make sure that if I receive a bug 
report, it's a real bug and not an environmental bug.  Having a test suite 
gives me a controlled environment in which I can (hopefully) reproduce a 
simple enough test to indicate what's wrong.

}The flip side, of course, is that by including test that are necessary
}for CPANTS, a developer inflicts them on everyone who uses the code.

As a sysadmin, I'd rather spend an extra 5 minutes (or even 5 hours) 
running a regression/testing suite to make sure it doesn't break something 
else than to have a surprise foisted on me at the least inopportune 
moment.  The only reason I really see D::C as not being appropriate for 
"make test" is because it's not a binary - it's more of a fuzzy "how 
much", which people will interpret differently and which may have no 
bearing on how the program operates.

}Counter:  developers should take responsibility for ensuring portability
}instead of hoping it works until some user breaks it.

It's not just portability.  Should the module I wrote and tested on 5.6.1 
work on 5.8.6?  How about 5.005_04?  CPAN doesn't have a by-perl-rev 
repository for modules, and maintaining one would be a nightmare, at best.


I agree with your stance on Kwalitee.  I think it's important to 
understand that the presence of tests in the first place puts us light 
years ahead of many other systems.  Imagine if you had a full test suite 
(or even a partial) for Windows, or the Linux kernel, etc.  Sure, those 
things aren't necessarily public right now, but if I had a hardware-level 
test suite that simulated what I was actually doing, I could find out much 
quicker if that new stick of RAM I put in my computer was going to cause 
unexpected behavior.

-Pete K
-- 
Pete Krawczyk
  perl at bsod dot net




Re: Kwalitee and has_test_*

2005-04-04 Thread David Cantrell
Yitzchak Scott-Thoennes wrote:
> Since you ask...
> An important part of kwalitee to me is that Makefile.PL / Build.PL run
> successfully with stdin redirected to /dev/null (that is, that any
> user interaction be optional).
>
> Another is that a bug-reporting address or mechanism (e.g. clp.modules
> or cpan RT) be listed in the README or pod.

... and that the author really does pay attention.  I'd like to see a 
metric based on how long the authors let bugs languish.  No doubt I'd do 
badly because I've left a couple of bugs unfixed for nearly a year :-(

A beer for anyone who can think how on earth to objectively measure that.
--
David Cantrell


Re: Kwalitee and has_test_*

2005-04-04 Thread David Cantrell
Thomas Klausner wrote:
> I cannot check POD coverage because Pod::Coverage executes the code.

No it doesn't.  That said, if you don't want to run the code you're 
testing, you are, errm, limiting yourself rather badly.

--
David Cantrell


Re: Kwalitee and has_test_*

2005-04-04 Thread Thomas Klausner
Hi!

On Mon, Apr 04, 2005 at 10:32:14AM +0100, David Cantrell wrote:
 Thomas Klausner wrote:
 
 I cannot check POD coverage because Pod::Coverage executes the code.
 
 No it doesn't.

Yes, it does.

Pod::Coverage uses Devel::Symdump to get a list of all subs.
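
That means loading the module: a package's symbol table is only populated
once its code has been compiled and run.  A sketch (the module name is
illustrative):

    # Listing a package's subs means walking its symbol table,
    # which only exists after the module has been loaded.
    no strict 'refs';
    require My::Module;    # executes the module's top-level code
    my @subs = grep { defined &{"My::Module::$_"} }
               keys %{"My::Module::"};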

 That said, if you don't want to run the code you're 
 testing, you are, errm, limiting yourself rather badly.

http://domm.zsi.at/talks/2005_brussels_cpants/s00023.html


-- 
#!/usr/bin/perl   http://domm.zsi.at
for(ref bless{},just'another'perl'hacker){s-:+-$"-g&&print$_.$/}


Re: Kwalitee and has_test_*

2005-04-03 Thread Yitzchak Scott-Thoennes
On Fri, Apr 01, 2005 at 09:00:17PM +0200, Thomas Klausner wrote:
 Well, kwalitee != quality. Currently, kwalitee basically only says how
 well-formed a distribution is. For my definition of well-formed :-) But I'm
 always open to suggestion etc.

Since you ask...

An important part of kwalitee to me is that Makefile.PL / Build.PL run
successfully with stdin redirected to /dev/null  (that is, that any
user interaction be optional).
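
(Concretely: something like "perl Makefile.PL </dev/null" and
"perl Build.PL </dev/null" should run to completion without hanging for
input.)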

Another is that a bug-reporting address or mechanism (e.g. clp.modules
or cpan RT) be listed in the README or pod.


Re: Kwalitee and has_test_*

2005-04-03 Thread Michael G Schwern
On Sun, Apr 03, 2005 at 03:09:17PM -0700, Yitzchak Scott-Thoennes wrote:
 Since you ask...
 
 An important part of kwalitee to me is that Makefile.PL / Build.PL run
 successfully with stdin redirected to /dev/null  (that is, that any
 user interaction be optional).
 
 Another is that a bug-reporting address or mechanism (e.g. clp.modules
 or cpan RT) be listed in the README or pod.

These might be valid but how do you mechanize their recognition?  The
former requires running a program... a no-no for a CPAN-wide scan.

The latter requires some sort of intelligence to know what is and what
is not a bug report mechanism.  And it's too simplistic to just look for 
rt.cpan.org.



Re: Kwalitee and has_test_*

2005-04-01 Thread Thomas Klausner
Hi!

On Sun, Mar 27, 2005 at 11:40:45AM +0100, Tony Bowden wrote:

 There are now two kwalitee tests for 'has_test_pod' and
 'has_test_pod_coverage'. These check that there are test scripts for
 POD correctness and POD coverage.

Actually they check if Test::Pod and Test::Pod::Coverage are used in a test
script.
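
A test script satisfying the coverage check is typically just the stock
boilerplate from Test::Pod::Coverage's documentation:

    use Test::More;
    eval "use Test::Pod::Coverage 1.00";
    plan skip_all => "Test::Pod::Coverage 1.00 required" if $@;
    all_pod_coverage_ok();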

 These seem completely and utterly wrong to me. Surely the kwalitee
 checks should be purely that the POD is correct and sufficiently covers
 the module?

Well, sort of. In fact, there is a check for pod correctness (no_pod_errors).

I cannot check POD coverage because Pod::Coverage executes the code. Which
is something I do not want to do (for various reasons, see
  http://domm.zsi.at/talks/2005_brussels_cpants/s00023.html
).

 Otherwise a distribution which has perfectly correct POD, which
 completely covers the module, is deemed to be of lesser kwalitee, and
 listed on CPANTS as having shortcomings.

Well, kwalitee != quality. Currently, kwalitee basically only says how
well-formed a distribution is. For my definition of well-formed :-) But I'm
always open to suggestions etc. Several kwalitee indicators have been removed
because people convinced me. Several were added (e.g. has_test_pod_coverage).

 We should be very wary of stipulating HOW authors have to achieve their
 quality. Saying you can only check your POD in one specific way goes too
 far IMO.
 
That's a good point.

OTOH, I know of several people who added Pod::Coverage to their test suites
(and hopefully found some undocumented methods...) because of this metric.
Thus one goal (raising the overall quality (!) of CPAN) is reached.

Anyway, I invite everybody to suggest new metrics or convince me why current
metrics are bad. And nobody is going to stop you from fetching the raw data
(either all the YAML files or the SQLite DB) and creating your own Kwalitee
(yay, heretic kwalitee!).



-- 
#!/usr/bin/perl   http://domm.zsi.at
for(ref bless{},just'another'perl'hacker){s-:+-$"-g&&print$_.$/}


Re: Kwalitee and has_test_*

2005-04-01 Thread chromatic
On Fri, 2005-04-01 at 21:00 +0200, Thomas Klausner wrote:

 On Sun, Mar 27, 2005 at 11:40:45AM +0100, Tony Bowden wrote:

  We should be very wary of stipulating HOW authors have to achieve their
  quality. Saying you can only check your POD in one specific way goes too
  far IMO.
  
 That's a good point.
 
 OTOH, I know of several people who added Pod::Coverage to their test suites
 (and hopefully found some undocumented methods...) because of this metric.
 Thus one goal (raising the overall quality (!) of CPAN) is reached.

Adding a kwalitee check for a test that runs Devel::Cover by default
might on the surface appear to meet this goal, but I hope people
recognize it as a bad idea.

Why, then, is suggesting that people ship tests for POD errors and
coverage a good idea?

-- c



Re: Kwalitee and has_test_*

2005-04-01 Thread Thomas Klausner
Hi!

On Fri, Apr 01, 2005 at 10:59:04AM -0800, chromatic wrote:

 Why, then, is suggesting that people ship tests for POD errors and
 coverage a good idea?

I'm not 100% sure if it's a good idea, but it's an idea. 

But then, if I write some tests (eg to check pod coverage), why should I not
ship them? It's a good feeling to let others know that I took some extra
effort to make sure everything works.

Oh, and I really look forward to the time when I grok PPI and can add
metrics like has_cuddled_elses and force /my/ view of how Perl should look
like onto all of you. BWHAHAHAHA! 

OK, seriously. CPANTS currently isn't much more than a joke. It might have
some nice benefits, but it is far from a real quality measurement tool. Never
will it happen that the Perl community decides on one set of kwalitee
metrics. That's why we're writing Perl, not Python. 

Currently, CPANTS tests for what is easy to test, i.e. distribution layout.
Soon it will test more interesting stuff (eg. do prereqs and used modules
match up, or are used modules missing from prereqs). But feel free to ignore
it...


-- 
#!/usr/bin/perl   http://domm.zsi.at
for(ref bless{},just'another'perl'hacker){s-:+-$"-g&&print$_.$/}


Re: Kwalitee and has_test_*

2005-04-01 Thread chromatic
On Fri, 2005-04-01 at 21:43 +0200, Thomas Klausner wrote:

 But then, if I write some tests (eg to check pod coverage), why should I not
 ship them? It's a good feeling to let others know that I took some extra
 effort to make sure everything works.

If I use Devel::Cover to check my test coverage, why should I not ship
that?  If I write use cases to decide which features to add, why should
I not ship that?  If I use Module::Starter templates for tests and
modules, why should I not ship those?

They're all tools for the developer, not the user.  Any benefit they add
to the user comes from the developer using them before the user sees the
code.

If anything it's *more* useful to ship Devel::Cover tests than POD tests
-- cross-platform testing can be difficult and there's the chance of
gaining more data.  Who does it though?

 OK, seriously. CPANTS currently isn't much more than a joke. It might have
 some nice benefits, but it is far from a real quality measurement tool. Never
 will it happen that the Perl community decides on one set of kwalitee
 metrics. That's why we're writing Perl, not Python. 

Suggestion one: take down the scoreboard.  If people agree to your
measure of kwalitee, they can see their own scores.  If you don't intend
to promote your measure as the gold standard of kwalitee, make it
difficult for people to measure the rest of us against it.

Suggestion two: figure out which kwalitee criteria are valid and worth
taking seriously and which aren't and drop the second or make them
optional.  There are good, repeatable, automatably testable metrics that
are close enough to objective standards that it's worth promoting them.

Suggestion three: figure out what the goals of the POD kwalitee criteria
are and test those.  I don't think the criterion should be "ships test
files containing strings that match heuristics for the presence and use
of two POD-testing modules".  If anything, a better criterion comes from
running those tests yourself.

There are legitimate questions of Quality that kwalitee can never
address, but there are legitimate questions of Quality that kwalitee
can.  I suggest focusing on those as far as possible.  Sure, it'll never
be the last word on what's good and what isn't, but the metrics could be
a lot more useful.  I would like to see that.

-- c



Re: Kwalitee and has_test_*

2005-04-01 Thread Tony Bowden
On Fri, Apr 01, 2005 at 09:00:17PM +0200, Thomas Klausner wrote:
 Anyway, I invite everybody to suggest new metrics 

I'd like the "is pre-req" thing to be more useful. Rather than a binary
yes/no thing (and the abuses it leads to), I'd rather have something
akin to Google's Page Rank, where the score you get is based not just on
the number of other modules who link to you, but how large their score
is in turn.

Schwern already gave a good example of this: Ima::DBI. It really only
has one other module for which it's a pre-req: Class::DBI. But that in
turn has quite a few others. So Ima::DBI should somehow get the benefit
of that.

And if all you're used by is 10 Acme:: modules written by yourself...
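
A sketch of the idea on a toy dependency graph (the dists and numbers
here are purely illustrative):

    #!/usr/bin/perl
    use strict;
    use warnings;

    # who depends on what (dist => its prereqs)
    my %prereqs = (
        'Some::App'    => ['Class::DBI'],
        'Another::App' => ['Class::DBI'],
        'Class::DBI'   => ['Ima::DBI'],
        'Ima::DBI'     => [],
        'Acme::Junk'   => [],
    );

    # PageRank-style iteration: each dist passes 85% of its own score
    # down to its prereqs, split evenly among them.
    my %rank = map { $_ => 1 } keys %prereqs;
    for ( 1 .. 50 ) {
        my %next = map { $_ => 0.15 } keys %prereqs;
        for my $dist ( keys %prereqs ) {
            my @deps = @{ $prereqs{$dist} } or next;
            $next{$_} += 0.85 * $rank{$dist} / @deps for @deps;
        }
        %rank = %next;
    }

    printf "%-12s %.2f\n", $_, $rank{$_}
        for sort { $rank{$b} <=> $rank{$a} } keys %rank;

Ima::DBI ends up scoring well despite having only one direct dependent,
because Class::DBI's own dependents feed through to it.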

Tony



Re: Kwalitee and has_test_*

2005-04-01 Thread Tony Bowden
On Fri, Apr 01, 2005 at 09:00:17PM +0200, Thomas Klausner wrote:
  We should be very wary of stipulating HOW authors have to achieve their
  quality. Saying you can only check your POD in one specific way goes too
  far IMO.
 That's a good point.
 OTOH, I know of several people who added Pod::Coverage to their test suites
 (and hopefully found some undocumented methods...) because of this metric.
 Thus one goal (raising the overall quality (!) of CPAN) is reached.

Speaking only for myself, I found that in this case it served to lower
the kwalitee of my CPAN modules.

I needed to make a new release of one of them, and so I decided to check
out what the latest kwalitee checks were so I could make sure I had
everything covered.

However, when I discovered these two checks, I decided not to bother any
more.

I already have things like this (and more besides) built into my whole
module development environment, but they don't work the way CPANTS
currently wants. I don't really want to change that environment, which
I've spent years tweaking to the way I like it, so instead I'll just
take the path of least resistance and decide to ignore CPANTS instead.

I used to think CPANTS was a great idea. But if it's going to be based
on HOW you do things, rather than whether you do things, I think it'll
ultimately fail.

Tony


Re: Kwalitee and has_test_*

2005-04-01 Thread Sébastien Aperghis-Tramoni
Tony Bowden wrote:
> On Fri, Apr 01, 2005 at 09:00:17PM +0200, Thomas Klausner wrote:
>> Anyway, I invite everybody to suggest new metrics
>
> I'd like the "is pre-req" thing to be more useful. Rather than a binary
> yes/no thing (and the abuses it leads to), I'd rather have something
> akin to Google's Page Rank, where the score you get is based not just
> on the number of other modules who link to you, but how large their
> score is in turn.
>
> Schwern already gave a good example of this: Ima::DBI. It really only
> has one other module for which it's a pre-req: Class::DBI. But that in
> turn has quite a few others. So Ima::DBI should somehow get the benefit
> of that.
You may then be interested by CPAN::Dependency[1], which could be used 
as a first step towards a Module Rank algorithm. Currently, what it 
does is give a score to each distribution based on the number of 
times it appears as a prerequisite of another distribution (taking into 
account the depth of the dependency). As a result you also have, for 
each distribution, the list of the distributions that depend upon it.

[1] http://search.cpan.org/dist/CPAN-Dependency/
(eg/report.html contains a few results)
Sébastien Aperghis-Tramoni
 -- - --- -- - -- - --- -- - --- -- - --[ http://maddingue.org ]
Close the world, txEn eht nepO


Kwalitee and has_test_*

2005-03-27 Thread Tony Bowden

I was having a look at CPANTS again this morning, and I noticed
something rather strange.

There are now two kwalitee tests for 'has_test_pod' and
'has_test_pod_coverage'. These check that there are test scripts for
POD correctness and POD coverage.

These seem completely and utterly wrong to me. Surely the kwalitee
checks should be purely that the POD is correct and sufficiently covers
the module?

Otherwise a distribution which has perfectly correct POD, which
completely covers the module, is deemed to be of lesser kwalitee, and
listed on CPANTS as having shortcomings.

We should be very wary of stipulating HOW authors have to achieve their
quality. Saying you can only check your POD in one specific way goes too
far IMO.

Tony



Re: Kwalitee and has_test_*

2005-03-27 Thread Andy Lester
> These seem completely and utterly wrong to me. Surely the kwalitee
> checks should be purely that the POD is correct and sufficiently covers
> the module?

Especially because pod.t and pod-coverage.t don't need to actually get 
distributed with the module.  Perhaps I keep pod*.t in my Subversion 
repository, and they get run when I run make test, but aren't in the 
MANIFEST and therefore don't get shipped.
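
One way to arrange that (a sketch using ExtUtils::Manifest's
MANIFEST.SKIP, one regular expression per line):

    # MANIFEST.SKIP -- keep developer-only tests out of the shipped dist
    ^t/pod\.t$
    ^t/pod-coverage\.t$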

xoa
--
Andy Lester = [EMAIL PROTECTED] = www.petdance.com = AIM:petdance


Re: Kwalitee and has_test_*

2005-03-27 Thread Michael Graham

 These seem completely and utterly wrong to me. Surely the kwalitee
 checks should be purely that the POD is correct and sufficiently covers
 the module?

One problem I can see with this is that (I think) the only way to
indicate extra private methods is to do so in pod-coverage.t.  For
instance:

pod_coverage_ok(
    "My::Module",
    { also_private => [ qr/^conf$/ ] },
    "My::Module POD coverage, but marking the 'conf' sub as private",
);

My::Module will pass its own pod-coverage.t script, but it won't pass
with someone else's pod-coverage.t.

Maybe there should be a different way of marking additional private
methods?


Michael



--
Michael Graham [EMAIL PROTECTED]