Re: Request for Comments: Package testing results analysis, result codes

2006-02-19 Thread Michael Graham
Tyler MacDonald [EMAIL PROTECTED] wrote:

 Adam,
   I have one more edge case I'd like to see on this list:

   Tests run, but 50% (or maybe 80%?) are skipped.

   From what I've seen, the most common cause of this is that the
 package is untestable with the current build configuration. E.g., you needed
 to specify a webserver or database or something to get these tests to run.

This scenario (many skipped tests) can also happen with modules that
come bundled with many drivers.  One driver might be required, but the
rest might be optional.  Tests for optional drivers that are missing
their prerequisites will be skipped.

I have several modules[1] like this, and for each of them, I wrote the
test suite so that each test script re-runs all of its tests with every
available driver.

In the case of C::A::P::AnyTemplate, five templating systems are
supported, but only HTML::Template is required[2].

So if the user has no other supported templating modules besides
HTML::Template installed (i.e. the minimum supported configuration),
then 243 out of 326 tests will be skipped, i.e. 75%.  If, at some point
in the future, I add support for another templating system, then the
number of tests skipped by default will increase.

I'm not really sure if this is the best way to manage my optional tests
(one disadvantage is that users tend to get freaked out by so many
skipped tests).  I'm just pointing out a couple of modules where a lot
of skips are the norm.
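
For reference, the skip-per-driver pattern looks roughly like this (the
driver list and the run_tests_for() helper here are illustrative, not
lifted from the actual suite):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Test::More;

# Illustrative driver list; only HTML::Template is a hard prerequisite.
my @drivers = qw(HTML::Template Template Petal);

plan tests => scalar @drivers;

for my $driver (@drivers) {
    SKIP: {
        # Skip this driver's tests if its module isn't installed.
        skip "$driver not installed", 1
            unless eval "require $driver; 1";
        ok( eval { run_tests_for($driver); 1 }, "tests for $driver" );
    }
}

sub run_tests_for { 1 }    # hypothetical stand-in for the real test battery
```

On a minimal install, everything but the HTML::Template pass gets
reported as a skip, which is where the high skip percentages come from.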



Michael


[1] http://search.cpan.org/dist/Config-Context/
http://search.cpan.org/dist/CGI-Application-Plugin-Config-Context/
http://search.cpan.org/dist/CGI-Application-Plugin-AnyTemplate/

[2] HTML::Template is required because it's already a prerequisite of
CGI::Application

--
Michael Graham [EMAIL PROTECTED]



Re: Running test suites under PersistentPerl

2005-12-07 Thread Michael Graham

 I thought of an alternative which might have a number of the benefits of
 this solution with less of the drawbacks.

 The idea is to create one big test file that is run in the normal
 way. Everything would only need to be loaded once instead of N times.
 There wouldn't be the usual persistence issues, either.

I tried a couple of different acceleration techniques earlier this
year.  I do remember playing with the one big test file approach.  I
can't remember why I didn't go further with it, because it was indeed fast.
I think the problem was that the output of all the test scripts was
flattened into one huge list, so when a test failed somewhere, it was hard
to figure out which file it was in.

I also tried a forking system suggested by Fergal Daly:  I had a main
script that preloaded as many library modules as possible and then
forked each test script as a child process.  This was also quite fast,
but I don't think I ever figured out how to get Test::Harness to parse
the output of the forked children properly.  

  - BEGIN and END blocks may need some care. For example, an END block
may be used to remove test data before the next test runs.

That's another caveat with the PersistentPerl approach - END blocks seem
only to run on the first request.


Michael



---
Michael Graham [EMAIL PROTECTED]



Running test suites under PersistentPerl

2005-12-05 Thread Michael Graham
   have problems.

 * The usual persistent environment caveats apply:  be careful with
   redefined subs, global vars; 'require'd code only gets loaded on the
   first request, etc.

 * Test scripts have to end in a true value.


If there's interest, I'll try to package all this up into a CPAN module.


Michael


---
Michael Graham [EMAIL PROTECTED]




Re: Spurious CPAN Tester errors from Sep 23rd to present.

2005-10-06 Thread Michael Graham

 Once Test-URI is upgraded, we are going to need to make sure the newest
 version is installed, so could the authors of the following modules
 please note you will need to do a new release to update this dependency.

 Amethyst  SHEVEK
 Apache-ACEProxy   MIYAGAWA
 Apache-DoCoMoProxyKOBAYASI
 Apache-GalleryLEGART
 Apache-No404Proxy MIYAGAWA

[...the long list of modules continues...]

This list came from CPANTS, right?  I think there's something screwy
with the way it's following dependencies.  It looks like a lot of these
modules require URI, not Test::URI.  In fact, I haven't yet found one
that requires Test::URI.


Michael



---
Michael Graham [EMAIL PROTECTED]




Re: New kwalitee test, has_changes

2005-09-26 Thread Michael Graham

 Collecting any sort of coverage data is a complete bitch. Let me just
 say right now that doing it across _all_ of CPAN is flat out impossible.

 It's impossible.

Is it possible to use PPI to count the number of tests in a module's
test suite?  More than 5 tests would probably mean the author added one
of his own.  Of course, the problem here is how to deal with tests that
don't use Test:: modules.  So maybe this is also impossible.

Maybe another approach is to take the module's entire test suite and 
remove from it any code that looks like it came from pod.t, pod-coverage.t,
00.load.t or any other stock test boilerplate. If there's something left
over, then the module probably has an actual test suite.
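
A very rough PPI sketch of the counting idea (this assumes PPI can parse
the script, and the function list is just a guess at the common
Test::More names):

```perl
use strict;
use warnings;
use PPI;

# Count bareword calls to common Test::More functions in one test script.
# Anything using a home-grown or non-Test:: framework will score zero,
# which is exactly the blind spot mentioned above.
my %test_funcs = map { $_ => 1 }
    qw(ok is isnt like unlike cmp_ok is_deeply pass fail);

my $doc = PPI::Document->new('t/basic.t')
    or die "could not parse t/basic.t";

# find() takes a callback of (top_element, current_element);
# it returns an arrayref of matches, or false if there are none.
my $calls = $doc->find(
    sub { $_[1]->isa('PPI::Token::Word') && $test_funcs{ $_[1]->content } }
) || [];

printf "%d likely test calls\n", scalar @$calls;
```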


Michael





---
Michael Graham [EMAIL PROTECTED]




Re: New kwalitee test, has_changes

2005-09-22 Thread Michael Graham

 As I was downloading the newest version of Devel::Cover this morning, I
 pondered on the concept of 1 Kwalitee point for coverage >= 80%, and
 another for 100%, and how absolutely impossible it would be to set out
 to establish these points for all the modules on CPAN. But it would be Good.

I think a point for >= 80% would be okay (for some definition of 80%).

But I think a more useful measure of kwalitee would be a 20%-30%
coverage test.

Right now 'has_tests' is satisfied by including the default tests that
module-starter provides (00.load.t, and the pod tests).

There are a lot of modules that have nothing beyond the default tests
and yet they get their 'has_tests' point.

Passing a low coverage bar would at least indicate that the author wrote
*some* tests.  If there's an easier way of finding out if there are
actual tests, then that would be fine too.

I think there's a big difference in kwalitee between a module that has
only the default tests and a module with a hand-crafted test suite.  One
of the first things I do when checking out a new module is to check if
there are more than three files in the t/ directory.


Michael



---
Michael Graham [EMAIL PROTECTED]




Re: kwalitee: drop Acme?

2005-09-10 Thread Michael Graham

One of the problems with dependency checking is dealing with a module
that optionally uses other modules.  For instance a module might have
drivers for a bunch of different backends, each of which is optional.
For instance, CGI::Session or Class::DBI::Loader.

Such a relationship, even though it's not technically a dependency, still
amounts to the same level of endorsement of the other module.

This kind of relationship is common in plugin-based systems, where
features will be used if they are available, but ignored if they are
not.  So plugins may be widely used in other CPAN modules but not
actually listed in anyone's prerequisites.

I think the correct way to deal with this is to use 'recommends' in
Build.PL.  I don't know if CPANTS includes the 'recommends' data, but it
probably should.  I also don't know if there's a MakeMaker equivalent.

I doubt that most modules list their optional dependencies, but that's
probably not a big deal.  If someone uses my module but doesn't list it
in recommends, and I want my prereq point, then I'll bug them to add it.
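
For anyone unfamiliar with it, the Build.PL syntax I mean is just this (a
minimal sketch with made-up module names):

```perl
use Module::Build;

Module::Build->new(
    module_name => 'My::Plugin',
    license     => 'perl',
    # Hard prerequisite: the module won't work without it.
    requires    => { 'CGI::Application' => 0 },
    # Optional backend: used if present, ignored if not.
    recommends  => { 'Some::Optional::Backend' => 0 },
)->create_build_script;
```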


Michael


---
Michael Graham [EMAIL PROTECTED]



Re: kwalitee: drop Acme?

2005-09-08 Thread Michael Graham

 We should at least throw the poor module authors a bone and leave
 Acme:: out of this.

Just as long as ACME keeps working for is_prereq, though!

A bunch of us are planning ACME::CGI::Application::Kwalitee, which will
exist solely to require all of the C::A plugins, so we can all get our
'is_prereq' point.

Don't make us release this foolishness outside of ACME::!


Michael



---
Michael Graham [EMAIL PROTECTED]




Re: kwalitee: drop Acme?

2005-09-08 Thread Michael Graham

 It can't be by the same author, though, to count for is_prereq, right?

 So someone needs to create a new CPAN ID, and release a module under that ID
 that prereqs all of CPAN.  Then we'd all get our prereq points.

 Probably could be done with a Build.PL that pulls the full module list then
 constructs a massive requires hash.  Unless CPANTS scans for dependencies,
 in which case you'd need to build the .pm file dynamically, too.  And then
 run a cron job to rebuild/re-release with cpan-upload every so often to keep
 it fresh.

This would definitely work, at the cost of massive inflation of the
'is_prereq' currency.

Maybe a peer-to-peer is_prereq network could be created where each CPAN
author enters reciprocal agreements with other like-minded authors to
list their modules as prerequisites of one of his or her own modules.

Each author could have a special empty 'dependent' module for this
purpose.  Something like ACME::Prereq::[AUTHOR_ID].


Michael



---
Michael Graham [EMAIL PROTECTED]




Re: Kwalitee and has_test_*

2005-04-16 Thread Michael Graham

 Michael Graham wrote:
  Another good reason to ship all of your development tests with code is
  that it makes it easier for users to submit patches with tests.  Or to
  fork your code and retain all your development tools and methods.

 Perl::MinimumVersion, which doesn't exist yet, could check that the
 version a module says it needs is higher than what Perl::MinimumVersion
 can work out based on syntax alone.

 And it uses PPI... all 55 classes of it... which uses Class::Autouse,
 which uses Class::Inspector, and prefork.pm, and Scalar::Util and
 List::Util, oh and List::MoreUtils and a few other bits and pieces.

 I'm not going to push that giant nest of dependencies on people just so
 they can install Chart::Math::Axis...

I'm not suggesting that end users be forced to *run* your development
tests.  Just that the tests be included in your CPAN package.  Ideally,
the install process can be made smart enough to skip this kind of test.

 So I run it in my packaging pipeline. It's a low percentage test that
 catches some annoying cases that bite me once a year.

 And I should probably not talk about the RecDescent parser for the
 bundled .sql files, or the test that ensures that any bundled .gif files
 are at something close to their best possible compression level.

If someone were to take over maintenance of your module, or they were to
fork it, or they were submitting patches to you, then they would want
these tools and tests, right?  How would they get them?


Michael


---
Michael Graham [EMAIL PROTECTED]

YAPC::NA 2005 Toronto - http://www.yapc.org/America/ - [EMAIL PROTECTED]
---



YAPC::NA - call for (testing-related) papers

2005-04-14 Thread Michael Graham

As you all know, YAPC::NA is coming up (June 27-29, 2005 in Toronto),
and the call-for-papers deadline is next week (Apr 18).

We already have a couple of submissions for talks on test-related
topics, but I think it would be good to have more.  (Okay, that's my
personal bias showing.)

I think it would be great to have an Introduction to Perl Testing talk.

There are lots of Perl programmers out there who know they should be
testing, but just need to get their feet wet.

And you can get 95% of the benefits of testing even if all you know are
Test::More and prove.  The trick, of course, is how to break your
programs down into small, easily testable chunks.
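
To be concrete about how small that on-ramp is, the whole thing fits in a
few lines (a made-up two-test script, run with `prove`):

```perl
use strict;
use warnings;
use Test::More tests => 2;

use List::Util qw(sum);

# Two trivial assertions: the entire vocabulary a beginner needs
# on day one is plan + is()/ok().
is( sum(1, 2, 3), 6, 'sum adds a list of numbers' );
is( join('-', 1 .. 3), '1-2-3', 'join builds the string I expect' );
```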

Any takers on an intro to testing talk?

I'm not trying to discourage talks on advanced testing topics (large
projects, specific technologies, XP, QA, Kwalitee, phalanx, etc.).
Those are also most welcome.

The info on the call-for-papers is here:

   http://yapc.org/America/cfp-2005.shtml

And general info on the conference is here:

   http://yapc.org/America/

If you have any questions regarding the call-for-papers or speaking at
YAPC::NA 2005 please email [EMAIL PROTECTED]

Other conference questions can be directed to [EMAIL PROTECTED]


Michael

---
Michael Graham [EMAIL PROTECTED]

YAPC::NA 2005 Toronto - http://www.yapc.org/America/ - [EMAIL PROTECTED]
---




Re: Kwalitee and has_test_*

2005-04-08 Thread Michael Graham

Another good reason to ship all of your development tests with code is
that it makes it easier for users to submit patches with tests.  Or to
fork your code and retain all your development tools and methods.

Since most Perl modules go up on CPAN and nowhere else, I think that the
CPAN distribution should contain as much of the useful development
environment as possible.  That includes POD tests, coverage tests,
benchmark tests, developer documentation, nonsense poetry, scraps of
paper, bits of string.

However, personally I think all POD tests and coverage tests should be
skipped at install time unless the user specifically asks for those
tests to be run.
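
The pattern I have in mind is the usual opt-in guard at the top of the
POD test (a sketch; the TEST_POD environment variable is just a
convention, not anything the toolchain enforces):

```perl
use strict;
use warnings;
use Test::More;

# Bail out politely unless the user explicitly asked for POD tests.
plan skip_all => 'set TEST_POD=1 to run POD tests'
    unless $ENV{TEST_POD};

# Only the developer's machine needs Test::Pod installed.
eval "use Test::Pod 1.00";
plan skip_all => 'Test::Pod 1.00 required for testing POD' if $@;

all_pod_files_ok();
```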

For one thing, they slow down the install and rarely provide useful
information to the user.

For another, they may generate errors that prevent install, even though
the module works correctly.

For instance, if my POD extraction tools are slightly different than the
developer's POD extraction tools, and this causes a POD coverage test to
fail, then the module will fail to install even though it would have
worked correctly.

Maybe I get all my documentation from search.cpan.org and I don't care
whether the module's docs are considered 'kwalitudinous' on my system.
Maybe I don't know enough about testing and module development to know
that the module still works fine in spite of the failed POD test.


Michael




---
Michael Graham [EMAIL PROTECTED]

YAPC::NA 2005 Toronto - http://www.yapc.org/America/ - [EMAIL PROTECTED]
---




Re: Kwalitee and has_test_*

2005-03-27 Thread Michael Graham

 These seem completely and utterly wrong to me. Surely the kwalitee
 checks should be purely that the POD is correct and sufficiently covers
 the module?

One problem I can see with this is that (I think) the only way to
indicate extra private methods is to do so in pod-coverage.t.  For
instance:

pod_coverage_ok(
    "My::Module",
    { also_private => [ qr/^conf$/ ] },
    "My::Module, POD coverage, but marking the 'conf' sub as private"
);

My::Module will pass its own pod-coverage.t script, but it won't pass
with someone else's pod-coverage.t.

Maybe there should be a different way of marking additional private
methods?


Michael



--
Michael Graham [EMAIL PROTECTED]


Re: Test::Builder->create

2005-03-10 Thread Michael Graham

 If script startup and module loading really is the culprit you could try the
 mod_perl approach.

 Load all required modules and then for each script, fork a new perl process
 which uses do testxxx.t to run each script.

That's a good idea - thanks!

I gave it a try and these are the times I got:

Time   Method
-----  ------
6:09   prove -r tests/
4:14   for i in tests/**/*.t ; do perl $i; done
2:57   runscripts-forking.pl tests/**/*.t

This is for a suite of 165 test scripts.

So it does look like there are efficiencies to be had, it's just a
question of whether it's worth the bother (e.g. to figure out how to
parse the output of the forked scripts).

runscripts-forking.pl basically looks like this:

#!/usr/bin/perl

use strict;
# ... use a ton of modules here ...

foreach my $script (@ARGV) {
    warn "Script: $script\n";
    unless (runscript($script)) {
        warn "FAILED: Script $script: $! $@";
        last;
    }
}

sub runscript {
    my $script = shift;

    my $pid;
    if (!defined($pid = fork)) {
        warn "Cannot fork: $!\n";
        return;
    }
    elsif ($pid) {
        # Parent: block until the child finishes running the script.
        my $ret = waitpid($pid, 0);
        return $ret;
    }
    # Child: run the test script with all the modules already loaded.
    do $script or die "Compile errors: $script: $@";
    exit;
}





Michael


--
Michael Graham [EMAIL PROTECTED]




Re: Test::Builder->create

2005-03-08 Thread Michael Graham

 Something that's been sitting in the Test::Builder repository for a while
 now is Test::Builder->create.  Finally you can create a second Test::Builder
 instance.  I haven't done much with it and I think $Level is still global
 across all instances (bug) but I figured folks would want to play with it,
 particularly the Test::Builder::Tester guys.

Would this make it possible to run many test scripts (each with its own
plan) within the same perl process?  'Cos that would be nifty.


Michael




--
Michael Graham [EMAIL PROTECTED]




Re: Test::Builder->create

2005-03-08 Thread Michael Graham

  Would this make it possible to run many test scripts (each with its own
  plan) within the same perl process?  'Cos that would be nifty.

 Yes.  Though beyond testing testing libraries I don't know why you'd want to
 do that.

Well, all I was going to do was try to shave a few seconds off the
running time of my test suite (which is now climbing up to the 10 minute
mark).  I figured I could do the mod_perl thing:  preload all my modules
and do most of my setup at startup and then require each of the test
scripts.  Dunno if it will be worth the effort but it was something
I was going to play with for a couple of hours.


Michael


--
Michael Graham [EMAIL PROTECTED]



prove -M (was Re: Differences between output of 'make test' and 'prove')

2004-11-05 Thread Michael Graham

 prove -MDevel::Cover -Ilib -v t/*

 I remember mentioning something to Andy, but at the time he didn't like
 it.

On a related note, I think an -M option to prove might be a useful
feature.

With my own test suite, I want to load and run a perl module
(TestConfig.pm) before each test script is run.

Currently, I do this via a wrapper:

$ cat runtests
#!/bin/sh

export PERL5LIB=/path/to/modules
export PERL5OPT=-MTestConfig

prove $*

This seems ugly to me, mainly because perl also loads and runs 
TestConfig.pm before running prove itself.

With an -M feature, I could use:

$ cat runtests
#!/bin/sh

prove -I/path/to/modules -MTestConfig $*


Which is (very) slightly cleaner.  


Michael


--
Michael Graham [EMAIL PROTECTED]