Knocking off points for FAILs, however, can penalize things that are completely idiosyncratic. For example, anyone whose module depended on a test module that used Test::Builder::Tester could get dinged when Test::Builder changed and broke it.

Does this really tell us anything about actual quality?

Yes, it tells us that the modules you chose to depend on are dodgy and don't install. Having your module drop a point when a module you depend on goes crazy is desired behavior from the point of view of a clean_install metric.

Whether that is a transient problem that lasts for a week or a serious and ongoing one, I think it's still worth it.

A sign of a kwalitee module is that its installation Just Works, regardless of who is to blame for any failure.

More to the point, it should lead people to spend more time looking into WHY their module isn't installing, and help us nail down the critical modules in the CPAN toolchain that have problems.

If all of a sudden you lose the clean_install point, you can go find the problem, then either help the author fix it or stop using that module. Either way, your module used to not install, and now it does.

What if I list a prerequisite version of Perl and someone who tries it under an older version causes a "FAIL" on CPAN Testers? Does that tell us anything?

I believe that isn't classed as a FAIL. If the testing platform does not have a new enough Perl to match the listed prerequisite version, it counts as Not Applicable (NA), and the testing system shouldn't be sending FAILs.

A testing system should only send a FAIL report when it believes its platform is compatible with the needs of the module, but the tests fail when it tries to install.

There are so many special cases that I don't think the value derived from such a metric will be worth the effort put into it.

I'm interested to hear what some of the special cases might be, though. I'm trying to put together a mental list of possible problems I need to deal with.

A FAIL should ONLY mean that some package has told the smoke system that it thinks it should be able to pass on your platform, and then it doesn't.

Failures that aren't the fault of the code (platform not compatible, or whatever) should be N/A or something else.
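The grading rule described above could be sketched as a tiny shell function. This is a hypothetical illustration, not the actual CPAN Testers code; version numbers use perl's numeric $] form (e.g. 5.008 for perl 5.8.0):

```shell
# Hypothetical sketch of the grading rule, NOT the real CPAN Testers code.
# Versions are in perl's numeric $] form (e.g. 5.008 for perl 5.8.0).
grade_report() {
    required=$1   # minimum perl the dist declares
    actual=$2     # perl on the smoke box
    passed=$3     # 1 if the test suite passed, 0 if not
    if awk "BEGIN { exit !($actual >= $required) }"; then
        # Platform claims compatibility, so a test failure is a real FAIL
        if [ "$passed" -eq 1 ]; then echo PASS; else echo FAIL; fi
    else
        # Perl too old: not the module's fault, so report NA, never FAIL
        echo NA
    fi
}

grade_report 5.008 5.006001 0   # prints NA: perl too old, failure doesn't count
grade_report 5.008 5.008008 0   # prints FAIL: compatible perl, failing tests
grade_report 5.008 5.008008 1   # prints PASS
```

The point of the sketch is just that the compatibility check happens before the pass/fail check: an incompatible platform short-circuits to NA and never produces a FAIL.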

The vanilla testing is a more interesting idea in its own right. I've had that on my back burner for a while: install a fresh perl, script up something to try installing my distributions against it, then blow away the site library directory afterwards. I just haven't gotten around to it, so I look forward to seeing what you come up with.
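A minimal sketch of that loop, assuming a fresh perl has already been built into a dedicated prefix. The prefix path and module names here are illustrative, not from the post; the function prints the per-dist commands rather than running them:

```shell
# Sketch only: emits the per-dist commands instead of executing them,
# so the plan can be inspected first (pipe the output to sh to run it).
# $1 is the prefix of a freshly built perl; remaining args are modules.
vanilla_plan() {
    prefix=$1; shift
    for module in "$@"; do
        # Try a clean install against the pristine perl...
        printf '%s/bin/perl -MCPAN -e "install(q{%s})"\n' "$prefix" "$module"
        # ...then blow away the site library so the next dist starts clean.
        printf 'rm -rf %s/lib/site_perl\n' "$prefix"
    done
}

vanilla_plan "$HOME/vanilla-perl" Config::Tiny File::Which
```

Keeping it as a printed plan makes the destructive `rm -rf` step visible before anything runs; a fuller version would also snapshot and restore the perl binary's own lib directory.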

If you want to help out, we'd love to see you in irc.perl.net #pita.

That's an open invitation to other interested people too.

I'll do a bit more announce+explaining once we get to PITA 0.20 and there's something other people can actually download, install and try out.

To "Release early, release often" I tend to add "... once it works".

Adam K
