Now 100% skips, THAT could potentially be interesting, or maybe TODOs. But
then I don't necessarily know why it would be worthy of a different result
code.

        Is there metadata stored apart from these result codes? If so it
might be useful to just store the statistics on skips. Assuming this
database was searchable, it'd be easy to identify modules that could be
tested better by, say, sorting by % of tests skipped and then looking at the
top culprits' log output manually.

        That actually brings up another point: currently testers.cpan.org
only provides verbose log output for failures. In cases like this (or cases
where you're driving yourself mad trying to understand why platform X always
passes whereas platform Y, which doesn't seem too different, always fails),
it'd be handy for the module author to have access to full logs of the test
output for both passes *and* fails.

Yep. One of the general concepts is that we capture in as high a resolution as possible while we are inside the testing instance, and then do further analysis later, outside the testing instance.

So yes, you get the complete output of everything, in as much detail as I can pull out. And analysis will (sort of) be pluggable or extendable in some way. No details or specifics on analysis yet, as it's not on the critical development path.

There will be ways to do what you want.
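
As a rough sketch of the kind of query that suggestion implies (the
storage format isn't decided yet, so the database file, table and column
names here are all invented for illustration):

    use strict;
    use warnings;
    use DBI;

    # Hypothetical results database: one row per (dist, report), with
    # pass/fail/skip counts extracted from the raw test output.
    my $dbh = DBI->connect(
        'dbi:SQLite:dbname=reports.db', '', '',
        { RaiseError => 1 },
    );

    # Rank distributions by the share of tests skipped, worst first.
    my $sql = q{
        SELECT dist,
               100.0 * SUM(skipped) / SUM(total) AS pct_skipped
          FROM test_reports
         GROUP BY dist
         ORDER BY pct_skipped DESC
         LIMIT 20
    };
    my $rows = $dbh->selectall_arrayref($sql);

    printf "%-40s %5.1f%%\n", @$_ for @$rows;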

The second issue is that it has the potential to override the package
author. If the author felt that the tests weren't critical, and could be
skipped quite happily, who are we to make a call that he is wrong and that
it's a problem, or anything other than 100% OK?

        I'd still like such a thing to be visible in some way. Of course
you're going to happily skip tests that require a database if you don't have
DBI_DSN set.

Not necessarily... it all depends on how important it is to you. I see some potential cases where you'd rather abort the install if you can't do proper testing.
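
For illustration, a minimal sketch of both behaviours in a test file,
using the DBI_DSN convention from the quoted text (which line you leave
active is exactly the policy call in question):

    use strict;
    use warnings;
    use Test::More;

    unless ( $ENV{DBI_DSN} ) {
        # Happy-to-skip default: silently skip this whole test file.
        plan skip_all => 'Set DBI_DSN to enable database tests';

        # Stricter alternative (swap with the line above): abort the
        # entire test run, and hence the install, when we can't test.
        # BAIL_OUT('DBI_DSN is not set; refusing to install untested');
    }

    plan tests => 1;
    ok( defined $ENV{DBI_DSN}, 'database configured for testing' );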

Adam K
