Adam Kennedy <[EMAIL PROTECTED]> wrote:
> Firstly is that it might turn an otherwise normal result into something 
> else, with no clear rule. It makes a judgement call that some level of 
> testing is good or bad, which isn't really the place of an installer to 
> call.
> 
> The reason Kwalitee has metrics like this is that it's not important in 
> the scheme of things, it's only looking for indicators which may well be 
> wrong (hence the name Kwalitee). The Kwalitee of a module does not 
> prevent it being installed. What makes 79 skips different from 80 skips? 
> You need some clear distinction between the two states, not just 
> something arbitrary (be it 50 or 80 or something else).

        I was speaking in percentages, not actual numbers, but I do see your
point: part of the problem is that while we can send a message to the screen
about why a test was skipped, there's no convention for making a *computer*
understand why. If there were a standard convention of flags or tags
(e.g., "need_config", "missing_module"), this would be a lot easier.
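Such tags could ride along in the TAP skip directive itself. Here is a minimal sketch (in Python, purely illustrative) of how a report collector might extract them; the "tag: detail" prefix convention is hypothetical, not anything TAP or CPAN Testers actually defines:

```python
import re

# Hypothetical convention: the first word of a TAP skip reason is a
# machine-readable tag such as "need_config" or "missing_module",
# followed by a colon and a human-readable detail.
SKIP_RE = re.compile(r'#\s*skip\s+(?P<tag>\w+):\s*(?P<detail>.*)', re.IGNORECASE)

def skip_tag(tap_line):
    """Return (tag, detail) for a skipped TAP test line, or None."""
    m = SKIP_RE.search(tap_line)
    return (m.group('tag'), m.group('detail')) if m else None

print(skip_tag("ok 3 # SKIP need_config: DBI_DSN not set"))
# -> ('need_config', 'DBI_DSN not set')
```

With something like this in place, a smoker could aggregate skips by tag instead of guessing from free-form text.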

> Now 100% skips, THAT could potentially be interesting, or maybe TODOs. But
> then I don't necessarily know why it would be worthy of a different result
> code.

        Is there metadata stored apart from these result codes? If so it
might be useful to just store the statistics on skips. Assuming this
database was searchable, it'd be easy to identify modules that could be
tested better by, say, sorting by % of tests skipped and then looking at the
top culprits' log output manually.
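To make that concrete, here is a rough sketch of the sorting step. The record fields and distribution names are made up for illustration, not a real testers.cpan.org schema:

```python
# Hypothetical per-report skip statistics; field names are invented.
reports = [
    {"dist": "Foo-Bar-1.00",    "tests": 50, "skipped": 40},
    {"dist": "Baz-Qux-0.10",    "tests": 20, "skipped": 1},
    {"dist": "DBIx-Thing-2.01", "tests": 10, "skipped": 10},
]

def skip_pct(r):
    """Percentage of tests skipped in one report."""
    return 100.0 * r["skipped"] / r["tests"] if r["tests"] else 0.0

# Worst offenders first; their verbose logs would then be read by hand.
for r in sorted(reports, key=skip_pct, reverse=True):
    print(f'{r["dist"]}: {skip_pct(r):.0f}% skipped')
```

The interesting point is that this query needs only counts, not result codes, which is why storing the statistics alongside the reports would be enough.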

        That actually brings up another point: Currently testers.cpan.org
only provides verbose log output for failures. In cases like this (or when
you're driving yourself mad trying to understand why platform X always
passes while a seemingly similar platform Y always fails),
it'd be handy for the module author to have access to full logs of the test
output for both passes *and* fails.

> The second issue is that it has the potential to override the package
> author. If the author felt that the tests weren't critical, and could be
> skipped quite happily, who are we to make a call that he is wrong and that
> it's a problem, or anything other than 100% ok.

        I'd still like such a thing to be visible in some way. Of course
you're going to happily skip tests that require a database if you don't have
DBI_DSN set. (I am toying with the idea of marshalling a whitebox for the
purpose of doing CPAN testers specifically for database and mod_perl-based
packages... that'll probably be a project in itself, hacking YACsmoke to
only pick up those packages, add "-mysql" or whatever to the uname that it
posts with, etc. :)
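For reference, the guard in question is just an environment check. Sketched generically in Python here (a Perl test would do the same with Test::More's skip_all; the "need_config" tag is the hypothetical convention mentioned earlier):

```python
import os

def database_tests_enabled():
    """True when a DSN is available for the database tests."""
    return bool(os.environ.get("DBI_DSN"))

if not database_tests_enabled():
    # Emit a TAP plan that skips the whole file, with a tagged reason.
    print("1..0 # SKIP need_config: DBI_DSN not set")
```

A dedicated smoke box would simply export DBI_DSN (and a live server) so these tests stop being skipped silently.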

        Cheers,
                Tyler
