On Fri, Feb 10, 2012 at 04:17:24PM -0800, Michael G Schwern wrote:
> Here is the tl;dr version:
> 
> I agree there are two use cases here.

I disagree that there are more than that :) And that's why it's hard
to determine which distribution is at fault. However, I do agree that
we don't need to create a parallel testing stream.

The reason it is currently hard is the way we start testing. Everything
is based on a single distribution. If that distribution has tests which
fail, the automated systems don't try to determine what the problem is;
they simply attribute the fault to the distribution the test run
started with.

> I'm pretty sure Metabase has enough resolution right now to determine when a
> test has an alpha dependency, maybe not all the way down the dependency chain
> but at least direct dependencies.

In this instance a secondary distribution is at fault, but the automated
system has no way of determining that. It needs a human to see that the
failure relates to another distribution. Although we may be able to
write an automated parser to spot some cases, more often than not it
won't see them unless we write a specific rule for each case.

We cannot automatically assume that a development release is at fault
either. The calling distribution may have relied on a deprecated
feature, or another distribution with an official release may have
tripped the fault. And if there are two development distributions,
which one would we determine was at fault?

> A simple first-order fix would be to add something to the report emails
> indicating if a report contains an alpha dependency.

It already does; see under "Perl module toolchain versions installed:"
in a report [1]

[1]
http://www.cpantesters.org/cpan/report/a29a3a96-541a-11e1-8417-42773f40e7b5
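
Spotting those entries mechanically is straightforward, as CPAN
development releases carry an underscore in the version string. A
minimal sketch of such a check (illustrative only, not part of any
current CPAN Testers tooling; it assumes one "Module  Version" pair
per line, as in the report above):

  #!/usr/bin/perl
  # Minimal sketch: flag development (alpha) releases listed in the
  # "Perl module toolchain versions installed:" block of a report.
  use strict;
  use warnings;

  while ( my $line = <STDIN> ) {
      # CPAN convention: a development release carries an underscore
      # in its version string, e.g. 1.005000_002.
      if ( $line =~ /^\s*([\w:]+)\s+(\S*_\S*)\s*$/ ) {
          print "development dependency: $1 $2\n";
      }
  }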

> Something people can
> easily filter on, either with eyeballs or robots.

But it still requires a human. Robots will get it wrong often enough
to be a nuisance.

> A second-order approximation would be to add a switch to choose whether to
> include reports with alpha dependencies,

This is more feasible, but it would then hide those reports, and
someone would still need to be alerted to them.

> It's the reporting that needs to change.

I'm not convinced that it does. The analysis may be able to help, but
the reports themselves are a record of what happened.

> I would say the integrity of the toolchain is more important than authors
> getting some extra emails.  I'd rather they got some "false" negatives before
> a broken stable toolchain release than real failures after a stable toolchain
> release.

At the moment, that's what is happening, and we're relying on authors to
alert the author of the secondary distribution. Unfortunately that
doesn't always happen; otherwise Andreas wouldn't have sent the original
post :)

> A simple first-order fix would be to add something to the report emails
> indicating if a report contains an alpha dependency.  Something people can
> easily filter on, either with eyeballs or robots.

Andreas' analysis is currently best placed to spot these sorts of
failing reports, as it does some very in-depth analysis of reports to
spot common factors. That is how he spotted that TM 1.005000_002 was a
factor in his post.

It may be possible for Andreas to raise an alert to an author if their
module appears to be a common factor in failing reports (regardless of
whether it is a development release or not). However, I don't know how
easy it would be to automate this, or how to avoid spamming authors.
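
For illustration only (this is not how Andreas' analysis works, and the
report layout is an assumption), such an alert could start from
something as simple as tallying module/version pairs across a batch of
FAIL reports and listing the most widely shared ones first:

  #!/usr/bin/perl
  # Rough sketch: count how often each module/version pair appears in a
  # set of saved FAIL reports, so a widely shared dependency (development
  # release or not) stands out as a candidate common factor.
  use strict;
  use warnings;

  my %count;
  for my $file (@ARGV) {              # paths to saved FAIL reports
      open my $fh, '<', $file or next;
      my $in_block = 0;
      while ( my $line = <$fh> ) {
          # Only look inside the toolchain versions block; a stricter
          # parser would also detect where the block ends.
          $in_block = 1 if $line =~ /^Perl module toolchain versions installed:/;
          next unless $in_block;
          $count{"$1 $2"}++ if $line =~ /^\s*([\w:]+)\s+(\d[\w.]*)\s*$/;
      }
      close $fh;
  }

  # Most widely shared module/version pairs first.
  for my $dep ( sort { $count{$b} <=> $count{$a} } keys %count ) {
      printf "%5d  %s\n", $count{$dep}, $dep;
  }

Deciding whether the top of that list is actually at fault, and when it
warrants mailing an author, is the part that is hard to automate
without generating noise.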

As CPAN Testers' current reporting process stands, I don't think we
need to, or should, change it. Yes, it might be an inconvenience for
authors, but at least we're reporting that there are problems.

In the longer term, I can look at reassigning reports to the real
failing distribution, which, while confusing to a degree, would at
least mean the reports are not "polluting" distributions which are not
at fault.

This is probably something we can discuss at the QA Hackathon.

Cheers,
Barbie.
-- 
Birmingham Perl Mongers <http://birmingham.pm.org>
Memoirs Of A Roadie <http://barbie.missbarbell.co.uk>
CPAN Testers Blog <http://blog.cpantesters.org>
YAPC Conference Surveys <http://yapc-surveys.org>
Ark Appreciation Pages <http://ark.eology.org>
