As for not having a test suite at all, I'm not sure that's cause for an
UNKNOWN; probably something else.


If you see an UNKNOWN in CPAN test reports, it means that 'make test'
returned "No tests defined". This is the result of no test suite at all.
It should be caught, and may be covered by point 2.

Hmm... in that case I disagree with CPAN Testers' version of UNKNOWN.

I consider UNKNOWN to be when something happens, but the analyzer itself is unable to classify the result, because it has no idea what just happened. Or situations like that. If a distribution doesn't define any tests, well... hmm... I'll have to think about it more.

5.  Installer missing dependency.
   For the installer itself.

6.  Installer waits forever.
   Intended to cover the "non-skippable interactive question"

This needs to cover misuse of fork() and alarm(). Too many
distributions assume these work on Windows. They don't, at least not on any Windows platform I've ever used, from Win95 to WinXP. The misuses usually occur either in the Makefile.PL or the test suite, rarely in the actual module code.
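One defensive pattern is to guard alarm() behind a capability check rather than assuming it exists. Here is a sketch; with_timeout is a made-up helper name, and the fallback behavior (run untimed) is one possible policy, not the only one:

```perl
use strict;
use warnings;
use Config;

# Hypothetical helper: run $code with a timeout where alarm() is
# reliable, and fall back to running it untimed where it isn't
# (e.g. on Win32, where alarm and fork are emulated at best).
sub with_timeout {
    my ($seconds, $code) = @_;
    if ( $^O eq 'MSWin32' or not $Config{d_alarm} ) {
        return $code->();    # no reliable alarm(): run without a timeout
    }
    my $result;
    eval {
        local $SIG{ALRM} = sub { die "timeout\n" };
        alarm $seconds;
        $result = $code->();
        alarm 0;
    };
    die $@ if $@ and $@ ne "timeout\n";
    return $result;
}

print with_timeout( 10, sub { 6 * 7 } ), "\n";    # prints 42
```

The point is that the platform decision happens in exactly one place, instead of being scattered through the Makefile.PL and test scripts.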

I have absolutely no idea how we would go about testing that though.
Certainly in the installer, which is expected to run on ALL platforms,
having a fork/alarm would possibly be something of a no-no. But how to
test it? Almost impossible. It might be more of a Kwalitee element
perhaps?


It is a difficult one, but should be picked up as an NA (Not Applicable
for this platform) report. A PITA framework would be better placed to
spot this, as the installer and test harness are not looking to parse
the distribution. Perhaps a Portability plugin could spot common issues,
such as those listed in perlport. Although there should be a distinction
made between NA in the actual module code and NA in the make/build/test
suite.

As far as I'm concerned, NA in some form requires the specific intent of the author. It should be the case that the Makefile.PL intentionally says "nope, can't run on this". Just having alarm or fork exist is tricky for this, because what if it's guarded, as in if ( $^O ne 'MSWin32' ) { alarm ... }?
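For what it's worth, the usual way an author expresses that intent is something like the following Makefile.PL sketch; by the convention the CPAN Testers tooling recognizes, dying with "OS unsupported" gets the result graded NA rather than FAIL. The module name here is a placeholder:

```perl
# Makefile.PL
use strict;
use warnings;
use ExtUtils::MakeMaker;

# Explicit author intent: refuse the platform up front. By convention,
# a Makefile.PL that dies with "OS unsupported" is reported as NA,
# not FAIL, by CPAN Testers tooling.
die "OS unsupported\n" if $^O eq 'MSWin32';

WriteMakefile(
    NAME    => 'Acme::Placeholder',    # hypothetical module name
    VERSION => '0.01',
);
```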

Presumably because it failed tests and thus failed to install? :)


Correct, but that wasn't what I meant. It was the portability issue I
was getting at. There are several modules which use features they expect
to be there, and for one reason or another they aren't. I picked
EWOULDBLOCK as this is a common one, but there are others. Again a
portability plugin to PITA might help to detect potential issues for
authors.

I think having some sort of portability tester could be interesting. Actually, it would be nice to see someone start to put together some PPI code to search for such cases. :)
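As a starting point, a PPI-based scan for bare fork/alarm calls might look something like this sketch. find_portability_risks is a made-up name, and the filtering is deliberately naive: it flags guarded uses too, which is exactly the hard part discussed above.

```perl
use strict;
use warnings;
use PPI;

# Hypothetical scanner: flag bare fork/alarm calls in Perl source.
sub find_portability_risks {
    my ($source) = @_;
    my $doc = PPI::Document->new( \$source ) or return;
    my $hits = $doc->find( sub {
        my ( undef, $elem ) = @_;
        return '' unless $elem->isa('PPI::Token::Word');
        return '' unless $elem->content =~ /^(?:fork|alarm)$/;
        # Ignore method calls like $obj->fork
        my $prev = $elem->sprevious_sibling;
        return '' if $prev
                 and $prev->isa('PPI::Token::Operator')
                 and $prev->content eq '->';
        return 1;
    } );
    return unless $hits;    # '' means no matches, undef means error
    return map { $_->content . ' at line ' . $_->location->[0] } @$hits;
}

my @risks = find_portability_risks(<<'END_SRC');
my $pid = fork();
local $SIG{ALRM} = sub { die "timeout\n" };
alarm 10;
END_SRC

print "$_\n" for @risks;    # "fork at line 1", "alarm at line 3"
```

A real plugin would need to understand the surrounding $^O guards before deciding a hit is actually a portability problem, rather than just reporting every occurrence.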

Any such stuff is waiting on a platform to run it in, though; for me that means I'll start dealing with these things once the refactoring Perl editor I'm working on is functional.

To use an analogy, it's not up to the testing infrastructure to stop
you putting the noose around your neck, only to tell everybody that you hanged yourself.


Agreed, but if there are common pitfalls that can be detected, that some
authors are not aware of, it can only help authors to avoid them in the
future.

Not sure if it belongs in PITA specifically, but yeah, might be useful to allow some form of prescanning.

It shouldn't have to do problem analysis, only identify that there is
a problem, identify the point at which it failed and in what class of
general problem, and record everything that happened so you can fix
the problem yourself.


This was the intention of the CPAN test reports, but to avoid the
initial back and forth, the reports highlight code that can be used to
fix the simple problems that are typically reported to new authors, e.g.
a missing test suite or missing prerequisites.

If the report can identify a problem, without in depth analysis, and
there is a likely solution, it would be useful to highlight it.

Agreed, just not on the location of the checks (yet). Let's see when we get closer to a working CPAN Testers 2.


As for the CPANPLUS, etc problems, I imagine we can fix those over time.


Careful not to open a can of worms there. The bogus dependency issue has
been a problem for nearly 2 years, and has caused a significant amount
of tension in the past. My patch is now in CPANPLUS, which detects the
symptoms, but the root cause is still outstanding.


Hopefully the results we can provide will have enough granularity so
they don't just all get dumped into "FAIL".


That would be good.

That's mostly where I expect the problem to lie. Not that things fail (it's still important to say "installation of your module fails"), but if we can assign blame with more accuracy, then we might not have such bad feelings about it.

Um, what?

Can you explain this in a bit more detail?

Is this documented somewhere?


Not as far as I know, but several authors more experienced than myself
have said that was the default. Any distribution that uses Test.pm
will always produce a PASS report from CPAN Testers. Because of that,
when I was submitting CPAN testing reports, I manually reviewed every
single one to ensure it was correct, and some got manually fixed by me
to highlight errors. One that I've found is for LJ-Simple [1].

[1] http://www.nntp.perl.org/group/perl.cpan.testers/115560

If PITA is going to base its result on the content of the test results,
then that should be covered.

Agreed. I'll take a look at it later.

Adam K
