> From: Adam Kennedy [mailto:[EMAIL PROTECTED]
> 
> Barbie wrote:
> > On Sun, Feb 19, 2006 at 10:22:20PM +1100, Adam Kennedy wrote:
> >> 2.  Incompatible packaging.
> >>     Packaging unwraps, but missing files for the testing scheme.
> >
> > You may want to split this into a result that contains no test suite
> > at all (UNKNOWN) and one that has missing files according to the
> > MANIFEST.
> >
> > Currently the second only gets flagged to the tester, and is usually
> > only sent to the author if the tester makes a point of it or the
> > CPAN testing FAILs.
> 
> If the package type supports a MANIFEST, and there are files missing
> from it, then the package is broken or corrupt, see 1.

Agreed. I'd misread the two points.

> As for no test suite at all, I'm not sure that's cause for an
> UNKNOWN, probably something else.

If you see an UNKNOWN in CPAN test reports, it means that 'make test'
returned "No tests defined", i.e. there was no test suite at all. It
should be caught, and may be covered by point 2.
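
As a rough sketch, a harness could classify that case as its own result
rather than a generic UNKNOWN; the result name and pattern here are
illustrative, not part of any existing tool:

    use strict;
    use warnings;
    use Config;

    # Run the test target and give "no test suite" its own result
    # class instead of lumping it in with UNKNOWN
    my $output = `$Config{make} test 2>&1`;
    my $result = $output =~ /No tests defined/ ? 'NOTESTS' : 'UNKNOWN';
    print "result: $result\n";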

> >> 5.  Installer missing dependency.
> >>     For the installer itself.
> >>
> >> 6.  Installer waits forever.
> >>     Intended to cover the "non-skippable interactive question"
> >
> > This needs to cover misuse of fork() and alarm. Too many
> > distributions assume these work on Windows. They don't. At least not 
> > on any Windows platform I've ever used from Win95 to WinXP. They 
> > usually occur either in the Makefile.PL or the test suite, rarely in 
> > the actual module code.
> 
> I have absolutely no idea how we would go about testing that though.
> Certainly in the installer, which is expected to run on ALL platforms,
> having a fork/alarm would possibly be something of a no-no. But how to
> test it? Almost impossible. It might be more of a Kwalitee element
> perhaps?

It is a difficult one, but should be picked up as an NA (Not Applicable
for this platform) report. A PITA framework would be better placed to
spot this, as the installer and test harness are not looking to parse
the distribution. Perhaps a Portability plugin could spot common issues,
such as those listed in perlport, although a distinction should be made
between an NA in the actual module code and an NA in the make/build/test
suite.
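
Something as crude as the following scan could form the basis of such a
plugin; it is only a sketch, and the file selection and diagnostics are
my own invention:

    use strict;
    use warnings;
    use File::Find;

    # Naively flag fork/alarm calls in a distribution's build and test
    # files -- the sort of check a Portability plugin might run
    my @suspect;
    find( sub {
        return unless /\.(?:pm|pl|t)$/ or $_ eq 'Makefile.PL';
        open my $fh, '<', $_ or return;
        while ( my $line = <$fh> ) {
            push @suspect, "$File::Find::name line $."
                if $line =~ /\b(?:fork|alarm)\b/;
        }
    }, '.' );
    print "Possible portability issue: $_\n" for @suspect;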

> And I believe fork does work on Windows, it's just that it's simulated
> with a thread or something yes?

I have used ActivePerl 5.6.1 and a couple of versions of ActivePerl
5.8, and have yet to see fork ever work on any Windows box I've used
them on.
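
For what it's worth, the defensive pattern I would expect a test script
to use is to skip fork-dependent tests on Win32 entirely; a minimal
sketch with Test::More:

    use strict;
    use warnings;
    use Test::More;

    # fork() on Win32 is emulated with interpreter threads and rarely
    # behaves as expected, so skip rather than hang or crash
    plan skip_all => 'fork() unreliable on Win32' if $^O eq 'MSWin32';
    plan tests => 1;

    my $pid = fork();
    exit 0 if defined $pid and $pid == 0;    # child exits immediately
    ok( defined $pid, 'fork returned a defined value' );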

> >> 12. System is incompatible with the package.
> >>     Linux::, Win32::, Mac:: modules. Irreconcilable differences.
> >
> > Not sure how you would cover this, but point 12 seems to possibly
> > fit. POSIX.pm is created for the platform it's installed on. A 
> > recent package I was testing, File::Flock (which is why I can't 
> > install PITA) attempted to use the macro EWOULDBLOCK. Windows 
> > doesn't support this, and there doesn't seem to be a suitable way to 
> > detect this properly.
> 
> Presumably because it failed tests and thus failed to install? :)

Correct, but that wasn't what I meant. It was the portability issue I
was getting at. There are several modules which use features they expect
to be there, and for one reason or another they aren't. I picked
EWOULDBLOCK as this is a common one, but there are others. Again a
portability plugin to PITA might help to detect potential issues for
authors.
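
As a sketch, the defensive version of that particular check might look
something like this; whether every author should have to write it is
another question:

    use strict;
    use warnings;
    use POSIX qw(:errno_h);

    # On platforms where the vendor has not defined EWOULDBLOCK, the
    # POSIX macro croaks when called, so probe it inside an eval first
    my $ewouldblock = eval { EWOULDBLOCK() };
    if ( defined $ewouldblock ) {
        # safe to compare $! against $ewouldblock here
    }
    else {
        # skip or fall back; non-blocking checks unsupported here
    }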

> To use an analogy, it's not up to the testing infrastructure to stop
> you putting the noose around your neck, only to tell everybody that 
> you hanged yourself.

Agreed, but if there are common pitfalls that can be detected, ones
that some authors are not aware of, flagging them can only help authors
avoid them in the future.

> It shouldn't have to do problem analysis, only identify that there is
> a problem, identify the point at which it failed and in what class of
> general problem, and record everything that happened so you can fix
> the problem yourself.

This was the intention of the CPAN test reports, but to avoid the
initial back and forth, the reports highlight code that can be used to
fix the simple problems typically seen from new authors, e.g. a missing
test suite or missing prerequisites.

If the report can identify a problem without in-depth analysis, and
there is a likely solution, it would be useful to highlight it.
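
For example, the canned advice for a missing-prerequisite report
usually amounts to declaring the dependency in the Makefile.PL; the
module name and versions below are placeholders:

    use ExtUtils::MakeMaker;

    # Declaring prerequisites lets CPAN/CPANPLUS resolve them before
    # the tests run, avoiding a spurious FAIL
    WriteMakefile(
        NAME      => 'Foo::Bar',                  # placeholder
        VERSION   => '0.01',
        PREREQ_PM => { 'Test::More' => '0.47' },  # minimum version
    );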

> BTW, the top level PITA package is not intended to be compatible with
> Windows, although I do go out of my way to generally try to be
> platform neutral wherever possible. The key ones that require
> cross-platform support are PITA::Image and PITA::Scheme, which will
> sit inside the images and so need to work on everything.

Will try another install later.

> >> 14. Tests run, and some/all tests fail.
> >>     The normal FAIL case due to test failures.
> >
> > Unless you can guarantee the @INC paths are correct when testing,
> > this should be split into two. The first is simply the standard FAIL 
> > result.
> >
> > The second is the result of the fact that although the distribution
> > states the minimum version of a dependency, the installer has either
> > failed to find it or found the wrong version. This is a problem
> > currently with CPANPLUS, and is unfortunately difficult to track 
> > down. It's part of the "bogus dependency" checks.
> 
> The second half is a clarification. I mean it to cover all the various
> things that currently generate FAIL... when tests run and some or all 
> fail.
> 
> As for the CPANPLUS, etc problems, I imagine we can fix those over 
> time.

Careful not to open a can of worms there. The bogus dependency issue has
been a problem for nearly 2 years, and has caused a significant amount
of tension in the past. My patch is now in CPANPLUS, which detects the
symptoms, but the root cause is still outstanding.
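
For reference, the symptom boils down to something like the following
check; the module name and version are placeholders, and this is a
sketch rather than the actual CPANPLUS code:

    use strict;
    use warnings;

    # Confirm both which file was actually found on @INC and whether
    # its version satisfies the declared minimum
    my ( $module, $need ) = ( 'Foo::Bar', '1.23' );
    ( my $file = "$module.pm" ) =~ s{::}{/}g;
    require $file;
    unless ( eval { $module->VERSION($need); 1 } ) {
        warn "Found $INC{$file}, but version ", $module->VERSION,
             " does not satisfy $need\n";
    }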

> Hopefully the results we can provide will have enough granularity so
> they don't just all get dumped into "FAIL".

That would be good.

> >> Installation and Completion
> >> ---------------------------
> >>
> >> 16. All Tests Pass, did not attempt install.
> >>
> >> 17. All Tests Pass, but installation failed.
> >>
> >> 18. All Tests Pass, installation successful.
> >
> > And another one, test suite reports success, but failures occurred. 
> > This is usually the result of the use of Test.pm. I've been told 
> > that due to legacy systems, test scripts using Test.pm must always 
> > pass, even if there are failures. There are still a few 
> > distributions that are submitted to CPAN like this.
> 
> Um, what?
> 
> Can you explain this in a bit more detail?
> 
> Is this documented somewhere?

Not as far as I know, but several authors more experienced than myself
have said that that was the default. Any distribution that uses Test.pm
will always produce a PASS report from CPAN Testers. As a result, when I
was submitting CPAN testing reports, I manually reviewed every single
one to ensure that it was correct, and some I fixed by hand to
highlight the errors. One that I've found is for LJ-Simple [1].

[1] http://www.nntp.perl.org/group/perl.cpan.testers/115560

If PITA is going to base its result on the content of the test output,
then that case should be covered.
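
To illustrate, as I understand it the crux is the script's exit status:
a Test.pm script that fails a test still exits 0, so anything keyed to
exit status alone sees a pass:

    use strict;
    use warnings;
    use Test;
    BEGIN { plan tests => 1 }

    # Prints "not ok 1", yet unlike Test::More the script still exits
    # with status 0, so a checker keyed to exit status sees success
    ok( 1 + 1, 3 );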

Barbie.

PS: Apologies for the delayed reply, my home router hasn't been letting
me ssh in for the last couple of days :(
