Regarding the below, I may have been a little unclear about the layout of the points.

The first line is the name of the error.
Following lines are meant to provide details to help clarify what it means.

Barbie wrote:
On Sun, Feb 19, 2006 at 10:22:20PM +1100, Adam Kennedy wrote:
2.  Incompatible packaging.
    Packaging unwraps, but missing files for the testing scheme.

You may want to split this into a result that contains no test suite
at all (UNKNOWN) and one that has missing files according to the MANIFEST.

Currently the second only gets flagged to the tester, and is usually
only sent to the author if the tester makes a point of it, or if the
CPAN testing FAILs.

If the package type supports a MANIFEST and there are files missing from it, then the package is broken or corrupt; see point 1.
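To make that distinction concrete, here is a minimal sketch of how a tester might detect the missing-files case using the core ExtUtils::Manifest module (the distribution directory name is hypothetical):

```perl
# Hedged sketch: flagging a package that unpacked but is missing files
# listed in its MANIFEST, using the core ExtUtils::Manifest module.
use strict;
use warnings;
use ExtUtils::Manifest qw(manicheck);

my $unpacked_dir = 'Some-Dist-1.00';   # hypothetical unpack directory
chdir $unpacked_dir or die "chdir $unpacked_dir: $!";

# manicheck() returns the files named in MANIFEST but absent on disk
my @missing = manicheck();
if (@missing) {
    # Broken or corrupt package (point 1), not merely a FAIL
    print "Missing from distribution: @missing\n";
}
```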

As for no test suite at all, I'm not sure that's cause for an UNKNOWN; probably something else.

5.  Installer missing dependency.
    For the installer itself.

6.  Installer waits forever.
    Intended to cover the "non-skippable interactive question"

This needs to cover misuse of fork() and alarm. Too many distributions
assume these work on Windows. They don't. At least not on any Windows
platform I've ever used from Win95 to WinXP. They usually occur either
in the Makefile.PL or the test suite, rarely in the actual module code.

I have absolutely no idea how we would go about testing for that, though. Certainly in the installer, which is expected to run on ALL platforms, having a fork/alarm would possibly be something of a no-no. But how to test it? Almost impossible. It might be more of a Kwalitee element, perhaps?

And I believe fork does work on Windows; it's just that it's emulated with a thread or something, yes?

They may also fail on other OSs. Although they could potentially be
covered in point 5.
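For what it's worth, a hedged sketch of the kind of guard a Makefile.PL or test script could use to avoid relying on alarm() where it's unreliable. The platform check and the blocking step are assumptions, not anything PITA actually implements:

```perl
# Hedged sketch: only using an alarm()-based timeout on platforms where
# signals are known to work; on Win32, fork() is emulated with
# interpreter threads and alarm() has historically been unreliable.
use strict;
use warnings;

# Assumption: a crude platform check is acceptable here
my $have_alarm = $^O ne 'MSWin32';

sub run_installer_step { }   # hypothetical potentially-blocking step

if ($have_alarm) {
    eval {
        local $SIG{ALRM} = sub { die "timeout\n" };
        alarm 60;             # arbitrary 60-second limit
        run_installer_step();
        alarm 0;
    };
    die $@ if $@ && $@ ne "timeout\n";
} else {
    # No reliable alarm; run unguarded, or watchdog from a parent process
    run_installer_step();
}
```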

12. System is incompatible with the package.
    Linux::, Win32::, Mac:: modules. Irreconcilable differences.

Not sure how you would cover this, but point 12 seems to possibly fit.
POSIX.pm is created for the platform it's installed on. A recent package
I was testing, File::Flock (which is why I can't install PITA), attempted
to use the macro EWOULDBLOCK. Windows doesn't support this, and there
doesn't seem to be a suitable way to detect that properly.

Presumably because it failed tests and thus failed to install? :)

To use an analogy, it's not up to the testing infrastructure to stop you putting the noose around your neck, only to tell everybody that you hanged yourself.

It shouldn't have to do problem analysis, only identify that there is a problem, identify the point at which it failed and in what class of general problem, and record everything that happened so you can fix the problem yourself.

BTW, the top level PITA package is not intended to be compatible with Windows, although I do go out of my way to generally try to be platform neutral wherever possible. The key one that requires cross-platform support is PITA::Image and PITA::Scheme which will sit inside the images and so need to work on everything.

This is just one example, I've come across others during testing.

Also note that the name alone does not signify it will not work on other
platforms. There are some Linux:: distros that work on Windows.

That's fine; I don't mean to test by name and reject on that basis. I expect the Makefile.PL to somehow inform me that the platform is NA.
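For the record, the convention I have in mind is the Makefile.PL refusing to configure up front; something like this sketch (the module name and the platform test are hypothetical):

```perl
# Hedged sketch: a Makefile.PL declaring the platform NA up front.
# Dying with "OS unsupported" is the convention CPAN testers generally
# recognise and grade as NA rather than FAIL.
use strict;
use ExtUtils::MakeMaker;

if ($^O eq 'MSWin32') {
    die "OS unsupported\n";   # signals NA to the tester
}

WriteMakefile(
    NAME         => 'Linux::Example',   # hypothetical Linux-only module
    VERSION_FROM => 'lib/Linux/Example.pm',
);
```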

14. Tests run, and some/all tests fail.
    The normal FAIL case due to test failures.

Unless you can guarantee the @INC paths are correct when testing, this
should be split into two. The first is simply the standard FAIL result.

The second is the result of the fact that although the distribution
states the minimum version of a dependency, the installer has either
failed to find it or found the wrong version. This is a problem
currently with CPANPLUS, and is unfortunately difficult to track down.
It's part of the "bogus dependency" checks.

The second half is a clarification. I mean it to cover all the various things that currently generate FAIL... when tests run and some or all fail.

As for the CPANPLUS, etc problems, I imagine we can fix those over time.

Hopefully the results we can provide will have enough granularity so they don't just all get dumped into "FAIL".

Installation and Completion
---------------------------

16. All Tests Pass, did not attempt install.

17. All Tests Pass, but installation failed.

18. All Tests Pass, installation successful.

And another one, test suite reports success, but failures occurred. This
is usually the result of the use of Test.pm. I've been told that due to
legacy systems, test scripts using Test.pm must always pass, even if
there are failures. There are still a few distributions that are
submitted to CPAN like this.

Um, what?

Can you explain this in a bit more detail?

Is this documented somewhere?

Adam K
