I'm starting to get a bit closer (waiting on test images and some last testing to be done) to finishing the initial PITA test cycle (and thus being able to do an initial release), so I'm starting on some prep work for the next stage, which is to assemble some infrastructure around it.

One of the questions that I don't really have a clear answer for yet, and that doesn't really seem to have been answered properly anywhere, relates to the testing return codes.

That is, if we run a testing instance, what is the _result_ (PASS, FAIL, etc) of that test run?

Some caveats and context, to keep the conversation on track and prevent musing and side-tracking... :)

I understand that there is going to be some logic relating to recursiveness of results, but I'm not worried about that yet.

I also understand that these eventually need to map to the current CPAN Testers PASS/FAIL/NA/UNKNOWN, but I'm not worried about how to do that yet.
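(Just so it's clear what I mean by "map", I'm picturing something vaguely along these lines, with completely made-up code names. Working out the real mapping is a later problem.)

    use strict;
    use warnings;

    # Names and mapping invented purely for illustration.
    my %cpan_testers_grade = (
        PACKAGE_CORRUPT     => 'UNKNOWN',  # couldn't even unpack the thing
        SYSTEM_INCOMPATIBLE => 'NA',       # e.g. a Win32:: dist on Linux
        TESTS_FAILED        => 'FAIL',     # the classic case
        INSTALL_COMPLETED   => 'PASS',
    );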

Please also note that in some cases analysis is done _separately_ from results collection, so there isn't necessarily a hard requirement that conclusions be reached using only the resources available inside the testing run itself.

And on that same point, these codes are aimed at identifying the root cause, without necessarily assigning fixed blame or final conclusions, as those will vary depending on the context in which the results are used.

Again, my starting point for this analysis work is only the specific result value that is emitted out of a test instance (or that is derived from testing output outside the instance).
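(To be concrete about what I mean by "result value", picture each instance handing back something roughly like this, plus the captured output. All names are invented for illustration, and this is not the actual report format.)

    use strict;
    use warnings;

    # Illustration only: roughly what the analysis layer has to start
    # from for a single test instance.
    my $result = {
        scheme => 'perl5.makefile',   # which testing scheme was run
        code   => 'TESTS_FAILED',     # one of the result cases listed below
        stdout => "... raw captured output ...",
        stderr => "... raw captured errors ...",
    };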

Without these being final names or codes, I have the following result cases so far that might need to be taken into account by an arbitrary testing scheme (Makefile.PL, Build.PL, Java Ant, Configure, etc). (A rough sketch of how they might look as code follows the list.)



================================================================================



Package Container category
(covering tarballs, contents of the package, etc)
-------------------------------------------------

1.  Broken or corrupt packaging.
    A bad tarball, MANIFEST files missing.

2.  Incompatible packaging.
    Packaging unwraps, but is missing files needed by the testing scheme.



Installer category
(covering Makefile.PL/Build.PL/Configure)
-----------------------------------------

3.  Installer unknown result.
    Something happened, we just aren't sure what.

4.  Installer crash/failure.
    Installer fails to complete, and doesn't give a reason.

5.  Installer missing dependency.
    For the installer itself.

6.  Installer waits forever.
    Intended to cover the "non-skippable interactive question".



Resources and Dependencies
All of the below refer to compulsory deps, not optional ones.
All of the below treat 'unable to locate' as 'missing', even
if the dependency is actually installed.
------------------------------------------------------------

7.  Package is missing a build-time dependency within installer scope.
    Doesn't have Test::More, missing headers in the C case.

8.  Package missing a run-time dependency within installer scope.
    Missing a Perl module dependency, missing a .so in the C case.

9.  Package missing a build-process resource.
    C-compiler, other required build tools missing.
    Does not include testing tools? They are in 11? Comments?

10. Package missing an external resource.
    System .h for Perl-with-C, missing run-time bins like 'cvs',
    libraries in other languages outside the normal scope of the
    installer (so non-CPAN things, possibly even Perl things).

11. Package missing an external build-time resource.
    Defining "external" is a little fuzzy, and should there be a
    separate build-external? Does build-process cover this?
    How do we split $make from $test within "build"? Can we do this?

12. System is missing a required resource.
    Missing a bit of hardware, or missing something else structural,
    in a way that could potentially be resolved.

13. System is incompatible with the package.
    Linux::, Win32::, Mac:: modules. Irreconcilable differences.



Building and Testing
--------------------

14. Compilation and/or transformation of files fails.
    Includes XS, C, pod2man, Java compiling, etc.
    Basically covers the scope of "make".

15. Compilation and/or transformation of files hangs.
    Covers "non-skippable interactive question".
    Might also cover other things.

16. Tests exist, but fail to be executed.
    There are tests, but the tests themselves aren't failing.
    It's the build process that is failing.

17. Tests run, and some/all tests fail.
    The normal FAIL case due to test failures.

18. Tests run, but hang.
    Covers "non-skippable interactive question".
    Covers infinite attempts to connect network sockets.
    Covers other such cases, if detectable.




Installation and Completion
---------------------------

19. All Tests Pass, did not attempt install.

20. All Tests Pass, but installation failed.

21. All Tests Pass, installation successful.



================================================================================

So, that's what I have in my head so far.
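
To make the list a little more concrete, here's roughly the shape I could imagine it taking in code. Every name and number here is provisional and invented purely for illustration (this is NOT a proposed final set), but it might make the categories easier to poke holes in.

    package PITA::Result;    # module name invented for illustration only

    use strict;
    use warnings;

    use constant {
        # Package container
        PACKAGE_CORRUPT             => 1,
        PACKAGE_INCOMPATIBLE        => 2,

        # Installer
        INSTALLER_UNKNOWN           => 3,
        INSTALLER_FAILED            => 4,
        INSTALLER_MISSING_DEP       => 5,
        INSTALLER_HUNG              => 6,

        # Resources and dependencies
        MISSING_BUILD_DEP           => 7,
        MISSING_RUNTIME_DEP         => 8,
        MISSING_BUILD_RESOURCE      => 9,
        MISSING_EXTERNAL            => 10,
        MISSING_EXTERNAL_BUILD      => 11,
        SYSTEM_MISSING_RESOURCE     => 12,
        SYSTEM_INCOMPATIBLE         => 13,

        # Building and testing
        BUILD_FAILED                => 14,
        BUILD_HUNG                  => 15,
        TESTS_NOT_RUN               => 16,
        TESTS_FAILED                => 17,
        TESTS_HUNG                  => 18,

        # Installation and completion
        TESTS_PASSED_NO_INSTALL     => 19,
        TESTS_PASSED_INSTALL_FAILED => 20,
        INSTALL_COMPLETED           => 21,
    };

    1;

The numeric values just mirror the numbering in the list above; whether the real things end up as integers, strings, or objects is an open question as well.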

Comments, questions (on topic), additional possible results all extremely welcomed.

For anyone thinking "You Ain't Gunna Need It", please understand that I actually DO have a need for about 12 of these, and I've included the rest for symmetry, compatibility across test schemes, and completeness' sake.

So, with all _that_ said, fire away! :)

Adam K
