Re: META.yml feature for autotesters?

2006-02-25 Thread Barbie
On Fri, Feb 24, 2006 at 12:51:13PM -0800, Tyler MacDonald wrote:
> 
>   Sometimes it's beneficial for an automated tester to install
> additional packages (in software I'm releasing, Test::CPANpm and
> sqlite are perfect examples). It would be good to have a standard way
> to tell a smoke tester what these packages are.

I had some thoughts along these lines for a Bundle::CPAN::YACSmoke
distribution. However, it also means that any distribution that doesn't
list these modules correctly in its prerequisites will still get them by
default, and the omission may then be missed in the FAIL report the
author should get.

Your suggestion seems reasonable, but currently CPAN::YACSmoke doesn't
parse the META.yml file. In fact it doesn't parse any part of the
distribution, just the name and the test results. It leaves all 
distribution parsing to CPANPLUS. It may be possible via a callback, but
it seems more sensible to leave that in the hands of CPANPLUS.

>   If an optional_feature field for packages useful for automated
> testing was standardized, functionality could be added to 
> CPAN::YACSmoke and PITA to take advantage of it, and then there would
> be an easy way for an author to define prerequisites for doing an
> exhaustive smoke test of their distribution.

I think CPANPLUS could be patched to add these to the list of
prerequisites when in AUTOMATED_TESTING mode, so it should be possible.
I think PITA should be able to do something similar.
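
Such a feature might be declared in META.yml along these lines. This is
only a sketch of the idea using the optional_features key; the feature
name and module list are hypothetical examples, not an agreed
convention:

```yaml
# Hypothetical sketch only: an optional_features entry that a smoke
# tester running with AUTOMATED_TESTING set could choose to honour.
optional_features:
  smoke_testing:
    description: Extra modules for an exhaustive automated smoke test
    requires:
      Test::CPANpm: 0
      DBD::SQLite: 0
```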

Barbie


Re: Request for Comments: Package testing results analysis, result codes

2006-02-22 Thread Barbie
> From: Adam Kennedy [mailto:[EMAIL PROTECTED]
> 
> Barbie wrote:
> > On Sun, Feb 19, 2006 at 10:22:20PM +1100, Adam Kennedy wrote:
> >> 2.  Incompatible packaging.
> >> Packaging unwraps, but missing files for the testing scheme.
> >
> > You may want to split this into a result that contains no test suite
> > at all (UNKNOWN) and one that has missing files according to the
> > MANIFEST.
> >
> > Currently the second only gets flagged to the tester, and is usually
> > only sent to the author, if the tester makes a point of it, or the
> > CPAN testing FAILs.
> 
> If the package type supports a MANIFEST, and there are files missing
> from it, then the package is broken or corrupt, see 1.

Agreed. I'd misread the two points.

> As for no test suite at all, I'm not sure that's cause for an
> UNKNOWN, probably something else.

If you see an UNKNOWN in CPAN test reports, it means that 'make test'
returned "No tests defined". This is the result of no test suite at all.
It should be caught, and may be covered by point 2.

> >> 5.  Installer missing dependency.
> >> For the installer itself.
> >>
> >> 6.  Installer waits forever.
> >> Intended to cover the "non-skippable interactive question"
> >
> > This needs to cover misuse of fork() and alarm. Too many
> > distributions assume these work on Windows. They don't. At least not 
> > on any Windows platform I've ever used from Win95 to WinXP. They 
> > usually occur either in the Makefile.PL or the test suite, rarely in 
> > the actual module code.
> 
> I have absolutely no idea how we would go about testing that though.
> Certainly in the installer, which is expected to run on ALL platforms,
> having a fork/alarm would be possibly something of a no no. But how to
> test it? Almost impossible. It might be more of a Kwalitee element
> perhaps?

It is a difficult one, but should be picked up as an NA (Not Applicable
for this platform) report. A PITA framework would be better placed to
spot this, as the installer and test harness are not looking to parse
the distribution. Perhaps a Portability plugin could spot common issues,
such as those listed in perlport. Although there should be a distinction
made between NA in the actual module code and NA in the make/build/test
suite.

> And I believe fork does work on Windows, it's just that it's simulated
> with a thread or something yes?

I have used ActivePerl 5.6.1 and a couple of versions of ActivePerl 5.8,
and have yet to see fork ever work on any Windows box I've used them on.

> >> 12. System is incompatible with the package.
> >> Linux::, Win32::, Mac:: modules. Irreconcilable differences.
> >
> > Not sure how you would cover this, but point 12 seems to possibly
> > fit. POSIX.pm is created for the platform it's installed on. A 
> > recent package I was testing, File::Flock (which is why I can't 
> > install PITA) attempted to use the macro EWOULDBLOCK. Windows 
> > doesn't support this, and there doesn't seem to be a suitable way to 
> > detect this properly.
> 
> Presumably because it failed tests and thus failed to install? :)

Correct, but that wasn't what I meant. It was the portability issue I
was getting at. There are several modules that use features they expect
to be there, and for one reason or another they aren't. I picked
EWOULDBLOCK as it is a common one, but there are others. Again, a
portability plugin for PITA might help to detect potential issues for
authors.
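
As an illustration of the sort of check such a plugin might perform,
here is a minimal sketch (not part of PITA or any existing plugin) that
probes whether the macro is actually implemented on the running
platform:

```perl
#!/usr/bin/perl
# Minimal sketch: probe whether POSIX's EWOULDBLOCK macro is implemented
# on this platform. Where it isn't, POSIX croaks with "your vendor has
# not defined ...", which the eval catches.
use strict;
use warnings;
use POSIX ();

my $has_ewouldblock = eval { POSIX::EWOULDBLOCK(); 1 } ? 1 : 0;
print $has_ewouldblock ? "EWOULDBLOCK implemented\n"
                       : "EWOULDBLOCK not implemented\n";
```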

> To use an analogy, it's not up to the testing infrastructure to stop
> you putting the noose around your neck, only to tell everybody that 
> you hanged yourself.

Agreed, but if there are common pitfalls that can be detected, that some
authors are not aware of, it can only help authors to avoid them in the
future.

> It shouldn't have to do problem analysis, only identify that there is
> a problem, identify the point at which it failed and in what class of
> general problem, and record everything that happened so you can fix
> the problem yourself.

This was the intention of the CPAN test reports, but to avoid the
initial back and forth, the reports highlight code that can be used to
fix the simple problems typically seen from new authors, e.g. a missing
test suite or missing prerequisites.

If the report can identify a problem, without in depth analysis, and
there is a likely solution, it would be useful to highlight it.

> BTW, the top level PITA package is not intended to be compatible with
> Windows, although I do go out of my way to generally try to be
> platform neutral wherever possible. The key one that requires

Re: Request for Comments: Package testing results analysis, result codes

2006-02-20 Thread Barbie
On Sun, Feb 19, 2006 at 10:22:20PM +1100, Adam Kennedy wrote:
> 
> 2.  Incompatible packaging.
> Packaging unwraps, but missing files for the testing scheme.

You may want to split this into a result that contains no test suite
at all (UNKNOWN) and one that has missing files according to the 
MANIFEST.

Currently the second only gets flagged to the tester, and is usually
only sent to the author, if the tester makes a point of it, or the CPAN
testing FAILs.

> 5.  Installer missing dependency.
> For the installer itself.
> 
> 6.  Installer waits forever.
> Intended to cover the "non-skippable interactive question"

This needs to cover misuse of fork() and alarm. Too many distributions
assume these work on Windows. They don't. At least not on any Windows
platform I've ever used from Win95 to WinXP. They usually occur either
in the Makefile.PL or the test suite, rarely in the actual module code.

They may also fail on other OSs, although they could potentially be
covered by point 5.

> 12. System is incompatible with the package.
> Linux::, Win32::, Mac:: modules. Irreconcilable differences.

Not sure how you would cover this, but point 12 seems to possibly fit.
POSIX.pm is created for the platform it's installed on. A recent package
I was testing, File::Flock (which is why I can't install PITA) attempted 
to use the macro EWOULDBLOCK. Windows doesn't support this, and there
doesn't seem to be a suitable way to detect this properly.

This is just one example, I've come across others during testing.

Also note that the name alone does not signify it will not work on other
platforms. There are some Linux:: distros that work on Windows.

> 14. Tests run, and some/all tests fail.
> The normal FAIL case due to test failures.

Unless you can guarantee the @INC paths are correct when testing, this
should be split into two. The first is simply the standard FAIL result.

The second is the result of the fact that although the distribution
states the minimum version of a dependency, the installer has either
failed to find it or found the wrong version. This is currently a
problem with CPANPLUS, and is unfortunately difficult to track down.
It's part of the "bogus dependency" checks.
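
As an illustration of that second case, a test suite could assert at
run time that the prerequisite version actually loaded via @INC
satisfies the declared minimum. A sketch, with the module name and
minimum version as placeholders:

```perl
#!/usr/bin/perl
# Sketch: verify that the dependency version visible via @INC at test
# time really satisfies the declared minimum. The module and version
# here are hypothetical placeholders for a real PREREQ_PM entry.
use strict;
use warnings;

my $module  = 'File::Spec';   # placeholder prerequisite
my $minimum = 0.80;           # placeholder minimum version

(my $file = "$module.pm") =~ s{::}{/}g;
require $file;
my $loaded = $module->VERSION;

print $loaded >= $minimum
    ? "ok: $module $loaded satisfies $minimum\n"
    : "bogus dependency: $module $loaded < $minimum\n";
```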

> Installation and Completion
> ---
> 
> 16. All Tests Pass, did not attempt install.
> 
> 17. All Tests Pass, but installation failed.
> 
> 18. All Tests Pass, installation successful.

And another one: the test suite reports success, but failures occurred.
This is usually the result of using Test.pm. I've been told that, due to
legacy systems, test scripts using Test.pm must always pass, even if
there are failures. There are still a few distributions submitted to
CPAN like this.

Barbie.


Re: Kwalitee in your dependencies (was CPAN Upload: etc etc)

2006-01-31 Thread Barbie
On Mon, Jan 30, 2006 at 08:59:58AM -0500, David Golden wrote:
> 
> Well, the more generalized problem is how to you signal to an automated 
> test that you're bailing out as N/A for whatever reason?  For Perl 
> itself, it's easy enough for the smoke test to check if the required 
> version of Perl is available -- and the smoke test is smart enough not 
> to try to install an updated version of Perl to satisfy the dependency. 
>  It bails out with N/A instead.
> 
> What's a clean, generic mechanism for a distribution to signal "please 
> check this dependency and abort if it's not satisfied"?  Something in 
> the META.yml (e.g. Alien::*)?  Send a specific string to STDERR?  Send a 
> specific exit codes?  Ugh.  Other ideas?

For CPANPLUS (and thus YACSmoke) the distribution author can check the
OS against a list of known compatible OSs, and if the current OS isn't
found, bail out with an "OS unsupported" message. See this slide [1] for
a simple example.

[1] http://birmingham.pm.org/talks/barbie/cpan-ready/slide603.html

This has been in CPANPLUS for a while now. While the obvious
distributions of Win32:: and Linux:: may be OS specific, there are
others that are not so obvious from the name, which may support a number
of OSs, or might not support specific ones. 

The above exit mechanism provides an NA report to the CPAN testers. 
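
The mechanism boils down to a few lines at the top of Makefile.PL. A
sketch, with a hypothetical list of supported OS names:

```perl
#!/usr/bin/perl
# Sketch of the Makefile.PL bail-out described above: die with
# "OS unsupported" on platforms the author does not support, so the
# run is graded NA rather than FAIL. The list of supported $^O values
# here is a hypothetical example.
use strict;
use warnings;

my %supported = map { $_ => 1 } qw(linux darwin freebsd MSWin32);
die "OS unsupported\n" unless $supported{$^O};
print "building on $^O\n";
```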

Barbie.


Re: Binary distributions

2006-01-30 Thread Barbie
On Sat, Jan 28, 2006 at 09:34:09AM -0800, Tyler MacDonald wrote:
> 
>   From what I gather, CPANPLUS is a linear package building
> system, whereas YACsmoke is a wrapper around that that tries to build as
> many packages as is humanly (er, computerly) possible on a system, with the
> side effect of sending it's results through Test::Reporter. YACsmoke also
> can maintain a database of what it has built and what it hasn't, allowing a
> YACsmoke system to eventually exhaustively build every single package on
> CPAN without building the same one twice. Is this correct?

Sort of. CPANPLUS also maintains a database for the current run, so if
you have a list of 20 distributions to test, and 5 of those share a
prerequisite on the same distribution, CPANPLUS internally only
makes/tests it once, adding each distribution's build path to @INC.
However, the state information that CPANPLUS stores doesn't include the
test result of a distribution.

CPAN::YACSmoke, on the other hand, does record the test result. This
means that if it tried to build a distribution (it currently checks
against the latest version) which reports a FAIL or NA, any distribution
that requires it will have its test results graded as ABORTED or NA
respectively. The ABORTED result does not get reported to CPAN Testers.

The part that is currently missing is that if a distribution that is a
prerequisite now PASSes, there is no code to actively change all those
ABORTEDs to NONE, so that the requiring distributions can be retested.
This is on my todo list.

> > What would the plugins for .deb, .rpm and .ppm do? Currently we just
> > pass the path to CPANPLUS and let CPANPLUS unwrap, make and test the
> > distribution. We then just interrogate the results.
> 
>   These plugins would execute *after* YACsmoke/CPANPLUS has done it's
> work, but before it's cleaned up after itself, and generate the actual
> .deb/.rpm/.ppm files.

I would see this as a different system, perhaps for someone who actively
maintains a repository for that particular install package. I would
prefer to see a code base that uses the YACSmoke database, rather than
one used as a plugin. The .cpanplus build directories are not removed once
testing is complete, so the blib directories are all still available
after a test run.

YACSmoke does provide a purge facility, but it doesn't run automatically
after a test run; you have to instigate it manually.

> > Perhaps this is something you meant for CPANPLUS to handle, in which
> > case I'm sure that's something Jos would be interested in, as he has
> > previously looked at some kind of binary support.
> 
>   Well, maybe the actual low-level package generation could be kicked
> down a level, but the whole reason I thought of using YACsmoke for this was
> because of it's "grab everything and try to build it" nature.

YACSmoke would certainly be useful as a first stage, to ensure you
only package distributions that pass their tests on your platform.

> > I plan to find time soon to complete the tests required for the next
> > release of CPAN::YACSmoke, so if what you want does relate to
> > CPAN::YACSmoke I can start taking a look and see what needs to be done
> > to implement it.

I will look into this further, but it'll be lower on my todo list, as it
will be driven by packagers rather than testers.

Barbie.


Re: Kwalitee in your dependencies (was CPAN Upload: etc etc)

2006-01-28 Thread Barbie
On Sun, Jan 29, 2006 at 04:45:51AM +1100, Adam Kennedy wrote:
> 
> I haven't been able to find (although I haven't looked to hard) for a 
> documented set of result codes. But either DEPFAIL or N/A makes sense.

See the CPAN::YACSmoke pod. Just before Christmas I completed the
TestersGuide pod, which also explains it in more detail. That version
hasn't been released to CPAN yet, but is available via CVS on SourceForge.

> If not, I certainly plan to deal with it that way. N/As cascade.

Agreed.

> Without knowing how many exist now, I hope to define a more 
> comprehensive set of HTTP-like codes, but that design element is still 
> flexible.

There are only 4 that are submitted to cpan-testers: PASS, FAIL, NA and
UNKNOWN (no tests supplied with the distribution). CPAN::YACSmoke also
deals with ABORTED and UNGRADED, which are referenced internally, as
well as IGNORED (a subsequent version PASSes) and NONE (where a version
has been cleared of its previous grade by manual intervention).

Changing to these HTTP-like codes might be okay for an internal
representation, but it would require a lot of work to change several
CPAN modules and ensure all the testers upgraded. There is also the fact
that all existing reports are in the system and not going to change.
Although I may have misunderstood your intentions.

> >Is there a mailing list or wiki or some 
> >such?  (Subversion repository?)
> 
> No infrastructure for now, until it's actually announced, but the IRC 
> channel is logged ala #perl6, so you can visit when you wish, and catch 
> up when you care.

I use IRC very rarely, as work and family take up most of my time ...
when I'm not organising a conference ;) Now that more tools are being
created, it seems more appropriate to discuss these somewhere other than
a QA list. I'll investigate setting something up on the Birmingham.pm
server, unless there is somewhere else that would be more appropriate.

Cheers,
Barbie.


Re: Kwalitee in your dependencies (was CPAN Upload: etc etc)

2006-01-28 Thread Barbie
On Sat, Jan 28, 2006 at 11:01:06AM -0500, David Golden wrote:
> 
> You need to deal with N/A's down the chain.  E.g. I prereq a module that 
>  can't be built in an automated fashion but requires human intervention 
> or choices or external libraries.  If you don't have it and the 
> automated build to try to get it fails, that really isn't my problem -- 
> I've been clear about the dependency: "If you have version X.YZ of 
> Module ABC::DEF, then I promise to pass my tests".

With CPAN::YACSmoke, if you require a distribution that results in an NA
report, your distribution is also classed as NA. Part of this is the
Perl version issue. CPAN and CPANPLUS will not install your distribution
if a prerequisite raises the required version of Perl. This happened a
lot in the last year, as ExtUtils::MakeMaker and Module::Starter insert
the author's version of Perl when they create a new distribution. When
other authors then build on those distributions, they unintentionally
require that version of Perl too.

This is the reason I'm looking at developing SDKs for core modules that
don't have a life outside the core distribution, to help relieve some of
those dependencies.

The issue with external libraries and C compilers is awkward to handle
reliably, as different platforms put them in different places, and often
name them differently according to version. This is something I want to
either include in CPAN::YACSmoke, or provide a way for authors to
declare a dependency on an Alien:: or similar distribution, which can
determine the version of the required 3rd party software, as well as
whether it is installed.

> Think about the GD modules -- if the GD library isn't installed, GD 
> won't build and anything depending on it fails.  Should that fail a 
> "clean_install"?

A report should either not be submitted, or be classed as something
that indicates that 3rd party software was not available to complete
testing. I'm leaning towards the former.

> (Contrast that with SQLite, which installs it for you.)

That's because the license enables Matt to do that. Plus it's the kind
of self-contained release that you can get away with bundling with the
distribution. libgd is used by plenty of other apps.

> >A FAIL should ONLY mean that some package has told the smoke system that 
> >thinks it should be able to pass on your platform, and then it doesn't.
> >
> >Failures that aren't the fault of the code (platform not compatible, or 
> >whatever) should be N/A or something else.
> 
> I think better granularity would be: "FAIL" -- failed its own test 
> suite; "DEPFAIL" -- couldn't get dependencies to pass their test suites; 
> and "N/A" -- incompatible platform or Perl version or not automatically 
> chasing dependencies.

With CPAN::YACSmoke, a "DEPFAIL" is marked as "ABORTED" and no report is
sent.

> >If you want to help out, we'd love to see you in irc.perl.net #pita.
> 
> Between tuit shortages and the day job (side q: what's this whole $job 
> thing I keep seeing in people's posts/blogs), a realtime channel like 
> IRC is hard to keep up with.  Is there a mailing list or wiki or some 
> such?  (Subversion repository?)

I'd like to see a mailing list specific for developers of the smoke
systems, as currently the ideas for CPAN::YACSmoke are usually just
discussed between Robert Rothenberg and myself. If we're going to
improve reporting it would help to have a wider discussion group.
Whenever Robert or I have posted here, it hasn't really reached the
right audience.

I'm glad that the AUTOMATED_TESTING flag is getting picked up now, but
it would be nice to throw those ideas around more often, and get them
refined more quickly.

I'm looking forward to evaluating PITA, but it will likely have to be on
a Perl 5.8.7 Windows box, as my 5.6.1 box can't install any of the PITA
distributions on CPAN at the moment, due to the dependency issue ;)

Cheers,
Barbie.


Re: Fwd: CPAN Upload: D/DO/DOMM/Module-CPANTS-Analyse-0.5.tar.gz

2006-01-27 Thread Barbie
On Fri, Jan 27, 2006 at 03:42:58PM +0100, Tels wrote:
> 
> I am still considering building something[0] that shows the 
> module-dependency as a graph to show how "bad" the problem has become. 
> 
> [0] As soon as I can extract the nec. data from CPANTS, which has failed 
> the last two times I tried that for very similiar reasons - lots of 
> dependencies, test failures, database scheme changed etc. ...

A member of Birmingham.pm has already written one, although his server
seems to be down at the moment. As it is quite a nice little tool, I'll
see if he'll let me host it on the Birmingham.pm server for you all to
have a play with.

Barbie.



Re: how to detect that we're running under CPAN::Testers?

2006-01-17 Thread Barbie
On Tue, Jan 17, 2006 at 06:06:50AM +1100, Adam Kennedy wrote:
> 
> At the moment all our output is structured XML files, so at some point I 
> need to write an XSL to translate it back down into something CPAN 
> Testers can deal with, and I can add whatever you want me to at that time.
> 
> Do you want the tag saying it's PITA-XSL-generated content, or the owner 
> of the testing system? (because I imagine we'll end up with a number of 
> those)

The fact that the report is generated using PITA. For me personally it
would be interesting to see where reports are generated from. At the
moment it's difficult to know which reports are manually produced.

There are still several reports getting misfiled, as they either don't
include the version number of the distribution or are missing other
attributes. If the automated apps tag the report, it makes it easier to
spot which testing and reporting mechanisms are getting it wrong.

Barbie.



Re: how to detect that we're running under CPAN::Testers?

2006-01-16 Thread Barbie
On Thu, Jan 12, 2006 at 03:17:55AM +0100, Sébastien Aperghis-Tramoni wrote:
> 
> AFAICT, serious smokers (the ones that automatically and regularly
> send CPAN Testers reports) all use CPAN::YACSmoke. The previously
> used one was cpansmoke, included with previous versions of CPANPLUS:
>   http://search.cpan.org/dist/CPANPLUS-0.0499/bin/cpansmoke

Currently there are no other ways of detecting automated testing
reliably. Prior to YACSmoke there was $ENV{VISUAL} eq 'echo', but even
that wasn't guaranteed. I was working on adding the AUTOMATED_TESTING
flag to CPANPLUS when the smoke testing got dropped from the
distribution. As such it was one of the first things to go into
YACSmoke. I did write to other authors of smoke test scripts suggesting
they do the same, but AFAIK none of them have implemented it.
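
For module authors, honouring the flag is a small check in a test
script. A minimal sketch using Test::More:

```perl
#!/usr/bin/perl
# Sketch: a test script that skips its interactive parts when the
# AUTOMATED_TESTING environment variable has been set by a smoke tester.
use strict;
use warnings;
use Test::More;

if ($ENV{AUTOMATED_TESTING}) {
    plan skip_all => 'interactive tests skipped under automated testing';
}
else {
    plan tests => 1;
    ok(1, 'placeholder for a test that would prompt the user');
}
```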

> I don't think it provided a hint for telling a module whether it was
> automated testing or not, but I don't think that anybody still use it.

It didn't, and from the reports I've seen I don't think any automated
testing uses anything older than CPANPLUS 0.050.

> That's something not indicated in the CPAN Testers Statistics site,
> which was finally made available (but very silently) by Barbie:
>   http://perl.grango.org/

I wouldn't say silently, as I did announce it in my use.perl journal.
However, I wasn't convinced that many people would be interested in it,
so I didn't make a big song and dance about it.

Anyhow, I haven't added stats on whether a report comes from automated
testing, as you can't tell unless the tester is using YACSmoke, which
adds a tag line to the report. Incidentally, Adam, it would be worth you
doing the same with PITA, so these sorts of stats could be gleaned in
the future.

> Other reports may be send by people like me when they interactively
> install modules using CPANPLUS, or by hand using Test::Reporter.

There are still quite a number of interactive reports submitted,
although the bulk is automated.

Barbie.


Re: CPAN Testers results

2005-11-03 Thread Barbie
Quoting Ovid <[EMAIL PROTECTED]>:

> I've noticed that http://search.cpan.org/~ovid/HOP-Parser-0.01/,
> amongst other modules, has no CPAN test results appearing even though
> CPAN tester reports are coming in.  I've seen this for other modules,
> too.
> 
> Is there an announced reason for this I missed or is something down?

Unfortunately Leon has been having problems with his server [1], which is where
the parsing of all the reports is done and the master testers.db resides. Until
it's back online there won't be any updates.

[1] http://use.perl.org/~acme/journal/27294

Barbie.
-- 
Barbie (@missbarbell.co.uk) | Birmingham Perl Mongers user group |
http://birmingham.pm.org/

---
This mail sent through http://www.easynetdial.co.uk


Re: Probing requirements

2005-04-09 Thread Barbie
From: "Randy W. Sims" <[EMAIL PROTECTED]>

> I didn't like the Probe name too much at first, but it's kinda grown on
> me; seems very perlish. I've come to think of Config::* as a place for
> static configuration information such as the compiler used to compile
> perl, $Config{cc}.

Probe to me implies much more than configuration. With your suggested
modules, I think Probe:: is the ideal name. Config:: to me implies something
that is loaded at startup and can be overridden. Probe:: on the other hand is
fixed and could be loaded at any time. There is no reason why a Probe::
module couldn't be used as part of configuration, but I think it has a
broader use.

I would suggest there is a higher-level module, Probe.pm, that actually
does the basic searching of PATH, filesystems, etc., and handles the
OS-specific parts, with modules such as Probe::Compiler listing specific
compilers (c, fortran, cpp, etc.) that can be searched for. That would
remove the need for repeated code.
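
As a rough illustration of the kind of OS-neutral primitive such a
Probe.pm might offer, here is a hypothetical PATH search; the function
name is invented for illustration:

```perl
#!/usr/bin/perl
# Hypothetical sketch of an OS-neutral executable search, the sort of
# primitive a top-level Probe.pm could supply. $Config{path_sep} gives
# the platform's PATH separator (':' on Unix, ';' on Windows).
use strict;
use warnings;
use Config;
use File::Spec;

sub probe_path {
    my ($name) = @_;
    for my $dir (split /\Q$Config{path_sep}\E/, $ENV{PATH} || '') {
        next unless length $dir;
        my $candidate = File::Spec->catfile($dir, $name);
        return $candidate if -x $candidate && !-d $candidate;
    }
    return undef;
}

my $perl = probe_path('perl');
print defined $perl ? "found perl at $perl\n" : "perl not on PATH\n";
```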

As a starting point it may be worth looking at have_library in the
Makefile.PL [1] for XML-LibXML-Common [2]

[1] http://search.cpan.org/src/PHISH/XML-LibXML-Common-0.13/Makefile.PL
[2] http://search.cpan.org/dist/XML-LibXML-Common/

Barbie



RE: Kwalitee and has_test_*

2005-04-08 Thread Barbie
I'm not fussed whether the pod testing is a kwalitee item, as I was
including those tests in my distributions before it was introduced. I
look on the kwalitee items simply as a checklist of things I may have
missed from my distributions. It does not represent good quality code,
as there are many, many far better modules to learn from than mine.

I also try to include a consistent set of POD headers, but this is a
purely personal thing. Every author has their own interpretation of
which headings they should include. That's something else that could be
considered a kwalitee item, but shouldn't be.

Anything that productively improves the kwalitee of CPAN and the
distributions on it is good. Anything that has a bad impact on some good
quality work is likely to mean CPANTS and the kwalitee system get
ignored.

Barbie.

-- 
Barbie (@missbarbell.co.uk) | Birmingham Perl Mongers user group |
http://birmingham.pm.org/



RE: How to force tests to issue "NA" reports?

2005-04-08 Thread Barbie
On 07 April 2005 23:02 Ken Williams wrote:

> On Apr 6, 2005, at 7:13 AM, Robert Rothenberg wrote:
> 
>> Is there a way tests to determine that a module cannot be installed
>> on a platform so that CPANPLUS or CPAN::YACSmoke can issue an "NA"
>> (Not Applicable) report? 
>> 
>> CPANPLUS relies on module names (e.g. "Solaris::" or "Win32::") but
>> that is not always appropriate in cases where a module runs on many
>> platforms  except some that do not have the capability.
> 
> In those cases, who's to say that that platform won't get such
> capabilities in the future?  If the module author has to list the
> platforms on which their module won't run, it'll get out of date, and
> the list will likely be incomplete to start out with.

This is something that Robert and I have discussed. It is rather difficult to
decide how to approach this. 

However, thinking about what *might* happen in the future for a future
platform is not relevant to what we were thinking. There are real
problems now, such as alarm(), symlinks and fork() on Win32, that create
problems when distributions use them. There are some distributions that
only require them for testing, which to my mind currently means bogus
FAIL reports are generated. If the distribution author is able to
indicate which platforms they are willing to support, then an NA report
could be generated instead, until such time as the author is willing to
support that platform.

The reasoning behind all this is that several authors have raised the
subject of getting FAIL reports for platforms they are unwilling or
unable to support. We were trying to think of an adequate way of
avoiding sending them reports they are currently not interested in. I
know it could easily be a matter of pressing the delete key, but seeing
as it has been a wishlist item that keeps getting mentioned, Robert and
I figured it might be something we could address.

>> There's also a separate issue of whether "NA" reports should be
>> issued if a library is missing.  (Usually these come out as
>> failures.) 
> 
> People looking at failure reports should be able to tell whether the
> failure occurred because of a missing prerequisite (of which libraries
> are one variety) or because of runtime building/testing
> problems.  The
> correct way to solve this would be to have a mechanism for declaring
> system library dependencies, then check before smoke-testing whether
> those dependencies are satisfied.

This is also something I've been thinking about for about a year. My
only conclusion is that an additional test grade be created, which
implies that the distribution could not be completed due to dependencies
outside the realm of the distribution or the install mechanism. My patch
to CPANPLUS was to not produce a report, but I do think it could be
something users may wish to know about.

> Unfortunately that's a large problem space, and it has eluded attempts
> at cross-platform solutions so far.  It would be really nice
> if it were
> solved, though.

You're right, it is a large problem space. There have been a few
attempts to figure it out, but as different platforms have different
methods of recording library information, it isn't going to be a quick
fix.


On 07 April 2005 23:12 Michael G Schwern wrote:

> On Thu, Apr 07, 2005 at 05:01:34PM -0500, Ken Williams wrote:
>>> Is there a way tests to determine that a module cannot be installed
>>> on a platform so that CPANPLUS or CPAN::YACSmoke can issue an "NA"
>>> (Not Applicable) report?
> 
> AFAIK NA reports are issued when a Makefile.PL dies due to a "require
> 5.00X" failing.  That's the only way I've seen anyway.

Not true. The current CPANPLUS will now produce an NA report if the perl
version is lower than that specified by the distribution/module. This is
checked for as part of the prepare, build and test stages, not just the
prepare stage.

Previously, the only time an NA report was produced was when one of the
3 stages failed and the top-level namespace matched a platform name that
wasn't the current platform. This was why, a while ago, there were some
NA reports for distributions in the 'MAC::' namespace: a case-insensitive
test thought they were in the 'Mac::' namespace.
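
For reference, the "require 5.00X" idiom mentioned above is just a bare
require at the top of Makefile.PL; on an older perl it dies before the
Makefile is generated. A sketch:

```perl
#!/usr/bin/perl
# Sketch: the version-check idiom. On perls older than 5.6.0 this dies
# with "Perl v5.6.0 required..." before anything else runs, which the
# smoke tools can pick up and grade as NA.
require 5.006;
print "perl $] is new enough\n";
```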

Hope that clarifies a few points.

Cheers,
Barbie.

-- 
Barbie (@missbarbell.co.uk) | Birmingham Perl Mongers user group |
http://birmingham.pm.org/



RE: Script to find Module Dependency Test Results...

2004-07-22 Thread Barbie

On 20 July 2004 22:30 Robert Rothenberg wrote:

> I have a prototype Perl script that will determine the
> dependencies of a given
> CPAN distribution, and then check CPAN Testers for any
> failure reports of that
> distro or dependent distros for a given platform.

One thing to remember is that some test results are also specific to a
particular version of Perl. At the moment I am noticing that several
modules I am working through testing have already been tested by you,
and CPANPLUS thinks that's okay. However, you are testing using 5.8.4,
while I've got 5.6.1. The two can have entirely different results.

Acme was going to look at this at some point when he had time, so that the versions 
would be more visible on the report summary pages. But at the moment it's not always 
guaranteed that the summary is right. There are some reports that we have both 
completed, and comparing them highlights the differences; Email::Simple [1] is an example.

[1] http://testers.cpan.org/show/Email-Simple.html

> I would like to work with other people to turn this into
> something of use to the community

It is a good idea, but it might be better to look at patching CPAN::WWW::Testers [2], 
rather than creating an extra script/module.

[2] http://search.cpan.org/dist/CPAN-WWW-Testers/lib/CPAN/WWW/Testers.pm

> This information would be of use to various quality-assurance
> types, as well
> as the module author.  It's probably of use to module users too.

As a report may be unavailable simply because no-one has got around to testing it yet, 
a simple search mechanism (by platform/perl version) using your script could generate 
the current info from the testers.db file.

I like the idea, and with the CPANTS work that Thomas is doing this could be very nice.

Barbie.



Re: how to run test.pl before the t/*.t tests ?

2004-07-08 Thread Barbie
From: "Andy Lester" <[EMAIL PROTECTED]>

> On Thu, Jul 08, 2004 at 04:20:38PM -0400, Michael G Schwern
([EMAIL PROTECTED]) wrote:
> > > > [2] Want some fun?  http://search.cpan.org/~dconway
> > >
> > > You have a sick sense of humour young man ;)
> >
> > He uses test.pl.  Sic 'em.
>
> That sort of cleanup is exactly what Phalanx is about.  I think
> Parse::RecDescent is on the Phalanx 100.

I was going to say that this sounds like a job for Andy's Phalanx team :)

Barbie.



Re: how to run test.pl before the t/*.t tests ?

2004-07-08 Thread Barbie
From: "Michael G Schwern" <[EMAIL PROTECTED]>
>
> It means test.pl's which use Test::More and fail will no longer cause
> 'make test' to fail.  But I doubt people are using test.pl and Test::More
> much.

I'm now trying to remember which distributions I tested recently with only a
test.pl, and can't. I will have to keep an eye out for them and see whether
they use Test::More. There are still quite a few distributions using just
test.pl, but I think they were mostly, if not all, originally written when only
Test.pm was around.
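The difference matters because of exit status. Here is a minimal sketch (hypothetical, not any particular distribution's test.pl) of a test.pl written with Test::More: even though 'make test' ignores the TAP it prints, Test::More exits non-zero when a test fails, so the harness at least sees the failure:

```perl
# test.pl -- minimal sketch of a Test::More-based test.pl
use strict;
use warnings;
use Test::More tests => 2;

ok( 1 + 1 == 2, 'arithmetic works' );
is( lc('CPAN'), 'cpan', 'lc() lowercases' );

# A Test.pm-era test.pl that just printed "ok"/"not ok" gave 'make test'
# nothing to act on; Test::More's END block sets a non-zero exit status
# on failure, which 'make' does notice.
```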

Barbie.
-- 
barbie (@missbarbell.co.uk) | Birmingham Perl Mongers user group |
http://birmingham.pm.org



RE: how to run test.pl before the t/*.t tests ?

2004-07-08 Thread Barbie
On 08 July 2004 16:55 Michael G Schwern wrote:

> Little known fact:  The output of test.pl is completely ignored by
> "make test".  

... and it really annoys CPAN testers, who have to cut-and-paste all the test output 
that has NOT passed into FAIL reports. Though now that I've fixed part of CPANPLUS, 
it's not nearly as bad as it was.

> [1] Test::More automatically exits abnormally on failure but
> I'm considering
> changing that to no longer be the default.

Will this then mean all CPAN testing will PASS?

> [2] Want some fun?  http://search.cpan.org/~dconway

You have a sick sense of humour young man ;)


Barbie.
-- 
Barbie (@missbarbell.co.uk) | Birmingham Perl Mongers user group | 
http://birmingham.pm.org/



RE: testing File::Finder

2003-12-19 Thread Barbie
On 18 December 2003 21:44 Randal L. Schwartz wrote:

> I can add local symlinks
> and hardlinks.  I'll compute ownership out-of-band and compare it
> to the test result though... I wouldn't want someone extracting
> this as joebloe to fail because the uid wasn't root. :)

Another thing to bear in mind ... is this a Unix-like-only module? If not, then 
symlinks will be a no-go. Win32 doesn't support them, and I would imagine there are 
other OSs in the same position. 
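A common way to cope with this (a sketch, not Randal's actual test code; the file names are made up) is to probe for symlink support at run time and skip rather than fail; the eval-wrapped symlink() probe is the idiom suggested in perlfunc:

```perl
# t/symlink.t -- minimal sketch: skip symlink tests on platforms without them
use strict;
use warnings;
use Test::More;

# symlink() dies where the OS doesn't implement it, so wrap it in eval;
# the empty arguments make the call fail harmlessly even where it exists
my $has_symlinks = eval { symlink('', ''); 1 };

if ($has_symlinks) {
    plan tests => 1;
    symlink( 'some-target', 'testlink.tmp' );   # hypothetical names
    ok( -l 'testlink.tmp', 'created a symlink' );
    unlink 'testlink.tmp';
}
else {
    plan skip_all => 'symlinks not supported on this platform';
}
```

On Win32 the whole script reports a clean skip instead of a string of failures, so testers see a PASS (with skips) rather than a spurious FAIL.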

Computing ownership on Win32 can be done via the Win32::LoginName(), 
Win32::NetAdmin::LocalGroupGetMembers() and Win32::NetAdmin::GroupGetMembers() 
functions, if you wanted a separate Win32 test script. Otherwise, as soon as you start 
calling Unixy admin programs, your distribution will likely get labelled (wrongly) as 
'NA' under CPAN testing on other OSs.

Barbie.
-- 
Barbie (@missbarbell.co.uk) | Birmingham Perl Mongers | http://birmingham.pm.org/



RE: Phalanx / CPANTS / Kwalitee

2003-10-16 Thread Barbie
On 16 October 2003 05:47 Robert Spier wrote:

>> Yes. We've been thinking about this. It either needs stealing buildd
>> from Debian, having a box we don't mind destroying every so often, or
>> having a VMware virtual machine we can undo easily. What we need is
>> more free time ;-) 
>> 
> 
> User Mode Linux (limiting to Linux, of course) might be a lighter
> weight way to do this. 

Would this cope with Win32, MacOS or other OS-specific modules?

Barbie.