Re: Why make up a Makefile.PL (was: Re: git tarballs / tarfile comments)

2008-09-04 Thread Thomas Klausner
Hi!

On Wed, Sep 03, 2008 at 03:16:35PM -0400, David Golden wrote:

 There are a handful of things on CPAN that are just zipped .pm files.  I

cpants says:
cpants=> select extension, count(*) from dist group by extension;
 extension | count
-----------+-------
 tar.gz    | 14762
 tgz       |   241
 zip       |   113

I've attached the list, if this is of any help...

-- 
#!/usr/bin/perl  http://domm.plix.at
for(ref bless{},just'another'perl'hacker){s-:+-$-gprint$_.$/}
package
---
 Acme-POE-Knee-1.10.zip
 AI-NeuralNet-BackProp-0.89.zip
 AI-NeuralNet-Mesh-0.44.zip
 Algorithm-Line-Bresenham-C-0.1.zip
 Algorithm-Loops-1.031.zip
 Archive-Tyd-0.02.zip
 BitTorrent-V0.1.zip
 Chatbot-Alpha-2.04.zip
 Convert-MIL1750A-0.1.zip
 CPAN-Test-Dummy-Perl5-Make-Zip-1.03.zip
 Crypt-XXTEA-1.00.zip
 Data-Iterator-0.021.zip
 DBIx-Compare-1.0.zip
 DBIx-Compare-ContentChecksum-mysql-1.0.zip
 DBIx-Fun-0.02.zip
 Devel-TraceSubs-0.02.zip
 DISCO-0.01.zip
 EasyDate-103.zip
 ESplit1.00.zip
 extensible_report_generator_1.13.zip
 ExtUtils-FakeConfig-0.11.zip
 Finance-Bank-ES-Cajamadrid-0.04.zip
 Finance-Edgar-0.01.zip
 Games-LogicPuzzle-0.20.zip
 Games-Multiplayer-Manager-1.01.zip
 Graph-Maker-0.02.zip
 HTML-EasyTable-0.04.zip
 Inline-Octave-0.22.zip
 Joystick-1.01.zip
 LIMS-Controller-1.6b.zip
 LIMS-MT_Plate-1.17.zip
 Lingua-EN-Dict-0.20.zip
 Lingua-EN-VerbTense-3.00.zip
 link_NCBI.zip
 Memo32-1.01b.zip
 mGen-1.03.zip
 Microarray-0.45c.zip
 MIDI-Trans-0.15.zip
 modules/etext/etext.1.6.3.zip
 modules/SelfUnzip-0.01.zip
 modules/SOM-0.0601.zip
 Module-Versions-0.02.zip
 MRTG-Config-0.04.zip
 MSN-PersonalMessage-0.02.zip
 NCBI-0.10.zip
 NetIcecast-1.02.zip
 Net-IPAddress-1.10-PPM.zip
 Net-Server-POP3-0.0009.zip
 Notes/Notes.zip
 NPRG-0.31.zip
 Nums2Words-1.12.zip
 Object-Interface-1.1.zip
 os2/OS2-FTP-0_10.zip
 os2/OS2-UPM-0_10.zip
 os2/tk/Tk-OS2src-1.04.zip
 Parse-EventLog-0.7.zip
 ParseTemplate-0.37.zip
 PCX-Loader-0.50.zip
 Perl56/Win32-PerfMon.0.07-Perl5.6.zip
 Perl6-Interpolators-0.03.zip
 perlipse/Perlipse-0.02.zip
 Prima-prigraph-win32-1.06.zip
 Reduziguais.zip
 Regex-Number-GtLt-0.1.zip
 resched-0.7.2.zip
 rms.zip
 smg.zip
 System-Index-0.1.zip
 SystemTray-Applet-0.02.zip
 SystemTray-Applet-Win32-0.01.zip
 Template-Ast-0.02.zip
 Term-Getch-0.20.zip
 Term-Sample-0.25.zip
 Test-Version-0.02.zip
 Tie-Tk-Text-0.91.zip
 TimeConvert0.5.zip
 tinyperl-1.0-580-win32.zip
 Tk-DiffText-0.19.zip
 Tk-TOTD-0.20.zip
 Tkx-FindBar-0.06.zip
 UDPServersAndClients.zip
 VCS-StarTeam-0.08.zip
 VMS-Device-0_09.zip
 VMS-FlatFile-0.01.zip
 VMS-Queue-0_58.zip
 vms-user-0_02.zip
 Win32-ActAcc-1.1.zip
 Win32-Capture-1.1.zip
 Win32-Encode-0.5beta.zip
 Win32-Env-Path-0.01.zip
 win32-guidgen-0.04.zip
 Win32-GUI-Scintilla-1.7.zip
 Win32-HostExplorer-01.zip
 Win32-MCI-Basic-0.02.zip
 Win32-MediaPlayer-0.2.zip
 Win32-MIDI-0_2.zip
 Win32-MMF-0.09e.zip
 Win32-MultiMedia-0.01.zip
 Win32-OLE-CrystalRuntime-Application-0.08.zip
 Win32-OLE-OPC-0.92.zip
 Win32-PerlExe-Env-0.04.zip
 Win32-Printer-0.9.1.zip
 Win32-ShellExt-0.1.zip
 Win32/SimpleProcess/SimpleProcess_1.0.zip
 Win32-SqlServer-2.004.zip
 Win32-SystemInfo-0.11.zip
 Win32-TaskScheduler2.0.3.zip
 Win32-TieRegistry-0.25.zip
 Win32-TSA-Notify-0.01.zip
 WordPress-V1.zip
 WWW-MySpaceBot-0.01.zip
 WWW-PDAScraper-0.1.zip
 WWW-TwentyQuestions-0.01.zip
(113 rows)



Module::Build 0.2809 release coming, should we test it?

2008-09-04 Thread Eric Wilhelm
Hi all,

Module::Build hasn't shipped a proper release for a good while, and a 
few alphas have gone out since then (including the one in 5.10.0).  Now 
I find myself apparently expected to ship it.

My examination of the .meta files on CPAN shows that 9095 distributions 
have a META.yml with 'generated_by' citing Module::Build.  To my 
knowledge, the testing of pre-release versions does not extend to 
building or testing these 9k+ distributions, and once it ships, the 
failure reports go to a lot of authors, not to the M::B maintainers.

So, that worries me.  Does anyone have the ability to set up a set of 
out-of-band tests to avoid spamming everyone else with my failures?  
Tools to generate this list of distributions from a cpan mirror could 
be made, or tell me what would help.  Andreas, I think you said you had 
run a round of tests on the alpha just before this one?  With the 
installer set to MB?


Oh, and the latest change is blocking TAP::Harness changing to 
a "foo ... ok" output format.


Statistical snacky bit:  the 'generated_by' count on the big three goes:  
extutils::makemaker: 23338, module::build: 9095, module::install: 8289.
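A count like that could be scripted against the META.yml files on a mirror.  A minimal sketch (Python rather than the thread's Perl, for brevity; the regex-based field extraction and the list-of-texts input shape are assumptions, not how the actual .meta examination was done):

```python
import re
from collections import Counter

# Match the leading tool name in a 'generated_by' line, ignoring any
# trailing "version N.NN" suffix, so e.g. "Module::Build version 0.2808"
# is counted under "Module::Build".
GENERATED_BY = re.compile(r"^generated_by:\s*['\"]?([\w:.-]+)", re.MULTILINE)

def tally_generators(meta_texts):
    """Tally which tool each META.yml's 'generated_by' field names."""
    counts = Counter()
    for text in meta_texts:
        m = GENERATED_BY.search(text)
        counts[m.group(1) if m else "(none)"] += 1
    return counts
```

A real pass would first have to pull META.yml out of each tarball, and a YAML parser would be more robust than a regex.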


Thanks,
Eric
-- 
We who cut mere stones must always be envisioning cathedrals.
--Quarry worker's creed
---
http://scratchcomputing.com
---


Re: The relation between CPAN Testers and quality (or why CPAN Testers sucks if you don't need it)

2008-09-04 Thread Eric Wilhelm
# from David Golden
# on Wednesday 03 September 2008 14:09:

 if ... CPAN Testers is ...
   to help authors improve quality 
 rather than ...
   to give users a guarantee about ... any given platform.

... high quality author -- ... tester has a broken or misconfigured
toolchain  The false positive ratio will approach 100%.

__Repetition is desensitizing__ ... [more data will] improve confidence
in the aggregate result. ...  However, ... only need to hear it once.

... fail during PL or make ... valuable information to
know when distributions can't build ... [or] ... don't pass tests.  ...
not just skip reporting.  ... [but] false positives that provoke
complaint are toolchain issues during PL or make/Build.

... no easy way to distinguish the phase ... subject of an email.

I... partitions the FAIL space in a way that makes it easier for
authors to focus on which ever part of the PL/make/test cycle...

Yes.  Important points.  Excellent assessment.

__What we can fix now and what we can't__
...
However, as long as the CPAN Testers system has individual testers
emailing authors, there is little we can do to address the problem of
repetition.

This is true.  For me, the mail is such a completely random data point 
(no PASS mail, not all of the FAIL mail, etc) that I just delete it.

Once that is more or less reliable, we could restart email
notifications from that central source if people felt that nagging is
critical to improve quality.  Personally, I'm coming around to the
idea that it's not the right way to go culturally for the community.

Send one mail to each new author telling them about the setup and how to 
get rss or configure it to mail them, etc.  Telling people about 
something they don't know *once* is usually helpful.

But with the per-tester direct mail, the recipient is powerless to stop 
it, and feeling powerless tends to make people angry.

In the bigger scheme of things, I sort of think all new PAUSE authors 
should get a welcome e-basket with links, advice, etc, along with 
perhaps an optional (default?) monthly newsletter about new tools or 
something.

But, the trend generally seems to be away from mail and towards web 
services.  I'm in favor.

Some of these proposals would be easier in CPAN Testers 2.0, which will
provide reports as structured data instead of email text, but if exit
0 is a straw that is breaking the Perl camel's back now, then we
can't ignore 1.0 to work on 2.0 as I'm not sure anyone will care
anymore by the time it's done.

What are the issues in that time?  Code or non-code?

What we can't do easily is get the testers community to upgrade to
newer versions of the tools.  That is still going to be a matter of
announcements and proselytizing and so on.  But I think we can make a
good case for it, and if we can get the top 10 or so testers to
upgrade across all their testing machines then I think we'll make a
huge dent in the false positives that are undermining support for CPAN
Testers as a tool for Perl software quality.

Yes.  Further, if you can detect new tools from old, you have enough 
information to filter the results.
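Detecting new tools from old might look like the following sketch (Python; the report structure and the minimum-version table are hypothetical, and real CPAN version strings are messier than dotted integers -- Perl's version.pm handles those):

```python
# Minimum client versions considered "current" -- invented thresholds.
MIN_VERSIONS = {
    "CPAN::Reporter": (1, 15),
    "CPANPLUS": (0, 84),
}

def version_tuple(s):
    """Naive dotted-integer parse; good enough for this sketch."""
    return tuple(int(part) for part in s.split("."))

def current_reports(reports):
    """Keep only reports from known clients meeting the minimum version."""
    kept = []
    for report in reports:
        minimum = MIN_VERSIONS.get(report["client"])
        if minimum is not None and version_tuple(report["version"]) >= minimum:
            kept.append(report)
    return kept
```

Reports from unknown or stale clients would then be held back (or flagged) rather than mailed to authors.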

Hah!  You know, you have their e-mail address, right?  You can just send 
them nagging mail about how their tools are FAIL! ;-)

So, seriously: Testing is good.  Knowing that weird platforms fail or 
pass is good.  Knowing that old toolchains fail is even useful to some 
extent, but I think the significant variables are different for 
different users.

So, what are these use cases?

Let's pretend that I'm a real jerk of an author and I only care about 
whether my code installs on a perl 5.8.8+ (a *real* perl -- no funky 
vendor patches) with a fully updated and properly configured toolchain 
and the clock set to within 0.65s of NTP time.  It would then be useful 
for me to be able to mark (for all the world to see) which of the 
machines reporting were using supported configurations.  That would 
both show *me* a valid fail and let my users know whether their 
configuration was supported.

The user could still see passes and fails (hey, they could even perhaps 
login and set a preference for what to show.)  And remember that the 
user is also often an author who is considering whether a dependency is 
a good choice for them or not.  If they need to support perl 4 and like 
having wonky vendor perls and purposefully set their clock 5min fast 
and configure CPAN.pm for "prefer_installer = rand", seeing those 
results is helpful -- but seeing my implied "unsupported 
configuration" flag is also helpful.

Now, I'm not really a jerk, so I don't need the thing about the clock.  
I can probably live with false results from some redhat perl for a while 
too.  But I'm going to pretend that we do not have a broken or 
impossible to upgrade toolchain until it comes true.  My users do not 
have to share that delusion, but they do have to humor me.

So, what do you show the author and/or users by default?  That is 
possibly still rather sticky, but making it adjustable (perhaps 
according to the maintainer by default) should allow reasonable people 
to be happy enough.

Re: imaginary Makefile.PL (and scripts)

2008-09-04 Thread Andreas J. Koenig
 On Wed, 3 Sep 2008 13:24:34 -0700, Eric Wilhelm [EMAIL PROTECTED] said:

   That is different than a tarball though.  Does the script installation 
   have to be given up in order to eliminate the ambiguous behavior in the 
   case of a dist tarball?

Good point. I can probably limit it to cases of single-file distros.
I'll look into this.

   On another note about scripts:  sleepserver still never made it into the 
   index despite my reading of mldistwatch and working to try to get the 
   META.yml 'provides' field right.  Is there something that says I have 
   to have a .pm file to get indexed?

There's something that says that 02packages.details.txt is about
namespaces, which implicitly says it is about modules and not about
scripts. Package names within scripts are never exposed (except in
perverse use cases, like a user eval'ing the script inside perl).

   That is, this distribution is:

 bin/sleepserver
 META.yml
 Build.PL
 t/00-load.t

I think this goes beyond the perl-qa agenda. I'll write you separately
about this.

   Which means that it can have dependencies and tests.

  Incidentally, I would love to be able to move forward to the time
  when there is neither Build.PL nor Makefile.PL.

  Hear, hear! :-)

   Is 'dynamic_config: 0' supported?  The Build.PL in the above distro is 
   not really needed.

In CPAN.pm it is supported, yes. In PAUSE I see no use for it,
because PAUSE takes every META.yml as the best it can get.
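For reference, a distribution like the one above could declare static metadata along these lines; the field values here are invented for illustration:

```yaml
# Hypothetical META.yml -- name, versions, and layout are made up.
# dynamic_config: 0 tells a client that the shipped metadata is
# complete, so prereqs can be trusted without running Build.PL.
name: sleepserver
version: 0.01
generated_by: Module::Build version 0.2808
dynamic_config: 0
```

A client honoring the flag could then resolve dependencies straight from the file.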

-- 
andreas


Re: The relation between CPAN Testers and quality (or why CPAN Testers sucks if you don't need it)

2008-09-04 Thread David Golden
On Thu, Sep 4, 2008 at 4:19 AM, Eric Wilhelm [EMAIL PROTECTED] wrote:
Some of these proposals would be easier in CPAN Testers 2.0, which will
provide reports as structured data instead of email text, but if exit
0 is a straw that is breaking the Perl camel's back now, then we
can't ignore 1.0 to work on 2.0 as I'm not sure anyone will care
anymore by the time it's done.

 What are the issues in that time?  Code or non-code?

mostly $job, @family, %life stuff

 Yes.  Further, if you can detect new tools from old, you have enough
 information to filter the results.

 Hah!  You know, you have their e-mail address, right?  You can just send
 them nagging mail about how their tools are FAIL! ;-)

I'd already come to the same realization, actually.

 So, what do you show the author and/or users by default?  That is
 possibly still rather sticky, but making it adjustable (perhaps
 according to the maintainer by default) should allow reasonable people
 to be happy enough.

That's the stuff that I'd probably defer to CPAN Testers 2.0.  Once
test reports are structured data, it will be much easier to filter on
toolchain, environment, etc.  Right now, someone would have to
effectively screen scrape the reports -- with variations for
CPANPLUS, CPAN::Reporter and their different formats over time.  It
could be done -- this is Perl after all -- but that's not where I'm
interested in putting my energies.

-- David


Re: FAIL Error - please fix your smoker configuration

2008-09-04 Thread David Golden
On Thu, Sep 4, 2008 at 4:56 AM, Shlomi Fish [EMAIL PROTECTED] wrote:
 -j2 is invalid for ./Build and you shouldn't use it with it. Alternatively,
 you can use perl Makefile.PL ; make ; , etc., which is also supported by
 the Error distribution.

 But as it stands, you're giving many false positives.

FYI, that's a CPAN.pm bug:

http://rt.cpan.org/Public/Bug/Display.html?id=32823

Fixed in CPAN 1.92_57.

I strongly encourage testers to upgrade to a recent CPAN dev version
if you're going to use -jN flags.

I've added the need to detect and discard that case to the
CPAN::Reporter TODO list.

-- David


Re: The relation between CPAN Testers and quality (or why CPAN Testers sucks if you don't need it)

2008-09-04 Thread Greg Sabino Mullane

Two cents from someone who appreciates the hell out of the CPAN testing
service and eagerly awaits new reports every time I release a new version
of a module.

 However, from author's perspective, if a report is legitimate (and
 assuming they care), they really only need to hear it once.  Having
 more and more testers sending the same FAIL report on platform X is
 overkill and gives yet more encouragement for authors to tune out.
 
 So the more successful CPAN Testers is in attracting new testers, the
 more duplicate FAIL reports authors are likely to receive, which makes
 them less likely to pay attention to them.

Sorry, but paying attention is the author's job. A fail is something that
should be fixed, period, regardless of the number of them. As mentioned
elsewhere, the idea of authors receiving FAIL reports is outdated
anyway: they should be pulling them via an RSS feed.

 First, we can lower our collective tolerance of false positives -- for
 example, stop telling authors to just ignore bogus reports if they
 don't like it and find ways to filter them.

+1

 Second, we can reclassify PL/make/Build fails to UNKNOWN.

I don't like this:  failure by any other name would smell just as bad. In
other words, if an end user is not going to have a happy, functional
module after typing "install Foo::Bar" at the CPAN prompt, this is a failure
that should be noted as such and fixed by the author. Makefiles have a
surprising amount of power and flexibility in this regard.

 However, as long as the CPAN Testers system has individual testers
 emailing authors, there is little we can do to address the problem of
 repetition.

Yep. Use RSS or deal with the duplicates, I say.

 For those who read this to the end, thank you for your attention to
 what is surely becoming a tedious subject.

Thanks for raising it. I honestly feel the problem is not with the testers
or the testing service, but the authors. But perhaps I'm still grumpy from
the slew of modules I've come across on CPAN lately that are popular yet
obviously unmaintained, with bug reports, questions, and unapplied patches
that linger in the RT queues for years. It would be nice if we had some
sort of system that tracked and reported on that.


-- 
Greg Sabino Mullane [EMAIL PROTECTED]
End Point Corporation


signature.asc
Description: PGP signature


Re: The relation between CPAN Testers and quality (or why CPAN Testers sucks if you don't need it)

2008-09-04 Thread Andy Lester


On Sep 4, 2008, at 10:30 AM, Greg Sabino Mullane wrote:


 So the more successful CPAN Testers is in attracting new testers, the
 more duplicate FAIL reports authors are likely to receive, which makes
 them less likely to pay attention to them.

 Sorry, but paying attention is the author's job. A fail is something
 that should be fixed, period, regardless of the number of them.



According to who?  Who's to say what my job as an author is?

--
Andy Lester = [EMAIL PROTECTED] = www.petdance.com = AIM:petdance





Re: The relation between CPAN Testers and quality (or why CPAN Testers sucks if you don't need it)

2008-09-04 Thread chromatic
On Thursday 04 September 2008 01:19:44 Eric Wilhelm wrote:

 Let's pretend that I'm a real jerk of an author and I only care about
 whether my code installs on a perl 5.8.8+ (a *real* perl -- no funky
 vendor patches) with a fully updated and properly configured toolchain
 and the clock set to within 0.65s of NTP time.

Oh great, yet another check I have to add to my Build.PL.  What's the magic 
cantrip for THAT one?

(Why yes, I *have* seen bugs related to time skew on network-mounted paths),
-- c


Re: The relation between CPAN Testers and quality (or why CPAN Testers sucks if you don't need it)

2008-09-04 Thread chromatic
On Thursday 04 September 2008 08:30:19 Greg Sabino Mullane wrote:

 Sorry, but paying attention is the author's job. A fail is something that
 should be fixed, period, regardless of the number of them.

My job is editor, not programmer.  Also novelist -- but again, not programmer.  
Certainly not CPAN programmer.

I volunteer as a programmer, but your goals don't automatically become my 
goals because you want them to become my goals.  They only become my goals if 
we reach some sort of contractual arrangement with mutual consideration, and 
then only for the scope of the contract.

Paying attention is not my job.  Releasing software I've written under a free 
and open license does not instill in me the obligation to jump whenever a 
user snaps his or her fingers.

I may do so because I take the quality and utility of my software seriously, 
but do not mistake that for anything which may instill in you any sort of 
entitlement.  That is an excellent way not to get what you want from me.

 I don't like this:  failure by any other name would smell just as bad. In
 other words, if an end user is not going to have a happy, functional
 module after typing install Foo::Bar at the CPAN prompt, this is a failure
 that should be noted as such and fixed by the author.

Then CPAN Testers reports should come with login instructions so that I can 
resurrect decade-old skills and perform system administration to fix broken 
installations and configurations -- oh, and as you say, a truly *modern* 
reporting system should publish these logins and passwords publicly in 
syndication feeds.

(I do see a couple of problems with that idea, however.)

I fail to understand the mechanism by which CPAN Testers has seemingly removed 
the ability of testers to report bugs to the correct places.  For example, 
consider Shlomi's earlier message suggesting that a misconfigured CPAN 
configuration (caused by a bug in CPAN.pm) was the cause of FAIL reports for 
one of Shlomi's distributions.  (I saw and responded to a similar report 
earlier this week.)

Yes, there was a bug and yes, it's important to get it fixed.

However, by what possible logic can you conclude that the appropriate way to 
get that bug fixed is to report it to people who, given all of the 
information detected automatically, *do not* maintain CPAN.pm?

Oh, perhaps you think, it's easy for them to read the reports and diagnose 
the problem remotely on machines they have never seen before, did not 
configure, and cannot probe -- and it's so easy for them to file a bug in the 
right place!  If you don't think that, precisely what *do* you think to 
produce such a bold assertion that it is Shlomi's job to install and 
reconfigure a new version of CPAN.pm for a CPAN Tester -- or for that matter, 
everyone with a misconfigured version of CPAN.pm which contains this bug?

It's not my job to fix bugs in *my own* distributions.  I do it because I 
care about quality and, contrary to what appears to be near-universal belief 
around here, I care that people can use my code.

It is most assuredly not my job to fix bugs in distributions I've never 
maintained, nor report bugs in distributions I may not use, nor report bugs I 
haven't encountered and diagnosed myself.

(The parallels to challenge-response email systems amuse me.)

 Thanks for raising it. I honestly feel the problem is not with the testers
 or the testing service, but the authors. But perhaps I'm still grumpy from
 the slew of modules I've come across on CPAN lately that are popular yet
 obviously unmaintained, with bug reports, questions, and unapplied patches
 that linger in the RT queues for years. It would be nice if we had some
 sort of system that tracked and reported on that.

Besides rt.cpan.org?

-- c


Re: The relation between CPAN Testers and quality (or why CPAN Testers sucks if you don't need it)

2008-09-04 Thread David Cantrell
On Thu, Sep 04, 2008 at 10:09:20AM -0700, chromatic wrote:

 I fail to understand ...

that much is obvious

 ... the mechanism by which CPAN Testers has seemingly removed
 the ability of testers to report bugs to the correct places.

What a lovely straw man!  Even nicer than the previous one.

   For example, 
 consider Shlomi's earlier message suggesting that a misconfigured CPAN 
 configuration (caused by a bug in CPAN.pm) was the cause of FAIL reports for 
 one of Shlomi's distributions.  (I saw and responded to a similar report 
 earlier this week.)

Change the record, please.  This one's getting boring.

Maybe I should start being equally loud and obnoxious about obviously
stupid and broken things like the existence of UNIVERSAL-isa.  It might
give you some appreciation for how you're coming across here.

-- 
David Cantrell | Enforcer, South London Linguistic Massive

Perl: the only language that makes Welsh look acceptable


Re: The relation between CPAN Testers and quality (or why CPAN Testers sucks if you don't need it)

2008-09-04 Thread Greg Sabino Mullane
  Sorry, but paying attention is the author's job. A fail is something  
  that should be fixed, period, regardless of the number of them.
 
 According to who?  Who's to say what my job as an author is?

Obviously I should be semantically careful: "job" and "author" are
overloaded words. How about this:

It's a general expectation among users of Perl that module maintainers are
interested in maintaining their modules, and that said maintainers will try
their best to remove any failing tests (that are under their power to do
so).

The parenthetical bit at the end is in response to the broken-CPAN straw
man argument. Obviously (rare) things like that are out of control of the
author, along with bugs in any other dependencies, OS utility, etc.

I recognize that CPAN is a volunteer effort, but it does seem to me there
is an implicit responsibility on the part of the author to maintain the
module going forward, or to pass the baton to someone else. Call it a Best
Practice, if you will. The end-user simply wants the module to work.
Maintainers not paying attention, and the subsequent bitrot that is
appearing on CPAN, is one of Perl's biggest problems at the moment.
Careful attention and responsiveness to CPAN testers and to rt.cpan.org is
the best cure for this.


-- 
Greg Sabino Mullane [EMAIL PROTECTED]
End Point Corporation




Re: The relation between CPAN Testers and quality (or why CPAN Testers sucks if you don't need it)

2008-09-04 Thread Andy Lester


On Sep 4, 2008, at 12:50 PM, David Cantrell wrote:



 I fail to understand ...

 that much is obvious

And here we have the core problem.  chromatic, among others, has
expressed frustration about CPAN Testers.  The reaction has never been
positive.  Here, chromatic is insulted for simply saying "I would like
it if..."



--
Andy Lester = [EMAIL PROTECTED] = www.petdance.com = AIM:petdance





Re: The relation between CPAN Testers and quality (or why CPAN Testers sucks if you don't need it)

2008-09-04 Thread David Golden
On Thu, Sep 4, 2008 at 1:09 PM, chromatic [EMAIL PROTECTED] wrote:
 I fail to understand the mechanism by which CPAN Testers has seemingly removed
 the ability of testers to report bugs to the correct places.  For example,

I think it's a mistake to set this up as just an author-vs-tester
zero-sum situation.  I see this as a team game where (hopefully) we're
all pretty much on the same side in that we want stuff to work.

It shouldn't be any big deal to report a failure -- once -- to an
author.  That's just the normal bug-report cycle as an author might
get from any human user.  Author can look into it (if they care to),
decide if it's a legitimate bug of theirs or if it's upstream.  In
some cases upstream is the toolchain.  In other cases, it's
dependencies.

For example, RJBS noted how a CPAN Testers report helped him find a
dependency issue:  http://use.perl.org/~rjbs/journal/37336

The difference with toolchain-driven failures is that authors often
can't really do anything about them directly.  They can add
workarounds, yes, but not all authors want to spend their time doing
that.  (Nor should we expect them to, really.)  Usually, the easiest
answer is for the end-user to upgrade their toolchain.

That's why we need to partition the failure types, so that authors can
distinguish between test failures -- which we presume they are likely
to be interested in addressing -- instead of PL/make failures, which
they may not be interested in addressing.  It's not to say that one is
less of a problem for end-users.  Using UNKNOWN is just convenient at
the moment because it exists already and is minimally used.

That said, it should still be responsibility of testers to ensure they
have a reasonably sane configuration that could potentially be
successful at building and testing a distribution.  It does very
little good to have a broken CPAN that causes Build -j3 errors -- no
Build.PL could ever succeed and so the fact that a Build.PL dist
failed isn't telling us anything valuable about the distribution.

-- David


Re: The relation between CPAN Testers and quality (or why CPAN Testers sucks if you don't need it)

2008-09-04 Thread Greg Sabino Mullane

 I may do so because I take the quality and utility of my software
 seriously, but do not mistake that for anything which may instill in you
 any sort of entitlement.  That is an excellent way not to get what you
 want from me.

It's not an entitlement, it's a shared goal of making Perl better. If a
maintainer is going to ignore test reports, perhaps it's time to add a
co-maintainer.

 Then CPAN Testers reports should come with login instructions so that I
 can resurrect decade-old skills and perform system administration to fix
 broken installations and configurations -- oh, and as you say, a truly
 *modern* reporting system should publish these logins and passwords
 publicly in syndication feeds.

 (I do see a couple of problems with that idea, however.)

Besides the number of straw men starting to fill the room?

 However, by what possible logic can you conclude that the appropriate
 way to get that bug fixed is to report it to people who, given all of
 the information detected automatically, *do not* maintain CPAN.pm?

That's not my argument at all. But the great majority of
non-CPAN.pm-editing build errors can be fixed by the maintainer. Or at
least routed around for testing purposes.

 Oh, perhaps you think, it's easy for them to read the reports and
 diagnose the problem remotely on machines they have never seen before,
 did not configure, and cannot probe -- and it's so easy for them to file
 a bug in the right place!

I don't think this is easy at all. But it's also not quite the burden you
make it appear. All the testers I've contacted about helping me fix test
failures (build or otherwise) have been friendly and responsive.

  and unapplied patches that linger in the RT queues for years. It would
  be nice if we had some sort of system that tracked and reported on
  that.
 
 Besides rt.cpan.org?

Yes, something that indicates the age and number of open bugs for a
module, the age of any unapplied patches, and perhaps some other metrics
to indicate maintainedness. Cross referencing that with popularity and
dependency chains would be a great triage system to start whipping CPAN
into shape.


-- 
Greg Sabino Mullane [EMAIL PROTECTED]
End Point Corporation




Re: The relation between CPAN Testers and quality (or why CPAN Testers sucks if you don't need it)

2008-09-04 Thread chromatic
On Thursday 04 September 2008 11:30:51 David Golden wrote:

 It shouldn't be any big deal to report a failure -- once -- to an
 author.  That's just the normal bug-report cycle as an author might
 get from any human user.  Author can look into it (if they care to),
 decide if it's a legitimate bug of theirs or if it's upstream.  In
 some cases upstream is the toolchain.  In other cases, it's
 dependencies.

I can live with occasionally having to triage tricky reports which may need 
resolution elsewhere, provided that:

* CPAN Testers clients filter out the most common cases of toolchain 
misconfiguration (in progress, probably will require ongoing maintenance)

* Duplicate reports get filtered out (in planning)

* CPAN Testers appear to have done some triaging of failures on their own (at 
least reading the report and deciding if it's appropriate to send to the 
author before sending it)

These would make the process more worthwhile to me.  I only speak for myself 
here, of course, and you're well within your rights to ignore my suggestions 
or desires here.

 That said, it should still be responsibility of testers to ensure they
 have a reasonably sane configuration that could potentially be
 successful at building and testing a distribution.  It does very
 little good to have a broken CPAN that causes Build -j3 errors -- no
 Build.PL could ever succeed and so the fact that a Build.PL dist
 failed isn't telling us anything valuable about the distribution.

Yes!  That's the philosophy I want applied to all sorts of tests.  Does this 
test tell me anything valuable?  Does this test tell me anything actionable?  
Is that information worth the cost of the test?

(Don't worry; this is not a problem specific to CPAN Testers.  I see it in 
plenty of test suites, all the time.)

-- c


Re: The relation between CPAN Testers and quality (or why CPAN Testers sucks if you don't need it)

2008-09-04 Thread Andrew Moore
Hi Andy -

On Thu, Sep 4, 2008 at 1:53 PM, Andy Lester [EMAIL PROTECTED] wrote:
 Why should I release my software on CPAN if part of the price of entry is
 being spammed and told what I should be doing?

Although the remark about CPAN authors' jobs was worded less than
optimally, part of the context of it was that these reports should not
get emailed directly to the authors from the testers. That sounds like
a point of very widespread agreement. It allows interested authors and
interested users to optionally check on test results.

It also sounds like there is work being done to cut down on some of
the toolchain problems that get reported as failures.

Do these two things help make the CPAN Testers stuff more useful or at
least less annoying for you?

-Andy


Re: The relation between CPAN Testers and quality (or why CPAN Testers sucks if you don't need it)

2008-09-04 Thread Andy Lester


On Sep 4, 2008, at 2:11 PM, Andrew Moore wrote:


Do these two things help make the CPAN Testers stuff more useful or at
least less annoying for you?



The only thing that will make CPAN Testers less annoying at this point  
is if I am ASKED WHAT I WANT, instead of being told "Here's what we're  
doing and dammit, you should like it!"


It is a problem of attitude.  Who is serving whom?  Who is the customer?

xoxo,
Andy

--
Andy Lester = [EMAIL PROTECTED] = www.petdance.com = AIM:petdance





What do you want? (was Re: The relation between CPAN Testers and quality)

2008-09-04 Thread David Golden
On Thu, Sep 4, 2008 at 3:21 PM, Andy Lester [EMAIL PROTECTED] wrote:
 The only thing that will make CPAN Testers less annoying at this point is if
 I am ASKED WHAT I WANT, instead of being told "Here's what we're doing and
 dammit, you should like it!"

Andy,

What do you want?

More precisely, since we do have a starting point, what do you want
from distributed, volunteer testing of CPAN distributions across
multiple architectures and versions of perl?

I'm not being snide.  I've heard what you don't want.  I hope that you
see that there is interest in making things better.

So what would you find useful from a project like this?

-- David


Re: The relation between CPAN Testers and quality (or why CPAN Testers sucks if you don't need it)

2008-09-04 Thread Andrew Moore
On Thu, Sep 4, 2008 at 2:21 PM, Andy Lester [EMAIL PROTECTED] wrote:
 The only thing that will make CPAN Testers less annoying at this point is if
 I am ASKED WHAT I WANT, instead of being told "Here's what we're doing and
 dammit, you should like it!"

You're right, Andy. I was being sort of vague there.

What do you want?

Do you see something (that hasn't already been discussed here in the
last two days) that the people writing this code could do to make it
more useful for you?

Please note: I have never contributed anything to the CPAN testing
project other than a few reports. It looks to me like there's a bit of
a communication gap here that I'm hoping to fill for just a moment.
There's a lot of people talking past each other instead of with each
other. The contributors appear to be interested in making a useful
tool, though, and it sounds like we're making progress.

-Andy


Re: The relation between CPAN Testers and quality (or why CPAN Testers sucks if you don't need it)

2008-09-04 Thread Bram

Quoting Andy Lester [EMAIL PROTECTED]:



On Sep 4, 2008, at 1:33 PM, Greg Sabino Mullane wrote:


It's not an entitlement, it's a shared goal of making Perl better. If a
maintainer is going to ignore test reports, perhaps it's time to add a
co-maintainer.



Yes, something that indicates the age and number of open bugs for a
module, the age of any unapplied patches, and perhaps some other metrics
to indicate maintainedness. Cross referencing that with popularity and
dependency chains would be a great triage system to start whipping CPAN
into shape.



Maybe what's so frustrating to me, and perhaps to chromatic, and
whoever else ignores CPAN Testers but doesn't discuss it, is that we're
being fed things that we should be thankful for and goddammit why
aren't we appreciative??!?


You shouldn't be thankful or appreciative. (IMHO)
The community should be (and is) thankful that you chose to put your  
code out there for everyone to use and improve.


Automated testing is easy and (I believe) isn't that time consuming.
Writing code, solving bugs, adding features and making the code more  
available (platform/configuration independent) isn't easy and  
certainly can consume lots of time.


Thanks should go to the person that spends/spent the most time on  
this.  Meaning the author and contributors.




Here are the things that we have determined are quality.

Here are test reports reporting on failures for these things that we
care about you caring about.

Maybe you should add a co-maintainer.

Your responsibility as an author is to...


It's obvious that the main idea behind these messages is to improve  
the quality of the distributions on CPAN.


What the messages are ignoring is that it isn't (IMHO) the task/job of  
the author.


In a perfect world each FAIL report would come with a patch attached  
that the author can choose to apply. Unfortunately we are not living in  
a perfect world...



Is a FAIL under configuration XYZ a problem?
To some (real) users it certainly can be.
But they have three choices:
- Find the source of the problem and fix it themselves (and contribute a patch)
- Contact the author asking for help
- Look for another solution



CPAN Testers is entirely based on the concept of *unsolicited advice*
in the name of helping the author.


In essence it is (IMHO) about helping the users of the module and not  
the author.


The goal is to make the module more available.
If the author of the module chooses to do so then that is great!

If the author does not then that is perfectly fine and then the FAILs  
still serve a purpose:
- someone else might care a lot about your module and, noticing the  
FAIL reports, start patching
- the users are (in theory) informed that the module is not expected  
to work under configuration XYZ.


(Obviously it would be best that the author in that case isn't spammed  
with FAIL reports but I believe work on that is in progress.)



From a beautiful article I've
reproduced at http://xoa.petdance.com/Unsolicited_advice

== snip ==

Life's little helpers reason that the first step toward improvement is
the realization that things need to be improved. That is why they feel
justified in approaching you when you are perfectly content in order to
point out that everything you do, eat, and love is a dreadful mistake.
Because they themselves are so full of good wishes for the rest of
humanity, they do not expect their beneficiaries to be petty. They
figure that upon being told how you have mismanaged your life, you will
be grateful for the offer of assistance and reassured that others are
watching out for you. It stands to reason that one who obviously does
not know what is best for himself would be relieved to find that others
are willing to take on that responsibility.

After all, they don't just stop after telling you what is wrong, but
always go on to explain in detail how you can do things the way they do
them. In other words, the right way.

== snip ==

And then, when we say "OK, I'm not interested in stopping smoking,
losing weight, or checking for Perl version 5.6.1," we're told how full
of shit we are.


Which is a pity. Given that you already did a great job by making your  
code available to everyone.



Why should I release my software on CPAN if part of the price of entry
is being spammed and told what I should be doing?


People will always complain. It's a lot easier to complain that the  
author isn't doing their 'job' 'correctly' than to help the author out  
by sending patches.


What you shouldn't forget is you are hearing much less from the many  
many users that are using your software and are very grateful for and  
happy with it.




Kind regards,

Bram




Reporting Bugs Where they Belong (was Re: The relation between CPAN Testers and quality)

2008-09-04 Thread chromatic
On Thursday 04 September 2008 10:50:37 David Cantrell wrote:

 Maybe I should start being equally loud and obnoxious about obviously
 stupid and broken things like the existence of UNIVERSAL-isa.  It might
 give you some appreciation for how you're coming across here.

UNIVERSAL::isa and UNIVERSAL::can are examples of applying the design 
principle of Report Bugs Where They Are, Not Where They Appear.

(Anyone who's debugged memory corruption problems in C or C++ should recognize 
this principle.)

I've received far too many bug reports for Test::MockObject where other code 
elsewhere used methods as functions to perform type checking.  I don't have 
the time, nor power, nor inclination to file the appropriate bugs for all of 
the CPAN, much less the DarkPAN, nor wait for fixes to all of those bugs and 
upgrades to all of the appropriate distributions.
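For readers unfamiliar with the method-as-function bug chromatic is describing, here is a minimal sketch (the class names are made up; this is not code from Test::MockObject):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# A proxy/mock object overrides isa() to claim it fulfills an interface.
package Mock;
sub new { bless {}, shift }

sub isa {
    my ($self, $class) = @_;
    return 1 if $class eq 'Real::Thing';   # the mock claims this interface
    return $self->SUPER::isa($class);
}

package main;
my $obj = Mock->new;

# Correct: the method call honors Mock's override.
my $method   = $obj->isa('Real::Thing')            ? "yes" : "no";
# Buggy: calling UNIVERSAL::isa() as a plain function bypasses the override.
my $function = UNIVERSAL::isa($obj, 'Real::Thing') ? "yes" : "no";

print "method call: $method, function call: $function\n";
```

The method call answers yes while the function call answers no -- exactly the kind of false negative in *other* code that gets reported against the mocking module instead of at its point of origin.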

Thus, U::i and U::c work around the problem by detecting the failure 
condition, revising it to work correctly, and reporting the bug at its point 
of origin.  (Earlier versions had one tremendous flaw in that they reported 
all *potential* failures, rather than actual actionable failures explicitly 
worked around.  This was a huge mistake to which I clung stubbornly for 
far too long, and I've corrected it in recent versions.  However good my 
intentions in maintaining that feature, the effects worked against my goals.)

Hiding bugs doesn't get bugs fixed.  It only multiplies hacky workarounds.  
(Yes, U::i and U::c are hacky workarounds I'd love never to have to use.)  
Adding an extra step to the bug triaging and reporting process doesn't get 
bugs fixed either.

*That* is why I consider this philosophy important.  It's a question of 
spending limited time and resources where they're most effective.

As an aside, I'm open to the idea of loading U::i and U::c from T::MO only 
when people request a debugging mode, such as:

use Test::MockObject ':debug';

... but my concern is that no matter how well I document the idea that, if 
T::MO and T::MO::E appear not to work correctly, there may be 
method-as-function bugs causing the problem, I'll again get a flurry of bug 
reports that I'll have to shunt to other distributions.  More likely, they'll 
linger in a mail folder for a while and I'll delete them, months later.  I am 
*not* the person you want reporting bugs that don't affect me.  Attempts to 
make them affect me do not work.  That's just how my brain works.

-- c


Re: testers, authors, qa and making it all worthwhile

2008-09-04 Thread Eric Wilhelm
# from David Golden
# on Thursday 04 September 2008 11:30:

On Thu, Sep 4, 2008 at 1:09 PM, chromatic [EMAIL PROTECTED] wrote:
 I fail to understand the mechanism by which CPAN Testers has
 seemingly removed the ability of testers to report bugs to the
 correct places.  For example,

I think it's a mistake to set this up as just an author-vs-tester
zero-sum situation.  I see this as a team game where (hopefully) we're
all pretty much on the same side in that we want stuff to work.

Yes.  Thank you.

Having read a bit of the cpantesters archive, I now understand more 
about where everyone is coming from.

We all have to realize that this is a team game and that there are 
different interests and different camps.  The rules may at times be 
similar to a workplace, but if anyone is being paid here, they are 
almost certainly being paid to play some *other* game.  If this were a 
workplace, perhaps we would have all been fired long ago.  If testers 
tell development to do something and development tells the testers 
"no", and this goes on for years - the boss is going to call us all into 
the meeting room one morning and I think you can all imagine how angry 
she will be.

The same goes for testers being told to do something.  If the developers 
are running on a new toolchain, testers refusing to use it are not 
serving the team.  If we do have a customer running on an old 
toolchain, that's probably the integration department's problem and the 
development department should expect integration to do the work to 
backport and support those targets.

You might also note that a productive and functional team has a qa 
department checking the output of automated smoke tests and working to 
make this more useful to the developers.

But we don't have a boss here, so the developers get to decide what they 
want to do and the testers get to decide what they want to test, and 
does anybody know where the integration department is?

As for complaints:  Please don't get discouraged when receiving them and 
try to be constructive when sending them.  Complaints are useful - they 
tell us that something is wrong.  Now, I think a lot of the conflicts 
come from the fact that the cpantesters have robots to do their 
complaining in a massively parallel distributed way, whereas authors 
have to take time out of their coding to write individual handcrafted 
complaints about the robotic complaints.

I *could* write a program to complain back at every test report which 
doesn't have Module::Build installed or has the client configured to 
run Makefile.PL first, etc.  Would that be useful?  Probably not.

But I think we do need some system to register complaints about the 
complaints.  For example, the CPAN.pm -j 2 thing has probably resulted 
in a lot of spurious FAIL reports on various distributions which are 
not CPAN.pm.  How can we track those down and draw a line through them?  
Is this something that an awesome volunteer could do to watch for these 
things which would make the automated test output more useful?  Is 
there something that could be done to automate and distribute this sort 
of issue tracking and resolution?  Would that cut down on the false 
FAIL?  How about checking that all PASSes are truly passes?

In short, there is lots that could be done.  None of it is anybody's 
job, there is no one boss to hold anyone accountable, etc.  There will 
be no whipping or firing.  There will be lots of complaining, lots of 
good work, plenty of thankless work all around, some bad code, some bad 
decisions, some runaway bots, and probably even some well-written, 
well-tested, useful code.

I too want to improve the CPAN, and my particular bent is toward making 
APIs that make it easier to write good code and finding ways to do new 
and interesting things.  These are things that I personally enjoy doing 
and if I'm any good at it, maybe someone else can get some benefit from 
it.  Everything else is a yak to shave.  If you personally enjoy 
tackling what I consider a yak, I would be more than happy to see it 
shaven.  Putting a pile of yaks in front of someone else results in 
dead yaks instead of shaven ones.

Thanks,
Eric
-- 
Minus 1 million points for not setting your clock forward and trying the
code.
--Michael Schwern
---
http://scratchcomputing.com
---


Re: The relation between CPAN Testers and quality (or why CPAN Testers sucks if you don't need it)

2008-09-04 Thread David Cantrell
On Thu, Sep 04, 2008 at 06:50:37PM +0100, David Cantrell wrote:
 On Thu, Sep 04, 2008 at 10:09:20AM -0700, chromatic wrote:
  I fail to understand ...
 that much is obvious
 [etc]

My apologies chromatic, I shouldn't have lost my temper and said that.

-- 
David Cantrell | London Perl Mongers Deputy Chief Heretic

Planckton: n, the smallest possible living thing


revised meta.yml stats

2008-09-04 Thread Eric Wilhelm
Hi all,

I just realized that my meta.yml extractor was counting each dist once 
for every module in the dist (just going through 02packages.details.txt 
line by line = duh!)
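The fix amounts to de-duplicating on the distribution path rather than counting one hit per module line. A rough sketch of the idea (the sample lines are made up, but follow the 02packages.details.txt column layout of module, version, dist path):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Two modules shipped in the same dist must count as one dist, not two.
my @lines = (
    'Foo::Bar      1.00 A/AA/AAA/Foo-Bar-1.00.tar.gz',
    'Foo::Bar::Baz 1.00 A/AA/AAA/Foo-Bar-1.00.tar.gz',
    'Quux          0.01 B/BB/BBB/Quux-0.01.tar.gz',
);

my %seen;
for my $line (@lines) {
    # third whitespace-separated column is the dist path
    my (undef, undef, $dist) = split /\s+/, $line;
    $seen{$dist}++ if defined $dist;
}

print scalar(keys %seen), " dists, not ", scalar(@lines), " modules\n";
```

Keying a hash on the dist path collapses the per-module duplicates, which is the over-counting Eric describes.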

Has anyone seen Randy Sims?  His site had something like this at one 
point, but thepierianspring.org = ENOIP!

So, my hacky code if you want it:
   http://scratchcomputing.com/tmp/mb.metacheck.pl

Anyway, the numbers...
  (I'm stripping the versions and lowercasing for purposes of grouping.)
  (e-mail addresses removed -- where did that meme come from?)



11029 with 'generated_by' set in META.yml files of 16073 dists

extutils::makemaker: 6684
module::build: 2286
module::install: 1893
hand 1.0: 30
extutils::my_metafile: 22
jan dubois: 22
hand: 22
humans: 6
dist::zilla::plugin::metayaml: 5
vim: 3
author: 3
mike eldridge : 3
randy kobes: 3
h2xs: 3
hands: 2
dumpfile of yaml perl module: 2
docmaker: 2
build/version_check.pl: 2
e pkg v1.0.75l6d1c: 2
mark shelor: 2
denis petrov: 1
hand!: 1
curve::hands: 1
/usr/bin/vim: 1
dicop-server: 1
martin becker: 1
math-fractal-mandelbrot: 1
mischa poslawsky : 1
devel-size-report: 1
dicop-proxy: 1
dicop-base: 1
math-bigint-gmp: 1
hacked manually: 1
imager: 1
rick measham : 1
graph-usage: 1
sanko robinson : 1
kostas pentikousis: 1
mathew robertson: 1
makemaker at first, then by hand: 1
extutils::modulemaker 0.49: 1
jonathan leffler: 1
scratch: 1
graph-dependency: 1
andrew j. korty : 1
set-light: 1
convert-wiki: 1
slaven rezic: 1
mattia barbon, by hand: 1
geoff richards: 1
lingua-en-squeeze: 1
./util/write_yaml: 1
-- 
Left to themselves, things tend to go from bad to worse.
--Murphy's Corollary
---
http://scratchcomputing.com
---


Re: The relation between CPAN Testers and quality (or why CPAN Testers sucks if you don't need it)

2008-09-04 Thread David Cantrell
On Thu, Sep 04, 2008 at 02:21:23PM -0500, Andy Lester wrote:
 On Sep 4, 2008, at 2:11 PM, Andrew Moore wrote:
 Do these two things help make the CPAN Testers stuff more useful or at
 least less annoying for you?
 The only thing that will make CPAN Testers less annoying at this point  
 is if I am ASKED WHAT I WANT, instead of being told "Here's what we're  
 doing and dammit, you should like it!"

You've already made it perfectly clear to me that you have no interest
in receiving reports at present.  That's why I stopped sending them to
you.

I'll do the same for anyone.  You don't have to like it.  What you do
have to accept, however, is that like just about everything else perlish
the systems in place grew in an ad-hoc fashion because us perl
programmers have an annoying tendency to just fucking do it.  We see a
problem, we solve it.  We see something that we think needs doing, we do
it.  And then people build further ad-hoccery on top of that, and so on.
That's why we have the regrettable situation *currently* that you are
asked to opt out instead of opting in.  Sorry, but that's the way it is
at the moment.

Want another example of ad-hoc JFDI being bad?  Schwern can break
CPAN.  He's done it at least once recently.  It got fixed because the
people affected told him.  It would have been better if he'd passed his
changes to a release manager who would make sure they were all
thoroughly tested against the entire CPAN before release.

You may have noticed that when the CPAN-testers screw up *and we're told
about it* things get fixed too.  If I send a bogus report, I want to
know, and then if it's within my power I'll fix it and if it's not I'll
pass it up the chain to the right person.

What I'm not willing to do, however, is to manually check every report
and ensure perfection that way.  Why?  Because it takes too long, and I
have a job and a life.  And anyway, I'd still make mistakes - and even if
I don't make mistakes, people will still think I have.  Everyone makes
mistakes when they're doing a boring job, doubly so without the prospect
of reward.  So anyone who insists that I read every report I send to them
will just get no reports from me.

Again, all you have to do to stop me sending you reports is *tell me to
stop sending you reports*.

-- 
David Cantrell | Nth greatest programmer in the world

comparative and superlative explained:

Huhn worse, worser, worsest, worsted, wasted


Devel::Cover: metadata about files being reported on

2008-09-04 Thread James E Keenan
At 
http://thenceforward.net/parrot/coverage/configure-build/coverage.html I 
have for over a year displayed the results of coverage analysis on 
Parrot's configuration and build tools.


I have come to realize that while these reports are very useful for me 
as the maintainer of the Perl 5 aspect of these tools, they may be less 
useful for other Parrot developers because they display no metadata 
about the coverage run.  They do not, for example, display:


* the SVN revision number at which coverage was run
* which branch of the repository was being tested (if not trunk)
* the date and time coverage was run (though this is available if you go 
one directory level up)


Furthermore, suppose I wanted to display the coverage for a CPAN module. 
 It would be useful to display the module's official version number.


I could, I suppose, modify the shell script that runs my coverage 
analysis to also create a plain-text metadata file in the same directory 
as the .html files.  But that would require a user to do browser 
navigation.  Not elegant.


Are there any options currently available that would achieve these 
objectives?  Or do these go on the D::C wishlist?


Thank you very much.
Jim Keenan


Re: What do you want? (was Re: The relation between CPAN Testers and quality)

2008-09-04 Thread Andy Lester


On Sep 4, 2008, at 2:27 PM, David Golden wrote:


I'm not being snide.  I've heard what you don't want.  I hope that you
see that there is interest in making things better.



In no particular order:

I want nothing in my inbox that I have not explicitly requested.

I want to choose how I get reports, if at all, and at what frequency.

I want aggregation of reports, so that when I send out a module with a  
missing dependency in the Makefile.PL, I don't get a dozen failures in  
a day.  (Related, but not a want of mine: it could aggregate by  
platform so I could see if I had patterns of failure in my code.)


I want to be able to sign up for some of this, some of that, on some  
of those platforms.


I want suggestions, not mandates, in how I might improve my code.  I  
want explanations on my CPAN Testers dashboard that explains why I  
would be interested in having such-and-such an option checked on my  
distributions.  See how the Perl::Critic policies have explanations of  
the problem, and why it can be a problem, in the docs for the code.


I want CPAN Testers to be as flexible as Perl::Critic, and even easier  
to do that flexing.


I want the understanding that not everyone shares the same coding  
ideals.


I want to select what kwalitee benchmarks I choose my code to be  
verified under, so that I can proudly say "My modules meet these  
criteria across these platforms."  I want a couple dozen checkboxes of  
things that could be checked where I say "All my modules had better  
match tests X, Y and Z, and these specific modules had also better pass  
A, B and C, too."


I want easily selected Kwalitee settings which group together  
options.  "Slacker" level means you pass these 10 tests, "Lifeguard"  
level means you are Slacker + these other 15 tests, and "Stringent"  
level means something else, all the way up to "Super Duper Batshit  
Crazy Anal Perfection" level.


I want CPAN Testers to do what I can not easily do, which is test my  
code on other platforms on other versions of Perl.


I do NOT want CPAN Testers to do what I could easily do if I wanted,  
but do not, which is run tests that I don't care about.


I want CPAN Testers to be a service where people say "Hey, have you  
seen CPAN Testers?  You've got to check it out, it will help out your  
code so much" and then they tell their friends and they tell their  
friends, and passing a certain battery of CPAN Testers tests  
consistently is a badge of honor.


I want the Ruby guys to go "holy shit, I wish we had something like that."

xoxo,
Andy


--
Andy Lester = [EMAIL PROTECTED] = www.petdance.com = AIM:petdance