Re: TestSimple/More/Builder in JavaScript

2005-04-07 Thread David Wheeler
On Apr 7, 2005, at 5:55 PM, Michael G Schwern wrote:
> If you have isDeeply() there's little point to the eq* salad.
Hrm, fair enough. I'll comment them out, then...
Cheers,
David


Re: TestSimple/More/Builder in JavaScript

2005-04-07 Thread Michael G Schwern
On Thu, Apr 07, 2005 at 04:17:03PM -0700, David Wheeler wrote:
> Well, right now, isDeeply() should do the right thing. I could just 
> comment out the eqArray() and eqAssoc() functions, or make them tests, 
> too. That'd be pretty easy to do, actually.

If you have isDeeply() there's little point to the eq* salad.



Re: TestSimple/More/Builder in JavaScript

2005-04-07 Thread David Wheeler
On Apr 7, 2005, at 1:40 PM, Michael G Schwern wrote:
> Zee goggles, zey do nothing!!!
I thought I eliminated the radiation...
> Not so different, that's what I would have done were it not for the fact
> that it alters caller().  If Javascript has no such problems then do it,
> but I suspect it does.
I have no idea whether JavaScript even has the concept of caller(). I 
need to find that out, though, because right now all errors are 
reported in package '', file '', line 0. I'd really love to know how to 
get at that data.
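
One avenue I mean to explore is Mozilla's non-standard Error.stack
property. This is purely a sketch under that assumption -- callerInfo()
is a hypothetical helper, and the frame format is Mozilla-specific:

  // Hypothetical helper; assumes Mozilla's non-standard Error.stack,
  // where each frame looks like "funcName@fileURL:lineNo".
  function callerInfo (level) {
      var err = new Error();
      if (!err.stack) return { file: '', line: 0 };     // no support; punt
      var frames = err.stack.split("\n");
      var frame  = frames[level + 1] || '';             // 0 = our caller
      var match  = /@(.*):(\d+)/.exec(frame);
      return match ? { file: match[1], line: parseInt(match[2], 10) }
                   : { file: '', line: 0 };
  }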

> if( !condition ) {
>     skip(because, number);
> }
> else {
>     ...test...
> }
Makes sense; that's what Ovid suggested, too. So I'll probably do that.
> Could object destruction be used somehow?
That'd be nice, but I can't see how to plug into that in JavaScript. 
Besides, I think that global objects are only destroyed when you leave 
a page, not when code finishes executing.
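
The onload idea from my announcement still seems the most plausible
fallback. A minimal sketch, assuming the test page doesn't reassign the
handler later:

  // Sketch: finish the run when the page loads, chaining any handler
  // the test page installed before us.
  var previousOnload = window.onload;
  window.onload = function () {
      if (previousOnload) previousOnload();
      Test._ending(); // from the port; checks the plan, emits the summary
  };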

> My only other note in glancing over this is to not make the mistake of
> parroting eq*, i.e. functions which are not tests.  Proceed directly to
> isDeeply() if possible.  If it's not possible, if the function prototypes
> in Javascript won't allow something like isDeeply() then make the eq*
> functions real tests such that later isDeeply() style diagnostics can be
> added.
Well, right now, isDeeply() should do the right thing. I could just 
comment out the eqArray() and eqAssoc() functions, or make them tests, 
too. That'd be pretty easy to do, actually.

Regards,
David


Re: TestSimple/More/Builder in JavaScript

2005-04-07 Thread David Wheeler
On Apr 7, 2005, at 12:46 PM, Ovid wrote:
> Great work!
Thanks.
> Output them to a Results object which, by default, sends the output to
> document.write() but allows the user to redirect the output.  For
> example, it might be nice to have test results pop up in a separate
> window while the main page loads.
Hrm. I'll give that some thought. Not sure if it needs an object other 
than a function reference, though...
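
For what it's worth, since output functions are easy to replace in
JavaScript, the function-reference version might be as simple as this
sketch (Test.output is a hypothetical name, not the current API):

  // Hypothetical output hook; defaults to writing into the document.
  Test.output = function (msg) { document.write(msg + "<br />") };

  // A test page could redirect everything into a popup instead:
  var results = window.open('', 'results');
  Test.output = function (msg) { results.document.write(msg + "<br />") };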

> How about:
>
>   if (some_condition) {
>     // do your tests
>   }
>   else {
>     skip(number_of_tests)
>   }
That'd be easy enough.
> And:
>
>   while(todo(4)) { // sets an internal todo counter
>     // each test automagically decrements the todo counter
>     // ... but this should be transparent to the test code
>   }
A little tricky, but do-able, I think.
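Something like this sketch, maybe -- assuming ok() and friends decrement
the counter and mark their results TODO while it's armed (names
hypothetical, not settled API):

  var todoCount = 0, todoArmed = false;
  function todo (n) {
      if (!todoArmed) { todoCount = n; todoArmed = true } // arm on first pass
      if (todoCount > 0) return true;  // keep the while loop alive
      todoArmed = false;               // exhausted; disarm for the next block
      return false;
  }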
> Will this screw up a stack trace?
No, since right now there *is* no stack trace. I'm not sure how to get 
at that data in JavaScript.

> Also, while JavaScript doesn't have
> the Perl concept of context, are there any scoping issues this might
> cause problems with?  Will passing a function reference have any chance
> of altering the behavior of the code (it would in Perl)?
I don't think so. Function references in JavaScript are purely lexical, 
so they always execute in the scope in which they are defined.
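
A quick illustration of the point -- the reference keeps seeing the
variables of its defining scope no matter where it's called from:

  function makeCounter () {
      var count = 0;                        // private to this invocation
      return function () { return ++count };
  }
  var next = makeCounter();
  next(); // 1
  next(); // 2 -- passing `next` around never changes which `count` it sees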

Regards,
David


Re: How to force tests to issue "NA" reports?

2005-04-07 Thread Michael G Schwern
On Thu, Apr 07, 2005 at 05:01:34PM -0500, Ken Williams wrote:
> > Is there a way for tests to determine that a module cannot be installed on
> > a platform so that CPANPLUS or CPAN::YACSmoke can issue an "NA" (Not
> > Applicable) report?

AFAIK NA reports are issued when a Makefile.PL dies due to a "require 5.00X"
failing.  That's the only way I've seen anyway.



Re: How to force tests to issue "NA" reports?

2005-04-07 Thread Ken Williams
On Apr 6, 2005, at 7:13 AM, Robert Rothenberg wrote:
> Is there a way for tests to determine that a module cannot be installed on
> a platform so that CPANPLUS or CPAN::YACSmoke can issue an "NA" (Not
> Applicable) report?
>
> CPANPLUS relies on module names (e.g. "Solaris::" or "Win32::") but
> that is not always appropriate in cases where a module runs on many
> platforms except some that do not have the capability.
In those cases, who's to say that that platform won't get such 
capabilities in the future?  If the module author has to list the 
platforms on which their module won't run, it'll get out of date, and 
the list will likely be incomplete to start out with.


There's also a separate issue of whether "NA" reports should be issued 
if a library is missing.  (Usually these come out as failures.)

People looking at failure reports should be able to tell whether the 
failure occurred because of a missing prerequisite (of which libraries 
are one variety) or because of runtime building/testing problems.  The 
correct way to solve this would be to have a mechanism for declaring 
system library dependencies, then check before smoke-testing whether 
those dependencies are satisfied.

Unfortunately that's a large problem space, and it has eluded attempts 
at cross-platform solutions so far.  It would be really nice if it were 
solved, though.

 -Ken


Re: TestSimple/More/Builder in JavaScript

2005-04-07 Thread Michael G Schwern
On Thu, Apr 07, 2005 at 11:23:59AM -0700, David Wheeler wrote:
> Greetings fellow Perlers,
> 
> I'm pleased to announce the first alpha release of my port of  
> TestSimple/More/Builder to JavaScript. You can download it from:
> 
>   http://www.justatheory.com/downloads/TestBuilder-0.01.tar.gz

Zee goggles, zey do nothing!!!


> Please feel free to give it a try and let me know what you think. You  
> can see what the tests look like by loading the files in the tests/  
> directory into your Web browser. This is my first stab at what I hope  
> becomes a complete port. I could use some feedback/ideas on a number of  
> outstanding issues:
> 
> * Skip and Todo tests currently don't work because named blocks (e.g.,  
> SKIP: and TODO:) are lexical in JavaScript. Therefore I cannot get at  
> them from within a function called from within a block (at least not  
> that I can tell). It might be that I need to just pass function  
> references to skip() and todo(), instead. This is a rather different  
> interface than that supported by Test::More, but it might work.  

Not so different, that's what I would have done were it not for the fact
that it alters caller().  If Javascript has no such problems then do it,
but I suspect it does.


> Thoughts?

if( !condition ) {
    skip(because, number);
}
else {
    ...test...
}


> * Currently, one must call Test._ending() to finish running tests. This  
> is because there is no END block to grab on to in JavaScript.  
> Suggestions for how to capture output and append the output of  
> _finish() are welcome. It might work to have the onload event execute  
> it, but then it will have to look for the proper context in which to  
> append it (a  tag, at this point).

Could object destruction be used somehow?

My only other note in glancing over this is to not make the mistake of
parroting eq*, i.e. functions which are not tests.  Proceed directly to
isDeeply() if possible.  If it's not possible, if the function prototypes
in Javascript won't allow something like isDeeply() then make the eq*
functions real tests such that later isDeeply() style diagnostics can be
added.



Re: TestSimple/More/Builder in JavaScript

2005-04-07 Thread Ovid
David,

Great work!

> * I have made no decisions as to where to output test results,  
> diagnostics, etc. Currently, they're simply output to
> document.write().  

Output them to a Results object which, by default, sends the output to
document.write() but allows the user to redirect the output.  For
example, it might be nice to have test results pop up in a separate
window while the main page loads.

> * Skip and Todo tests currently don't work because named blocks (e.g.,
> SKIP: and TODO:) are lexical in JavaScript. Therefore I cannot get at

How about:

  if (some_condition) {
    // do your tests
  }
  else {
    skip(number_of_tests)
  }
  
And:

  while(todo(4)) { // sets an internal todo counter
    // each test automagically decrements the todo counter
    // ... but this should be transparent to the test code
  }

> that I can tell). It might be that I need to just pass function  
> references to skip() and todo(), instead. This is a rather different 
> interface than that supported by Test::More, but it might work.  
> Thoughts?

Will this screw up a stack trace?  Also, while JavaScript doesn't have
the Perl concept of context, are there any scoping issues this might
cause problems with?  Will passing a function reference have any chance
of altering the behavior of the code (it would in Perl)?

> * Is there threading in JavaScript?

Used to be "no", but I'm not sure about the latest versions.

Cheers,
Ovid

-- 
If this message is a response to a question on a mailing list, please send
follow up questions to the list.

Web Programming with Perl -- http://users.easystreet.com/ovid/cgi_course/


Re: TestSimple/More/Builder in JavaScript

2005-04-07 Thread David Wheeler
On Apr 7, 2005, at 12:19 PM, Fergal Daly wrote:
> Were you aware of JsUnit?
>
> http://www.edwardh.com/jsunit/

Yes, it's in the "See Also" section of my docs.

> I prefer the Test::More style of testing most of the time. I count myself
> lucky I've never had to use a testing framework for javascript!

I guess that would only be because you don't write JavaScript?
Seriously, I wrote this because I don't want to write anything in 
JavaScript *without* tests. And xUnit is overkill for this sort of 
thing, in my view.

Besides, I'm sure that Adrian will soon take my code to port 
Test::Class to JavaScript, and then we can have both approaches! ;-)

Regards,
David


Re: TestSimple/More/Builder in JavaScript

2005-04-07 Thread Fergal Daly
Were you aware of JsUnit?

http://www.edwardh.com/jsunit/

I prefer the Test::More style of testing most of the time. I count myself
lucky I've never had to use a testing framework for javascript!

F

On Thu, Apr 07, 2005 at 11:23:59AM -0700, David Wheeler wrote:
> Greetings fellow Perlers,
> 
> I'm pleased to announce the first alpha release of my port of  
> TestSimple/More/Builder to JavaScript. You can download it from:
> 
>   http://www.justatheory.com/downloads/TestBuilder-0.01.tar.gz
> 
> Please feel free to give it a try and let me know what you think. You  
> can see what the tests look like by loading the files in the tests/  
> directory into your Web browser. This is my first stab at what I hope  
> becomes a complete port. I could use some feedback/ideas on a number of  
> outstanding issues:
> 
> * I have made no decisions as to where to output test results,  
> diagnostics, etc. Currently, they're simply output to document.write().  
> This may well be the best place in the long run, though it might be  
> nice to allow users to configure where output goes. It will also be  
> easy to control the output, since the output functions can easily be  
> replaced in JavaScript. Suggestions welcome.
> 
> * I have no idea how to exit execution of tests other than by throwing  
> an exception, which is only supported by JavaScript 1.5, anyway, AFAIK.  
> As a result, skipAll(), BAILOUT(), and skipRest() do not work.
> 
> * Skip and Todo tests currently don't work because named blocks (e.g.,  
> SKIP: and TODO:) are lexical in JavaScript. Therefore I cannot get at  
> them from within a function called from within a block (at least not  
> that I can tell). It might be that I need to just pass function  
> references to skip() and todo(), instead. This is a rather different  
> interface than that supported by Test::More, but it might work.  
> Thoughts?
> 
> * Currently, one must call Test._ending() to finish running tests. This  
> is because there is no END block to grab on to in JavaScript.  
> Suggestions for how to capture output and append the output of  
> _finish() are welcome. It might work to have the onload event execute  
> it, but then it will have to look for the proper context in which to  
> append it (a  tag, at this point).
> 
> * Anyone have any idea how to get at the line number and file name in
> JavaScript? Failures currently aren't too descriptive. As a result, I'm
> not sure if level() will have any part to play.
> 
> * Is there threading in JavaScript?
> 
> * I haven't written TestHarness yet.
> 
> * I'm using a Module::Build script to build a distribution. I don't  
> think there's a standard for distributing JavaScript libraries, but I  
> think that this works reasonably well. I have all of the documentation  
> in POD, and the script generates HTML and text versions before creating  
> the tarball. The Build.PL script of course is not included in the  
> distribution. I started out trying to write the documentation in JSDoc,  
> but abandoned it for all of the reasons I recounted in my blog last  
> week.
> 
> http://www.justatheory.com/computers/programming/javascript/no_jsdoc_please.html
> 
> * Is there a way to dynamically load a JavaScript file? I'd like to use  
> an approach to have TestMore.js and TestSimple.js load TestBuilder.js.  
> I'd also like to use it to implement loadOk() (equivalent to use_ok()  
> and require_ok()).
> 
> More details are in the ToDo section of the TestBuilder docs.
> 
> Let me know what you think!
> 
> Regards,
> 
> David


Re: Kwalitee and has_test_*

2005-04-07 Thread Pete Krawczyk
Subject: Re: Kwalitee and has_test_*
From: David Golden <[EMAIL PROTECTED]>
Date: Thu, 07 Apr 2005 14:34:21 -0400

}What if I, as a developer, choose to run tests as part of my development
}but don't ship them?  Why should I make users spend time waiting
}for my test suite to run?

Let's extend that argument a bit - I have two platforms I can test my
module on - a single-processor Linux system and an OSX Panther system.  Both
are only running one version of Perl (although I could do more, I just
don't), and both are set up sane (to me).  I run my test suite on both
before I ship my module off to CPAN.  No problems, right?

Except that my module might run on a Solaris box, or a Windows box, or any
number of alternate platforms, perls and environments that I cannot
envision right now.  The only reliable method a sysadmin has to find out
if the program is doing what the author intended is through some form of
test suite or through other people reporting their successful builds (e.g. how
djb asks people to mail him a SYSDEPS file on successful install).

If I write something, I also want to make sure that if I receive a bug 
report, it's a real bug and not an environmental bug.  Having a test suite 
gives me a controlled environment in which I can (hopefully) reproduce a 
simple enough test to indicate what's wrong.

}The flip side, of course, is that by including tests that are necessary
}for CPANTS, a developer inflicts them on everyone who uses the code.

As a sysadmin, I'd rather spend an extra 5 minutes (or even 5 hours)
running a regression/testing suite to make sure it doesn't break something
else than have a surprise foisted on me at the least inopportune
moment.  The only reason I really see D::C as not being appropriate for
"make test" is that it's not a binary - it's more of a fuzzy "how
much", which people will interpret differently and which may have no
bearing on how the program operates.

}Counter:  developers should take responsibility for ensuring portability
}instead of hoping it works until some user breaks it.

It's not just portability.  Should the module I wrote and tested on 5.6.1 
work on 5.8.6?  How about 5.005_04?  CPAN doesn't have a by-perl-rev 
repository for modules, and maintaining one would be a nightmare, at best.


I agree with your stance on Kwalitee.  I think it's important to 
understand that the presence of tests in the first place puts us light 
years ahead of many other systems.  Imagine if you had a full test suite 
(or even a partial) for Windows, or the Linux kernel, etc.  Sure, those 
things aren't necessarily public right now, but if I had a hardware-level 
test suite that simulated what I was actually doing, I could find out much 
quicker if that new stick of RAM I put in my computer was going to cause 
unexpected behavior.

-Pete K
-- 
Pete Krawczyk
  perl at bsod dot net




Re: TestSimple/More/Builder in JavaScript

2005-04-07 Thread David Wheeler
On Apr 7, 2005, at 11:32 AM, Christopher H. Laco wrote:
> OK, now who's gonna build JPANTS? :-)
JSPANTS, you mean? I think we need a CJSPAN, first. Alias?
Cheers,
David


Re: Kwalitee and has_test_*

2005-04-07 Thread David Golden
Let's step back a moment.
Does anyone object that CPANTS Kwalitee looks for tests?  Why not apply
the same arguments against has_test_* to tests themselves?  What if I, as
a developer, choose to run tests as part of my development but don't ship
them?  Why should I make users spend time waiting for my test
suite to run?

Keeping in mind that this is a thought exercise, not a real argument, 
here are some possible reasons (and counter arguments) for including 
test files in a distribution and for Kwalitee to include the existence 
of tests:

* Shipping tests is a hint that a developer at least thought about
testing.  Counter: it's no guarantee of the quality of testing and can
easily be spoofed to inflate the apparent quality.

* Tests evaluate the success of the distribution against its design 
goals given a user's unique system and Perl configuration.  Counter: 
developers should take responsibility for ensuring portability instead 
of hoping it works unti some user breaks it.

The first point extends very nicely to both has_test_* and coverage 
testing.  Including a test for pod/pod-coverage shows that the developer 
thought about it.  It doesn't mean that a developer couldn't do those 
things and just not create a *.t file for them, of course, or create a 
*.t file for them and not do those things, either.  The presence of a 
test is just a sign -- and one that doesn't require code to be run to 
determine Kwalitee.  The flip side, of course, is that by including test 
that are necessary for CPANTS, a developer inflicts them on everyone who 
uses the code.  That isn't so terrible for pod and pod coverage testing, 
but it's a much bigger hit for Devel::Cover.

Why not find a way to include them in the META.yml file and have the 
build tools keep track of whether pod/pod-coverage/code-coverage was 
run?  Self-reported statistics are easy to fake, but so are the
has_test_* Kwalitee checks, as many people have pointed out.  Anyone who
is obsessed with Kwalitee scores is going to fake the other checks,
too.  And that way, people who have customized their environments can 
report that they are doing it.

As to the benefits of having Devel::Cover run on many environments and
recording the output: rather than suggest developers put it in a *.t
file -- which forces all users to cope with it -- why not build
it into CPANPLUS as an option, along the lines of how test reporting is
done?  Make it a user choice, not a mandated action.

Ironically, for all the skeptical comments about "why a scoreboard" -- 
the fact that many people care about the Kwalitee metric suggests that 
it does serve some inspirational purpose.

Regards,
David Golden



Re: TestSimple/More/Builder in JavaScript

2005-04-07 Thread Christopher H. Laco
David Wheeler wrote:
> Greetings fellow Perlers,
>
> I'm pleased to announce the first alpha release of my port of
> TestSimple/More/Builder to JavaScript. You can download it from:
>
>   http://www.justatheory.com/downloads/TestBuilder-0.01.tar.gz
Very cool. Very sick. :-)
OK, now who's gonna build JPANTS? :-)
-=Chris




Re: TestSimple/More/Builder in JavaScript

2005-04-07 Thread David Wheeler
On Apr 7, 2005, at 11:28 AM, Andy Lester wrote:
> You are a crazy man.
Best feedback I ever had. Brilliant!
D


Re: TestSimple/More/Builder in JavaScript

2005-04-07 Thread Andy Lester
On Thu, Apr 07, 2005 at 11:23:59AM -0700, David Wheeler ([EMAIL PROTECTED]) 
wrote:
> I'm pleased to announce the first alpha release of my port of  
> TestSimple/More/Builder to JavaScript. You can download it from:

You are a crazy man.

xoxo,
Andy

-- 
Andy Lester => [EMAIL PROTECTED] => www.petdance.com => AIM:petdance


TestSimple/More/Builder in JavaScript

2005-04-07 Thread David Wheeler
Greetings fellow Perlers,
I'm pleased to announce the first alpha release of my port of  
TestSimple/More/Builder to JavaScript. You can download it from:

  http://www.justatheory.com/downloads/TestBuilder-0.01.tar.gz
Please feel free to give it a try and let me know what you think. You  
can see what the tests look like by loading the files in the tests/  
directory into your Web browser. This is my first stab at what I hope  
becomes a complete port. I could use some feedback/ideas on a number of  
outstanding issues:

* I have made no decisions as to where to output test results,  
diagnostics, etc. Currently, they're simply output to document.write().  
This may well be the best place in the long run, though it might be  
nice to allow users to configure where output goes. It will also be  
easy to control the output, since the output functions can easily be  
replaced in JavaScript. Suggestions welcome.

* I have no idea how to exit execution of tests other than by throwing  
an exception, which is only supported by JavaScript 1.5, anyway, AFAIK.  
As a result, skipAll(), BAILOUT(), and skipRest() do not work.
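
If the exception route is the only one, a sketch of what I have in mind
(names hypothetical, JavaScript 1.5 try/catch assumed) is to throw a
marker object that a top-level wrapper recognizes:

  // Hypothetical: BAILOUT() throws a marker object; anything else
  // the wrapper rethrows untouched.
  function BailOut (reason) { this.reason = reason }
  function BAILOUT (reason) { throw new BailOut(reason) }

  function runTests (testFunc) {
      try {
          testFunc();
      } catch (e) {
          if (e instanceof BailOut) {
              document.write("Bail out! " + e.reason);
          } else {
              throw e; // not ours
          }
      }
  }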

* Skip and Todo tests currently don't work because named blocks (e.g.,  
SKIP: and TODO:) are lexical in JavaScript. Therefore I cannot get at  
them from within a function called from within a block (at least not  
that I can tell). It might be that I need to just pass function  
references to skip() and todo(), instead. This is a rather different  
interface than that supported by Test::More, but it might work.  
Thoughts?
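
To make that concrete, the interface I'm imagining might look like this
sketch (the signature and the skip output are guesses, not settled API):

  // Hypothetical: skip() runs the tests when given a function,
  // otherwise records the whole block as skipped.
  function skip (why, howMany, testFunc) {
      if (testFunc) {
          testFunc(); // condition held; run the real tests
      } else {
          for (var i = 0; i < howMany; i++) {
              ok(true, "# SKIP " + why); // approximate skip-style output
          }
      }
  }

  // Usage: only run the DOM tests where there is a DOM.
  var haveDom = typeof document != "undefined";
  skip("no DOM here", 2, haveDom ? function () {
      ok(document.title != null, "got a title");
      ok(document.body != null, "got a body");
  } : null);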

* Currently, one must call Test._ending() to finish running tests. This  
is because there is no END block to grab on to in JavaScript.  
Suggestions for how to capture output and append the output of  
_finish() are welcome. It might work to have the onload event execute  
it, but then it will have to look for the proper context in which to  
append it (a  tag, at this point).

* Anyone have any idea how to get at the line number and file name in
JavaScript? Failures currently aren't too descriptive. As a result, I'm
not sure if level() will have any part to play.

* Is there threading in JavaScript?
* I haven't written TestHarness yet.
* I'm using a Module::Build script to build a distribution. I don't  
think there's a standard for distributing JavaScript libraries, but I  
think that this works reasonably well. I have all of the documentation  
in POD, and the script generates HTML and text versions before creating  
the tarball. The Build.PL script of course is not included in the  
distribution. I started out trying to write the documentation in JSDoc,  
but abandoned it for all of the reasons I recounted in my blog last  
week.

   
http://www.justatheory.com/computers/programming/javascript/no_jsdoc_please.html

* Is there a way to dynamically load a JavaScript file? I'd like to use  
an approach to have TestMore.js and TestSimple.js load TestBuilder.js.  
I'd also like to use it to implement loadOk() (equivalent to use_ok()  
and require_ok()).
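
The two era-appropriate techniques I know of are sketched below; how
loadOk() would then report success is still open:

  // 1. At parse time, document.write() a script tag; the browser fetches
  //    and runs it before the rest of the page.
  document.write('<script type="text/javascript" src="TestBuilder.js"><\/script>');

  // 2. After parse time, append a script element via the DOM. There is no
  //    synchronous way to know it loaded; an onload handler would help.
  function loadScript (url) {
      var s = document.createElement('script');
      s.type = 'text/javascript';
      s.src  = url;
      document.getElementsByTagName('head')[0].appendChild(s);
  }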

More details are in the ToDo section of the TestBuilder docs.
Let me know what you think!
Regards,
David


Re: Kwalitee and has_test_*

2005-04-07 Thread chromatic
On Thu, 2005-04-07 at 13:22 -0400, Christopher H. Laco wrote:

> How as a module consumer would I find out that the Pod coverage is
> adequate again? Why, the [unshipped] .t file in this case.

How as a module consumer would you find out that the test coverage is
adequate?

Furthermore, what if I as a developer refuse to install POD testing
modules yet ship their tests anyway?  The kwalitee metric assumes that
*I* have run the tests, but I haven't.

For modules with platform-specific behavior, it's *more* useful to make
module users run coverage tests than POD coverage and checking tests.
Which is more likely to vary?  Yet I don't hear a lot of people arguing
that the author making the users do his work for him is a sign of
kwalitee.

Do I have to write Test::Coverage to show what a bad idea this is?

-- c



Re: Kwalitee and has_test_*

2005-04-07 Thread Christopher H. Laco
Tony Bowden wrote:
> On Thu, Apr 07, 2005 at 12:32:31PM -0400, Christopher H. Laco wrote:
> > > CPANTS can't check that for me, as I don't ship those tests.
> > > They're part of my development environment, not part of my release tree.
> > That is true. But if you don't ship them, how do I know you bothered to
> > check those things in the first place?
>
> Why do you care? What's the difference to you between me shipping a .t
> file that uses Pod::Coverage, or by having an internal system that uses
> Devel::Cover in a mode that makes sure I have 100% coverage on everything,
> including POD, or even if I hire a team of Benedictine Monks to peruse
> my code and look for problems?
>
> The only thing that should matter to you is whether the Pod coverage is
> adequate, not how that happens.
I think you just answered your own question, assuming you just agreed
that I should care about whether your pod coverage is adequate.

How as a module consumer would I find out that the Pod coverage is
adequate again? Why, the [unshipped] .t file in this case.

The only other way to tell is to a) write my own pod_coverage.t test for
someone else's module at install time, or b) hand review all of the pod
vs. code.  Or CPANTS.

-=Chris




Re: Kwalitee and has_test_*

2005-04-07 Thread Tony Bowden
On Thu, Apr 07, 2005 at 12:32:31PM -0400, Christopher H. Laco wrote:
> >CPANTS can't check that for me, as I don't ship those tests.
> >They're part of my development environment, not part of my release tree.
> That is true. But if you don't ship them, how do I know you bothered to 
> check those things in the first place?

Why do you care? What's the difference to you between me shipping a .t
file that uses Pod::Coverage, or by having an internal system that uses
Devel::Cover in a mode that makes sure I have 100% coverage on everything,
including POD, or even if I hire a team of Benedictine Monks to peruse
my code and look for problems?

The only thing that should matter to you is whether the Pod coverage is
adequate, not how that happens.

Tony



Re: Kwalitee and has_test_*

2005-04-07 Thread Christopher H. Laco
Tony Bowden wrote:
> On Thu, Apr 07, 2005 at 08:56:26AM -0400, Christopher H. Laco wrote:
> > I would go as far as to say that checking the author's development
> > intentions via checks like Test::Pod::Coverage, Test::Strict,
> > Test::Distribution, etc. is just as important, if not more, than just
> > checking syntax and that all tests pass.
>
> CPANTS can't check that for me, as I don't ship those tests.
> They're part of my development environment, not part of my release tree.
>
> Tony

That is true. But if you don't ship them, how do I know you bothered to 
check those things in the first place?

[I don't think there is a right answer to that question, by the way.]
I'm just saying that the presence of those types of tests bumps up some
level of kwalitee, and they should be left alone within CPANTS.

-=Chris




Re: Why a scoreboard?

2005-04-07 Thread Ricardo SIGNES
* Adam Kennedy <[EMAIL PROTECTED]> [2005-04-06T23:29:40]
> >Finally, the scoreboard does have a purpose.  Part of the original idea of
> >CPANTS was to provide an automated checklist for a good distribution.
> >
> >Has a README...  check
> >Declares a $VERSION...   check
> >Well behaved tarball...  no
> 
> And as far as I can tell, it got sidetracked along the way with "scores".
> People have to actually fiddle to work out what their scores are...
> 
> Where's the per-module page on CPANTS that lists these simple check/fail?
> 
> Where's the per-author page that has the great big table with module 
> name down the side, and test names across the top and green/red boxes 
> indicating pass/fail?

http://rjbs.manxome.org/cpants/cpants.ql

source: http://rjbs.manxome.org/cpants/cpants.ql.txt

-- 
rjbs




Re: Kwalitee and has_test_*

2005-04-07 Thread Tony Bowden
On Thu, Apr 07, 2005 at 08:56:26AM -0400, Christopher H. Laco wrote:
> I would go as far as to say that checking the author's development
> intentions via checks like Test::Pod::Coverage, Test::Strict,
> Test::Distribution, etc. is just as important, if not more, than just
> checking syntax and that all tests pass.

CPANTS can't check that for me, as I don't ship those tests.

They're part of my development environment, not part of my release tree.

Tony


Re: Kwalitee and has_test_*

2005-04-07 Thread David Golden
This is an interesting point and triggered the thought in my mind that 
CPANTS "Kwalitee" is really testing *distributions* not modules -- i.e. 
the quality of the packaging, not the underlying code.  That's 
important, too, but quite arbitrary -- insisting that distributions test 
pod and pod coverage is arbitrary.  If CPANTS insisted that all modules 
in a distribution be in a lib directory, that would be arbitrary, too, 
but not consistent with general practice (fortunately, it's written to 
allow a single .pm in the base directory, otherwise there has to be a 
lib directory).

The point I'm making is that CPANTS -- if it is to stay true to purpose 
-- should stick to distribution tests and try to ensure that those 
reflect widespread quality practices, not "evangelization" (however well 
meaning) to push an arbitrary definition of quality on an unruly 
community.  Devel::Cover is a useful tool -- but it pushes further and 
further away from a widespread distribution-level measure of quality.  
(Whereas I see pod testing as analogous to a compilation test and pod
coverage testing as a documentation test -- both of which are
reasonable things to include in a "high quality" test suite.)

David Golden
Christopher H. Laco wrote:
> Because they're two separate issues.
>
> First, checking the pod syntax is ok for the obvious reasons. Broken
> pod leads to doc problems.
>
> Second, we're checking that the AUTHOR is also checking his/her pod
> syntax and coverage. That's an important distinction.
>
> I would go as far as to say that checking the author's development
> intentions via checks like Test::Pod::Coverage, Test::Strict,
> Test::Distribution, etc. is just as important, if not more, than just
> checking syntax and that all tests pass.
>
> Given two modules with a passing basic.t, I'd go for the one with all
> of the development side tests over the other. Those tests listed above
> signal [to me] that the author [probably] pays more loving concern to
> all facets of their module than the one with just the passing basic.t.
>
> -=Chris



Re: Kwalitee and has_test_*

2005-04-07 Thread Christopher H. Laco
Adam Kennedy wrote:
> > Adding a kwalitee check for a test that runs Devel::Cover by default
> > might on the surface appear to meet this goal, but I hope people
> > recognize it as a bad idea.
> >
> > Why, then, is suggesting that people ship tests for POD errors and
> > coverage a good idea?
>
> Although I've now added the automated inclusion of a 99_pod.t to my
> packaging system (less for kwalitee than that I've noticed the odd bug
> get through myself) why doesn't kwalitee just check the POD itself,
> rather than make a check for a check?
>
> Adam K

Because they're two separate issues.

First, checking the pod syntax is ok for the obvious reasons. Broken pod
leads to doc problems.

Second, we're checking that the AUTHOR is also checking his/her pod
syntax and coverage. That's an important distinction.

I would go as far as to say that checking the author's development
intentions via checks like Test::Pod::Coverage, Test::Strict,
Test::Distribution, etc. is just as important, if not more, than just
checking syntax and that all tests pass.

Given two modules with a passing basic.t, I'd go for the one with all of
the development side tests over the other. Those tests listed above
signal [to me] that the author [probably] pays more loving concern to
all facets of their module than the one with just the passing basic.t.

-=Chris




Re: Why a scoreboard?

2005-04-07 Thread Thomas Klausner
Hi!

On Thu, Apr 07, 2005 at 01:29:40PM +1000, Adam Kennedy wrote:

I did most of what you asked for on Thursday, but in a hurry, so it might be
buggy...

> Where's the per-module page on CPANTS that lists these simple check/fail?

http://cpants.dev.zsi.at/metrics/
http://cpants.dev.zsi.at/metrics/Acme-BadExample-0.5.yml

> Where's the per-author page that has the great big table with module 
> name down the side, and test names across the top and green/red boxes 
> indicating pass/fail?

Well, nearly:

http://cpants.dev.zsi.at/authors/ADAMK.html

> I _know_ I've got some issues in my modules as uploaded, but in a sea of 
> 60 or 70 modules, I sure as hell don't have time to hunt through and 
> find them all (although I do try to get the obvious bits).

Gabor once had a Maypole-based interface for CPANTS, but it's currently
down. A web interface isn't on my TODO list at this time. It probably won't
appear there any time soon. But feel free to download the database and set
something up.


-- 
#!/usr/bin/perl   http://domm.zsi.at
for(ref bless{},just'another'perl'hacker){s-:+-$"-g&&print$_.$/}


Re: Kwalitee and has_test_*

2005-04-07 Thread Thomas Klausner
Hi!

On Thu, Apr 07, 2005 at 01:17:40PM +1000, Adam Kennedy wrote:
> >Adding a kwalitee check for a test that runs Devel::Cover by default
> >might on the surface appear to meet this goal, but I hope people
> >recognize it as a bad idea.
> >
> >Why, then, is suggesting that people ship tests for POD errors and
> >coverage a good idea?
> 
> Although I've now added the automated inclusion of a 99_pod.t to my 
> packaging system (less for kwalitee than that I've noticed the odd bug 
> get through myself) why doesn't kwalitee just check the POD itself, 
> rather than make a check for a check?

It does:

no_pod_errors
Shortcoming: The documentation for this distribution contains syntactic
errors in its POD.
Defined in: Module::CPANTS::Generator::Pod

I added the check for Test::Pod because somebody requested it (together with
Test::Pod::Coverage).

While I can see why people object to these metrics, I currently
leave them in, mostly because I've got no time for CPANTS right now (mostly
because of the Austrian Perl Workshop organisation; shameless plug:
http://conferences.yapceurope.org/apw2005/).


-- 
#!/usr/bin/perl   http://domm.zsi.at
for(ref bless{},just'another'perl'hacker){s-:+-$"-g&&print$_.$/}


Re: Kwalitee and has_test_*

2005-04-07 Thread Adam Kennedy
> Adding a kwalitee check for a test that runs Devel::Cover by default
> might on the surface appear to meet this goal, but I hope people
> recognize it as a bad idea.
>
> Why, then, is suggesting that people ship tests for POD errors and
> coverage a good idea?
Although I've now added the automated inclusion of a 99_pod.t to my 
packaging system (less for kwalitee than that I've noticed the odd bug 
get through myself) why doesn't kwalitee just check the POD itself, 
rather than make a check for a check?

Adam K


Re: Kwalitee and has_test_*

2005-04-07 Thread Adam Kennedy
David Cantrell wrote:
> Thomas Klausner wrote:
> > I cannot check POD coverage because Pod::Coverage executes the code.
>
> No it doesn't.  That said, if you don't want to run the code you're
> testing, you are, errm, limiting yourself rather badly.

Do YOU want to run all of CPAN?
I certainly don't.
Bulk testing requires that you don't have to run it.
Adam K


Re: Why a scoreboard?

2005-04-07 Thread Adam Kennedy
> Finally, the scoreboard does have a purpose.  Part of the original idea of
> CPANTS was to provide an automated checklist for a good distribution.
>
> Has a README... check
> Declares a $VERSION...  check
> Well behaved tarball... no

And as far as I can tell, it got sidetracked along the way with "scores".
People have to actually fiddle to work out what their scores are...

Where's the per-module page on CPANTS that lists these simple check/fail?
Where's the per-author page that has the great big table with module 
name down the side, and test names across the top and green/red boxes 
indicating pass/fail?

I _know_ I've got some issues in my modules as uploaded, but in a sea of 
60 or 70 modules, I sure as hell don't have time to hunt through and 
find them all (although I do try to get the obvious bits).

Adam K


Re: Why a scoreboard?

2005-04-07 Thread Adam Kennedy
Hear hear. I'd rather see better-kwalitee kwalitee tests :)
Once the number and value of the kwalitee tests get higher, it should
be expected that people are almost never going to score perfectly.

Adam K
Johan Vromans wrote:
> Michael G Schwern <[EMAIL PROTECTED]> writes:
>
> > Has a README...  check
>
> Bonus points if it differs from the stub, and additional bonus points
> if it really describes briefly what the product is.
> Rationale: When browsing READMEs they are often meaningless.
>
> > Declares a $VERSION...  check
>
> Bonus points if it's a $VERSION that can be parsed and compared unambiguously.
> Rationale: Many tools want/need to decide whether a version is current,
> or newer.
>
> Just some thoughts.
> -- Johan