Re: a less fragile way to test when we need to read and write to STDOUT?

2005-04-01 Thread George Nistorica
On Thu, 2005-03-31 at 16:46 +, Mark Stosberg wrote:
 Hello,

Hello,

 
 I've been working on a Perl test suite for darcs, with notable recent
 help from Schwern. 
 
 We used to have tests that looked like this:
 
like(`echo y | darcs command`,qr/$re/); 
 
 That would run the command and answer y to the first and only question
 it asked. It worked well enough, but I looked for a pure Perl
 solution in the spirit of being more portable.
 
 I came up with this:
 
  {
      open2(*READ, *WRITE, "$DARCS unpull -p add");
      print WRITE "a\n";
      like( (<READ>)[4], qr/really unpull/i,
            "additional confirmation is given when 'all' option is selected" );
      close(WRITE);
      close(READ);
      # (We never confirmed, so those patches are still there.)
  }
 
 This is more time consuming to write, because not only is it more verbose, but I
 have to know exactly how many lines to read on STDERR. Already for two people
 something got slightly off, causing the test to hang indefinitely.
 
 Windows users weren't having problems with the first method, so maybe I
 should just go back to that. 
 
 Am I missing an easier and less fragile way to test interactive
 commands with Perl?
 
 Thanks!
 
 Mark

For commands that need more than one input (e.g. shell installers) you
can use the Expect module, which lets you test programs that wait for
your input more than once.




Re: Kwalitee and has_test_*

2005-04-01 Thread Thomas Klausner
Hi!

On Sun, Mar 27, 2005 at 11:40:45AM +0100, Tony Bowden wrote:

 There are now two kwalitee tests for 'has_test_pod' and
 'has_test_pod_coverage'. These check that there are test scripts for
 POD correctness and POD coverage.

Actually they check if Test::Pod and Test::Pod::Coverage are used in a test
script.
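
(For reference, what the check is looking for is basically the standard
boilerplate from those modules' docs, something like:)

    # t/pod.t
    use Test::More;
    eval "use Test::Pod 1.00";
    plan skip_all => "Test::Pod 1.00 required for testing POD" if $@;
    all_pod_files_ok();

    # t/pod-coverage.t
    use Test::More;
    eval "use Test::Pod::Coverage 1.00";
    plan skip_all => "Test::Pod::Coverage 1.00 required for testing POD coverage" if $@;
    all_pod_coverage_ok();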

 These seem completely and utterly wrong to me. Surely the kwalitee
 checks should be purely that the POD is correct and sufficiently covers
 the module?

Well, sort of. In fact, there is a check for pod correctness (no_pod_errors).

I cannot check POD coverage because Pod::Coverage executes the code. Which
is something I do not want to do (for various reasons, see
  http://domm.zsi.at/talks/2005_brussels_cpants/s00023.html
).

 Otherwise a distribution which has perfectly correct POD, which
 completely covers the module, is deemed to be of lesser kwalitee, and
 listed on CPANTS as having shortcomings.

Well, kwalitee != quality. Currently, kwalitee basically only says how
well-formed a distribution is. For my definition of well-formed :-) But I'm
always open to suggestions, etc. Several kwalitee indicators have been removed
because people convinced me. Several were added (e.g. has_test_pod_coverage).

 We should be very wary of stipulating HOW authors have to achieve their
 quality. Saying you can only check your POD in one specific way goes too
 far IMO.
 
That's a good point.

OTOH, I know of several people who added Pod::Coverage to their test suites
(and hopefully found some undocumented methods...) because of this metric.
Thus one goal (raising the overall quality (!) of CPAN) is reached.

Anyway, I invite everybody to suggest new metrics or convince me why current
metrics are bad. And nobody is going to stop you from fetching the raw data
(either all the YAML files or the SQLite DB) and creating your own Kwalitee
(yay, heretic kwalitee!).



-- 
#!/usr/bin/perl   http://domm.zsi.at
for(ref bless{},just'another'perl'hacker){s-:+-$-gprint$_.$/}


Re: Kwalitee and has_test_*

2005-04-01 Thread chromatic
On Fri, 2005-04-01 at 21:00 +0200, Thomas Klausner wrote:

 On Sun, Mar 27, 2005 at 11:40:45AM +0100, Tony Bowden wrote:

  We should be very wary of stipulating HOW authors have to achieve their
  quality. Saying you can only check your POD in one specific way goes too
  far IMO.
  
 That's a good point.
 
 OTOH, I know of several people who added Pod::Coverage to their test suites
 (and hopefully found some undocumented methods...) because of this metric.
 Thus one goal (raising the overall quality (!) of CPAN) is reached.

Adding a kwalitee check for a test that runs Devel::Cover by default
might on the surface appear to meet this goal, but I hope people
recognize it as a bad idea.

Why, then, is suggesting that people ship tests for POD errors and
coverage a good idea?

-- c



Re: Kwalitee and has_test_*

2005-04-01 Thread Thomas Klausner
Hi!

On Fri, Apr 01, 2005 at 10:59:04AM -0800, chromatic wrote:

 Why, then, is suggesting that people ship tests for POD errors and
 coverage a good idea?

I'm not 100% sure if it's a good idea, but it's an idea. 

But then, if I write some tests (e.g. to check pod coverage), why should I not
ship them? It's a good feeling to let others know that I took some extra
effort to make sure everything works.

Oh, and I really look forward to the time when I grok PPI and can add
metrics like has_cuddled_elses and force /my/ view of how Perl should look
onto all of you. BWHAHAHAHA! 

OK, seriously. CPANTS currently isn't much more than a joke. It might have
some nice benefits, but it is far from a real quality measurement tool. Never
will it happen that the Perl community decides on one set of kwalitee
metrics. That's why we're writing Perl, not Python. 

Currently, CPANTS tests for what is easy to test, i.e. distribution layout.
Soon it will test more interesting stuff (e.g. do prereqs and used modules
match up, or are used modules missing from the prereqs). But feel free to ignore
it...


-- 
#!/usr/bin/perl   http://domm.zsi.at
for(ref bless{},just'another'perl'hacker){s-:+-$-gprint$_.$/}


Testing Net-SSLeay

2005-04-01 Thread Walter Goulet
Hi,

I've been in contact with the author of Net-SSLeay about testing his
module. One limitation I have to work with is that the module has to
work out of the box with perl 5.6.0 which doesn't include the
Test::Simple and Test::More modules.

I guess this limits me to using the old Test module. He did make one
suggestion though; would it be worth the effort to selectively use
Test::More or Test depending on what's available?
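
Something like this rough sketch is what I had in mind, sticking to the
plan()/ok() subset that both modules share (purely illustrative, not how
Net-SSLeay does it today):

    # Fall back to the old Test module when Test::More isn't available.
    # Only plan() and ok($boolean) are used, since both interfaces provide them.
    use strict;
    BEGIN {
        if (eval { require Test::More; 1 }) {
            Test::More->import;
        }
        else {
            require Test;
            Test->import;
        }
    }

    plan tests => 1;
    ok(1);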

Finally, I wanted to confirm an assumption: I can split test.pl into a
set of separate t/*.t test scripts regardless of whether I'm using
Test or Test::More.

Thanks,
Walter


Re: Kwalitee and has_test_*

2005-04-01 Thread chromatic
On Fri, 2005-04-01 at 21:43 +0200, Thomas Klausner wrote:

 But then, if I write some tests (e.g. to check pod coverage), why should I not
 ship them? It's a good feeling to let others know that I took some extra
 effort to make sure everything works.

If I use Devel::Cover to check my test coverage, why should I not ship
that?  If I write use cases to decide which features to add, why should
I not ship that?  If I use Module::Starter templates for tests and
modules, why should I not ship those?

They're all tools for the developer, not the user.  Any benefit they add
to the user comes from the developer using them before the user sees the
code.

If anything it's *more* useful to ship Devel::Cover tests than POD tests
-- cross-platform testing can be difficult and there's the chance of
gaining more data.  Who does it though?

 OK, seriously. CPANTS currently isn't much more than a joke. It might have
 some nice benefits, but it is far from a real quality measurement tool. Never
 will it happen that the Perl community decides on one set of kwalitee
 metrics. That's why we're writing Perl, not Python. 

Suggestion one: take down the scoreboard.  If people agree to your
measure of kwalitee, they can see their own scores.  If you don't intend
to promote your measure as the gold standard of kwalitee, make it
difficult for people to measure the rest of us against it.

Suggestion two: figure out which kwalitee criteria are valid and worth
taking seriously and which aren't, and drop the latter or make them
optional.  There are good, repeatable, automatably testable metrics that
are close enough to objective standards that it's worth promoting them.

Suggestion three: figure out what the goals of the POD kwalitee criteria
are and test those.  I don't think the criteria should be "ships test
files containing strings that match heuristics for the presence and use
of two POD-testing modules."  If anything, a better criterion comes from
running those tests yourself.

There are legitimate questions of Quality that kwalitee can never
address, but there are legitimate questions of Quality that kwalitee
can.  I suggest focusing on those as far as possible.  Sure, it'll never
be the last word on what's good and what isn't, but the metrics could be
a lot more useful.  I would like to see that.

-- c



Re: Kwalitee and has_test_*

2005-04-01 Thread Tony Bowden
On Fri, Apr 01, 2005 at 09:00:17PM +0200, Thomas Klausner wrote:
 Anyway, I invite everybody to suggest new metrics 

I'd like the 'is pre-req' thing to be more useful. Rather than a binary
yes/no thing (and the abuses it leads to), I'd rather have something
akin to Google's PageRank, where the score you get is based not just on
the number of other modules that link to you, but on how large their score
is in turn.

Schwern already gave a good example of this: Ima::DBI. It really only
has one other module for which it's a pre-req: Class::DBI. But that in
turn has quite a few others. So Ima::DBI should somehow get the benefit
of that.

And if all you're used by is 10 Acme:: modules written by yourself...
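
To make the idea concrete, a toy sketch -- the module names and the
simplified PageRank-ish update are purely illustrative, not a proposed
implementation:

    use strict;
    use warnings;

    # Toy graph: module => list of modules that declare it as a prereq.
    my %depended_on_by = (
        'Ima::DBI'   => ['Class::DBI'],
        'Class::DBI' => ['CDBI::App1', 'CDBI::App2', 'CDBI::App3'],
    );

    my @modules = ('Ima::DBI', 'Class::DBI', 'CDBI::App1', 'CDBI::App2', 'CDBI::App3');
    my %score   = map { $_ => 1 } @modules;
    my $damping = 0.85;

    # Iterate: your score is boosted by the scores of the things that require you.
    for (1 .. 20) {
        my %next;
        for my $mod (@modules) {
            my $sum = 0;
            $sum += $score{$_} for @{ $depended_on_by{$mod} || [] };
            $next{$mod} = (1 - $damping) + $damping * $sum;
        }
        %score = %next;
    }

    printf "%-12s %.2f\n", $_, $score{$_} for @modules;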

Tony



Re: [Module::Build] Re: Test::META

2005-04-01 Thread Ken Williams
On Mar 30, 2005, at 6:16 PM, Michael G Schwern wrote:
On Wed, Mar 30, 2005 at 05:53:37PM -0500, Randy W. Sims wrote:
Should we completely open this up so that 
requires/recommends/conflicts
can be applied to any action?

install_recommends => ...
testcover_requires => ...
etc.
This sounds useful and solves a lot of problems at one sweep.  You can use
the existing dependency architecture to determine what needs what.  Such as
testcover needs both test_requires and testcover_requires.
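
(For concreteness, the proposal above would presumably look something like
this in a Build.PL; none of these per-action *_requires / *_recommends keys
exist in released Module::Build, and the module names are made up:)

    # Hypothetical sketch only: the per-action keys are from the proposal,
    # not options Module::Build currently accepts.
    use Module::Build;

    Module::Build->new(
        module_name        => 'Foo::Bar',
        requires           => { 'Some::Runtime::Dep' => '1.00' },
        test_requires      => { 'Test::More'         => 0 },  # checked before ./Build test
        testcover_requires => { 'Devel::Cover'       => 0 },  # checked before ./Build testcover
        install_recommends => { 'Some::Nice::Extra'  => 0 },  # warned about before ./Build install
    )->create_build_script;
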
There's a problem with this that I'm not sure how to solve: what 
happens when, as part of refactoring, a chunk of one action gets 
factored out to become its own sub-action?  The dependency may well 
pertain to the new sub-action instead of the original action, but 
distribution authors won't have any way to know this - or even if they 
did, they couldn't declare it in a forward-compatible way.

This is precisely the problem we're hitting with 'build_depends' vs. 
'code_depends'.  At one time, the 'build' action was a dependent of the 
'test' action.  So under our proposed dependency model, everything 
would work fine: before running the 'test' action, you run the 'build' 
action, which checks 'build_depends'.

Then we perform refactoring, and we create a 'code' action that the 
'build' and 'test' actions both can depend on.  The distribution author 
still declares dependencies using 'build_depends', though - so when we 
run the 'test' action, we first run the 'code' action, which has no 
declared dependencies, and we end up with a nasty runtime error rather 
than a nice specific error about dependencies.

Any solutions I'm missing?
 -Ken


Re: [Module::Build] Re: Test::META

2005-04-01 Thread Ken Williams
On Mar 29, 2005, at 10:44 PM, Randy W. Sims wrote:
Michael G Schwern wrote:
On Tue, Mar 29, 2005 at 08:33:48PM -0500, Randy W. Sims wrote:
A quickie sample implementation to add more meat. I didn't apply yet 
mainly because I'm wondering if we shouldn't bail and do a complete 
roll-back (eg. don't generate a Build script) if there are any 
failed requirements. Or should we bail, for example, during ./Build 
test if there are any test_requires failures? Or continue as is and 
just let it fail when it tries to use the missing requirements?
Continue.  Nothing's more frustrating than a system which refuses to 
even
try to go forward when some checklist is incomplete.
Hmm, I was actually sitting here playing with it again. But I was 
leaning more towards the 2nd option. The first option of bailing at 
Build.PL time obviously doesn't make sense as you can complete a build 
without running test. But does it make sense to test when a required 
testing module is missing?
Since the 'build', 'test', and 'install' actions are considered the 
critical path for installing a module, I think it makes sense to warn 
(not die) during perl Build.PL when one of their 
required/recommended/conflict dependencies aren't met.  Thereafter, 
only die/warn when running an action and its required/recommended 
dependencies aren't met.

 -Ken


Re: Testing Net-SSLeay

2005-04-01 Thread Walter Goulet
My impression from the author was that he didn't want me bundling any
additional modules with Net-SSLeay. Maybe I don't fully understand
your suggestion...

On Apr 1, 2005 2:07 PM, Randy W. Sims [EMAIL PROTECTED] wrote:
 Walter Goulet wrote:
  Hi,
 
  I've been in contact with the author of Net-SSLeay about testing his
  module. One limitation I have to work with is that the module has to
  work out of the box with perl 5.6.0 which doesn't include the
  Test::Simple and Test::More modules.
 
  I guess this limits me to using the old Test module. He did make one
  suggestion though; would it be worth the effort to selectively use
  Test::More or Test depending on what's available?
 
 Module::Build gets around this by bundling Test::More, placing it in
 t/lib/Test/More.pm. You can then use it for testing without needing to
 install it.
 
 Randy.
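
(The idea, as I understand it, is roughly this -- a minimal sketch,
assuming a copy of Test::More is shipped as t/lib/Test/More.pm:)

    # t/basic.t -- minimal sketch assuming Test::More is bundled under t/lib/
    use strict;
    use lib 't/lib';              # puts the bundled copy first on @INC
    use Test::More tests => 1;

    ok(1, 'bundled Test::More loads even on a stock perl 5.6.0');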



Re: [Module::Build] Re: Test::META

2005-04-01 Thread Christopher H. Laco
Ken Williams wrote:
Since the 'build', 'test', and 'install' actions are considered the 
critical path for installing a module, I think it makes sense to warn 
(not die) during perl Build.PL when one of their 
required/recommended/conflict dependencies aren't met.  Thereafter, only 
die/warn when running an action and its required/recommended 
dependencies aren't met.

 -Ken

I'll show my lack of historical knowledge here, and this is straying just 
a little off topic.

If build, test, and install are considered the critical path, why was 
Build/make never changed to simply run test always as part of the 
build's success or failure?

Just curious. In a way, I'd be much happier if 'perl Build' or 'make' 
outright failed if the tests didn't pass, just like if there was a 
c/linking error.

-=Chris




Re: Testing Net-SSLeay

2005-04-01 Thread Paul Johnson
On Fri, Apr 01, 2005 at 01:47:36PM -0600, Walter Goulet wrote:

 Finally, I wanted to confirm an assumption: I can split test.pl into a
 set of separate t/*.t test scripts regardless of whether I'm using
 Test or Test::More.

Yes.  Or neither or both.

-- 
Paul Johnson - [EMAIL PROTECTED]
http://www.pjcj.net


Re: [Module::Build] Re: Test::META

2005-04-01 Thread David Golden
Ken Williams wrote:
On Mar 30, 2005, at 6:16 PM, Michael G Schwern wrote:
On Wed, Mar 30, 2005 at 05:53:37PM -0500, Randy W. Sims wrote:
Should we completely open this up so that requires/recommends/conflicts
can be applied to any action?
install_recommends => ...
testcover_requires => ...
etc.

This sounds useful and solves a lot of problems at one sweep.  You can use
the existing dependency architecture to determine what needs what.  Such as
testcover needs both test_requires and testcover_requires.

There's a problem with this that I'm not sure how to solve: what 
happens when, as part of refactoring, a chunk of one action gets 
factored out to become its own sub-action?  The dependency may well 
pertain to the new sub-action instead of the original action, but 
distribution authors won't have any way to know this - or even if they 
did, they couldn't declare it in a forward-compatible way.

I freely admit that I haven't been following this thread closely (I 
guess Ken's posts have a lower activation energy for me), but the 
suggested approach sounds way overengineered.  How many modules really 
need this kind of thing?  I'm not sure that adding complexity to the 
requirements management system for those learning/using Module::Build is 
worth it for what I imagine to be relatively few modules that would wind 
up using such functionality.

I'd rather see requires/recommends kept at a high level and let 
individual actions/tests check for what they need and be smart about how 
to handle missing dependencies.

Regards,
David


Re: Testing Net-SSLeay

2005-04-01 Thread Andy Lester
On Fri, Apr 01, 2005 at 01:47:36PM -0600, Walter Goulet ([EMAIL PROTECTED]) 
wrote:
 I've been in contact with the author of Net-SSLeay about testing his
 module. One limitation I have to work with is that the module has to
 work out of the box with perl 5.6.0 which doesn't include the
 Test::Simple and Test::More modules.

I'd throw my hands up and let it go, then.  One of the key functions of
Phalanx is to modernize the testing infrastructure of the modules we
touch.  If he needs it to stay compatible back to the relative dark
ages, then let's just leave it that way.


-- 
Andy Lester => [EMAIL PROTECTED] => www.petdance.com => AIM:petdance


Why a scoreboard?

2005-04-01 Thread Andy Lester
Why is there a scoreboard?  Why do we care about rankings?  Why is it
necessary to compare one measure to another?  What purpose is being
served?

xoxo,
Andy

-- 
Andy Lester => [EMAIL PROTECTED] => www.petdance.com => AIM:petdance


Re: Testing Net-SSLeay

2005-04-01 Thread Walter Goulet
Well ok, but then you have to pull it off of the Phalanx 100. Either
that, or we convince the author of the benefits of upgrading the
testing infrastructure. I'm not sure what is driving him to keep the
module compatible with 5.6.0 (especially since the testing modules
were added to 5.6.2).

On Apr 1, 2005 3:26 PM, Andy Lester [EMAIL PROTECTED] wrote:
 On Fri, Apr 01, 2005 at 01:47:36PM -0600, Walter Goulet ([EMAIL PROTECTED]) 
 wrote:
  I've been in contact with the author of Net-SSLeay about testing his
  module. One limitation I have to work with is that the module has to
  work out of the box with perl 5.6.0 which doesn't include the
  Test::Simple and Test::More modules.
 
 I'd throw my hands up and let it go, then.  One of the key functions of
 Phalanx is to modernize the testing infrastructure of the modules we
 touch.  If he needs it to stay compatible back to the relative dark
 ages, then let's just leave it that way.
 
 --
 Andy Lester => [EMAIL PROTECTED] => www.petdance.com => AIM:petdance



Re: [Module::Build] Re: Test::META

2005-04-01 Thread Ken Williams
On Apr 1, 2005, at 2:55 PM, Christopher H. Laco wrote:
If build, test, and install are considered the critical path, why was 
Build/make never changed to simply run test always as part of the 
build's success or failure?

Just curious. In a way, I'd be much happier if 'perl Build' or 'make' 
outright failed if the tests didn't pass, just like if there was a 
c/linking error.

Yeah, good question.  I guess it's mostly historical.  There's nothing 
really stopping us from creating an 'everything' action that does a 
'build', 'test', and 'install' all in a row.  Or maybe just the 'build' 
and 'test'.
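
Something like this already works via subclassing today -- a rough sketch
(the action name and module name are made up):

    # Build.PL -- rough sketch of an 'everything' action; names are illustrative.
    use Module::Build;

    my $class = Module::Build->subclass(
        class => 'My::Builder',
        code  => q{
            sub ACTION_everything {
                my $self = shift;
                $self->depends_on('build');
                $self->depends_on('test');    # a test failure dies here, before install
                $self->depends_on('install');
            }
        },
    );

    $class->new(module_name => 'Foo::Bar')->create_build_script;
    # then: perl Build.PL && ./Build everything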

Anyone else like that idea?
 -Ken


Re: Why a scoreboard?

2005-04-01 Thread Michael G Schwern
On Fri, Apr 01, 2005 at 03:30:44PM -0600, Andy Lester wrote:
 Why is there a scoreboard?  Why do we care about rankings?  Why is it
 necessary to compare one measure to another?  What purpose is being
 served?

I presume you mean the CPAN scoreboard?  Or maybe the Kwalitee scoreboard,
it doesn't really matter; the argument is the same.

One of the nice things about Open Source is not having to justify why someone
*else* wants to do something.  Someone has an itch to scratch, they scratch 
it, they make their work available for all to see.  You don't have to do
anything, so why be bothered if it doesn't serve a purpose?

Another way to look at it is sometimes it's useful to just play with the data,
graph it in different ways and see what comes out.  Maybe nothing comes out.
Maybe something does.  Publish the results, see what happens.

Anyhow, not everything needs a purpose.  Not a serious one anyway.



Re: Why a scoreboard?

2005-04-01 Thread Andy Lester
Another way to look at it is sometimes it's useful to just play with the data,
graph it in different ways and see what comes out.  Maybe nothing comes out.
Maybe something does.  Publish the results, see what happens.
I understand that, but it seems to have gone past just playing with the data.
I'm just not comfortable with the rankings, with it being a 
competition.  People seem to be taking it too seriously, where instead 
of a way to check one's own actual kwalitee, whatever that may be, it's 
turned into arguing over whether or not some number or another is 
accurate.

I'd be far more interested in what we can tell about modules on their own
and how they can be improved, rather than making it a competition.

--
Andy Lester => [EMAIL PROTECTED] => www.petdance.com => AIM:petdance