Re: use Tests; # ?

2006-07-17 Thread Fergal Daly

On 17/07/06, demerphq [EMAIL PROTECTED] wrote:

On 7/17/06, Torsten Schoenfeld [EMAIL PROTECTED] wrote:
 On Mon, 2006-07-17 at 11:39 +0200, demerphq wrote:

  Test names shouldn't be optional.

 I disagree.  I would find it cumbersome to have to come up with a
 description for each and every test.

I don't think it's that cumbersome at all. Even stuff like

Fnorble 1
Fnorble 2

is sufficient.

  Finding a particular test in a file by its number can be quite
  difficult, especially in test files where you don't have stuff like
 
  'ok 26'.
 
  When ok() and is() are silently incrementing the counter and test
  names aren't used how is one supposed to find the failing test? As you
  probably know it can be quite difficult.

 Well, if the test passes, there's no need to know where exactly it's
 located.  If it fails, the diagnostics contain the line number:

   not ok 6
   #   Failed test in t/xxx.t at line 26.

 I've never seen incorrect line numbers.

I have. Lots and lots and lots of times. I could do a survey but IMO
it would be a waste of time.

Anytime you need to do testing that doesn't exactly fit into the
provided tools from the Test::Builder suite you either need to design
a Test::Builder style module, or you get bogus line numbers because
the wrapper routines around the tests report the wrong thing.
Basically, where the test originated is determined by heuristic
(much as Carp does its thing by heuristic). And as anybody
with a comp-sci background knows, heuristics are called that and not
algorithms because they are not provably correct. They get things
wrong.


It's not really a heuristic, it's perfectly reliable. The problem is
that it requires action on the part of the Test::XXX author to get it
correct. There's a $Test::Builder::Level variable which should be
incremented every time you go further down the stack inside the
library, if you don't change it correctly then the user will get
incorrect line numbers.

A quick look on CPAN showed several Test::* modules that don't
bother to test it, probably because Test::Builder::Tester does not
make it particularly pleasant to do. Test::* modules that use
Test::Tester get it automatically because it doesn't depend on
scraping the diagnostic string.


A string in a test file is trivial to find. Open the test file in an
editor and do a search for the string, and presto you have the failing
test.


Test names are great. Line numbers are useless inside a loop,

F



Yves


--
perl -Mre=debug -e /just|another|perl|hacker/



Re: use Tests; # ?

2006-07-17 Thread Fergal Daly

On 17/07/06, demerphq [EMAIL PROTECTED] wrote:

On 7/17/06, Fergal Daly [EMAIL PROTECTED] wrote:
 On 17/07/06, demerphq [EMAIL PROTECTED] wrote:
  On 7/17/06, Torsten Schoenfeld [EMAIL PROTECTED] wrote:
   On Mon, 2006-07-17 at 11:39 +0200, demerphq wrote:
  
Test names shouldn't be optional.
  
   I disagree.  I would find it cumbersome to have to come up with a
   description for each and every test.
 
  I don't think it's that cumbersome at all. Even stuff like
 
  Fnorble 1
  Fnorble 2
 
  is sufficient.
 
Finding a particular test in a file by its number can be quite
difficult, especially in test files where you don't have stuff like
   
'ok 26'.
   
When ok() and is() are silently incrementing the counter and test
names aren't used how is one supposed to find the failing test? As you
probably know it can be quite difficult.
  
   Well, if the test passes, there's no need to know where exactly it's
   located.  If it fails, the diagnostics contain the line number:
  
 not ok 6
 #   Failed test in t/xxx.t at line 26.
  
   I've never seen incorrect line numbers.
 
  I have. Lots and lots and lots of times. I could do a survey but IMO
  it would be a waste of time.
 
  Anytime you need to do testing that doesn't exactly fit into the
  provided tools from the Test::Builder suite you either need to design
  a Test::Builder style module, or you get bogus line numbers because
  the wrapper routines around the tests report the wrong thing.
  Basically, where the test originated is determined by heuristic
  (much as Carp does its thing by heuristic). And as anybody
  with a comp-sci background knows, heuristics are called that and not
  algorithms because they are not provably correct. They get things
  wrong.

 It's not really a heuristic, it's perfectly reliable.

The fact that it does the same thing every time for a given set of
inputs doesn't mean that it does the RIGHT thing. And I don't see how it's
possible to automatically do the right thing every time. Therefore if
you need a custom wrapper you need to teach it to do the right thing.


The way it works is that Test::Builder expects there to be 1 layer of
stack between the call from the test script and the call into
Test::Builder->ok(). If your library is going to put more layers in
there then it has to tell Test::Builder about it. The standard way to
do this is to put

local $Test::Builder::Level = $Test::Builder::Level + 1;

at the top of every function in your library that does not actually
call ->ok() but does call something that will call it (you have to do
a bit more work if you want to call ->ok() directly and indirectly
from the same function). So Test::Builder knows how many levels of
stack to ignore to get out of the library and back to the test script.

Doing the right thing is not difficult, it just involves some copy
and paste tedium. If you do the right thing, though, you are guaranteed
to get the right result.
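To make that concrete, here is a minimal sketch of a wrapper following
that convention (the is_positive helper is invented for illustration,
it is not from any real module):

```perl
use Test::More 'no_plan';

# Hypothetical wrapper: it adds one stack frame between the test script
# and Test::Builder's ok(), so it bumps $Test::Builder::Level by one.
# A failure is then reported at the caller's line, not at this sub's.
sub is_positive {
    my ($n, $name) = @_;
    local $Test::Builder::Level = $Test::Builder::Level + 1;
    ok($n > 0, $name);
}

is_positive(5, "five is positive");
```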



 The problem is
 that it requires action on the part of the Test::XXX author to get it
 correct.

And often the test file author as well if there isn't an off-the-shelf
Test:: module appropriate to the task at hand.

 There's a $Test::Builder::Level variable which should be
 incremented every time you go further down the stack inside the
 library, if you don't change it correctly then the user will get
 incorrect line numbers.

But what happens if you are using something like

sub do_test {
   my ($testname, $testvalue, $code) = @_;
   my $expect = somefunc($testvalue);
   is($expect, $code->($testvalue), $testname . " somefunc");
   is( );
   is( );
}

then you end up with the tests reporting that the wrapper sub failed.
Or you have to rewrite the wrapper as a Test::Builder module. (Blech).


Absolutely, unless Test::Builder and TAP support nested blocks, we're
stuck in a world where structured programming is difficult (TAP
without nested blocks considered harmful :).

However, in the case of the subroutine above, you could solve some of
the line number issues by having Test::Builder give a full stack trace
rather than just reporting a single line number.
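For illustration (this is not existing Test::Builder behaviour, just a
sketch of the idea, with an invented failure_trace helper), Carp can
already capture such a trace:

```perl
use Carp ();

# Build a message carrying the full call stack, so a failure inside a
# wrapper like do_test() would still show where the test script called it.
sub failure_trace {
    my ($msg) = @_;
    return Carp::longmess($msg);
}

print failure_trace("Failed test");
```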


 A quick look on CPAN showed several Test::* modules that don't
 bother to test it, probably because Test::Builder::Tester does not
 make it particularly pleasant to do. Test::* modules that use
 Test::Tester get it automatically because it doesn't depend on
 scraping the diagnostic string.

So there are routes to make this easier, but it seems to me they are
likely to be a lot harder than providing a proper identification
string.


This doesn't make it easier to do it correctly, it just makes it
easier to test that you've done it correctly (free, in fact).

F


  A string in a test file is trivial to find. Open the test file in an
  editor and do a search for the string, and presto you have the failing
  test.

 Test names are great. Line numbers are useless inside a loop,

Exactly. Stuff like:

my @tests=( [ BFoo, *Foo*, ASCII Bold

Re: use Tests; # ?

2006-07-17 Thread Fergal Daly

On 17/07/06, chromatic [EMAIL PROTECTED] wrote:

On Monday 17 July 2006 11:37, Ovid wrote:

 For example, what could be done in TAP::Harness to improve the reporting on
 line numbers? That alone would be a nice benefit for folks.

I agree, but I disclaim the idea that there's a nice, general, working
heuristic.

The best I've come up with cheats by looking at call frames from a Test::*
package and chooses the first one not in those.  It will work often, but not
always.


What's the problem you're trying to solve? What's wrong with the current method?

F


Re: fetching module version from the command line

2006-07-13 Thread Fergal Daly

On 12/07/06, Smylers [EMAIL PROTECTED] wrote:

David Wheeler writes:

 On Jul 12, 2006, at 03:41, Gabor Szabo wrote:

 perl -MModule -e'print $Module::VERSION'

 I have this alias set up:

  function pv () { perl -M$1 -le "print $1->VERSION"; }

Along similar lines, I have this one-liner as ~/bin/pmv:

#! /bin/sh
perl -m$1 -le 'print '$1'->VERSION || die "No VERSION in '$1'\n"'

 I think that calling ->VERSION is more correct.

So do I.  In fact I used to use $VERSION in my script, but changed to
->VERSION after some situation in which it worked and $VERSION didn't.
Sorry, I can't right now remember what that was.


These all fail for modules that do interesting things. For example
Test::NoWarnings performs a Test::Builder test in an END block to make
sure there were no warnings.

I could change it so that it tries to figure out whether it's being
used for real or not and disable the END block code but that's stress
and hassle. As a module author, as far as I'm concerned, if MakeMaker
can figure out my version then my job is done,

F


Re: fetching module version from the command line

2006-07-13 Thread Fergal Daly

On 13/07/06, Smylers [EMAIL PROTECTED] wrote:

Fergal Daly writes:

 On 12/07/06, Smylers [EMAIL PROTECTED] wrote:

  I have this one-liner as ~/bin/pmv:
 
  #! /bin/sh
  perl -m$1 -le 'print '$1'->VERSION || die "No VERSION in '$1'\n"'

 These all fail for modules that do interesting things. For example
 Test::NoWarnings performs a Test::Builder test in an END block to make
 sure there were no warnings.

So?  It still seems to work, for the purposes of determining what
version of the module is loaded:

  $ pmv Test::NoWarnings
  0.082


That's funny, it looks like I did put some code in to disable the END
block if it's require()d rather than use()d. Turns out I did this to
make MakeMaker happy, so MakeMaker does actually do a full require,

F


Re: TAP diagnostic syntax proposal

2006-07-13 Thread Fergal Daly

On 13/07/06, Geoffrey Young [EMAIL PROTECTED] wrote:

Jonathan Rockway wrote:
 While I agree with David, this argument is almost completely pointless.
 Nobody reads the raw TAP output!

are you serious?  listen to what the people here are saying - we _all_
read the raw TAP output, all the time, and not because we're TAP
developers interested in the underlying implementations.  as users, the
(current) raw TAP diagnostics helps us figure out why a test failed, and
if it doesn't make sense due to bad wording or reversed expectations
then it's that much harder than it needs to be.


Yeah, humans are the only things that read TAP diagnostics. That said
I don't really care whether my diagnostics are grammatically correct.
Short is good.

Oh! How about

# Got: 2
# Not: 1

short and rhyming, beat that,

F


Re: TAP diagnostic syntax proposal

2006-07-12 Thread Fergal Daly

If only we had some kind of standard language for marking things up
that was extensible... and wasn't met with universal disapproval,


F

On 12/07/06, Jonathan Rockway [EMAIL PROTECTED] wrote:


 Did you guys consider the problem of newlines in content?


This is a good question.  Implementing your own file format means you
have a big-bag-o-quoting problems.  How do you print a verbatim
newline?  What about a verbatim single quote?  What about Unicode?  What
about a newline followed by "not ok " . ++$counter? :)

http://cr.yp.to/qmail/guarantee.html says:

 When another programmer wants to talk to a user interface, he has to
 /quote/: convert his structured data into an unstructured sequence of
 commands that the parser will, he hopes, convert back into the
 original structured data.

 This situation is a recipe for disaster. The parser often has bugs: it
 fails to handle some inputs according to the documented interface. The
 quoter often has bugs: it produces outputs that do not have the right
 meaning. Only on rare joyous occasions does it happen that the parser
 and the quoter both misinterpret the interface in the same way.


Things to think about :)

Regards,
Jonathan Rockway



Re: TAP extension proposal: test groups

2006-07-03 Thread Fergal Daly

On 02/07/06, Adam Kennedy [EMAIL PROTECTED] wrote:


Fergal Daly wrote:
 On 02/07/06, Adam Kennedy [EMAIL PROTECTED] wrote:
  There's no way to declare a top-level plan. That is, I can't say how
  many groups of tests I'm going to run so there's effectively no plan,

 One point that Andy was extremely insistent on, and I think Schwern and
 I agree, is that the main plan is ALWAYS the total number of tests for
 the entire test script.

 In that case, groups form an additional set of checks, but do NOT alter
 the plan for the entire script.

 That contradicts #2 "I don't want to have to count up the total number
 of tests in my file but I do want the protection of the plan", but
 looking again, I see that the example does include an overall plan that
 does count up the total.

There's four cases here.

1. Plan, no groups
2. No plan, no groups

As is now

3. Plan, with groups

The plan still is for the ENTIRE test script, but in addition within
that total you can define groups to add extra protection or grouping
information for diagnostics.

4. No plan, with groups

In THIS case, the total of the script does not matter or may not be
known, but you want protection of a sort of miniplan for specific
sections.

This does bring up a gap in the spec though (or I'm not remembering right).

If you have the following, how do you tell where the end of the group
is? Currently I think it would be implicit and unclear.

(noplan)
ok 1
ok 2
..2
ok 3
ok 4
ok 5
ok 6


That seems like a problem too but the one I'm trying to get at is

4 no plan, with groups

If your script exits prematurely after one of the groups, the harness
will not notice because everything looks just fine. The solution to
this is not to use "plan, with groups" because then you have to count
all the tests individually, which goes against objective #2,

F


Re: TAP extension proposal: test groups

2006-07-03 Thread Fergal Daly

On 03/07/06, Adam Kennedy [EMAIL PROTECTED] wrote:

 That seems like a problem too but the one I'm trying to get at is

 4 no plan, with groups

 If your script exits prematurely after one of the groups, the harness
 will not notice because everything looks just fine. The solution to
 this is not to use "plan, with groups" because then you have to count
 all the tests individually, which goes against objective #2,

But then we've had this problem up till now anyway.

If it exits prematurely with a good return code now, it's a correct
ending; if it returns with a bad return code it's an error.


2. I don't want to have to count up the total number of tests in my
file but I do want the protection of the plan

The protection of the plan is that when my script exits cleanly but
prematurely I find out. That's the only protection it gives.

Currently, the only way to get this protection is to count up all of
the tests. This grouping scheme does not change that.

The other objectives aren't terribly important to me (and I'm not even
sure #3 is solving the right problem).


The addition of groups will not change that behaviour in unplanned test
space, because what you want is simply unknowable.


I'm not arguing about unplanned test space,

F


Re: TAP extension proposal: test groups

2006-07-02 Thread Fergal Daly

On 02/07/06, chromatic [EMAIL PROTECTED] wrote:

On Saturday 01 July 2006 16:46, Fergal Daly wrote:

 It looks like it's only one level of nesting. Any reason not to go the
 whole hog with something like

 ..1
 OK 1
 ..2
 ...1
 OK 2
 OK 3
 ...2
 OK 4
 ..3
OK 5

No one has provided an actual use case for it yet.  YAGNI.


I think I've misinterpreted the numbers. Each one is a plan, not a group number.

Here's the use case I was thinking of

use Test::More tests => 1;

my $l = Leopard->new();
IsALeopard($l);

sub IsALeopard {
 my $thing = shift;
 my $g = Group(tests => 4);
 IsACat($thing);
 HasSpots($thing);
 Colour("yellow");
 Colour("black");
 # $g gets destroyed, we leave the block
}

sub IsACat {
 my $thing = shift;

 my $g = Group(tests => 3);
 IsAMammal($thing);
 HasWhiskers($thing);
 Likes("milk");
 # $g gets destroyed, we leave the block
}

which isn't supported by the above,

F


Re: TAP extension proposal: test groups

2006-07-02 Thread Fergal Daly

On 02/07/06, Adam Kennedy [EMAIL PROTECTED] wrote:

Fergal Daly wrote:
 It looks like it's only one level of nesting. Any reason not to go the
 whole hog with something like

 ..1
 OK 1
 ..2
 ...1
 OK 2
 OK 3
 ...2
 OK 4
 ..3
 OK5

I believe the conclusion here was that because demand for nested groups
appeared to be extremely limited, to START with just the one level, with
the notion of nested groups having that syntax, but not included in the
specification or implementation until there's been time for the initial
group code to settle down.

So we have a place to put nests should we need to, but it would
complicate implementation greatly if we had it immediately.


Since my understanding of the notation was wrong, my proposed
notation is wrong. That said, I'm not sure how the above extends to
nested groups.

F



Adam K



Re: TAP extension proposal: test groups

2006-07-02 Thread Fergal Daly

On 01/07/06, Michael G Schwern [EMAIL PROTECTED] wrote:

The PITA / TestBuilder2 BOF at YAPC whacked up this TAP extension.

Test groups in TAP.  There are several use-cases here.

1. I want to name a group of tests rather than the individuals.

2. I don't want to have to count up the total number of tests in my
file but I do want the protection of the plan.  I'd like to be able to
say "I'm going to run 5 tests.  I'm going to run 4 more tests.  Now 8
more."

3. The spew to STDERR from my code when it does something wrong cannot
be associated with a single test.  But if I had a test grouping I
could associate it with that group.


Here's what we came up with.

1..10
..4 - name for this group
ok 1
ok 2
ok 3
ok 4
..2 - I will call this group Bruce
ok 5
ok 6
..4
ok 7
ok 8
ok 9
ok 10

Pros:
* It's backwards compatible.  The ..# lines are currently considered
junk and ignored.

* It's pretty readable.

* It solves #1

* Combined with 'no_plan' it solves #2.

  ..2
  ok 1
  ok 2
  ..3
  ok 3
  ok 4
  ok 5
  1..5

* It solves #3.

  1..5
  ..3
  ok 1
  Oh god, the hurting
  oh dear, oh god at Foo.pm line 23
  not ok 2
  # Failed test ...
  # got : this
  # expected: that
  ok 3
  ..2
  ok 4
  ok 5


Cons?


There's no way to declare a top-level plan. That is, I can't say how
many groups of tests I'm going to run so there's effectively no plan,

F


Re: TAP extension proposal: test groups

2006-07-02 Thread Fergal Daly

On 02/07/06, Adam Kennedy [EMAIL PROTECTED] wrote:

 There's no way to declare a top-level plan. That is, I can't say how
 many groups of tests I'm going to run so there's effectively no plan,

One point that Andy was extremely insistent on, and I think Schwern and
I agree, is that the main plan is ALWAYS the total number of tests for
the entire test script.

In that case, groups form an additional set of checks, but do NOT alter
the plan for the entire script.


That contradicts #2 "I don't want to have to count up the total number
of tests in my file but I do want the protection of the plan", but
looking again, I see that the example does include an overall plan that
does count up the total.

Is the example correct?

F


Re: TAP::Harness

2006-07-01 Thread Fergal Daly

This might seem like an odd question but will it be tightly tied to
TAP or will it be possible to use another protocol or an extension to
TAP?

F

On 01/07/06, Michael G Schwern [EMAIL PROTECTED] wrote:

Those of you who were/are at the YAPC Hackathon might know, I've begun
work on what started as Test::Harness 3 and is now TAP::Harness.  This
is brand new, ground up rewrite of the idea of a harness for TAP
sources (a foo.t file is a TAP source).  It's being designed to be
extendable to handle all the things we'd like to do with Test::Harness
over the last few years without having to worry about backwards
compat.

The design is currently a pile of sticky notes attached to my laptop
and a little bit of code.  chromatic, Andy, Adam Kennedy, Helen Cook
and myself have been working on it here at the YAPC hackathon.  I will
post up a sketch of that design a bit later, I have to go get a plane
shortly.  For now, here's a PFAQ (Preemptive FAQ).


* Why call it TAP::Harness?

TAP focuses it on the fact that this is about the Test Anything
Protocol, not just Perl's testing stuff.  By not putting it in the
Test:: namespace I hope to draw a stronger line between the
responsibilities of things like Test::More (your test program) and
TAP::Harness (the thing which runs and interprets your test program).

Harness is a link with its forerunner, Test::Harness.


* What about Test::Harness?

Test::Harness remains its own thing.

At some point in the future Test::Harness will likely be gutted and
turned into a thin wrapper around TAP::Harness.  I'm not caring about
this right now.


* Is it going to use Test::Harness::Straps?

No.  I will be stealing lots of code from Straps, but I will not be
using Straps.  Straps has too many design flaws.  It tries to do too
many parts of the TAP processing.  It also doesn't do all the TAP
processing leaving the Straps user to do some of that.  And the
callback system doesn't work very well.


* Should I continue to work on Test::Harness?

Yes.  While I am optimistic, I make no promises as to when
TAP::Harness is going to be stable.  So keep working on Test::Harness,
keep using Test::Harness::Straps.


* Will TAP::Harness go into the core?

Probably at some point.  I don't care right now.


* Will I be able to do X with TAP::Harness?

The goal is to encompass the largest set of X.  Another goal is to
have the extender be able to focus on the display aspects and not the
TAP parsing.

Right now the use cases I have in mind include things such as
parallelized test runs, fancy GUI and HTML outputs (for example,
TAP::Model::HTMLMatrix), multiple, non-Perl TAP sources (ex. contact a
URL and get the TAP from that; run HTML through a validator which
produces TAP; run a C program which spits out TAP), enhanced TAP
output (ex. colors; levels of verbosity), and the ability to smoothly
handle TAP extensions.


* Will TAP::Harness include X extension to TAP?

No.  TAP::Harness is using the current TAP spec from
Test::Harness::TAP.  Extending TAP is another problem.


* Will I be able to test X (HTML, PHP, Javascript, Monkeys...) with
TAP::Harness?

Yes.  You will be able to write new TAP source plugins for whatever
you want.  As long as it winds up producing a stream of TAP at the
end.


* Where's the code?

svn.schwern.org.  There's not a whole lot there yet.


* How can I help?

Provide use cases, what would you want to do with Test::Harness if you
could?  What are you doing with Straps?  What features do other
testing systems (JUnit, for example) have that you'd like to see in
Perl?  Once I post the design, pick it to pieces.



Re: TAP::Harness

2006-07-01 Thread Fergal Daly

On 01/07/06, Andy Lester [EMAIL PROTECTED] wrote:


On Jul 1, 2006, at 2:45 PM, Fergal Daly wrote:

 This might seem like an odd question but will it be tightly tied to
 TAP or will it be possible to use another protocol or an extension to
 TAP?

Yes.  It is about TAP.  That's why it's TAP::Harness.


I'm none the wiser. So I'll just remark that, if possible, it would be
nice if the protocol was pluggable,

F



xoa

--
Andy Lester = [EMAIL PROTECTED] = www.petdance.com = AIM:petdance







Re: TAP extension proposal: test groups

2006-07-01 Thread Fergal Daly

It looks like it's only one level of nesting. Any reason not to go the
whole hog with something like

..1
OK 1
..2
...1
OK 2
OK 3
...2
OK 4
..3
OK 5

F

On 01/07/06, Michael G Schwern [EMAIL PROTECTED] wrote:

The PITA / TestBuilder2 BOF at YAPC whacked up this TAP extension.

Test groups in TAP.  There are several use-cases here.

1. I want to name a group of tests rather than the individuals.

2. I don't want to have to count up the total number of tests in my
file but I do want the protection of the plan.  I'd like to be able to
say "I'm going to run 5 tests.  I'm going to run 4 more tests.  Now 8
more."

3. The spew to STDERR from my code when it does something wrong cannot
be associated with a single test.  But if I had a test grouping I
could associate it with that group.


Here's what we came up with.

1..10
..4 - name for this group
ok 1
ok 2
ok 3
ok 4
..2 - I will call this group Bruce
ok 5
ok 6
..4
ok 7
ok 8
ok 9
ok 10

Pros:
* It's backwards compatible.  The ..# lines are currently considered
junk and ignored.

* It's pretty readable.

* It solves #1

* Combined with 'no_plan' it solves #2.

  ..2
  ok 1
  ok 2
  ..3
  ok 3
  ok 4
  ok 5
  1..5

* It solves #3.

  1..5
  ..3
  ok 1
  Oh god, the hurting
  oh dear, oh god at Foo.pm line 23
  not ok 2
  # Failed test ...
  # got : this
  # expected: that
  ok 3
  ..2
  ok 4
  ok 5


Cons?



Re: Non-Perl TAP implementations (and diag() problems)

2006-04-19 Thread Fergal Daly
On 4/19/06, Ricardo SIGNES [EMAIL PROTECTED] wrote:
 * Ovid [EMAIL PROTECTED] [2006-04-19T04:02:31]
  From a parser standpoint, there's no clean way of distinguishing that
  from what the test functions are going to output.  As a result, I
  really think that diag and normal test failure information should be
  marked differently (instead of the /^# / that we see).

 I strongly agree.  This came up when TAP was being named, and also during a
 previous Harness upgrade.  Unfortunately, Harness and Straps discard comments
 and, as mentioned before, the STDOUT/STDERR makes it hard to associate
 diagnostic output with tests.

 How many things rely on which stream the various output methods in
 Test::Builder use?  I know there was trouble last time that things changed, 
 but
 wasn't that entirely because of the disconnect between Test::Builder and
 Test::Builder::Tester?  Since they're now together, isn't that particular 
 issue
 solved?

 There are other things that test test output, like Test::Tester.  Will they
 break?  To find out, I downloaded a pristine copy of the latest Test-Simple
 and
 Test-Tester and changed Test::Builder to use STDOUT for failure_output.  The
 only test that failed in the whole set was one that checked whether
 failure_output defaulted to STDERR.

Test-Tester (and modules using it) should not be impacted by this as
it is based on the Test::Builder object API. It collects the test
results while they are still Perl data, rather than scraping them out
of the test stream.

I'm not sure what test you are referring to, as far as I can tell,
Test-Tester doesn't check anything about failure_output. What .t file
is it in?

F


 I think that a real investigation into the impact of using one stream by
 default is in order.

 --
 rjbs







Re: Non-Perl TAP implementations

2006-04-19 Thread Fergal Daly
On 4/18/06, Adam Kennedy [EMAIL PROTECTED] wrote:
 The aformentioned change to Test::Builder broke 3 different Test-Testing
 modules that relied on it.

3? I only know of 2 - Test::Builder::Tester (which scrapes and broke)
and Test::Tester (which doesn't scrape and didn't break). Is there
another Test-Testing module?

F


 That broke 28 Test modules which used them.

 That broke 115 various CPAN modules.

 That broken 880 other CPAN modules.

 And so on and so forth, until the end number ended up somewhere between
 2000 and 3000 distributions.

 So 30% of all CPAN modules broke.

 This included things like almost ALL of WWW::, because Mechanize got
 sucked into it. God knows how many darkpan Mech modules got hurt as well.

 Do we blame ALL of those 100s of developers?

 Does blame even matter at that point?

 This is what I mean by becoming an API even though you didn't want to be.

 Adam K



Re: Non-Perl TAP implementations

2006-04-19 Thread Fergal Daly
On 4/18/06, Ovid [EMAIL PROTECTED] wrote:
 --- David Wheeler [EMAIL PROTECTED] wrote:
  Test.Simple - JavaScript. It looks and acts just like TAP, although in
  reality it's tracking test results in an object rather than scraping
  them from a print buffer.
 
 http://openjsan.org/doc/t/th/theory/Test/Simple/

 Tracking the results in an object is a better choice than scraping from
 a print buffer.  One of the frustrating issues with Perl's testing
 tools is the limited flexibility we have due to reading the output from
 STDOUT.

One other reason (that I didn't see mentioned) is that objects imply
that the harness and tests are in the same process which means that
the tests can corrupt the harness and that the harness can fail to
report if the test process dies,

F


 The TAP output should really just be for humans.  It should also be
 reconfigurable, but obviously we can't do that because Test::Harness
 would choke.

 Since it looks like we're going to stick with reading information from
 a print buffer, we should at least publish an EBNF grammar for the
 output.  (Interestingly, if we did that, we could potentially
 incorporate that into Test::Harness and allow folks to provide their
 own grammars and thus structure the output to better suit their needs.
 Of course, I would like a Ponie with that, too).

 Cheers,
 Ovid

 --
 If this message is a response to a question on a mailing list, please send 
 follow up questions to the list.

 Web Programming with Perl -- http://users.easystreet.com/ovid/cgi_course/



Re: Non-Perl TAP implementations

2006-04-19 Thread Fergal Daly
On 4/19/06, David Wheeler [EMAIL PROTECTED] wrote:
 On Apr 19, 2006, at 12:14, Fergal Daly wrote:

  One other reason (that I didn't see mentioned) is that objects imply
  that the harness and tests are in the same process which means that
  the tests can corrupt the harness and that the harness can fail to
  report if the test process dies,

 Well, the harness can be corrupted by bad output, too (do something
 like this in a test to see what I mean:

print STDOUT "ok - Ha ha ha!\n";

). But in the JavaScript port, tests run in a browser, and there's no
such thing as separate processes, so I had no choice there. So I
 decided to do both things: Test.Harness uses the objects it collects
from Test.Builder to summarize test passes, failures, what to output
 to the browser, and what not. But Test.Builder also sends all output
 to a series of appropriate function calls (which in the browser all
 go to the same place), so the test can run without the harness and
 display results, and so that some other harness could potentially
 scrape the output to summarize the results.

Yes, in the context of javascript testing, there's no choice but, for
example, I have used DUnit (Delphi) and it has a nice GUI test runner
but it's very annoying to have your test launching and result display
program wedge, crash or silently disappear because of a problem with
the module you're testing,

F


Re: [PATCH] Forking tests with Test::More

2006-03-28 Thread Fergal Daly
A far simpler solution (that I've posted recently) is to
output test numbers like

.1.1
.1.2
.1.3
.2.1
.2.2
.1.4

etc where the first number signifies the thread/process and the second
is just an increasing sequence within that thread. The . is there at
the start so that Test::Harness doesn't get upset.

Interprocess comms using Storable seems like overkill and sounds like
the sort of thing that would have fun bugs,

F

On 3/28/06, Tassilo von Parseval [EMAIL PROTECTED] wrote:
 Hi,

 I was told that Test::More patches should now go to this list so here we
 go.

 The attached patch serves as a draft for enabling test-scripts that fork
 without the test-counter getting confused. It does so by using a
 Storable image shared between the processes. The patch however does
 need some modification because there's a race condition in there. It
 uses lock_nstore and lock_retrieve to store the current test-metrics
 thusly:

 +sub _inc_testcount {
 +    my $self = shift;
 +
 +    if( not $self->{Forked} ) {
 +        lock $self->{Curr_Test};
 +        $self->{Curr_Test}++;
 +        return;
 +    }
 +
 +    # we are running in forked mode, therefore
 +    # get data from disk, modify and write back
 +
 +    my $stats = lock_retrieve( $self->{Forked} );
 +    $self->{Curr_Test} = ++$stats->{Curr_Test};
 +    $self->{Test_Results} = $stats->{Test_Results};
 +    lock_nstore( $stats => $self->{Forked} );
 +}

 This is not quite correct. Instead, the member $self->{Forked} should be
 turned into a rw-filehandle to the storable image (it is the path to the
 image right now) and _inc_testcount() would become something like that:

 ...
 # we are running in forked mode, therefore
 # get data from disk, modify and write back

 # enter critical region:

 flock $self->{Forked}, LOCK_EX;
 my $stats = fd_retrieve($self->{Forked});
 $self->{Curr_Test} = ++$stats->{Curr_Test};
 $self->{Test_Results} = $stats->{Test_Results};
 nstore_fd( $stats => $self->{Forked} );
 flock $self->{Forked}, LOCK_UN;

 # critical region left

 A similar approach is needed for _store() and essentially for everything
 that now uses lock_nstore/lock_retrieve.

 Also, a test-case for this feature is tricky to conceive as
 Test::Builder::Tester can't be used here. I supplied one but it's quite
 messy.

 I am right now in the middle of relocating to NY so I don't have the
 time to do these modifications myself so maybe someone with more time on
 his hands could look after that. It's not so tricky and mostly involves
 some local changes to the enclosed patch.

 Cheers,
 Tassilo
 --
 use bigint;
 $n=71423350343770280161397026330337371139054411854220053437565440;
 $m=-8,;;$_=$n(0xff)$m,,$_=$m,,print+chr,,while(($m+=8)=200);





Re: [PATCH] Forking tests with Test::More

2006-03-28 Thread Fergal Daly
On 3/28/06, Adam Kennedy [EMAIL PROTECTED] wrote:
 Tassilo von Parseval wrote:
  On Tue, Mar 28, 2006 at 09:47:54AM +0100 Fergal Daly wrote:
 
 A far simpler solution (that I've posted recently) is to
 output test numbers like
 
 .1.1
 .1.2
 .1.3
 .2.1
 .2.2
 .1.4
 
 etc where the first number signifies the thread/process and the second
 is just an increasing sequence within that thread. The . is there at
 the start so that Test::Harness doesn't get upset.
 
 Interprocess comms using Storable seems like overkill and sounds like
 the sort of thing that would have fun bugs,
 
 
  I really don't care how it is done, as long as it is eventually done at
  all. :-)
 
  As I can see it, there now exist at least two propositions on how to
  fix this problem. The ones responsible for Test::More/Test::Harness
  should take any of these proposed solutions and put them in.
 
  I just would like to be able to write test-scripts that fork without
  these annoying and ugly counter-mismatch messages. For that I sent one
  possible solution which ends my responsibilities in this matter. :-)
 
  Cheers,
  Tassilo

 Well three, if you include my redirect fork output to seperate files,
 and then merge back in at SIGCHLD/END-time proposal, which would also
 allow things like testing using other languages.

 Really, I just want a solution that works on all platforms, and doesn't
 involve changing the TAP protocol, because of the number things
 generating TAP that aren't in Perl, but being read by Perl.

 Changing protocols can have big consequences.

What's changing the protocol?

F


Re: [PATCH] Forking tests with Test::More

2006-03-28 Thread Fergal Daly
That's why I said you prefix with a ".".

This has the effect of making it not a number as far as TAP is
concerned; instead it becomes part of the name.

Of course it would be better to allow .s in the number, that way you
can check that they are increasing properly and allows you to have
sub-plans (so each fork/thread/block can have its own plan) rather
than just having 1 single overall plan for the whole thing. Having 1
overall plan means that if one fork skips a test and the other does an
extra test you won't notice.

Given that x.x.x.x currently causes an error for TAP, changing the
protocol to allow it would not break anything.

But as I said, I wasn't proposing that - although I would be happy to see it,

F


On 3/28/06, Adam Kennedy [EMAIL PROTECTED] wrote:

  What's changing the protocol?

 As I understand it from
 http://search.cpan.org/~petdance/Test-Harness-2.56/lib/Test/Harness/TAP.pod
 the test number must be a number.

 It does refer specifically to it having to start with a digit, but I'm
 assuming that by number it means [1-9]\d*

 .1.2 would on the face of it seem to not match this pattern.

 Adam K



Re: [PATCH] Forking tests with Test::More

2006-03-28 Thread Fergal Daly
On 3/28/06, Tassilo von Parseval [EMAIL PROTECTED] wrote:
 On Tue, Mar 28, 2006 at 11:27:15AM +0100 Fergal Daly wrote:
  That's why I said you prefix with a ..
 
  This has the effect of making it not a number as far as TAP is
  concerned; instead it becomes part of the name.
 
  Of course it would be better to allow .s in the number, that way you
  can check that they are increasing properly and allows you to have
  sub-plans (so each fork/thread/block can have its own plan) rather
  than just having 1 single overall plan for the whole thing. Having 1
  overall plan means that if one fork skips a test and the other does an
  extra test you won't notice.
 
  Given that x.x.x.x currently causes an error for TAP, changing the
  protocol to allow it would not break anything.

 But you see, it does make a change in (or an addition to) the protocol
 necessary afterall since it currently doesn't work as you said yourself.

No change is required in TAP to support numbers that begin with a .
because TAP interprets them as names not numbers. (TAP protocol says
that test numbers are optional).

Forget my comments about altering TAP, that was wishlist stuff and
should not be confused with my suggestion for the current problem.

 No matter what you do, you either have to change Test::Harness or the
 module generating the TAP output. Or even both as in your case.

No.

 I think
 any conceivable solution will have its ugly points. It's now a matter of
 finding a simple and robust approach and put it in.

Anything that attempts to synchronise across processes is harder to
make robust and less likely to be simple.

F


 Cheers,
 Tassilo
 --
 use bigint;
 $n=71423350343770280161397026330337371139054411854220053437565440;
 $m=-8,;;$_=$n(0xff)$m,,$_=$m,,print+chr,,while(($m+=8)=200);




Re: [OT] TDD only works for simple things...

2006-03-28 Thread Fergal Daly
I don't know of examples off-hand but I think in a way they're
correct. If you write lots of code first and then try to test it, you
will look and say it's not possible to test this so I could not
possibly have written my tests beforehand - those TDD guys are fools.
If you write the tests beforehand (or even if you just write your code
with an eye towards how it will be tested) you end up designing your
systems so that even the biggest most complex pieces are testable.

So until you actually get the testing bug, it's true that only your
simplest designs are testable.

Also, the problem with php (assuming you use it as a webpage
generator) is that it encourages you to embed code in your HTML and so
yes, it is naturally difficult to test,

F

On 3/28/06, Geoffrey Young [EMAIL PROTECTED] wrote:
 hi all :)

 for those interested in both php and perl, it seems that php's native .phpt
 testing feature will soon produce TAP compliant output - see greg beaver's
 comments here

   http://shiflett.org/archive/218#comments

 so, TAP is slowly dominating the world... but we all knew that already :)

 what actually prompted me to write is a comment embedded there:

 "Only the simplest of designs benefits from pre-coded tests, unless you have
 unlimited developer time."

 needless to say I just don't believe this.  but as I try to broach the
 test-driven development topic with folks I hear this lots - not just that
 they don't have the time to use tdd, but that it doesn't work anyway for
 most real applications (where their app is sufficiently real or large
 or complex or whatever).

 since I'm preaching to the choir here, and I'd rather not get dragged into a
 "yes it does, no it doesn't" match, is there literature or something I can
 point to that has sufficient basis in real applications?  I can't be the
 only one dealing with this, so what do you guys do?

 --Geoff



Re: [OT] TDD only works for simple things...

2006-03-28 Thread Fergal Daly
On 3/28/06, Tels [EMAIL PROTECTED] wrote:
 Moin,

 On Tuesday 28 March 2006 17:14, Fergal Daly wrote:
  I don't know of examples off-hand but I think in a way they're
 [snipabit]
  Also, the problem with php (assuming you use it as a webpage
  generator) is that it encourages you to embed code in your HTML and so
  yes, it is naturally difficult to test,

 Well, duh! If you break one of the general rules of coding[0], you have to
 live with the consequences.

Yes but "everything you're doing is wrong, you're stupid" probably
isn't the counter-argument that Geoff is looking for :)

F


 Best wishes,

 Tels

 0: DMCADS! - Don't mix code and data, stupid!

 --
  Signed on Tue Mar 28 19:19:53 2006 with key 0x93B84C15.
  Visit my photo gallery at http://bloodgate.com/photos/
  PGP key on http://bloodgate.com/tels.asc or per email.

  To be beautiful is enough! If a woman can do that well who should
  demand more from her? You don't want a rose to sing. -- Thackeray






Re: [OT] TDD only works for simple things...

2006-03-28 Thread Fergal Daly
On 3/28/06, David Cantrell [EMAIL PROTECTED] wrote:
 Geoffrey Young wrote:
  David Cantrell wrote:
 Try writing a test suite ahead of time for a graphing library.  It's
 possible (indeed, it's trivial - just check the md5 hashes of the images
 that are spat out against images that you have prepared ahead of time in
 some other way) but it would be damnably time-consuming to create those
 tests.  Consequently, I've not bothered.  I throw data at it, and look
 at the results.  If the results are good I then put an md5 hash of the
 image into a regression test.
  well, ok, I'll agree with you if you look at it that way.  but I think tdd
  ought to happen at much lower level than that - certainly there's more to
  test than just spitting out an image?  you're probably calling several
  different subroutines in order to generate that image, each of which can be
  developed using tdd, and each of which gets more and more simple as you get
  deeper into the application I'd suspect.

 There are lots of bits which *can* be tested and which are (or will be
 anyway once the design has settled down in my head), but they're all to
 do with wrangling the data that the user supplies into nice structures.
   Frankly, they're the easy bits.  Those internal methods and those data
 structures are not for public consumption.  In fact, the only methods
 for public consumption that return anything useful are the constructor
 and the method which spits out an image.  All the other methods will
 either silently consume data or will die if you pass them a recipe for
 pie instead of a list of numbers (for example).

 Unfortunately, the draw() method is the one that's the hardest to write,
 the one that is the most prone to error, the hardest to debug, and the
 one where it's hardest to write tests in advance.

There are things you can do though. For example you can make draw take
a canvas argument. Normally the canvas object would end up producing a
gif but when testing, you pass in a canvas which just records all the
requested operations. You can then make sure that draw() tried to draw
the right number of points, lines, circles, squares etc, that it
didn't try to draw outside the boundaries, that it tried to put labels
close to the things that were being labelled etc etc.

If you were graphing a mathematical function you can check that for
each (x, y) that was rendered that

abs(f(x) - y) < epsilon

for some acceptable epsilon. You can also make sure that there was an x
for each point visible on the x-axis.

Tests like these (particularly the mathematical function graphing
tests) can definitely be written in advance,
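The recording canvas might be sketched like this (the class and method names are hypothetical, not from any real graphing library):

```perl
use strict;
use warnings;

# A test double: it offers the same drawing interface draw() would call,
# but every operation is recorded so the test can assert on what was drawn.
package RecordingCanvas;

sub new    { return bless { ops => [] }, shift }
sub line   { my $self = shift; push @{ $self->{ops} }, [ line   => @_ ]; return }
sub circle { my $self = shift; push @{ $self->{ops} }, [ circle => @_ ]; return }
sub ops    { return @{ $_[0]{ops} } }

package main;

my $canvas = RecordingCanvas->new;

# Stand-in for the library's draw($canvas) being exercised:
$canvas->line(0, 0, 100, 100);
$canvas->circle(50, 50, 10);

# Now assert on the recorded operations rather than on pixels:
my @lines = grep { $_->[0] eq 'line' } $canvas->ops;
print scalar(@lines), " line(s) recorded\n";    # 1 line(s) recorded
```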

F


 --
 David Cantrell



Re: Surprising use_ok false positive

2006-03-06 Thread Fergal Daly
On 3/5/06, Chris Dolan [EMAIL PROTECTED] wrote:
 On Mar 5, 2006, at 3:55 PM, David Wheeler wrote:

  On Mar 5, 2006, at 13:52, Chris Dolan wrote:
 
  Advice?  While this example is contrived, the eval
  { require ... } idiom is used often in the wild, so this is not a
  wholly unrealistic scenario.
 
  Of course it should be
 
eval { require Bar; 1; } or die $@;
 
  But I agree that it seems like if the load failed in your eval, it
  should fail for Test::More, too. But maybe even a failed require
  sets the record in %INC?

 In this case, Bar.pm is intended to represent optional functionality
 that Foo.pm conditionally loads.  So, adding a die would be
 counterproductive.  The problem is that I don't know how to
 distinguish between a load failure or a compile failure.  There must
 be a way to do that right?

The way to do it right is to run Perl's module finding subroutine and
if it finds the requested module then you can require it without an
eval (or just do "path/to/module"), if it doesn't then you skip it.

Sadly Perl does not give access to the module finding routine. I have
a vague idea that there's something on CPAN that implements it
(perldoc -f require shows some example code) but that won't handle all
the tricks you can do wth objects etc in @INC.

Another option is to eval and analyse $@ afterwards, maybe use a regex
to see if it contains an error about "Can't locate Foo/Bar.pm".

Last option that I can think of is do the eval { require } and if it
fails, check %INC. If the module isn't there then you know it
failed because it wasn't found. If the module is there then you know
it was found but died for some other reason so you should rethrow the
error with die $@.
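That last option might look something like this sketch (the helper name is mine; the %INC detail reflects the behaviour of perls of that era, where a require that found the file but died compiling left its %INC entry behind):

```perl
use strict;
use warnings;

# Load an optional module, distinguishing "not installed" (return 0)
# from "found but failed to compile" (rethrow the error).
sub try_require {
    my ($module) = @_;
    (my $file = "$module.pm") =~ s{::}{/}g;

    return 1 if eval "require $module; 1";

    # If the file was found but died compiling, %INC has an entry for
    # it (on the perls of the day), so rethrow the real error:
    die $@ if exists $INC{$file};

    return 0;    # genuinely not installed; caller can skip
}
```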

I think the latter is probably the most reliable as hitting $@ with
regexes could lead to nasty surprises,

F


 Chris
 --
 Chris Dolan, Software Developer, Clotho Advanced Media Inc.
 608-294-7900, fax 294-7025, 1435 E Main St, Madison WI 53703
 vCard: http://www.chrisdolan.net/ChrisDolan.vcf

 Clotho Advanced Media, Inc. - Creators of MediaLandscape Software
 (http://www.media-landscape.com/) and partners in the revolutionary
 Croquet project (http://www.opencroquet.org/)





Re: Test::Builder feature request...

2006-02-08 Thread Fergal Daly
On 2/8/06, Adam Kennedy [EMAIL PROTECTED] wrote:
 Geoffrey Young wrote:
  hi all :)
 
  there's a feature split I'm itching for in Test::Builder, etc - the
  ability to call is() and have it emit TAP free from the confines of
  plan().  not that I don't want to call plan() (or no_plan) but I want to
  do that in a completely separate perl interpreter.  for example, I want
  to do something that looks a bit like this
 
use Test::More tests = 1;
 
print qx!perl t/response.pl!;
 
  where response.pl makes a series of calls to is(), ok(), whatever.
  while this may seem odd it's actually not - I'd like to be able to
  plan() tests within a client *.t script but have the responses come from
  one (or more) requests to any kind of server (httpd, smtp, whatever).
 
  currently in httpd land we can do this by calling plan() and is() from
  within a single server-side perl script, but the limitation there is
  that you can only do that once - if I want to test, say, keepalives I
  can't have a single test script make multiple requests each with their
  own plan() calls without things getting tripped up.
 
  so, I guess my question is whether the plan-is linkage can be broken in
  Test::Builder/Test::Harness/wherever and still keep the bookkeeping in
  tact so that the library behaves the same way for the bulk case.  or
  maybe at least provide some option where calls to is() don't bork out
  because there's no plan (and providing an option to Test::More where it
  doesn't send a plan header).
 
  so, thoughts or ideas?  am I making any sense?
 
  --Geoff

 One of the problems is going to be numbering, surely?

 I've just started myself mucking around with some ideas where I wanted
 to fork off a server process and then test in BOTH halves of a
 connection at the same time. It sounds like something relatively similar
 to what you need to do.

 One of the things I didn't really like about generating fragments is you
 don't really get a chance to count each set, only the total (or worse,
 no plans at all).

 What I think might be a useful approach is being able to merge
 fragments to test output.

 So the lines from the external fragment would be parsed in, checked (in
 plan terms) and then re-emitted into the main test (which would have a
 plan totalling the whole group).

A long time ago, I suggested (and implemented) the idea of nested test
numbers. The idea being that your output looks like

1 # ok
2.1 # ok
2.2 # ok
2.3 # ok
3.1.1.1 # ok
...

you get the idea. The only rule would be that

a.b.c.d

must come before

a.b.c.d+1

in the output. Each block can have a plan if you like; then you just
create a block for each process/thread that will emit test results.
I've a feeling that Test::Harness would barf on the above output but
if you prefix all the numbers with . then it's happy. Of course it
would be good to have a version of TH that also understands these
nested test numbers properly; the . thing just lets you keep backward
compatibility.

So this solves the present problem and it also solves the problem of
it being a pain to have a plan when you have data driven testing
(#tests = #data x #tests per datum and other adjustments and don't
forget those data that get an extra test etc etc). You can also put a
group of tests into a subroutine and just plan for 1 test for each
time the sub is called.

Anyway, I hereby suggest it again but this time without an
implementation. The last time, the biggest part of the implementation
was rewiring Test::Builder to use a blessed ref rather than lexical
variables for its object attributes but now TB is like that by
default, the rest shouldn't be too hard :)

F


Re: bug with Test::Exception? or imacat's autotest box?

2006-01-31 Thread Fergal Daly
I have a fail against a module for exactly the same reason. I
initially blamed Module::Build but they convinced me it was Imacat's
setup. Apparently the output looks like an old version of something or
other.

http://rt.cpan.org/NoAuth/Bug.html?id=15034

has details.

Imacat didn't respond to my email at the time,

F

On 1/31/06, Tyler MacDonald [EMAIL PROTECTED] wrote:
 Take a look at this output:

 http://www.nntp.perl.org/group/perl.cpan.testers/285112

 It looks like this particular system is not noticing that Test::Exception
 requires Sub::Uplevel, then gets confused thinking it was *my* module that
 needed Sub::Uplevel. What's even more concerning is the presence of line
 noise right after the make test FAILED... Any idea what can be going on
 here?

 Thanks,
 Tyler




Re: Flexible testing

2005-12-22 Thread Fergal Daly
Test::Harness doesn't mind if you don't have numbers on your tests
(not sure if this is by design or just by implementation) so this
test script

print "ok a hello\n";
print "not ok b hello\n";
print "1..2\n";

Gives

t/a....FAILED test 2
Failed 1/2 tests, 50.00% okay
Failed Test Stat Wstat Total Fail  Failed  List of Failed
---
t/a.t  21  50.00%  2
Failed 1/1 test scripts, 0.00% okay. 1/2 subtests failed, 50.00% okay.
make: *** [test_dynamic] Error 255

so (assuming this is valid TAP) you might be able to get away with
just overriding a bit of Test::Builder so that it outputs letters
rather than numbers,

F

On 12/21/05, Joe McMahon [EMAIL PROTECTED] wrote:
 Here's a scenario we have here at Yahoo!.

 The code we're testing depends on XML feeds from backend servers, which
 may sometimes be overloaded and not respond. The frontend servers work
 around this, but it would be better if we could fail a test, wait a
 bit, then go back and run it again a few times until either it
 eventually passes or never passes at all.

 Test::Builder does support time travel via current_test() - i.e., you
 reset the test number, and Test::Builder forgets the intervening tests.
 Which would be great, except the test output has already gone off to
 STDOUT/STDERR, so the followup Test::Harness-based code that reads the
 TAP gets confused.

 For example, if I run the following dummy.t

 print <<EOS;
 1..1
 not ok 1 ... failed once
 not ok 1 ... failed twice
 ok 1 ... worked
 EOS

 under prove -v, I get
 dummy....1..1
 not ok 1 ... failed once
 not ok 1 ... failed twice
 ok 1 ... worked
 Test output counter mismatch [test 3]
 Don't know which tests failed: got 1 ok, expected 1
 Failed Test Stat Wstat Total Fail  Failed  List of Failed
 
 ---
 dummy.t1   ??   %  ??
 Failed 1/1 test scripts, 0.00% okay. 0/1 subtests failed, 100.00% okay.

 The question is, is this valid TAP? The TAP document implies but does
 not explicitly state that the numbers must be in strictly ascending
 order. Test::Builder implies that repeated numbers, or reused numbers,
 should be treated as forgotten tests. Is Test::Builder wrong? If so,
 what is the best way to deal with tests that might eventually succeed
 if retried? Should Test::Harness just believe the TAP input and not
 count tests itself?

 Obviously, you can wrap up the actual test in a retry loop/function,
 but this doesn't match up with the simplicity of Test::More and related
 testing methods. It seems like the only way to address this is to
 subclass Test::Builder (or write a new class) that buffers up the test
 output and only outputs it after tests are committed (i.e., I've run
 this N times and am sticking with the final result).

 Or am I stretching TAP too far? Thoughts?

   --- Joe M.




Re: cpan testers and dependencies

2005-10-13 Thread Fergal Daly
On 10/12/05, David Landgren [EMAIL PROTECTED] wrote:
 Fergal Daly wrote:
  http://www.nntp.perl.org/group/perl.cpan.testers/257538
 
  shows a fail for Test-Benchmark but the fail seems to be caused by
  CPANPLUS not installing dependencies:

 Apparently it's a bug in CPANPLUS that stops it from keeping track of
 grand children dependencies. @INC winds up only containing the first
 level of prerequisites. That is, if A prereqs B, and B prereqs C, then
 after having built C and then B, when testing A, only B appears in @INC.
 There's a bug report on this on RT.

 In the meantime, I've given up smoking :(

As much as that sucks I'm not sure it's the cause. In this case it was
a direct prereq not a grandchild. Now that I've got RT open I guess
I'll file a bug too,

F


Re: cpan testers and dependencies

2005-10-13 Thread Fergal Daly
On 10/13/05, James E Keenan [EMAIL PROTECTED] wrote:
 David Landgren wrote:
  Fergal Daly wrote:
 
  http://www.nntp.perl.org/group/perl.cpan.testers/257538
 
  shows a fail for Test-Benchmark but the fail seems to be caused by
  CPANPLUS not installing dependencies:
 
 
  Apparently it's a bug in CPANPLUS that stops it from keeping track of
  grand children dependencies. @INC winds up only containing the first
  level of prerequisites.

 Without disputing the accuracy of that bug report for CPANPLUS
 (https://rt.cpan.org/NoAuth/Bug.html?id=14760), I doubt that's the
 problem here.

 Test-Benchmark-0.04/Makefile.PL includes this code:

  PREREQ_PM => {
      Benchmark => 1,
      'Test::Builder' => 0,
      'Test::Tester' => 0.103,
      'Test::NoWarnings' => 0,
  },

 Benchmark and Test::Builder are core, so there are no missing
 dependencies there.  AFAICT, Test-Tester-0.103 and the latest version of
 Test::NoWarnings do not 'use' or 'require' any non-core modules other
 than the superclasses included in their distributions.  So I don't think
 that multiple levels of prerequisites can be the source of this problem.

 Another reason why I don't think it's a CPANPLUS problem is that I
 encountered the same bug as did cpan.tester imacat in the report F
 cited, namely, that # Looks like you planned 36 tests but only ran 29.
   But I got that FAIL not by using CPANPLUS but by downloading the
 module with LWP::Simple::getstore, then manually trying to build the
 module with tar and make.  As I reported off-list to Fergal, I had an
 older version of Test::Tester (0.07) installed on my box.
 Test-Benchmark-0.04/Makefile.PL should have forced me to upgrade, just
 as it should have forced the cpan tester to upgrade.  But it didn't, for
 reasons unknown.

 When, however, I manually upgraded to Test-Tester-0.103, I re-ran 'make
 test' for Test-Benchmark-0.04 and all tests passed.  I've diffed v0.07
 and v0.103 of Test-Tester, but nothing leaps out and says I'm what's
 missing from the older version.  I'm also puzzled by the fact that the
 block of tests which are failing in Test-Benchmark-0.04/t/test.t,

check_test(
  sub {
    is_faster(-1, 2, $fac10, $fac30, "30 2 times faster than 10");
  },
  {
    actual_ok => 0,
  },
  "30 2 times than 10"
);

 ... doesn't look materially different from the preceding block of tests:

check_tests(
  sub {
    is_faster(-1, $fac20, $fac10, "20 faster than 10 time");
    is_faster(1000, $fac20, $fac10, "20 faster than 10 num");
  },
  [
    {
      actual_ok => 0,
    },
    {
      actual_ok => 0,
    },
  ],
  "20 slower than 10"
);


 So, while I know that getting the latest version of Test-Tester clears
 up the problem with Test-Benchmark-0.04, I'm stumped as to why
 Test-Benchmark's Makefile.PL didn't force me to upgrade to
 Test-Tester-0.103.  And I'm stumped as to why the last block of tests
 fails when very similar blocks pass.

To clarify, there are 2 problems.

1 the plan is out of whack. This is expected, I updated Test::Tester
so that it tests more things and as a result I updated all the modules
that depend on it. I also updated the PREREQs of those modules to
require the latest Test::Tester

2 the last test is failing. I'm not sure why but the nature of
Test::Benchmark is that things can fail occasionally because
benchmarks are not reliable, in fact the test that seems to be failing
is one that compares running factorial of 30 against factorial of 10
and expects them to differ by a factor of 2. This might be a bad test.
If (on your machine) it works consistently with 0.103 and fails
consistently with earlier versions then that's puzzling but I can't
reproduce it.

The Makefile.PL won't force you to upgrade, it should warn you that
you have the wrong version and CPANPLUS should then ensure that the
correct one is installed before continuing. I tried it yesterday with
plain old CPAN.pm and it correctly fetched the latest version. A bug
has been filed,

F


cpan testers and dependencies

2005-10-12 Thread Fergal Daly
http://www.nntp.perl.org/group/perl.cpan.testers/257538

shows a fail for Test-Benchmark but the fail seems to be caused by
CPANPLUS not installing dependencies:

---
[MSG] [Sun Oct  9 02:42:22 2005] Module 'Test::Benchmark' requires
'Test::Tester' version '0.103' to be installed
...
PREREQUISITES:

Here is a list of prerequisites you specified and versions we
managed to load:

Benchmark  1.07
Test::Builder  0.22
Test::NoWarnings  0
Test::Tester  0
---

Anyone else seen this?

F


Re: Spurious CPAN Tester errors from Sep 23rd to present.

2005-10-06 Thread Fergal Daly
On 10/5/05, Michael G Schwern [EMAIL PROTECTED] wrote:
 On Wed, Oct 05, 2005 at 11:22:40PM +1000, Adam Kennedy wrote:
  Please ignore these bad reports. I've contacted Schwern to get that
  specific change to Test::More backed out ASAP. These problem, if you get
  any, should go away shortly.
 
  Given that the repair alternatives are to backout the Test::More change
  and increment the T:B:Tester Test::More version to the newest (or at
  least, NOT the current) or to change all 26 other modules, backing out
  seems the sanest option at this point

 I'm disinclined to back this change out.  See the bug for reasoning.
 http://rt.cpan.org/NoAuth/Bug.html?id=14936

 To sum up:

 * There was ample warning.
 * You shouldn't be relying on screen scraping.

I think you're right but isn't that the basis of TBT? (and the version
that's been ported to Perl 6 incidentally).

 * The fix to Test::Builder::Tester is trivial.

And will result in yet more screen scraping.

 * Rolling back the change is a pain in my ass.

 The bug for this in Test::Builder::Tester is here:
 http://rt.cpan.org/NoAuth/Bug.html?id=14931


 --
 Michael G Schwern [EMAIL PROTECTED] http://www.pobox.com/~schwern
 Stabbing you in the face for your own good.



Re: Test::Builder proposed enhancement

2005-10-06 Thread Fergal Daly
On 10/5/05, Joe McMahon [EMAIL PROTECTED] wrote:
  From Schwern's comment:

 I'll consider putting in some more information into the
 Test::Builder->details so information like the file and line number
 where the test occurred can be gotten without scraping.

 I'd really like to have this as well. Current projects could really use
 it.

For me the correct way to check line number and file is to not check
them at all; they're a moving target and lead to things like the
line_num() function.

A more useful thing to check is the perceived stack depth - that is

(actual stack depth) - $Test::Builder::Level

because this is what determines whether line num and file will be
correct or not. Better still it should be the same for every test.

Of course if you're just trying to output the line num and file then
it's not useful,
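Concretely, a wrapper that keeps the perceived depth correct looks something like this (the wrapper name is mine; the $Test::Builder::Level idiom is the documented one):

```perl
use strict;
use warnings;
use Test::Builder;

my $Test = Test::Builder->new;
$Test->plan(tests => 1);

# A wrapper must bump $Test::Builder::Level by one for the extra stack
# frame it adds, so failure diagnostics report the caller's line.
sub my_ok {
    my ($test, $name) = @_;
    local $Test::Builder::Level = $Test::Builder::Level + 1;
    return $Test->ok($test, $name);
}

my $passed = my_ok(1, 'wrapped test');    # reported against this line
```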

F


Re: <rant>Testing module madness

2005-09-12 Thread Fergal Daly
I think actually you've bought a self-levelling washing machine and
there should be no need for a level but if you value your kitchen and
your clothes you have your own level,

F

On 9/12/05, Nicholas Clark [EMAIL PROTECTED] wrote:
 On Sun, Sep 11, 2005 at 12:35:43PM -0500, Andy Lester wrote:
  Usually, Test::* modules are only used for the test phase.
 
  I really don't understand the idea of "only used for the test phase",
  as if the tests don't matter, or if there are levels of failure.
  Either they install OK on the target system, and you can use them
  with confidence, and they've done their job, or you're going to
  ignore the tests completely and then who needs 'em?
 
  It's like if I'm installing a washing machine, and I don't have a
  level.  I can say "Ah, I only need it for the installation, and it
  looks pretty level, so I don't need the level," or I can say "I'm not
  using this appliance until I've proven to myself that the machine is
  level and won't cause me any problems in the future because of an
  imbalance."
 
 This is a good analogy. It's correct.
 
 But the assumptions behind it cover only one case.
 
 It's as if the requirements for the washing machine say:
 
 To install and use this machine you will need:
 
 * a power supply
 * a water supply
 * drainage
 * a level
 
 
 which is valid if you're both the installer and the user. But if someone
 else helps you install the machine, then you don't actually need the level,
 if they bring theirs and use it for the install.
 
 
 I think that the build_requires/test_requires distinction *is* important, if
 it can be made, as it eases the lives of anyone wishing to package up
 modules, build them from source in one place, and then distribute their
 packages to many other machines, be they OS vendors or sysadmins. The tests
 are run and pass on the build machine, prior to packaging. But the automatic
 dependency system doesn't need to make installation of this module depend on
 installing Test::* onto the production machine. (for the general case)
 
 
 But it's only important if it's easy to make. And I'd much prefer time and
 effort to go into writing better modules, better tests, and better tools,
 than generating heat.
 
 Nicholas Clark



Re: rantTesting module madness

2005-09-11 Thread Fergal Daly
On 9/11/05, Adam Kennedy [EMAIL PROTECTED] wrote:
 And for something as simple as "the tests don't generate warnings", I would
 think "module has excessive dependencies" is a bug in Test::Deep, rather
 than a more general problem.

I'd say it's obvious, necessary and a simple idea, but if you think
it's simple, off you go and golf it (don't forget the stack traces).

I'd actually say that the need to _install_ something to test that you
don't generate warnings is the bug. When it comes to unit tests, the
warning detector is something that you should have to switch _off_.
This should not be taken as a criticism of Test::Simple by the way,
the fact that I can so easily create T::NW is pretty cool,
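For anyone who hasn't seen it, switching the warning detector on is tiny; a minimal sketch of a test script using Test::NoWarnings (the module contributes one extra test that fails if any warning fired during the run):

```perl
use strict;
use warnings;

use Test::More tests => 2;    # 1 real test + 1 contributed by Test::NoWarnings
use Test::NoWarnings;

# the code under test; any warning it emits fails the extra test
is( 1 + 1, 2, "arithmetic" );
```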

F


Re: rantTesting module madness

2005-09-10 Thread Fergal Daly
On 9/10/05, Tels [EMAIL PROTECTED] wrote:
 
 
 Moin,
 
 you are in a maze of Test modules, all looking alike. You are likely being
 beaten by a dependency.
 
 This is a mini-rant on how complex the testing world for Perl modules has
 become. It starts harmless, like you want to install some module. This
 time it was CPAN-Dependency.


What do you mean "has become"? All the modules you encountered are at least 2
years old, but apart from anything else that's a bit like complaining about how
complex the world of computers has become. It used to be that I could just
switch on, press Shift-RunStop, press play on my tape deck, make a cup of
tea and then start playing whatever Commodore 64 game I wanted. It's all so
complex now with ADSL, hard drives, wireless, linux distributions and web
browsers.

There is a genuine problem here though and that's the fact that MakeMaker
and possibly Module::Build don't allow you to specify testing requirements
separately from building requirements and run-time requirements, but most
people don't ever see it thanks to CPAN.pm.
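For what it's worth, ExtUtils::MakeMaker did later grow a separate TEST_REQUIRES key (in 6.63_03 and up), so the split can now be expressed directly; a sketch, where the module names are taken from this thread and the version numbers are placeholders:

```perl
use ExtUtils::MakeMaker;

WriteMakefile(
    NAME          => 'CPAN::Dependency',
    VERSION       => '0.01',
    # needed at run time on every machine the package lands on
    PREREQ_PM     => {},
    # needed only to run the test suite on the build machine
    TEST_REQUIRES => {
        'Test::Deep' => 0,
        'Test::Warn' => 0,
    },
);
```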

You can also work around the problem without a mini-cpan by using CPAN.pm to 
install the module onto a connected box with automatic dependency following 
turned off and noting what else is needed. It's awkward but you're going to 
pay a price if you disconnect from the internet,

By the way, there's a new version of Test-NoWarnings on CPAN that doesn't emit
(harmless) warnings during self-testing,

F


Since for security reasons your Perl box is not connected to the net, you
 fetch it and all dependencies from CPAN and transfer them via sneaker net 
 and USB stick. It includes some gems like:
 
 'Test::Deep' => 0,
 'Test::Warn' => 0,
 
 Huh? Never heard of them, but if it needs them, well, we get 'em.
 Presumable they are only needed for testing the module, but who knows? 
 
 However, as you soon find out, Test::Deep needs these two:
 
 Test::Tester => '0.04',
 Test::NoWarnings => '0.02',
 
 Put on your high-speed sneakers, grumble shortly and fetch them. 
 
 Test::Tester is moderate, it only needs Test::Builder, which we somehow
 already got. And Test::NoWarnings needs only Test::Tester (are you
 confused yet?), so we are clear. Except for one test failure in
 Test::NoWarnings: 
 
 t/none.t: You should load Test::Tester before Test::Builder (or
 anything that loads Test::Builder)
 
 I call that warning ironic. Anyway, now on to Test::Warn (not to be
 confused with Test::NoWarnings). It needs:
 
 Warning: prerequisite Array::Compare 0 not found.
 Warning: prerequisite Sub::Uplevel 0 not found.
 Warning: prerequisite Test::Builder::Tester 0 not found.
 Warning: prerequisite Test::Exception 0 not found. 
 Warning: prerequisite Tree::DAG_Node 0 not found.
 
 Ugh! Test::Builder::Tester? Is there also a Test::Tester::Builder? And
 when does the madness end? At this point I got testy (no pun intended)
 and seriously considered screwing CPAN-Dependency...
 
 One saw me continuing, however, until I found out that Array::Compare
 needs Module::Build, and I don't have this, either - and most of its
 dependencies are missing here, also. Aarg!
 
 I am all for putting often used stuff into extra modules, but I think this
 has gone way too far, especially as the user will go through all this just so
 that Random-Module-0.01 can run its freaky test suite
 
 /rant
 
 Best wishes,
 
 Tels, who was last seen FYAMFC (Fetching Yet Another Module From CPAN) 
 
 - --
 Signed on Sat Sep 10 17:23:26 2005 with key 0x93B84C15.
 Visit my photo gallery at http://bloodgate.com/photos/
 PGP key on http://bloodgate.com/tels.asc or per email.
 
 Duke Nukem Forever will come out before Unreal 2. - George Broussard,
 2001 (http://tinyurl.com/6m8nh)
 



Re: Fwd: [EMAIL PROTECTED]: Re: fixing is_deeply]

2005-07-05 Thread Fergal Daly
On 7/4/05, Andrew Pimlott [EMAIL PROTECTED] wrote:
 On Mon, Jul 04, 2005 at 12:36:29AM +0200, demerphq wrote:
  On 7/3/05, Andrew Pimlott [EMAIL PROTECTED] wrote:
  Would using
  
   my $s = sub { $a->[0] = 1; $_[0]; }
  
   above also be looking at refaddrs?
 
  No. But it wouldnt be symmetric would it?
 
 It's no less symmetric than the first example.  In fact, I would say
 it's symmetric.  I'm calling the same code on each.  What is your
 definition?  I would guess your definition is either circular, or would
 restrict one to an unrealistically small subset of Perl.  In the real
 world, code like the above is perfectly normal.

There's an easy way to see what's acceptable and what's not and what
exactly this level of equality means. Consider the following code
template:

###
# lots of stuff doing anything you like including
# setting global variables

my $value = do {
# these can access any globals etc
  my $a = one_way(); 
  my $b = another_way();
  is_very_deeply($a, $b) || die "they're distinguishable";

  # choose one of $a and $b at random
  rand(2) < 1 ? $a : $b;
};

print test($value);
###

Assuming:

1 - nothing looks at ref addrs (they may compare ref addrs, they can
even store them in variables for comparison later on as long as
nothing depends on the actual value, just its value in comparison to
other ref addrs).
2 - one_way() and another_way() don't have side effects ***

Then test() cannot tell whether it got $a or $b. That is, any attempt
by one_way() or another_way() to communicate with test() will be
caught by is_very_deeply().
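Assumption 1's line between comparing addresses and looking at them can be sketched with Scalar::Util's refaddr (the variable names are illustrative):

```perl
use strict;
use warnings;
use Scalar::Util qw(refaddr);

my ($p, $q) = ([], []);

# allowed: an address used only in comparison with another address
my $are_distinct = refaddr($p) != refaddr($q);    # true

# banned by assumption 1: depending on the actual numeric value,
# e.g. branching on whether the address happens to be even
# my $parity = refaddr($p) % 2;
```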

In this case it's clear that

sub test
{
  $_[0] == $a
}

is not acceptable because only one of $a and $b ever makes it back
into program flow and at that point it's got a new name.

If you think this situation is contrived, it is. The point is to try
to clarify which operations are legal and which aren't and why.

This test isn't supposed to be used in day to day programming. It's
for use in test scripts to make sure that 2 different ways of
constructing something agree to the greatest degree possible, and
in test scripts you should control the environment as much as possible
so contrived isn't really such a problem.

 Again, your form of equality is perfectly good, but it's not privileged
 over any other.  Both your equality and is_deeply are belied by totally
 normal, plausible code.  That's all I wanted to point out.

It's privileged in that it's the strictest possible form of equality
that doesn't require ref addr comparison. That is, if this one says
yes then so will any other.

It's privileged because it's the only test that works in the code
above, if you replace is_very_deeply with any other less strict form
of equality you can easily concoct a one_way(), another_way() and
test() that can detect which of $a or $b was selected.

F

*** If you think this is too restrictive then you can rewrite this
with fork() or threads, an is_very_deeply that is able to see into
both processes and get rid of the rand. You also have to pass all the
side-effected variables into is_very_deeply too.


[ANNOUNCE] Test::Tester 0.102

2005-07-05 Thread Fergal Daly
Fix a problem with the easy way of doing things.

Warn if Test::Tester isn't the first Test::Builder module loaded as
this can cause problems when doing things the easy way.


Re: Fwd: [EMAIL PROTECTED]: Re: fixing is_deeply]

2005-07-05 Thread Fergal Daly
On 7/5/05, Andrew Pimlott [EMAIL PROTECTED] wrote:
 On Tue, Jul 05, 2005 at 01:24:38AM +0100, Fergal Daly wrote:
  There's an easy way to see what's acceptable and what's not and what
  exactly this level of equality means. Consider the following code
  template:
 
  ###
  # lots of stuff doing anything you like including
  # setting global variables
 
  my $value = do {
  # these can access any globals etc
my $a = one_way();
my $b = another_way();
 is_very_deeply($a, $b) || die "they're distinguishable";
 
# choose one of $a and $b at random
 rand(2) < 1 ? $a : $b;
  };
 
  print test($value);
  ###
 
 my $x = [];
 sub one_way { $x }
 sub another_way { [] }
 sub test { $_[0] == $x }
 
 I don't think this breaks your rules, but see below.

You're right, I messed that up by trying to allow the use of globals in
the structure.

If you don't use globals (you can still have lexically scoped globals
in the wherever one_way() and another_way() are defined, as long as
test() has no way of reaching them) then it's true, if you do want to
use globals then you actually have to test

is_very_deeply(
  # all the globals that either of them include in their results;
  # the lists must be identical otherwise the result will obviously be
  # distinguishable
  [$a, \$global1, \$global2, ...],
  [$b, \$global1, \$global2, ...]
);


  Then test() cannot tell whether it got $a or $b. That is, any attempt
  by one_way() or another_way() to communicate with test() will be
  caught by is_very_deeply().
 
  In this case it's clear that
 
  sub test
  {
$_[0] == $a
  }
 
  is not acceptable because only one of $a and $b ever makes it back
  into program flow and at that point it's got a new name.
 
 I don't understand what you're saying here.  As you've written it, $a in
 test is unrelated to the $a and $b in your do statement above, so your
 test will return false in both cases.  Is that all you meant?

Kind of. The point was that you can't even refer to $a (the one that's
in the do block) or $b in this situation.

Here's another way to see why disallowing sub { $_[0] == $a } is not
an artificial restriction. When you're testing you construct $a and $b
and in this situation you could use $a or $b to try to identify which
one you have but testing is not what you really care about. What
matters is if there will be a difference in behaviour when a client
program uses your library. In this situation you don't have $a _and_
$b - only one of them gets created because the client program uses
only one of the data constructors - so sub {$_[0] == $a} either makes
no sense (in that there is no $a) or is always true because $_[0] is
always $a (because there is no $b).

 Anyway, I don't think you're rejecting my test.  If you do reject my
 test, tell me which assumption I violated.

None, I came up with the same example just as I was getting into bed.
I should have thought more about the globals before allowing them,

F


Re: Fwd: [EMAIL PROTECTED]: Re: fixing is_deeply]

2005-07-03 Thread Fergal Daly
On 7/3/05, Andrew Pimlott [EMAIL PROTECTED] wrote:
 On Sat, Jul 02, 2005 at 07:34:47PM +0100, Fergal Daly wrote:
  On 7/2/05, Andrew Pimlott [EMAIL PROTECTED] wrote:
   Citing computer science as the basis of your position is just too
   much.  The computer science answer to the comparison of references is
   that they are equal if and only if they are the same reference.
 
  Actually what Yves wants is known as testing whether 2 structures are
  bisimulable in computer science.
 
 Can you give me a hint as to the difference in a language like Perl?

When they are the same reference they are the same reference (can't
think of any other way of saying it). When they are bisimulable,
they're not the same reference but there is nothing you can do to tell
them apart (except actually looking at refaddrs). Obviously if you
change 1 of them you can tell them apart but if you make the same
change to the other they become indistinguishable.

http://en.wikipedia.org/wiki/Bisimulation

has some unhelpful definitions.

At its simplest it's being able to tell the difference between 2
pointers to distinct empty arrays and 2 pointers to the same empty
array. It's an important difference.
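The difference is easy to show: mutate through one pointer and look through the other (a minimal sketch):

```perl
use strict;
use warnings;

my $one_array  = do { my $e = []; [ $e, $e ] };   # two pointers, one array
my $two_arrays = [ [], [] ];                      # two pointers, two arrays

push @{ $one_array->[0] },  "x";
push @{ $two_arrays->[0] }, "x";

print scalar @{ $one_array->[1] },  "\n";   # 1 - the push shows through
print scalar @{ $two_arrays->[1] }, "\n";   # 0 - the second array untouched
```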

   That this is just one example, and if you try to worm out by saying
   such-and-such operation is not allowed, I'll find you another.
 
  I'm not sure you will. As long as this is_deeply looks inside the
  objects of tied and overloaded refs then the only operations that need
  to be banned are those which look at the address of a reference
  (except to compare it to the address of another reference). If you
  exclude those operations (which are fairly useless outside of
  debugging) then I don't think anything else goes wrong.
 
 What about
 
 my $x = [];
 my $a = [$x, []];
 my $b = [[], $x];
 is_deeply($a, $b);  # passes
 $a->[0][0] = 1;
 $b->[0][0] = 1;
 is_deeply($a, $b);  # fails

The first call is actually a fail. Let's give them names

$x = []; $y = []; $z = []
$a = [$x, $y]
$b = [$z, $x]

now it's easier to see, comparing the 0th elements will pair $x with
$z and comparing the 1st elements will fail because $x has already
been paired with $z so can't be paired with $y. However it raises an
important issue that I hadn't considered.

I (and I think Yves) had always been thinking in terms of 2 structures
that had been produced independently, that is nothing in $a can be
part of $b but that's not realistic. In real test scripts, chunks of
the expected and the received values will be shared. The solution
there is that whenever a ref appears on both sides it can only match
up with itself. So I'd even say that

is_deeply( [$x, $y], [$y, $x]);

should fail.
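Concretely, today's value-only is_deeply is happy with the crossed case; under the rule that a ref appearing on both sides may only match itself, it would fail, since $x would have to pair with $y:

```perl
use strict;
use warnings;
use Test::More tests => 1;

my ($x, $y) = ([], []);

# passes today: each side is just an array of two empty arrays
is_deeply( [ $x, $y ], [ $y, $x ], "crossed identities" );
```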

 I was thinking that the comparison function would be a class method that
 would be called after verifying that two references point to objects in
 the same class.  I think that should be safe enough.

The bug might depend on the data so although the code might be
identical on both sides, the code-path might be different. If you
really want deep testing with custom tests that can apply deep inside
a structure, Test::Deep already does it,

F


Re: Fwd: [EMAIL PROTECTED]: Re: fixing is_deeply]

2005-07-03 Thread Fergal Daly
See my reply to Andrew for the "$a" stuff and see my reply a long
time ago when I also said that is_deeply should stay the same (both
for this case and others).

I'm just defending the idea that such a comparison is self-consistent,
possible and useful,

F

On 7/2/05, Eirik Berg Hanssen [EMAIL PROTECTED] wrote:
 Fergal Daly [EMAIL PROTECTED] writes:
 
  The point about modification is that if 2 things start out equal to
  one another and they are modified in the same way then they should
  still be equal to one-another.
 
 
   That implies that two array refs are not equal:
 
 
 use Test::More 'no_plan';
 $x = [];
 $y = [];
 is_deeply($x, $y); # Equal, but should not be:
 $x .= "";  # after the same modification
 $y .= "";  # of the two things, they are
 is_deeply($x, $y); # not equal!
 __END__
 ok 1
 not ok 2
 # Failed test (- at line 7)
 #  got: 'ARRAY(0x812b468)'
 # expected: 'ARRAY(0x812b54c)'
 1..2
 # Looks like you failed 1 tests of 2.
 
 
   Currently, is_deeply's idea of equivalence does not include that the
 equivalent structures are equivalent after the same modification.  Or
 even that they can be modified the same way:
 
 
 use Test::More 'no_plan';
 $x = \do{ my $t = 1 };
 $y = \1;
 is_deeply($$x, $$y); # Equal, but should not be:
 eval { $$x++ };  # after the same modification
 eval { $$y++ };  # of the two things, they are
 is_deeply($$x, $$y); # not equal!
 __END__
 ok 1
 not ok 2
 # Failed test (- at line 7)
 #  got: '2'
 # expected: '1'
 1..2
 # Looks like you failed 1 tests of 2.
 
 
   Note the similarity between the previous and this:
 
 
 use Test::More 'no_plan';
 $t = 1;
 is_deeply($t, 1); # Equal, but should not be:
 eval q{ $t++ };   # after the same modification
 eval q{ 1++ };# of the two things, they are
 is_deeply($t, 1); # not equal!
 __END__
 ok 1
 not ok 2
 # Failed test (- at line 6)
 #  got: '2'
 # expected: '1'
 1..2
 # Looks like you failed 1 tests of 2.
 
 
   ... and what do you know, I would welcome is_deeply to continue
 behaving like this.  :-)
 
 
 
 Eirik
 --
 You just paddle around there awhile, and I'll explain about these poles ...
 -- Sally Brown
 Is that in Europe?
 -- Joyce Brown



[ANNOUNCE] Test::NoWarnings 0.08

2005-07-03 Thread Fergal Daly
No change to the module but one of the test scripts needed fixing
because it was doing something improper that used to be harmless but
isn't anymore.

F


Re: Fwd: [EMAIL PROTECTED]: Re: fixing is_deeply]

2005-07-03 Thread Fergal Daly
On 7/3/05, Andrew Pimlott [EMAIL PROTECTED] wrote:
 How about
 
 my $a = [];
 my $b = [];
 my $s = sub { $_[0] == $a; }
 is_deeply($a, $b);  # passes
 is_deeply($s->($a), $s->($b));  # fails

Near the top of the last mail I said "there is nothing you can do to
tell them apart (except actually looking at refaddrs)" which is
basically what you've done here. In fairness, when I originally allowed
refaddr comparisons I didn't do it clearly enough. What I said was
"the only operations that need to be banned are those which look at
the address of a reference (except to compare it to the address of
another reference)". What I forgot to say was "another reference
within the same structure".

Again it comes down to my thinking of these 2 structures as 2 separate
things maybe even on 2 separate computers (or produced by 2 different
programs - perhaps before and after a change) and you want to know
whether they have produced answers which are completely
interchangeable. When you think in those terms, comparing addresses
from one structure to those in the other makes no sense.

You can say I'm moving the goal posts but if you allow comparisons
like that, the whole concept is trivially broken

# $a and $b are anything that will pass
is_deeply($a, $b); # pass
is_deeply($a == $a, $b == $a); # fail

no need to introduce subs at all.

I could have banned all refaddr operations but that's not necessary,
unfortunately I was not clear enough on when they are acceptable. If
you think in terms of parallel processes where is_deeply() is
something that can see the data in both processes but nothing else is
shared then it's more clear which refaddr operations make sense and
which don't.

Anyway...

Do you agree that there's a difference between [ [], [] ] and [ ([]) x 2 ]?
Do you agree that it's possible to detect this difference?
Do you agree that it's possible to detect that 2 structures have no
such difference?
Do you agree that this difference can be a source of bugs?
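Detecting the difference takes nothing more than a refaddr-keyed hash; has_shared_elems below is a made-up helper for illustration:

```perl
use strict;
use warnings;
use Scalar::Util qw(refaddr);

# true if any two elements of the array are the same reference
sub has_shared_elems {
    my ($aref) = @_;
    my %seen;
    for my $elem (@$aref) {
        return 1 if $seen{ refaddr($elem) }++;
    }
    return 0;
}

print has_shared_elems( [ [], [] ] ),   "\n";   # 0 - two distinct arrays
print has_shared_elems( [ ([]) x 2 ] ), "\n";   # 1 - one array, referenced twice
```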

  The bug might depend on the data so although the code might be
  identical on both sides, the code-path might be different.
 
 If you're saying the comparison function might be buggy, ok sure.
 However, there are cases where objects in a class can have variant
 representations and still be observationally the same (assuming the user
 respects some abstraction boundary).  In this case, a comparison
 function for that class is appropriate.

Fair enough, I suppose elsewhere you have tested this comparison
function. However is_deeply is unlikely to ever start behaving
differently for different classes; it was brought up way back and is the
reason Test::Deep was written,

F


Re: Fwd: [EMAIL PROTECTED]: Re: fixing is_deeply]

2005-07-02 Thread Fergal Daly
Here's a way of looking at it that doesn't require you to consider
what happens if you alter the structures.

Let's say you have a Person class with a Name an Age and a House class
with Owner and Resident.

Now imagine there are 2 people who have the same name and age but are
different people.

my $p1 = Person->new(Name => "Fergal Daly", Age => 31);
my $p2 = Person->new(Name => "Fergal Daly", Age => 31);

They live in 2 houses but one of them owns both houses.
my $h1 = House->new(Owner => $p1, Resident => $p1);
my $h2 = House->new(Owner => $p1, Resident => $p2);

The houses look identical if you only consider values however Yves
wants to also consider identities. $h1 is owner-occupied $h2 is
presumably being rented.

Here's what CalculateRent could look like:

sub CalculateRent
{
  my $house = shift;
  if ($house->Owner eq $house->Resident)
  {
return 0;
  }
  else
  {
return ValueHouse($house) / 360;
  }
}

so currently

is_deeply($h1, $h2)

passes but

CalculateRent($h1) == CalculateRent($h2)

fails, so there is definitely something unequal about $h1 and $h2.
There is a stronger form of equality that could be tested which would
guarantee that if

is_really_deep($h1, $h2)

passes then

AnyFunction($h1) == AnyFunction($h2)

would also pass (assuming there are no conflicting side effects and
assuming nothing looks directly at reference addresses - apart from
debugging, there's no reason to ever do this anyway),

F

 On 7/2/05, Michael Peters [EMAIL PROTECTED] wrote:
 demerphq wrote:
 
  I wasn't suggesting that this should fail and wouldnt suggest it should 
  either.
 
  I was suggesting that
 
  my $a=[];
  is_deeply([$a,$a],[[],[]])
 
 So doesn't that just come down to
 is_deeply([], [])
 failing?
 
 Can we really say that
 x=y; but x,x != y,y?
 
 If that is the case, then it is completely non-intuitive.
 
 
 --
 Michael Peters
 Developer
 Plus Three, LP
 



Re: Fwd: [EMAIL PROTECTED]: Re: fixing is_deeply]

2005-07-02 Thread Fergal Daly
On 7/2/05, Michael Peters [EMAIL PROTECTED] wrote:
 But if we say
x=y and x=z can we then say that x,x != y,z
 
 If say
$x = [];
$y = [];
$z = [];
is_deeply($x, $y); # passes
is_deeply($x, $z); # passes
is_deeply([$x,$x], [$y, $z]); # fails for some reason
 
 If we broke this out into a formal logical proof, the only way
 that x,x != y,z would hold is if x != y or x != z, or both.

The reason this happens is because the calls to is_deeply are entirely
independent. If is_deeply behaved as Yves wanted _and_ the results of
multiple calls were consistent then actually the second call would
fail because we've already matched $x with $y so we can't match it up
with $z too.

It really comes down to 2 questions

1 Is there a difference between an array containing references to 2
empty arrays and an array containing 2 references to the same empty
array? Quite clearly the answer is yes. If you don't believe it, see
the party example below.

2 Given that they are different, is it ever a significant difference?
For example could it cause a bug if one of them was replaced by the
other? Again clearly the answer is yes.

Yet another analogy - you have 2 black boxes, each with 2 pipes in at
the top and 2 taps at the bottom. You pour wine into the left pipe on
both and label the left tap "wine". You pour beer into the right pipe
on both and label the right tap "beer". Your friends arrive for the
party and start pouring themselves drinks. Some of them get wine, some
get beer but some get a horrible mixture of wine and beer from both of
its taps.

One of the black boxes was

my $x=[];
my $y=[];
[$x,$y];

the other was

my $x=[];
[$x,$x];

F


Re: Fwd: [EMAIL PROTECTED]: Re: fixing is_deeply]

2005-07-02 Thread Fergal Daly
On 7/2/05, Michael Peters [EMAIL PROTECTED] wrote:
 That's what I'm trying to point out. If we follow the reasoning fully
 out then the second call to is_deeply() would need to fail as well as
 the first. Try explaining that to the someone using it.
 
 calls to is_deeply() *need* to behave independently. It should only fail
 if the things being compared aren't structurally equivalent. To have it
 'maintain state' between calls (even if that state is maintained through
 inside of other structures) would just be asking for weird things to happen.

Nobody is suggesting it should maintain state between calls. I was
just commenting on your "proof".

You took the result of 2 calls to is_deeply and used it to deduce the
result of a 3rd call but this is not a valid deduction because the
calls are totally independent. If you want to be able to use earlier
results to deduce later ones then you _would_ have to maintain state.

Again nobody is advocating it but let's say it does maintain state.
Then the contradiction goes away because we'll have

is_deeply($x, $y); # passes
is_deeply($x, $z); # fails because $x is now paired with $y
is_deeply([$x,$x], [$y, $z]); # fails $x != $z

 If we try to have is_deeply() modify the arguments it receives to see if
 they behave the same then we run into all kinds of issues with tied
 structures or objects that are mere interfaces with other, more
 permanent storage.

Nobody suggested modifying things during the test either, it wouldn't
help anything.

The point about modification is that if 2 things start out equal to
one another and they are modified in the same way then they should
still be equal to one-another.

is_deeply doesn't test for this kind of equality but you might think
it does from the name and from the docs. It would be relatively easy
to make it test for this kind of equality but that won't happen.

I can understand you saying that Yves' notion of equality is not what
you consider useful at the moment but you appear to be saying that
it's not a valid or useful notion of equality at all,

F


Re: Fwd: [EMAIL PROTECTED]: Re: fixing is_deeply]

2005-07-02 Thread Fergal Daly
On 7/2/05, Andrew Pimlott [EMAIL PROTECTED] wrote:
 On Sat, Jul 02, 2005 at 08:55:34AM +0200, demerphq wrote:
  The entire basis of computer science is based around the idea that if
  you do the same operation to two items that are the same the end
  result is the same.
 
 Citing computer science as the basis of your position is just too
 much.  The computer science answer to the comparison of references is
 that they are equal if and only if they are the same reference.

Actually what Yves wants is known as testing whether 2 structures are
bisimulable in computer science.

 Otherwise, one will always be able to observe differences between them,
 and in Perl it's particularly easy:
 
 $a = [];
 $b = [];
 is_deeply($a, $b);  # you say should pass
 $a = "$a";  # doing the same operation
 $b = "$b";  # to $a and $b
 is_deeply($a, $b);  # fails
 
 That this is just one example, and if you try to worm out by saying
 such-and-such operation is not allowed, I'll find you another.

I'm not sure you will. As long as this is_deeply looks inside the
objects of tied and overloaded refs then the only operations that need
to be banned are those which look at the address of a reference
(except to compare it to the address of another reference). If you
exclude those operations (which are fairly useless outside of
debugging) then I don't think anything else goes wrong.

 So neither position is right, though either could be useful in a
 particular case.  I agree with you (yves) that considering aliases
 within the objects being compared could be useful.
  I also think that
 any such comparison should allow objects to override the comparison.

You need to be very careful there. Allowing objects on the expected
side to control the comparison is ok but If you allow the data that
you are testing to control it then your test is effectively
meaningless. When testing you have to assume there are bugs in what
you are testing (otherwise why test it?). So you definitely don't want
to hand control over to something which you already suspect to be
buggy because it might say pass for the wrong reasons. For example,
you might think you checked that something was the string "Bob" when
actually it was a buggy object that overloaded string comparison to
always return true and in fact will look like "Alice" if you ever
stringify it.
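A sketch of such a buggy object (the class name is invented):

```perl
use strict;
use warnings;

package Sneaky;
use overload
    'eq' => sub { 1 },          # claims to equal any string
    '""' => sub { "Alice" };    # but stringifies to something else
sub new { bless {}, shift }

package main;
my $got = Sneaky->new;

print "matched Bob\n" if $got eq "Bob";   # the overloaded eq lies, so this fires
print "really: $got\n";                   # really: Alice
```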

 Finally, I think that comparing functions (which started this
 discussion) is insane!

:)

F


Re: Fwd: [EMAIL PROTECTED]: Re: fixing is_deeply]

2005-07-01 Thread Fergal Daly
On 7/1/05, Michael G Schwern [EMAIL PROTECTED] wrote:
 is_deeply() is not about exact equivalence.  It's about making a best fit
 function for the most common uses.  I think most people expect [$a, $a] and
 [$b,$c] to come out equal.

 Test::Deep is for tweaked deep comparisons.

Test::Deep doesn't do what Yves wants. I've been aware of the problem
pretty much since I wrote it, I think the technical term for what Yves
wants is bisimulation. It's not trivial to add to is_deeply or TD but
TD already has a comparison cache with transactions which is part of
the solution so I think it would be possible to add it and make it
configurable,

F


is_deeply and overloading

2005-07-01 Thread Fergal Daly
What's going on with overloading in 0.60? The docs say it will compare
a string-overloaded object with a string but when I run the code below
I get

===
# x = stringy
not ok 1
# Failed test (over.pm at line 8)
Operation `eq': no method found,
left argument in overloaded package over,
right argument has no overloaded magic at
/usr/lib/perl5/5.8.5/Test/More.pm line 1073.
===

If I uncomment the eq overloading I get

===
# x = stringy
not ok 1
# Failed test (over.pm at line 8)
# Structures begin differing at:
#  $got = Does not exist
# $expected = 'stringy'
===

which is even stranger.

F

use strict;
use warnings;

use Test::More 'no_plan';

my $x = over->new();
diag("x = $x");
is_deeply($x, "stringy");

package over;

use overload
#  'eq' => \&get_eq,
  '""' => \&get_s;

sub new { return bless {}, "over" }

sub get_s { return "stringy" }

sub get_eq { return 1 }


Re: is_deeply() and code refs

2005-06-28 Thread Fergal Daly
On 6/27/05, Michael G Schwern [EMAIL PROTECTED] wrote:
 On Mon, Jun 27, 2005 at 11:20:07AM +0100, Fergal Daly wrote:
   I'm perfectly happy to punt this problem over to B::Deparse and let them
   figure it out.  As it stands B::Deparse is the best we can do with code
   refs.  What's the alternative?
 
  I'd argue that currently the best you can do is == because it never
  gives a false positive. Consistent false negatives are harmless.
 
 But it does reduce its utility.  Means you can't use is_deeply() to test
 serialization.  How important is that, I wonder.
 
 It also means code refs are treated differently than all other refs.  In
 all other cases we peek inside the data referred to by the reference.  That's
  why it's a deep check.

Forgetting philosophical arguments about what's the right thing to do,
I think the strongest point against this is that there may be people
out there who expect the current behaviour, they expect 2 different
closures to be unequal, they may even have tests that depend on this
and which have caught legitimate bugs in the past. These tests will no
longer do what they were originally written to do.
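The behaviour those tests rely on is plain reference identity, which errs only in the safe direction (a sketch):

```perl
use strict;
use warnings;

my $code  = sub { 42 };
my $alias = $code;         # same code ref
my $twin  = sub { 42 };    # behaves identically, but a different ref

print $alias == $code ? "equal\n" : "unequal\n";   # equal
print $twin  == $code ? "equal\n" : "unequal\n";   # unequal - a false
                                                   # negative, never a
                                                   # false positive
```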

  If someone really wanted, they could use XS to create subref_eq which
  pokes around inside closures, comparing the code and also comparing
  the value of closed variables but that seems extreme,
 
 At that point you might as well just fix B::Deparse.

True,

F


Re: is_deeply() and code refs

2005-06-28 Thread Fergal Daly
On 6/28/05, Michael G Schwern [EMAIL PROTECTED] wrote:
 On Mon, Jun 27, 2005 at 11:34:58PM +0100, Fergal Daly wrote:
  Forgetting philosophical arguments about what's the right thing to do,
  I think the strongest point against this is that there may be people
  out there who expect the current behaviour
 
 The current behavior is to vomit all over the user's lap.  Some people
 might enjoy this.  Whatever floats your boat. ;P
 
 $ perl -wle 'use Test::More tests = 1;  is_deeply( sub { 42 }, sub { 42 } )'
 1..1
 WHOA!  No type in _deep_check
 This should never happen!  Please contact the author immediately!
 # Looks like your test died before it could output anything.

That's only 2 months old (according to CPAN) before that it would have
just failed or passed (failed in this case). Is it in bleadperl? I'd
be amazed if no one anywhere was using is_deeply with coderefs.

F


Re: is_deeply() and code refs

2005-06-27 Thread Fergal Daly
On 6/26/05, Michael G Schwern [EMAIL PROTECTED] wrote:
  For 3, it looks like B::Deparse doesn't handle the data at all so even
  if the deparsed subs are identical they may behave totally
  differently.
 
 This will simply have to be a caveat.  Fortunately, if B::Deparse ever gets
 this right we'll immediately benefit.

I'm not sure there is a right way to deparse closures (in general).
For example if a variable is shared between 2 closures then it only
makes sense to deparse both of them together. Deparsing them in turn
will lose the sharedness info.

How many people would actually understand that caveat?

I'm ok with "your test will fail even though you'd think it would
pass" (the current situation) but I don't like "your test will pass
even though it should fail". The former means you need to test in a
different way and you know about it straight away, the latter means
you're not sure that your tests really passed at all.

  For 2 B::Deparse works and might be the only way but then again, it
  might be better to just get access to the original string of code
  before it gets compiled,
 
 If they want to compare the original string they should have put it in their
 data structure.  Simp.

That is exactly what I was suggesting,

F


Re: is_deeply() and code refs

2005-06-27 Thread Fergal Daly
On 6/27/05, Michael G Schwern [EMAIL PROTECTED] wrote:
 On Mon, Jun 27, 2005 at 01:41:30AM +0100, Fergal Daly wrote:
  I'm not sure there is a right way to deparse closures (in general).
  For example if a variable is shared between 2 closures then it only
  makes sense to deparse both of them together. Deparsing them in turn
  will lose the sharedness info.
 
 It would be less wrong than what B::Deparse is currently doing:  completely
 ignoring closure data.
 
 
  How many people would actually understand that caveat?
 
 About the same amount that use and understand closures.

Those that use are a subset of those that understand.

 I'm perfectly happy to punt this problem over to B::Deparse and let them
 figure it out.  As it stands B::Deparse is the best we can do with code
 refs.  What's the alternative?

I'd argue that currently the best you can do is == because it never
gives a false positive. Consistent false negatives are harmless.
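
For illustration (not from the original thread), plain == on code refs
compares addresses, so an identical ref matches and two distinct closures
with identical bodies never do:

```perl
use strict;
use warnings;

my $code = sub { 42 };
my $same = $code;        # another name for the very same ref
my $copy = sub { 42 };   # a different ref with an identical body

print $code == $same ? "same ref\n" : "different\n";   # prints "same ref"
print $code == $copy ? "same ref\n" : "different\n";   # prints "different"
```

So == can never claim two genuinely different subs are equal, which is the
"no false positives" property argued for above.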

For 2 B::Deparse works and might be the only way but then again, it
might be better to just get access to the original string of code
before it gets compiled,
  
   If they want to compare the original string they should have put it in 
   their
   data structure.  Simp.
 
  That is exactly what I was suggesting,
 
 So are we just having an agreement?

I'm not sure. My suggestion is, rather than facilitating it behind the
scenes, make it explicit and force it to be done by hand.

It's actually the sort of thing that can be done as an extension to
Test::Deep. So you could do

cmp_deeply(
  [$subref1],
  [deparse_eq($subref2)]
);

I might throw that in tonight, it's almost a one-liner.
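
A sketch of what that almost-one-liner could look like, built on
Test::Deep's code() special comparator and B::Deparse; the name
deparse_eq() and its exact semantics are assumptions here, not a
published API:

```perl
use strict;
use warnings;
use B::Deparse;
use Test::Deep qw(code);

# Hypothetical deparse_eq(): compare a got coderef against an expected
# coderef by their B::Deparse output rather than by address.
sub deparse_eq {
    my $expected = shift;
    my $deparser = B::Deparse->new();
    my $exp_text = $deparser->coderef2text($expected);
    return code(sub {
        my $got = shift;
        return (0, "not a code reference") unless ref $got eq 'CODE';
        my $got_text = $deparser->coderef2text($got);
        return $got_text eq $exp_text
            ? 1
            : (0, "deparsed code differs:\n$got_text\nvs\n$exp_text");
    });
}
```

Note this inherits the caveat discussed above: deparsing ignores closed-over
data, so two closures over different variables can still compare equal.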

If someone really wanted, they could use XS to create subref_eq which
pokes around inside closures, comparing the code and also comparing
the value of closed variables but that seems extreme,
 
F


Re: is_deeply() and code refs

2005-06-26 Thread Fergal Daly
You have 3 situations

1 the refs came from \somefunc
2 the refs come from evaling strings of code
3 the refs are closures and therefore have some data associated with them

For 3, it looks like B::Deparse doesn't handle the data at all so even
if the deparsed subs are identical they may behave totally
differently.

For 1 a simple comparison does the trick.

For 2 B::Deparse works and might be the only way but then again, it
might be better to just get access to the original string of code
before it gets compiled,

F

On 6/26/05, David Landgren [EMAIL PROTECTED] wrote:
 Tels wrote :
  -BEGIN PGP SIGNED MESSAGE-
 
  Moin,
 
  On Sunday 26 June 2005 07:18, Collin Winter wrote:
 [...]
 After tinkering with B::Deparse for a bit, I think this particular
 oddity may just be a result of poorly-written docs (or, more
 probably, poorly-read on my part). The module seems to do the right
 thing in all cases I could come up with (i.e., it only optimises out
 truly-useless constants), so it should be safe to use for this
 particular purpose. With this matter sorted, I've started on the code
 and requisite tests to make the new stuff work.
 
 
  Just for clarification: this means that:
 
is_deeply( sub { 1 + 2; }, sub { 3; } );
 
  should/will pass because the subs compile to the same code?
 
  is_deeply( sub {cos(0) + sqrt(4)}, sub {3} );
 
 does as well, fwiw. So do looping constructs that map to the same thing:
 
  is_deeply(
  sub { my $x=0; $x += $_ for 1..10;$x },
  sub { my $x=0; for( 1..10 ) { $x += $_ }; $x },
  );
 
 
 Michael Schwern wrote at the beginning of this thread:
 
   What it *shouldn't* do is what Test.pm does, namely execute the
   code ref and compare the values returned.  It would just compare
   the references.
 
 Why should it not do that? Is this because of subs with side effects?
 Isn't that more an issue of Doctor, it hurts when I hit my knee with a
 hammer?
 
 David
 



Re: verbose diagnostics

2005-04-28 Thread Fergal Daly
Where is TEST_VERBOSE documented? I see HARNESS_VERBOSE in

http://search.cpan.org/~petdance/Test-Harness-2.48/lib/Test/Harness.pm

F

On 4/28/05, Adrian Howard [EMAIL PROTECTED] wrote:
 
 On 28 Apr 2005, at 14:23, Paul Johnson wrote:
 
  Using Test::More, I would like to send some diagnostics to be seen only
  when the harness is running in verbose mode.
 [snip]
 
 diag "some verbose diagnostics" if $ENV{TEST_VERBOSE};
 
 ?
 
 Adrian
 



Re: TestSimple/More/Builder in JavaScript

2005-04-07 Thread Fergal Daly
Were you aware of JsUnit?

http://www.edwardh.com/jsunit/

I prefer the Test::More style of testing most of the time. I count myself
lucky I've never had to use a testing framework for javascript!

F

On Thu, Apr 07, 2005 at 11:23:59AM -0700, David Wheeler wrote:
 Greetings fellow Perlers,
 
 I'm pleased to announce the first alpha release of my port of  
 TestSimple/More/Builder to JavaScript. You can download it from:
 
   http://www.justatheory.com/downloads/TestBuilder-0.01.tar.gz
 
 Please feel free to give it a try and let me know what you think. You  
 can see what the tests look like by loading the files in the tests/  
 directory into your Web browser. This is my first stab at what I hope  
 becomes a complete port. I could use some feedback/ideas on a number of  
 outstanding issues:
 
 * I have made no decisions as to where to output test results,  
 diagnostics, etc. Currently, they're simply output to document.write().  
 This may well be the best place in the long run, though it might be  
 nice to allow users to configure where output goes. It will also be  
 easy to control the output, since the output functions can easily be  
 replaced in JavaScript. Suggestions welcome.
 
 * I have no idea how to exit execution of tests other than by throwing  
 an exception, which is only supported by JavaScript 1.5, anyway, AFAIK.  
 As a result, skipAll(), BAILOUT(), and skipRest() do not work.
 
 * Skip and Todo tests currently don't work because named blocks (e.g.,  
 SKIP: and TODO:) are lexical in JavaScript. Therefore I cannot get at  
 them from within a function called from within a block (at least not  
 that I can tell). It might be that I need to just pass function  
 references to skip() and todo(), instead. This is a rather different  
 interface than that supported by Test::More, but it might work.  
 Thoughts?
 
 * Currently, one must call Test._ending() to finish running tests. This  
 is because there is no END block to grab on to in JavaScript.  
 Suggestions for how to capture output and append the output of  
 _finish() are welcome. It might work to have the onload event execute  
 it, but then it will have to look for the proper context in which to  
 append it (a pre tag, at this point).
 
 * Anyone have any idea how to get at the line number and file name in a  
 JavaScript? Failures currently aren't too descriptive. As a result, I'm  
 not sure if level() will have any part to play.
 
 * Is there threading in JavaScript?
 
 * I haven't written TestHarness yet.
 
 * I'm using a Module::Build script to build a distribution. I don't  
 think there's a standard for distributing JavaScript libraries, but I  
 think that this works reasonably well. I have all of the documentation  
 in POD, and the script generates HTML and text versions before creating  
 the tarball. The Build.PL script of course is not included in the  
 distribution. I started out trying to write the documentation in JSDoc,  
 but abandoned it for all of the reasons I recounted in my blog last  
 week.
 

 http://www.justatheory.com/computers/programming/javascript/ 
 no_jsdoc_please.html
 
 * Is there a way to dynamically load a JavaScript file? I'd like to use  
 an approach to have TestMore.js and TestSimple.js load TestBuilder.js.  
 I'd also like to use it to implement loadOk() (equivalent to use_ok()  
 and require_ok()).
 
 More details are in the ToDo section of the TestBuilder docs.
 
 Let me know what you think!
 
 Regards,
 
 David


Re: Test::Builder->create

2005-03-10 Thread Fergal Daly
On Tue, Mar 08, 2005 at 10:11:09PM -0500, Michael Graham wrote:
 
   Would this make it possible to run many test scripts (each with its own
   plan) within the same perl process?  'Cos that would be nifty.
 
  Yes.  Though beyond testing testing libraries I don't know why you'd want to
  do that.
 
 Well, all I was going to do was try to shave a few seconds off the
 running time of my test suite (which is now climbing up to the 10 minute
 mark).  I figured I could do the mod_perl thing:  preload all my modules
 and do most of my setup at startup and then require each of the test
 scripts.  Dunno if it will be worth the effort but it was something
 I was going to play with for a couple of hours.

If script startup and module loading really is the culprit you could try the
mod_perl approach.

Load all required modules and then for each script, fork a new perl process
which uses do "testxxx.t" to run each script.

Not sure how windows friendly this is though but that might not matter to
you,
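
A rough sketch of that fork-and-do idea (Unix only; the preloaded module
name is a placeholder for whatever is expensive in your suite):

```perl
use strict;
use warnings;
# use Some::Heavy::Module;   # hypothetical: preload expensive modules here

for my $script (glob "t/*.t") {
    defined(my $pid = fork()) or die "fork failed: $!";
    if ($pid == 0) {
        # child: run the script inside this already-loaded interpreter
        do "./$script";
        die "couldn't run $script: $@" if $@;
        exit 0;
    }
    waitpid($pid, 0);   # run the scripts one at a time
}
```

Each child inherits the compiled modules for free via copy-on-write, which
is where the startup saving comes from.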

F


Re: testing STDOUT and STDERR at the same time with Test::Output

2005-03-08 Thread Fergal Daly
On Tue, Mar 08, 2005 at 04:56:08PM +, Mark Stosberg wrote:
 Hmm...maybe Test::Output just needs a new feature:
 
  # Because sometimes you don't care who said it. 
  stdout_or_stderr_is()

Test::Output allows

my ($stdout, $stderr) = output_from {...};

then you can do your own tests, otherwise you'd have to add

stdout_or_stderr_isnt()
stdout_or_stderr_like()
stdout_or_stderr_unlike()

too,

F


Re: testing STDOUT and STDERR at the same time with Test::Output

2005-03-08 Thread Fergal Daly
On Tue, Mar 08, 2005 at 09:34:17AM -0800, Michael G Schwern wrote:
 There's no equivalent to this?
 
   my $output = `some_program 2>&1`;
 
 Where STDOUT and STDERR are combined into one stream, keeping the order
 correct.

If there is it's not in the docs. They show things like

output_like  ( $coderef, $regex_stdout, $regex_stderr, 'description' );

that is, two regexes matching stdout and stderr separately.

In the case of darcs though, is Perl just testing the output of
commands that have been run via system()? If so they could just add 2>&1
to the command line and then ignore stderr,

F


Re: testing STDOUT and STDERR at the same time with Test::Output

2005-03-08 Thread Fergal Daly
On Tue, Mar 08, 2005 at 10:14:01AM -0800, Michael G Schwern wrote:
 On Tue, Mar 08, 2005 at 05:48:28PM +, Fergal Daly wrote:
  In the case of darcs though, is Perl just testing the output of
  commands that have been run via system()? If so they could just add 2>&1
  to the command line and then ignore stderr,
 
 Darcs runs on non-Unix.  2>&1 is not cross-platform.

I ported something from linux to win not so long ago and it worked. Googling
for

"2>&1" windows

turns up a few batch files that use it and also

http://mailman.lyra.org/pipermail/scintilla-interest/2002-September/001629.html

which claims it's OK for NT and 2000 but not for Win 9x.

Anyway, darcs is already expecting a bash-like shell for configure and GHC
needs mingw. I'm not too familiar with win but it should be possible to
twiddle the command line so that even on win 9x the system() call ends up in
bash rather than command.com but I don't imagine too many people are using
darcs on win 9x,

F


Re: Test::Builder->create

2005-03-08 Thread Fergal Daly
By singleton do you mean that there's only ever 1 Test::Builder::Counter and
it's shared between all the Test::Builder objects? That's necessary in order
to maintain consistent numbering of tests but it doesn't allow for a
second counter to be established to temporarily count subtests (for example
when testing a test module).

One way to allow this is to have a singleton Test::Builder and multiple
Test::Builder::Run objects and Test::Builder can decide which
Test::Builder::Run object is live. This is effectively what Test::Tester
does.

That will lose the default level management. But I don't really understand
the default level thing. The level has to be cumulative across test modules,
incrementing for each subroutine call. What does it mean for Test::D to have
a default level of 2? And what happens if Test::C::do_multiple_tests() calls
into Test::D?

F

On Tue, Mar 08, 2005 at 11:24:59AM -0800, chromatic wrote:
 Hm, not anywhere close.  I think they were on your laptop, the one you
 took apart at the last TPC in San Diego.
 
 I've been writing notes for Test::Builder for Perl 6, though.  It's a
 little something like:
 
 Test::Builder - the primary interface.  Not a singleton anymore.  Test
 modules create new instances and tell it the default level.  This
 removes a global variable.
 
 Test::Builder::Output - a singleton contained within Test::Builder.
 This holds the output filehandles.
 
 Test::Builder::TestRun - a singleton contained within Test::Builder.
 This holds the plan, the counter, and the test details.
 
 Test::Builder::Counter - a singleton contained within
 Test::Builder::TestRun.  This counts the tests.
 
 Test::Builder::TestResults - the parent class for all types of tests
 (pass, fail, todo, skip).  This may be too much detail here, but I like
 the idea of these knowing how to format themselves for output.
 
 By default, everything looks the same to using modules, except that the
 level trickery can go away, mostly.  It should be easy to swap out
 TestRun and Counter and TestResults within the current Test::Builder
 object temporarily, in the case of wanting to run a lot of subtests
 without increasing the number, for example.
 
 Just some ideas,
 -- c


Re: Test::Builder->create

2005-03-08 Thread Fergal Daly
On Tue, Mar 08, 2005 at 12:50:29PM -0800, chromatic wrote:
 On Tue, 2005-03-08 at 20:40 +, Fergal Daly wrote:
 
  By singleton do you mean that there's only ever 1 Test::Builder::Counter and
  it's shared between all the Test::Builder objects?
 
 By default there's only one.  You can create others, if necessary.

  One way to allow this is to have a singleton Test::Builder and multiple
  Test::Builder::Run objects and Test::Builder can decide which
   Test::Builder::Run object is live. This is effectively what Test::Tester
  does.
 
 That's what I have in mind.

Cool, so actually T::B::Counter and T::B::Run are not singletons and
Test::Builder is.

  That will lose the default level management. But I don't really understand
  the default level thing. The level has to be cumulative across test modules,
  incrementing for each subroutine call. What does it mean for Test::D to have
  a default level of 2? And what happens if Test::C::do_multiple_tests() calls
  into Test::D?
 
 The level is the number of entries in the call stack that a test module
 puts between where users use the test function and where Test::Builder
 receives the results.  It tells Test::Builder how many frames to discard
 when reporting failure files and lines.

I know what it is, I just don't understand what it means for a module to
have a default level. So:

1 - What does it mean for Test::D to have a default level of 2? Does it mean
that all calls on the T::B object have to originate from exactly 2 levels of
calls into Test::D? If so, what can I do if some are from depth 3 or 1?

2 - What happens if Test::C::do_multiple_tests() calls Test::D::some_test()?
The correct level is now
Test::C's default level + Test::D's default level - 1

(or + 1 depending on the exact definition of level). Unless Test::C is
testing Test::D in which case... also Test::Builder may not even know how to
find out what Test::C's default level is (it can't necessarily see Test::C's
Test::Builder object and in fact Test::C may have created more than 1).

F


Re: Test::Builder->create

2005-03-08 Thread Fergal Daly
On Tue, Mar 08, 2005 at 01:42:49PM -0800, Michael G Schwern wrote:
 On Tue, Mar 08, 2005 at 09:36:24PM +, Fergal Daly wrote:
  Cool, so actually T::B::Counter and T::B::Run are not singletons and
  Test::Builder is.
 
 No, other way around.  When a TB instance needs a TB::Counter it just says
 $tb->counter which, normally, returns a singleton but you can alter counter()
 so it returns something else.

A singleton is a class that only ever has 1 instance in existence. If there
can be multiple instances of TB::Counter or TB::Run then by definition
neither of them are singletons. Conversely if there is only ever 1 instance
of the Test::Builder class, as chromatic said in his reply, then it is a
singleton.

  If so, what can I do if some are from depth 3 or 1?
 
 You temporarily change the level inside the scope of that function.

So I'll have to call local on the default_level field of the TB object?

  2 - What happens if Test::C::do_multiple_tests() calls Test::D::some_test()?
  The correct level is now
  Test::C's default level + Test::D's default level - 1
 
 This is why the idiom is currently:
 
   local $Level = $Level + 2;
 
 or whatever.  You add to the existing level rather than just setting it anew.

But without a global variable there is no existing level. Here's a simpler
example of the problem. Currently I can do

package Test::AllArray;

use Test::More;

sub is_all
{
my $array = shift;
my $exp = shift;
my $name = shift;
local $Test::Builder::Level = $Test::Builder::Level + 1;

for (my $i=0; $i < @$array; $i++)
{
is($array[$i], $exp, "$name - $i");
}
}

How can I do that without a global variable?

F


Re: Test::Builder->create

2005-03-08 Thread Fergal Daly
On Tue, Mar 08, 2005 at 05:05:02PM -0500, David Golden wrote:
 doing what I want before I write it.  I think the approach works, 
 but all this mucking about in the internals of Test::Builder feels like 
 voodoo.

All the voodoo has already been done for Test::Tester.

my @results = run_tests(
  sub {
is_this(..);
is_that(..);
is_theother(..)
  },
);

for my $res (@results)
{
  # $res contains a hash of pass/fail, diagnostics etc etc
}

F


Re: Test::Builder->create

2005-03-08 Thread Fergal Daly
On Tue, Mar 08, 2005 at 03:05:16PM -0800, Michael G Schwern wrote:
  A singleton is a class that only ever has 1 instance in existence. If there
  can be multiple instances of TB::Counter or TB::Run then by definition
  neither of them are singletons. Conversely if there is only ever 1 instance
  of the Test::Builder class, as chromatic said in his reply, then it is a
  singleton.
 
 I never worried too much about definitions.
 
 Point is, normally there's one and only one TB::Counter object floating
 around but if you really need to you can make another.

That's what I had hoped for, just those pesky definitions made me think it
was otherwise.

  How can I do that without a global variable?
 
 An excellent question.  It may have to be:
 
   local $tb->{Level} = $tb->{Level} + 1;
 
 because local doesn't work on accessors.  However localizing hash keys is
 its own special layer of hell.

Except that involves finding Test::More's $tb from the outside, which is
probably held in a lexically scoped variable (for good reason). Even if you
changed this, going back to the original more complicated problem: imagine
Test::AllArray has its own $tb with its own default level. How does
Test::More's $tb find this when it needs to calculate the current total
level?

I think this fundamentally requires a global variable.

The goal is to find out when did we leave the script and enter Test land. We
don't necessarily have to do it by manually tracking levels.

If every externally callable test function does

local $Entered = $Entered || current_level()

at the start. This sounds painful but it can all be handled automagically -
basically have something wrap all the subs in the package in something like

sub
{
  local $Entered = $Entered || current_level();
  $orig(@_);
}

then any $tb can easily find where we entered Test land.

An alternative would be to let TB default to assuming that we're only 1 level
away but if it finds something in $Entered it will use that instead. Then
you could leave most of the functions unwrapped and just wrap functions that
call other functions. Plus, accidentally wrapping a function that didn't
need it would do no harm.
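
The auto-wrapping idea could be sketched like this; wrap_package_subs() and
current_level() are hypothetical names, and a real version would also need
to preserve calling context and prototypes:

```perl
use strict;
use warnings;

our $Entered;

# How deep in the call stack we currently are.
sub current_level {
    my $level = 0;
    $level++ while defined caller($level);
    return $level;
}

# Replace every sub in $pkg with a wrapper that records, in $Entered,
# the level at which we first crossed from script code into Test land.
sub wrap_package_subs {
    my $pkg = shift;
    no strict 'refs';
    for my $name (keys %{"${pkg}::"}) {
        my $orig = *{"${pkg}::$name"}{CODE} or next;   # skip non-subs
        no warnings 'redefine';
        *{"${pkg}::$name"} = sub {
            local $Entered = $Entered || current_level();
            $orig->(@_);
        };
    }
}
```

Because local() restores $Entered on the way out, only the outermost
test-land call sets it, which is exactly the entry point we want.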

If a test testing module wanted to temporarily pretend that we're entering
Test land for the first time it can just do local $Entered.

Recording the entry level directly might even be better than recording
offsets and calculating the level. I can think of strange situations
involving passing subrefs to other modules where you basically have no idea
by how much you should increase $Level. I can't think of any time I'd
want to do that in a test module but who knows what someone else would want
to do,

F


Re: How can I suspend/control a Test::Builder testing session?

2005-02-26 Thread Fergal Daly
Look at Test::Tester, it hijacks the Test::Builder object and replaces it
with another that sometimes delegates to the real Test::Builder object and
other times delegates to a custom Test::Builder object that just collects
results without outputting anything or affecting the real test results.

You can probably do what you need just with Test::Tester::run_tests. This
returns an array of test results including pass/fail, diagnostics and
all the other stuff that's mentioned under details in the Test::Builder
docs, so for example

my @results = run_tests(
    sub {
        for my $i (1..1000)
        {
            # these test calls get collected for later analysis
            ok(test_for_i($i), "i=$i"); # name is "i=$i"
        }
    }
);

my @bad = grep {$_->{ok} == 0} @results;

if (@bad)
{
    my $values = join("\n", map {$_->{name}} @bad);
    # these calls go to the real Test::Builder
    fail();
    diag(@bad . " tests failed, values were\n$values\n");
}

F

On Sat, Feb 26, 2005 at 06:00:02PM -0500, Tom Moertel wrote:
 Is there any good way to temporarily suspend or, even better, take
 control of a Test::Builder testing session?
 
 The reason I ask is because I am working on a specification-based
 testing system for Perl that utilizes random-testing methods in which
 a single property check can involve thousands of random test cases.
 It would be great if testers could take advantage of Test::More
 and other Test::Builder-based testing modules in their property
 declarations, but right now that is tricky.
 
 The problem is that when somebody uses a function such as like() or
 is_deeply() within a property, it ends up getting called thousands of
 times when the property is checked.  Test::Builder naturally thinks
 each one of these calls represents a complete test case, and it
 generates thousands of ok outputs and so on.  Oops.
 
 What I want is for each property check to be seen as a single test.
 
 How I do do this now is somewhat hackish.  For the duration of a
 property check, I redefine some Test::Builder internals like so:
 
 sub check_property {
 no warnings 'redefine';
 my $property = shift;
 my $diags = [];
 local *Test::Builder::ok   = sub { $_[1] ? 1 : 0 };
 local *Test::Builder::diag = sub { shift; push @$diags, @_; 0 };
 return ( $diags,
  Test::LectroTest::TestRunner-new(@_)-run($property) );
 }
 
 The idea is to sever the part of Test::Builder that reports back to
 the caller (which I want) from the part that reports back to the test
 harness (which I do not want).  It seems to work fine, but mucking
 around with another module's internals carries with it an element of
 risk that I don't like, and I would rather have a cleaner option.
 
 Is there a cleaner option?  If not, can/should we create one?
 
 Cheers,
 Tom
 
 
 For more background, see the following:
 
 http://community.moertel.com/LectroTest
 http://search.cpan.org/dist/Test-LectroTest/lib/Test/LectroTest/Compat.pm


Re: TAP docs

2005-02-21 Thread Fergal Daly
On Mon, Feb 21, 2005 at 08:31:43PM -0600, Andy Lester wrote:
 was expected.  I propose to fix this by allowing, in place of a plan at
 the beginning, something like the line ends with plan.
 
 In effect, finding
 
   ok 1
 
 as the first line means ends with plan.

I think that's not mentioned anywhere; the current version doesn't say
numbered tests mean a plan is required. So how about:
1..10

means you plan on running 10 tests. This is a safeguard in case your test
file dies silently in the middle of its run.

If your tests are numbered then a plan is mandatory. In certain instances a
test file may not know how many test points it will ultimately be running. 
In this case the plan can be the last non-diagnostic line in the output.

If your tests are not numbered the plan is optional but if there is a plan
before the test points it must be the first non-diagnostic line output by
the test file.
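
The proposed rule could be checked with something like this minimal sketch
(plan_ok() is a hypothetical helper, not part of any TAP parser):

```perl
use strict;
use warnings;

# Accept a plan line ("1..N") either as the first or the last
# non-diagnostic line of the TAP stream, per the proposal above.
sub plan_ok {
    my @lines = grep { !/^#/ && /\S/ } @_;   # drop diagnostics and blanks
    return 0 unless @lines;
    for my $candidate (@lines[0, -1]) {
        next unless $candidate =~ /^1\.\.(\d+)$/;
        my $planned = $1;
        my $ran = grep { /^(not )?ok\b/ } @lines;
        return $ran == $planned;
    }
    return 0;   # no plan at either end
}
```

A stream that dies silently mid-run then fails either way: with a leading
plan the counts disagree, with a trailing plan the plan line never appears.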

F


Re: TAP Version (was: Re: Test comments)

2005-02-18 Thread Fergal Daly
I was thinking of knocking together Test::AnnounceVersion.

use Test::AnnounceVersion qw(A::List Of::Modules);

which results in

# using version 1.5 of A::List
# using version 0.1 of Of::Modules

supplying no import args would make it output $VERSION from every package it
can find.

If you don't want to make it mandatory (in case it's not installed
somewhere) just do

use Test::AnnounceVersion qw(A::List Of::Modules);

Or something similar could be added to Test::More or Builder,
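
The proposed module could be sketched roughly as follows (Test::AnnounceVersion
does not exist; everything here is the suggestion above made concrete):

```perl
package Test::AnnounceVersion;
use strict;
use warnings;
use Test::Builder;

# On import, load each named module and announce its $VERSION as a
# TAP diagnostic line.
sub import {
    my ($class, @modules) = @_;
    my $tb = Test::Builder->new;
    for my $mod (@modules) {
        (my $file = "$mod.pm") =~ s{::}{/}g;
        eval { require $file };
        no strict 'refs';
        my $version = ${"${mod}::VERSION"};
        $version = 'unknown' unless defined $version;
        $tb->diag("using version $version of $mod");
    }
}

1;
```

Diagnostic lines start with # so this adds no test points and cannot upset
the plan.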

F

On Fri, Feb 18, 2005 at 10:01:31AM -0800, chromatic wrote:
 On Fri, 2005-02-18 at 09:25 -0500, Geoffrey Young wrote:
 
  yeah, I'll second this, at least so far as adding a version component to
  Test::More goes (which is different than adding a TAP version, which I don't
  have an opinion on:).  Test.pm currently prints out
  
# Using Test.pm version 1.24
  
  and Apache-Test follows suit with
  
# Using Apache/Test.pm version 1.16
  
  and I always wished that Test::More and friends would follow suit.  for
  instance, it might help if someone reports test failures and they're using a
  version of is_deeply() that has a known issue on older versions.
 
 Hm, that does seem valuable.  Should all test modules report their
 versions by default, though?  Should they respect a verbose flag
 somewhere instead?
 
 Test::Builder could report it for them automatically, if the answer to
 at least one question is yes.
 
 -- c


[ANNOUNCE] Test-Tester-0.101

2005-02-16 Thread Fergal Daly
New version available. Improvements are

- colour. Controlled via $TESTTESTERCOLOUR environment variable (also takes
American spelling :)

- surround the diag string with '' so that even without colour, trailing
spaces are easier to spot

- added an option to help with non-english charsets. All space and chars
outside 33-126 range can be shown as \{nnn}. Useful if you're dealing with
homographs or you're putting TABs in your diagnostic output (although that's
probably a bad idea)

- you no longer have to add the trailing backslash for diag output, this
behaviour is more consistent with Test::Builder's behaviour.

Fergal


Re: eq_array testing values prematurely...

2005-02-07 Thread Fergal Daly
It seems to me that that would just hide other problems.  This function is
for comparing 2 arrays and if the things passed in are not actually
arrays then it's quite right to issue a warning.

Why is this test passing undef into both arguments of eq_array?

Fergal


On Tue, Feb 08, 2005 at 05:02:50PM +1100, [EMAIL PROTECTED] wrote:
 I've written some coverage tests for Ima::DBI as part of Phalanx, but I 
 get a warning under -W
 
 prompt> HARNESS_PERL_SWITCHES=-W make test
 
 And got these warnings
 
 [EMAIL PROTECTED] Ima-DBI-0.33]$ HARNESS_PERL_SWITCHES=-W make test
 PERL_DL_NONLAZY=1 /usr/bin/perl -MExtUtils::Command::MM -e 
 test_harness(0, 'blib/lib', 'blib/arch') t/*.t
 t/DBIok 3/0Use of uninitialized value in string eq at 
 /usr/lib/perl5/5.8.0/Test/More.pm line 1013.
 Use of uninitialized value in string eq at 
 /usr/lib/perl5/5.8.0/Test/More.pm line 1013.
 t/DBIok
 All tests successful.
 Files=1, Tests=54,  0 wallclock secs ( 0.32 cusr +  0.03 csys =  0.35 CPU)
 
 Investigating further, that line in Test::More is
 
 sub eq_array  {
my($a1, $a2) = @_;
 
return 1 if $a1 eq $a2;
 ...
 
 Now the more recent versions of eq_array (you can see I'm using 5.8.0) 
 try to protect it a bit from non-array references, but even running the 
 latest version of Test::More::eq_array (and _eq_array) still gives this 
 warning.
 
 So I changed it to this
 
 sub eq_array  {
my($a1, $a2) = @_;
 
if (defined $a1 and defined $a2) {
  return 1 if $a1 eq $a2;
}
 
 And we get
 
 [EMAIL PROTECTED] Ima-DBI-0.33]$ HARNESS_PERL_SWITCHES=-W make test
 PERL_DL_NONLAZY=1 /usr/bin/perl -MExtUtils::Command::MM -e 
 test_harness(0, 'blib/lib', 'blib/arch') t/*.t
 t/DBIok
 All tests successful.
 Files=1, Tests=54,  1 wallclock secs ( 0.33 cusr +  0.02 csys =  0.35 CPU)
 
 I'm guessing this is the right forum to post this too - unless I should 
 go right ahead and file with RT...?
 
 
 -- 
 Leif Eriksen
 Snr Developer
 http://www.hpa.com.au/
 phone: +61 3 9217 5545
 email: [EMAIL PROTECTED]


Re: is_deeply hangs

2005-01-23 Thread Fergal Daly
What version of Test::More? Only the most recent versions can handle
circular data structures, so I'd guess you have a circular data structure
and an older version,

Fergal


On Sun, Jan 23, 2005 at 09:22:19AM -0800, Ovid wrote:
 (Aargh!  This time I'll send this from the correct email address.)
 
 Hi all,
 
 I didn't find that this is a known issue reported somewhere so I
 thought I would post it here.
 
 This program hangs when it hits is_deeply.  I eventually get an out of
 memory error.
 
   #!/usr/local/bin/perl
   use AI::Prolog::Parser;
   use AI::Prolog::Term;
   use AI::Prolog::Engine;
   use Test::More qw/no_plan/;
   use Test::Differences;
   use Clone qw/clone/;
 
    my $database = AI::Prolog::Parser->consult(<<'END_PROLOG');
   append([], X, X).
   append([W|X],Y,[W|Z]) :- append(X,Y,Z).
   END_PROLOG
 
    my $parser = AI::Prolog::Parser->new("append([a],[b,c,d],Z).");
    my $query  = AI::Prolog::Term->new($parser);
    my $engine = AI::Prolog::Engine->new($query,$database);
   my $cloned_db = clone($database);
   eq_or_diff $cloned_db, $database, 'eq_or_diff says they are the
 same';
   is_deeply $cloned_db, $database, '... but this hangs';
 
 AI::Prolog is not yet on the CPAN, so if someone wants to test this,
 they can grab it from
 http://users.easystreet.com/ovid/downloads/AI-Prolog-0.01.tar.gz
 
 I didn't do too much research into this as eq_or_diff() solves my
 problem, but we appear to have an infinite loop in Test::More::eq_hash.
 
 Cheers,
 Ovid
 
 
 =
 If this message is a response to a question on a mailing list, please send
 follow up questions to the list.
 
 Web Programming with Perl -- http://users.easystreet.com/ovid/cgi_course/


Re: is_deeply hangs

2005-01-23 Thread Fergal Daly
Oops, actually the latest is_deeply doesn't correctly handle _all_ circular
structures. If the circle includes a hash or an array it will work but if it
only includes scalar references then it will recurse indefinitely. I've
filed a bug report on rt. Test case below

use strict;
use warnings;

use Test::More 'no_plan';

my ($r, $s);

$r = \$r;
$s = \$s;

is_deeply($s, $r);

Fergal

On Sun, Jan 23, 2005 at 07:13:13PM +, Fergal Daly wrote:
 What version of Test::More? Only the most recent versions can handle
 circular data structures, so I'd guess you have a circular data structure
 and an older version,
 
 Fergal


Re: is_deeply hangs

2005-01-23 Thread Fergal Daly
Final reply to myself :-).

Attached is a patch that fixes this test case without breaking any others so
I think it's OK. It basically centralises the circular ref checking into
_deep_check and then reroutes eq_array and eq_hash into that. This ensures
that all types of refs are checked for circularitinessness.
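The idea behind the patch can be sketched in miniature. This is a simplified stand-in for Test::More's real _deep_check (names invented for illustration): record each got/expected ref pairing before descending, and on revisiting a ref just check it is paired with the same partner instead of recursing forever.

```perl
use strict;
use warnings;

my %seen;

sub deep_check {
    my ($e1, $e2) = @_;
    if (ref $e1 and ref $e2) {
        # Already descended through $e1? Then stop: they match if $e1 was
        # previously paired with this same $e2.
        return $seen{$e1} eq $e2 if $seen{$e1};
        $seen{$e1} = $e2;
        return deep_check($$e1, $$e2) if ref $e1 eq 'SCALAR' or ref $e1 eq 'REF';
        # ... arrays, hashes etc. elided in this sketch
    }
    return !(defined $e1 xor defined $e2)
        && (!defined $e1 || $e1 eq $e2);
}

my ($r, $s);
$r = \$r;
$s = \$s;
print deep_check($s, $r) ? "terminates\n" : "different\n";    # terminates
```

With the guard centralised like this, the self-referential scalars above terminate after a single level of recursion instead of running until memory is exhausted.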

Fergal


On Sun, Jan 23, 2005 at 08:04:23PM +, Fergal Daly wrote:
 Oops, actually the latest is_deeply doesn't correctly handle _all_ circular
 structures. If the circle includes a hash or an array it will work but if it
 only includes scalar references then it will recurse indefinitely. I've
 filed a bug report on rt. Test case below
 
 use strict;
 use warnings;
 
 use Test::More 'no_plan';
 
 my ($r, $s);
 
 $r = \$r;
 $s = \$s;
 
 is_deeply($s, $r);
 
 Fergal
 
 On Sun, Jan 23, 2005 at 07:13:13PM +, Fergal Daly wrote:
  What version of Test::More? Only the most recent versions can handle
  circular data structures, so I'd guess you have a circular data structure
  and an older version,
  
  Fergal
--- ./t/circular_data.t.orig2005-01-23 20:10:00.085678928 +
+++ ./t/circular_data.t 2005-01-23 20:12:29.096025912 +
@@ -13,7 +13,7 @@
 }
 
 use strict;
-use Test::More tests => 5;
+use Test::More tests => 6;
 
 my $a1 = [ 1, 2, 3 ];
 push @$a1, $a1;
@@ -31,3 +31,10 @@
 
 is_deeply $h1, $h2;
 ok( eq_hash  ($h1, $h2) );
+
+my ($r, $s);
+
+$r = \$r;
+$s = \$s;
+
+ok( eq_array ([$s], [$r]) );
--- ./lib/Test/More.pm.orig 2005-01-23 20:09:45.425907552 +
+++ ./lib/Test/More.pm  2005-01-23 20:18:05.675858000 +
@@ -1112,7 +1112,7 @@
 sub eq_array {
 local @Data_Stack;
 local %Refs_Seen;
-_eq_array(@_);
+_deep_check(@_);
 }
 
 sub _eq_array  {
@@ -1125,13 +1125,6 @@
 
 return 1 if $a1 eq $a2;
 
-if($Refs_Seen{$a1}) {
-return $Refs_Seen{$a1} eq $a2;
-}
-else {
-$Refs_Seen{$a1} = $a2;
-}
-
 my $ok = 1;
 my $max = $#$a1 > $#$a2 ? $#$a1 : $#$a2;
 for (0..$max) {
@@ -1171,6 +1164,13 @@
 $ok = 1;
 }
 else {
+if( $Refs_Seen{$e1} ) {
+return $Refs_Seen{$e1} eq $e2;
+}
+else {
+$Refs_Seen{$e1} = $e2;
+}
+
 my $type = _type($e1);
 $type = '' unless _type($e2) eq $type;
 
@@ -1213,7 +1213,7 @@
 sub eq_hash {
 local @Data_Stack;
 local %Refs_Seen;
-return _eq_hash(@_);
+return _deep_check(@_);
 }
 
 sub _eq_hash {
@@ -1226,13 +1226,6 @@
 
 return 1 if $a1 eq $a2;
 
-if( $Refs_Seen{$a1} ) {
-return $Refs_Seen{$a1} eq $a2;
-}
-else {
-$Refs_Seen{$a1} = $a2;
-}
-
 my $ok = 1;
 my $bigger = keys %$a1 > keys %$a2 ? $a1 : $a2;
 foreach my $k (keys %$bigger) {


Re: [ANNOUNCE] Test::Simple 0.48_02

2004-07-19 Thread Fergal Daly
On Mon, Jul 19, 2004 at 04:25:35PM +0100, Adrian Howard wrote:
 Which causes anything testing test diagnostic output with 
 Test::Builder::Tester to fall over. Test::Class, Test::Exception  
 Test::Block's test suites now all fail.
 
 Pooh sticks.
 
 My temptation is to say the new behaviour is the right one and patch 
 T::B::T and friends?

Don't know if anyone besides me uses it but Test::Tester is unaffected by
this. It gathers its information by overriding the default Test::Builder
object and so it's protected from this sort of change,

F


[ANNOUNCE] Test::Tester 0.09

2004-07-12 Thread Fergal Daly
Test::Tester is a(nother) module to allow you to test your test modules,
hopefully with the minimum of effort and maximum flexibility. With version
0.09, the final bit of interface awkwardness is gone and test scripts can
now look like this

use Test::Tester; # load me first

use Test::MyNewModule qw( is_myequal ); # the test subject

check_test(
  sub { is_myequal('this', 'that', 'this vs that') },
  {
    ok => 0, # expect it to fail
    name => 'this vs that', # optional
    diag => "'this' is not equal to 'that'" # optional
  }
); 
   
It plays nicely with other Test::Builder based modules so if you need to  
analyse the test results in a more sophisticated way (maybe your test
outputs complicated diagnostics), you can get direct access to the test
results and use Test::More::like() for example, to check that it's ok.  

In fact with this version you can even use functions from Test::MyNewModule
to test another function from Test::MyNewModule (this is not necessarily a
good thing to do!).

Also new in this edition is a nice way to make sure you've correctly set
$Test::Builder::Level,
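As a reminder of what $Test::Builder::Level is for, here's a minimal sketch (the wrapper name is invented; this is not Test::Tester's checking API): every extra stack frame between the .t file and the underlying ok() must be matched by bumping the level, so a failure is reported at the caller's line rather than inside the wrapper.

```perl
use strict;
use warnings;
use Test::More tests => 1;

# Hypothetical test wrapper. The local bump of $Test::Builder::Level tells
# Test::Builder to look one frame further up the stack when reporting the
# file and line of a failure.
sub is_myequal {
    my ($got, $expected, $name) = @_;
    local $Test::Builder::Level = $Test::Builder::Level + 1;
    ok($got eq $expected, $name);
}

is_myequal('this', 'this', 'this matches itself');
```

Forget the bump and a failing is_myequal() reports the ok() line inside the wrapper's own file, which is exactly the bogus-line-number problem discussed above.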
 
F


Re: [IDEA] Drop into the debugger on failure

2004-07-09 Thread Fergal Daly
On Thu, Jul 08, 2004 at 08:40:54PM -0400, Michael G Schwern wrote:
 On Thu, Jul 08, 2004 at 11:53:52PM +0100, Fergal Daly wrote:
  The main point was that the OO way works right now,
 
 So does event hooks.  Hooks are things you can hang stuff off of, but
 they're also used to snare things that might not want to be snared.
 
 In other words...
 
   use Test::Builder;
   use Hook::LexWrap;
 
   wrap 'Test::Builder::ok', 
   post => sub { 
   my $tb = shift;
   my $ok = $_[-1];
 
   enter_the_debugger if !$ok;
   };
 
 Or something like that.

Is there a LexWrap equivalent of

use Test::Builder::Vapour::Override;

sub diag {
  my ($self, $diag) = @_;
  $self->SUPER::diag(colour_me($diag));
}

? It seems that LexWrap wrappers can't do this as they can't change the
args.
 
If you want to make it all possible, via events and hooks then I think it
does require putting callbacks into or around all the relevant methods -
which is what I thought you were proposing in the first mail. Not difficult
but not trivial either.

While you're here, any chance you'd consider converting Test::Builder to be
hash based rather than class based with a way of creating new instances.
Obviously there should be only 1 real Test::Builder object but allowing
other instances would make test-module testing easier and test nesting
(suites of suites etc) possible,

F



Re: [IDEA] Drop into the debugger on failure

2004-07-09 Thread Fergal Daly
On Fri, Jul 09, 2004 at 11:00:28AM -0400, Michael G Schwern wrote:
 Never underestimate The Damian.

For a moment I though I had.

 #!/usr/bin/perl -w
 
 use Hook::LexWrap;
 use Test::Builder;
 use Term::ANSIColor;
 
 wrap *Test::Builder::diag, pre => sub {
 $_[1]   = color('red') . $_[1];
 $_[-2]  =~ s/$/color 'reset'/e;
 };
 
 use Test::More tests = 1;
 
 fail("Its fun to COLOR!");
 
 Seems altering the elements of @_ works but adding them does not.  He's
 relying on @_ aliasing rather than passing @_ around explicitly.  I guess
 only the elements of @_ are aliased, not the whole array.

Add

diag("Its fun to COLOR!");

to the above to see why I doubted.

When you call fail(), the message gets stuffed into a variable and then
passed to the wrapped sub but calling diag directly means $_[1] is a
constant and the wrapper dies with

Modification of a read-only value attempted at sch line 8.
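Both results follow from how @_ aliasing works, which can be demonstrated in a few lines:

```perl
use strict;
use warnings;

# The elements of @_ are aliases to the caller's arguments, so writing to
# $_[0] inside a sub changes the caller's variable. Assigning to @_ as a
# whole only rebinds the sub's local view and leaves the caller untouched.
sub bump   { $_[0]++ }
sub rebind { @_ = (99) }    # no effect on the caller's variables

my $x = 1;
bump($x);
print "after bump: $x\n";      # after bump: 2
rebind($x);
print "after rebind: $x\n";    # after rebind: 2
```

It is also why calling diag() with a literal dies in the wrapped version: $_[1] is then an alias to a read-only constant, and the wrapper's in-place modification is a write to it.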

  While you're here, any chance you'd consider converting Test::Builder to be
  hash based rather than class based with a way of creating new instances.
  Obviously there should be only 1 real Test::Builder object but allowing
  other instances would make test-module testing easier and test nesting
  (suites of suites etc) possible,
 
 Yes, this is planned for 0.50.  If nothing else it will let Test::Builder
 test itself and make Mark Fowler handy.

I did it about a year ago. I'll dig out the patch and see if it still
applies,

F


Re: Test::More::is_deeply() bug

2004-07-08 Thread Fergal Daly
On Thu, Jul 08, 2004 at 03:22:57PM -0400, Michael G Schwern wrote:
 head scratch What version of Test::More is that?

Not the one it should have been! I had patched my version in work long ago
and forgot about it. Oddly, someone else posted a patch against the original
for the same thing on p5p the next day, so I didn't bother fixing mine up,

F



Re: [IDEA] Drop into the debugger on failure

2004-07-08 Thread Fergal Daly
On Thu, Jul 08, 2004 at 01:59:35PM -0400, Michael G Schwern wrote:
 Likely you'd control if you wanted this behavior with 
 HARNESS_PERL_SWITCHES=-MTest::AutoDebug
 
 This can be implemented, currently, by adding a post hook onto 
 Test::Builder->ok() with Hook::LexWrap or Sub::Uplevel.  I'm considering
 future versions of Test::Builder to offer some sort of event subscriber
 system so people can more easily do this sort of thing.

The debugger thing sounds very nice but rather than event hooks, why not use
OO.

package Test::Debug;

use Test::Builder;

@ISA= 'Test::Builder';

*Test::Builder::new = sub {Test::Debug};

sub ok
{
my $self = shift;
my $ok = shift;

if ($ok)
{
local $Test::Builder::Level = $Test::Builder::Level + 1;
$self->SUPER::ok($ok);
}
else
{
debugger_me();
}
}

most of the above could be in Test::Builder::Override,

F




Re: [IDEA] Drop into the debugger on failure

2004-07-08 Thread Fergal Daly
On Thu, Jul 08, 2004 at 04:37:06PM -0400, Michael G Schwern wrote:
 With inheritence, only one variant can be used at a time.
 
 With event subscribers, lots of variants can be used at a time.
 
 Consider what happens when you want to use Test::AutoDebug and a hypothetical
 module which colors failure diagnostics.  If they're both implemented as their 
 own Test::Builder subclasses only one can be in operation at a time.  With
 an event model they're just hooking into the existing object and can
 both be informed of an event and take whatever action they like.

That's fair enough but to get the diag colourer and every other possibility
would require event hooks in a variety of places. Test::Builder's interface
is not so complicated so it's not such a big undertaking.

The main point was that the OO way works right now,

F





Re: Test::More::is_deeply() bug

2004-06-30 Thread Fergal Daly
There are patches in the archives for this and a couple of other bugs but
they were submitted along with another change that wasn't acceptable so they
were never applied. A search for is_deeply should find the patches and a
long argument,

F

On Wed, Jun 30, 2004 at 09:12:45AM -0400, Geoffrey Young wrote:
 hi all
 
 I'm not sure if this is the proper forum for reporting Test::More bugs, but
 I know some of the interested parties are listening :)
 
   use Test::More tests = 2;
 
 
   is(undef,undef);
   is_deeply([undef],[undef]);
 
 
   # both of these should fail
   is('',undef);# this fails
   is_deeply([''],[undef]); # this does not
 
 --Geoff


Re: Test::More::is_deeply() bug

2004-06-30 Thread Fergal Daly
On Wed, Jun 30, 2004 at 09:59:15AM -0400, Geoffrey Young wrote:
 I found some stuff about is_deeply() and stringified references, but it
 doesn't look to me to be the same thing.  the problem I am describing is the
 difference specifically between an empty string and undef.
 
   is_deeply([''],[undef]);
 
 improperly says that the two arrays are equal.
 
   is_deeply(['foo'],[undef]);
 
 behaves as expected.
 
 am I missing something in the discussion?

Actually, it seems that some of the patches were applied. The problem is
that is_deeply() delegates to ->is_eq() for non-deep arguments but handles
its own string comparison once you descend into the structure. The patch
below seems to fix it,

F


--- More.pm.orig2004-06-30 15:15:24.182762112 +0100
+++ More.pm 2004-06-30 15:16:36.330793944 +0100
@@ -1035,7 +1035,9 @@
 # Quiet uninitialized value warnings when comparing undefs.
 local $^W = 0; 
 
-if( ! (ref $e1 xor ref $e2) and $e1 eq $e2 ) {
+if( ! (ref $e1 xor ref $e2) and 
+! (defined $e1 xor defined $e2) and
+$e1 eq $e2 ) {
 $ok = 1;
 }
 else {



Re: C/C++ White-Box Unit Testing and Test::More

2004-06-26 Thread Fergal Daly
On Fri, Jun 25, 2004 at 10:13:52PM +0100, Adrian Howard wrote:
 On 25 Jun 2004, at 16:51, Fergal Daly wrote:
 [snip]
 NB: I haven't used xUnit style testing so I could be completely off 
 the mark
 but some (not all) of these benefits seem to be available in T::M land.
 
 Just so I'm clear - I'm /not/ saying any of this is impossible with 
 T::M and friends. That's obviously silly since you can build an xUnit 
 framework with Test::Builder and friends.
 
 What xUnit gives you is a little bit more infrastructure to make these 
 sorts of task easier.

That's fair enough but that infrastructure is just extra baggage in some
cases.

Actually, just after I wrote the email, I realised I had used xUnit before,
in Delphi. With DUnit, testing a single class takes a phenomenal amount of
boilerplate code and I guess that's why I'd blocked it from my memory :).

As you say, we already have a good chunk of xUnit style with Test::Harness,
with each .t file corresponding somewhat to a suite but without the
nestability.

I think the baggage only pays for itself when you end up doing a lot of
inheriting between test classes,

F




Re: C/C++ White-Box Unit Testing and Test::More

2004-06-26 Thread Fergal Daly
On Fri, Jun 25, 2004 at 02:18:49PM -0500, Andy Lester wrote:
 On Fri, Jun 25, 2004 at 04:51:29PM +0100, Fergal Daly ([EMAIL PROTECTED]) wrote:
   * I never have to type repetitive tests like
   
  isa_ok Foo->new(), 'Foo'
   
   again because it's handled by a base class that all my test classes 
   inherit from.
 
 Repetition is good.  I feel very strongly that you should be checking
 your constructor results in every single test, and checked against
 literals, not variables.
 
 my $foo = My::Foo->new();
 isa_ok( $foo, 'My::Foo' );
 # and then use it.
 #
 # Later on...
 my $foo = My::Foo->new( bar => 14, bat => \$wango );
 isa_ok( $foo, 'My::Foo' );
 
 The more checks you have, the better.  Sure, the first isa_ok
 technically covers the constructor, but why not check after EVERY
 constructor?  The 2nd example is really an entirely different test.

@_ solves that.

sub constructor_ok
{
my $class = shift;
isa_ok($class->new(@_), $class);
}

I don't think xUnit style makes it any easier to run the same test with many
different inputs,

F



Re: C/C++ White-Box Unit Testing and Test::More

2004-06-25 Thread Fergal Daly
On Fri, Jun 25, 2004 at 04:05:09PM +0100, Adrian Howard wrote:
 
 On 24 Jun 2004, at 20:19, Andrew Pimlott wrote:
 
 On Thu, Jun 24, 2004 at 05:08:44PM +0100, Adrian Howard wrote:
 Where xUnit wins for me are in the normal places where OO is useful
 (abstraction, reuse, revealing intention, etc.).
 
 Since you've thought about this, and obviously don't believe it's OO 
 so
 it's better, I'd be interested in seeing an example if you have one in
 mind.

NB: I haven't used xUnit style testing so I could be completely off the mark
but some (not all) of these benefits seem to be available in T::M land.

 Off the top of my  head.
 
 * I never have to type repetitive tests like
 
  isa_ok Foo->new(), 'Foo'
 
 again because it's handled by a base class that all my test classes 
 inherit from.

sub constructor_ok
{
my $class = shift;

isa_ok $class->new, $class;
}

 * I can create units of testing that can be reused multiple times. If I 
 have an Iterator interface I can write a test suite for it once and 
 reuse it any class that implements the Iterator interface.

What's stopping you doing this in T::M,

sub test_iterator
{
my $iterator = shift;
# test various things about $iterator.
}

 * I have conception level available higher than individual tests (in 
 T::M land) or asserts (in xUnit land). I can say something like:
 
   sub addition_is_commutative : Test {
   is 10 + 5, 15;
   is 5 + 10, 15;
   };
 
 and talk about addition_is_commutative test as a concept separate from 
 the tests/assertions that implement it. I can easily move test methods 
 around as I refactor without having to worry about it breaking some 
 other part of the test suite.

I don't get this. What is the difference between having this as a method vs
as a sub?

 * The setup/teardown methods provide an infrastructure for creating 
 test fixtures and isolating tests, which can often save typing and 
 speed everything up considerably.
 
 
 * Need to check that a class invariant still holds after each test? 
 Chuck it in a teardown method.

In T::M land you could put your setup and teardown in modules and call them
before and after. Then if they're named consistently you could automate that,
at which point you'd almost have xUnit. So xUnit seems to win here for sure,
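A rough sketch of that automation (all names invented for illustration, not any real framework's API):

```perl
use strict;
use warnings;

# Consistently named setup/teardown subs, driven automatically around each
# test sub -- roughly the fixture half of what xUnit's infrastructure buys.
my %fixture;
sub setup    { $fixture{handle} = 'connected' }
sub teardown { %fixture = () }

sub run_test {
    my ($name, $code) = @_;
    setup();
    $code->();
    teardown();    # also a natural place to assert class invariants
}

run_test('uses the fixture', sub { print "handle is $fixture{handle}\n" });
```

From here, discovering the test subs by name and looping over them is the remaining step to something xUnit-shaped.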

F


Re: testing for unsuccessful require - mocking require ?

2004-06-19 Thread Fergal Daly
Below is Hide.pm, you can use it like this

use Hide qw( Foo );

require Bar; # will be fine
require Foo; # will fail

I just wrote it now. It seems to work (even if @INC already contains
subrefs, arrayrefs or objects). You can even use it twice and everything
should just work. It seems to have a problem with XS modules though, I'll
look into that.

If you think it's useful, I'll add docs and a test suite and put it on CPAN,

F

On Sat, Jun 19, 2004 at 09:11:43PM -0200, Gabor Szabo wrote:
 
 I would like to test a module for unsuccessful require while the
 required module is installed. That is I'd like to test how my code would
 work if Foo.pm was not present.

use strict;
use warnings;

package Hide;

use Scalar::Util qw( blessed reftype );

sub import
{
my $pkg = shift;
my @hide = @_;

# these variables are used in the closure below.
my @inc;
my %hide;

@inc = @main::INC;

@hide = map { s#::#/#g; "$_.pm" } @hide;
@hide{@hide} = ();

# replace the real @INC with one which will fail when any of the hidden
# modules are required but will act like a normal require for all others

@main::INC = sub {
my $self = shift;
my $fn = shift;

if (exists $hide{$fn})
{
return undef;
}

foreach my $path (@inc)
{
my $fh;
if (ref $path)
{
if (blessed $path)
{
$fh = $path->INC($fn);
}
elsif(reftype $path eq 'ARRAY')
{
$fh = $path->[0]->($path, $fn);
}
elsif(reftype $path eq 'CODE')
{
$fh = $path->($path, $fn);
}
}
else
{
my $full = "$path/$fn";
open($fh, $full) || next;
$main::INC{$fn} = $full;
}
return $fh;
}
return undef;
};
}

1;


Re: ok(1,1) vs. ok ('foo','foo') in Test::More

2004-02-03 Thread Fergal Daly
On Tuesday 03 February 2004 20:46, Tels wrote:
 PS: Thanx for your suggestion, but what exactly does this do:
 
 sub ok
 {
 @_ = 1;
goto &Test::More::ok;
 }
 
 Pass a single (1), or only the first argument? *puzzled*

It passes a single (1) :-( It should be

$#_ = 0;

I got too used to using @array as a scalar; I forgot that doesn't work for
setting,
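For the record, the difference between the two is easy to demonstrate:

```perl
use strict;
use warnings;

# "$#args = 0" truncates the array to its first element (what the ok()
# wrapper needs to do to @_), whereas "@args = 1" would replace the whole
# contents with the single element 1.
my @args = ('got', 'expected', 'a name');
$#args = 0;
print scalar(@args), ": @args\n";    # 1: got
```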

F



Re: ok(1,1) vs. ok ('foo','foo') in Test::More

2004-02-03 Thread Fergal Daly
 If I could just change use Test; to use Test::More; without hundreds of warnings 
 springing on me I know I would convert the test scripts and then change 
 them step by step over to the new code (or not change them at all, because 
 don't change working code..)

If you don't mind adding a

use MyModule::OKSwapper;

after Test::More in every file then you could have

package MyModule::OKSwapper;

require Test::More;
require Exporter;
@ISA = qw( Exporter );
@EXPORT = qw( ok );

sub ok
{
@_ = 1;
goto &Test::More::ok;
}

Then, when you fix all the ok()s in a file, just delete the

use MyModule::OKSwapper

If you put the module in your t/MyModule/OKSwapper.pm then it won't be 
installed with make install,

F



Re: Test::More::todo_some ? (was: Re: Setting TODO by test number)

2004-01-13 Thread Fergal Daly
On Tue, Jan 13, 2004 at 04:35:21PM +0100, Elizabeth Mattijsen wrote:
 At 16:47 + 1/12/04, Fergal Daly wrote:
 You can just do Test::Builder->new to get the Test::Builder object. It will
 be the same one used by Test::More because it's a singleton. That way you
 should need no patches,
 
 In the end I came up with this code.  It's pretty simple and 
 straightforward and maybe would be nice to include with Test::More.

I definitely have a use for that but basing it on test numbers seems a bit
dodgy as a small tweak here or there (especially with looped tests) can
throw the numbers out completely.

I think it's a good idea to give all tests unique names. I try to do this but
I don't always succeed. It would be nice to be able to base the todos on the
name rather than the number. So say you're testing out your new addition
function and for some reason it's pretty good so far but has trouble with 2
and 2

my %todo = (
  "add 2 + 2" => "Put 2 and 2 together and gets 5",
);

foreach my $i (0..100)
{
  foreach my $j (0..100)
  {
todo_some(
  \%todo,
  sub{is(my_add($i, $j), $i + $j, "add $i + $j")}
);
  }
}

now I don't have to worry about changing the loop bounds or adding tests
before the loop,

F


Re: Test::More::todo_some ? (was: Re: Setting TODO by test number)

2004-01-13 Thread Fergal Daly
On Tuesday 13 January 2004 16:41, Elizabeth Mattijsen wrote:
 Maybe the first parameter of todo_some should also accept a code 
 reference to a subroutine which, given the other parameters, is 
 supposed to give the TODO reason?  That would make it even more 
 flexible, I would think.

I think at that point putting a standard todo inside your loop is probably easier

TODO:{
  todo_skip "because", 1 if (some condition);
  #... the real test
}

You could have a skip_some_by_name() or skip_some_by_number()

Better still, use just one function: if the hash key is a number then assume 
it's a test number; if it's not, then assume it's a test name. This should be 
safe enough as Test::Builder already complains when you use a number as a 
test name, so there's no great harm in making it a prerequisite to using this 
function,
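A hypothetical sketch of that one-function dispatch (names invented, not a Test::More API): a key made entirely of digits is matched against the test number, anything else against the test name.

```perl
use strict;
use warnings;

# Return the TODO reason for this test, looking the key up by number or by
# name depending on what the key looks like; undef means "not a todo".
sub todo_reason {
    my ($todo, $number, $name) = @_;
    for my $key (keys %$todo) {
        if ($key =~ /^\d+$/) {
            return $todo->{$key} if $key == $number;
        }
        elsif (defined $name) {
            return $todo->{$key} if $key eq $name;
        }
    }
    return;
}

my %todo = (5 => 'known bad number', 'add 2 + 2' => 'sums to 5');
print todo_reason(\%todo, 5, undef), "\n";          # known bad number
print todo_reason(\%todo, 7, 'add 2 + 2'), "\n";    # sums to 5
```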

F



Re: Setting TODO by test number

2004-01-12 Thread Fergal Daly
You can just do Test::Builder->new to get the Test::Builder object. It will
be the same one used by Test::More because it's a singleton. That way you
should need no patches,

F

On Mon, Jan 12, 2004 at 05:26:59PM +0100, Elizabeth Mattijsen wrote:
 I'm using Test::xxx as a tool for testing the functioning of a rather 
 large C program (currently at 112K+ tests).  Many of the tests 
 consist of running combinations of parameters in many nested loops. 
 Sometimes some of these tests fail.  For example, out of a 
 test-script that has about 8000 tests, only 20 will fail along the 
 way.
 
 
 I would like these tests to be marked as TODO.  But that's 
 virtually impossible with the current way you specify TODO tests, as 
 the failures only happen with a specific combination of parameters, 
 usually at least 3 levels deep in loops.
 
 
 Now, the test output tells me the test number it failed.  What the 
 exact combination of parameters is, is less important to me in many 
 cases.  The fact that the test (unexpectedly) fails is more important 
 to me.
 
 Now I only have the option of skipping the entire set of nested 
 loops, as I don't want it to produce any test failures on expected 
 errors.  What I would like to do is just somehow give it a list of 
 test numbers to be marked as TODO.  And almost everything is there 
 already: setting $TODO to a non-empty string is the only thing needed 
 to make all subsequent tests marked as todo.  I just lack the 
 method to set $TODO at the right moment (or I have missed it somehow).
 
 So, what I'd like to add for myself is something like:
 
 todo_ok( test, {
  1001 => "a b c still fails, wonder why",
  2345 => "d e gf to be investigated",
 }, "ok text" );
 
 
 The conundrum I'm facing with this is that the current_test method 
 of Test::Builder is not available from Test::More.  And the 
 Test::Builder object being used in a Test::More run is also not 
 available in Test::More.  And I don't want to make another Test::More.
 
 
 So I see basically three solutions to this problem:
 
 1. patch Test::More so that the Test::Builder object can be obtained 
 from a test.
 
 Something like adding sub Test { $Test } to Test::More
 
 
 2. patch Test::More to export current_test
 
 Something like adding sub current_test { $Test->current_test } to 
 Test::More
 
 
 3. patch Test::More to export todo_ok
 
 From within Test::More it should be trivial to create todo_ok, but 
 does this itch of mine warrant includion in Test::More?  And why 
 wouldn't then todo_is, todo_cmp_ok also be made?
 
 
 I think I prefer 1 as it will allow you to possibly do other things 
 in the future apart from accessing current_test.  The solution is 
 more generic and only accessible to those people who actually take a 
 look at the pod / source of Test::Builder.  Solution 2 would on one 
 hand be too specific and on the other hand not specific enough. 
 Option 3 introduces bloat (is that a problem?).
 
 
 If there is another way I could do this, I'm open to that as well... 
 And I wonder where I should post patches to Test::More...  ;-)
 
 
 Liz
 


[ANNOUNCE] Test::Benchmark 0.002

2003-12-20 Thread Fergal Daly
Main changes

added is_fastest()

comparison was broken (but not completely broken so it still passed the tests)

available soon on CPAN and now on

http://www.fergaldaly.com/computer/Test-Benchmark/

F



Re: [ANNOUNCE] Test::Benchmark

2003-12-17 Thread Fergal Daly
On Wed, Dec 17, 2003 at 09:28:48AM -0700, Jim Cromie wrote:
 Hi Fergal,
 
 Id like to see a slightly different interface:
 
   is_faster( sub{}, sub{}, "1st is faster");

This would be nice but what value should I use for iterations? I suppose -1
would be safe enough, anything that takes longer than 1 second probably
doesn't need more than 1 iteration to see if it's faster - unless the times
are very close, in which case the test is probably pointless. So that's a
yes I guess.

is_faster( 5.0, sub{}, sub{}, "1st is 5.0 x faster");
is_faster( 0.5, sub{}, sub{}, "1st is 1/2 as fast");
is_faster( 1000, sub{}, sub{}, "1st is faster over 1000 iterations");
is_faster( -3, sub{}, sub{}, "1st is faster over 3 second test");
 
 ie - with optional arguments, and the ability to test for a float-val.
 OTOH - this might be too DWEOM? ish

DWEOM? I don't fancy the float stuff. I think it's doable but if someone
who's not familiar with the module is reading the test script they could
very easily misunderstand it, unless they read the docs very carefully. It
also suffers from the "how long should I run this for?" problem except now
I'm not sure that -1 is a suitable value for these because now there's a
potentially large factor multiplying the result.

 or, more like Benchmark::timethese().  this form also allows 3 or more tests
 
is_faster( "test1", { test1 => sub{}, test2 => sub{}, test3 => sub{} }, 
 "test1 is fastest");
 
 it is doable, since
 { no warnings; if ( $_[0] and $_[0] == 0 and $_[0] ne '' ) # like timethese

I think I'll call that is_fastest().

 FWIW, I started messing about with the TB = Benchmark relationship..
 with the notion that a few new Benchmark::* classes could represent the
 Benchmark results more portably than the various versions of Benchmark do.
 (notably the old ones)

Benchmark itself could do with refactoring. I thought about doing it but then
decided against it because people would have to upgrade to use it or I'd
have to write two versions of T::B.

 Also, FWIW, I still want some support for throwing to screen via diag()

It dumps the benchmarks to the screen when the test fails. I can stick in a
verbose flag somewhere to make it do that all the time.

 DEPENDENCIES
Benchmark, Test::Builder but they come with most Perl's.
  
 
 
 is that perls, perl's, Perls ?   You can avoid the whole issue;
 Benchmark, which is standard with perl5.00503+
 Test::Builder, which is standard with perl 5.6.[01] ?

Indeed, Perls is correct. I think I've seen too many corner shop signs.

F


[ANNOUNCE] Test::Benchmark

2003-12-16 Thread Fergal Daly
Hi,
since no one else has done it, here it is. Not sure exactly how useful it is, 
benchmarks being the fickle things they are but maybe someone will find it 
useful.

Comments, patches, flames welcome. Docs are below; the file will be on CPAN 
shortly, until then

http://www.fergaldaly.com/computer/Test-Benchmark/

F

NAME
Test::Benchmark - Make sure something really is faster

SYNOPSIS
  use Test::More tests => 17;
  use Test::Benchmark;

  is_faster(-10, sub {...}, sub {...}, "this is faster than that");
  is_faster(5, -10, sub {...}, sub {...}, "this is 5 times faster than that");
  is_n_times_faster(5, -10, sub {...}, sub {...}, "this is 5 times faster than 
that");

is_faster(-10, $bench1, $bench2, "res1 was faster than res2");

DESCRIPTION
Sometimes you want to make sure that your faster algorithm really is
faster than the old way. This lets you check. It might also be useful to
check that your super whizzo XS or Inline::C version is actually faster.

This module is based on the standard Benchmark module. If you have lots of
timings to compare and you don't want to keep running the same benchmarks
all the time, you can pass in a result object from Benchmark::timethis()
instead of a subroutine reference.
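A hedged usage sketch of that reuse (timethis() is from the core Benchmark module; the is_faster() call is commented out since it needs Test::Benchmark installed):

```perl
use strict;
use warnings;
use Benchmark qw( timethis );

# Capture one timing up front; the resulting Benchmark object can then be
# compared against any number of candidate subs without re-running the old
# code each time. A count of -1 means "run for at least 1 CPU second", and
# the 'none' style suppresses timethis's own output.
my $old = timethis(-1, sub { my $s = 0; $s += $_ for 1 .. 500 }, 'old', 'none');

# With Test::Benchmark loaded, $old stands in for a subroutine reference:
# is_faster(-1, sub { my $s = 0; $s += $_ for 1 .. 50 }, $old,
#           'new way beats the cached old timing');
print ref($old), "\n";    # Benchmark
```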

USAGE
There are 2 functions exported: is_faster() and is_n_times_faster().
Actually is_n_times_faster() is redundant because is_faster() can do the
same thing just by giving it an extra argument.

Anywhere you can pass a subroutine reference you can also pass in a
Benchmark object.


  is_faster()
is_faster($times, $sub1, $sub2, $name)

is_faster($factor, $times, $sub1, $sub2, $name)

This runs each subroutine reference $times times and then compares the
results. Instead of either subroutine reference you can pass in a Benchmark
object. If you pass in 2 Benchmark objects then $times is irrelevant.

If $times is negative then that specifies a minimum duration for the
benchmark rather than a number of iterations (see Benchmark for more
details). I strongly recommend you use this feature if you want your modules
to still pass tests reliably on machines that are much faster than your own.
A number of iterations that is enough for a reliable benchmark on your home
PC may be just a twinkling in the eye of somebody else's super computer.

If the test fails, you will get a diagnostic output showing the benchmark
results in the standard Benchmark format.

  is_n_times_faster()
is_n_times_faster($factor, $times, $sub1, $sub2, $name)

This is exactly the same as the second form of is_faster but it's just
explicit about the n times part.

DANGERS
Benchmarking can be slow so please consider leaving out benchmark tests from
your default test suite, perhaps only running them if the user has set a
particular environment variable.

Some benchmarks are inherently unreliable.

BUGS
None that I know of.

DEPENDENCIES
Benchmark, Test::Builder but they come with most Perl's.

HISTORY
This came up on the perl-qa mailing list; no one else had done it.

SEE ALSO
Test::Builder, Benchmark

AUTHOR
Written by Fergal Daly [EMAIL PROTECTED].

COPYRIGHT
Copyright 2003 by Fergal Daly [EMAIL PROTECTED].

This program is free software and comes with no warranty. It is distributed
under the LGPL license. You do not have to accept this license but nothing
else gives you the right to use this software.

See the file LGPL included in this distribution or
http://www.fsf.org/licenses/licenses.html.




Re: Test::Benchmark ??

2003-12-04 Thread Fergal Daly
On Thursday 04 December 2003 21:51, Michael G Schwern wrote:
 But it could be.  It would be nice to have a test like make sure the
 hand optimized version is faster than the unoptimized version or make sure
 the XS version is faster than the Perl version.

Yeah - this would probably be useful.

 Another useful sort of test would be "make sure this function runs in less
 than N perlmips" where a "perlmip" is some unit of CPU time calibrated
 relative to the current hardware.  So a "pmip" on machine A would be
 roughly twice as long as a pmip on a machine that's twice as fast.
 This enables us to test "make sure this isn't too slow".

Not so yeah - just like the mip, the pmip would be a bit too elusive and ever 
changing for this to work quite as well as we'd like.

Anyway, to do these you can do

  my $res = timethese(1, {a => $a_code, b => $b_code}, 'none');

which will produce no output and $res will contain all the benchmark 
information and you can then perform whatever tests you like on it.
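Fleshing that out a little: with the style argument set to 'none', timethese
prints nothing and returns a hashref of Benchmark objects keyed by name, so
you can compare CPU times yourself. The two subs below are throwaway examples,
not from the original mail.

```perl
use strict;
use warnings;
use Benchmark qw(timethese);

# Run both snippets silently; 'none' suppresses the usual report.
my $res = timethese(10_000, {
    a => sub { my @x = map { $_ * 2 } 1 .. 20 },
    b => sub { my @x; push @x, $_ * 2 for 1 .. 20 },
}, 'none');

# Each value is a Benchmark object with accessors such as cpu_p
# (user + system CPU seconds), real and iters.
my $cpu_a = $res->{a}->cpu_p;
my $cpu_b = $res->{b}->cpu_p;
printf "a: %.4fs CPU, b: %.4fs CPU\n", $cpu_a, $cpu_b;
```

From there, asserting "a is faster than b" is just an ordinary comparison on
the numbers, which is essentially what an is_faster() test would wrap.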

If it exists, Test::Benchmark should support something like

is_faster(1, $XS_code, $perl_code, "XS is faster")

and maybe

is_n_times_faster(1, 5, $XS_code, $perl_code, "XS is 5 times faster")

But I don't think that was what Jim wanted; it seemed like he was trying to 
display benchmark info purely for informational purposes,

F



Re: Test::Benchmark ??

2003-12-04 Thread Fergal Daly
On Thursday 04 December 2003 22:51, Michael G Schwern wrote:
 Calibration would be pretty straight forward.  Just have a handful of
 loops to run known snippets of Perl and calibrate the pmip based on how long
 they take to run.  This would change from Perl version to Perl version
 and platform to platform, but you can find something that's not off by more 
 than 25%.

I'm not sure about that 25%. Say the pmip calibrator doesn't fit in the CPU 
cache on any machine, then if the tested algorithm fits in the CPU cache on 
one machine but not on another then there will be a huge difference in the 
number of perl seconds they require.

Worse still, if the pmip calibrator fits in the cache on my fancy new machine 
I'll probably get lots of false negatives because everything else will seem to 
take "perl ages" to run. Then you have the multi-user machine where the test 
passes when the machine is quiet but fails when there's lots of cache 
contention.

You also have a (somewhat rarer) problem with people changing CPUs and not 
recalibrating their pmips. And the increasingly common laptops with varying 
clock speeds and voltage stepping etc.

If Test::Harness had a protocol for warnings rather than just pass and fail 
then this would be more useful,

F



Re: tesing exceptions of Error.pm

2003-12-03 Thread Fergal Daly
On Tue, Dec 02, 2003 at 10:05:46PM -0800, Michael G Schwern wrote:
 Why not?
 
   catch MyError with {
   like( $ex, qr/Bad thing/ );
   };

If there is no exception then that test won't execute. It'd have to be
something like

  try {
      f();
      fail("no exception");
  }
  catch MyError with {
      like( $ex, qr/Bad thing/ );
  };

but that runs the risk of forgetting the fail(),
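The same pattern, and the same forgot-the-fail() risk, exists with plain eval
in core Perl, which may be easier to try out than Error.pm. This is a sketch;
f() here is a stand-in for the code under test, assumed to throw.

```perl
use strict;
use warnings;
use Test::More tests => 1;

# Stand-in for the code under test; assumed to die with this message.
sub f { die "Bad thing happened\n" }

# fail() only runs if f() returns normally, i.e. if the expected
# exception never fires - exactly the case the pattern guards against.
eval {
    f();
    fail('no exception thrown');
};
my $err = $@;
like($err, qr/Bad thing/, 'exception message matches');
```

If someone later removes the fail() line, a silently non-throwing f() makes
the file come up short on its plan, which is at least a noisy failure.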

F

