Re: Test::Most, end blocks and TAP

2009-03-16 Thread Aristotle Pagaltzis
* Michael G Schwern  [2009-03-14 07:40]:
> Need to come up with a better way to deal with
> end-of-test-process tests.

Add a way in Test::Builder to register callbacks for specific
phases of the output and have TB API client code use that instead
of `BEGIN`/`END` et al.

Regards,
-- 
Aristotle Pagaltzis // 


Re: Test::Most, end blocks and TAP

2009-03-16 Thread Aristotle Pagaltzis
* Josh Heumann  [2009-03-13 14:40]:
> I can change that to:
>
> END {
> pass();
> all_done( 2 );
> }
>
> ...and everything's just fine. The problem really comes when
> the test being run in the END block is in another module (such
> as Test::NoWarnings).

END {
    had_no_warnings();
    all_done( 2 );
}
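[Editorial sketch: a minimal, self-contained version of the pattern under discussion, using only core Test::More. The thread's all_done() is a proposed API, so a fixed plan stands in for it here, and had_no_warnings() is replaced by a plain pass() so the script runs without Test::NoWarnings installed.]

```perl
use strict;
use warnings;

# 1 test in the body + 1 test fired from an END block.
use Test::More tests => 2;

pass('the test body ran');

END {
    # Stand-in for an end-of-process check such as had_no_warnings().
    # Because Test::More was loaded first, its own END block (which
    # verifies the plan) runs after this one, so this test is counted.
    pass('end-of-process check ran');
}
```

Because END blocks run in reverse order of compilation, loading Test::More before declaring your own END block is what makes the late test land inside the plan.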

-- 
*AUTOLOAD=*_;sub _{s/(.*)::(.*)/print$2,(",$\/"," ")[defined wantarray]/e;$1}
&Just->another->Perl->hack;
#Aristotle Pagaltzis // 


Re: Counting tests

2009-03-16 Thread Adrian Howard


On 14 Mar 2009, at 05:57, Michael G Schwern wrote:
[snip]
The test numbering exists to ensure that all your tests run, and in the
right order.  XUnit frameworks don't need to know the number of tests
because they simply don't have this type of protection. [1]

[snip]

And, to some extent, need it less. Since most xUnit systems have the
test-result-producer and the test-result-consumer running in the same
process space, some of the problems that plans help with (like early
termination) aren't really much of an issue.


Cheers,

Adrian
--
delicious.com/adrianh - twitter.com/adrianh - adri...@quietstars.com





Re: Counting tests

2009-03-16 Thread Fergal Daly
2009/3/15 Adrian Howard :
>
> On 14 Mar 2009, at 05:57, Michael G Schwern wrote:
> [snip]
>>
>> The test numbering exists to ensure that all your tests run, and in the
>> right
>> order.  XUnit frameworks don't need to know the number of tests because
>> they
>> simply don't have this type of protection. [1]
>
> [snip]
>
> And, to some extent, need it less. Since most xUnit systems have the
> test-result-producer and the test-result-consumer running in the same
> process space - some of the problems that plans help with (like early
> termination) aren't really much of an issue.

Really? I know of at least one automated test runner (by this I mean
it runs all the test files it can find) for pyunit that would say
"everything is fine" if I threw a random sys.exit(0) into my test
script. Without parsing the output, there's not much else to look at
but the exit code, and by having the producer and consumer in the same
process, the producer can easily set the exit code against the "will"
of the consumer.

F


> Cheers,
>
> Adrian
> --
> delicious.com/adrianh - twitter.com/adrianh - adri...@quietstars.com
>
>
>
>


Re: Counting tests

2009-03-16 Thread Evgeny
Thing is, it just does not matter THAT much. The case you describe is
fairly rare in the xUnit world, or in any world, I would guess.

The testing suite does not have a "will"; it is only a tool.

When the testing suite works, it just works. When people have
confidence in it, there is usually a reason behind that.

Let me demonstrate with an example:
A group of Java developers are using JUnit to write unit tests for
their software. That software is being built and tested on a
continuous integration server (the likes of CruiseControl). And they
even went as far as to draw a graph and a report of the running unit
tests.

They know:
- how many unit tests were executed each run
- how much time each unit test took to run (and the total time)
- which unit tests passed, and which failed
- the behavior of some tests over time (a bad test can randomly
fail/pass for example)

If you told them that each time they write a unit test they also need
to go to some file and increment a counter, they would probably either
not do it or say you are crazy.

The major idea is to make it easier for a developer to write stuff.
That's why people invent IDEs (I use vi personally): so that the
developer will not be annoyed by things that are much better done
automatically, like updating a counter each time he writes one line
of test code.

I won't argue that the plan counter has no use. It probably does. But
it also annoys the developer. That is why you would probably see
"no_plan" used in most of the testing code in the wild (I am not
talking about CPAN).


just my opinion, you are welcome to argue your reasons if you feel differently.


- evgeny


Re: Counting tests

2009-03-16 Thread Michael G Schwern
Adrian Howard wrote:
> 
> On 14 Mar 2009, at 05:57, Michael G Schwern wrote:
> [snip]
>> The test numbering exists to ensure that all your tests run, and in
>> the right
>> order.  XUnit frameworks don't need to know the number of tests
>> because they
>> simply don't have this type of protection. [1]
> [snip]
> 
> And, to some extent, need it less. Since most xUnit systems have the
> test-result-producer and the test-result-consumer running in the same
> process space - some of the problems that plans help with (like early
> termination) aren't really much of an issue.

In that your whole testing process crashes and you get no results? ;)

Early exit isn't the practical reason for plans: the harness, by
watching the exit code of the test process, handles everything but an
actual exit(0), and those are very rare. The real problem is a logic
or data error which results in some tests being accidentally bypassed.

I suppose what really covers their ass is that, by being broken up
into test_* routines, each test function is isolated, so their code is
simpler and less likely to have a logic error that results in a test
never being run.


-- 
44. I am not the atheist chaplain.
-- The 213 Things Skippy Is No Longer Allowed To Do In The U.S. Army
   http://skippyslist.com/list/


Re: Counting tests

2009-03-16 Thread Michael G Schwern
Evgeny wrote:
> The know:
> - how many unit tests were executed each run
> - how much time each unit test took to run (and the total time)
> - which unit tests passed, and which failed
> - the behavior of some tests over time (a bad test can randomly
> fail/pass for example)

As an aside, have a look at Smolder.
http://sourceforge.net/projects/smolder

Here it is live testing Parrot.
http://smolder.plusthree.com/app/public_projects/smoke_reports/8


> I wont argue that plan counter does not have its use. It probably
> does. But what it also does is annoy the developer. That is why you
> would probably see "no_plan" used in most of the testing code in the
> wild (I am not talking about CPAN).

I agree.  The plan is a big wonkin hammer that's usually unnecessary.
That's why there's no_plan.  And soon the safer done_testing().

I'd be fine with someone revising the Test::More and Test::Tutorial
docs to make them less plan-centric now that done_testing() is there.
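[Editorial sketch of what a done_testing()-style script looks like; done_testing() landed in Test::More 0.88, so this won't run on earlier versions.]

```perl
use strict;
use warnings;
use Test::More;   # note: no plan declared up front

is 2 + 2, 4, 'arithmetic still works';
like 'TAP version 13', qr/^TAP/, 'looks like TAP';

# Declares "this many tests ran, and that was all of them" after the
# fact: an accidental early exit above this line is still caught, but
# nobody has to hand-maintain a count.
done_testing();
```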


-- 
package Outer::Space;  use Test::More tests => 9;


Re: Counting tests

2009-03-16 Thread Eric Wilhelm
# from Michael G Schwern
# on Monday 16 March 2009 11:47:

>I suppose what really covers their ass is that by being broken up into
> test_* routines each test function is isolated and their code is
> simpler and less likely to have a logic error that results in a test
> never being run.

Why is it that whenever plans come up, we hear all about the checksum 
aspect, but never talk about the 'progress bar'?  If you're running 
someone else's test suite, it is very nice to have some idea of how 
much has completed.  Yes, that has caveats, but so does everything.

And on a related note: TAP allowing something where subplans are 
uncounted, but the toplevel plan is, would be nice.  That is, say a 
test has 20 'groups' of whatever-number subtests.  The groups could be 
easily counted (even automatically in some structures).  This takes you 
to one step finer granularity than "half of the test scripts are 
complete".

--Eric
-- 
Cult: A small, unpopular religion.
Religion: A large, popular cult.
-- Unknown
---
http://scratchcomputing.com
---


Re: Counting tests

2009-03-16 Thread Fergal Daly
2009/3/16 Evgeny :
> Thing is. It just does not matter THAT much.
> The case you describe is fairly rare in the xUnit world, or in any
> world I would guess.

And as I said, I got bitten by it just last week. Another way I've
been bitten is when I've done slightly more complex xUnit stuff where
I couldn't just let it use introspection to find all the testcases
automatically. Once you start doing that, registering testcases into
testsuites and making sure they all get run, it becomes very easy to
leave some out, and xUnit provides absolutely no protection against
that. In fact, in that case you end up building the equivalent of
perl's plan yourself.

> The testing suite does not have a "will", it is only a tool.
>
> When the testing suite works, it just works; When people have
> confidence in it for some reason, then there is usually a reason
> behind that.
>
> Let me demonstrate with an example:
> A group of Java developers are using JUnit to write unit tests for
> their software. That software is being built and tested on a
> continuous integration server (the likes of CruiseControl). And they
> even went as far as to draw a graph and a report of the running unit
> tests.
>
> The know:
> - how many unit tests were executed each run
> - how much time each unit test took to run (and the total time)
> - which unit tests passed, and which failed
> - the behavior of some tests over time (a bad test can randomly
> fail/pass for example)
>
> If you would tell them that each time they write a unit test, they
> also need to go to some file and increment some counter. They would
> probably either not do it, or say you are crazy.
>
> The major idea is to make it easier for a developer to write stuff.
> Thats why people invent IDEs (I use vi personally). So that the actual
> developer will not be annoyed to do things that are much better done
> automatically, like for example update a counter each time he writes
> one line of test code.

As has already been pointed out, this is impossible to do
automatically. Not because counting how many tests will run is
equivalent to the halting problem; getting around that is actually
quite easy: just run the script and see. The real reason it's
impossible is that a plan is a summary of what you think you wrote and
what you think it will do. Your computer can only see what you
actually wrote and what it will actually do. So an automatically
calculated plan will always be "correct" and thus never tells you
anything.

Alternatively, the plan is a meta-test, a test for your testing code.
It is the equivalent of putting

is($tests_run_count, $tests_i_planned_count)

at the end of your test script. Letting the computer calculate the
plan is the equivalent of putting

is($tests_run_count, $tests_run_count)

at the end of your test script. It's pointless. It will always pass.


Sometimes a plan is more trouble than it's worth; you might even think
it's always more work than it's worth. However, for it to be worth
anything at all, it must involve work.


A possibly easier alternative to the current planning system is
available if you use revision control. Keep the plan in an external
file: say foo.t's plan goes in foo.plan. When you run foo.t, it writes
the test count into foo.count. Before checking in changes to foo.t,
you run it and then cp foo.count foo.plan. When you look at the diff
for your checkin, you should see that foo.plan is changing in line
with your changes to foo.t. Wrap this all up in a script and put it in
your RCS's hooks/triggers mechanism so that it all happens
automatically. Make a module Test::FilePlan to take care of reading
and writing the foo.{plan,count} files. So you can automatically
generate the number, but you still need a human to check whether the
number is changing correctly.
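[Editorial sketch of the check that script might perform. foo.plan, foo.count, and Test::FilePlan are all names invented in this email; the RCS hook wiring is left out.]

```perl
use strict;
use warnings;

# Stand-in for a run of foo.t writing its test count; in the real
# scheme foo.t (via the hypothetical Test::FilePlan) would do this.
open my $cnt, '>', 'foo.count' or die "can't write foo.count: $!";
print {$cnt} "7\n";
close $cnt;

# Read a single number from a file, or undef if the file isn't there.
sub read_num {
    my ($file) = @_;
    open my $fh, '<', $file or return undef;
    chomp( my $n = <$fh> );
    return $n;
}

my $count = read_num('foo.count');
my $plan  = read_num('foo.plan');

if ( !defined $plan or $plan != $count ) {
    # The "cp foo.count foo.plan" step: the resulting diff to foo.plan
    # is exactly what a human reviews at checkin time.
    print 'plan changing: ', defined $plan ? $plan : 'none', " -> $count\n";
    open my $out, '>', 'foo.plan' or die "can't write foo.plan: $!";
    print {$out} "$count\n";
    close $out;
}
```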

F

> I wont argue that plan counter does not have its use. It probably
> does. But what it also does is annoy the developer. That is why you
> would probably see "no_plan" used in most of the testing code in the
> wild (I am not talking about CPAN).
>
>
> just my opinion, you are welcome to argue your reasons if you feel 
> differently.
>
>
> - evgeny
>


Re: Counting tests, vi vs. emacs, and abortion

2009-03-16 Thread Andy Lester


How about we put up a page somewhere that discusses the pros and cons  
of counting tests, and then whenever the quarterly discussion of LOLZ  
YOU ARE COUNTING YOUR TESTZ FOR NO REASON! vs. YOU DON'T KNOW WHAT  
HAPPENS WITHOUT A PLAN N00B! rears its head, we can refer people there.


Some people see great value in plans.  Some people don't.  Each group  
has valid reasons for their choices.  Fortunately, Test::* handle both.


If anything new has been said about the value of plans vs. no plans
in the past five years, I will eat this pad of Post-Its.


Love and kisses,
Andy

--
Andy Lester => a...@petdance.com => www.petdance.com => AIM:petdance





Re: Counting tests, vi vs. emacs, and abortion

2009-03-16 Thread Michael G Schwern
Andy Lester wrote:
> 
> How about we put up a page somewhere that discusses the pros and cons of
> counting tests, and then whenever the quarterly discussion of LOLZ YOU
> ARE COUNTING YOUR TESTZ FOR NO REASON! vs. YOU DON'T KNOW WHAT HAPPENS
> WITHOUT A PLAN N00B! rears its head, we can refer people there.

Ok, write it.

Meanwhile I'm finding the discussion about how the xUnit world handles the
problem interesting so please don't step on it.


-- 
Life is like a sewer - what you get out of it depends on what you put into it.
- Tom Lehrer


Re: Counting tests, vi vs. emacs, and abortion

2009-03-16 Thread Fergal Daly
Great idea. Why didn't someone think of it before and refer to that
page in the first posting in this thread and also in the middle...

F

2009/3/16 Andy Lester :
>
> How about we put up a page somewhere that discusses the pros and cons of
> counting tests, and then whenever the quarterly discussion of LOLZ YOU ARE
> COUNTING YOUR TESTZ FOR NO REASON! vs. YOU DON'T KNOW WHAT HAPPENS WITHOUT A
> PLAN N00B! rears its head, we can refer people there.
>
> Some people see great value in plans.  Some people don't.  Each group has
> valid reasons for their choices.  Fortunately, Test::* handle both.
>
> If anything new has been said about the value of plans vs. no plans in the
> past five years, I will eat this pad of Post-Its.
>
> Love and kisses,
> Andy
>
> --
> Andy Lester => a...@petdance.com => www.petdance.com => AIM:petdance
>
>
>
>


Re: Counting tests

2009-03-16 Thread Michael G Schwern
Eric Wilhelm wrote:
> # from Michael G Schwern
> # on Monday 16 March 2009 11:47:
> 
>> I suppose what really covers their ass is that by being broken up into
>> test_* routines each test function is isolated and their code is
>> simpler and less likely to have a logic error that results in a test
>> never being run.
> 
> Why is it that whenever plans come up, we hear all about the checksum 
> aspect, but never talk about the 'progress bar'?  If you're running 
> someone else's test suite, it is very nice to have some idea of how 
> much has completed.  Yes, that has caveats, but so does everything.

xUnit frameworks certainly have progress bars.  They do this, I
assume, by simply counting the number of test* methods run vs. the
total number to be run.
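[Editorial sketch of the progress arithmetic: with either a TAP plan line ("1..9") or a counted list of test* methods, the harness knows the total before it starts. The numbers here are picked arbitrarily.]

```perl
use strict;
use warnings;

my $total = 9;   # e.g. from the plan line "1..9"

for my $ran ( 1 .. $total ) {
    my $pct = int( 100 * $ran / $total );
    printf "test %d/%d (%d%%)\n", $ran, $total, $pct;
    # a real harness would redraw a progress bar here instead
}
```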


-- 
164. There is no such thing as a were-virgin.
-- The 213 Things Skippy Is No Longer Allowed To Do In The U.S. Army
   http://skippyslist.com/list/


Re: Counting tests

2009-03-16 Thread Michael G Schwern
Fergal Daly wrote:
> Alternatively, the plan is a meta-test, a test for your testing code.
> It is the equivalent of putting
> 
> is($tests_run_count, $tests_i_planned_count)
> 
> at the end of your test script. Letting the computer calculate the
> plan is the equivalent of putting
> 
> is($tests_run_count, $tests_run_count)
> 
> at the end of your test script. It's pointless. It will always pass.

I hear where you're coming from, but there is some value in knowing a test
still does what it did before.  A regression test.

Consider the following:

    my @things = $obj->things(3);
    for my $thing (@things) {
        is $thing, 42;
    }

It's nice to know that things() still returns 3 items. Yes, there
should be a test in there checking that @things == 3, but maybe there
isn't, and this is a simple example.
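[Editorial sketch of that example with the count check added; the Thing class is invented here purely so the snippet runs on its own.]

```perl
use strict;
use warnings;
use Test::More tests => 4;   # 1 count check + 3 per-item checks

# Invented stand-in for the $obj in the example above.
package Thing;
sub new    { bless {}, shift }
sub things { my ( $self, $n ) = @_; return (42) x $n }

package main;

my $obj    = Thing->new;
my @things = $obj->things(3);

# The check the example says "should be in there": if things() quietly
# returns fewer items, this fails even under no_plan.
is scalar @things, 3, 'things() returned 3 items';
is $_, 42, 'thing is 42' for @things;
```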

That said, I'm not fond of those folks with editor macros to set the count to
whatever number just ran.  Seems too easy to abuse.


-- 
"Clutter and overload are not an attribute of information,
 they are failures of design"
-- Edward Tufte


Re: Counting tests, vi vs. emacs, and abortion

2009-03-16 Thread Andy Lester


On Mar 16, 2009, at 6:25 PM, Michael G Schwern wrote:


Ok, write it.



Fair enough.  http://www.perlfoundation.org/perl5/index.cgi?test_counts
is the start.


I don't mean to stomp on new discussion, just the rehashing of the  
old.  My apologies if my skimming of the thread conflated the two.


xoxo,
Andy

--
Andy Lester => a...@petdance.com => www.petdance.com => AIM:petdance





Re: Counting tests

2009-03-16 Thread Fergal Daly
2009/3/16 Michael G Schwern :
> Fergal Daly wrote:
>> Alternatively, the plan is a meta-test, a test for your testing code.
>> It is the equivalent of putting
>>
>> is($tests_run_count, $tests_i_planned_count)
>>
>> at the end of your test script. Letting the computer calculate the
>> plan is the equivalent of putting
>>
>> is($tests_run_count, $tests_run_count)
>>
>> at the end of your test script. It's pointless. It will always pass.
>
> I hear where you're coming from, but there is some value in knowing a test
> still does what it did before.  A regression test.
>
> Consider the following:
>
>my @things = $obj->things(3);
>for my $thing (@things) {
>is $thing, 42;
>}
>
> It's nice to know that things() still returns 3 items. Yes, there should be
> a test in there checking that @things == 3, but maybe there isn't, and this
> is a simple example.

This is exactly what a plan will catch and why it can't be automated.
As far as I can tell we're agreeing.

> That said, I'm not fond of those folks with editor macros to set the count to
> whatever number just ran.  Seems too easy to abuse.

This is not unreasonable if you have an RCS, particularly if you do
code reviews of each other's checkins, because then you're likely to
notice how the plan is changing (or not) with each checkin. Otherwise
you're just wasting CPU cycles and should use no_plan.

F

>
> --
> "Clutter and overload are not an attribute of information,
>  they are failures of design"
>-- Edward Tufte
>