Re: Counting tests

2009-03-18 Thread Fergal Daly
2009/3/17 Adrian Howard :
>
> On 16 Mar 2009, at 18:47, Michael G Schwern wrote:
>
>> Adrian Howard wrote:
>>>
>>> On 14 Mar 2009, at 05:57, Michael G Schwern wrote:
>>> [snip]

 The test numbering exists to ensure that all your tests run, and in the right
 order.  XUnit frameworks don't need to know the number of tests because they
 simply don't have this type of protection. [1]
>>>
>>> [snip]
>>>
>>> And, to some extent, need it less. Since most xUnit systems have the
>>> test-result-producer and the test-result-consumer running in the same
>>> process space - some of the problems that plans help with (like early
>>> termination) aren't really much of an issue.
>>
>> In that your whole testing process crashes and you get no results? ;)
>
> Yup! But at least I know something went wrong :-)

Not necessarily. You'll only know this if you are visually inspecting
the output of every test. Once you start using a continuous build/test
setup you have no protection against early exit. This is what I was
trying to get at earlier.

If you have an outright crash, that should be detected, but exit(0)
will look like a pass,
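
Fergal's point can be sketched with a minimal TAP script (a hypothetical test
file; the stray exit(0) stands in for the kind of bug being discussed):

```perl
#!/usr/bin/perl
# Hypothetical t/early-exit.t: a stray exit(0) in the middle of a test.
use strict;
use warnings;
use Test::More tests => 3;   # the plan: exactly 3 tests must run

ok 1, 'first check';
exit 0;                      # bug: exits cleanly before the remaining tests
ok 1, 'second check';
ok 1, 'third check';

# With the plan, the harness sees "1..3" but only one test, so the
# script fails.  With "no_plan", Test::More's END block prints "1..1"
# as the process exits, and a harness watching only the exit code and
# the trailing plan would call this a pass.
```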

F

>> Early exit isn't the practical reason for plans, the harness watching the
>> exit code of the test process handles everything but an actual exit(0) and
>> those are very rare.  The real problem is a logic or data error which
>> results in some tests being accidentally bypassed.
>
> Yup. No argument from me there.
>
>> I suppose what really covers their ass is that by being broken up into
>> test_* routines each test function is isolated and their code is simpler
>> and less likely to have a logic error that results in a test never being run.
>
>
> Yup.
>
> Adrian
> --
> delicious.com/adrianh - twitter.com/adrianh - adri...@quietstars.com
>
>
>
>


Re: Counting tests

2009-03-18 Thread Adrian Howard


On 16 Mar 2009, at 18:47, Michael G Schwern wrote:


Adrian Howard wrote:


On 14 Mar 2009, at 05:57, Michael G Schwern wrote:
[snip]

The test numbering exists to ensure that all your tests run, and in the right
order.  XUnit frameworks don't need to know the number of tests because they
simply don't have this type of protection. [1]

[snip]

And, to some extent, need it less. Since most xUnit systems have the
test-result-producer and the test-result-consumer running in the same
process space - some of the problems that plans help with (like early
termination) aren't really much of an issue.


In that your whole testing process crashes and you get no results? ;)


Yup! But at least I know something went wrong :-)

Early exit isn't the practical reason for plans, the harness watching the exit
code of the test process handles everything but an actual exit(0) and those
are very rare.  The real problem is a logic or data error which results in
some tests being accidentally bypassed.


Yup. No argument from me there.

I suppose what really covers their ass is that by being broken up into test_*
routines each test function is isolated and their code is simpler and less
likely to have a logic error that results in a test never being run.



Yup.

Adrian
--
delicious.com/adrianh - twitter.com/adrianh - adri...@quietstars.com





Re: Counting tests

2009-03-18 Thread Adrian Howard


On 16 Mar 2009, at 18:23, Fergal Daly wrote:
[snip]

Really? I know of at least one automated test runner (by this I mean
it runs all the test files it can find) for pyunit that would say
"everything is fine" if I threw a random sys.exit(0) into my test script.

[snip]

That's why I said "most" not "all" :-) Some do have the same problem -
not trying to say otherwise.


However, most of the xUnit frameworks (indeed, most testing frameworks that
I've used, xUnit or not) don't have the nice test-consumer / test-producer
separation that TAP gives us. They do things like T::C does and use
introspection to find loaded test classes and run them all that way.


Just in case I'm not being clear - I do think that separation gives us
advantages. It's just that those systems that do run in a single process
don't have this particular problem.


Cheers,

Adrian
--
delicious.com/adrianh - twitter.com/adrianh - adri...@quietstars.com





Re: Counting tests

2009-03-18 Thread Adrian Howard


On 16 Mar 2009, at 23:52, Fergal Daly wrote:


2009/3/16 Michael G Schwern :



[snip]
I hear where you're coming from, but there is some value in knowing a test
still does what it did before.  A regression test.

Consider the following:

  my @things = $obj->things(3);
  for my $thing (@things) {
      is $thing, 42;
  }

It's nice to know that things() still returns 3 items.  Yes, there should be a
test in there checking that @things == 3 but maybe there's not and this is a
simple example.


This is exactly what a plan will catch and why it can't be automated.
As far as I can tell we're agreeing.



I don't think anybody is disagreeing here. Plans have advantages and
disadvantages. So do no-plans. The only folk I'll disagree with are folk
who say one or the other is universally better.

For my particular style of testing (mostly TDD, tending to write small
isolated tests, etc.) plans tend to get in my way much more than they
help. So I don't use them. Thank you TAP & T::B for letting me do that :-)


Cheers,

Adrian


--
delicious.com/adrianh - twitter.com/adrianh - adri...@quietstars.com





Re: Counting tests

2009-03-17 Thread Ovid


From: Michael G Schwern 

> That said, I'm not fond of those folks with editor macros to set the count to
> whatever number just ran.  Seems too easy to abuse.

++

More than once I've cut-n-drooled the output into the test because I *knew* it 
was correct, only to regret it later.  I hang my head in shame.

Cheers,
Ovid
--
Buy the book - http://www.oreilly.com/catalog/perlhks/
Tech blog- http://use.perl.org/~Ovid/journal/
Twitter  - http://twitter.com/OvidPerl
Official Perl 6 Wiki - http://www.perlfoundation.org/perl6


Re: Counting tests

2009-03-16 Thread Fergal Daly
2009/3/16 Michael G Schwern :
> Fergal Daly wrote:
>> Alternatively, the plan is a meta-test, a test for your testing code.
>> It is the equivalent of putting
>>
>> is($tests_run_count, $tests_i_planned_count)
>>
>> at the end of your test script. Letting the computer calculate the
>> plan is the equivalent of putting
>>
>> is($tests_run_count, $tests_run_count)
>>
>> at the end of your test script. It's pointless. It will always pass.
>
> I hear where you're coming from, but there is some value in knowing a test
> still does what it did before.  A regression test.
>
> Consider the following:
>
>    my @things = $obj->things(3);
>    for my $thing (@things) {
>        is $thing, 42;
>    }
>
> It's nice to know that things() still returns 3 items.  Yes, there should be a
> test in there checking that @things == 3 but maybe there's not and this is a
> simple example.

This is exactly what a plan will catch and why it can't be automated.
As far as I can tell we're agreeing.

> That said, I'm not fond of those folks with editor macros to set the count to
> whatever number just ran.  Seems too easy to abuse.

This is not unreasonable if you have an RCS, particularly if you do
code reviews of each other's checkins, because then you're likely to
notice how the plan is changing (or not) with each checkin. Otherwise
you're just wasting CPU cycles and should use no_plan,

F

>
> --
> "Clutter and overload are not an attribute of information,
>  they are failures of design"
>-- Edward Tufte
>


Re: Counting tests, vi vs. emacs, and abortion

2009-03-16 Thread Andy Lester


On Mar 16, 2009, at 6:25 PM, Michael G Schwern wrote:


Ok, write it.



Fair enough.  http://www.perlfoundation.org/perl5/index.cgi?test_counts
is the start.


I don't mean to stomp on new discussion, just the rehashing of the  
old.  My apologies if my skimming of the thread conflated the two.


xoxo,
Andy

--
Andy Lester => a...@petdance.com => www.petdance.com => AIM:petdance





Re: Counting tests

2009-03-16 Thread Michael G Schwern
Fergal Daly wrote:
> Alternatively, the plan is a meta-test, a test for your testing code.
> It is the equivalent of putting
> 
> is($tests_run_count, $tests_i_planned_count)
> 
> at the end of your test script. Letting the computer calculate the
> plan is the equivalent of putting
> 
> is($tests_run_count, $tests_run_count)
> 
> at the end of your test script. It's pointless. It will always pass.

I hear where you're coming from, but there is some value in knowing a test
still does what it did before.  A regression test.

Consider the following:

    my @things = $obj->things(3);
    for my $thing (@things) {
        is $thing, 42;
    }

It's nice to know that things() still returns 3 items.  Yes, there should be a
test in there checking that @things == 3 but maybe there's not and this is a
simple example.
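
As a sketch of how a plan catches that regression (the two-item list below is
a hypothetical stand-in for a things() call that has regressed):

```perl
use strict;
use warnings;
use Test::More tests => 3;   # one is() planned per expected item

# Hypothetical stand-in for $obj->things(3); imagine a regression
# made it return only two items instead of three.
my @things = (42, 42);       # should have been (42, 42, 42)

for my $thing (@things) {
    is $thing, 42, 'thing is 42';
}
# Only 2 of the 3 planned tests ran, so Test::More complains that you
# planned 3 tests but ran 2, and the script fails - even though every
# is() that did run passed.
```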

That said, I'm not fond of those folks with editor macros to set the count to
whatever number just ran.  Seems too easy to abuse.


-- 
"Clutter and overload are not an attribute of information,
 they are failures of design"
-- Edward Tufte


Re: Counting tests

2009-03-16 Thread Michael G Schwern
Eric Wilhelm wrote:
> # from Michael G Schwern
> # on Monday 16 March 2009 11:47:
> 
>> I suppose what really covers their ass is that by being broken up into
>> test_* routines each test function is isolated and their code is
>> simpler and less likely to have a logic error that results in a test
>> never being run.
> 
> Why is it that whenever plans come up, we hear all about the checksum 
> aspect, but never talk about the 'progress bar'?  If you're running 
> someone else's test suite, it is very nice to have some idea of how 
> much has completed.  Yes, that has caveats, but so does everything.

xUnit frameworks certainly have progress bars.  They do this, I assume, by
simply counting the number of test* methods run vs. the total number to be run.


-- 
164. There is no such thing as a were-virgin.
-- The 213 Things Skippy Is No Longer Allowed To Do In The U.S. Army
   http://skippyslist.com/list/


Re: Counting tests, vi vs. emacs, and abortion

2009-03-16 Thread Fergal Daly
Great idea. Why didn't someone think of it before and refer to that
page in the first posting in this thread and also in the middle...

F

2009/3/16 Andy Lester :
>
> How about we put up a page somewhere that discusses the pros and cons of
> counting tests, and then whenever the quarterly discussion of LOLZ YOU ARE
> COUNTING YOUR TESTZ FOR NO REASON! vs. YOU DON'T KNOW WHAT HAPPENS WITHOUT A
> PLAN N00B! rears its head, we can refer people there.
>
> Some people see great value in plans.  Some people don't.  Each group has
> valid reasons for their choices.  Fortunately, Test::* handle both.
>
> If anything new has been said about the value of plans vs. no plans in the
> past five years, I will eat this pad of Post-Its.
>
> Love and kisses,
> Andy
>
> --
> Andy Lester => a...@petdance.com => www.petdance.com => AIM:petdance
>
>
>
>


Re: Counting tests, vi vs. emacs, and abortion

2009-03-16 Thread Michael G Schwern
Andy Lester wrote:
> 
> How about we put up a page somewhere that discusses the pros and cons of
> counting tests, and then whenever the quarterly discussion of LOLZ YOU
> ARE COUNTING YOUR TESTZ FOR NO REASON! vs. YOU DON'T KNOW WHAT HAPPENS
> WITHOUT A PLAN N00B! rears its head, we can refer people there.

Ok, write it.

Meanwhile I'm finding the discussion about how the xUnit world handles the
problem interesting so please don't step on it.


-- 
Life is like a sewer - what you get out of it depends on what you put into it.
- Tom Lehrer


Re: Counting tests, vi vs. emacs, and abortion

2009-03-16 Thread Andy Lester


How about we put up a page somewhere that discusses the pros and cons  
of counting tests, and then whenever the quarterly discussion of LOLZ  
YOU ARE COUNTING YOUR TESTZ FOR NO REASON! vs. YOU DON'T KNOW WHAT  
HAPPENS WITHOUT A PLAN N00B! rears its head, we can refer people there.


Some people see great value in plans.  Some people don't.  Each group  
has valid reasons for their choices.  Fortunately, Test::* handle both.


If anything new has been said about the value of plans vs. no plans
in the past five years, I will eat this pad of Post-Its.


Love and kisses,
Andy

--
Andy Lester => a...@petdance.com => www.petdance.com => AIM:petdance





Re: Counting tests

2009-03-16 Thread Fergal Daly
2009/3/16 Evgeny :
> Thing is. It just does not matter THAT much.
> The case you describe is fairly rare in the xUnit world, or in any
> world I would guess.

And as I said, I got bitten by it just last week. Another way I've been
bitten is when I've done slightly more complex xUnit stuff where I
couldn't just let it use introspection to find all the testcases
automatically. Once you start doing that - having to register testcases
into testsuites and making sure they all get run - it becomes very easy
to leave some out, and xUnit provides absolutely no protection against
that. In fact in that case, you end up building the equivalent of
perl's plan yourself.

> The testing suite does not have a "will", it is only a tool.
>
> When the testing suite works, it just works; When people have
> confidence in it for some reason, then there is usually a reason
> behind that.
>
> Let me demonstrate with an example:
> A group of Java developers are using JUnit to write unit tests for
> their software. That software is being built and tested on a
> continuous integration server (the likes of CruiseControl). And they
> even went as far as to draw a graph and a report of the running unit
> tests.
>
> They know:
> - how many unit tests were executed each run
> - how much time each unit test took to run (and the total time)
> - which unit tests passed, and which failed
> - the behavior of some tests over time (a bad test can randomly
> fail/pass for example)
>
> If you would tell them that each time they write a unit test, they
> also need to go to some file and increment some counter. They would
> probably either not do it, or say you are crazy.
>
> The major idea is to make it easier for a developer to write stuff.
> Thats why people invent IDEs (I use vi personally). So that the actual
> developer will not be annoyed to do things that are much better done
> automatically, like for example update a counter each time he writes
> one line of test code.

As has already been pointed out, it is impossible to do this
automatically. Impossible not just because counting how many tests
will run is equivalent to the halting problem - getting around that is
actually quite easy: just run the script and see. The real reason
it's impossible is that a plan is a summary of what you think you wrote
and what you think it will do. Your computer can only see what you
actually wrote and what it actually will do. So an automatically
calculated plan will always be correct and thus never tells you
anything.

Alternatively, the plan is a meta-test, a test for your testing code.
It is the equivalent of putting

is($tests_run_count, $tests_i_planned_count)

at the end of your test script. Letting the computer calculate the
plan is the equivalent of putting

is($tests_run_count, $tests_run_count)

at the end of your test script. It's pointless. It will always pass.


Sometimes a plan is more trouble than it's worth; you might even think
it's always more work than it's worth. However, for it to be worth
anything at all, it must involve work.


A possibly easier alternative to the current planning system is
available if you use revision control. Keep the plan in an external
file - say foo.t's plan goes in foo.plan. When you run foo.t it writes
the test count into foo.count. Before checking in changes to foo.t you
run it and then cp foo.count foo.plan. When you look at the diff for
your checkin you should see that foo.plan is changing in line with
your changes to foo.t. Wrap this all up in a script and put it in your
RCS's hooks/triggers mechanism so that it all happens automatically.
Make a module Test::FilePlan to take care of reading and writing the
foo.{plan,count} files. So you can automatically generate the number,
but you still need a human to check whether the number is changing
correctly,
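
A minimal sketch of that scheme (Test::FilePlan is Fergal's hypothetical
module name; this is one possible shape for it, not an existing API):

```perl
package Test::FilePlan;     # hypothetical module, per the suggestion above
use strict;
use warnings;
use Test::More ();

my $base;                   # e.g. "foo" for foo.plan / foo.count

sub import {
    (undef, $base) = @_;

    # Declare the checked-in plan if one exists, otherwise run open-ended.
    if (defined $base && -e "$base.plan") {
        open my $fh, '<', "$base.plan" or die "$base.plan: $!";
        chomp(my $planned = <$fh>);
        Test::More::plan(tests => $planned);
    }
    else {
        Test::More::plan('no_plan');
    }
}

# At process exit, record how many tests actually ran so the developer
# can "cp foo.count foo.plan" before checking in.
END {
    return unless defined $base;
    open my $fh, '>', "$base.count" or die "$base.count: $!";
    print {$fh} Test::More->builder->current_test, "\n";
}

1;
```

A foo.t would then start with `use Test::FilePlan 'foo';`, and the RCS
pre-commit hook would flag any diff where foo.plan and foo.count disagree.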

F

> I won't argue that the plan counter does not have its use. It probably
> does. But what it also does is annoy the developer. That is why you
> would probably see "no_plan" used in most of the testing code in the
> wild (I am not talking about CPAN).
>
>
> just my opinion, you are welcome to argue your reasons if you feel 
> differently.
>
>
> - evgeny
>


Re: Counting tests

2009-03-16 Thread Eric Wilhelm
# from Michael G Schwern
# on Monday 16 March 2009 11:47:

>I suppose what really covers their ass is that by being broken up into
> test_* routines each test function is isolated and their code is
> simpler and less likely to have a logic error that results in a test
> never being run.

Why is it that whenever plans come up, we hear all about the checksum 
aspect, but never talk about the 'progress bar'?  If you're running 
someone else's test suite, it is very nice to have some idea of how 
much has completed.  Yes, that has caveats, but so does everything.

And on a related note: TAP allowing something where subplans are 
uncounted, but the toplevel plan is, would be nice.  That is, say a 
test has 20 'groups' of whatever-number subtests.  The groups could be 
easily counted (even automatically in some structures).  This takes you 
to one step finer granularity than "half of the test scripts are 
complete".

--Eric
-- 
Cult: A small, unpopular religion.
Religion: A large, popular cult.
-- Unknown
---
http://scratchcomputing.com
---


Re: Counting tests

2009-03-16 Thread Michael G Schwern
Evgeny wrote:
> They know:
> - how many unit tests were executed each run
> - how much time each unit test took to run (and the total time)
> - which unit tests passed, and which failed
> - the behavior of some tests over time (a bad test can randomly
> fail/pass for example)

As an aside, have a look at Smolder.
http://sourceforge.net/projects/smolder

Here it is live testing Parrot.
http://smolder.plusthree.com/app/public_projects/smoke_reports/8


> I won't argue that the plan counter does not have its use. It probably
> does. But what it also does is annoy the developer. That is why you
> would probably see "no_plan" used in most of the testing code in the
> wild (I am not talking about CPAN).

I agree.  The plan is a big wonkin hammer that's usually unnecessary.
That's why there's no_plan.  And soon the safer done_testing().

I'd be fine with someone revising the Test::More and Test::Tutorial docs to
make it less plan-centric now that done_testing() is there.
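
For reference, the done_testing() style as it later shipped in Test::More
(0.88, from memory, so treat the version as approximate):

```perl
use strict;
use warnings;
use Test::More;          # no up-front plan

ok 1, 'first thing works';
ok 2 == 2, 'second thing works';

# Prints the plan ("1..2") at the end.  Unlike no_plan, a stray exit(0)
# before this line leaves the output with no plan at all, so the harness
# flags the script as incomplete instead of passing it.
done_testing();

# done_testing(2) would additionally assert that exactly 2 tests ran.
```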


-- 
package Outer::Space;  use Test::More tests => 9;


Re: Counting tests

2009-03-16 Thread Michael G Schwern
Adrian Howard wrote:
> 
> On 14 Mar 2009, at 05:57, Michael G Schwern wrote:
> [snip]
>> The test numbering exists to ensure that all your tests run, and in
>> the right
>> order.  XUnit frameworks don't need to know the number of tests
>> because they
>> simply don't have this type of protection. [1]
> [snip]
> 
> And, to some extent, need it less. Since most xUnit systems have the
> test-result-producer and the test-result-consumer running in the same
> process space - some of the problems that plans help with (like early
> termination) aren't really much of an issue.

In that your whole testing process crashes and you get no results? ;)

Early exit isn't the practical reason for plans, the harness watching the exit
code of the test process handles everything but an actual exit(0) and those
are very rare.  The real problem is a logic or data error which results in
some tests being accidentally bypassed.

I suppose what really covers their ass is that by being broken up into test_*
routines each test function is isolated and their code is simpler and less
likely to have a logic error that results in a test never being run.


-- 
44. I am not the atheist chaplain.
-- The 213 Things Skippy Is No Longer Allowed To Do In The U.S. Army
   http://skippyslist.com/list/


Re: Counting tests

2009-03-16 Thread Evgeny
Thing is. It just does not matter THAT much.
The case you describe is fairly rare in the xUnit world, or in any
world I would guess.

The testing suite does not have a "will", it is only a tool.

When the testing suite works, it just works; When people have
confidence in it for some reason, then there is usually a reason
behind that.

Let me demonstrate with an example:
A group of Java developers are using JUnit to write unit tests for
their software. That software is being built and tested on a
continuous integration server (the likes of CruiseControl). And they
even went as far as to draw a graph and a report of the running unit
tests.

They know:
- how many unit tests were executed each run
- how much time each unit test took to run (and the total time)
- which unit tests passed, and which failed
- the behavior of some tests over time (a bad test can randomly
fail/pass for example)

If you told them that each time they write a unit test they also need
to go to some file and increment some counter, they would probably
either not do it or say you are crazy.

The major idea is to make it easier for a developer to write stuff.
That's why people invent IDEs (I use vi personally): so that the actual
developer will not be annoyed by things that are much better done
automatically, like updating a counter each time he writes
one line of test code.

I won't argue that the plan counter does not have its use. It probably
does. But what it also does is annoy the developer. That is why you
would probably see "no_plan" used in most of the testing code in the
wild (I am not talking about CPAN).


just my opinion, you are welcome to argue your reasons if you feel differently.


- evgeny


Re: Counting tests

2009-03-16 Thread Fergal Daly
2009/3/15 Adrian Howard :
>
> On 14 Mar 2009, at 05:57, Michael G Schwern wrote:
> [snip]
>>
>> The test numbering exists to ensure that all your tests run, and in the
>> right
>> order.  XUnit frameworks don't need to know the number of tests because
>> they
>> simply don't have this type of protection. [1]
>
> [snip]
>
> And, to some extent, need it less. Since most xUnit systems have the
> test-result-producer and the test-result-consumer running in the same
> process space - some of the problems that plans help with (like early
> termination) aren't really much of an issue.

Really? I know of at least one automated test runner (by this I mean
it runs all the test files it can find) for pyunit that would say
"everything is fine" if I threw a random sys.exit(0) into my test
script. Without parsing the output, there's not much else to look at
but the exit code, and by having the producer and consumer in the same
process, the producer can easily set the exit code against the "will"
of the consumer,

F


> Cheers,
>
> Adrian
> --
> delicious.com/adrianh - twitter.com/adrianh - adri...@quietstars.com
>
>
>
>


Re: Counting tests

2009-03-16 Thread Adrian Howard


On 14 Mar 2009, at 05:57, Michael G Schwern wrote:
[snip]
The test numbering exists to ensure that all your tests run, and in the right
order.  XUnit frameworks don't need to know the number of tests because they
simply don't have this type of protection. [1]

[snip]

And, to some extent, need it less. Since most xUnit systems have the  
test-result-producer and the test-result-consumer running in the same  
process space - some of the problems that plans help with (like early  
termination) aren't really much of an issue.


Cheers,

Adrian
--
delicious.com/adrianh - twitter.com/adrianh - adri...@quietstars.com





Re: Counting tests

2009-03-14 Thread Evgeny
Just to pitch a small explanation of what Cucumber is: it was born out of
RSpec, the BDD framework that replaced asserts with "should". What it does is
allow you to specify XP stories in plain English and then execute them to see
that they pass/fail.

If you take a look at http://github.com/kesor/p5-cucumber you'll see the
shortest possible example of that.

The full Cucumber, which also allows you to specify parameters in a table
(kind of like FIT/Fitnesse), is being maintained by Aslak Hellesoy, and his
project's documentation wiki is here:
http://wiki.github.com/aslakhellesoy/cucumber

As in the original Cucumber, the user of this framework can use whatever
testing framework he likes. I would think the same applies to what I wrote:
if you want to write your tests with Test::Simple or ::More or ::Most or
whatever other Test:: framework you like from CPAN, then go ahead.

The only thing the Cucumber layer adds is a way to organize the code and a
way to organize the scenario, which it then executes in order. It does not
really care what kind of testing code you write in there.

The main idea is to allow people who don't necessarily know a programming
language, but who are domain experts for a product (for example marketing or
product managers), to write acceptance tests that can later be run in an
automatic way.

This requires writing the code behind the specification, usually just once,
because the code gets executed by matching a line in the story against a
regular expression - which means that the same code can be reused by
multiple stories. So even if the product manager wrote a thousand stories,
it might mean that only a hundred parsers need to be written in the code. Or
even just six :)


I think that even though this explanation might not be on-topic for this
thread, it is certainly on-topic for this group.


Thank you all again for the Test::More/Most/no_plan/defer_plan explanations.
I will probably choose More/no_plan for now because it is bundled with perl
and will not require users of my small example to install additional
modules.


-
evgeny



On Sat, Mar 14, 2009 at 7:57 AM, Michael G Schwern wrote:

> Let's sum up.
>
> The "why can't a program count its own tests" page refers to the problem of
> counting the tests *without* running the code.
>
> `use Test::More "no_plan";` is the most used way to run a test without having
> to hard code the number of tests beforehand.
>
> The test numbering exists to ensure that all your tests run, and in the right
> order.  XUnit frameworks don't need to know the number of tests because they
> simply don't have this type of protection. [1]
>
> `use Test::Most "defer_plan";` is a safer way to do that.  It ensures your
> test runs to completion and it has the option of taking a number of tests.
> Sometimes you can calculate the number as you go.
>
> Test::More is adding this feature, but there's nothing wrong with Test::Most.
> Unlike other testing systems, Perl does not use a single "test framework"
> like JUnit or whatever.  Most Test:: modules on CPAN will work together. [2]
> You can mix and match.  In this sense it kicks the crap out of everyone
> else. :)
>
> Test::More and Test::Harness will see if your test dies or segfaults.
> Test::More will see a normal die and Test::Harness will fail any test
> script with a non-zero exit code.
>
> A human should never be necessary to determine if a test passed or failed.
> Humans are bad at rote tasks and reading huge wads of output.  They will
> eventually tire of the task or simply miss a failure.  Also the next human
> will not know what to look for.  And it kills test automation.  You may wish
> to look at http://testers.cpan.org/ to understand the scale of test
> automation we're talking about.
>
> I have no idea what Cucumber is, but I have no doubt it can be implemented
> as a Perl testing module to work with everything else.  It looks like a clever
> FIT framework.  For the moment using Test::More will work, but eventually
> you'll want to switch to Test::Builder.
>
>
> [1] Perl has this because Larry needed it 21 years ago when he came up with
> all this, not necessarily because it's important.
>
> [2] Through the magic of the Test Anything Protocol and Test::Builder.  I
> happen to have just given a talk about how this all works.
> http://schwern.org/talks/TB2.pdf
> (Ignore the title, it's not actually about Test::Builder2)
> The audio from the meeting should show up here shortly.
> http://pdxpm.podasp.com/archive.html
>
>
> --
> 40. I do not have super-powers.
>-- The 213 Things Skippy Is No Longer Allowed To Do In The U.S. Army
>   http://skippyslist.com/list/
>


Re: Counting tests

2009-03-13 Thread Eric Wilhelm
# from Michael G Schwern
# on Friday 13 March 2009 22:57:

>The audio from the meeting should show up here shortly.
>http://pdxpm.podasp.com/archive.html

Well, now that you've gone and promised it, I guess I'll have to get 
that uploaded 'shortly'.  Looks like maybe another 30min at the current 
trickle.

--Eric
-- 
But you can never get 3n from n, ever, and if you think you can, please
email me the stock ticker of your company so I can short it.
--Joel Spolsky
---
http://scratchcomputing.com
---


Re: Counting tests

2009-03-13 Thread Michael G Schwern
Let's sum up.

The "why can't a program count its own tests" page refers to the problem of
counting the tests *without* running the code.

`use Test::More "no_plan";` is the most used way to run a test without having
to hard code the number of tests beforehand.

The test numbering exists to ensure that all your tests run, and in the right
order.  XUnit frameworks don't need to know the number of tests because they
simply don't have this type of protection. [1]

`use Test::Most "defer_plan";` is a safer way to do that.  It ensures your
test runs to completion and it has the option of taking a number of tests.
Sometimes you can calculate the number as you go.

Test::More is adding this feature, but there's nothing wrong with Test::Most.
Unlike other testing systems, Perl does not use a single "test framework"
like JUnit or whatever.  Most Test:: modules on CPAN will work together. [2]
You can mix and match.  In this sense it kicks the crap out of everyone else. :)

Test::More and Test::Harness will see if your test dies or segfaults.
Test::More will see a normal die and Test::Harness will fail any test script
with a non-zero exit code.

A human should never be necessary to determine if a test passed or failed.
Humans are bad at rote tasks and reading huge wads of output.  They will
eventually tire of the task or simply miss a failure.  Also the next human
will not know what to look for.  And it kills test automation.  You may wish
to look at http://testers.cpan.org/ to understand the scale of test automation
we're talking about.

I have no idea what Cucumber is, but I have no doubt it can be implemented as
a Perl testing module to work with everything else.  It looks like a clever
FIT framework.  For the moment using Test::More will work, but eventually
you'll want to switch to Test::Builder.


[1] Perl has this because Larry needed it 21 years ago when he came up with
all this, not necessarily because it's important.

[2] Through the magic of the Test Anything Protocol and Test::Builder.  I
happen to have just given a talk about how this all works.
http://schwern.org/talks/TB2.pdf
(Ignore the title, it's not actually about Test::Builder2)
The audio from the meeting should show up here shortly.
http://pdxpm.podasp.com/archive.html


-- 
40. I do not have super-powers.
-- The 213 Things Skippy Is No Longer Allowed To Do In The U.S. Army
   http://skippyslist.com/list/


Re: Counting tests

2009-03-13 Thread Josh Heumann

> If you still want to calculate a plan on the fly:
>
>   use Test::More 'defer_plan';
>   # run tests
>   all_done($number_of_tests);

Just a note so as not to confuse Evgeny: Ovid meant to toot his own
horn, and that first line of code should have been:

use Test::Most 'defer_plan';
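
Putting Ovid's snippet and Josh's correction together (interface as described
in this thread; Test::Most's defer_plan/all_done was later superseded by
Test::More's done_testing()):

```perl
use strict;
use warnings;
use Test::Most 'defer_plan';   # note: Test::Most, not Test::More

ok 1, 'first test';
ok 1, 'second test';

# Emits the plan at the end; passing a count also asserts that
# exactly that many tests ran.
all_done(2);
```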

J


Re: Counting tests

2009-03-13 Thread Fergal Daly
2009/3/13 Evgeny :
> I actually put a link to the FAQ in the very first mail I sent. It does not
> address my questions; it gives examples that say "we can't count tests ahead
> of time, it's impossible". But I just want you to change the approach from
> "ahead of time" into "realtime" or something ... like all the other testing
> frameworks do it.

Test::More does count the tests in "realtime", then at the end it
compares this count to what you declared it should be. The count in
the plan is like a checksum. If you don't want a checksum, that's fine:
use no_plan or some such.

Just last week I found myself swearing at pyUnit for the lack of this
kind of checksum - a test I had written wasn't actually doing
anything. It was always returning early without asserting anything. Of
course the mistake was mine, but if I didn't make mistakes I
wouldn't need a test suite - and if I could have declared a plan, I
would have caught it long ago.

Tests find bugs in your app code; plans find bugs in your testing code.

F

> On Fri, Mar 13, 2009 at 2:52 PM, Ovid  wrote:
>
>>
>> - Original Message 
>>
>> > From: Gabor Szabo 
>>
>> > On Fri, Mar 13, 2009 at 2:40 PM, Evgeny wrote:
>> > > If my script ended early, because maybe even a core dump ... the I wont
>> > > care. It's just another case of a failed test that cant be reported by
>> > > Test::More, but a human looking at the screen will hopefully understand
>> what
>> > > happened.
>> >
>> > Human?
>> >
>> > Why would a human look at a test report that says "everything is ok"?
>>
>> Don't we have a FAQ about this somewhere?  Evgeny's questions are quite
>> reasonable, but we're answering them every few months.  It would be nice to
>> link to a FAQ and be done with it.
>>
>>
>> Cheers,
>> Ovid
>> --
>> Buy the book - http://www.oreilly.com/catalog/perlhks/
>> Tech blog- http://use.perl.org/~Ovid/journal/
>> Twitter  - http://twitter.com/OvidPerl
>> Official Perl 6 Wiki - http://www.perlfoundation.org/perl6
>>
>>
>


Re: Counting tests

2009-03-13 Thread Evgeny
Gabor,
Since you are in the field of testing, you probably know about the
other frameworks in other languages - specifically what Ruby's Cucumber is
about.

I tried writing something similar in Perl, using Test::More no less. But I
believe you are a far better Perl programmer than me, and I would love to
hear your comments -- if you agree to take a look.

The project (one small perl file really) is currently here:
http://github.com/kesor/p5-cucumber/

Just thought that it would be interesting to you even if you don't have time
to help out a little bit.


Regards,

Evgeny


On Fri, Mar 13, 2009 at 2:40 PM, Evgeny  wrote:

> If my script ended early, because maybe even a core dump ... the I wont
> care. It's just another case of a failed test that cant be reported by
> Test::More, but a human looking at the screen will hopefully understand what
> happened.
>
>
> On Fri, Mar 13, 2009 at 2:34 PM, Gabor Szabo  wrote:
>
>> On Fri, Mar 13, 2009 at 2:04 PM, Evgeny  wrote:
>> > I have seen the page :
>> >
>> http://perl-qa.hexten.net/wiki/index.php/Why_can%27t_a_program_count_its_own_tests
>> >
>> > And I still don't understand, why can't a perl program count its test
>> and
>> > then when all the tests are done write something like:
>> >
>> > I ran 45976347563873456 tests and 587643873645 of then failed and
>> > 234598634875634 of them passed.
>> >
>> > (dont mind that the numbers dont add up)
>> >
>> >
>> > Then you dont really need to "count" the amount of tests before hand,
>> you
>> > "count" them as you go, and will only know the final amount of tests at
>> the
>> > very end.
>> >
>>
>> They can, just say
>>
>> use Test::More 'no_plan';
>>
>>
>> The problem is that what happens if you constantly
>> get 100 success reports while in fact you had 300
>> tests, just you test script exited early?
>>
>> e.g. because you added an exit; in the middle to shortcut
>> your test running while you were debugging some failing test.
>>
>>
>> Gabor
>> http://szabgab.com/test_automation_tips.html
>>
>
>


Re: Counting tests

2009-03-13 Thread Evgeny
I actually put a link to the FAQ in the very first mail I sent. It does not
address my questions; it gives examples that say "we can't count tests ahead
of time, it's impossible". But I just want you to change the approach from
"ahead of time" into "realtime" or something ... like all the other testing
frameworks do it.

On Fri, Mar 13, 2009 at 2:52 PM, Ovid  wrote:

>
> - Original Message 
>
> > From: Gabor Szabo 
>
> > On Fri, Mar 13, 2009 at 2:40 PM, Evgeny wrote:
> > > If my script ended early, because maybe even a core dump ... the I wont
> > > care. It's just another case of a failed test that cant be reported by
> > > Test::More, but a human looking at the screen will hopefully understand
> what
> > > happened.
> >
> > Human?
> >
> > Why would a human look at a test report that says "everything is ok"?
>
> Don't we have a FAQ about this somewhere?  Evgeny's questions are quite
> reasonable, but we're answering them every few months.  It would be nice to
> link to a FAQ and be done with it.
>
>
> Cheers,
> Ovid
> --
> Buy the book - http://www.oreilly.com/catalog/perlhks/
> Tech blog- http://use.perl.org/~Ovid/journal/
> Twitter  - http://twitter.com/OvidPerl
> Official Perl 6 Wiki - http://www.perlfoundation.org/perl6
>
>


Re: Counting tests

2009-03-13 Thread Evgeny
Hmm... to know that everything is ok? :) If someone put an "exit" in the
middle of the code, then yes - it's a problem in Perl, since you can't make
Test::More catch that exit and replace it with "print test results and then
exit".

But other than that, if errors occur and the code runs "die" in the middle
of the testing, then hopefully the exit code will not only tell the human
but also the automaton that there was something wrong, even though the tests
were not finished.

On Fri, Mar 13, 2009 at 2:45 PM, Gabor Szabo  wrote:

> On Fri, Mar 13, 2009 at 2:40 PM, Evgeny  wrote:
> > If my script ended early, because maybe even a core dump ... the I wont
> > care. It's just another case of a failed test that cant be reported by
> > Test::More, but a human looking at the screen will hopefully understand
> what
> > happened.
>
> Human?
>
> Why would a human look at a test report that says "everything is ok"?
>
> Gabor
>
> Perl 6 Tricks and Treats
> http://szabgab.com/perl6.html
>


Re: Counting tests

2009-03-13 Thread Evgeny
I actually said "in other languages", like Ruby's Test::Unit or RSpec (also
Ruby). And out of all the xUnit frameworks, like JUnit, there is no "specify
the number of tests" in any of them. They just count them as you go and
display the totals of passed/failed tests at the end.
I am not too familiar with the different testing modules on CPAN and in Perl
to point to a Perl module that does this without counting; I only learned
to use Test::Simple/More a couple of months ago while helping a colleague.

On Fri, Mar 13, 2009 at 2:58 PM, Gabor Szabo  wrote:

> On Fri, Mar 13, 2009 at 2:53 PM, Evgeny  wrote:
> > I actually put a link to the FAQ at the very first mail I sent.
> > It does not address my questions, it gives examples that say "we can't
> count
> > tests ahead of time, its impossible". But I just want you to change the
> > approach from "ahead of time" into "realtime" or something ... like all
> the
> > other testing frameworks do it.
>
> There is a work in progress to let people tell during their test code:
> "here I have 5 more test" instead of planning ahead all of them
> that might address the issue you see.
>
> Besides that I'd be glad to see which framework solves this problem and
> how?
>
> Gabor
>


Re: Counting tests

2009-03-13 Thread Evgeny
If my script ended early, because of maybe even a core dump ... then I won't
care. It's just another case of a failed test that can't be reported by
Test::More, but a human looking at the screen will hopefully understand what
happened.

On Fri, Mar 13, 2009 at 2:34 PM, Gabor Szabo  wrote:

> On Fri, Mar 13, 2009 at 2:04 PM, Evgeny  wrote:
> > I have seen the page :
> >
> http://perl-qa.hexten.net/wiki/index.php/Why_can%27t_a_program_count_its_own_tests
> >
> > And I still don't understand, why can't a perl program count its test and
> > then when all the tests are done write something like:
> >
> > I ran 45976347563873456 tests and 587643873645 of then failed and
> > 234598634875634 of them passed.
> >
> > (dont mind that the numbers dont add up)
> >
> >
> > Then you dont really need to "count" the amount of tests before hand, you
> > "count" them as you go, and will only know the final amount of tests at
> the
> > very end.
> >
>
> They can, just say
>
> use Test::More 'no_plan';
>
>
> The problem is that what happens if you constantly
> get 100 success reports while in fact you had 300
> tests, just you test script exited early?
>
> e.g. because you added an exit; in the middle to shortcut
> your test running while you were debugging some failing test.
>
>
> Gabor
> http://szabgab.com/test_automation_tips.html
>


Re: Counting tests

2009-03-13 Thread Evgeny
Oh, then maybe 'defer_plan' is actually what I wanted to do all along.
That might fit perfectly into my acceptance-testing scenario tool: since I
really don't know how many scenarios the "user" of the tool is going to
write, I can't really specify a fixed number of tests. But I DO want to
count their failed/successful attempts in the scenarios ... and I DO want to
generally use a testing framework for it (currently I use Test::More).
The small example I made is at http://github.com/kesor/p5-cucumber



On Fri, Mar 13, 2009 at 3:13 PM, Ovid  wrote:

> 
> From: Evgeny 
>
> > I actually put a link to the FAQ at the very first mail I sent.
>
> Oh, that's embarrassing :)
>
> > It does not address my questions, it gives examples that say
> > "we can't count tests ahead of time, its impossible". But I
> > just want you to change the approach from "ahead of time" into
> > "realtime" or something ... like all the other testing
> > frameworks do it.
>
> I'm sorry, but I simply do not believe "all the other testing frameworks"
> do this.  That being said:
>
> There's a difference between behavior and intent:
>
>   can_ok $account, 'customers';
>   for my $customer ($account->customers) {
>   ok $customer->is_current, '... and it should only return current
> customers';
>   }
>
> How many tests is that?  Just because your tests all passed doesn't mean
> that it's the correct number of tests.  (What if that doesn't return all of
> the "current" customers?)
>
> Premature exits are also caught with plans.
>
> If you still want to calculate a plan on the fly:
>
>   use Test::More 'defer_plan';
>   # run tests
>   all_done($number_of_tests);
>
>
> Note that $number_of_tests is optional.
>
> Cheers,
> Ovid
> --
> Buy the book - http://www.oreilly.com/catalog/perlhks/
> Tech blog- 
> http://use.perl.org/~Ovid/journal/
> Twitter  - http://twitter.com/OvidPerl
> Official Perl 6 Wiki - http://www.perlfoundation.org/perl6
>


Re: Counting tests

2009-03-13 Thread Ovid

From: Evgeny 

> I actually put a link to the FAQ at the very first mail I sent.

Oh, that's embarrassing :)

> It does not address my questions, it gives examples that say
> "we can't count tests ahead of time, its impossible". But I
> just want you to change the approach from "ahead of time" into
> "realtime" or something ... like all the other testing 
> frameworks do it.

I'm sorry, but I simply do not believe "all the other testing frameworks" do 
this.  That being said:

There's a difference between behavior and intent:

  can_ok $account, 'customers';
  for my $customer ($account->customers) {
  ok $customer->is_current, '... and it should only return current 
customers';
  }

How many tests is that?  Just because your tests all passed doesn't mean that 
it's the correct number of tests.  (What if that doesn't return all of the 
"current" customers?)

Premature exits are also caught with plans.

If you still want to calculate a plan on the fly:

  use Test::More 'defer_plan';
  # run tests
  all_done($number_of_tests);
Note that $number_of_tests is optional.

Cheers,
Ovid
--
Buy the book - http://www.oreilly.com/catalog/perlhks/
Tech blog- http://use.perl.org/~Ovid/journal/
Twitter  - http://twitter.com/OvidPerl
Official Perl 6 Wiki - http://www.perlfoundation.org/perl6

Re: Counting tests

2009-03-13 Thread Gabor Szabo
On Fri, Mar 13, 2009 at 2:53 PM, Evgeny  wrote:
> I actually put a link to the FAQ at the very first mail I sent.
> It does not address my questions, it gives examples that say "we can't count
> tests ahead of time, its impossible". But I just want you to change the
> approach from "ahead of time" into "realtime" or something ... like all the
> other testing frameworks do it.

There is work in progress to let people say during their test code
"here I have 5 more tests" instead of planning all of them ahead;
that might address the issue you see.

Besides that, I'd be glad to see which framework solves this problem, and how.

Gabor


Re: Counting tests

2009-03-13 Thread Gabor Szabo
On Fri, Mar 13, 2009 at 2:45 PM, Evgeny  wrote:
> Gabor,
> Since you are in the field of testing - then you probably know about the
> other frameworks in other languages. Specifically what Ruby's Cucumber is
> about.
> I tried writing something similar in Perl, using Test::More no less. But I
> believe you are a far better perl programmer than me, and I would love to
> hear your comments -- if you agree to take a look.
> The project (one small perl file really) is currently here:
> http://github.com/kesor/p5-cucumber/
>
> Just thought that it would be interesting to you even if you don't have time
> to help out a little bit.

Well, there are a few people on this list (maybe all of them?) who are far more
competent than I am both in testing and Perl.
I am sure some of them will be glad to take a look.

I'll do as well later on.

Gabor


Re: Counting tests

2009-03-13 Thread Ovid

- Original Message 

> From: Gabor Szabo 

> On Fri, Mar 13, 2009 at 2:40 PM, Evgeny wrote:
> > If my script ended early, because maybe even a core dump ... the I wont
> > care. It's just another case of a failed test that cant be reported by
> > Test::More, but a human looking at the screen will hopefully understand what
> > happened.
> 
> Human?
> 
> Why would a human look at a test report that says "everything is ok"?

Don't we have a FAQ about this somewhere?  Evgeny's questions are quite 
reasonable, but we're answering them every few months.  It would be nice to 
link to a FAQ and be done with it.

 
Cheers,
Ovid
--
Buy the book - http://www.oreilly.com/catalog/perlhks/
Tech blog- http://use.perl.org/~Ovid/journal/
Twitter  - http://twitter.com/OvidPerl
Official Perl 6 Wiki - http://www.perlfoundation.org/perl6



Re: Counting tests

2009-03-13 Thread Gabor Szabo
On Fri, Mar 13, 2009 at 2:40 PM, Evgeny  wrote:
> If my script ended early, because maybe even a core dump ... the I wont
> care. It's just another case of a failed test that cant be reported by
> Test::More, but a human looking at the screen will hopefully understand what
> happened.

Human?

Why would a human look at a test report that says "everything is ok"?

Gabor

Perl 6 Tricks and Treats
http://szabgab.com/perl6.html


Re: Counting tests

2009-03-13 Thread Gabor Szabo
On Fri, Mar 13, 2009 at 2:04 PM, Evgeny  wrote:
> I have seen the page :
> http://perl-qa.hexten.net/wiki/index.php/Why_can%27t_a_program_count_its_own_tests
>
> And I still don't understand, why can't a perl program count its test and
> then when all the tests are done write something like:
>
> I ran 45976347563873456 tests and 587643873645 of then failed and
> 234598634875634 of them passed.
>
> (dont mind that the numbers dont add up)
>
>
> Then you dont really need to "count" the amount of tests before hand, you
> "count" them as you go, and will only know the final amount of tests at the
> very end.
>

They can, just say

use Test::More 'no_plan';


The problem is: what happens if you constantly
get 100 success reports while in fact you had 300
tests, just because your test script exited early?

e.g. because you added an "exit;" in the middle to shortcut
your test run while you were debugging some failing test.


Gabor
http://szabgab.com/test_automation_tips.html