Andy Lester wrote:
> I'm so glad for done_testing().  I don't like no_plan, but
> done_testing() makes it better.
> 
> I was surprised/confused to see this behavior:
> 
> $ cat foo.t
> use Test::More tests => 14;
> ok( 1 );
> done_testing();

You would try that. :P

I guess it's a belt-and-suspenders approach.


> $ prove -v foo.t
> [13:55:39] foo.t ..
> 1..14
> ok 1
> not ok 2 - planned to run 14 but done_testing() expects 1
> 
> #   Failed test 'planned to run 14 but done_testing() expects 1'
> #   at /usr/lib/perl5/5.8.8/Test/More.pm line 220.
> # Looks like you planned 14 tests but ran 2.
> # Looks like you failed 1 test of 2 run.
> Dubious, test returned 1 (wstat 256, 0x100)
> Failed 13/14 subtests
> [13:55:39]
> 
> Test Summary Report
> -------------------
> foo.t (Wstat: 256 Tests: 2 Failed: 1)
>   Failed test:  2
>   Non-zero exit status: 1
>   Parse errors: Bad plan.  You planned 14 tests but ran 2.
> Files=1, Tests=2,  0 wallclock secs ( 0.03 usr  0.01 sys +  0.02 cusr 
> 0.01 csys =  0.07 CPU)
> Result: FAIL
> 
> 
> "Looks like you failed 1 test of 2 run".  I guess that it's counting
> done_testing() as a test in itself, but that doesn't seem to be right. 
> Is that intentional?

The extra test is intentional.  It's recording the plan failure as TAP.  This
may be unnecessary; it really should be letting the plan do that.  It could
get away with just a diagnostic.
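
As a rough sketch (not what Test::More actually emits today), a
diagnostic-only version of Andy's foo.t output could look like:

  1..14
  ok 1
  # Looks like you planned 14 tests but ran 1.

The harness should still flag the bad plan on its own (it already reports
"Parse errors: Bad plan." above), so only the extra meta-test would go away.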

What it is preventing is this:

  use Test::More tests => 2;
  pass("legit pass");
  done_testing();
  pass("I thought we were done?");

That should be a failure, and it is, because done_testing() is a failing test.
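
Going by the output Andy posted, that script should produce something
roughly like this (the exact wording of the failure line is a guess based
on the message above):

  1..2
  ok 1 - legit pass
  not ok 2 - planned to run 2 but done_testing() expects 1
  ok 3 - I thought we were done?

Three tests against a plan of two, plus a failure, so the harness
complains either way.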

Unfortunately, I just discovered that this is not a failure:

  use Test::More tests => 2;
  pass("legit pass");
  done_testing(2);
  pass("I thought we were done?");

And that's a bug.  Normally done_testing() would output "1..2" itself, so
you'd have:

  ok 1 - legit pass
  1..2
  ok 2 - I thought we were done?

And that's a plan failure, which is good.  But because the plan has already
been output, it doesn't print it twice, and we get:

  1..2
  ok 1 - legit pass
  ok 2 - I thought we were done?

The problem is done_testing() enforces "everything after this is a failure" by
outputting a plan in the wrong spot.  But if the plan has already been output,
what can it do?  The best I can think of is to turn it into a failing test.  I
agree it's a little weird to have a magic meta-test.  I don't want
done_testing() to simply die; that's not encodable in TAP and you don't see
the rest of your test results.  I also don't want the following tests to fail;
that's a lie, they passed, it's the plan that failed.
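
In Test::More terms, the meta-test idea amounts to roughly this (a sketch
only, not the actual Test::Builder internals, and the variable names are
made up):

  # If a plan was already declared, record any mismatch as a single
  # extra failing test instead of dying or failing the later tests.
  # ($planned_tests and $tests_run are illustrative names.)
  if( $planned_tests && $planned_tests != $tests_run ) {
      ok( 0, "planned to run $planned_tests but done_testing() expects $tests_run" );
  }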

It could output a second plan if the tests don't match; that would cause a
plan failure.

  1..2
  ok 1 - legit pass
  # done_testing() expected 2 but only got 1
  1..2
  ok 2 - I thought we were done?

What is a mistake is that done_testing() with no argument is expecting
anything at all.  The diagnostic should be the normal "Looks like you
planned 14 tests but ran 1."


-- 
Reality is that which, when you stop believing in it, doesn't go away.
    -- Philip K. Dick
