On Thu, Feb 15, 2001 at 10:18:14PM -0500, barries wrote:
> do_all_tests(
>     get_data_set  => sub { $data_set = get_data_set() ;
>                            ok( $data_set ) },
>     data_set_type => sub { ok( ref $data_set, "ARRAY" ) },
>     data_set_size => sub { ok( @$data_set, 10 ) },
>     todo => elt_lengths => sub { ok( ! grep !length, @$data_set ) },
> ) ;
Oooookay, what's the net gain here? What does all that mess buy you
over just:
    noplan;
    my $data_set = get_data_set();
    ok( defined $data_set,        'basic get_data_set()' );
    ok( ref $data_set eq 'ARRAY', 'right type' );
    ok( @$data_set == 10,         'right amount' );
    todo( !grep(!length, @$data_set), "something's there",
          "data isn't filled in yet" );
> I think we'd better keep EXPR out of todo(), for the same reason it
> doesn't belong in skip(). A simple todo( "test name", "reason" ) might
> be enough. If it's TODO, EXPR may blow up, especially as people do more
> exception throwing.
No, it's important to know that a todo() feature is actually failing.
It's an executable todo list. Often after a code change, some of your
todos will suddenly start working! In which case you've
"accidentally" implemented that feature, or fixed that bug. It's
especially useful for bugs that you don't feel like fixing at the
moment.
However, you are right about explosive failure. So you can pass
todo() a code ref...
    todo( sub { $obj->fooble == 42 } );
and todo() will run that in an eval block.
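Something like this minimal sketch, say (the guts are hypothetical,
not anything that ships; real code would number the tests and talk to
the plan counter):

    sub todo {
        my($code, $name, $reason) = @_;

        # Run the test inside an eval so a blown-up TODO can't kill
        # the whole test run.
        my $passed = eval { $code->() };

        if( $passed ) {
            # The "accidentally fixed it" case.  Worth shouting about.
            print "ok - $name # TODO passed unexpectedly! ($reason)\n";
        }
        else {
            # Expected failure (or an exception).  Doesn't count
            # against you.
            print "not ok - $name # TODO $reason\n";
        }
    }

    # $obj as in the example above.
    todo( sub { $obj->fooble == 42 }, 'fooble is 42',
          "fooble isn't written yet" );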
I was recently made acutely aware of this problem. I was using the
Aegis CASE tool, which requires that any bug patch must add a test
which fails against the old code. The problem was, my test code exploded
against the old code, so I wound up wrapping everything in eval blocks.
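The wrapping looked roughly like this (frobnicate() is a made-up
stand-in for whatever the patch actually fixed):

    # Against the old code, frobnicate() dies outright instead of
    # just returning the wrong answer, so guard the call itself.
    my $result = eval { frobnicate("some input") };
    ok( !$@ && defined $result && $result == 42,
        'frobnicate gives the right answer' );

The exception turns into a plain test failure instead of taking out
the whole test script.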
> Again, putting the test suite at the bottom is still useful (to me). I
> tend to bounce back and forth between the code under test and the test
> suite as bugs surface or I get an idea for a new test.
This defeats the purpose of embedded tests. You embed tests in code
so the test is near the code it's testing, just like you embed
documentation in code so the docs are near the code it's documenting.
Better chance you'll keep it up to date. If you just stick the tests
at the bottom of the code, it might as well be in another file.
As for rapidly going between the tests and the code, you just pull up
the test file in one window/frame and the code file in another.
Hmmm... how would your embedded tests get run as part of a standard
MakeMaker "make test"? I realized a few months ago that even if I do
patch MakeMaker do to something cool, I can't rely on it because my
code has to work on older installations. This is why pod2test
generates a .t file. I suppose you could do the same, but it will
require compiling the code to do it.
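For the curious, the generated file in the pod2test case is just an
ordinary test script that any old MakeMaker already knows how to run.
Roughly this shape (the actual extraction step is hand-waved):

    # t/embedded.t -- what a generated test file might boil down to.
    use Test;
    BEGIN { plan tests => 1 }

    # ...test code extracted from the module's source gets pasted
    # in here at build time...
    ok( 1 + 1, 2 );

Because it's a plain .t file, "make test" picks it up on every
MakeMaker installation back to the stone age.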