Re: Tagging tests

2012-04-25 Thread Greg Sabino Mullane
On Tue, Apr 24, 2012 at 07:09:19PM +0100, Daniel Perrett wrote:
 Is there any way to 'tag' tests in Perl?

Not that I can think of, in the way that you want.

 It looks likely that even though all the search tests fail, they are
 failing because there is no working connection, as tested by the first
 http request. Although five of the unicode tests are failing, three
 aren't (throwing unicode characters at the syntax).

Seems to me the easiest way to handle that is to have simple tests 
at the start of your chain that BAIL_OUT if they see a major problem.
While the unicode tests wouldn't be covered by that solution, I can't see how the 
complexity of adding tags would prevent a user from having to manually 
look at the failing tests anyway.
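
For example, a minimal sketch of such a gatekeeper test (the URL and test
names are hypothetical stand-ins for whatever your suite actually hits):

use strict;
use warnings;
use Test::More tests => 1;
use LWP::UserAgent;

## If this basic request fails, every search test downstream would fail too
my $res = LWP::UserAgent->new(timeout => 10)->get('http://localhost/search');
BAIL_OUT 'No working HTTP connection: all search tests would fail'
    unless $res->is_success;
ok $res->is_success, 'basic HTTP connectivity';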

 (I guess one answer could be 'write them in separate test scripts' but
 what I want is tags (many-to-many) rather than categories
 (many-to-one), and more files is a bit cumbersome.)

Perhaps you can give a more convincing/real-world test case where 
this would be needed? I would think that a single early test catching 
many known major failures would be the way to go. If there were something, 
such as unicode, which warranted tags, I'm not sure why it wouldn't warrant 
its own group of tests. Tests are the one area where I never worry about 
overlap, efficiency, or having too many files. :)

-- 
Greg Sabino Mullane g...@endpoint.com
End Point Corporation
PGP Key: 0x14964AC8


Re: What is the best code on the CPAN?

2012-02-08 Thread Greg Sabino Mullane
On Tue, Feb 07, 2012 at 06:29:04PM -0800, Jeffrey Thalhammer wrote:
...
 which distribution provides the best example
...
 Perl::Critic compliance (any set of Policies will do).

I think Perl::Critic itself is a good example.

-- 
Greg Sabino Mullane g...@endpoint.com
End Point Corporation
PGP Key: 0x14964AC8


Re: Conditional tests - SKIP, die, BAILOUT

2011-03-29 Thread Greg Sabino Mullane
On Tue, Mar 29, 2011 at 10:46:20PM +0200, Michael Ludwig wrote:
...
 my $tkn = $hu->token;   # (1) Can't carry on without the token
 like $tkn, qr/abc/;     # (2) Useless carrying on if this fails.

One thing I've done is defer the plan until I can ensure the basic 
prerequisites for the tests are met. For example, a DBD::Pg test has:

===
...
use Test::More
...
my $dbh = connect_database();

if (! $dbh) {
    plan skip_all => 'Connection to database failed, cannot continue testing';
}

my $pglibversion = $dbh->{pg_lib_version};

if ($pglibversion < 8) {
    cleanup_database($dbh,'test');
    $dbh->disconnect;
    plan skip_all => 'Cannot run asynchronous queries with pre-8.0 libraries.';
}

plan tests => 67;
===


-- 
Greg Sabino Mullane g...@endpoint.com
End Point Corporation
PGP Key: 0x14964AC8


Re: killing all child processes created in a test

2009-08-24 Thread Greg Sabino Mullane
 But that doesn't work, because some servers, like httpd, launch in 
 such a way that the parent httpd process ends up with ppid = 1.

Do they provide an interface to get the PID? ISTR that
HTTP::Server::Simple does, for example. In which case...

 Is there any way to automatically kill all the child processes 
 created by a script (and their descendents)? For example, can I
 catch each subprocess creation somehow and record the pid in a
 global array?

I don't know about the collection part, but I always store PIDs to local
*.pid files, the better to be able to clean them up on subsequent runs,
should a test get interrupted. I can also make that the main job of a
t/99-cleanup.t file: walk through the pid dirs and kill anything still
running, then remove any temp directories that they may have been using.
Encapsulating that work into a single test is also nice as you can
simply call that test as part of 'make [dist]clean' :)
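
A minimal sketch of that cleanup test (the t/pids/ directory and the temp
path are hypothetical stand-ins for wherever your tests store them):

use strict;
use warnings;
use Test::More tests => 1;
use File::Path 'remove_tree';

for my $pidfile (glob 't/pids/*.pid') {
    open my $fh, '<', $pidfile or next;
    chomp(my $pid = <$fh>);
    close $fh;
    ## Signal 0 merely checks that the process is still alive
    kill 'TERM', $pid if $pid and kill 0, $pid;
    unlink $pidfile;
}
remove_tree('t/tmp'); ## hypothetical temp directory used by earlier tests
pass 'leftover test processes and temp directories cleaned up';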

-- 
Greg Sabino Mullane g...@endpoint.com
End Point Corporation
PGP Key: 0x14964AC8



Re: standard for internal-only tests?

2009-08-01 Thread Greg Sabino Mullane
 IMHO, you should still include author-only tests in your published 
 distributions, even if they don't run during the usual test
 target. That way, you can still get patches from developers who can't
 (or won't) pull the code from the repository.

Frankly, I'm not too worried about missing out on patches from people
who can't/won't grab the latest version from the well-publicized public
repository. That's a pretty low bar.

 Plus, it helps encourage other developers to write similar tests if
 they happen to see you doing it too

A better point, but if other developers want to learn from you, they
should be checking out your development environment, not just your
published product.

-- 
Greg Sabino Mullane g...@endpoint.com
End Point Corporation
PGP Key: 0x14964AC8



Re: standard for internal-only tests?

2009-07-31 Thread Greg Sabino Mullane
On 07/31/2009 01:51 PM, Jonathan Swartz wrote:
 There are certain tests in my distribution that I don't want end users
 to run. I want to run them during development, and I also want anyone
 else contributing to the distribution to run them. These are typically
 related to static analysis of the code, e.g. perl critic, perl tidy and
 pod checking - it makes no sense to risk having these fail for end users.
 
 Is there a standard for signifying internal-only tests, and for make
 test to figure out when they should run?

Good question. I use TEST_AUTHOR for things like this, plus a few other
specific ones, such as TEST_CRITIC, for tests that end users *can* run,
but only if they specifically enable it.

To figure out if they should run in a test, I do:

use Test::More;

if (!$ENV{TEST_AUTHOR}) {
    plan skip_all => 'Set the environment variable TEST_AUTHOR to enable this test';
}
plan tests => 1;

I've also started moving the tests themselves from MANIFEST to MANIFEST.skip.
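
For example, a minimal MANIFEST.skip sketch (these particular file names are
hypothetical; the entries are regexes matched against paths in the dist):

^t/99_perlcritic\.t$
^t/99_perltidy\.t$
^t/99_pod\.t$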


-- 
Greg Sabino Mullane g...@endpoint.com
End Point Corporation
PGP Key: 0x14964AC8



Re: prove with line numbers

2009-05-18 Thread Greg Sabino Mullane
 If it's a one-off, though, you could try the following (untested):
 
  {
      my $ok = \&Test::Builder::ok;
      no warnings 'redefine';
      *Test::Builder::ok = sub {
          my ( $package, $filename, $line ) = caller;
          $_[0]->diag("ok() called in $package, $filename at line $line");
          goto &$ok;
      };
  }

I've been using something very similar for quite a while, which not only allows
printing of line numbers (invaluable to me, for reasons similar to the original
poster's), but also allows bailing out after a specified number of failures, e.g.:

no warnings;
sub is_deeply {
    t($_[2], $_[3] || (caller)[2]);
    return if Test::More::is_deeply($_[0], $_[1], $testmsg);
    if ($bail_on_error > $total_errors++) {
        my $line = (caller)[2];
        my $time = time;
        Test::More::diag("GOT: " . Dumper $_[0]);
        Test::More::diag("EXPECTED: " . Dumper $_[1]);
        Test::More::BAIL_OUT("Stopping on a failed 'is_deeply' test from line $line. Time: $time");
    }
} ## end of is_deeply

(The t() function provides some time measurements and formats the $testmsg to
provide things like line numbers. caller is used twice because there are times
when I want to pass in a line number to is_deeply directly.)

-- 
Greg Sabino Mullane g...@endpoint.com
End Point Corporation
PGP Key: 0x14964AC8



Re: Generic test database

2008-10-08 Thread Greg Sabino Mullane

 It should be fairly easy for willing CPAN testers to setup any database
 they like, and provide some connection information for throwaway tables
 and data (assuming the test script WILL probably drop all tables in
 there and dump its own crap there).

This seems of dubious usefulness, as the intersection between the number
of CPAN testers that would bother to set this up, and the number of
modules that would make use of it, would be very small.

 After seeing your code, I suppose it could also be somewhat easy to try
 out a few series of basic/default credentials on localhost for engines
 like MySQL and Postgres, and try to setup a test database from there.

 Does that sound like an interesting tool to have?

That sounds more interesting. The DBD::Pg strategy is to try to use
DBI_DSN if it is defined. Even when defined, it often fails due to permission
issues, so we fall back to creating a brand new database cluster from
scratch, which ensures that we can control all the parameters, paths, and
permissions ourselves. Feel free to crib from the code in the link below.
If I were doing it all over again, I might even jump straight to clean
creation (via initdb) first.

http://git.postgresql.org/?p=dbdpg.git;a=blob;f=trunk/t/dbdpg_test_setup.pl
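
A minimal sketch of that fallback order (create_test_cluster is a
hypothetical stand-in for the setup routines in the file above):

use strict;
use warnings;
use DBI;

my $dbh;
if (exists $ENV{DBI_DSN}) {
    ## Try the tester-supplied connection first; it may lack permissions
    $dbh = eval {
        DBI->connect($ENV{DBI_DSN}, $ENV{DBI_USER}, $ENV{DBI_PASS},
                     { RaiseError => 1, PrintError => 0, AutoCommit => 1 });
    };
}
## Fall back to a throwaway cluster whose paths and permissions we control
$dbh ||= create_test_cluster();

sub create_test_cluster {
    die 'hypothetical: run initdb, start postgres, connect to the new cluster';
}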

-- 
Greg Sabino Mullane [EMAIL PROTECTED]
End Point Corporation


Re: The relation between CPAN Testers and quality (or why CPAN Testers sucks if you don't need it)

2008-09-04 Thread Greg Sabino Mullane

Two cents from someone who appreciates the hell out of the CPAN testing
service and eagerly awaits new reports every time I release a new version
of a module.

 However, from author's perspective, if a report is legitimate (and
 assuming they care), they really only need to hear it once.  Having
 more and more testers sending the same FAIL report on platform X is
 overkill and gives yet more encouragement for authors to tune out.
 
 So the more successful CPAN Testers is in attracting new testers, the
 more duplicate FAIL reports authors are likely to receive, which makes
 them less likely to pay attention to them.

Sorry, but paying attention is the author's job. A fail is something that
should be fixed, period, regardless of the number of them. As mentioned
elsewhere, the idea of authors receiving FAIL reports is outdated
anyway: they should be pulling them via an RSS feed.

 First, we can lower our collective tolerance of false positives -- for
 example, stop telling authors to just ignore bogus reports if they
 don't like it and find ways to filter them.

+1

 Second, we can reclassify PL/make/Build fails to UNKNOWN.

I don't like this: failure by any other name would smell just as bad. In
other words, if an end user is not going to have a happy, functional
module after typing "install Foo::Bar" at the CPAN prompt, this is a failure
that should be noted as such and fixed by the author. Makefiles have a
surprising amount of power and flexibility in this regard.

 However, as long as the CPAN Testers system has individual testers
 emailing authors, there is little we can do to address the problem of
 repetition.

Yep. Use RSS or deal with the duplicates, I say.

 For those who read this to the end, thank you for your attention to
 what is surely becoming a tedious subject.

Thanks for raising it. I honestly feel the problem is not with the testers
or the testing service, but the authors. But perhaps I'm still grumpy from
the slew of modules I've come across on CPAN lately that are popular yet
obviously unmaintained, with bug reports, questions, and unapplied patches
that linger in the RT queues for years. It would be nice if we had some
sort of system that tracked and reported on that.


-- 
Greg Sabino Mullane [EMAIL PROTECTED]
End Point Corporation


Re: The relation between CPAN Testers and quality (or why CPAN Testers sucks if you don't need it)

2008-09-04 Thread Greg Sabino Mullane
  Sorry, but paying attention is the author's job. A fail is something  
  that should be fixed, period, regardless of the number of them.
 
 According to who?  Who's to say what my job as an author is?

Obviously I should be semantically careful: "job" and "author" are
overloaded words. How about this:

It's a general expectation among users of Perl that module maintainers are
interested in maintaining their modules, and that said maintainers will try
their best to remove any failing tests (when it is within their power to do
so).

The parenthetical bit at the end is in response to the broken-CPAN straw
man argument. Obviously (rare) things like that are out of the control of
the author, along with bugs in any other dependencies, OS utilities, etc.

I recognize that CPAN is a volunteer effort, but it does seem to me there
is an implicit responsibility on the part of the author to maintain the
module going forward, or to pass the baton to someone else. Call it a Best
Practice, if you will. The end-user simply wants the module to work.
Maintainers not paying attention, and the subsequent bitrot that is
appearing on CPAN, is one of Perl's biggest problems at the moment.
Careful attention and responsiveness to CPAN testers and to rt.cpan.org is
the best cure for this.


-- 
Greg Sabino Mullane [EMAIL PROTECTED]
End Point Corporation


Re: The relation between CPAN Testers and quality (or why CPAN Testers sucks if you don't need it)

2008-09-04 Thread Greg Sabino Mullane

 I may do so because I take the quality and utility of my software
 seriously, but do not mistake that for anything which may instill in you
 any sort of entitlement.  That is an excellent way not to get what you
 want from me.

It's not an entitlement, it's a shared goal of making Perl better. If a
maintainer is going to ignore test reports, perhaps it's time to add a
co-maintainer.

 Then CPAN Testers reports should come with login instructions so that I
 can resurrect decade-old skills and perform system administration to fix
 broken installations and configurations -- oh, and as you say, a truly
 *modern* reporting system should publish these logins and passwords
 publicly in syndication feeds.

 (I do see a couple of problems with that idea, however.)

Besides the number of straw men starting to fill the room?

 However, by what possible logic can you conclude that the appropriate
 way to get that bug fixed is to report it to people who, given all of
 the information detected automatically, *do not* maintain CPAN.pm?

That's not my argument at all. But the great majority of
non-CPAN.pm-editing build errors can be fixed by the maintainer. Or at
least routed around for testing purposes.

 Oh, perhaps you think, it's easy for them to read the reports and
 diagnose the problem remotely on machines they have never seen before,
 did not configure, and cannot probe -- and it's so easy for them to file
 a bug in the right place!

I don't think this is easy at all. But it's also not quite the burden you
make it appear. All the testers I've contacted about helping me fix test
failures (build or otherwise) have been friendly and responsive.

  and unapplied patches that linger in the RT queues for years. It would
  be nice if we had some sort of system that tracked and reported on
  that.
 
 Besides rt.cpan.org?

Yes, something that indicates the age and number of open bugs for a
module, the age of any unapplied patches, and perhaps some other metrics
to indicate maintainedness. Cross referencing that with popularity and
dependency chains would be a great triage system to start whipping CPAN
into shape.


-- 
Greg Sabino Mullane [EMAIL PROTECTED]
End Point Corporation


Re: CPAN Ratings and the problem of choice (was Re: About tidying up Kwalitee metrics)

2008-06-30 Thread Greg Sabino Mullane
 So, why do ratings make a difference here?
 
 Well, ratings provide at least a partial way for the community to solve
 the choice overload problem.  If a search reveals a 4.5 star module with
 eight reviews, one doesn't feel compelled to look at the other options;
 the choice becomes clear.

I question the usefulness of the ratings because they are almost
completely unused. The module mentioned in this thread, XML::Parser, has 6
reviews (2 of which are basically bug reports, and one tells you to not
use it for any new code). One of the oldest and most important modules
ever, DBI, has a mere 29. That's 29 reviews in 8 years - pathetic. It
should have hundreds of ratings. The important question to ask (assuming
the ratings are something worth keeping) is: why are people not
rating modules, and how can we encourage people to do so?

-- 
Greg Sabino Mullane [EMAIL PROTECTED]
End Point Corporation


Re: Making CPAN ratings easy (was Re: CPAN Ratings and the problem of choice)

2008-06-30 Thread Greg Sabino Mullane
On Tue, 01 Jul 2008 10:17:40 +1000
Paul Fenwick [EMAIL PROTECTED] wrote:

   Continue to the Bitcard service to login.  It will send you
   back here when you are done registering and logging in.

Oooh, good point, I forgot all about the nasty, ugly, bitcard registering
bit. Put that way, perhaps 29 reviews isn't so bad after all.

 There's a button marked favourite.  It looks like a star.  I press it.

Or perhaps just a generic one to five star point-n-click Amazon-like AJAXy
bit.


-- 
Greg Sabino Mullane [EMAIL PROTECTED]
End Point Corporation


Re: [tap-l] User Supplied YAML Diagnostic Keys: Descriptive Version

2008-04-13 Thread Greg Sabino Mullane
On Sun, 13 Apr 2008 18:41:04 +0100
Michael G Schwern [EMAIL PROTECTED] wrote:

 Two possible solutions:
 
 A)  Just reserve ASCII [a-z].  This is very easy to check for but I'm
 worried it's carving out too small a space.
 
 B)  Reserve lower case and leave the spec a little fuzzy around the
 edges for the moment.
...
 Are we really going to define a standard TAP key starting with a
 Hungarian i? Or musical notes?  Are we going to provide standardized
 keys localized to the user's language? (the displayer can do that)  Not
 likely.

So that begs the question, why not just go with option A? Seems plenty
big enough to me, and removes all ambiguity up front. Can someone provide
an example of a lower case, non a-z, non Hungarian key that might
possibly be globally used? I can't think of one that would justify going
with option B.

-- 
Greg Sabino Mullane [EMAIL PROTECTED]
End Point Corporation



Odd Test::Warn 'carped warning' results

2008-03-02 Thread Greg Sabino Mullane
I'm having problems with a Test::Warn test for DBD::Pg. The error I've
been seeing is here:

http://www.nntp.perl.org/group/perl.cpan.testers/2008/03/msg1093411.html

The important part is:

#   Failed test 'Version comparison does not throw a warning'
#   at t/00basic.t line 24.
# found carped warning: uplevel 2 is more than the caller stack
# at /home/src/perl/repoperls/installed-perls/maint-5.8/pOwntgt/
# [EMAIL PROTECTED]/lib/site_perl/5.8.8/Test/Warn.pm
# line 263
# didn't expect to find a warning

The code in question is:

SKIP: {
  eval { require Test::Warn; };
  $@ and skip 'Need Test::Warn to test version warning', 1;

  my $t=q{Version comparison does not throw a warning};

  Test::Warn::warnings_are (sub {$DBD::Pg::VERSION = '1.49'}, [], $t );
}

It's not a particularly important test, as the underlying issue has been
fixed, but I've no idea what's causing the carpiness of the warning to
appear; DBD::Pg does not use Carp. Explanations, workarounds, or harsh
code reviews all welcome. This test does pass on other boxes, FWIW.

-- 
Greg Sabino Mullane [EMAIL PROTECTED]
End Point Corporation 610-983-9073


Re: Bucardo and Test::Dynamic

2007-10-01 Thread Greg Sabino Mullane
On Fri, 2007-09-28 at 17:09 -0700, Michael G Schwern wrote:

 It's also nice to see how far along we are with a running 
 tally, when I check back in on the running tests.

 How do you accomplish that?

Not sure what you mean. Toggle back to the terminal window, generally.
Due to the nature of the beast (database replication), the full tests
generally take about 7 minutes to complete.

 Why do you do that?  That you have to check back indicates that 
 an individual test file takes a long time to run.  The desire to 
 run only some of the tests in a file is usually a red flag 
 that there's too many tests in each file.

 Could you simply break the test up into individual files that 
 you could then run individually?

I could, but it gets hairy as there are lots of interconnected
subroutines and common code and it's far easier to keep it in a single
file. Using separate test files is my natural inclination, and I tried
it that way at first, but in this particular module's case, it made my
life easier to have it all in one. Toggling a few boolean values at the
top of the file is easier than trying to keep track of which test files
to run from the command line as well.

 You mean passing in the raw keys and values? Ick.

 Don't say ick, say why.  Don't feel, think. [1]  Does it just 
 *feel* wrong or do you know why it actually *is* wrong?

Oh, I know why, I just didn't think I needed to spell it out to this
crowd. :) The problem is that when a user invariably forgets to put in a
hash value, it's more helpful if perl complains about an odd number of
elements at the spot where the user is calling count_tests, not buried
inside of count_tests itself. It also helps the reader see that the
subroutine expects a bunch of key/value pairs, not just a list:

foobar($alpha, $beta);     ## List? Hash? Positions matter? Are they related?

foobar({$alpha => $beta}); ## No ambiguity

 Well, if it's a Playstation 2 you're forgiven. :P

Absolutely PS2. No fancy boxes or cubes here.


-- 
Greg Sabino Mullane [EMAIL PROTECTED]
End Point Corporation 610-983-9073



Re: Bucardo and Test::Dynamic

2007-09-28 Thread Greg Sabino Mullane
Andy Armstrong asks:

my @tests = (
{ name => 'test 1', foo => 'bar' },
{ name => 'test 2', foo => 'bok' },
);

for my $test ( @tests ) {
 # setup and then
 is $wok->spindle, $test->{foo}, 'yup';
}

 Does it count that as two or one?

As one, unless you tell it about the loop like so:

 # setup and then
 is $wok->spindle, $test->{foo}, 'yup'; ## TESTCOUNT * 2

No advanced AI yet, I'm afraid. :) It does require a good
amount of careful setup in the beginning. But you can do things like
this:

for my $test ( @tests ) {
  run_simple_tests($test); ## TESTCOUNT * 2
}

... where run_simple_tests contains a bunch of other tests, and it
knows to count all of those twice when invoked at this line.


Michael G Schwern asks:

 What's wrong with no_plan?  Why go through backflips to 
 automate counting the tests when you can just not count 
 them?  Seems needlessly officious.

I seldom run the whole test suite at once, but only parts I care about
at the moment, so it's nice to have Test::More tell me that 18/64 tests
failed. It's also nice to see how far along we are with a running tally,
when I check back in on the running tests.

 * count_tests() is not a method, it's a function.  It should either 
 be a class method or just export it

True enough: I'll probably just export it, although it's a fairly
generic name.

 * Why does count_tests() take a reference?  A hash will 
 do perfectly well.

You mean passing in the raw keys and values? Ick.

 * Lord I hate tabs.

Not touching that one. Instead, I'll just go on editing my BSD-licensed,
git controlled, Postgres application with tabs, using emacs on my Linux
KDE laptop, pausing to play a game on my PlayStation. :)


Thanks,
--
Greg Sabino Mullane [EMAIL PROTECTED]
End Point Corporation
PGP Key: 0x14964AC8 200709281441
http://biglumber.com/x/web?pk=2529DF6AB8F79407E94445B4BC9B906714964AC8




Re: Bucardo and Test::Dynamic

2007-09-26 Thread Greg Sabino Mullane
On Wed, 2007-09-26 at 20:16 +0100, Andy Armstrong wrote:

 I'm guessing it wouldn't work well for heavily data driven tests? It  
 seems to work by counting the calls to test functions - is that right?

That's a big part of it, yes, as well as accounting for subroutines and
other loops and codepaths. Not entirely sure what you mean by the first
question though - got an example?
 
-- 
Greg Sabino Mullane [EMAIL PROTECTED]
End Point Corporation 610-983-9073



Re: bailout on first failure?

2007-09-07 Thread Greg Sabino Mullane
 I'd like Test::* to completely bailout on the first 
 is/ok/whatever to fail.  I just can't seem to find a 
 canonical way to do this.  but someone here knows, I'm sure :)

I don't know about canonical, but here's how I do it. I've 
got a test suite that takes many minutes to complete, so 
stopping on the first failure is definitely needed.

First, I set up an early ENV toggle:

## Sometimes, we want to stop as soon as we see an error
my $bail_on_error = $ENV{BUCARDO_TESTBAIL} || 0;

Then, I override the standard Test::More methods:

## no critic
{
  no warnings; ## Yes, we know they are being redefined!
  sub is_deeply {
    return if Test::More::is_deeply(@_);
    $bail_on_error and BAIL_OUT "Stopping on failed 'is_deeply' test";
  } ## end of is_deeply
  sub like {
    return if Test::More::like($_[0], $_[1], $_[2]);
    $bail_on_error and BAIL_OUT "Stopping on failed 'like' test";
  } ## end of like

  ## etc. for all test methods you are using
}
## use critic

Sure would be nice if this was a Test::More option.


-- 
Greg Sabino Mullane [EMAIL PROTECTED]
End Point Corporation 610-983-9073



Re: Comment about BAIL_OUT

2007-01-05 Thread Greg Sabino Mullane
Michael G Schwern wrote:
 Such a bother.
 ...
 You can even get clever and pack the setup/teardown calls into 
 loading the module so you have even less code per script.
 
 Now each test runs independently and cleans itself up.

True, but at the expense of having to run the startup and cleanup code
each time, which in most of my particular cases gets very expensive. It
also violates the principle of DRY. :) It would be nice if there were
something like t/_BEGIN_.t and t/_END_.t that would always run before
and after any set of tests (even shuffled ones!). Sure, there are hacks
and workarounds, but something built-in would be nice.

Ovid wrote:
 However, if you use the '-s' switch to shuffle your tests 
 and bailout is not first, then some tests will run until the 
 BAIL_OUT is hit.  This seems to violate the principle that 
 tests should be able to run in any order without dependencies.

Michael G Schwern replied:
 It doesn't violate the principle since the tests are not 
 dependent on BAIL_OUT happening, its just a convenience.  The 
 tests should still run fine in any order, it'll just be a lot 
 noisier.

I think Ovid means it violates it in the sense that BAIL_OUT typically
stops all subsequent tests, which implies some sort of ordering. I've
certainly used it that way before, in the manner of:

01example.t - a simple test
02example.t - another simple test
03example.t - a complex test which requires Foo::Bar, BAIL_OUT if not found
04example.t - requires Foo::Bar
05example.t - requires Foo::Bar

I want a failure in 3 to stop 4 and 5 from running. For that matter, I
want a failure in 3 or 4 or 5 to prevent the other two of those from
running. But 1 and 2 can run regardless of failures in 3, 4, and 5.
The ordering is a convenience for doing so, but ideally there would be
some way to interact with the testing program and do the right thing, so
that instead of BAILing out at 3, it bails out of the current test, sets
a flag, and then 4 and 5 can check for the flag and skip if it is not
set.
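
A minimal sketch of that flag idea (the t/Foo-Bar.ok file name is
hypothetical):

## In 03example.t, once the Foo::Bar prerequisite is satisfied:
require Foo::Bar;
open my $flag, '>', 't/Foo-Bar.ok' or die "Cannot write flag file: $!";
close $flag;

## At the top of 04example.t and 05example.t:
use Test::More;
plan skip_all => 'Prerequisite checks in 03example.t did not pass'
    unless -e 't/Foo-Bar.ok';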


-- 
Greg Sabino Mullane [EMAIL PROTECTED]
End Point Corporation



Re: Comment about BAIL_OUT

2007-01-04 Thread Greg Sabino Mullane
On Thu, 2007-01-04 at 13:34 -0800, Ovid wrote:
 I guess the reason I have never used BAIL_OUT is because if I have a
 bunch of tests failing, they fail quickly and I don't have to wait for
 them :)  I suppose it's not that big of a deal, but I noticed it this
 evening and thought I would toss it out there in case anyone has any
 comments.

I use BAIL_OUT a lot, and find it quite useful. I've found that I have
three main uses for it:

1) The reason given in the docs: if a database connection cannot be made
at the top of the test, then there is no sense in going on, as the rest
of the tests in that file, as well as all the other subsequent[1] tests,
will all need a database connection for proper testing.

2) To stop the tests at the exact moment of failure, to ease debugging
by leaving things in the broken state.

3) To simply bail out early, as a kind of die-on-any-error mechanism.
Some of my test suites take a relatively long time (minutes) to
complete, and I don't want to wait for them if they fail a test.

I've had to do some ugly hacks to accomplish #2 and #3 automagically in
my testing code. A built-in way to say "BAIL_OUT on any error" would
sure be nice, as would some more control, such as
BAIL_OUT_OF_THIS_TEST_ONLY or "call this usercode upon a failed test
and let me decide if I want to do something about it".

[1] I've never had a need for random tests myself. The only reason I
break mine apart is to isolate testing various sub-systems, but I almost
always end up having some dependencies put into an early 00 file. I
also tend to have a final 99 cleanup file. While I could in theory
have each file be independent, in practice it's a lot of duplicated code
and a lot of time overhead, so it's either the 00-99 or (as I sometimes
have done) one giant testing file.


-- 
Greg Sabino Mullane [EMAIL PROTECTED] [EMAIL PROTECTED]
End Point Corporation


