Re: testing for warnings during tests

2008-08-19 Thread Gabor Szabo
I was just cleaning up old mails when I found this thread

On Tue, Jun 10, 2008 at 2:49 PM, David Golden [EMAIL PROTECTED] wrote:
 On Tue, Jun 10, 2008 at 12:28 AM, Gabor Szabo [EMAIL PROTECTED] wrote:
 The issue I am trying to solve is how to catch and report
 when a test suite gives any warnings?

 Are there situations where a test suite should give warnings?  I.e.
 stuff that the user should see that shouldn't get swallowed by the
 harness running in a quiet (not verbose) mode?

 For example, I have some tests in CPAN::Reporter that test timing out
 a command.  Since that could look like a test has hung (to an
 impatient tester) I make a point to use warn to flag to the user that
 the test is sleeping for a timeout test.

 Looks like this:

 $ Build test --test_files=t/13_record_command.t
 t/13_record_command..18/37 # sleeping for timeout test
 t/13_record_command..22/37 # sleeping for timeout test
 t/13_record_command..26/37 # sleeping for timeout test
 t/13_record_command..ok
 All tests successful.
 Files=1, Tests=37, 19 wallclock secs ( 0.01 usr  0.01 sys +  6.51 cusr
  2.06 csys =  8.59 CPU)

 So is there a better way to do this than warn?

Sure. IMHO that is what *diag* is for:
printing all kinds of messages to the screen within a TAP stream.

 That said, if you try this at home (with Proc::ProcessTable), you'll
 also get a lovely warning from Proc::ProcessTable having a
 non-portable v-string.  That is a warning that should perhaps be
 fixed, though it turns out to be upstream.  Should I clutter my code
 with stuff to suppress it?  Maybe.

 But I don't see how I can have the one without the other.

I think that warning should be reported.
If the tester can (automatically) work out where the warning
comes from, it should try to report it there. If it cannot, then it
should report it to you.
I know it is not optimal, but then you should complain to the
author of the module you used, preferably with a test case that
catches the warning.


The tester should only report the stuff it sees that is not in
the TAP stream.

Gabor


Re: testing for warnings during tests

2008-08-19 Thread Gabor Szabo
On Tue, Aug 19, 2008 at 3:11 PM, David Golden [EMAIL PROTECTED] wrote:
 On Tue, Aug 19, 2008 at 8:02 AM, Gabor Szabo [EMAIL PROTECTED] wrote:
 Sure. IMHO that is what *diag* is for:
 printing all kinds of messages to the screen within a TAP stream.

 Going up the thread, I think you had asked about whether the harness
 could catch warnings to find things that T::NW can't.  I think I was
 pointing out that there are legitimate reasons for a test author to
 issue warnings -- diag is just a specially formatted warning, after
 all.  So I don't think the harness can be expected to distinguish
 warnings from code versus intentional warnings from the test only
 from observing the output stream.

Sure, people can fake the output of diag.

For now I'd like someone to start reporting anything not in
the TAP stream.

Then, if someone really wants to go further, she can make diag silent
and flag anything that still looks like diag output, since anything
left in that format was obviously only faking it.

Gabor


Re: testing for warnings during tests

2008-06-10 Thread Ovid
--- Gabor Szabo [EMAIL PROTECTED] wrote:

 So I wonder if there are other ways. E.g. if the harness could catch
 the warnings?

The harness has code which can allow you to merge the STDERR and STDOUT
streams.  See the '--merge' switch to prove.  With that, a
(simple-minded) parser becomes:

use TAP::Parser;

my $parser = TAP::Parser->new( {
    source => $source,
    merge  => 1,
} );

while ( my $result = $parser->next ) {
    if ( $result->is_unknown ) {
        # it's a warning or the code printed something to STDOUT
    }
}

That's not enough for what you want, but is that a start?

Cheers,
Ovid

--
Buy the book  - http://www.oreilly.com/catalog/perlhks/
Personal blog - http://publius-ovidius.livejournal.com/
Tech blog - http://use.perl.org/~Ovid/journal/


Re: testing for warnings during tests

2008-06-10 Thread Fergal Daly
2008/6/10 Gabor Szabo [EMAIL PROTECTED]:
 So apparently using Test::NoWarnings isn't that cool
 and mandating it with a CPANTS metric is even less cool.

What's the problem with T::NW? Maybe I'm misunderstanding the rest of
this mail but you seem to be looking for something that will catch
warnings from other people's test scripts which is not what T::NW is
about. Or is there some other problem?

F

 The issue I am trying to solve is how to catch and report
 when a test suite gives any warnings?

 I wrote it in my blog too but here it is. Occasionally when I install
 a module manually I see warnings. Sometimes I report them
 but mostly I don't. I guess smokers will not see them as the
 tests actually pass.

 How could we catch those cases without using Test::NoWarnings ?

 Could the harness catch them?

 Catching anything on STDERR isn't good enough as diag() goes there.

 Would catching and reporting any output (both STDOUT and STDERR)
 that is not proper TAP help here?

 Of course it would still miss if someone has

  print STDERR "# no cookies\n";

 I know one of the features of TAP is that a parser should ignore anything it
 does not understand, and that is especially important for forward compatibility.

 Maybe the harness of the smokers could do that - assuming they have the latest
 version of TAP - and then report the issues.


 Gabor


 --
 Gabor Szabo http://szabgab.com/blog.html
 Test Automation Tips http://szabgab.com/test_automation_tips.html



Re: testing for warnings during tests

2008-06-10 Thread Gabor Szabo
On Tue, Jun 10, 2008 at 10:33 AM, Fergal Daly [EMAIL PROTECTED] wrote:
 2008/6/10 Gabor Szabo [EMAIL PROTECTED]:
 So apparently using Test::NoWarnings isn't that cool
 and mandating it with a CPANTS metric is even less cool.

 What's the problem with T::NW? Maybe I'm misunderstanding the rest of
 this mail but you seem to be looking for something that will catch
 warnings from other people's test scripts which is not what T::NW is
 about. Or is there some other problem?


Well, the issue is that I would like to eliminate the warnings given by CPAN
modules during testing (not only mine, everyone's).

One way is that they all start to use T::NW in all their test scripts.
That was my original idea by adding it as a metric to CPANTS.

As it turns out people have all kinds of issues with T::NW.
I am not sure whether the technical complaints are really valid, or whether
they could be fixed (see also the other thread about CPANTS).

One thing I understand is that they don't want to be forced to use this
specific solution.

So I thought I would look for a solution where someone (a test runner?)
could check whether there were any warnings from the tests. This is of course
not the job of T::NW.

regards
   Gabor

-- 
Gabor Szabo http://szabgab.com/blog.html
Test Automation Tips http://szabgab.com/test_automation_tips.html


Re: testing for warnings during tests

2008-06-10 Thread Andy Armstrong

On 10 Jun 2008, at 08:14, Ovid wrote:

So I wonder if there are other ways. E.g. if the harness could catch
the warnings?


The harness has code which can allow you to merge the STDERR and STDOUT
streams.  See the '--merge' switch to prove.  With that, a
(simple-minded) parser becomes:

   use TAP::Parser;

   my $parser = TAP::Parser->new( {
       source => $source,
       merge  => 1,
   } );

   while ( my $result = $parser->next ) {
       if ( $result->is_unknown ) {
           # it's a warning or the code printed something to STDOUT
       }
   }

That's not enough for what you want, but is that a start?



I wonder if TAP::Filter[1] would help with that? I think you could use  
it to write a filter that converted warnings into additional errors.


# In a filter
sub inspect {
    my ($self, $result) = @_;
    if ( $result->is_unknown ) {
        return $self->ok( description => 'Unknown TAP token', ok => 0 );
    }
    return $result;
}


[1] http://search.cpan.org/dist/TAP-Filter

--
Andy Armstrong, Hexten

Re: testing for warnings during tests

2008-06-10 Thread David Golden
On Tue, Jun 10, 2008 at 3:33 AM, Fergal Daly [EMAIL PROTECTED] wrote:
 What's the problem with T::NW? Maybe I'm misunderstanding the rest of

Not a problem with T::NW itself, just that it doesn't catch all the
cases that Gabor is concerned about and people had issues with using
CPANTS to encourage cargo-cult usage for the sake of Kwalitee in the
face of that deficiency.

E.g.:

print STDERR "T::NW doesn't catch this warning";

-- David


Re: testing for warnings during tests

2008-06-10 Thread David Golden
On Tue, Jun 10, 2008 at 12:28 AM, Gabor Szabo [EMAIL PROTECTED] wrote:
 The issue I am trying to solve is how to catch and report
 when a test suite gives any warnings?

Are there situations where a test suite should give warnings?  I.e.
stuff that the user should see that shouldn't get swallowed by the
harness running in a quiet (not verbose) mode?

For example, I have some tests in CPAN::Reporter that test timing out
a command.  Since that could look like a test has hung (to an
impatient tester) I make a point to use warn to flag to the user that
the test is sleeping for a timeout test.

Looks like this:

$ Build test --test_files=t/13_record_command.t
t/13_record_command..18/37 # sleeping for timeout test
t/13_record_command..22/37 # sleeping for timeout test
t/13_record_command..26/37 # sleeping for timeout test
t/13_record_command..ok
All tests successful.
Files=1, Tests=37, 19 wallclock secs ( 0.01 usr  0.01 sys +  6.51 cusr
 2.06 csys =  8.59 CPU)

So is there a better way to do this than warn?

That said, if you try this at home (with Proc::ProcessTable), you'll
also get a lovely warning from Proc::ProcessTable having a
non-portable v-string.  That is a warning that should perhaps be
fixed, though it turns out to be upstream.  Should I clutter my code
with stuff to suppress it?  Maybe.

But I don't see how I can have the one without the other.

--David


Re: testing for warnings during tests

2008-06-10 Thread Rick Fisk
I haven't used Test::NoWarnings. However, for me personally, I don't
like to suppress output unless I am catching it in an explicit test. The
code I am working on is designed to throw warnings or errors under
certain conditions. Especially in code that is designed to run under a
web server, it is very, very bad to have output spewing to STDOUT that
was not intended.

I have been using Capture::Stdout/Capture::Stderr for this purpose. It's
designed to run in the context of the Test::XXX packages. 

You can capture warnings (FORCE_CAPTURE_WARN => 1) but you have to be
pretty careful. If you are merely swallowing up your output, you could
end up missing warnings thrown by the interpreter ("Use of uninitialized
value in concatenation").

If you're going to capture output, I think you have to be methodical
about it and add a test for intended and unintended output. If it's
output that needs to be suppressed in a later bugfix, then wrap its
capture in a TODO block but do capture the output and handle it. Later
you can change the test to a negative test, i.e.:

  is($stderr->read(), undef, '... execution of this code snippet should
  not produce warnings');

or

  is($stderr->read(), '', '... there should be no more messages on STDERR');

One caveat I have found when explicitly capturing your
STDERR/STDOUT/WARN output in tests is that once you commit to it in your
test suite, you can get yourself stuck. I rarely execute code under
development outside a test anymore. So, if I happen to add a debug
statement in the code and I have a test like the one above which tests
for the existence of messages, it will fail, AND I lose the debug output
unless I am very specific about which handle I am testing. It's not a
fatal flaw but it has to be dealt with. 

This can be worked around in a number of ways. One is to modify your
test(s) while you're debugging to suppress capture (a clunky workaround
prone to errors); others are to write to a handle you're not capturing, or
to run a modified copy of your test that doesn't capture any output.

Spurious output can actually skew your test results, especially if you
are running Test::Harness with the merge (STDOUT/STDERR) option. You can
run 'prove -p --merge' against your test to individually debug your
test's output and make sure it will play nicely with Test::Harness. 

I have also found that in certain versions of perl (5.6.1), even with
the {FORCE_CAPTURE_WARN} flag, warnings appear not to be thrown even
though the output exists on the TTY. Your mileage may vary.

So I would agree that eliminating or handling warning output is a good
idea. Doing this at a global harness level rather than in the individual
tests, though, is where I might disagree on the approach.


On Tue, 2008-06-10 at 07:49 -0400, David Golden wrote:
 On Tue, Jun 10, 2008 at 12:28 AM, Gabor Szabo [EMAIL PROTECTED] wrote:
  The issue I am trying to solve is how to catch and report
  when a test suite gives any warnings?
 
 Are there situations where a test suite should give warnings?  I.e.
 stuff that the user should see that shouldn't get swallowed by the
 harness running in a quiet (not verbose) mode?
 
 For example, I have some tests in CPAN::Reporter that test timing out
 a command.  Since that could look like a test has hung (to an
 impatient tester) I make a point to use warn to flag to the user that
 the test is sleeping for a timeout test.
 
 Looks like this:
 
 $ Build test --test_files=t/13_record_command.t
 t/13_record_command..18/37 # sleeping for timeout test
 t/13_record_command..22/37 # sleeping for timeout test
 t/13_record_command..26/37 # sleeping for timeout test
 t/13_record_command..ok
 All tests successful.
 Files=1, Tests=37, 19 wallclock secs ( 0.01 usr  0.01 sys +  6.51 cusr
  2.06 csys =  8.59 CPU)
 
 So is there a better way to do this than warn?
 
 That said, if you try this at home (with Proc::ProcessTable), you'll
 also get a lovely warning from Proc::ProcessTable having a
 non-portable v-string.  That is a warning that should perhaps be
 fixed, though it turns out to be upstream.  Should I clutter my code
 with stuff to suppress it?  Maybe.
 
 But I don't see how I can have the one without the other.
 
 --David



Re: testing for warnings during tests

2008-06-10 Thread brian d foy
In article [EMAIL PROTECTED],
Gabor Szabo [EMAIL PROTECTED] wrote:

 Having those warnings during tests is a problem that should be somehow solved.

I'd like to have a cpan-testers report whenever my test suite issues
warnings. It's not a new category. If the tests all pass it's still a
PASS.

Someone was talking about developer preferences before. I'd set a "send
report on warning" feature.

Not that I care that much to do anything about it right now :)


Re: testing for warnings during tests

2008-06-09 Thread chromatic
On Monday 09 June 2008 21:28:40 Gabor Szabo wrote:

 The issue I am trying to solve is how to catch and report
 when a test suite gives any warnings?

Is it even possible?  I thought one of the goals of CPANTS was not to run any 
of the distribution's code directly.  The most useful metrics seem to meet 
that goal.

-- c


Re: testing for warnings during tests

2008-06-09 Thread Gabor Szabo
On Tue, Jun 10, 2008 at 7:42 AM, chromatic [EMAIL PROTECTED] wrote:
 On Monday 09 June 2008 21:28:40 Gabor Szabo wrote:

 The issue I am trying to solve is how to catch and report
 when a test suite gives any warnings?

 Is it even possible?  I thought one of the goals of CPANTS was not to run any
 of the distribution's code directly.  The most useful metrics seem to meet
 that goal.

I did not mean it to be done by CPANTS.

Having those warnings during tests is a problem that should be somehow solved.

My attempt to recommend Test::NoWarnings (which I would change to
"use Test::NoWarnings or Test::NoWarnings::Plus etc." if there were
other solutions) doesn't seem to be the right solution.

So I wonder if there are other ways. E.g. if the harness could catch
the warnings?

Gabor


-- 
Gabor Szabo http://szabgab.com/blog.html
Test Automation Tips http://szabgab.com/test_automation_tips.html