[Zope-dev] zope.testrunner misleading output

2010-12-22 Thread Alan Franzoni
Hello,
I tried submitting this as a Launchpad answer, but it expired
without further notice; I'm posting it here - I hope it's the right
place to discuss this.

I'm indirectly using zope.testrunner through zc.buildout's zc.recipe.testrunner:

[testunits]
recipe = zc.recipe.testrunner
eggs = pydenji
defaults = [ --auto-color, -vv, --tests-pattern, ^test_.* ]

It works; but what happens if a test file matching the pattern has
a serious failure, say an import or syntax error?

Output:
Test-module import failures:

Module: pydenji.test.test_cmdline

 File 
/Users/alan/Dropbox/code/pydenji_release/pydenji/pydenji/test/test_cmdline.py,
line 13

  ^
SyntaxError: invalid syntax


Running tests at level 1
Running zope.testrunner.layer.UnitTests tests:
 Set up zope.testrunner.layer.UnitTests in 0.000 seconds.
 Running:

[...] long test output [... ]

 test_resolver_retrieves_package_resource_filename
(pydenji.test.test_uriresolver.TestPackageResolver)
 Ran 73 tests with 0 failures and 0 errors in 0.188 seconds.
Tearing down left over layers:
 Tear down zope.testrunner.layer.UnitTests in 0.000 seconds.

Test-modules with import problems:
 pydenji.test.test_cmdline


I can't show you the colours here, but the "73" is green, which is
the colour for "all OK" - if any failure happens, the colours turn
red - while the test modules with import problems are just a tiny
line after that, which often gets overlooked.

Also, a test file matching the pattern but which does not define any
test is treated the very same way as a file with import problems,
which is probably not what it want.

The issues I can find are:

- I need to dig to the top of the output in order to get the
traceback; other frameworks, like twisted's own trial, print all the
tracebacks at the bottom of the test run for easy debugging;
- test colours should not turn green if any test module with import
problems is around; maybe an import/syntax error should count as a
generic error;
- while an import issue is a serious fact - meaning the tests can't
be run, and should be reported - a test module which does not define
any test could just issue a warning; it could be a placeholder,
or a not-yet-finished test case, and should not be a blocking issue.
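To illustrate why the first two kinds are hard errors: a syntax error
surfaces when the runner tries to import the module, before any test in
the file can even be collected. A minimal sketch (the file name and the
broken source below are made up, not from my project):

```python
def check_module_source(source, filename):
    """Return None if the source compiles, else the SyntaxError message."""
    try:
        compile(source, filename, "exec")
    except SyntaxError as exc:
        return exc.msg
    return None

# A broken file can only be reported, never run -- even if it defines
# 100 tests, none of them are collected:
print(check_module_source("def test_something(:\n    pass\n",
                          "test_cmdline.py"))
```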

Any thoughts on this?


-- 
Alan Franzoni
--
contact me at pub...@[mysurname].eu
___
Zope-Dev maillist  -  Zope-Dev@zope.org
https://mail.zope.org/mailman/listinfo/zope-dev
**  No cross posts or HTML encoding!  **
(Related lists - 
 https://mail.zope.org/mailman/listinfo/zope-announce
 https://mail.zope.org/mailman/listinfo/zope )


Re: [Zope-dev] zope.testrunner misleading output

2010-12-22 Thread Marius Gedminas
On Wed, Dec 22, 2010 at 02:27:09PM +0100, Alan Franzoni wrote:
 I've tried submitting this as a launchpad answer but it expired
 without further notice; I'm posting this here, I hope it's the right
 place to discuss this.

Probably.  More people read this list than Launchpad answers, I'm sure.

 I'm indirectly using zope.testrunner through zc.buildout
 zc.recipe.testrunner:

(One of my beefs with zope.testrunner is that I've no idea how to use it
without zc.recipe.testrunner, assuming that's even possible.)

 [testunits]
 recipe = zc.recipe.testrunner
 eggs = pydenji
 defaults = [ --auto-color, -vv, --tests-pattern, ^test_.* ]
 
 It works; but what happens if a test file matching the pattern has
 a serious failure, say an import or syntax error?

It is reported near the beginning of the output (highlighted in red, to
stand out), mentioned in the summary at the end to ensure you don't
miss it, and the test runner exits with a non-zero status code (I hope;
if it doesn't, that's a bug).

...
  Ran 73 tests with 0 failures and 0 errors in 0.188 seconds.
 Tearing down left over layers:
  Tear down zope.testrunner.layer.UnitTests in 0.000 seconds.
 
 Test-modules with import problems:
  pydenji.test.test_cmdline
 
 
 I can't show you the colours here, but the "73" is green, which is
 the colour for "all OK" - if any failure happens, the colours turn
 red - while the test modules with import problems are just a tiny
 line after that, which often gets overlooked.

That's an interesting perspective.

Note that even when there are failures, the number of tests and the
number of seconds are highlighted in green.  (The colours there are
mainly to make the numbers stand out so they're easier to notice in the
output.)

Perhaps it would make sense to increment the number of errors, if there
are modules that cannot be imported.  The number of errors is
highlighted in red (unless it is 0), so that would give you a visual
clue if you missed the one near the beginning of the output, or ignored
the summary list at the end.
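Concretely, the idea is something like this sketch (not
zope.testrunner's actual code; the real runner would also pluralize the
words correctly):

```python
# Fold module import failures into the error count, so the red
# "errors" number lights up in the summary even when all the tests
# that could be collected passed.
def summary_line(ran, failures, errors, import_failures):
    errors += len(import_failures)
    return ("Ran %d tests with %d failures and %d errors."
            % (ran, failures, errors))

print(summary_line(73, 0, 0, ["pydenji.test.test_cmdline"]))
```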

 Also, a test file matching the pattern but which does not define any
 test is treated the very same way as a file with import problems,
 which is probably not what it want.

What's "it"?

 The issues I can find are:
 
 - I need to dig to the top in order to get the traceback; other
 frameworks, like twisted's own trial, print all the tracebacks at the
 bottom of the test run for easy debugging;

Having used other frameworks, I appreciate zope.testrunner's eagerness
to show me the traceback at once, so I can start examining an error
without having to stare at an F in the middle of a sea of dots and
wonder what it might be, while waiting 20 minutes for the rest of the
test suite to finish running.

Then again, I agree that having to scroll back to the first traceback
is a bit bothersome.  I don't think printing the tracebacks at the
end of the run would help when there are multiple tracebacks -- you'd
want the first one anyway, the others likely being caused by it.
Also, tracebacks tend to be long, requiring me to scroll anyway.

Perhaps my experiences are coloured by working on Zope'ish code --
doctests (causing error cascades by default), deep function call nesting
causing long tracebacks, etc.

I see that zope.testrunner has finally acquired a -x (--stop-on-error)
option, which should terminate the test run after the first failure.
That might help.  Although it might not help with doctests and error
cascades.

For myself I've ended up running the tests like this:

  bin/test -c 2>&1 | less -RFX

This means I can start reading the results from the top down,
starting with the first failure, without having to wait for the test
suite to run to completion.

I sometimes wish zope.testrunner had a --pager option that would
spawn the pager on its output if and only if there were any errors.
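Such an option could be a thin wrapper along these lines (a sketch; the
function name and defaults are invented, not an existing API):

```python
import subprocess

def run_with_pager(cmd, pager=("less", "-RFX")):
    """Run cmd; page its output only when the exit status is non-zero."""
    proc = subprocess.run(cmd, capture_output=True, text=True)
    output = proc.stdout + proc.stderr
    if proc.returncode != 0:
        # something went wrong: spawn the pager on the captured output
        subprocess.run(pager, input=output, text=True)
    else:
        # all green: just print the output as usual
        print(output, end="")
    return proc.returncode
```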

 - test colors should not turn green if any test with import problem is
 around; maybe an import/syntax error should count as a generic error.

Maybe.  I'm feeling +0 about this.

 - while an import issue is a serious fact - meaning the test can't be
 run, and should be reported, a test module which does not define any
 test could just issue a warning - it could just be a placeholder test,
 or a not-yet-finished test case, and should not be a blocking issue.

An import error could be a placeholder as well.  What makes a module
with no tests different?  If you added a module, it's reasonable to
assume you added it for a reason, and what other reason could there be
other than for it to have tests?

Adding a single placeholder test to assuage the test runner is not that
difficult.  Or you could simply ignore that error while you're working
on other tests.  It's not a _blocking_ issue, in my book, since it
doesn't abort your test run -- all the other tests continue to run.

Marius Gedminas
-- 
http://pov.lt/ -- Zope 3/BlueBream consulting and development



Re: [Zope-dev] zope.testrunner misleading output

2010-12-22 Thread Alan Franzoni
On Wed, Dec 22, 2010 at 3:13 PM, Marius Gedminas mar...@gedmin.as wrote:

 It is reported near the beginning of the output (highlighted in red, to
 stand out), mentioned in the summary at the end to ensure you don't
 miss it, and the test runner exits with a non-zero status code (I hope;
 if it doesn't, that's a bug).

Yes, the status code is correctly set - I wasn't complaining about
that; the runner works fine in our CI system - but I usually don't
check the exit status while running tests at the console.

Also, there's a bit of an impedance mismatch between "0 failures and
0 errors" and an exit status of 1. The first time it happened I
thought "WTF?". It's very easy to associate red with "something bad
occurred" - I like that, and I'd like it to be extended.

Maybe something like


Ran A tests with B failures, C errors and D other problems


would satisfy us all.
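As a sketch, with made-up names (a real implementation would also
pluralize and colour the numbers):

```python
# Proposed summary line: "other problems" would cover import/syntax
# failures and similar issues that aren't per-test errors.
def summary(tests, failures, errors, other_problems):
    return ("Ran %d tests with %d failures, %d errors and %d other problems."
            % (tests, failures, errors, other_problems))

print(summary(73, 0, 0, 1))
```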



 That's an interesting perspective.

 Note that even when there are failures, the number of tests and the
 number of seconds are highlighted in green.  (The colours there are
 mainly to make the numbers stand out so they're easier to notice in the
 output.)

Yes, you're right, it's just the number of failures that turns red, if any.

What I'd probably like is for failed imports and syntax errors to be
counted as errors, and for files defining no tests to be counted just
as warnings.

 probably not what it want.

 What's it?

"it" = "I" - sorry, my typo.

 - I need to dig to the top in order to get the traceback; other
 frameworks, like twisted's own trial, print all the tracebacks at the
 bottom of the test run for easy debugging;

 Having used other frameworks, I appreciate zope.testrunner's eagerness
 to show me the traceback at once, so I can start examining an error
 without having to stare at an F in the middle of a sea of dots and
 wonder what it might be, while waiting 20 minutes for the rest of the
 test suite to finish running.

I realize zope.testrunner's layering system is designed to run unit,
integration and maybe acceptance tests, which can be pretty
time-consuming. I usually run mostly unit tests, which take less than
2 seconds.

By the way, you're describing nose's behaviour, I think; twisted's
trial is better at this: normally it just outputs the errors at the
end, but if the -e switch is passed, it prints the tracebacks both
ASAP *and* at the end. I guess a similar approach, using a switch,
could be employed in zope.testrunner.
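Something along these lines (a rough sketch with invented names):

```python
# Switch-controlled traceback reporting, like trial's -e/--rterrors:
# always collect tracebacks for a final summary, and optionally echo
# each one as soon as it happens.
class TracebackReporter:
    def __init__(self, echo_immediately=False):
        self.echo_immediately = echo_immediately
        self.records = []

    def report(self, test_name, traceback_text):
        self.records.append((test_name, traceback_text))
        if self.echo_immediately:
            print("FAILED: %s\n%s" % (test_name, traceback_text))

    def print_summary(self):
        # replay all collected tracebacks at the end of the run
        for test_name, traceback_text in self.records:
            print("FAILED: %s\n%s" % (test_name, traceback_text))
```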

 Then again I agree that having to scroll back to the first traceback is
 a necessity that's a bit bothersome.  I don't think printing the
 tracebacks at the end of the run would help, in case there were multiple
 tracebacks -- you'd want the first one anyway; the others likely being
 caused by it.  Also, tracebacks tend to be long, requiring me to scroll
 anyway.

If I'm running unit tests there's no certain connection between
tracebacks, and most probably there's no definite "first". Again, a
configurable switch would probably make everybody happy :-)


 - while an import issue is a serious fact - meaning the test can't be
 run, and should be reported, a test module which does not define any
 test could just issue a warning - it could just be a placeholder test,
 or a not-yet-finished test case, and should not be a blocking issue.

 An import error could be a placeholder as well.  What makes a module
 with no tests different?  If you added a module, it's reasonable to
 assume you added it for a reason, and what other reason could there be
 other than for it to have tests?

An import error is surely an error; if the module can't be imported
it can't work in any way. The file might define 100 tests, and none
of them get run.

A syntax error is surely an error as well; written with the correct
syntax the file might contain tests, and they won't get executed.

A file defining no tests may be an error, but you can never be sure;
hence I think you should tell the user, but you shouldn't fail by
default, IMHO.


NB: if any of my proposals is accepted, I'm willing to contribute a
patch; I just don't want to start coding something that might later
just be trashed.


-- 
Alan Franzoni
--
contact me at pub...@[mysurname].eu