Yaroslav Halchenko <yarikop...@gmail.com> added the comment:

Fernando, I agree... somewhat ;-)

At some point (once everything worked and no unit tests failed) I wanted to 
marry sweepargs to nose and make it spit out a dot (or animate a spinning 
wheel ;)) for every passed unit test, so that instead of 300 dots I got a 
picturesque field of thousands of dots and Ss, and could also see how many were 
skipped for some parametrizations.  But I became unsure about such a feature, 
since the field became quite large and hard to grasp visually, although it did 
give me a better idea of the total number of "testings" that were run and 
skipped.  So maybe it would be helpful to separate the notions of tests and 
testings and give the user the ability to control the verbosity level (1 -- 
tests, 2 -- testings, 3 -- verbose listing of testings (test(parametrization)))
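To make the idea concrete, here is a minimal sketch of what such a three-level reporter might look like.  Everything here (the `report` function, the `(test, parametrization, passed)` triples, the "F"/"." encoding) is made up for illustration; it is not sweepargs or nose API:

```python
def report(results, verbosity=1):
    """results: iterable of (test_name, parametrization, passed) triples."""
    if verbosity == 1:
        # One character per *test*: a test counts as passed only if
        # every one of its parametrizations (testings) passed.
        tests = {}
        for name, _, ok in results:
            tests[name] = tests.get(name, True) and ok
        return "".join("." if ok else "F" for ok in tests.values())
    if verbosity == 2:
        # One character per *testing* (test x parametrization).
        return "".join("." if ok else "F" for _, _, ok in results)
    # verbosity >= 3: verbose listing, test(parametrization): status
    return "\n".join(
        f"{name}({param}): {'ok' if ok else 'FAIL'}"
        for name, param, ok in results
    )

results = [
    ("test_x", "p0", True),
    ("test_x", "p1", False),
    ("test_y", "p0", True),
]
print(report(results, 1))  # one symbol per test
print(report(results, 2))  # one symbol per testing
print(report(results, 3))  # full listing
```

The point of the sketch is just the distinction: level 1 collapses a test's parametrizations into a single symbol, level 2 shows the whole "field", level 3 names each testing.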

But I have blessed sweepargs every time something goes nuts and a test starts 
failing for (nearly) all parametrizations at the same point.  That is where I 
really enjoy the concise summary.
I also observe that an ERROR bug often reveals itself through multiple tests.  
So maybe it would be worth developing a generic 'summary' output that collects 
all tracebacks and then groups them by the location of the actual failure, 
together with the tests/testings that hit it?
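Such grouping could be sketched roughly as follows.  This is only an illustration of the idea, not an existing nose or sweepargs facility; `summarize_failures`, `buggy_helper`, and the test ids are all hypothetical names, and "location of the actual failure" is taken to mean the innermost frame of the traceback:

```python
import traceback
from collections import defaultdict

def summarize_failures(failures):
    """Group (test_id, exception) pairs by the innermost traceback frame,
    i.e. the location where the failure actually occurred."""
    groups = defaultdict(list)
    for test_id, exc in failures:
        frame = traceback.extract_tb(exc.__traceback__)[-1]  # innermost frame
        groups[(frame.filename, frame.lineno, frame.name)].append(test_id)
    return groups

def buggy_helper(x):
    # A shared helper whose bug makes many parametrizations fail identically.
    return 1 / x

failures = []
for test_id in ("test_a(param=0)", "test_b(param=0)"):
    try:
        buggy_helper(0)
    except ZeroDivisionError as e:
        failures.append((test_id, e))

for (fname, lineno, func), tests in summarize_failures(failures).items():
    print(f"{func} ({fname}:{lineno}) hit by {len(tests)} testings: {tests}")
```

With real test runs one would group on the full traceback (or its tail), but even this crude innermost-frame key collapses the many-testings-one-bug case into a single summary line.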

----------

_______________________________________
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue7897>
_______________________________________