Duncan Booth wrote:
... Possible enhancements: add another argument for the associated issue
tracker id ... some unbroken tests will also have associated issues, so this
might just be a separate decorator. This is probably easier to do as a
separate decoration, which would have to precede the expected-failure
decorator.
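A minimal sketch of that separate-decorator idea (the name tracked_issue and
the test shown are invented for illustration; only the stacking order
reflects Duncan's point):

import unittest

def tracked_issue(issue_id):
    # Hypothetical name: merely records the tracker id on the test,
    # so a runner or report generator could pick it up later. Works
    # on broken and unbroken tests alike.
    def decorate(func):
        func.issue_id = issue_id
        return func
    return decorate

class TestFeature(unittest.TestCase):
    # Written above (i.e. preceding) any expected-failure decorator,
    # so the annotation lands on the wrapper that unittest actually sees.
    @tracked_issue("#1042")
    def test_feature(self):
        self.assertTrue(True)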
On Tue, 10 Jan 2006 11:13:20 +0100, Peter Otten [EMAIL PROTECTED] wrote:
Duncan Booth wrote:
Peter Otten wrote:
Marking a unittest as "should fail" in the test suite seems just wrong
to me, whatever the implementation details may be. If at all, I would
apply an "I know these tests to fail, don't bother me with the messages
for now" filter further down the chain, in the TestRunner maybe.
OK, I took the code I offered here (tweaked in reaction to some
comments) and put up a recipe on the Python Cookbook. I'll allow
a week or so for more comment, and then possibly pursue adding this
to unittest.
Here is where the recipe is, for those who want to comment further (in
either that ...
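The recipe code itself isn't reproduced in the thread. A minimal sketch of
the idea being discussed, with illustrative names rather than the recipe's
own: a failing test stays silent, and a test that unexpectedly passes
complains loudly, so the marker gets removed once the bug is fixed.

import functools
import unittest

def broken_test(func):
    # Illustrative name for an expected-failure decorator.
    @functools.wraps(func)
    def wrapper(self, *args, **kwargs):
        try:
            func(self, *args, **kwargs)
        except AssertionError:
            return  # known breakage: keep the checkin run silent
        self.fail("%s passes unexpectedly; remove the marker"
                  % func.__name__)
    return wrapper

class TestOpenBugs(unittest.TestCase):
    @broken_test
    def test_issue_1042(self):
        self.assertEqual(1 + 1, 3)  # fails today, silently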
[EMAIL PROTECTED] wrote:
Michele> I am also +1 to run the tests in the code order.
Got any ideas how that is to be accomplished short of jiggering the names so
they sort in the order you want them to run?
Skip
Well, it could be done with a decorator, but unittest is already
cumbersome; how ...
Scott David Daniels wrote:
There has been a bit of discussion about a way of providing test cases
in a test suite that _should_ work but don't. One of the rules has been
the test suite should be runnable and silent at every checkin. Recently
there was a checkin of a test that _should_ work but doesn't.
Scott David Daniels [EMAIL PROTECTED] writes:
Recently there was a checkin of a test that _should_ work but
doesn't. The discussion got around to means of indicating such
tests (because the effort of creating a test should be captured)
without disturbing the development flow.
Do you mean "shouldn't work but does"?
Scott David Daniels wrote:
There has been a bit of discussion about a way of providing test cases
in a test suite that _should_ work but don't. One of the rules has been
the test suite should be runnable and silent at every checkin. Recently
there was a checkin of a test that _should_ work but doesn't.
Paul Rubin wrote:
Recently there was a checkin of a test that _should_ work but
doesn't. The discussion got around to means of indicating such
tests (because the effort of creating a test should be captured)
without disturbing the development flow.
Do you mean "shouldn't work but does"?
Scott David Daniels wrote:
There has been a bit of discussion about a way of providing test cases
in a test suite that _should_ work but don't. One of the rules has been
the test suite should be runnable and silent at every checkin. Recently
there was a checkin of a test that _should_ work but doesn't.
Fredrik Lundh [EMAIL PROTECTED] writes:
no, he means exactly what he said: support for expected failures
makes it possible to add test cases for open bugs to the test suite,
without 1) new bugs getting lost in the noise, and 2) having to
rewrite the test once you've gotten around to fixing the bug.
Peter Otten wrote:
Marking a unittest as "should fail" in the test suite seems just wrong
to me, whatever the implementation details may be. If at all, I would
apply an "I know these tests to fail, don't bother me with the messages
for now" filter further down the chain, in the TestRunner maybe.
Paul Rubin wrote:
no, he means exactly what he said: support for expected failures
makes it possible to add test cases for open bugs to the test suite,
without 1) new bugs getting lost in the noise, and 2) having to
rewrite the test once you've gotten around to fixing the bug.
Oh, I see ...
Duncan Booth wrote:
Peter Otten wrote:
Marking a unittest as "should fail" in the test suite seems just wrong
to me, whatever the implementation details may be. If at all, I would
apply an "I know these tests to fail, don't bother me with the messages
for now" filter further down the chain, in the TestRunner maybe.
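A sketch of such a downstream filter; KNOWN_FAILURES and KnownFailureResult
are invented names, and the resultclass hook used here is today's unittest
API rather than anything from this thread:

import unittest

# Maintained outside the test source, as Peter suggests.
KNOWN_FAILURES = {"test_issue_1042"}

class KnownFailureResult(unittest.TextTestResult):
    # Silences failures of tests known to be broken;
    # everything else is reported as usual.
    def addFailure(self, test, err):
        name = test.id().rsplit(".", 1)[-1]
        if name in KNOWN_FAILURES:
            self.addSuccess(test)  # or tally separately for a report
        else:
            super().addFailure(test, err)

runner = unittest.TextTestRunner(resultclass=KnownFailureResult)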
Scott David Daniels, about marking expected failures:
[snip]
I am +1; I have wanted this feature for a long time. FWIW,
I am also +1 to run the tests in the code order.
Michele Simionato
Peter Otten [EMAIL PROTECTED] wrote:
You're right of course. I still think the "currently doesn't pass" marker
doesn't belong in the test source.
The agile people would say that if a test doesn't pass, you make fixing it
your top priority. In an environment like that, there's no such thing as an
expected failure.
[EMAIL PROTECTED] writes:
Got any ideas how that is to be accomplished short of jiggering the
names so they sort in the order you want them to run?
How about a decorator instead of the testFuncName convention?
I.e., instead of
def testJiggle():  # "test" in the func name means it's a test
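One way that might look, as a sketch: a decorator stamps each test with a
definition-order counter, and a loader returns the marked methods in that
order instead of alphabetically. All names here are invented for
illustration.

import itertools
import unittest

_definition_order = itertools.count()

def test(func):
    # Marks func as a test, replacing the test* naming convention,
    # and records the order in which it was defined.
    func._test_order = next(_definition_order)
    return func

class CodeOrderLoader(unittest.TestLoader):
    # Collects the decorated methods and returns them in code order
    # rather than the default alphabetical order.
    def getTestCaseNames(self, testCaseClass):
        marked = [(getattr(testCaseClass, name)._test_order, name)
                  for name in dir(testCaseClass)
                  if hasattr(getattr(testCaseClass, name), "_test_order")]
        return [name for _, name in sorted(marked)]

class TestInCodeOrder(unittest.TestCase):
    @test
    def runs_first(self):
        self.assertTrue(True)

    @test
    def runs_second(self):
        self.assertEqual(2, 1 + 1)

suite = CodeOrderLoader().loadTestsFromTestCase(TestInCodeOrder)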
On 10 Jan 2006 13:49:17 -0800, Paul Rubin http://phr.cx@nospam.invalid
wrote:
[EMAIL PROTECTED] writes:
Got any ideas how that is to be accomplished short of jiggering the
names so they sort in the order you want them to run?
How about a decorator instead of the testFuncName convention? ...
There has been a bit of discussion about a way of providing test cases
in a test suite that _should_ work but don't. One of the rules has been
the test suite should be runnable and silent at every checkin. Recently
there was a checkin of a test that _should_ work but doesn't. The
discussion got around to means of indicating such tests (because the
effort of creating a test should be captured) without disturbing the
development flow.
On 9 January 2006, Scott David Daniels wrote:
There has been a bit of discussion about a way of providing test cases
in a test suite that _should_ work but don't. One of the rules has been
the test suite should be runnable and silent at every checkin. Recently
there was a checkin of a test that _should_ work but doesn't.