"Martin v. Löwis" writes:

 > > If they do fail, they're not "false" positives.  If they're "false",
 > > then the test is broken, no?
 > 
 > Correct. But they might well be broken, no?

I would hope some effort is made to ensure they're not.  If a test
generates a positive, I would expect the contributor to try to fix it
before committing, no?  If they discover that the positive is "false",
they fix or remove the test; otherwise they document it.

 > > So find a way to label them as tests
 > > added ex-post, with the failures *not* being regressions but rather
 > > latent bugs newly detected, and (presumably) as "wont-fix".
 > 
 > No such way exists,

Add a documentation file called "README.expected-test-failures".
AFAIK documentation is always acceptable, right?
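
Purely as an illustration (the file name is the one proposed above;
the entry format and the test name are invented), an entry might look
like this:

    README.expected-test-failures
    =============================
    Tests added ex-post whose failures are latent bugs rather than
    regressions.  One entry per known-failing test:

    test_demo.LatentBugDemo.test_surprising_rounding
        Demonstrates long-standing unexpected rounding behavior.
        Status: wont-fix; kept as documentation of the behavior.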

Whether that is an acceptable solution to the "latent bug" problem is
a different question.  I'd rather know that Python has unexpected
behavior, and have a sample program (== test) to demonstrate it, than
not.  YMMV.
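
For concreteness, here is what the sample entry above might point at.
This is only a sketch: it assumes unittest.expectedFailure (available
in 2.7/3.1 and later), and the rounding case is a stand-in for
whatever latent bug was actually found:

    import unittest

    class LatentBugDemo(unittest.TestCase):
        # Added ex-post; this failure is a latent bug, not a
        # regression.  See README.expected-test-failures.
        @unittest.expectedFailure
        def test_surprising_rounding(self):
            # 2.675 has no exact binary representation, so round()
            # yields 2.67 rather than the "expected" 2.68.
            self.assertEqual(round(2.675, 2), 2.68)

    if __name__ == "__main__":
        unittest.main()

Run that way, the runner reports the failure as expected rather than
as an error, so it never shows up looking like a regression.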
