> > Correct. But they might well be broken, no?
>
> I would hope some effort is made that they not be. If they generate a
> positive, I would expect that the contributor would try to fix that
> before committing, no? If they discover that it's "false", they fix
> or remove the test; otherwise they document it.
That assumes they know. We recently had a number of test cases accompanying security fixes, and the tests would only run correctly on 32-bit systems. On 64-bit systems, they would consume all memory and either bring the machine down or eventually complete with a failure (because the expected out-of-memory situation did not arise). The author of the code was unaware of its dependency on the architecture, and the test would run fine on his machine. Likewise, we had tests that failed only if a certain locale was not available on a system, or if the system's locale definitions differed from the ones on Linux. There is a lot of potential for tests to fail only on systems to which we don't have access.

> Whether that is an acceptable solution to the "latent bug" problem is
> a different question. I'd rather know that Python has unexpected
> behavior, and have a sample program (== test) to demonstrate it, than
> not. YMMV.

And it does indeed vary for many people. We get a significant number of reports from people who find that the test suite fails, and who are then unable to draw any conclusion from that other than that Python is apparently broken and that they therefore shouldn't use it - even if the failing test is in a module that they will likely never use.

Regards,
Martin
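P.S. For concreteness, here is roughly the shape of the 32-bit-only failure described above. This is a made-up sketch, not the actual test that was committed; the test name, the size constant, and the MemoryError expectation are my own illustration:

    import unittest

    class OverflowGuardTest(unittest.TestCase):
        def test_huge_string(self):
            # On a 32-bit build, a ~2 GB allocation cannot succeed,
            # so the overflow check under test fires and MemoryError
            # is raised, as expected.  On a 64-bit build the request
            # is satisfiable: the interpreter really tries to build
            # the string, consuming all available memory, and if the
            # machine survives that, the test fails anyway because no
            # MemoryError ever arrives.
            with self.assertRaises(MemoryError):
                'x' * (2**31 - 1)

    if __name__ == '__main__':
        unittest.main()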
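The locale-dependent failures have the same shape. Again a hypothetical sketch; the locale name and the expected grouping are my assumptions, which is precisely the point:

    import locale
    import unittest

    class GroupingTest(unittest.TestCase):
        def test_thousands_separator(self):
            # Raises locale.Error right here on any system that does
            # not ship a de_DE locale ...
            locale.setlocale(locale.LC_NUMERIC, 'de_DE.UTF-8')
            try:
                # ... and fails the assertion on systems whose locale
                # database groups digits differently from glibc.
                self.assertEqual(
                    locale.format_string('%d', 1234567, grouping=True),
                    '1.234.567')
            finally:
                locale.setlocale(locale.LC_NUMERIC, 'C')

    if __name__ == '__main__':
        unittest.main()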