On 2014-04-08, 6:10 PM, Karl Tomlinson wrote:
I wonder whether the real problem here is that we have too many
bad tests that report false negatives, and these bad tests are
reducing the value of our testsuite in general.  Tests also need
to be well documented so that people can understand what a
negative report really means.  This is probably what is leading to
assumptions that disabling a test is the solution to a new
failure.

I think this is a great point! Yes, we absolutely have a lot of bad tests that are written in a way that makes them prone to intermittent failures. As a result, a change to something entirely unrelated to the test in question can alter its environment enough that a latent intermittent failure suddenly starts to show up. For some of these tests that are not fixed in time, the situation gets even more confusing: one change in the environment causes the test to start failing, another change causes it to stop failing, and a while later yet another change makes it start failing again.

I think it would be very valuable to detect some of the known conditions [1] that can make a test fail intermittently because of timing issues in our test harnesses, so that we prevent those badly written tests from getting checked in in the first place. I've previously tried to do some of this work [2], and it would be fantastic if someone could pick up the ball there.

[1] https://developer.mozilla.org/en-US/docs/Mozilla/QA/Avoiding_intermittent_oranges
[2] https://bugzilla.mozilla.org/show_bug.cgi?id=649012
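To make the idea concrete, here is a minimal sketch of what such a pre-checkin check could look like. The pattern list and the function name `check_test_source` are my own illustrations, not part of the harness work in [2]; a real check would need a much more complete catalog of the timing-prone patterns described in [1]. The general shape is: scan the source of each new or modified test for constructs known to cause intermittent timing failures (e.g. waiting a fixed number of milliseconds with setTimeout instead of waiting for an explicit event) and reject the patch with an explanation.

```python
import re

# Hypothetical, non-exhaustive list of timing-prone patterns of the kind
# described in [1]: each entry is a regex plus a message explaining why the
# construct tends to produce intermittent failures.
SUSPICIOUS_PATTERNS = [
    (re.compile(r"setTimeout\s*\(\s*[^,]+,\s*[1-9]\d*\s*\)"),
     "fixed-delay setTimeout; wait for an explicit event or callback instead"),
    (re.compile(r"executeSoon\s*\(\s*SimpleTest\.finish"),
     "finishing the test asynchronously without waiting for pending work"),
]

def check_test_source(path, source):
    """Scan one test file's source and return (path, line, message)
    tuples for every line that matches a known-bad pattern."""
    problems = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for pattern, message in SUSPICIOUS_PATTERNS:
            if pattern.search(line):
                problems.append((path, lineno, message))
    return problems
```

A check like this could run at review or checkin time over the files a patch touches, so the author gets the feedback before the test ever has a chance to go intermittent on the tree.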

Cheers,
Ehsan
_______________________________________________
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform