Let me try to rephrase the problem in different terms, to hopefully make it clearer why using timers like this is a bad idea.

setTimeout(foo, 1000) may seem to mean "run foo after 1 second", but that is *not* what the function does at all. What it actually does is run foo after *at least* 1 second, i.e. at some later point in time, which could even be *minutes* later.
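To make that concrete, here is a small sketch of what you can actually observe; the 1000ms value is just an example, and I'm assuming the test harness's info() helper for logging:

  var requested = 1000;
  var start = Date.now();
  setTimeout(function foo() {
    // All setTimeout guarantees is that foo runs no earlier than
    // (roughly) the requested delay; under load, elapsed can be
    // arbitrarily larger -- seconds or even minutes.
    var elapsed = Date.now() - start;
    info("asked for " + requested + "ms, called back after " + elapsed + "ms");
  }, requested);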

A lot of people assume that because their local timings pan out, and they've added a, let's say, 10x error margin to their timers, their use of timers is fine. It's not, as the intermittent oranges over the years have demonstrated. We have no way of knowing whether those calculations hold under various kinds of machine load, on new platforms, or on much lower-powered machines (think of taking the measurements on a beefy desktop and then running the tests on a low-end phone). And increasing the error margin won't save us; it will just push the problem further into the future.

Also, please note that letting the test time out when it doesn't succeed is a perfectly valid failure scenario. We have numerous tests that fire an async event, set up a listener, wait for the listener to fire, and call SimpleTest.finish() there, without any timers to catch the case where the event does not fire (a rough sketch of that pattern is below). Sufficient logging is almost always enough to avoid these timers, as those tests have demonstrated in practice.
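For what it's worth, here is a minimal sketch of that pattern as a mochitest might write it; the "foo-done" event name and the target object are made up for illustration, but the shape matches what those tests do:

  var target = window;  // stand-in for whatever object fires the event

  SimpleTest.waitForExplicitFinish();
  info("Waiting for the (hypothetical) 'foo-done' event...");
  target.addEventListener("foo-done", function listener() {
    target.removeEventListener("foo-done", listener);
    info("'foo-done' fired");
    ok(true, "got the event we were waiting for");
    SimpleTest.finish();
  });
  // No backup setTimeout here: if the event never fires, the harness
  // timeout fails the test, and the info() lines show where it got stuck.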

Cheers,
Ehsan

On 2014-12-19 1:07 AM, Boris Zbarsky wrote:
> On 12/18/14, 2:28 PM, Nils Ohlmeier wrote:
>> Well there is an event listener waiting for the event to fire.
>> But how else than through a timeout, obviously with a high timeout
>> value like 30 or 60 seconds
>
> We've had quite a number of random oranges from precisely this scenario.
> It seems that it's not uncommon for the test VMs to completely lose
> the timeslice for 60 seconds or more.
>
> If the test is in fact expecting the event to fire, the right thing to
> do is to just have the test time out per the harness timeout (which can
> be globally adjusted as needed depending on the characteristics of the
> test environment) if the event doesn't fire.  Rolling your own shorter
> timer just means contributing to random orange.
>
>> Sure I can print an info message that my test is now waiting for an
>> event to pop,
>
> Sure, and another one when the event comes in.  Then you can check the
> log to see whether the timeout was for this reason in the event of a
> harness timeout on this test.
>
>> But as I tried to describe in my other email, having a long-lived
>> timer which pops complaining that event X is missing is, I think, a
>> legit use case for setTimeout in tests.
>
> Given the number of random oranges we've had from _precisely_ this use
> of setTimeout, I don't think it's legit.
>
> -Boris
_______________________________________________
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform
