I spent some time today looking at automated test failures via CloudBees/Jenkins (https://wmf.ci.cloudbees.com), and a common theme among the tests that fail inconsistently is "Watir::Wait::TimeoutError". Here's an example of a recent failure that falls into this category:

https://wmf.ci.cloudbees.com/job/browsertests-commons.wikimedia.beta.wmflabs.org-linux-chrome/463/testReport/(root)/UploadWizard/Navigate_to_Describe_page/

From previous experience working with SauceLabs, I know this is not unusual: by definition these tests drive a real browser and generate a lot of network traffic, so some latency is probably inevitable.

What I'm wondering is whether it might be a good idea to use the page-object "wait_until" method more widely. For example, we currently use it in aftv5_steps.rb <https://github.com/wikimedia/qa-browsertests/blob/master/features/step_definitions/aftv5_steps.rb>.
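For anyone not familiar with it: wait_until takes a timeout and a block, and polls the block until it returns a truthy value or the timeout expires, raising a timeout error in the latter case. Here's a minimal standalone sketch of that polling pattern (this is an illustration of the idea, not the page-object gem's actual implementation; the default timeout, polling interval, and error class name are my own):

```ruby
# Illustrative sketch of the polling pattern behind wait_until.
# Repeatedly evaluates the given block until it returns a truthy
# value, sleeping between attempts; raises if the deadline passes.
class WaitTimeoutError < StandardError; end

def wait_until(timeout: 10, interval: 0.5, message: nil)
  deadline = Time.now + timeout
  loop do
    result = yield
    return result if result
    if Time.now >= deadline
      raise WaitTimeoutError, (message || "condition not met after #{timeout}s")
    end
    sleep interval
  end
end

# Hypothetical usage: wait for an element-ready check instead of
# failing immediately on the first negative poll.
# wait_until(timeout: 15, message: "Describe page never loaded") do
#   page.describe_element_visible?
# end
```

The point is that an explicit wait only sleeps as long as it has to: when the page is fast, the block succeeds on an early poll and the test moves on, so the cost is paid only on the slow runs that would otherwise have failed outright.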

I realize that adding any kind of sleep or wait behavior to a test increases overall execution time, but I think it's more important to have fewer failing tests overall, so that folks can focus their troubleshooting efforts on failures that may reflect actual bugs (and not just timeouts).

I'd love to hear other opinions on this topic, so please speak up if you have an opinion ;)

Thanks,

Jeff

_______________________________________________
QA mailing list
[email protected]
https://lists.wikimedia.org/mailman/listinfo/qa