Hi folks! I thought this was worth sending out an announcement about,
so that as many packagers and testers as possible are aware of it.

With the rollout of Bodhi 2.6.0 to production today, openQA test
results for critpath updates now appear in the Bodhi webUI! Click the
'Automated Tests' tab on any critical path update (from the last two
months or so) and, alongside the familiar 'dist.*' tests run by
Taskotron, you should see results for several 'update.*' tests. These
are the openQA test results.
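
If you prefer to poke at things programmatically, the openQA results
are also reported to ResultsDB, which is where Bodhi pulls them from.
Here's a rough Python sketch of fetching them from there; the instance
URL, the 'testcases:like' filter and the field names are my
assumptions, so check them against the ResultsDB API documentation
before relying on this:

    # Rough sketch, not official tooling: assumes results are stored in
    # ResultsDB with the update's advisory ID as the 'item' and test
    # case names starting with 'update.'.
    import requests

    RESULTSDB = "https://taskotron.fedoraproject.org/resultsdb_api/api/v2.0"

    def update_test_results(updateid):
        """Fetch the 'update.*' results recorded for a Bodhi update."""
        resp = requests.get(
            RESULTSDB + "/results",
            params={"item": updateid, "testcases:like": "update.*"},
            timeout=30)
        resp.raise_for_status()
        return resp.json()["data"]

    for res in update_test_results("FEDORA-2017-XXXXXXXXXX"):
        print(res["testcase"]["name"], res["outcome"])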

Clicking any result will take you to the openQA webUI page for that
job. If you're investigating a failure, look for thumbnails with a
*red* border. Usually, the first one of these will be the attempted
image match or console command that did not give the expected result.

If you can't understand a failure, please do come ask on test@ or in
#fedora-qa. garretraziel (Jan Sedlak) and I should be able to explain.

This isn't a perfect test system yet; there have been and will
continue to be 'false' failures, where something goes wrong in the test
process itself or the test hits some transient bug that isn't actually
caused by the update. Because of this, we wrote an openQA plugin that
automatically retries each failed update test once, but sometimes we
get unlucky and a false failure happens again on the retry.
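
For the curious, the retry logic itself is conceptually very simple.
Here's a minimal Python sketch of the idea using python-openqa_client;
note this is *not* the actual plugin (that hooks into openQA itself),
and the 'build' filter, job fields and BUILD value format here are my
assumptions, so verify them against the openQA API documentation:

    # Minimal sketch of 'retry each failed job once', not the real plugin.
    # Assumes openQA API credentials are set up in client.conf, and that
    # update test jobs carry a BUILD value like 'Update-FEDORA-2017-XXXX'.
    from openqa_client.client import OpenQA_Client

    client = OpenQA_Client(server='openqa.fedoraproject.org')

    def retry_failed_once(build):
        """Restart each failed job for BUILD that hasn't been cloned yet."""
        jobs = client.openqa_request(
            'GET', 'jobs', params={'build': build})['jobs']
        for job in jobs:
            if job['result'] == 'failed' and not job.get('clone_id'):
                # POST jobs/<id>/restart clones the job and schedules the clone
                client.openqa_request(
                    'POST', 'jobs/{0}/restart'.format(job['id']))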

This is quite common for Fedora 26 updates; there seem to be several of
these transient bugs in F26 at present, so sometimes the test VM just
doesn't boot successfully, sometimes GNOME crashes, and so on. If you
see a 'red' screenshot that is just a partially-completed boot splash
or the GDM login screen, this may be what happened.

Notably, *any* failure before the _advisory_update step cannot possibly
be caused by the update itself, as nothing from the update is actually
installed until near the end of that step.

It's not currently possible for anyone but openQA admins to re-run
individual tests. You can cause *all* the openQA tests for your update
to be re-run by editing the update in any way (*any* edit event
triggers a re-run of the tests, not just changes to the update's
package manifest), but please be a bit sparing with this, as openQA
doesn't have unlimited capacity. For now you can, again, ask Jan or me
to re-run a single test if you'd like that. We will endeavour to set up
some kind of re-run request system in future.

Just like the Taskotron results, at present these results are entirely
advisory. They do not have any effect on whether or when you can push
your update stable. But we set up this system to help out packagers, so
I hope you'll find it useful to keep an eye on the results and take a
look at any failures to see if they may indicate a bug in the update.
Once again, please do ask for any help you need in interpreting or
understanding the results. And please do send any suggestions, comments
or complaints our way!

And just to be clear, these tests are currently run only on *critical
path* updates. If your update does not include a critical path package,
the tests will not be run. I'm thinking of implementing some sort of
'whitelist' system for listing or otherwise marking non-critpath
packages for which we want to run some or all of the tests; for
instance, it would make a lot of sense to run the FreeIPA tests for any
package in the FreeIPA stack. But that's not implemented yet. We don't
just run the tests on *all* updates because we simply don't have the
capacity to do so at present.
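
Purely to illustrate the kind of decision involved (none of this exists
yet, and the whitelist entries below are made up), the scheduling check
might eventually look something like this Python sketch:

    # Hypothetical sketch only: today the real rule is simply 'any
    # critical path package in the update -> the tests are scheduled'.
    # The whitelist and its entries are invented for illustration.
    WHITELIST = {'example-freeipa-plugin', 'example-other-package'}

    def should_schedule_tests(update_packages, critpath_packages):
        """Decide whether openQA update tests should run for an update."""
        pkgs = set(update_packages)
        if pkgs & set(critpath_packages):
            # current behaviour: critpath package present, run the tests
            return True
        # possible future behaviour: whitelisted non-critpath packages
        return bool(pkgs & WHITELIST)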

I have written a blog post about this, with some more information,
including a brief explanation of what the current set of update tests
covers, here:

https://www.happyassassin.net/2017/04/24/automated-critical-path-update-functional-testing-for-fedora/
-- 
Adam Williamson
Fedora QA Community Monkey
IRC: adamw | Twitter: AdamW_Fedora | XMPP: adamw AT happyassassin . net
http://www.happyassassin.net