Re: [Test-Announce] Proven tester status
On Thu, 2012-02-16 at 09:15 -0800, Adam Williamson wrote:
> On Thu, 2012-02-16 at 08:46 -0500, Vincent L. wrote:
> > Thanks for the stats and information.
> >
> > How big is the gap in testing?
> >
> > Is there a significant amount of package releases etc. walked back
> > because after they passed minimum time in QA and were published it
> > turned out they were broken? I.e. percent-wise or some other metric,
> > or in the absence of that a gut assessment.
>
> No, we almost never revert updates. I can't recall a single instance of
> it happening recently. When an update breaks something the packager
> usually simply ships a quick fix as a subsequent update.

To clarify - it's quite often the case that an update is discovered to
be broken *in updates-testing*, but it's quite rare for a badly broken
update to make it past updates-testing (it does happen occasionally),
and in that case we almost never revert it; we instead fix it with a new
update.
-- 
Adam Williamson
Fedora QA Community Monkey
IRC: adamw | Twitter: AdamW_Fedora | identi.ca: adamwfedora
http://www.happyassassin.net
-- 
test mailing list
test@lists.fedoraproject.org
To unsubscribe: https://admin.fedoraproject.org/mailman/listinfo/test
Re: [Test-Announce] Proven tester status
On Thu, 2012-02-16 at 08:46 -0500, Vincent L. wrote:
> Thanks for the stats and information.
>
> How big is the gap in testing?
>
> Is there a significant amount of package releases etc. walked back
> because after they passed minimum time in QA and were published it
> turned out they were broken? I.e. percent-wise or some other metric,
> or in the absence of that a gut assessment.

No, we almost never revert updates. I can't recall a single instance of
it happening recently. When an update breaks something the packager
usually simply ships a quick fix as a subsequent update.

> Are certain areas in more need of focus than others due to criticality
> and lack of testing/testers? If so, what areas are those?

The packages we'd most like to have test plans for are those on the
critical path:
https://fedoraproject.org/wiki/Critical_path_package
http://kojipkgs.fedoraproject.org/mash/rawhide-20120101/logs/critpath.txt

> Wanting to get a handle on things around here so I can understand
> where I can be most effective in helping out. I want to look into the
> automated QA testing via this link:
> https://fedoraproject.org/wiki/AutoQA

Great. AutoQA has its own mailing list, and the wiki should have lots of
helpful info and contacts for getting started.
-- 
Adam Williamson
Fedora QA Community Monkey
IRC: adamw | Twitter: AdamW_Fedora | identi.ca: adamwfedora
http://www.happyassassin.net
Re: [Test-Announce] Proven tester status
Thanks for the stats and information.

How big is the gap in testing?

Is there a significant amount of package releases etc. walked back
because after they passed minimum time in QA and were published it
turned out they were broken? I.e. percent-wise or some other metric, or
in the absence of that a gut assessment.

Are certain areas in more need of focus than others due to criticality
and lack of testing/testers? If so, what areas are those?

Wanting to get a handle on things around here so I can understand where
I can be most effective in helping out. I want to look into the
automated QA testing via this link:
https://fedoraproject.org/wiki/AutoQA

Thanks again.

On 02/15/2012 10:16 PM, Adam Williamson wrote:
> On Wed, 2012-02-15 at 21:35 -0500, Vincent L. wrote:
> > On 02/13/2012 09:30 PM, Bruno Wolff III wrote:
> > > Note that statistics are still gathered and that future changes
> > > might depend on whether or not proventesters do a better job than
> > > average of correctly tagging builds as good or bad.
> >
> > Probably stating the obvious, and I am new around here, but the
> > biggest challenge I see is that testing is not well defined.
> > Certainly for the core items, standard regressions or checklists of
> > what items should be validated etc. do not seem to be present [or at
> > least I can't find any]. This naturally leads to inconsistent
> > approaches to testing from tester to tester.
> >
> > There are a lot of packages, and likely a lack of staffing/volunteers
> > to develop and maintain test plans. However, as in most commercial
> > release management, having these things would help ensure each tester
> > validated things in a similar fashion and ensure better release
> > quality.
>
> Yes, this is broadly the problem. We have a system in place that allows
> you to create a test plan for a package and have it show up in the
> update request. See it in action at
> https://admin.fedoraproject.org/updates/FEDORA-2012-1766/dracut-016-1.fc17
> - note the links to test cases - and details on how to actually set up
> the test cases to make this work are at
> https://fedoraproject.org/wiki/QA:SOP_package_test_plan_creation .
> We don't have test plans for many packages, really because of the
> resource issue. Jon Stanley did suggest he might work on this as his
> 'board advocacy' task.
>
> > May I ask, ballpark, how many "proventesters" there are vs. how many
> > testers of standard status participate at any given time?
>
> We can say with precision how many proven testers there are, because
> there's an associated FAS group - there are 90 members of the
> 'proventesters' group in FAS. Active non-proven testers is a bit harder
> to count, but Luke can generate Bodhi statistics. There's one fairly
> 'famous' set from 2010 here:
> https://lists.fedoraproject.org/pipermail/devel/2010-June/137413.html
> There's a less famous report from March 2011 here:
> http://lmacken.fedorapeople.org/bodhi-metrics-20110330
> from which you can get some numbers. At the time of the 2011 report it
> seems like there was a roughly 1:10 proventester/regular tester ratio
> for F15 and F14, but it does seem to be slightly unclear.
Re: [Test-Announce] Proven tester status
On Wed, 2012-02-15 at 21:35 -0500, Vincent L. wrote:
> On 02/13/2012 09:30 PM, Bruno Wolff III wrote:
> > Note that statistics are still gathered and that future changes might
> > depend on whether or not proventesters do a better job than average
> > of correctly tagging builds as good or bad.
>
> Probably stating the obvious, and I am new around here, but the
> biggest challenge I see is that testing is not well defined. Certainly
> for the core items, standard regressions or checklists of what items
> should be validated etc. do not seem to be present [or at least I
> can't find any]. This naturally leads to inconsistent approaches to
> testing from tester to tester.
>
> There are a lot of packages, and likely a lack of staffing/volunteers
> to develop and maintain test plans. However, as in most commercial
> release management, having these things would help ensure each tester
> validated things in a similar fashion and ensure better release
> quality.

Yes, this is broadly the problem. We have a system in place that allows
you to create a test plan for a package and have it show up in the
update request. See it in action at
https://admin.fedoraproject.org/updates/FEDORA-2012-1766/dracut-016-1.fc17
- note the links to test cases - and details on how to actually set up
the test cases to make this work are at
https://fedoraproject.org/wiki/QA:SOP_package_test_plan_creation .
We don't have test plans for many packages, really because of the
resource issue. Jon Stanley did suggest he might work on this as his
'board advocacy' task.

> May I ask, ballpark, how many "proventesters" there are vs. how many
> testers of standard status participate at any given time?

We can say with precision how many proven testers there are, because
there's an associated FAS group - there are 90 members of the
'proventesters' group in FAS. Active non-proven testers is a bit harder
to count, but Luke can generate Bodhi statistics. There's one fairly
'famous' set from 2010 here:
https://lists.fedoraproject.org/pipermail/devel/2010-June/137413.html
There's a less famous report from March 2011 here:
http://lmacken.fedorapeople.org/bodhi-metrics-20110330
from which you can get some numbers. At the time of the 2011 report it
seems like there was a roughly 1:10 proventester/regular tester ratio
for F15 and F14, but it does seem to be slightly unclear.
-- 
Adam Williamson
Fedora QA Community Monkey
IRC: adamw | Twitter: AdamW_Fedora | identi.ca: adamwfedora
http://www.happyassassin.net
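[As an aside, the 1:10 ratio Adam mentions has to be computed over distinct
testers, not raw feedback events, since one tester can comment on many
updates. A minimal sketch of that counting, with invented usernames and
group flags rather than real Bodhi data:]

```python
# Minimal sketch: estimate a proventester/regular-tester ratio from
# Bodhi-style karma feedback. All usernames and group flags here are
# invented placeholders, not real Fedora data.
feedback = [
    ("tester_a", True),   # (username, is_proventester)
    ("tester_a", True),
    ("tester_b", True),
    ("tester_c", False),
    ("tester_d", False),
    ("tester_e", False),
    ("tester_f", False),
    ("tester_c", False),  # repeat feedback from the same tester
]

# Sets deduplicate testers who gave feedback more than once.
proven = {user for user, is_pt in feedback if is_pt}
regular = {user for user, is_pt in feedback if not is_pt}

print(f"{len(proven)} proven : {len(regular)} regular "
      f"(ratio 1:{len(regular) / len(proven):.1f})")
```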
Re: [Test-Announce] Proven tester status
On 02/13/2012 09:30 PM, Bruno Wolff III wrote:
> Note that statistics are still gathered and that future changes might
> depend on whether or not proventesters do a better job than average of
> correctly tagging builds as good or bad.

Probably stating the obvious, and I am new around here, but the biggest
challenge I see is that testing is not well defined. Certainly for the
core items, standard regressions or checklists of what items should be
validated etc. do not seem to be present [or at least I can't find any].
This naturally leads to inconsistent approaches to testing from tester
to tester.

There are a lot of packages, and likely a lack of staffing/volunteers to
develop and maintain test plans. However, as in most commercial release
management, having these things would help ensure each tester validated
things in a similar fashion and ensure better release quality.

May I ask, ballpark, how many "proventesters" there are vs. how many
testers of standard status participate at any given time?
Re: [Test-Announce] Proven tester status
On Tue, Feb 14, 2012 at 10:17:12 +0000, mike cloaked wrote:
> On Tue, Feb 14, 2012 at 2:30 AM, Bruno Wolff III wrote:
> > On Mon, Feb 13, 2012 at 18:20:38 -0800, Adam Williamson wrote:
> > > As noted there, and as discussed at meetings and with FESCo, I'm
> > > hopeful we'll be able to make use of proven tester status again
> > > once Bodhi 2.0 hits. Therefore I don't think we should take down
> > > all the documentation, kill the group, or stop accepting
> > > membership requests. But do be aware that, at present, proven
> > > tester status is basically meaningless.
> >
> > Note that statistics are still gathered and that future changes
> > might depend on whether or not proventesters do a better job than
> > average of correctly tagging builds as good or bad.
>
> That particular issue would be useful to know about - is there any
> evidence at this point that proventester karma is more helpful than
> the average in this regard?

This was brought up at one of the FESCo meetings, and at the time
proventester feedback wasn't felt to be enough better to keep requiring
proventester karma for critical path updates.
Re: [Test-Announce] Proven tester status
On Tue, Feb 14, 2012 at 2:30 AM, Bruno Wolff III wrote:
> On Mon, Feb 13, 2012 at 18:20:38 -0800, Adam Williamson wrote:
> > As noted there, and as discussed at meetings and with FESCo, I'm
> > hopeful we'll be able to make use of proven tester status again once
> > Bodhi 2.0 hits. Therefore I don't think we should take down all the
> > documentation, kill the group, or stop accepting membership
> > requests. But do be aware that, at present, proven tester status is
> > basically meaningless.
>
> Note that statistics are still gathered and that future changes might
> depend on whether or not proventesters do a better job than average of
> correctly tagging builds as good or bad.

That particular issue would be useful to know about - is there any
evidence at this point that proventester karma is more helpful than the
average in this regard?
-- 
mike c
Re: [Test-Announce] Proven tester status
On Mon, Feb 13, 2012 at 18:20:38 -0800, Adam Williamson wrote:
> As noted there, and as discussed at meetings and with FESCo, I'm
> hopeful we'll be able to make use of proven tester status again once
> Bodhi 2.0 hits. Therefore I don't think we should take down all the
> documentation, kill the group, or stop accepting membership requests.
> But do be aware that, at present, proven tester status is basically
> meaningless.

Note that statistics are still gathered and that future changes might
depend on whether or not proventesters do a better job than average of
correctly tagging builds as good or bad.
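[The comparison Bruno describes could be computed along these lines. This
is an illustrative sketch only: the record format and every number below
are made up, not real Bodhi statistics.]

```python
from collections import defaultdict

# Illustrative sketch: compare how often each tester group's karma vote
# agreed with an update's eventual outcome. The records are invented
# placeholders, not real Bodhi data.
# Each record: (tester_group, karma_vote, update_turned_out_good)
records = [
    ("proventesters", +1, True),
    ("proventesters", -1, False),
    ("proventesters", +1, True),
    ("regular", +1, False),   # +1 on an update that later proved broken
    ("regular", +1, True),
    ("regular", -1, False),
]

def accuracy_by_group(records):
    """Fraction of karma votes that matched the update's outcome."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for group, karma, turned_out_good in records:
        total[group] += 1
        # A +1 on a good update, or a -1 on a bad one, counts as correct.
        if (karma > 0) == turned_out_good:
            correct[group] += 1
    return {g: correct[g] / total[g] for g in total}

print(accuracy_by_group(records))
# With this toy data, proventesters score 1.00 and regular testers about 0.67.
```

[In practice the hard part is deciding ground truth for "turned out good"
- e.g. whether an update was later unpushed or superseded by a fix - which
is presumably what the gathered statistics have to approximate.]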
[Test-Announce] Proven tester status
Just wanted to make note of the current status of proven testers.

As decided by FESCo late last year, proven tester feedback now has
exactly the same status as non-proven tester feedback, effectively
rendering it pointless to be a proven tester. I have added a note about
this to the proven tester page:
https://fedoraproject.org/wiki/Proven_tester

As noted there, and as discussed at meetings and with FESCo, I'm hopeful
we'll be able to make use of proven tester status again once Bodhi 2.0
hits. Therefore I don't think we should take down all the documentation,
kill the group, or stop accepting membership requests. But do be aware
that, at present, proven tester status is basically meaningless.

Of course, there'll be lots of discussion at QA meetings and FESCo
meetings before we decide whether and how to 'reactivate' the group.

Thanks all!
-- 
Adam Williamson
Fedora QA Community Monkey
IRC: adamw | Twitter: AdamW_Fedora | identi.ca: adamwfedora
http://www.happyassassin.net
_______________________________________________
test-announce mailing list
test-annou...@lists.fedoraproject.org
https://admin.fedoraproject.org/mailman/listinfo/test-announce