Thanks a lot, Chase and all. That's really awesome.

yours,
anton.

On Fri, Oct 16, 2009 at 11:35 AM, Chase Phillips <ch...@chromium.org> wrote:
> Chromium's buildbot now provides automated monitoring of performance test
> regressions.  This new monitoring system alerts committers and sheriffs on
> the buildbot waterfall to regressions and speedups in select performance
> tests.
>
> What this means if you're a committer:
> Starting today, when you land a CL that affects Chromium's performance,
> tests that measure performance could turn red or orange because of your
> CL's performance impact.  If you or anyone else (e.g., the sheriff)
> notices a perf regression from your change, please revert it ASAP.
> Performance is a key feature in Chromium, and regressing performance robs
> Chromium -- and your fellow developers -- of the work that's gone into
> making that feature a reality.
> In some cases reverting might not be straightforward, but reverting early
> ensures Chromium's performance doesn't regress while you get the space
> and time necessary to address your CL's unexpected performance impact
> before landing again.
>
> What this means if you update reference builds:
> Changing a reference build for a platform now also requires reconfiguring
> all of the expected performance values for that platform.  See the
> Performance Test Plots page for more info.  In the best case this will
> simply mean setting each affected value to 0 with the appropriate variances.
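>
> As a rough sketch, resetting an expectation might look like the following
> (the trace names, field names, and helper below are hypothetical
> illustrations, not the actual buildbot configuration format; see the
> Performance Test Plots page for the real details):
>
>     # Hypothetical expectations for one platform, keyed by trace name.
>     # Field names are illustrative only.
>     expectations = {
>         "xp-release/morejs/times/t": {"expected": 210.5, "variance": 5.0},
>         "xp-release/moz/times/t":    {"expected": 780.0, "variance": 12.0},
>     }
>
>     def reset_after_reference_build_update(expectations):
>         # Zero each expected value so the next run re-baselines the trace,
>         # keeping each trace's variance setting in place.
>         for entry in expectations.values():
>             entry["expected"] = 0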
>
> Not all perf test failures are perf regressions:
> Performance tests can and will still fail if the test harness or slave
> itself fails while running a test.  When you're diagnosing why a test is
> red, look for the text "PERF_REGRESS" or "PERF_IMPROVE" in the test's
> status detail, which denotes a regression or speedup.  (An example of an
> XP Perf morejs regression.)  If you don't see the regress/improve line,
> look at the test's output to see whether the failure was due to a crash
> in the harness or some other test component.
>
> False positives can happen:
> Most performance test results vary between runs.  The new perf monitoring
> system comes with independent variance settings that keep a test from
> triggering a failure or warning unless its results exceed those variance
> limits.  Should a test run produce abnormal results, though, a false
> positive will occur.  We will do our best to keep the noise down.  The
> good news is that in the handful of times noise has appeared so far, the
> cause was later found to be a subtle-yet-very-real underlying performance
> change.
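>
> As a rough illustration of how the variance limits gate alerts (a
> hypothetical function, not the buildbot's actual implementation; it
> assumes lower is better and an absolute variance per trace):
>
>     def classify_result(measured, expected, variance):
>         # Flag a result only when it falls outside the window
>         # [expected - variance, expected + variance]; anything
>         # inside the window passes quietly.
>         if measured > expected + variance:
>             return "PERF_REGRESS"
>         if measured < expected - variance:
>             return "PERF_IMPROVE"
>         return None
>
>     # Example: a 210.5 ms expectation with a 5.0 ms variance.
>     print(classify_result(220.0, 210.5, 5.0))  # PERF_REGRESS
>     print(classify_result(212.0, 210.5, 5.0))  # None (within limits)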
>
> Where to learn more:
> Read Chromium's Performance Test Plots docs for details, including a
> description of how we configure the expectations the system uses.  The
> number of performance expectations will increase over time -- we're
> currently watching 6 traces and plan to add another 10 soon -- so feel
> free to familiarize yourself with the new system.  And feel free to mail
> me (ch...@chromium.org) if you have any questions.
> Thanks to Nicolas, Marc-Antoine, Pam, Darin, and Steven for the help and
> input they provided to me while adding this feature.
> Chase
