On 05/29/2013 06:09 AM, Hannes Verschore wrote:
My proposal would be something like:
-----------------------------------------------------------------------
Revisions causing regressions on the major benchmarks (SunSpider, Octane,
Kraken) are not allowed at all.
Except: if the regressions are listed in the bug report and the reviewer still
gave an r+, knowing about these regressions. (The common case where this
happens is when a benchmark as a whole improves, but some subtests show
regressions.)
Normal procedure (i.e. only one regression):
1) The owner is notified and the regressions are listed in the bug
2) The owner should immediately acknowledge the regressions and explain his
plan of attack.
3) The owner gets 2 working days to fix them:
3.1.) Backout
3.2.) Fix in tree
3.3.) Request r+ to allow regressions (only allowed in extreme cases!)
Note: if the process takes or will take longer than 2 working days, the patch
should be backed out.
Allowed actions in exceptional cases (multiple regressions, just before merge …)
1) The revision is backed out without notifying the person beforehand
2) The backout and the list of regressions are noted in the bug, together with
a reason why a quick backout was done instead of the normal procedure
Note: this shouldn't be frowned upon. If the regressions had been listed in the
bug before the r+, the patch wouldn't have gotten backed out...
-------------------------------------------------------------------------
As an extra discussion point, I would like to have some benchmarks classified
as more important, to discourage regressing them. Not like our major benchmarks
where "no regressions allowed", but benchmarks where we discourage
regressions... A few benchmarks have proven to be important even though they
are not included in the major benchmark set. One of them is e.g. the jpeg2000
benchmark. Possibly our performance there is as important as on the pdf.js
benchmark.
Let the discussions begin ...
I think the recent trouble was mostly that we got multiple regressions in
one range of commits.
I don't know if one computer has enough resources for that, but it might be
interesting if the continuous integration system were able to bisect the
performance changes for us: either by ignoring changes outside the
JavaScript engine directory and scheduling one commit after the other, or
by using idle time to produce additional results when a non-noise bump is
observed in the results.
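To make the idea concrete, here is a minimal sketch of such an automated
bisection, assuming a hypothetical `measure` callback that builds a given
commit and returns a benchmark score (higher is better); in a real CI setup
it would shell out to the build and harness scripts, and the noise threshold
would come from observed run-to-run variance:

```python
def bisect_regression(commits, measure, noise=0.02):
    """Return the first commit in `commits` whose score drops more than
    `noise` (relative) below the score of the first, known-good commit.
    Assumes a single regression inside the range."""
    baseline = measure(commits[0])
    lo, hi = 0, len(commits) - 1  # commits[lo] is known good
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if measure(commits[mid]) < baseline * (1.0 - noise):
            hi = mid  # regression is at or before mid
        else:
            lo = mid  # still good up to mid
    return commits[hi]
```

With multiple regressions in one range, as in the recent trouble, the scheme
would have to re-run on each sub-range after a culprit is found.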
One thing that I want to clarify is the importance of each platform, in
addition to the importance of each benchmark. We should not ignore B2G,
even if it is two clicks away from the main page.
--
Nicolas B. Pierron
_______________________________________________
dev-tech-js-engine-internals mailing list
[email protected]
https://lists.mozilla.org/listinfo/dev-tech-js-engine-internals