Thanks for taking the time to write this up!  (I was on PTO for the week, so 
I'm just seeing this now.)  It seems like there haven't been too many comments, 
which I'll assume means mostly agreement from the group?

> My proposal would be something like:
> -----------------------------------------------------------------------
> Revisions causing regressions on the major benchmarks (Sunspider, Octane,
> Kraken) are not allowed at all.
> Except: if the regressions are listed in the bug report and the reviewer
> still gave an r+ knowing about them. (E.g., the common case is when a
> benchmark as a whole improves but some subtests show regressions.)
> 
> Normal procedure (i.e. only one regression):
> 1) The owner is notified and the regressions are listed in the bug
> 2) The owner should immediately acknowledge the regressions and explain
> their plan of attack.
> 3) The owner gets 2 working days to fix them, by one of:
> 3.1) Backout
> 3.2) Fix in tree
> 3.3) Request r+ to allow the regressions (only allowed in extreme cases!)
> Note: if the process takes or will take longer than 2 working days, the patch
> should be backed out.
> 
> Allowed actions in exceptional cases (multiple regressions, just before merge
> …)
> 1) The revision is backed out without notifying the person beforehand
> 2) The backout and the list of regressions are noted in the bug, plus a
> reason why a quick backout was done instead of the normal procedure.
> Note: this shouldn’t be frowned upon. If the regressions had been listed in
> the bug before r+, the patch wouldn’t have been backed out...

Usually we make exceptions for big patches (since it is hard enough to land 
these already in the face of a fast-moving tree), but it seems like the 
proposed rules here still provide leeway but make it a bit more structured than 
before.  In particular, landing a big patch into a dogpile or right before an 
uplift is usually a bad idea anyway.

> As an extra discussion point, I would like to have some benchmarks
> classified as more important, to discourage regressions on them. Not like
> our major benchmarks, where no regressions are allowed at all, but
> benchmarks where regressions are discouraged... A few benchmarks have
> proven to be important even though they are not included in the major
> benchmark set. One of them is e.g. the jpeg2000 benchmark. Possibly our
> performance there is as important as on the pdf.js benchmark.

It would be good to be explicit about this: elevate these benchmarks from 
'Breakdown >> Assorted tests' to 'Breakdown >> Other important' on awfy 
(i.e., create a new directory in awfy: benchmarks/misc/tests/important).

When this discussion settles (if it hasn't already), could you post the 
guidelines on 
https://wiki.mozilla.org/index.php?title=JavaScript:SpiderMonkey:Regression_Policy ?

Cheers,
Luke
_______________________________________________
dev-tech-js-engine-internals mailing list
[email protected]
https://lists.mozilla.org/listinfo/dev-tech-js-engine-internals
