On Tue, Jul 7, 2009 at 3:25 PM, Oliver Hunt <oli...@apple.com> wrote:
> On Jul 7, 2009, at 3:01 PM, Mike Belshe wrote:
>
>> On Mon, Jul 6, 2009 at 10:11 AM, Geoffrey Garen <gga...@apple.com> wrote:
>>
>>>> So, what you end up with is that after a couple of years, the slowest
>>>> test in the suite is the most significant part of the score. Further,
>>>> I'll predict that the slowest test will most likely be the least
>>>> relevant test, because the truly important parts of JS engines were
>>>> already optimized. This has happened with SunSpider 0.9 - the regex
>>>> portions of the test became the dominant factor, even though they were
>>>> not nearly as prominent in the real world as they were in the benchmark.
>>>> This leads to implementors optimizing for the benchmark - and that is
>>>> not what we want to encourage.
>>>
>>> How did you determine that regex performance is "not nearly as prominent
>>> in the real world?"
>>
>> For a while regex was 20-30% of the benchmark on most browsers, even
>> though it didn't consume 20-30% of the time that browsers spent inside
>> javascript.
>
> You're right, but you're ignoring that for a long time before then it was
> consuming much, much less. If everything else gets faster, then the
> proportion of time spent in the area that is not improved will increase,
> potentially by quite a lot.

Ok.

> On the topic of use in the real world -- jQuery at least runs a regex over
> the result of most (all?) XHR transactions to see if they might be XML,
> and jQuery seems to be increasingly widely used, and frequently in
> conjunction with XHR.

Ok.

> What you seem to think is better would be to repeatedly update SunSpider
> every time that something gets faster, ignoring entirely that the value in
> SunSpider is precisely that it has not changed.

Not quite what I'm saying :-)  I'd like benchmarks to:

a) have meaning even as browsers change over time
b) evolve: as new areas of JS (or whatever) become important, the benchmark
   should have facilities to include that.

Fair? Good? Bad?
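[Oliver's point about proportions is just arithmetic on a fixed suite. A minimal sketch with made-up numbers (hypothetical, not measured SunSpider data) showing how an unoptimized component's share of the score grows when everything else speeds up:]

```javascript
// Suppose a fixed suite spends 90 ms in "everything else" and 10 ms in
// regex. These numbers are invented for illustration only.
function regexShare(otherMs, regexMs) {
  // Fraction of the total benchmark time attributable to regex.
  return regexMs / (otherMs + regexMs);
}

console.log(regexShare(90, 10)); // regex is 10% of the score

// Now the engine makes everything *except* regex 3x faster. Regex itself
// is unchanged, yet its share of the score jumps:
console.log(regexShare(90 / 3, 10)); // regex is now 25% of the score
```

[So a component can dominate the benchmark late in its life without ever having regressed, which is consistent with both readings in the thread: it signals an under-optimized area, and it also skews the score's weighting.]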
> If we see one section of the test taking dramatically longer than another,
> then we can assume that we have not been paying enough attention to
> performance in that area; this is how we originally noticed just how slow
> the regex engine was. If we had been continually rebalancing the test over
> and over again, we would not have noticed this or other areas where
> performance could be (and has been) improved. It would also break
> SunSpider as a means for tracking and/or preventing performance
> regressions.

Of course, using old versions of the benchmark for regression testing is
not prohibited by iterating a benchmark.

Mike

> --Oliver
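[Mike's last point - that an evolving benchmark can coexist with regression tracking by pinning frozen versions - can be sketched as follows. All names and numbers here are hypothetical, not part of any real harness:]

```javascript
// Baseline total time (ms) recorded per frozen suite version.
const baselines = new Map();

function recordBaseline(version, totalMs) {
  // Only the first run of a frozen version establishes its baseline.
  if (!baselines.has(version)) baselines.set(version, totalMs);
}

// A run regresses if it is more than `tolerance` (default 5%) slower than
// the baseline for that same frozen suite version.
function regressed(version, totalMs, tolerance = 0.05) {
  const base = baselines.get(version);
  if (base === undefined) return false; // no baseline yet
  return totalMs > base * (1 + tolerance);
}

recordBaseline("suite-0.9", 1000);
recordBaseline("suite-1.0", 800); // a newer, rebalanced suite coexists

console.log(regressed("suite-0.9", 1020)); // within 5% of baseline: false
console.log(regressed("suite-0.9", 1100)); // 10% slower: true
```

[The old version stays byte-for-byte frozen for regression detection, while newer versions can rebalance toward current real-world workloads.]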
_______________________________________________
webkit-dev mailing list
webkit-dev@lists.webkit.org
http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev