On Jul 7, 2009, at 8:50 PM, Geoffrey Garen wrote:
I also don't buy your conclusion -- that if regular expressions
account for 1% of JavaScript time on the Internet overall, they
need not be optimized.
I never said that.
You said the regular expression test was most likely... the least
On Mon, Jul 6, 2009 at 10:11 AM, Geoffrey Garen gga...@apple.com wrote:
So, what you end up with is that, after a couple of years, the slowest test in
the suite is the most significant part of the score. Further, I'll predict
that the slowest test will most likely be the least relevant test,
As I said, we can argue the mix of tests forever, but it is not useful.
Yes, I would test using top-100 sites. In the future, if a benchmark
claims to have a representative mix, it should document why. Right?
Are you saying that you did see Regex as being such a high percentage of
JavaScript
On Sat, Jul 4, 2009 at 3:27 PM, Maciej Stachowiak m...@apple.com wrote:
On Jul 4, 2009, at 11:47 AM, Mike Belshe wrote:
I'd like to understand what's going to happen with SunSpider in the future.
Here is a set of questions and criticisms. I'm interested in how these can
be addressed.
I'm more verbose than Mike, but it seems like people are talking past each
other.
On Tue, Jul 7, 2009 at 3:25 PM, Oliver Hunt oli...@apple.com wrote:
If we see one section of the test taking dramatically longer than another
then we can assume that we have not been paying enough attention to
On Jul 7, 2009, at 4:01 PM, Mike Belshe wrote:
I'd like benchmarks to:
a) have meaning even as browsers change over time
b) evolve: as new areas of JS (or whatever) become important,
the benchmark should have facilities to include that.
Fair? Good? Bad?
I think we can't rule
What you seem to think is better would be to repeatedly update
SunSpider every time that something gets faster, ignoring entirely
that the value in SunSpider is precisely that it has not changed.
Not quite what I'm saying :-)
I'd like benchmarks to:
a) have meaning even as browsers
On Tue, Jul 7, 2009 at 4:20 PM, Maciej Stachowiak m...@apple.com wrote:
On Jul 7, 2009, at 4:01 PM, Mike Belshe wrote:
I'd like benchmarks to:
a) have meaning even as browsers change over time
b) evolve: as new areas of JS (or whatever) become important, the
benchmark should have
On Jul 7, 2009, at 4:19 PM, Peter Kasting wrote:
For example, the framework could compute both sums _and_ geomeans,
if people thought both were valuable.
That's a plausible thing to do, but I think there's a downside: if you
make a change that moves the two scores in opposite directions,
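The downside Maciej alludes to can be made concrete with made-up numbers (assumed timings, not from any real run): because a sum weights absolute milliseconds while a geometric mean weights ratios, one change can improve one score and regress the other.

```python
from math import prod

def geomean(times_ms):
    """Geometric mean of per-test times: sensitive to ratios, not to scale."""
    return prod(times_ms) ** (1.0 / len(times_ms))

# Made-up timings: a 4x regression on a tiny test plus a 10 ms absolute win
# on a large one. The summed time improves; the geometric mean gets worse.
before = [1.0, 200.0]
after = [4.0, 190.0]

print(sum(after) < sum(before))          # True  -- summed score improved
print(geomean(after) < geomean(before))  # False -- geomean score regressed
```

Reporting both scores would then leave such a change looking simultaneously like a win and a loss.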
On Tue, Jul 7, 2009 at 5:08 PM, Maciej Stachowiak m...@apple.com wrote:
On Jul 7, 2009, at 4:19 PM, Peter Kasting wrote:
For example, the framework could compute both sums _and_ geomeans, if
people thought both were valuable.
That's a plausible thing to do, but I think there's a downside:
On Tue, Jul 7, 2009 at 7:01 PM, Maciej Stachowiak m...@apple.com wrote:
On Jul 7, 2009, at 6:43 PM, Mike Belshe wrote:
(There are other benchmarks that use summation, for example iBench, though
I am not sure these are examples of excellent benchmarks. Any benchmark that
consists of a
Hi,
Can future versions
of the SunSpider driver be made so that they won't become irrelevant over
time?
I feel the weighting is more of an issue here than the total runtime.
Eventually some tests become dominant, and the gain (or loss) on them
almost determines the final results.
Besides,
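The dominance effect described above can be sketched with hypothetical numbers (illustrative subtest times only, not actual SunSpider data): under a summed score, whichever test resists optimization ends up carrying most of the weight.

```python
# Hypothetical per-test times in ms -- illustrative, not real SunSpider data.
# At release every test costs about the same; after two years of engine
# optimization, one stubborn test is most of the summed score.
times_at_release = {"string": 100, "math": 100, "regexp": 100, "bitops": 100}
times_two_years_later = {"string": 10, "math": 5, "regexp": 80, "bitops": 8}

def share_of_slowest(times_ms):
    """Fraction of the summed score contributed by the slowest test."""
    return max(times_ms.values()) / sum(times_ms.values())

print(share_of_slowest(times_at_release))       # 0.25 -- equal weighting
print(share_of_slowest(times_two_years_later))  # ~0.78 -- one test dominates
```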
On Jul 6, 2009, at 10:11 AM, Geoffrey Garen wrote:
So, what you end up with is that, after a couple of years, the slowest
test in the suite is the most significant part of the score.
Further, I'll predict that the slowest test will most likely be the
least relevant test, because the truly
On Sat, Jul 4, 2009 at 3:27 PM, Maciej Stachowiak m...@apple.com wrote:
On Jul 4, 2009, at 11:47 AM, Mike Belshe wrote:
I'd like to understand what's going to happen with SunSpider in the future.
Here is a set of questions and criticisms. I'm interested in how these can
be addressed.
Maciej Stachowiak wrote:
I think the pauses were large in an attempt to get stable, repeatable
results, but are probably longer than necessary to achieve this. I agree
with you that the artifacts in balanced power mode are a problem. Do
you know what timer thresholds avoid the effect? I think
On 4-Jul-09, at 2:47 PM, Mike Belshe wrote:
#2: Use of summing as a scoring mechanism is problematic
Unfortunately, the sum-based scoring techniques do not withstand
the test of time as browsers improve. When the benchmark was first
introduced, each test was equally weighted and
I'd like to understand what's going to happen with SunSpider in the future.
Here is a set of questions and criticisms. I'm interested in how these can
be addressed.
There are 3 areas I'd like to see improved in
SunSpider, some of which we've discussed before:
#1: SunSpider is currently version
On Sat, Jul 4, 2009 at 11:47 AM, Mike Belshe m...@belshe.com wrote:
#3: The SunSpider harness has a variance problem due to CPU power savings
modes.
This one worries me because it decreases the consistency/reproducibility of
test scores and makes it harder to compare engines or to track one
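One way to quantify the reproducibility problem is the coefficient of variation across repeated runs (a sketch, not part of the actual SunSpider harness; the timings below are invented):

```python
from statistics import mean, stdev

def cv(samples_ms):
    """Coefficient of variation: run-to-run noise relative to the mean."""
    return stdev(samples_ms) / mean(samples_ms)

# Invented timings: the second series mimics a CPU that clocks down during
# the pauses between short test runs in a balanced power-savings mode.
steady = [102, 99, 101, 100, 98]
throttled = [100, 160, 98, 155, 102]

print(round(cv(steady), 3))     # ~1.6% noise
print(round(cv(throttled), 3))  # ~26% noise -- swamps typical engine deltas
```

When the noise is that large relative to the differences between engines, single-run comparisons stop being meaningful.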
On Jul 4, 2009, at 11:47 AM, Mike Belshe wrote:
I'd like to understand what's going to happen with SunSpider in the
future. Here is a set of questions and criticisms. I'm interested
in how these can be addressed.
There are 3 areas I'd like to see improved in SunSpider, some of
which
On Jul 4, 2009, at 1:06 PM, Peter Kasting wrote:
On Sat, Jul 4, 2009 at 11:47 AM, Mike Belshe m...@belshe.com wrote:
#3: The SunSpider harness has a variance problem due to CPU power
savings modes.
This one worries me because it decreases the consistency/reproducibility of test scores