On Jul 7, 2009, at 7:02 PM, Mike Belshe wrote:

On Tue, Jul 7, 2009 at 5:08 PM, Maciej Stachowiak <[email protected]> wrote:

- property access, involving at least some polymorphic access patterns
- method calls
- object-oriented programming patterns
- GC load
- programming in a style that makes significant use of closures
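
For illustration, a micro-benchmark touching several of these at once (polymorphic property access, method calls, an OO pattern) might look like the following. This is a hypothetical sketch, not code from either suite:

    // Hypothetical sketch: a call site whose receiver alternates between
    // two object shapes, exercising polymorphic property access and
    // method dispatch.
    function Point2(x, y) { this.x = x; this.y = y; }
    Point2.prototype.norm = function () { return this.x * this.x + this.y * this.y; };

    function Point3(x, y, z) { this.x = x; this.y = y; this.z = z; }
    Point3.prototype.norm = function () { return this.x * this.x + this.y * this.y + this.z * this.z; };

    var points = [];
    for (var i = 0; i < 1000; i++)
        points.push(i % 2 ? new Point2(i, i) : new Point3(i, i, i));

    var total = 0;
    for (var j = 0; j < 100; j++)
        for (var k = 0; k < points.length; k++)
            total += points[k].norm();   // polymorphic call site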

This sounds like good stuff to me.  A few more thoughts:
- We also see sites delivering huge chunks of JS code that then goes largely unused. Perhaps a parsing/loading test would be interesting.

I agree this is a common pattern. I think the string-unpack-code test (and to a lesser extent string-tagcloud) beat on parsing pretty heavily.
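
A parsing-focused test could generate a large body of source and time only its compilation, with most of the code never executed. A minimal sketch (all names hypothetical; real engines may defer some compilation lazily):

    // Hypothetical sketch: stress the parser with a large, mostly-unused script.
    var pieces = [];
    for (var i = 0; i < 5000; i++)
        pieces.push("function f" + i + "(a, b) { return a * " + i + " + b; }");
    var source = pieces.join("\n");

    var start = new Date();
    var compiled = new Function(source + "\nreturn f0;"); // parse the whole blob
    var parseTime = new Date() - start;
    compiled();   // only a sliver of the parsed code ever runs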

- Object cloning. We should verify this is a useful test, but I believe template engines often use a pattern of combining templates with JSON data to clone JS objects. This may be more of a DOM-level test, but a JS equivalent should be doable.

I'd like to hear more about this.
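
If it's the pattern I'm thinking of, it looks roughly like the following; this is a hypothetical sketch, not code from any real template engine:

    // Sketch: a template object is serialized once, then deep-cloned per
    // record by round-tripping through JSON (JSON.parse/stringify here are
    // native ES5; pre-ES5 code would use a library such as json2.js).
    var template = { tag: "div", className: "row", children: [], data: null };
    var serialized = JSON.stringify(template);

    function cloneFromTemplate(record) {
        var node = JSON.parse(serialized);   // deep clone via parse
        node.data = record;
        return node;
    }

    var rows = [];
    for (var i = 0; i < 10000; i++)
        rows.push(cloneFromTemplate({ id: i }));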

- JSON performance

string-tagcloud parses a giant JSON string, though not using the native JSON parsing facilities of ES5. Parsing of many shorter JSON expressions may be a useful test to add.
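
Such a test might parse many short strings and compare the eval-based approach most libraries use in practice against native JSON.parse where available. A hypothetical sketch with made-up inputs:

    // Sketch: parse many short JSON expressions.
    var inputs = [];
    for (var i = 0; i < 10000; i++)
        inputs.push('{"id":' + i + ',"name":"item' + i + '","tags":["a","b"]}');

    function parseWithEval(s) { return eval("(" + s + ")"); }   // common library fallback
    var parse = (typeof JSON !== "undefined" && JSON.parse) ? JSON.parse : parseWithEval;

    var count = 0;
    for (var j = 0; j < inputs.length; j++)
        count += parse(inputs[j]).tags.length;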

- Tests of prototype chain usage (basically the counter-programming-style to closures)

There is some use of this but not a deep focus. Agreed it's good to test more.
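
To make the contrast concrete, here is the same counter written in both styles; a hypothetical sketch:

    // Closure style: state captured in the environment; each instance
    // carries its own function objects.
    function makeCounter() {
        var n = 0;
        return { next: function () { return ++n; } };
    }

    // Prototype style: state on the instance, behavior shared through the
    // prototype chain, so every call walks a prototype lookup.
    function Counter() { this.n = 0; }
    Counter.prototype.next = function () { return ++this.n; };

    var a = makeCounter();
    var b = new Counter();
    for (var i = 0; i < 100000; i++) { a.next(); b.next(); }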


If I were to characterize the SunSpider and V8 benchmark tests, the SunSpider tests are generally short, focused micro-benchmarks, while the V8 tests are generally larger tests composed of real code.

The SunSpider tests are a mix. For example, 3d-raytrace, string-tagcloud, and the crypto tests are quite substantial examples of real code solving a real problem. Some, like bitops-bits-in-byte, are very focused. That particular test came from a developer bug report and apparently originates in real game code, though it's a tiny part of the game; it used to make JavaScriptCore look really bad, which is why we included it.
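
From memory, the kernel is along these lines (a sketch, not the exact SunSpider source):

    // Count the set bits in every byte value, many times over.
    function bitsInByte(b) {
        var count = 0;
        for (var mask = 1; mask < 0x100; mask <<= 1)
            if (b & mask) count++;
        return count;
    }

    var total = 0;
    for (var i = 0; i < 1000; i++)
        for (var b = 0; b < 256; b++)
            total += bitsInByte(b);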

One thing I have noticed about the V8 tests is that they include a lot of content translated from other programming languages, either automatically or by hand.

Both types of test offer unique advantages. The micro-benchmarks provide a way to create lots of small tests, each covering a particular pattern. The larger tests are less focused, but they require more engine features to work well together to score highly. TraceMonkey is fairly new, and with its tracing approach it is not surprising that its initial traces can optimize the micro-benchmarks but not fully trace larger code like what is found in the V8 benchmark. In my opinion, both sets of tests are useful.

I do think TraceMonkey shows a bigger improvement on some categories of very trivial tests than on general code. But it seems to do better on most code than the V8 benchmark would indicate. I think this is because it gives the greatest benefit to operations other than function calls and property access, and those are the most heavily tested operations in the V8 benchmark.
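
To illustrate the distinction with a hypothetical sketch: a tracing JIT shines on a tight, type-stable loop, while call- and property-heavy code of the kind the V8 benchmark stresses is harder to cover with a single trace:

    // Trace-friendly: one hot loop, monomorphic arithmetic.
    function traceFriendly(n) {
        var sum = 0;
        for (var i = 0; i < n; i++)
            sum += (i * 3) & 0xffff;
        return sum;
    }

    // Call-heavy: a function call plus property access on every iteration.
    function callHeavy(objs) {
        var sum = 0;
        for (var i = 0; i < objs.length; i++)
            sum += objs[i].value();
        return sum;
    }

    var objs = [];
    for (var i = 0; i < 1000; i++)
        (function (v) { objs.push({ value: function () { return v; } }); })(i);

    traceFriendly(1000000);
    callHeavy(objs);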

Regards,
Maciej
