Thanks for linking, Nicholas, that was an interesting read.

With respect to SpiderMonkey, some, but not all, of the problems they identify apply.

On the issue of map explosion due to per-function prototype generation, we have a lesser variant of the problem they describe. TypeObjects in TI are parameterized on the prototype; however, Shapes (analogous to V8's maps) are not. So we'll see TypeObject creation, but not Shape creation, due to this issue.

So type fragmentation due to protos would exist, but the bigger problem of shape-tree fragmentation due to protos does not arise.
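To make the pattern concrete, here's a small hypothetical sketch (TypeScript-flavoured, made-up names, not taken from the paper) of per-call proto generation: the factory builds a fresh constructor, and hence a fresh prototype object, on every call, so each instance ends up with a distinct proto.

// Hypothetical: a factory that regenerates the constructor (and its
// .prototype) on every call, so every Point instance gets a distinct proto.
function makePointClass() {
  return class Point {
    constructor(public x: number, public y: number) {}
    norm(): number {
      return Math.sqrt(this.x * this.x + this.y * this.y);
    }
  };
}

const points: object[] = [];
for (let i = 0; i < 100; i++) {
  // New class object per iteration => new prototype per iteration.
  // As described above, each distinct proto would get its own TypeObject
  // in TI, but the instances still share one Shape lineage, since Shapes
  // aren't parameterized on the proto.
  const Point = makePointClass();
  points.push(new Point(i, i * 2));
}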

Secondly, they address the method constantization done by V8. Basically, V8 will speculatively move method bindings on objects into the Map (the analogue of our shape tree) and use that to inline calls. This leads to an explosion in Maps when method bindings change.

We don't suffer this issue at all, as we don't cache object values in the shape tree. We constantize functions by using TI, but TI only stores the _type_ of the function stored in a particular slot, and rebinding the method would at most add an extra type to the typeset associated with the method field. No shape duplication, no type duplication, nothing.
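For concreteness, a tiny hypothetical sketch of the rebinding case (again TypeScript-flavoured, made-up names):

// Hypothetical: rebind a method on an object, i.e. reassign the `render`
// property to a different function value.
interface Widget {
  label: string;
  render(): string;
}

const w: Widget = {
  label: "status",
  render() { return "plain: " + this.label; },
};

// As described above: V8's constantized-method scheme records the old
// function in the Map, so this reassignment forces a Map change, whereas
// for us it would at most add a second function type to the TI typeset
// for the `render` slot; the Shape is untouched, since Shapes track
// property layout, not property values.
w.render = () => "fancy: [" + w.label + "]";

console.log(w.render());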

Lastly, while this was interesting to read, I'm somewhat doubtful about the general benefit of continuing to look back at what will soon become legacy web code. Many of the causes of this kind of type instability are outdated programming idioms still in use by JS developers, idioms we should expect to die out as JS transitions from being a glue language for the web to being an app development language.

The use cases are different, the characteristics of programs written for those use cases are different, and past a point, optimizing for one will necessarily diminish the other. The relevant question is which use case benefits most from further optimization effort.

I tried an exercise a while back where I turned off _all_ JS optimization and browsed Gmail. The difference in usability was barely perceptible. On a "typical page", the JS ends up routing an input event to a small, finite set of content changes and then yielding control back to the browser. Making that happen in 0.5ms instead of 1ms is unlikely to have a large user impact.

On the other hand, we do run into JS bottlenecks in apps and games, where the JS code can spend a long time computing results and manipulating data structures. Whereas a 50% difference in the JS perf of "web code" might make no perceptible difference to the user, even a 10% perf regression in JS can show up easily in app (especially game) performance.

Given that, I question the utility of trying to optimize JS for a set of use cases that 1) aggressively fight optimization, and 2) don't show usability improvements commensurate with the perf improvements. We're moving towards an app world, and the changes to optimization suggested in the paper seem to be looking back at the history of JS rather than towards its future.

Kannan


On 2014-06-19, 8:56 PM, Nicholas Nethercote wrote:
Hi,

Interesting paper from PLDI 2014 which was last week:

http://iacoma.cs.uiuc.edu/iacoma-papers/pldi14.pdf

It's an analysis of V8, and describes how it is over-specialized for
benchmarks vs. real code and how they fixed it. Required reading for
some of the people on this list!

Nick
_______________________________________________
dev-tech-js-engine-internals mailing list
[email protected]
https://lists.mozilla.org/listinfo/dev-tech-js-engine-internals
