On 2/12/10 10:09 AM, Anne van Kesteren wrote:
> It would be interesting to know what exactly that test is. Optimizing
> for benchmarks is not always useful :-)

Point 1: Optimizing for benchmarks is, sadly, all that people report on. They don't even bother making up new benchmarks, so you can optimize for the small set of benchmarks currently in use. So from a marketing standpoint, it is in fact highly useful. This is very unfortunate.

Point 2: The reason that we (Gecko) initially added this behavior was that we ran into a number of sites doing things like:

  for (var x = 0; x < document.getElementsByTagName("foo").length; ++x) {
    doSomething(document.getElementsByTagName("foo")[x]);
  }

(a bit more obfuscated, but that's what it came down to). Now granted, this was 8 years ago. Maybe people are more careful now (though I doubt it). This was not a benchmark, but real-world usage.
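
For what it's worth, the version those sites presumably meant to write just hoists the lookup out of the loop (doSomething is the same placeholder as above; the variable name foos is mine):

  // Ask for the list once, then index into it.
  var foos = document.getElementsByTagName("foo");
  for (var x = 0; x < foos.length; ++x) {
    doSomething(foos[x]);
  }

The caching discussed below effectively rescues the original form by making the repeated getElementsByTagName calls cheap.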

> Also, what happens with garbage collection? Say some isolated piece of
> code does:
>
> x = document.getElementsByTagName("x")
> x.p = 2
>
> ... and then later on some other piece of code does:
>
> y = document.getElementsByTagName("x")
> w("p" in y)
>
> Depending on whether or not x got garbage collected you would get a
> different result.

I think there are two separate parts to the current Gecko caching optimization:

1)  Avoid walking the DOM over and over.  This optimization is a must,
    imo.  This can be implemented without being detectable from script,
    at some cost in indirection and hence performance, by having
    different NodeList objects all sharing the same object which
    actually keeps track of the nodes (see the sketch after this list).
2)  Avoid creating new JS objects for each call.  This optimization is
    in fact detectable by the technique above.  In Gecko's case this
    optimization is significant.  It looks like in Opera's and WebKit's
    case it may be less so, based on my timings.  We would be unlikely
    to change this behavior while it continues to have the performance
    benefits it currently has for us... but I'm willing to at least
    consider it if there are good enough reasons for it.
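
To make item 1 concrete, here is a minimal sketch in plain JS standing in for what an engine would do natively. All the names (Tracker, getByTagName, caches) are made up for illustration, real invalidation would hook DOM mutation rather than being called by hand, and none of this is Gecko's actual implementation:

  var caches = {};  // tag name -> shared tracker for that tag

  function Tracker(tagName) {
    this.tagName = tagName;
    this.nodes = null;  // computed lazily, thrown away on mutation
  }
  Tracker.prototype.getNodes = function () {
    if (this.nodes === null) {
      // The one expensive DOM walk, shared by every NodeList wrapper
      // that was handed out for this tag name.
      this.nodes = [];
      var self = this;
      (function walk(node) {
        for (var kid = node.firstChild; kid; kid = kid.nextSibling) {
          if (kid.nodeType === 1 &&
              kid.tagName.toLowerCase() === self.tagName) {
            self.nodes.push(kid);
          }
          walk(kid);
        }
      })(document);
    }
    return this.nodes;
  };
  Tracker.prototype.invalidate = function () {
    this.nodes = null;  // an engine would call this on DOM mutation
  };

  function getByTagName(tagName) {
    var key = tagName.toLowerCase();
    var tracker = caches[key] || (caches[key] = new Tracker(key));
    // Item 1 alone: hand back a *fresh* wrapper every call, so nothing
    // is detectable from script, but the DOM walk is still shared.
    return {
      get length() { return tracker.getNodes().length; },
      item: function (i) { return tracker.getNodes()[i] || null; }
    };
  }

Item 2's optimization would instead return the same wrapper object every time (until a GC throws it away), which is exactly what makes the expando test quoted above able to tell the difference.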

I should note that how much item #2 above helps on any given page depends heavily on GC policy, available RAM, etc., not just on object-creation costs. From a web programming point of view, less thrashing of the GC heap is generally a good thing if it's easy to do.

I should also note that right now, if you set expandos on things like document.links or the .style of some node, those expandos may or may not survive a GC, depending on the browser. I believe the only places where expandos are really interoperable are nodes and windows. Certainly those are the only places where Gecko guarantees their survival, last I checked.
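
Concretely, that means a probe like this (the expando name is arbitrary) can legitimately come out either way, depending on the browser and on whether a GC happened to run in between:

  var a = document.getElementsByTagName("div");
  a.expando = 2;
  var b = document.getElementsByTagName("div");
  // true if the browser handed back the same cached JS object and it
  // has not been garbage collected in the meantime; false otherwise
  alert("expando" in b);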

-Boris
