Tom Gregory wrote:
Given the results of your testing, do you intend to propose a patch for Prototype?

I was planning on it, but I have only tested my own problem domain; I am not sure how it would compare across a variety of environments. As the number of elements increases, libraries like Event:Selectors and behaviour.js get really slow because of the searching for class names.

Narrowing down the search (via an id or tag name) provides the most benefit, but in my tests reimplementing Element.hasClassName() with a regular expression also provides a small benefit, because each comparison is faster (at least for my web pages).

But I haven't tested under which situations it is faster. Is it faster only when elements have many class names, or even when an element has a single class name? Could hasClassName() do a naive string comparison first, and only fall back to a RegExp if that fails (since most elements either have no class or only one)? Something like:

element.className == className ||
        element.className.match(new RegExp("(^|\\s)" +
        className + "(\\s|$)"))

There should be a performance gain from caching the RegExp object (so we don't have to recreate the RegExp every time we do a comparison), but how much of a gain, and how much would it increase the code's complexity?
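A minimal sketch of that two-path idea with a RegExp cache might look like the following. The cache object and helper name here are my own invention, not Prototype's API:

```javascript
// Hypothetical sketch: fast path for the common single-class case,
// with a per-class-name RegExp cache so the RegExp is built once.
// classNameRegExpCache and this standalone hasClassName are made-up
// names for illustration, not Prototype's actual implementation.
var classNameRegExpCache = {};

function hasClassName(element, className) {
  var elementClassName = element.className;
  if (!elementClassName) return false;
  // Fast path: the element has exactly this one class name.
  if (elementClassName == className) return true;
  // Slow path: whitespace-delimited token match, cached per class name.
  var re = classNameRegExpCache[className] ||
    (classNameRegExpCache[className] =
      new RegExp("(^|\\s)" + className + "(\\s|$)"));
  return re.test(elementClassName);
}
```

Only the `.className` property is touched, so the fast path costs a single string comparison in the common case.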

Also, perhaps the optimization really needs to happen in libraries like behaviour.js and Event:Selectors. If your rules look something like this:

var Rules = {
        '.foo': function(e) {},
        '.bar': function(e) {},
        '.baz': function(e) {}
};

You are doing three full document traversals when you only need one, and it gets worse as you add more rules. Perhaps those libraries should do a single document traversal and index all the information. That increases memory usage but should speed up cases like this. But then if my rules are more targeted:

var Rules = {
        '#section p.foo': function(e) {},
        '#another-section li.bar': function(e) {},
        '#yet_another a.baz': function(e) {}
};

In this case we only traverse parts of the document, so a pre-traversal that indexes everything would be overkill.
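For the class-only rule sets, the one-traversal idea could be sketched roughly like this. The function name and index structure are made up for illustration; this is not Behaviour.js or Event:Selectors code:

```javascript
// Hypothetical sketch: walk the document once, bucket elements by
// class name, then dispatch every ".class" rule from the index
// instead of re-traversing the document per rule.
function applyClassRules(root, rules) {
  var index = {};
  var all = root.getElementsByTagName("*");
  for (var i = 0; i < all.length; i++) {
    var classes = all[i].className.split(/\s+/);
    for (var j = 0; j < classes.length; j++) {
      if (!classes[j]) continue; // skip empty tokens
      (index[classes[j]] || (index[classes[j]] = [])).push(all[i]);
    }
  }
  for (var selector in rules) {
    // Assumes every selector is a bare ".className"; strip the dot.
    var matches = index[selector.substring(1)] || [];
    for (var k = 0; k < matches.length; k++) rules[selector](matches[k]);
  }
}
```

The trade-off is exactly the one described above: one traversal plus extra memory for the index, which only pays off when many rules each match broadly.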

Optimization is obviously needed, but I think comprehensive benchmarks should be done before patches are developed, to determine what provides the most benefit in the most situations with the least complexity added to the code. Without that we are only partially guessing, and we may make Prototype more complex than it needs to be.

Even once optimization is done there are a few other things to consider:

* There will probably still be special cases that are slow. That information needs to be published so that people who fall into those special cases know how to work their way out.

* Most of this optimization is needed because we are doing so much work in JavaScript. Is there a better way? XPath has the same goal as CSS selectors; is there native browser support for it? Perhaps it would be a better technology. The browsers also have to do this same matching work when applying CSS styles; is there any way they could expose that functionality, so that native code does the searching instead of slow interpreted code? I think an element-selector library is extremely useful, but implementing it at the JavaScript level seems to be the wrong implementation.
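For what it's worth, Mozilla-based browsers do expose a native XPath engine through document.evaluate(). A hedged sketch of what a class-name search might look like on top of it; classToXPath and selectByClass are invented names, and the selector-to-XPath mapping is my own:

```javascript
// Hypothetical sketch: let the browser's native XPath engine do the
// searching. Only covers the ".className" case; a real selector
// library would need a full CSS-to-XPath translation.
function classToXPath(className) {
  // Match any element whose class attribute contains className
  // as a whitespace-delimited token.
  return "//*[contains(concat(' ', @class, ' '), ' " +
         className + " ')]";
}

function selectByClass(className) {
  // document.evaluate is Mozilla's DOM Level 3 XPath entry point;
  // it is not available in all browsers.
  var result = document.evaluate(classToXPath(className), document,
      null, XPathResult.ORDERED_NODE_SNAPSHOT_TYPE, null);
  var elements = [];
  for (var i = 0; i < result.snapshotLength; i++)
    elements.push(result.snapshotItem(i));
  return elements;
}
```

A library could feature-test for document.evaluate and fall back to the JavaScript traversal where it is missing.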

Eric

_______________________________________________
Rails-spinoffs mailing list
[email protected]
http://lists.rubyonrails.org/mailman/listinfo/rails-spinoffs