Re: Shrinking existing libraries as a goal
I wholeheartedly support this proposal. +1000.

From: Yehuda Katz wyc...@gmail.com
Date: Wed, 16 May 2012 00:32:51 -0400
To: public-webapps@w3.org
Subject: Shrinking existing libraries as a goal
Resent-From: public-webapps@w3.org
Resent-Date: Wed, 16 May 2012 04:33:57 +0000

In the past year or so, I've participated in a number of threads that were implicitly about adding features to browsers that would shrink the size of existing libraries. Inevitably, those discussions end up litigating whether making it easier for jQuery (or some other library) to do the task is a good idea in the first place.

While those discussions are extremely useful, I feel it would be useful for a group to focus on proposals that would shrink the size of existing libraries with the implicit assumption that it was a good idea. From some basic experimentation I've personally done with the jQuery codebase, I feel that such a group could rather quickly identify enough areas to make a much smaller version of jQuery that ran on modern browsers plausible. I also think that having data to support or refute that assertion would be useful, as it's often made casually in meta-discussions.

If there is a strong reason that people feel that a focused effort to identify ways to shrink existing popular libraries in new browsers would be a bad idea, I'd be very interested to hear it.

Thanks so much for your consideration,

Yehuda Katz
jQuery Foundation
(ph) 718.877.1325
Re: Shrinking existing libraries as a goal
On Wed, 16 May 2012 06:32:51 +0200, Yehuda Katz wyc...@gmail.com wrote:

> In the past year or so, I've participated in a number of threads that were implicitly about adding features to browsers that would shrink the size of existing libraries. Inevitably, those discussions end up litigating whether making it easier for jQuery (or some other library) to do the task is a good idea in the first place.

I think allowing libraries to shrink is a priori a good idea. The details become particularly devilish when the impact of a proposed change is unevenly distributed, making things easier for some and harder for others.

> While those discussions are extremely useful, I feel it would be useful for a group to focus on proposals that would shrink the size of existing libraries with the implicit assumption that it was a good idea.

From a webapps perspective that would be very useful if it led to better feedback on (or ideally more tests for) specs in development.

> If there is a strong reason that people feel that a focused effort to identify ways to shrink existing popular libraries in new browsers would be a bad idea, I'd be very interested to hear it.

No, having a group who *do* this sounds fine. Looking forward to the results.

cheers

--
Charles 'chaals' McCathieNevile  Opera Software, Standards Group
je parle français -- hablo español -- jeg kan noen norsk
http://my.opera.com/chaals Try Opera: http://www.opera.com
Re: Shrinking existing libraries as a goal
On Thu, May 17, 2012 at 4:03 PM, Julian Aubourg j...@ubourg.net wrote:

> I've been meaning to do a test suite to help provide guidance to implementors (something I figure would be much more useful than yet another round of specs) but I admit I haven't got to it yet.

In general, this is probably the single most effective thing we can do to reduce the number of browser bugs in implementations. Not just for XHR, but for all specs. It's a shame that test suites arrive so late in the game, usually after implementations have started shipping (with bugs), and usually contain way too few tests. I'm just as guilty as anyone for not helping out more here.

/ Jonas
Re: Shrinking existing libraries as a goal
[I haven't been near my computer in a long time now, but I really wanted to reply to this :) So sorry for the out-of-context top post.]

Yes! That is the best thing ever. Someone at Opera is currently looking at updating our test suite (which I promised to upload to the w3 test server at the Mountain View meeting). You can currently find it at testsuites.opera.com I think; it should be linked somewhere. I would believe it'd be a good place to start. Some of the tests have had the spec changed underneath them, so don't pay attention to that; hopefully we'll have some fixes there soon. But expanding it and writing new tests is part of phase two, so if you're going to write some tests, that's just awesome! If you upload them to the w3 test repository I'll be sure to check so that we don't use our energy duplicating the tests.

Odin Hørthe Omdal — Opera Software

--
Sent from my N9, excuse the top posting

On 19.05.12 07:42 Julian Aubourg wrote:

> To me the biggest abomination of all is the XMLHttpRequest object: the spec is probably one of the most complex I've seen, yet vast portions are left to interpretation or even not specified at all. The local filesystem comes to mind; also, every browser has its own specific way of notifying non-applicative errors (like network errors): specific status, unhandleable asynchronously thrown exception, exception thrown when accessing a field, etc. And that's just the tip of the iceberg. It's got to a point where the almighty xhr bleeds through abstractions and makes it impossible to design a proper API (at least not without leaking memory like crazy). Finally, fixing xhr issues always seems like a low-priority item in browser bug trackers because there's always some kind of workaround that libraries like jQuery have to put in their code (provided it can be feature tested, which it cannot most of the time).
I've been meaning to do a test suite to help provide guidance to implementors (something I figure would be much more useful than yet another round of specs) but I admit I haven't got to it yet. Dunno how people feel about this, but I think providing test suites that browsers could test against as a way to prevent regressions and inconsistencies could help a lot as a starting point.

2012/5/18 Yehuda Katz wyc...@gmail.com

> I am working on it. I was just getting some feedback on the general idea before I sunk a bunch of time in it. Keep an eye out :D
>
> Yehuda Katz (ph) 718.877.1325

On Thu, May 17, 2012 at 3:18 PM, Brian Kardell bkard...@gmail.com wrote:

Has anyone compiled a more general and easy-to-reference list of the stuff jQuery has to normalize across browsers new and old? For example, ready, event models in general, query selector differences, etc?

On May 17, 2012 3:52 PM, Rick Waldron waldron.r...@gmail.com wrote:

On Thu, May 17, 2012 at 3:21 PM, Brian Kardell bkard...@gmail.com wrote:

On Thu, May 17, 2012 at 2:47 PM, Rick Waldron waldron.r...@gmail.com wrote:

On Thu, May 17, 2012 at 2:35 PM, Brian Kardell bkard...@gmail.com wrote:

So, out of curiosity - do you have a list of things? I'm wondering where some efforts fall in all of this - whether they are good or bad on this scale, etc... For example: querySelectorAll - it has a few significant differences from jQuery, both in terms of what it will return (jQuery uses getElementById in the case that someone does #, for example, but querySelectorAll doesn't do that if there are multiple instances of the same id in the tree)

Which is an abomination for developers to deal with, considering the ID attribute value must be unique amongst all the IDs in the element's home subtree [1].

qSA should've been spec'ed to enforce the definition of an ID by only returning the first match for an ID selector - devs would've learned quickly how that worked; since it doesn't, and since getElementById is faster, jQuery must take on the additional code burden, via cover API, in order to make a reasonably usable DOM querying interface. jQuery says "you're welcome".

and performance (this example illustrates both - since jQuery is doing the simpler thing in all cases, it is actually able to be faster (though technically not correct))

I'd argue that qSA, in its own contradictory specification, is not correct.

It has been argued in the past - I'm taking no position here, just noting. For posterity (not you specifically, but for the benefit of those who don't follow so closely), the HTML link also references DOM Core, which has stated for some time that getElementById should return the _first_ element with that ID in the document (implying that there could be more than one) [a], and despite whatever CSS has said since day one (ids are unique in a doc) [b], a quick check in your favorite browser will show that CSS doesn't care: it will style all IDs that match. So basically - qSA matches CSS, which does kind of make sense to me... I'd love to see it corrected in CSS
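The getElementById vs. querySelectorAll split being debated above can be modeled in a few lines of standalone JavaScript. This is a sketch only: the array of plain objects stands in for a document-order node list, and the helper names (`byIdFirst`, `byIdAll`) are illustrative, not any browser's implementation.

```javascript
// Minimal model of the two ID-matching semantics under discussion.
// `nodes` stands in for a document-order list of elements.
var nodes = [
  { id: "a", tag: "div"  },
  { id: "x", tag: "span" },  // first element with id "x"
  { id: "x", tag: "p"    }   // duplicate id (invalid HTML, but it happens)
];

// getElementById semantics: return the FIRST element with that id.
function byIdFirst(nodes, id) {
  for (var i = 0; i < nodes.length; i++) {
    if (nodes[i].id === id) return nodes[i];
  }
  return null;
}

// querySelectorAll / CSS semantics: return EVERY element matching the id.
function byIdAll(nodes, id) {
  return nodes.filter(function (n) { return n.id === id; });
}

console.log(byIdFirst(nodes, "x").tag);   // "span"
console.log(byIdAll(nodes, "x").length);  // 2 - qSA-style matching finds both
```

The disagreement in the thread is exactly which of these two functions the platform should have standardized on for ID selectors.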
Re: Shrinking existing libraries as a goal
On Thu, May 17, 2012 at 3:21 PM, Yehuda Katz wyc...@gmail.com wrote:

> I am working on it. I was just getting some feedback on the general idea before I sunk a bunch of time in it.

For what it's worth, I definitely support this idea too on a general level. However, as others have pointed out, the devil's in the details, so looking forward to those :)

Of course, ideal proposals are ones that not only shrink existing libraries, but also help people who aren't using libraries at all but rather use the DOM directly.

/ Jonas
Spec Bugs and Test Suites [Was: Re: Shrinking existing libraries as a goal]
On 5/17/12 7:03 PM, ext Julian Aubourg wrote:

> To me the biggest abomination of all is [...]

Just a reminder that WebApps' [PubStatus] page enumerates all of its specs and, for each spec, there is (or will be): a) a link to the spec's Bugzilla component; b) a link to the spec's Test Suite.
Comments, Spec Bugs and Test Cases are Always Welcome! [Was: Re: Shrinking existing libraries as a goal]
[My previous response was accidentally sent before it should have been (delete it) ...]

On 5/17/12 7:03 PM, ext Julian Aubourg wrote:

> To me the biggest [...]

Comments on all of WebApps' specs are always welcome, regardless of where the spec is in the W3C's Recommendation process.

> I've been meaning to do a test suite to help provide guidance to implementors (something I figure would be much more useful than yet another round of specs) but I admit I haven't got to it yet. Dunno how people feel about this, but I think providing test suites that browsers could test against as a way to prevent regressions and inconsistencies could help a lot as a starting point.

The group's PubStatus page http://www.w3.org/2008/webapps/wiki/PubStatus enumerates each spec, and each spec has (or will have): 1) a link to the spec's Bugzilla component; 2) a link to the spec's Test Suite. We have a need for test cases for just about every spec. The test submission process is described in http://www.w3.org/2008/webapps/wiki/Submission. For WebApps' testing-related discussions, please use the group's public-webapps-testsu...@w3.org list.

-Thanks, AB
Re: Shrinking existing libraries as a goal
A related TL;DR observation...

While we may get 5 things that really help shrink the current set of problems, it adds APIs which inevitably introduce new ones. In the meantime, nothing stands still - lots of specs are introducing lots of new APIs. Today's 'modern browsers' are the ones we are all swearing at a year or two from now. New APIs allow people to think about things in new ways. Given new APIs, new ideas will develop (either in popular existing libraries, or even whole new ones). Ideas spawn more ideas - offshoots, competitors, etc. In the long term, changes like the ones being discussed will probably serve more to mitigate libraries' otherwise inevitable continued growth.

More interestingly, though, to Tab's point - all of the things that he explained will happen with all of those new APIs too. New ideas will spawn competitors and better APIs that are normalized by libraries, etc. They will compete and evolve until eventually it becomes self-evident over time that there is something much preferred by the user community at large to whatever is actually implemented in the browser. It seems to me that this is inevitable, happens with all software, and is actually kind of a good thing...

I'm not exactly sure what value this observation has other than to maybe explain why I think that on this front, libraries have a few important advantages, and to wonder aloud whether somehow there is a way to change the model/process to incorporate those advantages more directly. Particularly, the advantages are about real-world competition and less need to be absolutely, positively, fully universal. The advantages of the competition aspect I think cannot be overstated - they play in at virtually every point along the whole lifecycle.

For all of the intelligence on the committees and on these lists (and it's a lot), it's actually a pretty small group of people ultimately proposing things for the whole world. By their very nature, committees (and the vendors who are heavily involved) also have to consider the very fringe cases, and the browser vendors have to knowingly enter into things considering that every change means more potential problems and has to work without breaking anything existing. Libraries might have a small number of authors, but their user base starts out small too. The fact that it is the author's choice to opt in to using a library also means that they are much freer to rev and version and say "don't do that, instead do this" with some of the very fringe cases - or even just consciously choose that that is not a use case they are interested in supporting.

With the process, even when we get to vendor implementations, features start out in test builds or require flags to enable. While that's good, it's really more of a test for uniform compliance and a preview for/by a group of mavens. This means that features/APIs cannot actually be practically used in developing real pages/sites, and that is a huge disadvantage that libraries don't generally have. Often it isn't until there are at least thousands and thousands of average developers who have had significant time to really try to live with it in the real world (actually delivering product) that it becomes evident that something is overly cumbersome or somehow falls short for what turn out to be unexpectedly common cases.

Finally, the whole point of these committees is to arrive at standards, not to compete. However, in practice, they also commonly resolve differences after the fact (the standard is revised to meet what is implemented and now can't change). Libraries are inherently usually the opposite - they want competition first and standardization only after things have wide consensus. These are the kinds of things that cause innovation and competition of ideas, which ultimately help define and evolve what the community at large sees as good.

I'm not exactly sure how you would go about changing the model/process to encourage/foster the sort of inverse relationship while simultaneously focusing on standards... tricky. Maybe some of the very smart people on this list have some thoughts?

-Brian

On May 17, 2012 3:52 PM, Rick Waldron waldron.r...@gmail.com wrote:

[...]
Re: Shrinking existing libraries as a goal
On May 17, 2012, at 10:58 PM, Jonas Sicking jo...@sicking.cc wrote:

> On Thu, May 17, 2012 at 3:21 PM, Yehuda Katz wyc...@gmail.com wrote:
>> I am working on it. I was just getting some feedback on the general idea before I sunk a bunch of time in it.
>
> For what it's worth, I definitely support this idea too on a general level. However, as others have pointed out, the devil's in the details, so looking forward to those :) Of course, ideal proposals are ones that not only shrink existing libraries, but also help people who aren't using libraries at all but rather use the DOM directly.

I also agree that providing functionality which can help reduce the size of JS libraries is a good goal (though one of many). And also that the merits of specific proposals depend on the details. One aspect of this that can be challenging is finding functionality that will allow a broad range of libraries to shrink, rather than only one or a few.

- Maciej
Re: Shrinking existing libraries as a goal
To me the biggest abomination of all is the XMLHttpRequest object:

- the spec is probably one of the most complex I've seen
- yet, vast portions are left to interpretation or even not specified at all:
  - the local filesystem comes to mind,
  - also every browser has its own specific way of notifying non-applicative errors (like network errors):
    - specific status,
    - unhandleable asynchronously thrown exception,
    - exception thrown when accessing a field,
    - etc...

And that's just the tip of the iceberg. It's got to a point where the almighty xhr bleeds through abstractions and makes it impossible to design a proper API (at least not without leaking memory like crazy).

Finally, fixing xhr issues always seems like a low-priority item in browser bug trackers because there's always some kind of workaround that libraries like jQuery have to put in their code (provided it can be feature tested, which it cannot most of the time).

I've been meaning to do a test suite to help provide guidance to implementors (something I figure would be much more useful than yet another round of specs) but I admit I haven't got to it yet. Dunno how people feel about this, but I think providing test suites that browsers could test against as a way to prevent regressions and inconsistencies could help a lot as a starting point.

2012/5/18 Yehuda Katz wyc...@gmail.com

> I am working on it. I was just getting some feedback on the general idea before I sunk a bunch of time in it. Keep an eye out :D
>
> Yehuda Katz (ph) 718.877.1325

On Thu, May 17, 2012 at 3:18 PM, Brian Kardell bkard...@gmail.com wrote:

> Has anyone compiled a more general and easy-to-reference list of the stuff jQuery has to normalize across browsers new and old? For example, ready, event models in general, query selector differences, etc?
On May 17, 2012 3:52 PM, Rick Waldron waldron.r...@gmail.com wrote:

On Thu, May 17, 2012 at 3:21 PM, Brian Kardell bkard...@gmail.com wrote:

On Thu, May 17, 2012 at 2:47 PM, Rick Waldron waldron.r...@gmail.com wrote:

On Thu, May 17, 2012 at 2:35 PM, Brian Kardell bkard...@gmail.com wrote:

So, out of curiosity - do you have a list of things? I'm wondering where some efforts fall in all of this - whether they are good or bad on this scale, etc... For example: querySelectorAll - it has a few significant differences from jQuery, both in terms of what it will return (jQuery uses getElementById in the case that someone does #, for example, but querySelectorAll doesn't do that if there are multiple instances of the same id in the tree)

Which is an abomination for developers to deal with, considering the ID attribute value must be unique amongst all the IDs in the element's home subtree [1]. qSA should've been spec'ed to enforce the definition of an ID by only returning the first match for an ID selector - devs would've learned quickly how that worked; since it doesn't, and since getElementById is faster, jQuery must take on the additional code burden, via cover API, in order to make a reasonably usable DOM querying interface. jQuery says "you're welcome".

and performance (this example illustrates both - since jQuery is doing the simpler thing in all cases, it is actually able to be faster (though technically not correct))

I'd argue that qSA, in its own contradictory specification, is not correct.

It has been argued in the past - I'm taking no position here, just noting. For posterity (not you specifically, but for the benefit of those who don't follow so closely), the HTML link also references DOM Core, which has stated for some time that getElementById should return the _first_ element with that ID in the document (implying that there could be more than one) [a], and despite whatever CSS has said since day one (ids are unique in a doc) [b], a quick check in your favorite browser will show that CSS doesn't care: it will style all IDs that match. So basically - qSA matches CSS, which does kind of make sense to me... I'd love to see it corrected in CSS too (first element with that ID if there are more than one) but it has been argued that a lot of stuff (more than we'd like to admit) would break. in some very difficult ones.

Previously, this was something that the browser APIs just didn't offer at all -- now they offer them, but jQuery has mitigation to do in order to use them effectively since they do not have parity. Yes, we're trying to reduce the amount of mitigation that is required of libraries to implement reasonable APIs. This is a multi-view discussion: short and long term.

So can someone name specific items? Would qSA / find have been pretty high on that list? Is it better for jQuery (specifically) that we have them in their current state, or worse? Just curious.

TBH, the current state can't get any worse, though I'm sure it will.
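As an aside, the divergent XHR error signaling Julian lists at the top of this message is the kind of thing a library has to paper over: some engines report a network failure as status 0, others throw when the status property is touched. The sketch below assumes a hypothetical transport object, not a real XMLHttpRequest, and the function name is illustrative.

```javascript
// Sketch of the normalization layer libraries carry for XHR-style error
// reporting: different failure styles are funneled into one shape.
function normalizedStatus(transport) {
  var status;
  try {
    status = transport.status;     // some engines throw here on network error
  } catch (e) {
    return { ok: false, reason: "exception" };
  }
  if (status === 0) {              // others report status 0
    return { ok: false, reason: "network" };
  }
  if (status >= 200 && status < 300) {
    return { ok: true, reason: null };
  }
  return { ok: false, reason: "http " + status };
}

// Three failure styles, one normalized answer:
var throwing = { get status() { throw new Error("network error"); } };
console.log(normalizedStatus({ status: 200 }).ok);    // true
console.log(normalizedStatus({ status: 0 }).reason);  // "network"
console.log(normalizedStatus(throwing).reason);       // "exception"
```

The point is not the specific shape of the result object, but that every library ships some variant of this branching because the platform never specified one.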
Re: Shrinking existing libraries as a goal
FYI, a Script Library Community Group (Cc'ed) was formed some time ago and it may have some similar interest(s): http://www.w3.org/community/scriptlib/ (although their mail list archive indicates the CG isn't very active). Perhaps someone in that CG has some comments on Yehuda's email.

-AB

P.S. Yehuda's email archive is http://lists.w3.org/Archives/Public/public-webapps/2012AprJun/0762.html

On 5/16/12 10:13 PM, ext Ojan Vafai wrote:

> In principle, I agree with this as a valid goal. It's one among many, though, so the devil is in the details of each specific proposal to balance out this goal with others (e.g. keeping the platform consistent). I'd love to see your list of proposals of what it would take to considerably shrink jQuery.

On Tue, May 15, 2012 at 9:32 PM, Yehuda Katz wyc...@gmail.com wrote:

> In the past year or so, I've participated in a number of threads that were implicitly about adding features to browsers that would shrink the size of existing libraries. [...]
Re: Shrinking existing libraries as a goal
On Thu, May 17, 2012 at 9:31 AM, Scott González scott.gonza...@gmail.com wrote:

I'm sure Yehuda can speak more to the status of scriptlib, but the way I see it is: There was some buzz about scriptlib and the W3C being excited about developers participating via CGs. Very few developers joined: 33 scriptlib members compared to 287 jquery-standards [1] members. There were 0 meaningful posts: 4 total messages (including "hello world") compared to 138 messages for jquery-standards. Nothing came out of scriptlib, compared to 19 issues [2] in jquery-standards.

AFAICT, there are two explanations for this: First, developers at large don't find CGs very inviting. Second, everyone on scriptlib is highly experienced and very interested in standards, to the point where they'll just go to the appropriate non-CG list to discuss things.

With that being said, it's good to see W3C pointing to CGs for input :-)

[1] https://groups.google.com/group/jquery-standards
[2] https://github.com/jquery/standards/issues

On Thu, May 17, 2012 at 7:17 AM, Arthur Barstow art.bars...@nokia.com wrote:

> FYI, a Script Library Community Group (Cc'ed) was formed some time ago and it may have some similar interest(s): http://www.w3.org/community/scriptlib/ (although their mail list archive indicates the CG isn't very active). Perhaps someone in that CG has some comments on Yehuda's email. [...]
On Tue, May 15, 2012 at 9:32 PM, Yehuda Katz wyc...@gmail.com wrote:

> In the past year or so, I've participated in a number of threads that were implicitly about adding features to browsers that would shrink the size of existing libraries. [...]

Mike Taylor and I tried to ignite scriptlib interest at JSConf with a track B presentation to raise awareness. It's just not happening.

I fully support Yehuda's proposed goals. DOM APIs are being designed by non-web developers and non-JavaScript programmers that simply don't get it, and when confronted, the response is frequently "library authors will fix this" - which in turn makes libraries bigger, instead of smaller.

Even more important than considering library size is to simply consider motivating factors for library adoption. Take the following API evolution...

What standards bodies did:

element.onclick = ...
  ↓
element.addEventListener( "click", ..., boolean );

What we did in the trenches:

element.onclick = ...
  ↓
element.addEventListener( "click", ..., boolean );
  ↓
element.observe( "click", ... );
  ↓
[elements].on( "click", ... );

Why isn't this being standardized? There is concrete evidence that supports this as a preferred API.

Even more problematic is event object target creation: JavaScript programs that run in browsers are asynchronous event systems, but libraries have to create their own systems because in 2012, we still can't inherit from EventTarget [1] - and even if I could, the API is an overlong wind-bag. Compare the IDL definition for EventTarget to a very popular, generic, reusable Event system API: http://nodejs.org/docs/v0.7.8/api/events.html

Consider the cowpath metaphor - web developers have made highways out of sticks, grass and mud - what we need is someone to pour the concrete.

Rick

[1] http://www.w3.org/TR/DOM-Level-2-Events/events.html#Events-EventTarget
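For comparison, the Node-style emitter pattern Rick links to reduces to a few lines of plain JavaScript. This is a sketch of the pattern only, not node.js's actual implementation; the `Emitter` name and its internals are illustrative.

```javascript
// Tiny Node-style event emitter: the pattern libraries rebuild for
// themselves because page script can't inherit from EventTarget.
function Emitter() {
  this._handlers = {};
}
Emitter.prototype.on = function (type, fn) {
  (this._handlers[type] = this._handlers[type] || []).push(fn);
  return this;                       // chainable, like the library APIs above
};
Emitter.prototype.emit = function (type) {
  var args = Array.prototype.slice.call(arguments, 1);
  (this._handlers[type] || []).forEach(function (fn) {
    fn.apply(null, args);
  });
  return this;
};

var clicks = new Emitter();
var count = 0;
clicks.on("click", function (n) { count += n; })
      .on("click", function () { count += 1; });
clicks.emit("click", 10);
console.log(count);                  // 11 - both handlers ran
```

Roughly this much code, multiplied across every library that needs its own event system, is the weight the thread is talking about.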
Re: Shrinking existing libraries as a goal
On Thu, May 17, 2012 at 9:29 AM, Rick Waldron waldron.r...@gmail.com wrote:

> Consider the cowpath metaphor - web developers have made highways out of sticks, grass and mud - what we need is someone to pour the concrete.

I'm confused. Is the goal shorter load times (Yehuda) or better developer ergonomics (Waldron)? Of course *some* choices may do both. Some may not.

jjb

> Rick
>
> [1] http://www.w3.org/TR/DOM-Level-2-Events/events.html#Events-EventTarget
Re: Shrinking existing libraries as a goal
On Thu, May 17, 2012 at 6:29 PM, Rick Waldron waldron.r...@gmail.com wrote: [...] FWIW, we have bugs filed against DOM for both better event registration and constructing of event targets: https://www.w3.org/Bugs/Public/show_bug.cgi?id=16491 https://www.w3.org/Bugs/Public/show_bug.cgi?id=16487 I have not yet made the time to work out the details for either, however, and neither has anyone else. -- Anne — Opera Software http://annevankesteren.nl/ http://www.opera.com/
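A hedged sketch of the capability the second bug asks for: constructing and subclassing EventTarget directly. No browser allowed this at the time of the thread; modern engines (and Node 15+, which exposes `EventTarget` and `Event` as globals) now do. The `Ticker` class and `tick` event are invented names, shown only to illustrate the desired shape.

```javascript
// Illustrative only: what constructable/subclassable EventTarget looks
// like. "Ticker" and "tick" are hypothetical names. Requires a modern
// runtime (browsers today, or Node >= 15 with global EventTarget/Event).
class Ticker extends EventTarget {}

const t = new Ticker();
let seen = [];

t.addEventListener('tick', (e) => seen.push(e.type));
t.dispatchEvent(new Event('tick'));
t.dispatchEvent(new Event('tick'));

console.log(seen.join(',')); // → tick,tick
```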
Re: Shrinking existing libraries as a goal
On Thu, May 17, 2012 at 1:05 PM, Anne van Kesteren ann...@annevk.nl wrote:

FWIW, we have bugs filed against DOM for both better event registration and constructing of event targets: https://www.w3.org/Bugs/Public/show_bug.cgi?id=16491 https://www.w3.org/Bugs/Public/show_bug.cgi?id=16487 I have not made the time yet to work out the details for either however and neither has anyone else.

Anne,

Thank you for your continued commitment to real progress.

Rick
Re: Shrinking existing libraries as a goal
On Thu, May 17, 2012 at 9:56 AM, John J Barton johnjbar...@johnjbarton.com wrote: On Thu, May 17, 2012 at 9:29 AM, Rick Waldron waldron.r...@gmail.com wrote: Consider the cowpath metaphor - web developers have made highways out of sticks, grass and mud - what we need is someone to pour the concrete. I'm confused. Is the goal shorter load times (Yehuda) or better developer ergonomics (Waldron)? Of course *some* choices may do both. Some may not. Libraries generally do three things: (1) patch over browser inconsistencies, (2) fix bad ergonomics in APIs, and (3) add new features*. #1 is just background noise; we can't do anything except write good specs, patch our browsers, and migrate users. #3 is the normal mode of operations here. I'm sure there are plenty of features currently done purely in libraries that would benefit from being proposed here, like Promises, but I don't think we need to push too hard on this case. It'll open itself up on its own, more or less. Still, something to pay attention to. #2 is the kicker, and I believe what Yehuda is mostly talking about. There's a *lot* of code in libraries which offers no new features, only a vastly more convenient syntax for existing features. This is a large part of the reason why jQuery got so popular. Fixing this both makes the web easier to program for and reduces library weight. * Yes, #3 is basically a subset of #2 since libraries aren't rewriting the JS engine, but there's a line you can draw between here's an existing feature, but with better syntax and here's a fundamentally new idea, which you could do before but only with extreme contortions. ~TJ
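Tab's category #2 can be illustrated with a toy (not from the thread): a chainable wrapper that adds no capability at all, only friendlier syntax over primitives the platform already provides. Every name below is invented for the example.

```javascript
// Category #2 in miniature: no new feature, just nicer syntax.
// "Wrapped" and "$" are hypothetical names, not any real library's API.
function Wrapped(items) { this.items = items; }
Wrapped.prototype.filter = function (fn) {
  return new Wrapped(this.items.filter(fn)); // delegates to existing filter
};
Wrapped.prototype.each = function (fn) {
  this.items.forEach(fn);                    // delegates to existing forEach
  return this;                               // chainability is the only "feature"
};
function $(items) { return new Wrapped(items); }

// "Platform" way: identical capability, more ceremony.
var out1 = [];
[1, 2, 3, 4].filter(function (n) { return n % 2 === 0; })
            .forEach(function (n) { out1.push(n * 10); });

// "Library" way: chainable sugar over the same primitives.
var out2 = [];
$([1, 2, 3, 4]).filter(function (n) { return n % 2 === 0; })
               .each(function (n) { out2.push(n * 10); });

console.log(out1.join(','), out2.join(',')); // → 20,40 20,40
```

The wrapper never touches the JS engine; it only repackages what exists, which is Tab's line between category #2 and category #3.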
Re: Shrinking existing libraries as a goal
On Thu, May 17, 2012 at 10:10 AM, Tab Atkins Jr. jackalm...@gmail.com wrote: [...] #2 is the kicker, and I believe what Yehuda is mostly talking about. There's a *lot* of code in libraries which offers no new features, only a vastly more convenient syntax for existing features. This is a large part of the reason why jQuery got so popular. Fixing this both makes the web easier to program for and reduces library weight.

Yes! Fixing the ergonomics of APIs has dramatically improved web programming. I'm convinced that concrete proposals vetted by major library developers would be welcomed and have good traction. (Even better would be a common shim library demonstrating the impact.) Measuring these changes by the number of bytes removed from downloads seems 'nice to have' but should not be the goal, IMO.

jjb
Re: Shrinking existing libraries as a goal
Yehuda Katz (ph) 718.877.1325

On Thu, May 17, 2012 at 10:37 AM, John J Barton johnjbar...@johnjbarton.com wrote: [...] Measuring these changes by the numbers of bytes removed from downloads seems 'nice to have' but should not be the goal IMO.

We can use bytes removed from downloads as a proxy for developer ergonomics, because it means that useful, ergonomics-enhancing features from libraries are now in the platform. Further, shrinking the size of libraries provides more headroom for higher-level abstractions on resource-constrained devices, instead of spending the first 35k of download and execution on relatively low-level primitives provided by jQuery, because the primitives provided by the platform itself are unwieldy.
Re: Shrinking existing libraries as a goal
So, out of curiosity - do you have a list of things? I'm wondering where some efforts fall in all of this - whether they are good or bad on this scale, etc...

For example: querySelectorAll - it has a few significant differences from jQuery, both in terms of what it will return (jQuery uses getElementById in the case that someone does #, for example, but querySelectorAll doesn't do that if there are multiple instances of the same id in the tree) and performance (this example illustrates both - since jQuery is doing the simpler thing in all cases, it is actually able to be faster, though technically not correct, in some very difficult ones). Previously, this was something that the browser APIs just didn't offer at all -- now they offer it, but jQuery has mitigation to do in order to use it effectively, since they do not have parity.

On Thu, May 17, 2012 at 2:16 PM, Yehuda Katz wyc...@gmail.com wrote: [...] We can use bytes removed from downloads as a proxy of developer ergonomics because it means that useful, ergonomics-enhancing features from libraries are now in the platform. [...]
Re: Shrinking existing libraries as a goal
On Thu, May 17, 2012 at 2:35 PM, Brian Kardell bkard...@gmail.com wrote:

So, out of curiosity - do you have a list of things? I'm wondering where some efforts fall in all of this - whether they are good or bad on this scale, etc... For example: querySelectorAll - it has a few significant differences from jQuery both in terms of what it will return (jquery uses getElementById in the case that someone does #, for example, but querySelectorAll doesn't do that if there are multiple instances of the same id in the tree)

Which is an abomination for developers to deal with, considering "the ID attribute value must be unique amongst all the IDs in the element's home subtree" [1]. qSA should've been spec'ed to enforce the definition of an ID by only returning the first match for an ID selector - devs would've learned quickly how that worked. Since it doesn't, and since getElementById is faster, jQuery must take on the additional code burden, via a cover API, in order to make a reasonably usable DOM querying interface. jQuery says "you're welcome".

and performance (this example illustrates both - since jQuery is doing the simpler thing in all cases, it is actually able to be faster (though technically not correct)

I'd argue that qSA, in its own contradictory specification, is not correct.

in some very difficult ones. Previously, this was something that the browser APIs just didn't offer at all -- now they offer them, but jQuery has mitigation to do in order to use them effectively since they do not have parity.

Yes, we're trying to reduce the amount of mitigation that is required of libraries to implement reasonable APIs. This is a multi-view discussion: short and long term.

Rick

[1] http://www.whatwg.org/specs/web-apps/current-work/#the-id-attribute
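The duplicate-id mismatch under discussion can be modeled without a browser. The following toy (not real DOM code; the node list and helper names are invented) mimics the two behaviors: getElementById-style first-match versus querySelectorAll-style match-everything.

```javascript
// Toy model of the mismatch: getElementById returns the FIRST element
// with a given id, while querySelectorAll('#x') matches EVERY element
// carrying id="x". Plain objects stand in for DOM nodes.
var nodes = [
  { id: 'x', tag: 'div' },
  { id: 'y', tag: 'span' },
  { id: 'x', tag: 'p' }    // duplicate id: invalid HTML, tolerated by browsers
];

function getElementById(tree, id) {
  for (var i = 0; i < tree.length; i++) {
    if (tree[i].id === id) return tree[i];   // first match only
  }
  return null;
}

function querySelectorAllById(tree, id) {
  return tree.filter(function (n) { return n.id === id; }); // all matches
}

console.log(getElementById(nodes, 'x').tag);          // → div
console.log(querySelectorAllById(nodes, 'x').length); // → 2
```

The extra code jQuery carries is essentially reconciling these two answers behind one selector API.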
Re: Shrinking existing libraries as a goal
On Thu, May 17, 2012 at 2:47 PM, Rick Waldron waldron.r...@gmail.com wrote:

Which is an abomination for developers to deal with, considering the ID attribute value must be unique amongst all the IDs in the element's home subtree [1]. qSA should've been spec'ed to enforce the definition of an ID by only returning the first match for an ID selector [...] I'd argue that qSA, in its own contradictory specification, is not correct.

It has been argued in the past - I'm taking no position here, just noting. For posterity (not you specifically, but for the benefit of those who don't follow so closely): the HTML link also references DOM Core, which has stated for some time that getElementById should return the _first_ element with that ID in the document (implying that there could be more than one) [a], and despite whatever CSS has said since day one (ids are unique in a doc) [b], a quick check in your favorite browser will show that CSS doesn't care; it will style all IDs that match.

So basically - qSA matches CSS, which does kind of make sense to me... I'd love to see it corrected in CSS too (first element with that ID if there are more than one), but it has been argued that a lot of stuff (more than we'd like to admit) would break.

Yes, we're trying to reduce the amount of mitigation that is required of libraries to implement reasonable apis. This is a multi-view discussion: short and long term.

So can someone name specific items? Would qSA / find have been pretty high on that list? Is it better for jQuery (specifically) that we have them in their current state, or worse? Just curious.

[a] - http://dvcs.w3.org/hg/domcore/raw-file/tip/Overview.html#dom-document-getelementbyid
[b] - http://www.w3.org/TR/CSS1/#id-as-selector
Re: Shrinking existing libraries as a goal
On Thu, May 17, 2012 at 3:21 PM, Brian Kardell bkard...@gmail.com wrote: [...]

So can someone name specific items? Would qSA / find been pretty high on that list? Is it better for jQuery (specifically) that we have them in their current state or worse? Just curious.

TBH, the current state can't get any worse, though I'm sure it will. Assuming you're referring to this: http://lists.w3.org/Archives/Public/public-webapps/2011OctDec/1454.html ... Yes, APIs like this would be improvements, especially considering the pace of implementation in modern browsers - hypothetically, this could be in wide implementation in less than a year; by then, development of a sort of jQuery 2.0 could happen -- same API, but perhaps modern-browser only? This is hypothetical, of course.

Rick
Re: Shrinking existing libraries as a goal
Has anyone compiled a more general, easy-to-reference list of the stuff jQuery has to normalize across browsers, new and old? For example: ready, event models in general, query selector differences, etc.?

On May 17, 2012 3:52 PM, Rick Waldron waldron.r...@gmail.com wrote: [...]
Re: Shrinking existing libraries as a goal
I am working on it. I was just getting some feedback on the general idea before I sunk a bunch of time in it. Keep an eye out :D Yehuda Katz (ph) 718.877.1325 On Thu, May 17, 2012 at 3:18 PM, Brian Kardell bkard...@gmail.com wrote: Has anyone compiled an more general and easy to reference list of the stuff jquery has to normalize across browsers new and old? For example, ready, event models in general, query selector differences, etc? On May 17, 2012 3:52 PM, Rick Waldron waldron.r...@gmail.com wrote: On Thu, May 17, 2012 at 3:21 PM, Brian Kardell bkard...@gmail.comwrote: On Thu, May 17, 2012 at 2:47 PM, Rick Waldron waldron.r...@gmail.com wrote: On Thu, May 17, 2012 at 2:35 PM, Brian Kardell bkard...@gmail.com wrote: So, out of curiosity - do you have a list of things? I'm wondering where some efforts fall in all of this - whether they are good or bad on this scale, etc... For example: querySelectorAll - it has a few significant differences from jQuery both in terms of what it will return (jquery uses getElementById in the case that someone does #, for example, but querySelectorAll doesn't do that if there are multiple instances of the same id in the tree) Which is an abomination for for developers to deal with, considering the ID attribute value must be unique amongst all the IDs in the element's home subtree[1] . qSA should've been spec'ed to enforce the definition of an ID by only returning the first match for an ID selector - devs would've learned quickly how that worked; since it doesn't and since getElementById is faster, jQuery must take on the additional code burden, via cover API, in order to make a reasonably usable DOM querying interface. jQuery says you're welcome. and performance (this example illustrates both - since jQuery is doing the simpler thing in all cases, it is actually able to be faster (though technically not correct) I'd argue that qSA, in its own contradictory specification, is not correct. 
It has been argued in the past - I'm taking no position here, just noting. For posterity (not you specifically, but for the benefit of those who don't follow so closely), the HTML link also references DOM Core, which has stated for some time that getElementById should return the _first_ element with that ID in the document (implying that there could be more than one) [a] and despite whatever CSS has said since day one (IDs are unique in a doc) [b] a quick check in your favorite browser will show that CSS doesn't care: it will style all IDs that match. So basically - qSA matches CSS, which does kind of make sense to me... I'd love to see it corrected in CSS too (first element with that ID if there are more than one) but it has been argued that a lot of stuff (more than we'd like to admit) would break. in some very difficult ones. Previously, this was something that the browser APIs just didn't offer at all -- now they offer them, but jQuery has mitigation to do in order to use them effectively since they do not have parity. Yes, we're trying to reduce the amount of mitigation that is required of libraries to implement reasonable APIs. This is a multi-view discussion: short and long term. So can someone name specific items? Would qSA / find have been pretty high on that list? Is it better for jQuery (specifically) that we have them in their current state or worse? Just curious. TBH, the current state can't get any worse, though I'm sure it will. Assuming you're referring to this: http://lists.w3.org/Archives/Public/public-webapps/2011OctDec/1454.html ... Yes, APIs like this would be improvements, especially considering the pace of implementation in modern browsers - hypothetically, this could be in wide implementation in less than a year; by then development of a sort of jQuery 2.0 could happen -- same API, but perhaps modern browser only?? This is hypothetical of course. 
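[Editor's note: to make the discrepancy discussed above concrete, here is a toy model using plain arrays rather than a real DOM tree; the helper names are mine, for illustration. With duplicate IDs present, a getElementById-style lookup returns only the first match (per DOM Core), while a CSS-style match, which is what qSA implements, returns all of them.]

```javascript
// Duplicate IDs are invalid HTML, but browsers tolerate them, which is
// where the two lookup models diverge.
const elements = [
  { id: 'dup', text: 'first' },
  { id: 'other', text: 'middle' },
  { id: 'dup', text: 'second' },
];

// DOM Core model: first element with that ID wins.
function getElementById(id) {
  return elements.find((el) => el.id === id) || null;
}

// CSS / querySelectorAll model: every element whose id matches.
function querySelectorAllById(id) {
  return elements.filter((el) => el.id === id);
}
```

With the sample data above, `getElementById('dup')` yields only the first element, while `querySelectorAllById('dup')` yields both, matching the "CSS styles all IDs that match" observation in the thread.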
Re: Shrinking existing libraries as a goal
On Tue, May 15, 2012 at 9:32 PM, Yehuda Katz wyc...@gmail.com wrote: In the past year or so, I've participated in a number of threads that were implicitly about adding features to browsers that would shrink the size of existing libraries. Inevitably, those discussions end up litigating whether making it easier for jQuery (or some other library) to do the task is a good idea in the first place. While those discussions are extremely useful, I feel it would be useful for a group to focus on proposals that would shrink the size of existing libraries with the implicit assumption that it was a good idea. From some basic experimentation I've personally done with the jQuery codebase, I feel that such a group could rather quickly identify enough areas to make a much smaller version of jQuery that ran on modern browsers plausible. I also think that having data to support or refute that assertion would be useful, as it's often made casually in meta-discussions. If there is a strong reason that people feel that a focused effort to identify ways to shrink existing popular libraries in new browsers would be a bad idea, I'd be very interested to hear it. I think it's a great idea. Shipping less code over the wire seems like a win from any perspective. I support a focused effort like this. I know some folks will be hesitant to embrace it out of the tail-wag-dog fears that the unfortunate patterns in scripting libraries will result in misguided changes to the platform. My answer to this is: let's do research first and see what proposed changes come up. :DG Thanks so much for your consideration, Yehuda Katz jQuery Foundation (ph) 718.877.1325
Re: Shrinking existing libraries as a goal
On Wed, May 16, 2012 at 9:53 AM, Dimitri Glazkov dglaz...@chromium.org wrote: I think it's a great idea. Shipping less code over the wire seems like a win from any perspective. How about a cross-site secure (even pre-compiled) cache for JS libraries as well? We almost have this with CDNs now; if it were formally supported by standards, then every site using a common library would ship less code, with no compromises required of the platform or the libraries with respect to their APIs. jjb
Re: Shrinking existing libraries as a goal
In principle, I agree with this as a valid goal. It's one among many though, so the devil is in the details of each specific proposal to balance out this goal with others (e.g. keeping the platform consistent). I'd love to see your list of proposals of what it would take to considerably shrink jQuery. On Tue, May 15, 2012 at 9:32 PM, Yehuda Katz wyc...@gmail.com wrote: In the past year or so, I've participated in a number of threads that were implicitly about adding features to browsers that would shrink the size of existing libraries. Inevitably, those discussions end up litigating whether making it easier for jQuery (or some other library) to do the task is a good idea in the first place. While those discussions are extremely useful, I feel it would be useful for a group to focus on proposals that would shrink the size of existing libraries with the implicit assumption that it was a good idea. From some basic experimentation I've personally done with the jQuery codebase, I feel that such a group could rather quickly identify enough areas to make a much smaller version of jQuery that ran on modern browsers plausible. I also think that having data to support or refute that assertion would be useful, as it's often made casually in meta-discussions. If there is a strong reason that people feel that a focused effort to identify ways to shrink existing popular libraries in new browsers would be a bad idea, I'd be very interested to hear it. Thanks so much for your consideration, Yehuda Katz jQuery Foundation (ph) 718.877.1325
Re: Shrinking existing libraries as a goal
+1 We've been saying this for a long time on the PhoneGap team. Indeed, it is happening, as evidenced by libs like xuijs and zepto, but having a stated goal and formal process to monitor and respond to community hacks, shims, libs, and practices would be great. On Wed, May 16, 2012 at 6:32 AM, Yehuda Katz wyc...@gmail.com wrote: In the past year or so, I've participated in a number of threads that were implicitly about adding features to browsers that would shrink the size of existing libraries. Inevitably, those discussions end up litigating whether making it easier for jQuery (or some other library) to do the task is a good idea in the first place. While those discussions are extremely useful, I feel it would be useful for a group to focus on proposals that would shrink the size of existing libraries with the implicit assumption that it was a good idea. From some basic experimentation I've personally done with the jQuery codebase, I feel that such a group could rather quickly identify enough areas to make a much smaller version of jQuery that ran on modern browsers plausible. I also think that having data to support or refute that assertion would be useful, as it's often made casually in meta-discussions. If there is a strong reason that people feel that a focused effort to identify ways to shrink existing popular libraries in new browsers would be a bad idea, I'd be very interested to hear it. Thanks so much for your consideration, Yehuda Katz jQuery Foundation (ph) 718.877.1325
Re: Shrinking existing libraries as a goal
Yehuda Katz (ph) 718.877.1325 On Wed, May 16, 2012 at 12:43 AM, Brian LeRoux b...@brian.io wrote: +1 We've been saying this for a long time on the PhoneGap team. Indeed, it is happening, as evidenced by libs like xuijs and zepto, but having a stated goal and formal process to monitor and respond to community hacks, shims, libs, and practices would be great. Awesome. For what it's worth, the shortcuts taken by Zepto et al make it hard for jQuery and other libraries to actually become smaller. They give off the impression that the browsers are getting better, but performance footguns and small problems in new APIs often put the kibosh on actually using the new features. In most cases, these small issues make it nearly impossible to simply replace an area of jQuery code with a new feature and remove the old code. Sometimes the old code is still needed for some code paths, even if it is no longer needed for all code paths. On Wed, May 16, 2012 at 6:32 AM, Yehuda Katz wyc...@gmail.com wrote: In the past year or so, I've participated in a number of threads that were implicitly about adding features to browsers that would shrink the size of existing libraries. Inevitably, those discussions end up litigating whether making it easier for jQuery (or some other library) to do the task is a good idea in the first place. While those discussions are extremely useful, I feel it would be useful for a group to focus on proposals that would shrink the size of existing libraries with the implicit assumption that it was a good idea. From some basic experimentation I've personally done with the jQuery codebase, I feel that such a group could rather quickly identify enough areas to make a much smaller version of jQuery that ran on modern browsers plausible. I also think that having data to support or refute that assertion would be useful, as it's often made casually in meta-discussions. 
If there is a strong reason that people feel that a focused effort to identify ways to shrink existing popular libraries in new browsers would be a bad idea, I'd be very interested to hear it. Thanks so much for your consideration, Yehuda Katz jQuery Foundation (ph) 718.877.1325
Re: Shrinking existing libraries as a goal
A few questions: 1. What is the definition of a modern browser that we could build data against? 2. Is this a line-in-the-sand kind of effort? (meaning libraries become smaller but limited in browser compatibilities). On May 15, 2012, at 9:46 PM, Yehuda Katz wrote: Yehuda Katz (ph) 718.877.1325 On Wed, May 16, 2012 at 12:43 AM, Brian LeRoux b...@brian.io wrote: +1 We've been saying this for a long time on the PhoneGap team. Indeed, it is happening, as evidenced by libs like xuijs and zepto, but having a stated goal and formal process to monitor and respond to community hacks, shims, libs, and practices would be great. Awesome. For what it's worth, the shortcuts taken by Zepto et al make it hard for jQuery and other libraries to actually become smaller. They give off the impression that the browsers are getting better, but performance footguns and small problems in new APIs often put the kibosh on actually using the new features. In most cases, these small issues make it nearly impossible to simply replace an area of jQuery code with a new feature and remove the old code. Sometimes the old code is still needed for some code paths, even if it is no longer needed for all code paths. On Wed, May 16, 2012 at 6:32 AM, Yehuda Katz wyc...@gmail.com wrote: In the past year or so, I've participated in a number of threads that were implicitly about adding features to browsers that would shrink the size of existing libraries. Inevitably, those discussions end up litigating whether making it easier for jQuery (or some other library) to do the task is a good idea in the first place. While those discussions are extremely useful, I feel it would be useful for a group to focus on proposals that would shrink the size of existing libraries with the implicit assumption that it was a good idea. 
From some basic experimentation I've personally done with the jQuery codebase, I feel that such a group could rather quickly identify enough areas to make a much smaller version of jQuery that ran on modern browsers plausible. I also think that having data to support or refute that assertion would be useful, as it's often made casually in meta-discussions. If there is a strong reason that people feel that a focused effort to identify ways to shrink existing popular libraries in new browsers would be a bad idea, I'd be very interested to hear it. Thanks so much for your consideration, Yehuda Katz jQuery Foundation (ph) 718.877.1325