Re: publishing new WD of URL spec

2014-09-10 Thread James Robinson
On Wed, Sep 10, 2014 at 3:14 PM, Glenn Adams gl...@skynav.com wrote:

 WHATWG specs are not legitimate for reference by W3C specs.


Do you have a citation to back up this claim?


 Their IPR status is indeterminate and they do not follow a consensus
 process.


Do you have citations for where this is listed as part of the requirements
for references in W3C specifications?

I know these are your personal opinions but am not aware of anything that
states this is W3C process.

- James


Re: publishing new WD of URL spec

2014-09-10 Thread James Robinson
(public-webapps and www-tag to bcc, +cc public-w3cproc...@w3.org.  sorry
about the earlier mistake)

On Wed, Sep 10, 2014 at 4:26 PM, Glenn Adams gl...@skynav.com wrote:


 On Thu, Sep 11, 2014 at 12:27 AM, James Robinson jam...@google.com
 wrote:

 On Wed, Sep 10, 2014 at 3:14 PM, Glenn Adams gl...@skynav.com wrote:

 WHATWG specs are not legitimate for reference by W3C specs.


 Do you have a citation to back up this claim?


 If it isn't obvious, I am stating my opinion regarding the matter of
 legitimacy. Just like Domenic is stating his opinion. My opinion is based
 on 20 years of experience with the W3C and 40 years of experience with
 standards bodies.


OK, so it's just your opinion.



 The current W3C normative references guidelines [1], only recently
 published, are the only written policy of which I'm aware. This document
 does not prohibit referencing a WHATWG document.



...


 I agree, but that doesn't mean that it is acceptable or even a good idea
 to permit normative references to a WHATWG work, i.e., a work of Hixie
 and friends.


I wasn't asking what your opinion was, I was asking what W3C policy was.
 The answer appears to be that what you originally posted is not accurate
at all and you were simply stating what you wished policy was.  Thank you
for clarifying.

- James


Re: Objection to publishing DOM Parsing and Serialization (was Re: CfC: publish LCWD of DOM Parsing and Serialization; deadline December 3)

2013-12-06 Thread James Robinson
On Fri, Dec 6, 2013 at 5:06 AM, Arthur Barstow art.bars...@nokia.com wrote:


  Even worse is the removal of the reference to the source specification,
 given that you know that this is a contentious subject in this WG.


 Both Travis and I supported keeping that information in the boilerplate.
 The W3C Staff told us it must be removed before the LC could be published
 as at TR. (FYI, I filed a related Issue against the TR publication rules 
 https://www.w3.org/community/w3process/track/issues/71. I think the
 public-w3process list is an appropriate place to discuss the Consortium's
 publication rules.)


If that's the requirement from the Team to publish as TR, then I object to
publishing as a TR until the requirements are fixed.  If and when the
publishing rules are fixed then we can consider proceeding again.

The spec text as it currently exists is actively harmful since it forks the
living standard without even having a reference to it.

- James


Re: [ambient light events LC] Feedback ( LC-2736)

2013-01-17 Thread James Robinson
On Thu, Jan 17, 2013 at 2:36 PM, Tab Atkins Jr. jackalm...@gmail.com wrote:

 On Thu, Jan 17, 2013 at 8:15 AM,  frederick.hir...@nokia.com wrote:
   Dear Tab Atkins Jr. ,
 
  The Device APIs Working Group has reviewed the comments you sent [1] on
 the
  Last Call Working Draft [2] of the Ambient Light Events published on 13
 Dec
  2012. Thank you for having taken the time to review the document and to
  send us comments!
 
  The Working Group's response to your comment is included below, and has
  been implemented in the new version of the document available at:
  https://dvcs.w3.org/hg/dap/raw-file/tip/light/Overview.html.

 Either this link is incorrect, or something is broken in your tooling,
 as it sends me to a very raw HTML file with no styling, headers, or
 anything else.  This makes it difficult to read or review.


The html is served from an https:// page but is trying to load the respec
script from an http:// URL (specifically
http://www.w3.org/Tools/respec/respec-w3c-common).  Chrome by default
blocks the insecure content.  This script appears to be available at
https://www.w3.org/Tools/respec/respec-w3c-common, so the fix should be
straightforward.

- James



 ~TJ




Re: CfC: publish WD of XHR; deadline November 29

2012-12-01 Thread James Robinson
On Sat, Dec 1, 2012 at 5:54 PM, Glenn Adams gl...@skynav.com wrote:


 On Sat, Dec 1, 2012 at 6:34 PM, Tab Atkins Jr. jackalm...@gmail.com wrote:

 On Sat, Dec 1, 2012 at 4:44 PM, Glenn Adams gl...@skynav.com wrote:
  On Sat, Dec 1, 2012 at 1:34 PM, Ms2ger ms2...@gmail.com wrote:
  I object to this publication because of this change:
 
  http://dvcs.w3.org/hg/xhr/rev/2341e31323a4
 
  pushed with a misleading commit message.
 
  since you don't say what is misleading, and since commit messages are
  irrelevant for W3C process, this  objection is immaterial

 Ms2ger objected to the change, not the commit message, so your
 objection to the objection is misplaced.

 However, the commit message isn't long, so it's not difficult to
 puzzle out what ey might be referring to.  In this case, it's the
 implication that changing a bunch of normative references from WHATWG
 specs to W3C copies of the specs is somehow necessary according to
 pubrules.


 Then whoever Ms2ger is should say so. In any case, there is no reason to
 reference a WHATWG document if there is a W3C counterpart.


Sure there is if the W3C version is stale, as is the case here.  That
commit replaced a link to http://xhr.spec.whatwg.org/, last updated roughly
a week ago, with a link to http://www.w3.org/TR/XMLHttpRequest/ which is
dated January 17th and is missing an entire section (section 6).  It also
replaced a link to http://fetch.spec.whatwg.org/# with
http://www.w3.org/TR/cors/# which is similarly out of date by the better
part of a year and lacking handling for some HTTP status codes.  Every
single reference updated in this commit changed the document to point to an
out-of-date and less valuable resource.

It seems that you, like the author of the commit message, mistakenly think
it's a goal to replace all links to point to W3C resources even when they
are strictly worse.  That's not in the W3C pub rules or a good idea.

- James






Re: Scheduling multiple types of end-of-(micro)task work

2012-10-18 Thread James Robinson
On Thu, Oct 18, 2012 at 4:16 PM, Jonas Sicking jo...@sicking.cc wrote:

 On Thu, Oct 18, 2012 at 3:34 PM, James Robinson jam...@google.com wrote:
  On Thu, Oct 18, 2012 at 3:19 PM, Alan Stearns stea...@adobe.com wrote:
  On 10/18/12 2:51 PM, Olli Pettay olli.pet...@helsinki.fi wrote:
  On 10/19/2012 12:08 AM, Rafael Weinstein wrote:
   CSS Regions regionLayoutUpdate brings up an issue I think we need to
   get ahead of:
  
  https://www.w3.org/Bugs/Public/show_bug.cgi?id=16391
  
   For context:
   
   Mutation Observers are currently spec'd in DOM4
  
http://dom.spec.whatwg.org/#mutation-observers
  
   and delivery timing is defined in HTML
  
  
 
   
  http://www.whatwg.org/specs/web-apps/current-work/#perform-a-microtask-checkpoint
  
   The timing here is described as a microtask checkpoint and is
   conceptually deliver all pending mutation records immediately after
   any script invocation exits.
  
   TC-39 has recently approved Object.observe
  
http://wiki.ecmascript.org/doku.php?id=harmony:observe
  
  (Not sure how that will work with native objects.)
  
  
  
   for inclusion in ECMAScript. It is conceptually modeled on Mutation
   Observers, and delivers all pending change records immediately
   *before* the last script stack frame exits.
  
   Additionally, although I've seen various discussion of dispatching
 DOM
   Events with the microtask timing, CSS regionLayoutUpdate is the first
   I'm aware of to attempt it
  
http://dev.w3.org/csswg/css3-regions/#region-flow-layout-events
  
  
  Could you explain why microtasks are good for this case?
  I would have expected something bound to animation frame callback
  handling,
  or perhaps just tasks (but before next layout flush or something).
 
  In the spec bug discussion, it was suggested that we use end-of-task or
  end-of-microtask timing. When I looked at these options, it seemed to me
  that the regionLayoutUpdate event was somewhat close in intent to
  MutationObservers. So between those two options, I picked microtask. If
  there's a better place to trigger the event, I'm happy to make a change
 to
  the spec.
 
  The current wording may be wrong for separate reasons anyway. The event
 is
  looking for layout changes. For instance, if the geometry of a region in
  the region chain is modified, and this causes either (a) overflow in the
  last region in the chain or (b) the last region in the chain to become
  empty, then we want the event to trigger so that a script can add or
  remove regions in the chain to make the content fit correctly. If a task
  in the event queue caused the change, then the microtask point after
 that
  task is probably too soon to evaluate whether the event needs to fire.
 And
  if that was the last task in the queue, then there may not be another
  microtask happening after layout has occurred.
 
  So what I need is an appropriate timing step for responding to layout
  changes. Any suggestions?
 
 
  I think events based off of layout are a terrible idea and there is no
 good
  timing for them.  The regions case is a good example of why not to have
  them.  If you need javascript to respond to DOM changes then mutation
  observers are the primitive to use.  If you just want to get callbacks
 at a
  good time to update visual effects use requestAnimationFrame().

 Does that mean that you think events like onresize and onscroll are
 bad? It's an honest question since I can definitely see your argument.


onresize when applied to the viewport isn't necessarily bad since it's
talking about something that is input to layout (size of the window) not
output.  Similarly, when onscroll is used to detect interactions with the
page and not changes in scroll position due to layout, it's not necessarily
evil.  However, both do suffer from the same issues and we've had to apply
various levels of hacks to both.

For onscroll, after receiving many bugs we've effectively delayed firing
the scroll event later and later to make sure the callstack is clean and to
try to prevent it from interfering too much with the user's interactions.
 In practice, scroll events fire after the user sees the scroll position
update in Chrome in nearly all cases.  For onresize, we've definitely seen
many bugs where people construct infinite loops.  I'm not sure what the
current state of events is with onresize (except that it is inconsistent
between browsers) but if it's firing synchronously or relatively quickly
I'm sure it will be pushed out to fire later and less frequently for the
same reasons as onscroll.
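
To make the implication for authors concrete, here is a sketch of the pattern
this pushes toward (updateEffects is a hypothetical function): do nothing but
record state in the scroll handler, and defer visual work to
requestAnimationFrame.

  var scheduled = false;
  var lastScrollY = 0;
  window.addEventListener("scroll", function () {
    lastScrollY = window.scrollY;      // record input state only
    if (!scheduled) {
      scheduled = true;
      requestAnimationFrame(function () {
        scheduled = false;
        updateEffects(lastScrollY);    // hypothetical visual update
      });
    }
  });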


 Though I do wonder if layout events can be useful in cases where you
 don't want to be called every time the DOM changes (MutationObservers)
 or on every time the screen is painted (requestAnimationFrame).

 You could use those two, specifically the latter, to replace onscroll
 and onresize. But it might end up using a lot more CPU power if you
 have to check the current scroll position or screen

Re: DOM3 Events - additional editing help to move the spec forward

2012-05-29 Thread James Robinson
On Fri, May 25, 2012 at 11:30 PM, Pablo Garaizar Sagarminaga 
garai...@deusto.es wrote:

 Hello,

 on Fri, 25 May 2012 16:49:25 -0700 Jonas Sicking jo...@sicking.cc
 wrote:

   This is not yet an official last call, but if you'd like to re-read
   the spec and provide additional feedback--this is a good time to do
   it.

 Disclaimer: I don't know how and where to do this proposal. I hope
 you'll help me to find the proper place to send it.


  During High Resolution Time working draft final call for public
 comments I wrote a suggestion about DOM events' timestamps and the use
 of monotonically increasing values provided by High Resolution Time
 API:

  I would love to have the chance to get a DOMHighResTimeStamp as a
 property of an DOM event, like event.timeStamp. Events'
 timestamps are also subject to system clock skew and other problems
 mentioned in High Resolution Time working draft, and providing access
 to HRT when triggering events will be very helpful to program accurate
 interactions.

  I'm not sure if this could be done adding a new property to the
 event interface (e.g., HRTimeStamp) or modifying the typedef of the
 current timeStamp property (i.e., DOMHighResTimeStamp).



Please come up with a list of cases where this timestamp would be useful
and start a new thread on www-...@w3.org explaining your use cases and how
you think this proposal would help.  I agree that this could be quite
useful.  It would probably be handled as part of DOM4 events.
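
As a sketch of the gap being described (assuming performance.now() from the
High Resolution Time draft under discussion):

  document.addEventListener("keydown", function (e) {
    // e.timeStamp is wall-clock based, so it is subject to the clock
    // skew problems the draft describes; performance.now() is monotonic
    // but is sampled in the handler, not when the event was generated.
    var now = performance.now();
    console.log("handled at", now, "- event stamped at", e.timeStamp);
  });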

- James


  It would be great to discuss this feature in future versions of the
 drafts mentioned before.

 Best regards,

 --
  Pablo Garaizar Sagarminaga
  Universidad de Deusto
  Avda. de las Universidades 24
  48007 Bilbao - Spain

  Phone:   +34-94-4139000 Ext 2512
  Fax:  +34-94-4139101





Re: IndexedDB: Binary Keys

2012-05-21 Thread James Robinson
On Mon, May 21, 2012 at 10:09 AM, Joran Greef jo...@ronomon.com wrote:

 IndexedDB supports binary values as per the structured clone algorithm
 as implemented in Chrome and Firefox.

 IndexedDB needs to support binary keys (ArrayBuffer, TypedArrays).

 Many popular KV stores accept binary keys (BDB, Tokyo, LevelDB). The
 Chrome implementation of IDB is already serializing keys to binary.

 JS is moving more and more towards binary data across the board
 (WebSockets, TypedArrays, FileSystemAPI). IDB is not quite there if it
 does not support binary keys.

 Binary keys are more efficient than Base 64 encoded keys, e.g. a 128
 bit key in base 256 is 16 bytes, but 22 bytes in base 64.


Would using DOMString keys work for your use case?  DOMString is defined as
a list of 16-bit integers and I don't see anything in the IndexedDB
specification that treats the strings as if they contained Unicode, so I
would naively expect that packing your 64 bits into a DOMString of length 4
(16 bits / unit) would work fine.
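
A minimal sketch of that packing (bytesToKey is a hypothetical helper; a
128-bit key would become a DOMString of length 8). This relies on key
comparison treating strings as raw code units, per the reading above:

  function bytesToKey(bytes) {  // bytes: a Uint8Array of even length
    var units = [];
    for (var i = 0; i < bytes.length; i += 2) {
      units.push((bytes[i] << 8) | bytes[i + 1]);  // two bytes per code unit
    }
    return String.fromCharCode.apply(null, units);
  }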

- James



 Am working on a production system storing 3 million keys in IndexedDB.
 In about 6 months it will be storing 60 million keys in IndexedDB.

 Without support for binary keys, that's roughly 360 MB of wasted storage
 (60,000,000 * (22 - 16)) not to mention the wasted CPU overhead spent
 Base64 encoding and decoding keys.




Re: exposing CANVAS or something like it to Web Workers

2012-05-14 Thread James Robinson
On Mon, May 14, 2012 at 5:03 PM, Boris Zbarsky bzbar...@mit.edu wrote:

 On 5/14/12 7:56 PM, Glenn Maynard wrote:

 A tricky bit: you need to know which element to sync to, so the browser
 knows which monitor's vsync to use.  According to [1] only WebKit's
 requestAnimationFrame actually takes an element.  (That's surprising;
 this seems obvious.)


 Does WebKit actually use the element to determine vsync?  How do they
 handle cases when the element spans monitors?


 As far as I know WebKit's implementation uses the element to optimize out
 callbacks when the element is not visible, but that's it.


The element isn't used for anything currently in WebKit.  Which vsync is used
is determined by the monitor the tab/window/whatever lands on.  When this
spans monitors, something random happens (there aren't many good options).


 Note that Gecko, for example, does not tie requestAnimationFrame callbacks
 to vsync.  I can't speak for other UAs.


I think you'll want to.

- James




  I mention this because
 this method would need to accept a context in lieu of an element


 What would the context be used for?

 -Boris




Re: [IndexedDB] Numeric constants vs enumerated strings

2012-02-27 Thread James Robinson
Also note that however painful an API change may seem now, it will only get
more painful the longer it is put off.

- James
On Feb 27, 2012 7:50 AM, Odin Hørthe Omdal odi...@opera.com wrote:

 I agree on the values. +1

 --
 Sent from my N9, excuse the top posting

 On 27.02.12 16:17 Jonas Sicking wrote:
 On Mon, Feb 27, 2012 at 1:44 PM, Odin Hørthe Omdal odi...@opera.com
 wrote:
  On Sat, 25 Feb 2012 00:34:40 +0100, Israel Hilerio 
 isra...@microsoft.com
  wrote:
 
  We have several internal and external teams implementing solutions on
  IndexedDB for IE10 and Win8.  They are looking for a finalized spec
 sooner
  than later to ensure the stability of their implementations.  Every
 time we
  change the APIs, they have to go back and update their implementations.
  This activity sets them back and makes them lose confidence in the
  platform.
 
 
  H...
 
  If you implement the fallback that Sicking mentioned, just changing the
 value of, e.g., IDBTransaction.READ_WRITE from 1 to "read-write" (or
  whatever we'll choose to call it), then all that code will continue to
 work.
 
  It can be treated like an internal change. All the code I've seen from
  Microsoft so far has used the constants (which is how it's supposed to be
  used anyway) - so updating them won't be necessary.
 
 
  This is a change for the huge masses of people who will come after us
 and
  *not* be as wise and just input 1 or 2 or whatever that doesn't tell us
  anything about what the code is doing.
 
  IMHO it's a very small price to pay for a bigger gain.

 Israel,

 I sympathize and definitely understand that this is a scary change to
 make so late in the game.

 However I think it would be a big improvement to the API. Both from
 the point of view of usability (it's a lot easier to write "readwrite"
 than IDBTransaction.READ_WRITE) and from the point of view of
 consistency with most other JS APIs that are now being created both
 inside the W3C and outside it.

 As has been pointed out, this change can be made in a very backwards
 compatible way. Any code which uses the constants would continue to
 work just as-is. You can even let .transaction() and .openCursor()
 accept numeric keys as well as the string-based ones so that if anyone
 does db.transaction("mystore", 2) it will continue to work.

 The only way something would break is if someone does something like
 if (someRequest.readyState == 2), but I've never seen any code like
 that, and there is very little reason for anyone to write such code.


 To speed up this process, I propose that we use the following values
 for the constants:

 IDBDatabase.transaction() - mode: "readwrite", "readonly"
 *.openCursor() - direction: "next", "nextunique", "prev", "prevunique"
 IDBRequest.readyState: "pending", "done"
 IDBCursor.direction: same as openCursor
 IDBTransaction.mode: "readwrite", "readonly", "versionchange"
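
 Concretely, with the backwards-compatible approach above both forms would
 keep working (a sketch; db is an already-open IDBDatabase):

   var tx1 = db.transaction("books", "readwrite");               // new string form
   var tx2 = db.transaction("books", IDBTransaction.READ_WRITE); // old constant form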

 / Jonas




Re: String to ArrayBuffer

2012-01-11 Thread James Robinson
On Wed, Jan 11, 2012 at 2:45 PM, Charles Pritchard ch...@jumis.com wrote:

 Currently, we can asynchronously use BlobBuilder with FileReader to get an
 array buffer from a string.
 We can, of course, use code around String.fromCharCode/charCodeAt to convert
 to and from a Uint8Array, but it's ugly.

 The StringEncoding proposal seems a bit much for most web use:
 http://wiki.whatwg.org/wiki/StringEncoding

 All we really ever do is work on DOMString, and that's covered by UTF8.


DOMString is not UTF-8 or even necessarily Unicode.  It's a sequence of 16-bit
integers and a length.



 As following file shows, DOMString to ArrayBuffer conversion is about 30
 lines of code (start at line 125):
 http://code.google.com/p/stringencoding/source/browse/encoding.js


This only seems correct for valid Unicode strings, which does not cover all
DOMStrings.
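
A concrete example of a DOMString that is not valid Unicode:

  var s = String.fromCharCode(0xD800);  // lone (unpaired) high surrogate
  // s is a perfectly legal DOMString of length 1, but it has no
  // well-formed UTF-8 encoding, so a UTF-8 round-trip cannot preserve it.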

- James



 It seems like this kind of type conversion could be handled more
 efficiently and be less error-prone for programmers like myself, who often
 forget to test with multibyte strings.

 I'm sure this has popped up many times before on the list. Thought I'd put
 it out there again.
 We could just tweak the ArrayBuffer constructor to support DOMString as an
 argument.
 Currently, it supports length.

 -Charles




Re: XPath and find/findAll methods

2011-11-21 Thread James Robinson
On Mon, Nov 21, 2011 at 11:34 AM, Martin Kadlec bs-ha...@myopera.com wrote:

 Hello everyone,
 I've noticed that the find/findAll methods are currently being discussed
 and there is one thing that might be a good idea to consider.

 Currently, it's quite uncomfortable to use XPath in javascript. The
 document.evaluate method has lots of arguments and we have to remember
 plenty of constants to make it work. IE and Opera support the selectNodes
 method on the Node prototype, which is really useful, but what's the point in
 using it when it doesn't work in FF/Chrome/Safari.
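
 For illustration, the verbose form being described (a sketch):

   var result = document.evaluate(
       "//nav/a[1]",                         // XPath expression
       document,                             // context node
       null,                                 // namespace resolver
       XPathResult.FIRST_ORDERED_NODE_TYPE,  // one of many result-type constants
       null);                                // existing result object to reuse
   var node = result.singleNodeValue;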


XPath is dead on the web.  Let's leave it that way.

- James



 My idea is to combine querySelector/All and selectNodes methods. This
 combination - find/findAll - would make using XPath much easier and it
 might give lots of programmers a good reason to use it instead of
 querySelector/All, even though it's a newer technology.

 The problem is how to combine the methods, because in some cases it might
 not be clear if the string is xpath or css query. Because CSS queries are
 probably going to be used much more often than xpath it should be easier to
 call the method with CSS query. There is an idea I have but I would be glad
 for any other.

 findAll(query, use_xpath):
 CSS: findAll("nav a:first-child");
 XPATH: findAll("//nav/a[1]", true);

 Cheers,
 Martin Kadlec





Re: Question about implementing DataTransfer.addElement

2011-10-07 Thread James Robinson
On Fri, Oct 7, 2011 at 2:56 PM, Tab Atkins Jr. jackalm...@gmail.com wrote:

 On Fri, Oct 7, 2011 at 2:45 PM, Daniel Cheng dch...@chromium.org wrote:
  For technical reasons, animating the drag image is non-trivial and not
  likely to be implemented in the near future, if it is ever implemented.

 I would think that it's basically identical, technically, to
 implementing the element() function from CSS Image Values, which I
 believe we're planning to do.


Not quite.  With the element() function the 'live copy' still lives in the
page and renders the same way everything else does.  My understanding (and
Daniel can correct me if I'm wrong) is that drag images do not render the
same way due to platform integration concerns and so the technical cost for
the two is fairly different.

- James


 ~TJ




Re: [websockets] Making optional extensions mandatory in the API (was RE: Getting WebSockets API to Last Call)

2011-07-27 Thread James Robinson
On Wed, Jul 27, 2011 at 1:12 PM, Ian Fette (イアンフェッティ) ife...@google.com wrote:

 We are talking about it at IETF81 this week.

 That said, I think either way browsers should not require deflate-stream. I
 am hoping we can make forward progress on deflate-application-data (
 http://tools.ietf.org/html/draft-tyoshino-hybi-websocket-perframe-deflate-01).
 If we can get that through the process I could live with Chrome being
 required to support that. As for the protocol doc, the protocol lists
 deflate-stream as an example, not a requirement, so the mere fact that I
 don't want to support that particular extension isn't necessarily the
 strongest argument for taking it out of the protocol as the protocol doesn't
 require that it be supported. The API should not require the support of that
 particular extension either, as that extension is particularly bad.


Sounds like the consensus is to forbid this extension at the API layer,
then.

- James


 -Ian

 On Wed, Jul 27, 2011 at 11:11 AM, Anne van Kesteren ann...@opera.com wrote:

 On Wed, 27 Jul 2011 11:04:09 -0700, Takeshi Yoshino tyosh...@google.com
 wrote:

 So, let me correct my text by s/XHR/HTML5 (http://www.w3.org/TR/html5/)/.


 HTML5 is mostly transport-layer agnostic.

 I am not sure why we are going through this theoretical side-quest on
 where we should state what browsers are required to implement from HTTP to
 function. The HTTP protocol has its own set of problems and this is all
 largely orthogonal to what we should do with the WebSocket protocol and API.

 If you do not think this particular extension makes sense raise it as a
 last call issue with the WebSocket protocol and ask for the API to require
 implementations to not support it. Let's not meta-argue about this.



 --
 Anne van Kesteren
 http://annevankesteren.nl/





Re: [websockets] Making optional extensions mandatory in the API (was RE: Getting WebSockets API to Last Call)

2011-07-27 Thread James Robinson
On Wed, Jul 27, 2011 at 3:14 PM, Ian Fette (イアンフェッティ) ife...@google.com wrote:

 I don't think we want to forbid any extensions.


At the protocol level, sure. At the API level we have to pick which
functionality user agents are required to support and which they are
required not to support, having 'optional' stuff is not an option.  It
sounds like you are saying that deflate-stream is bad, so we should not have
it in the API.

- James


 The whole point of extensions is to allow people to do something that
 doesn't necessarily have consensus by a broad enough group to be in the base
 protocol. That said, I think a lot of people would be happier if
 deflate-stream were an independent document as opposed to being the only
 extension included in the core specification as a known extension.


 -Ian


 2011/7/27 James Robinson jam...@google.com

 On Wed, Jul 27, 2011 at 1:12 PM, Ian Fette (イアンフェッティ) 
 ife...@google.com wrote:

 We are talking about it at IETF81 this week.

 That said, I think either way browsers should not require deflate-stream.
 I am hoping we can make forward progress on deflate-application-data (
 http://tools.ietf.org/html/draft-tyoshino-hybi-websocket-perframe-deflate-01).
 If we can get that through the process I could live with Chrome being
 required to support that. As for the protocol doc, the protocol lists
 deflate-stream as an example, not a requirement, so the mere fact that I
 don't want to support that particular extension isn't necessarily the
 strongest argument for taking it out of the protocol as the protocol doesn't
 require that it be supported. The API should not require the support of that
 particular extension either, as that extension is particularly bad.


 Sounds like the consensus is to forbid this extension at the API layer,
 then.

 - James


 -Ian

 On Wed, Jul 27, 2011 at 11:11 AM, Anne van Kesteren ann...@opera.com wrote:

 On Wed, 27 Jul 2011 11:04:09 -0700, Takeshi Yoshino 
 tyosh...@google.com wrote:

 So, let me correct my text by s/XHR/HTML5 (http://www.w3.org/TR/html5/)/.


 HTML5 is mostly transport-layer agnostic.

 I am not sure why we are going through this theoretical side-quest on
 where we should state what browsers are required to implement from HTTP to
 function. The HTTP protocol has its own set of problems and this is all
 largely orthogonal to what we should do with the WebSocket protocol and 
 API.

 If you do not think this particular extension makes sense raise it as a
 last call issue with the WebSocket protocol and ask for the API to require
 implementations to not support it. Let's not meta-argue about this.



 --
 Anne van Kesteren
 http://annevankesteren.nl/







Re: Mutation events replacement

2011-07-06 Thread James Robinson
On Wed, Jul 6, 2011 at 2:47 AM, Olli Pettay olli.pet...@helsinki.fi wrote:

 On 07/06/2011 08:14 AM, James Robinson wrote:

On Tue, Jul 5, 2011 at 5:51 PM, Ojan Vafai o...@chromium.org wrote:

On Tue, Jul 5, 2011 at 5:36 PM, Ryosuke Niwa rn...@webkit.org wrote:

On Tue, Jul 5, 2011 at 5:27 PM, Rafael Weinstein
rafa...@google.com wrote:

It seems like these are rarified enough cases that visual
artifacts
are acceptable collateral damage if you do this. [Put
another way, if
you care enough about the visual polish of your app that you
will put
energy into avoiding flicker, you probably aren't using alert
 and
showModalDialog anyway].

Also, it's up to the app when to do it, so it's entirely in its
control (and thus avoid visual artifacts).


Given that we don't provide an API to control paint in general,
I'm not convinced that we should add such a requirement in the
DOM mutation event spec.


Many of the use-cases for mutation events (e.g. model-driven views)
are poorly met if we don't give some assurances here.

Note that this is a problem with both proposals. Work done
in (at
least some) mutation observers is delayed. If a sync paint
occurs
before it, its work won't be reflected on the screen.


Right.  Maybe we can add a note saying that the user agents are
recommended not to paint before all mutation observers are
called.  I don't think we should make this a requirement.


There may be a middle ground that isn't so hard for browser
vendors to implement interoperably. Can we require no repaint except in
the presence of a specific list of synchronous API calls? I'm sure
that's too simplistic, but I'm hoping someone with more experience
can chime in with something that might actually be a plausible
requirement.


 HTML specifies to a limited extent when painting can happen with regard
 to what it defines as tasks:

 http://www.whatwg.org/specs/web-apps/current-work/multipage/webappapis.html#processing-model-2

 See step 5 - "update the rendering" is equivalent to painting.  Ignoring
 step (2) which relates to the storage mutex, this description is
 accurate.  No browser updates the rendering after invoking every single
 task, but I'm pretty sure that no modern browser updates the rendering
 at any other time.  Note that a few APIs such as showModalDialog()
 invoke this algorithm:

 http://www.whatwg.org/specs/web-apps/current-work/multipage/webappapis.html#spin-the-event-loop

 which effectively starts up a second event loop within the processing of
 a task, meaning that the browser can paint while the task that resulted
 in the showModalDialog() call is still running.  Authors who are using
 showModalDialog() are being extremely user-hostile so I don't think we
 need to accommodate them.



 Sync XHR is one case when at least some browsers do repainting.
 And yes, sync XHR is probably as user-hostile as showModalDialog(), but
 unfortunately it is used quite often.

What browser(s) do this?  That sounds like a bug in those
implementation(s), and not something we should worry about standards-wise.

- James






 One question I have with regards to the proposed processing model is
 whether to define this in terms of entering script or in terms of
 'tasks' in the HTML sense.  For example, asynchronous DOM events are
 typically set up by queueing a single task to fire the event, but there
 might be multiple event listeners registered for that one event that
 would all be fired as part of the same task.  If we were to define
 things in terms of tasks, then I think Rafael's proposal is similar to
 extending step 4 "provide a stable state" of the event loop algorithm in
 order to notify observers before proceeding to step 5.  This would mean
 that if multiple click handlers were registered on an element and the
 user clicked on it, all of the event handlers would run, then the
 mutation observers would run, then the browser would be free to paint if
 it chose to.  An alternative approach would be to hook on calling in to
 script
 (http://www.whatwg.org/specs/web-apps/current-work/multipage/webappapis.html#calling-scripts)
 and run the observers after each script invocation.  This means that if
 multiple handlers were registered for an event then the first handler
 would be invoked, then observers notified, then the second handler
 invoked, etc.  It would still have

Re: Mutation events replacement

2011-07-06 Thread James Robinson
On Wed, Jul 6, 2011 at 1:58 PM, Aryeh Gregor simetrical+...@gmail.com wrote:

 On Wed, Jul 6, 2011 at 1:14 AM, James Robinson jam...@google.com wrote:
   No browser updates the rendering after invoking every single task, but
 I'm
  pretty sure that no modern browser updates the rendering at any other
 time.

 Testcase:

 http://software.hixie.ch/utilities/js/live-dom-viewer/saved/1063

 I can confirm that in IE9, Firefox 6.0a2, and Chrome 14 dev, "Hi!" is
 never visible.  In Opera 11.50, "Hi!" becomes visible immediately, and
 is replaced five seconds later by "Bye!".  But I'm not surprised that
 this would cause compat issues for Opera, even though it increases
 responsiveness.


That's just insanity from Opera.  I imagine that they will converge to what
other browsers and the spec say, propose modifications to the spec to make
their behavior conformant (highly unlikely), or remain intentionally
non-conformant in which case we can safely ignore the behavior.

- James


Re: Mutation events replacement

2011-07-05 Thread James Robinson
On Tue, Jul 5, 2011 at 5:51 PM, Ojan Vafai o...@chromium.org wrote:

 On Tue, Jul 5, 2011 at 5:36 PM, Ryosuke Niwa rn...@webkit.org wrote:

 On Tue, Jul 5, 2011 at 5:27 PM, Rafael Weinstein rafa...@google.com wrote:

 It seems like these are rarified enough cases that visual artifacts
 are acceptable collateral damage if you do this. [Put another way, if
 you care enough about the visual polish of your app that you will put
 energy into avoiding flicker, you probably aren't using alert and
 showModalDialog anyway].

 Also, it's up to the app when to do it, so it's entirely in its
 control (and thus avoid visual artifacts).


 Given that we don't provide an API to control paint in general, I'm not
 convinced that we should add such a requirement in the DOM mutation event
 spec.


 Many of the use-cases for mutation events (e.g. model-driven views) are
 poorly met if we don't give some assurances here.


  Note that this is a problem with both proposals. Work done in (at
 least some) mutation observers is delayed. If a sync paint occurs
 before it, its work won't be reflected on the screen.


 Right.  Maybe we can add a note saying that the user agents are
 recommended not to paint before all mutation observers are called.  I don't
 think we should make this a requirement.


 There may be a middle ground that isn't so hard for browser vendors to
 implement interoperably. Can we require no repaint except in the presence of
 a specific list of synchronous API calls? I'm sure that's too simplistic, but
 I'm hoping someone with more experience can chime in with something that
 might actually be a plausible requirement.


HTML specifies to a limited extent when painting can happen with regard to
what it defines as tasks:

http://www.whatwg.org/specs/web-apps/current-work/multipage/webappapis.html#processing-model-2

See step 5 - "update the rendering" is equivalent to painting.  Ignoring
step (2) which relates to the storage mutex, this description is accurate.
 No browser updates the rendering after invoking every single task, but I'm
pretty sure that no modern browser updates the rendering at any other time.
 Note that a few APIs such as showModalDialog() invoke this algorithm:

http://www.whatwg.org/specs/web-apps/current-work/multipage/webappapis.html#spin-the-event-loop

which effectively starts up a second event loop within the processing of a
task, meaning that the browser can paint while the task that resulted in the
showModalDialog() call is still running.  Authors who are using
showModalDialog() are being extremely user-hostile so I don't think we need
to accommodate them.

One question I have with regards to the proposed processing model is whether
to define this in terms of entering script or in terms of 'tasks' in the
HTML sense.  For example, asynchronous DOM events are typically set up by
queueing a single task to fire the event, but there might be multiple event
listeners registered for that one event that would all be fired as part of
the same task.  If we were to define things in terms of tasks, then I think
Rafael's proposal is similar to extending step 4 "provide a stable state" of
the event loop algorithm in order to notify observers before proceeding to
step 5.  This would mean that if multiple click handlers were registered on
an element and the user clicked on it, all of the event handlers would run,
then the mutation observers would run, then the browser would be free to
paint if it chose to.  An alternative approach would be to hook on calling
in to script (
http://www.whatwg.org/specs/web-apps/current-work/multipage/webappapis.html#calling-scripts)
and run the observers after each script invocation.  This means that if
multiple handlers were registered for an event then the first handler would
be invoked, then observers notified, then the second handler invoked, etc.
 It would still have the property that the observers would always run before
the browser painted so long as no script spun the event loop.
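
To make the two options concrete (a sketch with hypothetical names; both
handlers mutate the DOM and observers are registered for those mutations):

  function handlerA(e) { /* mutates the DOM */ }
  function handlerB(e) { /* mutates the DOM */ }
  button.addEventListener("click", handlerA);
  button.addEventListener("click", handlerB);
  // task-based model:  handlerA, handlerB, observers, then paint allowed
  // per-script model:  handlerA, observers, handlerB, observers, then paint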

I hope this helps.  This has been a long thread and I haven't had a chance
to fully digest all of it, but I'm really happy that people are taking a
serious look at this problem.
- James



 - Ryosuke





Re: Mutation events replacement

2011-06-30 Thread James Robinson
On Thu, Jun 30, 2011 at 1:15 PM, David Flanagan dflana...@mozilla.com wrote:


 This is actually a pretty hard problem to solve, and still wouldn't really
 solve the performance issues for DOM events

 Still better than current DOM Mutation events, though, right?  Are you saying
 that synchronous callbacks on a readonly tree would have worse performance
 than Jonas's and Olli's proposal?


I suspect, although I have not measured, that entering/leaving the JS VM
every time an attribute was modified or a node was created would have
significantly higher overhead than batching up the calls to happen later.
 Consider generating a large amount of DOM by setting innerHTML.
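
A sketch of why: a single assignment can create thousands of nodes (container
is a hypothetical element), so per-node synchronous callbacks would enter the
VM thousands of times where batched delivery enters it once.

  var html = "";
  for (var i = 0; i < 10000; i++) {
    html += "<span>" + i + "</span>";
  }
  container.innerHTML = html;  // one statement, ~10,000 node creations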

- James


  -Boris

 David




Re: RfC: moving Web Storage to WG Note; deadline June 29

2011-06-22 Thread James Robinson
On Wed, Jun 22, 2011 at 2:42 PM, Aryeh Gregor simetrical+...@gmail.com wrote:

 On Mon, Jun 20, 2011 at 10:50 PM, Boris Zbarsky bzbar...@mit.edu wrote:
  Note that there are currently major browsers that do not follow the spec
 as
  currently written and have explicitly said that they have no plans to do
 so.

 If browsers can agree on what to implement, update the spec to reflect
 that.  If they can't and we don't think they ever will, update the
 spec to say behavior is undefined.  Either way, it's no less worthy of
 REC-track specification than other preexisting features that are
 flawed but in practice not removable from the platform.


I think browsers are relatively speaking closer in implementation to each
other than to the spec currently.  Would it be too much work to come up with
a specification that did not include the structured clone algorithm and did
not include the storage mutex?  That seems to be what browsers need to
support for compatibility, and so it would be ideal to capture that in some
form.  Such a specification would have inherent and unavoidable data races
(as do current and future implementations), so it would be a stretch to
consider it recommended, but it would reflect reality.

- James


Re: [Bug 12111] New: spec for Storage object getItem(key) method does not match implementation behavior

2011-06-16 Thread James Robinson
That text requires the storage mutex, which has not and will not be
implemented by any vendors, let alone 2 interoperable implementations, so it
seems rather doomed.

- James
On Jun 16, 2011 8:58 AM, Philippe Le Hegaret p...@w3.org wrote:
 Art wrote:
 All - given that addressing 12111 is a low priority for Ian, one way
 forward is for someone else to create a concrete proposal.

 Here is a concrete proposal:
 http://www.w3.org/2011/06/Web%20Storage.html

 Philippe





Re: [Bug 12111] New: spec for Storage object getItem(key) method does not match implementation behavior

2011-06-11 Thread James Robinson
On Sat, Jun 11, 2011 at 4:32 AM, Arthur Barstow art.bars...@nokia.com wrote:

 On Jun/10/2011 3:05 PM, ext Ian Hickson wrote:

 On Fri, 10 Jun 2011, Arthur Barstow wrote:

 
   My take on the comments is that most commenters prefer the spec to be
   changed as PLH suggested in comment #5:
 
  http://www.w3.org/Bugs/Public/show_bug.cgi?id=12111#c5
 Hixie - are you willing to change the spec accordingly?

 What's the rush here? This is a minor issue, which I plan to address in
 due course. It's not blocking implementors, it's not causing any
 interoperability trouble, it's not stopping someone from writing a test
 suite, why all the fuss?


 I would like all of the group's specs to keep moving forward on the
 Recommendation track. That is an expectation set forth in the group's
 charter and I don't think I have ever asked the group to rush this or any
 other spec. (On the contrary, I have supported longer review periods when
 requested and do not enforce the 90-day heartbeat publication policy just
 to publish.)

 In this case, at least one other spec (which is planned for Proposed
 Recommendation in early August) has a normative dependency on Storage (and
 these functions in particular). Although the reference policy provides some
 flexibility, I think it is sub-optimal for later stage specs to depend on
 specs that are still changing.

 I would appreciate it, if you would please provide a date when you expect
 to have addressed this issue.

 (FYI, Cam is working on a schedule to move Web IDL to LC which is the only
 other dependency not yet at LC for the  spec mentioned above.)

 -AB


I am speaking only for myself (and not Google, WebKit, or Chromium) but I
feel obligated to point out that localStorage specifies a fundamentally
broken synchronization model that we are not able to fix due to
compatibility concerns.  This is noted at
http://dev.w3.org/html5/webstorage/#issues with a tragically optimistic
request for suggestions.  As an implementor, my main motivation to pay any
attention to localStorage is to think up ways to discourage authors from
using it for anything non-trivial.

In my opinion, the only thing left to be done with localStorage is to write
it off as an unfortunate failure, learn our lesson, and move on.  This may
not be relevant to the processes you are trying to follow.

- James


Re: Request for feedback: DOMCrypt API proposal - random number generation

2011-06-06 Thread James Robinson
On Mon, Jun 6, 2011 at 1:52 PM, Yaron Sheffer yaronf.i...@gmail.com wrote:

 Sure, that would be much more efficient. And I agree with others on that
 thread that the API should be non-blocking, non-failing, i.e. akin to Linux
 /dev/urandom.

 But my more important point was the second API: allow the code to mix in
 any available entropy: keypresses, file contents, plain old time, or
 externally obtained random bytes (http://www.random.org/). Maybe this API
 should be called updateRandom, because it does NOT determine the full
 state of the PRNG, which should never be exposed. I would say this could be
 an optional API (on Linux it simply amounts to writing bytes *into*
 /dev/random, but I don't know if it's implementable on Windows).


What's the use case for this?  Are you concerned that the browser's PRNG
will not have sufficient randomness for your needs?

- James



 Thanks,
 Yaron


 On 06/06/2011 09:44 PM, Aryeh Gregor wrote:

 On Sat, Jun 4, 2011 at 1:52 AM, Yaron Sheffer yaronf.i...@gmail.com wrote:

  However, I would like to propose one additional feature: a cryptographically
 secure random number generator (CSRNG). This is a badly missed feature
 today. [And just as I'm posting, I now see that Rich Tibbett beat me to this
 idea.]

 Specifically I would propose (and I know the details can be debated
 forever):

 random(): returns a cryptographically-strong 32-bit random integer.
 setRandom(r): mixes a user-supplied random integer r into the RNG internal
 state.

  This was discussed in February on whatwg:
 http://lists.whatwg.org/pipermail/whatwg-whatwg.org/2011-February/030241.html

 I didn't reread the whole thread, but my recollection is that people
 preferred an API where you'd give it an ArrayBuffer and it would fill
 it with random bytes.  That way you can efficiently get large amounts
 of randomness.
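
 Concretely, the fill-an-ArrayBuffer shape described above looks like this
 (a sketch; this is roughly the form that later shipped as
 crypto.getRandomValues):

   var bytes = new Uint8Array(16);
   crypto.getRandomValues(bytes);  // fills the view with CSPRNG output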




Re: Synchronous XMLHttpRequest and events

2011-05-13 Thread James Robinson
On Fri, May 13, 2011 at 1:54 PM, Olli Pettay olli.pet...@helsinki.fi wrote:

 On 05/13/2011 11:39 PM, Jonas Sicking wrote:

 On Fri, May 13, 2011 at 1:21 PM, Boris Zbarsky bzbar...@mit.edu wrote:

 On 5/13/11 4:07 PM, Jonas Sicking wrote:


 It *does* however call for a readystatechange event to be fired in
 response to the call to .open. Even if the request being started is a
 synchronous one.

 What is the use case for this event? It seems pretty useless and
 inconsistent to me.


 I believe web pages depend on this to some extent; the fact that Gecko
 used
 to not fire it caused all sorts of compat issues.  See
 https://bugzilla.mozilla.org/show_bug.cgi?id=313646


 Ugh, yeah, in testing my patch I came across the same bug.

 So it appears the spec needs to be adjusted the other direction then.
 It needs to define that readystatechange needs to fire in all cases
 independent of the value of the asynchronous flag?


 No. We don't want to fire any events *during* sync XHR processing.


I would definitely prefer not to on philosophical grounds, but I think it's
required for compatibility and that trumps theoretical purity.  The spec
should document reality.
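
A sketch of the compat behavior in question:

  var xhr = new XMLHttpRequest();
  xhr.onreadystatechange = function () {
    console.log("readyState:", xhr.readyState);
  };
  xhr.open("GET", "/data", false);  // logs 1 (OPENED) even for a sync request
  xhr.send();                       // but no events fire *during* the sync fetch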

- James




 -Olli



 / Jonas







Re: ISSUE-173 (ericu): terminal FileWriter progress events should be queued [File API: Writer]

2010-12-10 Thread James Robinson
On Fri, Dec 10, 2010 at 2:04 PM, Eric Uhrhane er...@google.com wrote:

 On Fri, Dec 10, 2010 at 2:39 AM, Anne van Kesteren ann...@opera.com
 wrote:
  On Fri, 10 Dec 2010 03:24:38 +0100, Web Applications Working Group Issue
  Tracker sysbot+trac...@w3.org wrote:
 
  ISSUE-173 (ericu): terminal FileWriter progress events should be queued
  [File API: Writer]
 
  http://www.w3.org/2008/webapps/track/issues/173
 
  Raised by: Eric Uhrhane
  On product: File API: Writer
 
  When a FileWriter successfully completes a write, currently it:
  * dispatches a write event
  * sets readyState to DONE
  * dispatches a writeend event
 
  If you want to start a new write, you can't do it in onwrite, since
  readyState is still WRITING.  Those events should be queued for
 asynchronous
  delivery, so that readyState is DONE by the time they get handled.  If
 you
  set up a new write in onwrite, you'll still run the risk of getting
 confused
  by the subsequent writeend from the previous write, but that's
 detectable.
 
  I'll have to look and see what other events should be marked as queued.
 
  Why not queue a task that changes readyState and then dispatches write
  followed by writeend, synchronously from the task. That is how a number
 of
  things work in XMLHttpRequest.

 That would work too.  Any reason that you don't want to set readyState
 before queueing the task?  This is already happening asynchronously,
 in response to the write finishing--the important thing is just to
 make sure the events are queued, and readyState is updated, before the
 first handler runs.


I'm not familiar with this particular API, but in general I think it's
important that state variables be set at the same time that the relevant
event fires.  In other words, code that polls readyState or similar
attributes should not be able to observe any change before the related event
is fired.
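
A sketch of the queued-task pattern under discussion (queueTask stands in for
the HTML "queue a task" primitive and is not a real API):

  function finishWrite(writer) {
    queueTask(function () {
      writer.readyState = writer.DONE;                     // state changes and
      writer.dispatchEvent(new ProgressEvent("write"));    // events fire in the
      writer.dispatchEvent(new ProgressEvent("writeend")); // same task, so
      // polling code can never observe DONE before "write" has fired.
    });
  }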

- James


  --
  Anne van Kesteren
  http://annevankesteren.nl/
 
 




Re: [XHR2] responseType / response / overrideMimeType proposal

2010-11-29 Thread James Robinson
On Mon, Nov 29, 2010 at 9:21 AM, Anne van Kesteren ann...@opera.com wrote:

 Before I write it out it would be nice to assess whether there is consensus
 on this. From the current draft, asBlob, responseBlob, and
 responseArrayBuffer are removed. response and responseType are added.

 responseType can be set when the state is either OPENED or HEADERS_RECEIVED
 and must be set before send() is invoked for synchronous requests. When set
 at an inappropriate point it throws INVALID_STATE_ERR much like the other
 attributes. (This means responseType can be set during the request,
 specifically after all headers are available to the author so she/he can
 make an informed choice what to set responseType to.)

 Depending on the type response either starts returning at LOADING or DONE.

 overrideMimeType can be invoked whenever responseType can be set.

 responseType has these constants:

  RESPONSE_DEFAULT
  RESPONSE_TEXT
  RESPONSE_DOCUMENT
  RESPONSE_BLOB
  RESPONSE_ARRAY_BUFFER

 When set to anything but RESPONSE_DEFAULT responseText and responseXML will
 throw INVALID_STATE_ERR. When set to RESPONSE_DEFAULT response returns what
 responseText returns. (This seems mildly better than throwing, but I can be
 convinced to make it throw instead.)


I think strings would work much better than enumerated values.  The problem
with explicitly enumerated values is that they are extremely verbose and not
easily extensible. Using strings would allow for more natural values
(text, blob, etc) and allow for vendor prefixing experimental extensions
without leaving odd gaps in the enum values.  For example, if/when browsers
adopt the new binary data proposal from TC39 we will probably want to
deprecate or drop RESPONSE_ARRAY_BUFFER in favor of whatever the new type
is.

Otherwise I think this looks great.
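
Concretely, the string-based form would read (a sketch):

  var xhr = new XMLHttpRequest();
  xhr.open("GET", "/data");
  xhr.responseType = "arraybuffer";  // rather than RESPONSE_ARRAY_BUFFER
  xhr.onload = function () {
    var buf = xhr.response;          // an ArrayBuffer
  };
  xhr.send();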

- James





 --
 Anne van Kesteren
 http://annevankesteren.nl/




Re: requestAnimationFrame

2010-11-17 Thread James Robinson
On Wed, Nov 17, 2010 at 6:27 PM, Boris Zbarsky bzbar...@mit.edu wrote:

 On 11/17/10 5:22 PM, Gregg Tavares (wrk) wrote:

 Think about this some more. The point of the previous suggestion is
 that keeping a JS animation in sync with a CSS animation has
 nothing to do with painting or rendering. The fact that apparently
 firefox ties those 2 things together is an artifact of firefox's
 implementation.


 Oh, I see the issue.  The event name.

 The name was just picked as a way of telling script authors something about
 what the callback means. All that Gecko promises right now is that you
 get the callback, then SMIL and Transitions and any pending style changes
 (including the ones your just caused) are processed and the layout/style of
 the page updated.

 Painting is in fact completely decoupled from this process in Gecko at the
 moment, except insofar as they happen on the same thread.  Nothing
 guarantees that you won't get another MozBeforePaint event before painting
 actually happens, or that you won't get multiple paints in a row without a
 MozBeforePaint event in between.  So yeah, the event name is somewhat
 suboptimal.  ;)

 Again, all the event means is we're about to resample declarative
 animations; this is your chance to do script stuff that's supposed to look
 like an animation.


In Safari (and at some point in Chrome as well) declarative animations are
not necessarily sampled in the main thread.  In fact, during an animation
the main thread could be completely idle or completely busy running some
piece of long-running javascript.  If the computed styles on an animating
element are queried, a value is interpolated on the main
thread independently of the animation itself.  It wouldn't be possible to
fire an event like this without adding additional synchronization between
the two threads which would make the animation less smooth.  I'm not
entirely convinced about how important it is to synchronize declarative and
script-driven animations versus making declarative animations work really
well.

- James


 -Boris




Re: XHR responseArrayBuffer attribute: suggestion to replace asBlob with responseType

2010-11-02 Thread James Robinson
On Tue, Nov 2, 2010 at 1:04 PM, David Flanagan da...@davidflanagan.com wrote:

 Is this a fair summary of this thread?

 Chris (Apple) worries that having to support both responseText and
 responseArrayBuffer will be memory inefficient because implementations will
 end up with both representations in memory.


There were two Chrises on the discussion - Rogers (Google) and Marrin
(Apple) - if anyone is keeping tabs.


 James (Google) worries that synchronously reading bytes from the browser
 cache on demand when responseArrayBuffer is accessed will be too
 time-inefficient.


Not quite - my main concern is the same as Chris' that keeping multiple
representations of the data will lead to bloat.  There's a bit more nuance
but people can read the original thread if they want.


 Boris (Mozilla) worries that creating a new mode in which responseText is
 unavailable will break jQuery applications.

 I've suggested on another thread that the way around this is to abandon XHR
 as a legacy API and create a new HTTPRequest object or BinaryHTTPRequest or
 StreamingHTTPRequest or something.


If we are getting rid of the XML part we should drop HTTP as well since this
API would also work over non-HTTP protocols :)



 It occurs to me now, however, that the way to avoid breaking jQuery is to
 make responseType a constructor argument instead of a property to be set
 before send().  If I recall correctly, jQuery always creates its own XHR
 object, so if responseType is only settable at creation time, then the
 situation Boris fears won't arise.  At least not with that library.

David


I like the .responseType proposal for the reasons Jonas stated on the last
thread.  That allows legacy content continue to work unchanged (without
extra memory bloat) and lets us extend the API efficiently in the future.

- James


Re: XHR responseArrayBuffer attribute: suggestion to replace asBlob with responseType

2010-10-28 Thread James Robinson
I think a good rule for any web API is that the user's needs come before the
author's needs.  In this case there is a very large amount of content out
there today that uses XMLHttpRequest to download data, sometimes significant
amounts of data, and that use .responseText exclusively to access that data.
 Adding a new feature to the API that causes this use case to be worse for
the user (by requiring it to use twice as much memory) seems like a clear
non-starter to me - that would be putting authors before users.  Would you
accept a new DOM feature that required each node to use twice as much
memory?  The memory use and heap pressure caused by XHR has been an issue
for Chrome in the past and our current implementation is pretty carefully
tuned to not preserve extra copies of any data, not perform redundant text
decoding operations, and to interact well with the JavaScript engine.

It's true that it might be a convenient API for authors to provide the
response data in all formats at all times.  However this would not benefit
any content deployed on the web right now that uses responseText exclusively
and would make the user experience unambiguously worse.  Instead we need to
find a way to provide new capabilities in a way that does not negatively
impact what is already out there on the web.  Within this space I'm sure
there are several good solutions.

As another general note, I think it's rather unfortunate how many
different responsibilities are currently handled by XMLHttpRequest.  It's
the networking primitive of the web, but it also provides text decoding
capabilities and XML parsing (of all things) for historical reasons.  It's a
very awkward API and it should be gaining fewer responsibilities over time,
not more.  Ideally an author should be able to use XHR just to take care of
networking and for other APIs to provide other new capabilities.  For
example, it should be possible to take a sequence of raw bytes off the
network from XHR and interpret some subset of the sequence as UTF-8 text and
the rest as audio data.  This would be possible using some form of
.responseArrayBuffer and ArrayBufferViews if text decoding was exposed as
its own API rather than only as a feature of XMLHttpRequest.  This is
somewhat pie-in-the-sky right now, but I think it's important to keep in
mind as a longer term goal.
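
A sketch of that separation (headerLength is hypothetical; the standalone
decoder is roughly the shape that later took form as TextDecoder):

  var bytes = new Uint8Array(xhr.response);        // raw bytes off the network
  var textPart = bytes.subarray(0, headerLength);  // first chunk is UTF-8 text
  var audioPart = bytes.subarray(headerLength);    // the rest is audio data
  var header = new TextDecoder("utf-8").decode(textPart);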

I'm not convinced that we need to worry overly much about legacy libraries
mishandling .responseArrayBuffer.  Any code that tries to handle
.responseArrayBuffer will by definition be new code and will have to deal
with the API whatever that ends up being.  Code that wants to use
.responseText can continue to do so, but it won't be able to use
.responseArrayBuffer as well.  Seems like a pretty simple situation as such
things go.

- James


Re: XHR responseArrayBuffer attribute: suggestion to replace asBlob with responseType

2010-10-28 Thread James Robinson
On Thu, Oct 28, 2010 at 8:37 PM, Boris Zbarsky bzbar...@mit.edu wrote:

 On 10/28/10 9:11 PM, James Robinson wrote:

 I think a good rule for any web API is that the user's needs come before
 the author's needs.


 And the author's before the implementor's, right?

 OK, let's take that as given.


  In this case there is a very large amount of
 content out there today that uses XMLHttpRequest to download data,
 sometimes significant amounts of data, and that use .responseText
 exclusively to access that data.


 Agreed.


  Adding a new feature to the API that
 causes this use case to be worse for the user (by requiring it to use
 twice as much memory)


 In a particular simplistic implementation, right?


  seems like a clear non-starter to me - that would
 be putting authors before users.


 More precisely, putting authors before implementors, seems to me...


  Would you accept a new DOM feature that required each node to use twice as
 much memory?


 That _required_?  Probably not.  But responseArrayBuffer doesn't require
 twice as much memory if you're willing to make other tradeoffs (e.g. sync
 read in the bytes from non-RAM storage) in some situations.


  The memory use and heap pressure caused by XHR has been an issue for
 Chrome in the past and
 our current implementation is pretty carefully tuned to not preserve
 extra copies of any data, not perform redundant text decoding
 operations, and to interact well with the JavaScript engine.


 I understand that.


  It's true that it might be a convenient API for authors to provide the
 response data in all formats at all times.


 OK, we agree on that.


  However this would not benefit any content deployed on the web right now
 that uses responseText
 exclusively and would make the user experience unambiguously worse.


 Seems to me that if all you care about is the user experience being no
 worse for content that only uses responseText you can just dump the raw
 bytes to disk and not worry about the slowness of reading them back...

 You could also have a way for authors to hint to you to NOT thus dump them
 to disk (e.g. a boolean they set before send(), which makes you hold on to
 the bytes in memory instead, but doesn't cause any weird exception-throwing
 behavior).

 Is there any benefit in pursuing that line of thought or do you consider it
 a non-starter?  If the latter, why?


Are we talking about ArrayBuffer here or Blob?  It's never acceptable to
block javascript on a synchronous disk access, so storing data on disk that
is synchronously accessible from javascript would be a non-starter for
Chrome. Note how the Blob and various File APIs are very careful to never
block javascript on synchronous access to file-backed data.
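
For comparison, a sketch of the asynchronous File API pattern (handleBytes is
a stand-in for application code, and blob is a Blob already in hand):

  var reader = new FileReader();
  reader.onload = function() {
    // The bytes only become visible to script in an async callback,
    // so script never blocks waiting on the disk.
    handleBytes(reader.result);
  };
  reader.readAsArrayBuffer(blob);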

- James



   Instead we need to find a way to provide new capabilities in a way
 that does not negatively impact what is already out there on the web.


 Ideally, yes.  In practice, new capabilities are provided by various specs
 all the time that negatively impact performance, sometimes even when
 carefully optimized around.  Such is life.


   Within this space I'm sure there are several good solutions.


 OK, would those be the ones listed near the beginning of this thread?


  As another general note, I think it's rather unfortunate how many
 different responsibilities are currently handled by XMLHttpRequest.


 Sure, we all agree on that.  We're somewhat stuck with it, sadly.


  I'm not convinced that we need to worry overly much about legacy
 libraries mishandling .responseArrayBuffer.  Any code that tries to
 handle .responseArrayBuffer will by definition be new code and will have
 to deal with the API whatever that ends up being.


 So what you're saying is that code that wants to use .responseArrayBuffer
 can't be using jquery.  That seems like a somewhat high adoption bar for
 .responseArrayBuffer, no?


  Code that wants to use .responseText can continue to do so, but it won't
 be able to use
 .responseArrayBuffer as well.  Seems like a pretty simple situation as
 such things go.


 I really have the sense I'm not getting through here.

 You seem to be assuming that a single entity is responsible for all the
 code that runs on the page.  That may be the case at Google.  It's commonly
 NOT the case elsewhere.  So things that break some code due to things that
 some other code did that seemed entirely reasonable are something we should
 be trying to not introduce if we can avoid them.

 I'm happy to try to find a better solution here if you think there are
 insurmountable implementation difficulties in supporting the simple and
 author-intuitive API.  I'm happy to complicate the API somewhat if that
 makes it more implementable.  I'm not so happy to make it fragile, though.

 -Boris




[XHR2] overrideMimeType behavior

2010-10-05 Thread James Robinson
One issue raised briefly when discussing ArrayBuffer integration but not
resolved was how to handle overrideMimeType().  The issue is whether calling
overrideMimeType() can cause already downloaded data to be re-interpreted
with a different charset.  From my reading of the spec, this is the case.
 Calling overrideMimeType() with a specified charset sets the current
override charset which overrides the final charset which is used in the text
response entity body algorithm to decode the response entity body (i.e.
bytes from the network) into a DOMString.

However WebKit and Gecko currently do not behave in this way and while I
can't speak for the rest of the WebKit community I would be reluctant to
change WebKit to what the spec currently states.  In both of these
implementations the override mime type is checked once when the HTTP headers
are received from the network in order to determine how to decode the data.
 From that point on, calling overrideMimeType() is a no-op.  In addition, in
the current WebKit implementation we do not preserve the raw bytes from the
network after decoding them to UTF-16 in order to produce the .responseText
DOMString.  Since conversion from an arbitrary charset to UTF-16 is not
always invertible, this makes the current semantics impossible to implement
without keeping an extra copy of the data around.  I would strongly prefer
not to keep an extra copy if possible since this will only be memory bloat
for an extremely rare use case.

I propose that overrideMimeType() throw INVALID_STATE_ERR if called when the
send() flag is true.  This should still allow authors to declare a mime type
and optionally a charset on requests without requiring an arbitrary
re-decoding of data after it has been received.
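
Under this proposal, a sketch of the intended behavior:

  var xhr = new XMLHttpRequest();
  xhr.open('GET', '/data.txt', true);
  // Fine: the send() flag is not yet true.
  xhr.overrideMimeType('text/plain; charset=ISO-8859-1');
  xhr.send();
  // From this point on, a call like
  //   xhr.overrideMimeType('text/plain; charset=UTF-8');
  // would throw INVALID_STATE_ERR instead of implying a re-decode of
  // data that has already been received.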

- James

PS: There's a related discussion about how to handle encoding semantics and
the .responseArrayBuffer property, but that's for another thread.


Re: [XHR2] ArrayBuffer integration

2010-09-28 Thread James Robinson
On Tue, Sep 28, 2010 at 9:39 AM, Boris Zbarsky bzbar...@mit.edu wrote:

 On 9/28/10 10:32 AM, Chris Marrin wrote:

 I'd hate the idea of another flag in XHR. Why not just keep the raw bits
 and then convert when responseText is called? The only disadvantage of this
 is when the author makes multiple calls to responseText and I would not
 think that is a very common use case.


 It's actually reasonably common; Gecko had some performance bugs filed on
 us until we started caching the responseText (before that we did exactly
 what you just suggested).

 Oh, and some sites poll responseText from progress events for reasons I
 can't fathom.


A number of sites check .responseText.length on every progress event in
order to monitor how much data has been received.  This came up as a
performance hotspot when I was profiling WebKit's XHR implementation as
well.
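
The pattern in question looks roughly like this (showProgress is a stand-in
for whatever the page does with the number):

  var xhr = new XMLHttpRequest();
  xhr.open('GET', '/large-resource', true);
  xhr.onprogress = function() {
    // Touching responseText forces the decoded string to be
    // materialized on every progress event - a hotspot unless the
    // implementation caches it.
    showProgress(xhr.responseText.length);
  };
  xhr.send();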

- James




 -Boris




Re: FileReader question about ProgressEvent

2010-04-26 Thread James Robinson
On Tue, Apr 20, 2010 at 4:22 PM, Jonas Sicking jo...@sicking.cc wrote:

 On Tue, Apr 20, 2010 at 3:51 PM, Jian Li jia...@chromium.org wrote:
  According to the spec, we will dispatch a progress event for a read
  method. But per the Progress Events 1.0 spec, the attributes loaded and
  total are defined as unsigned long.
 interface ProgressEvent : events::Event {
  ...
  readonly attribute unsigned long   loaded;
  readonly attribute unsigned long   total;
  ...
  The type unsigned long is not enough to represent the file size. Do we
  want to update the Progress Event spec to use unsigned long long? Or we
  could limit the FileReader to only read files with size less than
  MAX_UINT.

 I think the progress events spec needs to be amended here yes. Though
 one complication is that ECMAScript can't represent all values of an
 unsigned long long.

 Ideally webidl would define an integer type with 53 bits (which iirc
 is the largest size you can precisely represent in an ECMAScript
 value).


The Blob interface defines 'size' and the 'slice' interface in terms of
unsigned long longs:
www.w3.org/TR/file-upload/#dfn-Blob

It's impossible to generate ECMAScript bindings that satisfy this interface
since no ECMAScript type can represent every possible value of type
'unsigned long long' as WebIDL defines it:
www.w3.org/TR/WebIDL/#idl-unsigned-long-long.  What are we supposed to do?
 I am thinking that for Chromium the obvious thing to do is to just treat
these fields as ECMAScript Number types and be wrong here for values greater
than 2^53.
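
To make the precision issue concrete:

  // 2^53 is the largest integer an ECMAScript Number represents exactly.
  var limit = Math.pow(2, 53);  // 9007199254740992
  limit === limit + 1;          // true: the next integer is unrepresentable
  // So a Blob of, say, 2^53 + 1 bytes could not report its exact size
  // through a Number-valued attribute.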

- James



 / Jonas




Re: [WebTiming] HTMLElement timing

2010-02-17 Thread James Robinson
A few more questions:

* What should the values of domainLookupStart/domainLookupEnd be if the DNS
lookup was served out of cache?  What about if the DNS resolution started
before the fetch was initiated (i.e. if DNS prefetching is used or if the
resource shares a domain with another resource that was fetched earlier)?
(A sketch after this list shows how these values would be consumed.)

* The specification requires that "The granularity and accuracy of the
timing-related attributes in the DOMTiming and navigationTiming interface
must be no less than one millisecond."  This is not generally possible on
Windows due to the inaccuracy of system-provided timing APIs.  Could you
relax this requirement so that it's possible to implement a compliant UA on
all systems?

* What precisely does 'parse' time mean for each element?  For example, on a
script tag does parse time include parsing the script itself, or executing
it as well?  What about for JS engines that do not distinguish between the
two?
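
To make the first question concrete (the sketch promised above) - the
attribute names are from the draft, everything else is a stand-in:

  function dnsTime(timing) {
    // timing: an object exposing the draft's domainLookupStart and
    // domainLookupEnd attributes, however the final API surfaces it.
    // For a cached lookup, should this be 0?  If resolution started
    // via prefetch before the fetch, could it come out negative?  The
    // draft doesn't currently say.
    return timing.domainLookupEnd - timing.domainLookupStart;
  }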

- James

On Thu, Feb 18, 2010 at 5:07 PM, Zhiheng Wang zhihe...@google.com wrote:


 FYI, I just made some minor updates to the draft based on the discussion,
 like removing Ticks() and narrowing down the list of elements that should
 provide the DOMTiming interface. I am going to throw in more details
 shortly as well.

 thanks,
 Zhiheng

 On Tue, Feb 2, 2010 at 1:09 PM, lenny.rachitsky 
 lenny.rachit...@webmetrics.com wrote:

 I’d like to jump in here and address this point:

 “While I agree that timing information is important, I don't think it's
 going to be so commonly used that we need to add convenience features
 for it. Adding a few event listeners at the top of the document does
 not seem like a big burden.”

 I work for a company that sells a web performance monitoring service to
 Fortune 1000 companies. To give a quick bit of background on the monitoring
 space, there are two basic ways to provide website owners with reliable
 performance metrics for their web sites/applications. The first is
 active/synthetic monitoring, where you test the site using an automated
 browser from various locations around the world, simulating a real user.
 The second approach is called passive or real user monitoring, which
 captures actual visits to your site and records the performance of those
 users. This second approach is accomplished with either a network tap
 appliance sitting in the customer’s datacenter that captures all of the
 traffic that comes to the site, or using the “event listener” javascript
 trick, which times the client-side page performance and sends it back to a
 central server.

 Each of these approaches has pros and cons. The synthetic approach doesn’t
 tell you what actual users are seeing, but it is consistent and easy to set
 up and manage. The appliance approach is expensive and misses out on
 components that don’t get served out of the one datacenter, but it sees
 real users’ performance. The client-side javascript timing approach gives
 you very limited visibility, but is easy to set up and universally
 available. The limited nature of this latter javascript approach is the
 crux of why this “Web Timing” draft is so valuable. Website owners today
 have no way to accurately track the true performance of actual visitors to
 their website. With the proposed interface additions, companies would
 finally be able to not only see how long the page truly takes to load
 (including the pre-javascript execution time), but they’d also now be able
 to know how much DNS and connect time affect actual visitors’ performance,
 how much of an impact each image/object makes (an increasing source of
 performance issues), and ideally how much JS parsing and SSL handshakes add
 to the load time. This would give website owners tremendously valuable data
 that is currently impossible to reliably track.

 Lenny Rachitsky
 Webmetrics



Re: [WebTiming] HTMLElement timing

2010-02-02 Thread James Robinson
On Tue, Feb 2, 2010 at 10:36 AM, Zhiheng Wang zhihe...@google.com wrote:

 Hi, Olli,

 On Fri, Jan 29, 2010 at 6:15 AM, Olli Pettay olli.pet...@helsinki.fiwrote:

  On 1/27/10 9:39 AM, Zhiheng Wang wrote:

 Folks,

  Thanks to much feedback from various developers, the WebTiming spec has
 undergone a major revision. Timing info has now been extended to page
 elements and a couple more interesting timing data points have been added.
 The draft is up on
 http://dev.w3.org/2006/webapi/WebTiming/

  Feedback and comments are highly appreciated.

 cheers,
 Zhiheng



 Like Jonas mentioned, this kind of information could be exposed
 using progress events.

 What is missing in the draft, and actually in the emails I've seen
 about this, is the actual use case for the web.
 Debugging web apps can happen outside the web, like Firebug, which
 investigates what the browser does at different times.
 Why would a web app itself need all this information? To optimize
 something, like using a different server if some server is slow?
 But for that (extended) progress events would be good.
 And if the browser exposes all the information that the draft suggests, it
 would make sense to dispatch some event when some new information is
 available.


Good point and I do need to spend more time on the intro and use cases
 throughout the spec. In short, the target of this spec is web site owners
 who want to benchmark their user experience in the field. Debugging tools
 are indeed very powerful in development but things could become quite
 different once the page is out in the wild, e.g., there is no telling about
 DNS or TCP connection time in the dev environment; UGC only adds more
 complications to the overall latency of the page; and what is the right TTL
 for my DNS record if I want to maintain a certain cache hit rate?, etc.


 There are also undefined things like paint event, which is
 referred to in lastPaintEvent and paintEventCount.
 And again, use case for paintEventCount etc.


Something like Mozilla's MozAfterPaint?  I do need to work on more use
 cases.


In practice I think this will be useless.  In a page that has any sort of
animation, blinking cursor, mouse movement plus hover effects, etc., the
'last paint time' will always be immediately before the query.  I would
recommend dropping it.

- James




 The name of the attribute is very strange:
 readonly attribute DOMTiming document;


agreed... how about something like root_times?




 What is the reason for the timing array in the window object? Why do we
 need to know anything about previous pages? Or what is the timing attribute
 about?


   Something went missing in this revision, my bad. The intention is to keep
 previous pages' timing info only if these pages are all in a redirection
 chain. From the user's perspective, the waiting begins with the fetching of
 the first page in a redirection chain.


 thanks,
 Zhiheng





 -Olli





Re: [XHR] New api request

2010-01-31 Thread James Robinson
Why not create a new XMLHttpRequest object for each request?
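
E.g., a throwaway sketch - with no reuse there is no abort() bookkeeping:

  function get(url, onload) {
    var xhr = new XMLHttpRequest();  // fresh object per request
    xhr.open('GET', url, true);
    xhr.onload = onload;
    xhr.send();
    return xhr;
  }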

- James

On Jan 29, 2010 5:31 AM, Pedro Santos pedros...@gmail.com wrote:

Hi, how interested would you be in developing new APIs to enable reuse of
XMLHttpRequest objects, without the need to call the abort method?

-- 
Pedro Henrique Oliveira dos Santos